ChatGPT, explained

Some writers have declared that the debut of ChatGPT on November 30th, 2022, marked the beginning of a new chapter in history, akin to the Enlightenment or the Industrial Revolution. Others have been more skeptical, wondering if this is just another overhyped tech, like blockchain or the metaverse.

What history will call ChatGPT remains to be seen, but here’s one thing I do know for sure: nobody has shut up about it since.
From injecting itself into presidential debates and Saturday Night Live sketches to briefly talking to you in a flirty, Her-style voice, ChatGPT has captured the public imagination in a way few technologies have. It’s not hard to see why. The bot can code, compose music, craft essays… you name it. And with the release of GPT-4o, it’s better than ever.
Yet as it gets smarter, the tech is also becoming less comprehensible. People are growing more scared of what it can do, which is understandable given that some are already losing their jobs to AI. It doesn’t help that a lot of sensationalism surrounds the subject, making it difficult to separate fact from fiction.
That’s why we decided to put together this explainer, so we can cut through all the BS together. You ready? Let’s begin.
What is ChatGPT?
Do you want the simple answer or the complex one?
The easy answer is that ChatGPT is a chatbot that can answer your questions by using data it’s gathered from the internet.
The complex answer is that ChatGPT is an AI chatbot powered by language models created by OpenAI known as generative pre-trained transformers (GPTs), a kind of AI that can generate new content rather than just analyze existing data. (If you’ve heard of large language models, or LLMs, a GPT is a type of LLM. Got it? Good.)
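If you’re the hands-on type, here’s a rough sketch of what “powered by a GPT” means in practice: developers can call the same family of models from a few lines of code through OpenAI’s Python SDK. (This is just an illustrative sketch, not anything specific to how ChatGPT itself is built; it assumes you’ve installed the openai package and set up an API key, and the model name is only an example.)

```python
# Illustrative sketch: asking a GPT-series model a question via
# OpenAI's Python SDK. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is an example.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # one of OpenAI's GPT models
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an LLM is in one sentence."},
    ],
)

print(response.choices[0].message.content)
```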
So what’s OpenAI?
OpenAI is an AI company founded in December 2015. It created ChatGPT, but it’s also responsible for other products, like the AI image generator DALL-E.
Doesn’t Microsoft own it? Or was that Elon Musk?
No, but Microsoft is a major investor, pouring billions into the company. Elon Musk co-founded OpenAI along with Sam Altman (the OpenAI CEO who was briefly fired and rehired in late 2023), Ilya Sutskever (who has since left), Greg Brockman, Wojciech Zaremba, and John Schulman. However, Musk eventually cut ties and went on to create his own chatbot, Grok, through his AI company xAI.
So, will ChatGPT take over the world?
It will most definitely replace people with machines and — along with other AI bots like Amazon’s Alexa — basically take over the world. So you’d better start playing nice with them.
Nah, I’m messing with you. I mean, nobody knows for sure, but I highly doubt we’re going to see a job apocalypse and have to welcome in our new robot overlords anytime soon. I’ll explain more in a minute.
Phew! But how is it so smart?
Well, like I said, ChatGPT runs on GPTs, which OpenAI regularly updates with new versions, the most recent being GPT-4o. Trained on a ton of internet data and refined with human feedback, each model can hold human-like conversations and help you complete all kinds of tasks.
Like?
Where do I begin? The possibilities are practically endless, from composing essays and writing code to analyzing data, solving math problems, playing games, providing customer support, planning trips, helping you prepare for job interviews, and so much more.
I mean, honestly, it could probably summarize this entire explainer. The AI world is your oyster.
So what you’re saying is, it’s basically smarter than me. Should I be worried?
Eh, not really. For all its hype, at its current level, ChatGPT — like other generative AI chatbots — is very much a dim-witted computer that sits on a throne of lies. For one thing, it hallucinates.
Pardon?
Oh, sorry, not that kind of hallucination. In the AI world, “hallucination” refers to when the tool tries to extrapolate from the data it has collected but gets things absurdly wrong, in effect inventing a new reality.
Honestly, I’m not a big fan of the word. It doesn’t really bear resemblance to actual human hallucinations, and I think it makes light of mental health issues — but that’s another subject.
In other words, sometimes ChatGPT generates incorrect information?
“Incorrect information” is putting it mildly.

Sometimes ChatGPT fabricates facts altogether, which can lead to the spread of misinformation with serious consequences. It’s made up news stories, academic papers, and books. Lawyers using it for case research have gotten in trouble when it cited nonexistent cases.
And then, there are times when it gives the middle finger to both reality and human language and just spouts out pure gibberish. Earlier this year, for example, a malfunctioning ChatGPT that was asked for a Jackson family biography started saying stuff like, “Schwittendly, the sparkle of tourmar on the crest has as much to do with the golver of the ‘moon paths’ as it shifts from follow.” Which is probably the worst description of Michael Jackson’s family in the world.
Right, but isn’t ChatGPT getting better?
Many researchers are trying to fix this issue. However, some think hallucinations are fundamentally unsolvable, as a study out of the National University of Singapore suggests.
But hallucinations aren’t the only issue ChatGPT needs to iron out. Remember, ChatGPT essentially just regurgitates material it scrapes off the internet, whether it’s accurate or not. That means, sometimes, ChatGPT plagiarizes other people’s work without attributing it to them, even sparking copyright infringement lawsuits.
It can also pick up some really bad data. Likely drawing from the more unpleasant parts of the internet, it’s gone so far as to insult and manipulate users. Hell, sometimes it’s just downright racist and sexist.
So, basically, what I’m hearing is ChatGPT — like other generative AI chatbots — has a lot of critical flaws, and we humans are still needed to keep them in check.
But isn’t it possible OpenAI could iron out these issues in time?
Anything’s possible. But I would say that one thing is for sure: AI is here to stay, and so it wouldn’t hurt to learn how to leverage these tools. Plus, they really can make life easier in the here and now if you know how to use them.
So, how do I start playing around with it?
If you’re on a desktop, simply visit chat.openai.com and start chatting away. You can also access ChatGPT through its apps for iPhone and Android.
Great! Is it free?
Absolutely. The free version of ChatGPT runs on an older model in the GPT-3.5 series but does offer limited access to the newer and faster GPT-4o. That means free users, for example, will soon be able to access previously paywalled features, like custom GPTs, through the GPT Store.
The free tier also supports ChatGPT’s web browsing tool, meaning the bot can search the internet in real time to deliver up-to-date results. The newest model can also recall earlier conversations, allowing it to better understand the context of your request, and users can now upload photos and files for ChatGPT to analyze.
Why would I want one of the paid tiers?
You do get more advanced capabilities through its paid tiers — ChatGPT Plus, ChatGPT Team, and ChatGPT Enterprise — which start at $20 a month.
For starters, you have fewer usage restrictions, rendering them the better option if you plan on using ChatGPT often. OpenAI has yet to specify the exact limits for free users, but it has said Plus subscribers can send five times as many messages. The pricier Team and Enterprise plans loosen the restrictions even further, though, again, OpenAI has yet to divulge specifics.
Aside from being able to use ChatGPT longer, paid subscribers can do more. They can, for example, create their own custom GPTs and even monetize them via the GPT Store. Plus, only paid subscribers can access the DALL-E 3 model, which generates images from text prompts.
Paid subscribers also get early access to the newest AI features. The voice capabilities OpenAI demonstrated onstage should arrive for Plus subscribers over the next couple of weeks, while ChatGPT’s desktop app for Mac is already rolling out.
Custom GPTs?
Custom GPTs are basically chatbots you can customize. The GPT Store hosts millions of versions that you can use to accomplish all kinds of tasks, from providing tech support to recommending hiking trails. Some currently trending examples include an image-generating bot, a logo-making bot, and a chatbot that helps people perform scientific research.
By the way, what’s all this I hear about trouble within OpenAI?
There have been some upheavals, most notably the board’s abrupt firing of CEO Sam Altman in November 2023 and his reinstatement just days later. We’ll keep you in the loop as things develop.
Are there any ChatGPT alternatives I could check out?
Yes, there are quite a few, and each varies in terms of features, pricing, and specific use cases. One notable example is Google’s AI chat service Gemini. As a Google product, it offers deeper integration with Google services like Workspace, Calendar, Gmail, Search, YouTube, and Flights. The latest version, Gemini 1.5 Pro, also offers a longer context window of 2 million tokens, which refers to the amount of information the model can take in at once.
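To make the context window idea concrete, here’s a rough sketch of what handing Gemini a very large input might look like in code, using Google’s generativeai Python SDK. (The file name and setup here are illustrative assumptions, and you’d need your own API key.)

```python
# Illustrative sketch: a long context window means you can pass a
# huge input (say, an entire book) to the model in a single request.
# Assumes `pip install google-generativeai` and your own API key;
# "entire_book.txt" is a stand-in for any large document.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

with open("entire_book.txt") as f:
    book = f.read()

response = model.generate_content(
    ["Summarize the main plot points of this book:", book]
)
print(response.text)
```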
Anything else you think I should know?
Yeah! Did you know ChatGPT sounds like “chat, j’ai pété” in French, which roughly translates to “cat, I farted”? Somebody even created a website with a cat that farts when you click on it, and I just can’t stop clicking.
Hollywood cozied up to AI in 2025 and had nothing good to show for it
AI isn’t new to Hollywood, but this was the year when it really made its presence felt. For years now, the entertainment industry has used different kinds of generative AI products for a variety of post-production processes, ranging from de-aging actors to removing green screen backgrounds. In many instances, the technology has been a useful tool for human artists tasked with tedious, painstaking labor that might otherwise have taken them inordinate amounts of time to complete. But in 2025, Hollywood began warming to the idea of deploying the kind of gen AI that’s really only good for conjuring up text-to-video slop, which doesn’t have many practical uses in traditional production workflows. Despite all of the money and effort being put into it, there’s yet to be a gen-AI project that shows why the technology is worth the hype.
This confluence of Hollywood and AI didn’t start out so rosy. Studios were in a prime position to take the companies behind this technology to court because their video generation models had clearly been trained on copyrighted intellectual property. A number of major production companies, including Disney, Universal, and Warner Bros. Discovery, did file lawsuits against AI firms and their boosters for that very reason. But rather than pummeling AI purveyors into the ground, some of Hollywood’s biggest power players chose to get into bed with them. We have only just begun to see what can come from this new era of gen-AI partnerships, but all signs point to things getting much sloppier in the very near future.
Though many of this year’s gen-AI headlines were dominated by larger outfits like Google and OpenAI, we also saw a number of smaller players vying for a seat at the entertainment table. There was Asteria, Natasha Lyonne’s startup focused on developing film projects with “ethically” engineered video generation models, and Showrunner, an Amazon-backed platform designed to let subscribers create animated “shows” (a very generous term) from just a few descriptive sentences plugged into Discord. These relatively new companies were all desperate to legitimize the idea that their flavor of gen AI could be used to supercharge film/TV development while bringing down overall production costs.
Asteria didn’t have anything more than hype to share with the public after announcing its first film, and it was hard to believe that normal people would be interested in paying for Showrunner’s shoddily cobbled-together knockoffs of shows made by actual animators. In the latter case, it felt very much like Showrunner’s real goal was to secure juicy partnerships with established studios like Disney that would lead to their tech being baked into platforms where users could prompt up bespoke content featuring recognizable characters from massive franchises.
That idea seemed fairly ridiculous when Showrunner first hit the scene because its models churn out the modern equivalent of clunky JibJab cartoons. But in due time, Disney made it clear that — crappy as text-to-video generators tend to be for anything beyond quick memes — it was interested in experimenting with that kind of content. In December, Disney entered into a three-year, billion-dollar licensing deal with OpenAI that would let Sora users make AI videos with 200 different characters from Star Wars, Marvel, and more.
Netflix became one of the first big studios to proudly announce that it was going all-in on gen AI. After using the technology to produce special effects for one of its original series, the streamer published a list of general guidelines it wanted its partners to follow if they planned to jump on the slop bandwagon as well. Though Netflix wasn’t mandating that filmmakers use gen AI, it made clear that saving money on VFX work was one of the main reasons it was coming out in support of the trend. And it wasn’t long before Amazon followed suit by releasing multiple Japanese anime series that were terribly localized into other languages because the dubbing process didn’t involve any human translators or voice actors.
Amazon’s gen-AI dubs became a shining example of how poorly this technology can perform. They also highlighted how some studios aren’t putting all that much effort into making sure that their gen AI-derived projects are polished enough to be released to the public. That was also true of Amazon’s machine-generated TV recaps, which frequently got details about different shows very wrong. Both of these fiascos made it seem as if Amazon somehow thought that people wouldn’t notice or care about AI’s inability to consistently generate high-quality outputs. The studio quickly pulled its AI-dubbed series and the recap feature down, but it didn’t say that it wouldn’t try this kind of nonsense again.
All of this, plus other dumb stunts like AI “actress” Tilly Norwood, made it feel like certain segments of the entertainment industry were becoming more comfortable trying to foist gen-AI “entertainment” on people, even though it left many deeply unimpressed and put off. None of these projects demonstrated to the public why anyone except penny-pinching execs (and the people who worship them for some reason) would be excited by a future shaped by this technology.
Aside from a few unimpressive images, we still haven’t seen what might come from some of these collaborations, like Disney cozying up to OpenAI. But next year, AI’s presence in Hollywood will be even more pronounced. Disney plans to dedicate an entire section of its streaming service to user-generated content sourced from Sora, and it will encourage Disney employees to use OpenAI’s ChatGPT products. The deal’s real significance right now, though, is the message it sends to other studios about how they should move as Hollywood enters its slop era.
Regardless of whether Disney thinks this will work out well, the studio has signaled that it doesn’t want to be left behind if AI adoption keeps accelerating. That tells other production houses that they should follow suit, and if that becomes the case, there’s no telling how much more of this stuff we are all going to be forced to endure.
New iPhone scam tricks owners into giving phones away
Getting a brand-new iPhone should be a moment you enjoy. You open the box. You power it on. Everything feels secure. Unfortunately, scammers know that moment too.
Over the past few weeks, we’ve heard from a number of people who received unexpected phone calls shortly after activating a new iPhone. The callers claimed to be from a major carrier. They said a shipping mistake was made. They insisted the phone needed to be returned right away. One message stood out because it shows exactly how convincing and aggressive this scam can be.
“Somebody called me (the call said it was from Spectrum) and told me they sent the wrong iPhone and needed to replace it. I was to rip off the label on the box, tape it up and set it on my porch steps. FedEx was going to pick it up and they’d put a label on it. And just for my trouble, he’d send me a $100 gift card! However, the guy was just too anxious. He called me again at 7 am to make sure I would follow his instructions. Right after that, I picked up my box on the steps and called Spectrum, who confirmed it was a scam. There are no such things as refurbished i17 phones because they’re brand new. I called the guy back, said a few choice words and hung up on him. Since then, they have called at least twice for the same thing. Spectrum should be warning its customers!”
That second early morning call was the giveaway. Pressure is the scammer’s favorite tool.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Scammers often strike right after a new iPhone purchase, using urgency and fake carrier calls to catch you off guard before you have time to verify. (Kurt “CyberGuy” Knutsson)
How the new iPhone replacement scam works
This scam relies on timing and pressure. First, criminals focus on people who recently bought a new iPhone. That information often comes from data-broker sites, leaked purchase data or marketing lists sold online. Next, scammers spoof a carrier phone number. As a result, the call appears legitimate. They sound confident and informed because they already know the device model you ordered.
Once the call begins, the story moves quickly. The scammer claims a shipping mistake occurred. Then they insist the phone must be returned right away. To reinforce urgency, they say a courier is already scheduled. If you follow the instructions, you hand over a brand-new iPhone. At that point, the device is gone. The scammer either resells it or strips it for parts. By the time you realize something is wrong, recovery is unlikely.
Why this scam feels so believable
This scam copies real customer service processes. Carriers do ship replacement phones. FedEx does handle returns. Gift cards are often used as apologies. Scammers blend those facts together and add urgency. They count on you acting before you verify. They also rely on one risky assumption: that a phone call that looks real must be real.
By spoofing trusted phone numbers and knowing details about your device, criminals make these calls feel real enough to push you into acting fast. (Kurt “CyberGuy” Knutsson)
Red flags that give this scam away
Once you know what to watch for, the warning signs are clear.
• Unsolicited calls about returns you did not request
• Pressure to act fast
• Instructions to leave a phone outside
• Promises of gift cards for cooperation
• Follow-up calls to rush you
Legitimate carriers do not handle returns this way.
Once a phone is handed over, it is usually resold or stripped for parts, leaving victims with no device and little chance of recovery. (Kurt “CyberGuy” Knutsson)
Ways to stay safe from iPhone return scams
Protecting yourself starts with slowing things down. Scammers rely on speed and confusion. You win by pausing and verifying.
1) Never return a device based on a phone call alone
Hang up and contact the carrier using the number on your bill or the official website. If the issue is real, they will confirm it.
2) Do not leave electronics outside for pickup
Legitimate returns use tracked shipping labels tied to your account. Carriers do not ask you to leave phones on porches or doorsteps.
3) Be skeptical of urgency
Scammers rush you on purpose. Pressure shuts down careful thinking. Any demand for immediate action should raise concern.
4) Use a data removal service
Scammers often know what phone you bought because your personal data is widely available online. Data removal services help reduce your exposure by scrubbing your information from the data broker sites criminals rely on. No service can guarantee the complete removal of your data from the internet, and these services aren’t cheap, but neither is your privacy. They do the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites, which gives me peace of mind. With less of your information available, scammers can’t easily cross-reference data from breaches with details found on the dark web, making you a harder target.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
5) Install strong antivirus software
Strong antivirus software adds another layer of protection. Many antivirus tools help block scam calls, warn about phishing links and alert you to suspicious activity before damage is done. It’s also the best way to safeguard yourself from malicious links that install malware and from ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android, & iOS devices at Cyberguy.com.
6) Save messages and call details
Keep voicemails, phone numbers and timestamps. This information helps carriers warn other customers and spot repeat scams.
7) Share this scam with others
Criminals reuse the same script again and again. A quick warning to friends or family could stop the next victim.
Kurt’s key takeaways
Scams aimed at new iPhone owners are getting more targeted and more aggressive. Criminals are timing their calls carefully and copying real carrier language. The simplest defense still works best. Verify before you act. If a call pressures you to rush or hand over a device, pause and contact the company directly. That one step can save you hundreds of dollars and a major headache.
If a carrier called you tomorrow claiming a mistake with your new phone, would you verify first or would urgency take over? Let us know by writing to us at Cyberguy.com.
I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t
When your kid starts showing a preference for one of their stuffed animals, you’re supposed to buy a backup in case it goes missing.
I’ve heard this advice again and again, but never got around to buying a second plush deer once “Buddy” became my son’s obvious favorite. Neither, apparently, did the parents in Google’s newest ad for Gemini.
It’s the fictional but relatable story of two parents discovering their child’s favorite stuffed toy, a lamb named Mr. Fuzzy, was left behind on an airplane. They use Gemini to track down a replacement, but the new toy is on backorder. In the meantime, they stall by using Gemini to create images and videos showing Mr. Fuzzy on a worldwide solo adventure — wearing a beret in front of the Eiffel Tower, running from a bull in Pamplona, that kind of thing — plus a clip where he explains to “Emma” that he can’t wait to rejoin her in five to eight business days. Adorable, or kinda weird, depending on how you look at it! But can Gemini actually do all of that? Only one way to find out.
I fed Gemini three pictures of Buddy, our real-life Mr. Fuzzy, from different angles, and gave it the same prompt that’s in the ad: “find this stuffed animal to buy ASAP.” It returned a couple of likely candidates. But when I expanded its response to show its thinking, I found a full 1,800-word essay detailing the twists and turns of its search as it considered and reconsidered whether Buddy is a dog, a bunny, or something else. It is bananas, including real phrases like “I am considering the puppy hypothesis,” “The tag is a loop on the butt,” and “I’m now back in the rabbit hole!” By the end, Gemini kind of threw its hands up and suggested that the toy might be from Target, was likely discontinued, and that I should check eBay.
In fairness, Buddy is a little bit hard to read. His features lean generic cute woodland creature, his care tag has long since been discarded, and we’re not even 100 percent sure who gave him to us. He is, however, definitely made by Mary Meyer, per the loop on his butt. He does seem to be from the “Putty” collection, which is a path Gemini went down a couple of times, and is probably a fawn that was discontinued sometime around 2021. That’s the conclusion I came to on my own, after about 20 minutes of Googling and no help from AI. The AI blurb when I do a reverse image search on one of my photos confidently declares him to be a puppy.
Gemini did a better job with the second half of the assignment, but it wasn’t quite as easy as the ad makes it look. I started with a different photo of Buddy — one where he’s actually on a plane in my son’s arms — and gave it the next prompt: “make a photo of the deer on his next flight.” The result is pretty good, but his lower half is obscured in the source image so the feet aren’t quite right. Close enough, though.
The ad doesn’t show the full prompt for the next two photos, so I went with: “Now make a photo of the same deer in front of the Grand Canyon.” And it did just that — with the airplane seatbelt and headphones, too. I was more specific with my next prompt, added a camera in his hands, and got something more convincing.

I can see how Gemini misinterpreted my prompt. I was trying to keep it simple, and requested a photo of the same deer “at a family reunion.” I did not specify his family reunion. So that’s how he ended up crashing the Johnson family reunion — a gathering of humans. I can only assume that Gemini took my last name as a starting point here, because it sure wasn’t in my prompt. When I requested that Gemini create a new family reunion scene with his family, it just swapped the people for stuffed deer. There are even little placards on the table that say “deer reunion.” Reader, I screamed.
For the last portion of the ad, the couple uses Gemini to create cute little videos of Mr. Fuzzy getting increasingly adventurous: snowboarding, white-water rafting, and skydiving, before finally appearing in a spacesuit on the moon, addressing “Emma” directly. The commercial whips through all these clips quickly, which feels like a little sleight of hand given that Gemini takes at least a couple of minutes to create a video. And even on my Gemini Pro account, I’m limited to three generated videos per day. It would take a few days to get all of those clips right.
Gemini wouldn’t make a video based on any image of my kid holding the stuffed deer, probably thanks to some welcome guardrails preventing it from generating deepfakes of babies. I started with the only photo I had on hand of Buddy on his own: hanging upside down, air-drying after a trip through the washer. And that’s how he appears in the first clip Gemini generated: a Temu-knockoff version of Buddy hanging upside down in space before dropping into place, morphing into a right-side-up astronaut, and delivering the dialogue I requested.
A second prompt with a clear photo of Buddy right-side-up seemed to mash up elements of the previous video with the new one, so I started a brand new chat to see if I could get it working from scratch. Honestly? Nailed it. Aside from the antlers, which Gemini keeps sneaking in. But this clip also brought one nagging question to the forefront: should you do any of this when your kid loses a beloved toy?
I gave Buddy the same dialogue as in the commercial, using my son’s name rather than Emma. Hearing that same manufactured voice say my kid’s name out loud set alarm bells off in my head. An AI generated Buddy in front of the Eiffel Tower? Sorta weird, sorta cute. AI Buddy addressing my son by name? Nope, absolutely not, no thank you.
How much, and when, to lie to your kids is a philosophical debate you have with yourself over and over as a parent. Do you swap in the identical stuffie you had in a closet when the original goes missing and pretend it’s all the same? Do you tell them the truth and take it as an opportunity to learn about grief? Do you just need to buy yourself a little extra time before you have that conversation, and enlist AI to help you make a believable case? I wouldn’t blame any parent choosing any of the above. But personally, I draw the line at an AI character talking directly to my kid. I never showed him these AI-generated versions of Buddy, and I plan to keep it that way.
But back to the less morally complex question: can Gemini actually do all of the things that it does in the commercial? More or less. But there’s an awful lot of careful prompting and re-prompting you’d have to do to get those results. It’s telling that throughout most of the ad you don’t see the full prompt that’s supposedly generating the results on screen. A lot depends on your source material, too. Gemini wouldn’t produce any kind of video based on an image in which my kid was holding Buddy — for good reason! But this does mean that if you don’t have the right kind of photo on hand, you’re going to have a very hard time generating believable videos of Mr. Sniffles or whoever hitting the ski slopes.
Like many other elder millennials, I think about Calvin and Hobbes a lot. Bill Watterson famously refused to commercialize his characters, because he wanted to keep them alive in our imaginations rather than on a screen. He insisted that having an actor give Hobbes a voice would change the relationship between the reader and the character, and I think he’s right. The bond between a kid and a stuffed animal is real and kinda magical; whoever Buddy is in my kid’s imagination, I don’t want AI overwriting that.
The great cruelty of it all is knowing that there’s an expiration date on that relationship. When I became a parent, I wasn’t at all prepared for the way my toddler nuzzling his stuffed deer would crack my heart right open. It’s so pure and sweet, but it always makes me a little sad at the same time, knowing that the days where he looks for comfort from a stuffed animal like Buddy are numbered. He’s going to outgrow it all, and I’m not prepared for that reality. Maybe as much as we’re trying to save our kids some heartbreak over their lost companion, we’re really trying to delay ours, too.
All images and videos in this story were generated by Google Gemini.