
I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t


When your kid starts showing a preference for one of their stuffed animals, you’re supposed to buy a backup in case it goes missing.

I’ve heard this advice again and again, but never got around to buying a second plush deer once “Buddy” became my son’s obvious favorite. Neither, apparently, did the parents in Google’s newest ad for Gemini.

It’s the fictional but relatable story of two parents discovering their child’s favorite stuffed toy, a lamb named Mr. Fuzzy, was left behind on an airplane. They use Gemini to track down a replacement, but the new toy is on backorder. In the meantime, they stall by using Gemini to create images and videos showing Mr. Fuzzy on a worldwide solo adventure — wearing a beret in front of the Eiffel Tower, running from a bull in Pamplona, that kind of thing — plus a clip where he explains to “Emma” that he can’t wait to rejoin her in five to eight business days. Adorable, or kinda weird, depending on how you look at it! But can Gemini actually do all of that? Only one way to find out.

I fed Gemini three pictures of Buddy, our real-life Mr. Fuzzy, from different angles, and gave it the same prompt that’s in the ad: “find this stuffed animal to buy ASAP.” It returned a couple of likely candidates. But when I expanded its response to show its thinking, I found a full 1,800-word essay detailing the twists and turns of its search as it considered and reconsidered whether Buddy is a dog, a bunny, or something else. It is bananas, including real phrases like “I am considering the puppy hypothesis,” “The tag is a loop on the butt,” and “I’m now back in the rabbit hole!” By the end, Gemini kind of threw its hands up and suggested that the toy might be from Target and was likely discontinued, and that I should check eBay.
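The Gemini app doesn’t expose any of this plumbing, but if you wanted to rerun my experiment against the Gemini API, a rough sketch with Google’s google-generativeai Python SDK might look like the following. The model name, file names, and API key handling here are placeholder assumptions for illustration, not anything the consumer app actually uses.

```python
# A minimal sketch of the same exercise via Google's google-generativeai
# Python SDK. Model name, API key, and photo file names are assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

# Three photos of the toy from different angles, same as in the app.
photos = [Image.open(name) for name in
          ("buddy_front.jpg", "buddy_side.jpg", "buddy_back.jpg")]

# Same prompt as in the ad; the response is free-form text.
response = model.generate_content([*photos, "find this stuffed animal to buy ASAP"])
print(response.text)
```

By default this returns only the model’s final answer; the expandable “thinking” view where I found the 1,800-word monologue is an app-side affordance.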



In fairness, Buddy is a little bit hard to read. His features lean generic cute woodland creature, his care tag has long since been discarded, and we’re not even 100 percent sure who gave him to us. He is, however, definitely made by Mary Meyer, per the loop on his butt. He does seem to be from the “Putty” collection (a path Gemini went down a couple of times) and is probably a fawn that was discontinued sometime around 2021. That’s the conclusion I came to on my own, after about 20 minutes of Googling and no help from AI. The AI blurb when I do a reverse image search on one of my photos confidently declares him to be a puppy.

Gemini did a better job with the second half of the assignment, but it wasn’t quite as easy as the ad makes it look. I started with a different photo of Buddy — one where he’s actually on a plane in my son’s arms — and gave it the next prompt: “make a photo of the deer on his next flight.” The result is pretty good, but his lower half is obscured in the source image, so the feet aren’t quite right. Close enough, though.

The ad doesn’t show the full prompt for the next two photos, so I went with: “Now make a photo of the same deer in front of the Grand Canyon.” And it did just that — with the airplane seatbelt and headphones, too. I was more specific with my next prompt, added a camera in his hands, and got something more convincing.

Looks plausible enough.
Image: Gemini / The Verge

Safety first, Buddy.
Image: Gemini / The Verge

I can see how Gemini misinterpreted my prompt. I was trying to keep it simple, and requested a photo of the same deer “at a family reunion.” I did not specify his family reunion. So that’s how he ended up crashing the Johnson family reunion — a gathering of humans. I can only assume that Gemini took my last name as a starting point here because it sure wasn’t in my prompt, and when I requested that Gemini create a new reunion scene with his family, it just swapped the people for stuffed deer. There are even little placards on the table that say “deer reunion.” Reader, I screamed.

I’m pretty sure I’ve seen this family in a pharmaceutical commercial before.
Image: Gemini / The Verge

Oh deer.
Image: Gemini / The Verge

For the last portion of the ad, the couple uses Gemini to create cute little videos of Mr. Fuzzy getting increasingly adventurous: snowboarding, white-water rafting, and skydiving, before finally appearing in a spacesuit on the moon to address “Emma” directly. The commercial whips through all these clips quickly, which feels like a little sleight of hand given that Gemini takes at least a couple of minutes to create a video. And even on my Gemini Pro account, I’m limited to three generated videos per day. It would take a few days to get all of those clips right.

Gemini wouldn’t make a video based on any image of my kid holding the stuffed deer, probably thanks to some welcome guardrails preventing it from generating deepfakes of babies. I started with the only photo I had on hand of Buddy on his own: hanging upside down, air-drying after a trip through the washer. And that’s how he appears in the first clip it generated: Temu Buddy hanging upside down in space before dropping into place, morphing into a right-side-up astronaut, and delivering the dialogue I requested.

A second prompt with a clear photo of Buddy right-side-up seemed to mash up elements of the previous video with the new one, so I started a brand new chat to see if I could get it working from scratch. Honestly? Nailed it. Aside from the antlers, which Gemini keeps sneaking in. But this clip also brought one nagging question to the forefront: should you do any of this when your kid loses a beloved toy?

I gave Buddy the same dialogue as in the commercial, using my son’s name rather than Emma. Hearing that same manufactured voice say my kid’s name out loud set off alarm bells in my head. An AI-generated Buddy in front of the Eiffel Tower? Sorta weird, sorta cute. AI Buddy addressing my son by name? Nope, absolutely not, no thank you.

How much, and when, to lie to your kids is a philosophical debate you have with yourself over and over as a parent. Do you swap in the identical stuffie you had in a closet when the original goes missing and pretend it’s all the same? Do you tell them the truth and take it as an opportunity to learn about grief? Do you just need to buy yourself a little extra time before you have that conversation, and enlist AI to help you make a believable case? I wouldn’t blame any parent for choosing any of the above. But personally, I draw the line at an AI character talking directly to my kid. I never showed him these AI-generated versions of Buddy, and I plan to keep it that way.



But back to the less morally complex question: can Gemini actually do all of the things that it does in the commercial? More or less. But there’s an awful lot of careful prompting and re-prompting you’d have to do to get those results. It’s telling that throughout most of the ad you don’t see the full prompt that’s supposedly generating the results on screen. A lot depends on your source material, too. Gemini wouldn’t produce any kind of video based on an image in which my kid was holding Buddy — for good reason! But this does mean that if you don’t have the right kind of photo on hand, you’re going to have a very hard time generating believable videos of Mr. Sniffles or whoever hitting the ski slopes.

Like many other elder millennials, I think about Calvin and Hobbes a lot. Bill Watterson famously refused to commercialize his characters, because he wanted to keep them alive in our imaginations rather than on a screen. He insisted that having an actor give Hobbes a voice would change the relationship between the reader and the character, and I think he’s right. The bond between a kid and a stuffed animal is real and kinda magical; whoever Buddy is in my kid’s imagination, I don’t want AI overwriting that.

The great cruelty of it all is knowing that there’s an expiration date on that relationship. When I became a parent, I wasn’t at all prepared for the way my toddler nuzzling his stuffed deer would crack my heart right open. It’s so pure and sweet, but it always makes me a little sad at the same time, knowing that the days when he looks for comfort from a stuffed animal like Buddy are numbered. He’s going to outgrow it all, and I’m not prepared for that reality. Maybe as much as we’re trying to save our kids some heartbreak over their lost companion, we’re really trying to delay ours, too.

All images and videos in this story were generated by Google Gemini.


Why the Microsoft 365 Copilot bug matters for data security



You trust your email security settings for a reason. So when an AI assistant quietly reads and summarizes messages marked confidential, that trust takes a hit.

Microsoft says a bug in Microsoft 365 Copilot allowed its AI chat feature to process sensitive emails since late January.

The issue bypassed Data Loss Prevention policies that organizations rely on to protect private information. Put simply, emails that were supposed to stay locked down were being summarized anyway.


Microsoft 365 Copilot’s work chat interface sits at the center of the issue after a bug allowed it to summarize confidential emails. (Microsoft)

Microsoft 365 Copilot bug summarized confidential emails

Microsoft says a coding error impacted Microsoft 365 Copilot Chat, specifically the “work tab” feature. The AI assistant helps business users summarize content, draft responses and analyze information across Word, Excel, PowerPoint, Outlook and OneNote.

Beginning Jan. 21, an internal bug labeled CW1226324 caused Copilot to read and summarize emails stored in Sent Items and Drafts folders.

The real concern runs deeper. Several of those messages carried confidentiality or sensitivity labels.


Companies apply those labels along with DLP policies to block automated systems from accessing restricted content. Despite those safeguards, Copilot still generated summaries. 

We reached out to Microsoft, and a spokesperson provided CyberGuy with the following statement:

“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.” 

Why the Microsoft 365 Copilot bug matters for data security

AI tools feel helpful. They save time and reduce busy work. But they also rely on deep access to your data. When safeguards fail, even temporarily, sensitive content can move in ways you did not expect.


For businesses, that could mean:

  • Legal discussions summarized outside intended controls
  • Financial projections processed despite restrictions
  • HR communications exposed to automated analysis

Even if no data leaves the organization, the bypass itself raises concerns about how AI integrates with enterprise security systems.


Business users rely on Copilot to streamline work, but a recent bug raised concerns about how it handles sensitive email content. (Microsoft)

How Microsoft is fixing the Microsoft 365 Copilot bug

Microsoft says it began rolling out a fix in early February. The company continues to monitor deployment and is contacting some affected users to verify the fix works.

However, Microsoft has not provided a final timeline for full remediation. It has also not disclosed how many organizations were affected.

The issue is tagged as an advisory, which usually signals limited scope or impact. Still, many security professionals will want deeper clarity before feeling comfortable.

What this Microsoft 365 Copilot issue reveals about AI security

This incident highlights something many companies are wrestling with right now. AI assistants sit inside productivity platforms. They need access to email, documents and collaboration tools to work well.


At the same time, those platforms contain your most sensitive information. When AI features expand quickly, security policies must evolve just as fast. Otherwise, even a small code mistake can create unexpected exposure.

The Copilot chat feature was designed to boost productivity, yet a code error let it process emails labeled confidential. (Microsoft)

Ways to stay safe after the Microsoft 365 Copilot bug

If your organization uses Microsoft 365 Copilot, here are practical steps to reduce risk:

1) Review Copilot access settings

Work with your IT team to confirm which folders and data sources Copilot can access.


2) Revalidate DLP policies

Test sensitivity labels and DLP rules to ensure they block AI processing as intended.

3) Monitor advisory updates

Stay current on Microsoft service alerts and verify that the fix is fully deployed in your tenant.

4) Limit AI scope during investigations

If you have concerns, consider temporarily restricting Copilot features until verification is complete.

5) Train employees on AI boundaries

Remind staff that AI assistants can process drafts and sent messages. Encourage careful handling of sensitive content.

6) Audit Copilot activity logs

Review audit logs to see whether Copilot accessed or summarized labeled emails. This helps determine actual exposure rather than assumed risk.


7) Review sensitivity label configuration

Confirm that confidential labels are configured to block AI processing where required. Misconfigured labels can create gaps even after a bug is fixed.

8) Reassess retention and draft policies

Because the issue involved Sent Items and Drafts, evaluate whether sensitive drafts should be stored long-term or deleted after sending. A scripted first pass for finding that content appears after this list.

9) Limit Copilot to specific user groups

Instead of enabling Copilot organization-wide, consider a phased deployment to departments with lower sensitivity exposure.

10) Conduct a post-incident security review

Use this moment to reassess how AI tools integrate with compliance controls. Treat it as a learning opportunity rather than a one-time glitch.
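For steps 6 through 8, a bit of scripting can help turn assumed risk into data. Below is a minimal sketch, assuming an Azure AD app registration with the delegated Mail.Read permission and an OAuth token acquired via MSAL (the token value is a placeholder). It lists messages in Drafts and Sent Items that carry Outlook’s classic “confidential” sensitivity flag via Microsoft Graph. Note that Purview sensitivity labels are a separate system from this flag, so treat this as a first pass, not a full audit.

```python
# A minimal sketch: list messages in the Drafts and Sent Items folders
# that carry Outlook's "confidential" sensitivity flag, to help scope
# potential exposure. Assumes a Microsoft Graph token with Mail.Read;
# the token value below is a placeholder.
import requests

ACCESS_TOKEN = "YOUR_GRAPH_TOKEN"  # placeholder; acquire via MSAL in practice
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def confidential_messages(folder: str) -> list[dict]:
    """Return confidential-flagged messages from a well-known mail folder."""
    url = f"{GRAPH}/me/mailFolders/{folder}/messages"
    params = {"$select": "subject,sensitivity,lastModifiedDateTime", "$top": "100"}
    resp = requests.get(url, headers=HEADERS, params=params, timeout=30)
    resp.raise_for_status()
    return [m for m in resp.json().get("value", [])
            if m.get("sensitivity") == "confidential"]

# "drafts" and "sentitems" are Graph's well-known folder names.
for folder in ("drafts", "sentitems"):
    for msg in confidential_messages(folder):
        print(folder, msg["lastModifiedDateTime"], msg["subject"])
```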

Pro Tip: This Copilot bug centers on enterprise controls. Even so, AI tools operate on your devices and accounts, so keeping software up to date and using strong antivirus software adds an important layer of defense. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com


Considering a more private email provider

Enterprise AI bugs raise a bigger question: how much access should email platforms have to your data in the first place? If you want an added layer of privacy beyond mainstream providers, privacy-focused email services are worth exploring.

Some offer end-to-end encryption, support for PGP encryption and a strict no-ads business model that avoids scanning messages for marketing purposes.

AI WEARABLE HELPS STROKE SURVIVORS SPEAK AGAIN

Many also allow you to create disposable email aliases, which can reduce spam and limit exposure if one address is compromised.

While no provider is immune to software bugs, choosing an email service built around privacy rather than data monetization can limit how much of your information is accessible to automated systems in the first place.


For individuals, journalists and small businesses especially, that added control can make a meaningful difference.

For recommendations on private and secure email providers that offer alias addresses, visit Cyberguy.com

Kurt’s key takeaways

AI assistants are becoming part of daily work life. They promise speed, efficiency and smarter workflows. But convenience should never outrun security.

This Copilot bug may have a limited impact. Still, it serves as a reminder that AI tools are only as strong as the guardrails behind them.

When those guardrails slip, even briefly, sensitive information can move in unexpected ways. As AI becomes more embedded in business software, trust will depend on transparency, fast fixes and clear communication.


Here is the real question: If your AI assistant can see everything you write, are you fully confident it respects every boundary you set? Let us know by writing to us at Cyberguy.com



Samsung’s Digital Home Key lets you use your phone as your key


Just days after showing off the Galaxy S26, Samsung is finally rolling out the ability for users to unlock their home with a tap of their phone or by simply approaching their door. The new feature, called Digital Home Key, will live inside Samsung Wallet and is powered by the Aliro smart home standard.

Samsung first teased its Digital Home Key feature in 2024 and said it would be available in 2025. That didn’t pan out, as the Connectivity Standards Alliance’s Aliro standard — which will let users unlock smart locks with any phone — only arrived in February of this year. The new standard uses near-field communication (NFC) for its tap-to-unlock technology. It also supports ultra-wideband (UWB), giving users the ability to unlock their door as they approach, without pulling out their phone.

To add a Digital Home Key to your wallet, you’ll need to set up a compatible smart lock through SmartThings using Matter. Only some Galaxy smartphones support both NFC and UWB, including the Galaxy Z Fold 4 and up, as well as the Galaxy S22 Ultra and up. You can view the full list of compatible devices on Samsung’s website.


China’s ultrasound brain tech race heats up



When you hear “brain-computer interface,” you probably picture surgery, wires and a chip in your head. Now picture something quieter. No implant. No incision. Just sound waves directed at the brain.

That is the approach behind a new wave of ultrasound brain-computer interface companies in China. One of the newest is Gestala, founded in Chengdu with offices in Shanghai and Hong Kong. The company says it is developing technology that can stimulate and eventually study brain activity using focused ultrasound.

Yes, the same basic technology is used in medical imaging. But this time, it targets neural circuits.



Brain imaging highlights the regions researchers study as companies explore noninvasive ultrasound brain-computer interface technology. (Kurt “CyberGuy” Knutsson)

What is an ultrasound brain-computer interface?

Most brain-computer interface systems rely on electrodes that detect electrical signals from neurons. Neuralink is the most visible example. It places tiny threads inside the brain to record activity. Ultrasound works differently.

Instead of measuring electrical signals directly, it uses high-frequency sound waves. Depending on intensity and focus, those waves can:

  • Create images of internal tissue
  • Destroy abnormal tissue such as tumors
  • Modulate neural activity without open surgery

Focused ultrasound treatments are already approved for Parkinson’s disease, uterine fibroids and certain tumors. That clinical history gives companies like Gestala a foundation to build on. However, studying or interpreting brain signals with ultrasound is far more complex than delivering targeted stimulation.


Unlike implant-based systems such as Neuralink, ultrasound brain-computer interface research focuses on stimulating the brain without surgery. (Neuralink)


How Gestala plans to treat chronic pain with focused ultrasound

Gestala’s first product is focused on chronic pain. The company plans to target the anterior cingulate cortex, a brain region linked to the emotional experience of pain. Early pilot studies suggest that stimulating this area can reduce pain intensity for up to a week in some patients.

The first-generation device will be a stationary system used in clinics. Patients would visit a hospital for treatment sessions. Later, the company plans to develop a wearable helmet designed for supervised use at home. Over time, Gestala says it wants to expand into depression, other mental health conditions, stroke rehabilitation, Alzheimer’s disease and sleep disorders. That is an ambitious roadmap. Each condition involves different brain networks and clinical hurdles.

Can ultrasound read brain activity without implants?

Like other brain tech startups, Gestala is also exploring whether ultrasound could help interpret brain activity. The long-term concept is straightforward in theory. A device could detect patterns linked to chronic pain or depression, then deliver stimulation to specific regions in response.

Unlike traditional brain implants, which capture electrical signals from limited areas, an ultrasound-based system may have the potential to access broader regions of the brain. That possibility is one reason researchers are paying attention. Still, translating that concept into reliable data is a major engineering challenge.

The global race to build noninvasive brain interfaces

China is not alone in exploring ultrasound brain-computer interface systems. Earlier this month, OpenAI announced a significant investment in Merge Labs, a startup cofounded by Sam Altman along with researchers linked to Forest Neurotech.

Advertisement

Public materials from Merge Labs mention restoring lost abilities, supporting healthier brain states and deepening human connection with advanced AI. That language signals long-term ambitions. Yet experts caution that real-world applications are still years away.


Researchers use MRI guidance to precisely target the anterior cingulate cortex with focused ultrasound during chronic pain studies. (Gestala)

The technical limits of ultrasound brain interfaces

Ultrasound faces technical limits. First, the skull weakens and distorts sound waves. That makes it harder to obtain precise signals. In research settings, detailed readouts of neural activity have required special implants that allow ultrasound to pass more clearly than bone.
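For an order-of-magnitude sense of the problem (using illustrative textbook figures, not Gestala’s numbers): acoustic attenuation in skull bone is often quoted around 20 dB per centimeter per megahertz, versus well under 1 dB for soft tissue. At the roughly 0.5 MHz frequencies favored for transcranial work, a 0.7 cm thick skull alone costs on the order of 20 × 0.5 × 0.7 ≈ 7 dB of one-way loss, which works out to about 80 percent of the acoustic intensity gone before the beam ever reaches its target. And because that loss varies across the skull, the beam also defocuses without careful correction.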

Second, ultrasound measures changes in blood flow. Blood flow shifts more slowly than electrical firing in neurons. That delay may limit applications that require fast, detailed signal decoding, such as real-time speech translation. In short, stimulation is one challenge. Accurate readout is another level entirely.


What this means to you

Right now, this technology is experimental. You are not about to buy a brain helmet at your local electronics store. Still, the direction matters. If noninvasive ultrasound devices can reduce chronic pain or support mental health treatment, more patients may consider therapy without facing brain surgery.

At the same time, devices that analyze brain states introduce new privacy questions. Brain-related data is deeply personal. Regulators, hospitals and companies will need clear rules about how that data is stored, shared and protected. Finally, the link between AI companies and brain interface startups shows how closely digital intelligence and neuroscience are becoming intertwined. That connection could reshape medicine, wellness, and even how we interact with technology.



Kurt’s key takeaways

Brain-computer interfaces used to feel far off and experimental. Now they are a serious focus of global research and investment. China’s push to develop an ultrasound-based brain-computer interface adds momentum to a field already shaped by companies like Neuralink and new ventures backed by OpenAI. Progress is steady but measured. The potential is significant. The technical hurdles are real. What happens next will depend on whether researchers can turn promising lab results into safe, reliable treatments people can actually use.

If sound waves could one day interpret your mental state, who should decide how that information is used? Let us know by writing to us at Cyberguy.com.

