FBI warns of fake kidnapping photos used in new scam
The FBI is warning about a disturbing scam that turns family photos into powerful weapons. Cybercriminals are stealing images from social media accounts, altering them and using them as fake proof of life in virtual kidnapping scams.
These scams do not involve real abductions. Instead, criminals rely on fear, speed and believable images to pressure victims into paying ransom before they can think clearly.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
Scammers steal photos from public social media accounts and manipulate them to create fake proof of life images that fuel fear and urgency. (Kurt “CyberGuy” Knutsson)
How the fake kidnapping scam works
According to the FBI, scammers usually start with a text message. They claim they have kidnapped a loved one and demand immediate payment for their release. To make the threat feel real, the criminals send an altered photo pulled from social media. The FBI says these images may be sent using timed messages to limit how long victims can examine them. The agency warns that scammers often threaten extreme violence if the ransom is not paid right away. This urgency is designed to shut down rational thinking.
Signs the photo may be fake
When victims slow down and look closely, the altered images often fall apart. The FBI says warning signs may include missing scars or tattoos, strange body proportions or details that do not match reality. Scammers may also spoof a loved one’s phone number, which makes the message feel even more convincing. Reports on sites like Reddit show this tactic is already being used in the real world.
Why this fake kidnapping scam is so effective
Virtual kidnapping scams work because they exploit emotion. Fear pushes people to act fast, especially when the message appears to come from someone they trust. The FBI notes that criminals use publicly available information to personalize their threats. Even posts meant to help others, such as missing person searches, can provide useful details for scammers.
Ways to stay safe from virtual kidnapping scams
The FBI recommends several steps to protect yourself and your family.
- Be mindful of what you post online, especially photos and personal details
- Avoid sharing travel information in real time
- Create a family code word that only trusted people know
- Pause and question whether the claims make sense
- Screenshot or record any proof-of-life photos
- If you receive a message like this, contact your loved one directly before doing anything else
Staying calm is one of your strongest defenses. Slowing down gives you time to spot red flags and avoid costly mistakes.
How to strengthen your digital defenses against virtual kidnapping scams
When scammers can access your photos, phone numbers and personal details, they can turn fear into leverage. These steps help reduce what criminals can find and give you clear actions to take if a threat appears.
1) Lock down your social media accounts
Review the privacy settings on every social platform you use. Set profiles to private so only trusted friends and family can see your photos, posts and personal updates. Virtual kidnapping scams rely heavily on publicly visible images. Limiting access makes it harder for criminals to steal photos and create fake proof-of-life images.
Limiting what you share online and slowing down to verify claims can help protect your family from panic-driven scams like this one. (Jaap Arriens/NurPhoto via Getty Images)
2) Be cautious about what you share online
Avoid posting real-time travel updates, daily routines or detailed family information. Even close-up photos that show tattoos, scars or locations can give scammers useful material. The less context criminals have, the harder it is for them to make a threat feel real and urgent.
3) Use strong antivirus software on all devices
Install strong antivirus software on computers, phones and tablets. Strong protection helps block phishing links, malicious downloads and spyware often tied to scam campaigns. Keeping your operating system and security tools updated also closes security gaps that criminals exploit to gather personal data.
The best way to safeguard yourself from malicious links that install malware and can access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
4) Consider a data removal service to reduce exposure
Data brokers collect and sell personal information pulled from public records and online activity. A data removal service helps locate and remove your details from these databases. Reducing what is available online makes it harder for scammers to impersonate loved ones or personalize fake kidnapping threats.
While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com
5) Limit facial data in public profiles
Review older public photo albums and remove images that clearly show faces from multiple angles. Avoid posting large collections of high-resolution facial photos publicly. Scammers often need multiple images to convincingly alter photos. Reducing facial data weakens their ability to manipulate images.
6) Establish a family verification plan
Create a simple verification plan with loved ones before an emergency happens. This may include a shared code word, a call back rule or a second trusted contact. Scammers depend on panic. Having a preset way to verify safety gives you something steady to rely on when emotions run high.
7) Secure phone accounts and enable SIM protection
Contact your mobile carrier and ask about SIM protection or a port-out PIN. This helps prevent criminals from hijacking phone numbers or spoofing calls and texts. Since many fake kidnapping scams begin with messages that appear to come from a loved one, securing phone accounts adds an important layer of protection.
The FBI warns that these virtual kidnapping scams often begin with a text message that pressures victims to pay a ransom immediately. (Getty Images)
8) Save evidence and report the scam
If you receive a threat, save screenshots, phone numbers, images and message details. Do not continue engaging with the sender. Report the incident to the FBI’s Internet Crime Complaint Center. Even if no money is lost, reports help investigators track patterns and warn others.
Kurt’s key takeaways
Virtual kidnapping scams show how quickly personal photos can be weaponized. Criminals do not need real victims when fear alone can drive action. Taking time to verify claims, limiting what you share online and strengthening your digital defenses can make a major difference. Awareness and preparation remain your best protection.
Have you or someone you know encountered a scam like this? Let us know by writing to us at Cyberguy.com
Xbox’s Towerborne is switching from a free-to-play game to a paid one
Towerborne, a side-scrolling action RPG published by Xbox Game Studios that has been available in early access, will officially launch on February 26th. But instead of launching as a free-to-play, always-on online game as originally planned, Towerborne will be a paid game that you can play offline.
“You will own the complete experience permanently, with offline play and online co-op,” Trisha Stouffer, CEO and president of Towerborne developer Stoic, says in an Xbox Wire blog post. “This change required deep structural rebuilding over the past year, transforming systems originally designed around constant connectivity. The result is a stronger, more accessible, and more player-friendly version of Towerborne — one we’re incredibly proud to bring to launch.”
“After listening to our community during Early Access and Game Preview, we learned players wanted a complete, polished experience without ongoing monetization mechanics,” according to an FAQ. “Moving to a premium model lets us deliver the full game upfront—no live-service grind, no pay-to-win systems—just the best version of Towerborne.”
With popular live-service games like Fortnite and Roblox getting harder to usurp, Towerborne’s switch to a premium, offline-playable experience could make it more enticing for players who don’t want to jump into another time-sucking forever game. It makes Towerborne more appealing to me, at least.
With the 1.0 release of the game, Towerborne will have a “complete” story, new bosses, and a “reworked” difficulty system. You’ll also be able to acquire all in-game cosmetics for free through gameplay, with “no more cosmetic purchasing.” Players who are already part of early access will still be able to play the game.
Towerborne will launch on February 26th on Xbox Series X / S, Xbox on PC, Game Pass, Steam, and PS5. The standard edition will cost $24.99, while the deluxe edition will cost $29.99.
Hackers abuse Google Cloud to send trusted phishing emails
Cybercriminals have found a clever new way to get phishing emails straight into inboxes.
Instead of spoofing brands, they are abusing real cloud tools that people already trust. Security researchers say attackers recently hijacked a legitimate email feature inside Google Cloud.
The result was thousands of phishing messages that looked and felt like normal Google notifications. Many slipped past spam filters with ease.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – when you join my CYBERGUY.COM newsletter.
How this Google Cloud phishing attack worked
At the center of the campaign was Google Cloud Application Integration. This service allows businesses to send automated email notifications from workflows they build. Attackers exploited the Send Email task inside that system. Because the messages came from a real Google address, they appeared authentic to both users and security tools.
According to Check Point, a global cybersecurity firm that tracks and analyzes large-scale threat campaigns, the emails were sent from a legitimate Google-owned address and closely matched Google’s notification style. Fonts, wording, and layout all looked familiar. Over a two-week period in December 2025, attackers sent more than 9,000 phishing emails targeting roughly 3,200 organizations across the U.S., Europe, Canada, Asia Pacific, and Latin America.
Attackers used trusted Google Cloud infrastructure to route victims through multiple redirects before revealing the scam. (Thomas Fuller/SOPA Images/LightRocket via Getty Images)
Why Google phishing emails were so convincing
The messages looked like routine workplace alerts. Some claimed you had received a voicemail. Others said you were granted access to a shared document, like a Q4 file. That sense of normalcy lowered suspicion. Many people are used to seeing these exact messages every day. Even more concerning, the emails bypassed common protections like SPF and DMARC because they were sent through Google-owned infrastructure. To email systems, nothing looked fake.
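If you want to see this for yourself, the raw message headers tell the story. Below is a minimal sketch in Python, using only the standard library, that reads a saved .eml file and prints its Authentication-Results header; the file name is a placeholder, and in Gmail you can export a message via "Show original." It illustrates why a "pass" on SPF, DKIM and DMARC only proves where a message was sent from, not that the sender is honest.

```python
# Minimal sketch: read a saved message (.eml) and print the headers that
# email systems use to judge authenticity. "suspicious.eml" is a
# placeholder file name, not part of the reported campaign.
from email import policy
from email.parser import BytesParser

with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:        ", msg["From"])
print("Reply-To:    ", msg["Reply-To"])               # scams often route replies elsewhere
print("Auth results:", msg["Authentication-Results"])

# For mail abused through Google-owned infrastructure, spf/dkim/dmarc will
# typically all read "pass" -- these checks verify the sending server, not
# the sender's intent.
```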
What happens after you click
The attack did not stop at the email. Once a victim clicked the link, they were sent to a page hosted on storage.cloud.google.com. That added another layer of trust. From there, the link redirected again to googleusercontent.com. Next came a fake CAPTCHA or image check. This step blocked automated security scanners while letting real users continue. After passing that screen, victims landed on a fake Microsoft login page hosted on a non-Microsoft domain. Any credentials entered there were captured by the attackers.
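For the technically inclined, a defender can map that redirect chain without ever rendering the page. The sketch below is an illustration only, not the attackers' infrastructure: it uses Python's third-party requests library to follow HTTP redirects from a placeholder URL and print each hop's hostname. Note that it only sees server-side redirects; the fake CAPTCHA step described above runs in the browser and would stop an automated crawl at that point.

```python
# Illustration only: trace a link's HTTP redirect chain in a safe,
# sandboxed environment. The URL below is a placeholder, not a real
# campaign link. Requires the third-party "requests" package.
import requests
from urllib.parse import urlparse

SUSPECT_URL = "https://storage.cloud.google.com/EXAMPLE-PLACEHOLDER"

resp = requests.get(SUSPECT_URL, allow_redirects=True, timeout=10)

# resp.history lists every intermediate response; resp is the final page.
for hop in list(resp.history) + [resp]:
    print(hop.status_code, urlparse(hop.url).hostname)

# Red flag: a chain that starts on Google-owned hosts but ends on an
# unrelated domain serving a "Microsoft" sign-in form.
```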
Who was targeted in the Google Cloud phishing attack
Check Point says the campaign focused heavily on industries that rely on automated alerts and shared documents. That included manufacturing, technology, finance, professional services, and retail. Other sectors like healthcare, education, government, energy, travel and media were also targeted. These environments see constant permission requests and file-sharing notices, which made the lures feel routine.
“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration,” a Google spokesperson told Cyberguy. “Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”
The incident demonstrates how attackers can weaponize legitimate cloud automation tools without resorting to traditional spoofing.
Ways to stay safe from trusted-looking phishing emails
Phishing emails are getting harder to spot, especially when attackers abuse real cloud platforms like Google Cloud. These steps help reduce risk when emails look familiar and legitimate.
1) Slow down before acting on alerts
Attackers rely on urgency. Messages about voicemails, shared files or permission changes are designed to make you click fast. Pause before taking action. Ask yourself whether you were actually expecting that alert. If not, verify it another way.
2) Inspect links before you click
Always hover over links to preview the destination domain. In this campaign, links jumped across multiple trusted-looking Google domains before landing on a fake login page. If the final destination does not match the service asking you to sign in, close the page immediately.
3) Treat file access and permission emails with caution
Shared document alerts are a favorite lure because they feel routine at work. If an email claims you were granted access to a file you do not recognize, do not click directly from the message. Instead, open your browser and sign in to Google Drive or OneDrive manually to check for new files.
The final step led users to a fake Microsoft login page, where entered credentials were silently stolen. (Stack Social)
4) Use a password manager to catch fake login pages
Password managers can be a strong last line of defense. They will not autofill credentials on fake Microsoft or Google login pages hosted on non-official domains. If your password manager refuses to fill in a login, that is a red flag worth paying attention to.
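The reason this works is simple: password managers bind each saved login to the domain it was created on and fill only when the current page matches. The snippet below is a conceptual illustration of that idea, not the code of any particular product, and real managers apply more careful matching rules.

```python
# Conceptual illustration of domain-bound autofill -- not any specific
# password manager's implementation. Saved logins are keyed by host, so a
# look-alike page on another domain gets nothing.
from urllib.parse import urlparse

SAVED_LOGINS = {
    "login.microsoftonline.com": "work-account",
    "accounts.google.com": "personal-account",
}

def autofill_candidate(page_url: str):
    host = urlparse(page_url).hostname or ""
    for saved_host, account in SAVED_LOGINS.items():
        # Fill only for the saved host itself or one of its subdomains.
        if host == saved_host or host.endswith("." + saved_host):
            return account
    return None  # mismatch: nothing to fill, which is itself a warning sign

print(autofill_candidate("https://login.microsoftonline.com/"))          # work-account
print(autofill_candidate("https://rnicrosoft-login.example-files.net"))  # None
```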
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
5) Run strong antivirus software with phishing protection
Modern antivirus tools do more than scan files. Many now detect malicious links, fake CAPTCHA pages, and credential harvesting sites in real time. Strong antivirus software can block phishing pages even after a click, which matters in multi-stage attacks like this one.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Reduce your exposure with a data removal service
Phishing campaigns often succeed because attackers already know your email, employer or role. That information is commonly pulled from data broker sites. A data removal service helps remove your personal information from these databases, making it harder for attackers to craft convincing, targeted emails.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Enable two-factor authentication (2FA) everywhere
Even if attackers steal your password, two-factor authentication (2FA) can stop them from accessing your account. Use app-based authentication or hardware keys when possible, especially for work email, cloud storage, and Microsoft accounts.
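To see why app-based codes help even after a password is phished, here is a small sketch of how time-based one-time passwords (TOTP, the standard behind most authenticator apps) work. It uses the third-party pyotp package and a throwaway secret; it illustrates the RFC 6238 scheme in general, not any provider's exact setup.

```python
# Sketch of app-based 2FA (TOTP, RFC 6238) using the third-party "pyotp"
# package. The secret here is a throwaway example; never reuse a real one.
import pyotp

secret = pyotp.random_base32()   # normally delivered to your phone via a QR code
totp = pyotp.TOTP(secret)        # 6-digit codes, 30-second window by default

code = totp.now()
print("Current code:", code)
print("Verifies right now?", totp.verify(code))

# The code is derived from a secret that stays on your device and expires in
# about 30 seconds, so a phished password alone is not enough to log in.
# Hardware security keys go further by cryptographically binding the login
# to the genuine domain.
```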
8) Report suspicious emails immediately
If something feels off, report it. Flag suspicious Google or Microsoft alerts to your IT or security team so they can warn others. Early reporting can stop a phishing campaign before it spreads further inside an organization.
Google phishing emails looked like routine workplace alerts. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This campaign highlights a growing shift in phishing tactics. Attackers no longer need to fake brands when they can abuse trusted cloud services directly. As automation becomes more common, security awareness matters more than ever. Even familiar emails deserve a second look, especially when they push urgency or ask for credentials.
If a phishing email comes from a real Google address, how confident are you that you would spot it before clicking? Let us know by writing to us at Cyberguy.com.
Most dubious uses of AI at CES 2026
You can’t shake a stick without hitting an AI gadget at CES this year, with artificial smarts now embedded in just about every wearable, screen, and appliance across the show floor, not to mention the armies of AI companions, toys, and robots.
But those are just the beginning. We’ve seen AI pop up in much stranger places too, from hair clippers to stick vacs, and at least one case where even the manufacturer itself seemed unsure what made its products “AI.”
Here are the gadgets we’ve seen at CES 2026 so far that really take the “intelligence” out of “artificial intelligence.”
Glyde smart hair clippers
This is a product that would be silly enough without the AI add-on. These smart hair clippers help amateur hairdressers deliver the perfect fade by dynamically altering the closeness of the cut, helped along by an ominous face mask that looks like it belongs in an optician’s office.
But it’s taken to the next level by the real-time AI coach, which gives you feedback as you cut. Glyde told me it’s working on voice controls for the AI too, and that eventually it will be able to recommend specific hairstyles, so long as you’re willing to trust its style advice. Are you?
SleepQ AI pills
“Where Pills meet AI.”
That was the message emblazoned across the SleepQ booth, where company reps were handing out boxes of pills — a multivitamin with ashwagandha extract according to the box, supposedly good for sleep, though I wasn’t brave enough to test that claim on my jetlag.
Manufacturer Welt, originally spun out of a Samsung incubator, calls its product “AI-upgraded pharmacotherapy.” It’s really just using biometric data from your smartwatch or sleep tracker to tell you the optimal time to take a sleeping pill each day, with plans to eventually cover anxiety meds, weight-management drugs, pain relief, and more.
There may well be an argument that fine-tuning the time people pop their pills could make them more effective, but I feel safe in saying we don’t need to start throwing around the term “AI-enhanced drugs.”
Deglace Fraction vacuum
Startup Deglace claims that its almost unnecessarily sleek-looking Fraction vacuum cleaner uses AI in two different ways: first to “optimize suction,” and then to manage repairs and replacements for the modular design.
It says its Neural Predictive AI monitors vacuum performance “to detect issues before they happen,” giving you health scores for each of the vacuum’s components, which can be conveniently replaced with a quick parts order from within the accompanying app. A cynic might worry this is all in the name of selling users expensive and proprietary replacement parts, but I can at least get behind the promise of modular upgrades — assuming Deglace is able to deliver on that promise.
Fraimic AI picture frame
Most digital picture frames let you display photos of loved ones, old holiday snaps, or your favorite pieces of art. Fraimic lets you display AI slop.
It’s an E Ink picture frame with a microphone and voice controls, so you can describe whatever picture you’d like, which the frame will then generate using OpenAI’s GPT Image 1.5 model. The frame itself starts at $399, which gets you 100 image generations each year, with the option to buy more if you run out.
What makes the AI in Fraimic so dubious is that it might be a pretty great product without it. The E Ink panel looks great, you can use it to show off your own pictures and photos too, and it uses so little power that it can run for years without being plugged in. We’d just love it a lot more without the added slop.
Infinix AI ModuVerse
Infinix, a smaller phone manufacturer that’s had success across Asia with its affordable phones, didn’t launch any actual new products at CES this year, but it did bring five concepts that could fit into future phones. Some are clever, like various color-changing rear finishes and a couple of liquid-cooling designs. And then there’s the AI ModuVerse.
Modular phone concepts are nothing new, so the AI hook is what makes ModuVerse unique — in theory. One of the “Modus” makes sense: a meeting attachment that connects magnetically, generating AI transcripts and live translation onto a mini display on the back.
But when I asked what made everything else AI, Infinix didn’t really have any good answers. The gimbal camera has AI stabilization, the vlogging lens uses AI to detect faces, and the microphone has AI voice isolation — all technically AI-based, but not in any way that’s interesting. As for the magnetic, stackable power banks, Infinix’s reps eventually admitted they don’t really have any AI at all. Color me shocked.
Wan AIChef microwave
There’s a growing trend for AI and robotic cooking hardware — The Verge’s Jen Tuohy reviewed a $1,500 robot chef just last month — but Wan AIChef is something altogether less impressive: an AI-enabled microwave.
It runs on what looks suspiciously like Android, with recipe suggestions, cooking instructions, and a camera inside so you can see the progress of what you’re making. But… it’s just a microwave. So it can’t actually do any cooking for you, other than warm up your food to just the right temperature (well, just right plus or minus 3 degrees Celsius, to be accurate).
It’ll do meal plans and food tracking and calorie counting too, which all sounds great so long as you’re willing to commit to eating all of your meals out of the AI microwave. Please, I beg you, do not eat all of your meals out of the AI microwave.
AI Barmen
The tech industry absolutely loves reinventing the vending machine and branding it either robotics or AI, and AI Barmen is no different.
This setup — apparently already in use for private parties and corporate events — is really just an automatic cocktail machine with a few AI smarts slapped on top.
The AI uses the connected webcam to estimate your age — it was off by eight years in my case — and confirm you’re sober enough to get another drink. It can also create custom drinks, with mixed success: When asked for something to “fuck me up,” it came up with the Funky Tequila Fizz, aka tequila, triple sec, and soda. What, no absinthe?
Luka AI Cube
Should you buy your kid an AI toy that gives them a complete LLM-powered chatbot to speak to? Probably not. But what if that AI chatbot looked like chibi Elon Musk?
He’s just one of the many avatars offered by the Luka AI Cube, including Hayao Miyazaki, Steve from Minecraft, and Harry Potter. Kids can chat to them about their day, ask for advice, or even share the AI Cube’s camera feed to show the AI avatars where they are and what they’re up to. Luka says it’s a tool for fun, but also learning, with various educational activities and language options.
The elephant in the room is whether you should trust any company’s guardrails enough to give a young kid access to an LLM. Leading with an AI take on Elon Musk — whose own AI, Grok, is busy undressing children as we speak — doesn’t exactly inspire confidence.