Technology
AI could drive US unemployment to 20%, senators warn as new bill targets job tracking
A new bipartisan push in Washington is shining a spotlight on AI’s impact on jobs. Senators Josh Hawley, R-Mo., and Mark Warner, D-Va., introduced the AI-Related Job Impacts Clarity Act, which would require major companies and federal agencies to report AI-related job impacts to the U.S. Department of Labor (DOL).
The legislation is designed to shed light on how artificial intelligence is affecting the U.S. workforce.
Key requirements of the AI-Related Job Impacts Clarity Act
The AI-Related Job Impacts Clarity Act sets out several core obligations:
- Covered entities must disclose, on a quarterly basis, job effects tied to AI, including layoffs, hires, and positions left unfilled because tasks were automated.
- The DOL must compile those disclosures and publish a public report, which would also be submitted to Congress.
- Non-publicly traded companies may also be covered if they meet certain thresholds.
The goal is to create a clear, consistent data source on how AI changes employment.
Why the AI-Related Job Impacts Clarity Act matters
AI is already reshaping the American workforce, and lawmakers from both parties say the country needs a clear view of what that means for jobs.
Sens. Josh Hawley and Mark Warner join forces on a new bipartisan bill to track how AI is changing American jobs. (Valerie Plesch/Bloomberg via Getty Images)
Hawley warned that the trend is accelerating.
“Artificial intelligence is already replacing American workers, and experts project AI could drive unemployment up to 10 to 20% in the next five years,” Hawley said. “The American people need to have an accurate understanding of how AI is affecting our workforce, so we can ensure that AI works for the people, not the other way around.”
Warner agreed, saying good data is key to good policy.
“This bipartisan legislation will finally give us a clear picture of AI’s impact on the workforce, what jobs are being eliminated, which workers are being retrained, and where new opportunities are emerging,” he said. “Armed with this information, we can make sure AI drives opportunity instead of leaving workers behind.”
Their shared goal is simple: make AI’s workforce impact visible and accountable. The AI-Related Job Impacts Clarity Act would give you and policymakers the hard data needed to guide smarter decisions about automation and employment.
Challenges in tracking AI-related job impacts
While the bill sounds promising, several hurdles remain. The biggest challenge is consistency. Each company decides what counts as an AI-related job impact, which could lead to uneven or incomplete reporting.
Smaller businesses might also escape the rules altogether if they fall outside the reporting thresholds. That could leave big gaps in understanding how automation affects local or niche industries.
Data quality is another concern. Even with reporting requirements, the system relies on companies to share accurate information. The Department of Labor will need strong verification to make sure the reports reflect reality.
And while transparency is valuable, it doesn’t automatically protect jobs. The law can expose the problem, but real progress will depend on what policymakers and employers do with that data.
The AI-Related Job Impacts Clarity Act would make companies report when automation replaces, adds or reshapes jobs. (Kurt “CyberGuy” Knutsson)
What this means for you
If you work in an industry where AI tools are becoming common, this bill could directly affect you. It would make it easier to see how automation changes jobs across the country. You’ll be able to find out which roles are being replaced and which ones are being created.
This new level of visibility could also pressure employers to be more transparent about layoffs. Companies may start explaining whether job cuts are truly due to AI or part of broader business shifts. That accountability could help workers plan smarter for the future.
With clearer data, policymakers and training programs can step in faster. If large numbers of people in a certain field lose work because of automation, the government could push for retraining or job placement efforts. It may even help workers prepare earlier by learning new digital or technical skills before AI impacts their roles.
Overall, this bill puts information in the public’s hands so workers can understand what’s happening to their jobs instead of being left in the dark.
Kurt’s key takeaways
The AI-Related Job Impacts Clarity Act marks a major step toward tracking how automation changes the American workforce. It doesn’t stop AI from transforming industries, but it gives workers and policymakers the facts they need to respond. Transparency can’t stop every job loss, but it can help guide smarter policies, retraining programs and career planning.
The Department of Labor would publish regular reports showing where AI is creating challenges and new opportunities for workers. (Getty)
If this new data shows your field is being reshaped by AI, would you start retraining now or wait to see how it plays out? Let us know by writing to us at Cyberguy.com
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Xbox’s Towerborne is switching from a free-to-play game to a paid one
Towerborne, a side-scrolling action RPG published by Xbox Game Studios that has been available in early access, will officially launch on February 26th. But instead of launching as a free-to-play, always-on online game as originally planned, Towerborne will be a paid game that you can play offline.
“You will own the complete experience permanently, with offline play and online co-op,” Trisha Stouffer, CEO and president of Towerborne developer Stoic, says in an Xbox Wire blog post. “This change required deep structural rebuilding over the past year, transforming systems originally designed around constant connectivity. The result is a stronger, more accessible, and more player-friendly version of Towerborne — one we’re incredibly proud to bring to launch.”
“After listening to our community during Early Access and Game Preview, we learned players wanted a complete, polished experience without ongoing monetization mechanics,” according to an FAQ. “Moving to a premium model lets us deliver the full game upfront—no live-service grind, no pay-to-win systems—just the best version of Towerborne.”
With popular live-service games like Fortnite and Roblox getting harder to usurp, Towerborne’s switch to a premium, offline-playable experience could make it more enticing for players who don’t want to jump into another time-sucking forever game. It makes Towerborne more appealing to me, at least.
With the 1.0 release of the game, Towerborne will have a “complete” story, new bosses, and a “reworked” difficulty system. You’ll also be able to acquire all in-game cosmetics for free through gameplay, with “no more cosmetic purchasing.” Players who are already part of early access will still be able to play the game.
Towerborne will launch on February 26th on Xbox Series X / S, Xbox on PC, Game Pass, Steam, and PS5. The standard edition will cost $24.99, while the deluxe edition will cost $29.99.
Technology
Hackers abuse Google Cloud to send trusted phishing emails
Cybercriminals have found a clever new way to get phishing emails straight into inboxes.
Instead of spoofing brands, they are abusing real cloud tools that people already trust. Security researchers say attackers recently hijacked a legitimate email feature inside Google Cloud.
The result was thousands of phishing messages that looked and felt like normal Google notifications. Many slipped past spam filters with ease.
How this Google Cloud phishing attack worked
At the center of the campaign was Google Cloud Application Integration. This service allows businesses to send automated email notifications from workflows they build. Attackers exploited the Send Email task inside that system. Because the messages came from a real Google address, they appeared authentic to both users and security tools.
According to Check Point, a global cybersecurity firm that tracks and analyzes large-scale threat campaigns, the emails were sent from a legitimate Google-owned address and closely matched Google’s notification style. Fonts, wording, and layout all looked familiar. Over a two-week period in December 2025, attackers sent more than 9,000 phishing emails targeting roughly 3,200 organizations across the U.S., Europe, Canada, Asia Pacific, and Latin America.
Attackers used trusted Google Cloud infrastructure to route victims through multiple redirects before revealing the scam. (Thomas Fuller/SOPA Images/LightRocket via Getty Images)
Why Google phishing emails were so convincing
The messages looked like routine workplace alerts. Some claimed you had received a voicemail. Others said you were granted access to a shared document, like a Q4 file. That sense of normalcy lowered suspicion. Many people are used to seeing these exact messages every day. Even more concerning, the emails bypassed common protections like SPF and DMARC because they were sent through Google-owned infrastructure. To email systems, nothing looked fake.
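For the technically curious, you can see this for yourself: most mail services let you download a message’s raw source (in Gmail, use “Show original”). The short Python sketch below, which uses only the standard library, prints the sender and the Authentication-Results header so you can see which domain the SPF, DKIM, and DMARC checks were actually evaluated against. The filename is just a placeholder for a message you have saved.

```python
# Minimal sketch: inspect which domain a message actually authenticated as.
# "suspicious_message.eml" is a placeholder for a message you downloaded.
from email import policy
from email.parser import BytesParser

with open("suspicious_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:", msg["From"])

# Authentication-Results records how SPF, DKIM, and DMARC were evaluated.
# A message sent through Google-owned infrastructure can pass these checks
# even when its content imitates another brand.
for header in msg.get_all("Authentication-Results") or []:
    print(header)
```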
What happens after you click
The attack did not stop at the email. Once a victim clicked the link, they were sent to a page hosted on storage.cloud.google.com. That added another layer of trust. From there, the link redirected again to googleusercontent.com. Next came a fake CAPTCHA or image check. This step blocked automated security scanners while letting real users continue. After passing that screen, victims landed on a fake Microsoft login page hosted on a non-Microsoft domain. Any credentials entered there were captured by the attackers.
Who was targeted in the Google Cloud phishing attack
Check Point says the campaign focused heavily on industries that rely on automated alerts and shared documents. That included manufacturing, technology, finance, professional services, and retail. Other sectors like healthcare, education, government, energy, travel and media were also targeted. These environments see constant permission requests and file-sharing notices, which made the lures feel routine.
“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration,” a Google spokesperson told Cyberguy. “Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”
The incident demonstrates how attackers can weaponize legitimate cloud automation tools without resorting to traditional spoofing.
Ways to stay safe from trusted-looking phishing emails
Phishing emails are getting harder to spot, especially when attackers abuse real cloud platforms like Google Cloud. These steps help reduce risk when emails look familiar and legitimate.
1) Slow down before acting on alerts
Attackers rely on urgency. Messages about voicemails, shared files or permission changes are designed to make you click fast. Pause before taking action. Ask yourself whether you were actually expecting that alert. If not, verify it another way.
2) Inspect links before you click
Always hover over links to preview the destination domain. In this campaign, links jumped across multiple trusted-looking Google domains before landing on a fake login page. If the final destination does not match the service asking you to sign in, close the page immediately.
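If you’re comfortable with a little code, you can also check where a link leads without opening it in your browser. This is a rough sketch using Python’s requests library; the URL is a placeholder, fetching unknown links should only be done from a machine or sandbox you don’t mind exposing, and tricks like the fake CAPTCHA step in this campaign can still hide the final page from simple scripts.

```python
# Minimal sketch: follow a link's redirect chain to see where it really lands.
# The URL below is a placeholder, not a link from the actual campaign.
import requests

url = "https://example.com/shared-doc"

resp = requests.get(url, allow_redirects=True, timeout=10)

# Print each redirect hop, then the final destination.
for hop in resp.history:
    print(hop.status_code, "->", hop.url)
print("Final destination:", resp.url)
```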
3) Treat file access and permission emails with caution
Shared document alerts are a favorite lure because they feel routine at work. If an email claims you were granted access to a file you do not recognize, do not click directly from the message. Instead, open your browser and sign in to Google Drive or OneDrive manually to check for new files.
The final step led users to a fake Microsoft login page, where entered credentials were silently stolen. (Stack Social)
4) Use a password manager to catch fake login pages
Password managers can be a strong last line of defense. They will not autofill credentials on fake Microsoft or Google login pages hosted on non-official domains. If your password manager refuses to fill in a login, that is a red flag worth paying attention to.
Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.
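As a general illustration of how breach scanning can work (this is not the scanner built into any particular password manager), the free Have I Been Pwned “Pwned Passwords” service lets you check a password using k-anonymity: the password is hashed locally and only the first five characters of the hash are sent, so the password itself never leaves your machine.

```python
# Generic sketch of a k-anonymity password breach check against the public
# Have I Been Pwned "Pwned Passwords" range API. Only the first 5 characters
# of the local SHA-1 hash are ever sent over the network.
import hashlib
import requests

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response lists hash suffixes and how often each appeared in breaches.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(breach_count("password123"))  # a throwaway example, not a real credential
```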
5) Run strong antivirus software with phishing protection
Modern antivirus tools do more than scan files. Many now detect malicious links, fake CAPTCHA pages, and credential harvesting sites in real time. Strong antivirus software can block phishing pages even after a click, which matters in multi-stage attacks like this one.
The best way to safeguard yourself from malicious links that install malware and potentially access your private information is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Reduce your exposure with a data removal service
Phishing campaigns often succeed because attackers already know your email, employer or role. That information is commonly pulled from data broker sites. A data removal service helps remove your personal information from these databases, making it harder for attackers to craft convincing, targeted emails.
While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
7) Enable two-factor authentication (2FA) everywhere
Even if attackers steal your password, two-factor authentication (2FA) can stop them from accessing your account. Use app-based authentication or hardware keys when possible, especially for work email, cloud storage, and Microsoft accounts.
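For context on what an authenticator app is doing under the hood, here is a tiny sketch using the third-party pyotp library. App-based codes are usually time-based one-time passwords (TOTP) derived from a shared secret, which is why a stolen password alone isn’t enough; the secret below is randomly generated for the example and not tied to any real account.

```python
# Sketch of how app-based 2FA codes (TOTP) are generated, using the
# third-party pyotp library (pip install pyotp).
import pyotp

secret = pyotp.random_base32()   # what the setup QR code encodes
totp = pyotp.TOTP(secret)

code = totp.now()                # the rotating 6-digit code
print("Current code:", code)
# The server derives the same code from the shared secret, so a password
# stolen by a phishing page is not enough on its own.
print("Accepted right now?", totp.verify(code))
```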
8) Report suspicious emails immediately
If something feels off, report it. Flag suspicious Google or Microsoft alerts to your IT or security team so they can warn others. Early reporting can stop a phishing campaign before it spreads further inside an organization.
Google phishing emails looked like routine workplace alerts. (Kurt “CyberGuy” Knutsson)
Kurt’s key takeaways
This campaign highlights a growing shift in phishing tactics. Attackers no longer need to fake brands when they can abuse trusted cloud services directly. As automation becomes more common, security awareness matters more than ever. Even familiar emails deserve a second look, especially when they push urgency or ask for credentials.
If a phishing email comes from a real Google address, how confident are you that you would spot it before clicking? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Technology
Most dubious uses of AI at CES 2026
You can’t shake a stick without hitting an AI gadget at CES this year, with artificial smarts now embedded in just about every wearable, screen, and appliance across the show floor, not to mention the armies of AI companions, toys, and robots.

But those are just the beginning. We’ve seen AI pop up in much stranger places too, from hair clippers to stick vacs, and at least one case where even the manufacturer itself seemed unsure what made its products “AI.”
Here are the gadgets we’ve seen at CES 2026 so far that really take the “intelligence” out of “artificial intelligence.”
Glyde smart hair clippers
This is a product that would be silly enough without the AI add-on. These smart hair clippers help amateur hairdressers deliver the perfect fade by dynamically altering the closeness of the cut, helped along by an ominous face mask that looks like it belongs in an optician’s office.
But it’s taken to the next level by the real-time AI coach, which gives you feedback as you cut. Glyde told me it’s working on voice controls for the AI too, and that eventually it will be able to recommend specific hairstyles, so long as you’re willing to trust its style advice. Are you?
Welt SleepQ pills
“Where Pills meet AI.”
That was the message emblazoned across the SleepQ booth, where company reps were handing out boxes of pills — a multivitamin with ashwagandha extract according to the box, supposedly good for sleep, though I wasn’t brave enough to test that claim on my jetlag.
Manufacturer Welt, originally spun out of a Samsung incubator, calls its product “AI-upgraded pharmacotherapy.” It’s really just using biometric data from your smartwatch or sleep tracker to tell you the optimal time to take a sleeping pill each day, with plans to eventually cover anxiety meds, weight-management drugs, pain relief, and more.
There may well be an argument that fine-tuning the time people pop their pills could make them more effective, but I feel safe in saying we don’t need to start throwing around the term “AI-enhanced drugs.”
Deglace Fraction vacuum
Startup Deglace claims that its almost unnecessarily sleek-looking Fraction vacuum cleaner uses AI in two different ways: first to “optimize suction,” and then to manage repairs and replacements for the modular design.
It says its Neural Predictive AI monitors vacuum performance “to detect issues before they happen,” giving you health scores for each of the vacuum’s components, which can be conveniently replaced with a quick parts order from within the accompanying app. A cynic might worry this is all in the name of selling users expensive and proprietary replacement parts, but I can at least get behind the promise of modular upgrades — assuming Deglace is able to deliver on that promise.
Fraimic AI picture frame
Most digital picture frames let you display photos of loved ones, old holiday snaps, or your favorite pieces of art. Fraimic lets you display AI slop.
It’s an E Ink picture frame with a microphone and voice controls, so you can describe whatever picture you’d like, which the frame will then generate using OpenAI’s GPT Image 1.5 model. The frame itself starts at $399, which gets you 100 image generations each year, with the option to buy more if you run out.
What makes the AI in Fraimic so dubious is that it might be a pretty great product without it. The E Ink panel looks great, you can use it to show off your own pictures and photos too, and it uses so little power that it can run for years without being plugged in. We’d just love it a lot more without the added slop.
Infinix AI ModuVerse
Infinix, a smaller phone manufacturer that’s had success across Asia for its affordable phones, didn’t launch any actual new products at CES this year, but it did bring five concepts that could fit into future phones. Some are clever, like various color-changing rear finishes and a couple of liquid-cooling designs. And then there’s the AI ModuVerse.
Modular phone concepts are nothing new, so the AI hook is what makes ModuVerse unique — in theory. One of the “Modus” makes sense: a meeting attachment that connects magnetically, generating AI transcripts and live translation onto a mini display on the back.
But when I asked what made everything else AI, Infinix didn’t really have any good answers. The gimbal camera has AI stabilization, the vlogging lens uses AI to detect faces, and the microphone has AI voice isolation — all technically AI-based, but not in any way that’s interesting. As for the magnetic, stackable power banks, Infinix’s reps eventually admitted they don’t really have any AI at all. Color me shocked.
Wan AIChef microwave
There’s a growing trend for AI and robotic cooking hardware — The Verge’s Jen Tuohy reviewed a $1,500 robot chef just last month — but Wan AIChef is something altogether less impressive: an AI-enabled microwave.
It runs on what looks suspiciously like Android, with recipe suggestions, cooking instructions, and a camera inside so you can see the progress of what you’re making. But… it’s just a microwave. So it can’t actually do any cooking for you, other than warm up your food to just the right temperature (well, just right plus or minus 3 degrees Celsius, to be accurate).
It’ll do meal plans and food tracking and calorie counting too, which all sounds great so long as you’re willing to commit to eating all of your meals out of the AI microwave. Please, I beg you, do not eat all of your meals out of the AI microwave.
AI Barmen
The tech industry absolutely loves reinventing the vending machine and branding it either robotics or AI, and AI Barmen is no different.
This setup — apparently already in use for private parties and corporate events — is really just an automatic cocktail machine with a few AI smarts slapped on top.
The AI uses the connected webcam to estimate your age — it was off by eight years in my case — and confirm you’re sober enough to get another drink. It can also create custom drinks, with mixed success: When asked for something to “fuck me up,” it came up with the Funky Tequila Fizz, aka tequila, triple sec, and soda. What, no absinthe?
Luka AI Cube
Photo: Dominic Preston / The Verge
Should you buy your kid an AI toy that gives them a complete LLM-powered chatbot to speak to? Probably not. But what if that AI chatbot looked like chibi Elon Musk?
He’s just one of the many avatars offered by the Luka AI Cube, including Hayao Miyazaki, Steve from Minecraft, and Harry Potter. Kids can chat to them about their day, ask for advice, or even share the AI Cube’s camera feed to show the AI avatars where they are and what they’re up to. Luka says it’s a tool for fun, but also learning, with various educational activities and language options.
The elephant in the room is whether you should trust any company’s guardrails enough to give a young kid access to an LLM. Leading with an AI take on Elon Musk — whose own AI, Grok, is busy undressing children as we speak — doesn’t exactly inspire confidence.