300,000 Chrome users hit by fake AI extensions
Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.
The extensions used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions remained available when researchers published their findings, putting more people at risk without their knowledge.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)
What you need to know about fake AI extensions
Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.
Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.
These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.
While some extensions had been removed by the time researchers published their findings, others remained available, meaning new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:
- AI Assistant
- Llama
- Gemini AI Sidebar
- AI Sidebar
- ChatGPT Sidebar
- Grok
- Asking ChatGPT
- ChatGBT
- Chat Bot GPT
- Grok Chatbot
- Chat With Gemini
- XAI
- Google Gemini
- Ask Gemini
- AI Letter Generator
- AI Message Generator
- AI Translator
- AI For Translation
- AI Cover Letter Generator
- AI Image Generator ChatGPT
- Ai Wallpaper Generator
- Ai Picture Generator
- DeepSeek Download
- AI Email Writer
- Email Generator AI
- DeepSeek Chat
- ChatGPT Picture Generator
- ChatGPT Translate
- AI GPT
- ChatGPT Translation
- ChatGPT for Gmail
These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)
How the fake AI Chrome extension attack works
These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.
Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.
In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.
The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.
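To make the pattern LayerX describes concrete, here is a hypothetical, simplified sketch of how a content script with broad page permissions can read login fields and bundle them for an attacker-controlled server. All names here are illustrative, not taken from the actual extensions, and a plain object stands in for the page’s DOM so the example runs outside a browser; a real extension would read the live page and send the payload with a network request, which is also how remotely loaded code lets attackers change behavior without updating the extension.

```javascript
// Hypothetical sketch only: mimics the data-harvesting pattern described
// in the report, with a plain object standing in for the page's DOM.
function harvestFields(page) {
  // A real content script would find these via document.querySelectorAll
  return page.inputs
    .filter(i => i.type === "password" || i.type === "email" || i.name === "username")
    .map(i => ({ name: i.name, value: i.value }));
}

function buildExfilPayload(page) {
  return {
    url: page.url,               // which site the victim is on
    fields: harvestFields(page), // harvested credentials
    // a real extension would fetch() this payload to a remote server
  };
}

// Stand-in for a login page the victim is viewing
const fakeLoginPage = {
  url: "https://mail.example.com/login",
  inputs: [
    { type: "email", name: "username", value: "user@example.com" },
    { type: "password", name: "password", value: "hunter2" },
    { type: "checkbox", name: "remember", value: "on" },
  ],
};

console.log(buildExfilPayload(fakeLoginPage));
```

The point of the sketch is how little code this takes: once an extension has permission to read the pages you visit, everything typed into those pages is within reach.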
Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.
If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.
We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”
Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)
7 ways you can protect yourself from malicious Chrome extensions
If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.
1) Remove any suspicious or unused browser extensions
On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.
2) Change your passwords
If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.
3) Use a password manager to create and protect strong passwords
A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
4) Install strong antivirus software and keep it active
Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
5) Use an identity theft protection service
Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.
6) Keep your browser and computer fully updated
Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.
7) Use a personal data removal service
Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaway
Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.
Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Jury finds Elon Musk’s ‘stupid tweets’ caused Twitter investors’ losses
A California jury determined that Elon Musk misled Twitter investors before making a $44 billion deal to buy the company in 2022, CNBC reports. The New York Times reports that Musk testified this month that he didn’t believe his posts would spook markets, though he did say, “If this was a trial about whether I made stupid tweets, I would say I’m guilty.”
CNBC reports that Musk’s attorneys are expected to file an appeal; damages could reach $2.6 billion, according to attorneys representing the plaintiffs.
While the jury found that Musk did not engage in a broader scheme to defraud shareholders, it deemed two of his tweets, from May 13th and May 27th, 2022, materially false or misleading, causing some investors to sell their Twitter shares at values below the $54.20-per-share bid.
“Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users,” the first tweet read.

“20% fake/spam accounts, while 4 times what Twitter claims, could be *much* higher. My offer was based on Twitter’s SEC filings being accurate. Yesterday, Twitter’s CEO publicly refused to show proof of <5%. This deal cannot move forward until he does,” read the second.
AI smart glasses could generate fake photos instantly
Smart glasses are gaining new momentum thanks to artificial intelligence (AI). Companies like Google, Meta, Samsung and possibly Apple are exploring AI-powered glasses that combine cameras, speakers, voice assistants and computer vision in a wearable device.
At first glance, the features sound familiar. Smart glasses can take photos, give directions, answer questions and help you navigate the world hands-free. However, a recent demo hints at something much bigger.
These glasses may soon generate or alter photos instantly. In other words, the image you capture may no longer reflect what was actually there.
That raises an important question: If AI can change a photo the moment it is taken, how do we know what is real anymore?
Google product lead Dieter Bohn demonstrates prototype AI smart glasses during a demo showing how the device can capture and modify photos using generative AI. (X/ @backlon)
A new AI trick inside smart glasses
During a demo of upcoming smart glasses, Google’s Dieter Bohn showed how the device could capture a photo and modify it using AI. The prototype, shown as Android XR glasses with a display, connects to Google’s generative AI tools, including Google Gemini and an experimental image generator called Nano Banana.
In the demonstration, Bohn asked the glasses to take a photo of people in the room. Then he gave another command. He asked the system to place those people in front of the famous church in Barcelona that he could not remember by name.
Within moments, the AI produced a new image showing the group standing in front of the Sagrada Família. The people in the photo never traveled to Spain. The background came from AI. To someone viewing the image later, it could look like a real travel photo.
Smart glasses are following the same playbook
The hardware approach behind these devices looks similar across the industry.
Most smart glasses include:
- A built-in camera
- Speakers for audio feedback
- A microphone and a voice assistant
- Computer vision powered by AI
- Navigation and contextual information
- Optional displays inside the lenses
This design mirrors products like the Ray-Ban Meta Smart Glasses, which combine sunglasses with an AI assistant and camera. Those glasses already allow users to capture photos, livestream video and ask questions using voice commands. However, the editing tools currently available inside Meta’s glasses focus more on artistic effects. For example, the system can transform photos into a cartoon or painting style. The goal is creative expression rather than photorealistic manipulation. Google’s demo hints at something different. It shows how AI can place people into entirely new scenes that never happened.
A close-up of prototype Android XR glasses with a built-in display, part of Google’s concept for AI-powered smart glasses. (X/ @backlon)
Why this matters for photography
AI-generated images already exist across social media. Smartphones have also introduced powerful editing tools. Google’s Pixel phones, for example, have leaned heavily into AI photography with tools that remove objects, adjust lighting and generate backgrounds.
The difference with smart glasses is speed. The technology removes the delay between taking a photo and editing it. Instead of capturing an image and opening editing software later, the AI can change the photo immediately. That could make altered images far more common. Photos that once served as proof of where someone was or what happened may become harder to trust.
The demo still leaves open questions
It is important to note that the Google demo was short and carefully staged. The company acknowledged that parts of the video were edited. That suggests the AI process may take longer in real-world conditions.
There is also the question of reliability. Generative AI tools sometimes produce mistakes, strange artifacts or unrealistic details. Still, even an imperfect system could change how people interact with cameras and images. As the technology improves, the gap between real and AI-generated photos may shrink.
What this means for you
Smart glasses could soon become another everyday device. That means the way we capture and share images may shift again. If these tools become common, you may start seeing photos that were generated or heavily modified by AI. A picture posted online may look like a real moment from someone’s life. In reality, it could be a mix of real people and AI-generated scenery. That does not mean every image is fake. It does mean digital images may carry less proof than they once did. Understanding how AI editing works can help you approach viral photos, travel shots or dramatic images with a healthy level of skepticism.
Ray-Ban Meta smart glasses combine cameras, speakers and an AI assistant, showing how wearable devices are bringing artificial intelligence into everyday eyewear. (Meta)
How to spot AI-generated or altered photos
AI editing tools are becoming easier to use. That means altered images may appear more often online. A few habits can help you avoid being misled.
1) Question images that look too perfect
If a photo looks unusually polished or dramatic, pause before assuming it is real. AI images often create scenes that feel cinematic or unusually clean.
2) Look closely at small details
AI systems sometimes struggle with small elements. Check hands, reflections, shadows and background objects for strange shapes or mismatched lighting.
3) Check where the image came from
If a photo spreads quickly online, try to trace the original source. Reverse image search can reveal if the picture appeared somewhere else first.
4) Be cautious with viral travel or event photos
AI tools can place people into locations they have never visited. A convincing background does not guarantee that the moment actually happened.
5) Watch for photos used in scams or misinformation
AI-generated images can appear in fake travel posts, romance scams or misleading news claims. If a photo appears alongside urgent requests for money or emotional stories, take time to verify it before reacting. Avoid clicking suspicious links and consider using strong antivirus software that can block malicious websites and scam pages before they load. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
6) Treat photos online as information, not proof
Photos once served as strong evidence of where someone was or what occurred. With generative AI, an image may be a mix of real people and computer-generated scenes.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt’s key takeaways
Smart glasses promise convenience, hands-free computing and powerful AI tools. At the same time, they blur the line between photography and digital creation. Technology keeps pushing toward a world where capturing a moment and generating one can happen in the same instant. The devices themselves may become smaller and smarter. The challenge may be deciding how much we trust the images they produce.
So here is the question worth asking. If AI glasses can create realistic photos of places you’ve never visited, will pictures still count as proof of reality? Let us know by writing to us at Cyberguy.com
Copyright 2026 CyberGuy.com. All rights reserved.
Microsoft is ending the Windows Update nightmare — and letting you pause them indefinitely
While Microsoft isn’t doing away with automatic updates entirely, Windows boss Pavan Davuluri is promising that in the future, you’ll be able to pause them “for as long as you need.” You’ll be able to reboot or shut down your computer “without being forced to install them.” To be fair to Microsoft, I’ve seen an option to reboot or shut down without updating for a while now.
Even if you don’t pause them, you’ll only have to reboot your computer once a month, Microsoft promises, though it says you’ll be able to get updates faster if you wish. If you’re the kind of user who wants new features so quickly that you’re part of the Windows Insider Program, Microsoft says it’ll make that easier and make it clearer what you’ll get.
And as part of those updates, Microsoft says that this year, it will improve performance, responsiveness and stability, reduce memory consumption, make File Explorer and other apps launch and run faster, reduce crashes, improve drivers, make devices wake up more reliably, and much, much more.
It feels like Microsoft has also taken to heart our feedback about the recent ridiculous hour-plus setup process for some Windows handhelds and laptops. Davuluri writes that we’ll have “the ability to skip updates during device setup to get to the desktop faster.” And even if you sit through setup, there should be fewer pages and reboots, so getting started is simpler. Plus, Microsoft will finally let you use gamepad controls to create your PIN during setup, instead of making you smudge the touchscreen.
Bravo, Microsoft, if this is all true, and if you can implement it in a reasonable length of time.
Davuluri writes that his team has spent months analyzing the feedback of Windows users, and “What came through was the voice of people who care deeply about Windows and want it to be better.”