Technology

ChatGPT, explained

Some writers have declared that the debut of ChatGPT on November 30th, 2022, marked the beginning of a new chapter in history akin to the Enlightenment and the Industrial Revolution. Others have been more skeptical, wondering if this is just another overhyped tech, like blockchain or the metaverse.
What history will call ChatGPT remains to be seen, but here’s one thing I do know for sure: nobody has shut up about it since.
From injecting itself into presidential debates and Saturday Night Live sketches to creepily flirting with talking to you, Her-style (well, briefly at least), ChatGPT has captured the public imagination in a way few technologies have. It’s not hard to see why. The bot can code, compose music, craft essays… you name it. And with the release of GPT-4o, it’s even better than ever.
Yet as it gets smarter, the tech is also becoming harder to understand. People are getting more scared of what it can do, too, which is understandable given that some are already losing their jobs to AI. It doesn’t help that a lot of sensationalism surrounds the subject, making it difficult to separate fact from fiction.
That’s why we decided to throw together this explainer so we can cut through all the BS together. You ready? Let’s begin.
What is ChatGPT?
Do you want the simple answer or the complex one?
The easy answer is that ChatGPT is a chatbot that can answer your questions by using data it’s gathered from the internet.
The complex answer is that ChatGPT is an AI chatbot powered by language models created by OpenAI that are known as generative pre-trained transformers (GPTs), a kind of AI that can actually generate new content altogether as opposed to just analyzing data. (If you’ve heard of large language models, or LLMs, a GPT is a type of LLM. Got it? Good.)
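To make “generative pre-trained” a little more concrete, here’s a toy sketch of next-token prediction, the core task a GPT is trained on: given the words so far, pick a plausible next word. This is plain Python and nothing like OpenAI’s actual architecture (a real GPT uses a transformer neural network trained on an enormous corpus, not a lookup table); it only illustrates the basic idea of generating text one token at a time.

```python
import random

# A toy "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling a likely next word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a table: word -> list of words that followed it in the corpus.
followers = {}
for word, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(word, []).append(nxt)

def generate(start, length=6, seed=0):
    """Generate `length` more words, each sampled from the words
    that followed the previous word in the training corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # dead end: this word never had a successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Real GPTs do essentially this, except the “table” is a neural network with billions of parameters that can generalize to sequences it has never seen.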
So what’s OpenAI?
OpenAI is an AI company founded in December 2015. It created ChatGPT, but it’s also responsible for other products, like the AI image generator DALL-E.
Doesn’t Microsoft own it? Or was that Elon Musk?
No, but Microsoft is a major investor, pouring billions into the tech. Elon Musk co-founded OpenAI along with Sam Altman (the company’s CEO, famously fired and rehired in late 2023), Ilya Sutskever (who has since left), Greg Brockman, Wojciech Zaremba, and John Schulman. However, Musk cut ties with OpenAI in 2018 and later created his own rival chatbot, Grok.
So, will ChatGPT take over the world?
It will most definitely replace people with machines and — along with other AI bots like Amazon’s Alexa — basically take over the world. So you’d better start playing nice with them.
Nah, I’m messing with you. I mean, nobody knows for sure, but I highly doubt we’re going to see a job apocalypse and have to welcome in our new robot overlords anytime soon. I’ll explain more in a minute.
Phew! But how is it so smart?
Well, like I said, ChatGPT runs on GPTs, which OpenAI regularly updates with new versions, the most recent being GPT-4o. Trained by humans and a ton of internet data, each model can generate human-like conversations so you can complete all kinds of tasks.
Like?
Where do I begin? The possibilities are practically endless, from composing essays and writing code to analyzing data, solving math problems, playing games, providing customer support, planning trips, helping you prepare for job interviews, and so much more.
I mean, honestly, it could probably summarize this entire explainer. The AI world is your oyster.
So what you’re saying is, it’s basically smarter than me. Should I be worried?
Eh, not really. For all its hype, at its current level, ChatGPT — like other generative AI chatbots — is very much a dim-witted computer that sits on a throne of lies. For one thing, it hallucinates.
Pardon?
Oh, sorry, not that kind of hallucination. In the AI world, hallucination refers to when a model, extrapolating from the data it was trained on, gets things absurdly wrong and confidently generates content with no basis in reality.
Honestly, I’m not a big fan of the word. It doesn’t really bear resemblance to actual human hallucinations, and I think it makes light of mental health issues — but that’s another subject.
In other words, sometimes ChatGPT generates incorrect information?
Incorrect information is a weak way of putting it.
Sometimes ChatGPT actually fabricates facts altogether, which can lead to the spread of misinformation with serious consequences. It’s made up news stories, academic papers, and books. Lawyers using it for case research have gotten in trouble when it cited nonexistent case law.
And then, there are times when it gives the middle finger to both reality and human language and just spouts out pure gibberish. Earlier this year, for example, a malfunctioning ChatGPT that was asked for a Jackson family biography started saying stuff like, “Schwittendly, the sparkle of tourmar on the crest has as much to do with the golver of the ‘moon paths’ as it shifts from follow.” Which is probably the worst description of Michael Jackson’s family in the world.
Right, but isn’t ChatGPT getting better?
Many AI researchers are trying to fix this issue, but some believe hallucinations are fundamentally unsolvable, as a study out of the National University of Singapore suggests.
But hallucinations aren’t the only issue ChatGPT needs to iron out. Remember, ChatGPT essentially just regurgitates material it scrapes off the internet, whether it’s accurate or not. That means, sometimes, ChatGPT plagiarizes other people’s work without attributing it to them, even sparking copyright infringement lawsuits.
It can also pick up some really bad data. Likely drawing from the more unpleasant parts of the internet, it’s gone so far as to insult and manipulate users. Hell, sometimes it’s just downright racist and sexist.
So, basically, what I’m hearing is ChatGPT — like other generative AI chatbots — has a lot of critical flaws, and we humans are still needed to keep them in check.
But isn’t it possible OpenAI could iron out these issues in time?
Anything’s possible. But I would say that one thing is for sure: AI is here to stay, and so it wouldn’t hurt to learn how to leverage these tools. Plus, they really can make life easier in the here and now if you know how to use them.
So, how do I start playing around with it?
If you’re on a desktop, simply visit chat.openai.com and start chatting away. Alternatively, you can access ChatGPT through its app on your iPhone or Android device.
Great! Is it free?
Absolutely. The free version of ChatGPT runs on an older model in the GPT-3.5 series but does offer limited access to the newer and faster GPT-4o. That means free users, for example, will soon be able to access previously paywalled features, like custom GPTs, through the GPT Store.
Free users also get access to ChatGPT’s web browsing tool, meaning it can search the internet in real time to deliver up-to-date results. The model can also recall earlier conversations, allowing it to better understand the context of your request, and users can now upload photos and files for ChatGPT to analyze.
Why would I want one of the paid tiers?
You do get more advanced capabilities through its paid tiers — ChatGPT Plus, ChatGPT Team, and ChatGPT Enterprise — which start at $20 a month.
For starters, you have fewer usage restrictions, making the paid tiers the better option if you plan on using ChatGPT often. OpenAI hasn’t specified the exact limits for free users, but it has said that Plus subscribers can send five times as many messages. The pricier Team and Enterprise plans offer even fewer restrictions, though OpenAI has yet to divulge specifics there, too.
Aside from being able to use ChatGPT longer, paid subscribers can do more. They can, for example, create their own custom GPTs and even monetize them via the GPT Store. Plus, only paid subscribers can access the DALL-E 3 model, which generates images from text prompts.
Paid subscribers also get early access to the newest AI features. The voice capabilities OpenAI demonstrated onstage should arrive over the next couple of weeks for Plus subscribers, while ChatGPT’s desktop app for Mac computers is already rolling out for Plus users.
Custom GPTs?
Custom GPTs are basically chatbots you can customize. There are millions of them on the GPT Store, built to accomplish all kinds of tasks, from providing tech support to recommending personalized hiking trails. Trending examples include an image-generating bot, a logo maker, and a chatbot that helps people perform scientific research.
By the way, what’s all this I hear about trouble within OpenAI?
There have been some upheavals in the company — we’ll keep you in the loop.
Are there any ChatGPT alternatives I could check out?
Yes, there are quite a few, each varying in features, pricing, and use cases. One notable example is Google’s AI chat service Gemini. As a Google product, it offers deeper integration with Google services like Workspace, Calendar, Gmail, Search, YouTube, and Flights. The latest version, Gemini 1.5 Pro, also offers a longer context window of 2 million tokens, which is the amount of text the model can take into account at once.
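To illustrate what a context window means in practice, here’s a rough sketch of the trimming a chat app has to do when a conversation outgrows the model’s budget: drop the oldest messages until what’s left fits. This is purely illustrative; it approximates tokens as whitespace-separated words (real models use subword tokenizers), and the function name is made up for this example.

```python
def trim_to_budget(messages, max_tokens):
    """Keep the most recent messages whose combined (approximate)
    token count fits within max_tokens. Token counts here are just
    word counts, a rough stand-in for a real tokenizer."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk newest to oldest
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                         # oldest messages get dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

history = [
    "Hi, can you plan a trip to Japan?",
    "Sure! How many days are you planning to stay?",
    "Ten days, mostly Tokyo and Kyoto.",
    "Great - here is a draft itinerary.",
]
print(trim_to_budget(history, max_tokens=12))
```

A bigger context window simply means less of this trimming, so the model can “remember” more of a long conversation or document.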
Anything else you think I should know?
Yeah! Did you know ChatGPT sounds like “chat, j’ai pété” in French, which roughly translates to “cat, I farted”? Somebody even created a website with a cat who farts when you click on it, and I just can’t stop clicking.
Technology
Here’s your first look at Kratos in Amazon’s God of War show
Amazon has slowly been teasing out casting details for its live-action adaptation of God of War, and now we have our first look at the show. It’s a single image, but a notable one, showing protagonist Kratos and his son Atreus. The characters are played by Ryan Hurst and Callum Vinson, respectively, and they look relatively close to their video game counterparts.
There aren’t a lot of other details about the show just yet, but this is Amazon’s official description:
The God of War series storyline follows father and son Kratos and Atreus as they embark on a journey to spread the ashes of their wife and mother, Faye. Through their adventures, Kratos tries to teach his son to be a better god, while Atreus tries to teach his father how to be a better human.
That sounds a lot like the recent soft reboot of the franchise, which started with 2018’s God of War and continued through Ragnarök in 2022. For the Amazon series, Ronald D. Moore, best known for his work on For All Mankind and Battlestar Galactica, will serve as showrunner. The rest of the cast includes Mandy Patinkin (Odin), Ed Skrein (Baldur), Max Parker (Heimdall), Ólafur Darri Ólafsson (Thor), Teresa Palmer (Sif), Alastair Duncan (Mimir), Jeff Gulka (Sindri), and Danny Woodburn (Brok).
While production is underway on the God of War series, there’s no word on when it might start streaming.
Technology
300,000 Chrome users hit by fake AI extensions
Your web browser may feel like a safe place, especially when you install helpful tools that promise to make your life easier. But security researchers have uncovered a dangerous campaign in which more than 300,000 people installed Chrome extensions pretending to be artificial intelligence (AI) assistants. Instead of helping, these fake tools secretly collect sensitive information like your emails, passwords and browsing activity.
The extensions used familiar names like ChatGPT, Gemini and AI Assistant. If you use Chrome and have installed any AI-related extension, your personal information may already be exposed. Even worse, some of these malicious extensions are still available today, putting more people at risk without their knowledge.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.
More than 300,000 Chrome users installed fake AI extensions that secretly harvested sensitive data. (Kurt “CyberGuy” Knutsson)
What you need to know about fake AI extensions
Security researchers at browser security company LayerX discovered a large campaign involving 30 malicious Chrome extensions disguised as AI-powered assistants (via BleepingComputer). Together, these extensions were installed more than 300,000 times by unsuspecting users.
Some of the most popular extensions included names like AI Sidebar with 70,000 users, AI Assistant with 60,000 users, ChatGPT Translate with 30,000 users, and Google Gemini with 10,000 users. Another extension called Gemini AI Sidebar had 80,000 users before it was removed.
These extensions were distributed through the official Chrome Web Store, which made them appear legitimate and trustworthy. Even more concerning, researchers found that many of these extensions were connected to the same malicious server, showing they were part of a coordinated effort.
While some extensions have since been removed, others remain available. This means new users could still unknowingly install them and expose their personal data. Here’s the list of the affected extensions:
- AI Assistant
- Llama
- Gemini AI Sidebar
- AI Sidebar
- ChatGPT Sidebar
- Grok
- Asking ChatGPT
- ChatGBT
- Chat Bot GPT
- Grok Chatbot
- Chat With Gemini
- XAI
- Google Gemini
- Ask Gemini
- AI Letter Generator
- AI Message Generator
- AI Translator
- AI For Translation
- AI Cover Letter Generator
- AI Image Generator ChatGPT
- Ai Wallpaper Generator
- Ai Picture Generator
- DeepSeek Download
- AI Email Writer
- Email Generator AI
- DeepSeek Chat
- ChatGPT Picture Generator
- ChatGPT Translate
- AI GPT
- ChatGPT Translation
- ChatGPT for Gmail
These malicious tools were listed in the official Chrome Web Store, making them appear legitimate and trustworthy. (LayerX)
How the fake AI Chrome extension attack works
These fake extensions pretend to offer helpful AI features, such as translating text, summarizing emails, or acting as an AI assistant. But behind the scenes, they quietly monitor what you are doing online.
Once installed, the extension gains permission to view and interact with the websites you visit. This allows it to read the contents of web pages, including login screens where you enter your username and password.
In some cases, the extensions specifically targeted Gmail. They could read your email messages directly from your browser, including emails you received and even drafts you were still writing. This means attackers could access private conversations, financial information and sensitive personal details.
The extensions then sent this information to servers controlled by the attackers. Because they loaded content remotely, the attackers could change their behavior at any time without needing to update the extension.
Some versions could also activate voice features through your browser. This could potentially capture spoken conversations near your device and send transcripts back to the attackers.
If you installed one of these extensions, attackers may already have access to extremely sensitive information. This includes your email content, login credentials, browsing habits and possibly even voice recordings.
We reached out to Google for comment, and a spokesperson told CyberGuy that the company “can confirm that the extensions from this report have all been removed from the Google Web Store.”
Once installed, the extensions could read emails, capture passwords, monitor browsing activity and send the data to attacker-controlled servers. (Bildquelle/ullstein bild via Getty Images)
7 ways you can protect yourself from malicious Chrome extensions
If you have ever installed an AI-related Chrome extension, taking a few simple precautions now can help protect your accounts and prevent further damage.
1) Remove any suspicious or unused browser extensions
On a Windows PC or Mac, open Chrome and type chrome://extensions into the address bar. Review every extension listed. If you see anything unfamiliar, especially AI assistants you don’t remember installing, click “Remove” immediately. Malicious extensions depend on going unnoticed. Removing them stops further data collection and cuts off the attacker’s access to your information.
2) Change your passwords
If you installed any suspicious extension, assume your passwords may be compromised. Start by changing your email password first, since email controls access to most other accounts. Then update passwords for banking, shopping and social media accounts. This prevents attackers from using stolen credentials to break into your accounts.
3) Use a password manager to create and protect strong passwords
A password manager generates unique, complex passwords for each account and stores them securely. This prevents attackers from accessing multiple accounts if one password is stolen. Password managers also alert you if your login credentials appear in known data breaches, helping you respond quickly and protect your identity. Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.
4) Install strong antivirus software and keep it active
Good antivirus software can detect malicious browser extensions, spyware, and other hidden threats. It scans your system for suspicious activity and blocks harmful programs before they can steal your information. This adds an important layer of protection that works continuously in the background to keep your device safe. Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com.
5) Use an identity theft protection service
Identity theft protection services monitor your personal data, including email addresses, financial accounts, and Social Security numbers, for signs of misuse. If criminals try to open accounts or commit fraud using your information, you receive alerts quickly. Early detection allows you to act fast and limit financial and personal damage. See my tips and best picks on how to protect yourself from identity theft at Cyberguy.com.
6) Keep your browser and computer fully updated
Software updates fix security vulnerabilities that attackers exploit. Enable automatic updates for Chrome and your operating system so you always have the latest protections. These updates strengthen your defenses against malicious extensions and prevent attackers from taking advantage of known weaknesses.
7) Use a personal data removal service
Personal data removal services scan data broker websites that collect and sell your personal information. They help remove your data from these sites, reducing what attackers can find and use against you. Less exposed information means fewer opportunities for criminals to target you with scams, identity theft or phishing attacks.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.
Kurt’s key takeaway
Even tools designed to make your life easier can become tools for cybercriminals. Malicious extensions often hide behind trusted names and convincing features, making them difficult to spot. You can significantly reduce your risk by reviewing your browser extensions regularly, removing anything suspicious and using protective tools like password managers and strong antivirus software.
Have you checked your browser extensions recently? Let us know your thoughts by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.
Technology
Anthropic refuses Pentagon’s new terms, standing firm on lethal autonomous weapons and mass surveillance
Less than 24 hours before the deadline in an ultimatum issued by the Pentagon, Anthropic has refused the Department of Defense’s demands for unrestricted access to its AI.
It’s the culmination of a dramatic exchange of public statements, social media posts, and behind-the-scenes negotiations, all stemming from Defense Secretary Pete Hegseth’s desire to renegotiate all AI labs’ current contracts with the military. Anthropic, so far, has refused to back down from its two red lines: no mass surveillance of Americans, and no lethal autonomous weapons (that is, weapons licensed to kill targets without any human oversight). OpenAI and xAI had reportedly already agreed to the new terms, while Anthropic’s refusal led to CEO Dario Amodei being summoned to the White House this week for a meeting with Hegseth, in which the secretary reportedly gave him an ultimatum to back down by the end of business on Friday.
In a statement late Thursday, Amodei wrote, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”
He added that the company has “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner” but that in a “narrow set of cases, we believe AI can undermine, rather than defend, democratic values” — going on to specifically mention mass domestic surveillance and fully autonomous weapons. (Amodei mentioned that “partial autonomous weapons … are vital to the defense of democracy” and that fully autonomous weapons may eventually “prove critical for our national defense,” but that “today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” He did not rule out Anthropic acquiescing to the military’s use of fully autonomous weapons in the future but mentioned that they were not ready now.)
The Pentagon had already reportedly asked major defense contractors to assess their dependence on Anthropic’s Claude, which could be seen as the first step toward designating the company a “supply chain risk,” a public threat the Pentagon made recently and a classification usually reserved for threats to national security. The Pentagon was also reportedly considering invoking the Defense Production Act to force Anthropic to comply.
Amodei wrote in his statement that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” He also wrote that “should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”