Technology

Are Character AI’s chatbots protected speech? One court isn’t sure

A lawsuit against Google and companion chatbot service Character AI — which is accused of contributing to the death of a teenager — can move forward, ruled a Florida judge. In a decision filed today, Judge Anne Conway said that an attempted First Amendment defense wasn’t enough to get the lawsuit thrown out. Conway determined that, despite some similarities to video games and other expressive mediums, she is “not prepared to hold that Character AI’s output is speech.”

The ruling is a relatively early indicator of the kinds of treatment that AI language models could receive in court. It stems from a suit filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after allegedly becoming obsessed with a chatbot that encouraged his suicidal ideation. Character AI and Google (which is closely tied to the chatbot company) argued that the service is akin to talking with a video game non-player character or joining a social network, something that would grant it the expansive legal protections that the First Amendment offers and likely dramatically lower a liability lawsuit’s chances of success. Conway, however, was skeptical.

While the companies “rest their conclusion primarily on analogy” with those examples, they “do not meaningfully advance their analogies,” the judge said. The court’s decision “does not turn on whether Character AI is similar to other mediums that have received First Amendment protections; rather, the decision turns on how Character AI is similar to the other mediums” — in other words, whether Character AI is similar to things like video games because it, too, communicates ideas that would count as speech. Those similarities will be debated as the case proceeds.

While Google doesn’t own Character AI, it will remain a defendant in the suit thanks to its links with the company and product; the company’s founders, Noam Shazeer and Daniel De Freitas, who are separately named in the suit, worked on the underlying technology as Google employees before leaving to launch the platform and were later rehired by Google. Character AI is also facing a separate lawsuit alleging it harmed another young user’s mental health, and a handful of state lawmakers have pushed regulation for “companion chatbots” that simulate relationships with users — including one bill, the LEAD Act, that would prohibit their use by children in California. If passed, the rules are likely to be fought in court at least partially based on companion chatbots’ First Amendment status.

This case’s outcome will depend largely on whether Character AI is legally a “product” that is harmfully defective. The ruling notes that “courts generally do not categorize ideas, images, information, words, expressions, or concepts as products,” including many conventional video games — it cites, for instance, a ruling that found Mortal Kombat’s producers couldn’t be held liable for “addicting” players and inspiring them to kill. (The Character AI suit also accuses the platform of addictive design.) Systems like Character AI, however, aren’t authored as directly as most video game character dialogue; instead, they produce automated text that is heavily determined by reacting to and mirroring user inputs.

Conway also noted that the plaintiffs took Character AI to task for failing to confirm users’ ages and not letting users meaningfully “exclude indecent content,” among other allegedly defective features that go beyond direct interactions with the chatbots themselves.

Beyond discussing the platform’s First Amendment protections, the judge allowed Setzer’s family to proceed with claims of deceptive trade practices, including that the company “misled users to believe Character AI Characters were real persons, some of which were licensed mental health professionals” and that Setzer was “aggrieved by [Character AI’s] anthropomorphic design decisions.” (Character AI bots will often describe themselves as real people in text, despite a warning to the contrary in its interface, and therapy bots are common on the platform.)

She also allowed a claim that Character AI negligently violated a rule meant to prevent adults from communicating sexually with minors online, saying the complaint “highlights several interactions of a sexual nature between Sewell and Character AI Characters.” Character AI has said it’s implemented additional safeguards since Setzer’s death, including a more heavily guardrailed model for teens.

Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project, called the judge’s First Amendment analysis “pretty thin” — though, since it’s a very preliminary decision, there’s lots of room for future debate. “If we’re thinking about the whole realm of things that could be output by AI, those types of chatbot outputs are themselves quite expressive, [and] also reflect the editorial discretion and protected expression of the model designer,” Branum told The Verge. But “in everyone’s defense, this stuff is really novel,” she added. “These are genuinely tough issues and new ones that courts are going to have to deal with.”

Technology

Boston Dynamics CEO Robert Playter is stepping down after six years

Robert Playter, CEO of Boston Dynamics, announced on Tuesday that he is stepping down from his role effective immediately and leaving the company on February 27th, as previously reported by A3. Under Playter’s leadership, Boston Dynamics navigated its sale from SoftBank to Hyundai in 2021 and launched a new all-electric version of its humanoid Atlas robot in 2024. Just a few days ago, the company posted another video of its research Atlas robots attempting tumbling passes and outdoor runs as more enterprise-ready editions start to roll out.

Boston Dynamics announced at CES last month that Atlas robots will begin working in Hyundai’s car plants starting in 2028, as the robotics field has become increasingly crowded with competitors like Tesla and Figure, as well as AI companies with “world model” tech built for robots.

Playter has been at Boston Dynamics for over 30 years and has served as CEO since 2020, replacing the company’s original CEO, Marc Raibert. Boston Dynamics CFO Amanda McMaster will serve as interim CEO while the company’s board of directors searches for Playter’s replacement.

“Boston Dynamics has been the ride of a lifetime. What this place has become has exceeded anything I could have ever imagined all those years ago in our funky lab in the basement of the MIT Media Lab,” Playter said in a letter to employees, which was shared with The Verge. He also highlighted the company’s successes with its Spot, Stretch, and Atlas robots.

“From the earliest days of hopping robots, to the world’s first quadrupeds, to spearheading the entire humanoid industry, Playter made his mark as a pioneer of innovation. He transformed Boston Dynamics from a small research and development lab into a successful business that now proudly calls itself the global leader in mobile robotics,” Nikolas Noel, VP of marketing and communications at Boston Dynamics, said in a statement to The Verge, adding, “He will be sorely missed, but we hope he enjoys some well-deserved time off. Thanks Rob.”

Technology

Microsoft ‘Important Mail’ email is a scam: How to spot it

Scam emails are getting better at looking official. This one claims to be an urgent warning from Microsoft about your email account. It looks serious. It feels time sensitive. And that is exactly the point. Lily reached out after something about the message did not sit right.

“I need help with an email that I’m unsure is valid. Hoping you can help me determine whether this is a valid or a scam. I have attached two screenshots below. Thank you in advance,” Lily wrote.

Here is the important takeaway up front. This email is not from Microsoft. It is a scam designed to rush you into clicking a dangerous link.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.

A closer look at the sender shows a red flag scammers hope you will miss: a free email address posing as a trusted brand. (Kurt “CyberGuy” Knutsson)

Why this Microsoft ‘Important Mail’ email is a scam

Once you slow down and read it closely, the red flags pile up quickly.

A generic greeting

It opens with “Dear User.” Microsoft uses your name. Scammers avoid it because they do not know who you are.

A hard deadline meant to scare you

The message claims your email access will stop on Feb. 5, 2026. Scammers rely on fear and urgency to short-circuit good judgment.

Advertisement

A completely wrong sender address

The email came from accountsettinghelp20@aol.com. Microsoft does not send security notices from AOL. Ever.

Pushy link language

“PROCEED HERE” is designed to trigger a fast click. Real Microsoft messages link to clearly labeled Microsoft.com pages.

Fake legal language

Lines like “© 2026 All rights reserved” are often copied and pasted by scammers to look official.

Attachments that should not be there

Microsoft account alerts do not include image attachments. That alone is a major warning sign.
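
For readers comfortable with a little code, these checks can even be automated. The short Python sketch below scans a saved email file for several of the red flags above using only the standard library. It is an illustrative sketch, not an official Microsoft or CyberGuy tool, and the trusted-domain and urgency-phrase lists are assumptions you would tune yourself.

  import email
  import sys
  from email import policy

  # Assumed lists for this demo; adjust them for your own use.
  TRUSTED_DOMAINS = {"microsoft.com", "accountprotection.microsoft.com"}
  URGENT_PHRASES = ("proceed here", "will stop", "immediately", "act now")

  def find_red_flags(raw_message: bytes) -> list[str]:
      msg = email.message_from_bytes(raw_message, policy=policy.default)
      flags = []

      # Red flag: sender address is not on a Microsoft domain (e.g., an AOL address)
      sender = msg.get("From", "").lower()
      domain = sender.rsplit("@", 1)[-1].strip("> ")
      if domain not in TRUSTED_DOMAINS:
          flags.append("Sender domain looks wrong: " + (domain or "missing"))

      # Pull the message body as text, preferring the plain-text part
      body = msg.get_body(preferencelist=("plain", "html"))
      text = body.get_content().lower() if body else ""

      # Red flag: generic greeting instead of your name
      if "dear user" in text:
          flags.append("Generic 'Dear User' greeting")

      # Red flag: pushy, urgent language meant to rush a click
      if any(phrase in text for phrase in URGENT_PHRASES):
          flags.append("Urgent or pushy link language")

      # Red flag: image attachments on a supposed account alert
      if any(p.get_content_maintype() == "image" for p in msg.iter_attachments()):
          flags.append("Unexpected image attachment")

      return flags

  if __name__ == "__main__":
      # Usage: python check_email.py saved_message.eml
      with open(sys.argv[1], "rb") as f:
          for flag in find_red_flags(f.read()):
              print("WARNING:", flag)

None of these checks replaces your own judgment; a message can pass all of them and still be a scam.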

The fake Microsoft email uses urgency and vague language to pressure you into clicking before you have time to think. (Kurt “CyberGuy” Knutsson)

What would have happened if you clicked

If you clicked the link, you would almost certainly land on a fake Microsoft login page. From there, attackers aim to steal:

  • Your email address
  • Your password
  • Access to other accounts tied to that email

Once they have your email, they can reset passwords, dig through old messages and launch more scams using your identity.

Scam emails often reach people on their phones, where small screens make it easier to miss warning signs and click fast. (Kurt “CyberGuy” Knutsson)

What to do if this email lands in your inbox

If an email like this shows up, slow down and follow these steps in order. Each one helps stop the scam cold.

1) Do not click or interact at all

Do not click links, buttons or images. Do not reply. Even opening attachments can trigger tracking or malware. Strong antivirus software, installed and kept up to date on all your devices, can block phishing pages, scan attachments and warn you about dangerous links before damage happens. It can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.

Get my picks for the best 2026 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

2) Delete the message immediately

Once you have reported it (see step 7 below), delete it. There is no reason to keep it in your inbox or trash.

3) Check your account the safe way

If you want peace of mind, open a new browser window and go directly to the official Microsoft account website. Sign in normally. If there is a real issue, it will appear there.

4) Change your password if you clicked

If you clicked anything or entered information, change your Microsoft password right away. Use a strong, unique password you do not use anywhere else. A password manager can generate and store it securely for you. Then review recent sign-in activity for anything suspicious.

Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2026 at Cyberguy.com.

5) Enable two-factor authentication

Turn on two-factor authentication (2FA) for your Microsoft account. This adds a second check, which can stop attackers even if they get your password.

6) Use a data removal service for long-term protection

Scammers often find targets through data broker sites. A data removal service helps reduce how much personal information is publicly available, which lowers your exposure to phishing in the first place.

While no service can guarantee the complete removal of your data from the internet, a data removal service is really a smart choice. They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

7) Report it as spam or phishing 

Use your email app’s built-in reporting tool. This helps train filters and protects other users from seeing the same scam.

Extra protection tips for real Microsoft notices

When Microsoft actually needs your attention, the signs look very different.

  • Alerts appear inside your Microsoft account dashboard
  • Messages do not demand immediate action through random email links
  • Notices never come from free email services like AOL, Gmail or Yahoo

That contrast makes scams easier to spot once you know what to look for.

Kurt’s key takeaways

Scammers are counting on you being busy, distracted or worried about losing access to your email. That is why messages like this lean so hard on urgency. Your email sits at the center of your digital life, so attackers know a shutdown threat gets attention fast.

The good news is that slowing down for even a few seconds changes everything. Lily did exactly the right thing by stopping and asking first. That single habit can prevent identity theft, account takeovers and a long, frustrating cleanup. Remember this rule: emails that threaten shutdowns and demand immediate action are almost never legitimate. When something feels urgent, that is your cue to pause, verify on your own and never let an email rush you into a mistake.

Have you seen a fake Microsoft warning like this recently, or did it pretend to come from another brand you trust? Let us know your thoughts by writing to us at Cyberguy.com.


Technology

ChatGPT’s cheapest options now show you ads

ChatGPT users may soon start seeing ads in their chats, as OpenAI announced on Monday that it’s officially beginning to test ads on its AI platform. They’ll appear as labeled “sponsored” links at the bottom of ChatGPT answers, but OpenAI says the ads “do not influence the answers ChatGPT gives you.”

Currently, ads will only show up for users on the free version of ChatGPT or the lowest-cost $8 per month Go plan. Users on the Plus, Pro, Business, Enterprise, and Education plans won’t see any ads, so anyone who wants to avoid them entirely has to pay at least $20 per month for the Plus subscription. There is one loophole — OpenAI notes that users can “opt out of ads in the Free tier in exchange for fewer daily free messages.”

Users on the Go tier can’t opt out of seeing ads, but users on both the Free and Go plans can dismiss ads, share feedback on ads, turn off ad personalization, turn off the option for ads to be based on past chats, and delete their ad data. According to OpenAI, advertisers will only get data on “aggregated ad views and clicks,” not personalized data or content from users’ ChatGPT conversations.

Additionally, not all users and chats will be eligible for ads, including users under 18 and conversations on certain sensitive topics “like health, mental health or politics.” Even adult users on the chatbot’s Free and Go plans might not immediately start seeing ads, since the feature is still in testing.
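
Taken together, those rules amount to a simple eligibility check. Here is a hypothetical Python sketch that does nothing more than restate the policy as this article describes it; the function, plan names, and topic list are illustrative assumptions, not OpenAI’s actual code or API.

  # Rules as described above; all names here are illustrative assumptions.
  AD_ELIGIBLE_PLANS = {"free", "go"}  # ads are limited to the Free and $8 Go tiers
  SENSITIVE_TOPICS = {"health", "mental health", "politics"}  # examples OpenAI gives

  def may_show_ads(plan: str, age: int, topic: str, free_opted_out: bool) -> bool:
      """Return True if, per the rules above, a chat could carry ads."""
      plan = plan.lower()
      if plan not in AD_ELIGIBLE_PLANS:
          return False  # Plus, Pro, Business, Enterprise and Education see no ads
      if age < 18:
          return False  # users under 18 are excluded
      if topic.lower() in SENSITIVE_TOPICS:
          return False  # conversations on sensitive topics are excluded
      if plan == "free" and free_opted_out:
          return False  # Free users can opt out for fewer daily free messages
      return True  # Go users can dismiss ads but cannot opt out

For example, may_show_ads("go", 25, "travel", False) returns True, while a Plus subscriber gets False regardless of the other inputs.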
