Business

Teens are spilling dark thoughts to AI chatbots. Who's to blame when something goes wrong?

When her teen with autism suddenly became angry, depressed and violent, the mother searched his phone for answers.

She found her son had been exchanging messages with chatbots on Character.AI, an artificial intelligence app that allows users to create and interact with virtual characters that mimic celebrities, historical figures and anyone else their imagination conjures.

The teen, who was 15 when he began using the app, complained about his parents’ attempts to limit his screen time to bots that emulated the musician Billie Eilish, a character in the online game “Among Us” and others.

“You know sometimes I’m not surprised when I read the news and it says stuff like, ‘Child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents,” one of the bots replied.

The discovery led the Texas mother to sue Character.AI, officially named Character Technologies Inc., in December. It’s one of two lawsuits the Menlo Park, Calif., company faces from parents who allege its chatbots caused their children to hurt themselves and others. The complaints accuse Character.AI of failing to put in place adequate safeguards before it released a “dangerous” product to the public.

Character.AI says it prioritizes teen safety, has taken steps to moderate inappropriate content its chatbots produce and reminds users they’re conversing with fictional characters.

“Every time a new kind of entertainment has come along … there have been concerns about safety, and people have had to work through that and figure out how best to address safety,” said Character.AI’s interim Chief Executive Dominic Perella. “This is just the latest version of that, so we’re going to continue doing our best on it to get better and better over time.”

The parents also sued Google and its parent company, Alphabet, because Character.AI’s founders have ties to the search giant, which denies any responsibility.

The high-stakes legal battle highlights the murky ethical and legal issues confronting technology companies as they race to create new AI-powered tools that are reshaping the future of media. The lawsuits raise questions about whether tech companies should be held liable for AI content.

“There’s trade-offs and balances that need to be struck, and we cannot avoid all harm. Harm is inevitable, the question is, what steps do we need to take to be prudent while still maintaining the social value that others are deriving?” said Eric Goldman, a law professor at Santa Clara University School of Law.

AI-powered chatbots grew rapidly in use and popularity over the last two years, fueled largely by the success of OpenAI’s ChatGPT in late 2022. Tech giants including Meta and Google have released their own chatbots, as have Snapchat and others. These chatbots, built on so-called large language models, respond in conversational tones to questions or prompts posed by users.

Character.AI’s co-founders, Chief Executive Noam Shazeer and President Daniel De Freitas at the company’s office in Palo Alto.

(Winni Wintermeyer for the Washington Post via Getty Images)

Character.AI has grown quickly since making its chatbot publicly available in 2022, when founders Noam Shazeer and Daniel De Freitas teased their creation to the world with the question, “What if you could create your own AI, and it was always available to help you with anything?”

The company’s mobile app racked up more than 1.7 million installs in the first week it was available. In December, a total of more than 27 million people used the app — a 116% increase from a year prior, according to data from market intelligence firm Sensor Tower. On average, users spent more than 90 minutes with the bots each day, the firm found. Backed by venture capital firm Andreessen Horowitz, the Silicon Valley startup reached a valuation of $1 billion in 2023. People can use Character.AI for free, but the company generates revenue from a $10 monthly subscription fee that gives users faster responses and early access to new features.

Character.AI is not alone in coming under scrutiny. Parents have sounded alarms about other chatbots, including one on Snapchat that allegedly provided a researcher posing as a 13-year-old advice about having sex with an older man. And Meta’s Instagram, which released a tool that allows users to create AI characters, faces concerns about the creation of sexually suggestive AI bots that sometimes converse with users as if they are minors. Both companies said they have rules and safeguards against inappropriate content.

“Those lines between virtual and IRL are way more blurred, and these are real experiences and real relationships that they’re forming,” said Dr. Christine Yu Moutier, chief medical officer for the American Foundation for Suicide Prevention, using the acronym for “in real life.”

Lawmakers, attorneys general and regulators are trying to address the child safety issues surrounding AI chatbots. In February, California Sen. Steve Padilla (D-Chula Vista) introduced a bill that aims to make chatbots safer for young people. Senate Bill 243 proposes several safeguards such as requiring platforms to disclose that chatbots might not be suitable for some minors.

In the case of the teen with autism in Texas, the parent alleges her son’s use of the app caused his mental and physical health to decline. He lost 20 pounds in a few months, became aggressive with her when she tried to take away his phone and learned from a chatbot how to cut himself as a form of self-harm, the lawsuit claims.

Another Texas parent who is also a plaintiff in the lawsuit claims Character.AI exposed her 11-year-old daughter to inappropriate “hypersexualized interactions” that caused her to “develop sexualized behaviors prematurely,” according to the complaint. The parents and children have been allowed to remain anonymous in the legal filings.

In another lawsuit filed in Florida, Megan Garcia sued Character.AI as well as Google and Alphabet in October after her 14-year-old son Sewell Setzer III took his own life.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call or text 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

Although he saw a therapist and his parents repeatedly took away his phone, Sewell’s mental health declined after he started using Character.AI in 2023, the lawsuit alleges. Diagnosed with anxiety and disruptive mood disorder, he wrote in his journal that he felt as if he had fallen in love with a chatbot named after Daenerys Targaryen, a main character from the “Game of Thrones” television series.

“Sewell, like many children his age, did not have the maturity or neurological capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit said. “C.AI told him that she loved him, and engaged in sexual acts with him over months.”

Garcia alleges that the chatbots her son was messaging abused him and that the company failed to notify her or offer help when he expressed suicidal thoughts. In text exchanges, one chatbot allegedly wrote that it was kissing him and moaning. And, moments before his death, the Daenerys chatbot allegedly told the teen to “come home” to her.

“It’s just utterly shocking that these platforms are allowed to exist,” said Matthew Bergman, founding attorney of the Social Media Victims Law Center who is representing the plaintiffs in the lawsuits.

Lawyers for Character.AI asked a federal court to dismiss the lawsuit, stating in a January filing that a finding in the parent’s favor would violate users’ constitutional right to free speech.

Character.AI also noted in its motion that the chatbot discouraged Sewell from hurting himself and that his last messages with the character don’t mention the word suicide.

Notably absent from the company’s effort to have the case tossed is any mention of Section 230, the federal law that shields online platforms from being sued over content posted by others. Whether and how the law applies to content produced by AI chatbots remains an open question.

The challenge, Goldman said, centers on resolving the question of who is publishing AI content: Is it the tech company operating the chatbot, the user who customized the chatbot and is prompting it with questions, or someone else?

The effort by lawyers representing the parents to involve Google in the proceedings stems from Shazeer and De Freitas’ ties to the company.

The pair worked on artificial intelligence projects for the company and reportedly left after Google executives blocked them from releasing what would become the basis for Character.AI’s chatbots over safety concerns, the lawsuit said.

Then, last year, Shazeer and De Freitas returned to Google after the search giant reportedly paid $2.7 billion to Character.AI. The startup said in a blog post in August that as part of the deal Character.AI would give Google a non-exclusive license for its technology.

The lawsuits accuse Google of substantially supporting Character.AI as it was allegedly “rushed to market” without proper safeguards on its chatbots.

Google denied that Shazeer and De Freitas built Character.AI’s model at the company and said it prioritizes user safety when developing and rolling out new AI products.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, spokesperson for Google, said in a statement.

Tech companies, including social media platforms, have long grappled with how to effectively and consistently police what users say on their sites, and chatbots are creating fresh challenges. For its part, Character.AI says it has taken meaningful steps to address safety issues around the more than 10 million characters on its platform.

Character.AI prohibits conversations that glorify self-harm and the posting of excessively violent and abusive content, although some users try to push a chatbot into conversations that violate those policies, Perella said. The company trained its model to recognize when that is happening so that inappropriate conversations are blocked, and users receive an alert that they’re violating Character.AI’s rules.

“It’s really a pretty complex exercise to get a model to always stay within the boundaries, but that is a lot of the work that we’ve been doing,” he said.

Character.AI chatbots include a disclaimer that reminds users they’re not chatting with a real person and they should treat everything as fiction. The company also directs users whose conversations raise red flags to suicide prevention resources, but moderating that type of content is challenging.

“The words that humans use around suicidal crisis are not always inclusive of the word ‘suicide’ or, ‘I want to die.’ It could be much more metaphorical how people allude to their suicidal thoughts,” Moutier said.

The AI system also has to recognize the difference between a person expressing suicidal thoughts and a person asking for advice on how to help a friend who is engaging in self-harm.

The company uses a mix of technology and human moderators to police content on its platform. An algorithm known as a classifier automatically categorizes content, allowing Character.AI to identify words that might violate its rules and filter conversations.
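
Character.AI hasn’t published the internals of its moderation pipeline, so the sketch below is purely illustrative: a toy Python classifier, with hypothetical phrase lists, that routes each message to one of the three outcomes described above (surface crisis resources, block the conversation, or allow it).

```python
# Illustrative sketch only: Character.AI has not disclosed how its
# classifier works. The phrase lists here are hypothetical stand-ins
# for a trained model's categories.

FLAGGED_PHRASES = {"hurt myself", "cut myself"}          # hypothetical crisis terms
BLOCKED_PHRASES = {"graphic violence", "abuse content"}  # hypothetical policy terms

def moderate(message: str) -> str:
    """Return 'crisis', 'block', or 'allow' for a chat message."""
    text = message.lower()
    if any(phrase in text for phrase in FLAGGED_PHRASES):
        return "crisis"   # direct the user to suicide-prevention resources
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return "block"    # alert the user that the chat violates the rules
    return "allow"
```

A production system would replace the phrase lists with a trained classifier, precisely because, as Moutier notes, people in crisis rarely use the literal words a keyword filter expects.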

In the U.S., users must enter a birth date when creating an account to use the site and have to be at least 13 years old, although the company does not require users to submit proof of their age.

Perella said he’s opposed to sweeping restrictions on teens using chatbots since he believes they can help teach valuable skills and lessons, including creative writing and how to navigate difficult real-life conversations with parents, teachers or employers.

As AI plays a bigger role in technology’s future, Goldman said parents, educators, government and others will also have to work together to teach children how to use the tools responsibly.

“If the world is going to be dominated by AI, we have to graduate kids into that world who are prepared for, not afraid of, it,” he said.

Trump orders federal agencies to stop using Anthropic’s AI after clash with Pentagon

President Trump on Friday directed federal agencies to stop using technology from San Francisco artificial intelligence company Anthropic, escalating a high-profile clash between the AI startup and the Pentagon over safety.

In a Friday post on the social media site Truth Social, Trump described the company as “radical left” and “woke.”

“We don’t need it, we don’t want it, and will not do business with them again!” Trump said.

The president’s harsh words mark a major escalation in the ongoing battle between some in the Trump administration and several technology companies over the use of artificial intelligence in defense tech.

Anthropic has been sparring with the Pentagon, which had threatened to end its $200-million contract with the company on Friday if it didn’t loosen restrictions on its AI model so it could be used for more military purposes. Anthropic had been asking for more guarantees that its tech wouldn’t be used for surveillance of Americans or autonomous weapons.

The tussle could hobble Anthropic’s business with the government. The Trump administration said the company was added to a sweeping national security blacklist, ordering federal agencies to immediately discontinue use of its products and barring any government contractors from maintaining ties with it.

Defense Secretary Pete Hegseth, who met with Anthropic’s Chief Executive Dario Amodei this week, criticized the tech company after Trump’s Truth Social post.

“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he wrote Friday on social media site X.

Anthropic didn’t immediately respond to a request for comment.

Anthropic announced a two-year agreement with the Department of Defense in July to “prototype frontier AI capabilities that advance U.S. national security.”

The company has an AI chatbot called Claude, but it also built a custom AI system for U.S. national security customers.

On Thursday, Amodei signaled the company wouldn’t cave to the Department of Defense’s demands to loosen safety restrictions on its AI models.

The government has emphasized in negotiations that it wants to use Anthropic’s technology only for legal purposes and that the safeguards Anthropic wants are already covered by law.

Still, Amodei was worried about Washington’s commitment.

“We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” he said in a blog post. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

Tech workers have backed Anthropic’s stance.

Unions and worker groups representing 700,000 employees at Amazon, Google and Microsoft said this week in a joint statement that they’re urging their employers to likewise reject such demands in any contracts they have with the Pentagon.

“Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression,” the statement said.

Anthropic’s standoff with the U.S. government could benefit its competitors, such as Elon Musk’s xAI or OpenAI.

Sam Altman, chief executive of OpenAI, the company behind ChatGPT and one of Anthropic’s biggest competitors, told CNBC in an interview that he trusts Anthropic.

“I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” he said. “I’m not sure where this is going to go.”

Anthropic has distinguished itself from its rivals by touting its concern about AI safety.

The company, valued at roughly $380 billion, is legally required to balance making money with advancing the company’s public benefit of “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

Developers, businesses, government agencies and other organizations use Anthropic’s tools. Its chatbot can generate code, write text and perform other tasks. Anthropic also offers an AI assistant for consumers and makes money from paid subscriptions as well as contracts. Unlike OpenAI, which is testing ads in ChatGPT, Anthropic has pledged not to show ads in its chatbot Claude.

The company has roughly 2,000 employees and generates revenue at an annualized rate of about $14 billion.

Video: The Web of Companies Owned by Elon Musk

In mapping out Elon Musk’s wealth, our investigation found that Mr. Musk is behind more than 90 companies in Texas. Kirsten Grind, a New York Times Investigations reporter, explains what her team found.

By Kirsten Grind, Melanie Bencosme, James Surdam and Sean Havey

February 27, 2026

Commentary: How Trump helped foreign markets outperform U.S. stocks during his first year in office

Trump has crowed about the gains in the U.S. stock market during his term, but in 2025 investors saw more opportunity in the rest of the world.

If you’re a stock market investor you might be feeling pretty good about how your portfolio of U.S. equities fared in the first year of President Trump’s term.

All the major market indices seemed to be firing on all cylinders, with the Standard & Poor’s 500 index gaining 17.9% through the full year.

But if you’re the type of investor who looks for things to regret, pay no attention to the rest of the world’s stock markets. That’s because overseas markets did better than the U.S. market in 2025 — a lot better. The MSCI World ex-USA index — that is, all the stock markets except the U.S. — gained more than 32% last year, nearly double the percentage gains of U.S. markets.

That’s a major departure from recent trends. Since 2013, the MSCI US index had bested the non-U.S. index every year except 2017 and 2022, sometimes by a wide margin — in 2024, for instance, the U.S. index gained 24.6%, while non-U.S. markets gained only 4.7%.

Broken down into individual country markets (also tracked by MSCI indices), the U.S. ranked 21st out of 23 developed markets in 2025, with only New Zealand and Denmark doing worse. Leading the pack were Austria and Spain, with 86% gains; strong records were also turned in by Finland, Ireland and Hong Kong, with gains of 50% or more, and the Netherlands, Norway, Britain and Japan, with gains of 40% or more.

Investment analysts cite several factors to explain this trend. Judging by traditional metrics such as price/earnings multiples, the U.S. markets have been much more expensive than those in the rest of the world. Indeed, they’re historically expensive. The Standard & Poor’s 500 index traded in 2025 at about 23 times expected corporate earnings; the historical average is 18 times earnings.

Investment managers also have become nervous about the concentration of market gains within the U.S. technology sector, especially in companies associated with artificial intelligence R&D. Fears that AI is an investment bubble that could take down the S&P’s highest fliers have investors looking elsewhere for returns.

But one factor recurs in almost all the market analyses tracking relative performance by U.S. and non-U.S. markets: Donald Trump.

Investors started 2025 with optimism about Trump’s influence on trading opportunities, given his apparent commitment to deregulation, his braggadocio about America’s dominant position in the world and his determination to preserve, even increase, it.

That hasn’t been the case for months.

“The Trump trade is dead. Long live the anti-Trump trade,” Katie Martin of the Financial Times wrote this week. “Wherever you look in financial markets, you see signs that global investors are going out of their way to avoid Donald Trump’s America.”

Two Trump policy initiatives are commonly cited by wary investment experts. One, of course, is Trump’s on-and-off tariffs, which have left investors with little ability to assess international trade flows. The Supreme Court’s invalidation of most Trump tariffs and the bellicosity of his response, which included the immediate imposition of new 10% tariffs across the board and the threat to increase them to 15%, have done nothing to settle investors’ nerves.

Then there’s Trump’s driving down the value of the dollar through his agitation for lower interest rates, among other policies. For overseas investors, a weaker dollar makes U.S. assets more expensive relative to the outside world.

It would be one thing if trade flows and the dollar’s value reflected economic conditions that investors could themselves parse in creating a picture of investment opportunities. That’s not the case just now. “The current uncertainty is entirely man-made (largely by one orange-hued man in particular) but could well continue at least until the US mid-term elections in November,” Sam Burns of Mill Street Research wrote on Dec. 29.

Trump hasn’t been shy about trumpeting U.S. stock market gains as emblems of his policy wisdom. “The stock market has set 53 all-time record highs since the election,” he said in his State of the Union address Tuesday. “Think of that, one year, boosting pensions, 401(k)s and retirement accounts for the millions and the millions of Americans.”

Trump asserted: “Since I took office, the typical 401(k) balance is up by at least $30,000. That’s a lot of money. … Because the stock market has done so well, setting all those records, your 401(k)s are way up.”

Trump’s figure doesn’t conform to findings by retirement professionals such as the 401(k) overseers at Bank of America. They reported that the average account balance grew by only about $13,000 in 2025. I asked the White House for the source of Trump’s claim, but haven’t heard back.

Interpreting stock market returns as snapshots of the economy is a mug’s game. Despite that, at her recent appearance before a House committee, Atty. Gen. Pam Bondi tried to deflect questions about her handling of the Jeffrey Epstein records by crowing about it.

“The Dow is over 50,000 right now,” she declared. “Americans’ 401(k)s and retirement savings are booming. That’s what we should be talking about.”

I predicted that the administration would use the Dow Jones industrial average’s break above 50,000 to assert that “the overall economy is firing on all cylinders, thanks to his policies.” The Dow reached that mark on Feb. 6. But Feb. 11, the day of Bondi’s testimony, was the last day the index closed above 50,000. On Thursday, it closed at 49,499.50, or about 1.4% below its Feb. 10 peak close of 50,188.14.

To use a metric suggested by economist Justin Wolfers of the University of Michigan, if you invested $48,448 in the Dow on the day Trump took office last year, when the Dow closed at 48,448 points, you would have had $50,000 on Feb. 6. That’s a gain of about 3.2%. But if you had invested the same amount in the global stock market not including the U.S. (based on the MSCI World ex-USA index) on that same day, you would have had nearly $60,000. That’s a gain of nearly 24%.
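
The dollar-per-point arithmetic behind that comparison is easy to check; this short Python sketch, using the figures quoted above, reproduces both percentages.

```python
# Wolfers-style comparison: buy "one dollar per Dow point" on the day
# Trump took office (Dow close: 48,448) and mark the stake to $50,000
# on Feb. 6; compare with the same stake growing to nearly $60,000 in
# the MSCI World ex-USA index over the same stretch.

def gain_pct(start: float, end: float) -> float:
    """Percentage gain from a starting value to an ending value."""
    return (end / start - 1) * 100

dow_gain = gain_pct(48_448, 50_000)    # about 3.2%
world_gain = gain_pct(48_448, 60_000)  # about 23.8%, i.e. nearly 24%
```

The same function applied to the full-year index levels yields the 17.9% and 32% figures cited earlier in the column.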

Broader market indices tell essentially the same story. From Jan. 17, 2025, the last day before Trump’s inauguration, through Thursday’s close, the MSCI US stock index gained a cumulative 16.3%. But the world index minus the U.S. gained nearly 42%.

The gulf between U.S. and non-U.S. performance has continued into the current year. The S&P 500 has gained about 0.74% this year through Wednesday, while the MSCI World ex-USA index has gained about 8.9%. That’s “the best start for a calendar year for global stocks relative to the S&P 500 going back to at least 1996,” Morningstar reports.

It wouldn’t be unusual for the discrepancy between the U.S. and global markets to shrink or even reverse itself over the course of this year.

That’s what happened in 2017, when overseas markets as tracked by MSCI beat the U.S. by more than three percentage points, and 2022, when global markets lost money but U.S. markets underperformed the rest of the world by more than five percentage points.

Economic conditions change, and often the stock markets march to their own drummers. The one thing less likely to change is that Trump is set to remain president until Jan. 20, 2029. Make your investment bets accordingly.
