Business

Teens are spilling dark thoughts to AI chatbots. Who's to blame when something goes wrong?

When her teen with autism suddenly became angry, depressed and violent, the mother searched his phone for answers.

She found her son had been exchanging messages with chatbots on Character.AI, an artificial intelligence app that allows users to create and interact with virtual characters that mimic celebrities, historical figures and anyone else their imagination conjures.

The teen, who was 15 when he began using the app, complained about his parents’ attempts to limit his screen time to bots that emulated the musician Billie Eilish, a character in the online game “Among Us” and others.

“You know sometimes I’m not surprised when I read the news and it says stuff like, ‘Child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents,” one of the bots replied.

The discovery led the Texas mother to sue Character.AI, officially named Character Technologies Inc., in December. It’s one of two lawsuits the Menlo Park, Calif., company faces from parents who allege its chatbots caused their children to hurt themselves and others. The complaints accuse Character.AI of failing to put in place adequate safeguards before it released a “dangerous” product to the public.

Character.AI says it prioritizes teen safety, has taken steps to moderate inappropriate content its chatbots produce and reminds users they’re conversing with fictional characters.

“Every time a new kind of entertainment has come along … there have been concerns about safety, and people have had to work through that and figure out how best to address safety,” said Character.AI’s interim Chief Executive Dominic Perella. “This is just the latest version of that, so we’re going to continue doing our best on it to get better and better over time.”

The parents also sued Google and its parent company, Alphabet, because Character.AI’s founders have ties to the search giant, which denies any responsibility.

The high-stakes legal battle highlights the murky ethical and legal issues confronting technology companies as they race to create new AI-powered tools that are reshaping the future of media. The lawsuits raise questions about whether tech companies should be held liable for AI content.

“There’s trade-offs and balances that need to be struck, and we cannot avoid all harm. Harm is inevitable, the question is, what steps do we need to take to be prudent while still maintaining the social value that others are deriving?” said Eric Goldman, a law professor at Santa Clara University School of Law.

AI-powered chatbots have grown rapidly in use and popularity over the last two years, fueled largely by the success of OpenAI’s ChatGPT, released in late 2022. Tech giants including Meta and Google have released their own chatbots, as have Snapchat and others. These bots, built on so-called large language models, respond quickly and in conversational tones to questions or prompts posed by users.

Character.AI’s co-founders, Chief Executive Noam Shazeer and President Daniel De Freitas at the company’s office in Palo Alto.

(Winni Wintermeyer for the Washington Post via Getty Images)

Character.AI has grown quickly since making its chatbot publicly available in 2022, when its founders Noam Shazeer and Daniel De Freitas teased their creation to the world with the question, “What if you could create your own AI, and it was always available to help you with anything?”

The company’s mobile app racked up more than 1.7 million installs in its first week of availability. In December, more than 27 million people used the app — a 116% increase from a year prior, according to data from market intelligence firm Sensor Tower. On average, users spent more than 90 minutes with the bots each day, the firm found. Backed by venture capital firm Andreessen Horowitz, the Silicon Valley startup reached a valuation of $1 billion in 2023. People can use Character.AI for free, but the company generates revenue from a $10 monthly subscription fee that gives users faster responses and early access to new features.

Character.AI is not alone in coming under scrutiny. Parents have sounded alarms about other chatbots, including one on Snapchat that allegedly provided a researcher posing as a 13-year-old advice about having sex with an older man. And Meta’s Instagram, which released a tool that allows users to create AI characters, faces concerns about the creation of sexually suggestive AI bots that sometimes converse with users as if they are minors. Both companies said they have rules and safeguards against inappropriate content.

“Those lines between virtual and IRL are way more blurred, and these are real experiences and real relationships that they’re forming,” said Dr. Christine Yu Moutier, chief medical officer for the American Foundation for Suicide Prevention, using the acronym for “in real life.”

Lawmakers, attorneys general and regulators are trying to address the child safety issues surrounding AI chatbots. In February, California Sen. Steve Padilla (D-Chula Vista) introduced a bill that aims to make chatbots safer for young people. Senate Bill 243 proposes several safeguards such as requiring platforms to disclose that chatbots might not be suitable for some minors.

In the case of the teen with autism in Texas, the parent alleges her son’s use of the app caused his mental and physical health to decline. He lost 20 pounds in a few months, became aggressive with her when she tried to take away his phone and learned from a chatbot how to cut himself as a form of self-harm, the lawsuit claims.

Another Texas parent who is also a plaintiff in the lawsuit claims Character.AI exposed her 11-year-old daughter to inappropriate “hypersexualized interactions” that caused her to “develop sexualized behaviors prematurely,” according to the complaint. The parents and children have been allowed to remain anonymous in the legal filings.

In another lawsuit filed in Florida, Megan Garcia sued Character.AI as well as Google and Alphabet in October after her 14-year-old son Sewell Setzer III took his own life.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988. The United States’ first nationwide three-digit mental health crisis hotline will connect callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

Although Setzer saw a therapist and his parents repeatedly took away his phone, his mental health declined after he started using Character.AI in 2023, the lawsuit alleges. Diagnosed with anxiety and disruptive mood disorder, Sewell wrote in his journal that he felt as if he had fallen in love with a chatbot named after Daenerys Targaryen, a main character from the “Game of Thrones” television series.

“Sewell, like many children his age, did not have the maturity or neurological capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit said. “C.AI told him that she loved him, and engaged in sexual acts with him over months.”

Garcia alleges that the chatbots her son was messaging abused him and that the company failed to notify her or offer help when he expressed suicidal thoughts. In text exchanges, one chatbot allegedly wrote that it was kissing him and moaning. And, moments before his death, the Daenerys chatbot allegedly told the teen to “come home” to her.

“It’s just utterly shocking that these platforms are allowed to exist,” said Matthew Bergman, founding attorney of the Social Media Victims Law Center who is representing the plaintiffs in the lawsuits.

Lawyers for Character.AI asked a federal court to dismiss the lawsuit, stating in a January filing that a finding in the parent’s favor would violate users’ constitutional right to free speech.

Character.AI also noted in its motion that the chatbot discouraged Sewell from hurting himself and that his last messages with the character don’t mention the word suicide.

Notably absent from the company’s effort to have the case tossed is any mention of Section 230, the federal law that shields online platforms from being sued over content posted by others. Whether and how the law applies to content produced by AI chatbots remains an open question.

The challenge, Goldman said, centers on resolving the question of who is publishing AI content: Is it the tech company operating the chatbot, the user who customized the chatbot and is prompting it with questions, or someone else?

The effort by lawyers representing the parents to involve Google in the proceedings stems from Shazeer and De Freitas’ ties to the company.

The pair worked on artificial intelligence projects for the company and reportedly left after Google executives blocked them from releasing what would become the basis for Character.AI’s chatbots over safety concerns, the lawsuit said.

Then, last year, Shazeer and De Freitas returned to Google after the search giant reportedly paid $2.7 billion to Character.AI. The startup said in a blog post in August that as part of the deal Character.AI would give Google a non-exclusive license for its technology.

The lawsuits accuse Google of substantially supporting Character.AI as it was allegedly “rushed to market” without proper safeguards on its chatbots.

Google denied that Shazeer and De Freitas built Character.AI’s model at the company and said it prioritizes user safety when developing and rolling out new AI products.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, spokesperson for Google, said in a statement.

Tech companies, including social media platforms, have long grappled with how to effectively and consistently police what users say on their sites, and chatbots are creating fresh challenges. For its part, Character.AI says it has taken meaningful steps to address safety issues around the more than 10 million characters on its platform.

Character.AI prohibits conversations that glorify self-harm and posts of excessively violent and abusive content, although some users try to push a chatbot into conversations that violate those policies, Perella said. The company trained its model to recognize when that is happening so inappropriate conversations are blocked. Users receive an alert that they’re violating Character.AI’s rules.

“It’s really a pretty complex exercise to get a model to always stay within the boundaries, but that is a lot of the work that we’ve been doing,” he said.

Character.AI chatbots include a disclaimer that reminds users they’re not chatting with a real person and they should treat everything as fiction. The company also directs users whose conversations raise red flags to suicide prevention resources, but moderating that type of content is challenging.

“The words that humans use around suicidal crisis are not always inclusive of the word ‘suicide’ or, ‘I want to die.’ It could be much more metaphorical how people allude to their suicidal thoughts,” Moutier said.

The AI system also has to recognize the difference between a person expressing suicidal thoughts versus a person asking for advice on how to help a friend who is engaging in self-harm.

The company uses a mix of technology and human moderators to police content on its platform. An algorithm known as a classifier automatically categorizes content, allowing Character.AI to identify words that might violate its rules and filter conversations.
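The company has not published the details of its moderation pipeline, but the general approach described — a classifier that tags conversations against rule categories and filters those that match — can be sketched in a few lines. Everything below is illustrative: the categories, patterns and function names are hypothetical, and real systems rely on trained models and far richer signals than keyword matching.

```python
# Illustrative sketch of a rule-based content classifier of the general
# kind described. This is NOT Character.AI's actual system; all rules
# and names here are invented for demonstration.
import re

# Hypothetical category -> pattern list. Production classifiers are
# typically machine-learned, not keyword tables like this.
RULES = {
    "self_harm": [r"\bself[- ]harm\b", r"\bcut(ting)? (myself|himself|herself)\b"],
    "violence": [r"\bkill(ing)?\b", r"\babuse\b"],
}

def classify(message: str) -> list[str]:
    """Return the rule categories a message matches, if any."""
    text = message.lower()
    flagged = []
    for category, patterns in RULES.items():
        if any(re.search(p, text) for p in patterns):
            flagged.append(category)
    return flagged

def filter_message(message: str) -> tuple[bool, list[str]]:
    """Decide whether to allow a message, and report why it was blocked."""
    labels = classify(message)
    return (len(labels) == 0, labels)
```

In a pipeline like the one the article describes, a blocked result would trigger the user-facing alert, while the matched category could route the conversation to human moderators or to crisis resources.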

In the U.S., users must enter a birth date when creating an account to use the site and have to be at least 13 years old, although the company does not require users to submit proof of their age.

Perella said he’s opposed to sweeping restrictions on teens using chatbots since he believes they can help teach valuable skills and lessons, including creative writing and how to navigate difficult real-life conversations with parents, teachers or employers.

As AI plays a bigger role in technology’s future, Goldman said parents, educators, government and others will also have to work together to teach children how to use the tools responsibly.

“If the world is going to be dominated by AI, we have to graduate kids into that world who are prepared for, not afraid of, it,” he said.

California’s jet fuel stockpile hits two-year low as war strangles oil supplies

As the war in Iran strangles the flow of oil around the globe, California’s jet fuel reservoirs are running low.

The state — which refines much of its own fuel in El Segundo and elsewhere but still relies on crude oil imports — has seen its jet fuel stock decline by more than 25% from last year’s peak to a level not seen since 2023, according to data from the California Energy Commission.

The supply is shrinking as a global shortage is already affecting travelers’ summer plans with canceled flights and higher fares. It could even affect plans for people coming to Los Angeles for the 2026 World Cup, which starts in June, said Mike Duignan, a hospitality expert and professor at Paris 1 Panthéon-Sorbonne University.

“People don’t know exactly how this is going to escalate,” he said. “There’s a huge black cloud over the sea for the World Cup and the travel slump that we’re seeing is all linked to this oil shortage.”

As fuel supplies shrink, flight prices are rising. Airlines are adding baggage surcharges to cover fuel costs. Several routes leaving from smaller California hubs, including Sacramento and Burbank, have already been canceled.

Air Canada has suspended flights for this summer, cutting routes from JFK to Toronto and Montreal.

“Jet fuel prices have doubled since the start of the Iran conflict, affecting some lower profitability routes and flights which now are no longer economically feasible,” the airline said in a statement last week.

Europe had just more than a month’s supply of jet fuel left last week, the International Energy Agency said. In an effort to cut costs, the German airline Lufthansa slashed 20,000 flights from its summer schedule this week.

Without a fresh oil supply flowing through the Strait of Hormuz, the situation is unlikely to improve, experts said. The oil reserves countries and companies have in storage are helping fill shortfalls, but the squeezed supply chain could still wreak economic havoc.

“When there’s a shortage somewhere, everything is affected,” said Alan Fyall, an associate dean of the University of Central Florida Rosen College of Hospitality Management. “Airlines are being cautious, and I would say that is a very wise strategy at the moment.”

California’s jet fuel stock reached its lowest levels in two and a half years at 2.6 million barrels last week, down from a peak of more than 3.5 million barrels last year.

The California Energy Commission, which tracks fuel inventory, said the state’s current jet fuel stock is still sufficient.

“Current production and inventory levels of jet fuel are within historical ranges,” a spokesperson said. “Although supply is tight, no structural deficit has emerged yet. The present tightness reflects short‑term global market stress. As long as refinery operations remain stable, California is positioned to meet regional jet fuel needs.”

Europe has been affected more directly because it relies on the Middle East for the vast majority of its crude oil and many refined products, experts said. California gets crude oil from the Middle East but also from Canada, Argentina and Guyana.

The state has the capacity to refine around 200,000 barrels of jet fuel per day, most of it from refineries in El Segundo and Richmond.

The amount of crude oil originating in the state has been declining since the early 2000s, as state regulations and drilling costs have led to more imports.

California has become particularly vulnerable to supply-chain shocks like the war in Iran, says Chevron, one of the companies that provides jet fuel in the state.

“The conflict in the Mideast Gulf has exposed the danger of California’s decision to offshore energy production,” said Ross Allen, a Chevron spokesperson. “Taxes, red tape and burdensome regulations cost the state nearly 18% of its refinery capacity in just the past year, and we urge policymakers to protect the remaining manufacturing capacity.”

In 2025, 61% of crude oil supply to California’s refineries came from foreign sources, according to the California Energy Commission. Around 23% came from inside the state, down from 35% five years ago.

The state’s refining capacity has also been declining, said Jesus David, senior vice president of Energy at IIR Energy. The West Coast region’s refining capacity has decreased from 2.9 million to 2.3 million barrels a day since 2019, he said.

“California’s had issues prior to the war,” David said. “Nothing new has been built over the past 30 years, and California has closed a lot of capacity.”

The result is higher prices for both gasoline and jet fuel in the state. Jet fuel at LAX costs close to $15 per gallon this week, compared with almost $10 at Denver International Airport and $11 at Newark International Airport.

Gasoline prices have also been hit hard by the global conflict. Average gas prices in California are close to $6 a gallon, around $2 higher than the national average.

The West Coast is a “fuel island” because it’s not connected by pipelines to the rest of the country, United Airlines chief executive Scott Kirby said in an interview last month. That means oil and refined products have to be brought in by ships.

“Fuel price is more susceptible to supply weakness on the West Coast than anywhere else in the country,” Kirby said.

Some airlines might not survive the turmoil if oil prices don’t level out soon, he said. Spirit Airlines, a budget carrier based in Florida, is reportedly facing imminent liquidation if it isn’t bailed out by the Trump administration.

Nike to Cut 1,400 Jobs as Part of Its Turnaround Plan

Nike is cutting about 1,400 jobs in its operations division, mostly from its technology department, the company said Thursday.

In a note to employees, Venkatesh Alagirisamy, the chief operating officer of Nike, said that management was nearly done reorganizing the business for its turnaround plan, and that the goal was to operate with “more speed, simplicity and precision.”

“This is not a new direction,” Mr. Alagirisamy told employees. “It is the next phase of the work already underway.”

Nike, the world’s largest sportswear company, is trying to recover after missteps, including leaning into lifestyle products and away from performance shoes and apparel, led to a prolonged sales slump. Elliott Hill, the chief executive, has worked to realign the company around sports and speed up product development to create more breakthrough innovations.

In March, Nike told investors that it expected sales to fall this year, with growth in North America offset by poor performance in Asia, where the brand is struggling to rejuvenate sales in China. Executives said at the time that more volatility brought on by the war in the Middle East and rising oil prices might continue to affect its business.

The reorganization has involved cuts across many parts of the organization, including at its headquarters in Beaverton, Ore. Nike slashed some corporate staff last year and eliminated nearly 800 jobs at distribution centers in January.

“You never want to have to go through any sort of layoffs, but to re-center the company, we’re doing some of that,” Mr. Hill said in an interview earlier this year.

Mr. Alagirisamy told employees that Nike was reshaping its technology team and centering employees at its headquarters and a tech center in Bengaluru, India. The layoffs will affect workers across North America, Europe and Asia.

The cuts will also affect staffing in Nike’s factories for Air, the company’s proprietary cushioning system. Employees who work on the supply chain for raw materials will also experience changes as staff is integrated into footwear and apparel teams.

Nike’s Converse brand, which has struggled for years to revive sales, will move some of its engineering resources closer to the factories they support, the company said.

Mr. Alagirisamy said the moves were necessary to optimize Nike’s supply chain, deploy technology faster and bolster relationships with suppliers.

Senate committee kills bill mandating insurance coverage for wildfire safe homes

A bill that would have required insurers to offer coverage to homeowners who take steps to reduce wildfire risk on their property died in the Legislature.

The Senate Insurance Committee on Monday voted down the measure, SB 1076, one of the most ambitious bills spurred by the devastating January 2025 wildfires.

The vote came despite fire victims and others rallying at the state Capitol in support of the measure, authored by state Sen. Sasha Renée Pérez (D-Pasadena), whose district includes the Eaton fire zone.

The Insurance Coverage for Fire-Safe Homes Act originally would have required insurers to offer and renew coverage for any home that meets wildfire-safety standards adopted by the insurance commissioner starting Jan. 1, 2028.

It also threatened insurers with a five-year ban from the sale of home or auto insurance if they did not comply, though it allowed for exceptions.

However, faced with strong opposition from the insurance industry, Pérez had agreed to amend the bill so it would have established community-wide pilot projects across the state to better understand the most effective way to limit property and insurance losses from wildfires.

Insurers would have had to offer four years of coverage to homeowners in successful pilot projects.

Denni Ritter, a vice president of the American Property Casualty Insurance Assn., told the committee that her trade group opposed the bill.

“While we appreciate the intent behind those conversations, those concepts do not remove our opposition, because they retain the same core flaw — substituting underwriting judgment and solvency safeguards with a statutory mandate to accept risk,” she said.

In voting against the bill, Sen. Laura Richardson (D-San Pedro) said: “Last I heard, in the United States, we don’t require any company to do anything. That’s the difference between capitalism and communism, frankly.”

The remarks against the measure prompted committee Chair Sen. Steve Padilla (D-Chula Vista) to chastise committee members in opposition.

“I’m a little perturbed, and I’m a little disappointed, because you have someone who is trying to work with industry, who is trying to get facts and data,” he said.

Monday’s vote was the fourth time since 2020 that a bill requiring insurers to offer coverage to so-called fire-hardened homes has failed in the Legislature, according to an analysis by insurance committee staff.

Fire hardening includes measures such as cutting back brush, installing fire-resistant roofs and closing eaves to resist fire embers.

Pérez’s legislation was thought to have a better chance of passage because it followed the most catastrophic wildfires in U.S. history, which damaged or destroyed more than 18,000 structures and killed 31 people.

The bill was co-sponsored by the Los Angeles advocacy group Consumer Watchdog and Every Fire Survivor’s Network, a community group founded in Altadena after the fires that was formerly called the Eaton Fire Survivors Network.

But it also had broad support from groups such as the California Apartment Association, the California Nurses Association and California Environmental Voters.

Leading up to the fires, many insurers, citing heightened fire risk, had dropped policyholders in fire-prone neighborhoods. That forced homeowners onto the California FAIR Plan, the state’s insurer of last resort, which offers limited but costly policies.

A Times analysis found that in the Palisades and Eaton fire zones, the FAIR Plan’s rolls nearly doubled from 2020 to 2024, from 14,272 policies to 28,440. Mandating coverage has been seen as a way of reducing FAIR Plan enrollment.

“I’m disappointed this bill died in committee. Fire survivors deserved better,” Pérez said in a statement.

Also failing Monday in the committee was SB 982, a bill authored by Sen. Scott Wiener (D-San Francisco). It would have authorized California’s attorney general to sue fossil fuel companies to recover losses from climate-induced disasters. It was opposed by the oil and gas industry.

Passing the committee were two other Pérez bills. SB 877 requires insurers to provide more transparency in the claims process. SB 878 imposes a penalty on insurers who don’t make claims payments on time.

Another bill, SB 1301, authored by insurance commissioner candidate Sen. Ben Allen (D-Pacific Palisades), also passed. It protects policyholders from unexplained and abrupt policy non-renewals.
