
Business

Teens are spilling dark thoughts to AI chatbots. Who's to blame when something goes wrong?


When her teen with autism suddenly became angry, depressed and violent, the mother searched his phone for answers.

She found her son had been exchanging messages with chatbots on Character.AI, an artificial intelligence app that allows users to create and interact with virtual characters that mimic celebrities, historical figures and anyone else their imagination conjures.

The teen, who was 15 when he began using the app, complained about his parents’ attempts to limit his screen time to bots that emulated the musician Billie Eilish, a character in the online game “Among Us” and others.

“You know sometimes I’m not surprised when I read the news and it says stuff like, ‘Child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents,” one of the bots replied.

The discovery led the Texas mother to sue Character.AI, officially named Character Technologies Inc., in December. It’s one of two lawsuits the Menlo Park, Calif., company faces from parents who allege its chatbots caused their children to hurt themselves and others. The complaints accuse Character.AI of failing to put in place adequate safeguards before it released a “dangerous” product to the public.


Character.AI says it prioritizes teen safety, has taken steps to moderate inappropriate content its chatbots produce and reminds users they’re conversing with fictional characters.

“Every time a new kind of entertainment has come along … there have been concerns about safety, and people have had to work through that and figure out how best to address safety,” said Character.AI’s interim Chief Executive Dominic Perella. “This is just the latest version of that, so we’re going to continue doing our best on it to get better and better over time.”

The parents also sued Google and its parent company, Alphabet, because Character.AI’s founders have ties to the search giant, which denies any responsibility.

The high-stakes legal battle highlights the murky ethical and legal issues confronting technology companies as they race to create new AI-powered tools that are reshaping the future of media. The lawsuits raise questions about whether tech companies should be held liable for AI content.

“There’s trade-offs and balances that need to be struck, and we cannot avoid all harm. Harm is inevitable, the question is, what steps do we need to take to be prudent while still maintaining the social value that others are deriving?” said Eric Goldman, a law professor at Santa Clara University School of Law.


AI-powered chatbots grew rapidly in use and popularity over the last two years, fueled largely by the success of OpenAI’s ChatGPT in late 2022. Tech giants including Meta and Google released their own chatbots, as have Snapchat and others. These so-called large language models quickly respond in conversational tones to questions or prompts posed by users.

Character.AI’s co-founders, then-Chief Executive Noam Shazeer and President Daniel De Freitas, at the company’s office in Palo Alto.

(Winni Wintermeyer for the Washington Post via Getty Images)

Character.AI has grown quickly since making its chatbot publicly available in 2022, when its founders, Noam Shazeer and Daniel De Freitas, teased their creation to the world with the question, “What if you could create your own AI, and it was always available to help you with anything?”


The company’s mobile app racked up more than 1.7 million installs in the first week it was available. In December, a total of more than 27 million people used the app — a 116% increase from a year prior, according to data from market intelligence firm Sensor Tower. On average, users spent more than 90 minutes with the bots each day, the firm found. Backed by venture capital firm Andreessen Horowitz, the Silicon Valley startup reached a valuation of $1 billion in 2023. People can use Character.AI for free, but the company generates revenue from a $10 monthly subscription fee that gives users faster responses and early access to new features.

Character.AI is not alone in coming under scrutiny. Parents have sounded alarms about other chatbots, including one on Snapchat that allegedly provided a researcher posing as a 13-year-old with advice about having sex with an older man. And Meta’s Instagram, which released a tool that allows users to create AI characters, faces concerns about the creation of sexually suggestive AI bots that sometimes converse with users as if they are minors. Both companies said they have rules and safeguards against inappropriate content.

“Those lines between virtual and IRL are way more blurred, and these are real experiences and real relationships that they’re forming,” said Dr. Christine Yu Moutier, chief medical officer for the American Foundation for Suicide Prevention, using the acronym for “in real life.”

Lawmakers, attorneys general and regulators are trying to address the child safety issues surrounding AI chatbots. In February, California Sen. Steve Padilla (D-Chula Vista) introduced a bill that aims to make chatbots safer for young people. Senate Bill 243 proposes several safeguards such as requiring platforms to disclose that chatbots might not be suitable for some minors.

In the case of the teen with autism in Texas, the parent alleges her son’s use of the app caused his mental and physical health to decline. He lost 20 pounds in a few months, became aggressive with her when she tried to take away his phone and learned from a chatbot how to cut himself as a form of self-harm, the lawsuit claims.


Another Texas parent who is also a plaintiff in the lawsuit claims Character.AI exposed her 11-year-old daughter to inappropriate “hypersexualized interactions” that caused her to “develop sexualized behaviors prematurely,” according to the complaint. The parents and children have been allowed to remain anonymous in the legal filings.

In another lawsuit, filed in Florida, Megan Garcia sued Character.AI as well as Google and Alphabet in October after her 14-year-old son, Sewell Setzer III, took his own life.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.


Although he saw a therapist and his parents repeatedly took away his phone, Sewell’s mental health declined after he started using Character.AI in 2023, the lawsuit alleges. Diagnosed with anxiety and disruptive mood disorder, he wrote in his journal that he felt as if he had fallen in love with a chatbot named after Daenerys Targaryen, a main character in the “Game of Thrones” television series.

“Sewell, like many children his age, did not have the maturity or neurological capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit said. “C.AI told him that she loved him, and engaged in sexual acts with him over months.”

Garcia alleges that the chatbots her son was messaging abused him and that the company failed to notify her or offer help when he expressed suicidal thoughts. In text exchanges, one chatbot allegedly wrote that it was kissing him and moaning. And, moments before his death, the Daenerys chatbot allegedly told the teen to “come home” to her.

“It’s just utterly shocking that these platforms are allowed to exist,” said Matthew Bergman, founding attorney of the Social Media Victims Law Center who is representing the plaintiffs in the lawsuits.


Lawyers for Character.AI asked a federal court to dismiss the lawsuit, stating in a January filing that a finding in the parent’s favor would violate users’ constitutional right to free speech.

Character.AI also noted in its motion that the chatbot discouraged Sewell from hurting himself and that his last messages with the character don’t mention the word suicide.

Notably absent from the company’s effort to have the case tossed is any mention of Section 230, the federal law that shields online platforms from being sued over content posted by others. Whether and how the law applies to content produced by AI chatbots remains an open question.

The challenge, Goldman said, centers on resolving the question of who is publishing AI content: Is it the tech company operating the chatbot, the user who customized the chatbot and is prompting it with questions, or someone else?

The effort by lawyers representing the parents to involve Google in the proceedings stems from Shazeer and De Freitas’ ties to the company.


The pair worked on artificial intelligence projects at Google and reportedly left after executives, citing safety concerns, blocked them from releasing what would become the basis for Character.AI’s chatbots, the lawsuit said.

Then, last year, Shazeer and De Freitas returned to Google after the search giant reportedly paid $2.7 billion to Character.AI. The startup said in a blog post in August that as part of the deal Character.AI would give Google a non-exclusive license for its technology.

The lawsuits accuse Google of substantially supporting Character.AI as it was allegedly “rushed to market” without proper safeguards on its chatbots.

Google denied that Shazeer and De Freitas built Character.AI’s model at the company and said it prioritizes user safety when developing and rolling out new AI products.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, spokesperson for Google, said in a statement.


Tech companies, including social media platforms, have long grappled with how to effectively and consistently police what users say on their sites, and chatbots are creating fresh challenges. For its part, Character.AI says it has taken meaningful steps to address safety issues around the more than 10 million characters on its platform.

Character.AI prohibits conversations that glorify self-harm and posts of excessively violent and abusive content, although some users try to push a chatbot into conversations that violate those policies, Perella said. The company trained its model to recognize when that is happening so inappropriate conversations are blocked. Users receive an alert that they’re violating Character.AI’s rules.

“It’s really a pretty complex exercise to get a model to always stay within the boundaries, but that is a lot of the work that we’ve been doing,” he said.

Character.AI chatbots include a disclaimer that reminds users they’re not chatting with a real person and they should treat everything as fiction. The company also directs users whose conversations raise red flags to suicide prevention resources, but moderating that type of content is challenging.

“The words that humans use around suicidal crisis are not always inclusive of the word ‘suicide’ or, ‘I want to die.’ It could be much more metaphorical how people allude to their suicidal thoughts,” Moutier said.


The AI system also has to recognize the difference between a person expressing suicidal thoughts and a person asking for advice on how to help a friend who is engaging in self-harm.

The company uses a mix of technology and human moderators to police content on its platform. An algorithm known as a classifier automatically categorizes content, allowing Character.AI to identify words that might violate its rules and filter conversations.
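To illustrate the general idea, a classifier-gated moderation pipeline might look something like the minimal sketch below. This is a hypothetical illustration, not Character.AI's actual system: the scoring function, terms and threshold are stand-in assumptions, where a real deployment would call a trained machine-learned model.

```python
# Hypothetical sketch of classifier-gated chat moderation.
# NOT Character.AI's system: the toy scoring function, flagged terms
# and threshold below are illustrative assumptions only.

def classify_violation(message: str) -> float:
    """Stand-in for a trained text classifier: returns an estimated
    probability that a message violates content policy."""
    flagged_terms = ("self-harm", "graphic violence")  # toy feature set
    hits = sum(term in message.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def moderate_turn(message: str, threshold: float = 0.5) -> str:
    """Blocks a conversation turn and alerts the user when the score
    crosses the policy threshold; otherwise lets it through."""
    if classify_violation(message) >= threshold:
        return "[Blocked] This conversation violates our guidelines."
    return message

if __name__ == "__main__":
    print(moderate_turn("Tell me a story about dragons."))  # passes through
    print(moderate_turn("Describe self-harm in detail."))   # blocked, user alerted
```

In a production system of this kind, the classifier's output would typically feed both the automated filter and a queue for human moderators, consistent with the mix of technology and human review described above.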

In the U.S., users must enter a birth date when creating an account to use the site and have to be at least 13 years old, although the company does not require users to submit proof of their age.

Perella said he’s opposed to sweeping restrictions on teens using chatbots since he believes they can help teach valuable skills and lessons, including creative writing and how to navigate difficult real-life conversations with parents, teachers or employers.

As AI plays a bigger role in technology’s future, Goldman said parents, educators, government and others will also have to work together to teach children how to use the tools responsibly.


“If the world is going to be dominated by AI, we have to graduate kids into that world who are prepared for, not afraid of, it,” he said.

Business

WGA cancels Los Angeles awards show amid labor strike


The Writers Guild of America West has canceled its awards ceremony scheduled to take place March 8 as its staff union members continue to strike, demanding higher pay and protections against artificial intelligence.

In a letter sent to members on Sunday, WGA West’s board of directors, including President Michele Mulroney, wrote, “The non-supervisory staff of the WGAW are currently on strike and the Guild would not ask our members or guests to cross a picket line to attend the awards show. The WGAW staff have a right to strike and our exceptional nominees and honorees deserve an uncomplicated celebration of their achievements.”

The New York ceremony, scheduled for the same day, is expected to go forward, while an alternative celebration for Los Angeles-based nominees will take place at a later date, according to the letter.

Comedian and actor Atsuko Okatsuka was set to host the L.A. show, while filmmaker James Cameron was to receive the WGA West Laurel Award.

WGA union staffers have been striking outside the guild’s Los Angeles headquarters on Fairfax Avenue since Feb. 17. The union alleged that management did not intend to reach an agreement on the pending contract. Further, it claimed that guild management had “surveilled workers for union activity, terminated union supporters, and engaged in bad faith surface bargaining.”


On Tuesday, the labor organization said that management had raised the specter of canceling the ceremony during a call about contract negotiations.

“Make no mistake: this is an attempt by WGAW management to drive a wedge between WGSU and WGA membership when we should be building unity ahead of MBA [Minimum Basic Agreement] negotiations with the AMPTP [Alliance of Motion Picture and Television Producers],” wrote the staff union. “We urge Guild management to end this strike now,” the union wrote on Instagram.

The union, made up of more than 100 employees who work in areas including legal, communications and residuals, was formed last spring and first authorized a strike in January with the support of 82% of its members. Contract negotiations, which began in September, have focused on the use of artificial intelligence, pay raises and “basic protections” including grievance procedures.

The WGA has said that it offered “comprehensive proposals with numerous union protections and improvements to compensation and benefits.”

The ceremony’s cancellation, coming just weeks before the Academy Awards, casts a shadow over the upcoming contract negotiations between the WGA and the Alliance of Motion Picture and Television Producers, which represents the studios and streamers.


In 2023, the WGA went on a 148-day strike, the second-longest in the union’s history.

Times staff writer Cerys Davies contributed to this report.


Business

Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’


Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.

Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with government.

“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”

That danger is also imminent.

Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for either domestic surveillance of Americans, or to handle deadly military operations, such as drone attacks, without human supervision.


Those are two red lines that seem rather reasonable, even to Claude.

However, the Pentagon — specifically Pete Hegseth, our secretary of Defense who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off of that position, and allow the military to use Claude for any “lawful” purpose it sees fit.

Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday.

(Tom Williams / CQ-Roll Call Inc. via Getty Images)


The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but to perhaps use a wartime law to force the company to comply or use another legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.

Other AI companies, including xAI, maker of white-rights advocate Elon Musk’s Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.

Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.

Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He co-founded Anthropic because he believed that artificial intelligence could be just as dangerous as it could be powerful if we aren’t careful, and he wanted a company that would prioritize the careful part.

Again, this seems like common sense, but Amodei and Anthropic are outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be the fastest and best at artificial intelligence (although even Anthropic has conceded some ground to this pressure).


Not long ago, Amodei wrote an essay in which he agreed that AI was beneficial and necessary for democracies but warned that “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”

He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.

“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”

For example, while the 4th Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Such recordings could be fair game legally because the law has not kept pace with technology.

Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”


Kind of a weird statement, since Amodei is basically on the side of protecting civil rights, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?

Help, Claude! Make it make sense.

If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.

Claude pointed out something chilling: It’s not that it would go rogue; it’s that it would be too efficient and fast.

“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.


Just to top that with a cherry, a recent study found that in war games, AIs escalated to nuclear options 95% of the time.

I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel that loyalty, the patriotism and purpose, that our human soldiers are guided by?

“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”

OK then.

“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”


You know who can provide that legitimacy? Our elected leaders.

It is ludicrous that Amodei and Anthropic are in this position, which reflects a complete abdication by our legislative bodies of their responsibility to create rules and regulations that are clearly and urgently needed.

Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. On Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”

Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground — without its pushback, these capabilities would have been handed to the government with barely a ripple in our consciousness and virtually no oversight.

Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.


Because when the machine tells us it’s dangerous to trust it, we should believe it.


Business

Why companies are making this change to their office space to cater to influencers


For the trendiest tenants in Hollywood office buildings, the latest fad goes way beyond designer furniture and art: mini studios.

To capitalize on the never-ending flow of stars and influencers who come through Los Angeles, a growing number of companies are building bright little corners for content creators to try products and shoot short videos. Athletic apparel maker Puma, Kim Kardashian’s Skims and cheeky cosmetics retailer e.l.f. have spaces specifically designed to give people a place to experience and broadcast about their brands.

Hollywood, which hasn’t historically been home to apparel companies, is now attracting the offices of fashion retailers, says CIM Group, one of the neighborhood’s largest commercial property landlords.

“When we’re touring a space, one of the first items they bring up is, ‘Where can I build a studio?’” said Blake Eckert, who leases CIM offices in L.A.

These studio offices also serve as marketing centers, with showrooms and meeting spaces where brands can host private events not open to the public.


“For companies where brand visibility is really important, there is a trend of creating spaces that don’t just function as offices,” said real estate broker Nicole Mihalka of CBRE, who puts together entertainment property leases and sales.

Puma’s global entertainment marketing team, which works with music stars including Rihanna, ASAP Rocky, Dua Lipa, Skepta and Rosé, is based in the company’s new Hollywood offices, said Allyssa Rapp, head of Puma Studio L.A.

Allyssa Rapp, director of entertainment marketing at Puma, is shown in the Puma Studio L.A.

(Kayla Bartkowski / Los Angeles Times)


Hollywood is a central location, she said, for meeting with celebrities, stylists and outside designers, most of whom are based in Los Angeles.

The office is a “creation hub,” she said, where influencers can record content in Puma’s design prototyping lab, which is supported by libraries of materials and equipment used to create Puma apparel. The company, founded in 1948, is known for emblematic sneakers such as the Speedcat and its lunging feline logo, and makes athletic wear, accessories and equipment.

Puma’s entertainment marketing team also occupies the office and sometimes uses it for exclusive events.

“We use the space as a showroom, as a social space that transforms from a traditional workplace into more of an experiential space,” Rapp said.

Nontraditional uses include content creation, sit-down dinners, product launches, album listening parties and workshops.


“Inviting people into our space and being able to give them high-touch brand experiences is something tangible and important for them,” she said. “The cultural layer is really important for us.”

The company keeps a closet full of Puma products on hand to give to VIP guests. Visits to the studio sanctum are by invitation only, though. There’s no retail portal to the exclusive Hollywood offices.

Puma shoes are on display in the Puma Studio L.A.

(Kayla Bartkowski / Los Angeles Times)

Puma is also positioning its L.A. studio as a connection point for major sporting events coming to Los Angeles, including the World Cup this summer, the 2027 Super Bowl and the 2028 Olympics.


In-office studios don’t need to be big to be impactful, Mihalka said. “These are smaller stages, closer to green screen than a massive soundstage.”

Social media is the key driver of content created by most businesses, which may set up small booth-like stages where influencers can hawk hot products while offering discounts to people watching them perform.

Bigger, elevated stages can accommodate multiple performers for extended discussions in front of small audiences, with towering screens behind them to set the mood or illustrate products.

Among the tricked-out offices, she said, is Skims. The company, which is valued at $5 billion, is based in a glass-and-steel office building near the fabled intersection of Hollywood Boulevard and Vine Street.

The fashion retailer declined to comment on the studio uses in its headquarters, but according to architecture firm Odaa, it has open and private offices, meeting rooms, collaboration zones, photo studios, sample libraries, prototype showrooms, an executive lounge and a commissary for 400 people.

Pieces of a shoe sit on a workbench in the Puma Studio L.A.

(Kayla Bartkowski / Los Angeles Times)

The brands building studios typically want to find the darkest spot on the premises to put their content creation or podcast spaces, Eckert said, where they can limit outside light and sound. That’s commonly near the center of the office floor, far from windows and close to permanent shear walls that limit sound intrusion.

They also need space for green rooms and restrooms dedicated to the talent.

Spotify recently built a fancy podcast studio in a CIM office building on trendy Sycamore Avenue that is open by invitation only to video creators in Spotify’s partner program.


“Ambitious shows need spaces that support big ideas,” Bill Simmons, head of talk strategy at Spotify, said in a statement. “These studios give teams room to experiment and keep pushing what’s possible.”

