Science

Growing need. Glaring gaps. Why mental health care can be a struggle for autistic youth

In April, a group of Orange County parents flew to Sacramento to attend a conference hosted by Disability Voices United, an advocacy group for people with disabilities and their families.

They wanted to emphasize three issues to state officials at the event: the paucity of mental health care for children with developmental disabilities, the confusing mess of government systems meant to help them, and the gaps in availability of day-to-day caregiving.

Among them was Christine LyBurtus, a single mom living in Fullerton. Last fall, after repeated rounds of 911 calls and emergency hospitalizations, she had made the agonizing decision to move her son, Noah, who is autistic, into a state-operated facility for at least a year.

LyBurtus had struggled to find the support she needed to keep him at home. “Families are being forced to give up their children to group homes and treatment centers over 12 hours from their homes … or out of the state of California entirely,” she told the crowd at the conference.

Christine LyBurtus is embraced by Beth Martinko outside elevators in the Capitol Annex Swing Space building in Sacramento.

(Jose Luis Villegas / For The Times)

“I beg you to hear us,” she said to state officials before turning from the microphone.

Even as autism diagnoses climb, with more than 2 million children and teens across the country estimated to be affected, experts and advocates have bemoaned glaring gaps in services to meet the mental health needs of autistic youth.

Some researchers have estimated that upward of 90% of autistic youth have overlapping conditions like anxiety, depression or ADHD. Many have suffered alarming levels of trauma.

Yet “there are very few specialized facilities in the country that meet the unique needs of individuals with autism and co-occurring mental health conditions,” especially in crisis situations, said Cynthia Martin, senior clinical psychologist at the Child Mind Institute, which is based in New York.

Between 2020 and 2021, the number of California children and teens served by the state developmental disability system who were deemed to have “complex needs” — a state term for those who needed a range of crisis services or landed in a locked psychiatric ward — rose from 536 to 677, according to a report released last year by the California Department of Developmental Services.

California has been working to build more facilities to house and support such youth, including STAR homes that provide “crisis stabilization” for roughly a year, like the one into which Noah moved. But the state has seen an uptick in the number of people in need of such programs, as well as more former residents boomeranging back for “further stabilization,” the state report said.

As of this summer, the STAR homes could accommodate only 15 teens across the state; the one that accepted Noah budgets for more than $1 million per resident annually.

There are other community facilities where developmentally disabled youth in crisis can be placed, but “there remains a critical need for a ‘can’t say no’ option for individuals whom private sector vendors cannot or will not serve,” the state report concluded.

Autistic people and their families have also lamented that they cannot find adequate help in their communities before they reach a crisis point. Researchers have found that mental health workers are often unprepared to work with people with intellectual or developmental disabilities or may chalk up symptoms to their disabilities, rather than overlapping needs.

Christine LyBurtus looks at a drawing of her son, Noah, in their Fullerton home.

(Mel Melcon / Los Angeles Times)

“It’s pretty common for a mental health practitioner to turn away someone with a developmental disability or say, ‘I don’t serve that population,’” said Zoe Gross, director of advocacy for the Autistic Self Advocacy Network.

Alison D. Morantz, director of the Stanford Intellectual and Developmental Disabilities Law and Policy Project, called it a “scandal” that amid a scarcity of psychiatric beds for youth, “if a family member discloses that their child is on the autistic spectrum, they can say, ‘No thank you.’”

“It puts parents in impossible situations,” she said.

The biggest challenges for many families of autistic youth often surround aggression, which isn’t a core feature of autism but rather a symptom of other issues that need to be uncovered, child and adolescent psychiatrist Dr. Matthew Siegel told a federal committee last year.

“You have to look underneath or in front of that … for what could be contributing or what is driving this aggression,” said Siegel, founder of the Autism and Developmental Disorders Inpatient Research Collaborative. He and other researchers have seen promising results from specialized units at hospitals, but few exist — “not even one per state.”

“Even specialized clinics that can work on these challenges are quite rare,” he said.

The Supreme Court has ruled that institutionalizing people with disabilities who could live in the community is discriminatory if a community placement “can be reasonably accommodated.” Federal investigations have, at times, faulted states for failing to provide needed services for people to stay in their homes or communities.

The law “requires that services are provided in the most integrated setting appropriate to the needs of a person with a disability,” according to the U.S. Department of Health and Human Services.

But the struggle to find needed services can end up pushing autistic people with mental health needs out of their communities. Bonnie Ivers, director of clinical services for the Regional Center of Orange County, said last year that “more and more families are having to review options that are outside of our county.”

Some Californians even go outside the state: As of June 2022, there were 49 youth with “complex needs” getting services outside of California, and an additional 33 “at risk of being referred to out-of-state resources,” according to the developmental services department.

In the following year, that number grew to 57 youth out of state — and an additional 64 who might be at risk of joining them. The numbers may actually be higher: The state agency says it learns about out-of-state placements only when families inform the regional centers that coordinate developmental disability services.

Nancy Bargmann, director of the California Department of Developmental Services, said their goal is to provide “a continuum of supports” so that families “don’t need to make that really hard decision of having their child not live at home.”

California has launched more than a dozen teams focused on crisis prevention, called START teams, which it says have helped keep people in their homes. Their services include connecting different systems that assist families, such as mental health providers and disability services.

But they do not yet exist everywhere in the state. California also has mobile “Crisis Assessment Stabilization Teams” — or CAST — that are meant for people who have exhausted other kinds of help or are at risk of having to move into more restrictive settings. There were three of them as of this spring, according to the developmental services department.

Judy Mark, president of the advocacy group Disability Voices United, argued it is counterproductive to try to stabilize a child away from his or her family. If at all possible, she said, California should be ensuring constant support in the home, which she argued would also be less costly than caring for a child in a STAR facility.

But disability services providers say that getting such caregivers has continued to be a challenge, with state rates for such workers outstripped by what they can earn elsewhere. Increases in those provider rates have been slowly phased in over time, with the next bump slated for January.

In many cases, “what you’d want to see is somebody, 24 hours a day, in the home helping the parent,” said Larry Landauer, executive director of the Regional Center of Orange County. But “that’s where we have been just drastically short on staffing.”

All the gaps in the system can come to a head when young people with developmental disabilities hit puberty, especially if they face “the inability to communicate in such a complex and confusing time,” said California Commission on Disability Access member Hector Ramírez, who is autistic and lives in the San Fernando Valley.

If autistic teens and their families cannot get the support they need, Ramírez said, it “has compounding consequences that result in people just getting worse — when they shouldn’t be getting worse.”

'I don't want him to go': An autistic teen and his family face stark choices

Christine LyBurtus was aching and fearful of what might happen when her 13-year-old son returned home.

Noah had been sent to Children’s Hospital of Orange County for a psychiatric hold lasting up to 72 hours after he punched at walls, flipped over a table, ripped out a chunk of his mother’s hair and tried to break a car window.

“There’s nothing else to call it except a psychotic episode,” LyBurtus said.

The clock was ticking on that August day in 2022. The single mother wanted help to prevent such an episode from happening again, maybe with a different medication. Hospital staff were waiting for a psychiatric bed, possibly at another hospital with a dedicated unit for patients with autism or other developmental disabilities.

But as the hours ran out on the hold, it became clear that wasn’t happening. LyBurtus brought Noah home to their Fullerton apartment.

“When he came back home, it kind of broke my heart,” said his sister, Karissa, who is two years older. “He looked like, ‘What the heck did you guys put me into?’”

Christine LyBurtus makes a snack for Noah.

(Allen J. Schaben / Los Angeles Times)

The next night, Noah was back in the ER after smashing a television and attacking his mother. This time, he was transferred to a different hospital for three weeks, prescribed medications for psychosis, and then sent to a residential facility in Garden Grove.

LyBurtus said she was told it would be a stopgap measure — just for three weeks — until she could line up more help at home. But when she phoned to ask about visiting her son, LyBurtus said she was told she couldn’t see him for a month.

“He lives here now,” someone told her, she said, and the staff needed time to “break him in.”

LyBurtus felt like she was being pushed to give up her son, instead of getting the help her family needed. She insisted on bringing him home.

::

Autism is a developmental condition that can shape how people think, communicate, move and process sensory information. When Noah was 3, a doctor noted he was a “very cute little boy” who played alone, rocked back and forth, and sometimes bit himself. Noah’s eye contact was “fleeting.” He could speak about 20 words, but often cried or pulled his mother’s hand to communicate.

The physician summed up his behavior as “characteristic of a DSM-IV diagnosis of autistic disorder.”

When he was in elementary school, LyBurtus stopped working full time outside the home and enrolled in a state program that paid her as his caregiver. She relies on Medi-Cal for his medical care, and much of his schooling has been in Orange County-run programs for children with moderate to severe disabilities.

Noah does not speak but sometimes uses pictures, an app on a tablet, or some sign language to communicate. When a reporter visited their home last year, Noah bobbed his head and shoulders as he listened to music on his iPad. He flapped his hands as LyBurtus made him a peanut-butter-and-banana smoothie, and then dutifully followed her instructions to chuck the peel and put the almond milk away. It was a good day, LyBurtus said with relief.

But on other days, LyBurtus said her son could be rigid; his demands, unpredictable. “Some days he’s fixated on having three pairs of pants on … Some days he wants to take seven showers. The next day, I can’t get him to take showers.”

Christine LyBurtus greets Noah as he arrives home from school.

(Allen J. Schaben / Los Angeles Times)

When frustrated, Noah might erupt, banging his head against walls and trying to jump out the windows of their apartment. He had kicked and bitten his mother when she tried to redirect him. In the worst instances, LyBurtus had resorted to hiding in the bathroom — her “safe room” — and urged Karissa to lock herself in the bedroom.

As Noah grew taller and stronger, LyBurtus stripped bare the walls of her apartment to try to make it safe, installed shatterproof windows and removed a knob from a closet door to keep Noah from using it as a foothold to scale over the top. She made sure to flag her address for the Fullerton Police Department so it knew her son was developmentally disabled.

“I’m just so grateful that my son never got shot,” LyBurtus said.

Each of the 911 calls was the start of a Sisyphean routine. Noah “has been challenging to place in [a] mental health facility due to behavioral care needs with severe autism,” a doctor wrote when he was back at Children’s Hospital of Orange County yet again.

Noah leaps into the air inside his Fullerton home. At left is Terrence Morris, one of Noah’s caregivers.

(Mel Melcon / Los Angeles Times)

As the family tried to get through each crisis, LyBurtus was also facing a common struggle among parents of California children with disabilities: not getting the help they were supposed to receive from the state.

LyBurtus was getting assistance through a local regional center, one of the nonprofit agencies contracted by the California Department of Developmental Services. She said she’d been authorized to receive 40 hours weekly of respite care — meant to relieve families of children with disabilities for short periods — but was sometimes receiving only 12 to 16 hours.

She was also supposed to have two workers at a time, LyBurtus said, but caregivers were so scarce that she was scheduling one at a time in order to cover as many hours as she could.

In the meantime, Noah wasn’t sleeping and she was going through so much laundry detergent and quarters that her grocery budget was drained. At one point, she wanted to go to a food bank, but there would be no one to watch him.

“I could not be anymore tired and frustrated!!!!” she wrote to her regional center coordinator. “Is the only way Noah is going to get help [is] if I abandoned him and surrender him to the State!?!?”

Christine LyBurtus said she’s struggled to find the right care for Noah.

(Allen J. Schaben / Los Angeles Times)

::

Across the country, surging numbers of young people have landed in emergency rooms in the throes of a mental health crisis amid a shortage of needed care. Children in need of psychiatric care are routinely held in emergency departments for hours or even days. Even amid COVID, as people tried to avoid emergency rooms, mental health-related visits continued to rise among teens in 2021 and 2022.

Among those hit hardest by the crisis are autistic youth, who turn up in emergency rooms at higher rates than other kids — and are much more likely to do so for psychiatric issues. Many have overlapping conditions such as anxiety, and researchers have also found they face a higher risk of abuse and trauma.

“We’re a misunderstood, marginalized population of people” at higher risk of suicide, Lisa Morgan, founder of the Autism and Suicide Prevention Workgroup, said at a national meeting.

Yet the available assistance is “not designed for us.”

According to the National Autism Indicators Report, more than half of parents of autistic youth who were surveyed had trouble getting the mental health services their autistic kids needed, with 22% saying it was “very difficult” or “impossible.” A report commissioned by L.A. County found autistic youth were especially likely to languish in ERs amid few options for ongoing psychiatric treatment.

Karissa interacts with her brother, Noah, as he watches a video after school.

(Allen J. Schaben / Los Angeles Times)

In decades past, many psychiatrists were unwilling to diagnose mental health disorders in autistic people, believing “it was either part of the autism or for other reasons it was undiagnosable,” said Jessica Rast, an assistant research professor affiliated with the A.J. Drexel Autism Institute. Much more is now known about both autism and mental health treatment, but experts say the two fields aren’t consistently linked in practice.

Mental health providers may focus on an autism diagnosis for a prospective patient and say, “‘Well, that’s not in our wheelhouse. We’re treating things like depression or anxiety,’” said Brenna Maddox, assistant professor of psychiatry at the University of North Carolina School of Medicine.

Yet patients or their families “weren’t asking for autism treatment. They were asking for depression or anxiety or other mental health treatment,” Maddox said.

In the meantime, the system that serves children with developmental disabilities has faltered.

“Never have I seen that we can’t staff the needed things on so many cases,” Larry Landauer, executive director of the Regional Center of Orange County, said last year. Statewide, “there’s thousands and thousands of cases that are struggling.”

“If I’m a respite worker and I get called on to provide help to families … who am I going to select?” Landauer asked. “The [person] that watches TV and plays on his iPad and I just sit and monitor him? Or do I take someone that is significantly behaviorally challenged — that pulls my hair, that scares me all the time, that tries to run out the door? … Those are the ones getting left out.”

::

The fall and winter of 2022 were so trying that LyBurtus eventually took matters into her own hands. Noah bit his mother and smashed a bathroom window and tried to climb out before the Fullerton Fire Department arrived. Weeks later, LyBurtus had to dial 911 again after he bit his sister’s finger badly enough to draw blood.

Caregiver Terrence Morris, left, keeps a watchful eye on Noah.

(Mel Melcon / Los Angeles Times)

He ended up in a hold at Children’s Hospital of Orange County, which searched for another facility that might help him, but “all placement options declined patient placement,” according to his medical records.

Noah was again sent home with his mother, but the next day, he was back at Children’s Hospital of Orange County after slamming his head against a tile floor.

LyBurtus, frantic and bruised, made call after call and finally used her credit card to pay for an ambulance to take him to UCLA Resnick Neuropsychiatric Hospital, where he was admitted.

Week by week, psychiatrists there said Noah seemed to be making some strides as they adjusted his alphabet soup of medications. But hospital staff struggled to understand what would set him off.

Once, while playing cards, Noah suddenly started knocking the cards off the table and struck another patient in the face. Another day, he suddenly appeared frightened after using the bathroom, then charged at a computer plugged in nearby.

But there were also days when he danced to a Michael Jackson song, or played Giant Jenga outside on the deck. One day, a doctor wrote, “He made eye contact for a few seconds. I waved to him, and he looked at his hand, as though he was wondering what to do with it in return.”

Christine LyBurtus washes her son’s face. When Noah was 3, a doctor noted he was a “very cute little boy” who played alone, rocked back and forth, and sometimes bit himself.

(Mel Melcon / Los Angeles Times)

LyBurtus was straining to find more help at home, so UCLA held off on discharging him, but at the end of January 2023 Noah was sent home. With no changes in medication planned, “and the strong possibility that Noah grew tired of the inpatient setting, the ward no longer was deemed therapeutic or necessary,” a doctor wrote.

Less than a month later, he was back in the emergency room at Children’s Hospital of Orange County after biting and attacking his mother.

A psychiatrist at the pediatric hospital wrote that because he had limited ability to communicate, another round of psychiatric hospitalization would do little unless it was specialized for “individuals with neurodevelopmental needs.” When the 72-hour hold at children’s hospital ran out, LyBurtus asked for an ambulance to take Noah home, fearful of driving him herself.

In May, the month Noah turned 14, LyBurtus heard the regional center had found a place for Noah: a four-bed facility in Rio Linda, a tiny town near Sacramento that she’d never heard of. He could live there for more than a year, she was told, and then hopefully return home with the right support.

Christine LyBurtus shows photographs to Noah.

(Mel Melcon / Los Angeles Times)

But LyBurtus fretted about what she would do if something happened to him so far away. She felt, she said, like she had failed her child. Months passed as they waited for a spot there; LyBurtus said she was told they were trying to hire the needed staff.

“I don’t want him to go,” she said, “but I don’t want to continue going on the way that we’re going on.”

Then in August, LyBurtus was told the regional center had found a spot at a facility much closer to home: the state-run South STAR facility in Costa Mesa, about 20 miles from their apartment. Noah would occupy one of only 15 STAR beds across the state for developmentally disabled adolescents in “acute crisis.”

On a bright September morning, LyBurtus pulled up at an unassuming gray house with a “Home Sweet Home” sign by the door. The three teens living there were gone for the morning while an administrator and South STAR program director Kim Hamilton-Royse showed LyBurtus around the house.

Minutes into the tour, LyBurtus found herself crying. Hamilton-Royse stopped her explanation of the daily schedule. “I know this is super hard for you,” she said gently.

But LyBurtus brightened at the sight of the sensory room outfitted with crash pads and a mesmerizing, colorful cylinder of bubbling water. Hamilton-Royse pointed out a vibrating chair and added that they had a projector that would fill the room with illuminated stars.

LyBurtus took photos on her smartphone to show Noah. “You’re not going to be able to get him out of here,” she said.

As they rounded the rest of the house — bedrooms with dressers secured to the wall, a living room with paintings of sailboats, a fish tank — Hamilton-Royse asked if LyBurtus felt any better.

Christine LyBurtus reacts while boxing up items for Noah’s move.

(Mel Melcon / Los Angeles Times)

“I do,” she said. “I just hope that he can behave.”

Hamilton-Royse reassured her that South STAR had never kicked anyone out. “And we’ve had some really challenging folks,” she said.

“I promise you we’ll take very good care of him.”

As she returned to her car, LyBurtus took a deep breath. “It’s hard not to feel like I’m betraying him,” she said, her voice shaking. “But I can’t keep living like this, you know?”

1. Christine LyBurtus tours a residential care facility in Costa Mesa, about 20 minutes from her home. (Irfan Khan / Los Angeles Times) 2. At the South STAR facility, LyBurtus was told, Noah would occupy one of only 15 STAR beds across the state for developmentally disabled adolescents in “acute crisis.” (Irfan Khan / Los Angeles Times) 3. “I just hope that he can behave,” LyBurtus said of son Noah. (Irfan Khan / Los Angeles Times)

Three days later, Noah went back to the Children’s Hospital of Orange County on another psychiatric hold. He came home, then was back in the emergency department a week and a half later.

::

The October night before Noah left home, LyBurtus had brought home sushi for him, one of his favorite foods. He fell asleep around 6:30 p.m. and woke up again at 1 a.m. LyBurtus gave him his medication, and as he drifted back to sleep, his mother held him, enjoying the peace.

When he woke up in the morning, she could tell he knew something was up. His clothes had been packed. She’d already shown him photos of the Costa Mesa home and told him, “This is where you’re going. I’m still your mom. I’m still going to go and see you.”

Noah embraces his mother shortly before he was picked up and driven to a residential care facility in Costa Mesa.

(Mel Melcon / Los Angeles Times)

When the black SUV arrived, LyBurtus offered Oreos to coax him into the unfamiliar car. She followed the SUV in her car, staying far enough behind to avoid having Noah see her when he arrived. LyBurtus had been told it would ease the transition.

Back at home, she sank into the bathtub, utterly spent. “I’m going to have to just go with trusting this process as much as I can,” she said, “because I don’t have another choice right now.”

The next day, she met with the South STAR staff to tell them more about Noah. What he likes to eat. What triggers him. His favorite things to do. The Costa Mesa home called whenever staff had physically restrained Noah, but when a weekend passed without a call, she felt some relief.

LyBurtus smiled at the photos and videos sent home: Noah putting together an elaborate stacking toy, washing dishes. It felt like things were going well, LyBurtus said. The staff had scaled back the amount of psychiatric medication he was taking.

But more than a month later, when she first went to visit Noah, he excitedly took her to the front door, as if to say, “Let’s go,” she recalled. She gently told him she was just visiting.

Christine LyBurtus is comforted by caregivers Schahara Zad, left, and Terrence Morris after Noah moved into his residential care facility.

(Mel Melcon / Los Angeles Times)

He led her to the side door instead. She steered him away again. They stepped into the courtyard, and Noah immediately went to the gate to exit.

LyBurtus fell into a funk. As she worried about Noah, she was also figuring out how to make ends meet. With Noah in the Costa Mesa home, LyBurtus was no longer being paid more than $4,000 a month as his caregiver, her sole source of income for years. She tried a number of jobs but ultimately found the work that suited her: caregiving for an elderly woman and children with disabilities.

Her second and third visits with Noah were easier. She snapped photos: mother and son nestled together on the couch, Noah touching her forehead.

The STAR program runs up to 13 months. As time passed, the regional center started talking to her about where Noah would go next. LyBurtus was startled.

Wasn’t the plan for him to come home? she asked.

Christine LyBurtus, left, is briefed by Kim Hamilton-Royse while touring a residential care facility for her son.

(Irfan Khan/Los Angeles Times)

That was still on the table, LyBurtus said she was told. But if he wasn’t ready, they didn’t want to wait until the last minute to find somewhere else for Noah, who turned 15 in May.

LyBurtus wanted to block out the idea of him going to another facility.

“I never want to live the way we were living again,” she said.

“But is that worse than him being hours away? I don’t know.”

How much more water and power does AI computing demand? Tech firms don't want you to know

Every time someone uses ChatGPT to write an essay, create an image or advise them on planning their day, the environment pays a price.

A query on the artificial intelligence chatbot is estimated to require at least 10 times more electricity than a standard search on Google.

If all Google searches similarly used generative AI, they might consume as much electricity as a country the size of Ireland, calculates Alex de Vries, the founder of Digiconomist, a website that aims to expose the unintended consequences of digital trends.

Yet someone using ChatGPT or another artificial intelligence application has no way of knowing how much power their questions will consume as they are processed in the tech companies’ enormous data centers.

De Vries said the skyrocketing energy demand of AI technologies will no doubt require the world to burn more climate-warming oil, gas and coal.

“Even if we manage to feed AI with renewables, we have to realize those are limited in supply, so we’ll be using more fossil fuels elsewhere,” he said. “The ultimate outcome of this is more carbon emissions.”

AI is also thirsty for water. ChatGPT gulps roughly a 16-ounce bottle of water for as few as 10 queries, according to calculations by Shaolei Ren, associate professor of electrical and computer engineering at UC Riverside, and his colleagues.

The increasing consumption of energy and water by AI has raised concerns in California and around the globe. Experts have detailed how it could stall the transition to green energy, while increasing consumers’ electric bills and the risk of blackouts.

To try to prevent those consequences, De Vries, Ren and other experts are calling on the tech companies to disclose to users how much power and water their queries will consume.

“I think the first step is to have more transparency,” Ren said. The AI developers, he said, “tend to be secretive about their energy usage and their water consumption.”

Ren said users should be told on the websites where they are asked to type in their queries how much energy and water their requests will require. He said this would be similar to how Google now tells people searching for airline flights how much carbon emissions the trip will generate.

“If we had that knowledge,” he said, “then we could make more informed decisions.”

Data centers — enormous warehouses of computer servers that support the internet — have long been big power users. But the specialized computer chips required for generative AI use far more electricity because they are designed to read through vast amounts of data.

The new chips also generate so much heat that even more power and water are needed to keep them cool.

Even though the benefits and risks of AI aren’t yet fully known, companies are increasingly incorporating the technology into existing products.

In May, for example, Google announced that it was adding what it called “AI Overviews” to its search engine. Now, whenever someone types a question into Google search, the company’s AI generates an answer from the search results and displays it at the top of the page.

Not all of Google’s AI-generated answers have been correct, including when it told a user to add Elmer’s glue to pizza sauce to keep cheese from sliding off the crust.

But searchers who don’t want those AI-generated answers or want to avoid the extra use of power and water can’t turn off the feature.

“Right now, the user doesn’t have the option to opt out,” Ren said.

Google did not respond to questions from The Times.

OpenAI, the company that created ChatGPT, responded with a prepared statement, but declined to answer specific questions, such as how much power and water the chatbot used.

“AI can be energy-intensive and that’s why we are constantly working to improve efficiencies,” OpenAI said. “We carefully consider the best use of our computing power and support our partners’ efforts to achieve their sustainability goals. We also believe that AI can play a key role in accelerating scientific progress in the discovery of climate solutions.”

Three years ago, Google vowed to reach “net-zero” — where its emissions of greenhouse gases would be equal to what it removed — by 2030.

The company isn’t making progress toward that goal. In 2023, its total carbon emissions increased by 13%, the company disclosed in a July report. Since 2019, its emissions are up 48%.

“As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute,” the company said in the report.

Google added that it expects its emissions to continue to rise before dropping sometime in the future. It did not say when that may be.

The company also disclosed that its data centers consumed 6.1 billion gallons of water in 2023 — 17% more than the year before.

“We’re committed to developing AI responsibly by working to address its environmental footprint,” the report said.

De Vries said he was disappointed Google had not disclosed in the report how much AI was adding to its power needs. The company said in the report that such a “distinction between AI and other workloads” would “not be meaningful.”

By not separately reporting the power use of AI, he said, it is impossible to calculate just how much more electricity Google search was now using with the addition of AI Overviews.

“While capable of delivering the required info,” he said, “they are now withholding it.”

Science

When A.I.’s Output Is a Threat to A.I. Itself

The internet is becoming awash in words and images generated by artificial intelligence.

Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.

A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.

In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.

Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:

This is part of a data set of 60,000 handwritten digits.

When we trained an A.I. to mimic those digits, its output looked like this.

This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues?

After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.

After 30 generations, they converge into a single shape.

While this is a simplified example, it illustrates a problem on the horizon.

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”

The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).

How an A.I. that draws digits “collapses” after being trained on its own output

If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But it would still occur, researchers say, unless the synthetic data was complemented with a lot of new, real data.
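That dynamic can be sketched numerically. In the toy simulation below, the “model” is nothing more than a fitted mean and spread, retrained each generation on a mix of fresh real data and its own synthetic output. This is an illustration under simplified assumptions, not the researchers’ actual experiment.

```python
# Toy sketch of model collapse and its mitigation: the "model" is just a
# fitted mean and standard deviation of one-dimensional data. Each
# generation retrains on a mix of fresh real samples and the previous
# generation's synthetic output. (Not the Nature paper's actual setup.)
import random
import statistics

random.seed(0)

def run(real_fraction, generations=500, n=10):
    mu, sigma = 0.0, 1.0  # the model's current fit
    for _ in range(generations):
        n_real = int(n * real_fraction)
        real = [random.gauss(0.0, 1.0) for _ in range(n_real)]       # fresh real data
        synthetic = [random.gauss(mu, sigma) for _ in range(n - n_real)]  # own output
        data = real + synthetic
        mu, sigma = statistics.fmean(data), statistics.pstdev(data)  # refit
    return sigma

collapsed = run(0.0)  # trained only on its own output
anchored = run(0.5)   # half of each training set is fresh real data

print(f"all synthetic data: spread = {collapsed:.4f}")  # collapses toward zero
print(f"half real data:     spread = {anchored:.4f}")   # stays healthy
```

With no real data, the fitted spread shrinks generation after generation, the one-dimensional analogue of the digits converging to a single shape; mixing in fresh real data keeps the spread anchored.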

Degenerative A.I.

In one example, the researchers trained a large language model on its own sentences over and over again, asking it to complete the same prompt after each round.

When they asked the A.I. to complete a sentence that started with “To cook a turkey for Thanksgiving, you…,” at first, it responded like this:

Even at the outset, the A.I. “hallucinates.” But when the researchers further trained it on its own sentences, it got a lot worse…

An example of text generated by an A.I. model.

After two generations, it started simply printing long lists.

An example of text generated by an A.I. model after being trained on its own sentences for 2 generations.

And after four generations, it began to repeat phrases incoherently.

An example of text generated by an A.I. model after being trained on its own sentences for 4 generations.

“The model becomes poisoned with its own projection of reality,” the researchers wrote of this phenomenon.

This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of A.I. that generate images are repeatedly trained on their own output — a problem that could already be occurring as A.I.-generated images flood the web.

They found that glitches and image artifacts started to build up in the A.I.’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.

When A.I. image models are trained on their own output, they can produce distorted images, mangled fingers or strange patterns.

A.I.-generated images by Sina Alemohammad and others.

“You’re kind of drifting into parts of the space that are like a no-fly zone,” said Richard Baraniuk, a professor who led the research on A.I. image models.

The researchers found that the only way to stave off this problem was to ensure that the A.I. was also trained on a sufficient supply of new, real data.

While selfies are certainly not in short supply on the internet, there could be categories of images where A.I. output outnumbers genuine data, they said.

For example, A.I.-generated images in the style of van Gogh could outnumber actual photographs of van Gogh paintings in A.I.’s training data, and this may lead to errors and distortions down the road. (Early signs of this problem will be hard to detect because the leading A.I. models are closed to outside scrutiny, the researchers said.)

Why collapse happens

All of these problems arise because A.I.-generated data is often a poor substitute for the real thing.

This is sometimes easy to see, like when chatbots state absurd facts or when A.I.-generated hands have too many fingers.

But the differences that lead to model collapse aren’t necessarily obvious — and they can be difficult to detect.

When generative A.I. is “trained” on vast amounts of data, what’s really happening under the hood is that it is assembling a statistical distribution — a set of probabilities that predicts the next word in a sentence, or the pixels in a picture.
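In miniature, that set of probabilities can be illustrated with a bigram model, a toy sketch rather than anything resembling a real large language model: count which word follows which, then turn the counts into probabilities for the next word.

```python
# A miniature version of the "statistical distribution" described above:
# a bigram model that assigns a probability to each possible next word.
# (Real language models are vastly larger, but the idea is the same.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally which word follows which

def next_word_probs(word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # "cat" with probability 2/3, "mat" with 1/3
```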

For example, when we trained an A.I. to imitate handwritten digits, its output could be arranged into a statistical distribution that looks like this:

Distribution of A.I.-generated data, with examples of initial A.I. output. (The distribution shown here is simplified for clarity.)

The peak of this bell-shaped curve represents the most probable A.I. output — in this case, the most typical A.I.-generated digits. The tail ends describe output that is less common.

Notice that when the model was trained on human data, it had a healthy spread of possible outputs, which you can see in the width of the curve above.

But after it was trained on its own output, this is what happened to the curve:

Distribution of A.I.-generated data when trained on its own output

It gets taller and narrower. As a result, the model becomes more and more likely to produce a smaller range of output, and the output can drift away from the original data.

Meanwhile, the tail ends of the curve — which contain the rare, unusual or surprising outcomes — fade away.

This is a telltale sign of model collapse: Rare data becomes even rarer.

If this process went unchecked, the curve would eventually become a spike:

Distribution of A.I.-generated data when trained on its own output

This was when all of the digits became identical, and the model completely collapsed.

Why it matters

This doesn’t mean generative A.I. will grind to a halt anytime soon.

The companies that make these tools are aware of these problems, and they will notice if their A.I. systems start to deteriorate in quality.

But it may slow things down. As existing sources of data dry up or become contaminated with A.I. “slop,” researchers say, it becomes harder for newcomers to compete.

A.I.-generated words and images are already beginning to flood social media and the wider web. They’re even hiding in some of the data sets used to train A.I., the Rice researchers found.

“The web is becoming increasingly a dangerous place to look for your data,” said Sina Alemohammad, a graduate student at Rice who studied how A.I. contamination affects image models.

Big players will be affected, too. Computer scientists at N.Y.U. found that when there is a lot of A.I.-generated content in the training data, it takes more computing power to train A.I. — which translates into more energy and more money.

“Models won’t scale anymore as they should be scaling,” said Julia Kempe, the N.Y.U. professor who led this work.

The leading A.I. models already cost tens to hundreds of millions of dollars to train, and they consume staggering amounts of energy, so this can be a sizable problem.

‘A hidden danger’

Finally, there’s another threat posed by even the early stages of collapse: an erosion of diversity.

And it’s an outcome that could become more likely as companies try to avoid the glitches and “hallucinations” that often occur with A.I. data.

This is easiest to see when the data matches a form of diversity that we can visually recognize — people’s faces:

This set of A.I. faces was created by the same Rice researchers who produced the distorted images above. This time, they tweaked the model to avoid visual glitches.

A grid of A.I.-generated faces showing variations in their poses, expressions, ages and races.

This is the output after they trained a new A.I. on the previous set of faces. At first glance, it may seem like the model changes worked: The glitches are gone.

After one generation of training on A.I. output, the A.I.-generated faces appear more similar.

After two generations …

After two generations of training on A.I. output, the A.I.-generated faces are less diverse than the original image.

After three generations …

After three generations of training on A.I. output, the A.I.-generated faces grow more similar.

After four generations, the faces all appeared to converge.

After four generations of training on A.I. output, the A.I.-generated faces appear almost identical.

This drop in diversity is “a hidden danger,” Mr. Alemohammad said. “You might just ignore it and then you don’t understand it until it’s too late.”

Just as with the digits, the changes are clearest when most of the data is A.I.-generated. With a more realistic mix of real and synthetic data, the decline would be more gradual.

But the problem is relevant to the real world, the researchers said, and will inevitably occur unless A.I. companies go out of their way to avoid their own output.

Related research shows that when A.I. language models are trained on their own words, their vocabulary shrinks and their sentences become less varied in their grammatical structure — a loss of “linguistic diversity.”
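One of the simplest proxies researchers use for this kind of diversity is the type-token ratio: distinct words divided by total words. The sketch below is only an illustration; the cited studies use richer measures.

```python
# A simple proxy for the "linguistic diversity" mentioned above: the
# type-token ratio, i.e. distinct words divided by total words. Repetitive
# text scores lower. (The cited studies use more sophisticated measures.)
def type_token_ratio(text):
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

varied = "the quick brown fox jumps over the lazy dog"
repetitive = "the dog saw the dog and the dog ran"

print(type_token_ratio(varied))      # 8 distinct words / 9 total ≈ 0.89
print(type_token_ratio(repetitive))  # 5 distinct words / 9 total ≈ 0.56
```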

And studies have found that this process can amplify biases in the data and is more likely to erase data pertaining to minorities.

Ways out

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.

One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.

OpenAI and Google have made deals with some publishers or websites to use their data to improve A.I. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement. OpenAI and Microsoft say their use of the content is considered fair use under copyright law.)

Better ways to detect A.I. output would also help mitigate these problems.

Google and OpenAI are working on A.I. “watermarking” tools, which introduce hidden patterns that can be used to identify A.I.-generated images and text.

But watermarking text is challenging, researchers say, because these watermarks can’t always be reliably detected and can easily be subverted (they may not survive being translated into another language, for example).
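The statistical idea behind text watermarks can be sketched with a toy detector. The hashing scheme below is loosely modeled on “green list” proposals from the research literature and is purely illustrative; it is not Google’s or OpenAI’s actual method.

```python
# Toy illustration of the "hidden patterns" watermarking idea described
# above, loosely modeled on green-list schemes from the research
# literature (not Google's or OpenAI's actual method). Each word is
# hashed together with the word before it; a watermarking generator
# would prefer words whose hash lands in the "green" half, and a
# detector checks whether that half is over-represented.
import hashlib

def is_green(prev_word, word):
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # splits word pairs roughly in half

def green_fraction(text):
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, w) for p, w in pairs) / len(pairs)

# Ordinary human text should hover near 0.5; output from a watermarking
# model would score well above it, which a detector can test statistically.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

The fragility the researchers describe is visible even here: rewording or translating the text changes the word pairs, and the hash-based signal disappears.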

A.I. slop is not the only reason that companies may need to be wary of synthetic data. Another problem is that there are only so many words on the internet.

Some experts estimate that the largest A.I. models have been trained on a few percent of the available pool of text on the internet. They project that these models may run out of public data to sustain their current pace of growth within a decade.

“These models are so enormous that the entire internet of images or conversations is somehow close to being not enough,” Professor Baraniuk said.

To meet their growing data needs, some companies are considering using today’s A.I. models to generate data to train tomorrow’s models. But researchers say this can lead to unintended consequences (such as the drop in quality or diversity that we saw above).

There are certain contexts where synthetic data can help A.I.s learn — for example, when output from a larger A.I. model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go.

And new research suggests that when humans curate synthetic data (for example, by ranking A.I. answers and choosing the best one), it can alleviate some of the problems of collapse.

Companies are already spending a lot on curating data, Professor Kempe said, and she believes this will become even more important as they learn about the problems of synthetic data.

But for now, there’s no replacement for the real thing.

About the data

To produce the images of A.I.-generated digits, we followed a procedure outlined by researchers. We first trained a type of neural network known as a variational autoencoder using a standard data set of 60,000 handwritten digits.

We then trained a new neural network using only the A.I.-generated digits produced by the previous neural network, and repeated this process in a loop 30 times.

To create the statistical distributions of A.I. output, we used each generation’s neural network to create 10,000 drawings of digits. We then used the first neural network (the one that was trained on the original handwritten digits) to encode these drawings as a set of numbers, known as a “latent space” encoding. This allowed us to quantitatively compare the output of different generations of neural networks. For simplicity, we used the average value of this latent space encoding to generate the statistical distributions shown in the article.
