Science
When A.I.’s Output Is a Threat to A.I. Itself
The internet is becoming awash in words and images generated by artificial intelligence.
Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.
A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.
In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.
All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.
In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:
This is part of a data set of 60,000 handwritten digits.
When we trained an A.I. to mimic those digits, its output looked like this.
This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues? After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.
After 30 generations, they converge into a single shape.
While this is a simplified example, it illustrates a problem on the horizon.
Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.
Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.
In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”
The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).
How an A.I. that draws digits “collapses” after being trained on its own output
If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But it would still occur, researchers say, unless the synthetic data was complemented with a lot of new, real data.
Degenerative A.I.
In one example, the researchers trained a large language model on its own sentences over and over again, asking it to complete the same prompt after each round.
When they asked the A.I. to complete a sentence that started with “To cook a turkey for Thanksgiving, you…,” at first, it responded like this:
Even at the outset, the A.I. “hallucinates.” But when the researchers further trained it on its own sentences, it got a lot worse…
After two generations, it started simply printing long lists.
And after four generations, it began to repeat phrases incoherently.
“The model becomes poisoned with its own projection of reality,” the researchers wrote of this phenomenon.
This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of A.I. that generate images are repeatedly trained on their own output — a problem that could already be occurring as A.I.-generated images flood the web.
They found that glitches and image artifacts started to build up in the A.I.’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.
When A.I. image models are trained on their own output, they can produce distorted images, mangled fingers or strange patterns.
A.I.-generated images by Sina Alemohammad and others.
“You’re kind of drifting into parts of the space that are like a no-fly zone,” said Richard Baraniuk, a professor who led the research on A.I. image models.
The researchers found that the only way to stave off this problem was to ensure that the A.I. was also trained on a sufficient supply of new, real data.
While selfies are certainly not in short supply on the internet, there could be categories of images where A.I. output outnumbers genuine data, they said.
For example, A.I.-generated images in the style of van Gogh could outnumber actual photographs of van Gogh paintings in A.I.’s training data, and this may lead to errors and distortions down the road. (Early signs of this problem will be hard to detect because the leading A.I. models are closed to outside scrutiny, the researchers said.)
Why collapse happens
All of these problems arise because A.I.-generated data is often a poor substitute for the real thing.
This is sometimes easy to see, like when chatbots state absurd facts or when A.I.-generated hands have too many fingers.
But the differences that lead to model collapse aren’t necessarily obvious — and they can be difficult to detect.
When generative A.I. is “trained” on vast amounts of data, what’s really happening under the hood is that it is assembling a statistical distribution — a set of probabilities that predicts the next word in a sentence, or the pixels in a picture.
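In miniature, such a distribution can be as simple as a table of next-word probabilities. The sketch below is a toy illustration only — the three-sentence corpus is invented for this example and bears no resemblance to real training data — but it shows the basic idea of counting what follows what and turning the counts into probabilities:

```python
from collections import Counter, defaultdict

# A tiny, invented "training corpus" (an illustrative assumption, not real training data).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ran to the mat",
]

# Count which word follows each word, then turn the counts into probabilities.
counts: dict[str, Counter] = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def next_word_distribution(prev: str) -> dict[str, float]:
    total = sum(counts[prev].values())
    return {word: round(n / total, 2) for word, n in counts[prev].items()}

print(next_word_distribution("the"))  # {'cat': 0.33, 'mat': 0.33, 'dog': 0.17, 'rug': 0.17}
print(next_word_distribution("sat"))  # {'on': 1.0}
```

A large language model does the same kind of thing at vastly greater scale, conditioning on long stretches of preceding text rather than on a single word.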
For example, when we trained an A.I. to imitate handwritten digits, its output could be arranged into a statistical distribution that looks like this:
Distribution of A.I.-generated data, with examples of initial A.I. output. (The distribution shown here is simplified for clarity.)
The peak of this bell-shaped curve represents the most probable A.I. output — in this case, the most typical A.I.-generated digits. The tail ends describe output that is less common.
Notice that when the model was trained on human data, it had a healthy spread of possible outputs, which you can see in the width of the curve above.
But after it was trained on its own output, this is what happened to the curve:
Distribution of A.I.-generated data when trained on its own output
It gets taller and narrower. As a result, the model becomes more and more likely to produce a smaller range of output, and the output can drift away from the original data.
Meanwhile, the tail ends of the curve — which contain the rare, unusual or surprising outcomes — fade away.
This is a telltale sign of model collapse: Rare data becomes even rarer.
If this process went unchecked, the curve would eventually become a spike:
Distribution of A.I.-generated data when trained on its own output
This was when all of the digits became identical, and the model completely collapsed.
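The narrowing can be reproduced with nothing more than a bell curve and a random-number generator. The sketch below is a minimal stand-in for the digit experiment — the tiny sample size of 20 per generation is an arbitrary choice that makes the effect show up quickly — and it repeatedly fits a normal distribution to the previous generation's samples, then draws new samples only from that fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a small sample drawn from a bell curve with mean 0 and spread 1.
samples = rng.normal(loc=0.0, scale=1.0, size=20)

# Each generation, fit a new bell curve to the previous generation's output,
# throw the old data away, and keep only samples drawn from the fitted curve.
for generation in range(1, 301):
    mu, sigma = samples.mean(), samples.std()
    samples = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mean = {mu:+.3f}, spread = {sigma:.4f}")
```

With so few samples, each round's estimate of the spread is noisy and, on average, slightly too small, and those errors compound until the curve collapses toward a spike. Mixing a fresh batch of real data into every generation slows the shrinkage — the point researchers make about complementing synthetic data with new, real data.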
Why it matters
This doesn’t mean generative A.I. will grind to a halt anytime soon.
The companies that make these tools are aware of these problems, and they will notice if their A.I. systems start to deteriorate in quality.
But it may slow things down. As existing sources of data dry up or become contaminated with A.I. “slop,” researchers say, it will become harder for newcomers to compete.
A.I.-generated words and images are already beginning to flood social media and the wider web. They’re even hiding in some of the data sets used to train A.I., the Rice researchers found.
“The web is becoming increasingly a dangerous place to look for your data,” said Sina Alemohammad, a graduate student at Rice who studied how A.I. contamination affects image models.
Big players will be affected, too. Computer scientists at N.Y.U. found that when there is a lot of A.I.-generated content in the training data, it takes more computing power to train A.I. — which translates into more energy and more money.
“Models won’t scale anymore as they should be scaling,” said Julia Kempe, the N.Y.U. professor who led this work.
The leading A.I. models already cost tens to hundreds of millions of dollars to train, and they consume staggering amounts of energy, so this can be a sizable problem.
‘A hidden danger’
Finally, there’s another threat posed by even the early stages of collapse: an erosion of diversity.
And it’s an outcome that could become more likely as companies try to avoid the glitches and “hallucinations” that often occur with A.I. data.
This is easiest to see when the data matches a form of diversity that we can visually recognize — people’s faces:
This set of A.I. faces was created by the same Rice researchers who produced the distorted faces above. This time, they tweaked the model to avoid visual glitches, producing a grid of A.I.-generated faces that vary in pose, expression, age and race.
This is the output after they trained a new A.I. on the previous set of faces. At first glance, it may seem like the model changes worked: The glitches are gone. But after one generation of training on A.I. output, the faces already appear more similar.
After two generations, the faces are noticeably less diverse than the originals.
After three generations, they grow more similar still.
And after four generations, the faces converge until they appear almost identical.
This drop in diversity is “a hidden danger,” Mr. Alemohammad said. “You might just ignore it and then you don’t understand it until it’s too late.”
Just as with the digits, the changes are clearest when most of the data is A.I.-generated. With a more realistic mix of real and synthetic data, the decline would be more gradual.
But the problem is relevant to the real world, the researchers said, and will inevitably occur unless A.I. companies go out of their way to avoid their own output.
Related research shows that when A.I. language models are trained on their own words, their vocabulary shrinks and their sentences become less varied in their grammatical structure — a loss of “linguistic diversity.”
And studies have found that this process can amplify biases in the data and is more likely to erase data pertaining to minorities.
Ways out
Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.
One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.
OpenAI and Google have made deals with some publishers or websites to use their data to improve A.I. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement. OpenAI and Microsoft say their use of the content is considered fair use under copyright law.)
Better ways to detect A.I. output would also help mitigate these problems.
Google and OpenAI are working on A.I. “watermarking” tools, which introduce hidden patterns that can be used to identify A.I.-generated images and text.
But watermarking text is challenging, researchers say, because these watermarks can’t always be reliably detected and can easily be subverted (they may not survive being translated into another language, for example).
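To give a sense of how a text watermark can work, here is a toy sketch loosely modeled on one published “green list” approach from academic research. It is not Google's or OpenAI's scheme: the vocabulary, the stand-in generator and the bias strength are all invented for illustration. The watermark favors a secret, hash-derived subset of words at each step, and the detector checks how often the text landed in that subset:

```python
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow",
         "big", "small", "and", "then", "jumped", "over", "under", "roof", "floor", "again"]

def green_list(prev_word: str, fraction: float = 0.5) -> set[str]:
    """Deterministically split the vocabulary based on a hash of the previous word.
    Words in the 'green' half are favored during watermarked generation."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(n_words: int, watermark: bool, green_bias: float = 0.9) -> list[str]:
    """Stand-in 'language model': picks words at random, but when watermarking,
    picks from the green list with probability green_bias."""
    rng = random.Random(0 if watermark else 1)
    words = ["the"]
    for _ in range(n_words):
        greens = green_list(words[-1])
        if watermark and rng.random() < green_bias:
            words.append(rng.choice(sorted(greens)))
        else:
            words.append(rng.choice(VOCAB))
    return words

def green_fraction(words: list[str]) -> float:
    """Detector: recompute each green list and count how often the text used it.
    Unwatermarked text lands near 0.5; watermarked text lands much higher."""
    hits = sum(1 for prev, word in zip(words, words[1:]) if word in green_list(prev))
    return hits / (len(words) - 1)

print("watermarked:  ", round(green_fraction(generate(200, watermark=True)), 2))
print("unwatermarked:", round(green_fraction(generate(200, watermark=False)), 2))
```

Real systems apply the bias to the model's own probabilities rather than to random choices, and detectors use a statistical test rather than a raw fraction. But the weakness the researchers describe is visible even here: paraphrasing or translating the text scrambles the word choices and washes the signal out.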
A.I. slop is not the only reason that companies may need to be wary of synthetic data. Another problem is that there are only so many words on the internet.
Some experts estimate that the largest A.I. models have been trained on a few percent of the available pool of text on the internet. They project that these models may run out of public data to sustain their current pace of growth within a decade.
“These models are so enormous that the entire internet of images or conversations is somehow close to being not enough,” Professor Baraniuk said.
To meet their growing data needs, some companies are considering using today’s A.I. models to generate data to train tomorrow’s models. But researchers say this can lead to unintended consequences (such as the drop in quality or diversity that we saw above).
There are certain contexts where synthetic data can help A.I.s learn — for example, when output from a larger A.I. model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go.
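The “verifiable answer” case is the easiest to sketch. In the toy example below, a stand-in model (invented for this illustration, and wrong about 30 percent of the time by construction) proposes answers to arithmetic questions, and only the examples whose answers check out are kept as synthetic training data:

```python
import random

rng = random.Random(42)

def fake_model_answer(a: int, b: int) -> int:
    """Stand-in for an A.I. model proposing an answer; wrong about 30% of the time."""
    correct = a + b
    return correct if rng.random() > 0.3 else correct + rng.choice([-2, -1, 1, 2])

# Generate synthetic training examples, keeping only those whose answer can be verified.
verified, rejected = [], []
for _ in range(1_000):
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    proposed = fake_model_answer(a, b)
    example = (f"What is {a} + {b}?", proposed)
    if proposed == a + b:  # the verification step: check the answer independently
        verified.append(example)
    else:
        rejected.append(example)

print(f"kept {len(verified)} verified examples, discarded {len(rejected)} wrong ones")
```

That filter-by-verification idea is what makes math problems and board games — where a correct solution or a win can be checked automatically — friendlier territory for synthetic data than open-ended prose.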
And new research suggests that when humans curate synthetic data (for example, by ranking A.I. answers and choosing the best one), it can alleviate some of the problems of collapse.
Companies are already spending a lot on curating data, Professor Kempe said, and she believes this will become even more important as they learn about the problems of synthetic data.
But for now, there’s no replacement for the real thing.
About the data
To produce the images of A.I.-generated digits, we followed a procedure outlined by researchers. We first trained a type of a neural network known as a variational autoencoder using a standard data set of 60,000 handwritten digits. We then trained a new neural network using only the A.I.-generated digits produced by the previous neural network, and repeated this process in a loop 30 times.
To create the statistical distributions of A.I. output, we used each generation’s neural network to create 10,000 drawings of digits. We then used the first neural network (the one that was trained on the original handwritten digits) to encode these drawings as a set of numbers, known as a “latent space” encoding. This allowed us to quantitatively compare the output of different generations of neural networks. For simplicity, we used the average value of this latent space encoding to generate the statistical distributions shown in the article.
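For readers who want to tinker, here is a heavily simplified sketch of that loop. It is not the code used for this article: it swaps the variational autoencoder for a Gaussian mixture model, uses scikit-learn's small built-in digits set in place of the 60,000-digit set, and uses a fixed PCA projection in place of the first network's latent-space encoder. The sample size and number of generations are arbitrary choices, and at this toy scale the erosion is noisier and slower than in the experiment described above:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

real_images = load_digits().data                 # 1,797 small images of handwritten digits
encoder = PCA(n_components=2).fit(real_images)   # fixed "encoder" shared across generations

data = real_images
for generation in range(1, 31):
    # Fit a generative model to the current data, then sample a new data set from it.
    model = GaussianMixture(n_components=10, covariance_type="diag", random_state=generation)
    model.fit(data)
    data, _ = model.sample(300)   # the next generation trains only on this synthetic output

    # Project each generation's output through the same fixed encoder and measure how
    # spread out it is, loosely mirroring the latent-space comparison described above.
    spread = encoder.transform(data).std(axis=0).mean()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread of synthetic digits = {spread:.1f}")
```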
Science
Southern California mountain lions recommended for threatened status
The California Department of Fish and Wildlife has recommended granting threatened species status to roughly 1,400 mountain lions roaming the Central Coast and Southern California, pointing to grave threats posed by freeways, rat poison and fierce wildfires.
The determination, released Wednesday, is not the final say but signals a possibility that several clans of the iconic cougars will be listed under the California Endangered Species Act.
It’s a move that supporters say would give the vulnerable animals a chance at recovery, but detractors have argued would make it harder to get rid of lions that pose a safety risk to people and livestock.
The recommendation was “overdue,” Charlton Bonham, director of the state wildlife department, said during a California Fish and Game Commission meeting.
It arrives about six years after the Center for Biological Diversity and Mountain Lion Foundation petitioned the commission to consider listing a half-dozen isolated lion populations that have suffered from being hit by cars, poisoned by rodenticides and trapped by development.
The following year, in 2020, the commission found the request might be warranted, giving the lions temporary endangered species protections as “candidates” for listing. It also prompted the state wildlife department to put together a report to inform the commission’s final decision.
The next step is for state wildlife commissioners to vote on the protections, possibly in February.
Brendan Cummings, conservation director for Center for Biological Diversity, hailed the moment as “a good day, not just for mountain lions, but for Californians.”
If the commissioners adopt the recommendation, as he believes they will, then the “final listing of the species removes any uncertainty about the state’s commitment to conserving and recovering these ecologically important, charismatic and well-loved species that are so much a part of California.”
The report recommends listing lions “in an area largely coinciding” with what the petitioners requested, which includes the Santa Ana, San Gabriel, San Bernardino, Santa Monica, Santa Cruz and Tehachapi mountains.
It trims off portions along the northern and eastern borders of what was proposed, including agricultural lands in the Bay Area and a southeastern portion of desert — areas where state experts had no records of lions, according to Cummings.
Officials in the report note that most of the lion groups proposed for listing are contending with a lack of gene flow because urban barriers keep them from reaching one another.
In Southern California, lions have shown deformities from inbreeding, including kinked tails and malformed sperm. There’s an almost 1 in 4 chance, according to research, that mountain lions could become extinct in the Santa Monica and Santa Ana mountains within 50 years.
The late P-22, a celebrity mountain lion that inhabited Griffith Park, personified the tribulations facing his kind. Rat poison and car collisions battered him from the inside out. He was captured and euthanized in late 2022, deemed too sick to return to the wild because of injuries and infection.
For some species, protections come in the form of stopping chainsaws or bulldozers. But imperiled lions, Cummings said, need their habitats stitched together in the form of wildlife crossings — such as the gargantuan one being built over the 101 Freeway in Agoura Hills. He added that developments that could restrict their movement should get more scrutiny under the proposed protections.
Critics of the effort to list lion populations have said that it will stymie residential and commercial projects.
California is home to roughly 4,170 mountain lions, according to the recent report, but not all are equal in their struggle.
Many lion populations, particularly in northwest coastal forests, are hardy and healthy.
Protections are not being sought for those cats. Some, in fact, would like to see their numbers reduced amid some high-profile conflicts.
Bonham, the director of the state Department of Fish and Wildlife, spoke to concerns about public safety at the recent meeting, alluding to the tragic death of a young man who was mauled by a cougar last year in Northern California.
“These are really delicate issues and the conversation I know in the coming years is going to have to grapple with all that,” said Bonham, who will be stepping down this month after nearly 15 years in his role.
California’s lions already enjoy certain protections. In 1990, voters approved a measure that designated them a “specially protected species” and banned hunting them for sport.
Science
California’s last nuclear plant clears major hurdle to power on
California environmental regulators on Thursday struck a landmark deal with Pacific Gas & Electric to extend the life of the state’s last remaining nuclear power plant in exchange for thousands of acres of new land conservation in San Luis Obispo County.
PG&E’s agreement with the California Coastal Commission clears a key hurdle for the Diablo Canyon nuclear plant to remain online until at least 2030. The plant was slated to close this year, largely due to concerns over seismic safety, but state officials pushed to delay it — saying the plant remains essential for the reliable operation of California’s electrical grid. Diablo Canyon provides nearly 9% of the electricity generated in the state, making it California’s single largest source of power.
The Coastal Commission voted 9-3 to approve the plan, settling the fate of some 12,000 acres that surround the power plant as a means of compensation for environmental harm caused by its continued operation.
Nuclear power does not emit greenhouse gases. But Diablo Canyon uses an estimated 2.5 billion gallons of ocean water each day to absorb heat in a process known as “once-through cooling,” which kills an estimated two billion or more marine organisms each year.
Some stakeholders in the region celebrated the conservation deal, while others were disappointed by the decision to trade land for marine impacts — including a Native tribe that had hoped the land would be returned to them. Diablo Canyon sits along one of the most rugged and ecologically rich stretches of the California coast.
Under the agreement, PG&E will immediately transfer a 4,500-acre parcel on the north side of the property known as the “North Ranch” into a conservation easement and pursue transfer of its ownership to a public agency such as the California Department of Parks and Recreation, a nonprofit land conservation organization or tribe. A purchase by State Parks would result in a more than 50% expansion of the existing Montaña de Oro State Park.
PG&E will also offer a 2,200-acre parcel on the southern part of the property known as “Wild Cherry Canyon” for purchase by a government agency, nonprofit land conservation organization or tribe. In addition, the utility will provide $10 million to plan and manage roughly 25 miles of new public access trails across the entire property.
“It’s going to be something that changes lives on the Central Coast in perpetuity,” Commissioner Christopher Lopez said at the meeting. “This matters to generations that have yet to exist on this planet … this is going to be a place that so many people mark in their minds as a place that transforms their lives as they visit and recreate and love it in a way most of us can’t even imagine today.”
Critically, the plan could see Diablo Canyon remain operational much longer than the five years dictated by Thursday’s agreement. While the state Legislature only authorized the plant to operate through 2030, PG&E’s federal license renewal would cover 20 years of operations, potentially keeping it online until 2045.
Should that happen, the utility would need to make additional land concessions, including expanding an existing conservation area on the southern part of the property known as the “South Ranch” to 2,500 acres. The plan also includes rights of first refusal for a government agency or a land conservation group to purchase the entirety of the South Ranch, 5,000 acres, along with Wild Cherry Canyon — after 2030.
Pelicans along the concrete breakwater at Pacific Gas and Electric’s Diablo Canyon Power Plant
(Brian van der Brug/Los Angeles Times)
Many stakeholders were frustrated by the carve-out for the South Ranch, but still saw the agreement as an overall victory for Californians.
“It is a once in a lifetime opportunity,” Sen. John Laird (D-Santa Cruz) said in a phone call ahead of Thursday’s vote. “I have not been out there where it has not been breathtakingly beautiful, where it is not this incredible, unique location, where you’re not seeing, for much of it, a human structure anywhere. It is just one of those last unique opportunities to protect very special land near the California coast.”
Others, however, described the deal as disappointing and inadequate.
That includes many of the region’s Native Americans who said they felt sidelined by the agreement. The deal does not preclude tribal groups from purchasing the land in the future, but it doesn’t guarantee that or give them priority.
The yak titʸu titʸu yak tiłhini Northern Chumash Tribe of San Luis Obispo County and Region, which met with the Coastal Commission several times in the lead-up to Thursday’s vote, had hoped to see the land returned to them.
Scott Lanthrop is a member of the tribe’s board and has worked on the issue for several years.
“The sad part is our group is not being recognized as the ultimate conservationist,” he told The Times. “Any normal person, if you ask the question, would you rather have a tribal group that is totally connected to earth and wind and water, or would you like to have some state agency or gigantic NGO manage this land, I think the answer would be, ‘Hey, you probably should give it back to the tribe.’”
Tribe chair Mona Tucker said she fears that free public access to the land could end up harming it instead of helping it, as the Coastal Commission intends.
“In my mind, I’m not understanding how taking the land … is mitigation for marine life,” Tucker said. “It doesn’t change anything as far as impacts to the water. It changes a lot as far as impacts to the land.”
Montaña de Oro State Park.
(Christopher Reynolds / Los Angeles Times)
The deal has been complicated by jurisdictional questions, including who can determine what happens to the land. While PG&E owns the North Ranch parcel that could be transferred to State Parks, the South Ranch and Wild Cherry Canyon are owned by its subsidiary, Eureka Energy Company.
What’s more, the California Public Utilities Commission, which regulates utilities such as PG&E, has a Tribal Land Transfer Policy that calls for investor-owned power companies to transfer land they no longer want to Native American tribes.
In the case of Diablo Canyon, the Coastal Commission became the decision maker because it has the job of compensating for environmental harm from the facility’s continued operation. Since the commission determined Diablo’s use of ocean water can’t be avoided, it looked at land conservation as the next best method.
This “out-of-kind” trade-off is a rare but not unheard-of way of making up for the loss of marine life. It’s an approach that is “feasible and more likely to succeed” than several other methods considered, according to the commission’s staff report.
“This plan supports the continued operation of a major source of reliable electricity for California, and is in alignment with our state’s clean energy goals and focus on coastal protection,” Paula Gerfen, Diablo Canyon’s senior vice president and chief nuclear officer, said in a statement.
But Assemblymember Dawn Addis (D-Morro Bay) said the deal was “not the best we can do” — particularly because the fate of the South Ranch now depends on the plant staying in operation beyond 2030.
“I believe the time really is now for the immediate full conservation of the 12,000 [acres], and to bring accountability and trust back for the voters of San Luis Obispo County,” Addis said during the meeting.
There are also concerns about the safety of continuing to operate a nuclear plant in California, with its radioactive waste stored in concrete casks on the site. Diablo Canyon is subject to ground shaking and earthquake hazards, including from the nearby Hosgri Fault and the Shoreline Fault, about 2.5 miles and 1 mile from the facility, respectively.
PG&E says the plant has been built to withstand hazards. It completed a seismic hazard assessment in 2024, and determined Diablo Canyon is safe to continue operation through 2030. The Coastal Commission, however, found if the plant operates longer, it would warrant further seismic study.
A key development for continuing Diablo Canyon’s operation came in 2022 with Senate Bill 846, which delayed closure by up to five additional years. At the time, California was plagued by rolling blackouts driven by extreme heat waves, and state officials were growing wary about taking such a major source of power offline.
But California has made great gains in the last several years — including massive investments in solar energy and battery storage — and some questioned whether the facility is still needed at all.
Others said conserving thousands of acres of land still won’t make up for the harms to the ocean.
“It is unmitigatable,” said David Weisman, executive director of the nonprofit Alliance for Nuclear Responsibility. He noted that the Coastal Commission’s staff report says it would take about 99 years to balance the loss of marine life with the benefits provided by 4,500 acres of land conservation. Twenty more years of operation would take about 305 years to strike that same balance.
But some pointed out that neither the commission nor fisheries data have found that Diablo’s operations cause declines in marine life. Ocean harm may be overestimated, said Seaver Wang, an oceanographer and the climate and energy director at the Breakthrough Institute, a Berkeley-based research center.
In California’s push to transition to clean energy, every option comes with downsides, Wang said. In the case of nuclear power — which produces no greenhouse gas emissions — it’s all part of the trade-off, he said.
“There’s no such thing as impacts-free energy,” he said.
The Coastal Commission’s vote is one of the last remaining obstacles to keeping the plant online. PG&E will also need a final nod from the Regional Water Quality Control Board, which decides on a pollution discharge permit in February.
The federal Nuclear Regulatory Commission will also have to sign off on Diablo’s extension.
Science
In search for autism’s causes, look at genes, not vaccines, researchers say
Earlier this year, Health and Human Services Secretary Robert F. Kennedy Jr. pledged that the search for autism’s cause — a question that has kept researchers busy for the better part of six decades — would be over in just five months.
“By September, we will know what has caused the autism epidemic, and we’ll be able to eliminate those exposures,” Kennedy told President Trump during a Cabinet meeting in April.
That ambitious deadline has come and gone. But researchers and advocates say that Kennedy’s continued fixation on autism’s origins — and his frequent, inaccurate claims that childhood vaccines are somehow involved — is built on fundamental misunderstandings of the complex neurodevelopmental condition.
Even after more than half a century of research, no one yet knows exactly why some people have autistic traits and others do not, or why autism spectrum disorder looks so different across the people who have it. But a few key themes have emerged.
Researchers believe that autism is most likely the result of a complex set of interactions between genes and the environment that unfold while a child is in the womb. It can be passed down through families, or originate with a spontaneous gene mutation.
Environmental influences may indeed play a role in some autism cases, but their effect is heavily influenced by a person’s genes. There is no evidence for a single trigger that causes autism, and certainly not one a child encounters after birth: not a vaccine, a parenting style or a post-circumcision Tylenol.
“The real reason why it’s complicated, the more fundamental one, is that there’s not a single cause,” said Irva Hertz-Picciotto, a professor of public health science and director of the Environmental Health Sciences Center at UC Davis. “It’s not a single cause from one person to the next, and not a single cause within any one person.”
Kennedy, an attorney who has no medical or scientific training, has called research into autism’s genetics a “dead end.” Autism researchers counter that it’s the only logical place to start.
“If we know nothing else, we know that autism is primarily genetic,” said Joe Buxbaum, a molecular neuroscientist who directs the Seaver Autism Center for Research and Treatment at the Icahn School of Medicine at Mount Sinai. “And you don’t have to actually have the exact genes [identified] to know that something is genetic.”
Some neurodevelopment disorders arise from a difference in a single gene or chromosome. People with Down syndrome have an extra copy of chromosome 21, for example, and Fragile X syndrome results when the FMR1 gene isn’t expressed.
Autism in most cases is polygenic, which means that multiple genes are involved, with each contributing a little bit to the overall picture.
Researchers have found hundreds of genes that could be associated with autism; there may be many more among the roughly 20,000 in the human genome.
In the meantime, the strongest evidence that autism is genetic comes from studies of twins and other sibling groups, Buxbaum and other researchers said.
The rate of autism in the U.S. general population is about 2.8%, according to a study published last year in the journal Pediatrics. Among children with at least one autistic sibling, it’s 20.2% — about seven times higher than the general population, the study found.
Twin studies reinforce the point. Both identical and fraternal twins develop in the same womb and are usually raised in similar circumstances in the same household. The difference is genetic: identical twins share 100% of their genetic information, while fraternal twins share about 50% (the same as nontwin siblings).
If one fraternal twin is autistic, the chance that the other twin is also autistic is about 20%, or about the same as it would be for a nontwin sibling.
But if one in a pair of identical twins is autistic, the chance that the other twin is also autistic is significantly higher. Studies have pegged the identical twin concurrence rate anywhere from 60% to 90%, though the intensity of the twins’ autistic traits may differ significantly.
Molecular genetic studies, which look at the genetic information shared between siblings and other blood relatives, have found similar rates of genetic influence on autism, said Dr. John Constantino, a professor of pediatrics, psychiatry and behavioral sciences at the Emory University School of Medicine and chief of behavioral and mental health at Children’s Healthcare of Atlanta.
Together, he said, “those studies have indicated that a vast share of the causation of autism can be traced to the effects of genetic influences. That is a fact.”
Buxbaum compares the heritability of autism to the heritability of height, another polygenic trait.
“There’s not one gene that’s making you taller or shorter,” Buxbaum said. Hundreds of genes play a role in where you land on the height distribution curve. A lot of those genes run in families — it’s not unusual for very tall people, for example, to have very tall relatives.
But parents pass on a random mix of their genes to their children, and height distribution across a group of same-sex siblings can vary widely. Genetic mutations can change the picture. Marfan syndrome, a condition caused by mutations in the FBN1 gene, typically makes people grow taller than average. Hundreds of genetic mutations are associated with dwarfism, which causes shorter stature.
Then once a child is born, external factors such as malnutrition or disease can affect the likelihood that they reach their full height potential.
So genes are important. But the environment — which in developmental science means pretty much anything that isn’t genetics, including parental age, nutrition, air pollution and viruses — can play a major role in how those genes are expressed.
“Genetics does not operate in a vacuum, and at the same time, the impact of the environment on people is going to depend on a person’s individual genetics,” said Brian K. Lee, a professor of epidemiology and biostatistics at Drexel University who studies the genetics of developmental disorders.
Unlike the childhood circumstances that can affect height, the environmental exposures associated with autism for the most part take place in utero.
Researchers have identified multiple factors linked to increased risks of the disorder, including older parental age, infant prematurity and parental exposure to air pollution and industrial solvents.
Investigations into some of these linkages were among the more than 50 autism-related studies whose funding Kennedy has cut since taking office, a ProPublica investigation found. In contrast, no credible study has found links between vaccines and autism — and there have been many.
One move from the Department of Health and Human Services has been met with cautious optimism: even as Kennedy slashed funding to other research projects, the department in September announced a $50-million initiative to explore the interactions of genes and environmental factors in autism, which has been divided among 13 different research groups at U.S. universities, including UCLA and UC San Diego.
The department’s selection of well-established, legitimate research teams was met with relief by many autism scientists.
But many say they fear that such decisions will be an anomaly under Kennedy, who has repeatedly rejected facts that don’t conform to his preferred hypotheses, elevated shoddy science and muddied public health messaging on autism with inaccurate information.
Disagreements are an essential part of scientific inquiry. But the productive ones take place in a universe of shared facts and build on established evidence.
And when determining how to spend limited resources, researchers say, making evidence-based decisions is vital.
“There are two aspects of these decisions: Is it a reasonable expenditure based on what we already know? And if you spend money here, will you be taking money away from HHS that people are in desperate need of?” Constantino said. “If you’re going to be spending money, you want to do that in a way that is not discarding what we already know.”