When A.I.’s Output Is a Threat to A.I. Itself

The internet is becoming awash in words and images generated by artificial intelligence.

Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.

A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.


In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.

Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:


This is part of a data set of 60,000 handwritten digits.

When we trained an A.I. to mimic those digits, its output looked like this.

This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues?


After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.

After 30 generations, they converge into a single shape.


While this is a simplified example, it illustrates a problem on the horizon.

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.


In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”

The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).

How an A.I. that draws digits “collapses” after being trained on its own output

If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But it would still occur, researchers say, unless the synthetic data was complemented with a lot of new, real data.

Degenerative A.I.


In one example, the researchers trained a large language model on its own sentences over and over again, asking it to complete the same prompt after each round.

When they asked the A.I. to complete a sentence that started with “To cook a turkey for Thanksgiving, you…,” at first, it responded like this:

Even at the outset, the A.I. “hallucinates.” But when the researchers further trained it on its own sentences, it got a lot worse…

An example of text generated by an A.I. model.


After two generations, it started simply printing long lists.

An example of text generated by an A.I. model after being trained on its own sentences for 2 generations.

And after four generations, it began to repeat phrases incoherently.


An example of text generated by an A.I. model after being trained on its own sentences for 4 generations.

“The model becomes poisoned with its own projection of reality,” the researchers wrote of this phenomenon.
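That degeneration is easy to reproduce with far simpler machinery than a large language model. The sketch below is a toy stand-in for the loop the researchers describe, not their actual setup: it repeatedly retrains a tiny bigram (Markov-chain) text model on its own generations, starting from a made-up seed sentence and prompt. With each round, the vocabulary the model can draw on tends to shrink and repeated phrases become more common.

```python
# A toy stand-in for the retraining loop described above. The seed sentence,
# prompt and loop lengths are invented for illustration; the real experiments
# used large language models, not a bigram model.
import random
from collections import defaultdict

def train_bigram(text):
    """Count which word follows which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=40):
    """Sample a continuation from the bigram model, starting at `start`."""
    word, out = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

random.seed(0)
corpus = ("to cook a turkey for thanksgiving you need to thaw the bird "
          "season it generously and roast it until a thermometer reads "
          "a safe internal temperature")
prompt = "to"

for generation in range(5):
    model = train_bigram(corpus)
    print(f"generation {generation}: vocabulary size = {len(model)}")
    print("  sample:", generate(model, prompt))
    # The next round trains only on text this model produced itself.
    corpus = " ".join(generate(model, prompt) for _ in range(100))
```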


This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of A.I. that generate images are repeatedly trained on their own output — a problem that could already be occurring as A.I.-generated images flood the web.

They found that glitches and image artifacts started to build up in the A.I.’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.

When A.I. image models are trained on their own output, they can produce distorted images, mangled fingers or strange patterns.

A.I.-generated images by Sina Alemohammad and others.


“You’re kind of drifting into parts of the space that are like a no-fly zone,” said Richard Baraniuk, a professor who led the research on A.I. image models.

The researchers found that the only way to stave off this problem was to ensure that the A.I. was also trained on a sufficient supply of new, real data.

While selfies are certainly not in short supply on the internet, there could be categories of images where A.I. output outnumbers genuine data, they said.

For example, A.I.-generated images in the style of van Gogh could outnumber actual photographs of van Gogh paintings in A.I.’s training data, and this may lead to errors and distortions down the road. (Early signs of this problem will be hard to detect because the leading A.I. models are closed to outside scrutiny, the researchers said.)

Why collapse happens


All of these problems arise because A.I.-generated data is often a poor substitute for the real thing.

This is sometimes easy to see, like when chatbots state absurd facts or when A.I.-generated hands have too many fingers.

But the differences that lead to model collapse aren’t necessarily obvious — and they can be difficult to detect.

When generative A.I. is “trained” on vast amounts of data, what’s really happening under the hood is that it is assembling a statistical distribution — a set of probabilities that predicts the next word in a sentence, or the pixels in a picture.

For example, when we trained an A.I. to imitate handwritten digits, its output could be arranged into a statistical distribution that looks like this:


Distribution of A.I.-generated data

Examples of initial A.I. output:

The distribution shown here is simplified for clarity.

The peak of this bell-shaped curve represents the most probable A.I. output — in this case, the most typical A.I.-generated digits. The tail ends describe output that is less common.

Notice that when the model was trained on human data, it had a healthy spread of possible outputs, which you can see in the width of the curve above.

But after it was trained on its own output, this is what happened to the curve:


Distribution of A.I.-generated data when trained on its own output

It gets taller and narrower. As a result, the model becomes more and more likely to produce a smaller range of output, and the output can drift away from the original data.

Meanwhile, the tail ends of the curve — which contain the rare, unusual or surprising outcomes — fade away.

This is a telltale sign of model collapse: Rare data becomes even rarer.

If this process went unchecked, the curve would eventually become a spike:


Distribution of A.I.-generated data when trained on its own output

This was when all of the digits became identical, and the model completely collapsed.
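The narrowing itself can be reproduced with an extremely simple model. The sketch below is an illustration of the mechanism, not the researchers’ code: it fits a one-dimensional bell curve (a Gaussian) to samples drawn from the previous generation’s curve, over and over. Because each generation sees only a finite sample, rare tail values are gradually lost and the fitted spread drifts toward zero, that is, toward a spike.

```python
# A minimal sketch of the collapse mechanism: repeatedly refit a simple model
# (a one-dimensional Gaussian) to samples drawn from the previous generation's
# model. The sample size and number of generations are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100

mu, sigma = 0.0, 1.0                          # generation 0: the "real" data distribution
for generation in range(1, 1001):
    data = rng.normal(mu, sigma, n_samples)   # train only on the previous model's output
    mu, sigma = data.mean(), data.std()       # refit the model to that synthetic data
    if generation % 200 == 0:
        print(f"generation {generation:4d}: spread (std) = {sigma:.4f}")
# The printed spread keeps shrinking: values in the tails are sampled less and
# less often, and the distribution narrows toward a single point.
```

The bell curves in the charts above are the higher-dimensional version of the same effect applied to the digit model’s output.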

Why it matters

This doesn’t mean generative A.I. will grind to a halt anytime soon.

The companies that make these tools are aware of these problems, and they will notice if their A.I. systems start to deteriorate in quality.


But it may slow things down. As existing sources of data dry up or become contaminated with A.I. “slop,” researchers say, it will become harder for newcomers to compete.

A.I.-generated words and images are already beginning to flood social media and the wider web. They’re even hiding in some of the data sets used to train A.I., the Rice researchers found.

“The web is becoming increasingly a dangerous place to look for your data,” said Sina Alemohammad, a graduate student at Rice who studied how A.I. contamination affects image models.

Big players will be affected, too. Computer scientists at N.Y.U. found that when there is a lot of A.I.-generated content in the training data, it takes more computing power to train A.I. — which translates into more energy and more money.

“Models won’t scale anymore as they should be scaling,” said Julia Kempe, the N.Y.U. professor who led this work.


The leading A.I. models already cost tens to hundreds of millions of dollars to train, and they consume staggering amounts of energy, so this can be a sizable problem.

‘A hidden danger’

Finally, there’s another threat posed by even the early stages of collapse: an erosion of diversity.

And it’s an outcome that could become more likely as companies try to avoid the glitches and “hallucinations” that often occur with A.I.-generated data.

This is easiest to see when the data matches a form of diversity that we can visually recognize — people’s faces:


This set of A.I. faces was created by the same Rice researchers who produced the distorted images above. This time, they tweaked the model to avoid visual glitches.

A grid of A.I.-generated faces showing variations in their poses, expressions, ages and races.

This is the output after they trained a new A.I. on the previous set of faces. At first glance, it may seem like the model changes worked: The glitches are gone.


After one generation of training on A.I. output, the A.I.-generated faces appear more similar.

After two generations …

After two generations of training on A.I. output, the A.I.-generated faces are less diverse than the original image.


After three generations …

After three generations of training on A.I. output, the A.I.-generated faces grow more similar.

After four generations, the faces all appeared to converge.

After four generations of training on A.I. output, the A.I.-generated faces appear almost identical.


This drop in diversity is “a hidden danger,” Mr. Alemohammad said. “You might just ignore it and then you don’t understand it until it’s too late.”

Just as with the digits, the changes are clearest when most of the data is A.I.-generated. With a more realistic mix of real and synthetic data, the decline would be more gradual.


But the problem is relevant to the real world, the researchers said, and will inevitably occur unless A.I. companies go out of their way to avoid their own output.

Related research shows that when A.I. language models are trained on their own words, their vocabulary shrinks and their sentences become less varied in their grammatical structure — a loss of “linguistic diversity.”

And studies have found that this process can amplify biases in the data and is more likely to erase data pertaining to minorities.

Ways out

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.


One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.

OpenAI and Google have made deals with some publishers or websites to use their data to improve A.I. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement. OpenAI and Microsoft say their use of the content constitutes fair use under copyright law.)

Better ways to detect A.I. output would also help mitigate these problems.

Google and OpenAI are working on A.I. “watermarking” tools, which introduce hidden patterns that can be used to identify A.I.-generated images and text.

But watermarking text is challenging, researchers say, because these watermarks can’t always be reliably detected and can easily be subverted (they may not survive being translated into another language, for example).


A.I. slop is not the only reason that companies may need to be wary of synthetic data. Another problem is that there are only so many words on the internet.

Some experts estimate that the largest A.I. models have been trained on a few percent of the available pool of text on the internet. They project that these models may run out of public data to sustain their current pace of growth within a decade.

“These models are so enormous that the entire internet of images or conversations is somehow close to being not enough,” Professor Baraniuk said.

To meet their growing data needs, some companies are considering using today’s A.I. models to generate data to train tomorrow’s models. But researchers say this can lead to unintended consequences (such as the drop in quality or diversity that we saw above).

There are certain contexts where synthetic data can help A.I.s learn — for example, when output from a larger A.I. model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go.
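The “verifiable answer” case is the easiest to make concrete. The sketch below is a made-up illustration, not any company’s pipeline: a toy generator stands in for an A.I. model proposing arithmetic problems, occasionally with wrong answers, and only the examples whose answers check out are kept as training data.

```python
# A minimal sketch of filtering synthetic data by verifying answers. The generator
# below is a stand-in for an A.I. model; in practice the proposals would come from
# a real model, and the checker could be a calculator, a unit test or a game engine.
import random

def fake_model_output(rng):
    """Stand-in for a model proposing an arithmetic problem and an answer."""
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    proposed = a + b + rng.choice([0, 0, 0, 1])  # sometimes off by one, like a real model
    return {"question": f"What is {a} + {b}?", "a": a, "b": b, "answer": proposed}

def is_verified(example):
    """Keep the example only if the proposed answer is actually correct."""
    return example["answer"] == example["a"] + example["b"]

rng = random.Random(0)
candidates = [fake_model_output(rng) for _ in range(1000)]
training_set = [ex for ex in candidates if is_verified(ex)]
print(f"kept {len(training_set)} of {len(candidates)} synthetic examples")
```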


And new research suggests that when humans curate synthetic data (for example, by ranking A.I. answers and choosing the best one), it can alleviate some of the problems of collapse.

Companies are already spending a lot on curating data, Professor Kempe said, and she believes this will become even more important as they learn about the problems of synthetic data.

But for now, there’s no replacement for the real thing.

About the data

To produce the images of A.I.-generated digits, we followed a procedure outlined by researchers. We first trained a type of neural network known as a variational autoencoder using a standard data set of 60,000 handwritten digits.


We then trained a new neural network using only the A.I.-generated digits produced by the previous neural network, and repeated this process in a loop 30 times.

To create the statistical distributions of A.I. output, we used each generation’s neural network to create 10,000 drawings of digits. We then used the first neural network (the one that was trained on the original handwritten digits) to encode these drawings as a set of numbers, known as a “latent space” encoding. This allowed us to quantitatively compare the output of different generations of neural networks. For simplicity, we used the average value of this latent space encoding to generate the statistical distributions shown in the article.
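Here is a rough sketch of that generational loop. It is not the code used for this article: to keep it short and self-contained, it substitutes a much simpler generative model (a Gaussian mixture) and scikit-learn’s small built-in digits set for the variational autoencoder and the full 60,000-digit data set, but the loop has the same structure, with each new model trained only on the previous model’s output.

```python
# A rough sketch of the generational retraining loop, using a Gaussian mixture
# model in place of a variational autoencoder and scikit-learn's small digits set
# in place of the full 60,000-digit data set. The model, data set and diversity
# measure here are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.mixture import GaussianMixture

real_digits = load_digits().data                   # 1,797 images of 8x8 handwritten digits
data = real_digits

for generation in range(30):
    model = GaussianMixture(n_components=10, random_state=0).fit(data)
    synthetic, _ = model.sample(len(real_digits))  # digits drawn from this generation
    spread = synthetic.std(axis=0).mean()          # crude measure of output diversity
    print(f"generation {generation:2d}: average pixel spread = {spread:.2f}")
    data = synthetic                               # the next model sees only these
```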


How to protect yourself from the smoke caused by L.A. wildfires

You don’t have to live close to a wildfire to be affected by its smoke. With severe winds fanning the fires in and around Pacific Palisades, the Pasadena foothills and Simi Valley, huge swaths of the Southland are contending with dangerous air quality.

Wildfire smoke can irritate your eyes, nose, throat and lungs. The soot may contain all kinds of dangerous pollutants, including some that may cause cancer. The tiniest particles in smoke can travel deep into your lungs or even enter your bloodstream.

Conditions like these aren’t good for anyone, but they’re particularly bad for people in vulnerable groups, including children, those with asthma or other respiratory conditions, people with heart disease and those who are pregnant.

Here’s what you should know to keep yourself safe.


Stay indoors

Minimize your exposure to unhealthy air by staying inside and keeping your doors and windows shut.

If you have a central heating and air conditioning system, you can keep your indoor air clean by turning it on and keeping it running. Make sure the fresh-air intake is closed so that you’re not drawing in outdoor air.

Keep your pets inside

They shouldn’t breathe the unhealthy air either.

Check your air filters

Clean filters work better than dirty ones, and high-efficiency filters work better than regular ones. The California Air Resources Board and the South Coast Air Quality Management District recommend filters with a MERV rating of 13 or higher.

You might consider using a portable high-efficiency air cleaner in the room where you spend the most time. The U.S. Environmental Protection Agency has information about portable air cleaners, and CARB maintains a list of certified air cleaning devices.


Don’t pollute your indoor air

That means no burning candles or incense. If your power is out and you need to see in the dark, you’re much better off with a flashlight or headlamp.

If you’re cold, bundle up. This is not the time to start a cozy fire in the fireplace. Don’t use a gas stove or wood-fired appliances, since these will make your indoor air quality worse, not better, the AQMD says.

The CDC also advises against vacuuming, since it can stir up dust and release fine particles into the air.

Take care when cleaning up

You don’t want your skin to come into contact with wildfire ash. That means you should wear long sleeves, pants, gloves, socks and shoes. The AQMD even wants you to wear goggles.

If you’re sweeping up ash outdoors, get a hose and mist it with water first. That will keep it from flying up in the air as you move it around. Once the ash is wet, sweep it up gently with a broom or mop. Bag it up in a plastic bag and throw it away.


It’s a good idea to wash your vehicles and outdoor toys if they’re covered in ash. Try not to send ashy water into storm drains. Direct the dirty water into ground areas instead, the AQMD advises.

Those with lung or heart problems should avoid clean-up activities.

Discard spoiled food…

If you lost power for a significant length of time, the food in your refrigerator or freezer may be spoiled.

Food kept in a fridge should stay safe for up to four hours if you’ve kept the door closed. If you’ve been without power for longer than that, you’ll need to toss all perishable items, including meat, poultry, fish, eggs, milk and cut fruits and vegetables. Anything with “an unusual smell, color, or texture” should be thrown out as well, according to the U.S. Centers for Disease Control and Prevention.

Refrigerated medicines should be OK unless the power was out for more than a day. Check the label to make sure.


…even if it was in the freezer

Your freezer may be in better shape, especially if it’s well-stocked. Items in a full freezer may be safe for up to 48 hours if it’s been kept shut, and a half-full freezer may be OK for up to 24 hours. (The frozen items help keep each other cold, so the more the better.)

If items have remained below 40 degrees Fahrenheit (4 degrees Celsius) or you can still see ice crystals in them, they may be OK to use or refreeze, according to the federal government’s food safety website.

Ice cream and frozen yogurt should be thrown out if the power goes out for any amount of time. Meat, poultry, seafood, eggs, milk and most other dairy products need to go if they were exposed to temperatures above 40 degrees F for two hours or longer. The same goes for frozen meals, casseroles, soups, stews and cakes, pies and pastries with custard or cheese fillings.

Fruit and fruit juices that have started to thaw can be refrozen unless they’ve started to get moldy, slimy or smell like yeast. Vegetables and vegetable juices should be discarded if they’ve been above 40 degrees F for six hours or more, even if they look and smell fine.

Breakfast items like waffles and bagels can be refrozen, as can breads, rolls, muffins and other baked goods without custard fillings.


Consider alternative shelter

If you’ve done everything you can but your eyes are still watering, you can’t stop coughing, or you just don’t feel well, seek alternative shelter where the air quality is better.

Hold off on vigorous exercise

Doing anything that would cause you to breathe in more deeply is a bad idea right now.

Mask up outdoors

If you need to be outside for an extended time, be sure to wear a high-quality mask. A surgical mask or cloth mask won’t cut it — health authorities agree that you should reach for an N95 or P-100 respirator with a tight seal.

Are young children at greater risk from wildfire smoke?

Very young children are especially vulnerable to the effects of wildfire smoke because their lungs are still rapidly developing. And because they breathe much faster than adults, they are taking in more toxic particulate matter relative to their tiny bodies, which can trigger inflammation, coughing and wheezing.

Any kind of air pollution can be dangerous to young children, but wildfire smoke is about 10 times as toxic for children as air pollution from burning fossil fuels, said Dr. Lisa Patel, clinical associate professor of pediatrics at Stanford Children’s Health. Young children with preexisting respiratory problems like asthma are at even greater risk.


Patel advises parents to keep their young children indoors as much as possible, create a safe room in their home with an air purifier, and avoid using gas stoves, which can pollute the indoor air.

Children over the age of 2 should also wear a well-fitting KN95 mask if they will be outdoors for a long period of time. Infants and toddlers younger than that don’t need to mask up because it can be a suffocation risk, Patel said.

What are the risks for pregnant people?

Pregnant people should also take extra precautions around wildfire smoke, which can cross the placenta and affect a developing fetus. Studies have found that exposure to wildfire smoke during pregnancy can increase the risk of premature birth and low birth weight. Researchers have also linked the toxic chemicals in smoke with maternal health complications including hypertension and preeclampsia.

What about other high-risk populations?

Chronic conditions such as asthma, chronic obstructive pulmonary disease and other respiratory diseases can also make you particularly vulnerable to wildfire smoke. People with heart disease, diabetes and chronic kidney disease should take extra care to breathe clean air, the CDC says. The tiny particles in wildfire smoke can aggravate existing health problems, and may make heart attacks or strokes more likely, CARB warns.

Get ready for the next emergency

Living in Southern California means another wildfire is coming sooner or later. To prepare for the bad air, you can:

  • Stock up on disposable respirators, like N95 or P-100s.
  • Have clean filters ready for your A/C system and change them out when things get smoky.
  • Know how to check the air quality where you live and work. The AQMD has an interactive map that’s updated hourly. Just type in an address and it will zoom in on the location. You can also sign up to get air quality alerts by email or on your smartphone.
  • Know where your fire extinguisher is and keep it handy.
  • If you have a heart or lung condition, keep at least five days’ worth of medication on hand.

Times staff writer Karen Garcia contributed to this report.


Punk and Emo Fossils Are a Hot Topic in Paleontology

Mark Sutton, an Imperial College London paleontologist, is not a punk.

“I’m more of a folk and country person,” he said.

But when Dr. Sutton pieced together 3-D renderings of a tiny fossil mollusk, he was struck by the spikes that covered its wormlike body. “This is like a classic punk hairstyle, the way it’s sticking up,” he thought. He called the fossil “Punk.” Then he found a similar fossil with downward-tipped spines reminiscent of long, side-swept “emo” bangs. He nicknamed that specimen after the emotional alt-rock genre.

On Wednesday, Dr. Sutton and his colleagues published a paper in the journal Nature formally naming the creatures as the species Punk ferox and Emo vorticaudum. True to their names, these worm-mollusks are behind something of an upset (if not quite “anarchy in the U.K.”) over scientists’ understanding of the origins of one of the biggest groups of animals on Earth.

In terms of sheer number of species, mollusks are second only to arthropods (the group that contains insects, spiders and crustaceans). The better-known half of the mollusk family tree, conchiferans, contains animals like snails, clams and octopuses. “The other half is this weird and wacky group of spiny things,” Dr. Sutton said. Some animals in this branch, the aculiferans, resemble armored marine slugs, while others are “obscure, weird molluscan worms,” he said.


Punk and Emo, the forerunners of today’s worm-mollusks, lived on the dark seafloor amid gardens of sponges, nearly 200 million years before the first dinosaurs emerged on land. Today, their ancient seafloor is a fossil site at the border between England and Wales.

The site is littered with rounded rocky nodules that “look a bit like potatoes,” Dr. Sutton said. “And then you crack them open, and some of them have got these fossils inside. But the thing is, they don’t really look like much at first.”

While the nodules can preserve an entire animal’s body in 3-D, the cross-section that becomes visible when a nodule is cracked open can be difficult to interpret “because you’re not seeing the full anatomy,” Dr. Sutton said.

Paleontologists can use CT scans to see parts of fossils still hidden in rock, essentially taking thousands of X-rays of the fossil and then stitching those X-ray slices together into one digital 3-D image. But in these nodules, the fossilized creatures and the rock surrounding them are too similar in density to be easily differentiated by X-rays. Instead, Dr. Sutton essentially recreated this process of slicing and imaging by hand.

“We grind away a slice at a time, take a photo, repeat at 20-micron intervals or so, and basically destroy but digitize the fossil as we go,” Dr. Sutton said. At the end of the process, the original fossil nodule is “a sad-looking pile of dust,” but the thousands of images, when painstakingly digitally combined, provide a remarkable picture of the fossil animal.


Punk and Emo’s Hot Topic-worthy spikes set them apart from other fossils from the aculiferan branch of the mollusk family. “We don’t know much about aculiferans, and it’s unusual to find out we’ve suddenly got two,” Dr. Sutton said.

Stewart Edie, the curator of fossil bivalves at the Smithsonian National Museum of Natural History, said that Punk and Emo’s bizarre appearances shook up a long-held understanding of how mollusks evolved. Traditionally, scientists thought that the group of mollusks containing snails, clams and cephalopods “saw all of the evolutionary action,” said Dr. Edie, who was not involved with the new discovery. “And the other major group, the aculiferans, were considerably less adventurous.” But Punk and Emo “buck that trend,” he said.

The new alt-rock aculiferans reveal the hidden diversity of their group in the distant past and raise questions about why their descendants make up such a small part of the mollusk class today. “This is really giving us an almost unprecedented window into the sorts of things that were actually around when mollusks were getting going,” Dr. Sutton said. “It’s just this little weird, unexpected, really clear view of what was going on in the early history of one of the most important groups of animals.”


FDA sets limits for lead in many baby foods as California disclosure law takes effect

The U.S. Food and Drug Administration this week set maximum levels for lead in baby foods such as jarred fruits and vegetables, yogurts and dry cereal, part of an effort to cut young kids’ exposure to the toxic metal that causes developmental and neurological problems.

The agency issued final guidance that it estimated could reduce lead exposure from processed baby foods by about 20% to 30%. The limits are voluntary, not mandatory, for food manufacturers, but they allow the FDA to take enforcement action if foods exceed the levels.

It’s part of the FDA’s ongoing effort to “reduce dietary exposure to contaminants, including lead, in foods to as low as possible over time, while maintaining access to nutritious foods,” the agency said in a statement.

Consumer advocates, who have long sought limits on lead in children’s foods, welcomed the guidance first proposed two years ago, but said it didn’t go far enough.

“FDA’s actions today are a step forward and will help protect children,” said Thomas Galligan, a scientist with the Center for Science in the Public Interest. “However, the agency took too long to act and ignored important public input that could have strengthened these standards.”


The new limits on lead for children younger than 2 don’t cover grain-based snacks such as puffs and teething biscuits, which some research has shown contain higher levels of lead. And they don’t limit other metals such as cadmium that have been detected in baby foods.

The FDA’s announcement comes just one week after a new California law took effect that requires baby food makers selling products in California to provide a QR code on their packaging that takes consumers to monthly test results for four heavy metals in their products: lead, mercury, arsenic and cadmium.

The change, required under a law passed by the California Legislature in 2023, will affect consumers nationwide. Because companies are unlikely to create separate packaging for the California market, QR codes are likely to appear on products sold across the country, and consumers everywhere will be able to view the heavy metal concentrations.

Although companies are required to start printing new packaging and publishing test results of products manufactured beginning in January, it may take time for the products to hit grocery shelves.

The law was inspired by a 2021 congressional investigation that found dangerously high levels of heavy metals in packaged foods marketed for babies and toddlers. Baby foods and their ingredients had up to 91 times the arsenic level, up to 177 times the lead level, up to 69 times the cadmium level, and up to five times the mercury level that the U.S. allows to be present in bottled or drinking water, the investigation found.


There’s no safe level of lead exposure for children, according to the U.S. Centers for Disease Control and Prevention. The metal causes “well-documented health effects,” including brain and nervous system damage and slowed growth and development. However, lead occurs naturally in some foods and comes from pollutants in air, water and soil, which can make it impossible to eliminate entirely.

The FDA guidance sets a lead limit of 10 parts per billion for fruits, most vegetables, grain and meat mixtures, yogurts, custards and puddings, and single-ingredient meats. It sets a limit of 20 parts per billion for single-ingredient root vegetables and for dry infant cereals. The guidance covers packaged processed foods sold in jars, pouches, tubs or boxes.

Jaclyn Bowen, executive director of the Clean Label Project, an organization that certifies baby foods as having low levels of toxic substances, said consumers can use the new FDA guidance in tandem with the new California law: The FDA, she said, has provided parents a “hard and fast number” to consider a benchmark when looking at the new monthly test results.

But Brian Ronholm, director of food policy for Consumer Reports, called the FDA limits “virtually meaningless because they’re based more on industry feasibility and not on what would best protect public health.” A product with a lead level of 10 parts per billion is “still too high for baby food. What we’ve heard from a lot of these manufacturers is they are testing well below that number.”

The new FDA guidance comes more than a year after lead-tainted pouches of apple cinnamon puree sickened more than 560 children in the U.S. between October 2023 and April 2024, according to the CDC.


The levels of lead detected in those products were more than 2,000 times higher than the FDA’s maximum. Officials stressed that the agency doesn’t need guidance to take action on foods that violate the law.

Aleccia writes for the Associated Press. Gold reports for The Times’ early childhood education initiative, focusing on the learning and development of California children from birth to age 5. For more information about the initiative and its philanthropic funders, go to latimes.com/earlyed.
