Science

When A.I.’s Output Is a Threat to A.I. Itself


The internet is becoming awash in words and images generated by artificial intelligence.

Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.

A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.

In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.

Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:

This is part of a data set of 60,000 handwritten digits.

When we trained an A.I. to mimic those digits, its output looked like this.

This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues?

After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.

After 30 generations, they converge into a single shape.

While this is a simplified example, it illustrates a problem on the horizon.

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”

The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).

How an A.I. that draws digits “collapses” after being trained on its own output

If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But it would still occur, researchers say, unless the synthetic data was complemented with a lot of new, real data.

Degenerative A.I.

In one example, the researchers trained a large language model on its own sentences over and over again, asking it to complete the same prompt after each round.

When they asked the A.I. to complete a sentence that started with “To cook a turkey for Thanksgiving, you…,” at first, it responded like this:

Even at the outset, the A.I. “hallucinates.” But when the researchers further trained it on its own sentences, it got a lot worse…

An example of text generated by an A.I. model.

After two generations, it started simply printing long lists.

An example of text generated by an A.I. model after being trained on its own sentences for 2 generations.

And after four generations, it began to repeat phrases incoherently.

An example of text generated by an A.I. model after being trained on its own sentences for 4 generations.

“The model becomes poisoned with its own projection of reality,” the researchers wrote of this phenomenon.

This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of A.I. that generate images are repeatedly trained on their own output — a problem that could already be occurring as A.I.-generated images flood the web.

They found that glitches and image artifacts started to build up in the A.I.’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.

When A.I. image models are trained on their own output, they can produce distorted images, mangled fingers or strange patterns.

A.I.-generated images by Sina Alemohammad and others.

“You’re kind of drifting into parts of the space that are like a no-fly zone,” said Richard Baraniuk, a professor who led the research on A.I. image models.

The researchers found that the only way to stave off this problem was to ensure that the A.I. was also trained on a sufficient supply of new, real data.

While selfies are certainly not in short supply on the internet, there could be categories of images where A.I. output outnumbers genuine data, they said.

For example, A.I.-generated images in the style of van Gogh could outnumber actual photographs of van Gogh paintings in A.I.’s training data, and this may lead to errors and distortions down the road. (Early signs of this problem will be hard to detect because the leading A.I. models are closed to outside scrutiny, the researchers said.)

Why collapse happens

All of these problems arise because A.I.-generated data is often a poor substitute for the real thing.

This is sometimes easy to see, like when chatbots state absurd facts or when A.I.-generated hands have too many fingers.

But the differences that lead to model collapse aren’t necessarily obvious — and they can be difficult to detect.

When generative A.I. is “trained” on vast amounts of data, what’s really happening under the hood is that it is assembling a statistical distribution — a set of probabilities that predicts the next word in a sentence, or the pixels in a picture.
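
To make that idea concrete, here is a toy sketch, in Python, of the next-word probabilities a language model effectively learns. (The ten-word corpus and the function name are invented for illustration; real models estimate these patterns from trillions of words using billions of parameters, not a lookup table.)

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus; real models train on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which, one pair at a time.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word_distribution(word):
    """Turn raw follow-counts into a probability distribution."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The model's "knowledge" is just this table of probabilities, scaled up enormously; sampling from it repeatedly, and retraining on those samples, is what the collapse experiments probe.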

For example, when we trained an A.I. to imitate handwritten digits, its output could be arranged into a statistical distribution that looks like this:

Chart: the distribution of A.I.-generated data, with examples of initial A.I. output. (The distribution shown here is simplified for clarity.)

The peak of this bell-shaped curve represents the most probable A.I. output — in this case, the most typical A.I.-generated digits. The tail ends describe output that is less common.

Notice that when the model was trained on human data, it had a healthy spread of possible outputs, which you can see in the width of the curve above.

But after it was trained on its own output, this is what happened to the curve:

Distribution of A.I.-generated data when trained on its own output

It gets taller and narrower. As a result, the model becomes more and more likely to produce a smaller range of output, and the output can drift away from the original data.

Meanwhile, the tail ends of the curve — which contain the rare, unusual or surprising outcomes — fade away.

This is a telltale sign of model collapse: Rare data becomes even rarer.

If this process went unchecked, the curve would eventually become a spike:

Distribution of A.I.-generated data when trained on its own output

This was when all of the digits became identical, and the model completely collapsed.
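
A minimal numpy sketch of this narrowing, under two stated assumptions: the "model" here is nothing more than a normal distribution fitted to its training data, and it slightly under-samples rare, tail-end outputs (represented by an invented `tail_loss` factor; the value is illustrative, not measured from any real system). Each generation is trained on the previous generation's output, optionally mixed with fresh real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(generations, real_fraction=0.0, n=20_000, tail_loss=0.95):
    """Fit a normal 'model' to data, sample from it, refit, and repeat.

    tail_loss < 1 mimics a generative model's tendency to under-sample
    rare outputs (an assumption for illustration). real_fraction controls
    how much fresh, real data is mixed in at every generation.
    """
    mu, sigma = 0.0, 1.0  # generation 0 was trained on real data ~ N(0, 1)
    for _ in range(generations):
        synthetic = rng.normal(mu, tail_loss * sigma, n)
        fresh = rng.normal(0.0, 1.0, int(n * real_fraction))
        data = np.concatenate([synthetic, fresh])
        mu, sigma = data.mean(), data.std()  # "retrain" on the new data
    return sigma

pure = simulate(30)                      # trained only on its own output
mixed = simulate(30, real_fraction=1.0)  # equal parts synthetic and real
print(f"spread after 30 generations, pure synthetic: {pure:.2f}")
print(f"spread after 30 generations, with real data: {mixed:.2f}")
```

Trained purely on its own output, the spread shrinks every generation and heads toward a spike; with a steady supply of fresh real data mixed in, it settles near its original width, matching what the researchers reported.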

Why it matters

This doesn’t mean generative A.I. will grind to a halt anytime soon.

The companies that make these tools are aware of these problems, and they will notice if their A.I. systems start to deteriorate in quality.

But it may slow things down. As existing sources of data dry up or become contaminated with A.I. “slop,” researchers say, it becomes harder for newcomers to compete.

A.I.-generated words and images are already beginning to flood social media and the wider web. They’re even hiding in some of the data sets used to train A.I., the Rice researchers found.

“The web is becoming increasingly a dangerous place to look for your data,” said Sina Alemohammad, a graduate student at Rice who studied how A.I. contamination affects image models.

Big players will be affected, too. Computer scientists at N.Y.U. found that when there is a lot of A.I.-generated content in the training data, it takes more computing power to train A.I. — which translates into more energy and more money.

“Models won’t scale anymore as they should be scaling,” said Julia Kempe, the N.Y.U. professor who led this work.

The leading A.I. models already cost tens to hundreds of millions of dollars to train, and they consume staggering amounts of energy, so this can be a sizable problem.

‘A hidden danger’

Finally, there’s another threat posed by even the early stages of collapse: an erosion of diversity.

And it’s an outcome that could become more likely as companies try to avoid the glitches and “hallucinations” that often occur with A.I. data.

This is easiest to see when the data matches a form of diversity that we can visually recognize — people’s faces:

This set of A.I. faces was created by the same Rice researchers who produced the distorted faces above. This time, they tweaked the model to avoid visual glitches.

A grid of A.I.-generated faces showing variations in their poses, expressions, ages and races.

This is the output after they trained a new A.I. on the previous set of faces. At first glance, it may seem like the model changes worked: The glitches are gone.

After one generation of training on A.I. output, the A.I.-generated faces appear more similar.

After two generations …

After two generations of training on A.I. output, the A.I.-generated faces are less diverse than the original image.

After three generations …

After three generations of training on A.I. output, the A.I.-generated faces grow more similar.

After four generations, the faces all appeared to converge.

After four generations of training on A.I. output, the A.I.-generated faces appear almost identical.

This drop in diversity is “a hidden danger,” Mr. Alemohammad said. “You might just ignore it and then you don’t understand it until it’s too late.”

Just as with the digits, the changes are clearest when most of the data is A.I.-generated. With a more realistic mix of real and synthetic data, the decline would be more gradual.

But the problem is relevant to the real world, the researchers said, and will inevitably occur unless A.I. companies go out of their way to avoid their own output.

Related research shows that when A.I. language models are trained on their own words, their vocabulary shrinks and their sentences become less varied in their grammatical structure — a loss of “linguistic diversity.”

And studies have found that this process can amplify biases in the data and is more likely to erase data pertaining to minorities.

Ways out

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.

One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.

OpenAI and Google have made deals with some publishers or websites to use their data to improve A.I. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement. OpenAI and Microsoft say their use of the content is considered fair use under copyright law.)

Better ways to detect A.I. output would also help mitigate these problems.

Google and OpenAI are working on A.I. “watermarking” tools, which introduce hidden patterns that can be used to identify A.I.-generated images and text.

But watermarking text is challenging, researchers say, because these watermarks can’t always be reliably detected and can easily be subverted (they may not survive being translated into another language, for example).

A.I. slop is not the only reason that companies may need to be wary of synthetic data. Another problem is that there are only so many words on the internet.

Some experts estimate that the largest A.I. models have been trained on a few percent of the available pool of text on the internet. They project that these models may run out of public data to sustain their current pace of growth within a decade.

“These models are so enormous that the entire internet of images or conversations is somehow close to being not enough,” Professor Baraniuk said.

To meet their growing data needs, some companies are considering using today’s A.I. models to generate data to train tomorrow’s models. But researchers say this can lead to unintended consequences (such as the drop in quality or diversity that we saw above).

There are certain contexts where synthetic data can help A.I.s learn — for example, when output from a larger A.I. model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go.

And new research suggests that when humans curate synthetic data (for example, by ranking A.I. answers and choosing the best one), it can alleviate some of the problems of collapse.

Companies are already spending a lot on curating data, Professor Kempe said, and she believes this will become even more important as they learn about the problems of synthetic data.

But for now, there’s no replacement for the real thing.

About the data

To produce the images of A.I.-generated digits, we followed a procedure outlined by researchers. We first trained a type of neural network known as a variational autoencoder using a standard data set of 60,000 handwritten digits.

We then trained a new neural network using only the A.I.-generated digits produced by the previous neural network, and repeated this process in a loop 30 times.

To create the statistical distributions of A.I. output, we used each generation’s neural network to create 10,000 drawings of digits. We then used the first neural network (the one that was trained on the original handwritten digits) to encode these drawings as a set of numbers, known as a “latent space” encoding. This allowed us to quantitatively compare the output of different generations of neural networks. For simplicity, we used the average value of this latent space encoding to generate the statistical distributions shown in the article.

5 Great Stargazing Trains

Stargazing, it turns out, doesn’t have to be a stationary activity.

On railway lines around the world, from the Arctic Circle to New Zealand, a select set of evening train excursions take riders deep into dark-sky territory — some en route to remote station stops decked out with telescopes, others featuring onboard astronomers.

These five rail journeys (all of which are accessible) range from two- to three-hour desert outings to a hunt for the northern lights. One route even has a planetarium on rails. All promise a renewed appreciation of train travel — and of our pale blue dot’s improbable place in the cosmos.

Nevada

Any stargazing train worth its salt requires one thing: a dark sky. The Star Train resoundingly checks that box, traveling through a part of eastern Nevada that is one of the least-populated places in the lower 48.

Run by the Nevada Northern Railway in partnership with nearby Great Basin National Park, the train departs the historic East Ely Depot, in Ely, Nev., early enough in the evening to catch the sunset over the Steptoe Valley, and then cruises through darkening skies to its destination: a remote corner of the desert appropriately called Star Flat, where a stargazing platform outfitted with telescopes awaits. There, riders disembark (equipped with red-light necklaces to help preserve their night vision) and take turns viewing the cosmos, guided by professional astronomers. (Last year’s onboard stargazing guides came from Caltech; in previous seasons, the National Park Service’s Dark Rangers, who specialize in night-sky activities, accompanied trips.)

The Star Train makes its two-and-a-half-hour round-trip journey most Friday evenings between mid-May and mid-September, and tickets ($65 for adults) can sell out almost a year in advance — though members of the Nevada Northern Railway Museum get early access. Alternatively, the railroad’s more frequent Sunset, Stars and Champagne excursions trade telescopes for desert sundowners but feature the same expert stargazers and the same Nevada night sky, which is often dark enough to see the Milky Way with the naked eye.

New Mexico

While plenty of heritage railroads across the United States offer twilight rides and nighttime excursions, at the moment there’s only one other dedicated, regularly scheduled stargazing train in North America besides the Star Train: the Stargazer, operated by Sky Railway, in Santa Fe, N.M.

Much like its Nevada counterpart, the Stargazer makes a two-and-a-half-hour round trip through dark-sky country, though in this case, the journey really is the destination, because it doesn’t make any stops. More of a rolling night-sky revue, the Stargazer features live music and professional astronomers who share their celestial knowledge and stories as the train rumbles into the vast Galisteo Basin south of Santa Fe. Sky Railway’s colorfully painted trains feature heated, enclosed passenger cars to stave off the evening chill and flatbed cars open to the night sky.

Departing from the Santa Fe Depot downtown, the train normally runs once a month (adult tickets from $139, including a champagne welcome toast). Sky Railway also occasionally schedules excursions for special celestial events.

New Zealand

With its alpine landscapes and rugged coastline, New Zealand’s South Island is practically tailor-made for scenic daytime train journeys. But when night falls, the sparsely populated island — home to the Southern Hemisphere’s largest International Dark Sky Reserve — is heaven for stargazers, too.

This year, Great Journeys New Zealand, which operates the country’s tourist-centric long-distance trains, is offering a special nighttime run of the Coastal Pacific, whose route skirts the South Island’s northeastern coast. Timed to Matariki, the Maori new year, which is heralded by the first rising of the Pleiades star cluster, the eight-hour round trip from Christchurch is a cultural and astronomical celebration.

After the first half of a four-course onboard dinner, the train arrives in Kaikoura, in dark-sky country, for a guided stargazing stop with a range of telescopes — and fire pits and a night market. (The rain plan involves a virtual stargazing session at the local museum using virtual reality headsets.) Dinner resumes back on the train as it returns to Christchurch. This is a strictly limited engagement, on the rails for one night only: July 11, for 499 New Zealand dollars, about $295, per person.

Norway

In the far northern reaches of Norway, inside the Arctic Circle, you can ride a train that chases another wonder of the night sky: the aurora borealis. Twice a week from October to March, the Northern Lights Train takes its riders into the dark polar night in pursuit of the aurora’s celestial light show.

From the remote town of Narvik, the train travels along the Ofoten Railway, the northernmost passenger rail line in Western Europe. The destination on this three-hour round-trip excursion (1,495 kroner, or about $160) is Katterat, a mountain village accessible only by rail and free of light pollution, making it an ideal place to spot the aurora. At the Katterat station, local guides and a campfire cookout await, as does a lavvu, the traditional tent used by the Sami people of northern Scandinavia, offering a respite from the cold (as well as hot drinks and an open fire for roasting sausages).

And aboard the train, the lights stay off, which means that on a clear night, you might even catch the northern lights on the way there and back.

Japan

Leave it to Japan to take the stargazing train to another level.

The High Rail 1375 train — so named because it runs along Japan’s highest-elevation railway line (the high point is 1,375 meters, or roughly 4,500 feet, above sea level) — is one of JR East’s deliberately unhurried Joyful Trains, which the railway company describes as “not only a means of transportation, but also a package of various pleasures.” This astronomy-themed train certainly packs plenty of joy into its two cars, with seat upholstery inspired by constellations, a snack bar, a souvenir shop and a planetarium car with a library of astronomy books and images of the night sky projected onto its domed ceiling.

The train makes two daytime runs along the mountainous Koumi Line, taking a little over two hours to travel between Kobuchizawa (accessible by express train from Tokyo) and Komoro. But the main event is the High Rail Hoshizora (“Starry Sky”) evening trip, which includes an extended stop at Nobeyama Station (the highest in the country) for a guided stargazing session. A one-way ride on High Rail 1375, which runs on weekends and occasional weekdays, requires a seat reservation if you’re traveling on a Japan Rail pass, or a stand-alone ticket plus seat reservation (2,440 yen, or about $15). And remember to preorder a special “Starry Sky” bento box.



A Physicist Who Thinks in Poetry from the Cosmic Edge

Much of the praise for Chanda Prescod-Weinstein’s debut book in 2021, “The Disordered Cosmos: A Journey Into Dark Matter, Spacetime, and Dreams Deferred,” lauded the way she used personal experiences in physics to discuss the social and political inequities that exist alongside scientific breakthroughs.

“It contains the narrative of dreams deferred,” Dr. Prescod-Weinstein, a physicist at the University of New Hampshire, explained in April at a bookstore in Chicago. But its very existence, she said, also “represented a dream deferred, because that was not the dream of what my first book was going to be.”

Her second book reclaims that dream. Released on April 7, “The Edge of Space-Time: Particles, Poetry, and the Cosmic Dream Boogie” is less pain and more play, a homage to the big questions that made Dr. Prescod-Weinstein want to become a physicist in the first place. She begins the book by asserting that it is humanity’s duty to uncover and share the story of our universe. Her latest offering toward that duty is a journey through physics that is tightly bound to her own cultural roots.

In the midst of a multicity book tour, Dr. Prescod-Weinstein spoke with The New York Times about guiding readers through the cosmos from her own point of view and about some of the art, poetry and literature she drew on to shape that journey. This conversation has been edited for brevity and clarity.

Why include so many references to poetry in a book about physics?

I knew poetry before I knew physics. It was part of my upbringing. I loved A.A. Milne’s “Now We Are Six” and Edward Lear’s “Nonsense Limericks.” Both of my books draw their subtitles from Langston Hughes’s “Montage of a Dream Deferred.”

Adrienne Rich’s poem “The Burning of Paper Instead of Children” became a guiding light for how my work would move in the world. It also opened up for me that I need language. That’s true among physicists. Even an equation is a sentence; even an equation is telling a story.

As physicists, we’re always working in language to connect what we learn with what we know. Poetry is one of the first places that my brain goes to draw those links. Language, as it moves in my brain, is often in Hughes and Rich and Shakespeare. Those are the lines that flicker up for me.

What if we got away from the argument that doing cosmology and particle physics is practical or materially valuable? Then we have to accept that we’re like the poets. What we do is important culturally in the same way poetry is. A piece of this book is me saying there is value in banding with the poets, and fighting for the value of being curious and trying to articulate the world with whatever tools are available to us. Not for the purposes of selling something, but for the purpose of fulfilling our humanity.

Another theme throughout the book is the story of Lewis Carroll’s Alice and her adventures in Wonderland.

Being a science adviser on future installments in The Legendborn Cycle, a fantasy series written by Tracy Deonn, is one reason Alice is in my book. It has allowed me to be open to the playful side that physics, as a Black queer person, can take from you. I wanted the book to be whimsical, because that’s who I was when I first arrived in physics, and that’s who I want to be when I die.

Part of the call of quantum physics is to change what our sense and sensibility are. When you look at the world through this framework — like the idea that particles have spin but don’t really spin — it sounds like nonsense. Except that’s literally how the universe works. Physics is our “through the looking glass.” It’s real.

Your first chapter invites readers to reflect on the metaphors used to describe the universe, like the “fabric” of space-time or electromagnetic “fields.” Why open in this way?

A lot of books about quantum physics start with its history. I wanted as much as possible not to just do that. I had actually planned to start it with the Stern-Gerlach experiment of 1922. But then I read an essay by the poet Natasha Trethewey about abiding metaphors and started to ask myself what the abiding metaphors of my physics training were.

We don’t ever take time in our classes to ask, “What do we mean when we say ‘space’? What do we mean when we say ‘space-time’?” There are these metaphysical questions that I often told myself were for the philosophers. This book was me letting myself think of them as physics.

One metaphor you invoke is the “edge” — not only the edge of the universe and of scientists’ understanding, but also existing at the edge of certain identities.

In “Disordered Cosmos,” I talked a lot about being at the margin and looking toward the center. With “The Edge of Space-Time,” I’m choosing to make the margin the center of the story. Part of that was me fully embracing what makes me the physicist I am. I’m an L.A. Dodgers fan. I love “Alice in Wonderland.” I love “Star Trek.” There’s lots of all of that in the book.

Picking a metaphor is a culturally situated decision. I wrote a line that says black holes are the best laid edges in the universe. I did, at some point, think that only some people were going to get this. But for people who don’t understand the reference to Black hairstyles, the sentence is still legible. And for those who do, it will feel like we just had an in-group moment. Anyone who thinks about laying their edges deserves to have an in-group moment in a physics book. Because we are physics, too.

Black students are often told that if you want to be a physicist, then you will make yourself as close to such-and-such mold as possible. At a young age, we have this understanding that whiteness and science are associated with each other, but we are also witnessing in ourselves that this can’t be entirely correct. There’s this narration of, “Well, sure, you can be Black in physics, but that means you have to acclimate to the ‘in physics’ part, and never that physics has to acclimate to the Black part.”

I use the example of rapper Big K.R.I.T.’s song “My Sub Pt. 3 (Big Bang),” in which someone tries to wire up subwoofers in his car but fries the wires because he doesn’t ground them properly. I don’t know if Big K.R.I.T. would think of this as a science story, but I think we should learn to read it as one. Not to contain it in science, but to say it overlaps there. This can be a rap song. It can be about the cultural significance of subwoofers and the Big Bang as a metaphor for the beat. And it can also be about cosmology and about how everybody who wires up cars or does this kind of work is a scientist, too.

How do you want readers to approach this book?

There is this feeling that you’re supposed to read a book like this and walk away an expert. That’s actually not the point of this book at all. The point is to wander through physics. Even if math terrifies you, you are entitled to spend some time with it.

And so here, I have made you a book with a bunch of tidbits on the oddities of the universe. The universe is stranger and more queer and more wonderful and more full of possibility than whatever limitations you might be experiencing right now. Physics challenges what we are told are social norms. For example, non-trinary neutrinos are fundamental to our standard model of physics.

“Non-trinary,” as in they shift between three different forms.

Non-trinary is natural. It’s such a challenge to the current anti-trans rhetoric that says people can only ever be one thing.

I don’t need my book to be the most important thing that someone reads. But I want it to be a source of hope. If it reminds you that, as my mom says, the universe is bigger than the bad things that are happening to us, then that’s all you need to remember. I’m good with that.

Footage shows Central Valley dairy workers kicking young calves, pulling them with pliers

In late February, animal rights activists flew a drone over a calf ranch in the Central Valley and watched as workers kicked and punched the animals.

For the record:

7:15 p.m. May 12, 2026: This article has been updated to reflect that no calves from Agresti Calf Ranch have ever gone on to be used for Clover Sonoma milk supplies, and the calf ranch opened only in 2025. In additional comments, Clover Sonoma also said in the future, no animals from Agresti Calf Ranch will be part of its supply.

Footage reviewed by The Times shows a worker pulling a calf by the nose with pliers.

It shows two workers removing the budding horns of a calf with a hot iron. While one held the frightened animal’s head, the other — wearing a sweatshirt with an image of the Virgin Mary — applied the iron to a horn. After a puff of smoke, the calf fell to its side, appearing motionless.

Both male and female calves produce horns. To prevent injury to the animals and their handlers, these are commonly removed. Humane guidelines require anesthesia.

The footage was collected by the group Direct Action Everywhere, known for tactics including releasing beagles from medical breeding facilities and abused calves from farms. It was shot at the Agresti Calf Ranch in Ceres, near Modesto, which is certified by the American Humane Society for its ethical treatment of animals. The workers could not be reached for comment. One was subsequently terminated, the Humane Society said.


The Agresti Calf Ranch opened in 2025 and is operated by the owners of Double D Dairy, just up the road. Double D Dairy owns more than 10,000 cows across several operations.

The owner of Double D, Dominic Assali, declined to answer questions in person. A phone number for the dairy online is disconnected. In response to an email to his personal account, Assali said, “Animal welfare and safety are incredibly important to us, and we have a zero-tolerance policy for any mistreatment.

“We’ll always take immediate, thorough action to address any operational issues, as we have in this instance,” the email said.

The American Humane Society is a 150-year-old nonprofit focused on animal welfare. Among other things, it certifies animal safety on farms as well as on movie sets. In a statement, it said only 10% of animals raised on farms in the U.S. are certified as humanely treated.

Assali is the grandson of the farm’s founders, Harold and Marlene Agresti. He is a board member of Western United Dairies, the largest dairy trade group in California.


The mistreatment captured on video has also created a headache for a prominent California sustainable milk brand, Clover Sonoma, based in Sonoma County.

It gets 10% to 15% of its milk from Double D, and Assali and his family are featured on Clover Sonoma’s website. No calves from Agresti Calf Ranch have ever gone on to be used in Clover Sonoma milk supplies, the company said in a statement. It’s unclear whether the abused calves were being raised for beef or dairy.

A Clover Sonoma sign hung outside the main dairy complex on a recent visit.

Clover Sonoma markets its milk, yogurt and cheese products as humanely sourced and environmentally sound. In 2000, it became the first dairy company to receive a cruelty-free certification from the American Humane Society. Its website also features an “Our Promise” page, which states the company demands “the humane treatment of animals.”

“We were deeply concerned by the reported mistreatment of some cows captured on video at Agresti Calf Ranch during a separate cow operation,” the company said in an email.


“The rough handling shown at Agresti Calf Ranch is contrary and inconsistent with the humane practices we have fostered for decades and which we demand of all our suppliers.”

Clover Sonoma said it suspended business with Double D as soon as it became aware of the incidents and began “a rigorous audit,” which has since concluded.

“Clover and the American Humane Society have concluded that the mistreatment was an isolated issue, not systemic or reflective of Agresti Calf Ranch’s personnel. Corrections have been made, including the termination of the employee in the video. As such, we are comfortable reinstating the milk from Double D Dairy.”

After this story published, Clover went further and said a condition of Double D’s reinstatement will be that no animals from Agresti Calf Ranch will be part of Clover’s dairy supply.

A statement from the Humane Society said Clover Sonoma is working with Double D to strengthen its whistleblower policy and training, and has “reiterated its commitment to ongoing independent, third-party audits,” with both announced and unannounced visits.


Clover Sonoma mainly buys and processes milk from dairies in verdant Sonoma County, as the company’s marketing suggests. Double D Dairy is one of its few suppliers in the Central Valley, which is associated more with industrial-scale agriculture.

On a recent weekday, the calf ranch and dairy farm were visible from a public road. Holstein calves, a popular dairy breed, could be seen in cages through small trees in front of the enclosures. The sound of mooing and a pressure washer could be heard. The smell of manure and dirt wafted in the humid air.

Most dairy companies remove calves from their mothers after birth, raising them separately so they don’t take the mother’s commercially valuable milk. Some dairy farms send calves out to third-party calf ranches for rearing. Others raise them on-site. Female calves are typically raised to become milk cows. Male calves are sent away to become beef or other meat-based products, such as pet food.

A 2025 State Water Board document shows the farm houses an average of 700 calves at any one time, with a maximum of 1,400.

The Direct Action Everywhere activists were recently on a public road near Double D’s main farm, flying a drone over the property. Within 30 minutes of their arrival, seven Stanislaus County sheriff’s vehicles arrived and surrounded the activists.


A heavily armed officer asked to see the drone pilot’s Federal Aviation Administration license, which he provided. After confirming it was valid, a sheriff’s deputy — one of nine at the scene — told the activists they could remain on the road but could not trespass.

Asked about the heavy response, a deputy said there had been several recent violent incidents at the site involving animal rights groups, and that the groups had sent in “busloads” of activists.

The Times reached out to the Sheriff’s Office for more details about those incidents but did not receive a response.

Temple Grandin, an author and professor of animal science at Colorado State University, said that punching and kicking livestock is considered abusive.

An expert in livestock welfare, she said that handlers can tap, push and nudge animals. But if the level of force goes beyond what could bend the side of a cardboard box, “it’s abuse. Period.”


She said the calves’ reaction to the hot iron indicates that pain medication, such as lidocaine, was not applied before the procedure. Double D did not respond to a question about whether medication was given before the procedure.

A pickup truck rolls by the barns at Agresti Calf Ranch at sunrise in Ceres. (Tomas Ovalle/For The Times)
