When A.I.’s Output Is a Threat to A.I. Itself

The internet is becoming awash in words and images generated by artificial intelligence.

Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.

A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.

In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.

Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:

This is part of a data set of 60,000 handwritten digits.

When we trained an A.I. to mimic those digits, its output looked like this.

This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues?

After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.

After 30 generations, they converge into a single shape.

While this is a simplified example, it illustrates a problem on the horizon.

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”

The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).

How an A.I. that draws digits “collapses” after being trained on its own output

If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But researchers say it would still occur unless the synthetic data were complemented with a lot of new, real data.
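That claim is easy to test in miniature. Here is a minimal sketch of our own, not the researchers’ code: it stands in for the whole pipeline with a one-dimensional bell curve, where each “generation” fits the curve to a mix of fresh real data and the previous generation’s synthetic samples, then the next generation trains on what it produces. All names and parameters here are our own choices.

```python
import numpy as np

# A minimal sketch, not the researchers' code: each "generation" fits a
# one-dimensional Gaussian to a mix of fresh real data and the previous
# generation's synthetic samples, then samples from the fitted model.
rng = np.random.default_rng(0)
N = 50  # samples per generation (kept small to make the effect visible)

def spread_after(real_fraction, generations=200):
    mu, sigma = 0.0, 1.0  # the fitted model, which starts at the truth
    for _ in range(generations):
        n_real = int(N * real_fraction)
        real = rng.normal(0.0, 1.0, n_real)            # fresh human data
        synthetic = rng.normal(mu, sigma, N - n_real)  # prior model's output
        mix = np.concatenate([real, synthetic])
        mu, sigma = mix.mean(), mix.std()              # "retrain": refit
    return sigma

for frac in (0.0, 0.1, 0.5):
    print(f"real fraction {frac:.0%}: spread after 200 generations = {spread_after(frac):.3f}")
```

On most runs, the pure-synthetic spread decays toward zero while runs with real data mixed in hold near the truth. In this toy, even a small real fraction anchors the fit; real models degrade for additional reasons the toy leaves out, such as limited capacity and approximation error.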

Degenerative A.I.

In one example, the researchers trained a large language model on its own sentences over and over again, asking it to complete the same prompt after each round.

When they asked the A.I. to complete a sentence that started with “To cook a turkey for Thanksgiving, you…,” at first, it responded like this:

Even at the outset, the A.I. “hallucinates.” But when the researchers further trained it on its own sentences, it got a lot worse…

An example of text generated by an A.I. model.

After two generations, it started simply printing long lists.

An example of text generated by an A.I. model after being trained on its own sentences for 2 generations.

And after four generations, it began to repeat phrases incoherently.

An example of text generated by an A.I. model after being trained on its own sentences for 4 generations.

“The model becomes poisoned with its own projection of reality,” the researchers wrote of this phenomenon.

This problem isn’t just confined to text. Another team of researchers at Rice University studied what would happen when the kinds of A.I. that generate images are repeatedly trained on their own output — a problem that could already be occurring as A.I.-generated images flood the web.

They found that glitches and image artifacts started to build up in the A.I.’s output, eventually producing distorted images with wrinkled patterns and mangled fingers.

When A.I. image models are trained on their own output, they can produce distorted images, mangled fingers or strange patterns.

A.I.-generated images by Sina Alemohammad and others.

“You’re kind of drifting into parts of the space that are like a no-fly zone,” said Richard Baraniuk, a professor who led the research on A.I. image models.

The researchers found that the only way to stave off this problem was to ensure that the A.I. was also trained on a sufficient supply of new, real data.

While selfies are certainly not in short supply on the internet, there could be categories of images where A.I. output outnumbers genuine data, they said.

For example, A.I.-generated images in the style of van Gogh could outnumber actual photographs of van Gogh paintings in A.I.’s training data, and this may lead to errors and distortions down the road. (Early signs of this problem will be hard to detect because the leading A.I. models are closed to outside scrutiny, the researchers said.)

Why collapse happens

All of these problems arise because A.I.-generated data is often a poor substitute for the real thing.

This is sometimes easy to see, like when chatbots state absurd facts or when A.I.-generated hands have too many fingers.

But the differences that lead to model collapse aren’t necessarily obvious — and they can be difficult to detect.

When generative A.I. is “trained” on vast amounts of data, what’s really happening under the hood is that it is assembling a statistical distribution — a set of probabilities that predicts the next word in a sentence, or the pixels in a picture.
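To make “statistical distribution” concrete, here is a toy version of the idea, far simpler than any real model, and entirely our own illustration: count what follows each word in a tiny corpus and turn the counts into probabilities.

```python
from collections import Counter

# Toy illustration: estimate a next-word distribution from a tiny corpus.
# Real models learn far richer conditional distributions, but the core
# idea is the same: probabilities over what comes next.
corpus = "to cook a turkey you roast it . to cook rice you boil it .".split()
nexts = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "cook")
total = sum(nexts.values())
print({word: count / total for word, count in nexts.items()})
# {'a': 0.5, 'rice': 0.5}
```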

For example, when we trained an A.I. to imitate handwritten digits, its output could be arranged into a statistical distribution that looks like this:

Distribution of A.I.-generated data

Examples of initial A.I. output:

The distribution shown here is simplified for clarity.

The peak of this bell-shaped curve represents the most probable A.I. output — in this case, the most typical A.I.-generated digits. The tail ends describe output that is less common.

Notice that when the model was trained on human data, it had a healthy spread of possible outputs, which you can see in the width of the curve above.

But after it was trained on its own output, this is what happened to the curve:

Distribution of A.I.-generated data when trained on its own output

It gets taller and narrower. As a result, the model becomes more and more likely to produce a smaller range of output, and the output can drift away from the original data.

Meanwhile, the tail ends of the curve — which contain the rare, unusual or surprising outcomes — fade away.

This is a telltale sign of model collapse: Rare data becomes even rarer.

If this process went unchecked, the curve would eventually become a spike:

Distribution of A.I.-generated data when trained on its own output

This was when all of the digits became identical, and the model completely collapsed.
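One way to see why the curve must narrow, under strong simplifying assumptions of our own rather than the paper’s full argument: suppose each generation fits a one-dimensional Gaussian by maximum likelihood to n samples drawn from the previous generation’s fit. The standard variance estimate is biased low, so the spread shrinks in expectation every round.

```latex
E\left[\hat{\sigma}^2_{t+1} \mid \sigma^2_t\right] = \frac{n-1}{n}\,\sigma^2_t
\quad\Longrightarrow\quad
E\left[\sigma^2_t\right] = \left(\frac{n-1}{n}\right)^{t}\sigma^2_0 \to 0
\text{ as } t \to \infty.
```

Sampling noise also jostles the fitted mean from generation to generation, so the curve narrows and drifts at the same time, the two failure modes the digits showed.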

Why it matters

This doesn’t mean generative A.I. will grind to a halt anytime soon.

The companies that make these tools are aware of these problems, and they will notice if their A.I. systems start to deteriorate in quality.

But it may slow things down. As existing sources of data dry up or become contaminated with A.I. “slop,” researchers say, it becomes harder for newcomers to compete.

A.I.-generated words and images are already beginning to flood social media and the wider web. They’re even hiding in some of the data sets used to train A.I., the Rice researchers found.

“The web is becoming increasingly a dangerous place to look for your data,” said Sina Alemohammad, a graduate student at Rice who studied how A.I. contamination affects image models.

Big players will be affected, too. Computer scientists at N.Y.U. found that when there is a lot of A.I.-generated content in the training data, it takes more computing power to train A.I. — which translates into more energy and more money.

“Models won’t scale anymore as they should be scaling,” said Julia Kempe, the N.Y.U. professor who led this work.

The leading A.I. models already cost tens to hundreds of millions of dollars to train, and they consume staggering amounts of energy, so this can be a sizable problem.

‘A hidden danger’

Finally, there’s another threat posed by even the early stages of collapse: an erosion of diversity.

And it’s an outcome that could become more likely as companies try to avoid the glitches and “hallucinations” that often occur with A.I. data.

This is easiest to see when the data matches a form of diversity that we can visually recognize — people’s faces:

This set of A.I. faces was created by the same Rice researchers who produced the distorted faces above. This time, they tweaked the model to avoid visual glitches.

A grid of A.I.-generated faces showing variations in their poses, expressions, ages and races.

This is the output after they trained a new A.I. on the previous set of faces. At first glance, it may seem like the model changes worked: The glitches are gone.

After one generation of training on A.I. output, the A.I.-generated faces appear more similar.

After two generations …

After two generations of training on A.I. output, the A.I.-generated faces are less diverse than the original set.

After three generations …

After three generations of training on A.I. output, the A.I.-generated faces grow more similar.

After four generations, the faces all appeared to converge.

After four generations of training on A.I. output, the A.I.-generated faces appear almost identical.

This drop in diversity is “a hidden danger,” Mr. Alemohammad said. “You might just ignore it and then you don’t understand it until it’s too late.”

Just as with the digits, the changes are clearest when most of the data is A.I.-generated. With a more realistic mix of real and synthetic data, the decline would be more gradual.

But the problem is relevant to the real world, the researchers said, and will inevitably occur unless A.I. companies go out of their way to avoid their own output.

Related research shows that when A.I. language models are trained on their own words, their vocabulary shrinks and their sentences become less varied in their grammatical structure — a loss of “linguistic diversity.”

And studies have found that this process can amplify biases in the data and is more likely to erase data pertaining to minorities.

Ways out

Perhaps the biggest takeaway of this research is that high-quality, diverse data is valuable and hard for computers to emulate.

One solution, then, is for A.I. companies to pay for this data instead of scooping it up from the internet, ensuring both human origin and high quality.

OpenAI and Google have made deals with some publishers and websites to use their data to improve A.I. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement. OpenAI and Microsoft say their use of the content is fair use under copyright law.)

Better ways to detect A.I. output would also help mitigate these problems.

Google and OpenAI are working on A.I. “watermarking” tools, which introduce hidden patterns that can be used to identify A.I.-generated images and text.

But watermarking text is challenging, researchers say, because these watermarks can’t always be reliably detected and can easily be subverted (they may not survive being translated into another language, for example).
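To give a feel for how a text watermark can work, here is a toy sketch of one scheme proposed in academic research (a “green list” watermark), not Google’s or OpenAI’s actual tools. At each step, the previous word seeds a pseudorandom split of the vocabulary; the generator favors “green” words, and a detector flags text whose green fraction is improbably high. The vocabulary and helper names are our own illustration.

```python
import hashlib
import random

# Toy "green list" watermark sketch. Not a production system: real schemes
# operate on model tokens and logits, with statistical detection tests.
VOCAB = ["the", "a", "turkey", "oven", "roast", "baste", "serve", "rest"]

def green_list(prev_token: str) -> set:
    # The previous token deterministically seeds a split of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = VOCAB[:]
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])  # half the vocabulary is "green"

def green_fraction(tokens: list) -> float:
    # Detection: what share of words fall in their context's green list?
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Watermarked text should score well above the ~0.5 expected by chance.
print(green_fraction("roast the turkey then rest the turkey".split()))
```

The fragility the researchers describe follows directly: paraphrasing or translating the text changes the words, which scrambles the green lists and erases the signal.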

A.I. slop is not the only reason that companies may need to be wary of synthetic data. Another problem is that there are only so many words on the internet.

Some experts estimate that the largest A.I. models have been trained on a few percent of the available pool of text on the internet. They project that these models may run out of public data to sustain their current pace of growth within a decade.

“These models are so enormous that the entire internet of images or conversations is somehow close to being not enough,” Professor Baraniuk said.

To meet their growing data needs, some companies are considering using today’s A.I. models to generate data to train tomorrow’s models. But researchers say this can lead to unintended consequences (such as the drop in quality or diversity that we saw above).

There are certain contexts where synthetic data can help A.I.s learn — for example, when output from a larger A.I. model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go.
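The “verifiable answer” case can be shown in a few lines. This is a minimal sketch of the idea for arithmetic, with a deliberately unreliable stand-in playing the role of the model; everything here is our own illustration.

```python
# A minimal sketch of "verify before you train" for synthetic data:
# keep a model's answers only when they can be checked mechanically.
def model_answer(a: int, b: int) -> int:
    # Hypothetical stand-in for a real model call; deliberately flaky.
    return a * b if (a + b) % 7 else a * b + 1

clean_training_pairs = [
    ((a, b), ans)
    for a in range(1, 20)
    for b in range(1, 20)
    if (ans := model_answer(a, b)) == a * b  # verify against ground truth
]
print(f"kept {len(clean_training_pairs)} of {19 * 19} synthetic examples")
```

Because only verified answers survive the filter, the synthetic data can’t drift from the truth the way unchecked output can.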

And new research suggests that when humans curate synthetic data (for example, by ranking A.I. answers and choosing the best one), it can alleviate some of the problems of collapse.

Companies are already spending a lot on curating data, Professor Kempe said, and she believes this will become even more important as they learn about the problems of synthetic data.

But for now, there’s no replacement for the real thing.

About the data

To produce the images of A.I.-generated digits, we followed a procedure outlined by researchers. We first trained a type of neural network known as a variational autoencoder using a standard data set of 60,000 handwritten digits.

We then trained a new neural network using only the A.I.-generated digits produced by the previous neural network, and repeated this process in a loop 30 times.
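In code, that loop looks roughly like the following. This is a compact sketch under our own assumptions (a small fully connected variational autoencoder, a handful of epochs per generation), not the exact architecture or hyperparameters behind the article’s images, and it is slow to run in full.

```python
import torch
from torch import nn
from torchvision import datasets, transforms

LATENT = 16  # size of the latent space; our choice, not the article's

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def train(images, epochs=5):
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = torch.utils.data.DataLoader(images, batch_size=256, shuffle=True)
    for _ in range(epochs):
        for x in loader:
            recon, mu, logvar = model(x)
            rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            opt.zero_grad()
            (rec + kld).backward()
            opt.step()
    return model

def sample(model, n=60000):
    with torch.no_grad():
        return model.dec(torch.randn(n, LATENT))  # n flattened 28x28 digits

mnist = datasets.MNIST(".", download=True, transform=transforms.ToTensor())
real = torch.stack([img for img, _ in mnist]).flatten(1)  # 60,000 x 784

gen0 = train(real)  # generation 0: the only model that ever sees real digits
model = gen0
for generation in range(30):
    model = train(sample(model))  # each new model trains only on A.I. output
```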

To create the statistical distributions of A.I. output, we used each generation’s neural network to create 10,000 drawings of digits. We then used the first neural network (the one that was trained on the original handwritten digits) to encode these drawings as a set of numbers, known as a “latent space” encoding. This allowed us to quantitatively compare the output of different generations of neural networks. For simplicity, we used the average value of this latent space encoding to generate the statistical distributions shown in the article.
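Continuing the sketch above, the latent-space comparison can be expressed in a few lines: every generation’s output is encoded with the generation-0 model, and each drawing is reduced to the average of its latent values, matching the simplification described here.

```python
# Continuing the sketch above. Encode digits with the generation-0 model
# and, as described in the text, keep only the average latent value per
# drawing; histograms of these averages give the bell curves shown earlier.
def latent_summary(digits):
    with torch.no_grad():
        h = gen0.enc(digits)
        return gen0.mu(h).mean(dim=1)  # one number per drawing

scores = latent_summary(sample(model, n=10000))
print(scores.std())  # the spread shrinks generation over generation
```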

Cancer survival rates soar nationwide, but L.A. doctors warn cultural and educational barriers leave some behind

The American Cancer Society’s 2026 Cancer Statistics report, released Tuesday, marks a major milestone for U.S. cancer survival rates. For the first time, the annual report shows that 70% of Americans diagnosed with cancer can expect to live at least five years, compared with just 49% in the mid-1970s.

The new findings, based on data from national cancer records and death statistics from 2015 to 2021, also show promising progress in survival rates for people with the deadliest, most advanced and hardest-to-treat cancers when compared with rates from the mid-1990s. The five-year survival rate for myeloma, for example, nearly doubled (from 32% to 62%). The survival rate for liver cancer tripled (from 7% to 22%), for late-stage lung cancer nearly doubled (from 20% to 37%), and for both melanoma and rectal cancer more than doubled (from 16% to 35% and from 8% to 18%, respectively).

For late-stage disease across all cancers combined, the five-year survival rate more than doubled since the mid-1990s, rising from 17% to 35%.

The report also documents a 34% drop in cancer mortality since 1991, translating to an estimated 4.8 million fewer cancer deaths between 1991 and 2023. These significant public health advances result from years of public investment in research, early detection and prevention, and improved cancer treatment, according to the report.

“This stunning victory is largely the result of decades of cancer research that provided clinicians with the tools to treat the disease more effectively, turning many cancers from a death sentence into a chronic disease,” said Rebecca Siegel, senior scientific director at the American Cancer Society and lead author of the report.

As more people survive cancer, there is also a growing focus on the quality of life after treatment. Patients, families and caregivers face physical, financial and emotional challenges. Dr. William Dahut, the American Cancer Society’s chief scientific officer, said that ongoing innovation must go hand in hand with better support services and policies, so all survivors — not just the privileged — can have “not only more days, but better days.”

Indeed, the report also shows that not everyone has benefited equally from the advances of the last few decades. American Indian and Alaska Native people now have the highest cancer death rates in the country, with death rates from kidney, liver, stomach and cervical cancers roughly double those of white Americans.

Additionally, Black women are more likely to die from breast and uterine cancers than non-Black women, and Black men have the highest cancer rates of any American demographic. The report connects these disparities in survival to long-standing issues such as income inequity and the effects of past discrimination, like redlining, which shaped where people live and left historically marginalized populations disproportionately exposed to environmental carcinogens.

Dr. René Javier Sotelo, a urologic oncologist at Keck Medicine of USC, notes that the fight against cancer in Southern California, amid long-standing disparities facing vulnerable communities, is very much about overcoming educational, cultural and socioeconomic barriers.

While access to care and insurance options in Los Angeles are relatively robust, many disparities persist because community members often lack crucial information about risk factors, screening and early warning signs. “We need to insist on the importance of education and screening,” Sotelo said. He emphasized that making resources, helplines and culturally tailored materials readily available to everyone is crucial.

He cites penile cancer as a stark example: rates are higher among Latino men in L.A., not necessarily due to lack of access, but because of gaps in awareness and education around HPV vaccination and hygiene.

Despite these persisting inequities, the dramatic nationwide improvement in cancer survival is unquestionably good news, bringing renewed hope to many individuals and families. However, the report also gives a clear warning: Proposed federal cuts to cancer research and health insurance could stop or even undo these important gains.

“We can’t stop now,” warned Shane Jacobson, the American Cancer Society’s chief executive.

“We need to understand that we are not yet there,” Sotelo concurred. “Cancer is still an issue.”

Clashing with the state, L.A. City moves to adopt lenient wildfire ‘Zone Zero’ regulations

The state has spent years in marathon discussions over rules for what residents in wildfire hazard zones must do to make the first five feet from their houses — an area dubbed “Zone Zero” — ember-resistant. On Tuesday, the Los Angeles City Council voted to start creating its own version of the regulations, one more lenient than most proposals currently favored in Sacramento.

Critics of Zone Zero, who are worried about the financial burden and labor required to comply as well as the detrimental impacts to urban ecosystems, have been particularly vocal in Los Angeles. However, wildfire safety advocates worry the measures endorsed by L.A.’s City Council will do little to prevent homes from burning.

“My motion is to get advice from local experts, from the Fire Department, to actually put something in place that makes sense, that’s rooted in science,” said City Councilmember John Lee, who put forth the motion. “Sacramento, unfortunately, doesn’t consult with the largest city in the state — the largest area that deals with wildfires — and so, this is our way of sending a message.”

Tony Andersen — executive officer of the state’s Board of Forestry and Fire Protection, which is in charge of creating the regulations — has repeatedly stressed the board’s commitment to incorporating L.A.’s feedback. Over the last year, the board hosted a contentious public meeting in Pasadena, walking tours with L.A. residents and numerous virtual workshops and hearings.

Some L.A. residents are championing a proposed fire-safety rule, referred to as “Zone Zero,” requiring the clearance of flammable material within the first five feet of homes. Others are skeptical of its value.

With the state long past its original Jan. 1, 2023, deadline to complete the regulations, several cities around the state have taken the matter into their own hands and adopted regulations ahead of the state, including Berkeley and San Diego.

“With the lack of guidance from the State Board of Forestry and Fire Protection, the City is left in a precarious position as it strives to protect residents, property, and the landscape that creates the City of Los Angeles,” the L.A. City Council motion states.

However, unlike San Diego and Berkeley, whose regulations more or less match the strictest options the state Board of Forestry is considering, Los Angeles is pushing for a more lenient approach.

The statewide regulations, once adopted, are expected to override any local versions that are significantly more lenient.

The Zone Zero regulations apply only to rural areas where the California Department of Forestry and Fire Protection responds to fires, and to urban areas that Cal Fire has determined face “very high” fire hazard. In L.A., that includes significant portions of Silver Lake, Echo Park, Brentwood and Pacific Palisades.

Fire experts and L.A. residents are generally fine with many of the measures within the state’s Zone Zero draft regulations, such as the requirement that there be no wooden or combustible fences or outbuildings within the first five feet of a home. And some measures already required under previous wildfire regulations — such as removing dead vegetation, like twigs and leaves, from the ground, roof and gutters — are not under debate.

However, other new measures introduced by the state have generated controversy, especially in Los Angeles. The disputes have mainly centered around what to do about trees and other living vegetation, like shrubs and grass.

The state is considering two options for trees: One would require residents to trim branches within five feet of a house’s walls and roof; the other does not. Both require keeping trees well-maintained and at least 10 feet from chimneys.

On vegetation, the state is considering options for Zone Zero ranging from banning virtually all vegetation beyond small potted plants to just maintaining the regulations already on the books, which allow nearly all healthy vegetation.

Lee’s motion instructs the Los Angeles Fire Department to create regulations in line with the most lenient options that allow healthy vegetation and do not require the removal of tree limbs within five feet of a house. It is unclear whether LAFD will complete the process before the Board of Forestry considers finalized statewide regulations, which it expects to do midyear.

The motion follows a pointed report from LAFD and the city’s Community Forest Advisory Committee that argued the Board of Forestry’s draft regulations stepped beyond the intentions of the 2020 law creating Zone Zero, would undermine the city’s biodiversity goals and could result in the loss of up to 18% of the urban tree canopy in some neighborhoods.

The board has not decided which approach it will adopt statewide, but fire safety advocates worry that the lenient options championed by L.A. do little to protect vulnerable homes from wildfire.

Recent studies into fire mechanics have generally found that the intense heat from a wildfire can quickly dry out even living, healthy plants, making them susceptible to ignition from embers, flames and radiant heat. And anything next to a house that can burn risks taking the house with it.

Another recent study that looked at five major wildfires in California from the last decade, not including the 2025 Eaton and Palisades fires, found that 20% of homes with significant vegetation in Zone Zero survived, compared to 37% of homes that had cleared the vegetation.

At 89, he’s heard six decades of L.A.’s secrets and is ready to talk about what he’s learned

Dr. Arnold Gilberg’s sunny consultation room sits just off Wilshire Boulevard. Natural light spills onto a wooden floor, his houndstooth-upholstered armchair, the low-slung couch draped with a colorful Guatemalan blanket.

The Beverly Hills psychiatrist has been seeing patients for more than 60 years, both in rooms like this and at Cedars-Sinai Medical Center, where he has been an attending physician since the 1960s.

He treats wildly famous celebrities and people with no fame at all. He sees patients without much money and some who could probably buy his whole office building and not miss the cash.

Gilberg, 89, has treated enough people in Hollywood, and advised so many directors and actors on character psychology, that his likeness shows up in films the way people float through one another’s dreams.

The Nancy Meyers film “It’s Complicated” briefly features a psychiatrist character with an Airedale terrier — a doppelganger of Belle, Gilberg’s dog who sat in on sessions until her death in 2018, looking back and forth between doctor and patient like a Wimbledon spectator.

“If you were making a movie, he would be central casting for a Philip Roth‑esque kind of psychiatrist,” said John Burnham, a longtime Hollywood talent agent who was Gilberg’s patient for decades starting in his 20s. “He’s always curious and interested. He gave good advice.”

Since Gilberg opened his practice in 1965, psychiatry and psychotherapy have gone from highly stigmatized secrets to something people acknowledge in award show acceptance speeches. His longtime prescriptions of fresh food, sunshine, regular exercise and meditation are now widely accepted building blocks of health, and are no longer the sole province of ditzy L.A. hippies.

Beverly Hills psychiatrist Dr. Arnold Gilberg, 89, is the last living person to have trained under Franz Alexander, a disciple of Sigmund Freud.

(Robert Gauthier / Los Angeles Times)

He’s watched people, himself included, grow wiser and more accepting of the many ways there are to live. He’s also watched people grow lonelier and more rigid in their political beliefs.

On a recent afternoon, Gilberg sat for a conversation with The Times at the glass-topped desk in his consultation room, framed by a wall full of degrees. At his elbow was a stack of copies of his first book, “The Myth of Aging: A Prescription for Emotional and Physical Well-Being,” which comes out Tuesday.

In just more than 200 pages, the book contains everything Gilberg wishes he could tell the many people who will never make it into his office. After a lifetime of listening, the doctor is ready to talk.

Gilberg moved to Los Angeles in 1961 for an internship at what is now Los Angeles General Medical Center. He did his residency at Mount Sinai Hospital (later Cedars-Sinai) with the famed Hungarian American psychoanalyst Dr. Franz Alexander.

Among his fellow disciples of Sigmund Freud, Alexander was a bit of an outlier. He balked at Freud’s insistence that patients needed years of near-daily sessions on an analyst’s couch, arguing that an hour or two a week in a comfortable chair could do just as much good. He believed patients’ psychological problems stemmed more often from difficulties in their current personal relationships than from dark twists in their sexual development.

Not all of Alexander’s theories have aged well, Gilberg said — repressed emotions do not cause asthma, to name one since-debunked idea. But Gilberg is the last living person to have trained with Alexander directly and has retained some of his mentor’s willingness to go against the herd.

If you walk into Gilberg’s office demanding an antidepressant prescription, for example, he will suggest you go elsewhere. Psychiatric medication is appropriate for some mental conditions, he said, but he prefers that patients first try to fix any depressing situations in their lives.

He has counseled patients to care for their bodies long before “wellness” was a cultural buzzword. It’s not that he forces them to adopt regimens of exercise and healthy eating, exactly, but if they don’t, they’re going to hear about it.

“They know how I feel about all this stuff,” he said.

He tells many new patients to start with a 10-session limit. If they haven’t made any progress after 10 visits, he reasons, there’s a good chance he’s not the right doctor for them. If he is, he’ll see them as long as they need.

One patient first came to see him at 19 and returned regularly until her death a few years ago at the age of 79.

“He’s had patients that he’s taken care of over the span, and families that have come back to him over time,” said Dr. Itai Danovitch, who chairs the psychiatry department at Cedars-Sinai. “It’s one of the benefits of being an incredibly thoughtful clinician.”

Not long after opening his private practice in 1965, Gilberg was contacted by a prominent Beverly Hills couple seeking care for their son. The treatment went well, Gilberg said, and the satisfied family passed his name to several well-connected friends.

As a result, over the years his practice has included many names you’d recognize right away (no, he will not tell you who) alongside people who live quite regular lives.

They all have the same concerns, Gilberg says: Their relationships. Their children. Their purpose in life and their place in the world. Whatever you achieve in life, it appears, your worries remain largely the same.

When it’s appropriate, Gilberg is willing to share that his own life has had bumps and detours.

He was born in Chicago in 1936, the middle of three boys. His mother was a homemaker and his father worked in scrap metal. Money was always tight. Gilberg spent a lot of time with his paternal grandparents, who lived nearby with their adult daughter, Belle.

The house was a formative place for Gilberg. He was especially close to his grandfather — a rabbi in Poland who built a successful career in waste management after immigrating to the U.S. — and to his Aunt Belle.

Disabled after a childhood accident, Belle spent most of her time indoors, radiating a sadness that even at the age of 4 made Gilberg worry for her safety.

“It’s one of the things that brought me into medicine, and then ultimately psychiatry,” Gilberg said. “I felt very, very close to her.”

He and his first wife raised two children in Beverly Hills. Jay Gilberg is now a real estate developer and Dr. Susanne Gilberg-Lenz is an obstetrician-gynecologist (and the other half of the only father-daughter pair of physicians at Cedars-Sinai).

The marriage ended when he was in his 40s, and though the split was painful, he said, it helped him better understand the kind of losses his patients experienced.

He found love again in his 70s with Gloria Lushing-Gilberg. The couple share 16 grandchildren and seven great-grandchildren. They married four years ago, after nearly two decades together.

“As a psychoanalyst or psychiatrist ages, we have the ability, through our own life experiences, to be more understanding and more aware,” he said.

It’s part of what keeps him going. Though he has reduced his hours considerably, he isn’t ready to retire. He has stayed as active as he advises his patients to be, both personally (he was ordained as a rabbi several years ago) and professionally.

For all the strides society has made during the course of his career toward acceptance and inclusivity, he also sees that patients are lonelier than they used to be. They spend less time with friends and family, have a harder time finding partners.

We’re isolated and suffering for it, he said, as individuals and as a society. People still need care.

Unlike a lot of titles on the self-help shelves, Gilberg’s book promises no sly little hack to happiness, no “you’ve-been-thinking-about-this-all-wrong” twist.

Psychiatrist Dr. Arnold Gilberg, 89, authored “The Myth of Aging: A Prescription for Emotional and Physical Well-Being.”

After 60 years working with Hollywood stars and regular Angelenos, Gilberg is ready to share what he’s learned with the world.

(Robert Gauthier / Los Angeles Times)

His prescriptions run along deceptively simple lines: Care for your health. Say thank you. Choose to let go of harmless slights and petty conflicts. Find people you belong with, and stop holding yourself and others to impossibly high standards.

“People have the capacity to self-heal, and I have become a firm believer in that. Not everyone needs to be in therapy for 10 years to figure it out,” he said. “A lot of this is inside yourself. You have an opportunity to overcome the things and obstacles that are in you, and you can do it.”

So what is “it”? What does it mean to live a good life?

Gilberg considered the question, hands clasped beneath his chin, the traffic outside humming expectantly.

“It means that the person has been able to look at themselves,” he said, “and feel somewhat happy about their existence.”

The best any of us can hope for is to be … somewhat happy?

Correct, Gilberg said. “A somewhat happy existence, off and on, which is normal. And hopefully, if the person wants to pursue that, some kind of a personal relationship.”

As it turns out, there is no housing in happiness. You can visit, but nobody really lives there. The happiest people know that. They live in OK neighborhoods that are not perfect but could be worse. They try to be nice to the neighbors. The house is a mess a lot of the time. They still let people in.

Somewhat happy, sometimes, with someone else to talk to.

It is that simple. It is that hard.
