Business
A.I. Computing Power Is Splitting the World Into Haves and Have-Nots
Where A.I. Data Centers Are Located
Only 32 nations, mostly in the Northern Hemisphere, have A.I.-specialized data centers.
Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.
Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.
Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.
“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”
Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.
The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power.”
The United States and China, which dominate the tech world, have particular influence. American and Chinese companies operate more than 90 percent of the data centers that other companies and institutions use for A.I. work, according to the Oxford data and other research.
In contrast, Africa and South America have almost no A.I. computing hubs, while India has at least five and Japan at least four, according to the Oxford data. More than 150 countries have nothing.
Today’s A.I. data centers dwarf their predecessors, which powered simpler tasks like email and video streaming. Vast, power-hungry and packed with powerful chips, these hubs cost billions to build and require infrastructure that not every country can provide. With ownership concentrated among a few tech giants, the effects of the gap between those with such computing power and those without it are already playing out.
The world’s most used A.I. systems, which power chatbots like OpenAI’s ChatGPT, are more proficient and accurate in English and Chinese, languages spoken in the countries where the compute power is concentrated. Tech giants with access to the top equipment are using A.I. to process data, automate tasks and develop new services. Scientific breakthroughs, including drug discovery and gene editing, rely on powerful computers. A.I.-powered weapons are making their way onto battlefields.
Nations with little or no A.I. compute power are running into limits in scientific work, in the growth of young companies and in talent retention. Some officials have become alarmed by how the need for computing resources has made them beholden to foreign corporations and governments.
“Oil-producing countries have had an oversized influence on international affairs; in an A.I.-powered near future, compute producers could have something similar since they control access to a critical resource,” said Vili Lehdonvirta, an Oxford professor who conducted the research on A.I. data centers with his colleagues Zoe Jay Hawkins and Boxi Wu.
A.I. computing power is so precious that the components in data centers, such as microchips, have become a crucial part of foreign and trade policies for China and the United States, which are jockeying for influence in the Persian Gulf, in Southeast Asia and elsewhere. At the same time, some countries are beginning to pour public funds into A.I. infrastructure, aiming for more control over their technological futures.
The Oxford researchers mapped the world’s A.I. data centers, information that companies and governments often keep secret. To create a representative sample, they went through the customer websites of nine of the world’s biggest cloud-service providers to see what compute power was available and where their hubs were at the end of last year. The companies were the U.S. firms Amazon, Google and Microsoft; China’s Tencent, Alibaba and Huawei; and Europe’s Exoscale, Hetzner and OVHcloud.
The research does not include every data center worldwide, but the trends were unmistakable. U.S. companies operated 87 A.I. computing hubs, which can sometimes include multiple data centers, or almost two-thirds of the global total, compared with 39 operated by Chinese firms and six by Europeans, according to the research. Inside the data centers, most of the chips — the foundational components for making calculations — were from the U.S. chipmaker Nvidia.
“We have a computing divide at the heart of the A.I. revolution,” said Lacina Koné, the director general of Smart Africa, which coordinates digital policy across the continent. He added: “It’s not merely a hardware problem. It’s the sovereignty of our digital future.”
‘Sometimes I Want to Cry’
There has long been a tech gap between rich and developing countries. Over the past decade, cheap smartphones, expanding internet coverage and flourishing app-based businesses led some experts to conclude that the divide was diminishing. Last year, 68 percent of the world’s population used the internet, up from 33 percent in 2012, according to the International Telecommunication Union, a United Nations agency.
With a computer and knowledge of coding, getting a company off the ground became cheaper and easier. That lifted tech industries across the world, be they mobile payments in Africa or ride hailing in Southeast Asia.
But in April, the U.N. warned that the digital gap would widen without action on A.I. Just 100 companies, mostly in the United States and China, were behind 40 percent of global investment in the technology, the U.N. said. The biggest tech companies, it added, were “gaining control over the technology’s future.”
Few Companies Control A.I. Computing
Tiles show total availability zones for A.I. offered by each company, a metric used by researchers as a proxy for A.I. data centers.
The gap stems partly from a component everyone wants: a microchip known as a graphics processing unit, or GPU. The chips require multibillion-dollar factories to produce. Packed into data centers by the thousands and mostly made by Nvidia, GPUs provide the computing power for creating and delivering cutting-edge A.I. models.
Obtaining these pieces of silicon is difficult. As demand has increased, prices for the chips have soared, and everyone wants to be at the front of the line for orders. Adding to the challenges, these chips then need to be corralled into giant data centers that guzzle up dizzying amounts of power and water.
Many wealthy nations have access to the chips in data centers, but other countries are being left behind, according to interviews with more than two dozen tech executives and experts across 20 countries. Renting computing power from faraway data centers is common but can lead to challenges, including high costs, slower connection speeds, compliance with different laws, and vulnerability to the whims of American and Chinese companies.
Qhala, a start-up in Kenya, illustrates the issues. The company, founded by a former Google engineer, is building an A.I. system known as a large language model that is based on African languages. But Qhala has no nearby computing power and rents from data centers outside Africa. Employees cram their work into the morning, when most American programmers are sleeping, so there is less traffic and faster speeds to transfer data across the world.
“Proximity is essential,” said Shikoh Gitau, 44, Qhala’s founder.
“If you don’t have the resources for compute to process the data and to build your A.I. models, then you can’t go anywhere,” said Kate Kallot, a former Nvidia executive and the founder of Amini, another A.I. start-up in Kenya.
In the United States, by contrast, Amazon, Microsoft, Google, Meta and OpenAI have pledged to spend more than $300 billion this year, much of it on A.I. infrastructure. The expenditure approaches Canada’s national budget. Harvard’s Kempner Institute, which focuses on A.I., has more computing power than all African-owned facilities on that continent combined, according to one survey of the world’s largest supercomputers.
Brad Smith, Microsoft’s president, said many countries wanted more computing infrastructure as a form of sovereignty. But closing the gap will be difficult, particularly in Africa, where many places do not have reliable electricity, he said. Microsoft, which is building a data center in Kenya with a company in the United Arab Emirates, G42, chooses data center locations based largely on market need, electricity and skilled labor.
“The A.I. era runs the risk of leaving Africa even further behind,” Mr. Smith said.
Jay Puri, Nvidia’s executive vice president for global business, said the company was also working with various countries to build out their A.I. offerings.
“It is absolutely a challenge,” he said.
Chris Lehane, OpenAI’s vice president of global affairs, said the company had started a program to adapt its products for local needs and languages. A risk of the A.I. divide, he said, is that “the benefits don’t get broadly distributed, they don’t get democratized.”
Tencent, Alibaba, Huawei, Google, Amazon, Hetzner and OVHcloud declined to comment.
The gap has led to brain drains. In Argentina, Dr. Wolovick, 51, the computer science professor, cannot offer much compute power. His top students regularly leave for the United States or Europe, where they can get access to GPUs, he said.
“Sometimes I want to cry, but I don’t give up,” he said. “I keep talking to people and saying: ‘I need more GPUs. I need more GPUs.’”
Few Choices
The uneven distribution of A.I. computing power has split the world into two camps: nations that rely on China and those that depend on the United States.
The two countries not only control the most data centers but are set to build more than others by far. And they have wielded their tech advantage to exert influence. The Biden and Trump administrations have used trade restrictions to control which countries can buy powerful A.I. chips, allowing the United States to pick winners. China has used state-backed loans to encourage sales of its companies’ networking equipment and data centers.
The effects are evident in Southeast Asia and the Middle East.
In the 2010s, Chinese companies made inroads into the tech infrastructure of Saudi Arabia and the Emirates, which are key American partners, with official visits and generous financing. The United States sought to use its A.I. lead to push back. In one deal with the Biden administration, an Emirati company promised to keep out Chinese technology in exchange for access to A.I. technology from Nvidia and Microsoft.
In May, President Trump signed additional deals to give Saudi Arabia and the Emirates even more access to American chips.
A similar jostling is taking place in Southeast Asia. Chinese and U.S. companies like Amazon, Alibaba, Nvidia, Google and ByteDance, the owner of TikTok, are building data centers in Singapore and Malaysia to deliver services across Asia.
Globally, the United States has the lead, with American companies building 63 A.I. computing hubs outside the country’s borders, compared with 19 by China, according to the Oxford data. All but three of the data centers operated by Chinese firms outside their home country use chips from Nvidia, despite efforts by China to produce competing chips. Chinese firms were able to buy Nvidia chips before U.S. government restrictions.
Companies and countries throughout the world rely mostly on major American and Chinese cloud operators for A.I. facilities.
Where the World Gets Its A.I.
Even U.S.-friendly countries have been left out of the A.I. race by trade limits. Last year, William Ruto, Kenya’s president, visited Washington for a state dinner hosted by President Joseph R. Biden Jr. Several months later, Kenya was omitted from a list of countries that had open access to needed semiconductors.
That has given China an opening, even though experts consider the country’s A.I. chips to be less advanced. In Africa, policymakers are talking with Huawei, which is developing its own A.I. chips, about converting existing data centers to include Chinese-made chips, said Mr. Koné of Smart Africa.
“Africa will strike a deal with whoever can give access to GPUs,” he said.
If You Build It
Alarmed by the concentration of A.I. power, many countries and regions are trying to close the gap. They are providing access to land and cheaper energy, fast-tracking development permits and using public funds and other resources to acquire chips and construct data centers. The goal is to create “sovereign A.I.” available to local businesses and institutions.
In India, the government is subsidizing compute power and the creation of an A.I. model proficient in the country’s languages. In Africa, governments are discussing collaborating on regional compute hubs. Brazil has pledged $4 billion for A.I. projects.
“Instead of waiting for A.I. to come from China, the U.S., South Korea, Japan, why not have our own?” Brazil’s president, Luiz Inácio Lula da Silva, said last year when he proposed the investment plan.
Even in Europe, there is growing concern that American companies control most of the data centers. In February, the European Union outlined plans to invest 200 billion euros for A.I. projects, including new data centers across the 27-nation bloc.
Mathias Nobauer, the chief executive of Exoscale, a cloud computing provider in Switzerland, said many European businesses want to reduce their reliance on U.S. tech companies. Such a change will take time and “doesn’t happen overnight,” he said.
Still, closing the divide is likely to require help from the United States or China.
Cassava, a tech company founded by a Zimbabwean billionaire, Strive Masiyiwa, is scheduled to open one of Africa’s most advanced data centers this summer. The plans, three years in the making, culminated in an October meeting in California between Cassava executives and Jensen Huang, Nvidia’s chief executive, to buy hundreds of his company’s chips. Google is also one of Cassava’s investors.
The data center is part of a $500 million effort to build five such facilities across Africa. Even so, Cassava expects it to address only 10 percent to 20 percent of the region’s demand for A.I. At least 3,000 start-ups have expressed interest in using the computing systems.
“I don’t think Africa can afford to outsource this A.I. sovereignty to others,” said Hardy Pemhiwa, Cassava’s chief executive. “We absolutely have to focus on and ensure that we don’t get left behind.”
How the landmark verdict against Meta and YouTube could hit their businesses
A Los Angeles jury dealt a blow to social media giants Meta and YouTube this week when it found that the platforms were negligent for designing addictive features that harmed the mental health of a California woman.
Both companies plan to appeal, but the ruling has ignited uncertainty around the tech companies’ future and sparked questions about the potential fallout.
The seven-week trial kicked off in February, featuring testimony from Meta and YouTube executives.
Kaley G.M., a 20-year-old Chico, Calif., woman, sued the platforms in 2023, alleging that using social media at a young age led to her mental health problems such as body dysmorphia and depression. She also sued TikTok and Santa Monica-based Snap and those companies settled ahead of the trial.
Lawyers representing the woman argued that the platforms hook in young users with features such as infinite scrolling, autoplaying videos and beauty filters.
People use social media to keep up with their friends and family, but teens can also feel inadequate, sad or anxious when they compare themselves to a curated version of other people’s lives online. They’re also spending a lot of time watching a seemingly endless amount of short videos.
A jury determined that Meta was 70% responsible for Kaley’s harms and YouTube was 30% responsible. They awarded her a total of $6 million. The ruling came shortly after a New Mexico jury found Meta liable for $375 million in damages after the state Atty. Gen. Raúl Torrez alleged the platform’s features enabled predators and pedophiles to exploit children.
“These verdicts mark an unsurprising breaking point. Negative sentiment toward social media has been building for years, and now it’s finally boiled over,” said Mike Proulx, a director at Forrester, a market research company.
How have the companies reacted to the verdict?
Meta and Google, which owns YouTube, said they disagreed with the ruling and plan to appeal.
“This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site,” said Jose Castañeda, a Google spokesman, in a statement.
Meta spokesman Andy Stone posted the company’s statement on social media site X.
“Teen mental health is profoundly complex and cannot be linked to a single app. We will continue to defend ourselves vigorously as every case is different, and we remain confident in our record of protecting teens online,” the statement said.
Tech companies have been responding to mental health concerns, rolling out new parental controls so parents can keep track of their children’s screen time and moderating harmful content. Instagram and YouTube have versions of their apps meant for young people.
Some child advocacy groups and lawmakers, though, say these changes aren’t enough.
The ruling could affect how much money YouTube’s parent company, Alphabet, and Meta earn as they spend more on legal battles. While they make billions of dollars from advertising, investors are wary about higher expenses. The companies are already spending billions of dollars on artificial intelligence and developing new hardware such as smart glasses.
On Thursday, Meta’s stock fell more than 7% to $549 per share. Alphabet saw its share price drop more than 2% to roughly $280.
In 2025, Meta’s annual revenue grew 22% from the previous year to $200.97 billion.
Last year, YouTube’s annual revenue surpassed $60 billion. Both Google and Meta have been laying off workers as they spend more on AI.
The ongoing backlash hasn’t stopped tech companies from growing their users.
A majority of U.S. teens use YouTube, TikTok, Instagram and Snapchat, according to a 2025 Pew Research Center survey. More than 3.5 billion people use one of Meta’s products, which include Instagram and Facebook.
Social media has continued to change over the years as companies double down on short videos and AI chatbots.
Mental health concerns have only heightened as AI chatbots that respond to questions and generate content become more popular. Families have sued OpenAI, Character.AI and Google after their loved ones who used chatbots killed themselves.
Some analysts remain skeptical that Meta and YouTube would make drastic changes to their products because they’ve weathered crises before.
“Neither Meta nor YouTube is going to do anything different until a court orders them to, or there’s a significant drop in user or advertiser use,” said Max Willens, principal analyst at eMarketer.
Other analysts said legal risks could also affect how tech companies develop new AI-powered products and features.
“It’s likely that tech firms will now face increased scrutiny over the design of their platforms, which should drive more thoughtful inclusion of features that foster healthier interactions and safeguard mental health,” said Andrew Frank, an analyst with Gartner for Marketing Leaders.
At the very least, the verdicts serve as a “dire warning about how we handle the next wave of technology,” Proulx said.
“If we’re still struggling to put effective guardrails around social media after nearly two decades, we’re far from prepared for the growing harms of AI, which is moving faster, scaling wider, and embedding itself far deeper into people’s lives,” he said.
Times staff writer Sonja Sharp contributed to this report.
Justin Vineyards pays $1.49 million to settle sex harassment case
Justin Vineyards & Winery has agreed to workplace reforms and to pay $1.49 million to settle a federal lawsuit accusing it of allowing female employees to be sexually harassed and then retaliating against them for reporting it.
The Paso Robles business reached the settlement with the federal Equal Employment Opportunity Commission. It was approved Thursday by a federal judge.
Also named in the lawsuit and settlement is the Wonderful Co., the Los Angeles agribusiness owned by Beverly Hills billionaires Lynda and Stewart Resnick.
In 2010, Wonderful acquired Justin, which includes production facilities, a tasting room, inn and Michelin-starred restaurant.
The lawsuit, filed in 2022, alleged that female employees were subject since August 2017 to comments about their appearance; texts containing inappropriate photos; touching of their breasts, buttocks and genitals; forced kissing and other harassment by their male supervisors.
It further alleged that the companies “knew or should have known” about the hostile work environment.
The lawsuit also said that when complaints were made about the harassment, they were not properly investigated and the employees were subject to retaliation, including being given double shifts, being accused of wrongdoing and being berated and yelled at by supervisors.
Aside from the monetary penalty, the settlement requires Justin and Wonderful to halt any harassment or retaliation, undergo compliance audits and take other measures at the vineyard operations.
The companies denied all the allegations and agreed to the settlement to resolve the litigation, according to the consent decree.
In a statement, Justin said that the matter “dates back many years and was dealt with immediately and decisively the moment we became aware of any allegations of conduct that did not align with what is appropriate in the workplace.
“With this agreement reached, we look forward to putting this chapter fully behind us and continuing to focus on the incredibly talented team we have in place today,” the statement said.
Beatriz Andre, acting regional attorney for the EEOC’s Los Angeles District Office, commended Justin and Wonderful for reaching the settlement.
“The policy changes and reporting to which the companies agreed are important steps in ensuring a workplace free of discrimination,” she said in a statement.
In 2016, workers cut down dozens of oak trees on land managed by Justin to make room for new grape plantings, stirring up controversy.
The Resnicks said they were unaware of the cutting, apologized, donated the land to a nature conservancy and agreed to plant thousands of trees on vineyard property.
After buying Justin, Wonderful acquired Landmark Vineyards in Sonoma County and Lewis Cellars in Napa Valley.
Commentary: How a custody fight over an old dog showed why lawyers should never trust AI to tell the truth
The seemingly limitless proliferation of cases in which lawyers have been caught letting fictitious AI-generated legal citations contaminate their briefs continues to amaze.
That’s not only because judges are fining more lawyers for their laziness, but because the publicity about these embarrassments has been inescapable.
Here’s one involving a dog named Kyra.
She’s a 16-year-old Labrador retriever who became the target of a nasty custody fight between a California couple after the dissolution of their domestic partnership. In the course of the lawsuit, one lawyer published two AI-fabricated citations in a filing. The opposing law firm didn’t catch the flaw and cited the same fake cases in its filings, including in a court order signed by a judge.
The case of Joan Pablo Torres Campos vs. Leslie Ann Munoz also points to how AI, touted worldwide as a labor-saving technology, has actually increased the workload in some trades and professions, like lawyering. For litigators, it has created a new imperative: ferreting out citations that have been fabricated by AI bots in their own court filings — and their adversaries’.
I’ve written before about the proliferation of AI-generated fabrications infiltrating legal filings and even legal rulings, despite the advice drilled into the heads of even law students about making sure that their citations to precedential cases are accurate. But the wave keeps building: A database of AI hallucinations maintained by the French researcher Damien Charlotin now numbers 1,174 cases, of which some 750 are from U.S. courts.
That’s almost certainly a conservative count. Most AI fabrications may not even come to the attention of litigants or judges, especially in state courts.
“For every case that talks about this, my guess is that there are many that aren’t visible,” says Eugene Volokh of UCLA law school and the Hoover Institution, who keeps a weather eye on AI-related courthouse developments. He believes there may be thousands escaping notice.
AI has introduced mistakes that were never seen in the past. “Most lawyers grew up in a time when you could expect the other side to spin and even to lie about the record some of the time, but just lying or making a mistake about the existence of a case was basically unheard of up until a few years ago,” Volokh told me. “That’s because there would be no source of hallucinations — maybe you’d get the citations slightly wrong or you mischaracterized or misquoted them, but to talk about a case that doesn’t exist — that didn’t happen. Now it happens a lot.”
The judiciary is getting increasingly nervous about AI fabrications becoming part of the judicial record. “Reliance on fake cases…seriously undermines the integrity of the outcome and erodes public confidence in our judicial system,” an appellate judge stated.
Therefore, he added, “it is imperative for both the court and the parties to verify that the citations in all orders are genuine….This is especially vital with the increasing incidence of hallucinated case citations generated by AI tools.”
Judges are still reluctant to bring down the hammer for AI-fabrications if lawyers acknowledge their fault and “throw themselves on the mercy of the court,” Volokh says. But they’re getting tougher on lawyers who deny their reliance on AI or try to shift blame.
As recently as Monday, federal Magistrate Judge Mark D. Clarke of Medford, Ore., ordered the attorneys representing the plaintiff in a civil lawsuit to pay more than $90,000 in legal fees, on top of an earlier sanction of $15,500 imposed on one of the lawyers, for incorporating 15 fabricated case citations and eight misquotations into case filings.
Clarke also dismissed the $29-million lawsuit, which arose from a ferocious dispute among the sibling heirs to an Oregon winery fortune, with prejudice, so it can’t be refiled. It was an extraordinary punishment, Clarke acknowledged — and the largest penalty imposed in any case in Charlotin’s database.
“In the quickly expanding universe of cases involving sanctions for the misuse of artificial intelligence, this case is a notorious outlier in both degree and volume,” Clarke wrote. Among other faults, he noted, the plaintiff’s lawyers never adequately fessed up to their wrongdoing. “If there was ever an ‘appropriate case’ to grant terminating sanctions for the misuse of artificial intelligence,” he wrote, “this is it.”
That brings us back to the custody battle over Kyra. The case originated in 2024, two years after a family court judge in San Diego dissolved the domestic partnership of Joan Torres Campos and Munoz. The dissolution order allowed them to keep their own property, but didn’t mention the dog, who lived with Munoz.
Torres Campos subsequently sought shared custody of Kyra and visitation rights. (Pet custody battles have long been a cultural fixture: Film aficionados might recognize this case’s similarity to the custody fight over the wire-haired terrier Mr. Smith in the 1937 Cary Grant/Irene Dunne vehicle “The Awful Truth,” surely the funniest movie ever made by Hollywood.)
Munoz rejected Torres Campos’ request, arguing that he didn’t really care about the dog, but only aimed to harass her. A family court judge sided with her, but Torres Campos appealed.
In her initial reply to Torres Campos, Munoz’s lawyer, Roxanne Chung Bonar, cited California cases from 1984 and 1995 that she said supported her client’s refusal to grant visitation rights.
Both case citations were fictitious. The 1984 case, Marriage of Twigg, didn’t exist at all; Bonar’s citation pointed to a criminal case that had “nothing to do with pets or custody determinations,” California Appellate Judge Martin N. Buchanan wrote for a unanimous three-judge panel, upholding the family court judge. The second reference was to Marriage of Teegarden, which was handed down in 1986, not 1995, and also had nothing to do with the issue at hand.
Things only got more complicated from there. Torres Campos’ lawyer, in a reply brief and a subsequent proposed court order, didn’t mention that Twigg and Teegarden were fabricated cases, perhaps because the lawyer hadn’t checked the references personally. The family court judge signed the proposed order, including the fake citations, resulting in their infiltration into the official record. (Although Torres Campos’ lawyer drafted the proposed order, it actually rejected his lawsuit.)
It was only in the course of appealing the family court ruling that Torres Campos’ lawyer mentioned that the two cited precedents were “invented case law.”
There was one more turn of the screw: In responding to Torres Campos’ appellate filing, Bonar “doubled down,” Buchanan wrote. Bonar insisted that Twigg was a “valid, published precedent” and added three more purported citations to the case. All were “just as phony as the original citation,” Buchanan noted.
Bonar even taunted Torres Campos’ lawyer for his “failure to conduct basic legal research” to verify the ostensibly genuine precedents, adding that his “inability to locate them underscores the incompetence that led to his appeal’s dismissal.”
Where did these references come from? It turned out that the Twigg reference originally came from a Reddit post written by an Oregon blogger and animal rescuer who posts under the name “Sassafras Patterdale,” in which she cited the fictitious case while discussing pet custody battles. Munoz had received the post from a friend and passed it on to Bonar. Both of them assumed that everything in it was accurate.
According to the appellate ruling, the additional citations to Twigg don’t appear in the Reddit post. Bonar never explained where they came from. She did concede, however, that the fictitious citations “‘may have’ come from her use of AI tools,” Buchanan noted. He sanctioned her with a $5,000 fine, largely because she did not initially acknowledge that her citations were fake and tried to shift blame to her opposing counsel.
Although the appeals judges could have awarded the case to Torres Campos due to Bonar’s performance, they declined to do so — because Torres Campos’ lawyers hadn’t checked their opposing counsel’s citations themselves. At this stage, Munoz still has custody of the dog and the lawsuit is essentially over, according to Torres Campos’ attorney, David C. Beavens of San Diego.
Beavens says he took the case because he hoped to use it to obtain judicial clarification of a state law enacted in 2019, which authorized courts to issue orders regarding the ownership and care of pets in divorce cases. The appellate judges, sidetracked by the AI issue, never touched on that. But Beavens says he agreed with the panel’s position that AI fabrications have become such a problem in court that “we need to hold everyone accountable” — lawyers on both sides of a case and the judges as well.
Bonar told me that she was not challenging the sanction but declined to comment on it further.
I did ask Bonar if she had any advice for other lawyers tempted to use AI in their work. “Yes,” she said: “Verify all third-party sources.”