Column: Who's really winning in Sarah Silverman's copyright suit against OpenAI?
If you’ve been following the war between authors and the purveyors of AI chatbots over whether the latter are infringing the copyrights of the former, you might have concluded that comedian and author Sarah Silverman and several fellow authors suffered a crushing blow in their lawsuit against OpenAI, the leading bot maker.
In her ruling Feb. 12, U.S. District Judge Araceli Martínez-Olguín of San Francisco indeed tossed most of the copyright claims Silverman et al. had brought against OpenAI in lawsuits filed last year.
That’s the way much of the press portrayed the outcome: “Judge dismisses most of Sarah Silverman’s copyright infringement lawsuit” (VentureBeat). And “OpenAI Scores Court Victory” (Forbes). And “Sarah Silverman, Authors See Most Claims Against OpenAI Dismissed by Judge” (Hollywood Reporter).
If someone tells you it’s not about the money but the principle, they’re really talking about the money.
— Robin Feldman, UC College of the Law, San Francisco
Well, not really. Of the six counts in the authors’ lawsuit, one — whether OpenAI directly copied or distributed the plaintiffs’ works — wasn’t even before the judge, because OpenAI hadn’t asked her to dismiss it. It’s a key allegation, and it’s still alive.
Of the other five, the judge cleared one to proceed: a claim that OpenAI engaged in an “unfair” business practice under California law. She dismissed the other four but gave the plaintiffs permission to amend their complaint and try again. The amended complaint is due before her by March 13.
At best, this is a mixed victory for both sides. But this lawsuit and a couple of other similar cases provide a road map for how the copyright issue may play out, in and out of court: with settlements that outline how much the artificial intelligence industry should pay copyright holders for using their works, and how those payments should be made.
Any such settlements would have to recognize that AI chatbots are here to stay, but also that they can’t mine published material for free.
“It’s hard to imagine that you could put the genie back in the bottle — that courts would decide that generative AI may not be used under any circumstances at any time,” says Robin Feldman, an expert in intellectual property law at UC College of the Law, San Francisco. “At the same time, it’s hard to imagine that generative AI could end up free to do whatever it wants at any time with copyrighted material.”
It’s fair to imagine, as well, that the issue is going to pose a headache for judges right up to the point that it lands before the Supreme Court, as Feldman believes is likely. That’s because it combines two things that are anything but cut-and-dried: copyright law and a new technology. U.S. copyright law is extremely complicated, and the technology bears features that don’t resemble anything seen in earlier technology transitions. Put them together, and the complexities multiply.
Before going further, let’s define the landscape.
OpenAI is a high-tech firm with an investment from Microsoft that has been reported to be as much as $13 billion. Its best-known product is ChatGPT, a chatbot that spits out human-sounding answers to questions posed in plain language, though sometimes the “humans” it strives to emulate come off like idiots or plagiarists.
As I’ve reported, the chatbot business, like artificial intelligence research throughout its history, has been infected with hype. But it’s currently the target of a high-tech gold rush based on expectations that it will dramatically remake industries such as manufacturing, medicine, law — almost anything you can name. We’ll see.
As I’ve also reported, it’s a misnomer to call chatbots “artificial intelligence.” They’re not intelligent by any common definition of the word; they’re just good at seeming intelligent to an outsider unaware of the electronic processing inside them — a simulacrum of human thought, not the product of cogitation.
Chatbots don’t create content, as such. They have to be “trained” by pumping their databases full of human-produced content — books, newspaper articles, junk scraped from the web and so on. All that material allows the bots to produce superficially coherent answers to questions by mimicking prose patterns and sometimes repeating facts they dredge up from their databases.
That brings us back to the copyright issue. Silverman and other plaintiffs, including the writers Michael Chabon and Ta-Nehisi Coates, who filed a complaint similar to hers last year, contend that in using their works to train its chatbots, OpenAI is copying their works without permission, compensation or credit. Having “ingested” their works, the bots are “able to emit convincingly naturalistic text outputs.”
Indeed, Silverman’s lawsuit states that when asked to do so, ChatGPT is able to generate accurate summaries of the copyrighted works — “something only possible if ChatGPT was trained” on those works.
Among OpenAI’s defenses is that its use of copyrighted material falls within the exemption known as “fair use.” That’s a concept that allows snippets of published works to be quoted in reviews, summaries, news reports, research papers and the like, or to be parodied or repurposed in a “transformative” way.
OpenAI argues that previous court rulings say that creating copies of a copyrighted work as a preliminary step in developing a new, non-infringing product falls safely under the fair use protection, and that’s all it’s doing.
But it’s not at all clear that OpenAI’s interpretation will stand. In copyright law, fair use is a moving target, interpreted by judges on a case-by-case basis. “There are no hard-and-fast rules, only general guidelines and varied court decisions,” according to a digest by Stanford University librarians.
As chatbot developers snarf up more content to “train” their products, the potential copyright claims are only going to multiply. A disclosure: At least three of my books are in a database used to train some chatbots. I’m not a plaintiff in any of these lawsuits, but since they’re all fashioned as class actions in which I might qualify as a class member, it’s conceivable that if any go to trial and end with a class settlement, I might get a (probably vanishingly tiny) payout.
The lawsuits by individual writers are only one category. As I reported earlier, Getty Images has sued an AI company for copying millions of historical and contemporary photographs to which it holds licensing rights, allegedly to build a competing business. Dozens of music publishers have sued another AI firm for its “mass copying and ingesting” of copyrighted song lyrics to enable its bot to regurgitate them to its users by generating “identical or nearly identical copies of those lyrics” on request.
A lawsuit brought by New York Times Co. against Microsoft and OpenAI has attracted heavy attention not only because of the prominence of the plaintiff but because the newspaper produced evidence that OpenAI’s chatbot actually spits out lengthy verbatim passages from Times articles. This allows the Times to assert that the chatbot is cutting into the market for its work, a factor that judges have sometimes considered to reject a fair-use defense.
That’s a claim that the Silverman and Chabon lawsuits weren’t able to back up with evidence, which is what prompted Judge Martínez-Olguín to put some of their copyright claims on hold. She invited the plaintiffs to come back with allegations “that any particular output is substantially similar — or similar at all — to their books,” at which point she might reconsider.
Feldman observes that this entire legal issue is in the early “posturing” stage. The AI industry bases its defense on the principle that it’s doing nothing wrong and doesn’t owe creators anything. The creators say the principle is that what the chatbot developers are up to produces “an irreparable injury that cannot fully be compensated or measured in money,” to quote the Silverman lawsuit.
But money has settled previous donnybrooks over new technologies. Most notably, the recording industry and broadcasters solved their dispute over radio and television broadcasting of music with a licensing arrangement initially reached more than 80 years ago and that has survived in its essence to cover not only radio and television stations but also “streaming services, concert venues, bars, restaurants, and retail establishments.” (That’s not to say that artists are necessarily fairly compensated for these uses.)
That’s the best bet for how the chatbot issue will unfold, in time: with a financial arrangement sufficiently fair to both sides to be blessed by a judge. Feldman advises not to buy into the assertions on both sides that with principles at stake, no financial arrangement is possible. The New York Times, indeed, says that it filed its lawsuit only after negotiations to place a financial value on the use of its content failed to produce a “resolution.”
Feldman cites an adage (often attributed to the turn-of-the-century humorist Kin Hubbard) that holds: “If someone tells you it’s not about the money but the principle, they’re really talking about the money.”
U.S. Space Force awards $1.6 billion in contracts to South Bay satellite builders
The U.S. Space Force announced Friday it has awarded satellite contracts with a combined value of about $1.6 billion to Rocket Lab in Long Beach and to the Redondo Beach Space Park campus of Northrop Grumman.
The contracts, awarded by the Space Development Agency, will fund the construction by each company of 18 satellites for a network being developed to provide warning of advanced threats such as hypersonic missiles.
Northrop Grumman has been awarded contracts for prior phases of the Proliferated Warfighter Space Architecture, a planned network of missile defense and communications satellites in low Earth orbit.
The contract announced Friday is valued at $764 million, and the company is now set to deliver a total of 150 satellites for the network.
The $805-million contract awarded to Rocket Lab is its largest to date. The company had previously been awarded a $515-million contract to deliver 18 communications satellites for the network.
Founded in 2006 in New Zealand, the company builds satellites and provides small-satellite launch services for commercial and government customers with its Electron rocket. It moved to Long Beach in 2020 from Huntington Beach and is developing a larger rocket.
“This is more than just a contract. It’s a resounding affirmation of our evolution from simply a trusted launch provider to a leading vertically integrated space prime contractor,” Rocket Lab founder and Chief Executive Peter Beck said in online remarks.
The company said it could eventually earn up to $1 billion under the contract by supplying components to other builders of the satellite network.
Also awarded contracts announced Friday were a Lockheed Martin group in Sunnyvale, Calif., and L3Harris Technologies of Fort Wayne, Ind. Those contracts, for 36 satellites, were valued at nearly $2 billion.
Gurpartap “GP” Sandhoo, acting director of the Space Development Agency, said the contracts awarded “will achieve near-continuous global coverage for missile warning and tracking” in addition to other capabilities.
Northrop Grumman said the satellites are being built in response to the rise of hypersonic missiles, which maneuver in flight and require infrared tracking and speedy data transmission to protect U.S. troops.
Beck said the contract reflects Rocket Lab’s growth into an “industry disruptor” and emerging space prime contractor.
California-based company recalls thousands of cases of salad dressing over ‘foreign objects’
A California food manufacturer is recalling thousands of cases of salad dressing distributed to major retailers over potential contamination from “foreign objects.”
The company, Brea-based Ventura Foods, recalled 3,556 cases of the dressing that could be contaminated with “black plastic planting material” in the granulated onion used, according to an alert issued by the U.S. Food and Drug Administration.
Ventura Foods voluntarily initiated the recall of the product, which was sold at Costco, Publix and several other retailers across 27 states, according to the FDA.
None of the 42 locations where the product was sold were in California.
Ventura Foods said it issued the recall after one of its ingredient suppliers recalled a batch of onion granules that the company had used in some of its dressings.
“Upon receiving notice of the supplier’s recall, we acted with urgency to remove all potentially impacted product from the marketplace. This includes urging our customers, their distributors and retailers to review their inventory, segregate and stop the further sale and distribution of any products subject to the recall,” said company spokesperson Eniko Bolivar-Murphy in an emailed statement. “The safety of our products is and will always be our top priority.”
The FDA issued its initial recall alert in early November. Costco also alerted customers at that time, noting that customers could return the products to stores for a full refund. The affected products had sell-by dates between Oct. 17 and Nov. 9.
The company recalled the following types of salad dressing:
- Creamy Poblano Avocado Ranch Dressing and Dip
- Ventura Caesar Dressing
- Pepper Mill Regal Caesar Dressing
- Pepper Mill Creamy Caesar Dressing
- Caesar Dressing served at Costco Service Deli
- Caesar Dressing served at Costco Food Court
- Hidden Valley Buttermilk Ranch
They graduated from Stanford. Due to AI, they can’t find a job
A Stanford software engineering degree used to be a golden ticket. Artificial intelligence has devalued it to bronze, recent graduates say.
The elite students are shocked by the lack of job offers as they finish studies at what is often ranked as the top university in America.
When they were freshmen, ChatGPT hadn’t yet been released upon the world. Today, AI can code better than most humans.
Top tech companies just don’t need as many fresh graduates.
“Stanford computer science graduates are struggling to find entry-level jobs” with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. “I think that’s crazy.”
While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers.
Stanford students describe a suddenly skewed job market, where just a small slice of graduates — those considered “cracked engineers” who already have thick resumes building products and doing research — are getting the few good jobs, leaving everyone else to fight for scraps.
“There’s definitely a very dreary mood on campus,” said a recent computer science graduate who asked not to be named so they could speak freely. “People [who are] job hunting are very stressed out, and it’s very hard for them to actually secure jobs.”
The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees.
Eylul Akgul graduated last year with a degree in computer science from Loyola Marymount University. She wasn’t getting offers, so she went home to Turkey and got some experience at a startup. In May, she returned to the U.S., and still, she was “ghosted” by hundreds of employers.
“The industry for programmers is getting very oversaturated,” Akgul said.
The engineers’ most significant competitor is getting stronger by the day. When ChatGPT launched in 2022, it could only code for 30 seconds at a time. Today’s AI agents can code for hours, and do basic programming faster with fewer mistakes.
Data suggests that even though AI startups such as OpenAI and Anthropic are hiring many people, that hiring is not offsetting the decline elsewhere. Employment for early-career software developers between the ages of 22 and 25 has declined by nearly 20% from its peak in late 2022, according to a Stanford study.
Software engineers aren’t the only ones affected; customer service and accounting jobs are also highly exposed to competition from AI. The Stanford study estimated that entry-level hiring for AI-exposed jobs declined 13% relative to less-exposed jobs such as nursing.
In the Los Angeles region, another study estimated that close to 200,000 jobs are exposed. Around 40% of tasks done by call center workers, editors and personal finance experts could be automated and done by AI, according to an AI Exposure Index curated by resume builder MyPerfectResume.
Many tech startups and titans have not been shy about broadcasting that they are cutting back on hiring plans as AI allows them to do more programming with fewer people.
Anthropic Chief Executive Dario Amodei said that 70% to 90% of the code for some products at his company is written by his company’s AI, called Claude. In May, he predicted that AI’s capabilities will increase until close to 50% of all entry-level white-collar jobs might be wiped out in five years.
A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need “two skilled engineers and one of these LLM-based agents,” which can be just as productive, said Nenad Medvidović, a computer science professor at the University of Southern California.
“We don’t need the junior developers anymore,” said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. “The AI now can code better than the average junior developer that comes out of the best schools out there.”
To be sure, AI is still a long way from causing the extinction of software engineers. As AI handles structured, repetitive tasks, human engineers’ jobs are shifting toward oversight.
Today’s AIs are powerful but “jagged,” meaning they can excel at certain math problems yet still fail basic logic tests and aren’t consistent. One study found that AI tools made experienced developers 19% slower at work, as they spent more time reviewing code and fixing errors.
Students should focus on learning how to manage and check the work of AI as well as getting experience working with it, said John David N. Dionisio, a computer science professor at LMU.
Stanford students say they are arriving at the job market and finding a fork in the road: capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing.
As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn’t have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.
“If you look at the enrollment numbers in the past two years, they’ve skyrocketed for people wanting to do a fifth-year master’s,” the Stanford graduate said. “It’s a whole other year, a whole other cycle to do recruiting. I would say, half of my friends are still on campus doing their fifth-year master’s.”
After four months of searching, LMU graduate Akgul finally landed a technical lead job at a software consultancy in Los Angeles. At her new job, she uses AI coding tools, but she feels like she has to do the work of three developers.
Universities and students will have to rethink their curricula and majors to ensure that their four years of study prepare them for a world with AI.
“That’s been a dramatic reversal from three years ago, when all of my undergraduate mentees found great jobs at the companies around us,” Stanford’s Liphardt said. “That has changed.”