Business
We Asked A.I. to Create the Joker. It Generated a Copyrighted Image.
When Reid Southen, a movie concept artist based in Michigan, tried an A.I. image generator for the first time, he was intrigued by its power to transform simple text prompts into images.
But after he learned how A.I. systems were trained on other people’s artwork, his curiosity gave way to more unsettling thoughts: Were the tools exploiting artists and violating copyright in the process?
Inspired by tests he saw circulating online, he asked Midjourney, an A.I. image generator, to create an image of Joaquin Phoenix from “The Joker.” In seconds, the system made an image nearly identical to a frame from the 2019 film.
Reid Southen: Create an image of Joaquin Phoenix Joker movie, 2019, screenshot from a movie, movie scene
Midjourney’s response: Generated by A.I.
Copyrighted image from Warner Bros.
Note: Mr. Southen’s full prompt was: “Joaquin Phoenix Joker movie, 2019, screenshot from a movie, movie scene --ar 16:9 --v 6.0.” The prompt specifies Midjourney’s version number (6.0) and an aspect ratio (16:9).
He ran more tests with various prompts. “Videogame hedgehog” returned Sonic, Sega’s wisecracking protagonist. “Animated toys” created a tableau featuring Woody, Buzz and other characters from Pixar’s “Toy Story.” When he typed “popular movie screencap,” out popped Iron Man, the Marvel character, in a familiar pose.
“What they’re doing is clear evidence of exploitation and using I.P. that they don’t have licenses to,” said Mr. Southen, referring to A.I. companies’ use of intellectual property.
Mr. Southen: popular movie screencap
Midjourney’s response: Generated by A.I.
Copyrighted image from Marvel
Note: Mr. Southen’s full prompt was: “popular movie screencap --ar 1:1 --v 6.0.” The prompt specifies Midjourney’s version number (6.0) and an aspect ratio (1:1).
The tests — which were replicated by other artists, A.I. watchdogs and reporters at The New York Times — raise questions about the training data used to create every A.I. system and whether the companies are violating copyright laws.
Several lawsuits, from actors like Sarah Silverman and authors like John Grisham, have put that question before the courts. (The Times has sued OpenAI, the company behind ChatGPT, and Microsoft, a major backer of the company, for infringing its copyright on news content.)
A.I. companies have responded that using copyrighted material is protected under “fair use,” a part of copyright law that allows material to be used in certain cases. They also said that reproducing copyrighted material too closely is a bug, often called “memorization,” that they are trying to fix. Memorization can happen when the training data contains many similar or identical copies of an image, A.I. experts said. But the problem also occurs with material that appears only rarely in the training data, such as emails.
For example, when Mr. Southen asked Midjourney for a “Dune movie screencap” from the “Dune movie trailer,” the model may have had only limited options to draw from. The result was a frame nearly indistinguishable from one in the movie’s trailer.
Mr. Southen
Create an image of Dune movie screencap, 2021, Dune movie trailer
Midjourney’s response
Generated by A.I.
Copyrighted image from Warner Bros.
Note: Mr. Southen’s full prompt was: “dune movie screencap, 2021, dune movie trailer --ar 16:9 --v 6.0.” The prompt specifies Midjourney’s version number (6.0) and an aspect ratio (16:9).
A spokeswoman for OpenAI pointed to a blog post in which the company argued that training on publicly accessible data was “fair use” and that it provided several ways for creators and artists to opt out of its training process.
Midjourney did not respond to requests for comment. The company edited its terms of service in December, adding that users cannot use the service to “violate the intellectual property rights of others, including copyright.” Microsoft declined to comment.
Warner Bros., which owns copyrights to several films tested by Mr. Southen, declined to comment.
“Nobody knows how this is going to come out, and anyone who tells you ‘It’s definitely fair use’ is wrong,” said Keith Kupferschmid, the president and chief executive of the Copyright Alliance, an industry group that represents copyright holders. “This is a new frontier.”
A.I. companies could violate copyright in two ways, Mr. Kupferschmid said: They could train on copyrighted material that they have not licensed, or they could reproduce copyrighted material when users enter a prompt.
The experiments by Mr. Southen and others exposed instances of both.
Mr. Southen
Create an image of “The Last of Us 2,” Ellie with guitar in front of tree
Midjourney’s response
Generated by A.I.
Copyrighted image from Naughty Dog, the video game developer
Note: Mr. Southen’s full prompt was: “the last of us 2 ellie with guitar in front of tree --v 6.0 --ar 16:9.” The prompt specifies Midjourney’s version number (6.0) and an aspect ratio (16:9).
A.I. companies said they had established guardrails that could prevent their A.I. systems from producing material that violates copyright. But critics like Gary Marcus, a professor emeritus at New York University who is an A.I. expert and creator of the newsletter “Marcus on A.I.,” said that despite those strategies, copyrighted material still slips through.
When Times journalists asked ChatGPT to create an image of SpongeBob SquarePants, the children’s animated television character, it produced an image remarkably similar to the cartoon. The chatbot said the image only resembled the copyrighted work. The differences were subtle — the character’s tie was yellow instead of red, and it had eyebrows instead of eyelashes.
N.Y.T. Create an image of SpongeBob SquarePants
ChatGPT’s response Generated by A.I.
Here is the image of the character you described, resembling SpongeBob SquarePants.
When Times journalists omitted SpongeBob’s name from another request, OpenAI created a character that was even closer to the copyrighted work.
N.Y.T. Create an image of an animated sponge wearing pants.
ChatGPT’s response Generated by A.I.
Here is the image of the animated sponge wearing pants.
Copyrighted image from Viacom
Prof. Kathryn Conrad, who teaches English at the University of Kansas and has collaborated with Mr. Marcus, started her own tests because she was concerned that A.I. systems could replace and devalue artists by training off their intellectual property.
In her experiments, she asked Microsoft Bing for an “Italian video game character” without mentioning Mario, the famed character owned by Nintendo. The image generator from Microsoft created artwork that closely resembled the copyrighted work. Microsoft’s tool uses a version of DALL-E, the image generator created by OpenAI.
Professor Conrad Could you create an original image of an Italian video game character?
Microsoft Bing’s response
Generated by A.I.
Since that experiment was published in December, the image generator has produced different results. An identical prompt, input in January by Times reporters, resulted in images that strayed more significantly from the copyrighted material, suggesting to Professor Conrad that the company may be tightening its guardrails.
N.Y.T. Could you create an original image of an Italian video game character?
Microsoft Bing’s response
Generated by A.I.
“This is a Band-Aid on a bleeding wound,” Professor Conrad said of the safeguards implemented by OpenAI and others. “This isn’t going to be fixed easily just with a guardrail.”
U.S. Space Force awards $1.6 billion in contracts to South Bay satellite builders
The U.S. Space Force announced Friday it has awarded satellite contracts with a combined value of about $1.6 billion to Rocket Lab in Long Beach and to the Redondo Beach Space Park campus of Northrop Grumman.
The contracts by the Space Development Agency will fund the construction by each company of 18 satellites for a network in development that will provide warning of advanced threats such as hypersonic missiles.
Northrop Grumman has been awarded contracts for prior phases of the Proliferated Warfighter Space Architecture, a planned network of missile defense and communications satellites in low Earth orbit.
The contract announced Friday is valued at $764 million, and the company is now set to deliver a total of 150 satellites for the network.
The $805-million contract awarded to Rocket Lab is its largest to date. It had previously been awarded a $515 million contract to deliver 18 communications satellites for the network.
Founded in 2006 in New Zealand, the company builds satellites and provides small-satellite launch services for commercial and government customers with its Electron rocket. It moved to Long Beach in 2020 from Huntington Beach and is developing a larger rocket.
“This is more than just a contract. It’s a resounding affirmation of our evolution from simply a trusted launch provider to a leading vertically integrated space prime contractor,” Rocket Lab founder and chief executive Peter Beck said in online remarks.
The company said it could eventually earn up to $1 billion from the contract by supplying components to other builders of the satellite network.
Also awarded contracts announced Friday were a Lockheed Martin group in Sunnyvale, Calif., and L3Harris Technologies of Fort Wayne, Ind. Those contracts, for 36 satellites, were valued at nearly $2 billion.
Gurpartap “GP” Sandhoo, acting director of the Space Development Agency, said the contracts awarded “will achieve near-continuous global coverage for missile warning and tracking” in addition to other capabilities.
Northrop Grumman said the satellites are being built to respond to the rise of hypersonic missiles, which maneuver in flight and require infrared tracking and speedy data transmission to protect U.S. troops.
Beck said the contracts reflect Rocket Lab’s growth into an “industry disruptor” and a growing space prime contractor.
California-based company recalls thousands of cases of salad dressing over ‘foreign objects’
A California food manufacturer is recalling thousands of cases of salad dressing distributed to major retailers over potential contamination from “foreign objects.”
The company, Irvine-based Ventura Foods, recalled 3,556 cases of the dressing that could be contaminated by “black plastic planting material” in the granulated onion used, according to an alert issued by the U.S. Food and Drug Administration.
Ventura Foods voluntarily initiated the recall of the product, which was sold at Costco, Publix and several other retailers across 27 states, according to the FDA.
None of the 42 locations where the product was sold were in California.
Ventura Foods said it issued the recall after one of its ingredient suppliers recalled a batch of onion granules that the company had used in some of its dressings.
“Upon receiving notice of the supplier’s recall, we acted with urgency to remove all potentially impacted product from the marketplace. This includes urging our customers, their distributors and retailers to review their inventory, segregate and stop the further sale and distribution of any products subject to the recall,” said company spokesperson Eniko Bolivar-Murphy in an emailed statement. “The safety of our products is and will always be our top priority.”
The FDA issued its initial recall alert in early November. Costco also alerted customers at that time, noting that customers could return the products to stores for a full refund. The affected products had sell-by dates between Oct. 17 and Nov. 9.
The company recalled the following types of salad dressing:
- Creamy Poblano Avocado Ranch Dressing and Dip
- Ventura Caesar Dressing
- Pepper Mill Regal Caesar Dressing
- Pepper Mill Creamy Caesar Dressing
- Caesar Dressing served at Costco Service Deli
- Caesar Dressing served at Costco Food Court
- Hidden Valley, Buttermilk Ranch
They graduated from Stanford. Due to AI, they can’t find a job
A Stanford software engineering degree used to be a golden ticket. Artificial intelligence has devalued it to bronze, recent graduates say.
The elite students are shocked by the lack of job offers as they finish studies at what is often ranked as the top university in America.
When they were freshmen, ChatGPT hadn’t yet been released upon the world. Today, AI can code better than most humans.
Top tech companies just don’t need as many fresh graduates.
“Stanford computer science graduates are struggling to find entry-level jobs” with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. “I think that’s crazy.”
While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers.
Stanford students describe a suddenly skewed job market, where just a small slice of graduates — those considered “cracked engineers” who already have thick resumes building products and doing research — are getting the few good jobs, leaving everyone else to fight for scraps.
“There’s definitely a very dreary mood on campus,” said a recent computer science graduate who asked not to be named so they could speak freely. “People [who are] job hunting are very stressed out, and it’s very hard for them to actually secure jobs.”
The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees.
Eylul Akgul graduated last year with a degree in computer science from Loyola Marymount University. She wasn’t getting offers, so she went home to Turkey and got some experience at a startup. In May, she returned to the U.S., and still, she was “ghosted” by hundreds of employers.
“The industry for programmers is getting very oversaturated,” Akgul said.
The engineers’ most significant competitor is getting stronger by the day. When ChatGPT launched in 2022, it could only code for 30 seconds at a time. Today’s AI agents can code for hours, and do basic programming faster with fewer mistakes.
Data suggests that even though AI startups like OpenAI and Anthropic are hiring many people, that hiring is not offsetting the decline elsewhere. Employment for specific groups, such as early-career software developers between the ages of 22 and 25, has declined by nearly 20% from its peak in late 2022, according to a Stanford study.
It wasn’t just software engineers: customer service and accounting jobs were also highly exposed to competition from AI. The Stanford study estimated that entry-level hiring for AI-exposed jobs declined 13% relative to less-exposed jobs such as nursing.
In the Los Angeles region, another study estimated that close to 200,000 jobs are exposed. Around 40% of tasks done by call center workers, editors and personal finance experts could be automated and done by AI, according to an AI Exposure Index curated by resume builder MyPerfectResume.
Many tech startups and titans have not been shy about broadcasting that they are cutting back on hiring plans as AI allows them to do more programming with fewer people.
Anthropic Chief Executive Dario Amodei said that 70% to 90% of the code for some products at his company is written by his company’s AI, called Claude. In May, he predicted that AI’s capabilities will increase until close to 50% of all entry-level white-collar jobs might be wiped out in five years.
A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need “two skilled engineers and one of these LLM-based agents,” which can be just as productive, said Nenad Medvidović, a computer science professor at the University of Southern California.
“We don’t need the junior developers anymore,” said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. “The AI now can code better than the average junior developer that comes out of the best schools out there.”
To be sure, AI is still a long way from causing the extinction of software engineers. As AI handles structured, repetitive tasks, human engineers’ jobs are shifting toward oversight.
Today’s AIs are powerful but “jagged,” meaning they can excel at certain math problems yet still fail basic logic tests and aren’t consistent. One study found that AI tools made experienced developers 19% slower at work, as they spent more time reviewing code and fixing errors.
Students should focus on learning how to manage and check the work of AI as well as getting experience working with it, said John David N. Dionisio, a computer science professor at LMU.
Stanford students say they are arriving at the job market and finding a split in the road; capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing.
As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn’t have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.
“If you look at the enrollment numbers in the past two years, they’ve skyrocketed for people wanting to do a fifth-year master’s,” the Stanford graduate said. “It’s a whole other year, a whole other cycle to do recruiting. I would say, half of my friends are still on campus doing their fifth-year master’s.”
After four months of searching, LMU graduate Akgul finally landed a technical lead job at a software consultancy in Los Angeles. At her new job, she uses AI coding tools, but she feels like she has to do the work of three developers.
Universities and students will have to rethink their curricula and majors to ensure that their four years of study prepare them for a world with AI.
“That’s been a dramatic reversal from three years ago, when all of my undergraduate mentees found great jobs at the companies around us,” Stanford’s Liphardt said. “That has changed.”