Business

AI safety bill passes California Legislature

A controversial bill that would require developers of advanced AI models to adopt safety measures is one step closer to becoming law.

The bill, SB 1047, would require developers of future advanced AI models to create guardrails to prevent the technology from being misused to conduct cyberattacks on critical infrastructure such as power plants.

Developers would need to submit their safety plans to the attorney general, who could hold them liable if AI models they directly control were to cause harm or pose an imminent threat to public safety.

The bill, introduced by Sen. Scott Wiener (D-San Francisco), passed the state Assembly on Wednesday, with 41 votes in favor and nine opposed. On Thursday, the measure was approved by the state Senate in a concurrence vote. It now heads to Gov. Gavin Newsom’s office, though it’s unclear whether Newsom will sign or veto the bill.

“Innovation and safety can go hand in hand — and California is leading the way,” Wiener said in a statement.

A spokesperson for Newsom said the governor will evaluate the bill when it reaches his desk.

Wiener’s bill was fiercely debated in the Bay Area’s tech community. It received support from the Center for AI Safety, Tesla Chief Executive Elon Musk, the L.A. Times editorial board and San Francisco-based AI startup Anthropic.

But it was opposed by prominent AI players including Meta and OpenAI, which raised concerns about whether the legislation would stifle innovation in California.

Democratic congressional leaders, including former House Speaker Nancy Pelosi, Rep. Ro Khanna (D-Fremont) and Rep. Zoe Lofgren (D-San José), have also opposed the bill and urged Newsom to veto it. They argue the legislation could hurt California’s growing AI industry, home to ChatGPT maker OpenAI, and cite efforts Congress is making related to AI.

“There is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California,” Khanna, Lofgren and six other Democratic congressional representatives wrote in a letter to Newsom.

“While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit,” Pelosi said in a statement.

Wiener and other legislators supporting the bill disagree, contending it would foster innovation while also protecting the public.

“You have to put guardrails,” Assemblymember Devon Mathis (R-Visalia) said before the Assembly’s vote on Wednesday afternoon. “We have to make sure they are going to be responsible players.”

Proponents of SB 1047 say it requires developers to be responsible for the safety of advanced AI models in their control, which could help prevent catastrophic AI events in the future.

“I worry that technology companies will not solve the significant risks associated with AI on their own because they’re locked in their race for market share and profit maximization,” Yoshua Bengio, a professor at Université de Montréal and the founder and scientific director of Mila — Quebec Artificial Intelligence Institute, said at a media briefing this week. “We simply can’t let them grade their own homework and hope for the best.”

Backers also say AI should be regulated similarly to other industries that pose potential safety risks.

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” Musk wrote on X on Monday. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

Earlier this month, the bill passed a key state Senate committee after Wiener made significant changes, including removing a perjury penalty and changing the legal standard for developers regarding the safety of their advanced AI models.

San Francisco-based AI startup Anthropic’s CEO, Dario Amodei, said he believed the bill’s “benefits likely outweigh its costs” in an Aug. 21 letter to Newsom. The letter did not endorse the bill but shared the company’s viewpoint on the pros and cons.

“We want to be clear … that SB 1047 addresses real and serious concerns with catastrophic risk in AI systems,” Amodei wrote. “AI systems are advancing in capabilities extremely quickly, which offers both great promise for California’s economy and substantial risk.”

But some tech companies including OpenAI said they opposed the bill even after the changes.

“The broad and significant implications of AI for U.S. competitiveness and national security require that regulation of frontier models be shaped and implemented at the federal level,” OpenAI Chief Strategy Officer Jason Kwon wrote in an Aug. 21 letter to Wiener. “A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the U.S. to lead the development of global standards.”

Wiener said he would welcome a strong federal AI safety law that preempts his bill.

“If past experience is any indication, enacting such a [federal] law will be an uphill fight,” Wiener said in a statement. “In the meantime, California should continue to lead on policies like SB 1047 that foster innovation while also protecting the public.”

SB 1047 is among roughly 50 AI-related bills in the Legislature that address various aspects of the technology’s impact on the public, including jobs, deepfakes and safety.

Business

U.S. Space Force awards $1.6 billion in contracts to South Bay satellite builders

The U.S. Space Force announced Friday it has awarded satellite contracts with a combined value of about $1.6 billion to Rocket Lab in Long Beach and to the Redondo Beach Space Park campus of Northrop Grumman.

The contracts, awarded by the Space Development Agency, will fund the construction of 18 satellites by each company for a network in development that will provide warning of advanced threats such as hypersonic missiles.

Northrop Grumman has been awarded contracts for prior phases of the Proliferated Warfighter Space Architecture, a planned network of missile defense and communications satellites in low Earth orbit.

The contract announced Friday is valued at $764 million, and the company is now set to deliver a total of 150 satellites for the network.

The $805-million contract awarded to Rocket Lab is its largest to date. It had previously been awarded a $515-million contract to deliver 18 communications satellites for the network.

Founded in 2006 in New Zealand, the company builds satellites and provides small-satellite launch services for commercial and government customers with its Electron rocket. It moved to Long Beach in 2020 from Huntington Beach and is developing a larger rocket.

“This is more than just a contract. It’s a resounding affirmation of our evolution from simply a trusted launch provider to a leading vertically integrated space prime contractor,” Rocket Lab founder and Chief Executive Peter Beck said in online remarks.

The company said it could eventually earn up to $1 billion under the contract by supplying components to other builders of the satellite network.

Also awarded contracts announced Friday were a Lockheed Martin group in Sunnyvale, Calif., and L3Harris Technologies of Fort Wayne, Ind. Those contracts for 36 satellites were valued at nearly $2 billion.

Gurpartap “GP” Sandhoo, acting director of the Space Development Agency, said the contracts awarded “will achieve near-continuous global coverage for missile warning and tracking” in addition to other capabilities.

Northrop Grumman said the satellites are being built to respond to the rise of hypersonic missiles, which maneuver in flight and require infrared tracking and speedy data transmission to protect U.S. troops.

Beck said that the contracts reflect Rocket Lab’s growth into an “industry disruptor” and growing space prime contractor.

Business

California-based company recalls thousands of cases of salad dressing over ‘foreign objects’

A California food manufacturer is recalling thousands of cases of salad dressing distributed to major retailers over potential contamination from “foreign objects.”

The company, Irvine-based Ventura Foods, recalled 3,556 cases of the dressing that could be contaminated by “black plastic planting material” in the granulated onion used, according to an alert issued by the U.S. Food and Drug Administration.

Ventura Foods voluntarily initiated the recall of the product, which was sold at Costco, Publix and several other retailers across 27 states, according to the FDA.

None of the 42 locations where the product was sold were in California.

Ventura Foods said it issued the recall after one of its ingredient suppliers recalled a batch of onion granules that the company had used in some of its dressings.

“Upon receiving notice of the supplier’s recall, we acted with urgency to remove all potentially impacted product from the marketplace. This includes urging our customers, their distributors and retailers to review their inventory, segregate and stop the further sale and distribution of any products subject to the recall,” said company spokesperson Eniko Bolivar-Murphy in an emailed statement. “The safety of our products is and will always be our top priority.”

The FDA issued its initial recall alert in early November. Costco also alerted customers at that time, noting that customers could return the products to stores for a full refund. The affected products had sell-by dates between Oct. 17 and Nov. 9.

The company recalled the following types of salad dressing:

  • Creamy Poblano Avocado Ranch Dressing and Dip
  • Ventura Caesar Dressing
  • Pepper Mill Regal Caesar Dressing
  • Pepper Mill Creamy Caesar Dressing
  • Caesar Dressing served at Costco Service Deli
  • Caesar Dressing served at Costco Food Court
  • Hidden Valley Buttermilk Ranch

Business

They graduated from Stanford. Due to AI, they can’t find a job

A Stanford software engineering degree used to be a golden ticket. Artificial intelligence has devalued it to bronze, recent graduates say.

The elite students are shocked by the lack of job offers as they finish studies at what is often ranked as the top university in America.

When they were freshmen, ChatGPT hadn’t yet been released upon the world. Today, AI can code better than most humans.

Top tech companies just don’t need as many fresh graduates.

“Stanford computer science graduates are struggling to find entry-level jobs” with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. “I think that’s crazy.”

While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers.

Stanford students describe a suddenly skewed job market, where just a small slice of graduates — those considered “cracked engineers” who already have thick resumes building products and doing research — are getting the few good jobs, leaving everyone else to fight for scraps.

“There’s definitely a very dreary mood on campus,” said a recent computer science graduate who asked not to be named so they could speak freely. “People [who are] job hunting are very stressed out, and it’s very hard for them to actually secure jobs.”

The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees.

Eylul Akgul graduated last year with a degree in computer science from Loyola Marymount University. She wasn’t getting offers, so she went home to Turkey and got some experience at a startup. In May, she returned to the U.S., and still, she was “ghosted” by hundreds of employers.

“The industry for programmers is getting very oversaturated,” Akgul said.

The engineers’ most significant competitor is getting stronger by the day. When ChatGPT launched in 2022, it could only code for 30 seconds at a time. Today’s AI agents can code for hours, and do basic programming faster with fewer mistakes.

Data suggests that even though AI startups like OpenAI and Anthropic are hiring many people, that hiring is not offsetting the decline elsewhere. Employment for specific groups, such as early-career software developers between the ages of 22 and 25, has declined by nearly 20% from its peak in late 2022, according to a Stanford study.

It isn’t just software engineers: customer service and accounting jobs are also highly exposed to competition from AI. The Stanford study estimated that entry-level hiring for AI-exposed jobs declined 13% relative to less-exposed jobs such as nursing.

In the Los Angeles region, another study estimated that close to 200,000 jobs are exposed. Around 40% of tasks done by call center workers, editors and personal finance experts could be automated and done by AI, according to an AI Exposure Index curated by resume builder MyPerfectResume.

Many tech startups and titans have not been shy about broadcasting that they are cutting back on hiring plans as AI allows them to do more programming with fewer people.

Anthropic Chief Executive Dario Amodei said that 70% to 90% of the code for some products at his company is written by his company’s AI, called Claude. In May, he predicted that as AI’s capabilities increase, close to 50% of all entry-level white-collar jobs could be wiped out within five years.

A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need “two skilled engineers and one of these LLM-based agents,” which can be just as productive, said Nenad Medvidović, a computer science professor at the University of Southern California.

“We don’t need the junior developers anymore,” said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. “The AI now can code better than the average junior developer that comes out of the best schools out there.”

To be sure, AI is still a long way from causing the extinction of software engineers. As AI handles structured, repetitive tasks, human engineers’ jobs are shifting toward oversight.

Today’s AIs are powerful but “jagged,” meaning they can excel at certain math problems yet still fail basic logic tests and aren’t consistent. One study found that AI tools made experienced developers 19% slower at work, as they spent more time reviewing code and fixing errors.

Students should focus on learning how to manage and check the work of AI as well as getting experience working with it, said John David N. Dionisio, a computer science professor at LMU.

Stanford students say they are arriving at the job market and finding a split in the road; capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing.

As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn’t have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.

“If you look at the enrollment numbers in the past two years, they’ve skyrocketed for people wanting to do a fifth-year master’s,” the Stanford graduate said. “It’s a whole other year, a whole other cycle to do recruiting. I would say, half of my friends are still on campus doing their fifth-year master’s.”

After four months of searching, LMU graduate Akgul finally landed a technical lead job at a software consultancy in Los Angeles. At her new job, she uses AI coding tools, but she feels like she has to do the work of three developers.

Universities and students will have to rethink their curricula and majors to ensure that their four years of study prepare them for a world with AI.

“That’s been a dramatic reversal from three years ago, when all of my undergraduate mentees found great jobs at the companies around us,” Stanford’s Liphardt said. “That has changed.”
