California’s governor has the chance to make AI history

Advocates say it is a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents say it is a dangerous and arrogant step that will “stifle innovation.”
In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — has now passed the California State Assembly by a margin of 48 to 16. Back in May, it passed the Senate by 32 to 1. Once the Senate agrees to the Assembly’s changes to the bill, which it is expected to do shortly, the measure goes to Gov. Gavin Newsom’s desk.
The bill, which would hold AI companies liable for catastrophic harms their “frontier” models may cause, is backed by a wide array of AI safety groups, as well as luminaries in the field like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have warned of the technology’s potential to pose massive, even existential dangers to humankind. It got a surprise last-minute endorsement from Elon Musk, who among his other ventures runs the AI firm xAI.
Lined up against SB 1047 is nearly all of the tech industry, including OpenAI, Meta, the powerful investors Y Combinator and Andreessen Horowitz, and some academic researchers who fear it threatens open source AI models. Anthropic, another AI heavyweight, lobbied to water down the bill. After many of its proposed amendments were adopted in August, the company said the bill’s “benefits likely outweigh its costs.”
Despite the industry backlash, the bill seems to be popular with Californians, though all surveys on it have been funded by interested parties. A recent poll by the pro-bill AI Policy Institute found 70 percent of residents in favor, with even higher approval ratings among Californians working in tech. The California Chamber of Commerce commissioned a poll finding a plurality of Californians opposed, but the poll’s wording was slanted, to say the least, describing the bill as requiring developers to “pay tens of millions of dollars in fines if they don’t implement orders from state bureaucrats.” The AI Policy Institute’s poll presented both pro and con arguments; the Chamber’s poll only bothered with a “con” argument.
The wide, bipartisan margins by which the bill passed the Assembly and Senate, and the public’s general support (when not asked in a biased way), might suggest that Gov. Newsom is likely to sign. But it’s not so simple. Andreessen Horowitz, the $43 billion venture capital giant, has hired Newsom’s close friend and Democratic operative Jason Kinney to lobby against the bill, and a number of powerful Democrats, including eight members of the US House from California and former Speaker Nancy Pelosi, have urged a veto, echoing talking points from the tech industry.
So there’s a strong chance that Newsom will veto the bill, keeping California — the center of the AI industry — from becoming the first state with robust AI liability rules. At stake is not just AI safety in California, but also in the US and potentially the world.
Given all the intense lobbying it has attracted, one might think that SB 1047 is an aggressive, heavy-handed bill — but, especially after several rounds of revisions in the State Assembly, the actual bill does fairly little.
It would offer whistleblower protections to tech workers, along with a process for people who have confidential information about risky behavior at an AI lab to take their complaint to the state Attorney General without fear of prosecution. It also requires AI companies that spend more than $100 million to train an AI model to develop safety plans. (The extraordinarily high threshold for this requirement to kick in is meant to protect California’s startup industry, which objected that the compliance burden would be too high for small companies.)
So what about this bill would possibly prompt months of hysteria, intense lobbying from the California business community, and unprecedented intervention by California’s federal representatives? Part of the answer is that the bill used to be stronger. The initial version pegged compliance to the use of a certain amount of computing power rather than to a $100 million training cost, meaning that over time, more and more companies would have become subject to the law as computing continued to get cheaper. It would also have established a state agency, the “Frontier Models Division,” to review safety plans; the industry objected to what it saw as a power grab.
Another part of the answer is that a lot of people were falsely told the bill does more. One prominent critic inaccurately claimed that AI developers could be guilty of a felony regardless of whether they were involved in a harmful incident; in fact, the bill only provided for criminal liability when a developer knowingly lied under oath. (Those provisions were subsequently removed anyway.) Rep. Zoe Lofgren of the House Science, Space, and Technology Committee wrote a letter in opposition falsely claiming that the bill requires adherence to guidance that doesn’t exist yet.
But the standards do exist (you can read them in full here), and the bill does not require firms to adhere to them. It says only that “a developer shall consider industry best practices and applicable guidance” from the US Artificial Intelligence Safety Institute, the National Institute of Standards and Technology, the Government Operations Agency, and other reputable organizations.
A lot of the discussion of SB 1047 unfortunately centered around straightforwardly incorrect claims like these, in many cases propounded by people who should have known better.
SB 1047 is premised on the idea that near-future AI systems might be extraordinarily powerful, that they accordingly might be dangerous, and that some oversight is required. That core proposition is extraordinarily controversial among AI researchers. Nothing exemplifies the split more than the three men frequently called the “godfathers of machine learning,” Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. Bengio — a Future Perfect 2023 honoree — and Hinton have both in the last few years become convinced that the technology they created may kill us all and argued for regulation and oversight. Hinton stepped down from Google in 2023 to speak openly about his fears.
LeCun, who is chief AI scientist at Meta, has taken the opposite tack, declaring that such worries are nonsensical science fiction and that any regulation would strangle innovation. Where Bengio and Hinton find themselves supporting the bill, LeCun opposes it, especially the idea that AI companies should face liability if AI is used in a mass casualty event.
In this sense, SB 1047 is the center of a symbolic tug-of-war: Does government take AI safety concerns seriously, or not? The actual text of the bill may be limited, but to the extent that it suggests government is listening to the half of experts that think that AI might be extraordinarily dangerous, the implications are big.
It’s that sentiment that has likely driven some of the fiercest lobbying against the bill by venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z has been working relentlessly to kill it, as well as the highly unusual outreach to federal legislators demanding they oppose a state bill. More mundane politics likely plays a role, too: Politico reported that Pelosi opposed the bill because she’s trying to court tech VCs for her daughter, who is likely to run against Scott Wiener for a House of Representatives seat.
Why SB 1047 is so important
It might seem strange that legislation in just one US state has so many people wringing their hands. But remember: California is not just any state. It’s where several of the world’s leading AI companies are based.
And what happens there is especially important because, at the federal level, lawmakers have been dragging out the process of regulating AI. Between Washington’s hesitation and the looming election, it’s falling to states to pass new laws. The California bill, if Newsom gives it the green light, would be one big piece of that puzzle, setting the direction for the US more broadly.
The rest of the world is watching, too. “Countries around the world are looking at these drafts for ideas that can influence their decisions on AI laws,” Victoria Espinel, the chief executive of the Business Software Alliance, a lobbying group representing major software companies, told the New York Times in June.
Even China — often invoked as the boogeyman in American conversations about AI development (because “we don’t want to lose an arms race with China”) — is showing signs of caring about safety, not just wanting to run ahead. Bills like SB 1047 could telegraph to others that Americans also care about safety.
Frankly, it’s refreshing to see legislators wise up to the tech world’s favorite gambit: claiming that it can regulate itself. That claim may have held sway in the era of social media, but it’s become increasingly untenable. We need to regulate Big Tech. That means not just carrots, but sticks, too.
Newsom has the opportunity to do something historic. And if he doesn’t? Well, he’ll face some sticks of his own. The AI Policy Institute’s poll shows that 60 percent of voters are prepared to blame him for future AI-related incidents if he vetoes SB 1047, and some would punish him at the ballot box if he runs for higher office: 40 percent of California voters say they would be less likely to vote for Newsom in a future presidential primary if he vetoes the bill.
California Continues Targeting Food Additives, Dyes With Executive Order on Ultra-Processed Foods
California Governor Gavin Newsom has issued an executive order directing state agencies to explore the food safety of ultra-processed foods, food dyes, and “generally recognized as safe” (GRAS) ingredients, and to recommend actions to mitigate their adverse health effects.
The executive order characterizes ultra-processed foods and ingredients as “industrial formulations of chemically modified substances extracted from foods, along with additives to enhance taste, texture, appearance, and durability, with minimal to no inclusion of whole foods.” Common examples include packaged snacks, chips, crackers, cookies, candy, sugary beverages, and highly processed meats like hot dogs and lunch meats. It also calls attention to the myriad chemicals, such as food colorants, authorized for food use in the U.S., claiming that more than 10,000 such substances are currently present in the U.S. food supply, in comparison to the 300 authorized for use in the EU.
Many food chemicals enter the nation’s food supply through the U.S. Food and Drug Administration’s (FDA’s) GRAS process, which lawmakers and scientists have criticized as a “loophole” allowing potentially toxic additives in food. In a recent article by Harvard medical and law experts, the authors called GRAS a “laissez-faire approach to monitoring the safety of ingredients” that poses a threat to public health.
In this context, California has passed several precedent-setting pieces of state legislation on chemical food additives and colorants in recent years, such as the California Food Safety Act and the California School Food Safety Act.
Continuing state efforts to crack down on chemical food additives, Gov. Newsom’s latest executive order includes, but is not limited to, the following mandates:
- No later than April 1, 2025, the California Department of Public Health (CDPH) will provide recommendations to the Governor’s office regarding potential actions to limit the harms associated with ultra-processed foods and food ingredients that pose a public health risk (e.g., the inclusion of warning labels on certain ultra-processed foods)
- The Office of Environmental Health Hazard Assessment (OEHHA), in consultation with CDPH, will investigate the adverse human health impacts of food dyes, and provide a briefing to the Governor’s office no later than April 1
- No later than April 1, CDPH and OEHHA will report to the Governor’s office on the feasibility of state-level evaluation of food additives considered GRAS, as well as state actions that can be taken if companies fail to notify FDA of certain food additives through the GRAS process
The executive order also includes actions aimed at decreasing the purchase of ultra-processed foods; increasing access to healthy foods; and improving the nutrition of California school meals while increasing the amount of fresh, locally grown ingredients they use.
However, some groups have criticized California’s approach to food additive regulation for leading the charge on an emerging patchwork of state rules. For example, prior to the passage of the California School Food Safety Act, the Consumer Brands Association (CBA) stated, “[The bill] sets a dangerous precedent for state politicians to substitute their own views on food safety ahead of the scientists and risk-based review system that stringently protects America’s food supply. Americans deserve unified guidance that follows the science, not a patchwork of confusing laws.”
High wind warning for California for Tuesday and Wednesday, according to the NWS
Perry, real-life donkey who inspired iconic 'Shrek' character, dies at 30
Monday, January 6, 2025 12:57AM
PALO ALTO, Calif. — A famous donkey from California that helped inspire the movie character “Donkey” in “Shrek” has died.
Perry was 30 years old.
In an Instagram post on Friday, BPDonkeys wrote, “We are heartbroken to share that our beloved Barron Park donkey, Perry, passed away yesterday at the age of 30. He was a beloved member of our community and we know many people will be touched by his passing. Memorial plans will be announced soon.”
Perry resided at Cornelis Bol Park in Palo Alto, California, and served as a support animal.

Paying for his care, and for the care of the park’s other donkeys, slowly became a point of controversy over time. The city faced a budget deficit last year, and a city councilmember pushed back against paying tens of thousands of dollars for their upkeep.
A memorial will be held for Perry at a later date.