Silicon Valley Takes AGI Seriously—Washington Should Too


Artificial General Intelligence—machines that can learn and perform any cognitive task that a human can—has long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; it’s an impending reality that demands our immediate attention.

On Sept. 17, during a Senate Judiciary Subcommittee hearing titled “Oversight of AI: Insiders’ Perspectives,” whistleblowers from leading AI companies sounded the alarm on the rapid advance toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown University’s Center for Security and Emerging Technology, testified that “the biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence.” She added that leading AI companies such as OpenAI, Google, and Anthropic are “treating building AGI as an entirely serious goal.”

Toner’s co-witness William Saunders, a former OpenAI researcher who resigned after losing confidence that the company would act responsibly, echoed her concerns, testifying that “companies like OpenAI are working towards building artificial general intelligence” and that “they are raising billions of dollars towards this goal.”


All three leading AI labs—OpenAI, Anthropic, and Google DeepMind—are more or less explicit about their AGI goals. OpenAI’s mission states: “To ensure that artificial general intelligence—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Anthropic focuses on “building reliable, interpretable, and steerable AI systems,” aiming for “safe AGI.” Google DeepMind aspires “to solve intelligence” and then to use the resultant AI systems “to solve everything else,” with co-founder Shane Legg stating unequivocally that he expects “human-level AI will be passed in the mid-2020s.” New entrants into the AI race, such as Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI.

Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last month’s hearing might have broken through in a way that previous discourse of AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are “folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe don’t have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have.”

Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. “The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. It’s very far from science fiction. It’s here and now—one to three years has been the latest prediction,” he said. He didn’t mince words about where responsibility lies: “What we should learn from social media, that experience is, don’t trust Big Tech.”

The apparent shift in Washington reflects public opinion, which has been increasingly willing to entertain the possibility that AGI is imminent. In a July 2023 survey conducted by the AI Policy Institute, a majority of Americans said they thought AGI would be developed “within the next 5 years.” Some 82% of respondents also said we should “go slowly and deliberately” in AI development.

That’s because the stakes are astronomical. Saunders detailed that AGI could lead to cyberattacks or the creation of “novel biological weapons,” and Toner warned that many leading AI figures believe that in a worst-case scenario AGI “could lead to literal human extinction.”


Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should give the government transparency into the most powerful AI systems that tech companies are creating. That transparency will reduce the chances that society is caught flat-footed by a company developing AGI before anyone expects it. Mandated security measures are also needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren’t a possibility, but the prospect of AGI heightens their importance.


In a particularly concerning part of Saunders’ testimony, he said that during his time at OpenAI there were long stretches where he or hundreds of other employees would be able to “bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.


Finally, public engagement is essential. AGI isn’t just a technical issue; it’s a societal one. The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI arrives (what Senator Blumenthal called “the 64 billion dollar question”), but the window for action may be rapidly closing. Some AI figures, including Saunders, think it could come in as little as three years.

Ignoring the potentially imminent challenges of AGI won’t make them disappear. It’s time for policymakers to begin to get their heads out of the cloud.




Washington Spirit goalkeeper Aubrey Kingsbury announces she’s pregnant




Washington Spirit goalkeeper Aubrey Kingsbury has announced that she and her husband Matt are expecting a baby in July.


The couple made the announcement in a video on the Spirit’s social media channels, holding a baby goalkeeper jersey on the pitch at Audi Field.

Kingsbury becomes the most recent Spirit star to go on maternity leave, following defender Casey Krueger, midfielder Andi Sullivan and forward Ashley Hatch.

Sullivan gave birth to daughter Millie in July, while Hatch welcomed her son Leo in January.

Krueger announced she was pregnant with her second child in October.

Kingsbury has served as the Spirit’s starting goalkeeper since 2018, and has been named the NWSL Goalkeeper of the Year twice (2019 and 2021).


The 34-year-old has two caps with the U.S. women’s national team, and was named to the 2023 World Cup roster.

The club captain will leave a major void for the Spirit, who have finished as NWSL runner-up in back-to-back seasons.

Sandy MacIver and Kaylie Collins are expected to compete for the starting role while Kingsbury is on maternity leave.


The Spirit kick off their 2026 campaign on March 13 against the Portland Thorns.







Washington state board awards Yakima $985,600 loan for Sixth Avenue project design



Yakima could soon take a major step toward redesigning Sixth Avenue after the Washington State Public Works Board awarded the city a $985,600 loan.

The loan was approved for the design engineering phase of the Sixth Avenue project. The funding can also be used for utility replacement and ADA accessibility upgrades along Sixth Avenue.

The Yakima City Council must decide whether to accept the award. If the council accepts it, the city’s engineering work will move forward with the design of Sixth Avenue.

The cost of installing trolley lines is excluded from the plan; the operators of the historic trolleys would need to raise the funds required to add them.


The award is scheduled to be discussed during next week’s City Council meeting.




Microsoft promises more AI investments at University of Washington



Microsoft will ramp up its investment in the University of Washington.

Brad Smith, the company’s president, made the announcement at a press conference with University of Washington President Robert Jones on Tuesday.

That means hiring more UW graduates as interns at Microsoft, he said.

And he said all students, faculty, and researchers should have access to free, or at least deeply discounted, AI.


“Some of it is compute that Microsoft is donating, and some of it is pursuant to an agreement where, believe me, we give the University of Washington probably the best pricing that anybody’s gonna find anywhere,” Smith said. He assured the small group of reporters present that it would be “many millions of dollars of additional computational resources.”

Tuesday’s announcement didn’t include any specific numbers.

But Smith said Microsoft has already invested $165 million in the UW over several decades.

He pointed to Jones’ vision to spur “radical collaborations with businesses and communities to advance positive change,” and eliminate “any artificial barriers between the university and the communities it serves.”


Microsoft’s goal is for AI to help UW researchers solve some of the world’s biggest problems without introducing new ones.

At Tuesday’s announcement, several research students were present to demonstrate how AI supports their work.


Amelia Keyser-Gibson is an environmental scientist at the UW. She’s using AI to analyze photographs of vines, to find which adapt best to climate change.

It’s a paradox: AI produces carbon emissions, yet it’s also a new tool for reducing them.


So how do those things square for Keyser-Gibson?

“That’s a great question, and honestly, I don’t know the answer to that,” she said. “I’m highly aware that there’s a lot of environmental impact of using AI, but what I can say is that this has allowed us to make research innovations that wouldn’t have been possible otherwise.”

“If we had had to manually annotate every single image that would’ve been an undergrad doing that for hours,” Keyser-Gibson continued. “And we didn’t have the budget. We didn’t have the manpower to do that.”

“AI exists. If we don’t use it as researchers, we’re gonna fall behind.”


Microsoft reports on its own carbon emissions. But like most AI companies, it doesn’t reveal everything.

That’s one reason another UW student named Zhihan Zhang is using AI to estimate how much energy AI is using.


