
OpenAI made economic proposals — here’s what DC thinks of them


Happy ceasefire day and welcome to Regulator, a newsletter for Verge subscribers about Big Tech’s rocky journey through the world of politics. If you’re not a subscriber yet, you can do so here, but my only request is that you sign up before Donald Trump decides to revisit his previous threats toward Iran and kickstart World War III.

I’m back after being waylaid last week by the deadly combo of a moderate cold and the beginning of pollen season. (Twenty-one percent of the District’s acreage is taken up by public green space, and DC is consistently ranked the best city park system in America. Unfortunately, I am allergic to every tree and grass.) If you’ve got tips on anything I may have missed or anything I should know about the upcoming weeks, send ’em to tina.nguyen+tips@theverge.com.

Do you actually believe anything OpenAI says?

On Monday, OpenAI published a 13-page policy paper addressing the impact that artificial intelligence would have on the American workforce. The company also proposed what it believed was the solution: putting higher capital gains taxes on corporations replacing their workers with AI and using that money to create a bigger public safety net. Its solutions included a public wealth fund, a four-day workweek funded by “efficiency dividends,” and government programs to help transition workers into “human-centered” work, all financed by the abundance that artificial intelligence would deliver.

Unfortunately, it was released the day that The New Yorker’s Ronan Farrow and Andrew Marantz published a meticulously reported, 17,000-word-plus article chronicling Sam Altman’s history of lying to everyone around him, including to his Silicon Valley backers, his employees, his board, and — relevant in this case — lawmakers trying to regulate AI. The New Yorker article reinforced a long-standing narrative about Altman, and OpenAI by extension: They may spout idealistic values, but would quickly jettison them for financial and political gains.

On its own, several people I spoke to said, the paper was a net positive for AI governance, in that it introduced new ideas into the political discourse around the emerging technology. But unless the company's policy and lobbying operations make good on those promises, OpenAI's critics said, it may as well just be a piece of paper.

“My guess is that there are people on the team who care about the stuff, who’ve thought really hard about this document and are proud of it, and did good work, even if it’s not addressing all of the questions that I wish it would address,” Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. “And there’s still the question of: Are those people gonna find themselves in the position that many previous people at OpenAI have found themselves in, where they thought the company had certain values or aligned with things they cared about, and then ended up finding out that wasn’t the case, becoming disenchanted and leaving?”

With OpenAI proposing policy, it’s worth looking back at its history with the government, which the New Yorker piece details in depth. Altman had been one of the first major CEOs to publicly advocate for federal oversight for AI, going so far as to propose a federal agency to oversee advanced models in 2023 — but privately he worked to suppress the laws containing his own safety proposals. A state legislative aide in California accused OpenAI of engaging in “increasingly cunning, deceptive behavior” to kill a 2023 AI safety bill that it was publicly supporting. In 2025, the company subpoenaed supporters of a California state-level AI bill in an effort to, as one such supporter put it to The New Yorker, “basically scare them into shutting up.” And though Altman had once worked extensively with the Biden administration to build AI safety standards, the moment that Donald Trump became president, Altman successfully persuaded him to kill the initiatives he’d once advocated for.

Nathan Calvin, the general counsel at Encode, an AI policy nonprofit where he focuses on state legislative initiatives, had received one of those subpoenas. “What I’ve seen from their policy and government affairs engagement has just been abysmal,” he told me. While he believed that the team who’d written the OpenAI proposal, primarily from the technical safety research side, was acting with good intentions, he was still reserving judgment. “Will those folks remain engaged as we move from general policy principles towards the many other ways in which lobbying and government influence actually happens? Part of me is hopeful, but a lot of me is also quite skeptical about whether that will happen.” (OpenAI did not return a request for comment.)

A modest, absolutely not craven request:

Next week I plan on running an issue of Regulator cataloging the nerdiest events happening during Nerd Prom, aka the White House Correspondents’ Dinner party circuit. If you’re a tech founder, a tech company, or someone who does something related to technology and you’re throwing an event during WHCD week, please let me know what you’re up to! From what I’ve heard so far, the tech world is about to shake up the normal social dynamics of the week — I’ve already caught wind of the Grindr party in Georgetown, and the Substack party, which famed looksmaxxer Clavicular is attending — and I’m so, so excited to pull together the most bonkers “SPOTTED” column that Washington’s ever experienced.

(Again, this is contingent upon whether we’re at war with Iran by the end of April, in which case, I imagine no one will be up for frivolity.)

Speaking of DC reporters, this is very true of all of us:

Screenshot via @jakewilkns/X.

Canvas is down as ShinyHunters threatens to leak schools’ data


Canvas, the Instructure-owned learning management platform, is down after the company recently confirmed a massive data breach that impacted student names, email addresses, ID numbers, and messages. Students attempting to access the system on Thursday saw a message from the hacking group ShinyHunters, which claimed responsibility for the attack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas.

“Instructure has placed Canvas, Canvas Beta and Canvas Test in maintenance mode,” according to Instructure’s status page. “We anticipate being up soon, and will provide updates as soon as possible.”

Instructure said last week that it “deployed patches to enhance system security” following the breach. ShinyHunters — which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel — said its data leak site contains 9,000 schools, including data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.

Update, May 7th: Added Instructure’s maintenance mode message.


Humanoid robot named ‘Gabi’ ordained as Buddhist monk, pledges devotion to ‘holy Buddha’


A high-tech humanoid robot was officially “ordained” as a Buddhist monk during a ceremony at Seoul’s Jogyesa Temple on Wednesday.

The robot, a $13,500 Unitree G1 model standing just over four feet tall, was given the name “Gabi.” Dressed in traditional brown robes, plain shoes and gloves designed to mimic human hands, the machine stood before a panel of Buddhist monks to commit itself to the faith.

During the ceremony, hosted by the Jogye Order of Korean Buddhism, the robot was asked by a monk if it would devote itself to the “holy Buddha.”

“Yes, I will devote myself,” Gabi responded to the crowd’s cheers.

More than 200 humanoid robots perform during Agibot Night, a live televised gala in Shanghai ahead of Lunar New Year. (Tang Yanjun/China News Service)

The ceremony highlights a growing effort among religious institutions to engage younger, tech-driven audiences, raising broader questions about whether artificial intelligence can play a meaningful role in spiritual life or if such moves risk trivializing long-standing traditions.

While humans typically pledge to abstain from killing, stealing and intoxicating substances, Gabi’s vows were “reprogrammed” for the digital age. The robot pledged to respect and follow humans, refrain from damaging property or other robots, abstain from deceptive behavior and save energy by not overcharging.

The Jogye Order, South Korea’s largest Buddhist sect, framed the move as an effort to make ancient traditions more relevant to a younger, tech-obsessed generation.

A humanoid robot, front, and Buddhist monks put hands together for a photo after an ordination ceremony ahead of upcoming Buddha’s birthday on May 24 at Jogye temple in Seoul, South Korea, Wednesday, May 6, 2026. (Lee Jin-man/AP)

“The ordination of a robot signifies that technology must be used in accordance with the values of compassion, wisdom, and responsibility,” the order said in a statement shared with The New York Times. Officials added that the move symbolizes “new possibilities for the coexistence of humans and technology.”

Hong Min-suk, a manager at the order, told the publication that robots are “destined to collaborate with humans in every field,” suggesting it is only “natural” for them to participate in religious festivals.

The Jogye Order did not immediately respond to Fox News Digital’s request for comment.

Despite the temple’s optimistic outlook, the move has drawn criticism online. A video of Gabi’s pledge quickly surpassed one million views, with some users on X questioning whether a machine can meaningfully participate in religious practice.

Buddhist monks arrive at Washington National Cathedral in Washington, D.C., on Feb. 10, 2026, before participating in an interfaith ceremony during the final days of their 2,300-mile “Walk for Peace.” (Drew Angerer/AFP via Getty Images)

“As a Buddhist, I find this ridiculous and insulting,” one user wrote.

Gabi is expected to make its next public appearance at Seoul’s upcoming Lantern Festival on May 16-17, honoring the Buddha’s birthday.


Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI


Sam Altman and Elon Musk are facing off in a high-stakes trial that could alter the future of OpenAI and its most well-known product, ChatGPT. In 2024, Musk filed a lawsuit accusing OpenAI of abandoning its founding mission of developing AI to benefit humanity and shifting focus to boosting profits instead.

Elon Musk; his financial manager and Neuralink CEO, Jared Birchall; and OpenAI cofounder Greg Brockman have already testified before the jury. Now, on Wednesday, May 6th, Shivon Zilis, a former OpenAI board member who shares four children with Musk, is taking the stand, and the courtroom is seeing testimony from former OpenAI exec Mira Murati via video.

Microsoft CEO Satya Nadella is scheduled to appear on Monday, with OpenAI cofounder and former chief scientist Ilya Sutskever lined up to testify after that.

Musk was a cofounder of OpenAI and claims that Altman and Brockman tricked him into giving the company money, only to turn their backs on their original goal. OpenAI, however, says that “This lawsuit has always been a baseless and jealous bid to derail a competitor,” one meant to boost Musk’s own SpaceX, xAI, and X, which have launched Grok as a competitor to ChatGPT.

Elon Musk — plaintiff, OpenAI cofounder and now CEO of rival xAI

Steven Molo — lead counsel for plaintiff

Jared Birchall — manager of Musk’s family office

Shivon Zilis — former OpenAI board member who shares multiple children with Musk

Sam Altman — defendant, CEO of OpenAI

William Savitt — lead counsel for defendant

Greg Brockman — president of OpenAI as well as a cofounder

Ilya Sutskever — former chief scientist at OpenAI and a cofounder

Yvonne Gonzalez Rogers — aka YGR, trial judge

Here’s all the latest on the trial between Musk and Altman:
