OpenAI forms safety and security committee as concerns mount about AI

ChatGPT creator OpenAI on Tuesday said it formed a safety and security committee to evaluate the company’s processes and safeguards as concerns mount over the use of rapidly developing artificial intelligence technology.

The committee is expected to take 90 days to finish its evaluation. After that, it will present the company’s full board with recommendations on critical safety and security decisions for OpenAI projects and operations, the firm said in a blog post.

The announcement comes after two high-level leaders, co-founder Ilya Sutskever and fellow executive Jan Leike, resigned from the company. Their departures raised concerns about the company’s priorities, because both had been focused on the importance of ensuring a safe future for humanity amid the rise of AI.

Sutskever and Leike led OpenAI’s so-called superalignment team, which was meant to create systems to curb the technology’s long-term risks. The group was tasked with “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” Upon his departure, Leike said OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI’s new safety and security committee is led by board chair Bret Taylor, directors Adam D’Angelo and Nicole Seligman and Chief Executive Sam Altman. Multiple OpenAI technical and policy leaders are on the committee as well. OpenAI said that it will “retain and consult with other safety, security and technical experts to support this work.”


The committee’s formation arrives as the company begins work on training what it calls its “next frontier model” for artificial intelligence.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” OpenAI said in its blog post.

Controversies about use of AI have dogged the San Francisco-based company, including in the entertainment business, which is worried about the technology’s implications for intellectual property and the potential displacement of jobs.

Actor Scarlett Johansson criticized the company last week over its handling of a ChatGPT voice feature that she and others said sounded eerily like her. Johansson, who voiced an AI program in the Oscar-winning Spike Jonze movie “Her,” said she was approached by Altman with a request to provide her voice, but she declined, only to later hear what sounded like her voice in an OpenAI demo.

OpenAI said that the voice featured in the demo was not Johansson’s, but another actor’s. After Johansson raised the alarm, OpenAI put a pause on its voice option, “Sky,” one of many human voices available on the app. An OpenAI spokesperson said the formation of the safety committee was not related to the issues involving Johansson.


OpenAI is best known for ChatGPT and Sora, a text-to-video tool that has major potential ramifications for filmmakers and studios.

OpenAI and other tech companies have been holding discussions with Hollywood, as the entertainment industry grapples with the long-term effects of AI on employment and creativity.

Some film and TV directors have said AI allows them to think more boldly, testing ideas without having the constraints of limited visual effects and travel budgets. Others worry that increased efficiency through AI tools could whittle away jobs in areas like makeup, production and animation.

As it faces safety questions, OpenAI, which is backed by Microsoft, must also contend with competition from other companies that are building and funding their own artificial intelligence tools.

San Francisco-based Anthropic has received billions of dollars from Amazon and Google. On Sunday, xAI, which is led by Elon Musk, announced it closed on a $6-billion funding round that goes toward research and development, building its infrastructure and bringing its first products to market.


After a pandemic strike, nurses union must pay Riverside hospital millions in damages


The union representing nurses at Riverside Community Hospital has been ordered to pay more than $6 million to the hospital for the fallout from a 2020 strike.

The unusual financial penalty was imposed by an arbitrator who found the 10-day work stoppage during the pandemic violated the terms of the labor agreement signed by HCA Healthcare, which operates the hospital, and Service Employees International Union Local 121RN. The $6.26-million fine, the arbitrator determined, was necessary to compensate the hospital for the cost of replacing workers who walked off the job during the strike, according to a statement released Wednesday.

Nurses walked off the job in June 2020 in an effort to force the hospital to increase staffing and improve safety as COVID-19 infections surged, the union said at the time. But hospital officials argued that because nurses also voiced complaints about shortages of personal protective equipment, the reasons for the strike were too expansive to be allowed under the collective bargaining agreement the two sides had signed.

“Our contract was clear, and the union showed reckless disregard for its members and the Riverside community by calling the strike,” said Jackie Van Blaricum, president of HCA Healthcare’s Far West Division, who was the hospital’s chief executive during the strike. “We applaud the arbitrator’s decision.”


SEIU 121RN Executive Director Rosanna Mendez objected to the arbitrator’s findings, saying nurses were permitted under their contract to go on strike. She called the arbitrator’s decision “absurd and outrageous.”

“It is absolutely shocking that an arbitrator would expect nurses to not talk about safety issues,” Mendez said, adding that the union was exploring its options to contest the arbitrator’s decision.

Supreme Court rejects California man's attempt to trademark Trump T-shirts


The Supreme Court on Thursday turned down a California attorney’s bid to trademark the phrase “Trump Too Small” for his exclusive use on T-shirts.

The justices said trademark law forbids the use of a living person’s name, including former President Trump.

The vote was 9-0.

Trump was not a party to the case of Vidal vs. Elster, but in the past he objected when businesses and others tried to make use of his name.


Concord, Calif., attorney Steve Elster said he was amused in 2016 when Republican presidential candidates exchanged comments about the size of Trump’s hands during a debate. Florida Sen. Marco Rubio, whom Trump had mocked as “Little Marco,” asked Trump to hold up his hands, which he did. “You know what they say about guys with small hands,” Rubio said.

After Trump won the election, Elster decided to sell T-shirts with the phrase “Trump Too Small,” which he said was meant to criticize Trump’s lack of accomplishments on civil rights, the environment and other issues.

Legally he was free to do so, but the U.S. Patent and Trademark Office denied his request to trademark the phrase for his exclusive use.

When he appealed the denial, he won a ruling from a federal appeals court, which said his “Trump Too Small” slogan was political commentary protected by the 1st Amendment.

The Biden administration’s Solicitor Gen. Elizabeth Prelogar appealed and urged the Supreme Court to reject the trademark request.


She acknowledged that Elster had a free-speech right to mock the former president, but argued he did not have the right to “assert property rights in another person’s name.”

“For more than 75 years, Congress has directed the U.S. Patent and Trademark Office to refuse the registration of trademarks that use the name of a particular living individual without his written consent,” she said.

Writing for the court, Justice Clarence Thomas said Thursday: “Elster contends that this prohibition violates his 1st Amendment right to free speech. We hold that it does not.”

Elon Musk blasts Apple's OpenAI deal over alleged privacy issues. Does he have a point?


When Apple holds its annual Worldwide Developers Conference, its software announcements typically elicit cheers and excitement from tech enthusiasts.

But there was one notable exception this year — Elon Musk.

The Tesla and SpaceX chief executive threatened to ban all Apple devices from his companies, alleging a new partnership between Apple and Microsoft-backed startup OpenAI could pose security risks. As part of its new operating system update, Apple said users who ask Siri a question could opt in for Siri to pull additional information from ChatGPT.

“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” Musk wrote on X. “They’re selling you down the river.”

The partnership allows Siri to ask iPhone, Mac and iPad users if the digital assistant can surface answers from OpenAI’s ChatGPT to help address a question. The new feature, which will be available on certain Apple devices, is part of the company’s operating system update due later this year.


“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk wrote on X. “That is an unacceptable security violation.”

Representatives for Musk and Apple did not respond to a request for comment.

In a keynote presentation at its developers conference on Monday, Apple said ChatGPT would be free for iPhone, Mac and iPad users. Under the partnership, Apple device users would not need to set up a ChatGPT account to use it with Siri.

“Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests,” Apple said on its website. “ChatGPT’s data-use policies apply for users who choose to connect their account.”

Many of Apple’s AI models and features, which the company collectively calls “Apple Intelligence,” run on the device itself, but some inquiries will require information to be sent through the cloud. Apple said that data is not stored or made accessible to Apple and that independent experts can inspect the code that runs on the servers to verify this.


Apple Intelligence will be available on certain Apple devices, such as the iPhone 15 Pro and iPhone 15 Pro Max, and on iPad and Mac models with M1 chips or later.

So does Musk have a point? Technology and security experts who spoke to The Times offered mixed opinions.

Some pushed back on Musk’s assertion that Apple’s OpenAI deal poses security risks, citing a lack of evidence.

“Like a lot of things that Elon Musk says, it’s not based upon any kind of technical reality now, it’s really just based upon his political beliefs,” said Alex Stamos, chief trust officer at Mountain View, Calif.-based cybersecurity company SentinelOne. “There’s no real factual basis for what he said.”

Stamos, who is also a computer science lecturer at Stanford University and a former chief security officer at Facebook, said he was impressed with Apple’s data protection efforts, adding, “They’re promising a level of transparency that nobody’s really ever provided.


“It’s hard to totally prove at this point, but what they’ve laid out is about the best you could do to provide this level of AI services running on people’s private data while protecting their privacy,” Stamos said.

“To do the things that people have become accustomed to from ChatGPT, you just can’t do that on phones yet,” Stamos added. “We’re years away from being able to run those kinds of models on something that fits in your pocket and doesn’t burn a hole in your jeans from the amount of power it burns.”

Musk has been critical of OpenAI. He sued the company in February for breach of contract and fiduciary duty, alleging it had shifted its focus from an agreement to develop artificial general intelligence “for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.” On Tuesday, Musk, who was a co-founder of and investor in OpenAI, withdrew his lawsuit. Musk’s San Francisco company, xAI, is a competitor to OpenAI in the fast-growing field of artificial intelligence.

Musk has taken aim at Apple in the past, calling it a “Tesla graveyard,” because, according to him, Apple had hired people that Tesla had fired. “If you don’t make it at Tesla, you go work at Apple,” Musk said in an interview with German newspaper Handelsblatt in 2015. “I’m not kidding.”

Still, Rayid Ghani, a machine learning and public policy professor at Carnegie Mellon University, said that, at a high level, he thinks the concerns Musk voiced about the OpenAI-Apple partnership are worth raising.


While Apple said that OpenAI is not storing Siri requests, “I don’t think we should just take that at face value,” Ghani said. “I think we need to ask for evidence of that. How does Apple ensure that processes are there in place? What is the recourse if it doesn’t happen? Who’s liable, Apple or OpenAI, and how do we deal with issues?”

Some industry observers also have raised questions about the option for Apple users who have a ChatGPT account to link it with their iPhone, and what information is collected by OpenAI in that case.

“We have to be careful with that one — linking your account on your mobile phone is a big deal,” said Pam Dixon, executive director of the World Privacy Forum. “I personally would not link until there is a lot more clarity about what happens to the data.”

OpenAI pointed to a statement on its website that says, “Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.” The company declined further comment.

Under OpenAI’s privacy policy, the company says it collects personal information that is included in the input, file uploads or feedback when account holders use its service. ChatGPT has a way for users to opt out of having their inquiries used to train AI models.


As the use of AI becomes more entwined with people’s lives, industry observers say that it will be crucial to provide transparency for customers and test the trustworthiness of the AI tools.

“We’re going to have to understand something about AI. It’s going to be a lot like plumbing. It’s going to be built into our devices and our lives everywhere,” Dixon said. “The AI is going to have to be trustworthy and we’re going to need to be able to test that trustworthiness.”

Night Archiving Supervisor Valerie Hood contributed to this report.
