
Anita Hill-led Hollywood Commission wants to change how workers report sexual harassment


In the wake of movie mogul Harvey Weinstein’s 2020 rape trial, a survey of nearly 10,000 workers by the Anita Hill-led Hollywood Commission revealed a sobering result: Few people believed perpetrators would ever be held accountable.

The vast majority, however, were interested in new tools to document incidents and access resources and helplines.

Four years later, the Hollywood Commission is trying to make that request a reality.

On Thursday, the nonprofit organization launched MyConnext, an online resource and reporting tool that will allow workers at five major entertainment business organizations to get help with reporting incidents of harassment, discrimination and abuse.

Homepage of MyConnext. (MyConnext)

The website allows those entertainment industry employees to speak with a live ombudsperson, create time-stamped records and submit those reports to their employer or union. (Any entertainment worker can access the site’s resources section to learn more about what it means to report an incident and understand complicating factors such as mandatory arbitration.)

So far, the commission has partnered with the Directors Guild of America, the Writers Guild of America, certain U.S.-based Amazon productions, all U.S.-based Netflix productions and film/TV producer the Kennedy/Marshall Co., founded by filmmakers Kathleen Kennedy and Frank Marshall. The International Alliance of Theatrical Stage Employees is expected to join later this year, according to the commission.

MyConnext is not intended to replace any of these organizations’ individual reporting platforms. Rather, it’s designed to provide an additional option and serve as a one-stop shop for workers seeking help or resources. The commission did not say what the initiative cost.


One key feature of the MyConnext reporting platform is called “hold for match,” which allows a worker to fill out a record of an incident and instructs the system not to send the report to one of the partner organizations until another report about the same person is detected. At that time, both reports will be sent.

“It is very difficult for an individual to come forward,” said Hill, president of the Hollywood Commission, which was founded in October 2017 to help eradicate abuse in the entertainment industry. “Let’s say, for example, Harvey Weinstein: It was very difficult to prove a case when there was only one person because there was a tendency to turn it into a so-called ‘he-said, she-said’ situation.”

With this feature, however, employers could potentially recognize a pattern of abuse. And that, Hill said, could be a game changer.

“We ultimately hope that [the tool] will elevate the level of accountability, and accountability is ultimately what I think everybody wants,” said Hill. The commission led the 2020 survey, along with a follow-up survey this year that found a similar desire for harassment reporting resources.

“Information, really, is power,” said Hill.


Advocates say such resources have become even more crucial amid what they describe as a pullback in Hollywood’s promised efforts to create a more inclusive industry for women. Fears of backsliding escalated after Weinstein’s New York sex assault conviction was overturned last month by a state appeals court, which ordered a new trial. Weinstein’s California conviction still stands.

“What’s so important even now, in light of the reversal of a conviction, is making sure that individuals who have suffered harm get to choose what makes the most sense for them,” said Malia Arrington, executive director of the Hollywood Commission. “You need to be informed about what all of your different choices may mean to make sure that you’re entering into whatever path with eyes wide open.”

With that in mind, the platform has a multipronged approach. The resources section helps workers understand their options, including the general process for filing a complaint, as well as where to access counseling and emotional or employment support.

Members of the participating organizations also have access to a secure platform through MyConnext that lets them record an incident — regardless of whether they submit it as an official report — send anonymous messages, speak with an independent ombudsperson and submit reports of abuse.

Speaking with an advocate allows workers to get their questions answered confidentially and by a live human, said Lillian Rivera, the ombudsperson who is employed by MyConnext.


“It’s a human that’s going to listen to folks, who’s going to be nonjudgmental, who is going to be supportive and is going to be able to point people toward all of their options, and really put the power in the hands of the worker so they can make the decision that’s best for them,” Rivera said.


After a pandemic strike, nurses union must pay Riverside hospital millions in damages


The union representing nurses at Riverside Community Hospital has been ordered to pay more than $6 million to the hospital for the fallout from a 2020 strike.

The unusual financial penalty was imposed by an arbitrator who found the 10-day work stoppage during the pandemic violated the terms of the labor agreement signed by HCA Healthcare, which operates the hospital, and Service Employees International Union Local 121RN. The $6.26-million fine, the arbitrator determined, was necessary to compensate the hospital for the cost of replacing workers who walked off the job during the strike, according to a statement released Wednesday.

Nurses walked off the job in June 2020 in an effort to force the hospital to increase staffing and improve safety as COVID-19 infections surged, the union said at the time. But hospital officials argued that because nurses also voiced complaints about shortages of personal protective equipment, the reasons for the strike were too expansive to be allowed under the collective bargaining agreement the two sides had signed.

“Our contract was clear, and the union showed reckless disregard for its members and the Riverside community by calling the strike,” said Jackie Van Blaricum, president of HCA Healthcare’s Far West Division, who was the hospital’s chief executive during the strike. “We applaud the arbitrator’s decision.”

Advertisement

SEIU 121RN Executive Director Rosanna Mendez objected to the arbitrator’s findings, saying nurses were permitted under their contract to go on strike. She called the arbitrator’s decision “absurd and outrageous.”

“It is absolutely shocking that an arbitrator would expect nurses to not talk about safety issues,” Mendez said, adding that the union was exploring its options to contest the arbitrator’s decision.


Supreme Court rejects California man's attempt to trademark Trump T-shirts


The Supreme Court on Thursday turned down a California attorney’s bid to trademark the phrase “Trump Too Small” for his exclusive use on T-shirts.

The justices said trademark law forbids registering a living person’s name as a trademark without consent, including former President Trump’s.

The vote was 9-0.

Trump was not a party to the case of Vidal vs. Elster, but in the past he objected when businesses and others tried to make use of his name.


Concord, Calif., attorney Steve Elster said he was amused in 2016 when Republican presidential candidates exchanged comments about the size of Trump’s hands during a debate. Florida Sen. Marco Rubio, whom Trump had mocked as “Little Marco,” asked Trump to hold up his hands, which he did. “You know what they say about guys with small hands,” Rubio said.

After Trump won the election, Elster decided to sell T-shirts with the phrase “Trump Too Small,” which he said was meant to criticize Trump’s lack of accomplishments on civil rights, the environment and other issues.

Legally he was free to do so, but the U.S. Patent and Trademark Office denied his request to trademark the phrase for his exclusive use.

When he appealed the denial, he won a ruling from a federal appeals court, which said his “Trump Too Small” slogan was political commentary protected by the 1st Amendment.

The Biden administration’s Solicitor Gen. Elizabeth Prelogar appealed and urged the Supreme Court to reject the trademark request.


She acknowledged that Elster had a free-speech right to mock the former president, but argued he did not have the right to “assert property rights in another person’s name.”

“For more than 75 years, Congress has directed the U.S. Patent and Trademark Office to refuse the registration of trademarks that use the name of a particular living individual without his written consent,” she said.

Writing for the court, Justice Clarence Thomas said Thursday: “Elster contends that this prohibition violates his 1st Amendment right to free speech. We hold that it does not.”


Elon Musk blasts Apple's OpenAI deal over alleged privacy issues. Does he have a point?


When Apple holds its annual Worldwide Developers Conference, its software announcements typically elicit cheers and excitement from tech enthusiasts.

But there was one notable exception this year — Elon Musk.

The Tesla and SpaceX chief executive threatened to ban all Apple devices from his companies, alleging a new partnership between Apple and Microsoft-backed startup OpenAI could pose security risks. As part of its new operating system update, Apple said users who ask Siri a question could opt in for Siri to pull additional information from ChatGPT.

“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” Musk wrote on X. “They’re selling you down the river.”

The partnership allows Siri to ask iPhone, Mac and iPad users if the digital assistant can surface answers from OpenAI’s ChatGPT to help address a question. The new feature, which will be available on certain Apple devices, is part of the company’s operating system update due later this year.


“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk wrote on X. “That is an unacceptable security violation.”

Representatives for Musk and Apple did not respond to a request for comment.

In a keynote presentation at its developers conference on Monday, Apple said ChatGPT would be free for iPhone, Mac and iPad users. Under the partnership, Apple device users would not need to set up a ChatGPT account to use it with Siri.

“Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests,” Apple said on its website. “ChatGPT’s data-use policies apply for users who choose to connect their account.”

Many of Apple’s AI models and features, which the company collectively calls “Apple Intelligence,” run on the device itself, but some inquiries will require information to be sent through the cloud. Apple said that data is not stored or made accessible to Apple and that independent experts can inspect the code that runs on the servers to verify this.


Apple Intelligence will be available on certain Apple devices, such as the iPhone 15 Pro and iPhone 15 Pro Max, and on iPad and Mac models with M1 or later chips.

So does Musk have a point? Technology and security experts who spoke to The Times offered mixed opinions.

Some pushed back on Musk’s assertion that Apple’s OpenAI deal poses security risks, citing a lack of evidence.

“Like a lot of things that Elon Musk says, it’s not based upon any kind of technical reality now, it’s really just based upon his political beliefs,” said Alex Stamos, chief trust officer at Mountain View, Calif.-based cybersecurity company SentinelOne. “There’s no real factual basis for what he said.”

Stamos, who is also a computer science lecturer at Stanford University and a former chief security officer at Facebook, said he was impressed with Apple’s data protection efforts, adding, “They’re promising a level of transparency that nobody’s really ever provided.


“It’s hard to totally prove at this point, but what they’ve laid out is about the best you could do to provide this level of AI services running on people’s private data while protecting their privacy,” Stamos said.

“To do the things that people have become accustomed to from ChatGPT, you just can’t do that on phones yet,” Stamos added. “We’re years away from being able to run those kinds of models on something that fits in your pocket and doesn’t burn a hole in your jeans from the amount of power it burns.”

Musk has been critical of OpenAI. He sued the company in February for breach of contract and fiduciary duty, alleging it had shifted its focus from an agreement to develop artificial general intelligence “for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.” On Tuesday, Musk, who was a co-founder of and investor in OpenAI, withdrew his lawsuit. Musk’s San Francisco company, xAI, is a competitor to OpenAI in the fast-growing field of artificial intelligence.

Musk has taken aim at Apple in the past, calling it a “Tesla graveyard,” because, according to him, Apple had hired people that Tesla had fired. “If you don’t make it at Tesla, you go work at Apple,” Musk said in an interview with German newspaper Handelsblatt in 2015. “I’m not kidding.”

Still, Rayid Ghani, a machine learning and public policy professor at Carnegie Mellon University, said that, at a high level, he thinks the concerns Musk raised about the OpenAI-Apple partnership are worth raising.


While Apple said that OpenAI is not storing Siri requests, “I don’t think we should just take that at face value,” Ghani said. “I think we need to ask for evidence of that. How does Apple ensure that processes are there in place? What is the recourse if it doesn’t happen? Who’s liable, Apple or OpenAI, and how do we deal with issues?”

Some industry observers also have raised questions about the option for Apple users who have a ChatGPT account to link it with their iPhone, and what information is collected by OpenAI in that case.

“We have to be careful with that one — linking your account on your mobile phone is a big deal,” said Pam Dixon, executive director of the World Privacy Forum. “I personally would not link until there is a lot more clarity about what happens to the data.”

OpenAI pointed to a statement on its website that says, “Users can also choose to connect their ChatGPT account, which means their data preferences will apply under ChatGPT’s policies.” The company declined further comment.

Under OpenAI’s privacy policy, the company says it collects personal information that is included in the input, file uploads or feedback when account holders use its service. ChatGPT has a way for users to opt out of having their inquiries used to train AI models.


As the use of AI becomes more entwined with people’s lives, industry observers say that it will be crucial to provide transparency for customers and test the trustworthiness of the AI tools.

“We’re going to have to understand something about AI. It’s going to be a lot like plumbing. It’s going to be built into our devices and our lives everywhere,” Dixon said. “The AI is going to have to be trustworthy and we’re going to need to be able to test that trustworthiness.”

Night Archiving Supervisor Valerie Hood contributed to this report.
