
Utah Rep. Maloy introduces bill to hold tech platforms responsible for deepfake images

Rep. Celeste Maloy, R-Utah, sponsored new bipartisan legislation that would make social media and other platforms legally responsible if they fail to act on abusive deepfake images and cyberstalking.

On Monday, Maloy and Rep. Jake Auchincloss, D-Mass., introduced the Deepfake Liability Act, a bill that would change how federal law treats websites and apps that host nonconsensual AI-generated sexual images and other intimate content.

“Abusive deepfakes and cyberstalking are harming people across the country, and victims deserve real help,” Maloy said in a press release. “Our bill creates a straightforward duty of care and a reliable process to remove harmful content when victims ask for help. Companies that take this seriously will keep their protections under the law. Those that do nothing will be held accountable.”

Maloy’s office noted that women and teenage girls are the overwhelming targets of nonconsensual deepfake pornography, which now makes up the majority of deepfake content online.


Changing Section 230 rules for AI content

The bill targets Section 230 of the Communications Decency Act, the law that has long shielded online platforms from being sued over most user-generated content.

The Deepfake Liability Act would condition those protections on whether a platform meets a new “duty of care.” To keep their immunity, companies would need to:

  • Take basic steps to prevent cyberstalking and abusive deepfakes
  • Respond to reports from victims
  • Investigate credible complaints
  • Remove intimate or privacy-violating content identified by those victims

The bill also clarifies that AI-generated content is not automatically covered by Section 230 immunity — a key change as generative tools make it easier to create convincing fake images and videos.

“AI shouldn’t have special privileges and immunities that journalists don’t get,” Auchincloss said in the press release, arguing that using bots or deepfakes to violate or stalk another person “needs to be a CEO-level problem for the trillion-dollar social media corporations that platform it. Congress needs to get ahead of this growing problem, instead of being left in the dust like we were with social media.”

Speaking about his broader “UnAnxious Generation” legislative package, Auchincloss told Time magazine that the Deepfake Liability Act is meant to move platforms from a “reactive” posture to a proactive one: Section 230 protections would hinge on actively working to prevent and remove deepfake porn and cyberstalking, not just responding when forced.

How it connects to the Take It Down Act

The new proposal is designed to build on a law that passed earlier this year: the federal Take It Down Act.


The Take It Down Act was co-sponsored by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn. First lady Melania Trump also strongly advocated for the bill to be passed. It passed the Senate by unanimous consent and cleared the House on a 409–2 vote before President Donald Trump signed it into law on May 19.

That law makes it a federal crime to “knowingly publish” or threaten to publish intimate images without a person’s consent, including AI-generated deepfakes. It also requires covered websites and social media platforms to remove such material — and make efforts to delete copies — within 48 hours after a victim reports it.

Enforcement is handled by the Federal Trade Commission, and platforms have until May 2026 to fully implement the required notice-and-removal systems.

The Deepfake Liability Act uses that same basic notice-and-removal framework but goes further by tying Section 230 protections to whether companies meet a clear duty of care.

Maloy and Auchincloss say that change would ensure that platforms that ignore reports of abuse no longer have the same legal shield as those that take active steps to protect victims.


Supporters say it closes a gap — critics warn about overreach

Advocates for reforming online liability say the new bill is a needed next step after Take It Down.

“The time is now to reform Section 230,” said Danielle Keats Citron, vice president of the Cyber Civil Rights Initiative and a longtime scholar of online abuse, per the release.

Citron said the Deepfake Liability Act contains a “well-defined duty of care” that would require platforms to prevent, investigate and remove cyberstalking, nonconsensual intimate images and digital forgeries. She also argued that it would close a loophole by making platforms responsible not only for content they help create but also for harmful content they “solicit or encourage.”

The Take It Down Act from earlier this year had drawn criticism from some free speech and digital rights groups, including the Electronic Frontier Foundation and others, who said its fast takedown deadlines and broad language could pressure platforms to over-remove content, rely heavily on automated filters and potentially sweep in lawful speech — such as news reporting, protest images or LGBTQ content — in the name of avoiding liability, per The Associated Press.

This new measure is part of a broader, bipartisan push to regulate AI-related harms and tighten rules for how tech companies handle children’s safety, online abuse and emerging threats from generative tools.


Utah man dies of injuries sustained in avalanche in Big Cottonwood Canyon



A man died after he was caught in an avalanche in Big Cottonwood Canyon over the weekend.

A spokesperson for the Salt Lake County Sheriff’s Office confirmed on Thursday that Kevin Williams, 57, had died.

He, along with one other person, was hospitalized in critical condition after Saturday’s avalanche in the backcountry.


In an interview with 2News earlier this week, one of Williams’ close friends, Nate Burbidge, described him as a loving family man.


“Kevin’s an amazing guy. He’s always serving, looking for ways that he can connect with others,” Burbidge said.

A GoFundMe was set up to help support Williams’ family.


911 recordings detail hours leading up to discovery of Utah girl, mother dead in Las Vegas



CONTENT WARNING: This report discusses suicide and includes descriptions of audio from 911 calls that some viewers may find disturbing.

LAS VEGAS — Exclusively obtained 911 recordings detail the hours leading up to the discovery of an 11-year-old Utah girl and her mother dead inside a Las Vegas hotel room in an apparent murder-suicide.

Addi Smith and her mother, Tawnia McGeehan, lived in West Jordan and had traveled to Nevada for the JAMZ cheerleading competition.

The calls show a growing sense of urgency from family members and coaches, and several hours passing before relatives learned what happened.



Below is a timeline of the key moments, according to dispatch records. All times are Pacific Time.

10:33 a.m. — Call 1

After Addi and her mother failed to appear at the cheerleading competition, Addi’s father and stepmother called dispatch for a welfare check.

Addi and her mother were staying at the Rio hotel. The father told dispatch that hotel security had already attempted contact.

“Security went up and knocked on the door. There’s no answer or response it doesn’t look like they checked out or anything…”

11:18 a.m. and 11:27 a.m. — Calls 2 and 3

As concern grew, Addi’s coach contacted the police two times within minutes.


“We think the child possibly is in imminent danger…”

11:26 a.m. — Call 4

Addi’s stepmother placed another call to dispatch, expressing escalating concern.

“We are extremely concerned we believe that something might have seriously happened.”

She said that Tawnia’s car was still at the hotel.

Police indicated officers were on the way.


2:26 p.m. — Call 5

Nearly three hours after the initial welfare check request, fire personnel were en route to the scene. It appeared they had been in contact with hotel security.

Fire told police that they were responding to a possible suicide.

“They found a note on the door.”

2:35 p.m. — Call 6

Emergency medical personnel at the scene told police they had located two victims.

“It’s going to be gunshot wound to the head for both patients with notes”


A dispatcher responded:

“Oh my goodness that’s not okay.”

2:36 p.m. — Call 7

Moments later, fire personnel relayed their assessment to law enforcement:

“It’s going to be a murder suicide, a juvenile and a mother.”

2:39 p.m. — Call 8

Unaware of what had been discovered, Addi’s father called dispatch again.


“I’m trying to file a missing persons report for my daughter.”

He repeated the details he knew for a second time.

3:13 p.m. — Call 9

Addi’s father and stepmother called again, seeking information and continuing to press for answers.

“We just need some information. There was a room check done around 3:00 we really don’t know where to start with all of this Can we have them call us back immediately?”

Dispatch responded:


“As soon as there’s a free officer, we’ll have them reach out to you.”

4:05 p.m. — Call 10

More than an hour later, Addi’s father was put in contact with the police on the scene. He pleaded for immediate action.

“I need someone there I need someone there looking in that room”

The officer confirmed that officers were in the room at that moment.

Addi’s father asked again what they had found, whether Addi and her mother were there, and whether their belongings were missing.


The officer, who was not on scene, said he had received limited information.

5:23 p.m. — Call 11

Nearly seven hours after the first welfare check request, Addi’s grandmother contacted police, describing conflicting information circulating within the family.

“Some people are telling us that they were able to get in, and they were not in the hotel room, and other people saying they were not able to get in the hotel room, and we need to know”

She repeated the details of the case. Dispatch said officers would call her back once they had more information.

Around 8:00 p.m. — Press Conference

Later that evening, Las Vegas Metropolitan Police held a news conference confirming that Addi and her mother, Tawnia McGeehan, were found dead inside the hotel room.


The investigation remains ongoing.


Ban on AI glasses in Utah classrooms inches closer to passing



AI glasses could allow you to get answers, snap photos, access audio and take phone calls—and now a proposal moving through the legislature would ban the glasses from Utah school classrooms.

“I think it’s a great idea,” said Kizzy Guyton Murphy, a mother who accompanied her child’s class on a field trip to the state Capitol on Wednesday. “You can’t see inside what the student is looking at, and it’s just grounds for cheating.”

Mom Tristan Davies Seamons also sees trouble with AI glasses.

“I don’t think they should have any more technology in schools than they currently have,” she said.


Her twin daughters, fourth graders Finley and Grayson, don’t have cell phones yet.

“Not until we’re like 14,” said Grayson, adding they do have Chromebooks in school.

2News sent questions to the Utah State Board of Education:

  • Does it have reports of students using AI glasses?
  • Does it see cheating and privacy as major concerns?
  • Does it support a ban from classrooms?

Matt Winters, USBE AI specialist, said the board has not received reports from school districts of students with AI glasses.

“Local Education Agencies (school districts) have local control over these decisions based on current law and code,” said Winters. “The Board has not taken a position on AI glasses.”


Some districts across the country have reportedly put restrictions on the glasses in schools.


“I think it should be up to the teachers,” said Briauna Later, another mother who is all for preventing cheating, but senses a ban could leave administrators with tired eyes.

“It’s one more thing for the administration to have to keep track of,” said Later.

The proposal, HB 42, passed the House and cleared a Senate committee on Wednesday.
