Louisiana governor vetoes political deepfakes bill

Louisiana Gov. Jeff Landry has vetoed a bill that would have made it illegal to deceive voters through the use of artificial intelligence-generated deepfakes.

While similar legislation outlawing the use of deceptive audio, images and videos for political purposes has passed uncontroversially in a growing number of other states, Louisiana’s governor claimed such a law infringes on the First Amendment rights of AI companies.

“While I applaud the efforts to prevent false political attacks, I believe this bill creates serious First Amendment concerns as it relates to emerging technologies,” Landry wrote of his veto last month. “The law is far from settled on this issue, and I believe more information is needed before such regulations are enshrined into law.”

Louisiana’s bill would have held that: “No person shall cause to be distributed or transmitted any oral, visual, digital, or written material containing any image, audio, or video of a known candidate or of a person who is known to be affiliated with the candidate which he knows or should be reasonably expected to know has been created or intentionally manipulated to create a realistic but false image, audio, or video with the intent to deceive a voter or injure the reputation of a known candidate in an election.”


In vetoing the bill, the governor also pointed to a resolution directing the state’s Joint Legislative Committee on Technology and Cybersecurity to make recommendations on how the state should be using AI, a process that’s also underway in many other states.

Landry also vetoed a bill that would have required deepfake media to be watermarked, a requirement recently adopted in Connecticut, among other states.

Convincing deepfake media threatens to undermine a political process already muddled by social media algorithms. Numerous states are rushing to minimize the potential harm that generative AI tools could wreak on the nation’s information landscape. Arizona, Florida and Wisconsin are among the states that have passed laws adding AI provisions to statutes designed to prevent deception in political campaigns.

Megan Bellamy, vice president of law and policy at Voting Rights Lab, recently told StateScoop that deepfakes are an especially pernicious threat to democracy.

“AI-generated content can grab the voter’s attention, reach them faster and spread in more of a viral way than state board of elections and county board of elections and all of these trusted sources can overcome,” she said. 


In Arizona, repeatedly failing to label AI-generated political materials, or doing so with the intent to incite violence, became a felony this year.

Landry, a Republican who formerly served as the state’s attorney general, is also embroiled in other controversies: last month he signed a law that will require public classrooms to display the Ten Commandments. The American Civil Liberties Union said it plans to file a lawsuit, a fight the group has won at least once before, including in 2002, when its Maryland branch dismissed a lawsuit against the City and County of Frederick over the display of the biblical text in a public park.

Written by Colin Wood

Colin Wood is the editor in chief of StateScoop and EdScoop. He’s reported on government information technology policy for more than a decade, on topics including cybersecurity, IT governance and public safety.


