Texas should rein in AI abuses with new consumer protections

In testimony before a state committee on artificial intelligence, a witness showed a picture of the Dallas Cowboys celebrating a Super Bowl victory this year. Another witness displayed an AI-generated email response crafted from a state lawmaker’s LinkedIn account.

Two problems. There was no such victory or celebration, and the lawmaker had nothing to do with the email response.

Increasingly used in hiring, credit, health care, law enforcement, insurance and other everyday decisions, artificial intelligence can produce dangerous, life-altering consequences. AI-initiated deceptions or faulty data can cost someone a job, a home or even their freedom. Unfortunately, the absence of clear rules makes it difficult for Texans to know how artificial intelligence impacts their lives and to contest abuses.

Texas lawmakers could remedy this when the Legislature convenes next year. Warning of potential abuses of privacy and consumer rights, and of threats to elections and national security, several artificial intelligence experts recently told the Texas House Select Committee on Artificial Intelligence & Emerging Technologies that lawmakers should provide legal and ethical guidelines for artificial intelligence use in Texas. Among their recommendations, they urged laws requiring companies to disclose how they use algorithms and biometric information to train and deploy artificial intelligence systems.

Texas should look to data privacy and consumer protection laws for inspiration. States that regulate the use of artificial intelligence rightly focus on transparency, accountability and anti-discrimination measures. Protecting residents from abusive, inappropriate, irrelevant or unauthorized use or reuse of consumer data would be consistent with many other consumer and privacy laws. Texans should also be protected from unjustified assessments based on race, color, ethnicity, sex, religion or disability.

Colorado, Maryland, Tennessee, Illinois and California also have transparency and accountability standards to give consumers and regulators recourse when lines are crossed, according to the Council of State Governments. And to mitigate false information based on the biases or inaccuracies in AI programs, lawmakers in California, Connecticut, Louisiana and Vermont hold companies responsible for unintended but foreseeable impacts or uses of artificial intelligence systems. Some states also require an employer to receive consent from an employee if the business intends to use an AI system to collect data about its workforce.

The entire world is slowly awakening to a technology that is spreading faster and wider than any before it. The European Union recently established rules around artificial intelligence use. With Congress gridlocked and the major companies that develop and use artificial intelligence systems at an impasse, states are trying to fill the regulatory vacuum on behalf of their residents.

It is good that Texas is beginning a serious conversation about the use of artificial intelligence systems. It should enact comprehensive and enforceable consumer protections. Providing Texans with more say on how data is used and recourse when companies abuse consumer trust would be smart public policy.

We welcome your thoughts in a letter to the editor. See the guidelines and submit your letter here. If you have problems with the form, you can submit via email at letters@dallasnews.com.
