Michigan lawmakers weigh new rules for artificial intelligence
- Michigan lawmakers have debated a wide array of artificial intelligence regulations, although few have become law so far.
- Current proposals include banning companion chatbots for minors and creating security plans for AI platforms.
Policymakers eager to regulate artificial intelligence in Michigan — from governing how AI companies can operate in the state to determining what kinds of programs employers can use to monitor worker productivity — have plenty of ideas, but few of their proposals have yet become law.
Measures to regulate AI were introduced in all 50 states last year, according to the National Conference of State Legislatures. While experts point out that innovation in AI generally occurs at a faster rate than state governments can propose and enact new policies, there’s a wide selection of proposals on regulating AI currently floating through the Michigan Legislature.
Michigan has made some headway in enacting AI regulations, including prohibiting the use of AI to create sexualized "deepfake" images. In 2025, the state created penalties, including fines and potential jail time, for using AI platforms to make fake images portraying a sexual act or intimate part of someone's body. Lawmakers broadly supported the proposals, and the resulting law passed the state Legislature by wide bipartisan margins.
Supporters of the pornographic deepfake ban said it would protect Michiganders from sexual exploitation.
And in 2023, Michigan became just the fifth state to require disclosure when AI is used in certain political campaign materials. A campaign that uses AI in an ad or social media post within 90 days of an election without disclosing it is subject to fines for each violation. The measure aims to prevent AI-driven misinformation during election season.
Here’s a look at other AI policies that have been proposed but not yet voted on:
Guardrails for AI companies
One policy measure would set rules for the companies that operate major AI programs.
House Bill 4668, introduced by Rep. Sarah Lightner, R-Springport, would require operators to create security features intended to mitigate risks. These measures include creating and implementing a publicly accessible safety and risk protocol. Developers would be tasked with using the protocol to manage “critical risks” associated with the AI model.
A critical risk is defined as a scenario in which an AI model is used to carry out an incident that could lead to the death or injury of 100 people or $1 million in property damage.
Any company that spends $100 million on its AI model annually, or spends $5 million to start operations, would be subject to the requirements.
Advocates for Lightner’s bill say it’s important to place guardrails around AI, given its rapid evolution.
“Every technological innovation has the potential for both good and harm,” said Felix De Simone, director of the advocacy group Pause AI, during a Sept. 11 House Judiciary Committee hearing on the bill. “It’s the responsibility of lawmakers to keep people safe from these harms while ensuring innovation moves in the public interest.”
Opponents of the bill, including officials from chambers of commerce around the state, warn it could stifle innovation from AI developers and dissuade them from operating in Michigan. Randy Gross, senior director of legislative affairs for the Michigan Chamber, said during the Sept. 11 hearing that the group acknowledges a need for AI guardrails but believes the federal government should take the lead.
“Handling these issues at the state level is going to create a patchwork approach that will inevitably lead to some inconsistencies in application that will likely lead to some contradictions in how you regulate this issue,” Gross said.
While the House Judiciary Committee reported the bill out during the Sept. 11 hearing, it has yet to receive a vote from the full chamber. A companion bill, HB 4667, would make it illegal to develop an AI system to commit a crime.
Use of AI to monitor workers
Labor advocates have warned of the possibility of AI being used for surveillance in the workplace. Since remote work boomed for many in traditional office jobs during the COVID-19 pandemic, the availability of AI surveillance programs for workers has grown. These can include keystroke logging, facial recognition and even tracking when a remote worker steps away for a bathroom break, according to the Aspen Policy Academy, a Bay Area organization that trains prospective lawmakers.
Some labor advocates argue this is an invasion of privacy.
“Invasive, unnecessary and unethical surveillance techniques (are) increasingly used to track the body movements and even facial expressions of employees continuously,” Rep. Penelope Tsernoglou, D-East Lansing, said at a Feb. 23 news conference.
In February, House Democrats proposed legislation that would define how employers can deploy AI to monitor workers’ productivity.
House Bill 5579, introduced by Tsernoglou, would ban employers from using AI programs to make decisions related to setting wages, hiring and firing workers, and tracking facial patterns of workers. Workplaces would still be allowed to use AI to screen large pools of candidates. Employers would also need to get written consent from workers when using an AI tool to monitor productivity.
The bill has backing from major labor groups, including the Michigan AFL-CIO.
There is opposition from some business groups, however. The Michigan Chamber said in a Feb. 25 news release that the bill would place strict parameters on employers and limit their ability to maintain productive staff levels.
HB 5579 has been referred to the House Committee on Economic Competitiveness, where it awaits a hearing.
Banning AI chatbot ‘therapy’ for minors
Generative AI can be used to mimic some human behavior. Some AI platforms offer companion apps in which a language model converses with a user like a real person.
This has raised concern over how minors use generative AI: A Stanford University study found it was easy for researchers to elicit inappropriate responses from a chatbot when posing as minors. The Federal Trade Commission also launched an inquiry into companion chatbots in September, seeking information on how platforms interact with minors.
OpenAI, which runs the popular ChatGPT program that’s become synonymous with generative AI, has faced wrongful death lawsuits after allegations that its chatbot affirmed suicidal ideations from users. OpenAI has denied claims that ChatGPT is responsible for the deaths.
Senate Bill 760, introduced by Sen. Dayna Polehanki, D-Livonia, would bar AI platforms from offering minors chatbots that mimic emotional support. Specifically, the bill prohibits platforms from retaining conversation history with a minor, sustaining dialogue about the user’s personal matters or offering unprompted emotional advice.
It’s part of a four-bill package aimed at improving social media safety for minors in Michigan.
“These systems are being deployed at scale, marketed as friendly, supportive and conversational. Yet they’re being released without any meaningful safeguards for minors. And when something goes wrong, the consequences can be very grave,” Polehanki said during a March 4 hearing in front of the Senate Committee on Finance, Insurance and Consumer Protection.
Some of the concerns with the proposal center around how AI platforms would verify the age of the user. Age verification laws have popped up in other states and been proposed in Michigan before. Generally, those opposing age verification laws worry about the security of personal information once it’s handed over to a website or another digital platform.
“That kind of data collection creates a honeypot for cyber criminals and bad actors to exploit,” Turner Loesel, a policy analyst at the James Madison Institute, said during the March 4 hearing.
SB 760 currently remains in committee.
Banning AI in public health care, rent-setting
Last year, Rep. Carrie Rheingans, D-Ann Arbor, introduced legislation that would ban the use of AI programs to determine claims for Medicaid and other health insurance programs on the health care marketplace. House Bills 4536 and 4537 were introduced in May and have both been referred to the House Committee on Insurance.
House Bill 4538 would ban landlords from using AI-driven algorithms to calculate average rental prices in an area and then setting rents at their properties based on the AI’s calculations. The bill has been referred to the Committee on Regulatory Reform.
The bills haven’t received hearings in their respective committees yet.
Six states — Arizona, California, Illinois, Maryland, Nebraska and Texas — have laws that in some way ban the usage of AI as the basis to deny health insurance claims, according to KFF (formerly Kaiser Family Foundation).
And while some major cities, like San Francisco and Philadelphia, have banned using algorithms to set rental prices, statewide adoption has been slower, according to government relations firm MultiState.
Trump calls for federal AI standard
In December, President Donald Trump issued an executive order aimed at establishing a federal framework for AI regulation. Having state-level regulations could hamper innovation in AI, the president argued.
“My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones,” the executive order states. “The resulting framework must forbid State laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded. A carefully crafted national framework can ensure that the United States wins the AI race, as we must.”
So far, Congress hasn’t passed any legislation prohibiting states from setting their own AI regulations. Trump’s order also called on the Secretary of Commerce to publish a report examining regulations across all 50 states.
You can reach Arpan Lobo at alobo@freepress.com