GRAND RAPIDS, Mich. — Artificial intelligence is rapidly reshaping everything from classroom conversations to social media, and leaders at Grand Valley State University (GVSU) say West Michigan is positioning itself to help determine how the technology is used responsibly.
The university’s College of Computing is launching the West Michigan Trustworthy Artificial Intelligence (AI) Consortium, aimed at helping businesses, researchers and the community better understand how to use artificial intelligence.
Right in the heart of Grand Rapids, along the Medical Mile, the consortium will meet at the Daniel and Pamella DeVos Center for Interprofessional Health (DCIH) every week, with quarterly meetings open to the general public.
The effort is aimed at helping West Michigan industries adopt AI that fits their specific needs, while problem-solving for security, bias, privacy, and ethical concerns.
Marouane Kessentini, Ph.D., dean of the GVSU College of Computing, told News Channel 3 that a wide range of companies in the region are bringing forward questions about where, and how, to ethically integrate artificial intelligence into their practices.
“Here in West Michigan, we have a high concentration of many industries: health, manufacturing, and of course high-tech companies,” said Kessentini. “The first questions are about security, privacy, ethics and bias. It’s not just about deploying tools. It’s about deploying them responsibly.”
Kessentini said the consortium will focus on training, research and community education, with a heavy emphasis on data privacy, cybersecurity and misinformation.
“There are many examples where AI systems were trained on data that wasn’t diverse,” he said. “That can lead to inaccurate results. That’s why testing and training are critical.”
The consortium will bring together faculty researchers, students, and industry leaders, with weekly meetings planned to develop guidance for using AI at scale.
The goal is to help companies validate AI outputs, clean and manage data, and identify bias before systems are put into real-world use, especially in high-risk industries like healthcare and manufacturing.
Some projects will involve software design; others will focus on creating public data sets that are reliably sourced but anonymized for safe use. Many more have yet to be defined.
The initiative is backed by $1,031,000 in federal support through the Community Project Funding (CPF) process, resources that U.S. Representative Hillary Scholten (D-MI-03) said she advocated for in Congress.
“West Michigan should be leading the way in how artificial intelligence is developed and used, and that starts with investing in people and institutions we trust,” said Rep. Scholten. “This funding will help GVSU bring together educators, industry, and public partners to build AI systems that are ethical, secure, and transparent while preparing students for good-paying jobs and strengthening our region’s economy. I’m proud to support this work and to continue delivering federal investments that ensure West Michigan remains at the forefront of responsible innovation.”
GVSU also launched a free online certificate portal, open to community members interested in learning about ethical AI use.
Kessentini said the training is for the general public to learn how to navigate the technology, including the risks and limitations.
“It’s important that AI is useful, but also safe,” said Edgar Cruz, master’s student with a badge in cybersecurity.
Cruz is currently researching how AI systems can be attacked or manipulated with poisoned data, specifically as it relates to vehicle-to-vehicle communication, where AI helps self-driving cars exchange information like speed and position.
“We want to ensure that the system is robust and safe,” he said. “Because obviously people are involved.”
Kessentini said the consortium is designed to be a public resource, not just an academic project.
Quarterly community meetings will be open to the public, and training materials are available online through the College of Computing website.
“This is innovation with purpose,” he said. “We want to start here in Grand Rapids, but we want to make a global impact.”