Austin, TX

More than a third of state agencies are using AI. Texas is beginning to examine its potential impact. | Houston Public Media


AP Photo / Eric Gay

The State Capitol is seen in Austin, Texas, Tuesday, June 1, 2021.

When the Texas Workforce Commission became inundated with jobless claims in March 2020, it turned to artificial intelligence.

Affectionately named for the agency’s former head Larry Temple, who had died a year earlier, “Larry” the chatbot was designed to help Texans sign up for unemployment benefits.

Like a next-generation FAQ page, Larry would field user-generated questions about unemployment cases. Using AI language processing, the bot would determine which answer prewritten by human staff would best fit the user’s unique phrasing of the question. The chatbot answered more than 21 million questions before being replaced by Larry 2.0 last March.


Larry is one example of the ways artificial intelligence has been used by state agencies. Adoption of the technology in state government has grown in recent years. But that acceleration has also sparked fears of unintended consequences like bias, loss of privacy or losing control of the technology. This year, the Legislature committed to taking a more active role in monitoring how the state is using AI.

“This is going to totally revolutionize the way we do government,” said state Rep. Giovanni Capriglione, R-Southlake, who wrote a bill aimed at helping the state make better use of AI technology.

In June, Gov. Greg Abbott signed that bill, House Bill 2060, into law, creating an AI advisory council to study and take inventory of the ways state agencies currently utilize AI and assess whether the state needs a code of ethics for AI. The council’s role in monitoring what the state is doing with AI does not involve writing final policy.

Artificial intelligence describes a class of technology that emulates and builds upon human reasoning through computer systems. The chatbot uses language processing to understand users’ questions and match them to predetermined answers. New tools such as ChatGPT are categorized as generative AI because the technology generates a unique answer based on a user prompt. AI is also capable of analyzing large data sets and using that information to automate tasks previously performed by humans. Automated decision making is at the center of HB 2060.
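The Tribune does not describe Larry’s internals, but the matching step it outlines — pairing a user’s phrasing with the closest prewritten answer — can be sketched in a few lines of Python. Everything below (the sample questions, answers and the bag-of-words similarity measure) is illustrative only, not the workforce commission’s actual system:

```python
# A minimal sketch of FAQ-style chatbot matching: represent each text as a
# bag-of-words vector, then return the prewritten answer whose question is
# most similar (by cosine similarity) to the user's phrasing.
import math
from collections import Counter

# Hypothetical prewritten question/answer pairs (not the agency's real content).
FAQ = {
    "How do I apply for unemployment benefits?":
        "You can apply online through the Texas Workforce Commission portal.",
    "When will I receive my first payment?":
        "Payments are typically issued within four weeks of an approved claim.",
    "How do I reset my account password?":
        "Use the password reset link on the login page.",
}

def vectorize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a sentence."""
    return Counter(text.lower().replace("?", "").split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[word] * b[word] for word in a)  # Counter returns 0 for missing words
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def answer(user_question: str) -> str:
    """Return the prewritten answer whose question best matches the input."""
    query = vectorize(user_question)
    best = max(FAQ, key=lambda q: cosine(query, vectorize(q)))
    return FAQ[best]

# The user's wording need not match the prewritten question exactly.
print(answer("how can i apply for benefits"))
```

A production chatbot would use a trained language model rather than raw word overlap, but the principle is the same: the bot selects among human-written answers instead of generating new text, which is what distinguishes it from generative tools like ChatGPT.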

More than one third of Texas state agencies are already utilizing some form of artificial intelligence, according to a 2022 report from the Texas Department of Information Resources. The workforce commission also has an AI tool for job seekers that provides customized recommendations of job openings. Various agencies are using AI for translating languages into English and call center tools such as speech-to-text. AI is also used to enhance cybersecurity and fraud detection.


Automation is also used for time-consuming work in order to “increase work output and efficiency,” according to a statement from the Department of Information Resources. One example of this could be tracking budget expenses and invoices. In 2020, DIR launched an AI Center for Excellence aimed at helping state agencies implement more AI technology. Participation in DIR’s center is voluntary, and each agency typically has its own technology team, so the extent of automation and AI deployment at state agencies is not closely tracked.

Right now, Texas state agencies have to verify that the technology they use meets safety requirements set by state law, but there are no specific disclosure requirements on the types of technology or how they are used. HB 2060 will require each agency to provide that information to the AI advisory council by July 2024.

“We want agencies to be creative,” Capriglione said. He favors finding more use cases for AI that go well beyond chatbots, but recognizes there are concerns about poor data quality stopping the system from working as intended: “We’re gonna have to set some rules.”

As adoption of AI has grown, so have worries around the ethics and functionality of the technology. The AI advisory council is the first step toward oversight of how the technology is being deployed. The seven-member council will include a member of the state House and the Senate, an executive director and four individuals appointed by the governor with expertise in AI, ethics, law enforcement and constitutional law.

Samantha Shorey is an assistant professor at the University of Texas at Austin who has studied the social implications of artificial intelligence, particularly the kind designed for increased automation. She is concerned that if technology is empowered to make more decisions, it will replicate and exacerbate social inequality: “It might move us towards the end goal more quickly. But is it moving us towards an end goal that we want?”


Proponents of using more AI view automation as a way to make government work more efficiently. Harnessing the latest technology could help speed up case management for social services, provide immediate summaries of lengthy policy analysis or streamline the hiring and training process for new government employees.

However, Shorey is cautious about the possibility of artificial intelligence being brought into decision-making processes such as determining who qualifies for social service benefits, or how long someone should be on parole. Earlier this year, the U.S. Justice Department began investigating allegations that a Pennsylvania county’s AI model intended to help improve child welfare was discriminating against parents with disabilities and resulting in their children being taken away.

AI systems “tend to absorb whatever biases there are in the past data,” said Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University. Artificial intelligence that is trained on data that includes any kind of gender, religious, race or other bias is at risk of learning to discriminate.

In addition to the problem of flawed data reproducing social inequality, there are also privacy concerns around the technology’s dependence on collecting large amounts of data. What the AI could be doing with that data over time is also driving fears that humans will lose some control over the technology.

“As AI gets more and more complicated, it’s very hard to understand how these systems are working, and why they’re making decisions the way they do,” Venkatasubramanian said.


That fear is shared by Jason Green-Lowe, executive director at the Center for AI Policy, a group that has lobbied for stricter AI safety rules in Washington, D.C. With the accelerating pace of technology and a lack of regulatory oversight, Green-Lowe said, “soon we might find ourselves in a world where AI is mostly steering. … And the world starts to reorient itself to serve the AI’s interests rather than human interest.”

Some technical experts, however, are more confident that humans will remain in the driver’s seat of increasing AI deployment. Alex Dimakis, a professor of electrical engineering and computer science at the University of Texas at Austin, worked on the artificial intelligence commission for the U.S. Chamber of Commerce.

In Dimakis’ view, AI systems should be transparent and subject to independent evaluation known as red teaming, a process in which the underlying data and decision-making process of the technology are scrutinized by multiple experts to determine if more robust safety measures are necessary.

“You cannot hide behind AI,” Dimakis said. Beyond transparency and evaluation, Dimakis said the state should enforce existing laws against whoever created the AI in any case where the technology produces an outcome that violates the law: “apply the existing laws without being confused that an AI system is in the middle.”

The AI advisory council will submit its findings and recommendations to the Legislature by December 2024. In the meantime, interest is growing in deploying AI at all levels of government. DIR operates an artificial intelligence user group made up of representatives from state agencies, higher education and local government interested in implementing AI.


Interest in the user group is growing by the day, according to a DIR spokesperson. The group has more than 300 members representing more than 85 different entities.

Disclosure: University of Texas at Austin and US Chamber of Commerce have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

This article originally appeared in The Texas Tribune at https://www.texastribune.org/2024/01/02/texas-government-artificial-intelligence/. The Texas Tribune is a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.




Austin, TX

Austin police released officer-worn body camera video after Sixth Street mass shooting



Austin police say they are still investigating whether terrorism played a role in the Sixth Street mass shooting, describing it as a possible motive that remains under review.

On Thursday, the Austin Police Department released officer-worn body camera footage from the night of the shooting and played recordings of emergency calls placed in the moments after gunfire erupted early Sunday morning.

“Hello, this is Austin 911. There has been a shooting at Buford’s on Sixth Street. There are people dead,” a caller told dispatchers in one of the recordings. Authorities say numerous calls flooded the 911 center after a gunman opened fire, killing three people and injuring more than a dozen others.

Police Chief Lisa Davis said some of the footage investigators reviewed shows the suspect firing into a crowd, but those images are too graphic to release publicly. “Any video showing the suspect firing his pistol into the crowd is too graphic to show, and we will not be showing that publicly,” Davis said.



According to investigators, the suspect was driving on West Sixth Street toward Rio Grande Street when he stopped in front of Buford’s and fired into a crowd with a semi-automatic handgun. Body camera footage from responding officers captures the chaotic moments as police and bystanders reacted to the gunfire.

“I am with you,” one officer says in the video before shouting, “AR-15. AR-15. Down! Everybody down!”

Police say not all of the victims were inside the bar when the shooting occurred. “One of the victims was outside of Buford’s waiting for an Uber,” police said during a news conference. Chief Davis agreed that the victims were spread out. “These were not all the people who were in the bar,” she said. “Sixth Street is an entertainment area from east to west. It is an entertainment area. People come to walk along Sixth Street.”

Surveillance video shows the suspect later parking a black SUV, getting out with an AR-15-style rifle, and shooting a pedestrian. By that point, officers had already been dispatched and arrived 57 seconds after the first emergency call, police said. Investigators say the suspect then fired toward officers. “The suspect discharged his weapon at the direction of the officers. The three officers discharged their firearm, striking him multiple times,” Davis said. Body camera footage from the scene caught officers asking, “Where is he? Who shot them?” before additional gunfire is heard.


City leaders say the officers’ rapid response helped prevent further loss of life. In the meantime, investigators are asking anyone with video or photos from that night to share them with police.




Austin, TX

Austin Police Department updates procedures after controversial deportation



AUSTIN, Texas — An update to the Austin Police Department’s (APD) procedures outlines that officers are not required to contact U.S. Immigration and Customs Enforcement (ICE) when a person is found to have an ICE administrative warrant if they have no other arrestable charge.  

The update follows a controversial deportation from January, when a woman’s disturbance call to APD led to the detention of her and her 5-year-old child, who is a U.S. citizen.

The incident led to questions from the community regarding the way APD is supposed to interact with ICE.  

In a March 4 memo, APD Police Chief Lisa Davis said that the directives provided by ICE administrative warrants could be confusing in their wording.


According to Davis, officers historically encountered administrative warrants only rarely while using the National Crime Information Center database, which is used to conduct identity checks. However, in 2025, federal agencies began entering a large volume of administrative warrants into the system.

According to the memo, administrative warrants are formatted in a way that looks similar to criminal warrants in the system.

The APD General Orders have been updated to clearly define the difference between criminal warrants and ICE administrative warrants, and to give specific instructions for how ICE administrative warrants should be handled moving forward.

“APD recognizes the sensitivity of this issue, not only within our city but across the nation. These policies were updated to provide clarity to our officers, ensure compliance with state law, and maintain officer discretion guided by supervisory oversight and operational consideration,” Davis said in the memo.

The updated procedures instruct officers to contact their supervisor when a person is found to have only an ICE administrative warrant, but no other arrestable criminal charge. From there, the officer or their supervisor may contact ICE, but is not required to.


“Austin Police and City of Austin leadership share a paramount goal for Austin to be a safe city for everyone who lives, works, or visits here,” Davis said in the memo. “We particularly want to ensure that anyone who witnesses or is the victim of a crime feels secure in contacting the police for help.”

According to the memo, the entire APD staff will be required to complete new training regarding these updates.  

“In concert with the policy updates, APD is launching a public webpage to help people understand their rights and provide links to resources available from the City of Austin and community organizations, such as Know Your Rights training,” Davis said in the memo. “The webpage will also include information on the option of using APD Victim Services as an alternative to calling 9-1-1, when appropriate, and links to all general orders and policies related to immigration.”




Austin, TX

Texas Plans Second Execution of the Year



Cedric Ricks spoke in his own defense at his 2013 murder trial, something most defendants accused of a terrible crime do not do. Ricks confessed that he had killed his girlfriend, Roxann Sanchez, and her 8-year-old son. He admitted he was aggressive and had trouble controlling his anger, stating that he was “sorry about everything.” […]


