Business
An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.
In a small apartment outside Madrid on Jan. 11, 2022, an argument over household chores turned violent when Lobna Hemid’s husband smashed a wooden shoe rack and used one of the broken pieces to beat her. Her screams were heard by neighbors. Their four children, ages 6 to 12, were also home.
Ms. Hemid’s husband of more than a decade, Bouthaer el Banaisati, regularly punched and kicked her, she later told the police. He also called her a “whore,” “disgusting” and “worthless,” according to the police report.
Before Ms. Hemid left the station that night, the police had to determine if she was in danger of being attacked again and needed support. A police officer clicked through 35 yes or no questions — Was a weapon used? Were there economic problems? Has the aggressor shown controlling behaviors? — to feed into an algorithm called VioGén that would help generate an answer.
VioGén produced a score:
Low risk: Lobna Hemid, Madrid, 2022
The police accepted the software’s judgment and Ms. Hemid went home with no further protection. Mr. el Banaisati, who was imprisoned that night, was released the next day. Seven weeks later, he fatally stabbed Ms. Hemid several times in the chest and abdomen before killing himself. She was 32 years old.
A photo of Lobna Hemid on the phone of a friend. She was killed by her husband in 2022.
Ana Maria Arevalo Gosen for The New York Times
Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins. At its best, the system has helped police protect vulnerable women and, overall, has reduced the number of repeat attacks in domestic violence cases. But the reliance on VioGén has also meant that victims whose risk levels were miscalculated have been attacked again, sometimes with fatal consequences.
Spain now has 92,000 active cases of gender violence victims who were evaluated by VioGén, with most of them — 83 percent — classified as facing little risk of being hurt by their abuser again. Yet roughly 8 percent of the women the algorithm found to be at negligible risk, and 14 percent of those it found to be at low risk, have reported being harmed again, according to Spain’s Interior Ministry, which oversees the system.
At least 247 women have also been killed by their current or former partner since 2007 after being assessed by VioGén, according to government figures. While that is a tiny fraction of gender violence cases, it points to the algorithm’s flaws. The New York Times found that in a judicial review of 98 of those homicides, 55 of the slain women were scored by VioGén as negligible or low risk for repeat abuse.
How the Risk Levels of 98 Women Were Classified
[Chart showing how many of the 98 slain women fell into each tier: extreme, high, medium, low or negligible.]
Source: Spanish General Council of the Judiciary. Note: Data from 2010 to 2022; data from 2016 to 2018 is unavailable. By Alice Fang.
Spanish police are trained to overrule VioGén’s recommendations depending on the evidence, but accept the risk scores about 95 percent of the time, officials said. Judges can also use the results when considering requests for restraining orders and other protective measures.
“Women are falling through the cracks,” said Susana Pavlou, director of the Mediterranean Institute of Gender Studies, who coauthored a European Union report about VioGén and other police efforts to fight violence against women. The algorithm “kind of absolves the police of any responsibility of assessing the situation and what the victim may need.”
Spain exemplifies how governments are turning to algorithms to make societal decisions, a global trend that is expected to grow with the rise of artificial intelligence. In the United States, algorithms help determine prison sentences, set police patrols and identify children at risk of abuse. In the Netherlands and Britain, authorities have experimented with algorithms to predict who may become criminals and to identify people who may be committing welfare fraud.
Few of the programs have such life-or-death consequences as VioGén. But victims interviewed by The Times rarely knew about the role the algorithm played in their cases. The government also has not released comprehensive data about the system’s effectiveness and has refused to make the algorithm available for outside audit.
VioGén was created as an unbiased tool to help police with limited resources identify and protect the women most at risk of being assaulted again. The technology was meant to create efficiencies by helping the police prioritize the most urgent cases while spending less effort on those the algorithm calculated to be lower risk. Victims classified as higher risk get more protection, including regular patrols near their homes, access to a shelter and police monitoring of their abuser’s movements. Those with lower scores get less support.
In a statement, the Interior Ministry defended VioGén and said the government was the “first to carry out self-criticism” when mistakes occur. It said homicide was so rare that it was difficult to accurately predict, but added it was an “incontestable fact” that VioGén has helped reduce violence against women.
Since 2007, about 0.03 percent of Spain’s 814,000 reported victims of gender violence have been killed after being assessed by VioGén, the ministry said. During that time, repeat attacks have fallen to roughly 15 percent of all gender violence cases from 40 percent, according to government figures.
“If it weren’t for this, we would have more homicides and gender-based violence,” said Juan José López Ossorio, a psychologist who helped create VioGén and works for the Interior Ministry.
Juan José López Ossorio, a government official who helped create the VioGén system. Ana Maria Arevalo Gosen for The New York Times
Yet victims and their families are grappling with the consequences when VioGén gets it wrong.
“Technology is fine, but sometimes it’s not and then it’s fatal,” said Jesús Melguizo, Ms. Hemid’s brother-in-law, who is a guardian for two of her children. “The computer has no heart.”
‘Effective but not perfect’
VioGén started with a question: Can police predict an assault before it happens?
After Spain passed a law in 2004 to address violence against women, the government assembled experts in statistics, psychology and other fields to find an answer. Their goal was to create a statistical model to identify women most at risk of abuse and to outline a standardized response to protect them.
Some initial designs and research strategies for what became VioGén, including a decision tree and calibration techniques for predicting intimate partner homicides.
Ana Maria Arevalo Gosen for The New York Times
“It would be a new guide for risk assessment in gender violence,” said Antonio Pueyo, a psychology professor at the University of Barcelona who later joined the effort.
The team took an approach similar to the one insurance companies and banks use to predict the likelihood of future events, such as house fires or currency swings. They studied national crime statistics, police records and the work of researchers in Britain and Canada to find indicators that appeared to correlate with gender violence. Substance abuse, job loss and economic uncertainty were high on the list.
Then they came up with a questionnaire for victims so their answers could be compared with historical data. Police would fill in the answers after interviewing a victim, reviewing documentary evidence, speaking with witnesses and studying other information from government agencies. Answers to certain questions carried more weight than others, such as whether an abuser displayed suicidal tendencies or showed signs of jealousy.
These are some of the questions answered by women:
6. In the last six months, has there been an escalation of aggression or threats? (Yes / No / N/A)
26. Has the aggressor demonstrated addictive behaviors or substance abuse? (Yes / No / N/A)
34. In the last six months, has the victim expressed to the aggressor her intention to sever their relationship? (Yes / No / N/A)
The system produced a score for each victim: negligible risk, low risk, medium risk, high risk or extreme risk. A higher score would result in police patrols and the tracking of an aggressor’s movements. In extreme cases, police would assign 24-hour surveillance. Those with lower scores would receive fewer resources, mainly follow-up calls.
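VioGén’s actual questions, weights and cutoffs have not been made public, so its formula cannot be reproduced here. As a rough illustration of the mechanism its designers describe, below is a minimal sketch of a weighted yes-or-no questionnaire mapped to the five risk tiers; every question name, weight and threshold is invented for the example.

```python
# Illustrative sketch only: VioGén's real questions, weights and cutoffs
# are not public. Every value below is invented.

RISK_TIERS = ["negligible", "low", "medium", "high", "extreme"]

# Hypothetical weights; indicators the designers described as stronger
# predictors (suicidal tendencies, jealousy) weigh more than others.
QUESTION_WEIGHTS = {
    "weapon_used": 3,
    "escalation_last_six_months": 2,
    "suicidal_tendencies": 4,
    "signs_of_jealousy": 4,
    "substance_abuse": 2,
    "economic_problems": 1,
    "victim_intends_to_separate": 2,
}

# Hypothetical score cutoffs separating the five tiers.
TIER_CUTOFFS = [3, 6, 10, 14]

def assess(answers: dict) -> str:
    """Sum the weights of the "yes" answers and bucket the total.
    Unanswered questions count as "no," which is one way incomplete
    testimony can silently drag a score downward."""
    score = sum(w for q, w in QUESTION_WEIGHTS.items() if answers.get(q))
    tier = sum(score >= cutoff for cutoff in TIER_CUTOFFS)
    return RISK_TIERS[tier]

# A victim who withholds details out of fear registers fewer "yes" answers,
# and the computed tier falls accordingly.
print(assess({"weapon_used": True, "economic_problems": True}))  # low
print(assess({"weapon_used": True, "suicidal_tendencies": True,
              "signs_of_jealousy": True,
              "escalation_last_six_months": True}))  # high
```

Even in this toy version, the failure mode victims describe later in this article is visible: any danger sign that goes unreported counts as a “no” and pulls the score, and the protection tied to it, downward.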
Predictive algorithms to address domestic violence have been used in parts of Britain, Canada, Germany and the United States, but not on such a national scale. In Spain, the Interior Ministry introduced VioGén everywhere but in the Catalonia region and Basque Country.
Law enforcement initially greeted the algorithm with skepticism, police and government officials told The Times, but it soon became a part of everyday police business.
Before VioGén, investigations were “based on the experience of the policeman,” said Mr. Pueyo, who remains affiliated with the program. “Now this is organized and guided by VioGén.”
VioGén is a source of impartial information, he said. If a woman attacked late at night was seen by a young police officer with little experience, VioGén could help detect the risk of future violence.
“It’s more efficient,” Mr. Pueyo said.
Over the years, VioGén has been refined and updated, including with metrics that are believed to better predict homicide. Police have also been required to conduct a follow-up risk assessment within 90 days of an attack.
But Spain’s faith in the system has surprised some experts. Juanjo Medina, a senior researcher at the University of Seville who has studied VioGén, said the system’s effectiveness remains unclear.
“We’re not good at forecasting the weather, let alone human behavior,” he said.
Francisco Javier Curto, a commander for the military police in Seville, said VioGén helps his teams prioritize, but requires close oversight. About 20 new cases of gender violence arrive every day, each requiring investigation. Providing police protection for every victim would be impossible given staff sizes and budgets.
“The system is effective but not perfect,” he said, adding that VioGén is “the best system that exists in the world right now.”
Francisco Javier Curto, a commander for the military police in Seville who oversees gender violence incidents in the province. VioGén is “the best system that exists in the world right now,” he said.
Ana Maria Arevalo Gosen for The New York Times
José Iniesta, a civil guard in Alicante, a southeastern port city, said not enough of the police are trained to keep up with growing case loads. A leader in the United Association of Civil Guards, a union representing officers in rural areas, he said that outside of big cities, the police often must choose between addressing violence against women or other crimes.
Sindicato Unificado de Policía, a union that represents national police officers, said even the most effective technology cannot make up for a lack of trained experts. In some places, a police officer is assigned to work with more than 100 victims.
“Agents in many provinces are overwhelmed,” the union said in a statement.
When attacks happen again
The women who have been killed after being assessed by VioGén can be found across Spain.
One was Stefany González Escarraman, a 26-year-old living near Seville. In 2016, she went to the police after her husband punched her in the face and choked her. He threw objects at her, including a kitchen ladle that hit their 3-year-old child. After police interviewed Ms. Escarraman for about five hours, VioGén determined she had a negligible risk of being abused again.
Negligible risk: Stefany González Escarraman, Seville, 2016
The next day, Ms. Escarraman, who had a swollen black eye, went to court for a restraining order against her husband. Judges can serve as a check on the VioGén system, with the ability to intervene in cases and provide protective measures. In Ms. Escarraman’s case, the judge denied a restraining order, citing VioGén’s risk score and her husband’s lack of criminal history.
Stefany González Escarraman, who was killed in 2016 by her husband. VioGén had scored her as negligible risk.
About a month later, Ms. Escarraman’s husband stabbed her multiple times in the heart in front of their children. In 2020, her family won a verdict against the state for failing to adequately measure the level of risk and provide sufficient protection.
“If she had been given the help, maybe she would be alive,” said Williams Escarraman, Ms. Escarraman’s brother.
In 2021, Eva Jaular, who lived in Liaño in northern Spain, was slain by her former boyfriend after being classified as low risk by VioGén. He also killed their 11-month-old daughter. Six weeks earlier, he had jabbed a knife into a couch cushion next to where Ms. Jaular sat and said, “look how well it sticks,” according to a police report.
Low risk: Eva Jaular, Liaño, 2021
Since 2007, 247 of the 990 women killed in Spain by a current or former partner had previously been scored by VioGén, according to the Interior Ministry. The other victims had never reported their abusers to the police, so they were not in the system. The ministry declined to disclose the VioGén risk scores of the 247 who were killed.
The Times instead analyzed reports from a Spanish judicial agency, released almost every year from 2010 to 2022, which included information about the risk scores of 98 women who were later killed. Of those, 55 had been classified as negligible risk or low risk.
In a statement, the Interior Ministry said that analyzing the risk scores of homicide victims doesn’t provide an accurate picture of VioGén’s effectiveness because some homicides happened more than a year after the first assessment, while others were committed by a different partner.
Why the algorithm misclassifies some women varies and isn’t always clear, but one reason may be the poor quality of the information fed into the system. VioGén is best suited to cases in which a woman, in the moments after being attacked, can provide complete information to an experienced police officer who has time to fully investigate the incident.
That does not always happen. Fear, shame, economic dependency, immigration status and other factors can lead a victim to withhold information. Police are also often squeezed for time and may not fully investigate.
Elisabeth, a lawyer, is a survivor of gender violence who now advocates for other victims who face institutional mistreatment in Spain. Ana María Arévalo Gosen for The New York Times
“If we already enter erroneous information into the system, how can we expect the system to give us a good result?” said Elisabeth, a victim who now works as a gender violence lawyer. She spoke on the condition her full name not be used, for fear of retaliation by her former partner.
Luz, a woman from a village in southern Spain, said she was repeatedly labeled low risk after attacks by her partner because she was afraid and ashamed to provide complete information to the police, some of whom she knew personally. She got her risk score increased to extreme only after working with a lawyer specializing in gender violence cases, leading to round-the-clock police protection.
Extreme risk: Luz, southern Spain, 2019
“We women keep a lot of things silent not because we want to lie but out of fear,” said Luz, who spoke on the condition her full name not be used for fear of retaliation by her attacker, who was imprisoned. “VioGén would be good if there were qualified people who had all the necessary tools to carry it out.”
Luz, with her son, said she was labeled lower risk because she was afraid and ashamed to provide complete information about her partner’s abuse to police.
Ana María Arévalo Gosen for The New York Times
Victim groups said that psychologists or other trained specialists, rather than the police, should lead the questioning of victims. Some have urged the government to mandate that victims be allowed to be accompanied by somebody they trust, to help ensure that full information reaches the authorities; that is currently not permitted everywhere.
“It’s not easy to report a person you’ve loved,” said María, a victim from Granada in southern Spain, who was labeled medium risk after her partner attacked her with a dumbbell. She asked that her full name not be published for fear of retaliation by him.
Medium risk: María, Granada, 2023
Ujué Agudo, a Spanish researcher studying the influence of artificial intelligence on human decisions, said technology has a role in solving societal problems. But she warned that it could reduce the responsibility of humans to approving the work of a machine, rather than doing the necessary work themselves.
“If the system succeeds, it’s a success of the system. If the system fails, it’s a human error that they aren’t monitoring properly,” said Ms. Agudo, a co-director of Bikolabs, a Spanish civil society group. A better approach, she said, was for people “to say what their decision is before seeing what the A.I. thinks.”
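One way to implement the ordering Ms. Agudo suggests, sketched here with invented names, is to commit the officer’s independent judgment to the case file before the algorithm’s score is revealed; the log then shows whether officers genuinely concur with the software or simply defer to it.

```python
# Hypothetical sketch of the ordering Ms. Agudo proposes; none of these
# names come from VioGén itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class CaseRecord:
    case_id: str
    officer_tier: str    # committed before the algorithm's output is shown
    algorithm_tier: str  # revealed and recorded afterward

def audit_agreement(records: list) -> float:
    """Share of cases in which officer and algorithm independently agreed.
    A rate near the reported ~95 percent acceptance would suggest genuine
    concurrence; a much lower one would suggest deference to the score."""
    if not records:
        return 0.0
    return sum(r.officer_tier == r.algorithm_tier for r in records) / len(records)
```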
Spanish officials are exploring incorporating A.I. into VioGén so it can pull data from different sources and learn more on its own. Mr. López Ossorio said the tools could be applied to other areas, including workplace harassment and hate crimes.
The systems will never be perfect, he said, but neither is human judgment. “Whatever we do, we always fail,” he said. “It’s unsolvable problems.”
This month, the Spanish government called an emergency meeting after three women were killed by former partners within a 24-hour span. One victim, a 30-year-old from central Spain, had been classified by VioGén as low risk.
At a news conference, Fernando Grande-Marlaska, the interior minister, said he still had “absolute confidence” in the system.
‘Always cheerful’
A memorial of roses and eucalyptus adorns a lamppost at the entrance to the street where Ms. Hemid lived.
Ana Maria Arevalo Gosen for The New York Times
Ms. Hemid, who was killed outside Madrid in 2022, was born in rural Morocco. She was 14 when she was introduced at a family wedding to Mr. el Banaisati, who was 10 years older than her. She was 17 when they married. They later moved to Spain so he could pursue steadier work.
Ms. Hemid was outgoing and gregarious, often seen racing to get her children to school on time, friends said. She learned to speak Spanish and sometimes joined children playing soccer in the park.
“She was always cheerful,” said Amelia Franas, a friend whose children went to the same school as Ms. Hemid’s children.
Few knew that abuse was a fixture of Ms. Hemid’s marriage. She spoke little about her home life, friends said, and never called the police or reported Mr. el Banaisati before the January 2022 incident.
VioGén is intended to identify danger signs that humans may overlook, but in Ms. Hemid’s case, it appears that police missed some clues. Her neighbors told The Times they were not interviewed, nor were administrators at her children’s school, who said they had seen signs of trouble.
Family members said Mr. el Banaisati had a life-threatening form of cancer that made him behave erratically. Many blamed underlying discrimination in Spain’s criminal justice system, which they said overlooks violence against immigrant women, especially Muslims.
Police haven’t released a copy of the assessment that produced Ms. Hemid’s low risk score from VioGén. A copy of a separate police report shared with The Times noted that Ms. Hemid was tired during questioning and wanted to end the interview to get home.
A few days after the January 2022 attack, Ms. Hemid won a restraining order against her husband. But Mr. el Banaisati largely ignored the order, family and friends said. He moved into an apartment less than 500 meters from where Ms. Hemid lived and continued threatening her.
Mr. Melguizo, her brother-in-law, said he appealed to Ms. Hemid’s assigned public lawyer for help, but was told the police “won’t do anything, it has a low risk score.”
Ms. Hemid had a court date to officially file for divorce scheduled for the day after she was stabbed to death.
Business
California-based company recalls thousands of cases of salad dressing over ‘foreign objects’
A California food manufacturer is recalling thousands of cases of salad dressing distributed to major retailers over potential contamination from “foreign objects.”
The company, Irvine-based Ventura Foods, recalled 3,556 cases of dressing that could be contaminated by “black plastic planting material” in the granulated onion it used, according to an alert issued by the U.S. Food and Drug Administration.
Ventura Foods voluntarily initiated the recall of the product, which was sold at Costco, Publix and several other retailers across 27 states, according to the FDA.
None of the 42 locations where the product was sold were in California.
Ventura Foods said it issued the recall after one of its ingredient suppliers recalled a batch of onion granules that the company had used in some of its dressings.
“Upon receiving notice of the supplier’s recall, we acted with urgency to remove all potentially impacted product from the marketplace. This includes urging our customers, their distributors and retailers to review their inventory, segregate and stop the further sale and distribution of any products subject to the recall,” said company spokesperson Eniko Bolivar-Murphy in an emailed statement. “The safety of our products is and will always be our top priority.”
The FDA issued its initial recall alert in early November. Costco also alerted customers at that time, noting that customers could return the products to stores for a full refund. The affected products had sell-by dates between Oct. 17 and Nov. 9.
The company recalled the following types of salad dressing:
- Creamy Poblano Avocado Ranch Dressing and Dip
- Ventura Caesar Dressing
- Pepper Mill Regal Caesar Dressing
- Pepper Mill Creamy Caesar Dressing
- Caesar Dressing served at Costco Service Deli
- Caesar Dressing served at Costco Food Court
- Hidden Valley, Buttermilk Ranch
Business
They graduated from Stanford. Due to AI, they can’t find a job
A Stanford software engineering degree used to be a golden ticket. Artificial intelligence has devalued it to bronze, recent graduates say.
The elite students are shocked by the lack of job offers as they finish studies at what is often ranked as the top university in America.
When they were freshmen, ChatGPT hadn’t yet been released upon the world. Today, AI can code better than most humans.
Top tech companies just don’t need as many fresh graduates.
“Stanford computer science graduates are struggling to find entry-level jobs” with the most prominent tech brands, said Jan Liphardt, associate professor of bioengineering at Stanford University. “I think that’s crazy.”
While the rapidly advancing coding capabilities of generative AI have made experienced engineers more productive, they have also hobbled the job prospects of early-career software engineers.
Stanford students describe a suddenly skewed job market, where just a small slice of graduates — those considered “cracked engineers,” who already have thick resumes from building products and doing research — are getting the few good jobs, leaving everyone else to fight for scraps.
“There’s definitely a very dreary mood on campus,” said a recent computer science graduate who asked not to be named so they could speak freely. “People [who are] job hunting are very stressed out, and it’s very hard for them to actually secure jobs.”
The shake-up is being felt across California colleges, including UC Berkeley, USC and others. The job search has been even tougher for those with less prestigious degrees.
Eylul Akgul graduated last year with a degree in computer science from Loyola Marymount University. She wasn’t getting offers, so she went home to Turkey and got some experience at a startup. In May, she returned to the U.S., and still, she was “ghosted” by hundreds of employers.
“The industry for programmers is getting very oversaturated,” Akgul said.
The engineers’ most significant competitor is getting stronger by the day. When ChatGPT launched in 2022, it could only code for 30 seconds at a time. Today’s AI agents can code for hours, and do basic programming faster with fewer mistakes.
Data suggests that even though AI startups like OpenAI and Anthropic are hiring many people, their hiring is not offsetting the decline elsewhere. Employment for some groups, such as early-career software developers between the ages of 22 and 25, has declined by nearly 20% from its peak in late 2022, according to a Stanford study.
Software engineers weren’t the only ones affected; customer service and accounting jobs were also highly exposed to competition from AI. The Stanford study estimated that entry-level hiring for AI-exposed jobs declined 13% relative to less-exposed jobs such as nursing.
In the Los Angeles region, another study estimated that close to 200,000 jobs are exposed to AI. Around 40% of tasks done by call center workers, editors and personal finance experts could be automated, according to an AI Exposure Index curated by resume builder MyPerfectResume.
Many tech startups and titans have not been shy about broadcasting that they are cutting back on hiring plans as AI allows them to do more programming with fewer people.
Anthropic Chief Executive Dario Amodei said that 70% to 90% of the code for some products at his company is written by its AI, called Claude. In May, he predicted that AI could wipe out close to 50% of all entry-level white-collar jobs within five years.
A common sentiment from hiring managers is that where they previously needed ten engineers, they now only need “two skilled engineers and one of these LLM-based agents,” which can be just as productive, said Nenad Medvidović, a computer science professor at the University of Southern California.
“We don’t need the junior developers anymore,” said Amr Awadallah, CEO of Vectara, a Palo Alto-based AI startup. “The AI now can code better than the average junior developer that comes out of the best schools out there.”
To be sure, AI is still a long way from causing the extinction of software engineers. As AI handles structured, repetitive tasks, human engineers’ jobs are shifting toward oversight.
Today’s AIs are powerful but “jagged,” meaning they can excel at certain math problems yet still fail basic logic tests and aren’t consistent. One study found that AI tools made experienced developers 19% slower at work, as they spent more time reviewing code and fixing errors.
Students should focus on learning how to manage and check the work of AI as well as getting experience working with it, said John David N. Dionisio, a computer science professor at LMU.
Stanford students say they are arriving at the job market and finding a fork in the road: capable AI engineers can find jobs, but basic, old-school computer science jobs are disappearing.
As they hit this surprise speed bump, some students are lowering their standards and joining companies they wouldn’t have considered before. Some are creating their own startups. A large group of frustrated grads are deciding to continue their studies to beef up their resumes and add more skills needed to compete with AI.
“If you look at the enrollment numbers in the past two years, they’ve skyrocketed for people wanting to do a fifth-year master’s,” the Stanford graduate said. “It’s a whole other year, a whole other cycle to do recruiting. I would say, half of my friends are still on campus doing their fifth-year master’s.”
After four months of searching, LMU graduate Akgul finally landed a technical lead job at a software consultancy in Los Angeles. At her new job, she uses AI coding tools, but she feels like she has to do the work of three developers.
Universities and students will have to rethink their curricula and majors to ensure that their four years of study prepare them for a world with AI.
“That’s been a dramatic reversal from three years ago, when all of my undergraduate mentees found great jobs at the companies around us,” Stanford’s Liphardt said. “That has changed.”
Business
Disney+ to be part of a streaming bundle in Middle East
Walt Disney Co. is expanding its presence in the Middle East, inking a deal with Saudi media conglomerate MBC Group and UAE firm Anghami to form a streaming bundle.
The bundle will allow customers in Bahrain, Kuwait, Oman, Qatar, Saudi Arabia and the UAE to access a trio of streaming services — Disney+; MBC Group’s Shahid, which carries Arabic originals, live sports and events; and Anghami’s OSN+, which carries Arabic productions as well as Hollywood content.
The three-service bundle costs AED 89.99 per month, the price of just two of the services.
“This deal reflects a shared ambition between Disney+, Shahid and the MBC Group to shape the future of entertainment in the Middle East, a region that is seeing dynamic growth in the sector,” Karl Holmes, senior vice president and general manager of Disney+ EMEA, said in a statement.
Disney has already indicated it plans to grow in the Middle East.
Earlier this year, the company announced it would build a new theme park in Abu Dhabi in partnership with local firm Miral, which would provide the capital, construction resources and operational oversight. Under the terms of the agreement, Disney would oversee the park’s design, license its intellectual property and provide “operational expertise,” as well as collect a royalty.
Disney executives said at the time that the decision to build in the Middle East was a way to reach new audiences who were too far from the company’s current hubs in the U.S., Europe and Asia.