
How Online Hatred Toward Migrants Spurs Real-World Violence


On New Year’s Day, a Telegram user in Portugal posted an ominous message that the wait was over. This was the year to stop the “Population Replacement” — a conspiracy theory that immigrants of color are taking over.

In the days and weeks that followed, thousands more posts like it appeared on Telegram, X, YouTube and elsewhere — with increasingly racist and violent overtones. They called for migrants to leave, accusing them of committing crimes and stealing jobs.


Soon, a Portuguese extremist group organized a raucous protest in Lisbon. People chanted parts of the national anthem that calls on citizens to take up arms. More protests followed.


In early May, a group of men assaulted migrants in Porto in two attacks, beating several with clubs in their home. One escaped by leaping from a window. A video that circulated on local media afterward showed blood spattered throughout the apartment.

The violence that flared in Porto was neither spontaneous nor unexpected. It followed months of vitriol on social media that came not only from disgruntled Portuguese, but also from prominent far-right figures inside and outside the country.

The posts linked a global network of agitators who have seized on the influx of migrants seeking political asylum or economic opportunity to build seething followings online.


Ideas like this once festered on the fringes of the internet but are now increasingly breaking through to the mainstream on social media platforms like X and Telegram, which have done little to moderate the content. The ability to clip and share videos and to instantly translate foreign languages has also helped make it easier to spread hateful material across geographic and cultural divides.

These networks peddle a toxic brew of bigotry online that officials and researchers say is increasingly stoking violence offline — from riots in Britain to bloody attacks in Germany and arson in Ireland. Establishing a direct correlation between online language and events in the real world is difficult, but researchers and officials said the evidence of a link has become overwhelming.

“What is said ultimately will shape what people will do,” said Rita Guerra, a researcher at the Center for Psychological Research and Social Intervention in Lisbon who studies online hate in Portugal. “That is why this is very concerning, not just for Portugal and Europe, but worldwide.”

‘Fuel for a Fire’

In Britain, false and inflammatory posts by white supremacists and anti-Muslim agitators set off clashes across the country after the stabbing deaths of three children in Southport, a town outside Liverpool, on July 29.


Posts on TikTok, YouTube, X and Telegram circulated false or unsubstantiated claims that the attacker was a Syrian refugee, when in fact he was from Wales.

July 29

Not much info yet, but it will be a Muslim culprit followed by violence protests.⚡️

July 30

British patriots in Southport want justice for little girls who lost their lives. Patience is over.


Whoever riots gets heard, the British need hearing.

July 31 • 10:31 a.m. • The Netherlands

How many more white children have to die before we take action?

Aug. 1

This is how the police treat white people who are protesting over the murder of three little girls.


Note: Hashtags have been removed from some posts. All times are Greenwich Mean Time.

Since then, unrest has convulsed Britain. Protesters clashed with the police, lit cars on fire and ransacked businesses.


Source: PA Media, via Agence France-Presse


“They used Southport as fuel for a fire,” Lee Marsh, a Liverpool resident, said at a demonstration against racism on Wednesday. “The only thing that should have happened online,” he added, “was support and respect for those families of the girls killed.”

The incendiary language inundated social media platforms despite their own policies prohibiting it, according to the Institute for Strategic Dialogue, a nonprofit research organization in London that has tracked the fallout of the stabbing. The companies, the organization said, lack “an understanding of the real-world impacts of misinformation” that appears on their platforms.

Elon Musk, the owner of X, himself weighed in on the events, declaring last weekend that “civil war is inevitable” in Britain.

Since Mr. Musk bought the platform, then known as Twitter, in 2022, the company has reinstated far-right figures who had previously been banned, leading to a sharp increase in hateful content on the platform. Mr. Musk has also used it to rail against governments he says have failed to bring immigration under control.

Representatives from Meta, X and TikTok did not respond to requests for comment. A spokesman for Telegram said “calls to violence are explicitly forbidden” by its terms of service.


YouTube, when contacted by The New York Times about this article, suspended the account of Grupo 1143, the extremist group organizing protests in Portugal. “Any content that promotes violence or encourages hatred of people based on attributes like ethnicity or immigration status is not allowed on our platform,” the company said, “and we’re committed to removing this content as quickly as possible.”

Immersed in Rabid Content

Racism and xenophobia have haunted the internet since the earliest dial-up connections, but they have, by most accounts, become pervasive in recent years.

Online influencers have weaponized the issue of immigration with disinformation and racist conspiracy theories, including one that predicts a “great replacement” of white people by nefarious global forces.

“Europe has been invaded by the world’s scum, without a single bullet being fired,” Tommy Robinson, one of Britain’s most notorious activists, wrote on X days before the attack in Porto in May. The post included a video with a voice-over in Portuguese and subtitles in French.


Right-wing political parties in Europe have surged with the use of similar anti-immigrant language. In the United States, Donald J. Trump has made the influx of refugees and migrants a central issue in this year’s presidential election.

Russia, too, has used immigration as a cudgel in its propaganda in Europe, amplifying incidents and protests, including the recent unrest in Britain, through its state media and covert bot networks.

European governments have stepped up warnings about the threat of extremism online, but they are struggling to find effective ways to respond while respecting freedoms of speech and assembly.

In the Netherlands, the National Coordinator for Counterterrorism and Security warned last year that people “can immerse themselves in rabid content for years, until an isolated incident incites them to concrete violence.”

After the recent violence in Britain, the government urged the public to “think before you post,” warning that hateful messages could amount to a crime. On Friday, a man from Leeds was sentenced to 20 months for posts on Facebook calling for attacks on a hotel housing asylum seekers. Among the hundreds of people arrested was a 55-year-old woman from near Chester, detained over a social media post said to “stir up racial hatred.”


“The internet has evolved from a passive cheering section to the active shaping and fomenting of ethnic and sectarian conflict,” said Joel Finkelstein, a founder of the Network Contagion Research Institute in New Jersey, which studies threats online. “This new reality poses a profound challenge to democracies, which find themselves ill-equipped to manage the rapid dissemination of these dangerous ideas.”

A Front Line

In 2023, researchers from the Network Contagion Research Institute and two universities documented a hashtag going viral across Ireland that declared the country was full. It was used to promote demonstrations in cities across the country against efforts to build housing for migrants.

One of the researchers, Tony Craig of Staffordshire University in England, warned that the campaign would inevitably lead to violence. “It’s going to get worse,” he said last summer.

He was prescient.


In November, a homeless immigrant from Algeria stabbed three children and their guardian in Dublin. Within hours, the internet churned with calls for protest — and retaliation — and soon hundreds rioted on Parnell Square in the city’s center. It was the worst public unrest in Ireland in years.

After the riots, the government vowed to toughen the law against incitement. “It’s not up-to-date for the social media age,” Leo Varadkar, the prime minister then, said.

The challenge is that the incitement also comes from outside the country’s borders. Only 14 percent of posts on X about the stabbings and resulting outcry originated in Ireland, according to an analysis by Next Dim, a company that tracks activity online.

Since then, accounts online have continued to foment anger. This year, agitators circulated maps with the locations of migrant housing, which have become targets. Outside one center in June, protesters slit the throats of three pigs as a threat to Muslims believed to be living there.

Last month, a former paint factory being converted to housing for asylum seekers in Coolock, near Dublin, became a new flashpoint.

March 18

All of Coolock needs to come out and stop this and protect our children.

May 22

🔥🇮🇪🔥🇮🇪🔥🇮🇪🔥🇮🇪🔥🇮🇪🔥🇮🇪 Lets Give Them Hell

July 15

Ireland burns as they continue to fiddle about with Hate Speech legislation.


Note: Hashtags have been removed from some posts. All times are Greenwich Mean Time. • Source: StringersHub, via Reuters (Video)

As anger about the project spread online, arsonists twice attacked the building. On July 19, hundreds gathered nearby, leading to a violent confrontation with the police.

Driving the Conversation From Afar


A leading figure in the growing chorus of bigotry online has been Mr. Robinson, the notorious activist whose real name is Stephen Yaxley-Lennon.

Mr. Robinson has been known for his ardent anti-immigration views for more than a decade, but by 2019 he faced bans or other restrictions on Facebook, Instagram, X and YouTube for spreading hateful content and struggled to find much of an audience online.

Then, last November, X reinstated Mr. Robinson. (“I’m back!” his profile declares). He now has more than 960,000 followers on the platform.

Mr. Robinson’s prolific posts are widely shared across like-minded accounts on other platforms and in other countries.

An example of his reach was clear in March, when he reacted to news of a fire at a migrant housing center in Berlin. He posted a brief video clip on Telegram claiming that migrants had deliberately set fire to the center, located in the city’s old Tegel Airport, “in hope of securing better” accommodations.


His followers replied with a torrent of hateful and racist comments, according to an analysis by the SITE Intelligence Group. Though the cause of the fire remained unclear, the insinuation that it was intentional caromed from Britain to the Netherlands and Portugal and back to Germany.

March 12

We’ve seen this regularly across Europe, burning the facilities provided to them by the taxpayers in hope of securing better.


Note: All times are Central European Summer Time.


Joe Düker, a researcher at the Center for Monitoring, Analysis and Strategy, an organization in Germany that studies extremism, said Mr. Robinson’s post helped drive the narrative in Germany, where the authorities reported 31 violent crimes against migrants in the first three months of this year. An extremist group active in Austria and Germany, Generation Identity Europa, forwarded his post on Telegram to its own followers.

Asked whether he believes his social media posts contribute to violence, Mr. Robinson responded: “I believe the teachings in the Koran contribute to violence. Shall we ban it?”

Other figures have similar international reach, including Eva Vlaardingerbroek in the Netherlands, Martin Sellner in Austria and Francesca Totolo in Italy. They often amplify one another’s posts, forming a global echo chamber of hatred toward migrants.

“There isn’t enough of an appreciation of how transnational these networks are,” said Wendy Via, a founder of the Global Project Against Hate and Extremism, an organization in the United States that tracks the spread of racism.

‘Whoever riots gets heard’


In the initial hours after the stabbing attack in England, when little information was released by the authorities, agitators quickly stepped into the void.

July 29

Not much info yet, but it will be a Muslim culprit followed by violence protests

The attacker is alleged to be a Muslim immigrant

July 30

Attacker confirmed to be Muslim. Age 17. Came to UK by boat last year.


Note: Identifying information has been removed. All times are Greenwich Mean Time.

By the time officials said that the suspect was a 17-year-old British citizen from Wales, it was too late. Angry calls for protests had swept TikTok, Telegram and X, calling people into the streets. “Whoever riots gets heard,” Mr. Robinson declared. “The British need hearing.”


Source: PA Media, via Agence France-Presse

One Telegram channel created to discuss the stabbing shared the addresses of 30 locations to target for protest. The platform blocked the channel, but only after it had swelled to more than 13,000 members.

“They won’t stop coming,” one member of the group said, “until you tell them.”


L.A. wildfire victims would get mortgage relief under new bill


Victims of last year’s wildfires in Los Angeles County who were unable to get mortgage relief under a state law enacted last year would get another chance with a stronger bill introduced Wednesday.

The legislation, AB 1847, by Assemblymember John Harabedian (D-Pasadena), would triple to 36 months the 12 months of mortgage relief offered by last year’s AB 238, while allowing borrowers to repay the money through a deferral that extends the mortgage.

Also authored by Harabedian, AB 238 prohibited mortgage lenders and servicers from requiring borrowers to pay back any forbearance in a lump sum, but it otherwise did not specify repayment terms. It also banned late fees, foreclosures and negative reports to credit bureaus.

Borrowers told The Times that they had difficulty getting any relief, and that when they did, they were told that if they didn’t want to pay it back in a lump sum, they would have to agree to a loan modification that could raise their interest rate.


As with AB 238, the relief can be obtained only if the underlying mortgage contract allows it.

However, Harabedian said that most of the contracts and guidelines of Fannie Mae and Freddie Mac — the government-sponsored organizations that hold or guarantee the majority of U.S. mortgages — do not bar loan deferrals.

“I think some people were being offered forbearance that, frankly, didn’t comply with 238 when it should have,” he said. “They weren’t given any sort of election or flexibility on how they would repay so we’re trying to perfect it now.”

Harabedian said most of the problems borrowers are facing appear to be due to companies that service mortgages on behalf of lenders, while large institutions such as Bank of America have been more generous.

The Charlotte, N.C., financial institution in December started offering 36 months of mortgage relief to its borrowers without a change to the interest rate.


Another key AB 238 amendment is the extension of relief from 12 to 36 months, which borrowers seek in 90-day increments. The deadline for applying for relief would be extended to Jan. 7, 2029.

Harabedian said 36 months of relief is necessary because it will take many homeowners years to repair and rebuild their homes after the fires in Altadena, Pacific Palisades and nearby communities, which killed at least 31 people and damaged or destroyed more than 18,000 homes.

“This extension tries to align with the full rebuild process that survivors are going to endure, and make sure that from the start of it till the end of it, they’re not under financial distress that would cause them to abandon their communities,” he said.

Len Kendall, who lost his home in Pacific Palisades, said that while he welcomed the legislation, he is still uncertain how it might affect him, including his terms of repayment.

“There’s going to have to be follow-up to make sure these servicers and lenders actually abide by the laws, because there’s no one really holding them accountable at the moment,” he said.


Last month, Gov. Gavin Newsom said in a press release that the Department of Financial Protection and Innovation has received 233 mortgage forbearance complaints, with 92% resolved in the consumer’s favor.

However, Kendall said that the agency closed his complaint even though his mortgage servicer had requested a lump sum and his repayment plan remains up in the air.

The agency told him in a letter reviewed by The Times that it “cannot intervene on behalf of individual consumers in any particular case” and that it “brings consumer protection actions when we find patterns of deception, misrepresentation or unfair business practices of statewide interest.”

A spokesperson for the agency said it worked with Kendall to ensure he received “appropriate” forbearance relief and considers the matter resolved.

The spokesperson added that the department is monitoring compliance with AB 238 but so far has not announced any enforcement actions against lenders or servicers.


Harabedian introduced a second bill Wednesday that would provide for mortgage forbearance statewide for homeowners whose residences are uninhabitable after a state of emergency declared by the governor or federal government.

The California Emergency Mortgage Relief Act, AB 1842, requires mortgage servicers to file a monthly report with the DFPI about the number of forbearance requests they receive during a declared emergency and how many were approved and denied, including the reason for denial.

The bill also allows a borrower to bring a civil action against a mortgage servicer for violations of the law.

The AB 238 amendments, if signed into law, would take effect immediately.

Harabedian’s office worked with the California Bankers Assn. and the California Mortgage Bankers Assn. in developing AB 238. The lawmaker said he is not sure whether they will support the extension of mortgage relief.


“We look forward to reviewing it with our members and working constructively with stakeholders as we have consistently done. The banking industry proactively provided relief to wildfire victims, and this effort pre-dated legislative action,” said Yvette Ernst, spokesperson for the California Bankers Assn.

The California Mortgage Bankers Assn. said it also was reviewing the legislation.


Instagram boss defends app from witness stand in trial over alleged harms to kids


A Los Angeles County Superior Court judge threatened to throw grieving mothers out of court Wednesday if they couldn’t stop crying during testimony from Instagram boss Adam Mosseri, who took the stand to defend his company’s app against allegations the product is harmful to children.

The social media addiction case is considered a bellwether that could shape the fate of thousands of other pending lawsuits, transforming the legal landscape for some of the world’s most powerful companies.

For many in the gallery, it was a chance to sit face to face with a man they hold responsible for their children’s deaths. Bereaved parents waited outside the Spring Street courthouse overnight in the rain for a place in the gallery, some breaking into sobs as he spoke.

“I can’t do this,” wept mom Lori Schott, whose daughter Annalee died by suicide after a years-long struggle with what she described as social media addiction. “I’m shaking, I couldn’t stop. It just destroyed her.”


Judge Carolyn B. Kuhl warned she would boot the moms if they could not contain their weeping.

“If there’s a violation of that order from me, I will remove you from the court,” the judge said.

Mosseri, by contrast, appeared cool and collected on the stand, wearing thick wire-framed glasses and a navy suit.

“It’s not good for the company over the long run to make decisions that profit us but are poor for people’s well-being,” he said during a combative exchange with attorney Mark Lanier, who represents the young woman at the center of the closely watched trial. “That’s eventually going to be very problematic for the company.”

Lanier’s client, a Chico, Calif., woman referred to as Kaley G.M., said she became addicted to social media as a grade-schooler, and charges that YouTube and Instagram were designed to hook young users and keep them trapped on the platforms. Two other defendants, TikTok and Snap, settled out of court.


Attorneys for the tech titans hit back, saying in opening statements Monday and Tuesday that Kaley’s troubled home life and her fractious relationship with her family were to blame for her suffering, not the platforms.

They also sought to discredit social media addiction as a concept, while trying to cast doubt on Kaley’s claim to the diagnosis.

“I think it’s important to differentiate between clinical addiction and problematic use,” Mosseri said Wednesday. “Sometimes we use addiction to refer to things more casually.”

On Wednesday, Meta attorney Phyllis Jones asked Mosseri directly whether Instagram targeted teenagers for profit.

“We make less money from teens than from any other demographic on the app,” Mosseri said. “We make much more the older you get.”


Meta Chief Executive Mark Zuckerberg is expected to take the witness stand next week.

Kaley’s suit is being tried as a test case for a much larger group of actions in California state court. A similar, and similarly massive, set of federal suits is proceeding in parallel through California’s Northern District.

Mosseri’s appearance in Los Angeles on Wednesday follows a stinging legal blow in San Francisco earlier this week, where U.S. District Judge Yvonne Gonzalez Rogers blocked a plea by the tech giants to avoid their first trial there.

That trial — another bellwether involving a suit by Breathitt County School District in Kentucky — is now set to begin in San Francisco in June, after the judge denied companies’ motion for summary judgment. Defendants in both sets of suits have said the actions should be thrown out under a powerful 1996 law called Section 230 that shields internet publishers from liability for user content.

On Wednesday morning, Lanier hammered Mosseri over the controversial beauty filters that debuted on Instagram’s Stories function in 2019, showing an email chain in which Mosseri appeared to resist a ban on filters that mimicked plastic surgery.


Such filters have been linked by some research to the deepening mental health crisis in girls and young women, whose suicide rates have surged in recent years.

They have also been shown to drive eating disorders — by far the deadliest psychiatric illnesses — in teens. Those disorders continue to overwhelm providers years after other pandemic-era mental health crises have ebbed.

Earlier research linking social media and harms to young women was referenced in the November 2019 email chain reviewed in court Wednesday, in which one Instagram executive noted the filters “live on Instagram” and were “primarily used by teen girls.”

“There’s always a trade-off between safety and speech,” Mosseri said of the filters. “We’re trying to be as safe as possible but also censor as little as possible.”

The company briefly banned effects that “cannot be mimicked by makeup” and then walked the decision back amid fears Instagram would lose market share to less scrupulous actors.


“Mark [Zuckerberg] decided that the right balance was to focus on not allowing filters that promoted plastic surgery, but not those that did not,” Mosseri said. “I was never worried about this affecting our stock price.”

For Schott, seeing those decisions unfold almost a year to the day before her daughter’s death was too much to bear.

“They made that decision and they made that decision and they made that decision again — and my daughter’s dead in 2020,” she said. “How much more could that match? Timeline, days, decisions? Bam, she was dead.”



Meta, TikTok and others agree to teen safety ratings


Meta, TikTok and Snap will be rated on their teen safety efforts amid rising concern about whether the world’s largest social media platforms are doing enough to protect the mental health of young people.

The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources.

TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

“These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments,” Antigone Davis, vice president and global head of safety at Meta, said in a statement.

TikTok and Snap executives also expressed their commitment to online safety.


Parents, lawmakers and advocacy groups have for years questioned whether online platforms are protecting the safety of billions of users. Despite having rules about what content users aren’t allowed to post, the platforms have grappled with moderating harmful content about self-harm, eating disorders, drugs and more.

Meanwhile, technology continues to play a bigger role in people’s lives.

The rise of artificial intelligence-powered chatbots has heightened mental health concerns as some teens are turning to technology for companionship. Companies have also faced a flurry of lawsuits over online safety.

This week, a closely watched trial kicked off in Los Angeles over whether tech companies such as Instagram and YouTube can be held liable for allegedly promoting a harmful product and addicting users to their platforms.

TikTok and Snap, the parent company of disappearing-messages app Snapchat, settled for undisclosed sums to avoid the trial.


In opening statements, one of the lawyers representing the California woman who alleges she became addicted to YouTube and Instagram as a child said the products were designed to be addictive.

Tech companies have denied the allegations made in the lawsuit and say internal documents are being twisted to portray them as villainous when other factors, such as childhood trauma, contributed to the mental health issues of some of their users.

Meta Chief Executive Mark Zuckerberg is expected to testify at the Los Angeles trial. Another trial over a lawsuit that alleges Meta failed to protect children from sexual exploitation and violated New Mexico’s consumer protection laws also kicked off this week.

The new ratings were announced Tuesday, which was also Safer Internet Day, a global campaign that promotes using technology responsibly, especially among young people. Companies such as Google marked the day by outlining some of the work they’ve done around safety, including parental controls that set time limits for scrolling through short videos.

The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they’re not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven’t been completed yet.


“By creating a shared framework for accountability, S.O.S. helps move us toward online spaces that better support mental health and well-being,” Kenneth Cole, the fashion designer who founded the Mental Health Coalition, said in a statement.

A website for S.O.S. states that technology companies didn’t influence the development of the new standards and they aren’t funding the project. The Mental Health Coalition, though, has teamed up with Meta in the past on other initiatives. Meta and Google are also listed as “creative partners” on the coalition’s website.

The coalition, which is based in New York, didn’t immediately respond to an email asking about its funding.

Companies already publish their online rules and data on content moderation. Those interested in participating in the project voluntarily hand over documents on policies, tools and product features.

