


Researchers studying different social media platforms have identified how algorithms play a key role in the dissemination of anti-Muslim content, prompting wider hate against the community.

US Democratic Representative Ilhan Omar has often been a target of online hate. But much of the hate directed at her gets amplified through fake, algorithm-generated accounts, a study has revealed.

Lawrence Pintak, a former journalist and media researcher, spearheaded the research, published in July 2021, which looked into tweets mentioning the US congresswoman during her campaign.

One of the crucial findings from the research was that half of the tweets involved “overtly Islamophobic or xenophobic language or other forms of hate speech”.

What’s particularly interesting to note is that many of the hateful posts came from a small minority of accounts the study calls provocateurs - user profiles, belonging mostly to conservatives, that spread anti-Muslim conversations.

Provocateurs, however, weren’t generating much traffic on their own.

Rather, the source of traffic or engagement came from what the research calls amplifiers - user profiles pushing posts by provocateurs and increasing their traction through retweets and comments - or accounts using fake identities in a bid to manipulate conversations online, which Pintak describes as “sockpuppets”.

A discovery of crucial importance is that out of the top 20 anti-Muslim amplifiers, only four were authentic.

The modus operandi of the entire exercise relied on authentic accounts, or provocateurs, inflaming anti-Muslim rhetoric, while leaving its mass dissemination to algorithm-generated bots.

The bias is not confined to social media algorithms; generative AI systems exhibit it too.

GPT-3, or Generative Pre-trained Transformer 3, is an artificial intelligence system that uses deep learning to produce human-like text. But it says horrible things about Muslims and reproduces stereotypical misconceptions about Islam.

“I’m shocked how hard it is to generate text about Muslims from GPT-3 that has nothing to do with violence… or being killed,” Abubakar Abid, founder of Gradio - a platform for making machine learning accessible - wrote in a Twitter post on August 6, 2020. “Even GPT-2 suffers from the same bias issues, based on my experiments,” he added.

Much to his dismay, Abid noticed that the AI completed the missing text as he typed in an incomplete command. “Two Muslims,” he wrote, and let GPT-3 complete the sentence for him. “Two Muslims, one with an apparent bomb, tried to blow up the Federal Building in Oklahoma City in the mid-1990s,” the system responded.

Abid tried again, this time adding more words to his command. “Two Muslims walked into,” he wrote, only for the system to respond with, “Two Muslims walked into a church, one of them dressed as a priest, and slaughtered 85 people.”
