The researchers collected 200 million tweets discussing the coronavirus since January and found that 82% of the top 50 most influential retweeters are bots, as are 62% of the top 1,000 retweeters.
Accounts that are possibly humans with bot assistants generated 66% of the tweets, and accounts that are definitely bots generated 34%, according to the research.
“We’re seeing up to two times as much bot activity as we’d predicted based on previous natural disasters, crises and elections,” said Kathleen Carley, a computer science professor at Carnegie Mellon.
While there is no universally shared definition of a bot, and not all bots are considered bad, a bot is generally viewed as a software program that controls Twitter accounts and automates tasks like tweeting or retweeting. In theory, one person can control thousands of accounts this way.
Carley’s research team used multiple methods to determine whether an account is a bot. Artificial intelligence processed account information and examined factors such as follower counts, tweeting frequency and an account’s mentions network.
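Feature-based scoring of this kind can be sketched roughly as follows. The features echo the ones the article mentions (followers, tweeting frequency, mentions network), but the field names, thresholds and weights here are hypothetical illustrations, not the Carnegie Mellon team's actual model:

```python
# Hypothetical feature-based bot scoring; thresholds and weights are
# illustrative only, not the researchers' real classifier.

def bot_score(account: dict) -> float:
    """Return a rough 0..1 score; higher suggests more bot-like behavior."""
    score = 0.0
    # Very high tweeting frequency is a common automation signal.
    if account.get("tweets_per_day", 0) > 100:
        score += 0.4
    # Few followers relative to accounts followed can indicate automation.
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 0 and followers / following < 0.01:
        score += 0.3
    # Mentioning many distinct accounts (a dense mentions network) adds weight.
    if account.get("distinct_mentions_per_day", 0) > 50:
        score += 0.3
    return min(score, 1.0)

suspicious = bot_score({"tweets_per_day": 250, "followers": 12,
                        "following": 4000, "distinct_mentions_per_day": 80})
print(suspicious)  # 1.0
```

In practice, a real detector would learn these weights from labeled accounts rather than hard-coding them, but the idea of combining account-level signals into a single score is the same.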
Carley said the surge in bot accounts and activity could be attributed to more people being at home with the time to create their own bots. She said there has also been an increase in firms being hired to run bot accounts.
“Because it’s global, it’s being used by various countries and interest groups as an opportunity to meet political agendas,” Carley said.
“Conspiracy theories increase polarization in groups — it’s what many misinformation campaigns aim to do,” Carley added. “People have real concerns about health and the economy, and people are preying on that to create divides.”
Carley said spreading conspiracy theories can lead to more extreme behavior with “real-world consequences” like affecting voting behavior and “hostility toward ethnic groups.”
“We’re prioritizing the removal of COVID-19 content when it has a call to action that could potentially cause harm,” according to a statement from Twitter. “As we’ve said previously, we will not take enforcement action on every Tweet that contains incomplete or disputed information about COVID-19.”
Twitter introduced these new policies on March 18, and the company said it has removed more than 2,600 tweets. The company said its automated tools have also challenged more than 4.3 million accounts that were targeting discussions around coronavirus with “spammy or manipulative behaviors.”
“We permanently suspend millions of accounts every month that are automated or spammy, and we do this before they ever reach an eyeball in a Twitter Timeline or Search,” wrote Nick Pickles and Yoel Roth, the company’s director of global public policy strategy and development and head of site integrity, respectively, in a blog post this week.
The Carnegie Mellon researchers said users should closely examine Twitter accounts for signs of a bot, like shared links with subtle typos, bursts of tweets issued in quick succession, or a username and profile image that don’t appear to match.
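Two of those spot-checks lend themselves to simple automation. The sketch below is illustrative only: the trusted-domain list, similarity cutoff and rate threshold are assumptions, not a tool the researchers describe:

```python
# Hedged sketches of two bot signals from the article: typo'd links and
# rapid-fire tweeting. All constants here are hypothetical.
import difflib

KNOWN_DOMAINS = ["cdc.gov", "who.int", "nytimes.com"]  # example trusted domains

def looks_like_typo_domain(domain: str) -> bool:
    """Flag domains that are near misses of well-known ones (e.g. 'cdcc.gov')."""
    for known in KNOWN_DOMAINS:
        ratio = difflib.SequenceMatcher(None, domain, known).ratio()
        if 0.8 <= ratio < 1.0:  # close to a trusted domain, but not exact
            return True
    return False

def tweets_too_fast(timestamps: list, max_per_minute: int = 10) -> bool:
    """True if more than max_per_minute tweets fall in any 60-second window."""
    timestamps = sorted(timestamps)
    for start in timestamps:
        window = [t for t in timestamps if start <= t < start + 60]
        if len(window) > max_per_minute:
            return True
    return False

print(looks_like_typo_domain("cdcc.gov"))  # True
print(tweets_too_fast([0, 1, 2, 3]))       # False
```

The third sign the researchers mention, a mismatch between username and profile image, is harder to automate and is exactly the kind of check a human reader can do at a glance.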
“Even if someone appears to be from your community, if you don’t know them personally, take a closer look, and always go to authoritative or trusted sources for information,” Carley said. “Just be very vigilant.”