
The Role of Twitter Bots in The Big Lie

Updated: Aug 23, 2021

Isabel Conti, Aaraj Vij, Madeline Smith, Caroline Cerny


The Big Lie

On January 6th, 2021, domestic terrorists stormed the U.S. Capitol in an attempt to stop Congress from certifying the results of the 2020 presidential election. The events of January 6th are a chilling testament to the dangers of disinformation. Previous reports by DisinfoLab have discussed how disinformation that spread in late 2020 on popular social media platforms, including Twitter, Facebook, and YouTube, precipitated the Capitol invasion. Surrounding this event, Twitter bots played a notable role in perpetuating the “Big Lie” and in obfuscating the truth of what happened in the days following the attack.


In hindsight, most experts agree that the U.S. government should have seen the attack coming. In the months preceding the January 6th insurrection, a flood of disinformation designed to undermine Americans’ trust in the results of the presidential election was unleashed across major social media platforms. Online chatter indicated that many Americans were taking these lies seriously. Unfortunately, security leaders underestimated the power of disinformation to produce real-world violence.


Disinformation began to appear in Americans’ feeds weeks before the election took place. Consider this tweet, alleging issues with mail-in ballots, posted on October 26th.



A tweet from the account @realDonaldTrump reading “Big problems and discrepancies with Mail In Ballots all over the USA. Must have final total on November 3rd.”


Bots Rally Around #StopTheSteal 

Fake accounts, or bots, played an important role in increasing the reach of election-related disinformation. DisinfoLab tracked and analyzed data collected by Bot Sentinel to identify short- and long-term trends in bot activity that primarily concerned voter fraud and election security in the weeks leading up to and following the 2020 presidential election. In the collected data, #StopTheSteal was not an outlier: it was part of a larger pattern, growing in popularity alongside other phrases carrying similar messages.
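As a rough illustration of the kind of tally this tracking involves (not DisinfoLab’s or Bot Sentinel’s actual pipeline), the sketch below counts topic-related hashtags in bot tweets by day. The CSV layout, column names, and hashtag list are assumptions made for the example.

```python
# A minimal sketch, assuming bot tweets exported to a CSV with hypothetical
# "timestamp" and "text" columns; Bot Sentinel's real export format may differ.
import pandas as pd

# Illustrative set of election-fraud hashtags; not the full list tracked.
FRAUD_HASHTAGS = {"#stopthesteal", "#voterfraud", "#electionfraud"}

def daily_hashtag_counts(csv_path: str) -> pd.DataFrame:
    """Count how often each tracked hashtag appears in bot tweets per day."""
    tweets = pd.read_csv(csv_path, parse_dates=["timestamp"])
    tweets["hashtags"] = tweets["text"].str.lower().str.findall(r"#\w+")
    exploded = tweets.explode("hashtags").dropna(subset=["hashtags"])
    tracked = exploded[exploded["hashtags"].isin(FRAUD_HASHTAGS)]
    return (
        tracked.groupby([tracked["timestamp"].dt.date, "hashtags"])
        .size()
        .unstack(fill_value=0)  # rows: dates, columns: hashtags
    )
```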

In the wake of the November 3rd presidential election, bots undermined faith in the electoral process. Starting on election day and continuing into the following months, non-bot Twitter accounts, including @PhillyGOP, the Philadelphia Republican Party, began using #StopTheSteal.[1] The hashtag served as a catch-all for baseless election disinformation, from assertions that mail-in ballots were thrown in gutters to allegations that voting machines were being hacked. Bots quickly latched on to this new hashtag. On Wednesday, November 4th, #StopTheSteal was the fourth most tweeted hashtag by fake accounts between 6:00 and 7:00 AM, behind #MAGA, #TRUMP, and #TRUMP2020, respectively. In the weeks following the election, fake accounts continued to use #StopTheSteal in conjunction with other hashtags to spread disinformation.


Over the next few months, pro-Trump groups used social media to organize “Stop the Steal” events, like the rally that culminated in the Capitol invasion. As January 6th grew closer, the frequency of #StopTheSteal tweets by fake accounts increased significantly.


Figure 1: Bot Tweets Containing #StopTheSteal Hashtags Approaching January 6th, 2021

Figure 1 shows the usage of #StopTheSteal among bot accounts on Twitter during the week of January 1, 2021. The data include misspellings of the hashtag, such as #StopTheSteaI (using a capital ‘I’ in place of a lowercase ‘l’). January 6, indicated here with an asterisk, was the day of the insurrection. Data from Bot Sentinel.
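The note about misspellings points at a practical detail: look-alike characters let variants of a tracked hashtag evade naive string matching. Below is a minimal sketch of how such variants could be folded into the canonical hashtag; the substitution table is illustrative, not the actual set of misspellings in the Bot Sentinel data.

```python
# Illustrative homoglyph substitutions; the report does not enumerate every
# misspelling tracked, so this table is an assumption for the example.
HOMOGLYPHS = str.maketrans({"I": "l", "1": "l", "0": "o"})

def normalize_hashtag(tag: str) -> str:
    """Fold look-alike characters into their intended letters, then lowercase."""
    return tag.translate(HOMOGLYPHS).lower()

def is_stopthesteal_variant(tag: str) -> bool:
    """True if `tag` matches #StopTheSteal after homoglyph folding."""
    return normalize_hashtag(tag) == "#stopthesteal"

# The capital-"I" misspelling described in the figure caption is caught:
assert is_stopthesteal_variant("#StopTheSteaI")
assert is_stopthesteal_variant("#StopTheSteal")
```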


On January 6, bot tweets containing #StopTheSteal made up ten percent of all bot tweets with hashtags on the platform. #StopTheSteal was among the most popular hashtags tweeted by inauthentic accounts on every day of the week leading up to January 6th, spending at least twelve hours of each day as one of the top ten.
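For concreteness, here is a sketch of how the two figures above, the daily share of hashtagged bot tweets carrying #StopTheSteal and the hours per day it spent in the hourly top ten, could be computed. It builds on the hypothetical columns from the earlier snippets and is not Bot Sentinel’s or DisinfoLab’s actual method.

```python
import pandas as pd

def hours_in_top_ten(exploded: pd.DataFrame, tag: str = "#stopthesteal") -> pd.Series:
    """Per day, count the hours in which `tag` ranked among the ten most-tweeted hashtags.

    Expects one row per (timestamp, hashtag), as produced by the earlier sketch.
    """
    counts = (
        exploded.assign(hour=exploded["timestamp"].dt.floor("h"))
        .groupby(["hour", "hashtags"])
        .size()
    )
    # Sort descending, then keep the first ten rows of each hour's group.
    top10 = counts.sort_values(ascending=False).groupby(level="hour").head(10)
    hit_hours = top10.index.get_level_values("hour")[
        top10.index.get_level_values("hashtags") == tag
    ]
    return pd.Series(hit_hours.date).value_counts().sort_index()

def daily_share(tweets: pd.DataFrame, tag: str = "#stopthesteal") -> pd.Series:
    """Fraction of hashtagged bot tweets per day whose hashtags include `tag`."""
    hashtagged = tweets[tweets["hashtags"].str.len() > 0]
    contains_tag = hashtagged["hashtags"].apply(lambda tags: tag in tags)
    return contains_tag.groupby(hashtagged["timestamp"].dt.date).mean()
```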


Stop the Steal Bot Activity: Analysis 

The #StopTheSteal phenomenon demonstrates one of the two strategies inauthentic accounts use to spread disinformation. These accounts either amplify disinformation originating from authentic accounts, or they create and disseminate their own disinformation through networks of inauthentic accounts. The data indicate that the former is most common. Once a hashtag starts ‘trending’ on Twitter or is tweeted by a popular account, inauthentic accounts begin using and amplifying an associated message or hashtag.[2] In this instance, reporting indicates that #StopTheSteal was tweeted by authentic accounts thousands of times prior to November 4th, when the data first show bot usage of the hashtag.[3] The prevalence of #StopTheSteal before bot usage demonstrates that while hashtags may be created and shared by authentic accounts, inauthentic or bot accounts can quickly identify and spread harmful messages.


Although #StopTheSteal confirmed DisinfoLab’s earlier findings, it also constituted a meaningful departure from other trends in disinformation over the past year. Specifically, no other single piece of disinformation fomented the level of violence and anti-democratic sentiment that became associated with #StopTheSteal. To understand this exceptional case, we must look beyond the hashtag at the larger disinformation environment. #StopTheSteal was just one of a wave of hashtags and phrases related to voter fraud going viral between the November election and January 2021. This hashtag arose in the context of widespread doubt in the 2020 election and on the heels of a contentious presidential campaign characterized by disinformation.[4] All of these factors encouraged a violent reaction to #StopTheSteal.


The consequences of the insurrection were far-reaching, and there were significant changes in the data during the following weeks. Twitter responded to the event by deplatforming many accounts that played a role in advancing the QAnon conspiracy theory.[5] Within days of this mass ban, the content of the hashtags and phrases bots were using shifted dramatically. In the months prior to January 6th, U.S. politics had been a primary focus of the inauthentic activity registered by Bot Sentinel. After the attack, U.S. politics lost popularity among bot accounts, and the majority of hashtags and phrases in our collected data instead concerned Brazilian politics.


This dramatic change in the data offered DisinfoLab an opportunity to reflect on its data collection practices. While monitoring Twitter is one way to study disinformation, a single-platform focus may conceal some of the larger trends in disinformation. The visible content was largely not native to Twitter: it came from conspiracy theories and false articles that gained traction elsewhere on the internet before being discussed on Twitter. As DisinfoLab considers future developments, it anticipates shifting its primary focus away from Twitter and incorporating more of the sources from which disinformation and misinformation originate, in order to accurately capture the forces shaping American politics.


 

[1] Menn, Joseph, and Katie Paul. “Twitter, Facebook Suspend Some Accounts as U.S. Election Misinformation Spreads Online.” Reuters, November 3, 2020, sec. U.S. Legal News.

[2] A recent report by the Georgetown SFS Institute for the Study of Diplomacy suggests this strategy for spreading disinformation is increasingly prevalent. The phenomenon is discussed at length here: Somerville, Alistair, and Jonas Heering. “The Disinformation Shift: From Foreign to Domestic.” Georgetown Institute for the Study of Diplomacy, December 1, 2020.

[3] Menn, Joseph, and Katie Paul.

[4] Kennedy, Merrit. “Dominion Voting Systems Files $1.6 Billion Defamation Lawsuit Against Fox News.” NPR, March 26, 2021. https://www.npr.org/2021/03/26/981515184/dominion-voting-systems-files-1-6-billion-defamation-lawsuit-against-fox-news.

[5] Twitter Safety. “An Update Following the Riots in Washington, DC,” January 12, 2021.
