Twitter is a free social networking “microblogging” service that gives people access to openly available content in the form of short posts called tweets. Tweets can be up to 280 characters long (originally 140) and can include links to relevant websites and resources. Users can broadcast tweets and follow other users’ tweets from multiple devices and platforms. Twitter also allows ideas to be categorized through hashtags (#), making them easier to search (1). Today, Twitter is one of the most widely used social media channels in the world, comprising more than 500 million users who generate, on average, around 200 billion tweets per year (2).
Violent extremism has gained momentum in recent years, largely due to a digital transition in which Twitter’s role, among other social networks, has been pivotal. Its user-friendly, open nature makes it a perfect tool for the promotion of radical principles. Young people around the world, especially in developing countries, still face a myriad of challenges: the rising cost of living, reduced access to job opportunities, and poor educational systems that do not equip them with the competences and resources the national and international labor markets demand. This makes them one of the social strata most susceptible to the strong sense of community that extremist causes offer (3).
Extremist groups have become veritable propaganda machines. Their distorted narratives are often issued on channels such as Twitter through statements and press releases that promote radical principles among unemployed, detached, and isolated young people. An estimated 26% of internet users aged 18-29 have a Twitter account, making it easy for these groups to turn to the platform for recruitment (4). Twitter is not used only for radicalization and recruitment, however: it has also been widely used to arrange, and claim responsibility for, a plethora of violent initiatives, such as the bombing attacks in several European cities in recent years (4).
One of the clearest examples of Twitter exploitation by extremist groups is the emergence of the Islamic State of Iraq and Syria (ISIS). The self-proclaimed Islamic State relied on the weight of this channel to establish itself and solidify its position as one of the most violent fundamentalist Islamist causes. Taking advantage of Twitter’s powerful sharing features, ISIS has recruited at least 30,000 foreign fighters, from more than 100 countries, to the battlefields of Syria and Iraq, while also establishing new strongholds in places like Libya, Afghanistan, Nigeria, and Bangladesh (5). It is also worth noting that ISIS used Twitter to warn the United States to withdraw its troops from Iraq: on August 19th, 2014, ISIS representatives broadcast a video on Twitter showing the beheading of the American photojournalist James Foley (5).
Similarly, far-right extremist groups are profiting from Twitter’s features to promote hate speech and organize targeted initiatives against minorities. One example is the America First Political Action Conference (AFPAC), where young far-right extremists echo white nationalist rhetoric (6). Its founder, Nicholas Fuentes, has been banned from nearly all major social media platforms, including YouTube, Twitch, and TikTok, due to his intimidating behaviour, yet he is still active on Twitter. This leads directly to a new question: why do Twitter’s anti-extremist guidelines allow right-wingers more freedom?
Twitter’s most recent policy against violent organizations dates from October 2020 and states that there is no place for “violent organizations, including terrorist organizations, violent extremist groups, or individuals who affiliate with and promote their illicit activities” (7). Nevertheless, as the previous question suggests, these organizations are still treated differently according to their “cradle”. This bias is mostly driven by national governments’ priorities: they decide, without any standardized criteria, which groups should or should not be considered terrorists. As a result, social media companies such as Twitter end up serving political interests instead of truly preventing violence on their platforms (8). A telling example of this political clout is the Haqqani Network, an Afghan Taliban branch operating in Pakistan and Afghanistan with the objective of fighting US-led NATO forces and the Islamic Republic of Afghanistan. This guerrilla group was kept off the Foreign Terrorist Organization (FTO) list until 2012 despite its close ties with Al-Qaeda (8). Why? The Haqqani Network is deeply rooted in Pakistan’s state intelligence agency, so designating it a terrorist organization would have amounted to a political offense against Pakistan, effectively branding it a state sponsor of terrorism. This would have been highly damaging, since the United States was cooperating with the Pakistani government on counterterrorism and counterinsurgency initiatives (8).
Once this political barrier is overcome, efforts should go into developing strategies to identify and ban online radical content, limiting the reach and spread of the extremist narrative. These strategies should involve two major players: society and the social media companies. The first contribution should come from citizens themselves: if they are encouraged to report, in a transparent and straightforward way, any extremist narrative they encounter, social media companies will gain major allies in this long fight. The other effort should come from the companies themselves. Recent literature shows that radical users tend to exhibit distinguishable textual, psychological, and behavioural characteristics in the content they produce and broadcast (9). In the midst of a data-driven digital transition, this is a huge opportunity to apply dedicated methodologies, such as machine learning, to detect extremist content quickly and precisely (9).
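To make the machine-learning idea concrete: the approach in (9) combines many textual, psychological, and behavioural signals, but its core step, scoring a piece of text for membership in a “flagged” versus “benign” class, can be illustrated with a toy Naive Bayes classifier. Everything below (the sample texts, the labels, the word-level features) is invented for illustration and is far simpler than the models the cited study evaluates.

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: iterable of (text, label) pairs.
    Returns per-label word counts and per-label document counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with Laplace smoothing: pick the label with the
    highest log prior plus log likelihood of the words in `text`."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy training data -- purely illustrative, not real tweets.
samples = [
    ("join the attack now", "flagged"),
    ("glory to the attack", "flagged"),
    ("nice weather today", "benign"),
    ("see you at lunch today", "benign"),
]
wc, lc = train(samples)
print(classify("plan the attack", wc, lc))   # flagged
print(classify("lunch today", wc, lc))       # benign
```

In practice, a production system would replace these word counts with richer features (the behavioural and psychological signals the study describes) and far larger labelled datasets, but the ranking-by-probability mechanism stays the same.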
In 2018, Twitter announced that over 1.2 million accounts had been suspended for terrorist content, but there is still a long way to go (10). When we finally manage to cut through the wide political net operating “backstage” and empower citizens and companies to develop targeted strategies that identify and immediately ban these extremist initiatives, we will be ready to fight this issue with the right weapons.
Rafael Luis Pereira Santos
(1) UKRI – Economic and Social Research Council, “What is Twitter and why should you use it?”.
(2) Statista Research Department (2021), “Twitter – Statistics & Facts”, Statista.
(3) USAID.GOV (2020), “Strengthening youth engagement – Jordan”.
(4) International Association of Chiefs of Police (2014), “Twitter and Violent Extremism”, Awareness Brief. Washington, DC: Office of Community Oriented Policing Services.
(5) Brooking E. and Singer P. (2016), “War goes viral – How social media is being weaponized across the world”, The Atlantic.
(6) Steakin W. (2021), “How the far-right group behind AFPAC is using Twitter to grow its movement”, ABC News.
(7) Twitter (2020), “Violent organizations policy”.
(8) Meier A. (2019), “Why do Facebook and Twitter’s anti-extremist guidelines allow right-wingers more freedom than Islamists?”, The Washington Post.
(9) Nouh M., Nurse J.R.C. and Goldsmith M. (2019), “Understanding the Radical Mind: Identifying Signals to Detect Extremist Content on Twitter”, IEEE International Conference on Intelligence and Security Informatics (ISI), pp. 98-103, doi: 10.1109/ISI.2019.8823548.
(10) Twitter (2018), “Expanding and building #TwitterTransparency”.