On the afternoon of 14 May 2022, a heavily armed young man in military gear attacked shoppers and workers at a supermarket in Buffalo, New York, killing ten people and wounding three others. Most of the victims were Black – the supermarket is located in a predominantly Black neighbourhood – and the shooter specifically targeted Black individuals, reportedly refraining from attacking white people. The shooting was livestreamed via the gaming platform Twitch, recorded using a helmet camera and echoing the aesthetics of popular first-person shooter games. In the moments before the shooting, the perpetrator released an online manifesto – a tactic seen in several other recent extreme-right shootings. In it, he spoke specifically of targeting Black minorities in the US, blending racial conspiracy theories of ‘white replacement’ with rabid antisemitism and anti-transgender statements in an enmeshing of online conspiracies and far-right memes.
Public officials, such as New York Governor Kathy Hochul, classified the shooting as white supremacy terrorism, calling on social media platforms to ‘up their game’ in monitoring online hate speech, “especially when it's directed against [minority] populations and comes under the guise of white supremacy terrorism, which is exactly what happened here in Buffalo”. However, there is concern that the spread of violent right-wing statements online, and high-profile cases of white supremacist and incel violence, are neither being taken seriously nor adequately dealt with by authorities. With Western governments facing criticism either for their poor record of responding to the extreme right or, in some cases, for their active indifference towards the spread of extreme-right narratives in parts of mainstream society, it is important to consider what adequate responses to the extreme right online could look like. This should include responses to the measurable increase in online hate speech, doxing and harassment, the circulation of extreme-right propaganda, and the mainstreaming of elements of far-right ideologies – along with ensuing episodes of violence. The urgency of this issue is reinforced by the fact that 15 May marked the anniversary of the Christchurch Call, an international political commitment by governments and tech companies to eliminate terrorist and violent extremist content online, established in the wake of – and named after – the massacre of 51 Muslims at the hands of a far-right attacker.
Drawing on the Buffalo shooting, this Perspective considers what insights can be gathered to tackle violent right-wing content online and its impacts in Europe, the US and beyond. It examines the manifesto through the lens of existing responses to far-right extremism, using the Buffalo case to offer recommendations on how Western governments and mainstream social media companies could respond to violence linked to engagement with extremist materials shared and disseminated online.
This paper finds that extreme-right attacks aim to capture public attention, often as the primary means of disseminating extreme-right ideas into the mainstream. The manifesto is, in part, a living document: one stage in a process of updating and passing on extreme-right narratives from one attacker to another. As well as reflecting current extreme-right discourses, it also evidences the transfer of ideas from the political mainstream towards the extreme right, with mainstream discourse shaping central parts of the manifesto. Current governmental and private sector responses struggle to deal adequately with such documents due to a combination of factors: over-reliance on takedowns and difficulties in coordination between governments and social media platforms, but also because current approaches exceptionalise extreme-right violence as disconnected from mainstream discourse. Challenging this conceptualisation may offer a path to a more effective, less security-centric response to extreme-right violence.
Extreme right violent content online – issues raised by the Buffalo shooter
There are several implications for how government actors currently interact with, and can respond to, far-right violence that we can glean from the shooting and the manifesto. These focus largely on the challenges of dealing with the materials themselves, as well as of responding to the types of content they contain.
The circulation and impacts of manifestos online
Firstly, the nature of the shooting demonstrates that the killer drew heavily on the innovations and practices of previous attacks while also aiming to disseminate a manifesto widely. There is significant evidence of memetic reproduction in the manifesto – a reliance on text, charts and images found widely among right-wing online communities. The manifesto also contains ideas, including in- and out-group framings, that have been reproduced from earlier manifestos, passed from one attacker to the next. This includes specific conceptualisations such as the Great Replacement theory, evident in texts and statements from Anders Breivik, Dylann Roof and Patrick Wood Crusius, as well as a significant amount of text and pictures copy-pasted directly from the document of the Christchurch attacker, Brenton Tarrant. As such, it makes sense to consider manifestos as a type of palimpsest – a form of living document detailing white supremacist ideologies and the means of articulating them in violent action.
While manifestos are often framed as a product of the shooting, it may be more accurate to suggest that the attack operates as a tactical means of encouraging and enabling wider readership of the manifesto. The public nature of the shooting, and the inevitable subsequent scramble of media to read and broadcast its core tenets, are key processes in the whitewashing of violence, spreading core fascist ideas, conspiracy theories of white replacement, and the practical means for others to carry out future attacks. Violence, then, is just one strategy serving the core aim of enabling a wider readership of far-right ideas. When considering how to counter such documents and the discourses they embody, it may therefore be prudent to think beyond takedowns, which appear ineffective against a living document that is disseminated before and during an attack, is easily reproduced, and is even shared in whole or in part by media organisations and analysts.
Linking online behaviour to offline violence
On an operational level, law enforcement faces significant challenges in tackling this type of content, particularly as there is not always a clear link between online behaviours and offline violence. Indeed, in the aftermath of an attack, police are often criticised for not having acted on previous “suspicious online behaviour” – the suspect in this case had previously been flagged for making threats and had spent time in hospital for evaluation. However, law enforcement is often under-equipped or under-staffed for the sort of screening that such a “pre-crime” approach would require, which would in any case entail substantial breaches of civil rights and liberties. Moreover, while online hate speech can incite real-world violence, there is insufficient evidence connecting a higher rate of online activity, at the individual level, with a higher likelihood of committing real-life violence. An over-reliance by law enforcement and security actors on online behaviour as a predictor of violence is thus unlikely to yield the desired effects.
To complicate matters further, significant amounts of extreme right content online are not visibly racist and cannot immediately be flagged as incitement to violence. In fact, some of the most insidious promoters of hate speech are far-right influencers: self-proclaimed libertarian and conservative online actors who play a key role in promoting racist and white nationalist views in mainstream online spaces. Studies of the use of far-right symbols across international contexts have shown that, in online spaces, far-right actors tend to avoid the more obvious imagery and logos linked to national socialism, preferring ‘cryptic’ imagery in which the far-right messaging is hidden or highly context-dependent. Current approaches to online content by governments and supra-governmental bodies have tended to prioritise proscription lists and the identification of proscribed organisational logos, and they struggle to deal with this more cryptic far-right content. The use of more mainstreamed imagery in the manifesto thus points to the need for greater work identifying far-right language and images in context.
Lone actors and broader links to movements and groups
The manifesto demonstrates how the attacker frames himself in relation to the wider far-right scene. White supremacist shooters seem increasingly to operate independently of formal far-right groups, and the perpetrator of the Buffalo shooting made clear that he neither supports nor is operating as part of a group, yet still considers himself a fascist and a terrorist. The document and the nature of the shooting underline this lack of group involvement, with the attacker explicitly stating that he was radicalised online and had little offline interaction with those of similar views. These statements corroborate existing research suggesting that far-right milieus are highly atomised, linked internationally through transnational online engagement rather than through membership of increasingly unpopular, formalised neo-fascist groups. Governmental responses to the far right that rely on identification of group logos or proscription lists are therefore highly problematic in such a context.
Taking down online content
Importantly, measures to legally invest governments with the authority to demand the removal of online content are met with resistance from the public, including social activists and scholars. The extreme right has capitalised extensively on the extension of government authority and the curtailing of individual liberties during the COVID-19 pandemic, routinely reframing such measures as instruments of social control, government corruption and state illegitimacy. The extension of government authority over internet providers and social media platforms is likely to be used similarly. Recent attempts have been made to address the far right in Europe – for instance, the governmental proscription of far-right groups, the construction of an EU-wide definition of Violent Right-Wing Extremism, and work with major online platforms to challenge violent far-right content. However, current European policy is still critiqued as containing a heavy ‘Islamist bias’, alongside a systematic underplaying and underestimation of neo-Nazism, white supremacism and similar ‘home-grown’ far-right movements.
Practical guidance for conducting attacks
Vast tracts of the document meticulously discuss the means of obtaining a relevant arsenal for the shooting, the means of carrying out reconnaissance, the costs and effectiveness of different kinds of weaponry and armour, and even psychological techniques for building up confidence for the attack. The document is not just about ideology and the spreading of far-right ideas, but about their practical application; the attack was predicated on opportunity and means just as much as on ideology. Responses to far-right violence that focus only on ideology, while failing to address – or wholly obscuring – the opportunities and means available for such violence, will therefore be impoverished.
Utilising mainstream narratives
One way of addressing the opportunity context for such attacks is to consider the mainstream. White supremacism is shown – at least in this manifesto – to be responsive to mainstream discussions taking place online, interacting with and drawing upon national and international political debates. The document deals heavily in antiquated stereotypes, debunked eugenics and theories deeply rooted in fascist ideologies and language going back many generations. But it interweaves this traditional fascist language against cultural and political minority communities with current events, constructing narratives that encompass cryptocurrency, transgender rights, environmental concerns, pornography, international affairs, and prominent contemporary minority politicians. As such, responding to the threat posed by the extreme right requires work not just on understanding far-right ideologies, but on how mainstream language influences far-right discourses.
Implications for the private sector: responding to extremist content online
Current governmental approaches to the far right lag far behind attacks – there is evidence that governments in the West are not taking the far-right threat seriously and may even be looking to roll back responses to such extremism. The continued replication of tactics online and offline by far-right attackers also suggests that the approaches currently being implemented have not yet had sufficient impact to be effective. Given this landscape of blurred lines and unclear boundaries between terrorist content removal and civil liberties, policy actors would benefit from diversifying their approaches to online extremism.
Both governments and international organisations should start moving online prevention efforts in a new direction that infringes less strongly on individual rights and has less potential to backfire. For example, governments could invest more in online literacy education in schools to reduce young people's vulnerability to online propaganda. Recognising the role that anti-migrant, anti-minority and anti-transgender language in national media and politics plays in such attacks may also have implications for effective responses to far-right violence. As this shooting and manifesto demonstrate, governmental approaches towards the far right should be rooted as much in helping to positively shape mainstream debate as in countering extreme ideologies.
Key to the Christchurch Call is the need to address the livestreaming of such attacks. With these tactics still being deployed, it is critical that more focus is placed on practical means of preventing them. The Buffalo attacker's selection of Twitch, detailed in the manifesto as his preferred platform for the livestream, has profound implications. Researchers have been warning of the gamification of violence: “the use of game design elements within non-game contexts”. While it is important to caution against drawing a linear causality from videogame usage to offline violence, the increasing relevance of game-like elements in extremist violence undoubtedly signals a cultural shift in the way violent actors frame their attacks and then distribute them.
Beyond its internet-aesthetic appeal, the gamification of real-life violence has other purposes. Firstly, by reducing real-life human casualties to statistics, victims become depersonalised. Secondly, translating casualties into scores helps solidify the common objective needed to bind angry, young, English-speaking white men globally, from Christchurch to Pittsburgh, who feel somehow isolated from their local communities. The livestreaming of the Buffalo shooting echoes the tactics of the perpetrators of the Christchurch and Hanau shootings, who likewise shared their atrocities live online and left behind manifestos to “explain” their actions. Such tactics serve as propaganda in extremist online spheres (4chan, 8chan, Discord and certain areas of Reddit), and they shock and manoeuvre mainstream media outlets into engaging with the acts of violence. For a terrorist, be it a self-radicalised individual or an organised group, any publicity is good publicity – especially when such recordings are likely to circulate on the internet for a long time, in some cases for months after the platforms themselves have vowed to remove the content.
The existence of these recordings and manifestos raises another thorny issue: how are tech companies and hosting service providers meant to deal with right-wing extremist content online? It can be difficult to identify extremist online content, and there are also issues in drawing a line between permissible freedom of speech and extremist propaganda. Recent attempts by the European Commission and Member States to codify extreme right symbols – a measure developed to support tech companies and groupings such as the Global Internet Forum to Counter Terrorism (GIFCT) – have led to only a limited consensus. Meanwhile, the development of a European standard definition of Violent Right-Wing Extremism has lacked a firm legal basis, implemented only as an advisory definition and met with resistance or indifference from some EU Member States.
All legal and government initiatives regarding terrorist content online rest heavily on a strategy of public-private cooperation. From the European Union's Regulation on the dissemination of terrorist content online to the aforementioned Christchurch Call, the main role is assigned to mainstream social media platforms such as Twitter, YouTube or Meta. On a purely operational level, it is unclear whether tech companies' commitment to content moderation is backed up by the necessary personnel and technical (algorithmic) resources. Furthermore, these initiatives ignore the important role of smaller or more niche online sites in propaganda diffusion, as we have seen repeatedly with sites such as Twitch, 4chan or Discord. Ultimately, governments are either struggling to implement, or deliberately avoiding, a legal definition of Violent Right-Wing Extremism; private tech companies, in turn, either lack the capacity to navigate differing national approaches to what is legal or illegal extreme-right speech, or frame themselves as having a ‘democratic deficit’ in relation to governments. Private companies therefore prefer to stick to the letter of the law, rather than its broader spirit, in censoring content, suggesting it should ultimately be up to elected officials to determine the boundary at which controversial speech becomes hate speech or extremism.
It is important to underscore that extreme-right attacks aim to capture public attention: manifestos and recordings of the massacres are as much part of the attack as the killings themselves, as they contribute to the further whitewashing of violence, expose the general public to hateful fringe ideas, and glorify the actors whose deeds become the subject of international commentary.
The removal of online content might seem like the logical conclusion to this problem. There are, however, problems with this. Firstly, simply removing terrorist content is difficult. There are deep political divides around the definition(s) of the extreme right, as well as important technical disagreements. The role of private actors, while much discussed, has so far been only mildly encouraging, and much greater commitment from big tech companies may be needed to move forward.
Secondly, removal of manifestos is not enough to protect the general population against hate speech and hate crimes. The online sphere is a dynamic, discursive environment, and living texts such as terrorist manifestos embody this plasticity, with their mixture of memes, disinformation and conspiracy theories. Even in the event of successful and total removal, such content – from images to infographics to theories – was drawn from online spheres in the first place. Removing manifestos would thus not be enough, as they would simply be drawn up again by new actors. One alternative would be removing vast amounts of content from the internet, but the feasibility of such a measure is questionable.
The disincentivising of such attacks does, however, present a possible avenue for further exploration by government and private bodies. We therefore advocate for the development of alternative policy directions that seek not to suppress, but to counteract, extreme right content online. This can include the use of digital literacy tools to increase citizens' resilience to propaganda and the dissemination of hate online and offline. It must also mean developing practices built upon examination of both the ideology and the opportunity context available to former attackers, as well as critical discussion of the role of the mainstream in extreme-right attacks – practice that is cognisant of the role that mainstream debate plays in enabling the radicalisation of beliefs online.