Responding to online hate: Lessons from Workshop 5

As much as social media connects people and brings them together, so too can it isolate, alienate, and demean them. Trolling represents the ugly side of social media use. The levels of abuse, hatred, harassment, and bullying on social media today are at an all-time high. The toxic environment on some social media sites has begun to discourage people from expressing legitimate views online. Others have simply decided not to maintain a presence on social media at all. Concerns about the effects of online nastiness on children's mental health are growing and increasingly well-evidenced.

The 5th MEDI@4SEC Workshop, held in London, gathered stakeholders from public authorities, law enforcement, advocacy groups and tech companies to discuss this problem and how it can be better tackled. We have produced two reports analysing the main issues discussed during the day. In this blog post we pick out some of the key points of discussion about ways to tackle the problem.

Automated versus human means of tackling trolling

Responses to online trolling have typically relied on humans. These include counter-speech and the use of human moderators to decide what material to take down. Human-based responses are resource- and time-intensive. They also expose moderators, and those practising counter-speech, to the psychological effects of abusive online material.

Automatic moderation and removal of content by social media providers can be cheaper, faster, and simpler than human moderation. AI tools for automated takedowns, blocking or disruption of trolls and offensive material are well-established examples. More recent initiatives include user-enabled measures, such as social media account settings that let a user choose whether they want to be exposed to material or accounts that have been flagged as trolling.
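To make the idea of user-enabled filtering concrete, here is a minimal sketch in Python. It assumes a hypothetical moderation flag (`flagged_as_troll`) and a per-user setting (`show_flagged_content`); neither corresponds to any real platform's API, and the logic is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    flagged_as_troll: bool  # hypothetical flag set by moderation tools

@dataclass
class UserSettings:
    show_flagged_content: bool = False  # user opts in to see flagged material

def filter_feed(posts: list[Post], settings: UserSettings) -> list[Post]:
    """Hide posts flagged as trolling unless the user has chosen to see them."""
    if settings.show_flagged_content:
        return posts
    return [post for post in posts if not post.flagged_as_troll]

# Example: a user with default settings only sees unflagged posts.
feed = [
    Post("alice", "Lovely sunset today", flagged_as_troll=False),
    Post("troll42", "Abusive comment", flagged_as_troll=True),
]
visible = filter_feed(feed, UserSettings())
print([post.text for post in visible])  # -> ['Lovely sunset today']
```

The point of such settings is that the decision about exposure sits with the user rather than with the platform alone, which partially sidesteps accusations of censorship.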

Yet whilst automated moderation can spare moderators the psychological and other costs of continued exposure to trolling and other abusive material online, there are some things it can't replace. Human moderators are able to make far more nuanced judgements about the offensiveness of material than automated alternatives, and they are easier to hold accountable. Automated blocking of content is often a blunt tool: it can end up inadvertently censoring harmless content or even interfering with the freedom to express political or religious views. The criticism levelled at the new German law requiring social media platforms to take down material that is 'obviously illegal' is a good illustration of this.

The anti-trolling arms race

Trolls are often driven by a desire to provoke offence and cause conflict. This makes them tricky adversaries because they are continually finding ways to use the very interventions designed to thwart them to cause further discord and distress. The potential weaponisation of anti-trolling interventions is therefore a challenge for any anti-trolling strategy. One common trolling tactic is to flood anti-trolling hotlines and reporting tools with bogus reports. Another is to make false reports about those they have already victimised in order to get them banned from social media sites. For example, Facebook has faced complaints that its strict enforcement of copyright law has enabled trolls to get their victims' pages shut down on the basis of a complaint alone. Well-intended counter-speech can also be manipulated by trolls to make it descend into offensiveness and abuse.

Being blocked is seen by trolls as a win because it shows they have provoked a response. Measures to prevent anonymity or to circulate shared lists of known trolls are often suggested as means to unmask trolls and bar them from social media sites. But anonymity is still easy to achieve and trolls are able to circumvent bans.

A different, potentially promising approach is so-called 'throttling'. This involves a social media or other platform taking directed action to disrupt the connectivity of an identified troublemaker. For example, a platform can surreptitiously reduce the quality of a known troll's connection to the service: their connection becomes slower and more prone to bugs and crashes, frustrating their ability to misbehave on that platform. One advantage of throttling is that it is covert: trolls cannot be sure it is happening, as there is always a range of other technical explanations for their reduced access to the service. Another is that it sidesteps any need for companies to publish guidelines for acceptable behaviour, which are often themselves weaponised by trolls. However, these same features make the measure ethically questionable. Those subject to throttling have no chance to contest their treatment or explain themselves, and they are given no incentive to reform their behaviour.
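As a rough illustration of how throttling might work in practice, the sketch below degrades service quality for flagged accounts by injecting latency and occasional artificial failures. The account list, delay range and failure rate are all hypothetical choices for illustration; they do not describe any real platform's implementation.

```python
import random
import time
from typing import Callable

# Hypothetical set of account IDs the platform has decided to throttle.
THROTTLED_ACCOUNTS = {"troll42"}

def handle_request(account_id: str, serve: Callable[[], str]) -> str:
    """Serve a request, covertly degrading the experience for throttled accounts."""
    if account_id in THROTTLED_ACCOUNTS:
        # Add random latency so the service simply feels slow and flaky.
        time.sleep(random.uniform(2.0, 8.0))
        # Occasionally fail with a generic error indistinguishable from a real glitch.
        if random.random() < 0.3:
            raise TimeoutError("Service temporarily unavailable")
    return serve()

# Example: an ordinary user's request goes straight through.
print(handle_request("alice", lambda: "feed loaded"))
```

Because the degradation is randomised, a throttled user cannot easily tell it apart from ordinary network trouble, which is precisely what makes the measure both effective and, as noted above, ethically contentious.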

Gaming platforms as new sites of trolling of the very young

Gaming sites offer a place where young children can view and engage in anonymous or pseudonymous chat with teenagers and adults. Games often include online chat features, some of which cannot be turned off, and these have become key forums for cyberbullying and racist hate speech. Yet there is little warning for parents that young children playing a game may be exposed to this kind of very unpleasant, offensive or upsetting speech, because the age-appropriateness indicated in a game's advertising and product description reflects the content of the game itself, not the chat posted by users. Most parents wouldn't let their kids play unsupervised with a load of strangers in a playground. But they're often not aware that, by letting their kids onto interactive gaming sites, they're doing just that.

The current focus on social media sites overlooks the fact that children start gaming far younger than they start using social media: children as young as 3 or 4 routinely game, while social media use typically starts much later. Exposure to abusive speech at such a young age can both traumatize children and normalize hateful and abusive interactions for them. Gaming companies have been slow off the mark to recognize and deal with this problem. For example, some gaming companies in Germany have been resisting the application of anti-harassment legislation to their online environments. It's clear that much more attention needs to be given to the presence of hate speech and abuse in gaming environments, and it's time that more proactive measures were taken to prevent it.

Tackling trolling: a shared responsibility

Although its impact can be very harmful to its targets, much trolling activity is not illegal because the threshold for criminality is very high. In practice, this means that the onus for dealing with trolling falls mainly on private actors, such as the social media, gaming, and other platforms that host it. But even when trolling is bad enough to be illegal, it often doesn't result in police investigation or prosecution. The majority of victims are more interested in getting the offending material taken down than in pursuing what are often lengthy and unpleasant legal proceedings. Private means of tackling trolling, such as blocking trolls or taking down content, satisfy victims, silence trolls and help make the online environment a less toxic place to be. But they also leave trolls unpunished and undeterred.

So, while law enforcement agencies play a crucial role in anti-trolling efforts, this only works as part of a much broader web of collaborations and partnerships (sometimes with unexpected and unusual partners, such as campaign groups or porn sites), alongside proactive independent activities by a range of actors, from parents and schools to employers and gaming companies.

Based on the discussions at the workshop we have published two reports: an overview of the discussion highlighting the emerging issues presented by trolling and online hate and a summary of the ethical and legal issues raised in connection with these issues. Both reports can be downloaded from the project’s publications page.

Kat Hadjimatheou

University of Warwick
