How do you police online abuse and the threat of trolls? MEDIA4SEC Workshop 5 Preview

As social media grows, enabling easy communication across communities and peer-to-peer, it also brings with it problems of misuse and abuse. One such problem is the growth of trolling. Our 5th project workshop provides an opportunity for you, as stakeholders, to discuss the challenges and opportunities of policing trolling, hate, and lies online.

What are the key issues that we’ll be discussing on the day?

Who are the trolls?

Recent studies suggest a number of interesting categories.

The first is that of ‘subcultural trolls’ – those who explicitly identify with the term ‘troll’ to describe their online identity. They tend to be indiscriminate in their choice of targets and are motivated by a desire to get their “kicks” – or ‘lulz’ – from inflicting suffering on others. There is also some evidence that these kinds of trolls engage in antisocial behaviour offline as well as online, a fact that could be significant for LEAs developing strategies to deal with antisocial behaviour offline.

Politically and morally motivated perpetrators of online aggression are, unlike subcultural trolls, motivated by a desire to highlight and oppose what they see as immoral behaviour. These forms of aggression are typically directed at public actors such as politicians perceived as having violated norms of political correctness and corporations that violate human rights. A 2016 study found that, contrary to common assumption, anonymity did not increase this kind of abusive speech. Rather, a greater proportion of aggressive posts were made by non-anonymous than by anonymous commenters. It would seem that people are more willing to be identified when they feel righteous about their actions. This has implications for the way such abuse is policed.

Those who post abusive content online are as likely to be women as they are to be men. Challenging commonly held assumptions about the behaviour of different genders, a three-week global snapshot of misogynistic trolling on Twitter found that 50% of those tweeting aggressive and misogynistic messages were women.

And, of course, not all trolls are human: some are bots.

Who do trolls target?

While anyone can become a target of trolling, victims of abusive trolling online include, in particular, young people, women, LGBT people, ethnic minorities, and celebrities or anyone with a public-facing role. A US-based survey published in 2014 found that 70% of 18-to-24-year-olds who use the Internet had experienced harassment, and 26% of women in that age group said they had been stalked online. Children are also highly likely to be targeted for online bullying.

What’s the harm of online abuse?

Online abuse, harassment, and hate cause isolation and relationship problems, fear, depression, loss of confidence, self-harm, loss of employment, and paranoia. In the Netherlands, a 13-year-old girl committed suicide after her name was published on a ‘banga list’, a list of girls maliciously identified by local boys as being sexually promiscuous. The risks of sexting are made clear by a study by the UK Safer Internet Centre, which showed that 88% of sexting pictures or videos end up on other websites, sometimes even on porn sites. Online hate also has a particular impact on minority and disadvantaged communities. Trolling can also cause serious harm to a person’s reputation.

How can online hate and abuse be countered?

To date, a strategic approach to tackling trolling has proved difficult to establish. However, efforts by others may offer useful insights for a more co-ordinated approach.

Self-styled ‘troll hunters’ have developed sophisticated methods of evidence collection and tracing of online abusers that could be useful to police. For example, the Berlin-based Peng! Collective repurposes the bot technology used by spammers to detect misogynistic language on Twitter and then counter-troll it with humorous messages in an attempt to disrupt trolls. Innovative self-protection tech such as TrollAbort!, a troll-blocking app that allows Twitter users to crowdsource information about trolls online and block their accounts, provides intriguing possibilities that should be explored.
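By way of illustration only, the short Python sketch below shows the kind of simple keyword-based filtering that such tools build on before adding trained classifiers and platform integration. The flagged terms, function names, and reply text are invented for this example and are not taken from the Peng! Collective’s or TrollAbort!’s actual systems.

```python
from typing import Optional

# Purely illustrative sketch of keyword-based abuse detection with an
# automated counter-reply. The flagged terms, function names, and reply
# text are invented; real counter-trolling bots use far richer methods
# (trained classifiers, context analysis, platform APIs).

FLAGGED_TERMS = {"insult_a", "insult_b"}  # placeholder terms, not a real lexicon


def is_abusive(message: str) -> bool:
    """Return True if the message contains any flagged term."""
    words = {w.strip(".,!?\"'").lower() for w in message.split()}
    return bool(words & FLAGGED_TERMS)


def counter_reply(message: str) -> Optional[str]:
    """Return a light-hearted counter-message for abusive posts, else None."""
    if is_abusive(message):
        return "This post has been adopted by a friendly counter-troll. Be kind!"
    return None


if __name__ == "__main__":
    for msg in ["Have a nice day", "you are an insult_a"]:
        reply = counter_reply(msg)
        print(f"{msg!r} -> {reply or 'no action'}")
```

Even this toy version makes the trade-off visible: keyword lists are cheap and fast but miss context, which is why community moderators and more sophisticated classifiers remain essential.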

Online community members can also counter trolling. Because they know and interact closely with their communities, they can be effective at detecting and countering trolling immediately, in real time, via moderation, exclusion, and counterspeech. But counterspeech has provoked some controversy. Whilst counter-hate-speech initiatives can be effective at overshadowing or drowning out hate speech when used tactically, in some cases counterspeech can itself become abusive, creating a cycle of hate and worsening the problem.

Recent years have seen a growing awareness among social media providers of their responsibilities with respect to abuse and hate crime online. Social media providers are uniquely well placed to react to online abuse, because it happens on their platforms and because they have the discretion to remove material that police may not have the power to remove. Between 2014 and 2015, Reddit, Twitter, Google and Microsoft introduced tools that people can use to request the removal of information. In collaboration with a range of mainstream providers, the European Commission published a Code of Conduct on countering illegal hate speech online, providing guidance on preventing and responding to such abuse. But the measures adopted by many online social forums are generally reactive rather than preventive, are seen by some LEAs as lacklustre, and raise concerns about freedom of speech where content removal is concerned.

Where is policing?

Although organic, community-driven counter-trolling actions (such as counterspeech, education, and awareness-raising) are cheaper, faster, more effective and more responsive to trolling than the actions of public authorities, they cannot substitute for the strong arm of the law in cases of serious harassment, stalking, and abuse of individuals online. However, there would appear to be a lot of work to be done to make this happen.

Challenges around resources, capacity, knowledge, skills, and legal clarity all impede the response of LEAs to online abuse, hate, and lies. There are inadequate criminal justice resources to police the volume of communications that travel through computer networks. The police often remain inadequately informed about what trolling is, dismissing illegal behaviour as ‘just kids on Facebook’. The technology to identify hate speech on social media networks is not yet widely adopted by authorities, and there is also a lack of confidence in policing among victims of cyberhate and the communities within which they live. Added to this, there is public reluctance to use traditional crime-reporting mechanisms to report online abuse.

For policing efforts to succeed, there will need to be innovation. The development of alternative tools – for example, True Vision, an online hate reporting tool launched by the UK’s Association of Chief Police Officers – which enable individuals to report crimes in new ways, is vital if a breakthrough is to be made.

Register Your Interest in Attending Workshop 5

The workshop aims to create a common agenda for the policing of trolling. It will provide a forum for sharing best practices and lessons learned among European security professionals and other actors. It will address the issues described above and will focus on three areas of trolling-related activity: hate and abuse online; the influence of organically formed online groups who crowdsource knowledge and resources for counter-trolling efforts (cyber-vigilantes/digilantism); and the intersection between politics and trolling.

Places at this event are limited. Applications are now open and will close on Tuesday 13th March. Participation is free of charge and there are funds to reimburse travel and accommodation costs.

Kat Hadjimatheou

University of Warwick
