Common standards for the regulation of online hate speech in Europe, while necessary to avoid a patchwork effect, need not be implemented in an identical way in every country and should safeguard pluralism and freedom of speech, especially in “grey area” cases. A victim-sensitive approach should be at the core of governments’ efforts. At the same time, mechanisms should be put in place to mitigate the incentive to over-remove content, which increases censorship.
These are among the conclusions of a new Council of Europe study examining recent innovations in governance tools for online hate speech initiated by national governments, intergovernmental organisations and Internet intermediaries across Europe, including the NetzDG Act in Germany, the Avia Bill in France and the Facebook Oversight Board.
Common standards to regulate online hate speech in Europe should retain important elements of decentralisation: regulatory authorities should remain national; a common standard on the responsibility of Internet platforms to remove hate speech content within a specified time frame should be applied in the context of national hate speech laws; and each national regulator should be able to design and implement slightly different exceptions and exemptions. Exceptions could be allowed, for instance, for journalistic content and for Internet platforms that refer “grey area” cases to competent independent oversight institutions and abide by their decisions. Internet platforms that devote resources to removing hate speech, or that fully disclose the amount of hateful content hosted on them, could be exempt from regulatory fines or have them reduced.
Regulatory fines could create an unwelcome tendency among Internet platforms to remove suspected hate speech on a “safety first” basis, the author of the study warns. Referring suspected hate speech cases to competent oversight boards might help mitigate this risk. In general, the author argues, it is wrong to think that removal is the only way for Internet platforms to tackle online hate speech: content management and oversight are also important tools, especially in “grey area” cases where there are doubts about the illegality of the content.
A victim-sensitive approach should be at the centre of efforts by governmental agencies, Internet platforms and civil society organisations to design and implement tools against online hate speech. Softening the approach to hateful content posted or shared by political figures should be avoided, despite the “public interest” rationale used to justify it, as it downgrades the experiences and needs of victims.
At the moderation level, victims should be notified of the decisions taken, hate speech reporting mechanisms should be explained in plain words and in multiple languages, and victims should be empowered to influence moderation outcomes where feasible. Oversight bodies should explain the reasoning behind their decisions and help facilitate the recovery of victims, for example by providing access to restorative justice. At the regulatory level, reporting mechanisms should minimise the risk of re-traumatisation, and victims should be enabled to play an active part in legal or administrative processes, including by testifying. Introducing criminal offences for malicious reporting might deter genuine reporting and undermine a victim-sensitive approach, the author of the study warns.
The study is based on meetings, in-depth interviews, a questionnaire and a public survey of experts, representatives of national authorities, social media platforms and civil society organisations.