Algorithms and automation: Council of Europe issues guidelines to prevent human rights breaches

The Council of Europe today called on its 47 member States to take a precautionary approach to the development and use of algorithmic systems and adopt legislation, policies and practices that fully respect human rights.

In a Recommendation on the human rights impacts of algorithmic systems, the Council of Europe's Committee of Ministers issued a set of guidelines calling on governments to ensure that they do not breach human rights through their own use, development or procurement of algorithmic systems. In addition, as regulators, they should establish effective and predictable legislative, regulatory and supervisory frameworks that prevent, detect, prohibit and remedy human rights violations, whether stemming from public or private actors.

The recommendation acknowledges the vast potential of algorithmic processes to foster innovation and economic development in numerous fields, including communication, education, transportation, governance and health systems. In the current COVID-19 pandemic, algorithmic systems are being used for prediction, diagnosis and research on vaccines and treatments. Enhanced digital tracking measures are being discussed in a growing number of member States – relying, again, on algorithms and automation.

At the same time, the recommendation warns of significant challenges to human rights related to the use of algorithmic systems, mostly concerning the right to a fair trial; privacy and data protection; freedom of thought, conscience and religion; the freedoms of expression and assembly; the right to equal treatment; and economic and social rights.

Given the complexity, speed and scale of algorithmic development, the guidelines stress that member States must be aware of the human rights impacts of these processes and put in place effective risk-management mechanisms. The development of some systems should be refused when their deployment leads to high risks of irreversible damage or when they are so opaque that human control and oversight become impractical. Serious and unexpected consequences may occur due to the growing interdependence and interlocking of multiple algorithmic systems that are deployed in the same environments.

As a matter of principle, States should ensure that algorithmic systems incorporate safety, privacy, data protection and security safeguards by design. States must further carefully consider the quality and provenance of datasets, as well as inherent risks, such as the possible de-anonymisation of data, their inappropriate or decontextualised use, and the generation of new, inferred, potentially sensitive data through automated means.
The guidelines underline the need for governments to endow their relevant national institutions responsible for supervision, oversight, risk assessment and enforcement with adequate resources and authority. They should also engage in regular consultation and cooperation with all relevant stakeholders, including the private sector, and foster general public awareness of the capacity and impacts of algorithmic systems, including their risks.

Strasbourg 8 April 2020
