Human Rights in the Era of AI: Europe as International Standard Setter for Artificial Intelligence

Speech

Dear Minister,

Ladies and Gentlemen,

Ensuring that technological development works for and not against human rights, democracy and the rule of law is one of the biggest tasks that states face.

Early on in my mandate I warned about the human rights implications of new technology, including AI systems. Grounding decisions on mathematical calculations can have enormous benefits in many sectors of life, but relying too heavily on AI, which inherently involves identifying patterns beyond these calculations, can also turn against users, perpetrate injustices and restrict people’s rights - especially in times of crisis like the current one.

A topic that has recently been discussed extensively in several states is the use of digital devices to help enforce quarantine orders, track the spread of infections or inform people whether they may have been exposed to infected individuals. Tracking and tracing devices are potentially highly invasive. In the course of my work, I have warned about the data protection risks that such tools can entail, especially when the data collected are not stored properly and may become available for other purposes.

The current crisis also urgently brings into focus specific issues that predate the pandemic. Some groups of people have been disproportionately affected by the measures adopted to contain the spread of COVID-19. Those who were already disadvantaged have faced even greater disadvantages. It is high time therefore for member states to take action and reverse the current trend, making a push for more equality.

At the same time, we have observed a steady rise in the use of automated and AI tools by public authorities in the area of social services. Decisions related to the calculation and payment of social benefits are increasingly taken by digital technologies without the involvement of humans.

While those systems offer potential advantages, there have been several examples of errors or failures which have had a disproportionate impact on large numbers of beneficiaries. This shows that we cannot blindly rely on the decisions made by algorithms. We must know and understand how they function and what flaws they may have, and, before we use them, we must be aware of their possible human rights impacts.

Instead of rushing into more “digital welfare”, under the sometimes erroneous assumption that technology will be faster, better and less expensive, states should think carefully about what safeguards should be set up when digital tools are used in the welfare context, what contingencies must be covered, what legislative frameworks are required for oversight, and what happens if something goes wrong. And this should be done in addition to ensuring that the deployment of technological solutions does not infringe individual rights to privacy, data protection, non-discrimination, and dignity. Overall, little attention has been paid to the applicable standards so far.

This, in my view, should become a top priority. We cannot allow this digital Wild West to go on.

The good news is that we already have some tools and some knowledge to ensure that technology benefits and enhances human rights protection. Last year, for instance, the Committee of Ministers of the Council of Europe adopted a Recommendation on the human rights impacts of algorithmic systems, which contains practical guidelines for member states in their role as users, developers and procurers of algorithmic systems.

The Recommendation on Artificial Intelligence which I published in 2019 can also be used for this purpose. It is based on existing standards and on work done in this area by the Council of Europe and other international organisations. It is intended to help member states to maximise the potential of artificial intelligence systems and prevent or mitigate the negative impact they may have on people’s lives and rights, through work based on 10 areas of action.

I consider transparency rules to be essential here. If an AI system is used in any decision-making process that has a significant impact on a person’s human rights, this needs to be identifiable. Public authorities should provide all the information necessary for individuals to understand when and how AI systems are being used. If an AI system is used for interaction with individuals in a public service context, particular scrutiny is required because individuals often cannot opt out. This is even more important in the justice, welfare or health sectors, where the consequences for the individual are especially serious and directly affect fundamental human rights. Users need to be told promptly and clearly that they can request the services of a human professional, and that such assistance will be provided without delay.

Another area of action relates to the obligation of governments to ensure that businesses abide by human rights standards. Self-imposed standards can be useful, and a number of businesses are already implementing them in good faith. But they are not enough. Standards vary across the globe and self-regulation is unlikely to prevent the negative effects of bad business practice on human rights. Since states bear the responsibility of respecting, protecting and fulfilling every person’s human rights, it is their duty to ensure that private companies which design, develop or use AI systems do not violate human rights standards.

This can be achieved by engaging more resolutely with tech industries to make them aware of the need to incorporate human rights into the design of AI systems and encouraging them to assess the human rights impact of these systems. More inclusive and inter-disciplinary co-operation between state actors, the private sector, academia, NGOs, the media and citizens’ groups would greatly help in this regard.

I would like to conclude with a word of appreciation for the work done so far by the Ad hoc Committee on Artificial Intelligence (CAHAI) towards establishing a legal framework for the development, design and application of AI.

Artificial intelligence can greatly enhance our potential to live the life we desire. But it can also destroy it. It therefore requires strict regulation.

I wish you a fruitful conference and hope there will be an opportunity in future to exchange thoughts with many of you on this important issue.

Strasbourg, 20/01/2021