"Dear Ministers, Excellencies, Ladies and Gentlemen,
The impact of artificial intelligence on human rights, democracy and the rule of law is one of the most crucial factors that will define the period in which we live – and probably the whole century.
How artificial intelligence is designed and how it works is very complex, but its impact on our lives can be easily explained with an example: if a state actor harms me, I can bring the state to court, but I cannot sue an algorithm for harming me.
Simply put, unfettered technological development may uproot the human rights protection system we painstakingly built over the past 70 years.
For this reason, one of the first public positions I took as Commissioner for Human Rights was precisely on the need to safeguard human rights in the era of AI. In a Human Rights Comment and in an opinion editorial I published in July last year, I stressed that AI can greatly enhance our abilities to live the life we desire. But it can also destroy them.
AI can in fact negatively affect a wide range of our human rights, from privacy and equality to freedom of expression and assembly. When data-based decision making reflects societal prejudices, it reproduces – or even reinforces – the biases of that society. It can also spread mis- and disinformation and subtly deepen misogynist, racist and otherwise undemocratic stereotypes.
This problem arises mainly from the fact that decisions based on these systems are taken with little or no transparency, accountability or safeguards regarding how they are designed, how they work and how they may change over time.
Since the beginning of my mandate, I have tackled this issue on a number of occasions.
In my report following my visit to Estonia in June last year, for example, I looked at how older persons and their human rights are affected by the use of artificial intelligence and robots in social and care services. NGOs alerted me to difficulties linked to the use of automated decision-making in social benefits services. Following a reform of the work ability support system, machines and algorithms were used to automatically re-evaluate incapacity levels. Reportedly, the incomplete data in the e-health platform, coupled with a lack of in-person interviews, resulted in the loss of social benefits for certain persons with disabilities, including older persons with disabilities.
More recently, I had an opportunity to discuss with the French Defender of Rights the functioning of “Parcoursup”, a higher education admissions service which was set up in France in 2018. Aimed at students who attend the last year of high school and plan to continue their studies at a college or university, it was developed to simplify access to higher education by centralising applications. However, a number of complaints containing allegations of algorithmic discrimination against applicants from certain high schools were filed with the Defender of Rights. In two decisions issued last January, the Defender of Rights called for more transparency in the procedure, pointing to certain criteria considered potentially discriminatory, namely the place of residence or the school of origin. He recommended that the Minister of Higher Education, Research and Innovation take the necessary measures to make public all information relating to the treatment and assessment of applicants’ files, including through algorithms. He also recommended the implementation of specific accompanying measures for students with disabilities.
These examples highlight the important role that Ombudspersons, but also national human rights institutions and equality bodies, can play in this area. These examples also clearly show that while the whole of society might be affected by the use of AI systems, it is often the most vulnerable who suffer the most. There is increasing evidence that women, older people, ethnic minorities, people with disabilities, LGBTI and economically disadvantaged persons particularly suffer from discrimination by biased algorithms. In addition, there is a lack of information on how these systems operate, which makes it difficult to correct the design and establish accountability.
It is therefore clear to me that either we govern the game, or the game will govern us. AI influences the decisions we take. It can strengthen our freedoms or oppress them. It can bolster participation or become a threat to democracy. It can empower people or push them to the margins of society. It is up to us to steer AI, not the other way round.
To this end, the existing human rights framework must apply, and the concerns and rights of everyone must be put at the centre of AI systems’ design, deployment and implementation. This applies to public entities and the private sector alike.
Since States bear the responsibility to respect, protect and fulfil every person’s human rights, it is their duty to ensure that private companies which design, develop or use AI systems do not violate human rights standards.
This can happen by engaging more resolutely with tech industries to make them aware of the necessity to incorporate human rights in the design of AI systems and push them to assess the human rights impact of these systems. A public conversation among state actors, the private sector, academia, NGOs, the media and citizens’ groups would greatly help in this sense.
States should also reinforce their monitoring of AI systems’ compliance with human rights and act whenever these rights are infringed. They should strengthen independent oversight and empower national human rights structures to engage in this field too.
Finally, they should promote “AI literacy” among the population, and in particular in schools, in order to help people understand how AI works and recognise when it causes harm. For this to happen, States should invest more in public awareness, training and education initiatives to develop the competencies of all citizens and address the knowledge gap. It may be a costly investment, but one with a huge democratic return.
The Council of Europe must be a leading force in this field. The European Ethical Charter on the use of artificial intelligence in judicial systems, the Guidelines on Artificial Intelligence and Data Protection and the Declaration on the manipulative capabilities of algorithmic processes are important building blocks to ensure that AI systems operate within the perimeter of human rights protection.
I will keep AI as one of my priority themes during my whole mandate. In the coming months, I plan to publish a document on AI and human rights to help member states handle the multifaceted impact that AI can have on human rights.
I firmly believe that AI must serve to solve problems, not to create them. If adequately governed, it can hugely benefit our lives, our society and those of future generations. But injecting ethics into the deployment of AI systems is not enough. That is why we need to act now and put human beings, their dignity and their rights at the centre of automated decision-making design."