Justice in Europe faces the challenges of digital technologies

Strasbourg 15/10/2019

“People and machines live in an increasingly interconnected relationship. This brings great opportunities, but also creates new threats. Indeed, technology is never neutral. The question is always a personal one, because the development of technology has ethical, political and legal consequences.”


No doubt criminal justice systems can benefit greatly from the use of digital technologies. Many countries are already using these technologies to boost the efficiency of their justice system, increase transparency, strengthen criminal investigations, improve case-management and combat crime. But how many are tackling the threat of over-reliance on digital technologies?

Digital technologies pose a problem from an equality standpoint. Machines function based on what humans tell them. If a system is fed with human biases (conscious or unconscious) the result will inevitably be biased, thus reinforcing discrimination and prejudices under the guise of objectivity. If it is true that technology, and in particular artificial intelligence, can help identify criminals and strengthen public safety, it is also clear that it can reproduce the racial and other discriminatory biases that human beings have. It has been demonstrated that stop and search procedures are more likely to be used vis-à-vis minority groups and foreigners, and that racial and ethnic profiling leads to harsher criminal sentences for certain groups.

Another example relates to the use of facial recognition software to search for suspected criminals in public places, which human rights organisations warned “could lead to miscarriages of justice and wrongful arrests” and which poses “massive issues for democracy”.

A case in point is also privacy. Privacy is a fundamental human right, essential in order to live in dignity and security. But in the digital environment large amounts of personal data are collected - with or without our knowledge - and can be used to profile us. We constantly provide data on our whereabouts, our health, political ideas and family life without knowing who is going to use this data, how and why.

It does not take much to imagine what this means for both witnesses and judges. In the past, information about them could be obtained, but with difficulty. Today, the internet and the automation of case-law have made it much easier to access this information. Unfettered access to such information has the potential to erode the trust people have in the whole system, as well as to destroy the lives of individual human beings.

Technology can be used to undermine judicial independence too. Great care should be taken when designing policies to improve the performance of the justice system: if the achievement of ill-conceived targets is linked to the promotion of judges, there is a risk that judges come under government monitoring through the introduction of software, ultimately distorting the initial aim of improving effectiveness and efficiency.

It is therefore urgent that we pay closer attention to these risks. The good news is that we already have the tools to develop criminal justice systems which use technology to the benefit of human rights. For example, the Ethical Charter on the use of artificial intelligence in judicial systems that the CEPEJ has adopted and the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data are two key texts. The case-law of the European Court of Human Rights is also crucial when it comes to respecting private life, liberty, security and providing effective remedies to challenge intrusions into private life and to protect individuals from unlawful surveillance.

In addition, last May I published a Recommendation on Artificial Intelligence based on existing standards and on work done in this area by the Council of Europe and other international organisations. This Recommendation aims to guide member states to maximise the potential of artificial intelligence systems and prevent or mitigate the negative impact they may have on people’s lives and rights.

Four areas of that Recommendation are relevant for our discussion today, that is, the need to: carry out human rights impact assessments of Artificial Intelligence systems; establish public consultations; engage with the private sector; and ensure effective parliamentary, judicial and expert oversight of the use of technologies in the criminal justice system.

I encourage you to take all these steps whenever you deal with new technologies that may affect human rights. I also encourage you to propose laws and policies in the criminal justice system that shield judges and witnesses from undue interference and digital piracy; ensure training on digital technologies; and close the digital gap in the parts of your countries where not all people have the same access to the Internet.

Robots can support humans, but they cannot entirely replace them. It is our duty to use them in the criminal justice system in a wise and human rights compliant way.

Text of the speech (PDF)