LIBE Public Hearing “Artificial Intelligence in Criminal Law and its use by the police and judicial authorities in criminal matters”, organised by the European Parliament / Committee on Civil Liberties, Justice and Home Affairs

20 February 2020, Brussels

On 6 November 2019, you received my colleague Patrick Penninckx, who presented the work of the Council of Europe on "Media Freedom, Freedom of Expression and Combatting hate speech online and offline".

He described how the Council of Europe has been involved for decades in the regulation of technologies: the EDQM (Pharmacopoeia Convention as early as 1966), Convention 108 (1981), the Oviedo Convention (1997) and its protocol prohibiting human cloning (1998), and the Budapest Convention (2001). These texts anticipated societal and technological developments and helped pave the way for an orderly and beneficial use of technologies. This is not about over-regulation, but about regulating what is needed for technology to be a benefit for all.

You have already gauged the extent to which artificial intelligence (AI) has a structuring impact on our societies and leads us to reflect on the conditions of its deployment: a legal framework is certainly necessary, and ideally a binding one, to prevent the most serious abuses.

Recently, facial recognition devices have been the subject of much debate, particularly with regard to their use in public spaces, and the European Commission has taken a position in the debate [developments to be adapted to the content of the White Paper - published on 19 February 2020].

The purpose of my keynote today is to share with you both the risks of AI being used to commit criminal offences and the opportunities AI offers to prevent crime.

I. New forms of crime

The CoE Budapest Convention is the first and only treaty worldwide on crimes against or by means of computer systems. It provides for powers to secure electronic evidence and to cooperate internationally with respect to any crime, including new forms of crime, where evidence on computer systems is involved. AI amplifies and automates crime to an exponential extent, making it ever more complicated to identify perpetrators and secure evidence.

In a way, cybercriminals already use forms of AI to commit offences in a highly efficient and automated manner, either a) to commit crimes against automated systems, or b) to use this technology to commit common crimes.

a) crimes against automated systems: such as man-in-the-middle attacks, where legitimate payments are intercepted and diverted to the accounts of money mules; the creation of thousands of Internet domains or email accounts by offenders to carry out denial-of-service attacks; automated attacks on computer systems, carried out by machines that detect vulnerabilities in real time; or interference in elections;

b) common crimes facilitated by AI: such as fraud or blackmail (for example through deep fakes, i.e. faked videos usurping the face of a third party), or making child abuse materials available, etc., while the powers of criminal justice authorities to identify offenders (and the Internet Protocol addresses used) are limited by rule-of-law safeguards and the principle of territoriality.

More efficient means to identify offenders and secure electronic evidence are therefore needed.

II. New ways to fight crime

But AI can also be used to better prevent offences (police) and conduct investigations (police and public prosecutors):

Two situations must be distinguished: real space and cyberspace.

In real space, the use of algorithms to fight crime is not new.

Criminal analysis tools, now enhanced by AI, have made it possible to cross-reference cold cases or complex cases to uncover clues that investigators may have missed.

What is new is the use of advanced technologies, such as facial recognition and AI, to prevent offences in the public space or to identify suspects.

It is precisely these debates on the use of facial recognition in public space that the Convention 108 Committee is currently working on.

Your Committee has always supported this unique flagship instrument, the modernised version of which is fully in line with the EU data protection framework (the GDPR and the Police and Justice Directive) and which allows for global convergence based on high standards of data protection.

This forum of data protection specialists, representing 70 countries from all regions of the world, has made it a priority this year to address the challenges posed by facial recognition technologies.

While the challenges to the right to data protection are obvious, I would also like to stress the impact of the use of these technologies on freedom of expression and freedom of religion, on freedom of assembly and association, on the prohibition of discrimination, which cannot be neglected.

With regard to the work of the Convention 108 Committee, which will provide guidance and direction to the countries participating in its work, I will mention some of the key points it will address. But let me stress without further ado an important step forward guaranteed by both the modernised Convention and Directive 2016/680: biometric data processed to uniquely identify a person are now listed as "sensitive data" requiring an enhanced protection regime.

Moreover, this technology has specific characteristics compared with other biometric technologies, which make it necessary to demonstrate the absolute necessity of its use in certain cases. Since it requires no physical contact with the data subject, it may indeed be deployed without the knowledge of the persons concerned, and its proven error rates temper its reliability.

Its use raises in particular the question of the legal basis for this processing of sensitive data, its necessity with regard to the purpose pursued, its proportionality, its transparency and its security.

These are all key data protection principles that must be applied in this area.

To conclude on this point, it is clear to me that the particular sensitivity of facial recognition requires that its development and deployment be properly regulated, and I hope that the work of the Convention 108 Committee will serve as a framework for many countries around the world.

Real space is also where crime prediction tools (predictive policing) such as PredPol in the United States operate, whose results have been criticised after almost 10 years of use: the software reveals nothing that police officers do not already know.

In cyberspace, the phenomenon is more recent.

The fight against certain forms of crime has improved: detection of the origin of child abuse images has been greatly enhanced by the latest technologies, and corruption prevention is greatly facilitated by AI-based technology.

In cyberspace, AI also offers the ability to search data and metadata for behaviours likely to characterise offences (terrorism, organised crime).

ECHR, 13 September 2018, Big Brother Watch and others v. United Kingdom

Seised in the wake of the Snowden revelations, the Court examined three different types of surveillance: mass interception of communications, intelligence sharing, and the obtaining of communications data from communications service providers.

This is not the first time the Court has considered mass interception. In June 2018, the Court concluded that Swedish legislation and practice in the field of signals intelligence did not violate the European Convention on Human Rights (Centrum För Rättvisa v. Sweden). In particular, it considered that the Swedish system offered adequate and sufficient safeguards against arbitrariness and the risk of abuse.

In its Chamber judgment in Big Brother Watch and others v. the United Kingdom, the Court again held that the use of a mass interception regime does not in itself violate the Convention, but observed that the United Kingdom regime did not meet the criteria set out in its case-law.

The case was heard in the Grand Chamber on 10 July 2019.

In cyberspace, AI also offers the ability to better attribute conduct and to search for evidence in the cloud.

This is the subject of the work of the Cybercrime Convention Committee, which is preparing a protocol on the gathering of evidence:

As indicated before, greater efficiency is needed to identify offenders and secure electronic evidence, in particular where offenders use AI to automate their criminal processes and move data across jurisdictions.

The future 2nd Additional Protocol to the Budapest Convention on Cybercrime will be an important part of the response. It foresees direct cooperation with service providers in other Parties to obtain subscriber information, more efficient and effective means for mutual legal assistance, and several provisions for obtaining information from other Parties in emergency situations.

All of this will be subject to rule of law – including data protection – safeguards.

We hope to have the draft of this Protocol by the end of this year.

III. Assessing the impact of the use of AI in the judgment phase

The use of AI in the judgment phase is currently being considered to assess individuals' risk of recidivism (HART in Great Britain, COMPAS in the United States).

It is a modernised application of risk assessment techniques (such as Actuarial Risk Assessment Instruments).

The scientific community is divided on the relevance of such assessments, since they amount to attributing to an individual the characteristics of the statistical group to which he or she is assigned, which would contravene the principle of individualisation of the sentence recognised in many countries and in the Charter of Fundamental Rights of the European Union (Article 49(3) on the proportionality of penalties).

Let us not forget that even if individualisation of the sentence is not recognised by the European Court of Human Rights as an independent principle (the Convention devotes no article to the necessity of penalties, from which it might be derived), the Court does check that the penalty is not excessive in relation to the facts committed.

The ECHR, as part of its in concreto assessment, reviews the circumstances of the offences before it and examines how national courts individualise sentences: see, for example, on the restriction of prisoners' right to vote, ECHR, 6 October 2005, Hirst (No 2) v. the United Kingdom, and ECHR, 22 May 2012, Scoppola v. Italy (No 3).

The CEPEJ Charter on the use of AI in justice will be presented to you by Ms Nino Bakakuri, Judge of the Supreme Court of Georgia and representative of this Council of Europe commission. This non-binding text also encourages a certain caution about systematising these technologies: even though they are presented as a mere complement to judicial decision-making, the CEPEJ considered that their use should be envisaged with the most extreme reservations.

IV. A necessary framework for AI in criminal matters in view of the impact of this technology on human rights, the rule of law and democracy

Since 2016, there has been intense production of non-binding texts on the regulation of AI: more than 260 according to the FRA.

The work emanating from international organisations naturally deserves our full attention: some 42 recommendations, guidelines or studies from the Council of Europe relate directly or indirectly to the use of AI. Together they are a valuable guide for policy makers and system designers.

Among the instruments produced by the Council of Europe, I would highlight the following in particular:

The Recommendation of the Commissioner for Human Rights, "Unbox AI", which contains 10 key principles for the application of AI, including the need to carry out systematic human rights impact assessments (HRIAs);

Specifically in the criminal field, the CDPC's ongoing work on criminal liability for autonomous vehicles: this committee has established a working group on AI and criminal law which is working specifically on the issue of criminal liability for damage caused by autonomous vehicles. Various avenues of work are envisaged. Automated vehicles are already a reality, and it is key for policy makers to be ahead of the curve and regulate this phenomenon before it is too late and we end up in a jungle.

But it seems that we now need to go further, with potentially binding instruments for those uses of AI that have a significant impact on the founding pillars protected by the Council of Europe.

An ad hoc Committee on Artificial Intelligence (CAHAI) was mandated by the Committee of Ministers "to examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on Council of Europe standards in the field of human rights, democracy and the rule of law.”

This committee is currently carrying out a mapping of existing legal frameworks already applicable to AI, including binding instruments, and an analysis of the risks and opportunities of this technology.

A broad multi-stakeholder consultation will take place in order to enrich the feasibility study and to propose one or more legal instruments in response to the challenges of this technology.

For my part, I call for a binding framework in the form of a Framework Convention, open also to states beyond Europe.

Special arrangements for ex ante control, such as certification, could be envisaged.

The idea of organising the professions concerned, especially data science, around deontology or ethics committees could also be supported.