Session 5: Facial recognition
Mirror of our souls: learning Cicero’s lessons and addressing facial recognition risks
3 July (Friday)
3pm-4pm Facial Recognition
- Sandra AZRIA, Attorney at law, Council of Europe expert
- Tamas MOLNAR, Programme Officer, Legal Research, European Union Agency for Fundamental Rights (FRA)
- Carly KIND, Director, Ada Lovelace Institute
- Vincent GRAF, Strategic Technology Advisor, ICRC
Visit the dedicated webpage on Data Protection Views from Strasbourg in Visio (1-3 July).
Facial recognition techniques raise many issues, in particular for private life, human dignity and human rights at large. The public views them rather negatively and is reluctant about their deployment, whether by public authorities or by the private sector, depending on the type of use. Beyond a strict prohibition in specific sectors (schools, employment…), the use of these techniques should be governed by legislation and respect the principles set by international legal instruments on the protection of private life and personal data. It is furthermore crucial to minimise the risks of discrimination, as well as other risks to the rights of individuals; particular attention should be paid both to the accuracy of the results drawn from facial recognition and to a careful assessment of the reliability of the systems. Setting up quality labelling schemes for facial recognition technologies, wide consultation of all stakeholders and affected groups, and the constitution of ethics groups or committees are among the recommended means of ensuring that rights and principles are upheld.
However, the issues of accuracy and bias should not obscure the more fundamental societal impacts of facial recognition technologies. Ethical, societal and, more generally, public-interest issues should remain at the heart of reflections on the topic, which should also be considered from the point of view of the different uses (passive or remote uses versus active ones) and the different functions (face detection, face matching, face classification…). In this respect, the moratoria observed in some countries or adopted by some companies (Amazon or IBM, for example) are welcome and interesting.
While facial recognition techniques, and the algorithms behind them, can be useful, notably for police investigations or for the ICRC's searches for victims and displaced people, they require strict precautions and rigour in order to protect people, which confirms once again that protecting data equates to protecting individuals.
The Q&A session addressed how the Committee of Convention 108 can further handle these issues, in particular in the context of the COVID-19 pandemic; how to enable the right to consent in certain circumstances; the criteria for a formal red line on the deployment of facial recognition; the effectiveness of self-learning techniques and their impact in this context; and the effectiveness of legal instruments, notably the European Convention on Human Rights, in the face of such technologies.