Among the 31 applications received, many of them of excellent quality, the Jury of the Stefano Rodotà Award 2023, composed of the members of the Bureau of the Committee of Convention 108, decided to grant the award to:
- in the “thesis works” category, Janis Wong for her thesis titled “Co-creating data protection solutions through a Commons”
- in the “articles” category, Sebastiao Bernardo Bruco Geraldes de Barros Vale, Katerina Demetzou and Gabriela Zanfir-Fortuna, co-authors of the article “The Thin Red Line: Refocusing Data Protection Law on ADM, A Global Perspective with Lessons from Case-Law”.
- Furthermore, as the award rules allow, the jury decided to give a special mention to Francesca Musiani and Ksenia Ermoshina for their work “Concealing for Freedom: The Making of Encryption, Secure Messaging and Digital Liberties”.
Janis Wong is a Post-Doctoral Research Associate at The Alan Turing Institute, the UK's national institute for artificial intelligence and data science, where she researches data protection, ethics, and governance in the Ethics Theme of the Public Policy programme. She was awarded her interdisciplinary PhD in Computer Science at the Centre for Research into Information, Surveillance and Privacy (CRISP), University of St Andrews. In addition to her research activities, Janis is an expert media contributor and is frequently invited to provide insights and analysis to public news organisations on topics related to technology, privacy, and data ethics.
- Janis Wong's thesis aims to create a socio-technical data commons framework that helps data subjects protect their personal data. Data protection laws and technologies limit the vast collection, processing, and sharing of personal data in our data-driven society. However, these tools may lack support for protecting individual autonomy over personal data through collaboration and co-creation. To address this, her research explores the creation of a data protection-focused data commons to encourage the co-creation of data protection solutions and rebalance power between data subjects and data controllers. Using research methods from Computer Science, Management, and Law, Janis interviewed commons experts to identify and address the multidisciplinary barriers to creating a commons, applied commons principles to a policy scaffolding to support the practical deployment of a commons, and built a commons whose effectiveness in supporting the co-creation of data protection solutions she tested through a user study. In sum, her thesis demonstrates how a data protection-focused data commons can be an alternative socio-technical solution that supports data subject agency as part of the data protection process.
Janis is currently developing her research in ongoing work, including formalising methods that facilitate participatory data governance processes and support community-driven, co-created, collaborative data protection solutions and data stewardship in areas such as artificial intelligence, children’s rights, healthcare, real estate, and fundamental rights and freedoms.
Sebastião Barros Vale serves as the EU Policy Counsel at the Future of Privacy Forum, where he follows and analyses privacy and data protection developments at the European and national levels. He completed a traineeship at the Cabinet of EU Commissioner Věra Jourová, where he contributed to the 1st Annual Review of the EU-US Privacy Shield and to the Commission Guidance on the applicability of the General Data Protection Regulation (GDPR). Before joining FPF, he was an in-house privacy expert at Johnson & Johnson. He holds an LL.M in EU Law from the College of Europe (2016, Belgium), with a thesis on alternative international data transfer mechanisms after the invalidation of the EU-US Safe Harbour Decision by the Court of Justice of the European Union.
Katerina Demetzou is a Policy Counsel for Global Privacy at the Future of Privacy Forum. Her work focuses on understanding the state of play and trends in AI regulation and data protection laws, particularly in the EU, Latin America and Africa. She holds a Bachelor's degree in Law from the National & Kapodistrian University of Athens (Greece) and an LL.M in Law and Technology from Tilburg Law School (Netherlands). She is also a PhD candidate at Radboud University (Netherlands). In her doctoral research she examines the concept of risk under the GDPR and proposes a methodology for assessing risks to fundamental rights, particularly to data protection.
Dr. Gabriela Zanfir-Fortuna is Vice President for Global Privacy at the Washington DC-based Future of Privacy Forum, where she leads the work on global privacy and data protection developments, coordinating FPF's offices in Brussels, Tel Aviv and Singapore. She is also an Associated Researcher at the LSTS Center of Vrije Universiteit Brussel. She has worked for the European Data Protection Supervisor in Brussels and the Article 29 Working Party. She holds a PhD in law, with a thesis on the rights of data subjects, and an LL.M in Human Rights. She published a comprehensive volume on the rights of data subjects (C.H. Beck, Bucharest, 2015) and is one of the co-authors of "GDPR: A Commentary" (OUP, 2020).
- This article explores existing data protection law provisions in the EU and in six other jurisdictions from around the world - with a focus on Latin America - that apply to at least some forms of the processing of data typically involved in an Artificial Intelligence (AI) system. In particular, the article analyzes how data protection law applies to “automated decision-making” (ADM), starting from the relevant provisions of the EU's General Data Protection Regulation (GDPR). Rather than offering a conceptual exploration of what constitutes ADM and how “AI systems” are defined by current legislative initiatives, the article proposes a targeted approach that focuses strictly on ADM and how data protection law already applies to it in real-life cases. First, the article shows how GDPR provisions have been enforced by courts and Data Protection Authorities (DPAs) in the EU in numerous cases where ADM is at the core of the facts considered. After showing that the safeguards in the GDPR already apply to ADM in real-life cases, even where the ADM does not meet the high threshold of its specialized provision in Article 22 (“solely” automated decision-making which produces “legal or similarly significant effects” on individuals), the article offers a brief comparative law analysis of six jurisdictions that have adopted general data protection laws (Brazil, Mexico, Argentina, Colombia, China and South Africa) visibly inspired by the GDPR or its predecessor, Directive 95/46/EC, including the provisions relevant to ADM. The ultimate goal of this study is to help researchers, policymakers and lawmakers understand how existing data protection law applies to ADM and profiling.