Algorithms and AI development
The use of automated data-processing techniques raises challenges not only for the specific policy areas in which they are deployed, but also for society as a whole. The right to life, the right to a fair trial, the presumption of innocence, the right to privacy and freedom of expression, workers’ rights, the right to free elections, and the rule of law itself are all affected. The impact of ‘algorithms’ used by the public and private sectors, in particular by internet platforms, on the exercise of human rights, and the possible regulatory implications, have become among the most hotly debated questions today.
The expert study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications (DGI(2017)12) of December 2017 maps out the main concerns from the Council of Europe’s human rights perspective. While listing the possible implications for various rights enshrined in the European Convention, it concludes that all rights are potentially impacted, as the growing use of automation and algorithmic decision-making in all spheres of life is threatening to disrupt traditional power structures, as operators of algorithms (who may be public or private) gain unprecedented advantages. The study further seeks to identify possible regulatory options that member states may consider to minimise adverse effects or to promote good practices.
Advanced digital technologies and services, including AI tools, come with extraordinary promise, particularly in the form of enhanced efficiency, accuracy, timeliness and convenience across a wide range of services. Yet the emergence of these technologies is also accompanied by rising public anxiety regarding their potentially damaging effects for individuals, for vulnerable groups and for society more generally.
Given their pervasiveness in daily life, we must acquire a deeper understanding of their impact on the exercise of human rights and fundamental freedoms, and we should carefully consider how to allocate responsibility in case of adverse consequences. If we are to take human rights seriously in a globally connected digital age, we cannot allow the power of our advanced digital technologies and systems, and those who wield and derive benefits from them, to be accrued and exercised without responsibility.
Effective and democratically legitimised governance arrangements and enforcement mechanisms must be put in place to ensure that responsibility for the risks, harms and wrongs arising from the operation of advanced digital technologies is duly allocated.
A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, prepared by the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT)
Research and analysis
Public entities and non-state actors should initiate research to better understand and respond to the human rights related legal and ethical implications of algorithmic decision-making. Technological developments should be monitored and reviewed for potential negative impacts, with particular attention paid to the use of algorithmic processing techniques during elections and election campaigns.
Human rights impact assessments should be conducted before algorithmic decision-making is used in public administration. Certification and auditing mechanisms for automated data-processing techniques should be developed to ensure their compliance with human rights. Effective responses to identified negative impacts could include experimental regulation aimed at protecting individual rights and guaranteeing regulatory goals, provided it is accompanied by systematic monitoring of its effects. Public entities and non-state actors should encourage and promote human-rights-by-design and ethics-by-design approaches, as well as the adoption of stronger risk-assessment tools, to foster the development of software that upholds and protects fundamental values and basic ethical and societal principles.
Public entities should be held accountable for the decisions they take on the basis of algorithmic processes, whether these processes are used to prepare their decisions or to actually take them. Effective mechanisms should be adopted that enable redress for individuals who are negatively affected by algorithmically informed decisions.
Algorithms are often viewed as black boxes by consumers and regulators alike. Demands for more algorithmic transparency have therefore been growing in public and political debate, including requests that algorithms be reviewed prior to their use by independent auditors, regulators and the public. In this context, the key is not the provision of all imaginable data, but rather the notion of “effective transparency”.
Institutions that use algorithmic processes should be encouraged to provide easily accessible explanations of the data used by the algorithm, the procedures followed and the criteria on the basis of which decisions are proposed. Moreover, industries that develop the analytical systems used in algorithmic decision-making and data-collection processes should raise awareness and understanding of the possible biases that may be embedded in the design of algorithms.
Enhanced public awareness and discourse are crucially important. All available means should be used to inform the general public so that users are empowered to critically understand and deal with the logic and operation of algorithms. This can include, but is not limited to, information and media literacy campaigns.
Considering the complexity of the field, however, there is also a need for additional institutions, networks and spaces in which different forms of algorithmic decision-making are analysed and assessed through a trans-disciplinary, problem-oriented and evidence-based approach.
The Council of Europe expert committee on automated processing and different forms of artificial intelligence (MSI-AUT) is developing detailed guidelines for member states to curb the negative human rights impacts of algorithms in the public and private sector and to enhance their benefits for society.