"Artificial Intelligence and Electoral Integrity"

I. Introduction

With a theme at the crossroads between IT and better election governance, the 19th European Conference of Electoral Management Bodies (the 19th EMB Conference) is looking to the future. For the past 15 years, electoral stakeholders, primarily political parties and candidates, have used digital technologies and big data mainly to better shape election campaigns by analysing the overall sentiments and opinions of the electorate. The advent of artificial intelligence systems and their integration into the electoral machinery is further transforming our approach to electoral processes. If machines can help human beings, this can only be welcome … as long as our democracies do not suffer.

Due to the innovative nature of the topic, the 19th EMB Conference will not necessarily provide clear-cut answers as to the extent of AI's impact on electoral processes. The exponential evolution of AI systems and their already partly measurable impact on the functioning of our democratic institutions nonetheless require that this edition of the European Conference of Electoral Management Bodies discuss the subject from the perspective of electoral processes and the work of electoral administrations.

Before analysing the questions concerning the impact of the use of artificial intelligence systems on electoral processes, it seems important to propose definitions of the concepts submitted for debate.

Artificial intelligence systems (AI systems) have been jointly defined by the Council of Europe and the Alan Turing Institute in a joint publication entitled Artificial Intelligence, human rights, democracy and the rule of law – A primer (2021) as “algorithmic models that carry out cognitive and perceptual functions in the world that were previously reserved for thinking, judging, and reasoning human beings.”

Electoral integrity can be defined as the set of norms, principles and values inherent to democratic elections and which apply to the entire electoral process. It encompasses, in particular, the ethical behaviour of all electoral actors as well as respect for the principles of equity, transparency and accountability throughout the electoral process. Together these elements help to consolidate the credibility of an election and citizens’ confidence in it, thereby strengthening our democratic societies and even accelerating democratic reforms.

By linking these two concepts, the 19th European Conference of Electoral Management Bodies aims to assess the impact of the use of artificial intelligence systems in the organisation and conduct of electoral processes as well as on the work of electoral administrations and the lessons to be learned.

As Sarah M.L. Bender, professor at the University of Michigan Law School, points out in an article entitled Algorithmic Elections, published in the Michigan Law Review, “scholars have sounded the alarm on a variety of ‘algorithmic harms’ resulting from AI’s use in the criminal justice system, employment, healthcare, and other civil rights domains. Many of these same algorithmic harms manifest in elections and voting but have been underexplored and remain unaddressed.”

The conference will first recall the Council of Europe’s acquis in the field of AI through its conventions, recommendations and expertise as well as the fundamental principles at stake (introductory session of the conference). Participants will then be invited to discuss cross-cutting issues at stake in the use of AI systems in electoral processes from a practical perspective as well as in relation to the work of electoral administrations: AI and fairness in electoral processes (first session); the impact of AI on turnout and voter choice vs. data protection (second session); AI and supervision and transparency of electoral processes (third session); and AI and harmful content (fourth session). Finally, participants will be invited to adopt the conclusions of the Conference during the closing session.

II. The Council of Europe’s acquis and the principles at stake

The Council of Europe, through its international instruments, its recommendations and the work of its experts’ committees, is working on the impact of artificial intelligence on our democratic societies and recalls the importance of respecting the principles at stake. The introductory session of the Conference will present this essential and developing acquis.

Among the conventions and recommendations relevant to the issue of the impact of AI systems on our democratic societies, the following sources can be mentioned in a non-exhaustive way:

  • the European Convention on Human Rights and Article 3 of its First Protocol,
  • the Convention on Cybercrime (ETS No. 185) (“Budapest Convention”),
  • the Convention for the Protection of Individuals with regard to the Processing of Personal Data (Convention 108+),
  • Recommendation Rec(2004)15 of the Committee of Ministers to member States on electronic governance (“e-governance”),
  • Recommendation CM/Rec(2009)1 of the Committee of Ministers to member States on electronic democracy (“e-democracy”),
  • Recommendation CM/Rec(2017)5 of the Committee of Ministers to member States on standards for e-voting,
  • Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impact of algorithmic systems,
  • Recommendation CM/Rec(2022)12 of the Committee of Ministers to member States on electoral communication and media coverage of election campaigns.

Furthermore, thanks to the work of the European Committee on Democracy and Governance (CDDG), on 9 February 2022 the Committee of Ministers of the Council of Europe adopted the Committee of Ministers’ Guidelines on the use of information and communication technologies (ICT) in electoral processes in Council of Europe member States.

The Guidelines “aim to contribute to ensuring the integrity of the electoral process and therefore enhancing citizens’ trust in democracy. The guidelines identify the requirements and safeguards to be introduced into the legislation of Council of Europe member States in order to address the use of ICTs in the different stages of the electoral process.” They cover the use of ICT solutions by or on behalf of electoral authorities at all stages of the electoral process, with the exception of e-voting and e-counting, which are covered by the above-mentioned Recommendation CM/Rec(2017)5 on standards for e-voting and therefore fall outside the scope of these Guidelines. However, hybrid forms of counting, which make use of some ICTs but do not fall under the definition of e-voting according to the above-mentioned Recommendation, are covered by the Guidelines.

The Guidelines emphasise that “the use of ICT, like the use of any other technology in electoral processes, should comply with the principles of democratic elections and referendums and other relevant principles and must be balanced against other core considerations such as security and accessibility for users.” They also stress that “democratic elections and referendums should be held in accordance with certain principles that grant their democratic status.”

Relevant work of the Council of Europe also includes the adoption in 2021 of two documents that are the result of the work of the former Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI). These are the publication entitled A legal framework for AI systems (2021); and the document entitled Potential elements of a legal framework on artificial intelligence, based on Council of Europe’s standards on human rights, democracy and the rule of law.

Finally, the Council of Europe’s ongoing work on artificial intelligence includes the Committee on Artificial Intelligence (CAI), which was established on 1 January 2022, replacing the former Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI). The CAI aims to adopt, by 31 December 2024, an appropriate legal instrument on the development, design and application of artificial intelligence systems, based on the Council of Europe’s standards on human rights, democracy and the rule of law, and which is conducive to innovation, in accordance with relevant decisions of the Committee of Ministers (see also this publication).

In terms of the principles at stake, the Conference will discuss in the introductory session the risks and opportunities that AI systems can create in the context of electoral processes and their impact on the work of electoral administrations. The aim will be to determine how AI, at the various crucial stages of an electoral process, can improve the organisation and conduct of an election in an ethical manner, respecting fundamental freedoms such as freedom of expression, the principles of Europe’s electoral heritage such as universal, equal, free and secret suffrage, as well as the procedural safeguards for implementing such principles, as developed in the Code of Good Practice in Electoral Matters of the Council of Europe’s Venice Commission.

It will be relevant to discuss the real or perceived tensions between fundamental freedoms and the above-mentioned electoral principles when using AI systems in electoral processes, since these principles and freedoms cannot always be implemented to the same degree at the same time. It is moreover necessary to assess how the principles of secret suffrage and of personal data protection, in particular under the above-mentioned Convention 108+, can be reconciled with the principle of transparency of elections.

III. AI and fairness in electoral processes

Questions

How to combine AI with fair and balanced electoral campaigns?

How can AI affect gender parity?

Context

The use of AI systems during electoral campaigns by various electoral actors, primarily political parties and candidates, may pose risks to the fairness of the campaign and ultimately to a free and informed choice by the voter.

From the perspective of political parties and candidates, there is certainly more scope for them to optimise an election campaign with AI: for example, the company Advanced Symbolics uses an artificial intelligence called Polly “to identify trends, test scenarios, predict changes and track successes that are critical” to the success of a company. However, the downside may be the risk of misusing AI tools to manipulate ideas and messages, create selective exposure of voters to politically oriented information and consequently distort information and reality. It will also be essential to debate how AI systems affect gender representation in parties and, ultimately, in the distribution of seats.

Furthermore, it might be interesting to analyse how electoral campaigns could use AI to test messages, ads, social media campaigns and radio broadcasts to see what will resonate with the people they are trying to motivate to support their campaigns. To take this a step further, the company Expert.ai offers an AI capability known as ‘sentiment analysis’, which aims to understand the emotions expressed in social media posts. It is conceivable that such AI could be exploited by parties and candidates during electoral campaigns to determine where to focus their financial and human resources and which issues to emphasise in a campaign to increase their chances of success, with, again, the potential risk of parties and candidates misusing such data to the disadvantage of the electorate, thereby distorting information and the reality of a political situation.
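
To illustrate what sentiment analysis involves at its simplest, the sketch below scores the tone of campaign-related posts against a small word lexicon. It is a minimal illustration only: the lexicon and posts are invented, and commercial systems such as Expert.ai’s rely on far richer language models than simple word counting.

```python
# Minimal lexicon-based sentiment scoring of campaign-related posts.
# Illustrative sketch only: lexicon and example posts are invented.

POSITIVE = {"support", "hope", "trust", "great", "progress"}
NEGATIVE = {"fail", "corrupt", "angry", "worse", "distrust"}

def sentiment_score(post: str) -> float:
    """Return a score in [-1, 1]: negative, neutral or positive tone."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = [
    "I trust this candidate to bring real progress",
    "Another corrupt promise, things will only get worse",
]
for p in posts:
    print(f"{sentiment_score(p):+.2f}  {p}")
```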

Participants will also be invited to discuss the holding of election campaigns in the metaverse, which has already begun (see for example an article on the metaverse used during the March 2022 South Korean presidential election), and the impact that such campaigns may have on the work of electoral administrations, bodies in charge of supervising the balance of campaigns, election observers and other electoral actors. The metaverse can be defined as a persistent and immersive 3D environment that is collective and shared; created through enhanced digital and physical reality; accessible via any connected device (smartphones, PCs, virtual reality headsets, tablets); and powered, where appropriate, by a blockchain-based currency.

IV. The impact of AI on turnout and voter choice vs. data protection

Question

How can AI enable a better informed voter choice and a higher turnout?

Context

From the point of view of electoral administrations, beyond the quantitative aspect of the participation rate, the Conference will address the means that AI offers electoral administrations and public authorities in general to encourage citizens to take a more critical and holistic approach to the electoral offer proposed to them, so that they can, on their own and consciously, micro-target their interests by theme, by party, and so on.

From the voters’ point of view, the digital divide is a persistent risk that could even be amplified by AI systems if the interfaces are too complex, or perceived as such, depending on the generation or level of education of the electorate concerned. Such a divide could further alienate part of the electorate from the political offer and, ultimately, from participating in elections. AI system providers must therefore offer electoral administrations and voters systems or applications that are intuitive and easy to use.

Voters use AI systems, particularly through mobile applications, with the aim of making a more informed choice about the political offer at each election. When voters indicate the types of public policies that are likely to affect them, the AI tool can alert them to any movement in these policy areas in order to keep them informed of voting initiatives and candidates that could lead to changes in their daily lives and futures (see for example the iSideWith app). Similarly, parties and candidates also use AI systems to carefully scrutinise what citizens have to say (see for instance the article The good, the bad and the ugly uses of machine learning in election campaigns).
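
As a purely illustrative sketch of the matching logic behind voting advice applications, the snippet below ranks hypothetical parties by how closely their positions agree with a voter’s stated positions on a few issues. The parties, issues and scoring scale are invented; real applications such as iSideWith use much richer questionnaires and undisclosed weighting schemes.

```python
# Minimal voting-advice matching: rank parties by agreement with a voter's
# stated positions. Parties, issues and positions are hypothetical examples.

ISSUES = ["climate", "taxation", "migration", "digital_rights"]

# Positions encoded on a -2 (strongly against) .. +2 (strongly in favour) scale.
PARTIES = {
    "Party A": {"climate": 2, "taxation": -1, "migration": 1, "digital_rights": 2},
    "Party B": {"climate": -1, "taxation": 2, "migration": -2, "digital_rights": 0},
    "Party C": {"climate": 1, "taxation": 0, "migration": 0, "digital_rights": 1},
}

def agreement(voter: dict, party: dict) -> float:
    """Average closeness across issues, normalised to 0..1."""
    diffs = [abs(voter[i] - party[i]) for i in ISSUES]  # 0..4 per issue
    return 1 - sum(diffs) / (4 * len(ISSUES))

voter = {"climate": 2, "taxation": 0, "migration": 1, "digital_rights": 2}
ranking = sorted(PARTIES, key=lambda p: agreement(voter, PARTIES[p]), reverse=True)
for p in ranking:
    print(f"{p}: {agreement(voter, PARTIES[p]):.0%} agreement")
```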

However, as Jessica Heesen points out in an article called AI and Elections – Observations, Analyses and Prospects, “current voting advice apps barely contain any machine-based learning processes that would allow users to be ‘walked’ through the program by a chatbot that explains the individual statements with examples or respond to questions from the users.”

Participants will be invited to discuss the influence of AI systems on the choice made by voters in the polling booth. While these systems can have a very positive dimension, they can also represent a danger of narrowing the political offer by leading to selective exposure that voters cannot control, thus altering the expression of voters’ opinions on polling day.

Questions

How to ensure voters’ data protection?

How to combine micro-targeting and data protection?

Context

In 2018, the Cambridge Analytica scandal shook the world as the public found out that the data of up to 87 million Facebook profiles had been collected without user consent and used for ad-targeting purposes in the American presidential campaigns of Ted Cruz and Donald Trump, the Brexit referendum, and more than 200 elections around the world. The scandal brought unprecedented public awareness to a long-brewing trend – the unchecked collection and use of personal data – which has been intruding on citizens’ privacy and undermining democracy by enabling ever-more-sophisticated voter disinformation.

The influence of AI systems is exerted, in particular, through micro-targeted advertisements on social networks, which target voters according to their social, political and psychological profiles and rely on big data and machine learning to influence citizens’ emotions and, ultimately, their choice in the polling booth. While this practice may appear to be in line with existing principles such as freedom of expression, micro-targeting raises the question of the accountability of parties and candidates who use it, particularly vis-à-vis the bodies responsible for supervising the electoral campaign, be they electoral administrations or other competent bodies. Micro-targeted online sponsored political advertising and automated electioneering run the risk of restricting the political offer on social networks and distorting the electorate’s behaviour, and thus the presumed outcome of an election.
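
To make the mechanism of profile-based segmentation more concrete, the following sketch clusters synthetic voter profiles into groups that could, in principle, each receive different tailored messages. The features and data are invented for illustration; real micro-targeting pipelines operate on far larger sets of personal data, which is precisely what raises the data-protection concerns discussed here.

```python
# Illustrative segmentation of an electorate: voters described by a few
# attitudinal/behavioural features are clustered into groups that could
# receive different messages. All data below are synthetic.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: interest in economy, interest in environment, social media activity.
voters = np.vstack([
    rng.normal([0.8, 0.2, 0.5], 0.1, size=(50, 3)),  # economy-focused group
    rng.normal([0.2, 0.9, 0.7], 0.1, size=(50, 3)),  # environment-focused group
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(voters)
for s in range(2):
    centre = voters[segments == s].mean(axis=0)
    print(f"Segment {s}: {(segments == s).sum()} voters, "
          f"profile (economy, environment, activity) = {np.round(centre, 2)}")
```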

The Cambridge Analytica scandal, other similar situations and the exponential development of micro-targeting have seriously undermined the protection of citizens’ personal data, and call for a strong reaction and legal measures from States. As mentioned earlier, the relevant reference instrument in this respect is the Council of Europe’s Convention for the Protection of Individuals with regard to the Processing of Personal Data (Convention 108+). The participants will therefore be invited to discuss possible solutions to ensure voters’ data protection in the context of AI-affected elections.

V. AI vs. supervision and transparency of electoral processes

Question

How to ensure oversight of AI tools in electoral processes?

Context

While AI gives considerable hope for improving our democratic societies, it also raises fears of loss of control, surveillance, dependency and discrimination from the viewpoint of voters and more broadly citizens. Transparency must therefore be ensured in the deployment of AI systems. The question will be to what extent such systems can be deployed at different stages of an electoral process without human intervention and/or control by competent bodies. Given the transparency issues inherent in AI systems, human and democratic control is indeed of the utmost importance with regard to electoral processes and related decision making in order to ensure the integrity of AI systems and ultimately of the electoral processes themselves. The participants will debate the use of AI with regard to citizens’ demands for transparency.

From the point of view of election observers, AI also undoubtedly raises the challenge of ensuring the continuity of the election observation exercise partly in a dematerialised manner, including in the metaverse if campaigns develop in this universe.

The private sector and the Tech Giants – an expression referring to the group comprising the four major web companies Google, Apple, Facebook and Amazon, and by extension other major players in the digital economy such as Microsoft, Yahoo, Twitter and LinkedIn – are major actors in electoral processes and also bear a major responsibility to oversee the proper conduct of elections through their prism, i.e. the phases of an electoral process that take place via their platforms and applications. The participants will address the issue of the accountability of the Tech Giants to other electoral stakeholders and, in particular, to electoral administrations.

Question

Can AI reliably predict election outcomes?

Context

In some cases, AI has been shown to be able to predict election results more accurately than mathematical prediction methods (see an article on the latest US presidential elections for example). For instance, the company KCore Analytics uses AI to predict elections: it developed an artificial intelligence model and collected a huge number of people’s opinions from social media, in particular from Twitter (almost one billion tweets). According to the company, its systems analyse “real-time social media to provide a quicker and remarkably accurate method of predicting election trends.” The question is whether such systems could be used to predict voter turnout or even election results. The use of this technology by electoral administrations could, for example, make it possible to predict in advance how many people would vote by mail or in person, and thus increase efficiency and preparedness for voting. It could also reduce waiting times and ensure that elections run smoothly and that results are presented as quickly as possible. Considering the above-mentioned risks regarding the freedom of voters’ choice in any election, the participants will discuss the pros and cons of the use of AI systems for predicting election outcomes.
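
As a simplified illustration of how social media signals might be turned into an estimate of candidate support, the toy sketch below counts positive and negative mentions of two hypothetical candidates and normalises them into shares. KCore Analytics’ actual methodology is not reproduced here; all candidate names, lexicons and posts are invented.

```python
# Toy sketch: estimate relative candidate support from social media posts by
# counting positive minus negative mentions per candidate. Invented data only;
# real systems use far richer models, weighting and demographic correction.

from collections import Counter

POSITIVE = {"support", "vote", "win", "love"}
NEGATIVE = {"never", "lose", "against", "hate"}

def mention_score(post: str, candidate: str) -> int:
    words = {w.strip(".,!?") for w in post.lower().split()}
    if candidate.lower() not in words:
        return 0
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "I will vote Alba, love her programme",
    "Never voting Alba again",
    "Borg will win this, full support",
    "Borg is against everything I care about",
    "Full support for Alba",
]
candidates = ["Alba", "Borg"]
scores = Counter({c: sum(mention_score(p, c) for p in posts) for c in candidates})
total = sum(max(v, 0) for v in scores.values()) or 1
for c in candidates:
    print(f"{c}: net score {scores[c]}, estimated share {max(scores[c], 0) / total:.0%}")
```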

VI. AI and harmful content

Question

How can AI tools help improve good governance of elections by enabling detection of hate speech, misinformation and “fake news” during electoral processes?

Context

As raised in the conclusions of the 15th European Conference of Electoral Management Bodies, held in Oslo on 19-20 April 2018, “misinformation, disinformation and “fake news” during electoral campaigns are a major challenge for democratic elections and compromise the level playing field amongst political contestants.” AI-based systems have been used to create and facilitate the massive dissemination of disinformation and “fake news”, mainly on social media platforms, via political adaptive social bots – automated computer programs designed to simulate the behaviour of a human being or to perform repetitive tasks – and deepfakes – a video or sound recording that replaces someone’s face or voice with that of someone else, in a way that appears real (Cambridge Dictionary’s definition). These actions do not necessarily originate from the contending parties and candidates. While these phenomena are not new, when powered and amplified by AI they can lead to a harmful informational disorder in which citizens no longer have the full capacity to form a free and informed opinion, leading to an imbalance in the campaign and ultimately influencing the electoral decision.

On the other hand, the same systems have also been used to automatically detect and moderate harmful content on these platforms via fact-checkers and content-moderation algorithms. There are many examples of how AI can improve electoral campaigns in an ethical way. AI systems can help detect misinformation (“fake news”) but also identify biased information supply by analysing word and topic choices. For instance, Meta indicates using AI to identify potentially false stories: Tessa Lyons, Product Manager at Meta, says that “in the US, we can also use machine learning based on past articles that fact-checkers have reviewed. And recently we gave fact-checkers the option to proactively identify stories to rate.” With such tools, AI can provide alternative information to that which was initially detected as false or distorted. Micro-targeting campaigns can also be deployed to help educate voters on a variety of political issues and enable them to make up their own minds. On the subject of AI in elections and the above issues, reference can be made to the white paper published by Germany’s Platform for Artificial Intelligence, Lernende Systeme.
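
By way of illustration of the machine-learning workflow described above, the following sketch trains a small classifier on headlines previously reviewed by fact-checkers and uses it to flag a new story for review. The training examples are invented and the model is deliberately minimal; production systems such as Meta’s rely on far larger corpora and many additional signals.

```python
# Minimal sketch of machine-learning-based flagging of potentially false
# stories: train on previously fact-checked headlines, then score new ones.
# Training data are invented toy examples for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Headlines reviewed by fact-checkers: 1 = rated false, 0 = rated accurate.
headlines = [
    "Secret ballot boxes destroyed overnight, officials silent",
    "Miracle cure suppressed by election authorities",
    "Electoral commission publishes provisional turnout figures",
    "Observers report orderly voting in most polling stations",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

new = ["Officials secretly destroyed ballots, report claims"]
prob_false = model.predict_proba(new)[0][1]
print(f"Probability the story needs fact-checker review: {prob_false:.0%}")
```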

In this respect, the work of the European Parliament is worth mentioning. In a document called Computational propaganda techniques, the institution states that “the techniques used by anti-democratic state and non-state actors to disrupt or influence democratic processes are constantly evolving. The use of algorithms, automation and artificial intelligence is boosting the scope and efficiency of disinformation campaigns and related cyber-activities.” Researchers at Oxford and Yale Universities estimate a 50% chance that AI will outperform humans in all tasks in less than half a century. It is therefore crucial that AI systems be regulated by law, through the development of norms and standards as well as codes of ethics in the context of electoral processes. It is also crucial that providers of AI platforms, services and systems ensure greater transparency of their systems, including through regulated self-regulation. The ongoing work of the Council of Europe and the European Union provides encouraging examples.

The participants will be invited to discuss the dual role of artificial intelligence in creating and facilitating, as well as detecting and moderating, harmful content during electoral processes. The participants will also be invited to discuss further contributions of AI systems to electoral processes, in particular through moderation tools for electoral content on social networks, deployed by the Tech Giants themselves but also by the competent regulatory authorities, with the ultimate objective of eradicating hate speech, deepfakes and the various manipulations present on the platforms and, ultimately, of limiting as much as possible the distortion of the electoral offer.