Khan’s use of AI not only highlights the transformative power and new possibilities of technology in political campaigns and elections in general, but also raises critical questions about its impact on electoral integrity. Its ability to engage and mobilize supporters despite legal and physical constraints underscores both the immense opportunities and the significant challenges that AI brings to the democratic process. It is therefore essential for electoral stakeholders to understand and navigate the complexities of AI in elections. This is precisely why our first executive workshop on mastering AI for electoral stakeholders in the Asia-Pacific region took an in-depth look at building a democratic foundation for AI in electoral processes. Over three days in Kuala Lumpur, Malaysia, the workshop brought together representatives of electoral management bodies (EMBs) and civil society organizations from 19 countries in the Asia-Pacific region. It explored the five pillars necessary for building a democratic foundation when considering the use of AI in electoral processes: the program addressed AI culture, delved into AI ethics and human rights, examined AI content curation and moderation, discussed regulation and legislation, and looked at how AI can improve electoral management.
Five pillars, five regions: these workshops will be deployed in different parts of the world in the coming months. This inaugural workshop is therefore the ideal starting point for exploring the first pillar in our series of articles on this global comparative project. Each workshop will be followed by an article, each highlighting one of these critical pillars.
Pillar #1: Mastering AI
To build what we call a democratic AI foundation, it is essential to understand the basic technical details of modern AI systems, the areas in which AI is used, and the key issues associated with it; this was the very first learning objective of our program. When EMB officials were asked what words come to mind when they think of AI, terms like “automatic”, “intelligent” and “futuristic” were mentioned, but also associations such as “complex,” “dangerous,” “wrong,” and “scary.” Such responses paint a clear picture of many participants’ mixed feelings: they see AI as an opportunity to protect and streamline electoral processes, while harboring concerns about what it will mean for maintaining electoral integrity in the future. To better understand the potential risks and challenges, as well as the opportunities, related to AI, it is essential that electoral stakeholders understand how AI works, where and why it might not work, where it could be useful and where it could be harmful.
AI is an umbrella term that refers to a variety of related technologies. The OECD defines AI as “a machine-based system that, for explicit or implicit purposes, infers, from the information it receives, how to generate results such as predictions, content, recommendations or decisions that can influence physical or virtual environments” (OECD, 2019). Yet when most people think of AI, they imagine a ChatGPT prompt or a deepfake, not other types of applications. In reality, AI covers a vast range of technologies: chatbots and generative image tools have very little in common with software that, for example, demarcates electoral districts. This shows how important it is to understand key terms, such as the difference between generative AI (a subset of machine learning capable of generating content such as text, images or other media) and discriminative AI, where models are used to classify, analyze or separate data.
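To make this distinction concrete, here is a minimal, purely illustrative sketch in Python, not drawn from the workshop materials: a small discriminative model that classifies short messages as election-related or not, alongside a toy generative model that produces new text. The example data, the logistic regression classifier and the word-level Markov chain are hypothetical choices for illustration only; real systems are far more sophisticated.

```python
# Illustrative sketch (hypothetical data and model choices) contrasting
# discriminative AI (classifying existing data) with generative AI
# (producing new content).

import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# --- Discriminative example: classify messages as election-related or not ---
texts = [
    "polling station opens at 8am",
    "voter registration deadline extended",
    "great deals on new phones",
    "win a free holiday now",
]
labels = [1, 1, 0, 0]  # 1 = election-related, 0 = unrelated

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)           # turn text into word-count features
classifier = LogisticRegression().fit(X, labels)

query = vectorizer.transform(["where is my polling station"])
print("election-related?", bool(classifier.predict(query)[0]))

# --- Generative example: a toy word-level Markov chain that produces new text ---
corpus = "voters cast ballots at polling stations and officials count ballots".split()
transitions = {}
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions.setdefault(current_word, []).append(next_word)

word = random.choice(corpus)
generated = [word]
for _ in range(6):
    # pick a plausible next word based on what followed it in the corpus
    word = random.choice(transitions.get(word, corpus))
    generated.append(word)
print("generated:", " ".join(generated))
```

The first model only separates inputs into categories; the second produces new output that did not exist in its training data. The same basic difference underlies, on a vastly larger scale, tools such as spam filters on the one hand and chatbots or image generators on the other.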
Although public awareness of AI has increased since the release of ChatGPT in November 2022, which has also led many EMBs to consider and design chatbots to answer questions and provide election information, research on the technology dates back to the mid-20th century, with substantial advances in the 1990s and early 2000s, notably in image recognition models, natural language processing and ensemble methods. Today, AI systems are everywhere: from autocomplete on keyboards, spam and phishing filters in email, voice assistants, and AI summaries and bots on social media platforms, to biometric systems, electoral roll management and predictive analytics in electoral administration.
At the first event in Kuala Lumpur, many EMBs said they had already discussed the use of AI in elections. However, before taking further steps, they stressed that it was crucial for them to first build capacity and increase knowledge about AI within their institutions. Proficiency in AI is a prerequisite for making informed decisions about where deploying AI could help and improve a process, and where it could complicate or harm a process that is already working well.
This need goes beyond knowledge of AI within electoral bodies. Many participants emphasized that broader AI awareness among voters, as well as increased resources for civic education, are essential. Participants shared that this could be an area where EMBs and civil society could work together to close the AI knowledge gap. It was also noted that the current lack of civil society oversight of how electoral stakeholders use AI needs to be specifically addressed.
This focus on collective efforts echoes one of the key takeaways from the workshop: harnessing the benefits of AI, as well as addressing any AI-related challenges in elections, such as AI-generated misinformation and ethical concerns, requires a holistic approach involving all actors.
Despite the unique electoral context of each country, many challenges and opportunities are common to all. There was broad consensus on the importance of coming together, sharing expertise and continuing this collaboration to combat any risks associated with AI in elections.
Our workshop in Kuala Lumpur was a crucial step towards better understanding the complexities of AI in elections. By understanding how AI works, where it can be beneficial, and where it can pose risks, electoral stakeholders can make informed decisions that preserve the integrity of elections.
As we continue this series of workshops in different regions, our goal remains constant: to build a democratic foundation for AI in electoral processes. Improving AI knowledge is not just about keeping pace with technological advancements; it’s about ensuring that democracy thrives in the digital age. We look forward to continued collaboration and shared learning that will enable electoral stakeholders around the world to harness AI responsibly and effectively.
Looking ahead, our next workshop for the Western Balkans and Eastern Europe region will take place in Tirana, Albania during the first week of December. The next article will discuss region-specific insights and delve into the second pillar of a “democratic AI foundation”: AI ethics and human rights.
Please note that this is the second article in a series; read the first: A democratic foundation for electoral AI #1