Artificial intelligence (AI) has grown exponentially in recent years, transforming industries around the world and becoming one of the most talked-about technologies of the decade. As its use cases multiply, concerns around ethics, data transparency and regulatory compliance have grown with it. Chloe Wade, Vice President at IDA Ireland, explores the importance of AI ethics frameworks, regulatory guidelines and internal strategies for responsible AI implementation.
The latest advances in AI have captured the world’s attention, creating headlines and sparking discussion around the world. Over 100 million weekly users flock to OpenAI’s ChatGPT, and new use cases continue to emerge as the technology’s potential is explored, from medical diagnostics to manufacturing robotics and self-driving cars. A study conducted by the Office for National Statistics last year revealed that one in six UK organisations have implemented some form of AI, contributing to a market valued at over £16.8 billion.(1)
This rapid growth raises questions about the technology’s ethical implications. Another study, by Forbes Advisor, found that more than half of the British population is concerned about the use of AI, particularly regarding misinformation, privacy, transparency and job displacement.(2)
What are these concerns, how are regulators responding, and what are the three key considerations for ensuring an ethical AI framework?
Regulatory guidance from the EU
A recent YouGov survey revealed the top two concerns about AI among UK business leaders: 50% cited future AI regulation and 46% the use of invalid or biased data.(3)
New measures are being put in place to keep AI ethically oriented, including the EU Artificial Intelligence Act, which officially came into force on 1 August 2024. Despite its rigid nature, several countries are developing frameworks similar to the European Commission’s to protect the public while encouraging organisations to take advantage of AI’s many benefits.
The UK has adopted a “pro-innovation approach” to AI regulation but has yet to pass its own law. Although a regulatory bill was proposed in March 2024, it is still under review. The EU AI Act nonetheless affects certain UK companies: those that “develop or deploy an AI system used in the EU,” according to the CBI. Instilling moral and ethical values in these models, particularly in important decision-making contexts, remains a challenge, and corporate codes of ethics and regulatory frameworks are the two main ways of implementing AI ethics.
A thorough approach to ethics and responsibility in AI software development can provide a competitive advantage over those who neglect these issues. Reporting and assessments are becoming essential as regulations such as the EU AI Act take effect, helping companies manage the risks associated with AI. Ethics is about ensuring that AI systems assist rather than replace human decision-making: AI lacks the capacity to make ethical decisions or grasp moral nuance, making human oversight necessary, especially in critical applications affecting well-being and social justice. The use of AI as a tool should be encouraged to improve worker efficiency and productivity while remaining in line with new legislation and ethical codes, such as the BCS Code of Conduct.(4)
Key steps for internal implementation of ethical AI
Ireland is one of the countries that has put in place a significant number of foundational processes to prepare for the long-term growth of the AI market. With the publication of the national AI strategy “AI—Here for Good” (5), the Irish government expects civil organisations and public services to adopt AI responsibly and innovatively to improve the delivery of current and future public services. Ireland requires all AI applications within the civil service to adhere to the seven requirements for trustworthy AI set out by the European Commission’s High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI.(6) But what should companies do internally?
- Understanding the role of AI in the business and how data is used
Businesses must first recognise the collaborative nature of AI and the positive impact it can have. In the early stages of implementation, business leaders must determine how data is processed, stored and extracted within their value ecosystem. Because organisational goals and business strategies differ from company to company, the capabilities of specific AI models, including machine learning (ML) and generative models, should be explored to determine the optimal use of the technology within operations.
Several strategies can increase the trustworthiness of AI software. Risk assessment is fundamental among them, as it allows developers and engineers to determine whether a use case is high-risk, reinforcing ethical considerations and clarifying the role of those who drive the resulting processes. For example, product-specific approaches should be used in companies that deploy advanced B2B AI software solutions internally or sell them, as the risks associated with data and technology can vary. A set of responsible AI guidelines is then developed from these assessments, outlining the key steps to mitigate risk and monitor results, covering interpretability, bias, validity and reliability. Beyond diverse internal perspectives, companies will benefit greatly from collaborating with peers, researchers and government agencies to develop ethical AI frameworks.
- Implementing change management processes and building trust
Trust remains at the heart of all of AI’s ethical challenges. While few full-fledged jobs will be automated in the near future, a growing number of tasks already are. The risk of displacement amid digital transformation has left professionals anxious about their careers, making trust-building a core tenet of any ethical AI programme. Companies may therefore consider providing resources and opportunities to help their workforce become familiar with AI technology, regardless of individual responsibilities. Identifying new roles, upskilling and reskilling are all growth and employee-enrichment methods that can be leveraged to reduce long-term anxieties.
Trust is equally paramount beyond the organisation’s walls. Companies that market and sell AI technologies must ensure that customers have full confidence that models are built responsibly. In the digital economy, there are now practical and commercial reasons, in addition to ethical and moral ones, to embrace trust during digital transformation. Companies must work to build trust in their AI products and software, and across their entire organisation, to avoid being forced out of the market by an inability to embrace radical innovation and its challenges. For example, Ireland’s National AI Strategy is deeply rooted in trust and transparency, with a core principle of “ensuring good governance to build trust and confidence so that innovation can thrive.”
- Developing and creating specialised teams
With the recently introduced legislation, companies will need to organise their business functions strategically in response. Engagement and participation across multiple organisational levels, from software developers and product managers to legal counsel and senior management, are required to implement essential, ongoing practices such as collaboratively improving the company’s AI governance framework.
The digital economy requires a responsible and dedicated cadre of AI experts, and developing young professionals to fuel the talent pipeline is more necessary than ever. The changing nature of roles and responsibilities has highlighted the challenges of skills mismatch, education and redeployment. To address these, Science Foundation Ireland’s (SFI) AI centres, ADAPT (7) and Insight (8), are committed to producing skilled graduates in the field. Ireland was also the first country in the world to develop a postgraduate MSc in AI in collaboration with industry. These opportunities demonstrate Ireland’s European, and potentially global, leadership in AI ethics; it has been recognised as a European centre for AI ethics, with organisations such as the Dublin-based Idiro AI Ethics Centre supporting businesses with compliance, innovation and responsible practices.
By Chloe Wade, Vice President of UK International Financial Services at IDA Ireland.
(3) https://business.yougov.com/content/47618-risks-and-opportunities-around-ai
(4) https://www.bcs.org/media/2211/bcs-code-of-conduct.pdf
(5) https://enterprise.gov.ie/en/publications/publication-files/national-ai-strategy.pdf
(6) https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai