From potentially brand-damaging ethical risks to regulatory uncertainty, AI poses challenges for investors. But there is a way forward.
NORTHAMPTON, MA / ACCESSWIRE / June 18, 2024 / AllianceBernstein
By Saskia Kort-Chick, Director of Social Research and Engagement-Responsibility, and Jonathan Berkow, Director of Data Science-Equities
Artificial intelligence (AI) poses many ethical questions that can translate into risks for consumers, businesses and investors. And AI regulation, which is evolving unevenly across multiple jurisdictions, adds to the uncertainty. In our view, the key for investors is to focus on transparency and explainability.
Ethical issues and risks in AI start with the developers who create the technology. From there, they move to developer customers – companies that are integrating AI into their businesses – and then to consumers and society at large. Through their holdings in AI developers and companies that use AI, investors have exposure to both ends of the risk chain.
AI is developing rapidly, faster than most people can understand it. Among those trying to catch up are global regulators and lawmakers. At first glance, they appear to have made brisk progress in recent years: many countries have published AI strategies, and others are about to introduce them (Display).
In reality, progress has been uneven and far from complete. There is no uniform approach to regulating AI across jurisdictions, and some countries introduced their regulations before ChatGPT's launch in late 2022. As AI proliferates, many regulators will need to update, and possibly expand, the work they have already done.
For investors, regulatory uncertainty compounds other AI risks. To understand and evaluate how to manage these risks, it helps to have an overview of the AI business, ethical and regulatory landscape.
Data risks can harm brands
AI involves a range of technologies aimed at performing tasks normally done by humans, in a human-like manner. AI and business intersect chiefly through generative AI, which spans various forms of content generation, including video, voice, text and music, and through large language models (LLMs), a subset of generative AI focused on natural language processing. LLMs serve as foundation models for a range of AI applications – such as chatbots, automated content creation, and the analysis and synthesis of large volumes of information – that companies are increasingly using in their customer engagement.
However, as many companies have found, AI innovations can carry potentially brand-damaging risks. These may arise from biases inherent in the data on which LLMs are trained. They have resulted in, for example, banks inadvertently discriminating against minorities when granting home-loan approvals, and in a US health insurance provider facing a class action alleging that its use of an AI algorithm caused requests for extended care for elderly patients to be wrongly denied.
Bias and discrimination are just two of the risks targeted by regulators that should be on investors' radars; others include intellectual property rights and data privacy. Risk-mitigation measures – such as developers' testing of the performance, accuracy and robustness of their AI models, and the transparency and support they provide to companies implementing AI solutions – should also be examined.
Dive Deep to Understand AI Regulations
The AI regulatory environment is evolving in different ways and at different rates across jurisdictions. The most recent developments include the European Union's (EU) Artificial Intelligence Act, which is expected to come into force around mid-2024, and the UK government's response to the consultation process triggered by the publication last year of its AI regulation white paper.
The two efforts illustrate how regulatory approaches to AI can differ. The UK adopts a principles-based framework that existing regulators can apply to AI issues in their respective areas. In contrast, the EU law introduces a comprehensive legal framework with risk-based compliance obligations for developers, businesses, importers and distributors of AI systems.
Investors, in our view, should do more than just examine the specifics of each jurisdiction's AI regulations. They should also familiarize themselves with how jurisdictions handle AI issues through pre-existing laws outside of AI-specific regulation – for example, copyright law in cases of data copyright violations, and labor legislation in cases where AI affects labor markets.
Fundamental analysis and engagement are essential
A good rule of thumb for investors trying to assess AI risk is that companies that proactively and fully disclose their AI strategies and policies are likely to be well prepared for new regulations. More generally, fundamental analysis and issuer engagement – the foundations of responsible investment – are essential to this area of research.
Fundamental analysis should delve not only into AI risk factors at the company level, but also across the business chain and into the regulatory environment, testing insights against the principles of responsible AI (Display).
Engagement conversations can be structured to cover AI issues not only as they affect business operations, but also from an environmental, social and governance perspective. Questions investors should ask boards and management include:
- AI Integration: How has the company integrated AI into its overall business strategy? What are some specific examples of AI applications within the enterprise?
- Board oversight and expertise: How does the board ensure it has sufficient expertise to effectively oversee the company’s AI strategy and implementation? Are there any specific training programs or initiatives?
- Public commitment to responsible AI: Has the company published a formal policy or framework on responsible AI? How does this policy align with industry standards, AI ethical considerations, and AI regulation?
- Proactive transparency: Has the company implemented proactive transparency measures to prepare for future regulatory requirements?
- Risk management and accountability: What risk-management processes does the company have in place to identify and mitigate AI-related risks? Has responsibility for overseeing these risks been clearly delegated?
- Data challenges in LLMs: How is the company addressing the privacy and copyright challenges associated with the input data used to train large language models? What measures are in place to ensure that the data used complies with privacy regulations and copyright laws, and how does the company manage any related restrictions or requirements?
- Bias and fairness challenges in generative AI systems: What steps is the company taking to prevent or mitigate biased or unfair outputs from its AI systems? How does the company ensure that the outputs of any generative AI system it uses are fair, unbiased, and do not perpetuate discrimination or harm any individual or group?
- Incident tracking and reporting: How does the company track and report incidents related to its development or use of AI, and what mechanisms are in place to address and learn from these incidents?
- Metrics and reporting: What metrics does the company use to measure the performance and impact of its AI systems, and how are these metrics communicated to external stakeholders? How does the company demonstrate due diligence in monitoring the regulatory compliance of its AI applications?
Ultimately, the best way for investors to navigate this maze is to remain grounded and skeptical. AI is a complex and rapidly evolving technology. Investors should insist on clear answers and not be unduly impressed by elaborate or complicated explanations.
The authors would like to thank Roxanne Low, ESG Analyst in AB’s Responsible Investing team, for her research contributions.
The opinions expressed here do not constitute research, investment advice or trading recommendations and do not necessarily represent the views of all of AB’s portfolio management teams. Views are subject to revision over time.
Learn more about AB’s approach to responsibility here.
Discover additional multimedia content and other ESG stories from AllianceBernstein at 3blmedia.com.
Contact information:
Spokesperson: AllianceBernstein
Website: https://www.3blmedia.com/profiles/alliancebernstein
E-mail: (email protected)
SOURCE: AllianceBernstein