AI ethics and governance has become a noisy space.
At the last count, the OECD's monitoring had catalogued more than 1,800 national-level documents covering initiatives, policies, frameworks and strategies as of September 2024 (and there seem to be consultants and influencers weighing in on every one of them).
However, as Mittelstadt (2021) put it with the understatement only academic prose can muster, principles alone cannot guarantee ethical AI.
Despite the abundance of high-level guidance, there remains a notable gap between policies and their real-world implementation. But why is this the case, and how should data science and AI leaders think about it?
In this series, my goal is to advance the maturity of practical AI ethics and governance within organizations by breaking this gap down into three components, and by drawing on research and real-world experience to propose strategies and structures that have worked when implementing AI ethics and governance capabilities at scale.
The first gap I cover is the gap in interpretation, which arises from the challenge of applying principles expressed in vague language such as “human centrism” and…