In this Help Net Security interview, Ben de Bont, CISO at ServiceNow, discusses AI governance, focusing on how to foster innovation while ensuring responsible oversight. The conversation highlights the need for collaboration between technologists, policymakers and ethicists to create ethical and effective frameworks.
How do we balance AI innovation with the need for strict oversight?
The best innovation happens within clear boundaries. Governance does not stifle innovation; it gives it purpose and direction. It’s like building a bridge: creativity designs the structure and oversight ensures it’s built to last. In AI, this means embedding transparency, accountability and human oversight at every stage of development.
In terms of monitoring, a human-in-the-loop approach is particularly powerful, ensuring that AI results are not only accurate, but meaningful. Combined with governance frameworks that prioritize diverse data sets to reduce bias and robust feedback mechanisms from end users to refine models over time, organizations can innovate with boldness while remaining anchored in responsibility and compliance. The key is to recognize that Responsible AI is the foundation, not an obstacle, to revolutionary progress.
How do cultural and regional differences influence approaches to AI governance?
AI governance often reflects the unique priorities, values and regulations of the regions implementing it, and organizations must be able to adapt to the different markets in which they operate. Privacy, innovation and accountability may be emphasized differently depending on cultural and regulatory contexts, but the main challenge remains the same: ensuring that AI systems are reliable, ethical and aligned with societal needs.
Transparency is the universal cornerstone of effective governance. Practices such as clear labeling of AI-generated content and detailed documentation of AI models promote trust across regions. Likewise, inclusiveness (ensuring that AI is trained on diverse data sets and shaped by a range of perspectives) helps systems meet the needs of users around the world. By combining strong governance principles with tools that can adapt to local contexts, organizations can foster trust and innovation wherever the AI is deployed.
How can interdisciplinary collaboration between technologists, policymakers and ethicists improve AI governance?
AI governance is as much about people as it is about technology. Each of these groups brings unique expertise: technologists focus on the “how,” policymakers on the “should,” and ethicists on the “why.” When they come together, they create a critical feedback loop that builds accountability and trust.
Take bias as an example. Technologists can design algorithms and use bias detection tools to proactively identify and address inequities, while ethicists ensure these systems align with societal values and policymakers create frameworks that promote fairness and accountability. Likewise, policymakers who advocate for clear labeling of AI-generated content can collaborate with technologists to make transparency accessible and user-friendly. The future of AI depends on this cross-pollination of ideas.
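One common bias detection technique of the kind referenced above is a demographic parity check: comparing the rate of positive model outcomes across groups. The sketch below is a minimal, hypothetical illustration (the function name and sample data are ours, not from the interview), not a complete fairness toolkit.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-outcome rates across groups,
    plus the per-group rates. A large gap is a signal to investigate,
    not proof of unfairness on its own."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical approval decisions for two applicant groups
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
# gap of 0.5: group A approved 75% of the time, group B only 25%
```

In practice, teams typically run several such metrics (equalized odds, calibration) and pair the numbers with human review, since no single statistic captures fairness.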
How should governments and businesses approach “black box” AI models in terms of accountability?
Black box AI is a trust issue, plain and simple. If users can’t understand how the AI makes its decisions, they won’t trust it, and rightly so. The solution lies in transparency: explainable AI models, clear documentation and human oversight. Businesses can achieve this in several ways:
- Explainability standards: Encourage or require AI developers to create models that offer clear explanations for their decisions, ensuring that stakeholders understand the rationale for AI results.
- Clear accountability: Assign roles and responsibilities to monitor the performance and compliance of AI systems.
- Adopt AI governance frameworks: Use governance practices that align with ethical standards and regulations that put fairness, security, and trustworthiness first.
- Audit and monitor AI models: Regular reviews and audits of AI models can help detect potential risks, biases, or unexpected results.
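The audit-and-monitor point above can be made concrete with a simple drift check: compare a model’s recent behavior to a baseline recorded at deployment and flag it for human review when it strays too far. This is a minimal sketch under our own assumptions (the function name, threshold, and data are illustrative), not a prescribed ServiceNow practice.

```python
def audit_positive_rate(baseline_rate, recent_preds, tolerance=0.1):
    """Flag a model for review if its recent positive-prediction rate
    drifts more than `tolerance` from the baseline rate recorded
    when the model was approved for production."""
    recent_rate = sum(recent_preds) / len(recent_preds)
    drift = abs(recent_rate - baseline_rate)
    return {
        "recent_rate": recent_rate,
        "drift": drift,
        "needs_review": drift > tolerance,
    }

# Baseline approval rate was 30%; the last ten decisions approved 60%
report = audit_positive_rate(0.30, [1, 0, 0, 1, 1, 0, 1, 1, 0, 1])
# report["needs_review"] is True: drift of 0.3 exceeds the 0.1 tolerance
```

A real audit program would track many signals (per-group rates, input distribution shifts, error rates) on a schedule, but the principle is the same: measurable thresholds that route anomalies to a human.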
By combining these efforts, AI can become an innovative force that drives meaningful progress while preserving public trust and organizational integrity.
How can technology and policy professionals help shape effective AI governance?
AI governance is a team sport. Technologists must prioritize creating systems that are human-centered and transparent from the start. This means adopting practices such as using diverse data sets, ensuring human oversight of critical decisions, and creating clear labeling for AI-generated content to help users make informed choices.
Policy professionals, for their part, can advance standards requiring these elements to be consistent and enforceable. Collaboration is where the magic happens: working together to align on common principles can set the bar for what good governance looks like. Ultimately, shaping AI governance is about balancing innovation with humanity, and that requires everyone, technologists, policymakers and ethicists alike, pulling in the same direction.