TORONTO — As artificial intelligence (AI) rapidly transforms industries and reshapes operational environments, organizations face significant challenges navigating a complex and evolving regulatory landscape. In response, Info-Tech Research Group has released its research and guidance in a new blueprint, Prepare for AI Regulation. The resource addresses the urgent need for organizations to stay ahead of impending regulations, providing in-depth analysis and actionable strategies for IT leaders to ensure compliance while maximizing the ethical and effective use of AI.
In this new publication, the company highlights the growing responsibility of organizations to protect users from potential risks associated with AI, including misinformation, unfair bias, malicious use, and cybersecurity threats. However, many existing risk management and governance programs within organizations were not designed to anticipate the introduction of AI applications and their subsequent impact.
“Generative AI is changing the world we live in. It is the most disruptive and transformative technology of our time. It will revolutionize the way we interact with technology and the way we work,” said Bill Wong, a researcher at Info-Tech Research Group. “However, along with its benefits, AI introduces new risks. Generative AI has demonstrated how easily disinformation and deepfakes can be created, and it can be misused to threaten the integrity of elections.”
Info-Tech recommends that organizations enhance their data and AI governance programs to align with upcoming voluntary or legislative AI regulations.
“Organizations around the world are seeking guidance, and some are calling on governments to regulate AI to ensure the technology is used responsibly,” Wong says. “As a result, AI laws are emerging around the world. One of the key challenges in any legislation is balancing the need for regulation to protect the public with the need to provide an environment that is conducive to innovation.”
“Some governments and regions, such as the US and the UK, are taking a context- and market-driven approach, often relying on self-regulation and introducing minimal new legislation,” Wong adds. “In contrast, the EU has implemented comprehensive legislation to govern the use of AI technology to protect the public from potential harm. In the future, effective AI regulation globally will likely require international cooperation between governments and regions.”
In Prepare for AI Regulation, Info-Tech details six responsible AI guiding principles and corresponding actions IT leaders should take to plan for and manage AI risks and comply with regulatory initiatives.
- Data privacy — Understand what privacy laws and frameworks apply to an organization: Conduct thorough assessments to ensure compliance with local and international data privacy regulations.
- Fairness and bias detection — Identify possible sources of bias in data and algorithms: Conduct regular audits and evaluations of datasets and algorithms to detect and mitigate bias.
- Explainability and transparency — Design to inform users and key stakeholders about how decisions were made: Develop user-friendly explanations and documentation that clarify how AI systems reach decisions.
- Safety and security — Adopt responsible design, development, and deployment best practices: Follow established best practices to ensure the safe and secure development and deployment of AI systems.
- Validity and reliability — Continuously monitor, evaluate, and validate performance: Regularly evaluate and validate AI system performance to ensure accuracy and reliability.
- Responsibility — Implement human oversight and review: Establish regular human oversight and review processes for AI systems to ensure ethical and responsible use.
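To make the "fairness and bias detection" action above concrete, one common audit metric is the demographic parity gap: the difference in positive-prediction rates between groups. The sketch below is illustrative only and is not from the blueprint; the data, group labels, and any review threshold are hypothetical assumptions.

```python
# Minimal sketch of one fairness audit metric: the demographic parity gap,
# i.e. the spread in positive-prediction rates across groups.

def demographic_parity_gap(predictions, groups):
    """Return the difference between the highest and lowest
    positive-prediction rates observed across groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical binary approval predictions for applicants in groups "A" and "B".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, grps)
print(f"Demographic parity gap: {gap:.2f}")  # → 0.50
```

In a regular audit, a gap above an organization-defined threshold would flag the model and its training data for deeper review, in line with the mitigation step the principle describes.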