The race to adopt AI is on, but without ethical safeguards, companies risk more than just reputational damage, writes Shaun Wadsworth, director of AI and IoT at Fujitsu and president of the Corporate AI Ethics Committee for the Asia-Pacific region.
The rapid adoption of AI, particularly generative AI, is outpacing businesses' ability to prepare for its potential to revolutionize the way people work.
Three out of four knowledge workers now use AI at work, and 78% bring their own AI tools to the job. The Tech Council of Australia estimates that generative AI will contribute $45 billion to $115 billion per year to the Australian economy by 2030. Yet while 79% of executives agree that adopting AI is essential to remaining competitive, 60% admit that their company lacks the vision and plan to implement it.
This lack of preparation carries real risks. Integrating AI into core business functions raises ethical questions that demand careful attention. Bias, discrimination, and opacity are just some of the risks associated with unethical AI.
The Australian Government recently introduced the voluntary AI Safety Standard. But a more rigorous regulatory environment is on the horizon. Businesses must take a proactive approach to ethical AI or face significant consequences.
Bias, discrimination and lack of transparency are not only ethical problems: they are also business risks.
The International Center for Artificial Intelligence Research at the United Nations Educational, Scientific and Cultural Organization (UNESCO) found that generative AI's outputs reflect considerable gender bias. UNESCO's research identifies three major sources of this bias:
- Data bias: If generative AI models are not trained on data from underrepresented groups, their outputs will perpetuate societal inequalities.
- Algorithmic bias: The choice of algorithm can also entrench existing biases, making AI an unwitting accomplice to discrimination.
- Deployment bias: AI systems applied in contexts different from those for which they were created can give rise to dangerous associations that can stigmatize entire groups.
These biases risk cementing unfair practices in seemingly objective technological systems by amplifying historical injustices.
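Biases like those above are hard to spot without measurement. As a hedged illustration of how an audit could quantify them, the sketch below compares approval rates across groups using the "four-fifths rule", a common disparate-impact heuristic. The function names, loan records, and the 0.8 threshold framing are illustrative assumptions, not a method described by Fujitsu or UNESCO.

```python
def approval_rates(decisions):
    """Return per-group approval rates from (group, approved) records."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest.
    Values below 0.8 are a common red flag (the four-fifths rule)."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (applicant group, approved?)
records = [("A", True), ("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]

print(approval_rates(records))          # group A: 0.75, group B: 0.25
print(disparate_impact_ratio(records))  # 0.25 / 0.75 ≈ 0.33, well below 0.8
```

A check this simple will not catch every form of bias, but making fairness a number that can fail a test is one way to turn an ethical principle into an operational control.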
Another challenge is the lack of transparency and explainability of many AI systems.
As AI algorithms become more complex, their decision-making processes often become opaque, even to their creators. This “black box” nature of AI can be particularly problematic. Imagine a scenario in which an AI system recommends a specific medical treatment or denies a loan application without providing a clear rationale. This lack of explainability undermines trust and makes it difficult to identify and correct errors or biases in the system.
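One crude way to pry open a "black box" is to perturb each input and observe how the output moves, giving a rough local explanation of which feature drove a decision. The toy scoring function and applicant below are hypothetical, purely for illustration; real explainability tooling is far more sophisticated.

```python
def opaque_score(applicant):
    # Stand-in for a black-box model whose internals we pretend not to see.
    return 0.5 * applicant["income"] / 100_000 + 0.5 * applicant["credit"] / 850

def approve(applicant, threshold=0.6):
    """The unexplained decision: approve if the score clears a threshold."""
    return opaque_score(applicant) >= threshold

def feature_influence(applicant, bump=0.10):
    """Crude local explanation: how much does a 10% increase in each
    feature move the score? Larger deltas suggest stronger influence."""
    base = opaque_score(applicant)
    deltas = {}
    for key in applicant:
        perturbed = dict(applicant)
        perturbed[key] = applicant[key] * (1 + bump)
        deltas[key] = opaque_score(perturbed) - base
    return deltas

applicant = {"income": 60_000, "credit": 600}
print(approve(applicant))            # the bare decision, with no rationale
print(feature_influence(applicant))  # which feature moved the score most
```

Even this rough probe turns "the system said no" into "the system said no, and the credit score was the dominant factor", which is the kind of rationale a declined applicant or an auditor can actually interrogate.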
The consequences of unethical AI go far beyond reputational damage: businesses also risk lawsuits, regulatory penalties, and the erosion of customer trust.
The Roadmap for Ethical AI Adoption
As a global leader in AI, Fujitsu has been promoting the research and development of innovative AI and machine learning technologies for over 30 years. We are also at the forefront of advocating for ethical AI, contributing to the Australian Government’s Supporting Responsible AI discussion paper.
The approach we recommend to harness the full potential of AI while mitigating its risks is a three-phase process: design, implementation and monitoring.
The design phase: defining a clear vision for ethical AI
Ethical AI is not just an IT concern, it is a strategic imperative that touches every aspect of business.
The design phase is the foundation of ethical AI practices within an organization. This starts with getting buy-in from the highest leadership, recognizing that ethical AI is not just an IT concern but a strategic imperative that touches every aspect of the business. Business leaders must articulate a clear vision for ethical AI and define principles that align with the company’s values and societal expectations.
These principles should then be translated into concrete policies guiding the development and deployment of AI. This phase involves planning the governance structures that will oversee the implementation of these policies. These governance bodies should be diverse and bring together the perspectives of various departments such as legal services, risk management, business operations and human resources. Including external experts in AI ethics can provide valuable independent insights and strengthen the credibility of the governance process.
The implementation phase: establishing clear processes at each stage
Ethical AI implementation is an ongoing process that begins at the project proposal phase and continues throughout design, development, testing, and deployment.
The implementation phase brings the AI ethical framework to life. Governance groups are established with clear mandates and terms of reference. Processes are implemented to manage each stage of AI development and deployment in an ethical manner. This is not a one-time effort but an ongoing process that begins at the project proposal stage and continues throughout design, development, testing and deployment.
Implementing ethical AI is an ongoing process
It is important to recognize that the ethical implementation of AI often involves complex trade-offs. There may be cases where ethical considerations conflict with short-term business goals. Organizations must be prepared to make difficult decisions and prioritize long-term sustainability and societal impact over immediate gains.
The Monitoring Phase: Staying on Top of AI Ethical Practices
Continuous evaluation and adaptation are essential to ensure the continued effectiveness of ethical AI practices.
The final stage, the monitoring phase, ensures the continued effectiveness of ethical AI practices. This phase involves continuous evaluation of governance processes and ongoing technological monitoring. It also requires adapting to legal and regulatory landscapes that are still evolving alongside AI deployment. Regular audits of AI systems can help identify potential biases or unintended consequences that may have emerged over time.
Find balance
AI technologies will continue to advance, and the ethical implications of their use will only increase in complexity and importance. Organizations that proactively address these challenges will be better positioned to build trust with customers, employees and stakeholders. They will also be more resilient in the face of regulatory scrutiny and better equipped to deal with the ethical dilemmas that will inevitably arise in the AI-driven business landscape.
Ethical AI is not a destination but a journey. It requires ongoing commitment, resources and a willingness to tackle difficult questions. By addressing this challenge, organizations can unlock the transformative potential of AI while fulfilling their responsibilities to society.