As generative AI becomes more popular, companies need to consider how to deploy it ethically. But what does ethical AI deployment look like? Does it involve harnessing human intelligence? Avoiding bias? Or both?
To gauge how companies are addressing this issue, Deloitte recently surveyed 100 senior executives at U.S. companies with annual revenues of $100 million to $10 billion. The results show how business leaders are integrating ethics into their generative AI policies and strategies.
Top priorities for AI ethics
Which ethical issues do these organizations consider most important? Respondents said they have prioritized the following in the development and deployment of AI:
- Balancing innovation and regulation (62%).
- Ensuring transparency in how data is collected and used (59%).
- Addressing user concerns about data privacy (56%).
- Ensuring transparency in the operation of business systems (55%).
- Mitigating bias in algorithms, models and data (52%).
- Ensuring that systems operate reliably and as intended (47%).
Organizations with higher revenues ($1 billion or more per year) were more likely than smaller businesses to say that their ethical frameworks and governance structures encourage technological innovation.
Unethical uses of AI include spreading disinformation, which is especially damaging during election seasons, and reinforcing bias and discrimination. Generative AI can inadvertently replicate human bias by mimicking what it sees in its training data, and malicious actors can use it to deliberately produce biased content more quickly.
Threat actors who send phishing emails can take advantage of generative AI's writing speed. Other potentially unethical use cases include letting AI make major decisions in warfare or law enforcement.
In September 2023, the U.S. government and major tech companies agreed to a voluntary commitment that established standards for disclosing the use of generative AI and the content created with it. The White House Office of Science and Technology Policy has also released a blueprint for an AI Bill of Rights, which includes anti-discrimination efforts.
As of January 2024, U.S. companies that use AI at certain scales and for high-risk tasks must report information to the Commerce Department.
SEE: Getting started with an AI ethics policy template.
“For any organization adopting AI, the technology presents both the potential for positive outcomes and the risk of unintended outcomes,” Beena Ammanath, executive director of the Global Deloitte AI Institute and Deloitte’s Trustworthy AI leader, said in an email to TechRepublic.
Who makes ethical decisions about AI?
In 34% of the organizations surveyed, ethical decisions about AI come from directors or senior executives. In 24%, professionals make AI decisions independently. Less commonly, ethical decisions about AI are made by business or department heads (17%), managers (12%), professionals who have completed mandatory training or certification (7%), or an AI review committee (7%).
Larger companies (with annual revenue of $1 billion or more) were more likely to allow workers to make independent decisions about AI use than companies with annual revenue of less than $1 billion.
Most executives surveyed (76%) said their organizations provide AI ethics training to their employees, and 63% said they provide it to their board of directors. When it comes to AI development, more than three-quarters of respondents said their organizations apply ethical considerations, assessments, or processes during the deployment phase, while fewer do so in earlier phases, such as the build phase (69%) and the pre-development phase (49%).
“As companies continue to explore the possibilities of AI, it’s encouraging to see governance frameworks emerge in parallel to empower employees to advance ethical outcomes and make a positive impact,” said Kwasi Mitchell, Deloitte’s US chief purpose and DEI officer. “By adopting processes designed to promote accountability and preserve trust, leaders can establish a culture of integrity and innovation that allows them to effectively harness the power of AI while advancing equity and making a positive impact.”
Are organizations recruiting and upskilling for AI ethics roles?
The organizations surveyed have hired, or plan to hire, people for the following positions:
- AI researcher (59%).
- Policy analyst (53%).
- AI compliance manager (50%).
- Data scientist (47%).
- AI governance specialist (40%).
- Data ethicist (34%).
- AI ethicist (27%).
Most of these professionals (68%) came through internal training or development programs. Fewer organizations have used external sources such as traditional recruiting or certification programs, and fewer still are considering campus recruiting or collaboration with academic institutions.
“Ultimately, companies need to be confident that their technology is trusted to protect the privacy, security, and fair treatment of its users, and that it is consistent with their values and expectations,” Ammanath said. “An effective approach to AI ethics must be based on the specific needs and values of each organization, and companies that implement strategic ethics frameworks will often find that these systems support and drive innovation, rather than hinder it.”