Employers and HR departments often wonder what impact artificial intelligence regulations will have on continued innovation. Sessions were dedicated to ethical AI at events such as HR Technology Europe, EDH’s Strategy Summit, Eightfold’s Cultivate and AWS re:Invent. And while the EU has enacted its AI Act and several US states and cities have moved ahead with AI-related laws, the US federal government has been slow to initiate AI legislation.
This month, the White House released a new set of standards, further defining what is expected from the deployment of AI in the workplace. Consistent with the guidance in President Biden’s Executive Order on AI, the Department of Labor has established eight principles for the ethical development and deployment of AI systems in the workplace.
Regarding human resources practice, EEOC Commissioner Keith Sonderling has been on a speaking tour, reminding human resources leaders of long-standing civil rights laws. He has told several audiences that HR’s focus should be on making appropriate employment decisions, and that those decisions, not the technology itself, fall under the EEOC’s jurisdiction.
A responsible corporate citizen
Asha Palmer, vice president of compliance at enterprise learning platform Skillsoft, says that some organizations are starting with responsible corporate citizenship and moving forward with it as a North Star. Others wait for regulation, which can give rise to two kinds of behavioral environments.
The former is exceptionally risk-averse, with a culture that tries to restrict the use of AI until firm regulatory guidelines are in place. The latter behaves as if the absence of legislation means no safeguards are necessary until further notice.
An ethically minded yet innovative organization takes care to interpret guidelines thoughtfully while moving forward with concepts that align with business needs and appropriate usage guidelines. Industry analyst Josh Bersin put it this way at Eightfold’s Cultivate talent summit in May 2024: “Don’t wait for [AI] to mature before you get to work and try.”
Related: AI regulation: where the UN and other world leaders stand
While creating a responsible AI policy is a good starting point, Palmer believes HR leaders must continue to push themselves and their peers to visualize how compliance efforts can fit into functional practices. “Terms and conditions do not influence behavior,” says Palmer. “Bring the policy’s words off the page and into the hearts and minds of workers.”
This can be difficult to achieve when an organization lacks regulatory guidance. That is why news of the White House fact sheet has been welcomed by many. Although it offers little concrete guidance, the document provides perspective on what the government expects from employers. The overarching themes focus on protecting workers, giving them a voice, promoting the responsible and ethical development and use of AI, establishing strong governance, ensuring transparency and accountability, and respecting data privacy.
Palmer says industry and business leaders don’t have a forum to comment on the principles at this time, but “relentless incrementalism” is moving the governance process forward. In other words, keep putting one foot in front of the other. “We all have ideas, but what is the action?” says Palmer.
Eight principles for AI systems in the workplace
According to the White House fact sheet on ethical AI:
- Workers, especially those from underserved communities, should participate in the design, development, and monitoring of AI systems in the workplace.
- AI systems must be designed and trained in a way that protects workers.
- Organizations must have clear governance, monitoring and evaluation processes for AI systems in the workplace.
- Employers must be transparent about AI systems used in the workplace.
- AI systems should not violate workers’ rights to organize, their health and safety rights, their wage rights, or anti-discrimination protections.
- AI systems are expected to assist, complement and empower workers while improving job quality.
- Employers should support or upskill workers during AI-related job transitions.
- Worker data used by AI systems must be limited, used for legitimate business purposes, and handled responsibly.
The full fact sheet can be found here.