The voluntary pact was unveiled a year ago. President Joe Biden’s administration said at the time it had secured companies’ commitments “to help ensure the safe, secure, and transparent development of AI technology.”
Tech companies are stepping up efforts to ensure the safe development and deployment of artificial intelligence. A new commitment involves rigorous testing, including simulated cyberattacks and other potential threats, to identify and address vulnerabilities in AI models.
The White House has issued executive orders setting safety standards for AI systems and requiring developers to disclose the results of safety tests. The administration is touting the orders as “the most sweeping steps ever taken to protect Americans from the potential risks of AI systems.”
Testing of AI models or systems must cover societal risks and national security concerns, such as cyberattacks and the development of biological weapons, the White House said, according to the AFP news agency. Companies will also share information about AI risks with one another and with the government.
Apple has joined the debate by unveiling its own AI suite and partnering with OpenAI. While the move demonstrates the company’s commitment to AI, it also underscores the intense competition among tech giants in this rapidly evolving field.