A recent PwC global survey found that when it comes to generative AI risks, 64% of CEOs say they are most concerned about cybersecurity.
It comes as cyberattacks continue to increase. The damage caused by cyberattacks is expected to reach approximately $10.5 trillion per year by 2025, a 300% increase over 2015, according to a McKinsey report.
More than half of CEOs surveyed by PwC also agree that generative AI will likely increase the spread of misinformation within their companies, according to the report.
The risks that generative AI poses to businesses come as many of these same companies have rushed to launch new generative AI products. The findings “underscore the societal obligation of CEOs to ensure their organizations demonstrate AI responsibility,” the PwC report said.
PwC surveyed nearly 5,000 CEOs worldwide between October and November 2023.
OpenAI wants to understand how to combat the drawbacks of generative AI
As OpenAI helps drive demand for generative AI technology, the company this week announced several projects aimed at combating the potentially harmful effects of AI.
At the World Economic Forum in Davos, the company’s vice president of global affairs told Bloomberg that OpenAI is developing tools with the US Department of Defense on open-source cybersecurity software.
Just a day earlier, OpenAI laid out its plans for handling elections, as billions of voters around the world head to the polls this year. For example, the company’s image generator, Dall-E, has safeguards that deny requests to generate images of real people, including political candidates. OpenAI also doesn’t allow apps that discourage people from voting.
Early this year, the company will roll out features designed to provide more transparency around AI-generated content. For example, users will be able to see which AI tools were used to produce an image. OpenAI’s ChatGPT will also soon include real-time news with attribution and links, the company says. Transparency about where information comes from could help voters better evaluate it and decide for themselves what to trust.
AI is already being used in political campaigns
AI-generated songs featuring Indian Prime Minister Narendra Modi have gained traction in the run-up to India’s upcoming elections, as reported by the online publication Rest of World.
The changes come amid fears that the rise of so-called deepfakes could mislead voters during elections. Companies such as Google, Meta, and TikTok now require labeling of election ads that use AI.
Several US states, including California, Michigan, Minnesota, Texas, and Washington, have passed laws banning political deepfakes or requiring that they be disclosed.