Rijul Gupta, founder and CEO of Deep Media, said the disruption of truth caused by deepfakes and other unethical uses of generative artificial intelligence poses a systemic risk to the U.S. military and other government agencies.
In an article published on Carahsoft.com, Gupta wrote that the ease of use and accessibility of generative AI tools have transformed the nature of misinformation.
The CEO of Deep Media explained how the company’s AI models help detect deepfakes and other media manipulations.
“Such technology will never be 100% accurate, as that’s how it works now, but we routinely get over 95% accuracy in identifying the use of generative AI in images, audio and videos. This alone constitutes a force multiplier for analysts,” he noted.
Gupta discussed the company’s partnerships with universities and government agencies, such as the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology, to promote the ethical use of AI.
“Ensuring the ethical use of AI is a complex challenge that cannot be solved by a single organization, which is why we are doing our best to build a community to address it,” he added.
He also mentioned the company’s work with various partners to integrate its technology into open source intelligence platforms and advance the use of AI to analyze images, video and audio in support of analysts and other users.