Unlock one quadruple return Generative artificial intelligence (genAI) is possible, only if you view security as a competitive advantage.
Rushing to build your first genAI application without establishing a responsible AI council to limit bias, increase fairness, and guide your red team in vulnerability testing virtually guarantees higher risk profiles from state actors and hackers for profit.
Many examples highlight the dangers lurking in genAI applications built without guardrails, including chatbots suspended in South Korea for hate speech against minorities.
The researchers also show that simply repeating a word could cause ChatGPT to spread your training dataincluding personal data. At a car dealership, a the customer manipulated a chatbot by offering a high-value car for a minimal sum, even going so far as to have the software declare the offer legally binding.
During a recent Shanghai AI Conference, experts have studied the growing threat of data poisoning attacks. These attacks allow attackers to subtly manipulate AI training data, thereby compromising the integrity and reliability of the model. By modifying just 0.1% of a data set, a malicious actor can potentially take control of the machine learning model.
As you read, state actors are already prepositioning within critical infrastructures to potentially disrupt key sectors like telecommunications and energy in crisis scenarios, a trend that genAI could accelerate.
This should worry governments and industry since almost a a quarter of all attacks involve espionage in a region with 65% of the world population and generating more than 54% of its gross domestic product (GDP).
Additionally, the Verizon 2024 DBIR shows that the public administration section recorded the highest number of incidents, a figure that could increase if it is not responsible by design.
Pilot genAI safely
Therefore, councils for responsible AI are formed at the national level, within international bodies. initiatives like Hiroshimaand inside tech titans like IBM, Fujifilm And Google. Companies like Verizon and Microsoft are establishing Roadmaps for responsible AI that do something more than avoid risk: the intention is to create sustainable programs, allowing for continuous optimization and construction.
It’s becoming increasingly clear that you need to be a leader in AI ethics to be a leader in AI in general and keep up with evolving regulations.
Connecting Responsible AI to Penetration Testing
This is confirmed by “automated” and “manual” red team penetration testingwhich shows that unless ethics and security are addressed simultaneously, the genAI application may increase risks.
The approach involves a multi-step, interdisciplinary approach combining security, adversarial machine learning, and responsible AI experts to protect the application.
One of the first questions companies considering building their first genAI solution should ask is: what are your security teams doing to test it?
General testing by those unfamiliar with AI will not determine the security of a genAI solution. genAI systems have been shown to be more probabilistic, meaning the same input can produce different outputs each time due to their non-deterministic nature.
Like securing the Imperial Palace in Japan or the Ministry of Defense in Singapore, the approach must be tailored to specific threats and vulnerabilities, as the architecture of genAI systems varies widely, from standalone applications to integrated systems with different input and output modalities (text, audio, images and video).
Placing a regular penetration tester in a complex AI environment may fail to detect vulnerabilities as attack surfaces expand to Internet of Things (IoT) environments and typical AI self-optimization factories. ‘Industry 4.0.
AI guidance is a society-wide effort
As a result, large companies like Microsoft and Verizon now understand that traditional red team penetration testing must simultaneously explore potential security risk and AI-causing failures.
Even China is developing its version of a overall governance framework on AI and work towards its implementation across the country, highlighting its importance for national security and innovation.
Verizon has implemented AI governance measures internally, requiring data scientists to save AI models for review and implement extended language models (LLMs) in a way that mitigates bias and likelihood toxic language.
These efforts are part of a broader campaign for responsible AI. They are integrated into their governance, risk management and compliance (GRC) services.
“The world has witnessed rapid development in AI over the past few months. However, putting AI to use is not an easy task,” says Tang XuningSenior Director of AI/ML Engineering at Verizon. “But the responsible AI program we now have allows us to explore this in a safe way.”
Verizon can also help enterprise and agency cyber teams create similar cross-functional AI steering teams with its risk quantification servicesa crucial first step before creating your first genAI application.
When done correctly, business organizations can achieve return on their investments in AI within 14 months of deployment, 5% of organizations worldwide realized an average of $8 for every dollar invested.
Read Governing Generative AI Securely in APAC