It would be hard to miss the hype around ChatGPT. Once it became widely available, attackers found it much easier and faster to create credible fake identities and plausible phishing sites. Suddenly, crafting compelling phishing emails became child’s play for anyone leveraging a large language model (LLM), free of the typos and strange wording that most users rely on to identify phishing. As artificial intelligence (AI) has become even more widely available and robust, malicious actors have already used it to develop more advanced attack methods. The one solid strategy for effectively countering this in 2024 is for defenders to leverage AI as well.
Initiatives for AI in Cybersecurity
In part to anticipate some of the risks inherent in AI, President Biden issued an executive order (EO) on safe, secure and trustworthy artificial intelligence. While the EO applies more broadly than AI’s role in cybersecurity, it sets in motion a few key initiatives, including:
● Require developers of powerful AI systems to share security test results and other critical information with the U.S. government, particularly if the model poses “a serious risk to national security, national economic security or national public health and safety.” Developers must notify the federal government while such models are in training and must share the results of all red-team security testing.
● Develop standards, tools and tests to ensure AI systems are safe, secure and trustworthy. The National Institute of Standards and Technology (NIST) will establish standards for red-team testing to ensure safety before public release. Of particular note, the Department of Homeland Security (DHS) will apply these standards to critical infrastructure sectors and establish the AI Safety and Security Board. DHS and the Department of Energy (DOE) will also address the threats that AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
● Establish standards and best practices for detecting AI-generated content and authenticating official government content to protect Americans from AI-based fraud and deception. The Department of Commerce (DOC) will develop watermarks to clearly label AI-generated content and provide guidance for content authentication.
● Establish a cybersecurity program to develop AI tools to detect and remediate vulnerabilities in critical software.
● Ensure that the U.S. military and intelligence community use AI safely, ethically and effectively in their missions.
The question is, in part, how quickly the government will take these measures and to what extent the private sector will adopt them. Given how fast AI has already developed, it is imperative that we act quickly to protect citizens in the United States and around the world.
Protection can’t come fast enough
AI facilitates the extraction and exploitation of personal data. This endangers everyone’s privacy, and no amount of fines can compensate for it. Companies clearly need data to train AI systems, but the United States still lacks comprehensive data privacy legislation. As AI becomes more powerful, we must protect the personal data of all Americans, especially children. AI itself can play a role in creating and applying privacy-preserving techniques. One way to achieve this is to train AI models in a way that keeps the training data private. Perhaps the U.S. government will move on this quickly and efficiently, but attackers could still leverage ill-gotten data from a breach to train their own AI models. And although some AI models may ship with privacy guarantees, researchers have already shown that those protections can be bypassed quite easily.
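One concrete way to keep training data private is differentially private training, which clips and noises gradients so the finished model cannot reveal much about any individual record. The sketch below is a minimal illustration using PyTorch with the Opacus library; the toy data, model and noise settings are assumptions for demonstration only, not a recommended configuration.

```python
# A minimal sketch of privacy-preserving training with differential privacy.
# Assumes PyTorch and the Opacus library; the dataset, model, and noise
# settings below are illustrative placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy dataset and model standing in for real training data.
features = torch.randn(1024, 20)
labels = torch.randint(0, 2, (1024,))
loader = DataLoader(TensorDataset(features, labels), batch_size=64)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# PrivacyEngine clips per-sample gradients and adds calibrated noise,
# so no single training record dominates what the model learns.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # more noise = stronger privacy, lower accuracy
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far (epsilon at a fixed delta).
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```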
AI can play a role
Research has not shown that AI is particularly effective at detecting AI-generated content; OpenAI, the developer of ChatGPT, shut down its AI text-detection tool over the summer because it simply wasn’t accurate enough. The company now plans to introduce a cryptographic watermarking feature to make AI-generated content easier to identify. That won’t be foolproof either, but it doesn’t mean you shouldn’t use AI in your organization.
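OpenAI has not published how its watermark will work, so the following is only a toy sketch of one well-known statistical approach: the generator secretly biases its word choices toward a key-derived “green list,” and a detector holding the same key counts how often words land on that list. The key, threshold and function names here are hypothetical, not any vendor’s actual scheme.

```python
# A toy sketch of keyed statistical watermark detection, NOT OpenAI's actual
# (unpublished) method. A generator that knows the key favors "green" words;
# a detector with the same key checks whether green words appear more often
# than chance would allow.
import hashlib
import hmac
import math

SECRET_KEY = b"shared-watermark-key"   # hypothetical key shared with the detector
GREEN_FRACTION = 0.5                   # fraction of the vocabulary marked "green"

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign a word to the green list, keyed on its predecessor."""
    digest = hmac.new(SECRET_KEY, f"{prev_word}|{word}".encode(), hashlib.sha256).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_zscore(text: str) -> float:
    """Z-score of the green-word count versus what unwatermarked text would show."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, cur) for prev, cur in pairs)
    expected = GREEN_FRACTION * len(pairs)
    stddev = math.sqrt(len(pairs) * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / stddev

# A high z-score (say, above 4) suggests text produced by a generator that
# knows the key; ordinary human-written text should hover near zero.
print(round(watermark_zscore("the quick brown fox jumps over the lazy dog"), 2))
```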
AI can improve several aspects of cybersecurity, allowing defenders to stay ahead in the AI game. Here are some key ways AI can help you:
● Threat detection and analysis: AI algorithms are capable of quickly processing large volumes of data. They can analyze patterns and detect anomalies that may indicate a cybersecurity threat, such as unusual network traffic or suspicious user behavior (a minimal sketch of this approach follows this list). This capability is particularly useful for identifying new and emerging threats that have not yet been cataloged.
● Research and development: Cybersecurity researchers can leverage AI to process and synthesize large data sets, making it easier to identify patterns and insights.
● Phishing detection: AI algorithms can analyze email content, headers and sender details to identify potential phishing attempts, preventing end users from ever seeing (and potentially being fooled by) convincing AI-generated phishing emails.
● Improve authentication: AI can strengthen biometric logins and behavioral analytics, making unauthorized access to your organization’s assets more difficult.
● Vulnerability Management: Use AI to identify vulnerabilities in your software and infrastructure by continuously scanning and analyzing the network and systems.
● Automated threat response: Once you identify a threat, AI can automate the response by isolating affected systems, blocking suspicious IP addresses, or remediating vulnerabilities.
● Predictive analytics: Using historical data, AI may be able to predict future attack patterns.
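As a concrete, deliberately simplified illustration of the threat detection item above, the sketch below trains an anomaly detector on benign network-flow features and flags flows that deviate from them. It assumes scikit-learn is available; the features, values and thresholds are placeholders, not a production detection pipeline.

```python
# A minimal sketch of AI-assisted anomaly detection on network-flow features.
# Assumes scikit-learn; the feature set and sample values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy flow records: [bytes sent, bytes received, duration (s), destination port]
normal_traffic = np.column_stack([
    np.random.normal(2_000, 300, 500),     # typical bytes sent
    np.random.normal(10_000, 1_500, 500),  # typical bytes received
    np.random.normal(1.5, 0.3, 500),       # short-lived connections
    np.full(500, 443),                     # mostly HTTPS
])

# Train only on traffic believed to be benign.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# Score new flows: a large upload to an unusual port stands out.
new_flows = np.array([
    [2_100, 9_800, 1.4, 443],     # looks routine
    [450_000, 200, 30.0, 4444],   # possible exfiltration or C2 beacon
])
for flow, verdict in zip(new_flows, detector.predict(new_flows)):
    label = "anomalous" if verdict == -1 else "normal"
    print(f"{flow} -> {label}")
```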
Security vendors have integrated AI into existing security tools and will continue to do so, resulting in more accurate and efficient threat detection and management. As 2024 moves forward and the use of AI in cybersecurity continues to lead conversations, the key is to choose solutions (and vendors) that align with the EO’s guidelines, commit to preserving privacy, and meet the AI ethics and security standards defined by your organization.