Artificial intelligence is radically transforming the business landscape. It streamlines operations, provides critical insights, and enables businesses to make data-driven decisions efficiently. Through machine learning, predictive analytics, and automation, AI helps identify trends, forecast sales, and streamline supply chains, leading to increased productivity and better business results. Unfortunately, it is not without its problems.
We spoke to Matt Hillary, VP of Security and CISO at Drata, about AI-related challenges in security and compliance.
BN: How is AI increasing the threat of ransomware and changing the broader cybersecurity landscape?
MH: The main strategies for spreading ransomware continue to rely on social engineering tactics such as phishing, and on exploiting weaknesses in externally accessible systems such as virtual private network (VPN) endpoints, endpoints with exposed Remote Desktop Protocol (RDP), and applications with zero-day vulnerabilities, among others. Using AI, cyberattackers can now produce highly sophisticated deceptive messages that carry fewer of the typical indicators of phishing, making them more convincing to unwary users.
Cybercriminals can also use AI to improve different facets of their operations, such as reconnaissance and coding, thereby strengthening their exploitation capabilities. By leveraging AI, malicious actors can efficiently analyze large data sets to identify weaknesses in an organization's external systems and create tailored exploits, whether by targeting known vulnerabilities or discovering new ones.
BN: On the other hand, how does AI help improve defensive and preventative solutions?
MH: AI-based systems can analyze large amounts of data to detect telltale patterns of cyberthreats, including malware, phishing attempts, and unusual network activity. These systems, including Large Language Models (LLMs), can identify indicators of compromise and other threats more quickly and accurately than traditional or manual review methods, enabling faster response and mitigation.
AI models can also examine activities to learn the normal behavior of users and systems within a network, allowing them to detect deviations that may indicate a security incident. This approach is particularly effective in identifying insider threats and sophisticated attacks that evade traditional signature-based detection methods.
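The behavioral-baseline idea described above can be illustrated with a deliberately simple statistical sketch: learn what "normal" looks like from historical activity, then flag observations that deviate sharply from it. The data, threshold, and function names below are hypothetical; production systems would use far richer features and models.

```python
import statistics

def baseline(values):
    """Learn a simple behavioral baseline: mean and standard deviation of past activity."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from the baseline."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical daily login counts for one user
history = [12, 10, 11, 13, 12, 9, 11, 10, 12, 11]
mean, stdev = baseline(history)

print(is_anomalous(11, mean, stdev))   # a typical day -> False
print(is_anomalous(240, mean, stdev))  # a sudden spike worth investigating -> True
```

Real deployments replace the z-score with learned models over many signals (login times, data volumes, access patterns), but the principle, deviation from a learned baseline, is the same.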
BN: What are the benefits of AI for automating governance and compliance with evolving industry regulations and standards?
MH: AI tools can be fed log data to continuously monitor systems, detect anomalies, and respond to indicators of a security incident, misconfiguration, or process activity which may result in non-compliance. By keeping abreast of evolving governance regulations in real time, these tools help organizations stay up-to-date and compliant at all times.
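To make the continuous-monitoring idea concrete, here is a minimal rules-based sketch of scanning log data for indicators of misconfiguration or risky activity. The indicator patterns and log format are invented for illustration; real compliance tools combine such rules with learned models and far broader telemetry.

```python
import re

# Hypothetical indicator patterns a monitoring tool might flag (illustrative only)
INDICATORS = {
    "public_storage": re.compile(r"acl=public-read"),
    "root_login": re.compile(r"user=root .*action=login"),
    "mfa_disabled": re.compile(r"mfa=disabled"),
}

def scan(log_lines):
    """Return (line_number, indicator) pairs for entries matching a known risk pattern."""
    findings = []
    for i, line in enumerate(log_lines, start=1):
        for name, pattern in INDICATORS.items():
            if pattern.search(line):
                findings.append((i, name))
    return findings

logs = [
    "2024-05-01 user=alice action=login mfa=enabled",
    "2024-05-01 bucket=reports acl=public-read",
    "2024-05-02 user=root region=us-east action=login",
]
print(scan(logs))  # -> [(2, 'public_storage'), (3, 'root_login')]
```

Running such checks continuously against fresh log data is what lets these tools surface non-compliant configurations as they appear rather than at audit time.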
AI algorithms can also analyze large amounts of regulatory data, reducing the risk of human error associated with manual efforts. This leads to more accurate assessments of compliance status and reduces the likelihood of regulatory violations.
BN: What other practical or best practices should leaders adopt today to protect their businesses against evolving AI threats?
MH: My suggestions would be:
- Provide comprehensive training to cybersecurity teams on how to secure both the AI employees use and the AI embedded in their platforms and systems; even the most technically proficient teams should explore not only the application layer but also the underlying technology that powers AI capabilities.
- Deploy phishing-resistant authentication methods to protect organizations against phishing attacks targeting authentication tokens used to access environments.
- Establish policies, training, and automated mechanisms to equip team members with the knowledge needed to defend against social engineering attacks.
- Systematically harden the organization’s Internet perimeters and internal networks to reduce the effectiveness of such attacks.
BN: What are the ethical considerations when it comes to AI? What practical security measures should leaders take to ensure ethical use of AI across the organization?
MH: Companies should establish governance structures and processes to oversee the development, deployment, and use of AI. This includes appointing individuals or committees to monitor ethical compliance and ensure alignment with the organization's values. These governance structures must be widely documented and understood across the organization.
At the same time, promote transparency by documenting AI algorithms, data sources and decision-making processes. Ensure that stakeholders understand how AI systems make decisions and their potential impacts on individuals and society.
At Drata, we have developed responsible AI principles across systems and processes, designed to encourage robust, trusted and ethical governance while maintaining a strong security posture.
- Privacy by Design: Using anonymized data sets to protect privacy through strict access control and encryption protocols, as well as generating synthetic data to simulate compliance scenarios.
- Fairness and Inclusiveness: Removing inherent bias through detailed curation, with continuous monitoring of models to ensure no unfair results as well as intuitive interfaces that work for all users.
- Security and Reliability: Rigorous testing, combined with 360-degree human monitoring, provides full visibility, giving users confidence that AI solutions will perform as intended.
BN: What does the future hold for AI-related threats?
MH: With the increasing accessibility and power of AI, it is inevitable that malicious actors will exploit it to orchestrate highly targeted, automated, and elusive cyberattacks spanning multiple domains. Cyberattacks will evolve in real time, allowing them to evade traditional detection methods.
At the same time, the rise of AI-generated deepfakes and disinformation threatens individuals, organizations, and the democratic process. Fake visuals, audio, and text can make it almost impossible to distinguish fact from fiction.
BN: What is the future for advanced AI-based security solutions to strengthen cyber defense capabilities, as well as manage third-party vendor risks?
MH: AI will build cybersecurity resilience using proactive threat intelligence, predictive analytics, and adaptive security controls. Using AI to predict and adapt to emerging threats will enable organizations to maintain a proactive stance against cybercriminals, mitigating the impact of attacks. Continued research and collaborative efforts are essential to ensure that AI continues to serve as a positive force in the fight against cyber threats.
Managing third-party risk is an essential part of a strong governance, risk, and compliance (GRC) program, especially when addressing AI-based vulnerabilities. Security teams need a comprehensive tool to continuously identify, assess, and monitor risks and integrate them into internal risk profiles. This holistic approach ensures a unified, clear view of potential exposures across the organization to effectively manage third-party AI risks.
Image credit: Wrightstudio / Dreamstime.com