What could your organization do if you could automate common, repeatable security, compliance, identity, and management tasks?
Managing an organization’s defenses is a difficult and time-consuming task for many different reasons. The adoption and integration of new security technologies requires time and resources to monitor and maintain alongside the company’s existing technology portfolio. Security teams must also keep up with the increasingly rapid pace of attackers. A Microsoft study shows that attackers need only one hour and 12 minutes on average to access private data once an unsuspecting user clicks on a phishing email. However, at the root of all these challenges is the persistent shortage of cybersecurity talent.
As alerts come in, security teams must review and properly investigate them according to the procedures outlined in their company’s cybersecurity manual. This is particularly difficult when organizations do not have a sufficient number of experienced SOC analysts. Investigating and responding to alerts is also a very resource-intensive task that often involves correlating data from multiple telemetry sources and documenting the results along the way.
However, generative AI can greatly streamline and democratize these tasks so that your organization can optimize its existing security resources and respond more quickly to emerging threats. Read on to find out how.
Streamline SOC workflows with generative AI
Generative AI represents a step change in how practitioners investigate and respond to incidents, threats, and vulnerabilities. When enriched with sufficient security data and threat intelligence, generative AI can use natural language processing (NLP) to interact with users conversationally, allowing them to ask questions and receive answers in a more natural format. NLP also gives generative AI the flexibility to “understand” what a user is asking and adapt to their style or preferences.
Let’s take the example of a device that has been locked due to conditional access policy violations. Normally, the analyst must review the support ticket, investigate the device status, and determine why the device was locked before finding a solution to the problem. Generative AI can significantly accelerate this process.
At Microsoft, our generative AI models use plugins and a framework to connect to solutions and answer these types of questions. We also build sessions that use context to inform responses and reporting requests. Rather than having to manually search for information about a device’s status or lockout reason, analysts can simply ask the generative AI model to provide the user’s most recent login attempts and their risk status. Assuming the model has access to the appropriate data sources and is able to reason about past context, analysts can then ask the AI to run a search query to understand what is happening in the environment. If the analyst determines that a real security incident is taking place, the AI model can also correlate this activity with recent security incidents to provide more context and recommend next steps.
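To make the plugin idea concrete, here is a minimal sketch of how an assistant might route a recognized question (such as “show this user’s most recent sign-in attempts”) to a registered data-source plugin. All names here (`PluginRegistry`, `SignInPlugin`, the intent strings) are illustrative assumptions, not a real Microsoft API:

```python
# Illustrative sketch: an AI assistant maps a parsed natural-language intent
# to a registered data-source plugin, which returns structured results the
# model can reason over. All class and intent names are hypothetical.
from dataclasses import dataclass

@dataclass
class SignInAttempt:
    user: str
    timestamp: str
    risk_level: str  # e.g. "low", "medium", "high"

class SignInPlugin:
    """Stands in for an identity-provider connector (a sign-in log source)."""
    def __init__(self, log):
        self.log = log

    def recent_attempts(self, user, limit=3):
        # Return the user's most recent sign-ins, newest first.
        attempts = [a for a in self.log if a.user == user]
        return sorted(attempts, key=lambda a: a.timestamp, reverse=True)[:limit]

class PluginRegistry:
    """Routes a recognized intent to the plugin method that can answer it."""
    def __init__(self):
        self.plugins = {}

    def register(self, intent, handler):
        self.plugins[intent] = handler

    def ask(self, intent, **kwargs):
        return self.plugins[intent](**kwargs)

# Usage: the model has parsed "show me jdoe's latest sign-ins" into an intent.
log = [
    SignInAttempt("jdoe", "2024-05-01T08:02", "low"),
    SignInAttempt("jdoe", "2024-05-01T08:15", "high"),
    SignInAttempt("asmith", "2024-05-01T09:00", "low"),
]
registry = PluginRegistry()
registry.register("recent_sign_ins", SignInPlugin(log).recent_attempts)

attempts = registry.ask("recent_sign_ins", user="jdoe")
```

In this sketch the model never touches raw logs directly; it only calls plugins it has been granted, which mirrors the access-control point made above about the model needing the appropriate data sources.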
Additionally, generative AI can be used to document the analyst’s actions and conclusions along the way. These real-time reports are essential to help other members of the security or management team understand what happened and how the issue was resolved. This report can include everything from when the incident occurred and the devices involved to suspected threat actors, protocols used, processes, connection attempts, and more. Documenting all of this information could historically take an analyst hours, but generative AI can put it together in minutes.
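The report fields listed above (time of the incident, devices involved, suspected actors, protocols, sign-in attempts) can be sketched as a simple summary-assembly step. This is illustrative only; the field names and structure are assumptions, not the actual output format of any product:

```python
# Hypothetical sketch: assembling the incident-report fields mentioned in the
# text into a draft summary an AI assistant could generate for the team.
def draft_incident_report(incident):
    lines = [f"Incident detected at {incident['occurred_at']}"]
    lines.append("Devices involved: " + ", ".join(incident["devices"]))
    if incident.get("suspected_actor"):
        # Only include an attribution line when an actor is actually suspected.
        lines.append(f"Suspected threat actor: {incident['suspected_actor']}")
    lines.append("Protocols observed: " + ", ".join(incident["protocols"]))
    lines.append(f"Failed sign-in attempts: {incident['failed_sign_ins']}")
    return "\n".join(lines)

report = draft_incident_report({
    "occurred_at": "2024-05-01T08:15Z",
    "devices": ["LAPTOP-042"],
    "suspected_actor": None,
    "protocols": ["RDP"],
    "failed_sign_ins": 7,
})
```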
Empower analysts with automated recommendations and predefined workflows
In addition to helping analysts act faster, generative AI also helps democratize the skills of your security team. Not everyone on your security team has the same level of experience or expertise. Generative AI helps fill this gap by providing analysts with automated recommendations and advice based on their organization’s security data and processes, as well as cybersecurity best practices.
At Microsoft, we use promptbooks: curated lists of individual prompts that facilitate common security, compliance, identity, and management workflows. These promptbooks are essentially predefined workflows that guide security teams through common actions such as conducting incident investigations, creating threat actor profiles, analyzing suspicious scripts, and performing vulnerability impact assessments. By leveraging the NLP built into promptbooks, security teams can create consistent, measurable processes that require minimal user intervention to execute.
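Conceptually, a promptbook can be thought of as a named, ordered list of prompt templates that an analyst runs as one workflow, supplying the shared parameters once. The sketch below is a loose illustration of that idea, not how Microsoft’s promptbooks are actually implemented; `run_promptbook` and the templates are hypothetical:

```python
# Illustrative sketch of the promptbook concept: an ordered sequence of
# prompt templates executed as a single workflow with shared parameters.
INCIDENT_PROMPTBOOK = [
    "Summarize incident {incident_id} and its affected assets.",
    "List the sign-in attempts associated with incident {incident_id}.",
    "Recommend containment steps for incident {incident_id}.",
]

def run_promptbook(prompts, ask, **params):
    """Fill each template with the shared parameters and send it to the model."""
    return [ask(prompt.format(**params)) for prompt in prompts]

# Usage with a stand-in for the actual model call:
responses = run_promptbook(
    INCIDENT_PROMPTBOOK,
    ask=lambda prompt: f"[model response to: {prompt}]",
    incident_id="INC-1234",
)
```

Because the prompts are fixed and parameterized, every analyst who runs the workflow asks the same questions in the same order, which is what makes the process consistent and measurable regardless of individual experience.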
Generative AI has the ability to transform security, compliance, identity, and management within the enterprise. This will save practitioners time, equip them with new skills, and ensure their time is spent on what matters most to the organization. We just need to expand our thinking about how generative AI is applied to operational roles.
To learn more about deploying generative AI in your environment, visit Microsoft Security Insider and explore our AI-powered cybersecurity product, Microsoft Copilot for Security.