Traditional tools can no longer keep pace with the threats posed by cybercriminals. The increasing speed, scale and sophistication of recent cyberattacks demand a new approach to security. Compounded by the cybersecurity workforce shortage and the increasing frequency and severity of cyber threats, there is an urgent need to close this skills gap. AI can tip the scales for defenders. A recent study of Microsoft Copilot for Security (currently in preliminary customer testing) showed that it increased the speed and accuracy of security analysts, regardless of their level of expertise, on common tasks such as identifying scripts used by attackers, creating incident reports and identifying appropriate remediation steps.
Microsoft processes an enormous volume of security data: more than 65 trillion cybersecurity signals per day. AI enhances our ability to analyze this information and surface the most valuable insights to help stop threats. We also use this signal intelligence to power generative AI for advanced threat protection, data security and identity security, helping defenders detect what others miss. Microsoft uses several methods to protect itself and its customers from cyberthreats, including AI-based threat detection to spot changes in how resources or network traffic are used; behavioral analytics to detect risky connections and abnormal behavior; machine learning (ML) models to detect risky logins and malware; Zero Trust models, where each access request must be fully authenticated, authorized and encrypted; and verification of device health before a device can connect to a corporate network.
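To make the ML-based risky-login detection mentioned above concrete, here is a minimal, illustrative sketch using anomaly detection on login features. This is not Microsoft's actual system; the feature set, the model choice (scikit-learn's IsolationForest) and all values are assumptions for demonstration only.

```python
# Illustrative sketch of risky-login detection via anomaly detection.
# NOT Microsoft's production approach; features and model are assumed.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one login event:
# [hour_of_day, failed_attempts, is_new_device, is_new_country]
normal_logins = np.array([
    [9, 0, 0, 0], [10, 1, 0, 0], [14, 0, 0, 0], [17, 0, 1, 0],
    [8, 0, 0, 0], [11, 0, 0, 0], [13, 1, 0, 0], [16, 0, 0, 0],
])

# Fit the model on historical (mostly benign) login behavior.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login with many failed attempts from a new device in a
# new country looks nothing like the baseline behavior.
suspicious = np.array([[3, 8, 1, 1]])
print(model.predict(suspicious))  # predict() returns -1 for anomalies, 1 for inliers
```

In a real deployment the feature vector would be far richer (IP reputation, impossible-travel signals, device fingerprints) and the flagged events would feed a risk-scoring pipeline rather than a hard block.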
Because bad actors know that Microsoft rigorously uses multi-factor authentication (MFA) to protect itself (all of our employees are set up for MFA or passwordless protection), we have seen attackers turn to social engineering in attempts to compromise our employees. Hot spots include areas where something of value is offered, such as free trials or promotional pricing for services or products. In these areas, it is not profitable for attackers to steal one subscription at a time, so they attempt to operationalize and scale these attacks without being detected.
Naturally, Microsoft builds AI models to detect these attacks for itself and its customers. These models detect fake student and school accounts, as well as fake companies or organizations that have altered their firmographic data or concealed their true identities to evade sanctions, circumvent controls or hide past criminal transgressions such as corruption convictions or attempted theft.
Using GitHub Copilot, Microsoft Copilot for Security, and other copilot chat features integrated with Microsoft’s internal engineering and operations infrastructure can help prevent incidents that could impact operations.