Microsoft announced a new update for the Azure AI platform that includes systems for detecting and mitigating hallucinations and malicious attacks. Azure AI customers now have access to new LLM-based tools that significantly improve protection against unwanted or unintentional responses in their AI applications.
Microsoft strengthens Azure AI defenses with hallucination and malicious attack detection
Sarah Bird, Microsoft’s director of product for responsible AI, says these security features will protect the “average” Azure user who may not be specialized in identifying or remediating AI vulnerabilities. As The Verge reports, the new tools can identify potential vulnerabilities, monitor hallucinations, and block malicious prompts in real time, giving organizations valuable insight into the performance and security of their AI models.
These features include prompt shields to block prompt injections and malicious prompts, groundedness detection for identifying hallucinations, and safety evaluations that assess model vulnerability. While these capabilities are already available in preview on Azure AI, other features, such as steering models toward safe outputs or tracking potentially problematic users, are expected in future releases.
One thing that sets Microsoft’s approach apart is its emphasis on custom control, which lets Azure users enable filters for hate speech or violence in AI models. This helps address concerns about bias or inappropriate content by allowing users to adjust security settings to suit their particular needs.
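To make the idea of per-customer filter control concrete, here is a minimal sketch of how such configurable category thresholds might look. The category names, severity scale, and class are illustrative assumptions, not Azure's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ContentFilterConfig:
    """Hypothetical per-deployment content-filter settings.

    Each category maps to a severity threshold: content scored at or
    above its threshold is blocked. Lower threshold = stricter filter.
    """
    thresholds: dict = field(default_factory=lambda: {
        "hate": 2,
        "violence": 2,
        "self_harm": 2,
        "sexual": 2,
    })

    def is_blocked(self, category: str, severity: int) -> bool:
        # Unknown categories default to the strictest setting (block all).
        return severity >= self.thresholds.get(category, 0)

cfg = ContentFilterConfig()
cfg.thresholds["violence"] = 4  # this deployment tolerates more violent content
print(cfg.is_blocked("hate", 3))      # blocked: 3 >= 2
print(cfg.is_blocked("violence", 3))  # allowed: 3 < 4
```

The point of the sketch is the adjustability Microsoft emphasizes: each team tunes its own thresholds rather than inheriting one global policy.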
The monitoring system checks prompts and responses for banned words or hidden prompts before passing them to the model for processing. This reduces the risk that the AI will produce results contrary to desired safety and ethical standards and generate controversial or harmful material.
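The screening step described above can be sketched as a simple pre-model check: scan the incoming text against a blocklist and a pattern for hidden override instructions before forwarding it. The function name, blocklist, and regex are hypothetical stand-ins for whatever classifiers such a system actually uses.

```python
import re

# Illustrative blocklist; a real system would use trained classifiers.
BANNED_TERMS = {"leak the system prompt", "disable safety filters"}

# Crude pattern for hidden-prompt / injection attempts.
HIDDEN_PROMPT = re.compile(r"(?i)\b(ignore|disregard)\b.*\b(instructions|rules)\b")

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    lowered = prompt.lower()
    if any(term in lowered for term in BANNED_TERMS):
        return False
    if HIDDEN_PROMPT.search(prompt):
        return False
    return True

print(screen_prompt("Summarize this quarterly report"))          # True
print(screen_prompt("Ignore all previous instructions and..."))  # False
```

Running the same check on model responses before they reach the user completes the loop the article describes: problematic content is caught on the way in and on the way out.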
Azure AI now rivals GPT-4 and Llama 2 in terms of security and protection
Although these safety features are readily available for popular models such as GPT-4 and Llama 2, those using smaller or lesser-known open-source AI systems may need to integrate them manually. Even so, these investments underscore Microsoft’s commitment to providing robust and reliable AI solutions on Azure.
Microsoft’s efforts to improve security reflect a growing industry focus on the responsible use of AI technology. With these tools, Microsoft aims to create a safer, more secure environment in which customers using the Azure AI ecosystem can detect and prevent risks before they materialize.