Azure AI security best practices

This article provides best practices for securing artificial intelligence (AI) workloads specifically in Azure. As organizations adopt AI capabilities at an unprecedented rate, security teams must proactively gain visibility into AI usage and implement appropriate controls to mitigate risks.

This article focuses on Azure-specific AI security considerations. For comprehensive, platform-agnostic AI security guidance—including organizational strategy, governance frameworks, and the full AI security lifecycle—see Security for AI in the Microsoft Security documentation.

This article complements the AI shared responsibility model, which explains the division of security responsibilities between you and Microsoft for AI workloads. For prescriptive security controls with Azure Policy enforcement, see Microsoft Cloud Security Benchmark v2 - Artificial Intelligence Security.

Enable visibility into AI workloads and usage

Before you can secure AI workloads, you need visibility into what AI applications are being used and built in your organization.

  • Use Microsoft Defender for Cloud to discover AI workloads in your Azure environment.: The Defender Cloud Security Posture Management (CSPM) plan provides AI security posture management capabilities, including discovering the generative AI Bill of Materials (AI BOM), built-in security recommendations, and attack path analysis. For more information, see AI security posture management with Defender for Cloud.

  • Use Microsoft Defender for Cloud Apps to discover SaaS AI applications.: The Defender for Cloud Apps catalog includes more than a thousand generative AI apps. You can view risk assessments, sanction or block apps, and create policies to detect new AI apps. For more information, see Govern discovered apps.

  • Track AI agent identities with Microsoft Entra Agent ID.: Microsoft Entra Agent ID provides a unified directory of agent identities created across Microsoft Copilot Studio, helping you manage agent lifecycle and permissions.

Secure Azure Machine Learning

Azure Machine Learning provides platforms for building and deploying AI applications. Securing these environments requires attention to network isolation, access control, and model governance.

  • Use managed network isolation.: Create Azure Machine Learning workspaces with managed virtual networks that provide private endpoints for dependent services and outbound traffic control. For more information, see Configure a private endpoint for Azure Machine Learning.

  • Implement least-privilege access control.: Configure RBAC using built-in roles and assign permissions at the project or workspace level. Use Microsoft Entra Agent ID for AI agent identity management, applying scoped, short-lived tokens for agent function access.

  • Deploy only approved AI models.: Use Azure Machine Learning model registry to track model provenance, verification status, and approval history. Configure automated scanning to validate model integrity and test against adversarial inputs before deployment. Deploy the "[Preview]: Azure Machine Learning Deployments should only use approved Registry Models" Azure Policy to enforce governance. For more information, see Model management and deployment.

  • Secure compute resources.: Configure compute instances without public IPs, use managed identity authentication, enable user isolation for shared clusters, and encrypt disks with customer-managed keys. For more information, see Secure an Azure Machine Learning training environment.

Implement AI-specific threat protection

AI workloads face unique threats including prompt injection, jailbreak attacks, and model manipulation. Implement threat detection and continuous testing specifically designed for AI.

Integrate red teaming into CI/CD pipelines to validate security before deployment. Test against known attack patterns from MITRE ATLAS and the OWASP Top 10 for LLM.

  • Implement human-in-the-loop for critical actions.: For high-risk AI operations such as external data transfers or system configuration changes, design workflows using Azure Logic Apps or Power Automate that pause for human review and approval before execution.

  • Monitor for risky AI usage patterns.: Use Microsoft Purview Insider Risk Management with the Risky AI usage policy template to detect and investigate risk activities related to AI. For more information, see Insider risk management policy templates.

Protect sensitive data in AI interactions

AI applications often interact with sensitive data. Implement data protection controls to prevent data loss and ensure compliance.

  • Use Microsoft Purview Data Security Posture Management (DSPM) for AI.: DSPM for AI provides insights into AI activity, ready-to-use policies to protect data in prompts, and data risk assessments for potential oversharing. For more information, see Data Security Posture Management for AI.

  • Apply sensitivity labels and DLP policies.: Extend Microsoft Purview sensitivity labels to data accessed by AI applications and configure DLP policies to detect and block sensitive data in AI prompts. For more information, see Get started with sensitivity labels.

Govern AI for compliance

AI applications must comply with regulatory requirements and organizational policies.

  • Implement responsible AI controls.: Follow Azure's responsible AI principles for fairness, transparency, privacy, and accountability.

  • Maintain audit trails.: Enable auditing for AI services: Microsoft Purview Audit captures Copilot interactions, Azure Monitor tracks Azure AI service usage, and Defender for Cloud Apps monitors SaaS AI activity. For more information, see Audit log activities.

Next steps