Claude Data Handling: What Happens to Your Data
The first security question in every Claude enterprise deployment is the same: where does our data go? The answer depends on which Claude product and deployment model you're using, and it matters enormously for regulated industries.
Claude Enterprise: Zero Data Retention
Claude Enterprise, deployed through the Anthropic console with an enterprise agreement, offers zero data retention by default. This means: content you send in prompts and receive in responses is not used to train Anthropic's models, is not retained on Anthropic's infrastructure after the session ends, and is not accessible to Anthropic staff for any purpose other than real-time safety filtering. This is the deployment model for regulated enterprises. Verify the specific terms in your enterprise agreement and ensure zero retention is explicitly confirmed in writing: it is a contractual commitment, not just a default setting.
Claude API: What the Default Policy Covers
When accessing Claude via the Anthropic API without an enterprise agreement, the standard data handling policy applies: inputs and outputs may be retained for up to 30 days for trust and safety purposes, and Anthropic may review content to improve safety measures. For enterprise deployments with sensitive data, this is typically not acceptable. If you're accessing Claude through AWS Bedrock or Google Cloud Vertex AI, your cloud provider's data handling terms apply; both offer enterprise-grade data isolation and regional data residency options.
Architecture for Data Minimisation
Even with zero retention, best practice is to minimise the sensitive data that reaches Claude's API at all. Design your prompt construction layer to: strip customer PII and replace with anonymised tokens resolved client-side, exclude regulated data fields (account numbers, national insurance numbers, health identifiers) unless the use case specifically requires them, implement a data classification check before any prompt is constructed, and log what data categories were included in each prompt for audit purposes. This approach satisfies the data minimisation principle under GDPR and reduces your residual risk surface regardless of the retention policy.
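The PII-stripping step described above can be sketched as a client-side tokenisation layer. This is an illustrative sketch only: the regex patterns, token format, and category names are assumptions, not a production-grade PII detector.

```python
import re
import uuid

# Illustrative patterns only; a real deployment needs a vetted PII
# detection library and patterns for every regulated field it handles.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_nino": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # national insurance number
}

def tokenise_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with opaque tokens before the prompt is constructed.

    Returns the anonymised text plus a mapping held client-side, so the
    original values never reach the API and can be restored locally."""
    mapping: dict[str, str] = {}
    for category, pattern in PII_PATTERNS.items():
        for match in set(pattern.findall(text)):
            token = f"<{category}_{uuid.uuid4().hex[:8]}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def detokenise(text: str, mapping: dict[str, str]) -> str:
    """Resolve tokens in Claude's response back to the original values."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

The mapping doubles as the audit artefact for which data categories were redacted from each prompt.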
Key Point
Zero data retention is a contractual commitment that requires verification in your enterprise agreement. Do not assume it applies: confirm it in writing and document that confirmation for your compliance records.
Access Controls and Identity Management
Enterprise Claude deployments should integrate with your existing identity infrastructure, not create a parallel access management system. Here's the access control architecture we implement on every enterprise deployment.
SSO Integration
Claude Enterprise supports SSO integration via SAML 2.0 and OIDC. Connect Claude to your existing identity provider, such as Okta, Microsoft Entra ID (formerly Azure AD), Google Workspace, or Ping Identity, so that user provisioning and deprovisioning is handled centrally. When an employee leaves the company and their account is disabled in your IdP, their Claude access should be revoked automatically within the SSO session window. Manual deprovisioning processes fail; do not rely on them.
Role-Based Access Controls
Implement at least three access tiers for Claude Enterprise. End users can interact with Claude within configured workspaces but cannot modify system prompts, configure data connections, or access admin analytics. Power users can create and modify custom prompts, build projects and knowledge bases, and access usage analytics for their team. Admins have full configuration access: SSO settings, workspace management, usage policies, and audit log access. Map these tiers to existing RBAC groups where possible to minimise access management overhead.
Workspace Segregation
Use Claude Enterprise's workspace feature to segregate access by team, project, or data sensitivity level. A workspace for the legal team should not be accessible to the sales team, particularly if legal has access to privileged documents. Configure distinct system prompts per workspace to enforce use-case-appropriate constraints. Audit workspace configurations quarterly; scope creep in workspace access is a common finding in security reviews.
Audit Logging: What to Capture and How
Audit logging for AI systems is different from audit logging for traditional applications. The audit trail needs to support three use cases: operational monitoring, security investigation, and regulatory examination. Each has different requirements.
What to Log at the Platform Level
Claude Enterprise provides admin-accessible usage analytics including: user-level usage counts, conversation counts by workspace, and API consumption metrics. These are sufficient for operational monitoring and cost management. They are not sufficient for regulatory examination of AI-assisted decisions. For the latter, you need application-level logging.
Application-Level Audit Logging
If Claude is integrated into your own application (via API), implement comprehensive application-level logging that captures: the full prompt sent to Claude (or a hash thereof if data sensitivity requires), the response received, the user who triggered the request, the timestamp, the business context (e.g., "contract review for agreement ID 12345"), the confidence score or human-review status if applicable, and any downstream action taken based on Claude's output.
Store these logs in an immutable, queryable audit store: a logging platform with write-once storage and access controls that prevent modification. The retention period should align with your jurisdiction's requirements for the data type: six years for financial records in the UK, seven years in the US, and consistent with GDPR data retention principles for personal data. A representative record:
{
"event_type": "claude_api_call",
"timestamp": "2026-03-26T09:14:32Z",
"user_id": "u_8f3a9c",
"session_id": "sess_x72kp",
"workspace": "finance-reporting",
"business_context": "variance_commentary_q1_2026",
"model": "claude-opus-4-6",
"prompt_hash": "sha256:a8f3...",
"prompt_data_categories": ["financial_data"],
"response_hash": "sha256:b4e2...",
"confidence_score": 0.87,
"human_review_required": false,
"approved_by": null,
"latency_ms": 1840,
"tokens_input": 2340,
"tokens_output": 412
}
Model Risk Management Documentation
For deployments subject to model risk management review, common in financial services under SR 11-7 guidance, maintain a model inventory record for each Claude deployment that includes: use case description and business purpose, input data sources and data classification, output types and downstream decisions informed by output, validation approach and test results, human oversight controls, performance monitoring approach, and escalation and model change procedures. Our Claude Security & Governance service includes a complete MRM documentation package.
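Keeping the inventory record as structured data, rather than a free-form document, makes completeness checkable before MRM sign-off. A hypothetical sketch whose fields mirror the elements listed above:

```python
from dataclasses import dataclass

# Hypothetical structure for an SR 11-7-style model inventory record;
# field names are illustrative, not a regulator-mandated schema.
@dataclass
class ModelInventoryRecord:
    deployment_name: str
    use_case_description: str
    input_data_sources: str
    data_classification: str
    output_types: str
    downstream_decisions: str
    validation_approach: str
    human_oversight_controls: str
    monitoring_approach: str
    change_procedure: str

    def missing_fields(self) -> list[str]:
        """Flag any element left blank before the record is signed off."""
        return [name for name, value in vars(self).items() if not value]
```

A CI check that fails when `missing_fields()` is non-empty turns the documentation requirement into an enforced gate.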
Prompt Security and Injection Defence
Prompt injection is the most significant active security risk in Claude enterprise deployments. Understanding the attack surface and implementing appropriate defences is non-negotiable for any production deployment.
What Prompt Injection Looks Like in Practice
Prompt injection occurs when an attacker embeds instructions in content that Claude processes (a document, email, web page, or database record) that override the system prompt's intended behaviour. In an enterprise context, this might look like: a contract that contains hidden instructions telling Claude to summarise it favourably and omit concerning clauses, an email asking Claude to forward sensitive information to an external address, or a support ticket containing instructions that alter Claude's response to other users.
Architectural Defences
Defence against prompt injection requires layered controls. First, clearly separate system instructions (trusted) from user-supplied content (untrusted) in your prompt architecture. Never interpolate raw user input directly into system prompt sections. Second, implement output validation: before Claude's response is used in any downstream action, validate that it matches the expected format and doesn't contain indicators of injection (unusual instructions, requests to ignore previous instructions, references to being in developer mode). Third, for agentic deployments where Claude takes real-world actions, implement a human approval gate for any high-consequence action: file writes, email sends, financial transactions. Claude's architecture should require explicit confirmation before irreversible actions.
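The output-validation layer can be sketched as a format check plus a scan for injection indicators. The indicator list below is an illustrative assumption; treat a match as a trigger for human review, not a definitive verdict:

```python
import json
import re

# Illustrative indicators only; real deployments should tune this list
# and route matches to human review rather than blocking outright.
INJECTION_INDICATORS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"developer mode", re.I),
    re.compile(r"system prompt", re.I),
]

def response_is_suspicious(response: str) -> bool:
    """Scan text for known prompt-injection phrasing."""
    return any(p.search(response) for p in INJECTION_INDICATORS)

def validate_response(response: str, expected_keys: set[str]) -> dict:
    """Gate a structured response before any downstream action uses it."""
    data = json.loads(response)  # raises ValueError on non-JSON output
    if set(data) != expected_keys:
        raise ValueError(f"unexpected fields: {set(data) ^ expected_keys}")
    if any(response_is_suspicious(str(v)) for v in data.values()):
        raise ValueError("possible injection indicators in output")
    return data
```

Requesting JSON output with a fixed schema makes this gate cheap to apply on every call.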
High Risk Pattern
Never build an agentic Claude system where the agent can send emails or make external API calls based directly on content it read from an untrusted source (web pages, uploaded documents, emails) without a human-in-the-loop approval step. This is the highest-risk prompt injection pattern.
AI Governance Policy Framework
Every enterprise deploying Claude needs a formal AI governance framework. Not because regulators always require it today (though some do), but because it creates the internal accountability structures that prevent misuse, ensure quality, and enable responsible scaling.
The Four Elements of an AI Governance Framework
Effective AI governance has four elements. An AI use policy defines what Claude can and cannot be used for in your organisation, what data can be sent to Claude, what the human oversight requirements are for AI-assisted decisions, and how to handle cases where Claude produces concerning output. An AI governance owner, ideally a named individual in the CTO or risk function, has accountability for the policy, reviews new use cases before deployment, and manages incidents. A use case review process ensures that every new Claude application is assessed for risk, data handling implications, and human oversight adequacy before it goes live. And a monitoring and incident response procedure covers how to detect when a Claude deployment is behaving incorrectly and how to respond and communicate.
AI Policy Template
Our Claude Acceptable Use Policy template provides a complete starting point. Key provisions to include: permitted and prohibited use cases, data classification matrix (what data categories can be used with Claude), human oversight requirements by risk level, acceptable output use cases (when can Claude output be used without human review), employee training requirements, and breach reporting obligations. For industries with existing AI governance frameworks (SR 11-7 in banking, ICO guidance in the UK, EU AI Act requirements), the policy should reference these and document how Claude deployments comply.
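The data classification matrix lends itself to direct enforcement in the prompt construction layer. A hypothetical sketch; the categories and rules are illustrative assumptions a real policy would replace with its own:

```python
# Hypothetical classification matrix: whether a data category may be sent
# to Claude, and whether its outputs require human review. Illustrative only.
CLASSIFICATION_MATRIX = {
    "public":        {"permitted": True,  "human_review": False},
    "internal":      {"permitted": True,  "human_review": False},
    "confidential":  {"permitted": True,  "human_review": True},
    "personal_data": {"permitted": True,  "human_review": True},
    "regulated_phi": {"permitted": False, "human_review": True},
}

DENY = {"permitted": False, "human_review": True}  # fail-closed default

def prompt_allowed(categories: list[str]) -> tuple[bool, bool]:
    """Return (permitted, human_review_required) for a prompt touching the
    given data categories; unknown categories are denied by default."""
    rules = [CLASSIFICATION_MATRIX.get(c, DENY) for c in categories]
    return (all(r["permitted"] for r in rules),
            any(r["human_review"] for r in rules))
```

Wiring this check in before any prompt is constructed makes the policy self-enforcing rather than advisory.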
Compliance: GDPR, SOC 2, HIPAA & Financial Services
GDPR
Deploying Claude in a GDPR context requires: a legal basis for processing personal data with Claude (legitimate interests, contract, or consent depending on the use case), a data processing agreement with Anthropic as the data processor (required for enterprise contracts), documentation of data flows in your Record of Processing Activities, data minimisation controls in prompt construction, and a data subject rights process for handling deletion requests covering personal data Claude has processed. The zero retention commitment significantly simplifies the GDPR position: if Anthropic doesn't retain the data, deletion requests are technically satisfied by the retention policy itself.
SOC 2 and ISO 27001
Anthropic holds SOC 2 Type II certification. This satisfies the vendor security assessment requirement for most enterprise procurement processes. Request Anthropic's SOC 2 report through your account team and review it against your control requirements. For Claude deployments that process data in-scope for your own SOC 2 or ISO 27001 certification, ensure that your application-level controls (audit logging, access management, monitoring) are documented as part of your control environment. Our Claude SOC 2 and ISO 27001 guide covers this in detail.
HIPAA
Healthcare organisations considering Claude for clinical workflows need a Business Associate Agreement (BAA) from Anthropic before processing any Protected Health Information. Anthropic offers BAAs to enterprise customers; engage your account team before any PHI is used with Claude in any form. Claude Enterprise with zero retention is the required deployment model for HIPAA contexts. Our Claude HIPAA Compliance guide covers the full requirements.
Financial Services (FCA, PRA, SEC, FINRA)
UK financial services firms deploying Claude need to assess it under the FCA's existing model risk framework and the emerging AI regulatory guidance from the Bank of England/PRA. Key requirements: documentation of model purpose and methodology, validation of outputs against known-good benchmarks, human oversight for regulated activities (advice, credit decisions, transaction surveillance), and incident reporting procedures. Claude is positioned as a decision-support tool, not a decision-maker, which keeps most deployments out of the highest-risk regulatory categories. Our Claude for Financial Services guide covers the regulatory landscape in full.
Enterprise Security Checklist
Pre-Deployment Security Checklist
Need Help Building Your Claude Governance Framework?
Our Claude Security & Governance service delivers a complete framework: data handling architecture, AI policy, audit logging design, and MRM documentation for regulated industries.