Most enterprises deploying Claude start with the technical architecture: connectors, MCP servers, system prompts, security reviews. The governance documentation, the Acceptable Use Policy, gets added later, sometimes much later, often drafted by legal counsel who has never opened a Cowork session or run a Claude Code agent.
That ordering is backwards. An Acceptable Use Policy (AUP) for Claude isn't bureaucratic paperwork. It's a communication tool that tells every employee, from the CFO to the graduate analyst, exactly what Claude is for, what it isn't for, and what happens if those lines are crossed. Done right, it accelerates adoption because it removes ambiguity. Done wrong, it creates fear that slows rollout.
This guide provides a production-ready Claude Acceptable Use Policy template you can adapt for your organisation, along with the reasoning behind each section. If you need help formalising your Claude governance framework, our Claude Security & Governance service builds the full policy stack: AUP, data classification, incident response, and audit procedures.
What this article covers
- Why a standalone Claude AUP is essential (separate from general AI policy)
- The 8 core sections every enterprise AUP must include
- A full, customisable AUP template with clause-level explanations
- Enforcement and incident response procedures
- Common policy gaps that create compliance risk
Why Claude Needs Its Own Acceptable Use Policy
You may already have a general AI usage policy, or a broader technology acceptable use policy covering email, devices, and internet access. Claude requires its own policy, for three reasons that general policies consistently miss.
Agentic behaviour changes the risk surface
A general AI policy written in 2023 was almost certainly designed for chatbots: question in, answer out, human decides what to do with it. Claude Cowork and Claude Code agents operate differently. They can read files, write to systems, execute code, send emails, and schedule meetings. The risk of a poorly prompted agent isn't a bad answer; it's an action. Your policy must explicitly address this.
Data handling is product-specific
Claude Enterprise customers operate under Anthropic's zero-retention policy: your conversations aren't used to train models, and Anthropic doesn't store them by default. But this only applies to the enterprise product tier. A policy that covers "AI tools generally" won't communicate this distinction clearly enough for employees to make informed decisions about what data to share.
Liability follows specificity
When an employee misuses a tool and you need to demonstrate that appropriate policies were in place, courts and regulators look for specificity. A policy that says "use AI responsibly" provides no defence. A policy that specifies "you may not input customer payment card data into Claude Cowork sessions" does.
For a full view of how Claude's data handling supports enterprise compliance, see our guide on Claude data privacy and GDPR compliance.
Claude Acceptable Use Policy – Full Template
The following template is structured for a mid-to-large enterprise. Sections marked [CUSTOMISE] require organisation-specific input. Review with your legal and compliance team before publishing.
Section 1 – Scope and Purpose
CLAUDE ACCEPTABLE USE POLICY
Version 1.0 | Effective Date: [DATE] | Review Date: [DATE + 12 months]
Owner: [CISO / Chief AI Officer / CTO – select appropriate]
1. SCOPE AND PURPOSE
1.1 This policy governs the use of Anthropic's Claude AI platform ("Claude") by all employees, contractors, consultants, and third parties ("Users") who access Claude through [ORGANISATION NAME]'s enterprise subscription.
1.2 This policy applies to all Claude products deployed by [ORGANISATION NAME], including Claude Enterprise, Claude Cowork, Claude Code, and any Claude API integrations developed or procured by the organisation.
1.3 The purpose of this policy is to:
(a) Define permitted and prohibited uses of Claude
(b) Establish data handling requirements when using Claude
(c) Protect [ORGANISATION NAME], its clients, and employees from legal, regulatory, and reputational risk
(d) Enable confident, productive use of Claude by providing clear guidance
1.4 This policy supplements but does not replace [ORGANISATION NAME]'s general Information Security Policy, Data Classification Policy, and Code of Conduct.
Section 2 – Permitted Uses
2. PERMITTED USES
2.1 Users may use Claude for the following categories of work:
CONTENT AND COMMUNICATION
- Drafting, editing, and proofreading documents, emails, reports, and presentations
- Translating content between languages for internal and client communications
- Summarising lengthy documents, meeting transcripts, and research materials
- Creating training materials, knowledge base articles, and internal guides
ANALYSIS AND RESEARCH
- Analysing publicly available market data, news, and industry research
- Reviewing and summarising regulatory guidance and legal frameworks
- Performing financial modelling and data analysis using non-confidential datasets
- Conducting competitive research using publicly available information
SOFTWARE DEVELOPMENT (Claude Code users only)
- Writing, reviewing, and debugging code in permitted development environments
- Generating test cases, documentation, and code comments
- Modernising legacy codebases under approved migration projects
- Automated code reviews within approved CI/CD pipeline configurations
KNOWLEDGE WORK AUTOMATION (Claude Cowork users only)
- Automating repetitive workflows approved by line managers
- Processing internal documents within approved Cowork connectors
- Scheduling and task management using approved integrations
2.2 Claude may be used with [ORGANISATION NAME]-classified data up to and including [INTERNAL / CONFIDENTIAL – select per your classification scheme], subject to Section 3 of this policy.
2.3 All Claude outputs must be reviewed by the User before being relied upon for decisions, shared externally, or incorporated into final work products.
Section 3 – Data Handling Requirements
3. DATA HANDLING REQUIREMENTS
3.1 DATA CLASSIFICATION RULES
PROHIBITED INPUT (must never be entered into Claude):
- Payment card data (PCI-DSS regulated)
- Social Security numbers, National Insurance numbers, or equivalent government identifiers
- Biometric data
- [ORGANISATION NAME] client credentials, access tokens, or API keys
- Data classified as SECRET or above per [ORGANISATION NAME]'s classification scheme
- Unsanitised personal data of individuals in GDPR-regulated jurisdictions, where the processing purpose is not covered by [ORGANISATION NAME]'s AI Data Processing Agreement with Anthropic
PERMITTED WITH CAUTION:
- Client names and project descriptions: permitted when necessary for the task; minimise to what is required
- Employee personal data: permitted for HR workflows explicitly approved under this policy
- Financial figures: permitted unless linked to identifiable individuals without appropriate legal basis
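To ground the classification rules in 3.1, many teams back the policy with a lightweight pre-submission screen in their Claude-adjacent tooling. The sketch below is illustrative only and not part of the template: the patterns and the `screen_prompt` name are our assumptions, and real detection of card data or government identifiers needs a proper DLP tool rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only -- production deployments should rely on a
# dedicated DLP/classification tool, not hand-rolled regexes.
PROHIBITED_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "uk_national_insurance": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the prohibited data categories detected in a draft prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

# A non-empty result means the prompt should be blocked (or redacted)
# before it reaches Claude.
flags = screen_prompt("Card on file: 4111 1111 1111 1111, please reconcile.")
```

A screen like this catches accidental pastes; it does nothing for deliberate evasion, which is why the policy's enforcement section still matters.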
3.2 ENTERPRISE DATA RETENTION
[ORGANISATION NAME] operates Claude Enterprise under Anthropic's zero-retention policy. Claude conversations are not stored by Anthropic and are not used to train Claude models. This applies to the Enterprise tier only. Users who access Claude via personal accounts (e.g., claude.ai Pro) are not covered by this policy and must not use personal accounts for [ORGANISATION NAME] work.
3.3 THIRD-PARTY DATA
Users must not input confidential information belonging to third parties (clients, partners, regulators) into Claude without confirming that the relevant data sharing agreement or data processing agreement covers AI processing by [ORGANISATION NAME]'s approved vendors.
3.4 AUDIT AND LOGGING
All Claude usage within [ORGANISATION NAME]'s enterprise deployment is subject to audit logging. [ORGANISATION NAME] may review Claude usage logs for compliance, security investigation, and policy enforcement purposes. Users have no expectation of privacy in their Claude usage within [ORGANISATION NAME] systems.
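Clause 3.4 is only useful if someone actually reviews the logs. As a sketch of what a periodic compliance pass might look like: the log schema, connector names, and function below are hypothetical; adapt the field names to whatever your Claude Enterprise audit export actually produces.

```python
# Hypothetical audit-log entries -- the real export schema for your
# deployment will differ; adjust the field names accordingly.
SAMPLE_LOGS = [
    {"user": "a.chen", "event": "conversation", "connector": "sharepoint"},
    {"user": "j.patel", "event": "conversation", "connector": "salesforce-prod"},
    {"user": "m.okafor", "event": "conversation", "connector": "jira"},
]

# Connectors approved under Section 5.2 of the policy (illustrative set).
APPROVED_CONNECTORS = {"sharepoint", "jira"}

def flag_unapproved(logs: list[dict]) -> list[dict]:
    """Return entries whose connector is outside the approved set."""
    return [entry for entry in logs
            if entry.get("connector") not in APPROVED_CONNECTORS]

flagged = flag_unapproved(SAMPLE_LOGS)
```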
Section 4 – Prohibited Uses
4. PROHIBITED USES
4.1 The following uses of Claude are strictly prohibited:
CONTENT CREATION
- Generating content that is defamatory, discriminatory, harassing, or abusive
- Creating deepfakes, synthetic media, or AI-generated content designed to deceive
- Producing content that infringes third-party intellectual property rights
- Generating content that violates Anthropic's Terms of Service or Usage Policy
DECEPTION AND MISREPRESENTATION
- Presenting AI-generated content as solely human-authored when the context requires disclosure
- Using Claude to impersonate clients, colleagues, regulators, or third parties
- Submitting AI-generated work to academic or professional bodies where AI assistance is prohibited
SECURITY AND TECHNICAL
- Attempting to manipulate Claude's behaviour through prompt injection or jailbreaking techniques
- Using Claude to identify, exploit, or disclose vulnerabilities in [ORGANISATION NAME] or third-party systems, except under an approved security testing programme
- Attempting to extract training data, system prompts, or proprietary configurations from Claude
REGULATORY AND LEGAL
- Using Claude outputs as the sole basis for regulated professional advice (legal, medical, financial) without appropriate professional review
- Processing personal data in ways that violate applicable privacy laws, including GDPR, CCPA, and sector-specific regulations
- Using Claude to circumvent or undermine [ORGANISATION NAME]'s risk management, compliance, or internal control frameworks
COMMERCIAL AND COMPETITIVE
- Sharing [ORGANISATION NAME]'s Claude implementation details, system prompts, or workflows with competitors or third parties without authorisation
- Using [ORGANISATION NAME]'s Claude access for personal commercial benefit or outside employment activities
Section 5 – Agentic AI Controls (Cowork / Claude Code)
5. AGENTIC AI CONTROLS
5.1 Claude Cowork and Claude Code operate as AI agents capable of taking actions: reading files, writing documents, executing code, sending communications, and interacting with connected systems. These capabilities require additional controls.
5.2 APPROVAL REQUIREMENTS
Users may not connect Claude Cowork or Claude Code agents to the following systems without prior written approval from [IT SECURITY / CISO / LINE MANAGER – customise]:
- Production databases or data warehouses
- Customer-facing systems or CRM platforms
- Financial systems, ERP platforms, or payment processing systems
- External communication systems (email, Slack, Teams) with permissions to send externally
5.3 HUMAN OVERSIGHT REQUIREMENT
AI agents must not be configured to take irreversible actions (send emails, execute transactions, delete data, make public disclosures) without explicit human confirmation at each step, unless the workflow has been formally reviewed and approved under [ORGANISATION NAME]'s AI Workflow Approval process.
5.4 SCOPE LIMITATION
Claude Code agents must operate only within the approved repository scope defined in the relevant CLAUDE.md configuration. Agents must not be granted permissions beyond what is required for the approved task.
5.5 INCIDENT REPORTING
Any Claude agent action that produces an unintended outcome (incorrect data modification, unintended communication, or unexpected system interaction) must be reported to [IT SECURITY CONTACT / HELPDESK] within 24 hours.
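The human-oversight requirement in clause 5.3 can be enforced in code as well as on paper. Below is a minimal sketch of a confirmation gate, assuming a hypothetical `confirm` callback that routes to a human; the action names and return shape are illustrative, not a real Claude API.

```python
# Actions the policy treats as irreversible (illustrative list).
IRREVERSIBLE_ACTIONS = {"send_email", "execute_transaction", "delete_data"}

def gated(action_name, action_fn, confirm):
    """Run action_fn, but require confirm(action_name) to return True
    first whenever the action is classed as irreversible."""
    if action_name in IRREVERSIBLE_ACTIONS and not confirm(action_name):
        return {"status": "blocked", "action": action_name}
    return {"status": "done", "result": action_fn()}

# In a real deployment, `confirm` would route to a human (chat approval,
# ticket sign-off); here it is stubbed to always refuse.
result = gated("send_email", lambda: "sent", confirm=lambda name: False)
```

The point of the pattern is that the gate sits in the tool layer, outside the model, so a misbehaving agent cannot talk its way past it.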
Section 6 – Intellectual Property
6. INTELLECTUAL PROPERTY
6.1 OWNERSHIP OF OUTPUTS
AI-generated outputs created using [ORGANISATION NAME]'s Claude deployment in the course of employment are work product of [ORGANISATION NAME] and subject to standard employment IP assignment provisions.
6.2 THIRD-PARTY IP
Users must not instruct Claude to reproduce substantial portions of copyrighted text, code, or other protected works. Claude may be used to analyse, summarise, or comment on third-party content within the limits of fair use and equivalent doctrines.
6.3 DISCLOSURE IN EXTERNAL WORK
Where [ORGANISATION NAME]'s client agreements or professional standards require disclosure of AI-assisted work, Users must follow the disclosure procedures defined in [RELEVANT INTERNAL PROCEDURE].
6.4 OPEN SOURCE CODE
Code generated by Claude Code agents that will be incorporated into software distributed externally must be reviewed for open source licence compatibility by [LEGAL / ENGINEERING LEAD – customise] before release.
Section 7 – Enforcement
7. ENFORCEMENT AND CONSEQUENCES
7.1 Violations of this policy may result in:
- Revocation of Claude access, temporary or permanent
- Formal disciplinary action up to and including termination of employment
- Legal action where the violation involves fraud, breach of confidentiality, or regulatory non-compliance
- Notification to relevant regulators where required by law
7.2 Violations will be assessed on a case-by-case basis, taking into account:
- Whether the violation was intentional or inadvertent
- The severity of actual or potential harm
- The User's prior history of policy compliance
- Whether the User took prompt remedial action upon identifying the violation
7.3 Users who are uncertain whether a particular use is permitted under this policy should contact [AI GOVERNANCE CONTACT / LINE MANAGER] before proceeding.
Section 8 – Review and Governance
8. REVIEW AND GOVERNANCE
8.1 This policy will be reviewed at least annually and updated to reflect:
- Changes to Claude products or Anthropic's terms
- Changes to applicable laws and regulations
- Significant incidents or near-misses identified through the audit process
- Lessons learned from [ORGANISATION NAME]'s own AI deployment experience
8.2 The [CISO / Chief AI Officer] is responsible for maintaining this policy and communicating updates to all Users.
8.3 Questions, concerns, or suspected violations should be directed to:
[AI GOVERNANCE EMAIL / HELPDESK CONTACT]
Policy Owner: [ROLE]
Approved by: [EXECUTIVE SPONSOR]
Effective Date: [DATE]
Common Policy Gaps That Create Compliance Risk
Having reviewed Claude governance documentation across dozens of enterprise deployments, we consistently see the same gaps, often in policies drafted by legal teams who inherited them from generic AI policy templates.
Gap 1: No distinction between Claude tiers
A policy that says "do not share confidential data with AI tools" without distinguishing between Claude Enterprise (zero-retention, enterprise agreement) and personal claude.ai accounts creates confusion. Employees either over-restrict, avoiding legitimate work, or over-share, assuming all Claude access is enterprise-grade. Be explicit about which products are covered.
Gap 2: Agentic AI treated as conversational AI
Policies written before 2025 typically address chatbot usage. Claude Cowork agents that connect to SharePoint, send Teams messages, and update CRM records require action-specific controls, not just conversation restrictions. If your AUP doesn't mention agents, it doesn't cover them.
Gap 3: No escalation path for edge cases
Every policy needs a "when in doubt, ask" mechanism. If employees encounter a task that feels like it might be covered by the AUP but aren't sure, they need a named contact or a clear process, not just the instruction to "exercise good judgment." Good judgment without guidance produces inconsistent decisions.
Gap 4: Output review obligations are implicit, not explicit
Claude can produce confident-sounding outputs that are factually incorrect. Your policy must state explicitly that AI outputs require human review before being used for decisions or shared externally. This both sets expectations and creates a liability record that demonstrates appropriate human oversight.
Gap 5: No third-party data clause
Many enterprises deploy Claude alongside Salesforce, Jira, or SharePoint via MCP connectors. The data flowing through those connectors includes client information, contractual details, and regulated content. Your AUP must address which third-party data categories are permitted to flow through Claude-connected systems.
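One way to make that concrete is a per-connector allowlist of permitted data categories, checked before a new connector or workflow goes live. The mapping below is a hypothetical sketch; the connector and category names are placeholders for your own classification scheme and data sharing agreements.

```python
# Hypothetical per-connector allowlist of third-party data categories;
# populate it from your actual data sharing agreements.
CONNECTOR_PERMITTED = {
    "sharepoint": {"internal", "client_project_docs"},
    "salesforce": {"internal", "client_contact_records"},
    "jira": {"internal"},
}

def transfer_allowed(connector: str, category: str) -> bool:
    """Check a proposed Claude data flow against the allowlist;
    unknown connectors are denied by default."""
    return category in CONNECTOR_PERMITTED.get(connector, set())
```

Deny-by-default for unlisted connectors is the design choice that matters here: a new MCP connector should require a governance decision, not inherit blanket permission.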
Our Claude AI governance framework guide covers the full policy stack (AUP, data classification, risk assessment, and audit procedures) if you want to build beyond the AUP itself.
Implementing and Communicating Your AUP
A policy that lives in a SharePoint folder nobody visits is not a governance control. Implementation requires active communication, training integration, and ongoing reinforcement.
Launch alongside Claude onboarding
The best time to introduce the AUP is when employees first access Claude, not months later during a compliance training cycle. Build policy acknowledgement into your Claude onboarding flow. Platforms like Claude Cowork support admin-configured onboarding messages. Use them.
Include in AI training programmes
If you're running Claude training across your organisation, dedicate at least 20 minutes to AUP scenarios. Use concrete examples: "Can I paste this client email into Claude?" is a better training vehicle than "refer to Section 3.3 for third-party data requirements." Our Claude training workshops include AUP scenario exercises as a standard module.
Integrate into Claude system prompts
For Claude Cowork and API deployments, include a concise summary of the most critical AUP rules in the system prompt: "This Claude instance is for internal use only. Do not enter PCI data or client credentials. All outputs require human review before external sharing." This creates a just-in-time reminder without relying on employees remembering a document they read months ago.
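For API deployments, one minimal pattern is to prepend the AUP reminder to every task-specific system prompt before it is sent. The helper below is a sketch under that assumption; the reminder text is the example from above, and the resulting string would be supplied as the system prompt of the Claude deployment.

```python
# The AUP summary from the article; swap in your own policy's
# highest-risk rules.
AUP_REMINDER = (
    "This Claude instance is for internal use only. "
    "Do not enter PCI data or client credentials. "
    "All outputs require human review before external sharing."
)

def build_system_prompt(role_prompt: str) -> str:
    """Prepend the AUP reminder to a task-specific system prompt."""
    return f"{AUP_REMINDER}\n\n{role_prompt}"

system = build_system_prompt("You are a drafting assistant for the finance team.")
# `system` is then used as the system prompt for every request in the
# deployment, so the reminder cannot be omitted per-task.
```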
Review quarterly for the first year
Claude's capabilities are evolving faster than annual review cycles can track. For the first 12 months post-deployment, review your AUP quarterly against any new Claude features (computer use, new MCP connectors, updated models) and any incidents that occurred. After stabilisation, annual review is appropriate.
Additional Requirements for Regulated Industries
Financial services, healthcare, and legal organisations face sector-specific requirements that go beyond this standard AUP template.
Financial services
If you're operating under FCA, SEC, MiFID II, or similar regulatory frameworks, your AUP must address AI model risk management (MRM) requirements, including documentation of model governance, change control, and validation processes. Claude outputs used in investment analysis or client recommendations may require additional disclosure and approval steps. See our guide on Claude for financial services for sector-specific considerations.
Healthcare
HIPAA-covered entities must ensure their Claude AUP addresses PHI handling, business associate agreement requirements with Anthropic, and the distinction between de-identified and identifiable patient data. Our Claude HIPAA compliance guide covers these requirements in detail.
Legal and professional services
Solicitors, barristers, and regulated legal practitioners must consider professional conduct rules around client confidentiality, legal professional privilege, and the disclosure obligations that may apply when AI tools are used in client matters. These vary by jurisdiction and bar association guidance; your AUP should reference the relevant professional body's AI guidance.
Key takeaways
- Claude needs its own AUP: general AI policies miss agentic behaviour and product-tier distinctions
- The 8 sections in this template cover scope, permitted uses, data handling, prohibited uses, agentic controls, IP, enforcement, and governance
- The most common gaps: no tier distinction, no agentic AI controls, and no third-party data clauses
- Publish the AUP at the point of onboarding, not months later in a compliance cycle
- Regulated industries (financial services, healthcare, legal) need sector-specific additions
Get Claude governance updates delivered weekly
Policy templates, security guides, and implementation playbooks from our certified architects.
Claude Implementation Team
Claude Certified Architects with deployments across financial services, healthcare, legal, and manufacturing. Learn about our team →