General-purpose AI / LLM API
Anthropic compliance: GDPR, AI Act, DPA, training, transfers
Independent compliance research from Janus Compliance. Reviewed by Michael K. Onyekwere, CIPP/E. Last reviewed 2026-04-29. Not legal advice.
TL;DR. Commercial products (API, Team, Enterprise): contractually no training by default. API log retention dropped from 30 days to 7 days on 2025-09-14 — the strongest default in the LLM market. Consumer products (claude.ai free, Pro, Max): training default flipped on 2025-10-08; consumer data is now training-eligible unless the user opts out, with up to 5-year retention. Anthropic became a Microsoft 365 Copilot subprocessor on 2026-01-07 and is explicitly out of EU Data Boundary scope for that route.
DPO action: map staff use of consumer claude.ai (defaults are no longer protective); decide whether to allow Anthropic models in Copilot tenants; apply for ZDR if data sensitivity warrants.
What the tool does
Anthropic runs Claude — both the consumer-facing chat product (claude.ai) and the API behind it. Buyers will be looking at one of three things: the API (for embedded apps), Claude Team / Enterprise (for staff use), or claude.ai consumer plans. As with OpenAI, the terms, training defaults, and retention defaults differ between commercial and consumer products. Do not conflate them.
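For orientation, the API route looks like this in practice. A minimal sketch using Anthropic's Python SDK; the model ID is illustrative, so check the current model list before relying on it:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Single-turn request to the Messages API. The commercial terms and the
# 7-day retention default discussed below apply to this route.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarise clause 4.2 of this DPA: ..."}],
)
print(response.content[0].text)
```

Anything sent through this call falls under the Commercial Terms, not the consumer terms discussed later.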
Data processed
- Text input from users / apps
- Documents you upload for analysis
- Image content (Claude vision)
- Tool-use / function-call results
- Embeddings (Anthropic's first-party embeddings story has shifted through 2025–2026 via the Voyage AI acquisition; if your use case relies on embeddings, confirm the current API surface and terms; see the sketch after this list)
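On that embeddings point: at the time of writing, embedding workloads run through the Voyage client rather than the core Anthropic SDK. A minimal sketch, assuming the standalone voyageai Python package is still the supported surface; the package, client call, and model name should all be verified against current docs:

```python
# pip install voyageai  (assumed still the supported package post-acquisition)
import voyageai

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

result = vo.embed(
    ["Processor shall implement appropriate technical and organisational measures."],
    model="voyage-3",  # illustrative model name; confirm current offerings
)
print(len(result.embeddings[0]))  # vector dimensionality
```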
Special-category likelihood: High for the same reasons as OpenAI — if staff or end users can type freely, sensitive content gets in. DPIA needed for any public-facing or HR-adjacent deployment.
Default geographic processing: Multi-region; primarily US infrastructure.
DPA availability
Anthropic publishes a DPA via the Privacy Center. An updated DPA effective 2026-01-01 is automatically incorporated into the Commercial Terms of Service — when a customer accepts the Commercial Terms, they accept the DPA without a separate signature flow.
- Pointer (Help Center): https://privacy.claude.com/en/articles/7996862-how-do-i-view-and-sign-your-data-processing-addendum-dpa — short article that links the actual DPA document
- The DPA itself covers SCCs Module 2 / Module 3 for international transfers, the UK International Data Transfer Addendum, and a Swiss addendum
- Establishes the customer as data controller, Anthropic as processor
- Irish law governs the agreement; disputes resolve in Irish courts
- Last updated: 2026-01-01
This auto-incorporation model is buyer-friendly compared to OpenAI's account-gated DPA flow, but the implication is the same: read the terms before signing the commercial contract. Note that the SCC modules, UK Addendum, and Irish governing-law provisions are stated in the DPA document itself, not in the Help Center pointer article.
Subprocessor list
Anthropic publishes a subprocessor list. Notable subprocessors include:
- Amazon Web Services (AWS) — primary infrastructure
- Google Cloud Platform — additional infrastructure
- Microsoft Azure — added as a subprocessor for Microsoft Foundry / Copilot integrations from 2026-01-07
- Various support and analytics tooling
The multi-cloud posture (AWS + GCP + Azure) is unusual and gives Anthropic redundancy, but it also means buyers should not assume a single cloud's regional guarantees apply.
Training-on-customer-data position
Commercial products (API, Team, Enterprise): not used for model training, by default and by contract. Anthropic states API inputs and outputs are never used for training; Team and Enterprise data is likewise excluded.
Consumer products (claude.ai free, Pro, Max): default changed 2025-10-08. Consumer data is now training-eligible by default, with retention of up to 5 years for training purposes unless the user opts out. This is a material change that many enterprise buyers have not registered. Staff using personal claude.ai accounts are now in that flow unless they have actively opted out.
API log retention: As of 2025-09-14, the default API log retention dropped from 30 days to 7 days. Organisations needing the longer 30-day window for audit purposes can opt back in via the DPA. Zero Data Retention (ZDR) is available for qualifying enterprise customers — inputs and outputs are not stored beyond what is needed for abuse screening.
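If your audit obligations outlast the 7-day window and you have not opted into 30-day retention, keep your own record client-side. A minimal sketch of that pattern; the wrapper and log path are our own construction, not an Anthropic feature:

```python
import json
import time
import anthropic

client = anthropic.Anthropic()

def audited_message(audit_path: str, **kwargs):
    """Call the Messages API and append the exchange to a local audit log,
    so your record outlives Anthropic's 7-day server-side default."""
    response = client.messages.create(**kwargs)
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "request": kwargs,  # prompts may contain personal data:
                                # apply your own retention and access controls
            "response_id": response.id,
            "output": [b.text for b in response.content if b.type == "text"],
        }) + "\n")
    return response
```

The same pattern matters even more on ZDR tiers, where the local copy becomes the only record.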
Anthropic's commercial defaults are stronger than OpenAI's (7-day vs 30-day default retention). The consumer-tier shift to default-opt-in for training is a real risk that mirrors OpenAI's shadow-use problem. — My read
EU / UK transfer position
Anthropic relies on Standard Contractual Clauses (SCCs) (Module Two and Module Three) for EU transfers, incorporated through the DPA. The DPA explicitly adapts the SCCs for the UK International Data Transfer Addendum (the addendum laid before UK Parliament on 2 February 2022) and includes an addendum for transfers subject to Swiss data protection law. Irish law governs the agreement and disputes resolve in Irish courts.
Anthropic is certified under the EU-US Data Privacy Framework (confirmed active 2026-03 — same DPF reauthorisation caveat as OpenAI).
EU data residency: Anthropic has been adding region-specific deployments via cloud partners; confirm current options for your specific tier before assuming one exists.
Significant warning: when Anthropic models are accessed via Microsoft 365 Copilot (default-enabled for most commercial tenants from 2026-01-07), Anthropic processing is out of scope for the Microsoft EU Data Boundary. EU buyers using Copilot's Anthropic features need to know this and decide actively whether to allow it.
Security documentation
Anthropic's trust center at trust.anthropic.com lists:
- SOC 2 Type I and Type II
- ISO 27001:2022
- ISO/IEC 42001:2023 — issued by Schellman Compliance, accredited by the ANSI National Accreditation Board. Anthropic is one of the few major LLM providers with this certification.
- HIPAA Business Associate Agreement (BAA) — available for qualifying healthcare customers
AI Act role + risk classification
- Role: Anthropic is a provider of general-purpose AI models. GPAI obligations under Articles 51-55 apply to Anthropic.
- Your role as a buyer: deployer, with separate obligations.
- Risk tier: same logic as OpenAI — most uses sit at limited or minimal risk; Annex III triggers high-risk obligations regardless of which model you use.
Anthropic publishes a Responsible Scaling Policy and AI Safety Level (ASL) framework, which is more substantive than most peers. Useful evidence of provider-side governance in your audit file.
DPIA prompts (for your use case)
- Are you using the API/Team/Enterprise tier or claude.ai consumer? Consumer-tier defaults changed in October 2025 — staff accounts may now be in a training-eligible flow by default.
- Are you accessing Claude via Microsoft 365 Copilot? If yes, processing falls outside the EU Data Boundary; EU subjects' data is leaving the boundary unless you actively block it.
- Special-category data: same Article 9 question as OpenAI. Staff can paste anything; UI controls and training matter.
- AI Act Annex III applicability: if your use case is in recruitment, credit, education, law enforcement, migration, or justice, deployer obligations apply.
- API log retention: the 7-day default is short; if you have opted into the 30-day window, document it. If you need ZDR, apply via Anthropic enterprise sales.
Unresolved questions / red flags
- Consumer-tier training default flipped 2025-10-08. Many enterprise DPOs are still operating on the assumption that claude.ai consumer is "no training by default." That is no longer true.
- Microsoft 365 Copilot integration moves Anthropic processing out of the EU Data Boundary for those Copilot tenants. This is not obvious and is being missed.
- EU residency options are evolving. Confirm at each review.
- DPF reauthorisation contested. EU institutional debates ongoing through 2026; could affect transfer mechanism availability mid-cycle.
- Embeddings story has shifted through the Voyage acquisition. If your stack relies on Anthropic embeddings, confirm the current product / term shape.
Related profiles
- OpenAI — same general-purpose LLM category, different defaults
- Microsoft 365 Copilot — embeds Anthropic models out of EU Data Boundary scope
- Perplexity — routes to Anthropic models, applying Anthropic's terms transitively
Sources checked
- https://privacy.claude.com/en/articles/7996862-how-do-i-view-and-sign-your-data-processing-addendum-dpa — corroborated 2026-04-29
- https://trust.anthropic.com/
- Anthropic data retention policy change 2025-09-14 (30 → 7 days)
- Microsoft Foundry documentation on Anthropic as subprocessor and EU Data Boundary scope — 2026-04-29
- Public reports of consumer-tier training default change 2025-10-08
- Anthropic-in-Copilot default-enable announcement 2026-01-07
Need a reviewed note for your specific use case?
For when the public profile isn't enough — your sector is regulated, your procurement gate is real, your use case is unusual. Tell us the situation and we'll come back with a CIPP/E-reviewed Vendor Risk Note (typically £149, depending on scope).
Your context goes only to Michael. We don't share with the vendor or anyone else. Privacy notice.
AI vendor compliance updates
New profiles, regulatory deadline reminders, and the occasional AI vendor red flag. Written by Michael K. Onyekwere, CIPP/E. Free.
We don't share your address. Unsubscribe any time. Privacy notice.
For ongoing AI compliance support, work with Janus DPO-as-a-Service. For other vendors, browse the full index.