CompanyScope
by Janus Compliance

General-purpose AI / LLM API

Anthropic compliance: GDPR, AI Act, DPA, training, transfers

Independent compliance research from Janus Compliance. Reviewed by Michael K. Onyekwere, CIPP/E. Last reviewed 2026-04-29. Not legal advice.


TL;DR. Commercial products (API, Team, Enterprise): contractually no training by default. API log retention dropped from 30 days to 7 days on 2025-09-14 — the strongest default in the LLM market. Consumer products (claude.ai free, Pro, Max): training turned on by default (opt-out) on 2025-10-08, with up to 5-year retention. Anthropic became a Microsoft 365 Copilot subprocessor on 2026-01-07 and is explicitly out of EU Data Boundary scope for that route.

DPO action: map staff use of consumer claude.ai (defaults are no longer protective); decide whether to allow Anthropic models in Copilot tenants; apply for ZDR if data sensitivity warrants.

What the tool does

Anthropic runs Claude — both the consumer-facing chat product (claude.ai) and the API behind it. Buyers will be looking at one of three things: the API (for embedded apps), Claude Team / Enterprise (for staff use), or claude.ai consumer plans. As with OpenAI, the terms, training defaults, and retention defaults differ between commercial and consumer products. Do not conflate them.

Data processed

Special-category likelihood: High for the same reasons as OpenAI — if staff or end users can type freely, sensitive content gets in. DPIA needed for any public-facing or HR-adjacent deployment.

Default geographic processing: Multi-region; primarily US infrastructure.

DPA availability

Anthropic publishes a DPA via the Privacy Center. An updated DPA effective 2026-01-01 is automatically incorporated into the Commercial Terms of Service — when a customer accepts the Commercial Terms, they accept the DPA without a separate signature flow.

This auto-incorporation model is buyer-friendly compared to OpenAI's account-gated DPA flow, but the implication is the same: read the terms before signing the commercial contract. Note that the SCC modules, UK Addendum, and Irish governing-law provisions are stated in the DPA document itself, not in the Help Center pointer article.

Subprocessor list

Anthropic publishes a subprocessor list. Notable entries include the major cloud providers: AWS, Google Cloud, and Microsoft Azure.

The multi-cloud posture (AWS + GCP + Azure) is unusual and gives Anthropic redundancy, but it also means buyers should not assume a single cloud's regional guarantees apply.

Training-on-customer-data position

Commercial products (API, Team, Enterprise): not used for model training, by default and by contract. Anthropic states API inputs and outputs are never used for model training; Team and Enterprise plans likewise do not use customer data for training.

Consumer products (claude.ai free, Pro, Max): default changed 2025-10-08. Consumer plans are now opted in to model training by default, with data potentially retained for up to 5 years for training purposes unless the user opts out. This is a material change that many enterprise buyers have not registered. Staff using personal claude.ai accounts are now in a training-eligible flow by default.

API log retention: As of 2025-09-14, default API log retention dropped from 30 days to 7 days. Organisations needing the longer 30-day window for audit can opt in via DPA. Zero Data Retention (ZDR) is available for qualifying enterprise customers — inputs and outputs not stored beyond what's needed for abuse screening.

My read: Anthropic's commercial defaults are stronger than OpenAI's (7-day vs 30-day default retention), but the consumer-tier shift to default-on training is a real risk that mirrors OpenAI's shadow-use problem.
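For the audit file, the defaults described above can be captured as a small lookup. This is a minimal sketch, not an Anthropic API; the tier names and values are transcribed from this profile and should be re-verified at each review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    trains_on_data: bool  # default training posture for this tier
    retention: str        # default log/content retention
    notes: str

# Defaults as described in this profile (as of the review date above).
ANTHROPIC_DEFAULTS = {
    "api": DataPolicy(False, "7 days",
                      "30-day window opt-in via DPA; ZDR for qualifying enterprise customers"),
    "team": DataPolicy(False, "n/a", "contractual no-training"),
    "enterprise": DataPolicy(False, "n/a", "contractual no-training"),
    "consumer": DataPolicy(True, "up to 5 years",
                           "training on by default since 2025-10-08 unless the user opts out"),
}

def training_risk(tier: str) -> bool:
    """True if staff use of this tier puts data in a training-eligible flow by default."""
    return ANTHROPIC_DEFAULTS[tier].trains_on_data
```

A table like this makes the consumer/commercial split explicit in your DPIA instead of leaving it implicit in prose.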

EU / UK transfer position

Anthropic relies on Standard Contractual Clauses (SCCs) (Module Two and Module Three) for EU transfers, incorporated through the DPA. The DPA explicitly adapts the SCCs for the UK International Data Transfer Addendum (the addendum laid before UK Parliament on 2 February 2022) and includes an addendum for transfers subject to Swiss data protection law. Irish law governs the agreement and disputes resolve in Irish courts.

Anthropic is certified under the EU-US Data Privacy Framework (confirmed active 2026-03 — same DPF reauthorisation caveat as OpenAI).

EU data residency: Anthropic has been adding region-specific deployments via cloud partners; confirm current options for your specific tier before assuming one exists.

Significant warning: when Anthropic models are accessed via Microsoft 365 Copilot (default-enabled for most commercial tenants from 2026-01-07), Anthropic processing is out of scope for the Microsoft EU Data Boundary. EU buyers using Copilot's Anthropic features need to know this and decide actively whether to allow it.

Security documentation

Anthropic's trust center at trust.anthropic.com publicly lists SOC 2 Type 2, ISO 27001, ISO 42001, CSA STAR, HIPAA, and NIST 800-171 coverage across "Claude via Anthropic's API" and "Claude for Enterprise".

AI Act role + risk classification

Anthropic sits in the general-purpose AI model provider role under the AI Act; most buyers reading this profile will be deployers. Anthropic publishes a Responsible Scaling Policy and AI Safety Level (ASL) framework, which is more substantive than most peers' governance documentation. Useful evidence of provider-side governance in your audit file.

DPIA prompts (for your use case)

  1. Are you using the API/Team/Enterprise tier or claude.ai consumer? Consumer-tier defaults changed in October 2025 — staff accounts may now be in a training-eligible flow by default.
  2. Are you accessing Claude via Microsoft 365 Copilot? If yes, processing falls outside the EU Data Boundary; EU subjects' data is leaving the boundary unless you actively block it.
  3. Special-category data: same Article 9 question as OpenAI. Staff can paste anything; UI controls and training matter.
  4. AI Act Annex III applicability: if your use case is in recruitment, credit, education, law enforcement, migration, or justice, deployer obligations apply.
  5. API log retention: the 7-day default is short, but if you've opted into 30-day, document it. If you need ZDR, apply via Anthropic enterprise sales.
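The five prompts above can be run as a quick triage. This is a hypothetical helper, not a legal determination; the questions and flags simply mirror this profile's DPIA prompts.

```python
# Annex III high-risk areas named in prompt 4 of this profile.
ANNEX_III_AREAS = {"recruitment", "credit", "education",
                   "law enforcement", "migration", "justice"}

def dpia_flags(tier: str, via_copilot: bool,
               free_text_input: bool, use_case_area: str) -> list[str]:
    """Return the follow-up actions this profile's DPIA prompts suggest."""
    flags = []
    if tier == "consumer":
        flags.append("consumer defaults changed Oct 2025: check training opt-out")
    if via_copilot:
        flags.append("Copilot route is outside the EU Data Boundary: decide whether to block")
    if free_text_input:
        flags.append("Article 9 risk: special-category data can be pasted; add UI controls and training")
    if use_case_area in ANNEX_III_AREAS:
        flags.append("AI Act Annex III: deployer obligations apply")
    return flags
```

An empty result does not mean no DPIA is needed; it only means none of these five prompts fired.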

Unresolved questions / red flags

Related profiles

Sources checked

<!-- All Phase C residual items resolved (browser agent run 2026-05-02). trust.anthropic.com confirmed publicly displays SOC 2 Type 2, ISO 27001, ISO 42001, CSA Star, HIPAA, NIST 800-171 across "Claude via Anthropic's API" and "Claude for Enterprise". DPA covers SCCs Module 2/3, UK Addendum, Swiss addendum, Irish governing law (verified in the DPA document itself, not the Help Center pointer). DPF active. Only Voyage embeddings integration shape worth a periodic check on refresh. -->

Need a reviewed note for your specific use case?

For when the public profile isn't enough — your sector is regulated, your procurement gate is real, your use case is unusual. Tell us the situation and we'll come back with a CIPP/E-reviewed Vendor Risk Note (typically £149, depending on scope).

Your context goes only to Michael. We don't share with the vendor or anyone else. Privacy notice.

AI vendor compliance updates

New profiles, regulatory deadline reminders, and the occasional AI vendor red flag. Written by Michael K. Onyekwere, CIPP/E. Free.

We don't share your address. Unsubscribe any time. Privacy notice.

For ongoing AI compliance support, work with Janus DPO-as-a-Service. For other vendors, browse the full index.