General-purpose AI / LLM API
OpenAI compliance: GDPR, AI Act, DPA, training, transfers
Independent compliance research from Janus Compliance. Reviewed by Michael K. Onyekwere, CIPP/E. Last reviewed 2026-04-29. Not legal advice.
TL;DR. API and ChatGPT business products: contractually no training by default since 2023-03-01. ChatGPT Plus consumer: trains on conversations unless the user opts out — the dominant unmanaged risk for any organisation with knowledge workers. Default API processing is not EU-resident; enterprise EU residency is configurable but not on by default. ZDR is approval-gated through sales. ISO 42001 certified — the strongest AI Act audit signal in the LLM market.
DPO action: map staff use of consumer ChatGPT, configure EU residency for any EU-subject processing, apply for ZDR if data sensitivity warrants.
What the tool does
OpenAI runs ChatGPT (consumer and business products) and the API behind it. Most enterprise buyers will be looking at one of three things: the API (for building your own apps), ChatGPT Team / Business / Enterprise (for staff use), or ChatGPT Edu (for education). The terms, training defaults, and data residency story are different for each — do not assume an answer for one applies to the others.
Data processed
You will typically pass through some or all of:
- The text the user types (which can include any personal data your staff or product feeds in)
- Document content (if you upload files for analysis)
- Voice input (Whisper / Realtime API)
- Image content (GPT-4o vision)
- Embeddings of customer or proprietary data (RAG patterns)
Special-category likelihood: High if your use case touches health, legal, HR, or any sector where staff might paste sensitive content. Article 9 (UK GDPR) processing is a real risk and most buyers underestimate how often it happens in practice. A DPIA is rarely optional.
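Because the Article 9 risk turns on what staff actually paste in, a pre-send review gate is one practical safeguard. A minimal sketch: the hint patterns, the `flag_special_category` helper, and the blocking policy are all illustrative assumptions, not an OpenAI feature — a real deployment would use a proper PII/PHI detection library plus human review.

```python
import re

# Hypothetical pre-send filter. Patterns are illustrative assumptions only;
# real detection needs a dedicated PII/PHI library and human review.
SPECIAL_CATEGORY_HINTS = [
    r"\b(diagnosis|prescription|medical record)\b",  # health data
    r"\b(conviction|criminal record)\b",             # criminal-offence data
    r"\b(religion|religious belief)\b",              # religious beliefs
]

def flag_special_category(prompt: str) -> list[str]:
    """Return the hint patterns matched in a prompt, for pre-send review."""
    return [p for p in SPECIAL_CATEGORY_HINTS
            if re.search(p, prompt, re.IGNORECASE)]

hits = flag_special_category("Patient diagnosis attached, see medical record.")
assert hits  # this prompt would be routed to review, not sent to the API
```

The point is not the regexes — it is that the control sits on your side of the API boundary, which is exactly what a DPIA for this vendor needs to document.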
Default geographic processing: Multi-region; primarily US infrastructure. EU residency is available on enterprise tiers but is not the default — it must be configured.
DPA availability
OpenAI publishes a Data Processing Addendum that customers can sign through their account portal.
- Sign-up requirement: account-gated; signing the DPA requires an OpenAI account and (for full enterprise terms) sales contact
- The Commercial Terms incorporate the DPA for paid API and ChatGPT business products
If a buyer is using ChatGPT free/Plus consumer tier, the consumer terms apply, which are not equivalent to a DPA — this is an extremely common mistake in SME deployments.
Subprocessor list
OpenAI publishes a subprocessor list at https://openai.com/policies/sub-processor-list/. Last updated 2026-02-11. Notable subprocessors:
- Microsoft Corporation — cloud infrastructure for API, ChatGPT Enterprise, ChatGPT Edu, ChatGPT Business (effectively Microsoft Azure, though the list names the corporate entity)
- Cloudflare — edge / CDN
- Various support tooling (CRM, analytics, customer support)
The Microsoft / Azure dependency is significant: OpenAI's processing inherits Azure's regional commitments, which means EU buyers should pay attention to which Azure regions OpenAI's enterprise tier supports. Do not assume EU residency without confirming with OpenAI sales.
Training-on-customer-data position
API and business products: not used for training by default. OpenAI states (verbatim from openai.com/business-data/):
"By default, we do not use data from ChatGPT Enterprise, ChatGPT Business, ChatGPT Edu, ChatGPT for Healthcare, ChatGPT for Teachers, or our API platform — including inputs or outputs — for training or improving our models."
This default has applied to API customers since 2023-03-01.
Consumer ChatGPT (free / Plus): conversations may be used to improve models unless the user opts out via settings. Most enterprises have staff using ChatGPT Plus on personal accounts — assume this is happening unless you've actively prevented it.
Default abuse monitoring retention: 30 days for API content, even when not used for training. Eligible enterprise customers can apply for Zero Data Retention (ZDR) via the OpenAI sales team, which removes the abuse-monitoring retention. ZDR is opt-in and approval-gated, not automatic.
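The defaults above reduce to a small lookup that is worth encoding in your DPIA working papers. The values below are taken from this profile; re-verify them against OpenAI's current documentation before relying on them, and treat the unstated entries as gaps, not zeros.

```python
# Training and retention defaults as stated in this profile (re-verify
# against OpenAI's current docs before use). None = not stated here.
DEFAULTS = {
    "api":              {"trains_by_default": False, "abuse_retention_days": 30},
    "chatgpt_business": {"trains_by_default": False, "abuse_retention_days": None},
    "chatgpt_consumer": {"trains_by_default": True,  "abuse_retention_days": None},
}

def retention_window(tier: str, zdr_approved: bool = False):
    """Effective abuse-monitoring retention in days; approved ZDR removes it."""
    base = DEFAULTS[tier]["abuse_retention_days"]
    return 0 if (zdr_approved and base is not None) else base
```

Note the shape of the ZDR branch: until approval comes back from sales, the 30-day window is the number your retention schedule must carry.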
The no-training-by-default position is solid for paid API/business tiers, but the 30-day abuse-monitoring retention catches buyers off guard. Plan for it explicitly in your DPIA. — My read
EU / UK transfer position
OpenAI relies on Standard Contractual Clauses (SCCs) for EU transfers, supplemented by the UK International Data Transfer Addendum for UK transfers. OpenAI is certified under the EU-US Data Privacy Framework (active as of 2026-03 — confirm current status on each review since DPF reauthorisation is contested through 2026).
EU residency option: enterprise tier offers EU data residency for ChatGPT Enterprise; API EU residency is also available but requires configuration. Default API processing is not EU-resident — this is the most-missed compliance fact in our market.
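In code, the residency decision surfaces as which base endpoint your integration targets. A hedged sketch: the EU regional host below is an assumption to confirm with OpenAI (residency is a project-level configuration, not just a URL), and with the official `openai` Python SDK the chosen value would be passed as `base_url=` when constructing the client.

```python
# Endpoints per this profile's residency discussion. The EU host is an
# assumed regional endpoint — confirm with OpenAI before relying on it.
# Default processing via api.openai.com is NOT EU-resident.
ENDPOINTS = {
    "default": "https://api.openai.com/v1",     # multi-region, primarily US
    "eu":      "https://eu.api.openai.com/v1",  # assumed; needs an EU-residency project
}

def base_url(eu_subjects: bool) -> str:
    """Pick the endpoint: EU data subjects require the EU-resident configuration."""
    return ENDPOINTS["eu"] if eu_subjects else ENDPOINTS["default"]
```

Hard-coding this choice at client construction time, rather than leaving it to per-call discretion, is what turns "EU residency is configurable" into "EU residency is configured".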
Security documentation
OpenAI's trust center at trust.openai.com lists:
- SOC 2 Type 2 — covering all products. Report available behind registration gate.
- ISO 27001:2022 — certificate available for public viewing
- ISO 27017:2015, 27018:2019, 27701:2019 — cloud security, cloud privacy, privacy information management
- ISO 42001:2023 — AI management systems, increasingly relevant for AI Act buyers
- CSA STAR
- PCI DSS v4.0.1
- FedRAMP — listed (US government)
The ISO 42001 certification is unusual at this scale and a meaningful signal for AI Act audit defensibility.
AI Act role + risk classification
- Role: OpenAI is a provider of general-purpose AI models (Articles 51–55 GPAI obligations apply to OpenAI as the model maker)
- Your role as a buyer: you are a deployer of OpenAI's models when you build a product on top of the API. Your obligations differ from theirs
- Risk tier for typical SME use cases: most chatbot, summarisation, and document-processing uses sit in limited risk (transparency obligations) or minimal risk. High-risk (Annex III) is triggered if the use case touches employment decisions, creditworthiness, education access, or other Annex III categories — at which point full conformity assessment applies, regardless of who built the underlying model.
OpenAI publishes an AI Act readiness position, but it does not relieve your deployer obligations.
DPIA prompts (for your use case)
Answer these before deploying OpenAI in production:
- Are users likely to enter Article 9 special-category data (health, biometric, criminal, religion, etc.) into the prompt? If yes, your lawful basis and safeguards must cover that — UI controls and staff training matter.
- Are you using the API or a consumer ChatGPT account? If staff are using personal ChatGPT Plus, conversations may be retained and used for training. Your data governance must address shadow AI use.
- Is your use case in AI Act Annex III (recruitment, credit, education, law enforcement, migration, justice, critical infrastructure)? If yes, you are a deployer of a high-risk system regardless of the underlying model. Conformity assessment, fundamental rights impact assessment, registration in the EU AI database — all apply.
- Have you configured EU residency if you have EU subjects? Default API is not EU-resident.
- Have you applied for Zero Data Retention if your data sensitivity warrants it? The 30-day default abuse-monitoring window is a real retention period.
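The gating questions above can be sketched as a deploy-readiness check. The field names and pass/fail logic are illustrative, not an official schema — the value is forcing each question to a recorded yes/no before production.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Illustrative fields mirroring the DPIA prompts above; not an official schema.
    special_category_data: bool    # Article 9 data likely in prompts?
    consumer_chatgpt_in_use: bool  # staff on personal Plus accounts?
    annex_iii: bool                # AI Act high-risk category?
    eu_subjects: bool
    eu_residency_configured: bool
    zdr_approved: bool

def blockers(u: UseCase) -> list[str]:
    """Return the unresolved gates before production deployment."""
    out = []
    if u.consumer_chatgpt_in_use:
        out.append("shadow consumer ChatGPT use: consumer terms, possible training")
    if u.eu_subjects and not u.eu_residency_configured:
        out.append("EU subjects on default (non-EU-resident) processing")
    if u.annex_iii:
        out.append("Annex III: conformity assessment, FRIA, EU database registration")
    if u.special_category_data and not u.zdr_approved:
        out.append("special-category data within 30-day abuse-monitoring retention")
    return out
```

An empty `blockers()` list is not DPIA sign-off, but a non-empty one is a clear "not yet".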
Unresolved questions / red flags
- Consumer-tier shadow use is the biggest unmanaged risk. Most SMEs have staff using ChatGPT Plus on personal accounts, which sit under consumer terms. The compliance posture of paid API / Enterprise is irrelevant to that traffic.
- EU residency is not the default. Buyers frequently assume "OpenAI Enterprise" implies EU processing; it does not unless configured.
- Zero Data Retention is approval-gated. The sales process can be slow. Plan timing.
- DPF reauthorisation contested. EU institutional debates are ongoing through 2026; a CJEU appeal on the framework's adequacy was filed October 2025 with a ruling expected late 2026 at earliest. DPF certification could be invalidated mid-cycle.
Related profiles
- Anthropic — same general-purpose LLM category, different defaults
- Microsoft 365 Copilot — embeds OpenAI models with different EU Data Boundary commitments
- Perplexity — routes to OpenAI models, applying OpenAI's terms transitively
Sources checked
- https://trust.openai.com/ — checked 2026-04-29 (SOC 2, ISO 27001:2022, ISO 27017/27018/27701, ISO 42001, FedRAMP, PCI DSS)
- https://openai.com/policies/sub-processor-list/ — last updated 2026-02-11
- https://openai.com/policies/data-processing-addendum/
- https://openai.com/enterprise-privacy/
- https://openai.com/business-data/ — OpenAI API data usage and ZDR documentation