The state of play #
The European regulatory landscape for AI consists of two pillars:
- GDPR (Reg. EU 2016/679): in force since 2018, regulates personal data protection
- AI Act (Reg. EU 2024/1689): coming into force progressively from 2025, regulates AI systems
The two regulations coexist and overlap. A company that uses AI with personal data must comply with both. It sounds complicated, but in practice the solution is simpler than you think — if you make the right choices.
The real problem isn’t regulatory complexity. It’s that 77% of employees are already using AI (ChatGPT, Copilot, Gemini) without the company having assessed the GDPR implications. Adoption has already happened. Compliance hasn’t.
GDPR: the critical issues with AI #
1. Data transfers to third countries (the main problem) #
When an employee uses ChatGPT, the data entered travels to servers in the United States. After the invalidation of the Privacy Shield (Schrems II ruling, 2020), transferring personal data to the US is problematic.
The Data Privacy Framework (DPF) adopted in 2023 provides a legal basis, but:
- Not all experts consider it stable (a “Schrems III” is likely)
- It doesn’t cover all business use cases
- Standard Contractual Clauses (SCCs) are often insufficient on their own
The safest solution: eliminate the transfer at its root by using AI that runs in the EU or on-premise.
2. Automated decision-making (Art. 22 GDPR) #
Art. 22 GDPR establishes that the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
In practice: if AI makes decisions that impact people — credit assessment, staff selection, medical diagnostics, personalised pricing — specific safeguards are required:
- Human oversight: a human being must be able to intervene in the decision
- Right to contest: the data subject must be able to challenge the decision
- Explainability: it must be possible to explain how the AI reached its decision
3. Right to explanation #
The GDPR (Arts. 13-15) requires that the data subject be informed about the existence of automated decision-making and receive “meaningful information about the logic involved”. With black-box models like GPT-4, this is a serious problem.
With on-premise open source models, traceability is better: you know the model, the input data, the prompt, and the output. With ORCA and MANTA, every interaction is logged with a complete audit trail.
4. Data minimisation vs training data #
The minimisation principle (Art. 5(1)(c) GDPR) requires processing only the data strictly necessary for the purpose. With cloud services like ChatGPT, data entered may be used to train future model versions — processing that goes far beyond the original purpose.
With private AI, data is used only for the specific purpose for which the employee entered it. No secondary training, no improper use.
5. Data processor agreement (DPA) #
When you use a cloud AI service, the provider (e.g., OpenAI) acts as data processor. You need a DPA compliant with Art. 28 GDPR that specifies:
- Purpose and duration of processing
- Types of personal data processed
- Obligations and rights of the controller
- Technical and organisational security measures
- Management of sub-processors
With on-premise AI, there is no external data processor. The company is both controller and operator of the system. Less contractual complexity, fewer risks.
AI Act: how it intersects with GDPR #
The AI Act doesn’t replace GDPR — it sits alongside it. Here are the most relevant intersection points for businesses.
AI Act risk levels #
| Level | Examples | Obligations |
|---|---|---|
| Unacceptable risk | Social scoring, subliminal manipulation | Prohibited |
| High risk | Staff selection, credit, diagnostics, biometric surveillance | Conformity assessment, documentation, human oversight, transparency |
| Limited risk | Chatbots, deepfakes | Transparency obligation (user must know they’re interacting with AI) |
| Minimal risk | Spam filters, AI games | No specific obligations |
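The tiering above can be captured in a simple classification helper for your internal AI inventory. This is an illustrative sketch only, not a legal instrument: the use-case names and their mapping to tiers are assumptions for demonstration, and real classification needs legal review.

```python
from enum import Enum

class RiskLevel(Enum):
    """AI Act risk tiers with the obligations attached to each (summarised)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, documentation, human oversight, transparency"
    LIMITED = "transparency obligation"
    MINIMAL = "no specific obligations"

# Hypothetical mapping of internal use cases to tiers, kept as part of the AI inventory
USE_CASE_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "cv_screening": RiskLevel.HIGH,
    "customer_chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the obligations attached to a use case's risk tier."""
    return USE_CASE_RISK[use_case].value

print(obligations_for("customer_chatbot"))  # transparency obligation
```

Keeping the mapping in one place makes the later checklist steps (inventory, documentation, periodic review) easier to audit.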
What changes for businesses in practice #
AI literacy obligation (already in force since February 2025): all employees who use AI must receive adequate training. It’s not enough to say “use ChatGPT” — structured training on risks, limitations, and responsible use is required.
High-risk system obligations (August 2026): if your company uses AI for significant decisions (HR, credit, healthcare), heavy obligations for documentation, testing, monitoring, and human oversight apply.
Transparency: clients and users must know when they’re interacting with an AI system or when content is generated by AI.
The combined effect of GDPR + AI Act #
A company using ChatGPT to analyse candidates’ CVs must simultaneously comply with:
- GDPR: legal basis for processing, DPIA, privacy notice to data subjects, right to explanation, DPA with OpenAI
- AI Act: system documentation, risk assessment, human oversight, audit trail, staff training
With on-premise private AI, many of these obligations are drastically simplified because data never leaves the company perimeter and control is total.
Practical compliance checklist (10 points) #
Here’s what your company needs to do concretely to use AI in compliance with GDPR and the AI Act.
1. Map AI usage #
Identify all AI systems used in the company — including shadow AI (ChatGPT used by employees with personal accounts). You can’t protect what you don’t know about.
2. Risk classification #
For each AI system, classify the risk level under the AI Act. Most business uses fall under “limited risk” (chatbots, document analysis) or “minimal risk” (filters, search).
3. Legal basis for processing #
Identify the GDPR legal basis for each processing of personal data via AI. The most common: consent, legitimate interest, contractual performance.
4. DPIA (if necessary) #
If the AI processes personal data at scale, profiles individuals, or makes significant automated decisions, carry out a DPIA (Art. 35 GDPR).
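The three triggers named above can be turned into a rough screening check. A minimal sketch, assuming the simplification that any single criterion is enough to warrant a DPIA; the function name and parameters are hypothetical, and the real Art. 35 assessment belongs with your DPO.

```python
def dpia_required(large_scale: bool, profiling: bool,
                  automated_decisions: bool) -> bool:
    """Rough screen for Art. 35 GDPR: a DPIA is likely required if any
    high-risk criterion applies. Illustrative only; consult your DPO."""
    return large_scale or profiling or automated_decisions

# A CV-screening tool that profiles candidates and decides automatically:
print(dpia_required(large_scale=False, profiling=True,
                    automated_decisions=True))  # True
```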
5. Privacy notice to data subjects #
Update the privacy notice to include the use of AI systems, specifying purposes, processing logic, and the right to contest.
6. DPA with providers #
If you use cloud AI, verify you have a DPA compliant with Art. 28 GDPR with every provider. Check sub-processors and clauses on cross-border data transfers.
7. Human oversight #
For every AI use that impacts people, define a human oversight process: who reviews the AI output before it becomes the final decision?
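One way to make that process concrete is to model every AI output as a suggestion that only becomes a decision once a named human signs off. This is a minimal sketch under assumed types and names, not a prescribed implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISuggestion:
    """An AI output that remains a recommendation until a human signs off."""
    prompt: str
    output: str
    reviewer: Optional[str] = None
    approved: bool = False

def finalize(suggestion: AISuggestion, reviewer: str, approve: bool) -> AISuggestion:
    """The AI suggests, the human decides: no suggestion becomes a decision
    without a named reviewer on record."""
    suggestion.reviewer = reviewer
    suggestion.approved = approve
    return suggestion

s = AISuggestion(prompt="Assess candidate X", output="Recommend interview")
s = finalize(s, reviewer="hr.manager@example.com", approve=True)
# The decision is now attributable to a human, which supports both
# Art. 22 GDPR safeguards and AI Act human-oversight obligations.
```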
8. AI literacy #
Train all employees who use AI — from shadow AI risks to best practices for responsible use. Document the training.
9. Audit trail #
Implement a logging system for all AI interactions. Who asked what, when, what response was received, what decision was made.
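The four questions above map directly onto the fields of a log entry. A minimal append-only JSON-lines sketch, with hypothetical function and field names; a production trail would also need tamper protection, access control, and retention rules:

```python
import json
import time
import uuid

def log_interaction(logfile: str, user: str, prompt: str,
                    response: str, decision: str) -> str:
    """Append one AI interaction to a JSON-lines audit log:
    who asked what, when, what came back, and what was decided.
    Returns the entry's unique id for cross-referencing."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "decision": decision,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["id"]

entry_id = log_interaction("audit_trail.jsonl",
                           user="hr.manager@example.com",
                           prompt="Summarise candidate X's CV",
                           response="Summary: ...",
                           decision="Invited to interview")
```

One line per interaction in an append-only file keeps the trail simple to verify and simple to export for an audit.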
10. Periodic review #
Schedule half-yearly reviews of AI compliance: new uses, new regulations, new risks. The landscape evolves rapidly.
How private AI simplifies compliance #
Using private AI on-premise or on EU cloud doesn’t eliminate all obligations — you still need training, documentation, human oversight. But it eliminates the biggest and most costly problems:
| Problem | With public AI (ChatGPT) | With private AI (PRISMA) |
|---|---|---|
| Cross-border data transfer | Critical (US) | Eliminated |
| DPA and sub-processors | Complex, little control | Not needed (on-premise) |
| Data used for training | Concrete risk | Impossible |
| Audit trail | Depends on provider | Complete and under your control |
| Data breach | Large attack surface | Controlled perimeter |
| Explainability | Black box | Known model, traced prompts |
| Compliance cost | High (consultants, external audits) | Lower (fewer risks to manage) |
HTX’s PRISMA integrates compliance by design:
- ORCA: every conversation is logged, data stays on-premise, human oversight is built into the workflow
- MANTA: every generated SQL query is traced and verifiable, with complete input/output logs
- KOI: maximum healthcare compliance, clinical audit trail, integrated medical validation
Myths debunked #
“If I use the Enterprise version of ChatGPT, I’m GDPR compliant” #
Not automatically. The Enterprise version has a more robust DPA and OpenAI states it doesn’t use data for training, but data still transits through US servers. The cross-border data transfer risk remains.
“GDPR prohibits using AI in business” #
False. GDPR doesn’t even mention AI. It regulates the processing of personal data, by any means. AI is perfectly legal if used in compliance with GDPR principles.
“Sanctions only affect large companies” #
The Italian Data Protection Authority has sanctioned companies of all sizes. The OpenAI fine was 15 million euros, but SMEs have also received significant fines for GDPR violations unrelated to AI.
“Employee consent is enough to use ChatGPT” #
Consent in the employment relationship is problematic because it’s not “freely given” (there’s a power imbalance). Better to rely on legitimate interest, but you still need a DPIA and adequate technical measures.
“Private AI is too expensive for compliance” #
The opposite is true. The cost of compliance with public AI (legal consultants, audits, DPAs, sanction risk) often exceeds the cost of private AI. With PRISMA, compliance is built into the price.
Next steps #
- Take the free Assessment — Includes an AI compliance evaluation for your company
- Read the AI Act guide — Deep dive into specific obligations
- Discover ORCA — The GDPR-compliant private ChatGPT
- Contact us — Let’s talk about your AI compliance
HTX — Human Technology eXcellence. Private AI for European businesses. Trieste, Italy.
FAQ #
Is ChatGPT GDPR compliant? #
No, not in its standard configuration. ChatGPT transfers data to OpenAI's servers in the US, where it is subject to the American CLOUD Act. Even the Enterprise version has issues with cross-border data transfers. Italy has already fined OpenAI 15 million euros for GDPR violations.
What do I risk if my employees use ChatGPT with company data? #
As data controller, your company is responsible even for unauthorised AI use by employees (shadow AI). GDPR fines go up to 4% of annual turnover. Additionally, you risk breaching professional secrecy and suffering reputational damage.
Does the AI Act replace GDPR? #
No, the two regulations coexist and overlap. GDPR regulates personal data protection, the AI Act regulates AI systems. A company using AI must comply with both. On-premise private AI simplifies compliance with both simultaneously.
Do I need a DPIA to use AI in my business? #
It depends. If the AI processes personal data at scale, profiles individuals, or makes automated decisions that impact people, a DPIA (Data Protection Impact Assessment) is mandatory under Art. 35 GDPR. For on-premise private AI the risk is typically lower.
How can I use AI while respecting Art. 22 GDPR on automated decisions? #
Art. 22 gives individuals the right not to be subject to fully automated decisions that produce legal or similarly significant effects on them. The key is ensuring human oversight: the AI suggests, the human decides. With HTX's ORCA and MANTA, this principle is built into the system design.
Are open source models safer for GDPR? #
On-premise open source models eliminate the problem of data transfer to third parties at its root, which is the main GDPR risk with cloud services like ChatGPT. There is no external data processor, no cross-border transfer, and data is not used for training. It is the most GDPR-friendly configuration possible.