What is the AI Act and why it matters for your SME #
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive law governing artificial intelligence. It is not just about Big Tech: it applies to anyone who develops or uses AI systems in the European Union, from startups to large hospitals.
If your company uses ChatGPT, an AI assistant, an automated classification system or a customer-facing chatbot, the AI Act affects you.
Why you cannot ignore it #
- Penalties reach up to EUR 35 million or 7 % of global annual turnover, whichever is higher
- The AI literacy obligation has been in force since February 2025
- Full obligations for high-risk systems take effect in August 2026
- Your customers and partners will start asking for evidence of AI Act compliance
Timeline: what changes and when #
The AI Act applies gradually. Here are the dates that matter for your SME:
Already in force (February 2025) #
Ban on unacceptable AI practices:
- Subliminal manipulation that causes harm
- Exploitation of vulnerabilities (age, disability)
- Social scoring by public authorities
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
AI literacy obligation (art. 4): All organisations must ensure that personnel who use or oversee AI systems have a “sufficient level of AI literacy”. This applies to all companies, regardless of size.
In practice: if your employees use ChatGPT, Copilot or any other AI tool, you must demonstrate they have received adequate training.
August 2025: general-purpose models #
Providers of general-purpose AI models (GPAI) — such as GPT-4, LLaMA, Mistral — must:
- Document the training process
- Comply with copyright rules
- Publish a summary of training data
This indirectly impacts SMEs: the models you use must be compliant. Using documented open-source models (like those powering ORCA) simplifies verification.
August 2026: full obligations for high-risk systems #
This is the critical deadline. AI systems classified as high-risk must meet stringent requirements.
How risk classification works #
The AI Act classifies AI systems into four levels. The category depends on the use, not on the tool itself.
Unacceptable risk (prohibited) #
Completely banned practices:
- Subliminal behavioural manipulation
- Government social scoring
- Predictive policing based solely on profiling
High risk #
AI systems that affect fundamental rights of individuals:
| Area | Examples |
|---|---|
| Recruitment | Automated CV screening, candidate ranking |
| Credit | Creditworthiness assessment |
| Healthcare | Diagnostic support, automated triage |
| Education | Student assessment, admissions |
| Critical infrastructure | Power grid management, transport |
| Justice | Judicial decision-support systems |
Obligations for high-risk systems:
- Risk management system
- Training data governance
- Complete technical documentation
- Automatic event recording (logging)
- Transparency and user information
- Human oversight
- Accuracy, robustness and cybersecurity
- Registration in the EU database
Limited risk #
Systems with transparency obligations:
- Chatbots: the user must know they are interacting with an AI
- Deepfakes: must be labelled
- AI-generated content: must be identifiable
Minimal risk #
The majority of AI systems fall into this category and carry no specific obligations:
- AI for email drafts
- Machine translation
- Spell-checking
- Brainstorming and idea generation
What your SME needs to do now #
1. Map the AI tools in use #
Carry out an inventory of all AI tools used in your company — including unauthorised ones (shadow AI). For each tool, document:
- Who uses it and for what purpose
- What data is processed
- Whether the decisions affect natural persons
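The inventory above can be kept as structured data rather than scattered notes. A minimal sketch in Python (the record fields mirror the list above; the example entry and tool names are illustrative, not a recommendation):

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in the company AI inventory."""
    tool: str                      # e.g. "ChatGPT"
    users: list                    # who uses it
    purpose: str                   # what it is used for
    data_processed: list           # categories of data it touches
    affects_natural_persons: bool  # do its outputs affect individuals?

# Hypothetical example entry
inventory = [
    AIToolRecord(
        tool="ChatGPT",
        users=["HR team"],
        purpose="Drafting job descriptions",
        data_processed=["job requirements"],
        affects_natural_persons=False,
    ),
]

# Entries whose outputs affect individuals need a closer risk review (step 2)
needs_review = [r.tool for r in inventory if r.affects_natural_persons]
```

Even a spreadsheet works; the point is that every tool, authorised or not, gets a row with these fields.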
2. Classify each use by risk level #
The same tool can fall into different categories depending on the use:
| ChatGPT use | Risk level |
|---|---|
| Drafting emails | Minimal |
| Summarising internal documents | Limited (transparency obligation) |
| Selecting candidates from a pool | High risk |
| Classifying patients by priority | High risk |
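The use-to-risk mapping in the table above can be expressed as a simple lookup. A sketch (the mapping is illustrative and is not a legal determination; unknown uses must be assessed individually):

```python
# Illustrative mapping from use case to AI Act risk level (not legal advice)
RISK_BY_USE = {
    "drafting emails": "minimal",
    "summarising internal documents": "limited",
    "selecting candidates from a pool": "high",
    "classifying patients by priority": "high",
}

def classify_use(use: str) -> str:
    """Return the risk level for a known use case.

    Uses not in the mapping are flagged for manual assessment
    rather than silently defaulting to a level.
    """
    return RISK_BY_USE.get(use.strip().lower(), "unclassified - assess manually")
```

Note that the key is the *use*, not the tool: the same `ChatGPT` entry can appear several times with different risk levels.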
3. Start AI literacy training #
The obligation is already in force. Training must cover:
- What AI is and how it works (basic concepts)
- Risks and limitations of AI tools
- Company policy on AI use
- GDPR and AI Act obligations relevant to the role
4. Write a company AI policy #
The document should define:
- Which AI tools are authorised
- What data may be processed with AI
- Who oversees AI use
- How to report problems or incidents
- Procedures for high-risk uses
5. Assess your AI infrastructure #
If you use ChatGPT or other US cloud services to process sensitive data, you face a double problem: GDPR (extra-EU transfers) and AI Act (lack of control). A private AI solution solves both.
How private AI simplifies AI Act compliance #
Private AI — on-premise or in a European cloud — is not just about privacy. It is the tool that makes AI Act compliance manageable for SMEs.
GDPR + AI Act: two problems, one solution #
| Requirement | ChatGPT (US cloud) | Private AI (on-premise/EU) |
|---|---|---|
| Extra-EU data transfer | Yes (risk) | No |
| Complete audit trail | Partial | Yes |
| Human oversight | Limited | Built-in |
| Technical documentation | Not available | Under your control |
| Data governance | Delegated to OpenAI | Under your control |
| AI literacy | Your responsibility | Supported by provider |
HTX solutions for AI Act compliance #
ORCA — the private enterprise chatbot. Like ChatGPT, but your data stays in Europe. GDPR and AI Act compliant by design. For minimal and limited-risk uses.
KOI — AI classification for healthcare. Decision support with built-in human oversight, complete audit trail. Designed for high-risk clinical uses. Currently Research Use Only (RUO), medical device planned for 2027.
MANTA — query databases in natural language. SQL queries validated and sanitised, no unauthorised access. For compliant business intelligence.
All run on PRISMA, the HTX private AI infrastructure: on-premise or European cloud, end-to-end encryption, zero data leakage.
Regulatory sandboxes: an opportunity for SMEs #
The AI Act provides for regulatory sandboxes (art. 57): controlled environments where SMEs can test innovative AI systems under the supervision of authorities, without risking penalties during experimentation.
Every EU Member State must establish at least one sandbox by August 2026. SMEs have priority access.
In Italy, the Data Protection Authority has already launched consultations to define the sandboxes. For SMEs that want to innovate with AI without risk, this is an opportunity to monitor.
Penalties: what non-compliance costs #
AI Act penalties are proportionate to the severity of the violation and the size of the company:
| Type of violation | Maximum penalty |
|---|---|
| Prohibited practices (art. 5) | EUR 35 million or 7 % of global turnover |
| Non-compliant high-risk systems | EUR 15 million or 3 % of turnover |
| Incorrect information to authorities | EUR 7.5 million or 1 % of turnover |
For SMEs and startups, the lower of the two amounts (the fixed figure or the percentage of turnover) applies. Even so, the figures are significant for a European SME.
The good news: authorities have stated that the initial approach will focus on guidance and support, not immediate penalties. But that does not mean you can wait.
AI Act checklist for SMEs #
- AI inventory completed: all AI tools mapped
- Risk classification: each use classified (minimal/limited/high)
- AI literacy training: staff trained (mandatory since Feb 2025)
- Company AI policy: document written and communicated
- GDPR assessment: extra-EU transfers identified and managed
- Infrastructure: private AI solution evaluated for sensitive uses
- Sandbox: regulatory sandbox opportunity monitored
- Calendar: August 2026 deadlines planned
Want to learn more? #
If your SME uses AI tools and you want to understand how to comply with the AI Act without stalling innovation, get in touch. We help you map risks, choose the right infrastructure and train your team.
This article was written by the HTX team — Human Technology eXcellence. We design private artificial intelligence systems for healthcare and industry, from our datacenter in Trieste. The information in this article is for informational purposes and does not constitute legal advice.
Frequently asked questions #
Does the AI Act apply to SMEs?
Yes. The AI Act applies to all organisations that develop or use AI systems in the EU, regardless of size. SMEs benefit from regulatory sandboxes and proportionate penalties, but the baseline obligations — such as AI literacy — are identical.
What are the main AI Act deadlines?
February 2025: ban on unacceptable practices and AI literacy obligation. August 2025: obligations for general-purpose AI models (GPAI). August 2026: full obligations for high-risk systems, including transparency, human oversight and impact assessment.
Does the AI Act also apply to companies using ChatGPT?
Yes. The AI Act regulates both providers and deployers of AI systems. If you use ChatGPT to make decisions that affect people (recruitment, credit scoring, diagnostics), the use falls into the high-risk category with specific obligations.
What does an SME risk if it does not comply with the AI Act?
Penalties range from EUR 7.5 million (incorrect information) to EUR 35 million (prohibited practices). For SMEs the lower of the fixed figure and the turnover percentage applies, but even that can be significant.
How does private AI simplify AI Act compliance?
Private AI (on-premise or EU cloud) eliminates extra-EU data transfers, guarantees a complete audit trail, facilitates human oversight and simplifies the documentation required by the AI Act. With solutions like ORCA from HTX, compliance is built into the design.