TL;DR: The EU AI Act's major obligations take effect in August 2026. For most small businesses, the practical impact is limited — but not zero. This guide tells you which provisions apply to typical SMB operations, what you need to document, what the penalties are, and where it is safe to wait before acting. No legal jargon. Written for people who run businesses, not compliance departments.
The EU AI Act is the world's first comprehensive AI regulation. It entered into force in August 2024 and its obligations are rolling out in phases through 2026 and 2027. If your business operates in the EU — or serves EU customers — parts of it apply to you.
Most of the coverage of the AI Act focuses on large enterprises and AI developers. Very little of it is written for the operations manager at a 50-person company who uses AI tools and wants to understand what they actually need to do.
This guide fills that gap. It is written for small business operators, not compliance departments.
Important disclaimer: This is educational content, not legal advice. For specific compliance questions affecting your business, consult a qualified legal professional familiar with AI regulation.
First: what the AI Act actually regulates
The AI Act is a risk-based framework. It regulates AI systems based on the risk they pose — not based on the technology used. This is important because it means:
- Most AI tools used by small businesses fall into low-risk or minimal-risk categories with minimal obligations.
- A small number of AI applications carry significant obligations regardless of company size.
- "We are a small company" is not a compliance exemption.
The Act categorises AI systems into four tiers:
| Risk Tier | What It Covers | Your Obligations |
|-----------|----------------|------------------|
| Unacceptable (banned) | Social scoring, subliminal manipulation, real-time biometric surveillance | Prohibited entirely |
| High risk | Hiring, credit scoring, healthcare, critical infrastructure | Conformity assessments, documentation, human oversight, registration |
| Limited risk | Chatbots, deepfakes | Transparency: users must know they interact with AI |
| Minimal risk | Spam filters, recommendations, most business automation | None under the Act |
If, like the vast majority of small businesses, you use AI only for internal processes such as document processing, email drafting, scheduling, and reporting, you are in the minimal-risk tier, with no specific compliance obligations under the Act.
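The four tiers can be sketched as a simple lookup. This is an illustration of the table above only, with example use cases chosen for this guide; it is not an official or exhaustive classification tool, and borderline cases need a review against the Act itself.

```python
# Illustrative sketch: maps example AI use cases to the Act's four risk
# tiers as summarised in the table above. Not a legal classification tool.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation",
                     "real-time biometric surveillance"},
    "high": {"cv screening", "credit scoring", "healthcare diagnostics",
             "critical infrastructure control"},
    "limited": {"customer chatbot", "deepfake generation"},
    "minimal": {"spam filtering", "email drafting", "scheduling", "reporting"},
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unknown: review against the Act"

print(risk_tier("cv screening"))   # high
print(risk_tier("email drafting")) # minimal
```

The point the sketch makes is the one in the text: the tier follows from the use case, not from the underlying technology or the size of your company.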
When it gets more complicated: are you a deployer?
The Act distinguishes between providers (companies that develop AI systems) and deployers (companies that use AI systems in their operations). Most small businesses are deployers, not providers.
As a deployer, your obligations are lower than a provider's — but not zero. Key deployer obligations under the high-risk provisions include:
- Ensuring AI systems are used in accordance with the provider's instructions
- Maintaining human oversight of AI decisions
- Keeping logs of high-risk AI system use
- Reporting serious incidents to the relevant national authority
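Two of the deployer duties above, human oversight and log-keeping, can be sketched as a minimal audit record per AI-assisted decision. The field names and the example tool name are assumptions for illustration, not terms prescribed by the Act.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

# Minimal sketch of a deployer-side audit record for a high-risk AI
# system. Field names are illustrative assumptions, not Act terminology.
@dataclass
class AIUsageRecord:
    system_name: str        # which AI tool produced the output
    purpose: str            # what decision it supported
    ai_recommendation: str  # what the system suggested
    human_reviewer: str     # who exercised oversight
    final_decision: str     # what was actually decided
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = AIUsageRecord(
    system_name="cv-screening-tool",  # hypothetical tool name
    purpose="shortlisting for a sales role",
    ai_recommendation="advance candidate",
    human_reviewer="ops.manager@example.com",
    final_decision="advance candidate",
)
print(asdict(entry))  # in practice, append this to durable storage
```

Keeping the AI's recommendation and the human's final decision as separate fields is deliberate: it is the simplest way to show, later, that oversight was real rather than a rubber stamp.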
When do these apply to you? If you deploy an AI system that falls into the high-risk category. For small businesses, the most commonly relevant high-risk categories are:
- Recruitment and HR: AI tools that assist in screening CVs, scheduling interviews, or evaluating candidates
- Credit and financial assessment: AI tools that assist in evaluating the creditworthiness of individuals (the Act's high-risk credit category covers natural persons)
- Customer profiling: AI used to make consequential decisions about individual customers
If you are using AI-assisted hiring tools, credit scoring, or automated customer profiling, you need to review whether those tools are high-risk under the Act and whether your provider has completed the required conformity assessments.
The transparency obligations that affect everyone
Even minimal-risk AI applications carry transparency obligations when they involve interaction with people:
Chatbots and AI assistants: If you deploy a chatbot on your website that interacts with customers, users must be informed they are speaking with an AI — not necessarily in large letters, but clearly and before the interaction.
AI-generated content: Deepfakes or AI-generated synthetic media must be labelled. This applies to marketing content, not just news media.
Emotion recognition: If you use systems that detect emotional states (increasingly common in customer experience platforms), disclosure is required. Note that emotion recognition in workplace and education settings falls under the Act's outright prohibitions, not merely its transparency rules.
For most SMBs, the practical action here is: ensure any customer-facing AI interaction is clearly identified as AI-assisted or AI-generated.
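As a concrete sketch of that practical action for a chatbot, the snippet below prepends a disclosure notice to the opening message of a session, so the user knows they are talking to an AI before the interaction proceeds. The wording, function name, and session logic are illustrative assumptions, not requirements quoted from the Act.

```python
# Illustrative sketch: disclose the AI before the interaction begins.
# The notice text and session handling are assumptions for this example.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def chatbot_reply(bot_answer: str, is_first_message: bool) -> str:
    """Prefix the AI disclosure to the first message of a session."""
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{bot_answer}"
    return bot_answer

print(chatbot_reply("Hi! How can I help?", is_first_message=True))
```

The design point matches the text: the disclosure does not need to dominate the page, but it must appear clearly and before the conversation starts, not buried in a footer the user sees afterwards.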