EU AI Act: What We Need to Know When Building and Using AI

The European Union’s Artificial Intelligence Act (AI Act) is shaping how businesses build and use AI. This landmark law introduces strict obligations – especially documentation – for any organisation deploying AI in the EU. Technical leaders and IT owners (CTOs, CIOs, CEOs of tech-driven firms) must start preparing now. AI compliance is no longer optional. It’s a prerequisite for market access, trust, and competitive advantage. 

What Is the EU AI Act? 

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive AI regulation, adopted in 2024. It applies to any “AI system” offered or used in the EU, whether you’re an AI provider or a company deploying AI, and even if you’re based outside Europe. The Act’s goal is to ensure AI is safe, transparent, and respects fundamental human rights while still encouraging innovation.

It classifies AI systems into four risk categories: 

- Unacceptable Risk AI – uses of AI deemed a serious threat to rights or safety are banned outright (e.g., social scoring of citizens, AI that manipulates behaviour by exploiting people’s vulnerabilities, and most real-time remote biometric identification in public spaces). Organisations must not deploy these – there is no compliance route for forbidden AI.

- High-Risk AI – AI systems with significant implications (e.g., in healthcare, HR hiring, critical infrastructure, law enforcement) are allowed only if they meet a long list of strict requirements before and during deployment. This includes pre-market conformity assessment, extensive documentation, human oversight, ongoing monitoring, and more. 

- Limited Risk AI – AI applications with moderate risk have lighter rules, mainly transparency obligations. For example, an AI chatbot or generative AI model must clearly inform users that they are interacting with AI or that the content is AI-generated. No pre-approval is needed, but users are entitled to disclosure and basic safeguards.

- Minimal or Low Risk AI – the vast majority of AI systems (e.g., spam filters, AI in video games, or productivity tools) fall here. These face no mandatory requirements under the Act beyond existing laws. Adopting voluntary codes of conduct and best practices is encouraged for responsible AI, but not required.

If your organisation builds or deploys AI in the EU, these rules apply—regardless of where you’re based. 
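The four tiers above lend themselves to a simple internal triage structure when you start cataloguing systems. Below is a minimal sketch in Python; the tier names, the example triage table, and the helper function are our illustrations, not the Act's legal definitions (real classification requires legal review against the Regulation's Articles 5–6 and Annex III):

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative mapping of the AI Act's four risk tiers.

    This is a simplification for first-pass internal triage only; the Act's
    own text determines the actual legal classification.
    """
    UNACCEPTABLE = "unacceptable"  # banned outright, no compliance route
    HIGH = "high"                  # allowed only under strict requirements
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no mandatory requirements beyond existing law

# Hypothetical triage table for a first-pass inventory; entries echo the
# examples in the article above.
EXAMPLE_TRIAGE = {
    "social-scoring": RiskTier.UNACCEPTABLE,
    "cv-screening-for-hiring": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

def requires_technical_documentation(tier: RiskTier) -> bool:
    """Extensive technical documentation is mandatory for high-risk systems."""
    return tier is RiskTier.HIGH
```

Even a rough table like this helps surface which systems will need the heavyweight compliance track discussed next.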

Documentation Requirements for High-Risk AI 

One of the most significant obligations – and the focus of “what to document” – comes with high-risk AI systems. The AI Act mandates thorough technical documentation and record-keeping to prove compliance. In other words, if your organisation builds or implements a high-risk AI, you need to maintain a detailed paper trail about that system’s design, purpose, and safeguards. Even for lower-risk AI, some level of documentation or explanatory material is advisable to ensure transparency and accountability. 

For IT owners, compiling this documentation will likely require collaboration across teams: data scientists to describe datasets and models, software engineers to document the system architecture, compliance and legal officers to ensure regulatory requirements are met, and domain experts to articulate intended use and limitations. Start early: assembling a complete Annex IV technical document can be a 6+ month effort for complex AI products.
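Because several teams contribute, it helps to track the documentation as a checklist with named owners. The sketch below paraphrases typical Annex IV topic areas; the exact section titles and the team assignments are our assumptions, so consult the Regulation's text for the authoritative list:

```python
from dataclasses import dataclass

@dataclass
class DocSection:
    """One section of the technical documentation dossier (illustrative)."""
    title: str
    owner: str           # team responsible for drafting (our assumption)
    complete: bool = False

# Paraphrased topic areas, not the Regulation's verbatim headings.
ANNEX_IV_CHECKLIST = [
    DocSection("General description of the system and its intended purpose", "product"),
    DocSection("Design specifications, architecture and development process", "engineering"),
    DocSection("Training data: provenance, labelling and data governance", "data science"),
    DocSection("Risk management measures and known limitations", "compliance"),
    DocSection("Human oversight measures and instructions for use", "product"),
    DocSection("Accuracy, robustness and cybersecurity test results", "engineering"),
    DocSection("Post-market monitoring plan and logging capabilities", "compliance"),
]

def outstanding(checklist: list[DocSection]) -> list[str]:
    """Sections still to be drafted, for tracking the multi-month effort."""
    return [s.title for s in checklist if not s.complete]
```

A shared checklist like this makes the "6+ month effort" visible early, instead of discovering gaps during a conformity assessment.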

What IT Leaders Should Do Now 

Preparation and planning are critical so that your organisation is audit-ready. Here’s a strategic action plan: 

1. Inventory and Categorise Your AI Systems: Identify every AI system in your organisation (including those developed in-house, third-party AI tools, and even experimental AI projects). 

2. Establish AI Governance and Accountability: Treat AI compliance as a formal program, not an ad-hoc task. That means setting up a governance structure. 

3. Implement a Risk Management Framework: For any AI system of significance, start following the risk management practices outlined by the Act now. 

4. Start Preparing Technical Documentation: Don’t wait until 2026 to write your AI system documentation. Begin drafting the required technical documents now for each high-risk (or potentially high-risk) AI in your portfolio. 

5. Strengthen Data and Model Governance: Many compliance requirements hinge on good data governance and model monitoring. Ensure your training data is well-managed and documented. 

6. Address Transparency and UX Now: For AI systems with user interaction, design the needed transparency features and messages. If you have a chatbot or an AI-driven decision tool, build in clear notifications to users as required (e.g., “This email was filtered by AI” or “AI Generated Content” watermark). 

7. Monitor Regulatory Updates and Guidance: The AI Act will be supported by harmonised European standards and guidance from the new European AI Office. Stay tuned to updates on standards for AI quality, risk management, etc. 

8. Upskill Your Team in AI Compliance: Ultimately, your people are the ones who will implement these measures. Training staff – from developers to IT managers – on the AI Act and its implications is crucial. 
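Steps 1 and 6 in particular translate directly into engineering artefacts: an inventory record per AI system, and a disclosure mechanism for user-facing ones. A minimal sketch, with field names and banner wording of our choosing (the Act requires disclosure but does not prescribe exact text):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory (step 1). Fields are illustrative."""
    name: str
    vendor: str          # "in-house" or the third-party supplier
    risk_tier: str       # e.g. "high", "limited", "minimal" after triage
    user_facing: bool    # drives transparency obligations (step 6)

def disclosure_banner(record: AISystemRecord) -> Optional[str]:
    """Step 6: user-facing systems must tell users they are interacting with AI.

    The wording here is an example, not language prescribed by the Act.
    """
    if not record.user_facing:
        return None
    return f"You are interacting with an AI system ({record.name})."

inventory = [
    AISystemRecord("support-chatbot", "in-house", "limited", user_facing=True),
    AISystemRecord("spam-filter", "third-party", "minimal", user_facing=False),
]

for rec in inventory:
    banner = disclosure_banner(rec)
    if banner:
        print(banner)  # shown to end users of user-facing systems
```

Wiring the disclosure into the same registry that holds the risk classification keeps steps 1 and 6 from drifting apart as the portfolio grows.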

Upskilling for Compliance and Competitive Advantage 

The final – and perhaps most underestimated – piece of AI Act readiness is people and skills. Technology leaders should recognise that complying with AI regulation isn’t just about paperwork and tech fixes; it’s about cultivating new knowledge and accountability across the organisation. In fact, the EU AI Act itself effectively makes employee training mandatory: as of February 2025, companies in the EU are required to ensure their staff have “basic knowledge in handling AI systems,” covering how AI works, its risks (bias, security), and safe use practices.

How to get started? 

Consider specialized training programs or certifications for your technical teams on the EU AI Act and AI ethics. Leverage external expertise if needed – for instance, Nephos provides upskilling training-as-a-service to help organisations build these competencies internally. Make completion of such training a priority so that your organisation has not just the policies on paper, but the practical skills to back them up.