Ian Nicholls, CEO of business transformation company Explic8, delves into what the AI Act might mean for an organisation, and offers valuable strategies to prepare your business for its impact.
On 12th July 2024, the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (“EU AI Act”), was published in the EU Official Journal, setting a global precedent as the first horizontal regulatory framework for AI systems.
It came into force across all EU Member States on 1st August 2024, with the enforcement of most provisions expected to commence on 2nd August 2026.
The Act introduces a risk-based regulatory approach, placing AI systems into four categories: unacceptable risk (prohibited), high risk (heavily regulated), limited risk (requiring transparency), and minimal or no risk (free from specific obligations).
For businesses, this two-year preparatory period represents a crucial window to align operations and ensure adherence to the new regulations.
While compliance may seem complex, with the right strategies, you can turn this challenge into an opportunity for innovation and leadership in ethical AI.
Understanding the scope of the AI Act
The first step in preparing for the AI Act is understanding its implications for your organisation.
The regulation imposes strict obligations on high-risk AI systems, such as those used in healthcare, recruitment, education, and law enforcement. These obligations include robust data governance, risk management, transparency, and third-party conformity assessments.
You must map out your business's AI systems and classify them against these risk categories. High-risk systems, in particular, will demand immediate attention to ensure they meet the Act's technical, ethical, and documentation standards.
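To make the mapping exercise concrete, the sketch below shows one way a team might record an AI-system register and tag each entry with a risk tier. It is a minimal illustration in Python; the system names, purposes, and tier assignments are assumptions invented for the example, not classifications drawn from the Act.

```python
# Illustrative only: a hypothetical inventory used to tag each AI system
# with one of the Act's four risk tiers. Entries are invented examples.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"


@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: RiskTier


# Example entries a business might record during scoping.
inventory = [
    AISystem("cv-screening-model", "shortlists job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "answers customer queries", RiskTier.LIMITED),
    AISystem("spam-filter", "filters inbound email", RiskTier.MINIMAL),
]

# High-risk systems are the ones needing immediate attention.
high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
print("Prioritise for compliance:", high_risk)
```

Even a register this simple makes it clear which systems should be worked on first.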
Conducting a compliance audit
To adhere to the AI Act, start with a comprehensive audit of your existing AI systems, data practices, and governance frameworks. This process involves evaluating the quality, sourcing, and transparency of the data used in AI development.
High-risk systems must have clear technical documentation and traceability to meet compliance requirements. By identifying gaps early, you can prioritise areas that need improvement, such as enhancing data quality, reducing algorithmic bias, or increasing system transparency.
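A gap analysis can be captured just as simply. The hypothetical checklist below flags the areas a first audit might surface; the check names are assumptions made for illustration rather than criteria quoted from the regulation.

```python
# Illustrative only: a hypothetical gap-analysis checklist for one AI system.
audit_findings = {
    "training_data_source_documented": True,
    "bias_evaluation_completed": False,
    "technical_documentation_up_to_date": False,
    "decision_logging_enabled": True,
}

# Surface the failed checks so they can be prioritised in a remediation plan.
gaps = [check for check, passed in audit_findings.items() if not passed]
for gap in gaps:
    print(f"Remediation needed: {gap}")
```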
Building a governance framework
Governance is at the heart of AI Act compliance, so your business must establish structures to oversee the ethical use of AI, ensure accountability, and manage risks effectively.
Appointing a dedicated AI compliance officer or setting up an oversight committee can provide the necessary leadership and coordination. Policies should be developed to guide AI system development, address potential risks, and document compliance efforts thoroughly.
Additionally, regular monitoring and evaluation should be integrated into the governance framework to ensure ongoing adherence.
Enhancing transparency and data management
The AI Act mandates transparency and high data standards, particularly for high-risk AI systems. This requires you to ensure your company’s data is representative, relevant, and free from biases.
Clear documentation of how AI systems work and the rationale behind their decisions is essential, not just for compliance but also for building trust with stakeholders.
You should therefore invest in tools to track data lineage, ensure robust audit trails, and make your AI systems interpretable to both regulators and end-users.
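As one illustration of what an audit trail might look like in practice, the sketch below appends a structured record for every automated decision, capturing the inputs, model version, and outcome so the decision can be traced later. The field names and file format are assumptions made for the example, not a prescribed standard.

```python
# Illustrative only: a minimal audit-trail record for each automated decision.
import json
from datetime import datetime, timezone


def log_decision(system_name, model_version, inputs, outcome,
                 logfile="audit_trail.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
    }
    # Append one JSON line per decision so the trail is easy to review.
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example: recording a single screening decision for later review.
log_decision(
    system_name="cv-screening-model",
    model_version="2.3.1",
    inputs={"years_experience": 7, "qualification_match": 0.82},
    outcome="advance_to_interview",
)
```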
Preparing for external assessments
High-risk AI systems will require third-party conformity assessments before deployment to verify that they meet the AI Act’s stringent requirements.
You should begin preparing for these evaluations by developing detailed technical files, conducting internal testing, and addressing any gaps identified during audits.
Building relationships with notified bodies early on can smooth the certification process and prevent delays.
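A lightweight completeness check over the technical file can also help before engaging a notified body. The section list in the sketch below is a simplified, assumed subset used for illustration; it is not the Act's official list of documentation requirements.

```python
# Illustrative only: checking an assumed, simplified set of technical-file
# sections has been prepared before an external assessment.
required_sections = {
    "general_description",
    "development_process",
    "training_data_summary",
    "risk_management_measures",
    "testing_and_validation_results",
    "post_market_monitoring_plan",
}

prepared_sections = {
    "general_description",
    "development_process",
    "training_data_summary",
    "testing_and_validation_results",
}

missing = required_sections - prepared_sections
if missing:
    print("Technical file incomplete, still needed:", sorted(missing))
else:
    print("Technical file covers all listed sections.")
```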
Educating the workforce
Compliance with the AI Act is not just a technical challenge – it requires a cultural shift across your business.
Employees at all levels should understand the regulation’s implications and how they can help ensure compliance.
Tailored training programmes can equip your team with the knowledge and skills they need to navigate ethical and regulatory challenges. This includes educating technical teams on algorithm transparency and compliance officers on risk management.
Turning compliance into a competitive edge
While the AI Act introduces new regulatory burdens, it also offers an opportunity for your business to demonstrate leadership in responsible AI.
By aligning with its principles, you can build trust with your customers, enhance your company’s reputation, and unlock new markets that value ethical and transparent AI.
The countdown to 2nd August 2026 has begun, and the time to act is now. Preparing for the AI Act isn't just about avoiding penalties; it's about embedding resilience and responsibility into your business model.
With careful planning, your business can turn compliance into a foundation for innovation and growth.
Ian Nicholls is CEO of Explic8