AI Act: How to Assess the Compliance of Your AI Systems Before August 2026
In just a few years, AI has become widely adopted, driven by advances in deep learning and the emergence of large-scale public language models. However, this rapid uptake also raises very concrete questions about security, responsible use, and the impact on people and on energy consumption.
On 13 June 2024, the European Union adopted its Artificial Intelligence Regulation, better known as the AI Act. This legislation governs the development, placing on the market, and deployment of any artificial intelligence system (AIS) in Europe. Contrary to what one might fear, achieving compliance is not an arduous journey: it is first and foremost an opportunity to better understand and take control of your own products.
What the regulation requires in practice
The AI Act classifies AI systems into four risk levels. Limited risk covers AIS that are not intended to replace human judgement — summarising information, translating text, or generating code subject to human review. High risk applies to medical devices, HR selection tools, and critical infrastructure monitoring systems. Systemic risk concerns large general-purpose AI models such as GPT or Gemini. Finally, unacceptable risk applies to deliberately harmful or manipulative systems, which have been prohibited since February 2025.
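As a purely illustrative sketch, here is how a product team might record these four tiers when taking stock of its AI systems. The tier labels follow the description above; the inventory entries are hypothetical examples, not a legal qualification.

```python
from enum import Enum

class RiskLevel(Enum):
    """Four risk tiers of the AI Act, as summarised above (illustrative, not legal advice)."""
    LIMITED = "limited"            # output remains subject to human review
    HIGH = "high"                  # e.g. medical devices, HR selection, critical infrastructure
    SYSTEMIC = "systemic"          # large general-purpose models
    UNACCEPTABLE = "unacceptable"  # manipulative or harmful systems, prohibited since February 2025

# Hypothetical inventory of a publisher's AI systems and the tier assigned after assessment.
inventory = {
    "report-summariser": RiskLevel.LIMITED,
    "cv-screening-assistant": RiskLevel.HIGH,
}

for name, level in inventory.items():
    print(f"{name}: {level.value}")
```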
Penalties can reach €35 million or 7% of worldwide annual turnover (whichever is higher) for prohibited practices, and up to €15 million or 3% for non-compliance with high-risk obligations.
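For a sense of scale, the short sketch below applies the "whichever is higher" rule to a made-up turnover figure; both the company and its turnover are invented for illustration.

```python
def penalty_ceiling(worldwide_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Maximum fine: the higher of a fixed amount and a share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# Hypothetical company with €2 billion worldwide annual turnover, top tier of fines (€35 M / 7%).
print(f"{penalty_ceiling(2_000_000_000, 35_000_000, 0.07):,.0f} EUR")  # 140,000,000 EUR
```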
Key dates to keep in mind
The deadlines are fast approaching. AIS classified as unacceptable risk have been prohibited since February 2025. General-purpose AI models have been subject to their obligations since August 2025. High-risk systems must be compliant by August 2026, with full application extended to August 2027. Completed conformity assessments must be retained for a minimum of 10 years.
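As a small helper sketch, a team could keep these milestones in a simple table and check which obligations already apply. The dates mirror the paragraph above; the exact days are assumptions added for illustration.

```python
from datetime import date

# Application milestones summarised above (day-level precision assumed for illustration).
milestones = {
    "prohibited practices banned": date(2025, 2, 2),
    "general-purpose AI obligations": date(2025, 8, 2),
    "high-risk systems compliant": date(2026, 8, 2),
    "full application": date(2027, 8, 2),
}

today = date.today()
for obligation, deadline in milestones.items():
    status = "already in force" if deadline <= today else f"applies from {deadline:%B %Y}"
    print(f"{obligation}: {status}")
```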
Compliance as a commercial differentiator
Beyond the legal obligation, conformity assessment is a genuine differentiator. In a market where “AI-powered” products are proliferating, demonstrating that your systems have been rigorously assessed builds client confidence — particularly in sensitive sectors such as healthcare or the public sector. Transparency builds loyalty.
Coexya in practice: assessing Onco Place
Coexya applied this approach to Onco Place, the radiotherapy clinical trials platform developed by its subsidiary Aquilab. The assessment involved a data science engineer, a software developer, and a product owner. Of all the modules analysed, only two were classified as AIS under the AI Act — and both were rated as limited risk, as their outputs are systematically subject to user validation. The assessment was digitally signed via blockchain and is available to clients and regulatory authorities.
Coexya’s expertise
Coexya supports software publishers and IT directors in assessing the compliance of their systems against the AI Act: identifying AIS, qualifying risk levels, documenting obligations, and integrating the process into quality management systems. A methodical, tool-supported approach tailored to the day-to-day realities of product and technical teams.
Want to go further? Download our full white paper to discover the detailed methodology, key regulatory definitions, and a complete account of the Onco Place compliance assessment.
Download the white paper (in French)