High-Risk AI System
High-risk AI systems are artificial intelligence applications classified under the EU AI Act (Regulation (EU) 2024/1689) as posing significant risks to the health, safety, or fundamental rights of persons. Before deployment, they are subject to strict conformity assessment, transparency, human-oversight, and other regulatory obligations.
The EU AI Act, which entered into force in August 2024, with most high-risk obligations applying from 2 August 2026, classifies AI systems into risk categories. High-risk designation applies to AI used in critical infrastructure, education and vocational training, employment, access to essential services, law enforcement, migration management, and the administration of justice. Crucially for financial services, AI systems used to evaluate the creditworthiness of natural persons or establish their credit score, and to carry out risk assessment and pricing in life and health insurance, are classified as high-risk.
High-risk AI systems must undergo conformity assessments, maintain technical documentation and automatically generated logs, implement human oversight mechanisms, meet accuracy and robustness requirements, and provide transparency to users about AI-driven decisions. Article 86 gives persons affected by decisions of high-risk AI systems the right to a clear and meaningful explanation of the role the AI system played in the decision. Providers of high-risk AI systems must register them in an EU database and comply with post-market monitoring obligations, with penalties of up to 15 million euros or 3% of worldwide annual turnover, whichever is higher, for non-compliance.
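The logging and human-oversight obligations above can be sketched in code. This is a purely illustrative example, not a real compliance library: the names `CreditDecision` and `decide_with_oversight`, the score threshold, and the rule that adverse decisions are routed to a human reviewer are all hypothetical assumptions chosen for the sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CreditDecision:
    """Hypothetical record of one automated credit-scoring decision."""
    applicant_id: str
    score: float
    approved: bool
    timestamp: str
    needs_human_review: bool
    audit_log: list = field(default_factory=list)

def decide_with_oversight(applicant_id: str, score: float,
                          approve_threshold: float = 0.7) -> CreditDecision:
    """Automated decision with an audit log entry and a human-review flag.

    Illustrative only: routes adverse outcomes to a human reviewer,
    loosely mirroring the human-oversight and explanation obligations.
    """
    approved = score >= approve_threshold
    decision = CreditDecision(
        applicant_id=applicant_id,
        score=score,
        approved=approved,
        timestamp=datetime.now(timezone.utc).isoformat(),
        # Assumed policy: any rejection is flagged for human review.
        needs_human_review=not approved,
    )
    # Retain a log entry so the decision can be audited and explained later.
    decision.audit_log.append(
        f"{decision.timestamp} applicant={applicant_id} "
        f"score={score:.2f} approved={approved}"
    )
    return decision
```

In practice a provider's oversight design would be far richer (reviewer workflows, retention periods, explanation templates), but the pattern of logging every automated decision and flagging those that need human intervention is the core idea.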