RESOURCE ARTICLE

Top 10 operational impacts of the EU AI Act – AI Assurance across the risk categories

This article provides insight into AI assurance across the risk categories in relation to the EU AI Act.


Published: 25 Sept. 2024

This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here, with the other articles in the series listed below.

Previous articles in this series have reflected on the scope and oversight requirements of the EU AI Act. Given the vast uses of AI, as well as the significant potential harm these systems carry, the assurance requirements embedded into the act provide important checks and balances.

While the AI Act does not define AI assurance, the term is increasingly used in the AI ecosystem and is inspired by assurance mechanisms in other industries, such as accounting and product safety. The U.K. government defines assurance as "the process of measuring, evaluating and communicating something about a system or process, documentation, a product or an organisation. In the case of AI, assurance measures, evaluates and communicates the trustworthiness of AI systems." Similar mechanisms in the AI Act span a spectrum of oversight functions, including standards, conformity assessments and audits.

In relation to the AI Act, AI assurance mechanisms establish comprehensive processes and activities to measure and ensure a given AI system or general-purpose AI model adheres to specific obligations and requirements. Assurance is distinct from compliance. While compliance involves meeting set standards and regulations, assurance encompasses a broader and deeper evaluation to build confidence in an AI system's reliability and safety.

Contributors:

Ashley Casovan

Managing Director, AI Governance Center, IAPP

Osman Gazi Güçlütürk

Legal and Regulatory Lead, Public Policy, Holistic AI


Tags:

AI and machine learning, Frameworks and standards, Privacy engineering, Regulatory guidance, Risk management, Strategy and governance, Testing and evaluation, Technology, EU AI Act, AI governance, Privacy