HUDERIA - risk and impact assessment of AI systems
The risk and impact assessment of AI systems from the point of view of human rights, democracy and the rule of law (“the HUDERIA”) is a guidance document that provides a structured approach to risk and impact assessment for AI systems, specifically tailored to the protection and promotion of human rights, democracy and the rule of law.
It is intended to be used by both public and private actors and to play a unique and critical role at the intersection of international human rights standards and existing technical frameworks on risk management in the AI context.
The HUDERIA is a standalone, non-legally binding guidance document. Parties to the Framework Convention have the flexibility to use or adapt it, in whole or in part, to develop new approaches to risk assessment or to refine existing ones, in accordance with their applicable laws. However, Parties must fully comply with their obligations under the Framework Convention, including the baseline standards for risk and impact management outlined in Chapter V.
Objectives of the HUDERIA
The HUDERIA builds on established concepts and terminology for assessing human rights risks, including scale, scope, probability, and reversibility, providing guidance tailored to the complexities of the AI lifecycle. Its objectives are to guide risk management efforts related to human rights, democracy, and the rule of law, and to offer a flexible methodology for identifying, assessing, and mitigating risks across diverse AI applications.
General and specific guidance
The HUDERIA combines general guidance with flexibility for adaptation. At the general level (the HUDERIA Methodology), it presents high-level concepts, processes, and elements for assessing the risks and impacts of AI systems on human rights, democracy, and the rule of law. At a more specific level (the HUDERIA Model), it offers supporting materials such as tools and scalable recommendations to aid implementation and serve as a resource for broader risk management approaches.
On 28 November 2024, the Council of Europe's Committee on Artificial Intelligence (CAI) adopted the HUDERIA Methodology and plans to develop the HUDERIA Model in 2025.
Flexible and adaptable approach
Both the HUDERIA Methodology and HUDERIA Model provide flexibility for adaptation to diverse contexts, needs, and capacities. They establish goals, principles, and objectives while allowing discretion in how to achieve them, offering a variety of policy and governance options that can be tailored to specific circumstances.
Four elements of the HUDERIA
1. Context-Based Risk Analysis (COBRA)
The COBRA comprises three key steps:
- The first step identifies key risk factors, that is, specific characteristics within an AI system's lifecycle context that heighten the likelihood of adverse impacts on human rights, democracy, and the rule of law. These risk factors are grouped into three categories: the system's application context, its design and development context, and its deployment context.
- The second step analyzes these factors to map potential adverse impacts on human rights, democracy, and the rule of law.
- Building on this information, the third step, triage, identifies and prioritizes systems posing significant risks, ensuring that the HUDERIA Methodology remains proportionate and not overly burdensome for minimal- or low-risk AI systems. Triage also supports informed decision-making on whether the benefits of building or deploying an AI system outweigh its risks, particularly regarding potential impacts on human rights, democracy, and the rule of law (an illustrative sketch of such a triage step is given after this list).
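The HUDERIA Methodology does not prescribe any scoring scheme, threshold, or tooling for this triage. The sketch below is purely illustrative: the categories mirror the three COBRA contexts named above, while the numeric ratings, field names, and threshold are assumptions made only for the example.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: the HUDERIA Methodology does not prescribe
# numeric scores, thresholds, or any tooling. The categories mirror the three
# COBRA contexts (application, design and development, deployment).

@dataclass
class RiskFactor:
    description: str
    category: str   # "application", "design_development", or "deployment"
    rating: int     # assumed scale: 0 (negligible) to 3 (strongly heightens risk)

@dataclass
class CobraAnalysis:
    system_name: str
    risk_factors: list[RiskFactor] = field(default_factory=list)

    def overall_rating(self) -> int:
        """Crude aggregate used only to order systems for triage."""
        return sum(f.rating for f in self.risk_factors)

def triage(analyses: list[CobraAnalysis], threshold: int = 4) -> list[CobraAnalysis]:
    """Return the systems whose aggregate rating suggests a full Risk and
    Impact Assessment; lower-rated systems receive lighter-touch review,
    keeping the process proportionate."""
    flagged = [a for a in analyses if a.overall_rating() >= threshold]
    return sorted(flagged, key=lambda a: a.overall_rating(), reverse=True)

# Example: a CV-screening system with two salient risk factors.
hiring = CobraAnalysis("cv-screening", [
    RiskFactor("affects access to employment", "application", 3),
    RiskFactor("trained on historical hiring data", "design_development", 2),
])
print([a.system_name for a in triage([hiring])])  # ['cv-screening']
```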
2. Stakeholder Engagement Process (SEP)
The Stakeholder Engagement Process enhances the Risk and Impact Assessment by incorporating the perspectives of potentially affected individuals identified during the COBRA stage. Tailored to the identified risk factors and potential impacts, stakeholder engagement can take various forms and levels of participation. This process not only improves the quality of risk analysis but also fosters transparency, builds trust, and enhances the usability and performance of the AI system.
3. Risk and Impact Assessment (RIA)
The Risk and Impact Assessment provides a detailed evaluation of the potential and actual impacts of AI system activities on human rights, democracy, and the rule of law, focusing particularly on systems posing significant risks identified during the COBRA triage. It involves re-examining, contextualizing, and expanding upon potential harms, while assessing key risk variables such as scale, scope, reversibility, and likelihood to prioritize and manage risks effectively. Building on the COBRA analysis and potential SEP insights, this step ensures a comprehensive understanding of the risks to inform mitigation and governance strategies.
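Neither the Framework Convention nor the HUDERIA Methodology fixes a formula for combining scale, scope, reversibility, and likelihood. The sketch below merely illustrates one convention sometimes used in human rights due diligence, in which severity is driven by the gravest of the three impact dimensions and then weighed against likelihood; the 1-4 ratings, the use of the maximum, and all names are assumptions made for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration: the HUDERIA Methodology names scale, scope,
# reversibility, and likelihood as risk variables but does not prescribe
# numeric scales or a combination rule. The 1-4 ratings and the use of the
# maximum for severity are assumptions borrowed from common human rights
# due-diligence practice.

@dataclass
class ImpactAssessment:
    harm: str
    scale: int          # gravity of the harm: 1 (minor) to 4 (severe)
    scope: int          # share of people affected: 1 (few) to 4 (widespread)
    reversibility: int  # difficulty of remediation: 1 (easily reversed) to 4 (irreversible)
    likelihood: int     # probability of occurrence: 1 (remote) to 4 (highly likely)

    @property
    def severity(self) -> int:
        # Severity is driven by the gravest of the three impact dimensions.
        return max(self.scale, self.scope, self.reversibility)

    @property
    def priority(self) -> int:
        # Higher values indicate harms to address first in the Mitigation Plan.
        return self.severity * self.likelihood

impacts = [
    ImpactAssessment("wrongful rejection of qualified applicants", 3, 3, 2, 3),
    ImpactAssessment("temporary unavailability of the service", 2, 2, 1, 2),
]
for i in sorted(impacts, key=lambda x: x.priority, reverse=True):
    print(f"{i.harm}: severity={i.severity}, priority={i.priority}")
```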
4. Mitigation Plan (MP)
The Mitigation Plan element of the HUDERIA process outlines actions and strategies to address adverse impacts and mitigate identified harms. It involves formulating targeted measures based on the severity and likelihood of these harms and developing a comprehensive plan to implement them. Where appropriate, it also includes establishing mechanisms for affected individuals and other stakeholders to access remedies.
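As with the sketches above, the structure below is purely illustrative; the HUDERIA does not define a format for mitigation plans. It simply shows how each prioritized harm from the assessment could be linked to targeted measures, a responsible owner, and, where appropriate, a remedy mechanism for affected persons. All field names and values are assumptions.

```python
from dataclasses import dataclass

# Hypothetical illustration: links each assessed harm to targeted measures,
# a responsible owner, and an optional remedy mechanism. All field names
# and values are assumptions; the HUDERIA defines no such format.

@dataclass
class MitigationEntry:
    harm: str
    severity: int                        # carried over from the Risk and Impact Assessment
    likelihood: int
    measures: list[str]                  # targeted technical or organizational measures
    owner: str                           # who is responsible for implementation
    remedy_mechanism: str | None = None  # e.g. a complaint channel or human review on request

mitigation_plan = [
    MitigationEntry(
        harm="wrongful rejection of qualified applicants",
        severity=3,
        likelihood=3,
        measures=["bias testing before each model update",
                  "human review of all automated rejections"],
        owner="HR analytics team",
        remedy_mechanism="applicant appeal and re-assessment channel",
    ),
]
```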