AI is ubiquitous, and organizations are adopting AI solutions at a rapid pace. Findings from the first nationally representative survey in the US on generative AI use suggest that “U.S. adoption of generative AI has been faster than adoption of the personal computer and the internet.” With this proliferation comes legal risk, and AI risk assessments are essential tools for organizations to understand the risks and the legal requirements that come with AI adoption.
Like privacy risk assessments, AI risk assessments aim to identify, evaluate, and mitigate potential risks associated with systems or processes. Because AI can introduce unique challenges, including algorithmic bias, transparency issues, and accountability concerns, the assessment should be tailored to the unique elements of the AI system being implemented.
Below is a general overview of how to conduct an AI risk assessment. While the scope and specific frameworks of each risk assessment will vary, it is essential to maintain a structured, systematic approach to ensure the system is evaluated thoroughly.
Determine Which Laws Apply
To begin any risk assessment, the first step is to determine which laws, regulations, and standards apply. For AI systems, these laws may include, but are not limited to, AI-specific laws, sector-specific laws, and state privacy laws.
How to identify applicable laws
Begin by identifying the jurisdictions where the AI system will be deployed, accessed, or will otherwise impact individuals. Then, assess which sectors the AI system will be operating in (e.g., finance, employment, healthcare) and whether any AI-specific or general laws apply to the system or its use.
Applicable AI-specific laws may include, but are not limited to:
- California Training Data Transparency Act. Effective January 1, 2026, this law requires documentation about any generative AI system made available to consumers in California. This documentation must be posted on the developer’s website and includes, among other things, a summary of the datasets used in the development of the system, the sources of those datasets, how the datasets further the AI system’s intended purpose, and a description of the types of data points within the datasets.
- California AI Transparency Act. Effective January 1, 2026, this law covers providers of generative AI systems that are accessible in California and have over one million monthly users. Under this law, covered entities are required to make an AI-detection tool available to users of the AI system at no cost. The law also requires covered entities to provide an optional manifest disclosure and a mandatory embedded (latent) disclosure for AI-generated outputs, among other things.
- Colorado Artificial Intelligence Act. Enacted in 2024, this Act includes parameters around “high-risk” AI systems—those which make, or are a substantial factor in making, consequential decisions. This Act is designed to protect against algorithmic discrimination and imposes obligations relating to transparency and disclosures, risk analysis and mitigation, and impact assessments for both developers and deployers.
- Utah Artificial Intelligence Policy Act. Enacted in early 2024, this Act requires providers of generative AI systems to ensure that the system discloses that the user is interacting with generative AI rather than a human. In some instances, this disclosure must be made at the beginning of the interaction with the user.
- Illinois Human Rights Act. Effective January 1, 2026, amendments to the Illinois Human Rights Act will address the use of AI systems, specifically in employment contexts. The Act currently prohibits discrimination against protected classes in Illinois, and the amendments will expand its scope to cover employment discrimination resulting from the use of AI. For more about this Act, visit our previous article here.
- EU AI Act. The EU AI Act entered into force on August 1, 2024, but its provisions are phased into effect over time. Under this Act, AI systems are categorized by risk level: unacceptable, high, limited, and minimal. AI systems posing unacceptable risk are prohibited outright, high-risk systems are subject to extensive risk-management, transparency, and safety obligations, and limited-risk systems face lighter transparency requirements.
Additional privacy laws & standards
Data protection and privacy laws and regulations, like the California Consumer Privacy Act (CCPA) or General Data Protection Regulation (GDPR), should be taken into consideration, because AI systems frequently process personal or sensitive data. For an overview of the current US state comprehensive privacy laws, visit our previous article here.
In addition to identifying applicable laws, it is also helpful to understand emerging standards and ethical guidelines for responsible AI, such as those from ISO, IEEE, or NIST. Although not legally binding, these frameworks can provide best practices to align the AI system or processes with industry standards.
Choose Your Framework
After understanding the legal requirements that apply to your AI system, your organization should select a risk assessment framework that aligns with the type of AI system being implemented and your organization’s goals.
Because AI is still relatively new, frameworks are still in development. However, there are a handful of frameworks currently available, which include, but are not limited to:
- NIST AI Risk Management Framework. This framework – and its accompanying playbook – was developed by the National Institute of Standards and Technology (NIST) and is designed to “increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.” Because the NIST framework addresses risks to organizations, people, and society in general, it offers a flexible approach that can be used across various industries.
- ISO/IEC 42001:2023. This framework focuses on AI management system standards across all types of AI applications and contexts, and offers organizations guidance on creating, deploying, and monitoring AI systems. This standard is particularly useful for organizations seeking international recognition for their AI governance practices, and covers areas including responsible AI, reputation management and user trust, managing AI-specific risks, and innovating within the ISO/IEC framework.
- CNIL Self-Assessment Guide for Artificial Intelligence (AI) Systems. This framework offers organizations an analysis grid to assess the maturity of their AI systems in light of the GDPR. Published by the CNIL, the French data protection authority, this framework outlines general aspects of data protection law as well as specific elements that should be more thoroughly reviewed in the context of AI. Because this assessment focuses on the GDPR, it is best for organizations seeking compliance with European data protection and AI laws.
Regardless of the framework, any organization implementing an AI system or process should conduct an assessment using a structured approach. Not only will this approach help provide a more comprehensive assessment, but it will enable greater consistency with each iteration of the assessment, allowing the organization to more effectively compare risks and manage accountability.
Identify AI Stakeholders
Identifying the stakeholders in the organization’s AI system or process ensures that all relevant perspectives and concerns are considered. In turn, this helps provide a more thorough, well-rounded assessment.
Who are stakeholders?
A stakeholder is anyone who is affected by, has an interest in, or has control over an AI system. Key groups often include developers, engineers, product owners or managers, compliance teams, organizational leadership teams, and users.
How to identify stakeholders
To identify relevant stakeholders for an AI system or process, start by analyzing the AI system’s lifecycle. Consider who is involved in each phase, from design and development to deployment. For example, developers and engineers play vital roles in understanding technical implications throughout the lifecycle, while leadership teams can help guide the intended purpose and evolution of the system. Users should also be considered, as they can provide use-case examples after deployment and feedback on their interactions with the system.
Additionally, it is essential to include a diverse range of stakeholders. Balancing differing priorities, such as fairness, bias reduction, and operational efficiency, will help address potential risks more comprehensively. A range of perspectives can help uncover blind spots, build trust, and ensure that the AI system aligns with legal standards and user expectations.
Map Your System
Mapping your AI system will help provide a clear understanding of how the AI system operates and how it interacts with and impacts its environment. By accounting for system components, data flows, and dependencies, an organization can better pinpoint potential risks of bias, inaccuracies, or other issues at each stage of the AI system’s lifecycle.
Outline the system
Start by outlining the AI system’s purpose and scope. Define each input, output, and process, and include algorithms, data sources, and models that the AI system relies on. Integrations with other platforms should also be considered and documented. During this process, the organization should refer back to the roles of all stakeholders to ensure each is accounted for.
Define the data journey
After the system’s structure is defined, trace the data journey from collection, to decision-making, to output. During this process, it is important to highlight any personal or sensitive data. Processing this information is where errors, biases, or other vulnerabilities may emerge, and it may implicate specific AI or other data privacy laws.
Identify monitoring methods
Finally, map feedback loops and other mechanisms for monitoring the system after deployment. AI systems evolve through updates and learning processes, and it is essential to understand how these changes can expose additional risks.
By creating a detailed data map, the organization can establish a comprehensive foundation to carry out the remainder of the risk assessment in a thorough manner.
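For organizations that want to keep this map in a consistent, machine-readable form, the sketch below is one possible way to record components, data flows, and post-deployment monitoring hooks in Python. The field names and the example resume-screening system are purely illustrative assumptions, not part of any law or framework discussed in this article.

```python
# A minimal sketch of a machine-readable AI system map.
# All field names and the example system below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    source: str                   # where the data originates (e.g., "job applicants")
    destination: str              # where it goes (e.g., "ranking model")
    data_types: list[str]         # e.g., ["resume text", "contact details"]
    contains_personal_data: bool  # flags data that may trigger privacy laws
    contains_sensitive_data: bool # e.g., health, biometric, or protected-class data

@dataclass
class AISystemMap:
    purpose: str
    models: list[str]
    integrations: list[str]
    stakeholders: list[str]
    data_flows: list[DataFlow] = field(default_factory=list)
    monitoring: list[str] = field(default_factory=list)  # post-deployment feedback loops

    def flows_needing_privacy_review(self) -> list[DataFlow]:
        """Return flows that carry personal or sensitive data."""
        return [f for f in self.data_flows
                if f.contains_personal_data or f.contains_sensitive_data]

# Hypothetical example: a resume-screening system.
system_map = AISystemMap(
    purpose="Rank incoming resumes for recruiter review",
    models=["resume-ranking-model-v2"],
    integrations=["applicant tracking system"],
    stakeholders=["developers", "HR product owner", "compliance", "recruiters"],
    data_flows=[
        DataFlow("job applicants", "ranking model",
                 ["resume text", "contact details"], True, False),
    ],
    monitoring=["monthly outcome audit", "recruiter feedback form"],
)

print([f.destination for f in system_map.flows_needing_privacy_review()])
```

Recording the map this way makes it easier to repeat the exercise in later iterations of the assessment and to flag which data flows warrant privacy review.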
Set Quality and Accuracy Metrics
For any assessment, metrics must be defined and compared against benchmarks to ensure the system operates as intended, delivers meaningful results, and meets stakeholder expectations. To determine these metrics, the organization should first define the goals of the AI system. Key questions to ask may include:
- What specific problem is the AI system designed to solve?
- What value does the AI system contribute?
- What decisions or actions will the AI influence or automate?
- What are the users’ needs and expectations from the system?
- Are there specific fairness, inclusivity, or accessibility goals?
- How should the system evolve with time or use?
The organization’s metrics should be tailored to address the answers to these and related questions.
Next, consider the datasets used to train and evaluate the system. Ensuring that data is complete, consistent, and representative will help the AI system reflect real-world usage. Therefore, datasets should also have metrics to ensure reliable data is being used to assess the system.
The reliability of the system should also be defined. Consider metrics like error rates, false positives, and false negatives to gain insight into how the AI system handles edge cases or unexpected inputs.
Finally, user metrics can also provide insight into how well the AI system is performing. These could include satisfaction scores, task success rates, or other measures of how well the AI meets user expectations.
After each metric is defined, establish a threshold or benchmark for each. Continuous monitoring and regular evaluation against these standards will help ensure the AI system maintains reliability over time. For dynamic AI systems – which continuously evolve with new data or updates – assessing quality and accuracy is an ongoing process.
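To make the reliability metrics and thresholds above concrete, the sketch below computes an error rate, false positive rate, and false negative rate from confusion-matrix counts and checks them against benchmarks. The counts and threshold values are placeholder assumptions, not recommendations; each organization would set its own.

```python
# A minimal sketch of reliability metrics checked against organization-set benchmarks.
# The counts and thresholds below are placeholders, not recommendations.

def reliability_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Derive basic reliability metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "error_rate": (fp + fn) / total,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

# Hypothetical organization-defined benchmarks.
THRESHOLDS = {
    "error_rate": 0.05,
    "false_positive_rate": 0.03,
    "false_negative_rate": 0.03,
}

metrics = reliability_metrics(tp=940, fp=25, tn=1010, fn=25)
for name, value in metrics.items():
    status = "OK" if value <= THRESHOLDS[name] else "REVIEW"
    print(f"{name}: {value:.3f} (threshold {THRESHOLDS[name]}) -> {status}")
```

Running a check like this on a regular cadence, and after each model update, supports the continuous monitoring described above for dynamic AI systems.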
Assess Privacy and Cybersecurity
Privacy and cybersecurity are both deeply interconnected components of AI risk assessments. Taking steps to assess these elements helps ensure user safety – particularly when the system collects or otherwise processes personal or sensitive information.
Increased Risk of Vulnerability in AI Systems
AI systems can handle large amounts of data, making them targets for malicious actors and raising significant privacy concerns. In an evaluation of the cyber security risks to AI by the UK’s Department for Science, Innovation and Technology, vulnerabilities from malicious actors were identified at each stage of an AI system’s lifecycle. Without robust security measures, these vulnerabilities can be exploited more easily; addressing them strengthens an organization’s protection against a range of cyber threats.
Data Protection Impact Assessments (DPIAs)
Most U.S. states with comprehensive data privacy laws require organizations to conduct a data protection impact assessment or data privacy impact assessment (DPIA) for high-risk data processing activities. DPIAs are systematic evaluations that require organizations to adopt privacy-forward practices and require close interaction between privacy and cybersecurity functions.
DPIAs help organizations evaluate how personal data is collected, stored, processed and shared. In the context of AI, DPIAs are essential for identifying privacy risks in the training, deployment, and maintenance phases of the AI system. In many instances, DPIAs are required in Europe and the U.S. in the case of:
- Deployment of high-risk AI systems, as defined under the EU AI Act;
- Evaluation of personal aspects relating to individuals based on automated processing, including profiling, where decisions based on that evaluation produce legal effects on, or similarly significantly affect, a natural person;
- Systematic monitoring of a publicly accessible area on a large scale;
- Processing personal data that constitutes sensitive personal data;
- Processing personal data where it could present a heightened risk of consumer harm, such as unfair or deceptive treatment; financial, physical or reputational injury to consumers; or physical or other intrusion on solitude or private affairs;
- Processing personal data for purposes of targeted advertising; or
- Sales of personal data.
Like frameworks for the overarching AI assessment, there are also frameworks to help conduct a DPIA, including the:
- NIST Risk Management Framework (RMF). This framework is designed to provide a structured yet flexible approach for managing security and privacy risks, including conducting a DPIA. Through this framework, an organization can link risk management processes at the system level and organizational level. The NIST Cybersecurity Framework can be aligned with the NIST RMF and can be implemented through NIST risk management processes.
- ISO/IEC 29134:2023. This document provides guidelines for the privacy impact assessment process and for the structure and content of a DPIA report. It is applicable to all types of organizations, regardless of size, including public and private companies, government entities, and not-for-profit organizations.
- ICO Sample DPIA Template. This template from the UK’s Information Commissioner’s Office provides an example of how an organization can record the DPIA process and outcome. This template should be read alongside the guidance for an acceptable DPIA set out in the European Guidelines for DPIAs.
The frameworks used to conduct a DPIA are similar to those used to conduct an overarching AI risk assessment. While both identify and mitigate potential risks, a DPIA focuses on personal data privacy concerns arising from or within the AI system. While NIST points out that “there is no foolproof way” to protect AI from attacks, using a DPIA to understand privacy and cybersecurity risks can help reduce damage to or by an AI system.
Review Bias
After the groundwork of the assessment has been completed, it is essential to understand the results of the assessment – specifically when it comes to bias and discrimination. Bias in an AI system occurs when a model produces unfair or skewed outcomes due to issues in the data, algorithms, or deployment of the system. These skewed outcomes pose significant ethical, legal, and regulatory risks, making a comprehensive review of bias an essential part of an AI risk assessment.
Bias from Training Data & Algorithms
To review bias, the organization should start by examining the data used to train the AI system. The training data helps AI systems learn to make decisions and should be carefully reviewed. This data should be representative of the context in which the AI system will operate; issues with the dataset, such as under- or overrepresentation of certain groups, can lead to discriminatory outcomes.
In addition to issues with training data, the algorithms used can also introduce or amplify bias. In a report on managing bias in AI, NIST points out that these situations “often arise when algorithms are trained on one type of data and cannot extrapolate beyond those data.” This could be due to an issue with the data itself or with the mathematical representations of the data in the algorithms.
Bias from Deployment Context
After reviewing the technical elements of the AI system, bias review should also include deployment contexts. This is because even seemingly neutral or well-trained models can produce biased results if deployed in contexts the AI system was not trained for. Differences in user behavior may create unintended outcomes.
To mitigate these risks, organizations should ensure datasets are diverse, representative, and regularly audited for imbalances or stereotypes. Additionally, organizations should conduct context-specific testing before deployment and implement feedback mechanisms to monitor and address bias over time.
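As one simplified illustration of the dataset and outcome audits described above, the sketch below compares group representation in a set of records and each group’s positive-outcome rate. The group labels, counts, and the 80% rule-of-thumb screen are illustrative assumptions only; a real bias review requires far more context than any single metric.

```python
# A minimal sketch of a group representation and outcome-rate audit.
# Group labels, counts, and the 0.8 ratio are illustrative assumptions only.
from collections import Counter

def representation(records: list[dict], group_key: str) -> dict[str, float]:
    """Share of records per group."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def positive_rates(records: list[dict], group_key: str, outcome_key: str) -> dict[str, float]:
    """Positive-outcome rate per group."""
    by_group: dict[str, list[int]] = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(int(r[outcome_key]))
    return {g: sum(v) / len(v) for g, v in by_group.items()}

# Hypothetical records: each has a group label and a model outcome.
records = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "A", "selected": 1},
    {"group": "B", "selected": 1}, {"group": "B", "selected": 0},
    {"group": "B", "selected": 0},
]

print("representation:", representation(records, "group"))
rates = positive_rates(records, "group", "selected")
print("positive rates:", rates)

# Flag groups whose positive-outcome rate falls below 80% of the highest rate,
# a common rule of thumb used here only as an initial screen for review.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("flag for review:", flagged)
```

Checks like these can be rerun on deployment data as part of the feedback mechanisms noted above, so that bias introduced by a new context is caught rather than assumed away.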
Manage Risks
Effective risk management is the final step of conducting an AI risk assessment. Per NIST, “[a]ddressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems.”
This process should follow a proactive, iterative, and comprehensive approach to identifying and assessing risks – especially for systems that evolve over time. Using the steps above, organizations can conduct regular performance reviews and implement feedback loops to better pinpoint potential risks as well as their severity and likelihood of harm.
After identifying risks, organizations should clearly document and communicate risk management processes to stakeholders, ensuring that system limitations and safeguards are understood. Additionally, businesses should take a collaborative approach with stakeholders to mitigate risks and help align practices with best-in-class recommendations. Key practices for managing risk include adopting policies for system oversight and adopting regular assessments to ensure ongoing compliance with laws and regulations.
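One simple way to document identified risks, along with their severity and likelihood, is a risk register. The sketch below shows one possible structure; the 1-to-5 scales, example risks, and owners are hypothetical choices for illustration rather than a prescribed methodology.

```python
# A minimal sketch of an AI risk register with severity x likelihood scoring.
# The scales, example risks, and owners below are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int    # 1 (negligible) to 5 (severe)
    likelihood: int  # 1 (rare) to 5 (almost certain)
    mitigation: str
    owner: str

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

register = [
    Risk("Training data underrepresents some user groups", 4, 3,
         "Augment dataset; schedule quarterly bias audit", "data science lead"),
    Risk("Model outputs include personal data in logs", 3, 2,
         "Redact logs; update retention policy", "privacy officer"),
]

# Review highest-scoring risks first and share the register with stakeholders.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.description} -> {risk.mitigation} ({risk.owner})")
```

Keeping the register in a consistent format also makes it easier to compare risk levels across successive iterations of the assessment.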
AI systems will never be risk-free. However, businesses can effectively use AI risk assessments to safeguard against potential harms. Through a systematic evaluation of the AI system, organizations can create more trustworthy and reliable AI systems, while ensuring compliance and protecting user privacy.