CARR CENTER FOR HUMAN RIGHTS POLICY
HARVARD KENNEDY SCHOOL
Human Rights Implications of
Algorithmic Impact Assessments
Priority Considerations to Guide
Eective Development and Use
Brandie Nonnecke
Philip Dawson
Carr Center
Discussion Paper
SPRING 2021
ISSUE 2021-25
The views expressed in the Carr Center Discussion Paper Series are those
of the author(s) and do not necessarily reflect those of the John F. Kennedy
School of Government or of Harvard University. Faculty Research Working
Papers have not undergone formal review and approval. Such papers
are included in this series to elicit feedback and to encourage debate on
important public policy challenges. Copyright belongs to the author(s).
Papers may be downloaded for personal use only.
Human Rights Implications of Algorithmic Impact Assessments:
Priority Considerations to Guide Eective Development and Use
Carr Center for Human Rights Policy
Harvard Kennedy School, Harvard University
October 21, 2021 | Issue 2021-25
Brandie Nonnecke and Philip Dawson
Technology and Human Rights Fellows
Carr Center for Human Rights Policy
ABSTRACT: The public and private sectors are increasingly turning to the use of algorithmic or artificial intelligence impact assessments (AIAs) as a means to identify and mitigate harms from AI. While promising, a lack of clarity on the proper scope, methodology, and best practices for AIAs could inadvertently perpetuate the harms they seek to mitigate, especially to human rights. We explore the emerging integration of the human rights legal framework into AI governance strategies, including the implementation of human rights impact assessments (HRIAs) to assess AI. The benefits and drawbacks of recent implementations of AIAs and HRIAs adopted by the public and private sectors are explored and considered in the context of an emerging trend toward the development of standards, certifications, and regulatory technologies for responsible AI governance practices. We conclude with priority considerations to better ensure that AIAs and their corresponding responsible AI governance strategies live up to their promise.
Table of Contents

Introduction
AI Governance and Human Rights
Algorithmic Impact Assessments and Human Rights Impact Assessments
The Role of Standards and Certifications
Conclusion
Acknowledgment
Introduction
In response to growing recognition of the societal risks of
articial intelligence (AI) broadly and automated decision-
making systems (ADS) in particular, algorithmic or AI impact
assessments (AIAs) are increasingly being considered by the
public and private sectors to anticipate, prevent, and mitigate
harms, or as a means to improve the quality of AI products
and services.[1] The term "algorithmic impact assessment" currently lacks definitional clarity. In general, the purpose of
an AIA is to identify potential risks and impacts—including
to health, safety, ethics, and, in some implementations,
to human rights—arising from the development and
deployment of an algorithmic system, as well as appropriate risk mitigation strategies, such as the use of algorithmic audits, "datasheets for datasets," and "model cards."[2]
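To make these documentation artifacts concrete, the sketch below renders a "model card" as structured metadata. This is a minimal illustration, not the paper's method: the section names loosely follow those proposed by Mitchell et al. (2019), and every value is a hypothetical placeholder.

# Illustrative sketch of a "model card" as structured metadata.
# Section names loosely follow Mitchell et al. (2019); all values
# are hypothetical placeholders, not a real system's documentation.
model_card = {
    "model_details": {"name": "loan-screening", "version": "2.1"},
    "intended_use": "pre-screening of loan applications; not for final decisions",
    "factors": ["age band", "region", "income bracket"],
    "metrics": {"accuracy": 0.91, "false_positive_rate_gap": 0.04},
    "training_data": "2015-2019 applications; see the accompanying datasheet",
    "ethical_considerations": "risk of disparate error rates across groups",
    "caveats": "performance unverified outside the training distribution",
}

for section, content in model_card.items():
    print(f"{section}: {content}")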
Implementations of AIAs are gaining momentum as a viable
AI governance strategy, nding their way into binding
regulation and legislation.[3] Corporate policies are also
requiring implementation of AIAs as a mechanism to reduce
legal risks stemming from liability and negligence.[4] The European Commission's Artificial Intelligence Act takes a risk-
based approach to AI governance, prohibiting certain harmful
applications of AI and calling for the use of “conformity
assessments” for high-risk applications to identify necessary
oversight mechanisms.[5] The Algorithmic Accountability Act
proposed in the United States Congress in 2019 would have
required companies with large user bases to conduct impact
assessments of highly sensitive ADS (the Act is expected to
be reintroduced in 2021).[6] In 2021, the National Institute of
Standards and Technology (NIST) was tasked by Congress
to develop an “AI risk management framework” to guide the
“reliability, robustness, and trustworthiness of AI systems”
used in the federal government.[7] Also in 2021, the National Security Commission on Artificial Intelligence issued a report
recommending risk assessments for AI to be implemented ex
ante and for impact assessments to be conducted ex post to
"increase public transparency about AI use through improved reporting."[8] In this instance, risk assessments and impact
assessments are dierentiated, with risks being identied at
the outset and impacts being evaluated aer deployment to
quantify and mitigate the identied risks. Canadas “Directive
on Automated Decision-Making,which came into eect in
2020, led to the development of one of the rst AIA tools
to identify and mitigate a range of risks—to individual
rights, economic interests, health and well-being, and
sustainability—arising from ADS developed and deployed in
the public sector.[9]

While AIAs hold great promise to promote the development
of regulatory, policy, and governance mechanisms by
government and corporate actors to identify potential
harms, civil society organizations have warned that using a
risk-based AIA approach may be insufficient.[10] Most guidance
for implementation of AIAs indicates that their use should
be reserved for “high-risk” AI applications (e.g., use of AI in
biometric identication or judicial sentencing). However,
applications wrongly categorized as “low risk” can evade
proper oversight. is is especially problematic in the context
of emerging legislation that places the onus of determining
risk level on the entity developing the AI. Further, a lack of
common or internationally standardized approaches to the
development of AIAs could lead to confusion and complicate
their eectiveness.
As a risk-based approach increasingly dominates AI
governance strategies, important questions emerge
regarding the proper scope, methodology, and best practices
that might protect AIAs from inadvertently becoming
smokescreens for human rights and other abuses. In short,
the ill-conceived development and deployment of AIAs pose substantial risks themselves. This isn't to say that implementation of AIAs cannot provide benefits now, but that significant work remains to determine how to appropriately develop and apply AIAs to ensure long-term effectiveness.
If done inappropriately, their use may ultimately perpetuate
the harms they seek to mitigate.
In this research, we explore the emerging integration of the
human rights framework into AI governance strategies, the
development and use of AIAs, and the potential benets and
risks they pose to human rights. We rely on the international
human rights law framework, including the Universal Declaration of Human Rights (UDHR) as well as the UN Guiding Principles
on Business and Human Rights (UNGPs), to provide
an analysis of emerging proposals for the use of AIAs,
including in recommendations made by international and
intergovernmental organizations, regulatory and legislative
proposals from government bodies, and usage to date in
the private sector. We conclude with a discussion of priority
considerations to help guide the eective development and
use of AIAs to better ensure that they live up to their promise.
AI Governance and Human Rights
At least 170 sets of ethical or human rights-based AI principles,
frameworks, and guidelines have been developed to support
responsible AI development and deployment within the
public and private sectors.[11] A growing consensus is forming
around core principles, including the need for accountability,
privacy and security, transparency and explainability,
fairness and non-discrimination, professional responsibility,
human control, and the promotion of human values.[12] As
these AI principles gain acceptance within the public and
private sectors, the focus is now shiing to the development
of appropriate strategies to operationalize the principles into
responsible practices. Yet this process is not straightforward.
Across the more than 170 existing sets of AI principles, there is seldom consensus on how the principles should be interpreted in practice.[13]
AI principles have been developed by diverse
institutions (e.g., academia, civil society, governments) with
varying multi-stakeholder representation. Because these institutions have differing priorities and needs and have often applied different ethical frameworks (e.g., deontological, consequentialist, utilitarian approaches) to evaluate the benefits and risks of AI, there is great heterogeneity in
CARR CENTER FOR HUMAN RIGHTS POLICY 3
how AI principles are dened and in recommendations for
their appropriate operationalization. Certain scholars have
argued that “AI principle proliferation” has perpetrated a
crisis of legitimacy, complicating the already complex task
of identifying and mitigating risks and harms of AI-enabled
technologies.
14
In response, the international human rights framework, with its normative and legal guidance, has been proposed as a mechanism to support more consistent framing and operationalization of AI principles, and many prominent professional associations, consortia, intergovernmental organizations, governments, and companies seem to agree.[15]
The Institute of Electrical and Electronics Engineers (IEEE), the world's largest technical professional organization, issued a report in 2017 stating as its first principle that AI should be "created and operated to respect, promote, and protect internationally recognized human rights" and emphasized that human rights should be part of AI risk assessments.[16]
The Asilomar Principles, with over 5,000 signatories from the public and private sectors, include the need to protect human rights in the design and deployment of AI systems.[17]
The Organisation for Economic Co-operation and Development (OECD) AI Principles, which 42 countries have pledged to uphold, specifically call for the protection of human rights.[18]
The European Commission's Artificial Intelligence Act (AI Act) seeks to ensure that AI systems respect human rights through the development of a risk-based
approach to AI governance and oversight.[19] The White House
Oce of Science and Technology Policy (OSTP) in its National
AI Initiative called out the need to ensure that AI systems do
not infringe upon human rights, especially rights to privacy,
civil rights, and civil liberties.[20] Canada, through its Directive on Automated Decision-Making, is one of the first countries
to develop an AIA tool that seeks to measure and mitigate the
human rights harms of ADS used in public services.[21] Data-
intensive companies like Salesforce have explicitly identified the protection of human rights in their AI ethics strategies.[22] And
Microso and Intel are among the rst global tech companies
to conduct HRIAs on their development and use of AI.
23
Centering human rights within AI governance strategies can
help operationalize AI principles across sectors, international
contexts, and domain application areas.[24] Through their codification in charters, case law, regulation, and industry
standards, human rights norms and values have gained
broad global consensus.[25] The UDHR and corresponding
international human rights instruments and guiding
principles, UN treaties and commentaries, national laws,
and related policies and guidelines have helped to clarify
core denitions and interpretations of human rights over
decades.
26
As such, international human rights norms and values may be "clearer, better defined, and [more] stable" than AI principles alone. Applying a human rights framework "facilitates better harmonization and reduces the risk of uncertainty" in defining and applying AI principles in practice.[27]
Take, for example, the principle of "non-discrimination," which exists in Article 2 of the UDHR and has also been widely adopted as an AI principle in the public and private sectors. The operationalization of "non-discrimination" is complicated
by the absence of a shared understanding of what it means in
the development and deployment of AI systems. By applying
a human rights framework and relevant charters, case law,
and regulation to identify how “non-discrimination” has been
interpreted in a particular domain, appropriate strategies to
move the concept of “non-discrimination” from the abstract
to the concrete can become clearer.
Human rights principles also highlight that the responsible design of AI systems, including their transparency, explainability, and accountability, is not only desirable from a commercial or ethical standpoint but also a prerequisite for upholding existing legal obligations. For instance, a lack
of transparency regarding the use of AI systems can make it
dicult to determine whether a violation of human rights
or any other legal obligation has occurred, pre-empting the
ability to seek redress. Similarly, and especially in the public
sector, the reliance on a recommendation, decision, or insight
provided by an AI system that is not explainable or accountable
is at odds with human rights principles incorporated into
national administrative law, which generally requires that
an individual be provided with reasons for a decision made against them, as well as an opportunity to contest that decision and receive remedies.[28]
The human rights framework can provide the substantive foundation and governance architecture needed to produce greater specificity in defining and operationalizing AI principles. As the public and private sectors increase their efforts to implement AIAs, calls to require HRIAs for AI are also on the rise.[29]
As AIAs and HRIAs for AI continue to gain
prominence, it is important to consider how these approaches
should be implemented to better identify and mitigate risks.
We next evaluate the design and scope of AIAs and HRIAs for
AI and then turn to a discussion of the challenges associated
with their implementation.
Algorithmic Impact Assessments and
Human Rights Impact Assessments
There is a long history of using impact assessments in a variety of domains, including to assess and mitigate harms to the environment, data security, privacy, and human rights. For each, the appropriate scoping and implementation methods must be carefully negotiated and constructed to support accountability.[30]
In a recent study of impact assessments in different sectors, researchers noted that the methodology is largely driven by 10 constitutive components, including criteria such as source(s) of legitimacy (e.g., legislative or regulatory mandates that define who must implement an impact assessment and when), identifying potential "impacts" to be assessed and mitigated (e.g., risks to non-discrimination), and the appropriate methods for doing so (e.g., consultation with diverse subject-matter experts and those directly affected).[31]
The design and implementation of impact assessments in the field of AI is nascent. As such, there is a lack of consensus or common standards regarding the appropriate configuration or application of such constitutive components, including which entities should administer and enforce AIAs or HRIAs to support legitimacy, how to adopt meaningful governance and engagement processes to support accountability, and what the appropriate methods for implementation are, including how to effectively define, identify, and mitigate risks.[32]
Metcalf et al. (2021) dene AIAs as “emerging governance
practices for delineating accountability, rendering visible the
harms caused by algorithmic systems, and ensuring practical
steps are taken to ameliorate those harms.
33
Typical sources of risk to be identified include the presence of bias in datasets used to train an AI system, as well as the fairness and explainability of the model; identification of potential impacts can include contextual considerations related to equity and justice, as well as the economic interests, health, and well-being of users or populations potentially affected by the proposed system. Companies may integrate AIAs in whole or in part into traditional product reviews, risk management, and due diligence processes. The goal of an AIA, as with other impact assessments, is ultimately to identify technical adjustments that can be made to the AI system in order to eliminate the risks identified or to reduce them to an acceptable
level. Because of their deep expertise and knowledge of the AI system being assessed, technology firms will likely be the primary administrators of AIAs, creating a potential situation where these firms have an outsized effect on what is included in an AIA and how it is implemented in practice.[34]
An HRIA is "a tool to evaluate the potential or actual impact of an organization's strategy, practice, or product on people's human rights."[35]
Endorsed by the UN Human Rights Council
in 2011, the UNGPs underpin much of the criteria and
guidance applicable to best practices of HRIAs. The UNGPs
recommend that assessments of human rights impacts
should be undertaken regularly and at appropriate stages of a
business’s operations as part of its human rights due diligence
processes, for instance prior to a new activity or relationship,
before major decisions or changes in its operations (e.g.,
market entry, product launch, policy change, or wider changes
to the business), and periodically throughout the life of an
activity or relationship. In general, the assessment should
include identifying who may be aected, cataloging the
relevant human rights standards and issues, projecting how
the proposed activity and associated business relationships
could have adverse human rights impacts on those identied,
and identifying mitigations that might eliminate the risks or reduce them to an acceptable level.
Large technology companies like Microsoft and Facebook have begun conducting HRIAs to identify and address technology-related human rights risks, including those emanating from AI.[36]
Microso publishes a “Human Rights
Annual Report” within which the human rights eects of its
technologies are explored and risk mitigation strategies taken
are discussed. Facebook commissioned an HRIA to evaluate
its role in the genocide of the Rohingya in Myanmar.
37
The HRIA was found to be largely ineffective at uncovering the human rights harms of Facebook's AI-enabled tools and identifying appropriate mechanisms to mitigate those harms moving forward.
In the remainder of this section, we explore proposed and
existing AIAs in the public and private sectors to better
understand emerging trends in their scope and structure and
the corresponding benets and risks associated with their
implementation, especially to human rights. We first review
Canada’s “Directive on Automated Decision-Making” and its
development of an AIA to evaluate and mitigate harms of ADS
in government public service delivery. We next consider the
EU’s risk-based approach to AI governance and its proposed
implementation of “conformity assessments” to identify and
mitigate AI risks emerging from the private sector. We then
explore the implementation of HRIAs for AI and how these may differ from, complement, or be integrated into AIAs to better ensure the protection of fundamental human rights.
In 2019, the Canadian government released its Directive on Automated Decision-Making (the Directive).[38]
The Directive's principal objectives were to ensure that the incorporation of ADS into external public service delivery respects "core administrative law principles such as transparency, accountability, legality, and procedural fairness" and to ensure that harmful effects of algorithms on administrative decisions are assessed and reduced.[39]
To this end, the Directive includes an accompanying AIA Tool in the form of a questionnaire that must be completed prior to the development of any ADS. Completion of the questionnaire helps internal teams compute a raw impact score that measures the risk of the automation, for instance to the rights of individuals or communities, their health, well-being, or economic interests, as well as effects on the overall "sustainability of the ecosystem."[40]
Depending on the level of impact,
the Directive provides for increasingly rigorous mitigation
requirements, such as extensive peer review, notice, human
intervention in the decision-making process, the provision
of a “meaningful explanation,” or personnel training.
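The sketch below illustrates, in simplified form, how a questionnaire of this kind can translate answers into a raw impact score and an escalating set of mitigation requirements. It is a hedged illustration only: the questions, weights, thresholds, and mitigation lists are hypothetical stand-ins, not the Directive's actual rubric.

# Illustrative sketch of a questionnaire-based impact scoring tool in the
# spirit of Canada's AIA Tool. Questions, weights, thresholds, and
# mitigation lists are hypothetical; the official tool defines its own rubric.

QUESTIONNAIRE = {
    # question id: (weight, prompt)
    "affects_rights": (4, "Could the decision affect individual rights or freedoms?"),
    "affects_health": (4, "Could the decision affect health or well-being?"),
    "affects_economy": (3, "Could the decision affect economic interests?"),
    "hard_to_reverse": (2, "Is the decision difficult to reverse once made?"),
}

# Hypothetical mapping from impact level to escalating mitigation duties.
MITIGATIONS = {
    1: ["plain-language notice"],
    2: ["notice", "qualitative peer review"],
    3: ["notice", "peer review", "human intervention before final decision"],
    4: ["notice", "extensive peer review", "human intervention",
        "meaningful explanation", "personnel training"],
}

def impact_level(answers: dict) -> int:
    """Compute a raw score from yes/no answers, then bucket it into levels 1-4."""
    raw = sum(w for qid, (w, _) in QUESTIONNAIRE.items() if answers.get(qid))
    max_raw = sum(w for w, _ in QUESTIONNAIRE.values())
    share = raw / max_raw  # normalized score; bucket thresholds are illustrative
    return 1 if share < 0.25 else 2 if share < 0.5 else 3 if share < 0.75 else 4

answers = {"affects_rights": True, "affects_health": True, "hard_to_reverse": True}
level = impact_level(answers)
print(f"Impact level {level}: required mitigations -> {MITIGATIONS[level]}")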
While the Directive received attention both within Canada
and globally, the government has been criticized for failing to
enforce its requirements. Since the Directive came into force
in May 2020, few AIAs have been completed and published per
its requirements.[41]
In a sense, Canada’s experience with the
Directive highlights a challenge that is well known to global
technology companies: obtaining institutional support and
deploying the resources and expertise necessary to support
the implementation of organization-wide compliance tools is
not a straightforward process, particularly for emerging and
poorly understood technologies such as ADS and AI.
In April 2021, the European Commission released its draft "Artificial Intelligence Act" (AI Act).[42]
The AI Act takes a risk-based approach to AI regulation, establishing four levels of risk: minimal, limited, high, and unacceptable. The Act requires different levels of oversight for limited and high-risk
AI applications. Applications that fall within the category of
unacceptable risk are forbidden (e.g., use of AI that is capable
of manipulating individuals through subliminal techniques).
The deploying entity can determine whether applications fall
under the minimal-risk categorization. Applications posing
limited risk would have transparency obligations. High-
risk applications (such as use of AI in critical infrastructure,
medical devices, and education) or applications that pose
a risk to health, safety, and/or fundamental rights (such as
remote biometric identication, credit scoring, or hiring
decisions) would be subject to ex ante conformity assessments
to be conducted by independent third parties. Providers of
high-risk AI systems must have a post-market monitoring
system in place, in which they actively collect, document, and
analyze relevant data throughout the AI system’s lifetime.
The development and use of harmonized technical standards,
such as those in relation to bias mitigation, risk, or quality
management, is encouraged to facilitate the implementation
of conformity assessments.
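A minimal sketch of this tiered logic appears below. The category lists are illustrative examples drawn from the discussion above, not the Act's actual annexes, and real classification depends on legal analysis rather than string matching.

# Minimal sketch of the AI Act's four-tier risk logic as described above.
# Category lists are illustrative examples from the text, not the Act's annexes.
from enum import Enum

class Risk(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

PROHIBITED = {"subliminal manipulation"}                      # banned outright
HIGH_RISK = {"critical infrastructure", "medical device",     # ex ante conformity
             "education", "remote biometric identification",  # assessment plus
             "credit scoring", "hiring"}                      # post-market monitoring
LIMITED_RISK = {"chatbot", "deepfake generator"}              # transparency duties

def classify(use_case: str) -> Risk:
    if use_case in PROHIBITED:
        return Risk.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return Risk.HIGH
    if use_case in LIMITED_RISK:
        return Risk.LIMITED
    return Risk.MINIMAL  # by default, the deploying entity self-determines

for case in ("hiring", "chatbot", "spam filter"):
    print(case, "->", classify(case).value)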
If AIAs are to be relied upon to protect society from potential AI
harms, inclusion of risks to fundamental human rights will be
critical to their success. Generally, the object of an AIA consists
of the algorithmic or AI system(s), including the datasets used
to train these systems. One of the current trends associated
with AIAs is to focus on assessing the technical aspects (e.g.,
the potential for bias, fairness, or explainability of the system)
and their immediately foreseeable and measurable risks
or consequences. In doing so, Metcalf et al. (2021) caution
that AIAs may lead to an "ontological flattening" of the risks
CARR CENTER FOR HUMAN RIGHTS POLICY8
of AI-driven systems.[43] Approaching AIAs in this manner
may inadvertently lead to overlooking human rights risks
altogether, or a failure to identify the connection between
technical weaknesses and downstream, context-dependent
impacts, including to human rights—especially those that
occur secondarily (e.g., the chilling effect of misidentification
by facial recognition systems on an individual’s freedom of
assembly and expression or the tendency of misinformation
to amplify online misogyny and radicalization). In this sense,
the range of issues to consider in the context of an AIA can
be far more extensive than for traditional product reviews.
As such, scoping AIAs too narrowly can lead to a false sense
of due diligence in risk identication and mitigation, allowing
tools with non-trivial risks to human rights to operate freely.
Dening the scope of an HRIA also presents specic
challenges. For one, because the focus of the exercise shis
from an assessment of the quantiable technical risks of
an AI system to the potential for real-life impacts on the
rights and freedoms of individuals and communities, the
scope of an HRIA tends to be broader and more forward-
looking than that of an AIA by default. Accordingly, while
the subject of an HRIA could be the AI system itself, the
assessment is more likely to require consideration of risk
and impacts at a higher level, for example resulting from
the deployment of the product in different contexts, the
nature of the overarching business activity, the presence
of adequate legal protections or governance structures,
the track record of supply chain partners, or all of the
above. Furthermore, HRIA guidance cautions against
preemptively narrowing the scope of human rights and
freedoms to be investigated at the outset of an assessment,
for instance to consider only risks or impacts related to
the right to privacy or equality and non-discrimination.
In addition to the need to design appropriate methodologies
for conducting AIAs and HRIAs for AI in different contexts,
their operationalization also raises important considerations,
for instance in light of the administrative burden and costs
involved. One approach taken by companies is to stand up
a central unit that develops internal policies and procedures
for AI governance, which may incorporate components of
AIAs and/or HRIAs. is requires hiring additional personnel
with appropriate socio-technical expertise, consequently
increasing operating costs. Even with a central “responsible
AI” unit in place, additional hurdles arise with respect to
training dierent teams to identify and mitigate potential
AI risks, in particular on account of the distinct skill sets,
roles, and responsibilities of personnel at various stages of
the AI lifecycle (e.g., design, development, or deployment).
Companies may opt to conduct training one multidisciplinary
workshop at a time and struggle to administer AI governance
at the enterprise level. Scalability challenges may be further
compounded by the potential for AI systems to exhibit
dierent risks depending on the context of deployment, and
the global scale at which systems may operate. Another approach may be to hire external consultants to
help adapt existing policies and procedures to the AI context,
upskill employees, or conduct in-depth, standalone AIAs and/
or HRIAs for applications believed to be higher risk.
In the absence of proper guidance, the timing of impact
assessments can also have signicant eects on their
outcomes and credibility. For example, a recent study of
the HRIA commissioned by Facebook regarding its potential
implication in the genocide in Myanmar cautioned against
the use of HRIAs as one-time, ex post exercises, which
could become a form of AI "ethics washing."[44] Rather, and
as instructed by the UNGPs, HRIAs should be conducted at
appropriate intervals, aligned with critical stages of the AI
lifecycle and as part of ongoing risk management processes
such as human rights due diligence.[45]
In addition, the study
concluded that HRIAs should be conducted at the earliest
stages of the design or conception of AI systems.
The ex ante HRIA conducted on Alphabet affiliate Sidewalk Labs's "smart-city" project in the City of Toronto represents one potential example of this approach.
proposed digital solutions, including some anticipated
to leverage the use of AI, were assessed prior to the
conrmation of the project. While the nal report of this
HRIA was never publicly released, the exercise, which
included extensive consultation with subject matter
experts and local stakeholders, contributed to the rapid
acceleration and enhancement of existing human rights-
based governance eorts related to the project in a relatively
short period of time. However, as Mantelero and Esposito
(2021) point out, while labor-intensive HRIAs that involve
extensive research and eld work, including consultations
with local stakeholders and subject matter experts, may be
desirable in complex multi-factor scenarios (e.g., large smart-
city projects), they are likely too burdensome and costly to
serve as appropriate models for projects of a smaller scale.[46]
Consideration should be given to developing light-touch
HRIAs, with methodologies calibrated to the nature of the
context, risk prole, and/or stage of the AI lifecycle.
In light of the dynamic nature of AI systems, which can
evolve, dri, or adapt in unpredictable ways, reliance on
static governance tools (such as AIAs) may capture only
a snapshot of an AI system’s operations upfront and be
ineective at identifying potential downstream risks and
necessary mitigations. Rather, continuous monitoring and
auditing of deployed systems by regulatory technologies
that can help automate verication of compliance may be
more appropriate as a complement to human oversight.
47
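As a toy example of such a regulatory technology, the sketch below monitors a deployed model's score distribution for drift using the population stability index (PSI) and flags deviations for human review. The baseline data, live data, and the 0.2 alert threshold (a common industry rule of thumb) are illustrative assumptions, not a regulatory requirement.

# Sketch of one "regulatory technology" primitive: continuous drift
# monitoring of a deployed model's output distribution via the population
# stability index (PSI). Data and the 0.2 threshold are illustrative.
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover the full real line
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    o_frac = np.histogram(observed, cuts)[0] / len(observed)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) in sparse bins
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.40, 0.10, 10_000)  # scores at assessment time
live = rng.normal(0.55, 0.12, 10_000)      # scores after deployment drift
if psi(baseline, live) > 0.2:
    print("Drift detected: escalate to human review and reassess impacts.")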
Given that AI’s technical capabilities are progressing at a
pace that greatly outstrips the ability to govern their harms
through primarily manual risk management processes, the
adaptation of policy frameworks and increased investment
by both the public and private sectors could help incentivize
the development of technologies that can help implement AI
governance at scale more eectively.
48
Ultimately, design specications and implementation tactics
for AIAs and HRIAs will have to be tailored to the complexity,
scale, and scope of the projects they are intended to
assess, including their phase of development. Without
sector-specic guidance, standards, or training of qualied
personnel, the operationalization of AIAs and HRIAs is likely
to face signicant hurdles. In this context, poor outcomes
associated with conducting AIAs or HRIAs for AI, whether
due to their administrative burden or failure to identify and
mitigate risk, should be expected to have negative feedback
eects on their legitimacy. At least part of the solution to
this problem could reside with standards bodies, such as
the IEEE, International Organization for Standardization
(ISO), NIST, and national counterparts, which are beginning
to develop standards and conformity assessments to guide
the responsible development and deployment of AIAs and
related risk management processes. ese so-law tools may
have signicant eects on human rights due diligence in the
context of AI, providing enterprise-level guidance regarding
best practices and clarifying expectations for accountability.
The Role of Standards and Certifications
In parallel with the development of AI principles and
the exploration of regulations, standard development
organizations (SDOs) at both the national and international
levels have been actively working on developing AI standards
and conformity assessments. e standards may provide
helpful guidance on creating and implementing eective
AIAs by oering denitional clarity on how to operationalize
responsible AI principles in practice. Conformity assessments
will be used to verify that a company’s product, service, or
management/governance process meets the normative and/
or technical requirements contained in those standards. As an
additional step, certication schemes are being developed to
enable accredited third-party assessors to certify conformity
with AI standards by issuing a certication “mark” or “label.
As these processes mature, it is likely that certain AI-related
industry standards and conformity assessments will be
incorporated into legislation or regulation as a condition
of compliance. With diverging approaches to AI regulation
being proposed in Europe and elsewhere, the international
harmonization and mutual recognition of AI standards and
conformity assessments will emerge as significant geopolitical issues, critical not only to protection against AI harms but also to the international trade of AI goods and services.
In recognition of the global importance of AI standards, the
IEEE has demonstrated a commitment to the development
of a human rights-driven approach. Its Ethically Aligned
Design report outlines a conceptual framework for
addressing universal human values, data agency, and
technical dependability through a set of principles to guide
developers and users engaged in the design, development,
and deployment of AI systems. Human rights are identified as the first General Principle, with explicit reference to the
international human rights framework and the relevance
of the UNGPs. Additionally, the IEEE has established an
Ethics Certication Program for Autonomous and Intelligent
Systems (ECPAIS). e ECPAIS is currently developing a set of
standards focused on bias, transparency, and accountability.
If a developer implements the ECPAIS standards, it can add
a quality assurance mark to its products and services, which
has the potential to raise consumer trust and market power.[49]
The ISO and the International Electrotechnical Commission (IEC) are advancing a conformity assessment standard for AI risk management through the work of a joint committee on artificial intelligence (ISO/IEC JTC 1/SC 42).[50]
The proposed ISO/IEC 42001 Artificial Intelligence Management System (AIMS) standard will enable organizations to show that they have implemented, and continually work on improving, processes to address bias, fairness, inclusiveness, safety, security, privacy, accountability, applicability, and transparency in AI.
In January 2021, Congress mandated that NIST identify
and provide “standards, guidelines, best practices,
methodologies, procedures, and processes for developing
trustworthy AI systems."[51] Within two years, NIST is required
to develop an AI risk management framework that enables
the assessment of "trustworthy" AI and identification of
appropriate risk mitigation strategies on a voluntary basis in
the public and private sectors.[52]
NIST is to establish common
denitions and characterizations for AI principles, such as
explainability, transparency, and fairness. In June 2021, NIST
issued a dra report dening dierent types of bias and
mitigation strategies—an important rst step in establishing
standards for appropriate oversight and risk mitigation.
53
Given the important role that standards and conformity
assessments are expected to play in supporting compliance
with the proposed EU AI Act, more explicit linkages should be
made between the technical assessments of AI systems and
their potential downstream human rights impacts as these
eorts evolve.
In March 2021, the European Committee for Standardization
(CEN) and the European Committee for Electrotechnical
Standardization (CENELEC) established Joint Technical
Committee 21 on Articial Intelligence (CEN-CLC/JTC 21) to
proceed with the development and adoption of standards for
AI and related data, including international standards already
available or under development from organizations like ISO/
IEC JTC 1 and its subcommittees, such as SC 42 Artificial
Intelligence. CEN-CLC/JTC 21 will focus on producing
standardization deliverables that address “European market
and societal needs, as well as underpinning EU legislation,
policies, principles, and values."[54]
The European Commission issued a report in 2021 outlining
relevant standards that support compliance with its AI
Act, including standards from the IEEE and ISO to guide
appropriate data governance; risk management; technical
data and record keeping; transparency and accountability;
human oversight; accuracy, robustness, and cybersecurity;
and implementation of a quality management system to
ensure compliance with regulation.[55]
As AI standards and conformity assessments mature, the implementation of certification schemes designed to operationalize them is gaining prominence. Certification can be defined as the "attestation that a product, process, person, or organization meets specified criteria."[56]
In AI,
certications are emerging for the technology itself (e.g.,
training data and model attributes), the development
CARR CENTER FOR HUMAN RIGHTS POLICY 11
process (e.g., organizational ethics and risk management
processes), or a combination of both. Certifications can be voluntary or mandatory, self-assessed or third-party assessed. At this stage, self-certifications are the most common, with third-party certifications being proposed for
high-risk applications of AI. In the EU’s AI Act, for example,
developers of “low-risk” applications can perform voluntary
self-assessments, and certain "high-risk" applications are
required to complete mandatory third-party “conformity
assessments." Self-assessments or self-certifications are
widely used in many industries but may lack legitimacy
due to the inherent potential for conflicts of interest and low accountability in the absence of enforcement. Third-party
assessments are more rigorous, but can be extraordinarily
costly and require qualied assessors, which can be dicult
to nd for complex AI systems.
57
The development of software-based assessment and certification methods that automate and streamline regulatory compliance is one way that researchers and industry are exploring to implement AI governance at scale.[58]
While AI certication processes are still at an early stage,
initiatives like the Responsible Articial Intelligence (RAI)
Certication, developed by the Responsible AI Institute
in collaboration with the World Economic Forum, hold
promise.
59
One of the rst independent, accredited
certication programs to emerge, the RAI Certication
seeks to support the implementation of responsibly built
AI systems through an objective third-party review process.
Certication can incentivize implementation of appropriate
risk identication and mitigation strategies. However, there
are signicant challenges to successful implementation, such
as false positives where certication is provided even though
certain criteria have not been met or false negatives where
certication is not provided even though all criteria have
been satised.
Development of appropriate standards and certications will
depend on access to high-quality data about AI operations
in specic contexts.
60
AI monitoring and measurement,
therefore, will be critical to the effectiveness of standards and certifications and essential to protecting human rights
in high-risk contexts and applications. For human rights,
appropriately dening evaluation criteria, assessment, and
verication processes of standards and certications will be
critical. In a eld where concepts of “fair,” “accountable,” and
“trustworthy” AI are still under development, dening and
enforcing appropriate procedures to uphold human rights in AI
is equally muddled. While promising to uncover human rights
risks of AI and whether strategies are in place to mitigate these
risks, use of standards and certications to indicate human
rights due diligence should be cautiously implemented.
Conclusion
Risk-based approaches, including tools such as AIAs, are likely to play a central role in AI governance strategies.
raised by the use of AI systems, closer linkages should be
made between the study and practice of AIAs and lessons
learned from the implementation of HRIAs. In particular,
international human rights law can serve as a helpful
guide for identifying connections between AI systems’
technical features and human rights implications, especially
for vulnerable individuals and communities. Significant
work remains to develop best practices for successful
implementation of AIAs and HRIAs for AI, including
considerations related to how AIAs should integrate features
of HRIAs and their appropriate scope, structure, scalability,
timing, and administrative burden. In this respect, however,
the emergence of common approaches and methodologies
for AIAs and HRIAs for AI will be aided by the development
of harmonized technical standards, conformity assessments,
and certication schemes, as well as guidance for their
implementation in a variety of contexts.
Acknowledgment
This report was supported in part and reviewed by Access
Now, a global human rights organization. We also want to
thank the following individuals for their thoughtful review:
Kathy Baxter, Ashley Casovan, Danielle Cass, Peter Cihon,
Camille Crittenden, Mark Latonero, Marc-Étienne Ouimette,
Ed Santow, and Craig Shank.
Notes

1. Artificial intelligence (AI) refers to a computer system capable of performing tasks that require human-level intelligence, such as decision-making, visual perception, and speech recognition. Methods for doing so are wide ranging and vary significantly in complexity, including algorithms, predictive models, computer vision, deep learning, machine learning, natural language processing, neural nets, and more.

2. Bryan Casey, Ashkon Farhangi, and Roland Vogl, "Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise," Berkeley Technology Law Journal 34, no. 1 (2019): 143–188; Timnit Gebru et al., "Datasheets for Datasets," arXiv 1803.09010 (2018), https://arxiv.org/abs/1803.09010; Margaret Mitchell et al., "Model Cards for Model Reporting," Proceedings of the Conference on Fairness, Accountability, and Transparency (2019): 220–29, http://doi.org/10.1145/3287560.3287596; Emanuel Moss et al., "Governing with Algorithmic Impact Assessments: Six Observations," AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES) (2020), https://dx.doi.org/10.2139/ssrn.3584818.

3. Kent Walker and Jeff Dean, "An Update on Our Work on AI and Responsible Innovation," Google, July 9, 2021, https://blog.google/technology/ai/update-work-ai-responsible-innovation.

4. Andrew D. Selbst, "Negligence and AI's Human Users," Boston University Law Review 100 (2020): 1315–76, https://www.bu.edu/bulawreview/files/2020/09/SELBST.pdf.

5. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 (April 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206.

6. US Congress, Senate, Algorithmic Accountability Act of 2019, S 1108, 116th Cong., introduced in Senate April 10, 2019, https://www.congress.gov/bill/116th-congress/senate-bill/1108; Grace Dill, "Sen. Wyden to Reintroduce AI Bias Bill in Coming Months," MeriTalk, February 19, 2021, https://www.meritalk.com/articles/sen-wyden-to-reintroduce-ai-bias-bill-in-coming-months/.

7. US Congress, House, Committee on Appropriations, Commerce, Justice, Science and Related Agencies Appropriations Bill, 2021 - Report Together With Minority Views, 116th Cong., 2d sess., 2020, H. Rep. 116-455, 23, https://appropriations.house.gov/sites/democrats.appropriations.house.gov/files/July%209th%20report%20for%20circulation_0.pdf.

8. US National Security Commission on Artificial Intelligence, Final Report (2021), 395, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

9. Government of Canada, Directive on Automated Decision-Making (April 1, 2021), https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592&section=html.

10. Fanny Hidvegi, Daniel Leufer, and Estelle Massé, "The EU Should Regulate AI on the Basis of Rights, Not Risks," Access Now, February 17, 2021, https://www.accessnow.org/eu-regulation-ai-risk-based-approach/.

11. "AI Ethics Guidelines Global Inventory," Algorithm Watch, accessed June 20, 2021, https://inventory.algorithmwatch.org/.

12. Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center for Internet & Society, 2020, http://nrs.harvard.edu/urn-3:HUL.InstRepos:42160420.

13. Algorithm Watch, "AI Ethics Guidelines Global Inventory."

14. Mark Latonero, AI Principle Proliferation as a Crisis of Legitimacy, Carr Center for Human Rights Policy, 2020, https://carrcenter.hks.harvard.edu/files/cchr/files/mark_latonero_ai_principles_6.pdf.

15. Mark Latonero, Governing Artificial Intelligence: Upholding Human Rights & Dignity, Data & Society, 2018, https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf; Alessandro Mantelero and Samantha Esposito, "An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems," Computer Law & Security Review 41 (July 2021), https://doi.org/10.1016/j.clsr.2021.105561; Eileen Donahoe and Megan MacDuffee Metzger, "Artificial Intelligence and Human Rights," Journal of Democracy 30, no. 2 (2019): 115–126.

16. The Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, 1st ed., IEEE, 2019, https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html.

17. "Asilomar AI Principles," Future of Life Institute, 2017, https://futureoflife.org/ai-principles/.

18. Organisation for Economic Co-operation and Development, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449 (May 21, 2019), https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

19. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.

20. "Advancing Trustworthy AI," National AI Initiative Office, 2021, https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Metrics-Assessment-Tools-and-Technical-Standards-for-AI.

21. Government of Canada, Directive on Automated Decision-Making.

22. "AI Ethics," Salesforce, accessed August 22, 2021, https://einstein.ai/ethics.

23. Microsoft, Human Rights Annual Report, 2018, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE2FMZY; "Intel Human Rights Impact Assessment," Article One Advisors, accessed August 22, 2021, https://www.articleoneadvisors.com/intel-hria.

24. Mantelero and Esposito, "An evidence-based methodology"; Donahoe and MacDuffee Metzger, "Artificial Intelligence and Human Rights"; Charles Bradley, Richard Wingfield, and Megan Metzger, "National Artificial Intelligence Strategies and Human Rights: A Review. Second Edition," Global Partners Digital and Stanford Global Digital Policy Incubator (April 2021): 1–70.

25. Donahoe and MacDuffee Metzger, "Artificial Intelligence and Human Rights."

26. UN General Assembly, 183rd Plenary Meeting, Resolution 217A, A Universal Declaration of Human Rights, A/RES/217 (December 10, 1948), https://www.un.org/en/ga/search/view_doc.asp?symbol=A/RES/217(III); "The Core International Human Rights Instruments and their monitoring bodies," United Nations Human Rights Office of the High Commissioner (OHCHR), 2021, https://www.ohchr.org/en/professionalinterest/pages/coreinstruments.aspx; OHCHR, Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework, June 16, 2011, https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf; Latonero, Governing Artificial Intelligence.

27. Mantelero and Esposito, "An evidence-based methodology."

28. Australian Human Rights Commission, Human Rights and Technology Final Report (2021), https://tech.humanrights.gov.au/downloads.

29. Australian Human Rights Commission, Human Rights and Technology Final Report (2021); Council of Europe Ad Hoc Committee on Artificial Intelligence Policy Development Group, Human Rights, Democracy and Rule of Law Impact Assessment of AI Systems, CAHAI-PDG(2021)05, May 21, 2021, https://rm.coe.int/cahai-pdg-2021-05-2768-0229-3507-v-1/1680a291a3.

30. Jacob Metcalf et al., "Algorithmic Impact Assessments and Accountability: The Co-Construction of Impacts," Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021): 735–46.

31. Emanuel Moss et al., Assembling Accountability: Algorithmic Impact Assessment for the Public Interest, Data & Society, June 29, 2021, https://datasociety.net/library/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest/.

32. Moss et al., Assembling Accountability, 28.

33. Moss et al., Assembling Accountability, 26.

34. Andrew D. Selbst, "An Institutional View of Algorithmic Impact Assessments," Harvard Journal of Law and Technology 35 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3867634.

35. Mark Latonero and Aaina Agarwal, Human Rights Impact Assessments for AI: Learning from Facebook's Failure in Myanmar, Carr Center for Human Rights Policy, 2021, https://carrcenter.hks.harvard.edu/files/cchr/files/210318-facebook-failure-in-myanmar.pdf.

36. "Microsoft Global Human Rights Statement," Microsoft Corporation, last modified December 11, 2020, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4JIiU; "An Independent Assessment of the Human Rights Impact of Facebook in Myanmar," Facebook, last modified November 5, 2018, https://about.fb.com/news/2018/11/myanmar-hria/.

37. Latonero and Agarwal, Human Rights Impact Assessments for AI.

38. Government of Canada, Directive on Automated Decision-Making.

39. Government of Canada, Directive on Automated Decision-Making, 35.

40. Government of Canada, Directive on Automated Decision-Making, 35.

41. Tom Cardoso and Bill Curry, "National Defence skirted federal rules in using artificial intelligence, privacy commissioner says," The Globe and Mail, last modified February 8, 2021, https://www.theglobeandmail.com/canada/article-national-defence-skirted-federal-rules-in-using-artificial/; "Open Government Portal," Government of Canada, accessed September 7, 2021, https://search.open.canada.ca/en/od/?search_text=AIA.

42. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.

43. Metcalf et al., "Algorithmic Impact Assessments and Accountability."

44. Metcalf et al., "Algorithmic Impact Assessments and Accountability," 41.

45. Latonero and Agarwal, Human Rights Impact Assessments for AI.

46. Mantelero and Esposito, "An evidence-based methodology."

47. Gillian Hadfield, Rules for a Flat World (Oxford: Oxford University Press, 2016); Jack Clark and Gillian K. Hadfield, "Regulatory Markets for AI Safety," arXiv 2001.00078 (2019), https://arxiv.org/abs/2001.00078.

48. Daniel Zhang et al., The AI Index 2021 Annual Report, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, March 2021, https://aiindex.stanford.edu/report/.

49. "The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)," IEEE, accessed August 22, 2021, https://standards.ieee.org/industry-connections/ecpais.html.

50. "Standards by ISO/IEC JTC 1/SC 42 Artificial Intelligence," International Organization for Standardization, accessed August 2021, https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0.

51. NIST was assigned the task of creating an AI risk management framework in the National Artificial Intelligence Initiative Act of 2020 (the AI Act), which was included in the 2021 National Defense Authorization Act. See US Congress, House, William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, HR 6395, 116th Cong., introduced in House March 26, 2020, https://www.congress.gov/bill/116th-congress/house-bill/6395/text.

52. US Congress, House, National Defense Authorization Act, 49.

53. Reva Schwartz et al., A Proposal for Identifying and Managing Bias in Artificial Intelligence, National Institute of Standards and Technology (NIST), June 2021, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270-draft.pdf.

54. Other national standards organizations are undertaking similar efforts. In Canada, the national counterpart to NIST and CEN-CENELEC recently received additional funding from the Canadian government to advance the development and adoption of AI standards, including risk management standards and conformity assessment schemes for AI.

55. Stefano Nativi and Sarah De Nigris, AI Standardisation Landscape: State of Play and Link to the EC Proposal for an AI Regulatory Framework, EUR 30772 EN, Publications Office of the European Union, 2021, https://doi.org/10.2760/376602.

56. Peter Cihon et al., "AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries," IEEE Transactions on Technology and Society (2021), https://doi.org/10.1109/TTS.2021.3077595.

57. Cihon et al., "AI Certification," 54.

58. Gillian K. Hadfield, "Regulatory technologies can solve the problem of AI," Schwartz Reisman Institute for Technology and Society, University of Toronto, April 19, 2021, https://srinstitute.utoronto.ca/news/hadfield-torstar-ai-debate. See also Hadfield, Rules for a Flat World; Clark and Hadfield, "Regulatory Markets for AI Safety."

59. "RAI Certification Beta," Responsible Artificial Intelligence Institute, accessed September 1, 2021, https://www.responsible.ai/certification.

60. Jess Whittlestone and Jack Clark, "Why and How Governments Should Monitor AI Development," arXiv 2108.12427 (2021), https://arxiv.org/abs/2108.12427.
Statements and views expressed in this report are solely
those of the authors and do not imply endorsement by Harvard
University, the Harvard Kennedy School, or the Carr Center for
Human Rights Policy.
Copyright 2021, President and Fellows of Harvard College
Printed in the United States of America
Carr Center Discussion Paper Series
Carr Center for Human Rights Policy
Harvard Kennedy School
79 JFK Street
Cambridge, MA 02138
617.495.5819
https://carrcenter.hks.harvard.edu
This publication was published by the Carr Center for Human Rights Policy at the John F. Kennedy School of Government at Harvard University.