CARR CENTER FOR HUMAN RIGHTS POLICY 9
The ex ante HRIA conducted on Alphabet-affiliate Sidewalk
Labs's "smart-city" project in the City of Toronto represents
one potential example of this approach. More than 50
proposed digital solutions, including some anticipated
to leverage AI, were assessed prior to the confirmation
of the project. While the final report of this HRIA was
never publicly released, the exercise, which included
extensive consultation with subject matter experts and
local stakeholders, rapidly accelerated and strengthened
existing human rights-based governance efforts related to
the project. However, as Mantelero and Esposito (2021)
point out, while labor-intensive HRIAs that involve
extensive research and field work, including consultations
with local stakeholders and subject matter experts, may be
desirable in complex multi-factor scenarios (e.g., large smart-
city projects), they are likely too burdensome and costly to
serve as appropriate models for smaller-scale projects.⁴⁶
Consideration should be given to developing light-touch
HRIAs, with methodologies calibrated to the nature of the
context, risk profile, and/or stage of the AI lifecycle.
In light of the dynamic nature of AI systems, which can
evolve, drift, or adapt in unpredictable ways, reliance on
static governance tools (such as AIAs) may capture only
a snapshot of an AI system's operations upfront and be
ineffective at identifying potential downstream risks and
necessary mitigations. Instead, continuous monitoring and
auditing of deployed systems by regulatory technologies
that can help automate verification of compliance may be
more appropriate as a complement to human oversight.⁴⁷
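To make the idea of automated compliance monitoring concrete, the following sketch (all names, metrics, and thresholds are hypothetical, not drawn from any specific regulatory technology) compares a deployed model's recent prediction distribution against a reference baseline and flags drift for escalation to human review:

```python
# Minimal sketch of a recurring drift check on a deployed model's outputs.
# Threshold and label names are illustrative assumptions only.
from collections import Counter
import math

def distribution(labels):
    """Relative frequency of each predicted label."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the union of labels; eps avoids log(0)."""
    labels = set(p) | set(q)
    return sum(
        p.get(l, eps) * math.log(p.get(l, eps) / q.get(l, eps))
        for l in labels
    )

def drift_check(baseline_preds, recent_preds, threshold=0.1):
    """Return (drifted, score): True means escalate to human oversight."""
    score = kl_divergence(distribution(recent_preds),
                          distribution(baseline_preds))
    return score > threshold, score

# Example: the deployed system begins denying far more applications
# than it did at the time of its initial (static) assessment.
baseline = ["approve"] * 80 + ["deny"] * 20
recent = ["approve"] * 50 + ["deny"] * 50
drifted, score = drift_check(baseline, recent)
```

A static, one-time assessment would have seen only the baseline behavior; a recurring check like this is what makes the downstream shift visible at all, while the final decision to intervene remains with a human reviewer.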
Given that AI's technical capabilities are progressing at a
pace that greatly outstrips the ability to govern the resulting
harms through primarily manual risk management processes,
the adaptation of policy frameworks and increased investment
by both the public and private sectors could help incentivize
the development of technologies that implement AI
governance more effectively at scale.⁴⁸
Ultimately, design specifications and implementation tactics
for AIAs and HRIAs will have to be tailored to the complexity,
scale, and scope of the projects they are intended to
assess, including their phase of development. Without
sector-specific guidance, standards, or training of qualified
personnel, the operationalization of AIAs and HRIAs is likely
to face significant hurdles. In this context, poor outcomes
associated with conducting AIAs or HRIAs for AI, whether
due to their administrative burden or failure to identify and
mitigate risk, should be expected to undermine their
legitimacy. At least part of the solution to this problem
could reside with standards bodies, such as the IEEE, the
International Organization for Standardization (ISO), NIST,
and their national counterparts, which are beginning to
develop standards and conformity assessments to guide
the responsible development and deployment of AIAs and
related risk management processes. These soft-law tools may
have significant effects on human rights due diligence in the
context of AI, providing enterprise-level guidance regarding
best practices and clarifying expectations for accountability.

⁴⁶ Mantelero and Esposito, "An evidence-based methodology."
⁴⁷ Gillian Hadfield, Rules for a Flat World (Oxford: Oxford University Press, 2016); Jack Clark and Gillian K. Hadfield, "Regulatory Markets for AI Safety," arXiv 2001.00078 (2019), https://arxiv.org/abs/2001.00078.
⁴⁸ Daniel Zhang et al., The AI Index 2021 Annual Report, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, March 2021, https://aiindex.stanford.edu/report/.
The Role of Standards and Certifications
In parallel with the development of AI principles and
the exploration of regulations, standards development
organizations (SDOs) at both the national and international
levels have been actively working on AI standards and
conformity assessments. These standards may provide
helpful guidance on creating and implementing effective
AIAs by offering definitional clarity on how to operationalize
responsible AI principles in practice. Conformity assessments
will be used to verify that a company's product, service, or
management/governance process meets the normative and/
or technical requirements contained in those standards. As an
additional step, certification schemes are being developed to
enable accredited third-party assessors to certify conformity
with AI standards by issuing a certification "mark" or "label."
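As an illustration of how such a conformity assessment could be made machine-readable, the sketch below (requirement IDs and criteria are invented for illustration, not taken from any actual standard) checks collected evidence against a checklist of requirements and withholds the certification "mark" while any gap remains:

```python
# Hypothetical sketch: a conformity assessment as a machine-readable
# checklist. Requirement IDs and criteria are illustrative assumptions,
# not drawn from any real standard or certification scheme.

REQUIREMENTS = {
    "risk-mgmt-process": "A documented AI risk management process exists",
    "human-oversight": "Human oversight measures are defined for high-risk use",
    "impact-assessment": "An AIA/HRIA was completed before deployment",
}

def assess_conformity(evidence):
    """Return (conforms, gaps): evidence maps requirement IDs to booleans."""
    gaps = [rid for rid in REQUIREMENTS if not evidence.get(rid, False)]
    return len(gaps) == 0, gaps

# Evidence an assessor might have collected for one product under review.
evidence = {
    "risk-mgmt-process": True,
    "human-oversight": True,
    "impact-assessment": False,  # assessment still pending
}
conforms, gaps = assess_conformity(evidence)
# A certification "mark" would be withheld until the gaps are closed.
```

The point of the sketch is structural: a standard supplies the requirement list, the conformity assessment supplies the evidence check, and certification is the binary outcome an accredited third party attests to.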
As these processes mature, it is likely that certain AI-related
industry standards and conformity assessments will be
incorporated into legislation or regulation as a condition
of compliance. With diverging approaches to AI regulation
being proposed in Europe and elsewhere, the international
harmonization and mutual recognition of AI standards and
conformity assessments will emerge as significant geopolitical
issues, critical not only to protection against AI harms
but also to the international trade of AI goods and services.
In recognition of the global importance of AI standards, the
IEEE has demonstrated a commitment to the development
of a human rights-driven approach. Its Ethically Aligned
Design report outlines a conceptual framework for
addressing universal human values, data agency, and
technical dependability through a set of principles to guide
developers and users engaged in the design, development,
and deployment of AI systems. Human rights are identified
as the first General Principle, with explicit reference to the