Authors

Peter Schildkraut
Arnold & Porter Kaye Scholer LLP.
Peter Schildkraut is a co-leader of the firm’s Technology, Media & Telecommunications industry team and provides strategic counsel on artificial intelligence, spectrum use, broadband, and other TMT regulatory matters. Mr. Schildkraut helps clients navigate the ever-changing opportunities and challenges of technology, policy, and law to achieve their business objectives at the US Federal Communications Commission (FCC) and elsewhere. He is the author of “AI Regulation: What You Need To Know To Stay Ahead of the Curve.”
James Kim
Arnold & Porter Kaye Scholer LLP.
James W. Kim is a nationally recognized expert in procurement law who regularly advises companies that do business with the US government, with a focus on professional services organizations and the life sciences industry. He is a regular speaker and author on procurement and drug pricing matters, and his work is regularly featured in nationally distributed industry print and digital media.

Mr. Kim provides clients with strategic counsel related to US government funding and US market access, including assistance with more than $5 billion in procurement and grant awards and regulatory counsel related to more than $40 billion in successful M&A transactions.

Marne Marotta
Arnold & Porter Kaye Scholer LLP.
Marne Marotta works with clients facing complex challenges to develop and implement dynamic government relations strategies. Drawing from her experience in the Senate and the executive branch, she provides clients with strategic guidance and counseling, devises and implements comprehensive advocacy campaigns, and builds coalitions with allied stakeholders. Focused on the intersection between business and public policy, Marne uses a multidisciplinary approach to help clients achieve their legislative and agency goals.
James Courtney, Jr.
Arnold & Porter Kaye Scholer LLP.
James Courtney focuses his work on a variety of policy areas, including technology, national security, education, and energy and environmental policy. He conducts research and monitors developing policy issues to aid clients and engage with members of Congress and the Executive Branch. Mr. Courtney works closely with and advises clients on a wide range of regulatory and legislative issues related to technology, privacy, education, workforce development, and energy.
Paul Waters
Arnold & Porter Kaye Scholer LLP.
Paul Waters focuses on a variety of policy areas, including financial services, tax, digital asset regulation, technology, and defense. He monitors policy developments and analyzes legislation to support client strategy development and stakeholder outreach in Congress and the Executive Branch.
First Published in
The Journal of Robotics, Artificial Intelligence & Law
The Journal of Robotics, Artificial Intelligence & Law (RAIL) is the flagship publication of Full Court Press, an imprint of Fastcase. Since 1999, Fastcase has democratized the law and made legal research smarter. Now, Fastcase is proud to publish books and journals that are pioneering, topical, and visionary, written by the law’s leading subject matter experts. Look for more Full Court Press titles available in print, as eBooks, and in the Fastcase legal research service, or at www.fastcase.com/fullcourtpress.

Blueprint for an “Artificial Intelligence Bill of Rights”

Abstract: In this article, the authors discuss the blueprint for an “AI Bill of Rights” unveiled recently by the Biden administration. The blueprint provides a clear indication of the Biden administration’s artificial intelligence regulatory policy goals. This article was first published in The Journal of Robotics, Artificial Intelligence & Law by Fastcase Full Court Press.

More and more, artificial intelligence (AI) and other automated systems make decisions affecting our lives and economy. These systems are not broadly regulated in the United States—although that will change this year in several states. President Biden recently unveiled a blueprint for an “AI Bill of Rights,” motivated by concerns about potential harms from automated decision-making. Arising from an initiative the White House Office of Science and Technology Policy (OSTP) launched in 2021, the AI Bill of Rights lays out five principles to foster policies and practices—and automated systems—that protect civil rights and promote democratic values.

For now, at least, adherence to these principles (and the steps recommended for observing them) remains voluntary—the blueprint is a guidance document with no enforcement authority attached to it. Notably, at inception, OSTP was unsure how the AI Bill of Rights might be enforced:

Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this “bill of rights” or adopting new laws and regulations to fill gaps. States might choose to adopt similar practices.

The Biden administration decided to publish a nonbinding white paper, potentially recognizing the difficulty of shepherding legislation through any potential 118th Congress. Indeed, the document’s first page proclaims that it “is non-binding and does not constitute U.S. government policy.” Nor does it “constitute binding guidance for the public or federal agencies and therefore does not require compliance with the principles described herein.” Notwithstanding this disclaimer, the blueprint provides a clear indication of the Biden administration’s AI regulatory policy goals.

Both the Executive Branch and independent agencies are likely to follow this lead in their respective domains.

Issues of Definition

In the debate over the European Union’s pending Artificial Intelligence Act, the definition of “artificial intelligence” has attracted much discussion. OSTP sidesteps this issue in the blueprint by addressing “automated systems,” which are defined as “any system, software or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” OSTP adds, “Automated systems include, but are not limited to, systems derived from machine learning, statistics or other data processing or AI techniques, and exclude passive computing infrastructure,” which OSTP also defines.

The blueprint’s coverage of “automated systems” instead of “artificial intelligence” offers businesses a mixed bag. On the one hand, the broader scope aligns with the regulation of automated decision-making under California, Colorado, Connecticut, and Virginia privacy laws and New York City’s law on automated employment decision tools, all taking effect this year, as well as Article 22 of the EU/UK General Data Protection Regulation.

On the other hand, it potentially threatens international harmonization of regulations based on the seemingly narrower scopes of the UNESCO Recommendation on the Ethics of Artificial Intelligence and the OECD AI Principles (also shared by the G20).

Much of the blueprint concerns protection of “rights, opportunities or access.” OSTP explains this phrase as “the set of: civil rights, civil liberties and privacy, including”:

• “freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts”;

• “equal opportunities, including equitable access to education, housing, credit, employment, and other programs”; or

• “access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.”

This explanation’s expansiveness underscores the Biden administration’s stated intent that the blueprint apply to automated systems affecting any facet of society or the economy.

Guiding Principles

The blueprint outlines five principles for all automated systems with the potential to “meaningfully impact individuals’ or communities’ exercise of rights, opportunities or access”:
• Safe and Effective Systems. Automated systems should be safe and effective. They should be evaluated independently and monitored regularly to identify and mitigate risks to safety and effectiveness. Results of evaluations, including how potential harms are being mitigated, should be “made public whenever possible.”
• Algorithmic Discrimination Protections. Automated systems should not “contribute to unjustified different treatment” or impacts that disfavor members of protected classes. Designers, developers, and deployers should include proactive equity assessments in their design processes, use representative data sets, watch for proxies for protected characteristics, ensure accessibility for people with disabilities, and test for and mitigate disparities throughout the system’s life cycle.
• Data Privacy. Individuals should be protected from abusive data practices and have control over their data. Privacy engineering should be used to ensure automated systems include privacy by default. Automated systems’ design, development, and use should respect individuals’ expectations about their data and the principle of data minimization, collecting only data strictly necessary for the specific context. OSTP stresses that consent should be used only where it can be appropriately and meaningfully provided, limited to specific use contexts and unconstrained by dark patterns; moreover, notice and requests for consent should be brief and understandable in plain language. Certain sensitive data (including data related to work, home, education, health, and finance) should be subject to additional privacy protection, including ethical review and use prohibitions.
• Notice and Explanation. Operators of automated systems should inform people affected by their outputs when, how, and why the system affected them. This principle applies even “when the automated system is not the sole input determining the outcome.” Notices and explanations should be clear and timely and use plain language.
• Human Alternatives, Consideration, and Fallback. People should be able to opt out of decision-making by automated systems in favor of a human alternative, where appropriate. Automated decisions should be appealable to humans.

The blueprint also includes a “Technical Companion” that details “concrete steps” for building these five principles into “policy, practice or the technological design process.” Organizations developing, procuring, and deploying AI and other automated systems will find these concrete steps to be generally consistent with other guidance on best practices.

What Next from the U.S. Government?

Having drawn up the blueprint, the Biden administration is ready to build out its AI policies through guidance, rulemaking, and enforcement. This work is already under way.

Thus far, guidance—both for ethical best practices and compliance with existing laws—has been most common. For instance:
• Department of Energy AI Advancement Council. In May 2022, the Department of Energy established the AI Advancement Council to oversee coordination, advise on AI strategy, and address issues on the ethical use and development of AI systems.
• Algorithmic Discrimination in Hiring. In May 2022, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice released a technical assistance document that explains how employers’ use of algorithmic decision-making may violate the Americans with Disabilities Act. EEOC’s guidance is a part of its larger initiative to ensure that AI and “other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces.”
• Consumer Protection. In May 2021, the Federal Trade Commission (FTC) published a blog post providing tips for responsible use of AI in compliance with Section 5 of the Federal Trade Commission Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act.

Increasingly, however, the Executive Branch and independent agencies have been shifting to rulemaking and enforcement:
• Broad AI Regulation. In August 2022, the FTC opened its “commercial surveillance” proceeding, which could lead to a wide range of rules on AI and other automated systems (as well as privacy and data security). The FTC’s Advance Notice of Proposed Rulemaking asks a number of questions about algorithmic accuracy, validity, reliability, and error; algorithmic discrimination against traditionally protected classes and “other underserved groups”; and whether AI and other automated systems yield unfair methods of competition or unfair or deceptive acts or practices that violate Section 5 of the FTC Act.
• Workplace Protections. The Department of Labor is ramping up enforcement of required surveillance reporting to protect worker organizing. The Department of Labor also released a blog post titled “What the Blueprint for an AI Bill of Rights Means for Workers.”
• Algorithmic Healthcare Discrimination. The Department of Health and Human Services (HHS) issued a proposed rule in August 2022 that, in relevant part, would prohibit algorithmic discrimination in clinical decision-making by covered health programs and activities. HHS also planned to release an evidence-based examination of healthcare algorithms and racial and ethnic disparities by late 2022.
• Algorithmic Housing Discrimination. In June 2022, Meta (formerly, Facebook) settled a Justice Department Fair Housing Act suit (following a Department of Housing and Urban Development investigation). The government alleged that Meta had used algorithms in determining which Facebook users received housing ads and that those algorithms relied, in part, on characteristics protected under the Fair Housing Act. As part of the settlement, Meta agreed to change its targeted advertising practices and to pay the maximum civil penalty of $115,054.
• Algorithmic Credit Discrimination. In March 2022, the Interagency Task Force on Property Appraisal and Valuation Equity released an Action Plan to Advance Property Appraisal and Valuation Equity that includes a commitment from regulators to include a nondiscrimination standard in proposed rules for automated valuation models. Also that month, the Consumer Financial Protection Bureau revised its Supervision and Examination Manual to focus on algorithmic discrimination as a prohibited unfair, deceptive, or abusive act or practice.

Businesses should expect the blueprint to inform all such agency actions going forward. It is likely that these agencies will expand their AI initiatives while other agencies become active addressing AI and other automated systems within their ambits.

The Chamber of Commerce’s Concerns

Following the blueprint’s release, the U.S. Chamber of Commerce (the Chamber) wrote OSTP Director Dr. Arati Prabhakar, highlighting a number of concerns:
• Lack of Stakeholder Engagement. OSTP received insufficient stakeholder input in formulating the blueprint, having sought comments only on biometric-identification systems.
• Poor Definitions. The blueprint supplies definitions of key terms, including “Automated System,” which lack precision and could undercut international harmonization of AI policies and standards.
• Independent Evaluations. The current lack of “concrete” auditing standards and metrics for AI systems makes it “pointless” to allow journalists, third-party auditors, and other independent evaluators “unfiltered access” to AI systems—as called for in the blueprint.
• Conflation of Data Privacy and Artificial Intelligence. Data privacy and AI raise “distinctly different” “nuances and complexities,” so the two issues should not be conflated.

The Chamber’s “unexpectedly forceful pushback” (to quote Politico’s Brendan Bordelon) against a supposedly nonbinding guidance document reflects the blueprint’s potential influence. In an interview, a representative said the Chamber expects dozens of federal agencies to incorporate the guidance into regulatory mandates and fears “copycats at the state and local level.” A patchwork of differing requirements could impose a substantial burden on businesses.

Having released the Blueprint for an AI Bill of Rights with great fanfare, the Biden administration is unlikely to withdraw it in response to the Chamber’s critique. However, the critique probably does foreshadow coming battles in rulemaking dockets and legislative chambers around the country.

Conclusion

AI regulation is arriving swiftly. Businesses should monitor these changes and prepare their compliance programs. Companies with particular concerns may wish to raise them early in legislative and rulemaking processes while proposals remain fluid.