Congress is finally holding hearings on how to regulate artificial intelligence, just as the founders of OpenAI have called for the equivalent of an International Atomic Energy Agency to vet AI efforts for potential harm.

The government will be playing catch-up for some time to come, not only as AI progresses technically but also as it begins to display unanticipated behavior, such as seeking to emotionally manipulate human beings, as New York Times columnist Kevin Roose discovered to his chagrin. The EU, the U.K. and China are much farther down this policymaking road than the United States.

While there are many areas of regulation to be addressed, one of the most pressing is decision-making by AI, described by the acronym AIDM. This involves decisions concerning consumer loans or government benefits, decisions concerning medical diagnoses and, already on the horizon, decisions about legal guilt and punishment. Some AIDM merely assists human decision-makers, but in other cases the decision-maker is the AIDM system itself. Not only is there documented evidence of biases in training corpora affecting AI decisions, but AI has also been shown to be flat-out wrong in many troubling cases, and has even asserted wrongdoing by individuals who are completely innocent, such as Jim Buckley, who was falsely identified as the perpetrator of a 1992 bombing.

The very first step in regulating AI decision-making is to establish fundamental principles. I suggest three: the right to know one is interacting with an AIDM system, the right to appeal any decision made by such a system, and the right to litigate harm resulting from a decision made by such a system. If human beings hold these foundational rights, then effective governance safeguarding human rights can be constructed.

The right to know

Every human has the right to know when they are engaging with an AI system. Beyond simple notification that they are encountering an AI system, individuals should have unfettered access to a standardized identification label with the contact information for the party having a fiduciary obligation for the performance of the system.

Proposed EU guidelines, for example, state that “AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system.” Identification is also important for rapid de-deployment in case of serious violations — for example, in domains where AIDM systems cannot be legally used, such as in decisions about whether to use lethal force. Furthermore, without the human right to know, the rights of appeal and of harm-based litigation become moot.

This demands both central record-keeping and situational notification. Before an AIDM system can be deployed, it should be registered with the government. That is, the government must be notified that the system exists and who its creator and deployer are, and it must either archive the code itself or determine that the code has been adequately archived.

And simply having the identity tag available may not be enough. It may be necessary for certain AIDM systems to identify up front that they are not human. For example, the state of Illinois has passed an Artificial Intelligence Video Interview Act, which requires that job applicants be informed in writing that an AI system is being used as part of a video interview, along with an explanation of what information the AI system will be examining, and how the system evaluates that information. This is a good model for national regulation.

It is worth noting in passing that the right to know implies there may also be a concomitant right to refuse to interact with a nonhuman system. The EU notes, “The option to decide against this (AI) interaction in favour of human interaction should be provided where needed to ensure compliance with fundamental rights.” Policymakers should consider such a right.

The right of appeal

The second general principle to be used in regulating AIDM is the right of appeal: Every human should have the right to appeal an AIDM decision to a human being, not another AIDM system. As the IEEE asserts, “Individuals should be provided a forum to make a case for extenuating circumstances that the A/IS may not appreciate — in other words, a recourse to a human appeal.” The right to appeal to a human being is the means by which the subordination of AIDM to humankind is effected. 

There is a profound need for human beings to be able to appeal to another human being in situations of distress. Furthermore, this human being must be an official representative of the entity making the decision, empowered to change the decision made by the AIDM system. Though humans are by no means foolproof, human beings can more readily see when an algorithm has veered from its intended purpose in the outcomes it generates. Successful appeals should catalyze an audit of the algorithm in question.

Consider that in January 2020, a facial recognition algorithm used by the police department in Detroit, Michigan, led officers to arrest Robert Julian-Borchak Williams for larceny. Surveillance camera footage from the store in question was fed into an algorithm provided by DataWorks Plus, and the algorithm attempted to match the footage to driver’s license photos, ultimately deciding Williams might be the culprit. He was arrested and handcuffed in front of his family, then brought to the station. 

What happened next is worth reflection: after seeing the surveillance video next to Williams’ face, the two detectives leaned back in their chairs and looked at one another. One detective, seeming chagrined, said to his partner: “I guess the computer got it wrong.”

Note that the humans involved recognized the algorithm was dead wrong, and they recognized it in a nanosecond. Indeed, the human mind excels at holding in mind both the big picture and the minute details that comprise it. In addition, the human stakes, such as false arrest, are (hopefully) meaningful to other human beings in a way they are not to AIDM systems. Appeal to a human being is thus a fundamental right when humans face AIDM systems.


The right to appeal, with the potential of overturning an AIDM decision, requires explicability of the decision made. Laws should grant individuals the right to ask for an explanation when a solely automated decision (e.g., refusal of an online credit application or employment application) is being made about them. Legally extending such a right could provide a mechanism to increase the transparency and accountability of AIDM, and should therefore be seriously considered.

What counts as a sufficient explanation is important in a nascent regulatory system. We have seen in other areas of human endeavor, such as human subject research, the necessity of setting standards for informed consent, which include mandated elements such as explanations of purposes, risks and benefits of the research, using language that is easy to understand, and with contact information if the human subject has questions or concerns. An effort to develop similar standards for AIDM will be needed.

The right to appeal to a human empowered to overturn or modify a decision made by an AIDM also entails actually building that capability. That is, the construction of an appeal interface to the AIDM system and the training of individuals to take on the role of “appellate judges” in the process will be important tasks for which the deployer of the system will have accountability. While deployers may not be responsible for harms caused by the AIDM system if they have performed due diligence, they may be held responsible for harms caused by a wholly inadequate human appeals process.

The right to litigate

The third general principle is the right to litigate harm. Legal liability should rest with the vendor of the AIDM system that sold the product. Vendors are the most intimately knowledgeable about the system, and most capable of preventing or reducing harm their systems might cause. 

Obligations for due diligence on the part of the vendor will necessarily involve pre-deployment testing and auditing. The vendor of an AIDM system should, as a first step toward fulfilling its responsibilities, ensure the system is lawful in every aspect from data collection to decision implementation. Furthermore, the vendor must comply with government regulations, such as the requirements for identity tags, pre-deployment registration and nondiscrimination.

But to be litigable, the concept of a “tort” must be expanded under U.S. law to include not only conventionally understood harms (a self-driving car runs over a pedestrian, for example), but also the unique types of harm produced by AIDM, such as unjustifiable decisions or false accusations. In addition, the government itself may be the offending party, as in cases of the denial of government benefits or federal arrests; blanket government immunity under tort law would be a nightmare for citizens and would therefore require amendment.

Building an ecosystem

With laws enshrining these three general principles in place, the next order of business is legislation to establish a federal oversight agency with specific powers over the deployment of AIDM systems. The agency could be either standalone or, more likely, subsumed under the Federal Trade Commission’s mandate.

This new agency of the U.S. government would have several functions. First, it would establish the government regulations under which AIDM is legal to deploy, and determine when it would be illegal to do so. Second, it would register AIDM systems created by vendors/developers at the point where the system is to be deployed by the developer or a purchaser. After checking that the vendor/developer has performed both internal and third-party testing of the system for validity and legality, has created the requisite appeal interfaces, has archived the code and has acquired insurance for the system, the agency would issue a registration identifier to be used in the required identity tag enshrining the right to know.

The new agency would have enforcement powers; that is, it would have the power to order rapid de-deployment of an AIDM system shown to be harmful or illegal, as well as to impose punitive fines and other penalties. Updates or modifications to an AIDM system would also be subject to registration and renewed testing mandates.

But the new agency by itself cannot constitute the entire regulatory ecosystem; it will depend on the creation and standardization of new capabilities within the national marketplace. One of the most pivotal of these is the creation of a national standards board for the testing of AIDM systems, both for validity testing (i.e., does it perform the function desired?) and testing for other legal and societal goods, such as explicability, nondiscrimination, privacy and so forth. Compliance with such standards will determine if due diligence has been performed. 


The creation of these standards will then allow the standards board to certify that private, independent testing/auditing companies are following best practices in making their judgments. Vendors will need these independent third-party testing/auditing companies for pre-deployment testing, auditing during deployment, obtaining insurance and providing expert witness testimony in the event of litigation.

A real ecosystem of AIDM governance is needed, one with several moving parts, including founding legislation, a new federal oversight agency, a new standards board, regulations and statutes, and the development of significant new capabilities in both the private and public sectors. It will involve insurance companies, testing/auditing companies, vendors and developers, and deploying entities. Universities will need to develop programs to train those who must staff this ecosystem. 

Only with such a working regulatory ecosystem can the three fundamental rights of humans in relation to AIDM systems be maintained. If we fail to act expeditiously, it may be impossible to recapture these rights once they have been lost.

Valerie M. Hudson is a University Distinguished Professor at Texas A&M University and a co-editor of The Oxford Handbook of AI Governance. Her views are her own.
