[Image: Seals of the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, and the Federal Trade Commission]

Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems

Rohit Chopra, Director of the Consumer Financial Protection Bureau,
Kristen Clarke, Assistant Attorney General for the Justice Department’s Civil Rights Division,
Charlotte A. Burrows, Chair of the Equal Employment Opportunity Commission, and
Lina M. Khan, Chair of the Federal Trade Commission
issued the following joint statement about enforcement efforts to protect the public from bias in automated systems and artificial intelligence:

America’s commitment to the core principles of fairness, equality, and justice is deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity. These established laws have long served to protect individuals even as our society has navigated emerging technologies. Responsible innovation is not incompatible with these laws. Indeed, innovation and adherence to the law can complement each other and bring tangible benefits to people in a fair and competitive manner, such as increased access to opportunities as well as better products and services at lower costs.

Today, the use of automated systems, including those sometimes marketed as “artificial intelligence” or “AI,” is becoming increasingly common in our daily lives. We use the term “automated systems” broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. Private and public entities use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services. These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.

Our Agencies’ Enforcement Authorities Apply to Automated Systems

Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems. For example:

  • The Consumer Financial Protection Bureau (CFPB) supervises, sets rules for, and enforces numerous federal consumer financial laws and guards consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination. The CFPB published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology being used. The circular also made clear that the complexity, opacity, or novelty of the technology used to make a credit decision is not a defense for violating these laws.
  • The Department of Justice’s Civil Rights Division (Division) enforces constitutional provisions and federal statutes prohibiting discrimination across many facets of life, including in education, the criminal justice system, employment, housing, lending, and voting. Among the Division’s other work on issues related to AI and automated systems, the Division recently filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services.
  • The Equal Employment Opportunity Commission (EEOC) enforces federal laws that make it illegal for an employer, union, or employment agency to discriminate against an applicant or employee due to a person’s race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability, or genetic information (including family medical history). In addition to the EEOC’s enforcement activities on discrimination related to AI and automated systems, the EEOC issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.
  • The Federal Trade Commission (FTC) protects consumers from deceptive or unfair business practices and unfair methods of competition across most sectors of the U.S. economy by enforcing the FTC Act and numerous other laws and regulations. The FTC issued a report evaluating the use and impact of AI in combating online harms identified by Congress. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design and can incentivize reliance on increasingly invasive forms of commercial surveillance. The FTC has also warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work product that were trained on data that should not have been collected.

Automated Systems May Contribute to Unlawful Discrimination and Otherwise Violate Federal Law

Many automated systems rely on vast amounts of data to find patterns or correlations, and then apply those patterns to new data to perform tasks or make recommendations and predictions. While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination. Potential discrimination in automated systems may come from different sources, including problems with:

  • Data and Datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes (see the illustrative sketch after this list).
  • Model Opacity and Access: Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, even to the developer of the tool. This lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.
  • Design and Use: Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system on the basis of flawed assumptions about its users, relevant context, or the underlying practices or procedures it may replace.
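The proxy-correlation concern noted in the first item above can be made concrete with a minimal, hypothetical sketch in Python. Everything in it is invented for illustration (the group variable, the "proxy" and "skill" features, the selection threshold), and it does not describe any particular system or any agency methodology. It simply shows that a scoring rule fit to biased historical outcomes can produce different selection rates across groups even when the protected characteristic itself is excluded from the model's inputs.

    # Hypothetical sketch: a correlated "proxy" feature can reproduce historical
    # bias even when the protected attribute is excluded from the model's inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical protected attribute (0 or 1), never shown to the model.
    group = rng.integers(0, 2, size=n)

    # A proxy feature (for example, a coarse geographic indicator) that is
    # statistically correlated with group membership.
    proxy = group * 0.8 + rng.normal(0, 0.5, size=n)

    # A legitimate signal of qualification, identically distributed for both groups.
    skill = rng.normal(0, 1, size=n)

    # Historical labels that embed bias: past decisions depended partly on the
    # proxy, not only on skill.
    historical_label = (skill + 0.9 * proxy + rng.normal(0, 0.5, size=n)) > 1.0

    # Fit a simple scoring rule to (skill, proxy) via least squares; the
    # protected attribute itself is excluded from the inputs.
    X = np.column_stack([np.ones(n), skill, proxy])
    weights, *_ = np.linalg.lstsq(X, historical_label.astype(float), rcond=None)
    scores = X @ weights

    # Select the top 30% of applicants by score and compare selection rates.
    threshold = np.quantile(scores, 0.70)
    selected = scores >= threshold
    rate_0 = selected[group == 0].mean()
    rate_1 = selected[group == 1].mean()
    print(f"selection rate, group 0: {rate_0:.2%}")
    print(f"selection rate, group 1: {rate_1:.2%}")
    print(f"ratio (group 0 / group 1): {rate_0 / rate_1:.2f}")

In this hypothetical, selection rates diverge because the proxy feature carries information about group membership and the historical labels reflect past, biased decisions; removing the protected attribute from the inputs does not remove that information.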

Today, our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation. We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.  

 

Note: This document is for informational purposes only and does not provide technical assistance about how to comply with federal law. It does not constitute final agency action and does not have an immediate and direct legal effect. It does not create any new rights or obligations and it is not enforceable.