On Tuesday, four US federal agencies issued a warning stating that they already possess the power to address and combat the harms caused by AI bias, and they intend to use it. The warning comes as Congress grapples with how best to protect Americans from the risks posed by AI, a technology that has advanced rapidly and become readily accessible to consumers. Senate Majority Leader Chuck Schumer recently announced he is working toward a comprehensive framework for AI legislation, signaling that the issue is a priority in Congress.
Even as lawmakers work to establish targeted rules for the new technology, regulators have declared that they already possess the means to pursue companies that abuse or misuse AI. In a joint statement, the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission, and the Federal Trade Commission outlined how existing laws allow them to take action against firms that deploy AI unlawfully.
For instance, the CFPB is investigating “digital redlining,” or housing discrimination that results from bias in lending or home-valuation algorithms, according to Rohit Chopra, the agency’s director. The CFPB also plans to propose rules to ensure that AI valuation models for residential real estate include safeguards against discrimination.
“There is no exemption in our nation’s civil rights laws for new technologies and artificial intelligence that engages in unlawful discrimination,” Chopra stated during a virtual press conference.
Each agency already has the legal authority to combat AI-driven harm, according to FTC Chair Lina Khan. Firms should be aware that systems that bolster fraud or perpetuate unlawful bias may violate the FTC Act, she said, adding that there is no AI exemption to the laws on the books.
The FTC is also prepared to take action against companies that unlawfully seek to block new entrants to AI markets, Khan added.
Kristen Clarke, assistant attorney general for the DOJ Civil Rights Division, pointed to a previous settlement with Meta over accusations that the company had used algorithms that unlawfully discriminated on the basis of sex and race in displaying housing advertisements. Clarke said the Civil Rights Division is committed to using federal civil rights laws to hold firms accountable when they use AI in discriminatory ways.
EEOC Chair Charlotte Burrows highlighted the use of AI in recruitment and hiring, noting that systems trained on biased datasets can produce biased decisions. In effect, such a system can screen out any candidate who does not resemble those in the select group it was trained to identify.
Despite this, regulators acknowledged that there is still room for Congress to act. “Artificial intelligence poses some of the greatest modern-day threats when it comes to discrimination today, and these issues warrant closer study and examination by policymakers and others,” Clarke stated, adding that, in the meantime, agencies have an array of bedrock civil rights laws at their disposal to hold bad actors accountable.