
As the Use of AI Spreads, Congress Looks to Rein It In

The White House, lawmakers from both parties, and federal agencies are all working on bills or projects to constrain potential downsides of the tech.

There’s bipartisan agreement in Washington that the US government should do more to support development of artificial intelligence technology. The Trump administration redirected research funding toward AI programs; President Biden’s science advisor Eric Lander said of AI last month that “America’s economic prosperity hinges on foundational investments in our technological leadership.”

At the same time, parts of the US government are working to place limits on algorithms to prevent discrimination, injustice, or waste. The White House, lawmakers from both parties, and federal agencies including the Department of Defense and the National Institute of Standards and Technology are all working on bills or projects to constrain potential downsides of AI.

Biden’s Office of Science and Technology Policy is working to address the risks of discrimination caused by algorithms. The National Defense Authorization Act passed in January introduced new support for AI projects, including a new White House office to coordinate AI research, but it also required the Pentagon to assess the ethical dimensions of the AI technology it acquires and directed NIST to develop standards to keep the technology in check.

In the past three weeks, the Government Accountability Office, which audits US government spending and management and is known as Congress’s watchdog, released two reports warning that federal law enforcement agencies aren’t properly monitoring the use and potential errors of algorithms used in criminal investigations. One took aim at face recognition, the other at forensic algorithms for face, fingerprint, and DNA analysis; both were prompted by lawmaker requests to examine potential problems with the technology. A third GAO report laid out guidelines for responsible use of AI in government projects.

Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology, says the bustle of AI activity provides a case study of what happens when Washington wakes up to new technology.

In the mid-2010s, lawmakers took little notice as researchers and tech companies brought about a rapid increase in the capabilities and use of AI, from beating champions at Go to ushering smart speakers into kitchens and bedrooms. The technology became a mascot for US innovation and a talking point for some tech-centric lawmakers. Now the conversations have become more balanced and businesslike, Toner says. “As this technology is being used in the real world you get problems that you need policy and government responses to.”

Face recognition, the subject of GAO’s first AI report of the summer, has drawn special focus from lawmakers and federal bureaucrats. Nearly two dozen US cities have banned local government use of the technology, usually citing concerns about accuracy, which studies have shown is often worse for people with darker skin.

The GAO’s report on the technology was requested by six Democratic representatives and senators, including the chairs of the House oversight and judiciary committees. It found that 20 federal agencies that employ law enforcement officers use the technology, with some using it to identify people suspected of crimes during the January 6 assault on the US Capitol or during the protests that followed the killing of George Floyd by Minneapolis police in 2020.

Fourteen agencies sourced their face recognition technology from outside the federal government, but 13 did not track which systems their employees used. The GAO advised agencies to keep closer tabs on face recognition systems to avoid the potential for discrimination or privacy invasion.

The GAO report appears to have increased the chances of bipartisan legislation constraining government use of face recognition. At a hearing of the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security held Tuesday to chew over the GAO report, Representative Sheila Jackson Lee (D-Texas), the subcommittee chair, said she believed it underscored the need for regulations. The technology is currently unconstrained by federal legislation. The subcommittee’s ranking member, Representative Andy Biggs (R-Arizona), agreed. “I have enormous concerns, the technology is problematic and inconsistent,” he said. “If we’re talking about finding some kind of meaningful regulation and oversight of facial recognition technology then I think we can find a lot of common ground.”

The GAO dug deeper into law enforcement technology in its report on forensic algorithms, which was requested by a bipartisan group of House members, including the chairs of the oversight and the science, space, and technology committees. The agency said that algorithms for face recognition, latent fingerprint analysis, and DNA profiling from degraded or mixed samples can help investigators. But the report also suggested lawmakers support new standards on training and appropriate use of such algorithms to avoid errors and increase transparency in criminal justice.

Representative Mark Takano (D-California), among the lawmakers who requested the report, says it provided a window into the fallibility of forensic algorithms. “Everything from the data input, to the design of the algorithm, to the testing of it can lead to disparate outcomes for people in the real world,” he says. In April, Takano reintroduced a bill drafted in 2019 that would direct NIST to establish standards and a testing program for forensic algorithms, and bar the use of trade secret claims to prevent criminal defense teams from accessing the source code of algorithms used to process evidence.

The GAO’s third AI-related report, specifying guidelines on responsible use of AI for federal agencies, was initiated by the agency itself in anticipation of rapid growth in government AI projects.

GAO chief data scientist Taka Ariga says the report aims to explain to government agencies and AI suppliers in the private sector the acceptable standards for the testing, security, and privacy of AI systems and data used to create them. Future audits of government AI projects will draw on the document’s criteria, he says. “We want to make sure we’re asking the accountability questions now because our job is going to get more difficult when we encounter AI systems that are more capable,” Ariga says.

Despite the recent efforts of lawmakers and officials like Ariga, some policy experts say US agencies and Congress still need to invest more in adapting to the age of AI.

In a recent report, Georgetown’s CSET outlined scary but plausible “AI accidents” to encourage lawmakers to work more urgently on AI safety research and standards. Its hypothetical disasters included a skin cancer app that misdiagnoses Black people at higher rates, leading to unnecessary deaths, and mapping apps that steer drivers into the path of a wildfire.

The Brookings Institution’s director of governance studies, Darrell West, recently called for the revival of the Office of Technology Assessment, shut down 25 years ago, to provide lawmakers with independent research on new technologies such as AI.

Members of Congress from both parties have attempted to bring back the OTA in recent years. They include Takano, who says it could help Congress be more proactive in tackling challenges raised by algorithms. “We need OTA or something like it to help members anticipate where technology is going to challenge democratic institutions, or the justice system, or political stability,” he says.

