Another change urged by lawmakers and industry witnesses alike was requiring disclosure when people are conversing with a language model rather than a human, or when AI technology makes important decisions with life-changing consequences. One example could be requiring disclosure when a facial recognition match is the basis of an arrest or criminal accusation.
The Senate hearing follows growing interest from US and European governments, and even some tech insiders, in putting new guardrails on AI to prevent it from harming people. In March, a group letter signed by major names in tech and AI called for a six-month pause on AI development, and this month, the White House called in executives from OpenAI, Microsoft, and other companies and announced it is backing a public hacking contest to probe generative AI systems. The European Union is also finalizing a sweeping law called the AI Act.
IBM’s Montgomery urged Congress yesterday to take inspiration from the AI Act, which categorizes AI systems by the risks they pose to people or society and sets rules for—or even bans—them accordingly. She also endorsed the idea of encouraging self-regulation, highlighting her position on IBM’s AI ethics board, although at Google and Axon those structures have become mired in controversy.
The Center for Data Innovation, a tech think tank, said in a letter released after yesterday’s hearing that the US doesn’t need a new regulator for AI. “Just as it would be ill-advised to have one government agency regulate all human decision-making, it would be equally ill-advised to have one agency regulate all AI,” the letter said.
“I don’t think it’s pragmatic, and it’s not what they should be thinking about right now,” says Hodan Omaar, a senior analyst at the center.
Omaar says the idea of booting up a whole new agency for AI is improbable given that Congress has yet to follow through on other necessary tech reforms, like the need for overarching data privacy protections. She believes it is better to update existing laws and allow federal agencies to add AI oversight to their existing regulatory work.
The Equal Employment Opportunity Commission and Department of Justice issued guidance last summer on how businesses that use algorithms in hiring—algorithms that may expect people to look or behave a certain way—can stay in compliance with the Americans with Disabilities Act. Such guidance shows how AI policy can overlap with existing law and involve many different communities and use cases.
Alex Engler, a fellow at the Brookings Institution, says he’s concerned that the US could repeat problems that sank federal privacy regulation last fall. The historic bill was scuppered by California lawmakers who withheld their votes because the law would override the state’s own privacy legislation. “That’s a good enough concern,” Engler says. “Now is that a good enough concern that you’re gonna say we’re just not going to have civil society protections for AI? I don’t know about that.”
Though the hearing touched on potential harms of AI—from election disinformation to conceptual dangers that don’t exist yet, like self-aware AI—generative AI systems like ChatGPT that inspired the hearing got the most attention. Multiple senators argued they could increase inequality and monopolization. The only way to guard against that, said Senator Cory Booker, a Democrat from New Jersey who has cosponsored AI regulation in the past and supported a federal ban on face recognition, is if Congress creates rules of the road.