Senators on Wednesday heard from prominent tech executives and others on how to accomplish bipartisan legislation next year that would help speed the development of artificial intelligence and mitigate its biggest risks.
A closed-door forum on Capitol Hill hosted by Senate Majority Leader Chuck Schumer included nearly two dozen tech executives, tech advocates, civil rights groups and labor leaders. The guest list featured some of the biggest names in the industry: Meta's Mark Zuckerberg and X and Tesla's Elon Musk, as well as former Microsoft CEO Bill Gates. All 100 senators were invited; the public was not.
Schumer, D-N.Y., said more than 60 senators attended and that there was some broad consensus to create a bipartisan AI policy framework. When he asked everyone in the room whether the government should play a role in regulating AI, “every single person raised their hands, even though they had different views,” he said.
Schumer, who co-chaired the forum with Sen. Mike Rounds, R-S.D., won't necessarily heed the advice of the tech executives as he works with colleagues to provide some oversight of the growing sector. But he hopes they will give senators some realistic direction for meaningful regulation of the tech industry.
The tech leaders presented their views, with each participant given three minutes to speak on a topic of their choice.
Musk and former Google CEO Eric Schmidt raised existential risks posed by artificial intelligence, Zuckerberg raised questions about closed versus "open source" AI models, and IBM CEO Arvind Krishna voiced opposition to the licensing approach favored by other companies, according to an attendee who spoke on condition of anonymity because of the closed-door forum's rules.
Schumer said one of the issues discussed was whether there should be a new agency to regulate artificial intelligence.
"It was a very civilized discussion among some of the smartest people in the world," Musk told reporters after leaving the meeting. He said there was clearly some strong consensus.
Some senators criticized the private meeting, arguing that tech executives should testify publicly.
Sen. Josh Hawley, R-Mo., said he would not attend what he called "a giant cocktail party for big tech." Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., that would require tech companies to seek licenses for high-risk AI systems.
“I don't know why we would invite all the biggest monopolies in the world to come and give advice to Congress on how to help them make more money and then close it to the public,” Hawley said.
Congress has a poor track record when it comes to regulating technology, and the industry has largely grown unchecked by government over the past few decades.
Many lawmakers point to the failure to pass any legislation surrounding social media, such as stricter privacy standards.
"We don't want to do what we did with social media, which is let the tech folks figure it out and fix it later," said Senate Intelligence Committee Chairman Mark Warner, D-Va.
Schumer said regulating AI would be "one of the most difficult things we'll ever do," and he listed several reasons why: It's technically complex, it's constantly changing, and it "has such a wide, broad effect across the whole world," he said.
But his bipartisan working group — Rounds and Sens. Martin Heinrich, D-N.M., and Todd Young, R-Ind. — hopes the rapid growth of artificial intelligence will create more urgency.
Rounds said before the forum that Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also addressing potential data transparency and privacy issues.
“AI isn't going away and it can do really good things or it can be a real challenge,” Rounds said.
Sparked by the release of ChatGPT less than a year ago, businesses in many sectors have been clamoring to adopt new generative AI tools that can compose human-like passages of text, program computer code, and create new images, audio and video. The hype over such tools has fueled concerns about their potential societal harms and led to calls for more transparency about the data collection and use behind the new products.
Several concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers on AI-generated election ads with deceptive images and voices. Hawley and Blumenthal's broader approach would create a government oversight body that could audit certain AI systems for harm before granting a license.
In the United States, major tech companies have expressed support for AI regulations, although they don't necessarily agree on what that means. For example, Microsoft has favored a licensing approach, while IBM favors rules governing specific risky uses of AI rather than the technology itself.
“I think it's important for government to play a role in both innovation and creating the right safeguards, and I thought it was a productive discussion,” Google CEO Sundar Pichai said after leaving the forum.
Many members of Congress agree that legislation is needed, but there is little consensus.
"I came into this process with a great degree of confidence that we will act, but that we will not act more boldly or broadly than the circumstances warrant," Young said. "We have to be skeptical of government, so I think it's important that Republicans came to the table."
Some of those invited to Capitol Hill, including Musk and Sam Altman, CEO of ChatGPT developer OpenAI, have expressed more dire concerns, echoing popular science fiction about humanity losing control of advanced AI systems if adequate safeguards are not in place.
But for many lawmakers and the people they represent, AI's impact on employment and navigating AI-generated misinformation are more immediate concerns.
Rounds said he wants to see a boost for new medical technologies that save lives and give medical professionals access to more data. The topic is "very personal to me," said Rounds, whose wife died of cancer two years ago.
Some Republicans are wary of the path taken by the European Union, which reached agreement in June on the world's first comprehensive rules on artificial intelligence. The EU AI Act would regulate any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.
A group of European corporations has called on EU leaders to reconsider the rules, arguing they could make it harder for companies in the 27-nation bloc to compete with rivals abroad in the use of generative artificial intelligence.