Chuck Schumer, a Democrat from New York, center, leads a Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, D.C., US, on Wednesday. The gathering is part of the Senate majority leader’s strategy to give Congress more influence over the future of artificial intelligence as it takes on a growing role in the professional and personal lives of Americans. Al Drago/Bloomberg

Some of the most powerful tech leaders in the world – including Tesla CEO Elon Musk and Meta CEO Mark Zuckerberg – traveled to Capitol Hill for a historic meeting on artificial intelligence, where they expressed unanimous agreement that the government needs to intervene to avert the potential pitfalls of the evolving technology.

But as the six-hour session wore on Wednesday, there was little apparent consensus about what a congressional framework to govern AI should look like, as companies forge ahead amid a tense industry arms race.

Senate Majority Leader Charles E. Schumer, D-N.Y., said Congress’s “difficult job” ahead will be finding ways to enhance the benefits of the technology while minimizing its risks. But his remarks to reporters made it clear that lawmakers are at least months away from unveiling a legislative framework to address AI.

“It’s a big challenge,” Schumer told reporters. “This is the hardest thing that I think we have ever undertaken. But we can’t be like ostriches and put our head in the sand because if we don’t step forward, things will be a lot worse.”

In Washington, lawmakers have tried to rein in the power of Silicon Valley for years, and recent advances in AI represent their biggest test to date. In the past five years, lawmakers have not passed a single comprehensive law to protect data privacy, regulate social media, or promote fair competition by the tech giants, despite numerous congressional hearings spent grilling tech executives about the role of social media in election manipulation, potential abuses of user data and allegedly monopolistic behaviors.

Lawmakers, tech industry leaders, and civil rights advocates say the United States can’t afford a repeat of past attempts to craft tech legislation, which became mired in partisan battles, industry lobbying, and competing congressional priorities – especially given AI’s potential to discriminate and its critical role in national security.

Tristan Harris, the co-founder of the Center for Humane Technology and a prominent advocate for social media regulation, said he was “hopeful” about what the session accomplished.

Lawmakers “were willing to tear up the playbook to say we need to do something that moves at the pace this is moving,” Harris said.

Google CEO Sundar Pichai and Meta CEO Mark Zuckerberg on Capitol Hill on Wednesday. Elizabeth Frantz/The Washington Post

The moves on the Hill follow the launch of ChatGPT and other generative AI that can craft surprisingly humanlike images and text, sparking a worldwide movement to regulate and rein in the tech before it gets too far ahead. The new scrutiny is palpable in Washington, where President Biden has hosted several AI meetings with Silicon Valley leaders, and congressional committees this year alone have held at least 10 hearings on AI, covering issues ranging from national security to human rights.

Still, Congress is far behind other governments around the world eager to chart the regulatory path for artificial intelligence. The European Union is expected to reach a deal this year on its AI Act, which aims to protect consumers from potentially dangerous applications of artificial intelligence. China in July released its own rules for generative AI, which require the technology to abide by the socialist ideology governing most aspects of daily life.

The urgency was on display Wednesday in the historic Kennedy Caucus Room, where every one of the more than 20 attendees – tech CEOs, prominent civil rights advocates, and consumer advocates – raised a hand when Schumer asked the room whether the government should intervene in AI.

The atmosphere in the room was generally cordial, lawmakers and tech leaders said, but there was some disagreement over what the government’s approach should be to open-source models, code that is freely available to the public and lacks the restrictions Google and OpenAI put on their systems. Meta has released an open-source model called LLaMA, an approach that has alarmed some lawmakers.

Harris told the room that with $800 and a few hours of work, his team was able to strip Meta’s safety controls off LLaMA 2 and that the AI responded to prompts with instructions to develop a biological weapon. Zuckerberg retorted that anyone can find that information on the internet, according to two people familiar with the meeting, who spoke on the condition of anonymity to discuss the closed-door meeting.

Meta declined to comment. In his prepared remarks, Zuckerberg said that “open source democratizes access to these tools, and that helps level the playing field and foster innovation for people and businesses, which I think is valuable for our economy overall.”

Harris said in a statement that by releasing its open-source model, Meta “unilaterally decided for the whole world what was ‘safe.’”

Other executives suggested another path forward.

“Some things are totally fine open source and great,” OpenAI CEO Sam Altman, whose company created ChatGPT, told reporters. “Some things in the future – we may not want to. We need to evaluate the models as they go.”

The discussion was wide-ranging, covering many aspects of how AI might transform society, for better or worse. Microsoft co-founder Bill Gates suggested the technology could be used to solve hunger, and some executives called for greater government funding to ensure strong advances in artificial intelligence, Schumer said. Lawmakers also said there was discussion about how to ensure the workforce, especially within government, was prepared for the transformations that AI would bring.

There isn’t yet agreement about whether the government needs a new AI regulator or whether existing agencies could take up the mantle. As Musk exited the meeting, he told reporters that he could envision a regulator dedicated to AI and compared the issue to the controversy over seat belts in cars decades ago, saying tech giants can’t stick their heads in the sand.

Elon Musk, chief executive officer of Tesla, arrives for the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, D.C., on Wednesday. Al Drago/Bloomberg

Schumer called the discussion of a new regulator one of the “big issues” that Congress needs to consider, saying some attendees support the creation of a new agency while others say existing government agencies, including the National Institute of Standards and Technology, should take a leading role.

There were some expectations of tension in the meeting because many of the executives compete fiercely in business, and Musk and Zuckerberg recently sparred online about the possibility of a cage match. The two were seated at opposite ends of the dais, far from each other.

There was some consensus during the meeting about the need for international coordination on AI, particularly the development of an agency similar to nuclear regulators to organize a global response to AI, attendees said. Altman had previously testified that such an agency was needed.

Lawmakers also said they discussed the risks that AI presents to elections, a day after a bipartisan group of senators unveiled a bill that would prohibit the use of generative AI in elections.

More than two-thirds of senators attended the forum, according to Schumer. Many lawmakers are just beginning to grapple with AI and told reporters as they exited the session that they found it educational. Sen. Angus King, I-Maine, quipped that he would call the session “Schumer University.”

Wednesday’s session was starkly different from past congressional hearings on tech, where lawmakers often found themselves under public scrutiny for gaffes that exposed their lack of tech expertise. It was mostly closed to the press in an attempt to permit more candid conversation and limit the grandstanding common at high-profile public hearings.

But some lawmakers expressed consternation that the meeting was closed-door, diverging from past public hearings with tech executives. Sen. Elizabeth Warren, D-Mass., said individual senators were not able to ask questions during the morning session, which Schumer moderated.

“The people of Massachusetts did not send me here not to ask questions,” Warren told reporters. “There’s no interaction, no bumping up against each other on any of these issues.”

Reporters and cameras swarmed tech executives as they filed into the Russell Senate Office Building on Wednesday morning. Musk stopped to pose for cameras, while Altman took questions from reporters about his positions on AI policy.

Schumer started by moderating a three-hour session with the executives in the morning, and then after an hour-long break, Sen. Mike Rounds, R-S.D., took over asking questions.

Wednesday’s event attracted some criticism from prominent AI ethicists because initial reports of the guest list did not include any women, civil rights leaders, or AI researchers. The Washington Post first reported that Schumer had invited several prominent advocates and scientists, including AFL-CIO President Liz Shuler.

Shuler told attendees that working people “are concerned that this technology will make our jobs worse, make us earn less, maybe even cost us our jobs,” according to an excerpt of her remarks, shared exclusively with The Post. She criticized studio heads for “threatening the earning power and fundamental roles of writers and actors,” while sitting in the same room as Charles Rivkin, the chief executive officer of the Motion Picture Association, which represents Hollywood.

Deborah Raji, an AI researcher at the University of California at Berkeley, said she tried to provide a counterweight to the narrative that artificial intelligence’s potential risks could arise from the technology working too well in ways that are hard for its makers to control. Instead, Raji, who has helped lead pioneering work on biases in facial recognition systems and the need for algorithmic auditing, said she tried to redirect discussion toward present-day challenges of deploying models in the wild, particularly when errors disproportionately affect people who are underrepresented or misrepresented in the data.

Raji said she shared the example that just because OpenAI’s GPT-4 performed well on the MCAT does not mean it is suitable for medical applications or medical Q&As, noting that it would have to be evaluated specifically in a health-care context – an area where IBM’s Watson failed.

“I think that that helped ground conversations,” she said.

Nitasha Tiku, Will Oremus, Danielle Abril and Gerrit De Vynck contributed to this report.
