Senators demand OpenAI detail efforts to make its AI safe

Senators demanded in a Monday letter that OpenAI turn over data about its efforts to build safe and secure artificial intelligence, following employee warnings, detailed in The Washington Post earlier this month, that the company rushed through safety testing of its latest AI model.

Led by Sen. Brian Schatz (D-Hawaii), the five lawmakers asked OpenAI chief executive Sam Altman to outline how the maker of ChatGPT plans to meet “public commitments” to ensure its AI does not cause harm, such as teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks, in the letter obtained exclusively by The Post.

The senators — a group of Democrats and an independent — also asked the company for information about employee agreements, which could have muzzled workers who wished to alert regulators to risks. In a July letter to the Securities and Exchange Commission, OpenAI whistleblowers said they had filed a complaint with the agency alleging the company illegally issued restrictive severance, nondisclosure and employee agreements, potentially penalizing workers who wished to raise concerns to federal regulators.

In a statement to The Post earlier this month, OpenAI spokesperson Hannah Wong said the company has “made important changes to our departure process to remove nondisparagement terms” from staff agreements.

The letter comes amid employee concerns that OpenAI is putting profit before safety in creating its technology. It cites a July report in The Post detailing how OpenAI rushed out its latest AI model, GPT-4 Omni, to meet a May release date. Company leaders moved ahead with the launch, despite employee concerns about the time frame, and sped through comprehensive safety testing, undermining a July 2023 safety pledge to the White House.

“Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems,” the senators wrote. “This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”

“We didn’t cut corners on our safety process, though we recognize the launch was stressful for our teams,” OpenAI spokesperson Liz Bourgeois said in a statement.

“Artificial intelligence is a transformative new technology and we appreciate the importance it holds for U.S. competitiveness and national security,” Bourgeois wrote. “We take our role in developing safe and secure AI very seriously and continue to work alongside policymakers to establish the appropriate safeguards going forward.”

Lawmakers, including Sen. Chuck Grassley (R-Iowa), have said employees at AI companies need to be able to offer Congress a clear understanding of the technology as it attempts to regulate it — including concerns and risks.

Senators in the letter asked OpenAI to commit to not enforcing nondisparagement agreements and “removing any other provisions” from employee agreements that could be used to punish those who raise concerns about company practices.

Senate Majority Leader Charles E. Schumer (D-N.Y.) and a bipartisan working group of senators released recommendations earlier this year to infuse $32 billion into AI research and development, but critics have said the plan is vague and has stymied other efforts in Congress to craft legislation. The chances of passing comprehensive legislation this year are dwindling as attention in Washington shifts to the 2024 election.

In the absence of new laws from Congress, the White House has largely relied on voluntary commitments from the companies to create safe and trustworthy AI systems. The Biden administration also issued a sweeping AI executive order requiring companies to share testing results about the most powerful models.

The letter also asked Altman whether OpenAI will dedicate 20 percent of its computing resources to research on AI safety, a commitment the company made last July when announcing a team dedicated to preventing existential risks. That group, the “Superalignment team,” has since been disbanded and its staff redistributed to other parts of the company.

OpenAI’s Bourgeois said the company’s promise to dedicate 20 percent of computing power to safety, announced last July, was not intended to go to a single safety team and would be allocated over multiple years, with more resources invested as its technology advanced.

The senators asked OpenAI if it will allow independent experts to assess the safety and security of its systems before release, and to make its next foundational AI model available to government agencies for predeployment testing. Legislators also asked OpenAI to outline what misuse and safety risks its staff have observed after releasing its most recent large language models.

Stephen Kohn, a lawyer representing OpenAI whistleblowers, said the senators’ requests are “not sufficient” to cure the chilling effect of preventing employees from speaking about company practices. “What steps are they taking to cure that cultural message,” he said, “to make OpenAI an organization that welcomes oversight?”

The senators also asked OpenAI to fulfill the requests by Aug. 13, including documentation on how it plans to meet its voluntary pledge to the Biden administration to protect the public from abuses of generative AI.

Kohn added that Congress must hold hearings and an investigation into OpenAI’s practices.

“Congressional oversight on this is badly needed,” Kohn said. “It’s essential that when you have a technology that has the potential risks of artificial intelligence that the government get in front of it.”

Correction

A previous version of this article incorrectly said five Senate Democrats signed the letter. The letter was signed by four Senate Democrats and Sen. Angus King (Maine), an independent who caucuses with their party. The article has been corrected.
