Current and former AI employees warn of the technology’s dangers

In a letter published Tuesday, a handful of current and former employees at prominent artificial intelligence companies warned that the technology poses grave risks to humanity, calling on companies to commit to greater transparency and to foster a culture of criticism that holds them more accountable.

The letter, signed by 13 people, including current and former employees at OpenAI, Anthropic and Google’s DeepMind, said AI could exacerbate inequality, spread misinformation and allow autonomous AI systems to cause significant loss of life. Though these risks could be mitigated, the corporations in control of the software have “strong financial incentives” to limit oversight, they said.

The move comes as OpenAI faces a staff exodus. Many critics have seen prominent departures, including those of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees argue chase profit at the expense of making OpenAI’s technologies safer.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company’s disregard for the risks of artificial intelligence.

“I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” he said in a statement, referring to a hotly contested term for computers that match the power of the human brain.

“They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for technology this powerful and this poorly understood.”

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that “rigorous debate is crucial given the significance of this technology.” Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that absent government oversight, AI workers are the “few people” who can hold corporations accountable. They noted that they are hamstrung by “broad confidentiality agreements” and that ordinary whistleblower protections are “insufficient” because they focus on illegal activity, and the risks that they are warning about are not yet regulated.

The letter called on AI companies to commit to four principles meant to allow for greater transparency and whistleblower protections: not entering into or enforcing agreements that prohibit criticism of risks; establishing an anonymous process for current and former employees to raise concerns; supporting a culture of open criticism; and not retaliating against current and former employees who share confidential information to raise alarms “after other processes have failed.”

The Washington Post reported in December that senior leaders at OpenAI had raised fears of retaliation from CEO Sam Altman, warnings that preceded his temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said part of the nonprofit board’s decision to remove Altman as CEO late last year was his lack of candid communication about safety.

“He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working,” she told “The TED AI Show” in May.

The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, who are considered “godfathers” of AI, and renowned computer scientist Stuart Russell.
