“There is no real teeth to these voluntary agreements,” Easterly said. “There needs to be a set of rules in place, ultimately legislation.”
Deepfakes and AI-generated images have been around for several years, but as the technology improves and the tools to make them become widely available, they’ve become increasingly common on social media platforms. An AI-generated image of a sprawling refugee camp with the words “All Eyes on Rafah” went viral in late May as a way for people to show their support for Palestinians in Gaza. As major elections take place across the globe, some politicians have tried to use fake images to make their opponents look bad.
In February, tech companies, including Google, Meta, OpenAI and TikTok, said they would work to identify and label deepfakes on their social media platforms. But their agreement was voluntary and did not include an outright ban on deceptive political AI content. The agreement came months after the tech companies also signed a pledge organized by the White House that they would label AI images.
Congressional and state-level politicians are debating numerous bills to try to regulate AI in the United States, but so far the initiatives haven’t made it into law. The E.U. parliament passed an AI Act last year, but it won’t fully go into force for another two years.
The spread of false claims about the 2020 election is leading to threats of violence against election officials right now, Easterly said. Some poll workers have quit over the worsening environment, she said. “Those who remain often operate, frankly, in difficult conditions.”
Easterly also said that Chinese hackers are breaking into critical infrastructure in the United States, such as water treatment facilities and pipeline control centers, in order to “preposition” themselves to strike if there were ever a conflict between the two countries.
“They are creating enormous risk to our critical infrastructure,” Easterly said. “That is happening right now.”