Global AI Safety Hampered by Indecision, Regulatory Delays

Governments seek to create security safeguards around artificial intelligence, but roadblocks and indecision are delaying cross-nation agreement on priorities and on which pitfalls to avoid.

In November 2023, the United Kingdom published the Bletchley Declaration, in which 28 countries, including the United States and China, along with the European Union, agreed to boost global cooperation on artificial intelligence safety.

Efforts to pursue AI safety regulations continued in May with the second global AI summit, at which the U.K. and the Republic of Korea secured commitments from 16 global AI tech companies to a set of safety outcomes building on that agreement.

“The Declaration fulfills key summit objectives by establishing shared agreement and responsibility on the risks, opportunities, and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration,” Britain said in a separate statement accompanying the declaration.

The European Union’s AI Act, adopted in May, became the world’s first major law regulating AI. It includes enforcement powers and penalties, such as fines of up to €35 million (about $38 million) or 7% of a company’s annual global revenue, whichever is higher, for breaches of the Act.

Following that, in a Johnny-come-lately response, a bipartisan group of U.S. senators recommended that Congress draft $32 billion in emergency spending legislation for AI and published a report saying the U.S. needs to harness AI opportunities and address the risks.

“Governments absolutely need to be involved in AI, particularly when it comes to issues of national security. We need to harness the opportunities of AI but also be wary of the risks. The only way for governments to do that is to be informed, and being informed requires a lot of time and money,” Joseph Thacker, principal AI engineer and security researcher at SaaS security company AppOmni, told TechNewsWorld.

AI Safety Essential for SaaS Platforms

AI safety is growing in importance daily. Nearly every software product, including AI applications, is now built as a software-as-a-service (SaaS) application, noted Thacker. As a result, ensuring the security and integrity of these SaaS platforms will be critical.

“We need robust security measures for SaaS applications. Investing in SaaS security should be a top priority for any company developing or deploying AI,” he offered.

Existing SaaS vendors are adding AI into everything, introducing more risk. Government agencies should take this into account, he maintained.

US Response to AI Safety Needs

Thacker wants the U.S. government to take a faster and more deliberate approach to confronting the realities of missing AI safety standards. However, he praised the commitment of 16 major AI companies to prioritize the safety and responsible deployment of frontier AI models.

“It shows growing awareness of the AI risks and a willingness to commit to mitigating them. However, the real test will be how well these companies follow through on their commitments and how transparent they are in their safety practices,” he said.

Still, his praise stopped short in two key areas: he saw no mention of consequences or of aligning incentives, both of which he called extremely important.

According to Thacker, requiring AI companies to publish safety frameworks shows accountability, which will provide insight into the quality and depth of their testing. Transparency will allow for public scrutiny.

“It may also force knowledge sharing and the development of best practices across the industry,” he observed.

Thacker also wants quicker legislative action in this space. However, he thinks significant movement will be challenging for the U.S. government in the near future, given how slowly U.S. officials usually move.

“A bipartisan group coming together to make these recommendations will hopefully kickstart a lot of conversations,” he said.

Still Navigating Unknowns in AI Regulations

The Global AI Summit was a great step forward in safeguarding AI’s evolution, agreed Melissa Ruzzi, director of artificial intelligence at AppOmni. Regulations are key.

“But before we can even think about setting regulations, a lot more exploration needs to be done,” she told TechNewsWorld.

This is where cooperation among companies in the AI industry to join initiatives around AI safety voluntarily is so crucial, she added.

“Setting thresholds and objective measures is the first challenge to be explored. I don’t think we are ready to set those yet for the AI field as a whole,” said Ruzzi.

It will take more investigation and data to consider what these may be. Ruzzi added that one of the biggest challenges is for AI regulations to keep pace with technology developments without hindering them.

Start by Defining AI Harm

According to David Brauchler, principal security consultant at NCC Group, governments should consider looking into definitions of harm as a starting point in setting AI guidelines.

As AI technology becomes more commonplace, a shift may develop away from classifying an AI system’s risk by the computational capacity required to train it. That standard was part of the recent U.S. executive order.

Instead, classification might turn toward the tangible harm AI could inflict in its execution context. He noted that various pieces of legislation hint at this possibility.

“For example, an AI system that controls traffic lights ought to incorporate far more safety measures than a shopping assistant, even if the latter required more computational power to train,” Brauchler told TechNewsWorld.

So far, a clear view of regulation priorities for AI development and usage is lacking. Governments should prioritize the real impact on people in how these technologies are implemented. Legislation should not attempt to predict the long-term future of a rapidly changing technology, he observed.

If a present danger emerges from AI technologies, governments can respond accordingly once that information is concrete. Attempts to pre-legislate those threats are likely to be a shot in the dark, clarified Brauchler.

“But if we look toward preventing harm to individuals via impact-targeted legislation, we don’t have to predict how AI will change in form or fashion in the future,” he said.

Balancing Governmental Control, Legislative Oversight

Thacker sees a tricky balance between control and oversight in regulating AI. The result should neither stifle innovation with heavy-handed laws nor rely solely on company self-regulation.

“I believe a light-touch regulatory framework combined with high-quality oversight mechanisms is the way to go. Governments should set guardrails and enforce compliance while allowing responsible development to continue,” he reasoned.

Thacker sees some analogies between the push for AI regulations and the dynamics around nuclear weapons. He warned that countries that achieve AI dominance could gain significant economic and military advantages.

“This creates incentives for nations to rapidly develop AI capabilities. However, global cooperation on AI safety is more feasible than it was with nuclear weapons, as we have greater network effects with the internet and social media,” he observed.
