President Joe Biden has unveiled the most sweeping measures ever to control artificial intelligence to ensure the technology cannot be weaponized.
The order, announced Monday, requires developers including Microsoft, Google and OpenAI, the creator of ChatGPT, to conduct safety tests and submit the results to the government before releasing models to the public.
These findings are analyzed by federal agencies, including Homeland Security, to address threats to critical infrastructure and chemical, biological, radiological, nuclear and cybersecurity risks.
Biden believes the government was slow to respond to the dangers of social media, leaving American youth to grapple with the resulting mental health issues.
Monday’s order is an “urgent” step to rein in the technology before it distorts fundamental notions of truth with false images, exacerbates racial and social inequalities, hands fraudsters and criminals a powerful tool, and is used to wage war.
About 15 leading companies have agreed to implement voluntary AI security commitments, but the executive order aims to provide concrete regulations for the development of the technology.
Several companies, including Microsoft and OpenAI, have also testified before Congress where they have been questioned about the security of their chatbots.
Vice President Kamala Harris – who has the lowest approval rating of any vice president – was named “AI czar” in May and is tasked with leading the administration’s oversight of AI as fears grow that the technology could upend life as we know it.
However, Harris has not commented further since her appointment to the role.
She is scheduled to speak at the U.K. AI Safety Summit on November 2.
The new regulation reflects the government’s efforts to shape the development of AI in a way that maximizes its capabilities and mitigates its threats.
AI has drawn great personal interest for Biden because of its potential to impact the economy and national security.
Deepfakes spread misinformation, AI robots scam people out of money, and chatbots show signs of bias.
Congress questioned OpenAI CEO Sam Altman for five hours in May about how ChatGPT and other models could change “human history,” for better or worse, with lawmakers comparing the technology to either the printing press or the atomic bomb.
White House Chief of Staff Jeff Zients recalled that Biden had directed his team to address the issue urgently as technology was a top priority.
“We cannot move at the normal pace of government,” Zients said the Democratic president told him. “We need to advance just as quickly, if not faster, than the technology itself.”
AI companies conduct their own testing to root out disinformation, bias or racism.
Fear of AI is growing as some experts predict the technology will reach the “singularity” by 2045 – the point at which it surpasses human intelligence and can no longer be controlled.
Under the Defense Production Act, the order requires leading AI developers to share safety test results and other information with the government, and to notify federal authorities if their models show signs of posing risks to national security, public health or safety.
The National Institute of Standards and Technology is supposed to create standards to ensure AI tools are safe before release.
The Department of Commerce will issue guidance on labeling and watermarking AI-generated content to help distinguish between authentic interactions and those generated by software.
One provision of the order aims to “protect against the risks of using AI to engineer dangerous biological materials,” which entails introducing “strong new standards for biological synthesis screening,” the document says.
The order also affects privacy, civil rights, consumer protection, scientific research and employee rights.
An administration official who briefed reporters on the order Sunday said its directives would be implemented within 90 to 365 days, with the security-related items facing the earliest deadlines.
While much of the order concerns risks in AI development, Biden is keenly aware of the technology’s potential to help the public by making products better, cheaper and more widely available.
One example is the development of affordable, life-saving drugs: the executive order directs the Department of Health and Human Services to establish a safety program to receive reports of harmful or unsafe AI-related health care practices and to act to remedy them.
“AI is everywhere in our lives. And it’s going to become more common,” Zients said.
“I think it’s an important part of making our country an even better place and making our lives better… at the same time we have to avoid the downsides.”
With Congress still in the early stages of debating AI safety measures, Biden’s order represents a U.S. perspective as countries around the world struggle to set their own policies.
After more than two years of deliberations, the European Union is putting the finishing touches on a comprehensive set of rules targeting the technology’s riskiest applications. China, a major AI competitor of the US, has also set some rules.
British Prime Minister Rishi Sunak also hopes to establish Britain as a hub for AI safety at a summit this week, which Harris plans to attend.