Three key takeaways from the Senate Energy Committee hearing on DOE and AI

Lawmakers on the Senate Energy Committee were warned Thursday about the dangers and opportunities that come with integrating artificial intelligence into the U.S. energy sector and everyday life as a whole.
The committee held a hearing on the rapidly evolving technology, and the experts in attendance spent considerable time discussing not only AI, but also the ever-looming threat of China and its efforts to steal and replicate emerging U.S. capabilities.
"China has released its new-generation AI development plan, which includes [research and development] and infrastructure goals. The United States currently has no such strategic AI plan," Committee Chairman Joe Manchin, D-W.Va., said at the start of the hearing.
Energy Committee Chairman Sen. Joe Manchin, D-W.Va., and ranking member Sen. John Barrasso, R-Wyo., held a hearing Thursday on the U.S. energy sector and AI.
People will have AI “in their pockets.”
Among the points made by witnesses Thursday was how pervasive AI will become in everyday life. As Professor Rick Stevens of Argonne National Laboratory put it: "You can't put Pandora back in the box."
He suggested that officials and other Americans need to be quickly educated about how AI works and how to curb its negative effects rather than trying to hinder its further development.
“I think we need to … get smarter about how we manage the risks associated with advanced AI systems,” Stevens said.
"In the next few years, every person will have a very powerful AI assistant in their pocket that can do whatever they want that assistant to do. Hopefully most of it will be positive progress for society. Some of it will be negative.
“We need to be able to reduce this negative element, detect it when it occurs and mitigate it either through legislation or other technical means before something dramatically bad happens.”

From left to right, the witnesses: David Turk, deputy secretary of the U.S. Department of Energy; Dr. Rick Stevens, associate laboratory director for Computing, Environment and Life Sciences at Argonne National Laboratory; and Anna Puglisi, senior fellow at Georgetown University's Center for Security and Emerging Technology.
Department of Energy Deputy Secretary David Turk echoed that sentiment at another point in the hearing, noting that the advancement of AI “makes it easier for less sophisticated actors to carry out more complex types of attacks.”
"Pandora's box is open. We have to deal with this now, and we need to address these types of emerging AI challenges head-on," Turk said. "We are not yet where we need to be. We have to make the investments; we need to keep working on it."
"We need a policy for the China we have"
Anna Puglisi of Georgetown University's Center for Security and Emerging Technology also warned senators that current U.S. policy toward American adversaries, particularly China, will not be enough in the rapidly changing technology landscape.
“We need a policy for the China we have, not the China we want. Most policies to date have been tactical in nature and not designed to counter an entire system that is structurally different from our own,” Puglisi said.
"It is important that the United States and other liberal democracies invest in the future. We have heard about the great promise of these technologies. But we have to build research security into these funding programs from the start."

Turk was the only Biden administration Energy Department official present at the hearing.
"Existing policies and laws are insufficient to address the extent of the CCP's influence in our society, particularly in science and research," Puglisi added.
Turk later added that China was not the only threat, and that the U.S.'s traditional adversaries on the world stage were also posing a number of new problems with AI.
"It's not just China. There are of course others: Russia, Iran, North Korea," Turk said. "The threat is evolving, and we must evolve our responses accordingly… We now update this risk matrix annually to ensure we remain current on what technologies we deem sensitive and what protocols we use."
Why regulation is not enough
Although the witnesses emphasized the importance of guardrails to mitigate the worst consequences of AI, they also warned that regulation can only go so far.
The warning comes as Senate Majority Leader Chuck Schumer pushes his chamber to move forward with an AI regulatory framework, even as some lawmakers, particularly on the Republican side, argue it is too early for one.

Senate Majority Leader Chuck Schumer has made AI a focus of his tenure leading the chamber. (Tom Williams/CQ-Roll Call, Inc via Getty Images)
Asked by Sen. Angus King, I-Maine, whether introducing watermarking requirements for AI-generated content would help mitigate disinformation, Stevens called the approach "flawed."
"I think it's flawed in the sense that there will ultimately be hundreds or thousands of AI generators. Some will be large companies like Google and OpenAI, but there will also be many open models built outside the United States that, of course, would not be bound by U.S. regulation," the scientist said.
"We can have a law that says AI-generated content must be watermarked, but a fraudulent actor outside the [country], operating in Russia, China or elsewhere, wouldn't be bound by it and could produce a lot of material without those watermarks. And so maybe it could pass a test."
Stevens said the U.S. approach needs to be “more strategic” than watermarking laws.
"We need to authenticate real content all the way to the source. Whether it is true or not is another question."