AI went to Washington and here’s what you need to know about this stunning technology
On Tuesday, May 16, Sam Altman traveled to Washington. And today the world feels a little scarier.
There is so much movement, so much talk, and so much concern about the rapid advance of artificial intelligence (AI) into all areas of our lives. Hardly a day goes by without a new report on the disruptive impact, and potential dangers, of this technology. Large language models like ChatGPT have stunned the world with the speed of their learning and their current capabilities.
So it was only a matter of time before the government stepped in. Anything that moves this quickly and has such a huge impact on society inevitably raises questions of risk and regulation. That’s why Sam Altman, the CEO of OpenAI, the company behind ChatGPT, traveled to Washington this week to testify at a congressional hearing on oversight and regulation of generative AI.
It was an uncomfortable discussion, more like something out of a science fiction series than a congressional hearing. Consider some of the statements we’re hearing both on Capitol Hill and from companies concerned about AI:
Language of doom
- Altman acknowledged that AI could cause “significant harm to the world” if the technology goes wrong
- The potential for AI to be “destructive to humanity”
- “I think if this technology goes wrong, it can go quite wrong.”
Language about speed
- AI technology is “moving forward as fast as possible”
- It is “evolving by the minute”
Language about the progressive/aggressive nature of the technology:
- It “shows signs of human thought”
- It is “becoming smarter than humans”
The fact that Congress is taking a bipartisan approach to regulating AI, and that the technology’s own creators and industry leaders, like Elon Musk, are at the forefront sounding the alarm and demanding regulation, should give us pause.
There is clearly a need for regulation, as there is for other potentially harmful industries, from cigarettes to nuclear power.
But in the heat of the moment, amid the worry and fear, let’s not lose sight of the exciting potential of AI. Whether you love it, loathe it, adore it, or fear it, AI is here to stay. And it’s already touching your life in one way or another.
After Altman’s visit to Capitol Hill, it’s a good time to reconsider, and perhaps reframe, some perceptions and positions around AI, without relitigating whether it needs regulation. Here are four quick things to consider, or possible ways to reshape the debate about this mind-boggling technology:
- AI: Danger or welcome innovation? In every century of history there is a revolution that drives us forward. The printing press. Manufacturing. The internet. Now there is AI. We can portray it as a threat to free speech or to humanity in general. Or we can embrace it as an amazing new frontier and do what America does best: lead the world in innovation.
- Is it coming for us, or is it making our lives easier? There is no doubt that generative AI will have a significant impact on the job market. According to economists at Goldman Sachs, “the job market could face significant disruption,” as up to 300 million full-time jobs worldwide could potentially be automated in some way by the latest wave of AI like ChatGPT.
But this isn’t just a story of AI coming for us and replacing jobs. Instead of viewing AI as a job stealer, why not view it as a potential productivity boost? Throughout history, technical innovations that initially displaced labor have also, in the long run, created employment growth.
According to the Goldman Sachs report, widespread adoption of AI could ultimately increase labor productivity, and raise global GDP by 7% over a 10-year period: “The combination of significant labor cost savings, job creation and a productivity boost for workers who are not laid off raises the possibility of a labor productivity boom like that which occurred after the advent of earlier general-purpose technologies such as electric motors and personal computers.”
- Regulate what matters: Regulation is coming. Most people want it. The AI industry itself is demanding it. But as with so many issues, few people believe the government is equipped to regulate it well. Regulation needs to be defined the way we want it, with a framework that reassures people about their biggest concerns, such as bias, privacy and misinformation.
- Finally, don’t discount the technology: Remember, as hard as it is to believe, we’re still basically at version 1.0 of AI. As with so many new technologies and breakthroughs, there are many weaknesses today that won’t be there tomorrow. We can focus on these current shortcomings, or we can present the technology as an amazing work in progress, something that is here to stay and will get better quickly.
In our own company, we are exploring ways to use generative AI to support and improve our work. And we already see great potential for improving our productivity. Instead of fearing it, we should embrace it. And our language should reflect that change in mindset.
Lee Carter is president and partner of Maslansky + Partners, a language strategy firm built on the idea, “It’s not what you say, it’s what they hear,” and author of “Persuasion: Convincing Others When Facts Don’t Seem to Matter.” Follow her on Twitter at @lh_carter