“The Godfather of AI” leaves Google and warns of dangers ahead

Geoffrey Hinton was a pioneer of artificial intelligence. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto developed a technology that became the intellectual foundation for the AI systems that the biggest companies in the tech industry believe hold the key to their future.

On Monday, however, he officially joined a growing chorus of critics who say these companies are racing toward danger with their aggressive campaign to develop products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he could speak freely about the risks of AI. A part of him, he said, now regrets his life’s work.

“I console myself with the usual excuse: If I hadn’t done it, someone else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his Toronto home, just a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from AI trailblazer to doomsayer marks a remarkable moment for the tech industry at what is perhaps its most important turning point in decades. Industry leaders believe the new AI systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug discovery to education.

But many industry insiders are nagged by the fear that they are releasing something dangerous into the wild. Generative AI can already be a tool for misinformation. Jobs could soon be at risk. At some point down the line, the industry’s biggest worriers say, it could pose a risk to humanity.

“It’s hard to imagine how you can stop the bad actors from using it for bad things,” said Dr. Hinton.

After San Francisco startup OpenAI released a new version of ChatGPT in March, more than 1,000 tech leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because AI technologies pose “profound risks to society and humanity.”

A few days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, published their own letter warning of the risks of AI. That group included Eric Horvitz, chief scientific officer at Microsoft, which has used OpenAI’s technology in a wide range of products, including its search engine Bing.

Dr. Hinton, often referred to as “the godfather of AI,” didn’t sign either of those letters and said he didn’t want to publicly criticize Google or any other company until he had quit his job. He told the company last month that he was stepping down and spoke by phone Thursday with Sundar Pichai, chief executive officer of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist Jeff Dean said in a statement: “We remain committed to responsible use of AI. We are continually learning to understand emerging risks while boldly innovating.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career has been fueled by his personal belief in the development and use of an AI idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University but left the university for Canada because, he said, he was reluctant to accept Pentagon funding. Back then, most AI research in the United States was funded by the Department of Defense. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield – what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his Toronto students, Ilya Sutskever and Alex Krizhevsky, created a neural network that could analyze thousands of photos and teach itself to identify common objects like flowers, dogs and cars.

Google spent $44 million to acquire the company founded by Dr. Hinton and his two students. Their system led to the development of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever later became chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often referred to as the “Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from vast amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but that it was inferior to the way humans handle language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others. “Maybe what’s going on in these systems,” he said, “is actually a lot better than what’s going on in the brain.”

In his view, as companies improve their AI systems, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as the “proper steward” of the technology, careful not to release anything that could cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — a challenge to Google’s core business — Google is trying to use the same kind of technology. The tech giants are locked in a competition that may be unstoppable, said Dr. Hinton.

His immediate concern is that the internet will be deluged with fake photos, videos and text and the average person will be “unable to know what’s true”.

He is also concerned that AI technologies will turn the job market upside down over time. Today, chatbots like ChatGPT typically supplement human workers, but they could replace paralegals, personal assistants, translators, and others who handle routine tasks. “It takes away the drudgery,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology could pose a threat to humanity, because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes a problem, he said, as individuals and companies allow AI systems not only to generate their own computer code but to actually run that code themselves. And he dreads the day when truly autonomous weapons — those killer robots — become a reality.

“The idea that this stuff could actually get smarter than humans — a few people believed that,” he said. “But most people thought it was far away. And I thought it was far away. I thought it was 30 to 50 years or more away. Of course I don’t think that anymore.”

Many other experts, including many of his students and colleagues, consider this threat hypothetical. But Dr. Hinton believes the race between Google, Microsoft and others will escalate into a global race that will not end without some form of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, there is no way of knowing whether companies or countries are secretly working on the technology. The best hope, he said, is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on potentially dangerous technology, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He doesn’t say that anymore.
