Evolution makes us treat AI like a human, and we need to break that habit


Artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the danger of the technology “getting smarter than us”.

His fear is that one day AI will be able to “manipulate people into doing what they want”.

There are good reasons to be concerned about AI. But we too often treat and talk about AI systems as if they were human.

Breaking this habit, and recognizing what these systems actually are, could help us maintain a fruitful relationship with the technology.

In a recent essay, the American psychologist Gary Marcus advised us to stop treating AI models like humans.

By AI models, he means large language models (LLMs) like ChatGPT and Bard, which are now used by millions of people every day.

He cites egregious examples of humans “overattributing” human-like cognitive abilities to AI, which has had a number of consequences.

Most amusing was the US Senator who claimed that ChatGPT had “taught itself chemistry”.

Most shocking was the report of a young Belgian who is said to have taken his own life after lengthy conversations with an AI chatbot.

Marcus is right when he says we should stop treating AI like humans — conscious moral agents with interests, hopes, and desires. For many, however, this will be difficult, if not almost impossible.

That’s because LLMs are designed – by humans – to interact with us as if they were human, and we – through biological evolution – are primed to interact with them in kind.

Good imitators

The reason LLMs can mimic human conversation so convincingly goes back to a profound insight from computing pioneer Alan Turing, who realized that a computer does not need to understand an algorithm in order to run it.

This means that while ChatGPT can produce paragraphs full of emotional language, it doesn’t understand a single word of the sentences it generates.

The designers of LLMs managed to turn the problem of semantics – the arrangement of words to create meaning – into a problem of statistics: instead of understanding what words mean, the model predicts which word is likely to come next, based on patterns in how words have been used before.
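To make that idea concrete, here is a minimal sketch in Python: a toy bigram model that generates text purely by counting which words have followed which. Real LLMs use neural networks over vastly longer contexts, and the corpus here is invented for illustration, but the principle of prediction without understanding is the same.

```python
import random
from collections import defaultdict, Counter

# Toy corpus. A real LLM trains a neural network on billions of words,
# but the underlying task is the same: predict the next token.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: this word only appeared at the end of the corpus
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text. The model "speaks" without representing meaning at all.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The program produces plausible-looking word sequences while representing nothing about cats, mats, or dogs, which is exactly the point being made here about competence divorced from understanding.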

Turing’s insight mirrors Darwin’s theory of evolution, which explains how species adapt to their environment and become increasingly complex without having to understand anything about their environment or themselves.

Cognitive scientist and philosopher Daniel Dennett coined the phrase “competence without understanding,” which perfectly encapsulates the insights of Darwin and Turing.

Another important contribution of Dennett’s is the “intentional stance”. This states that, in order to fully explain the behavior of an object (human or otherwise), we must treat it as a rational agent.

This most commonly manifests in our tendency to anthropomorphize non-human species and even inanimate objects.

But it’s useful. For example, if we want to beat a computer at chess, the best strategy is to think of it as a rational agent that “wants” to beat us.

We can say, without any contradiction, that the computer castled because it “wanted to protect its king from our attack”.

We can speak of a tree in a forest that “wants to grow” towards the light.

But neither the tree nor the chess computer actually represents those “desires” or reasons to itself; it is just that the best way to explain their behavior is to treat them as if they do.

Intentions and agency

Our evolutionary history has equipped us with mechanisms that lead us to find intention and agency everywhere.

In prehistory, these mechanisms helped our ancestors avoid predators and develop altruism towards their closest relatives.

These are the same mechanisms that cause us to see faces in clouds and to anthropomorphize inanimate objects. There is little harm in mistaking a tree for a bear, but mistaking a bear for a tree could be fatal.

Evolutionary psychology shows that we are always trying to interpret any object that might plausibly be human as human.

We unconsciously adopt the intentional stance, attributing all our cognitive abilities and emotions to this object.

Given the potential disruption that LLMs can cause, we need to be aware that they are merely probabilistic machines with no intentions and no concern for human beings.

When describing the human-like feats of LLMs and AI in general, we need to be extra vigilant about our use of language. Here are two examples.

The first was a recent study that found ChatGPT’s answers to patient questions to be more empathetic and of higher quality than those of doctors.

Using emotional words like “empathy” for an AI predisposes us to grant it the abilities to think, reflect, and genuinely care about others – abilities it does not have.

The second was when GPT-4 (the latest version of the technology behind ChatGPT) was released last month: it was credited with greater powers of creativity and reasoning.

However, we only see an increase in “competence” but still no “understanding” (in Dennett’s sense) and definitely no intentions – just pattern matching.

Locked and loaded

In his recent comments, Hinton warned of the near-term danger of “bad actors” using AI for subversive ends.

We could easily imagine a ruthless regime or a multinational using an AI trained on fake news and untruths to flood the public discourse with misinformation and deep fakes.

Fraudsters could also use AI to prey on people at risk of financial scams.

Last month, Gary Marcus and others, including Elon Musk, signed an open letter calling for an immediate pause in further development of LLMs.

Marcus has also called for the creation of an international agency to promote safe and peaceful AI technologies, dubbing it a “Cern for AI”.

Additionally, many have suggested that anything generated by an AI should carry a watermark so there can be no doubt as to whether we are interacting with a human or a chatbot.

Regulation in AI is lagging behind innovation, as is so often the case in other areas of life. There are more problems than solutions, and the gap is likely to widen before it narrows.

But in the meantime, repeating Dennett’s phrase “competence without understanding” might be the best antidote to our innate compulsion to treat AI like humans.

Written by Neil Saunders, The Conversation.

