Large language models are computer programs trained on vast amounts of text data to understand and generate human language. Deep learning systems of this kind, such as ChatGPT, can understand and respond to questions, write articles, and even translate between languages.
The more data these models are trained on, the better they become at understanding and generating language, which means they grow more capable by the day. At this point, a new question arises: does this software, which is becoming more intelligent and developing its capabilities day by day, have awareness and understand what is being asked of it, or is it merely a "zombie" running clever pattern-matching algorithms?
In a new study published in the journal Trends in Neurosciences, researchers led by a team from the University of Tartu in Estonia approach this question from a neuroscientific angle.
Science says no
According to the study, the team argues that although the responses of systems such as ChatGPT may appear conscious, they most likely are not, and it presents three pieces of evidence for this claim:
The first is that the input to language models lacks the embodied and embedded cognition that characterizes our sensory contact with the world around us. Embodied, embedded cognition is the idea that cognition is not confined to the brain but is distributed throughout the body and its interactions with the surrounding environment: our cognitive abilities are deeply intertwined with our physical experiences and sensorimotor interactions with the world, whereas large language models deal with text alone.
The second piece of evidence is that the architectures of current AI algorithms lack the key connectivity features of the brain's thalamocortical system, which has been linked to consciousness in mammals.
Third, the developmental and evolutionary paths that led to the emergence of sentient organisms in the first place have no parallel in artificial systems as they are conceived today.
According to the study, real neurons are nothing like the "neurons" of artificial neural networks: the former are physical entities that can grow and change their shape, while the neurons in a large language model are merely pieces of code.
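To make that contrast concrete, an artificial "neuron" is nothing more than a small calculation. The sketch below is purely illustrative (it is not code from the study, and the function name and numbers are invented): a weighted sum of inputs followed by an activation function.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term; this is the entire "neuron".
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A nonlinear activation function (here, the logistic sigmoid).
    return 1.0 / (1.0 + math.exp(-total))

# Example with made-up values: the "neuron" has no body, growth, or shape,
# only this arithmetic.
print(artificial_neuron([0.2, 0.5, 0.1], [0.4, -0.3, 0.8], bias=0.1))
```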
From neuroscience to philosophy
In an article by the renowned American linguist and philosopher Noam Chomsky, the human mind is described not as a statistical engine that analyzes patterns in huge amounts of data and deduces the most likely response, but as an efficient system that works with small amounts of information, one that seeks not to infer correlations between data points but to create explanations that go deeper than merely compiling probabilistic outputs.
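To see what "deducing the most likely response" amounts to, here is a toy sketch (not from Chomsky's article; the words and probabilities are invented) of next-word selection by a statistical engine:

```python
# Hypothetical probabilities for the next word after "The cat sat on the ..."
next_word_probs = {
    "mat": 0.62,
    "sofa": 0.21,
    "keyboard": 0.15,
    "moon": 0.02,
}

# The system simply picks the most probable continuation, nothing more.
most_likely = max(next_word_probs, key=next_word_probs.get)
print(most_likely)  # -> "mat"
```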
In this context, these models are simulations of humans and do not have human awareness, as John Searle, professor of the philosophy of mind and language at the University of California, Berkeley, explains: artificial intelligence speaks and forms sentences in a grammatical way, not a semantic or meaningful one, and the two are entirely separate matters, since constructing sentences cannot be the foundation of mental components such as meaning or significance, which humans possess.
In the end, the researchers behind the new study make it clear that neuroscientists and philosophers still have a long way to go toward understanding consciousness, and thus an even longer way to go before reaching conscious machines.