11/3/2023 | Last Updated: 11/3/2023 10:48 PM (Makkah Time)
Large language models are computer programs trained on vast amounts of text to understand and produce human language. Deep-learning systems of this kind, such as ChatGPT, can answer questions, write essays, and translate between languages.
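To make the idea of "producing language from patterns in text" concrete, here is a deliberately tiny sketch in Python; it is purely illustrative and far simpler than any real large language model. It builds a bigram model that counts which word follows which in a training text, then predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

# Toy "training data" -- real models ingest billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word -- a crude form of
    the next-token prediction that underlies large language models."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat' ('cat' followed 'the' most often)
```

Everything this toy model "knows" comes from counting co-occurrences; whether scaling such statistical machinery up by many orders of magnitude amounts to understanding is exactly the question the article goes on to raise.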
The more these models are trained, the better they become at understanding and producing language, growing more proficient by the day. This raises a new question: is this software genuinely becoming more intelligent, developing a real capacity to understand what is asked of it, or is it a kind of zombie running on clever algorithms and pattern-matching?
In a new study published in the journal Trends in Neurosciences, led by researchers at the University of Tartu in Estonia, neuroscientists approach this question from a neurological angle.
Science denies it
According to the study, while responses from systems like ChatGPT may appear conscious, they most likely are not, and the team offers three lines of evidence for this claim:
First, the inputs to language models lack the embodied and embedded character of our sensory contact with the world around us. Embodied, embedded cognition means that cognition is not confined to the brain but is distributed across the body and its interactions with the surrounding environment, because our cognitive abilities are deeply intertwined with our physical experiences and sensorimotor engagement with the world. Large language models, by contrast, deal with text alone.
Second, the architectures of current AI algorithms lack key connectivity features of the brain's thalamocortical system, which has been linked to consciousness in mammals.
Third, the developmental and evolutionary trajectories that led to the emergence of sentient living beings have no parallel in artificial systems as they are conceived today.
The study also notes that real neurons are not like the "neurons" of artificial networks: the former are actual physical structures that can grow and change their shape, while the neural networks inside large language models are nothing more than pieces of code.
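To see the contrast concretely, here is a minimal sketch of what a single artificial "neuron" amounts to in code. This is the standard textbook weighted-sum-plus-activation formulation, not anything taken from the study, and the weights are made-up values:

```python
import math

# Learned parameters of one artificial "neuron" -- illustrative values.
weights = [0.8, -0.5, 0.3]
bias = 0.1

def neuron(inputs):
    """Weighted sum of inputs plus bias, squashed through a sigmoid.
    A fixed arithmetic rule: it has no body, does not grow, and never
    changes shape -- unlike a biological neuron."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([1.0, 0.5, -1.0]))  # deterministic: same input, same output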
From neuroscience to philosophy
In an essay by the famous American linguist and philosopher Noam Chomsky, the human mind is described not as a statistical machine that mines patterns in huge amounts of data to extrapolate the most likely answer, but as a remarkably efficient system that works with small amounts of information; it does not seek to infer brute correlations between data points but to construct deeper explanations.
In the same vein, John Searle, professor of philosophy of mind and language at the University of California, Berkeley, explains that these models are simulations of human beings and lack human awareness: artificial intelligence can speak and form sentences in a syntactic (grammatical) way, but not a semantic (meaningful) one, and the two are completely different things, because constructing sentences cannot by itself produce mental elements such as the meaning or significance that humans possess.
In conclusion, the authors of the new study emphasize that neuroscientists and philosophers still have a long way to go in understanding consciousness, and therefore a long way to go before machines become conscious.