Some interesting perspectives from Aru, Larkum, and Shine in Trends in Neurosciences. Motivated readers can obtain a copy of the article's text from me.
Highlights
Large language models (LLMs) can produce text that leaves the impression that one may be interacting with a conscious agent.
Present-day LLMs are text-centric, whereas the phenomenological umwelt of living organisms is multifaceted and integrated.
Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing. Currently, such processes are not implemented in LLMs.
The organizational complexity of living systems has no parallel in present-day AI tools. Possibly, AI systems would have to capture this biological complexity to be considered conscious.
LLMs and the current debates on conscious machines provide an opportunity to re-examine some core ideas of the science of consciousness.

Abstract
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions, and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.
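To make the architectural point concrete, here is a minimal sketch of my own (not from the article), assuming a toy two-stage network of eight units: a transformer-style stack transforms its input exactly once per layer, whereas re-entrant processing feeds a higher stage's output back into a lower stage so the representation is revised over time. The names feedforward and reentrant are purely illustrative.

```python
import numpy as np

# Toy weights for two processing stages (8 units each); purely illustrative.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 8))
W2 = rng.standard_normal((8, 8))

def feedforward(x):
    # One-shot pass, as in an LLM's layer stack:
    # stage 1, then stage 2, and the computation is done.
    return np.tanh(W2 @ np.tanh(W1 @ x))

def reentrant(x, steps=5):
    # Toy re-entrant loop: the higher stage's output h re-enters
    # the lower stage, so the signal is iteratively reworked.
    h = np.zeros(8)
    for _ in range(steps):
        h = np.tanh(W2 @ np.tanh(W1 @ x + h))  # feedback term h
    return h

x = rng.standard_normal(8)
print(feedforward(x))  # fixed after a single pass
print(reentrant(x))    # settles only through repeated feedback
```

The contrast is of course a cartoon, but it shows the structural difference the authors point to: in the feedforward case there is simply no pathway by which later processing can influence earlier processing, which is the defining feature of thalamocortical re-entry.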