Monday, February 19, 2024

Comparing how generative AI and living organisms generate meaning suggests future direction for AI development

I want to pass on this open-access opinion article in Trends in Cognitive Sciences by Karl Friston, Andy Clark, and other prominent figures who study generative models of sentient behavior in living organisms. (They suggest a future direction for AI development that is very similar to the vision described in the previous MindBlog post, which covered a recent article by Venkatesh Rao.) Here are the highlights and abstract of the article.

Highlights

  • Generative artificial intelligence (AI) systems, such as large language models (LLMs), have achieved remarkable performance in various tasks such as text and image generation.
  • We discuss the foundations of generative AI systems by comparing them with our current understanding of living organisms, when seen as active inference systems.
  • Both generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways.
  • Living organisms and active inference agents learn their generative models by engaging in purposive interactions with the environment and by predicting these interactions. This provides them with a core understanding and a sense of mattering, upon which their subsequent knowledge is grounded.
  • Future generative AI systems might follow the same (biomimetic) approach – and learn the affordances implicit in embodied engagement with the world before – or instead of – being trained passively.
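The contrast the highlights draw — learning a generative model through purposive interaction and prediction, rather than through passive training — can be made concrete with a toy sketch. The following is a minimal illustration, not code from the article: it assumes a one-dimensional world, an identity observation model, and quadratic prediction errors (a simple free energy). The agent both updates its belief (perception) and changes the world (action) by descending the same error gradient, so its predictions are constantly "put to the test" by its own interventions. All names and the learning rate are illustrative assumptions.

```python
def active_inference_step(x, mu, prior, lr=0.1):
    """One step of a toy active inference loop (illustrative sketch only).

    x     : true hidden state of the world
    mu    : the agent's belief about that state
    prior : the agent's preferred (goal) sensation
    """
    s = x                           # sensation: identity observation model (assumption)
    eps_s = s - mu                  # sensory prediction error
    eps_p = mu - prior              # error between belief and preferred state
    mu = mu + lr * (eps_s - eps_p)  # perception: revise belief to reduce both errors
    x = x - lr * eps_s              # action: change the world so sensation matches belief
    return x, mu

def simulate(prior=2.0, steps=200):
    """Run the loop; belief and world both converge on the preferred state."""
    x, mu = 0.0, 0.0
    for _ in range(steps):
        x, mu = active_inference_step(x, mu, prior)
    return x, mu
```

The design point this toy makes is the one the authors stress: unlike a passively trained model, the agent's action term closes the loop between prediction and world, so the generative model is grounded in the sensory consequences of its own behavior.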

Abstract

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI.
