Thursday, December 28, 2023

Origins of our current crises in the 1990s, the great malformation, and the illusion of race.

I'm passing on three clips I found most striking from David Brooks's recent NYTimes Sidney Awards column:

I generally don’t agree with the arguments of those on the populist right, but I have to admit there’s a lot of intellectual energy there these days. (The Sidneys go to essays that challenge readers, as well as to those that affirm.) With that, the first Sidney goes to Christopher Caldwell for his essay “The Fateful Nineties” in First Things. Most people see the 1990s as a golden moment for America — we’d won the Cold War, we enjoyed solid economic growth, the federal government sometimes ran surpluses, crime rates fell, tech took off.

Caldwell, on the other hand, describes the decade as one in which sensible people fell for a series of self-destructive illusions: Globalization means nation-states don’t matter. Cyberspace means the material world is less important. Capitalism can run on its own without a countervailing system of moral values. Elite technocrats can manage the world better than regular people. The world will be a better place if we cancel people for their linguistic infractions.

As Caldwell sums it up: “America’s discovery of world dominance might turn out in the 21st century to be what Spain’s discovery of gold had been in the 16th — a source of destabilization and decline disguised as a windfall.”

***************** 

In “The Great Malformation,” Talbot Brewer observes that parenthood comes with “an ironclad obligation to raise one’s children as best one can.” But these days parents have surrendered child rearing to the corporations that dominate the attention industry (TikTok, Facebook, Instagram and so on): “The work of cultural transmission is increasingly being conducted in such a way as to maximize the earnings of those who oversee it.”

He continues: “We would be astonished to discover a human community that did not attempt to pass along to its children a form of life that had won the affirmation of its elders. We would be utterly flabbergasted to discover a community that went to great lengths to pass along a form of life that its elders regarded as seriously deficient or mistaken. Yet we have slipped unawares into precisely this bizarre arrangement.” In most societies, the economy takes place in a historically rooted cultural setting. But in our world, he argues, the corporations own and determine the culture, shaping our preferences and forming, or not forming, our conception of the good.

*****************

It’s rare that an essay jolts my convictions on some major topic. But that happened with one by Subrena E. Smith and David Livingstone Smith, called “The Trouble With Race and Its Many Shades of Deceit,” in New Lines Magazine. The Smiths are, as they put it, a so-called mixed-race couple — she has brown skin, his is beige. They support the aims of diversity, equity and inclusion programs but argue that there is a fatal contradiction in many antiracism programs: “Although the purpose of anti-racist training is to vanquish racism, most of these initiatives are simultaneously committed to upholding and celebrating race.” They continue: “In the real world, can we have race without racism coming along for the ride? Trying to extinguish racism while shoring up race is like trying to put out a fire by pouring gasoline on it.”

I’ve heard this argument — that we should seek to get rid of the whole concept of race — before and dismissed it. I did so because too many people I know have formed their identity around racial solidarity — it’s a source of meaning and strength in their lives. The Smiths argue that this is a mistake because race is a myth: “The scientific study of human variation shows that race is not meaningfully understood as a biological grouping, and there are no such things as racial essences. There is now near consensus among scholars that race is an ideological construction rather than a biological fact. Race was fashioned for nothing that was good. History has shown us how groups of people ‘racialize’ other groups of people to justify their exploitation, oppression and annihilation.”

Monday, December 25, 2023

Large Language Models are not yet providing theories of human language.

From Dentella et al. (open access):

Significance
The synthetic language generated by recent Large Language Models (LMs) strongly resembles the natural languages of humans. This resemblance has given rise to claims that LMs can serve as the basis of a theory of human language. Given the absence of transparency as to what drives the performance of LMs, the characteristics of their language competence remain vague. Through systematic testing, we demonstrate that LMs perform nearly at chance in some language judgment tasks, while revealing a stark absence of response stability and a bias toward yes-responses. Our results raise the question of how knowledge of language in LMs is engineered to have specific characteristics that are absent from human performance.
Abstract
Humans are universally good in providing stable and accurate judgments about what forms part of their language and what not. Large Language Models (LMs) are claimed to possess human-like language abilities; hence, they are expected to emulate this behavior by providing both stable and accurate answers, when asked whether a string of words complies with or deviates from their next-word predictions. This work tests whether stability and accuracy are showcased by GPT-3/text-davinci-002, GPT-3/text-davinci-003, and ChatGPT, using a series of judgment tasks that tap on 8 linguistic phenomena: plural attraction, anaphora, center embedding, comparatives, intrusive resumption, negative polarity items, order of adjectives, and order of adverbs. For every phenomenon, 10 sentences (5 grammatical and 5 ungrammatical) are tested, each randomly repeated 10 times, totaling 800 elicited judgments per LM (total n = 2,400). Our results reveal variable above-chance accuracy in the grammatical condition, below-chance accuracy in the ungrammatical condition, a significant instability of answers across phenomena, and a yes-response bias for all the tested LMs. Furthermore, we found no evidence that repetition aids the Models to converge on a processing strategy that culminates in stable answers, either accurate or inaccurate. We demonstrate that the LMs’ performance in identifying (un)grammatical word patterns is in stark contrast to what is observed in humans (n = 80, tested on the same tasks) and argue that adopting LMs as theories of human language is not motivated at their current stage of development.
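The headline finding is easy to make concrete. The toy sketch below (mine, not the authors' code) shows how a judge with nothing but a yes-bias reproduces the reported asymmetry: above-chance accuracy on grammatical sentences, below-chance accuracy on ungrammatical ones. The accuracy and stability measures are the obvious ones, and the 80% yes-rate is an arbitrary illustrative choice:

```python
import random

random.seed(0)

def accuracy(responses, grammatical):
    """Fraction of repeated yes/no judgments that match ground truth."""
    return sum(r == grammatical for r in responses) / len(responses)

def stability(responses):
    """Fraction of repetitions agreeing with the majority answer (1.0 = perfectly stable)."""
    yes = sum(responses)
    return max(yes, len(responses) - yes) / len(responses)

def yes_biased_judge(p_yes=0.8):
    """Toy 'model' that answers 'grammatical' with probability p_yes, ignoring the sentence."""
    return random.random() < p_yes

n_reps = 10  # each sentence judged 10 times, mirroring the paper's design
gram = [[yes_biased_judge() for _ in range(n_reps)] for _ in range(5)]
ungram = [[yes_biased_judge() for _ in range(n_reps)] for _ in range(5)]

acc_gram = sum(accuracy(r, True) for r in gram) / len(gram)
acc_ungram = sum(accuracy(r, False) for r in ungram) / len(ungram)
# A pure yes-bias yields above-chance accuracy in the grammatical condition
# and below-chance accuracy in the ungrammatical one -- the reported pattern.
```

The point of the sketch is that the accuracy asymmetry alone does not require any grammatical knowledge; that is why the authors also measure response stability across repetitions.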

Friday, December 22, 2023

Three common assumptions about inflammation, aging, and health that are probably wrong

The abstract of a recent PNAS article by Thomas W. McDade (motivated readers can obtain the whole article from me):  

Significance

Inflammation is one of the most important, and potent, physiological systems in the human body. It is widely assumed that levels of inflammation increase with age and that chronic inflammation contributes to cardiovascular diseases. This understanding of inflammation is based on studies of people living in affluent, industrialized settings with low burdens of infectious disease. A broader view, based on research conducted across a wider range of ecological settings globally, indicates that chronic inflammation is not necessarily a “normal” part of aging and that the association between inflammation and age-related diseases is not inevitable. It also suggests that environments early in development have lasting effects on the regulation of inflammation in adulthood, with implications for diseases of aging.
Abstract
Chronic inflammation contributes to the onset and progression of cardiovascular disease and other degenerative diseases of aging. But does it have to? This article considers the associations among inflammation, aging, and health through the lens of human population biology and suggests that chronic inflammation is not a normal nor inevitable component of aging. It is commonly assumed that conclusions drawn from research in affluent, industrialized countries can be applied globally; that aging processes leading to morbidity and mortality begin in middle age; and that inflammation is pathological. These foundational assumptions have shifted focus away from inflammation as a beneficial response to infection or injury and toward an understanding of inflammation as chronic, dysregulated, and dangerous. Findings from community-based studies around the world—many conducted in areas with relatively high burdens of infectious disease—challenge these assumptions by documenting substantial variation in levels of inflammation and patterns of association with disease. They also indicate that nutritional, microbial, and psychosocial environments in infancy and childhood play important roles in shaping inflammatory phenotypes and their contributions to diseases of aging. A comparative, developmental, and ecological approach has the potential to generate novel insights into the regulation of inflammation and how it relates to human health over the life course.

Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this posting to save for myself some clips from Venkatesh Rao’s most recent piece, to continue mulling over where I place it on the trivial-versus-sublime spectrum. (Some of his jargon will only make sense if you have followed the previous installments on Rao I've put in MindBlog; note the link at the end to the Summer of Protocols.) Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.
But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.
So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:
Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.
In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.
I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.
Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.
Call this property hardness.
Human civilization depends in part on our ability to make the future more certain in specific ways.
Fixed, hard points across time that let us make the world more predictable.
We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.
The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.
It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.
In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.
The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.
The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.
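Rao's ATM example above is a protocol fix small enough to sketch in a few lines: re-ordering two steps moves the burden from human memory (soft) onto a machine-enforced sequence (hard). The function names below are mine, purely illustrative:

```python
def atm_session_soft(dispense_cash, return_card):
    # Old ordering: cash first, card second. Completing the protocol
    # depends on the user remembering to wait for the card (soft).
    dispense_cash()
    return_card()

def atm_session_hard(dispense_cash, return_card):
    # Re-ordered: the card must come out before any cash does, so the
    # desired sequence is machine-enforced (hard).
    return_card()
    dispense_cash()

events = []
atm_session_hard(lambda: events.append("cash"), lambda: events.append("card"))
# events is now ["card", "cash"]: taking the card is no longer optional.
```

The "hardness engineering" here is nothing but the ordering constraint; no new mechanism is added, only a guarantee about sequence.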

Monday, December 18, 2023

Traumatic Memories Are Processed as Present Experience

Ellen Barry points to work by Perl et al. showing that different patterns of brain activity underlie sad versus traumatic autobiographical memories.  The Perl et al. abstract:

For people with post-traumatic stress disorder (PTSD), recall of traumatic memories often displays as intrusions that differ profoundly from processing of ‘regular’ negative memories. These mnemonic features fueled theories speculating a unique cognitive state linked with traumatic memories. Yet, to date, little empirical evidence supports this view. Here we examined neural activity of patients with PTSD who were listening to narratives depicting their own memories. An intersubject representational similarity analysis of cross-subject semantic content and neural patterns revealed a differentiation in hippocampal representation by narrative type: semantically similar, sad autobiographical memories elicited similar neural representations across participants. By contrast, within the same individuals, semantically similar trauma memories were not represented similarly. Furthermore, we were able to decode memory type from hippocampal multivoxel patterns. Finally, individual symptom severity modulated semantic representation of the traumatic narratives in the posterior cingulate cortex. Taken together, these findings suggest that traumatic memories are an alternative cognitive entity that deviates from memory per se.
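The intersubject representational similarity analysis at the core of the study can be illustrated in miniature: compute pairwise similarity between subjects in semantic space, do the same for their neural patterns, and correlate the two sets of similarities. The sketch below uses random stand-in data and illustrative dimensions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, n_dims, n_voxels = 12, 20, 50
semantic = rng.standard_normal((n_subjects, n_dims))   # stand-in narrative embeddings
neural = rng.standard_normal((n_subjects, n_voxels))   # stand-in hippocampal patterns

def upper_triangle(m):
    """Off-diagonal upper-triangle entries of a square matrix, as a vector."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

# Pairwise (subject x subject) similarity in each space.
sem_sim = upper_triangle(np.corrcoef(semantic))
neu_sim = upper_triangle(np.corrcoef(neural))

# RSA statistic: do subject pairs with similar narrative content also
# show similar neural patterns?
rsa_r = np.corrcoef(sem_sim, neu_sim)[0, 1]
```

In the study's terms, sad memories showed this semantic-to-neural correspondence across participants while trauma memories did not, which is what licenses the claim that traumatic memories are represented differently.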

Friday, December 15, 2023

The ebb and flow of physical and cognitive fatigue

Matthews et al. have investigated moment-to-moment fluctuations in fatigue using behavioral experiments and computational modeling to offer a precise account of how fatigue waxes (during physical and cognitive effort) and wanes (during rest). From Bijleveld's review of the work:
A key insight from this work is that there are important parallels between physical and cognitive fatigue. Specifically, for both types of fatigue, the best-fitting computational model parsed fatigue into two components: a recoverable component (i.e., the share of fatigue that increases with effort and recovers with rest) and an unrecoverable component (i.e., the share of fatigue that only increases with effort and does not recover with rest, at least not within the ~1-h session). For physical fatigue, this result conceptually replicates a previous study; for cognitive fatigue, this result is new and original. Together, these findings paint a nuanced picture of how both physical and cognitive fatigue emerge over time. From an applied angle, the distinction between recoverable and unrecoverable fatigue is intriguing: it has potential implications for task design, such as in the contexts of education and work...On balance, physical and cognitive fatigue may be more similar than they are different.
Here is the Matthews et al. abstract:
The willingness to exert effort for reward is essential but comes at the cost of fatigue. Theories suggest fatigue increases after both physical and cognitive exertion, subsequently reducing the motivation to exert effort. Yet a mechanistic understanding of how this happens on a moment-to-moment basis, and whether mechanisms are common to both mental and physical effort, is lacking. In two studies, participants reported momentary (trial-by-trial) ratings of fatigue during an effort-based decision-making task requiring either physical (grip-force) or cognitive (mental arithmetic) effort. Using a novel computational model, we show that fatigue fluctuates from trial-to-trial as a function of exerted effort and predicts subsequent choices. This mechanism was shared across the domains. Selective to the cognitive domain, committing errors also induced momentary increases in feelings of fatigue. These findings provide insight into the computations underlying the influence of effortful exertion on fatigue and motivation, in both physical and cognitive domains.
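The two-component account Bijleveld describes can be written as a simple trial-by-trial update rule: a recoverable term that grows with effort and decays during rest, plus an unrecoverable term that only accumulates. The parameter values here are illustrative, not those fitted by Matthews et al.:

```python
def simulate_fatigue(trials, alpha=0.3, beta=0.05, delta=0.4):
    """trials: list of effort levels per trial (0 = rest). Returns total fatigue per trial.

    alpha: growth rate of recoverable fatigue with effort
    beta:  growth rate of unrecoverable fatigue with effort
    delta: proportional recovery of the recoverable component during rest
    """
    recoverable, unrecoverable, out = 0.0, 0.0, []
    for effort in trials:
        if effort > 0:
            recoverable += alpha * effort
            unrecoverable += beta * effort
        else:
            recoverable *= (1 - delta)   # rest heals only this component
        out.append(recoverable + unrecoverable)
    return out

# Three effortful trials, then three rest trials: total fatigue dips during
# rest but never returns to zero, because the unrecoverable share remains.
trace = simulate_fatigue([1, 1, 1, 0, 0, 0])
```

The qualitative signature of the model is visible even in this toy version: fatigue ratchets upward with effort, partially recovers with rest, and settles on a floor set by the unrecoverable component.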

Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao on how modern AI puts us into an age of exploration of computational reality. I began my usual gig of extracting clips from the article to archive the main points in a post for later lookup by myself or others. Halfway through, I realized the material was so deep and dense that my clips contained more than half of the original text. So I halted the editing down and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which is a vastly better job than I could have done without spending many more hours on the material: 

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.


Monday, December 11, 2023

There are more living cells on Earth than stars in the universe: ~ 10^30 vs. ~ 10^22

These numbers come from a fascinating piece by Dennis Overbye summarizing work by a team of biologists and geologists. Selected clips:

According to the fossil record, geology and evolution have been engaged in a dance for 3.8 billion years, since our planet was only 700 million years old. It was then that the first single-celled creatures appeared, perhaps in undersea volcanic vents, feasting on the chemical energy around them.
The population of cells has been growing exponentially ever since, even through geological disasters and extinction events, which opened up new avenues of evolution.
The seeds for animal life were sown sometime in the dim past when some bacterium learned to use sunlight to split water molecules and produce oxygen and sugar. By 2.4 billion years ago, with photosynthesis well-established, the amount of oxygen in the atmosphere began to rise dramatically. The Great Oxidation Event was clearly the biggest event in the history of the biosphere.
Without photosynthesis, the rest of creation would have little to eat. But it is just one strand in a web of geological feedback loops by which weather, oceans, microbes and volcanoes conspire to keep the globe basically stable and warm and allow life to grow.
The carbonate silicate cycle, for example, regulates the amount of carbon dioxide in the atmosphere; the gas traps heat and keeps the planet temperate and mostly stable. Rain washes carbon dioxide from the air and into the ocean; volcanoes disgorge it again from the underworld. As a result, a trillion gigatons of carbon have been cycled from gas to life and back again over the millenniums. That’s about 100 times as much carbon as exists on Earth, which suggests that, in principle, every atom of carbon has been recycled 100 times.
The rise of cyanobacteria set off what is known as the Cambrian Explosion about 550 million years ago, when multicellular creatures — animals — appeared in sudden splendiferous profusion in the fossil record. We were off to the Darwinian races.
Crockford and his colleagues realized that they could trace the population growth of cells through time by measuring mineral isotopes and the amount of oxygen in old rocks. As a result, they were able to estimate the total life that Earth has produced since its beginning — about 10^40 cells, roughly 10 billion times more than currently exist.
Although this number sounds huge, it represents only 10 percent of all the cells that will come about by the time the curtain falls on life on Earth a billion years from now. As the sun ages, it will brighten, astronomers say, amplifying the weathering and washing away of carbon dioxide. At the same time, as Earth’s interior gradually cools, volcanic activity will subside, cutting off the replenishment of the greenhouse gas.
As a result, Dr. Crockford said, “it is unlikely that Earth’s biosphere will ever grow beyond a time-integrated ∼10^41 cells across the planet’s entire habitable lifetime.”
But for now, Dr. Crockford and his colleagues wrote in their paper, “the extension of today’s relatively high rates of primary productivity will likely squeeze more life into less time.” The more cells there are, the more times they will replicate, producing more mutations, Dr. Crockford explained. We inhabitants of Earth’s biosphere have a billion years’ worth of surprises ahead of us.
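The headline ratios in these clips are easy to sanity-check with back-of-envelope arithmetic:

```python
living_cells_now = 1e30      # current living cells on Earth (from the title)
stars_in_universe = 1e22     # rough count of stars in the observable universe
cells_ever = 1e40            # Crockford et al.: all cells Earth has produced so far
cells_lifetime = 1e41        # projected total over the planet's habitable lifetime

cells_per_star = living_cells_now / stars_in_universe   # ~1e8: a hundred million cells per star
ever_vs_now = cells_ever / living_cells_now             # ~1e10: "10 billion times more"
fraction_so_far = cells_ever / cells_lifetime           # ~0.1: "only 10 percent"
```

The numbers are internally consistent: 10^40 cells so far is indeed 10 billion times the current 10^30, and one-tenth of the projected 10^41 lifetime total.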

Thursday, December 07, 2023

Google's Bard and ChatGPT-4 on using AI to explore beyond language and words...

I show first the prompt I addressed yesterday afternoon to both Bard and ChatGPT-4. The more nuanced response I received from Bard makes me wonder whether Google has already started to use the new Gemini AI models announced yesterday to power Bard. Here is the prompt, followed by the Bard and ChatGPT-4 responses:

Exploring beyond language and words - starting a conversation: Let us suppose that increasingly sophisticated LLMs, generative pretrained transformers, will become superior to humans in performing any linguistic chores requiring use and knowledge of all the world's written literature that was used as the training opus. This might free humans to pay more attention to understanding non-linguistic aspects of their experiences (such as non verbal music, art, prosody, body language, social mirroring, etc). Could you suggest ways in which AI could assist in defining and elucidating the non-linguistic aspects of our lives so that we might devote more of our attention to them rather than to linguistic chores ?

Google's Bard's response:

How AI could help us explore beyond language and words:

Your suggestion about increasingly sophisticated LLMs taking over linguistic tasks opens up a fascinating possibility: freeing humans to focus on non-linguistic aspects of experience. Here are some ways AI could assist us in this exploration:

1. Data analysis and pattern recognition:

  • Large-scale analysis of nonverbal communication: AI could analyze vast datasets of audio, video, and sensor data to identify patterns and relationships between nonverbal cues like facial expressions, gestures, posture, and vocal intonation. This could deepen our understanding of non-verbal communication in various contexts, from interpersonal communication to artistic expression.
  • Mapping emotions and mental states: AI could analyze physiological data like heart rate, skin conductance, and brain activity to identify correlations with emotions and mental states. This could lead to the development of tools that help us better understand our own and others' emotions, fostering empathy and improving communication.
  • Identifying patterns in non-verbal art forms: AI could analyze large collections of music, paintings, sculptures, and other non-verbal art forms to identify recurring patterns, styles, and themes. This could help us discover new insights into human creativity and unlock hidden meanings in art.

2. AI as a creative collaborator:

  • Generating novel forms of non-verbal communication: AI could be used to generate new forms of non-verbal communication, such as synthesized musical pieces, abstract visual art, or even gestures and facial expressions. This could lead to new artistic expressions and enhance communication beyond words.
  • Personalized non-verbal experiences: AI could personalize non-verbal experiences based on individual preferences and emotional states. For example, AI could curate music playlists or generate visual art that resonates with a user's mood. This could deepen our emotional engagement with non-verbal art forms and enhance our well-being.
  • Interactive storytelling and learning: AI could create interactive non-verbal narratives that adapt to user choices and emotions. This could provide engaging and personalized learning experiences in various fields, from language learning to cultural immersion.

3. AI as a facilitator for human interaction:

  • Real-time translation and interpretation of non-verbal cues: AI could be used to translate nonverbal cues in real-time, facilitating communication across cultures and bridging the gap between people with different communication styles.
  • Augmenting sensory experiences: AI could be used to create immersive virtual environments that stimulate various senses, allowing us to explore and interact with the world in new ways. This could broaden our understanding of non-verbal communication and deepen our appreciation for sensory experiences.
  • Promoting mindfulness and self-awareness: AI could be used to develop tools that help us become more aware of our own non-verbal communication and emotions. This could enhance our self-awareness and improve our ability to manage our emotions and relationships.

4. Ethical considerations:

While AI holds immense potential for assisting us in exploring the non-linguistic aspects of our lives, ethical considerations must be addressed. These include:

  • Bias and fairness: AI models are susceptible to biases present in the data they are trained on. It is crucial to ensure that AI tools used to analyze and interpret non-verbal communication are unbiased and fair.
  • Data privacy and security: The use of AI necessitates the collection and analysis of personal data. It is essential to ensure data privacy and security, safeguarding individuals' information from unauthorized access or misuse.
  • Human autonomy and control: AI should be used to enhance human understanding and experience, not replace it. We must ensure that AI remains a tool that empowers humans to explore and appreciate the non-linguistic aspects of life, not a force that dictates our thoughts and emotions.

By leveraging the power of AI responsibly and ethically, we can embark on a journey of deeper understanding and engagement with the non-linguistic aspects of our existence. This journey holds the potential to enrich our lives, strengthen our relationships, and unlock new forms of creativity and expression.

*********

ChatGPT-4's response:

The idea of AI assisting in understanding and elucidating non-linguistic aspects of human experience is intriguing and holds much potential. Here are several ways in which AI could be instrumental in this area:

  1. Enhanced Emotional Recognition and Analysis: AI can be trained to recognize and interpret human emotions through facial expressions, voice modulation, and body language. This would be particularly useful in enhancing interpersonal communication and understanding, especially in situations where verbal communication is limited or not possible.

  2. Music and Art Analysis: AI can analyze patterns, styles, and historical influences in music and art. By doing so, it can provide insights into the emotional and cultural contexts of various artworks or musical pieces, deepening our appreciation and understanding of these forms.

  3. Virtual Reality and Augmented Reality Experiences: AI can be used to create immersive VR and AR experiences that enable individuals to explore non-linguistic aspects of human culture and history, such as virtual tours of art galleries, historical sites, or even recreations of historical events.

  4. Non-Verbal Communication Training: AI can assist in teaching and improving non-verbal communication skills, such as understanding body language, gestures, and facial expressions, which are crucial for effective interpersonal interactions.

  5. Social Mirroring and Behavioral Analysis: AI can analyze social interactions and provide feedback or insights into social dynamics. This could be useful in understanding group behaviors, cultural norms, and even in improving social skills.

  6. Sensory Enhancement Technologies: AI-driven technologies could enhance or augment human senses, allowing for a deeper engagement with the non-linguistic elements of the world, such as experiencing a wider range of sounds or visual details.

  7. Emotional and Psychological Support: AI can offer personalized support and interventions based on non-linguistic cues, aiding in mental health and emotional well-being.

  8. Enhancing Creative Processes: AI can assist artists, musicians, and creators by suggesting non-linguistic elements that might enhance their work, based on analysis of vast amounts of artistic and musical data.

  9. Personalized Learning and Development: AI can tailor learning experiences to individual preferences and learning styles, focusing on non-linguistic methods like interactive simulations, visual aids, and hands-on activities.

  10. Cross-Cultural Understanding and Communication: AI can help bridge cultural gaps by providing insights into non-verbal cues and norms that are specific to different cultures, aiding in cross-cultural understanding and communication.

In all these ways, AI has the potential to significantly enrich human understanding and appreciation of the non-linguistic aspects of our lives, allowing us to engage more deeply with the world beyond words.

Sunday, December 03, 2023

A flash of clarity on what current LLMs can and cannot do. An AI apocalypse does not appear to be imminent...

In his most recent newsletter, Venkatesh Rao pulls up a Twitter thread he wrote in 2017 making what he calls an ontological distinction between boundary intelligence and interior intelligence. This was before transformers like GPT-1 began to attract wider attention. The distinction Rao makes is central to understanding what current large language models (LLMs) can and cannot do. Here is his unedited text from 2017:
 
1. I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence

2. Boundary intelligence is how you deal with information flows across the boundary of your processing abilities 

3. Interior intelligence is how you process information. Includes logic, emotional self-regulation, etc.

4. A thesis I've been converging on is that boundary intelligence is VASTLY more consequential once interior intelligence exceeds a minimum

5. Boundary intelligence is by definition meta, since you're tuning your filters and making choices about what to even let hit your attention

6. I think it is highly consequential because almost all risk management happens via boundary intelligence (blindspots, black swans etc)

7. Interior intelligence is your poker skill and strategy. Boundary intelligence is picking which table to sit down at

8. Interior intelligence is reading a book competently, extracting insights and arguments. Boundary intelligence is picking books to read. 

9. Interior intelligence is being a good listener. Boundary intelligence is deciding whom to listen to. 

10. Basically, better input plus mediocre IQ beats bad input and genius IQ every time, so boundary intelligence is leverage

11. And obviously, boundary intelligence is more sensitive to context. The noisier and angrier info streams get, the more BI beats II

12. Most of boundary intelligence has to do with input intelligence, but output intelligence becomes more important with higher agency 

13. Output intelligence is basically the metacognition around when/where/how/to-whom/why to say or do things you are capable of saying/doing

14. We think a lot about external factors in decisions, but output intelligence is about freedom left after you've dealt with external part

Next, from the abstract of a forthcoming paper by Yadlowsky et al., Rao extracts the following:

…when presented with tasks or functions which are out-of-domain of their pretraining data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks. Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities.

And then, in the following selected clips, Rao continues:

Translated into the idiom of the fourteen points above, this amounts to: “It’s all interior intelligence, just within a very large boundary.” There is no boundary intelligence in current machine learning paradigms. There isn’t even an awareness of boundaries; just the ability to spout statements about doubt, unknowns, and boundaries of knowability; a bit like a blind person discussing color in the abstract.

This is not to say AI cannot acquire BI. In fact, it can do so in a very trivial way, through embodiment. Just add robots around current AIs and let them loose in real environments.

The reason people resist this conclusion is an irrational attachment to interior intelligence as a sacred cow (and among computer science supremacists, a reluctance to acknowledge the relevance and power of embodiment and situatedness in understandings of intelligence). If much of the effectual power of intelligence is attributable to boundary intelligence, there is much less room for sexy theories of interior intelligence. Your (cherished or feared) god-like AI is reduced to learning through FAFO (Fuck around and find out) feedback relationships with the rest of the universe, across its boundary, same as us sadsack meatbag intelligences with our paltry 4-GPU-grade interior intelligence.

In their current (undoubtedly very impressive) incarnation, what we have with AI is 100% II, 0% BI. Human and animal intelligences (and I suspect even plant intelligences, and definitely evolutionary process intelligence) are somewhere between 51-49 to 99.9-0.1% BI. They are dominated to varying degrees by boundary intelligence. Evolutionary processes are 100% BI, 0% II.
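The point Rao takes from the Yadlowsky et al. abstract, that apparent generalization tracks the coverage of the training distribution, can be illustrated with a deliberately tiny sketch. This is ordinary polynomial least-squares fitting, not a transformer (the analogy is loose, and the numbers below are illustrative choices, not from the paper), but the failure mode is the same in miniature: a model fit only on one interval looks excellent in-domain and falls apart the moment it is asked to extrapolate beyond it.

```python
import numpy as np

# Fit a degree-5 polynomial to sin(x) using only samples drawn from
# [0, pi] -- the model's entire "pretraining distribution".
x_train = np.linspace(0, np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

# In-domain query: a point inside the training interval.
x_in = 1.0
err_in = abs(np.polyval(coeffs, x_in) - np.sin(x_in))

# Out-of-domain query: one interval-length beyond the training data.
x_out = 2 * np.pi
err_out = abs(np.polyval(coeffs, x_out) - np.sin(x_out))

print(f"in-domain error:     {err_in:.2e}")   # tiny
print(f"out-of-domain error: {err_out:.2e}")  # orders of magnitude larger
```

Nothing in the fitting procedure "knows" where its competence ends; the model reports a confident number either way. That is Rao's "all interior intelligence, within a very large boundary": the boundary exists, but the system has no representation of it.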