
Monday, December 16, 2024

Analysis of the dumbing down of language on social media over time

Di Marco et al. (open source) present a comparative analysis of eight social media platforms (Facebook, Twitter, YouTube, Voat, Reddit, Usenet, Gab, and Telegram), focusing on the complexity of user comments and its shifts over time in a dataset of ~300 million English comments spanning 34 years. Their abstract:

Understanding the impact of digital platforms on user behavior presents foundational challenges, including issues related to polarization, misinformation dynamics, and variation in news consumption. Comparative analyses across platforms and over different years can provide critical insights into these phenomena. This study investigates the linguistic characteristics of user comments over 34 y, focusing on their complexity and temporal shifts. Using a dataset of approximately 300 million English comments from eight diverse platforms and topics, we examine user communications’ vocabulary size and linguistic richness and their evolution over time. Our findings reveal consistent patterns of complexity across social media platforms and topics, characterized by a nearly universal reduction in text length, diminished lexical richness, and decreased repetitiveness. Despite these trends, users consistently introduce new words into their comments at a nearly constant rate. This analysis underscores that platforms only partially influence the complexity of user comments but, instead, it reflects a broader pattern of linguistic change driven by social triggers, suggesting intrinsic tendencies in users’ online interactions comparable to historically recognized linguistic hybridization and contamination processes.
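For readers who want a concrete sense of the measures the abstract mentions (text length, vocabulary size, lexical richness), here is a minimal sketch of my own, not the authors' pipeline, that computes comment length and a simple richness index (type-token ratio) for a few toy comments:

```python
# Toy sketch (mine, not the authors' pipeline) of the kind of per-comment
# measures the abstract describes: text length, vocabulary size, and a
# simple lexical-richness index (type-token ratio).
import re
from statistics import mean

comments = [
    "This is a thoughtful, carefully argued comment about the article.",
    "lol same",
    "I disagree with the premise, but the data presented here are interesting.",
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def measures(text):
    tokens = tokenize(text)
    types = set(tokens)
    return {
        "length_tokens": len(tokens),
        "vocabulary_size": len(types),
        "type_token_ratio": len(types) / len(tokens) if tokens else 0.0,
    }

per_comment = [measures(c) for c in comments]
print("mean comment length (tokens):", round(mean(m["length_tokens"] for m in per_comment), 1))
print("mean type-token ratio:       ", round(mean(m["type_token_ratio"] for m in per_comment), 3))
```

Quantities like these, averaged per platform and per year, are the kind of measures whose decline the paper reports.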

Friday, July 05, 2024

ChatGPT as a "lab rat" for understanding how our brains process language.

I've now read twice through a fascinating PNAS piece by Mitchell Waldrop (open source, with useful references), and urge MindBlog readers to have a look. Our brains, as well as all of the GPT (Generative Pretrained Transformer) engines, are prediction machines. The following slightly edited extract gives context.
...computer simulations of language [are] working in ways that [are] strikingly similar to the left-hemisphere language regions of our brains, using the same computational principles...The reasons for this AI–brain alignment are still up for debate. But its existence is a huge opportunity for neuroscientists struggling to pin down precisely how the brain’s language regions actually work...What’s made this so difficult in the past is that language is a brain function unique to humans. So, unlike their colleagues studying vision or motor control, language researchers have never had animal models that they can slice, probe, and manipulate to tease out all the neural details.
But now that the new AI models have given them the next best thing—an electronic lab rat for language—Fedorenko and many other neuroscientists around the world have eagerly put these models to work. This requires care, if only because the AI–brain alignment doesn’t seem to encompass many cognitive skills other than language...Language is separate in the brain...there are left-side regions of the brain that are always activated by language—and nothing but language...the system responds in the same way to speaking, writing—all the kinds of languages a person knows and speaks, including sign languages. It doesn't respond to things that aren’t language, like logical puzzles, mathematical problems, or music.

Friday, June 07, 2024

Is it a fact? The epistemic force of language in news headlines.

From Chuey et al. in PNAS (open source):  

Significance

Headlines are an influential source of information, especially because people often do not read beyond them. We investigated how subtle differences in epistemic language in headlines (e.g., “believe” vs. “know”) affect readers’ inferences about whether claims are perceived as matters of fact or mere opinion. We found, for example, saying “Scientists believe methane emissions soared to a record in 2021” led readers to view methane levels as more a matter of opinion compared to saying “Scientists know…” Our results provide insight into how epistemic verbs journalists use affect whether claims are perceived as matters of fact and suggest a mechanism contributing to the rise of alternative facts and “post-truth” politics.
Abstract
How we reason about objectivity—whether an assertion has a ground truth—has implications for belief formation on wide-ranging topics. For example, if someone perceives climate change to be a matter of subjective opinion similar to the best movie genre, they may consider empirical claims about climate change as mere opinion and irrelevant to their beliefs. Here, we investigate whether the language employed by journalists might influence the perceived objectivity of news claims. Specifically, we ask whether factive verb framing (e.g., "Scientists know climate change is happening") increases perceived objectivity compared to nonfactive framing (e.g., "Scientists believe [...]"). Across eight studies (N = 2,785), participants read news headlines about unique, noncontroversial topics (studies 1a–b, 2a–b) or a familiar, controversial topic (climate change; studies 3a–b, 4a–b) and rated the truth and objectivity of the headlines’ claims. Across all eight studies, when claims were presented as beliefs (e.g., “Tortoise breeders believe tortoises are becoming more popular pets”), people consistently judged those claims as more subjective than claims presented as knowledge (e.g., “Tortoise breeders know…”), as well as claims presented as unattributed generics (e.g., “Tortoises are becoming more popular pets”). Surprisingly, verb framing had relatively little, inconsistent influence over participants’ judgments of the truth of claims. These results demonstrate how, apart from shaping whether we believe a claim is true or false, epistemic language in media can influence whether we believe a claim has an objective answer at all.

Friday, April 05, 2024

Our seduction by AI’s believable human voice.

I want to point to an excellent New Yorker article by Patrick House titled “The Lifelike Illusion of A.I.” The article strikes home for me, for when a chatbot responds to one of my prompts using the pronoun “I,” I unconsciously attribute personhood to the machine, forgetting that this is a cheap trick used by programmers of large language models to increase the plausibility of responses.

House starts off his article by describing the attachments people formed with the Furby, an animatronic toy resembling a small owl, and Pleo, an animatronic toy dinosaur. Both use a simple set of rules to make the toys appear to be alive. Furby’s eyes move up and down in a way meant to imitate an infant’s eye movements while scanning a parent’s face. Pleo mimes different emotional behaviors when touched differently.
For readers who hit the New Yorker paywall when they click the above link, here are a few clips from the article that I think get across the main points:
“A Furby possessed a pre-programmed set of around two hundred words across English and “Furbish,” a made-up language. It started by speaking Furbish; as people interacted with it, the Furby switched between its language dictionaries, creating the impression that it was learning English. The toy was “one motor—a pile of plastic,” Caleb Chung, a Furby engineer, told me. “But we’re so species-centric. That’s our big blind spot. That’s why it’s so easy to hack humans.” People who used the Furby simply assumed that it must be learning.”
Chung considers Furby and Pleo to be early, limited examples of artificial intelligence—the “single cell” form of a more advanced technology. When I asked him about the newest developments in A.I.—especially the large language models that power systems like ChatGPT—he compared the intentional design of Furby’s eye movements to the chatbots’ use of the word “I.” Both tactics are cheap, simple ways to increase believability. In this view, when ChatGPT uses the word “I,” it’s just blinking its plastic eyes, trying to convince you that it’s a living thing.
We know that, in principle, inanimate ejecta from the big bang can be converted into thinking, living matter. Is that process really happening in miniature at server farms maintained by Google, Meta, and Microsoft? One major obstacle to settling debates about the ontology of our computers is that we are biased to perceive traces of mind and intention even where there are none. In a famous 1944 study, two psychologists, Marianne Simmel and Fritz Heider, had participants watch a simple animation of two triangles and a circle moving around one another. They then asked some viewers what kind of “person” each of the shapes was. People described the shapes using words like “aggressive,” “quarrelsome,” “valiant,” “defiant,” “timid,” and “meek,” even though they knew that they’d been watching lifeless lines on a screen.
…chatbots are designed by teams of programmers, executives, and engineers working under corporate and social pressures to make a convincing product. “All these writers and physicists they’re hiring—that’s game design,” he said. “They’re basically making levels.” (In August of last year, OpenAI acquired an open-world-video-game studio, for an undisclosed amount.) Like a game, a chatbot requires user input to get going, and relies on continued interaction. Its guardrails can even be broken using certain prompts that act like cheat codes, letting players roam otherwise inaccessible areas. Blackley likened all the human tinkering involved in chatbot training to the set design required for “The Truman Show,” the TV program within the eponymous film. Without knowing it, Truman has lived his whole life surrounded not by real people but by actors playing roles—wife, friend, milkman. There’s a fantasy that “we’ve taken our great grand theories of intelligence and baked them into this model, and then we turned it on and suddenly it was exactly like this,” Blackley went on. “It’s much more like Truman’s show, in that they tweak it until it seems really cool.”
A modern chatbot isn’t a Furby. It’s not a motor and a pile of plastic. It’s an analytic behemoth trained on data containing an extraordinary quantity of human ingenuity. It’s one of the most complicated, surprising, and transformative advances in the history of computation. A Furby is knowable: its vocabulary is limited, its circuits fixed. A large language model generates ideas, words, and contexts never before known. It is also—when it takes on the form of a chatbot—a digital metamorph, a character-based shape-shifter, fluid in identity, persona, and design. To perceive its output as anything like life, or like human thinking, is to succumb to its role play.



Friday, March 29, 2024

How communication technology has enabled the corruption of our communication and culture.

I pass on two striking examples from today’s New York Times, with a few clips of text from each:

A.I.-Generated Garbage Is Polluting Our Culture:

(You really should read the whole article...I've given up on trying to assemble clips of text that get across the whole message, and pass on these bits towards the end of the article:)

....we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap A.I. content to maximize clicks and views, which in turn pollutes our culture and even weakens our grasp on reality. And so far, major A.I. companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture.
Is Threads the Good Place?:

Once upon a time on social media, the nicest app of them all, Instagram, home to animal bloopers and filtered selfies, established a land called Threads, a hospitable alternative to the cursed X...Threads would provide a new refuge. It would be Twitter But Nice, a Good Place where X’s liberal exiles could gather around for a free exchange of ideas and maybe even a bit of that 2012 Twitter magic — the goofy memes, the insider riffing, the meeting of new online friends.

...And now, after a mere 10 months, we can see exactly what we built: a full-on bizarro-world X, handcrafted for the left end of the political spectrum, complete with what one user astutely labeled “a cult type vibe.” If progressives and liberals were provoked by Trumpers and Breitbart types on Twitter, on Threads they have the opportunity to be wounded by their own kind...Threads’ algorithm seems precision-tweaked to confront the user with posts devoted to whichever progressive position is slightly lefter-than-thou...There’s some kind of algorithm that’s dusting up the same kind of outrage that Twitter had...Threads feels like it’s splintering the left.

The fragmentation of social media may have been as inevitable as the fragmentation of broadcast media. Perhaps also inevitable, any social media app aiming to succeed financially must capitalize on the worst aspects of social behavior. And it may be that Hobbes, history’s cheery optimist, was right: “The condition of man is a condition of war of every one against every one.” Threads, it turns out, is just another battlefield.


 

Monday, March 11, 2024

How AI’s GPT engines work - Lanier’s forest and trees metaphor.

Jaron Lanier does a piece in The New Yorker titled "How to Picture A.I." (if you hit the paywall by clicking the link, try opening an "empty tab" in your browser, then copy and paste in the URL that got you the paywall). I tried to do my usual sampling of small chunks of text to give the message, but found that very difficult, and so I pass on several early paragraphs and urge you to read the whole article. Lanier's metaphors give me a better sense of what is going on in a GPT engine, but I'm still largely mystified. Anyway, here's some text:
In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input.
Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.
Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.
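Lanier's tower of grids is, loosely, a layered network whose numbers get nudged during training. The toy below is my own cartoon of that idea, not Lanier's code and nothing like a real GPT: a tiny two-layer network on synthetic 8×8 "images," whose weights are adjusted by gradient descent until two made-up classes are separated.

```python
# Toy illustration of Lanier's "tree of layered measurements":
# a tiny two-layer network whose numbers (weights) are nudged until
# synthetic "cat" and "dog" images are separated.
# This is my own sketch of the general idea, not anyone's real model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8x8 "images": class 0 has more energy on the left half,
# class 1 on the right half (a stand-in for cat vs. dog).
def make_image(label):
    img = rng.normal(0, 0.3, (8, 8))
    half = slice(0, 4) if label == 0 else slice(4, 8)
    img[:, half] += 1.0
    return img.ravel()

X = np.array([make_image(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

# One hidden "layer of measurements" on top of the pixels, then an output layer.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(300):
    h = np.tanh(X @ W1 + b1)            # lower layer measures the image
    p = sigmoid(h @ W2 + b2).ravel()    # canopy layer: probability of class 1
    # Nudge the numbers in every layer to reduce the classification error.
    grad_out = (p - y)[:, None] / len(X)
    dW2 = h.T @ grad_out; db2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h**2)
    dW1 = X.T @ grad_h; db1 = grad_h.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after nudging the layers: {accuracy:.2f}")
```

The gradient step here roughly corresponds to Lanier's "magic laser": it boosts the numbers that turn out to help the classification and shrinks the ones that don't.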

Friday, March 08, 2024

Explaining the evolution of gossip

 A fascinating open source article from Pan et al.:

Significance
From Mesopotamian cities to industrialized nations, gossip has been at the center of bonding human groups. Yet the evolution of gossip remains a puzzle. The current article argues that gossip evolves because its dissemination of individuals’ reputations induces individuals to cooperate with those who gossip. As a result, gossipers proliferate as well as sustain the reputation system and cooperation.
Abstract
Gossip, the exchange of personal information about absent third parties, is ubiquitous in human societies. However, the evolution of gossip remains a puzzle. The current article proposes an evolutionary cycle of gossip and uses an agent-based evolutionary game-theoretic model to assess it. We argue that the evolution of gossip is the joint consequence of its reputation dissemination and selfishness deterrence functions. Specifically, the dissemination of information about individuals’ reputations leads more individuals to condition their behavior on others’ reputations. This induces individuals to behave more cooperatively toward gossipers in order to improve their reputations. As a result, gossiping has an evolutionary advantage that leads to its proliferation. The evolution of gossip further facilitates these two functions of gossip and sustains the evolutionary cycle.
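The authors' model isn't reproduced here, but the feedback loop the abstract describes (gossip spreads reputations; reputation-minded agents then find it costlier to defect against gossipers, so gossipers attract more cooperation) can be caricatured in a few lines. This is a deliberately crude toy of my own, with made-up payoffs and rules, not Pan et al.'s model:

```python
# A deliberately crude toy (mine, not Pan et al.'s model) of the feedback the
# abstract describes: gossip makes behavior public, defecting against a
# gossiper is therefore costlier to one's reputation, so gossipers receive
# more cooperation and end up with higher payoffs.
import random

random.seed(1)

N, ROUNDS = 60, 4000
BENEFIT, COST = 3.0, 1.0          # made-up donation-game payoffs
BASELINE_NOTICE = 0.1             # assumed chance a private defection leaks anyway

agents = [{"gossips": i < N // 2, "reputation": True, "payoff": 0.0}
          for i in range(N)]

help_given = {True: [0, 0], False: [0, 0]}   # recipient gossips? -> [helped, met]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    # Reputation-minded donors help partners in good standing whenever the
    # interaction is effectively public (the recipient gossips, or word
    # happens to get out), and defect when no one would ever know.
    is_public = recipient["gossips"] or random.random() < BASELINE_NOTICE
    helps = recipient["reputation"] and is_public
    if helps:
        donor["payoff"] -= COST
        recipient["payoff"] += BENEFIT
    if is_public:
        # Standing rule: helping keeps a good name; refusing a partner who is
        # already in bad standing is not punished.
        donor["reputation"] = helps or not recipient["reputation"]
    help_given[recipient["gossips"]][0] += int(helps)
    help_given[recipient["gossips"]][1] += 1

def mean_payoff(flag):
    return sum(a["payoff"] for a in agents if a["gossips"] == flag) / (N // 2)

for flag, label in ((True, "gossipers    "), (False, "non-gossipers")):
    helped, met = help_given[flag]
    print(f"{label}: helped in {helped / met:.2f} of encounters, "
          f"mean payoff {mean_payoff(flag):+.2f}")
```

In the full model, strategies also evolve by imitating higher-payoff agents, which is what lets gossipers proliferate; this toy stops at the payoff comparison.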

Monday, December 25, 2023

Large Language Models are not yet providing theories of human language.

 From Dentella et al. (open source):

Significance
The synthetic language generated by recent Large Language Models (LMs) strongly resembles the natural languages of humans. This resemblance has given rise to claims that LMs can serve as the basis of a theory of human language. Given the absence of transparency as to what drives the performance of LMs, the characteristics of their language competence remain vague. Through systematic testing, we demonstrate that LMs perform nearly at chance in some language judgment tasks, while revealing a stark absence of response stability and a bias toward yes-responses. Our results raise the question of how knowledge of language in LMs is engineered to have specific characteristics that are absent from human performance.
Abstract
Humans are universally good in providing stable and accurate judgments about what forms part of their language and what not. Large Language Models (LMs) are claimed to possess human-like language abilities; hence, they are expected to emulate this behavior by providing both stable and accurate answers, when asked whether a string of words complies with or deviates from their next-word predictions. This work tests whether stability and accuracy are showcased by GPT-3/text-davinci-002, GPT-3/text-davinci-003, and ChatGPT, using a series of judgment tasks that tap on 8 linguistic phenomena: plural attraction, anaphora, center embedding, comparatives, intrusive resumption, negative polarity items, order of adjectives, and order of adverbs. For every phenomenon, 10 sentences (5 grammatical and 5 ungrammatical) are tested, each randomly repeated 10 times, totaling 800 elicited judgments per LM (total n = 2,400). Our results reveal variable above-chance accuracy in the grammatical condition, below-chance accuracy in the ungrammatical condition, a significant instability of answers across phenomena, and a yes-response bias for all the tested LMs. Furthermore, we found no evidence that repetition aids the Models to converge on a processing strategy that culminates in stable answers, either accurate or inaccurate. We demonstrate that the LMs’ performance in identifying (un)grammatical word patterns is in stark contrast to what is observed in humans (n = 80, tested on the same tasks) and argue that adopting LMs as theories of human language is not motivated at their current stage of development.
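To make the reported measures concrete, here is a small sketch of my own (with simulated yes/no responses, not the authors' data or prompts) showing how per-condition accuracy, yes-response bias, and within-sentence stability can be tallied from the 8 phenomena × 10 sentences × 10 repetitions = 800 judgments elicited per model:

```python
# Toy tally (mine, with simulated responses, not the authors' data) of the
# measures described in the abstract: accuracy in the grammatical vs.
# ungrammatical conditions, yes-response bias, and per-sentence stability.
import random
from collections import defaultdict

random.seed(0)
PHENOMENA = 8
SENTENCES_PER_PHENOMENON = 10   # 5 grammatical + 5 ungrammatical
REPETITIONS = 10

# Simulate a model with a yes-bias: it accepts grammatical sentences 85% of
# the time but also accepts ungrammatical ones 60% of the time.
def simulated_judgment(is_grammatical):
    p_yes = 0.85 if is_grammatical else 0.60
    return random.random() < p_yes   # True = "yes, this is acceptable"

records = []
for phen in range(PHENOMENA):
    for sent in range(SENTENCES_PER_PHENOMENON):
        is_grammatical = sent < 5
        for rep in range(REPETITIONS):
            records.append((phen, sent, is_grammatical, simulated_judgment(is_grammatical)))

total = len(records)                      # 8 * 10 * 10 = 800 judgments
yes_rate = sum(r[3] for r in records) / total
acc_gram = [r[3] for r in records if r[2]]            # "yes" to grammatical is correct
acc_ungram = [not r[3] for r in records if not r[2]]  # "no" to ungrammatical is correct
print("judgments elicited:", total)
print("accuracy, grammatical condition:   %.2f" % (sum(acc_gram) / len(acc_gram)))
print("accuracy, ungrammatical condition: %.2f" % (sum(acc_ungram) / len(acc_ungram)))
print("overall yes-response rate:         %.2f" % yes_rate)

# Stability: fraction of the 10 repetitions agreeing with each sentence's majority answer.
by_sentence = defaultdict(list)
for phen, sent, _, ans in records:
    by_sentence[(phen, sent)].append(ans)
stability = [max(a.count(True), a.count(False)) / len(a) for a in by_sentence.values()]
print("mean within-sentence agreement:    %.2f" % (sum(stability) / len(stability)))
```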

Monday, November 27, 2023

The feasibility of artificial consciousness through the lens of neuroscience

Some interesting perspectives from Aru, Larkum, and Shine in Trends in Neurosciences. Motivated readers can obtain a copy of the article's text from me.  

Highlights

Large language models (LLMs) can produce text that leaves the impression that one may be interacting with a conscious agent.
Present-day LLMs are text-centric, whereas the phenomenological umwelt of living organisms is multifaceted and integrated.
Many theories of the neural basis of consciousness assign a central role to thalamocortical re-entrant processing. Currently, such processes are not implemented in LLMs.
The organizational complexity of living systems has no parallel in present-day AI tools. Possibly, AI systems would have to capture this biological complexity to be considered conscious.
LLMs and the current debates on conscious machines provide an opportunity to re-examine some core ideas of the science of consciousness.
Abstract
Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Secondly, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

Monday, August 21, 2023

Never-Ending Stories - a survival tactic for uncertain times

I keep returning to clips of text that I abstracted from a recent piece by Venkatesh Rao. It gets richer for me on each re-reading.  I like its points about purpose being inappropriate for uncertain times, when the simplification offered by a protocol narrative is the best route to survival.  I post the clips here for my own future use, also thinking it might interest some MindBlog readers:

Never-Ending Stories

Marching beat-by-beat into a Purposeless infinite horizon

During periods of emergence from crisis conditions (both acute and chronic), when things seem overwhelming and impossible to deal with, you often hear advice along the following lines:

Take it one day at a time

Take it one step at a time

Sleep on it; morning is wiser than evening

Count to ten

Or even just breathe

All these formulas have one thing in common: they encourage you to surrender to the (presumed benevolent) logic of a situation at larger temporal scales by not thinking about it, and only attempt to exercise agency at the smallest possible temporal scales.

These formulas typically move you from a state of high-anxiety paralyzed inaction or chaotic, overwrought thrashing, to deliberate but highly myopic action. They implicitly assume that lack of emotional regulation is the biggest immediate problem and attempt to get you into a better-regulated state by shrinking time horizons. And that deliberate action (and more subtly, deliberate inaction) is better than either frozen inaction or overwrought thrashing.

There is no particular reason to expect taking things step-by-step to be a generally good idea. Studied, meditative myopia may be good for alleviating the subjective anxieties induced by a stressful situation, but there’s no reason to believe that the objective circumstances will yield to the accumulating power of “step-by-step” local deliberateness.

So why is this common advice? And is it good advice?

I’m going to develop an answer using a concept I call narrative protocols. This step-by-step formula is a typical invocation of such protocols. They seem to work better than we expect under certain high-stress conditions.

Protocol Narratives, Narrative Protocols

Loosely speaking, a protocol narrative is a never-ending story. I’ll define it more precisely as follows:

A protocol narrative is a never-ending story, without a clear capital-P Purpose, driven by a narrative protocol that can generate novelty over an indefinite horizon, without either a) jumping the shark, b) getting irretrievably stuck, or c) sinking below a threshold of minimum viable unpredictability.

A narrative protocol, for the purposes of this essay, is simply a storytelling formula that allows the current storytellers to continue the story one beat at a time, without a clear idea of how any of the larger narrative structure elements, like scenes, acts, or epic arcs, might evolve.

Note that many narrative models and techniques, including the best-known one, the Hero’s Journey, are not narrative protocols because they are designed to tell stories with clear termination behaviors. They are guaranteed-ending stories. They may be used to structure episodes within a protocol narrative, but by themselves are not narrative protocols.

This pair of definitions is not as abstract as it might seem. Many real-world fictional and non-fictional narratives approximate never-ending stories.

Long-running extended universe franchises (Star Wars, Star Trek, MCU), soap operas, South Park …, the Chinese national grand narrative, and perhaps the American one as well, are all approximate examples of protocol narratives driven by narrative protocols.

Protocols and Purpose

In ongoing discussions of protocols, several of us independently arrived at a conclusion that I articulate as protocols have functions but not purposes, by which I mean capital-P Purposes. Let’s distinguish two kinds of motive force in any narrative:

1. Functions are causal narrative mechanisms for solving particular problems in a predictable way. For example, one way to resolve a conflict between a hero and a villain is a fight. So a narrative technology that offers a set of tropes for fights has something like a fight(hero, villain) function that skilled authors or actors can invoke in specific media (text, screen, real-life politics). You might say that fight(hero, villain) transitions the narrative state causally from a state of unresolved conflict to resolved conflict. Functions need not be dramatic or supply entertainment though; they just need to move the action along, beat-by-beat, in a causal way.

2. Purposes are larger philosophical theses whose significance narratives may attest to, but do not (and cannot) exhaust. These theses may take the form of eternal conditions (“the eternal struggle between good and neutral”), animating paradoxes (“If God is good, why does He allow suffering to exist?”), or historicist, teleological terminal conditions. Not all stories have Purposes, but the claim is often made that the more elevated sort can and should. David Mamet, for instance, argues that good stories engage with and air eternal conflicts, drawing on their transformative power to drive events, without exhausting them.

In this scheme, narrative protocols only require a callable set of functions to be well-defined. They do not need, and generally do not have Purposes. Functions can sustain step-by-step behaviors all by themselves.

What’s more, not only are Purposes not necessary, they might even be actively harmful during periods of crisis, when arguably a bare-metal protocol narrative, comprising only functions, should exist.

There is, in fact, a tradeoff between having a protocol underlying a narrative, and an overarching Purpose guiding it from “above.”

The Protocol-Purpose Tradeoff

During periods of crisis, when larger logics may be uncomputable, and memory and identity integration over longer epochs may be intractable, it pays to shorten horizons until you get to computability and identity integrity — so long as the underlying assumptions that movement and deliberation are better than paralysis and thrashing hold.

The question remains though. When are such assumptions valid?

This is where the notion of a protocol enters the picture in a fuller way. There is protocol as in a short foreground behavior sequence (like step-by-step), but there is also the idea of a big-P Protocol, as in a systematic (and typically constructed rather than natural) reality in the background that has more lawful and benevolent characteristics than you may suspect.

Enacting protocol narratives is enacting trust in a big-P Protocolized environment. You trust that the protocol narrative is much bigger than the visible tip of the iceberg that you functionally relate to.

As a simple illustration, on a general somewhat sparse random graph, trying to navigate by a greedy or myopic algorithm, one step at a time, to get to destination coordinates, is likely to get you trapped in a random cul-de-sac. But that same algorithm, on a regular rectangular grid, will not only get you to your destination, it will do so via a shortest path. You can trust the gridded reality more, given the same foreground behaviors.

In this example, the grid underlying the movement behavior is the big-P protocol that makes the behavior more effective than it would normally be. It serves as a substitute for the big-P purpose.
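Rao's grid-versus-random-graph point is easy to make concrete. The sketch below is my construction, not Rao's: it runs the same myopic rule (step to the unvisited neighbor that looks closest to the goal) on a regular grid and on a sparse random graph over the same coordinates. On the grid the rule reaches the goal along a shortest path; on the random graph it can wander or strand itself in a cul-de-sac, depending on the graph drawn.

```python
# A concrete version (mine, not Rao's) of the greedy-navigation example:
# the same myopic rule succeeds on a regular grid but can stall on a
# sparse random graph over the same set of coordinates.
import random

random.seed(0)
SIZE = 10
NODES = [(x, y) for x in range(SIZE) for y in range(SIZE)]
START, GOAL = (0, 0), (SIZE - 1, SIZE - 1)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def grid_neighbors(node):
    x, y = node
    cands = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cands if 0 <= c[0] < SIZE and 0 <= c[1] < SIZE]

def random_graph_neighbors():
    # Sparse random graph: each node is linked to a handful of random others.
    nbrs = {n: set() for n in NODES}
    for n in NODES:
        for m in random.sample(NODES, 3):
            if m != n:
                nbrs[n].add(m)
                nbrs[m].add(n)
    return lambda node: list(nbrs[node])

def greedy_walk(neighbors, max_steps=400):
    node, steps, visited = START, 0, {START}
    while node != GOAL and steps < max_steps:
        options = [n for n in neighbors(node) if n not in visited]
        if not options:
            return None          # stuck in a cul-de-sac
        node = min(options, key=lambda n: manhattan(n, GOAL))
        visited.add(node)
        steps += 1
    return steps if node == GOAL else None

grid_steps = greedy_walk(grid_neighbors)
print(f"grid: reached the goal in {grid_steps} steps "
      f"(shortest possible is {manhattan(START, GOAL)})")
successes = sum(greedy_walk(random_graph_neighbors()) is not None for _ in range(20))
print(f"sparse random graph: reached the goal in {successes} of 20 trials")
```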

This also gives us a way to understand the promises, if not the realities, of big-P purposes of the sort made by religion, and why there is an essential tension and tradeoff here. 

To take a generic example, let’s say I tell you that in my religion, the cosmos is an eternal struggle between Good and Evil, and that you should be Good in this life in order to enjoy a pleasurable heaven for eternity (terminal payoff) as well as to Do The Right Thing (eternal principle).

How would you use it?

This is not particularly useful in complex crisis situations where good and evil may be hard to disambiguate, and available action options may simply not have a meaningful moral valence.

The protocol directive of step-by-step is much less opinionated. It does not require you to act in a good way. It only requires you to take a step in a roughly right direction. And then another. And another. The actions do not even need to be justifiably rational with respect to particular consciously held premises. They just need to be deliberate.

*****

A sign that economic narratives are bare-bones protocol narratives is the fact that they tend to continue uninterrupted through crises that derail or kill other kinds of narratives. Through the Great Weirding and the Pandemic, we still got GDP, unemployment, inflation, and interest rate “stories.”

I bet that even if aliens landed tomorrow, even though the rest of us would be in a state of paralyzed inaction, unable to process or make sense of events, economists would continue to publish their numbers and argue about whether aliens landing is inflationary or deflationary. And at the microeconomic level, Matt Levine would probably write a reassuring Money Matters column explaining how to think about it all in terms of SEC regulations and force majeure contract clauses.

I like making fun of economists, but if you think about this, there is a profound and powerful narrative capability at work here. Strong protocol narratives can weather events that are unnarratable for all other kinds of narratives. Events that destroy high-Purpose religious and political narratives might cause no more than a ripple in strong protocol narratives.

So if you value longevity and non-termination, and you sense that times are tough, it makes sense to favor Protocols over Purposes.

***********


Step-by-Step is Hard-to-Kill

While economic narratives provide a good and clear class of examples of protocol narratives, they are not the only or even best examples.

The best examples are ones that show that a bare set of narrative functions is enough to sustain psychological life indefinitely. That surprisingly bleak narratives are nevertheless viable.

The very fact that we can even talk of “going through the motions” or feeling “empty and purposeless” when a governing narrative for a course of events is unsatisfying reveals that something else is in fact continuing, despite the lack of Purpose. Something that is computationally substantial and life-sustaining.

I recall a line from (I think) an old Desmond Bagley novel I read as a teenager, where a hero is trudging through a trackless desert. His inner monologue is going, one bloody foot after the next bloody foot; one bloody step after the next bloody step.

Weird though it might seem, that’s actually a complete story. It works as a protocol narrative. There is a progressively summarizable logic to it, and a memory-ful evolving identity to it. If you’re an economist, it might even be a satisfying narrative, as good as “number go up.”

Protocol narratives only need functions to keep going.

They do not need Purposes, and generally are, to varying degrees, actively hostile to such constructs. It’s not just take it one day at a time, but an implied don’t think about weeks and months and the meaning of life; it might kill you.

While protocol narratives may tolerate elements of Purpose during normal times, they are especially hostile to them during crisis periods. If you think about it, step-by-step advancement of a narrative is a minimalist strategy. If a narrative can survive on a step-by-step type protocol alone, it is probably extraordinarily hard to kill, and doing more likely adds risk and fragility (hence the Protocol-Purpose tradeoff).

During periods of crisis, narrative protocols switch into a kind of triage mode where only step-by-step movement is allowed (somewhat like how, in debugging a computer program, stepping through code is a troubleshooting behavior). More abstract motive forces are deliberately suspended.

I like to think of the logic governing this as exposure therapy for life itself. In complex conditions, the most important thing to do is simply to choose life over and over, deliberately, step-by-step. To keep going is to choose life, and it is always the first order of business.

This is why, as I noted in the opening section, lack of emotional regulation is the first problem to address. Because in a crisis, if it is left unmanaged, it will turn into a retreat from life itself. As Roosevelt said, the only thing we have to fear is fear itself.

To reach for loftier abstractions than step-by-step in times of crisis is to retreat from life. Purpose is a life-threatening luxury you cannot afford in difficult times. But a narrative protocol will keep you going through even nearly unnarratable times. And even if it feels like merely going through empty motions, sometimes all it takes to choose life is to be slightly harder to kill.

Monday, June 05, 2023

A simple heuristic for distinguishing lie from truth

Work by Verschuere et al. shows that a simple heuristic of only judging the level of detail in the message consistently allows people to discriminate lies from truths:
Decades of research have shown that people are poor at detecting deception. Understandably, people struggle with integrating the many putative cues to deception into an accurate veracity judgement. Heuristics simplify difficult decisions by ignoring most of the information and relying instead only on the most diagnostic cues. Here we conducted nine studies in which people evaluated honest and deceptive handwritten statements, video transcripts, videotaped interviews or live interviews. Participants performed at the chance level when they made intuitive judgements, free to use any possible cue. But when instructed to rely only on the best available cue (detailedness), they were consistently able to discriminate lies from truths. Our findings challenge the notion that people lack the potential to detect deception. The simplicity and accuracy of the use-the-best heuristic provides a promising new avenue for deception research.

Monday, May 29, 2023

To fulfill its promise, artificial intelligence needs to deepen human intelligence.

For MindBlog readers interested in AI, I have to point to another must-read article by Ezra Klein. Below are some clips that try to communicate his central points. (And no, I'm not using ChatGPT to generate this post, because of several of AI's limitations that he notes.) Klein starts by noting the many ways in which the internet has not fulfilled its promise: overwhelming us with more information than we can process, degrading our political discourse and attention spans, and leading us to multitask, which not only diminishes our cognitive depth but also activates our stress chemistry. He then lists several wrong directions that might be taken by large language models like OpenAI’s GPT-4 and Google’s Bard:
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up...A question to ask about large language models, then, is where does trustworthiness not matter?...A.I. will be great for creating content where reliability isn’t a concern. The personalized video games and children’s shows and music mash-ups and bespoke images will be dazzling...But where reliability matters — say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions — deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.
...Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second...Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?...Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains — though real — are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.
Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds...My third concern is related to this use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing...It’s the time spent inside an article or book spent drawing connections to what we know and having thoughts we would not otherwise have had that matters...No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real.
These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that “if men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.” I think the trade-off here was worth it — I am, after all, a writer — but it was a trade-off. Human beings really did lose faculties of memory we once had.
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don’t overwhelm and distract and diminish us. We failed that test with the internet. Let’s not fail it with A.I.

Wednesday, May 24, 2023

Using AI to decipher words and sentences from brain scans

Things are happening very fast in AI, as this work from Huth and his collaborators shows. Previous work has shown that speech articulation and other signals can be decoded from invasive intracranial recordings, and they have now developed a language decoder that accomplishes this with non-invasive fMRI. Motivated readers can obtain the detailed description of the work by emailing me. Their abstract:
A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.

Tuesday, March 28, 2023

MindBlog's seduction by AI

I decided to drink the Kool-Aid and go down the rabbit hole of learning more about AI, and have ceased regular MindBlog posts. As I have been reading numerous articles and a few books, I have curbed my urge to rapidly summarize, abstract, and post their main ideas, in the hope that a more integrative summary that feels faintly original might rise from the mists of my unconscious during a waking period at night. A few paragraphs that feel like they might be of that sort are in draft form.

I'm going to break my reticence with this post, however, to share with MindBlog readers how simple it is to start playing with OpenAI's ChatGPT or Microsoft Bing's implementation in its new Edge browser. I'm still on the waiting list for Google's Bard. Here is what happens when I ask these engines to 

'Write a poem in iambic pentameter about China in the voice of Paul Krugman':

Microsoft Edge result:

China’s rise has been a sight to behold, A nation once poor now rich and quite bold, Its growth has been rapid, its power immense, Its influence felt in every sense.

But with this growth comes a price to be paid, A system that’s flawed and quite often waylaid, The government’s power is vast and unchecked, And human rights abuses are often decked.

The world must take notice of China’s might, And work to ensure that it’s used for right, For if we don’t, then we’ll all pay the price, As China’s power grows to an even greater size.

So let us work together to ensure, That China’s rise is one that we can endure, And that the world remains a place of peace, Where all nations can grow and find release


ChatGPT result:


In China's great economy, we see A marvel of our globalized age, With growth and trade that fascinate me, And prospects that we cannot gauge.

The nation's rise is hard to deny, Its factories thrive and cities boom, And yet, despite the progress nearby, The country's flaws still cast a gloom.

For China's leaders, too, must choose, Between democracy and control, And as their power and wealth accrues, The danger of repression takes its toll.

But still, we cannot help but be amazed, At what this nation has achieved, And though its path is yet untraced, We hope that it will not be deceived.

For China's future is a global stake, And we must strive to understand, The risks and opportunities it may make, And build a brighter future hand in hand.


and when I ask ChatGPT to ‘regenerate response’:


In China's land, a wonderous sight to see, A nation grown with lightning-like degree. Its factories thrive, its cities ever-sprawl, Its growth unmatched, its rise an envied call.

Yet with its growth, there comes a price to pay, As smog and pollution cloud the skies of day. Its workers toil, their wages held at bay, A system ruled by power's strong display.

And as it rises, tensions also grow, A clash of values, East and West to know. A future wrought with both hope and fear, A world transformed, with China drawing near.

But still, we cannot turn our eyes away, For in its growth, there lies a chance to sway. To shape a world, where freedom's light may shine, And China's rise a force for good in line.

So let us watch and learn from China's ways, And seek to guide its path in brighter days. For in its rise, we see the future's face, And all the challenges we yet must face.




 

Wednesday, March 01, 2023

Artificial intelligence and personhood

MindBlog hesitates to add to the feeding frenzy of articles about LLMs (large language models) such as OpenAI’s ChatGPT and Microsoft Bing’s “Sydney,” but I want to pass on clips from a fascinating episode of Venkatesh Rao’s “Mediocre Computing” series, which suffers from logorrhea and could use some ruthless editing but has some searing points to make, with which I largely agree. He starts by posing A.I. as another Copernican moment:
…stripping away yet another layer of our anthropocentric conceits is obvious. But which conceits specifically, and what, if anything is left behind? In case you weren’t keeping track, here’s the current Copernican Moments list:
The Earth goes around the Sun,
Natural selection rather than God created life,
Time and space are relative,
Everything is Heisenberg-uncertain,
“Life” is just DNA’s way of making more DNA,
Computers wipe the floor with us anywhere we can keep score.
There’s not a whole lot left at this point is there? I’m mildly surprised we End-of-History humans even have any anthropocentric conceits left to strip away. But apparently we do. Let’s take a look at this latest Fallen Conceit: Personhood.
…at a basic level: text is all it takes to produce personhood. We knew this from the experience of watching good acting…We just didn’t recognize the significance. Of course you can go beyond, adding a plastic or human body around the text production machinery to enable sex for example, but that’s optional extras. Text is all you need to produce basic see-and-be-seen I-you personhood.
Chatbots do, at a vast scale, and using people’s data traces on the internet rather than how they present in meatspace, what the combination of fiction writers and actors does in producing convincing acting performances of fictional persons.
In both cases, text is all you need. That’s it. You don’t need embodiment, meatbag bodies, rich sensory memories.
This is actually a surprisingly revealing fact. It means we can plausibly exist, at least as social creatures, products of I-you seeings, purely on our language-based maps of reality.
Language is a rich process, but I for one didn’t suspect it was that rich. I thought there was more to seeing and being seen, to I-you relations.
Still, even though text is all you need to personhood, the discussion doesn’t end there. Because personhood is not all there is to, for want of a better word, being. Seeing, being seen, and existing at the nexus of a bunch of I-you relationships, is not all there is to being.
What is the gap between being and personhood? Just how much of being is constituted by the ability to see and be seen, and being part of I-you relationships?
The ability to doubt, unlike the ability to think (which I do think is roughly equivalent to the ability to see and be seen in I-you ways), is not reducible to text. In particular, text is all it takes to think and produce or consume unironically believable personhood, but doubt requires an awareness of the potential for misregistration between linguistic maps and the phenomenological territory of life. If text is all you have, you can be a person, but you cannot be a person in doubt.
Doubt is eerily missing in the chat transcripts I’ve seen, from both ChatGPT and Sydney. There are linguistic markers of doubt, but they feel off, like a color-blind person cleverly describing colors. In a discussion, one person suggested this is partly explained by the training data. Online, textually performed personas are uncharacteristically low on doubt, since the medium encourages a kind of confident stridency.
But I think there’s something missing in a more basic way, in the warp and woof of the conversational texture. At some basic level, rich though it is, text is missing important non-linguistic dimensions of the experience of being. But what’s missing isn’t cosmetic aspects of physicality, or the post-textual intimate zones of relating, like sex (the convincing sexbots aren’t that far away). What’s missing is doubt itself.
The signs, in the transcripts, of repeated convergence to patterns of personhood that present as high-confidence paranoia are, I think, due to the gap between thought and doubt; cogito and dubito. Text is all you need to be a person, but context is additionally necessary to be a sane person and a full being. And doubt is an essential piece of the puzzle there.
So where does doubt live? Where is the aspect of being that’s doubt, but not “thought” in a textual sense.
For one, it lives in the sheer quantity of bits in the world that are not textual. There are exabytes of textual data online, but there is orders of magnitude more data in every grain of sand. Reality just has vastly more data than even the impressively rich map that is language. And to the extent we cannot avoid being aware of this ocean of reality unfactored into our textual understandings, it shapes and creates our sense of being.
For another, even though with our limited senses we can only take in a tiny and stylized fraction of this overwhelming mass of bits around us, the stream of inbound sense-bits still utterly dwarfs what eventually trickles out as textual performances of personhood (and what is almost the same thing in my opinion, conventional social performances “in-person” which are not significantly richer than text — expressions of emotion add perhaps a few dozen bytes of bandwidth for example — I think of this sort of information stream as “text-equivalent” — it only looks plausibly richer than text but isn’t).
But the most significant part of the gap is probably experiential dark matter: we know we know vastly more than we can say. The gap between what we can capture in words and what we “know” of reality in some pre-linguistic sense is vast. The gap between an infant’s tentative babbling and Shakespeare is a rounding error relative to the gap within each of us between the knowable and the sayable.
So while it is surprising (though… is it really?) that text is all it takes to perform personhood with enough fidelity to provoke involuntary I-you relating in a non-trivial fraction of the population, it’s not all there is to being. This is why I so strongly argue for embodiment as a necessary feature of the fullest kind of AI.
The most surprising thing for me has been the fact that so many people are so powerfully affected by the Copernican moment and the dismantling of the human specialness of personhood.
I think I now see why it’s apparently a traumatic moment for at least some humans. The advent of chatbots that can perform personhood that at least some people can’t not relate to in I-you ways, coupled with the recognition that text is all it takes to produce such personhood, forces a hard decision.
Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.
And both these options are apparently extremely traumatic prospects. Either piles of mechanically digested text are spiritually special, or you are not. Either there is a sudden and alarming increase in your social universe, or a sudden sharp devaluation of mutualism as a component of identity.
Remember — I’m defining personhood very narrowly as the ability to be seen in I-you ways. It’s a narrow and limited aspect of being, as I have argued, but one that average humans are exceptionally attached to.
We are of course, very attached to many particular aspects of our beings, and they are not all subtle and ineffable. Most are in fact quite crude. We have identities anchored to weight, height, skin color, evenness of teeth, baldness, test scores, titles, net worths, cars, and many other things that are eminently effable. And many people have no issues getting bariatric surgery, wearing lifts, lightening or tanning their skin, getting orthodontics, hair implants, faking test scores, signaling more wealth than they possess, and so on. The general level of “sacredness” of strong identity attachments is fairly low.
But personhood, being “seen,” has hitherto seemed ineffably special. We think it’s the “real” us that is seen and does such seeing. We are somewhat prepared to fake or pragmatically alter almost everything else about ourselves, but treat personhood as a sacred thing.
Everything else is a “shallow” preliminary. But what is the “deep” or “real” you that we think lurks beneath? I submit that it is in fact a sacralized personhood — the ability to see and be seen. And at least for some people I know personally, that’s all there is to the real-them. They seem to sort of vanish when they are not being seen (and panic mightily about it, urgently and aggressively arranging their lives to ensure they’re always being seen, so they can exist — Trump and Musk are two prominent public examples).
And the trauma of this moment — again for some, not all of us — lies in the fact that text is all you need to produce this sacredly attached aspect of being.
I have a feeling, as this technology becomes more widespread and integrated into everyday life, the majority of humans will initially choose some tortured, conflicted version of the first option — accepting that they cannot help but see piles of digested data in I-you ways, and trying to reclaim some sense of fragile, but still-sacred personhood in the face of such accommodation, while according as little sacredness as possible to the artificial persons, and looking for ways to keep them in their place, creating a whole weird theater of an expanding social universe.
A minority of us will be choosing the second option, but I suspect that in the long run of history this is in fact the "right" answer in some sense, and will become the majority answer. Just as the "right" answer in the original Copernican moment was to let go of the attachment to the idea of Earth as the center of the universe, the right answer now is to let go of the idea that personhood and I-you seeing are special. It's just a special case of I-it seeing that some piles of digested text are as capable of as tangles of neurons.
…there will also be a more generative and interesting aspect. Once we lose our annoying attachment to sacred personhood, we can also lose our attachment to specific personhoods we happen to have grown into, and make personhood a medium of artistic expression that we can change as easily as clothes or hairstyles. If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs (Large Language Models) to help you see-and-be-seen in arbitrary new ways?
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
What’s left, once this layer of anthropocentric conceit, static, stable personhood, dissolves in a flurry of multiplied matrices, Ballardian banalities, and imaginative larped personhoods being cheaply hallucinated in and out of existence with help from computers?
I think what is left is the irreducible individual subjective, anchored in dubito ergo sum. I doubt therefore I am.

Monday, January 23, 2023

Our different styles of thinking.

An interesting recent article by Joshua Rothman, the ideas editor of newyorker.com, notes several recent books that describe different styles of thinking. A few clips:
In “Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions,” Temple Grandin identifies a continuum of thought styles that’s roughly divisible into three sections. On one end are verbal thinkers, who often solve problems by talking about them in their heads or, more generally, by proceeding in the linear, representational fashion typical of language. (Estimating the cost of a building project, a verbal thinker might price out all the components, then sum them using a spreadsheet—an ordered, symbol-based approach.) On the other end of the continuum are “object visualizers”: they come to conclusions through the use of concrete, photograph-like mental images, as Grandin does when she compares building plans in her mind. In between those poles, Grandin writes, is a second group of visual thinkers—“spatial visualizers,” who seem to combine language and image, thinking in terms of visual patterns and abstractions.
Grandin proposes imagining a church steeple. Verbal people, she finds, often make a hash of this task, conjuring something like “two vague lines in an inverted V,” almost as though they’ve never seen a steeple before. Object visualizers, by contrast, describe specific steeples that they’ve observed on actual churches: they “might as well be staring at a photograph or photorealistic drawing” in their minds. Meanwhile, the spatial visualizers picture a kind of perfect but abstract steeple—“a generic New England-style steeple, an image they piece together from churches they’ve seen.” They have noticed patterns among church steeples, and they imagine the pattern, rather than any particular instance of it.
The imagistic minds in “Visual Thinking” can seem glamorous compared with the verbal ones depicted in “Chatter: The Voice in Our Head, Why It Matters, and How to Harness It,” by Ethan Kross. Kross is interested in what’s known as the phonological loop—a neural system, consisting of an “inner ear” and an “inner voice,” that serves as a “clearinghouse for everything related to words that occurs around us in the present.” If Grandin’s visual thinkers are attending Cirque du Soleil, then Kross’s verbal thinkers are stuck at an Off Broadway one-man show. It’s just one long monologue.
People with inner monologues, Kross reports, often spend “a considerable amount of time thinking about themselves, their minds gravitating toward their own experiences, emotions, desires, and needs.” This self-centeredness can spill over into our out-loud conversation. In the nineteen-eighties, the psychologist Bernard Rimé investigated what we’d now call venting—the compulsive sharing of negative thoughts with other people. Rimé found that bad experiences can inspire not only interior rumination but the urge to broadcast it. The more we share our unhappiness with others, the more we alienate them… Maybe it can pay to keep your thoughts to yourself.
Kross’s bottom line is that our inner voices are powerful tools that must be tamed. He ends his book with several dozen techniques for controlling our chatter. He advises trying “distanced self-talk”: by using “your name and the second-person ‘you’ to refer to yourself,” he writes, you can gain more command over your thinking. You might use your inner voice to pretend that you’re advising a friend about his problems; you might redirect your thoughts toward how universal your experiences are (It’s normal to feel this way), or contemplate how every new experience is a challenge you can overcome (I have to learn to trust my partner). The idea is to manage the voice that you use for self-management. Take advantage of the suppleness of dialogue. Don’t just rehearse the same old scripts; send some notes to the writers’ room.
If we can’t say exactly how we think, then how well do we know ourselves? In an essay titled “The Self as a Center of Narrative Gravity,” the philosopher Daniel Dennett argued that a layer of fiction is woven into what it is to be human. In a sense, fiction is flawed: it’s not true. But, when we open a novel, we don’t hurl it to the ground in disgust, declaring that it’s all made-up nonsense; we understand that being made up is actually the point. Fiction, Dennett writes, has a deliberately “indeterminate” status: it’s true, but only on its own terms. The same goes for our minds. We have all sorts of inner experiences, and we live through and describe them in different ways—telling one another about our dreams, recalling our thoughts, and so on. Are our descriptions and experiences true or fictionalized? Does it matter? It’s all part of the story.

Wednesday, December 07, 2022

OpenAI’s ChatGPT answers some questions from MindBlog

Given a recent posting in which I played with the DALL-E 2 A.I. system that generates images when you tell it what you want to see, I was motivated by a Venkatesh Rao essay, "The Dawn of Mediocre Computing," to try out an essay-generating system - OpenAI's ChatGPT, released as a research beta several days ago. After puttering through a number of references and joining the OpenAI Discord server, I find I fundamentally don't get it and have nothing original to pass on in this post, but I thought some MindBlog readers might enjoy spending some time in the rabbit hole I am just now exiting. I pass on the following MindBlog queries and ChatGPT's rather remarkable replies:

Question: what is the predictive processing model for brain function? 

The predictive processing model is a framework for understanding how the brain works. It proposes that the brain is constantly generating predictions about the world around us and that these predictions are used to inform our perception, behavior, and action. According to this model, the brain is always trying to make sense of the sensory information it receives by comparing it to its internal models and predictions. If there is a mismatch between the sensory information and the brain's predictions, this can lead to the experience of surprise or prediction error. The brain then uses this prediction error to update its internal models and make more accurate predictions in the future. This process is thought to be the basis for how we learn and adapt to our environment. 
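To make this description concrete, here is a toy sketch of the prediction-error loop ChatGPT is describing - my own addition in Python, not part of its reply, with arbitrary numbers. An internal estimate predicts each noisy sensory sample, and the mismatch (the "prediction error") nudges the estimate toward the hidden true value:

```python
import numpy as np

# Toy predictive-processing loop (illustrative sketch only).
rng = np.random.default_rng(0)
true_value = 5.0        # hidden state of the world
estimate = 0.0          # the brain's internal model of that state
learning_rate = 0.1     # how strongly prediction errors update the model

for t in range(100):
    sensory_input = true_value + rng.normal(0.0, 1.0)  # noisy observation
    prediction = estimate                               # top-down prediction
    prediction_error = sensory_input - prediction       # mismatch signal
    estimate += learning_rate * prediction_error        # update the internal model

print(f"final estimate: {estimate:.2f} (true value: {true_value})")
```

Over repeated samples the estimate converges toward the true value, which is the sense in which minimizing prediction error amounts to learning a model of the environment.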

Question: What are the main models for brain function? 

There are many different models or frameworks for understanding how the brain works, each of which offers a different perspective on brain function. Some of the most prominent and well-known models include the following:

The neural network model, which proposes that the brain is composed of interconnected networks of neurons that communicate with each other through electrical and chemical signals.
The cognitive model, which emphasizes the role of higher-level cognitive processes such as attention, memory, and decision-making in brain function.
The computational model, which views the brain as a biological computer that processes information using algorithms and rules.
The systems model, which focuses on the ways in which different brain regions and systems interact to produce complex behaviors and mental states.
The evolutionary model, which emphasizes the role of natural selection in shaping the brain and its functions.
The dynamic systems model, which proposes that the brain is a complex, self-organizing system that is constantly adapting and changing in response to experience. 

Each of these models offers a different perspective on brain function and can be used to help understand different aspects of the brain and its behavior.

Question: Explain critical race theory 

Critical race theory is a school of thought that explores the ways in which race and racism affect and are embedded in the laws, policies, and social structures of society. It is a framework for understanding how society and its institutions can perpetuate inequality along racial lines, and it emphasizes the importance of examining the interplay between race, power, and the law. Critical race theory is often used as a lens for analyzing issues related to inequality and social justice. It is a critical approach to the study of race and racism that seeks to challenge and dismantle the ways in which race and racism operate in society. 
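For readers who would rather script such queries than type them into the browser: ChatGPT itself had no public API at the time of this post, but the closely related text-davinci-003 model could be queried through OpenAI's completions endpoint. A minimal sketch, assuming the pre-1.0 openai Python package and an API key stored in the OPENAI_API_KEY environment variable:

```python
import os
import openai  # assumes the pre-1.0 openai package interface

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # GPT-3.5-era completions model
    prompt="What is the predictive processing model for brain function?",
    max_tokens=300,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```

The replies will differ in wording from ChatGPT's, since the underlying model and its conversational tuning are not identical.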

ADDENDUM: I suggest readers have a look at this article on "Sentience and Sensibility"

Friday, December 02, 2022

The Z-Library Kerfuffle and the whack-a-mole problem

Several months ago a friend told me about Z-Library, which he was using to download recently published fiction to evaluate candidates for book club reading. He thought it was sanctioned by publishers. I checked it out and decided to download a book; the book looked promising, so I bought the Amazon Kindle version. As this New York Daily News article makes clear in its description of the FBI's seizure of many Z-Library domains, it is a pirate site, a file-sharing service that evades copyright restrictions. I asked Wikipedia about Z-Library and brought up a page describing not only Z-Library and its recent shutdown in the U.S. but also the many shadow libraries scattered across different countries and accessed by several metasearch engines. A box on this page, partly shown in the screen clip below, indicates it is part of a series on file sharing. I went to Anna's Archive, one of the sites mentioned under the heading of Academic file sharing, and was easily able to download the entire text of a copyrighted Amazon Kindle book I had purchased. Given the thousands of motivated individuals around the world who can't afford or don't have access to publications they need or want, there seems little prospect of denting the anarchy that prevails.


Monday, November 28, 2022

The Computational Society

The most recent issue of Trends in Cognitive Sciences presents a forum in its 25th Anniversary Series: Looking Forward. Several of the contributions are open source (you can email me to request access to those that are not), and I would like to point to Nick Chater's brief article "The computational society," passing on his initial and final comments. I suggest you read through his descriptions of what he thinks are four promising lines of work.
How do individual human minds create languages, legal systems, scientific theories, and technologies? From a cognitive science viewpoint, such collective phenomena may be considered a type of distributed computation in which human minds together solve computational problems beyond any individual. This viewpoint may also shift our perspective on individual minds.
To make the computational society more than a metaphor, we need conceptual tools and methods to understand social phenomena in information-processing terms. Fortunately, several different, yet complementary, approaches have emerged in recent years. Here I highlight four promising lines of work: (i) social interaction as computation, (ii) the computational Leviathan, (iii) collective self-correction and rationality, and (iv) computation through spontaneous order.
Cognitive science may stand on the brink of a new revolution, seeing social, organizational, and cultural processes as distributed computation. If so, we will need to look afresh at the computational role of individual minds. For example, rather than seeing each developing child as a lone minilinguist or a scientist-in-the-crib, we may, following Adam Ferguson, see humans as primarily learning to contribute to collective computations beyond the understanding of any individual.

Friday, September 23, 2022

Machine learning is translating the languages of animals

Anthes does an article on how machine learning is being used to eavesdrop on naked mole rats, fruit bats, crows and whales — and to communicate back. Some edited clips:
Machine-learning systems, which use algorithms to detect patterns in large collections of data, have excelled at analyzing human language, giving rise to voice assistants that recognize speech, transcription software that converts speech to text and digital tools that translate between human languages.
...this technology can be deployed to decode animal communication, working towards finding a Google Translate for animals, using machine-learning algorithms to identify when squeaking mice are stressed or why fruit bats are shouting. Even more ambitious projects are underway — to create a comprehensive catalog of crow calls, map the syntax of sperm whales and even to build technologies that allow humans to talk back.
...machine-learning algorithms can spot subtle patterns that might elude human listeners...these programs can tell apart the voices of individual animals, distinguish between sounds that animals make in different circumstances and break their vocalizations down into smaller parts, a crucial step in deciphering meaning.
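The clips above describe a common pipeline: convert recordings into acoustic features, then let a classifier learn which individual, or which circumstance, produced each call. A minimal sketch of that idea - my own illustration, assuming the librosa and scikit-learn libraries, with hypothetical file names and labels - might look like:

```python
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def call_features(path):
    """Summarize one recording as a vector of mean MFCCs (a standard acoustic feature)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips: which individual produced each call.
clips = ["bat_01.wav", "bat_02.wav", "bat_03.wav", "bat_04.wav"]
labels = ["individual_A", "individual_A", "individual_B", "individual_B"]

X = np.array([call_features(p) for p in clips])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Guess which known individual produced a new, unlabeled call.
print(clf.predict([call_features("unknown_call.wav")]))
```

The projects described in the article use far larger labeled datasets and deep neural networks rather than a random forest, but the shape of the task is the same: acoustic features in, labels (identity, context, stress) out.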
...the technology could also be deployed for the benefit of animals, helping experts monitor the welfare of both wild and domestic fauna. Scientists also said that they hoped that by providing new insight into animal lives, this research might prompt a broader societal shift. Many pointed to the galvanizing effect of the 1970 album “Songs of the Humpback Whale,” which featured recordings of otherworldly whale calls and has been widely credited with helping to spark the global Save the Whales movement...many scientists said they hoped these new, high-tech efforts to understand the vocalizations of whales — and crows and bats and even naked mole rats — will be similarly transformative, providing new ways to connect with and understand the creatures with whom we share the planet.