Wednesday, April 10, 2024

The world of decentralized everything.

Following up on my last post on the Summer of Protocols sessions, I want to pass on (again, to my future self, and possibly a few techie MindBlog readers) a few links to the world of decentralized grassroots everything - commerce, communications, finance, etc. - trying to bypass the traditional powers and gatekeepers in these areas by constructing distributed systems usually based on blockchains and cryptocurrencies. I am trying to learn more about this, taking things in small steps to avoid overload headaches... (One keeps stumbling on areas of worldwide engagement of thousands of very intelligent minds.)

Here is a worthwhile read on the general idea from the Ethereum Foundation.

I've described getting into one decentralized context by setting up a Helium Mobile network hotspot, as well as my own private Helium Mobile Cellular account. To follow this up, I pass on a link from a Helium email pointing to its participation in Consensus 2024, May 29-31 in Austin, TX (where I now live), sponsored by CoinDesk. A look at the agenda for that meeting gives an impression of the many engagements now occurring among government regulatory agencies, business, and the crypto world.

Monday, April 08, 2024

New protocols for uncertain times.

I want to point to a project launched by Venkatesh Rao and others last year: “The Summer of Protocols.” Some background for this project can be found in his essay “In Search of Hardness”. Also, “The Unreasonable Sufficiency of Protocols” essay by Rao et al. is an excellent presentation of what protocols are about. I strongly recommend that you read it if nothing else.

Here is a description of the project: 

Over 18 weeks in Summer 2023, 33 researchers from diverse fields including architecture, law, game design, technology, media, art, and workplace safety engaged in collaborative speculation, discovery, design, invention, and creative production to explore protocols, broadly construed, from various angles.

Their findings, catalogued here in six modules, comprise a variety of textual and non-textual artifacts (including art works, game designs, and software), organized around a set of research themes: built environments, danger and safety, dense hypermedia, technical standards, web content addressability, authorship, swarms, protocol death, and (artificial) memory.

I have read through Module One for 2023, and it is solid, interesting, deep-dive material. Module 2 is also available. Modules 3-6 are said to be 'coming soon’ (as of 4/4/24, four months into a year in which the Summer of Protocols 2024 program is already underway, with a proposal deadline of 4/12/24).

Here is one clip from the “In Search of Hardness” essay:

…it’s only in the last 50 years or so, with the rise of communications technologies, especially the internet and container shipping, and the emergence of unprecedented planet-scale coordination problems like climate action, that protocols truly came into focus as first-class phenomena in our world; the sine qua non of modernity. The word itself is less than a couple of centuries old.

And it wasn’t until the invention of blockchains in 2009 that they truly came into their own as phenomena with their own unique technological and social characteristics, distinct from other things like machines, institutions, processes, or even algorithms.

Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.

But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.


Friday, April 05, 2024

Our seduction by AI’s believable human voice.

I want to point to an excellent New Yorker article by Patrick House titled “The Lifelike Illusion of A.I.” The article strikes home for me, for when a chatbot responds to one of my prompts using the pronoun “I,” I unconsciously attribute personhood to the machine, forgetting that this is a cheap trick used by programmers of large language models to increase the plausibility of responses.

House starts off his article by describing the attachments people formed with the Furby, an animatronic toy resembling a small owl, and Pleo, an animatronic toy dinosaur. Both use a simple set of rules to make the toys appear to be alive. Furby’s eyes move up and down in a way meant to imitate an infant’s eye movements while scanning a parent’s face. Pleo mimes different emotional behaviors when touched differently.
For readers who hit the New Yorker paywall when they click the above link, here are a few clips from the article that I think get across the main points:
“A Furby possessed a pre-programmed set of around two hundred words across English and “Furbish,” a made-up language. It started by speaking Furbish; as people interacted with it, the Furby switched between its language dictionaries, creating the impression that it was learning English. The toy was “one motor—a pile of plastic,” Caleb Chung, a Furby engineer, told me. “But we’re so species-centric. That’s our big blind spot. That’s why it’s so easy to hack humans.” People who used the Furby simply assumed that it must be learning.”
Chung considers Furby and Pleo to be early, limited examples of artificial intelligence—the “single cell” form of a more advanced technology. When I asked him about the newest developments in A.I.—especially the large language models that power systems like ChatGPT—he compared the intentional design of Furby’s eye movements to the chatbots’ use of the word “I.” Both tactics are cheap, simple ways to increase believability. In this view, when ChatGPT uses the word “I,” it’s just blinking its plastic eyes, trying to convince you that it’s a living thing.
We know that, in principle, inanimate ejecta from the big bang can be converted into thinking, living matter. Is that process really happening in miniature at server farms maintained by Google, Meta, and Microsoft? One major obstacle to settling debates about the ontology of our computers is that we are biased to perceive traces of mind and intention even where there are none. In a famous 1944 study, two psychologists, Marianne Simmel and Fritz Heider, had participants watch a simple animation of two triangles and a circle moving around one another. They then asked some viewers what kind of “person” each of the shapes was. People described the shapes using words like “aggressive,” “quarrelsome,” “valiant,” “defiant,” “timid,” and “meek,” even though they knew that they’d been watching lifeless lines on a screen.
…chatbots are designed by teams of programmers, executives, and engineers working under corporate and social pressures to make a convincing product. “All these writers and physicists they’re hiring—that’s game design,” he said. “They’re basically making levels.” (In August of last year, OpenAI acquired an open-world-video-game studio, for an undisclosed amount.) Like a game, a chatbot requires user input to get going, and relies on continued interaction. Its guardrails can even be broken using certain prompts that act like cheat codes, letting players roam otherwise inaccessible areas. Blackley likened all the human tinkering involved in chatbot training to the set design required for “The Truman Show,” the TV program within the eponymous film. Without knowing it, Truman has lived his whole life surrounded not by real people but by actors playing roles—wife, friend, milkman. There’s a fantasy that “we’ve taken our great grand theories of intelligence and baked them into this model, and then we turned it on and suddenly it was exactly like this,” Blackley went on. “It’s much more like Truman’s show, in that they tweak it until it seems really cool.”
A modern chatbot isn’t a Furby. It’s not a motor and a pile of plastic. It’s an analytic behemoth trained on data containing an extraordinary quantity of human ingenuity. It’s one of the most complicated, surprising, and transformative advances in the history of computation. A Furby is knowable: its vocabulary is limited, its circuits fixed. A large language model generates ideas, words, and contexts never before known. It is also—when it takes on the form of a chatbot—a digital metamorph, a character-based shape-shifter, fluid in identity, persona, and design. To perceive its output as anything like life, or like human thinking, is to succumb to its role play.
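
To make Chung's point about cheap believability tricks concrete, here is a minimal toy sketch in Python. It is entirely my own invention for illustration (not code from the article, and certainly not the actual Furby firmware): nothing is learned, a counter simply shifts the odds of drawing words from an English list instead of a "Furbish" one, which is enough to create the impression of a toy picking up your language.

import random

# Purely hypothetical word lists standing in for the toy's two dictionaries.
FURBISH = ["dah", "boo", "kah", "way-loh", "noo-loo"]
ENGLISH = ["hello", "hungry", "sleep", "play", "love you"]

class ToyFurby:
    def __init__(self):
        self.interactions = 0

    def interact(self):
        # Each petting/feeding event nudges the toy toward English.
        self.interactions += 1

    def speak(self):
        # The chance of choosing the English dictionary grows with use,
        # capped at 90% so some Furbish always remains. No learning happens;
        # only the sampling odds change.
        p_english = min(0.9, self.interactions / 20)
        vocab = ENGLISH if random.random() < p_english else FURBISH
        return random.choice(vocab)

furby = ToyFurby()
for _ in range(30):
    furby.interact()
print([furby.speak() for _ in range(5)])  # mostly English by now

The "species-centric blind spot" House describes does the rest of the work: a user who hears more English over time assumes the toy must be learning it.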



Wednesday, April 03, 2024

Neurons help flush waste out of our brains during sleep

More information (summarized here) on what is happening in our brains while we sleep is provided by Jiang-Xie et al., who show that active neurons can stimulate the clearance of their own metabolic waste by driving changes to ion gradients in the surrounding fluid and by promoting the pulsation of nearby blood vessels. Here is the Jiang-Xie et al. abstract:

The accumulation of metabolic waste is a leading cause of numerous neurological disorders, yet we still have only limited knowledge of how the brain performs self-cleansing. Here we demonstrate that neural networks synchronize individual action potentials to create large-amplitude, rhythmic and self-perpetuating ionic waves in the interstitial fluid of the brain. These waves are a plausible mechanism to explain the correlated potentiation of the glymphatic flow through the brain parenchyma. Chemogenetic flattening of these high-energy ionic waves largely impeded cerebrospinal fluid infiltration into and clearance of molecules from the brain parenchyma. Notably, synthesized waves generated through transcranial optogenetic stimulation substantially potentiated cerebrospinal fluid-to-interstitial fluid perfusion. Our study demonstrates that neurons serve as master organizers for brain clearance. This fundamental principle introduces a new theoretical framework for the functioning of macroscopic brain waves.

Monday, April 01, 2024

When memories get complex, sleep comes to their rescue

Here I point to a PNAS article by Lutz et al. and a commentary on the work by Schechtman. Here is the Lutz et al. abstract:

Significance

Real-life events usually consist of multiple elements such as a location, people, and objects that become associated during the event. Such associations can differ in their strength, and some elements may be associated only indirectly (e.g., via a third element). Here, we show that sleep compared with nocturnal wakefulness selectively strengthens associations between elements of events that were only weakly encoded and of such that were not encoded together, thus fostering new associations. Importantly, these sleep effects were associated with an improved recall of the complete event after presentation of only a single cue. These findings uncover a fundamental role of sleep in the completion of partial information and are critical for understanding how real-life events are processed during sleep.

Abstract

Sleep supports the consolidation of episodic memory. It is, however, a matter of ongoing debate how this effect is established, because, so far, it has been demonstrated almost exclusively for simple associations, which lack the complex associative structure of real-life events, typically comprising multiple elements with different association strengths. Because of this associative structure interlinking the individual elements, a partial cue (e.g., a single element) can recover an entire multielement event. This process, referred to as pattern completion, is a fundamental property of episodic memory. Yet, it is currently unknown how sleep affects the associative structure within multielement events and subsequent processes of pattern completion. Here, we investigated the effects of post-encoding sleep, compared with a period of nocturnal wakefulness (followed by a recovery night), on multielement associative structures in healthy humans using a verbal associative learning task including strongly, weakly, and not directly encoded associations. We demonstrate that sleep selectively benefits memory for weakly associated elements as well as for associations that were not directly encoded but not for strongly associated elements within a multielement event structure. Crucially, these effects were accompanied by a beneficial effect of sleep on the ability to recall multiple elements of an event based on a single common cue. In addition, retrieval performance was predicted by sleep spindle activity during post-encoding sleep. Together, these results indicate that sleep plays a fundamental role in shaping associative structures, thereby supporting pattern completion in complex multielement events.