
Saturday, September 28, 2024

Networks of connectivity are the battleground of the future.

From Nathan Gardels, editor of Noema Magazine, "From Mass To Distributed Weapons Of Destruction":

The recent lethal attacks attributed to Israel that exploded pagers and walkie-talkies dispersed among thousands of Hezbollah militants announce a new capacity in the history of warfare for distributed destruction. Before the massive bombing raids that have since ensued, the terror-stricken population of Lebanon had been unplugging any device with batteries or a power source linked to a communication network for fear it might blow up in their faces.

The capability to simultaneously strike the far-flung tentacles of a network is only possible in this new era of connectivity that binds us all together. It stands alongside the first aerial bombing in World War I and the use of nuclear weapons by the U.S. in Japan at the end of World War II as a novel weapon of its technological times that will, sooner or later, proliferate globally.

Like these earlier inventions of warfare, the knowledge and technology that is at the outset the sole province of the clever first mover will inevitably spread to others with different, and even precisely opposite, interests and intentions. The genie is out of the bottle and can’t be put back. In time, it will be available to anyone with the wherewithal to summon it for their own purposes.

While Hezbollah reels, we can be sure that the defense establishments in every nation, from Iran to Russia, China and the U.S., are scrambling to get ahead of this new reality by seeking advantage over any adversary who is surely trying to do the same. 

Back in 1995, the Aum Shinrikyo cult released the deadly nerve agent, sarin, in a Tokyo subway, killing 13 and sickening some 5,500 commuters. In an interview at the time, the futurist Alvin Toffler observed that “what we’ve seen in Japan is the ultimate devolution of power: the demassification of mass-destruction weapons … where an individual or group can possess the means of mass destruction if he or she has the information to make them. And that information is increasingly available.”

Even that foresightful thinker could not envision then that not only can individuals or groups gain access to knowledge of the ways and means of mass destruction through information networks, but that the networks for accessing that knowledge and connecting individuals or groups can themselves serve as a delivery system for hostile intervention against their users.

Though the Israeli attacks reportedly involved low-tech logistical hacking of poorly monitored supply chains, it doesn’t take an AI scientist to see the potential of distributed warfare in today’s Internet of Things, where all devices are synced, from smartphones to home alarm systems to GPS in your car or at your bank’s ATM.

Ever-more powerful AI models will be able to algorithmically deploy programmed instructions back through the same network platforms from which they gather their vast amounts of data.

It is no longer a secret that the CIA and Israeli Mossad temporarily disabled Iran’s nuclear fuel centrifuges in 2009 by infecting their operating system with the Stuxnet malware. That such targeted attacks could also be scaled up and distributed across an array of devices through new AI tools is hardly a stretch of the imagination.

The writing, or code, is clearly on the wall after the Hezbollah attack. Dual-use networks will be weaponized as the battleground of the future. The very platforms that bring people together can also be what blows them apart.

 

 

Sunday, September 15, 2024

A caustic review of Yuval Harari's "Nexus"

I pass on the very cogent opinions of Dominic Green, a fellow of the Royal Historical Society, that appeared in the Sept. 13 issue of the Wall Street Journal. He offers several caustic comments on ideas offered in Yuval Harari's most recent book, "Nexus."

Groucho Marx said there are two types of people in this world: “those who think people can be divided up into two types, and those who don’t.” In “Nexus,” the Israeli historian-philosopher Yuval Noah Harari divides us into a naive and populist type and another type that he prefers but does not name. This omission is not surprising. The opposite of naive and populist might be wise and pluralist, but it might also be cynical and elitist. Who would admit to that?

Mr. Harari is the author of the bestselling “Sapiens,” a history of our species written with an eye on present anxieties about our future. “Nexus,” a history of our society as a series of information networks and a warning about artificial intelligence, uses a similar recipe. A dollop of historical anecdote is seasoned with a pinch of social science and a spoonful of speculation, topped with a soggy crust of prescription, and lightly dusted with premonitions of the apocalypse that will overcome us if we refuse a second serving. “Nexus” goes down easily, but it isn’t as nourishing as it claims. Much of it leaves a sour taste.

Like the Victorian novel and Caesar’s Gaul, “Nexus” divides into three parts. The first part describes the development of complex societies through the creation and control of information networks. The second argues that the digital network is both quantitatively and qualitatively different from the print network that created modern democratic societies. The third presents the AI apocalypse. An “alien” information network gone rogue, Mr. Harari warns, could “supercharge existing human conflicts,” leading to an “AI arms race” and a digital Cold War, with rival powers divided by a Silicon Curtain of chips and code.

Information, Mr. Harari writes, creates a “social nexus” among its users. The “twin pillars” of society are bureaucracy, which creates power by centralizing information, and mythology, which creates power by controlling the dispersal of “stories” and “brands.” Societies cohere around stories such as the Bible and communism and “personality cults” and brands such as Jesus and Stalin. Religion is a fiction that stamps “superhuman legitimacy” on the social order. All “true believers” are delusional. Anyone who calls a religion “a true representation of reality” is “lying.” Mr. Harari is scathing about Judaism and Christianity but hardly criticizes Islam. In this much, he is not naive.

Mythologies of religion, history and ideology, Mr. Harari believes, exploit our naive tendency to mistake all information as “an attempt to represent reality.” When the attempt is convincing, the naive “call it truth.” Mr. Harari agrees that “truth is an accurate representation of reality” but argues that only “objective facts” such as scientific data are true. “Subjective facts” based on “beliefs and feelings” cannot be true. The collaborative cacophony of “intersubjective reality,” the darkling plain of social and political contention where all our minds meet, also cannot be fully true.

Digitizing our naivety has, Mr. Harari believes, made us uncontrollable and incorrigible. “Nexus” is most interesting, and most flawed, when it examines our current situation. Digital networks overwhelm us with information, but computers can only create “order,” not “truth” or “wisdom.” AI might take over without developing human-style consciousness: “Intelligence is enough.” The nexus of machine-learning, algorithmic “user engagement” and human nature could mean that “large-scale democracies may not survive the rise of computer technology.”

The “main split” in 20th-century information was between closed, pseudo-infallible “totalitarian” systems and open, self-correcting “democratic” systems. As Mr. Harari’s third section describes, after the flood of digital information, the split will be between humans and machines. The machines will still be fallible. Will they allow us to correct them? Though “we aren’t sure” why the “democratic information network is breaking down,” Mr. Harari nevertheless argues that “social media algorithms” play such a “divisive” role that free speech has become a naive luxury, unaffordable in the age of AI. He “strongly disagrees” with Louis Brandeis’s opinion in Whitney v. California (1927) that the best way to combat false speech is with more speech.

The survival of democracy requires “regulatory institutions” that will “vet algorithms,” counter “conspiracy theories” and prevent the rise of “charismatic leaders.” Mr. Harari never mentions the First Amendment, but “Nexus” amounts to a sustained argument for its suppression. Unfortunately, his grasp of politics is tenuous and hyperbolic. He seems to believe that populism was invented with the iPhone rather than being a recurring bug that appears when democratic operating systems become corrupted or fail to update their software. He consistently confuses democracy (a method of gauging opinion with a long history) with liberalism (a mostly Anglo-American legal philosophy with a short history). He defines democracy as “an ongoing conversation between diverse information nodes,” but the openness of the conversation and the independence of its nodes derive from liberalism’s rights of individual privacy and speech. Yet “liberalism” appears nowhere in “Nexus.” Mr. Harari isn’t much concerned with liberty and justice either.

In “On Naive and Sentimental Poetry” (1795-96), Friedrich Schiller divided poetry between two modes. The naive mode is ancient and assumes that language is a window into reality. The sentimental mode belongs to our “artificial age” and sees language as a mirror to our inner turmoil. As a reflection of our troubled age of transition, “Nexus” is a mirror to the unease of our experts and elites. It divides people into the cognitively unfit and the informationally pure and proposes we divide power over speech accordingly. Call me naive, but Mr. Harari’s technocratic TED-talking is not the way to save democracy. It is the royal road to tyranny.

 

The Fear of Diverse Intelligences Like AI

I want to suggest that you read the article by Michael Levin in the Sept. 3 issue of Noema Magazine on how our fear of AI’s potential is emblematic of humanity’s larger difficulty recognizing intelligence in unfamiliar guises. (One needs to be clear, however, that the AI of GPT engines is not 'intelligence' in the broader sense of the term; they are large language models, LLMs.) Here are some clips from the later portions of his essay:

Why would natural evolution have an eternal monopoly on producing systems with preferences, goals and the intelligence to strive to meet them? How do you know that bodies whose construction includes engineered, rational input in addition to emergent physics, instead of exclusively random mutations (the mainstream picture of evolution), do not have what you mean by emotion, intelligence and an inner perspective? 

Do cyborgs (at various percentage combinations of human brain and tech) have the magic that you have? Do single cells? Do we have a convincing, progress-generating story of why the chemical system of our cells, which is compatible with emotion, would be inaccessible to construction by other intelligences in comparison to the random meanderings of evolution?

We have somewhat of a handle on emergent complexity, but we have only begun to understand emergent cognition, which appears in places that are hard for us to accept. The inner life of partially (or wholly) engineered embodied action-perception agents is no more obvious (or limited) by looking at the algorithms that its engineers wrote than is our inner life derivable from the laws of chemistry that reductionists see when they zoom into our cells. The algorithmic picture of a “machine” is no more the whole story of engineered constructs, even simple ones, than are the laws of chemistry the whole story of human minds.

Figuring out how to relate to minds of unconventional origin — not just AI and robotics but also cells, organs, hybrots, cyborgs and many others — is an existential-level task for humanity as it matures.

Our current educational materials give people the false idea that they understand the limits of what different types of matter can do. The protagonist in the “Ex Machina” movie cuts himself to determine whether he is also a robotic being. Why does this matter so much to him? Because, like many people, if he were to find cogs and gears underneath his skin, he would suddenly feel lesser than, rather than considering the possibility that he embodied a leap forward for non-organic matter. He trusts the conventional story of what intelligently arranged cogs and gears cannot do (but randomly mutated, selected protein hardware can) so much that he’s willing to give up his personal experience as a real, majestic being with consciousness and agency in the world.

The correct conclusion from such a discovery — “Huh, cool, I guess cogs and gears can form true minds!” — is inaccessible to many because the reductive story of inorganic matter is so ingrained. People often assume that though they cannot articulate it, someone knows why consciousness inhabits brains and is nowhere else. Cognitive science must be more careful and honest when exporting to society a story of where the gaps in knowledge lie and which assumptions about the substrate and origin of minds are up for revision.

It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that any benevolent intelligence planned for us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want.

The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all these fears under macho facades of protecting real, honest-to-goodness humans and their relationships, but it’s transparent and it won’t hold.

Everything — not just technology, but also ethics — will change. Thus, my challenges to all of us are these. State your positive vision of the future — not just the ubiquitous lists of the fearful things you don’t want but specify what you do want. In 100 years, is humanity still burdened by disease, infirmity, the tyranny of deoxyribonucleic acid, and behavioral firmware developed for life on the savannah? What will a mature species’ mental frameworks look like?

“Other, unconventional minds are scary, if you are not sure of your own — its reality, its quality and its ability to offer value in ways that don’t depend on limiting others.”

Clarify your beliefs: Make explicit the reasons for your certainties about what different architectures can and cannot do; include cyborgs and aliens in the classifications that drive your ethics. I especially call upon anyone who is writing, reviewing or commenting on work in this field to be explicit about your stance on the cognitive status of the chemical system we call a paramecium, the ethical position of life-machine hybrids such as cyborgs, the specific magic thing that makes up “life” (if there is any), and the scientific and ethical utility of the crisp categories you wish to preserve.

Take your organicist ideas more seriously and find out how they enrich the world beyond the superficial, contingent limits of the products of random evolution. If you really think there is something in living beings that goes beyond all machine metaphors, commit to this idea and investigate what other systems, beyond our evo-neuro-chauvinist assumptions, might also have this emergent cognition.

Consider that the beautiful, ineffable qualities of inner perspective and goal-directedness may manifest far more broadly than is easily recognized. Question your unwarranted confidence in what “mere matter” can do, and entertain the humility of emergent cognition, not just emergent complexity. Recognize the kinship we have with other minds and the fact that all learning requires your past self to be modified and replaced by an improved, new version. Rejoice in the opportunity for growth and change and take responsibility for guiding the nature of that change.

Go further — past the facile stories of what could go wrong in the future and paint the future you do want to work toward. Transcend scarcity and redistribution of limited resources, and help grow the pot. It’s not just for you — it’s for your children and for future generations, who deserve the right to live in a world unbounded by ancient, pre-scientific ideas and their stranglehold on our imaginations, abilities, and ethics.

Monday, July 01, 2024

There are no more human elites of any sort...

 I want to pass on the conclusion of a great essay by Venkatesh Rao, giving the meanings of several acronyms in parentheses. You should read the entire piece.
 

"Let me cut to the conclusion: There are no more human elites of any sort. In the sense of natural rulers that is. There are certainly all sorts of privileged and entitled types who want the benefits of being elites, but no humans up to the task of actually being elite.

It is only our anthropocentric conceits that lead us to conclude that a complex system like “civilization” must necessarily have a legible “head,” and legible and governable internal processes for staffing that head. Preferably with the Right Sorts of People, People Like Us.

We’re all under the API (Application Programming Interface) in one way or another. What’s more, we have been for a while, since before the rise of modern AI (which just makes it embarrassingly obvious by paving the cowpaths of our subservience to technological modernity).

To know just how little you know about anything, be it car lightbulbs or national constitutions, whatever your degrees say, just ask ChatGPT to explain some deep knowledge areas to you. I don’t care if you’re a qualified automotive technician or Elon Musk or clerking for the Supreme Court. Whether you’re failson C-average George W. Bush or a DEI (Diversity-Equity-Inclusion) activist trying to swap out some Greek classics for modern lesbian classics in the canon.

What you don’t know about the world humanity has built up over millennia utterly dwarfs what you think you know. Whatever the source of your elite pretensions, they’re just that — pretensions. Whatever claim you have to being the most natural member of the governing class, it is somewhere between weak and non-existent. Your claim is really about suitability for casting in a governance LARP (Live Action Role-Playing), not aptitude for governing as a natural member of an elite.

Humans do not like this idea. We ultimately like the idea of a designated elite, and legible, just processes for choosing, installing, and removing them that legitimize our own fantasies of worth and agency. We want to believe that yes, we too can be President, and would deserve to be, and do a good job.

The alternative hypothesis is that modern civilization, with its millennia of evolved technological complexity crammed onto the cramped surface of the planet, does not admit any simple, just, and enduring notion of elite that we can use to govern ourselves. The knowledge, aptitudes, and talents required to govern the world are distributed all over, in unpredictable, unfair, constantly shifting, and messy ways. When a lightbulb fails, there is no default answer to the question of how to replace it, and what to do when mistakes are made.

The rise of modern AI is presenting us with seemingly new forms of these questions. Those who yearn for a reliable class of elites, even if they must both revere and fear that class, are predictably trying to cast AIs themselves as the new elites. Those attached to their anthropocentric conceits are trying to figure out cunning schemes to keep some group of humans reliably in charge.

But there is nobody in charge. No elites, natural or not, deserving or undeserving. And it’s been this way for longer than we care to admit.

And this is a good thing. Stop looking for elites, and look askance at anyone claiming to be part of any elite or muttering conspiratorially about any elites. The world runs itself in more complex and powerful ways than they are capable of imagining. To buy into their self-mythologizing and delusions of grandeur is to be blind to the power and complexity of the world as it actually is.

And if you ever need to remind yourself of this, try changing a car headlamp lightbulb."

Friday, June 28, 2024

How AI will transform the physical world.

I pass on the text of wild-eyed speculations that futurist Ray Kurzweil recently sent to The Economist. Since 1990 he has been writing on how soon “The Singularity” - machine intelligence exceeding human intelligence - will arrive and transform our physical world in energy, manufacturing and medicine. I have too many reservations about the realistic details of his fantasizing to even begin to list them, but the article is a fun read:

By the time children born today are in kindergarten, artificial intelligence (AI) will probably have surpassed humans at all cognitive tasks, from science to creativity. When I first predicted in 1999 that we would have such artificial general intelligence (AGI) by 2029, most experts thought I’d switched to writing fiction. But since the spectacular breakthroughs of the past few years, many experts think we will have AGI even sooner—so I’ve technically gone from being an optimist to a pessimist, without changing my prediction at all.

After working in the field for 61 years—longer than anyone else alive—I am gratified to see AI at the heart of global conversation. Yet most commentary misses how large language models like ChatGPT and Gemini fit into an even larger story. AI is about to make the leap from revolutionising just the digital world to transforming the physical world as well. This will bring countless benefits, but three areas have especially profound implications: energy, manufacturing and medicine.

Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?
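
(A quick back-of-envelope check of that 0.01% claim, a sketch in Python. The round figures - ~173,000 TW of sunlight reaching Earth and ~19 TW of average human energy demand - are my assumptions, not Kurzweil's:)

```python
# Rough check of the claim that harvesting 0.01% of incident sunlight
# could cover all human energy consumption.
solar_input_tw = 173_000   # assumed: total solar power reaching Earth, TW
human_demand_tw = 19       # assumed: average world energy consumption, TW

harvest_fraction = 0.0001  # 0.01%
harvested_tw = solar_input_tw * harvest_fraction

print(f"Harvested: {harvested_tw:.1f} TW vs demand: {human_demand_tw} TW")
# Harvested: 17.3 TW vs demand: 19 TW -- roughly right as stated, though
# with today's ~20%-efficient panels you would need to intercept about
# five times that area of sunlight.
```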

The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.

By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free.

Energy abundance enables another revolution: in manufacturing. The costs of almost all goods—from food and clothing to electronics and cars—come largely from a few common factors such as energy, labour (including cognitive labour like R&D and design) and raw materials. AI is on course to vastly lower all these costs.

After cheap, abundant solar energy, the next component is human labour, which is often backbreaking and dangerous. AI is making big strides in robotics that can greatly reduce labour costs. Robotics will also reduce raw-material extraction costs, and AI is finding ways to replace expensive rare-earth elements with common ones like zirconium, silicon and carbon-based graphene. Together, this means that most kinds of goods will become amazingly cheap and abundant.

These advanced manufacturing capabilities will allow the price-performance of computing to maintain the exponential trajectory of the past century—a 75-quadrillion-fold improvement since 1939. This is due to a feedback loop: today’s cutting-edge AI chips are used to optimise designs for next-generation chips. In terms of calculations per second per constant dollar, the best hardware available last November could do 48bn. Nvidia’s new B200 GPUs exceed 500bn.
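
(Another aside from me: the figures quoted here are easy to sanity-check. A 75-quadrillion-fold gain since 1939 implies a doubling roughly every year and a half, and the 48bn-to-500bn jump is a few more doublings in a single year - a sketch using only the numbers in the paragraph above:)

```python
import math

# Sanity-check the exponential trajectory Kurzweil cites.
fold_since_1939 = 75e15          # 75-quadrillion-fold improvement
years = 2024 - 1939

doublings = math.log2(fold_since_1939)
print(f"{doublings:.0f} doublings -> one every {years / doublings:.2f} years")
# ~56 doublings -> one every ~1.5 years, a Moore's-law-like cadence

# The one-year jump: 48bn -> 500bn calculations per second per dollar
print(f"B200 jump: {math.log2(500 / 48):.1f} doublings in about a year")
```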

As we build the titanic computing power needed to simulate biology, we’ll unlock the third physical revolution from AI: medicine. Despite 200 years of dramatic progress, our understanding of the human body is still built on messy approximations that are usually mostly right for most patients, but probably aren’t totally right for you. Tens of thousands of Americans a year die from reactions to drugs that studies said should help them.

Yet AI is starting to turn medicine into an exact science. Instead of painstaking trial-and-error in an experimental lab, molecular biosimulation—precise computer modelling that aids the study of the human body and how drugs work—can quickly assess billions of options to find the most promising medicines. Last summer the first drug designed end-to-end by AI entered phase-2 trials for treating idiopathic pulmonary fibrosis, a lung disease. Dozens of other AI-designed drugs are now entering trials.

Both the drug-discovery and trial pipelines will be supercharged as simulations incorporate the immensely richer data that AI makes possible. In all of history until 2022, science had determined the shapes of around 190,000 proteins. That year DeepMind’s AlphaFold 2 discovered over 200m, which have been released free of charge to researchers to help develop new treatments.

Much more laboratory research is needed to populate larger simulations accurately, but the roadmap is clear. Next, AI will simulate protein complexes, then organelles, cells, tissues, organs and—eventually—the whole body.

This will ultimately replace today’s clinical trials, which are expensive, risky, slow and statistically underpowered. Even in a phase-3 trial, there’s probably not one single subject who matches you on every relevant factor of genetics, lifestyle, comorbidities, drug interactions and disease variation.

Digital trials will let us tailor medicines to each individual patient. The potential is breathtaking: to cure not just diseases like cancer and Alzheimer’s, but the harmful effects of ageing itself.

Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.

This is AI’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings. ■


Monday, June 17, 2024

Empty innovation: What are we even doing?

I came across an interesting commentary by "Tante" on innovation, invention, and progress (or the lack thereof) in the constant churning, and rise and fall, of new ideas and products in the absence of questions like "Why are we doing this?" and "Who is profiting?". In spite of the speaker's arrogance and annoying style, I think it is worth a viewing.

Monday, April 08, 2024

New protocols for uncertain times.

I want to point to a project launched by Venkatesh Rao and others last year: “The Summer of Protocols.” Some background for this project can be found in his essay “In Search of Hardness”. Also, “The Unreasonable Sufficiency of Protocols” essay by Rao et al. is an excellent presentation of what protocols are about. I strongly recommend that you read it if nothing else.

Here is a description of the project: 

Over 18 weeks in Summer 2023, 33 researchers from diverse fields including architecture, law, game design, technology, media, art, and workplace safety engaged in collaborative speculation, discovery, design, invention, and creative production to explore protocols, broadly construed, from various angles.

Their findings, catalogued here in six modules, comprise a variety of textual and non-textual artifacts (including art works, game designs, and software), organized around a set of research themes: built environments, danger and safety, dense hypermedia, technical standards, web content addressability, authorship, swarms, protocol death, and (artificial) memory.
I have read through Module One for 2023, and it is solid, interesting, deep-dive stuff. Module 2 is also available. Modules 3-6 are said to be 'coming soon' (as of 4/4/24, four months into a year that already has the 2024 Summer of Protocols program underway, with a proposal deadline of 4/12/24).

Here is one clip from the “In Search of Hardness” essay:

…it’s only in the last 50 years or so, with the rise of communications technologies, especially the internet and container shipping, and the emergence of unprecedented planet-scale coordination problems like climate action, that protocols truly came into focus as first-class phenomena in our world; the sine qua non of modernity. The word itself is less than a couple of centuries old.

And it wasn’t until the invention of blockchains in 2009 that they truly came into their own as phenomena with their own unique technological and social characteristics, distinct from other things like machines, institutions, processes, or even algorithms.

Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.

But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.


Friday, March 29, 2024

How communication technology has enabled the corruption of our communication and culture.

I pass on two striking examples from today’s New York Times, with a few clips of text from each:

A.I.-Generated Garbage Is Polluting Our Culture:

(You really should read the whole article...I've given up on trying to assemble clips of text that get across the whole message, and pass on these bits towards the end of the article:)

....we find ourselves enacting a tragedy of the commons: short-term economic self-interest encourages using cheap A.I. content to maximize clicks and views, which in turn pollutes our culture and even weakens our grasp on reality. And so far, major A.I. companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture.
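
(For the technically curious, here is a minimal sketch of the kind of statistical watermarking the author describes - my toy illustration of the published 'green list' idea from the research literature, not any company's actual scheme. A keyed hash splits the vocabulary in half; a generator that favors the 'green' half leaves a fingerprint a detector can test for:)

```python
import hashlib

KEY = b"demo-key"  # shared secret between generator and detector; illustrative only

def is_green(word: str) -> bool:
    """A keyed hash assigns each word to a 'green' or 'red' list."""
    h = hashlib.sha256(KEY + word.lower().encode()).digest()
    return h[0] % 2 == 0  # ~half of all words land in the green list

def green_fraction(text: str) -> float:
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

# A watermarking generator would, at each step, prefer green words when
# several candidates are about equally probable. A detector then flags
# text whose green fraction sits far above the ~0.5 chance baseline:
if __name__ == "__main__":
    sample = "some text to test for the hidden statistical pattern"
    print(f"green fraction = {green_fraction(sample):.2f} (random text ~0.50)")
```
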
Is Threads the Good Place?:

Once upon a time on social media, the nicest app of them all, Instagram, home to animal bloopers and filtered selfies, established a land called Threads, a hospitable alternative to the cursed X... Threads would provide a new refuge. It would be Twitter But Nice, a Good Place where X’s liberal exiles could gather around for a free exchange of ideas and maybe even a bit of that 2012 Twitter magic — the goofy memes, the insider riffing, the meeting of new online friends.

...And now, after a mere 10 months, we can see exactly what we built: a full-on bizarro-world X, handcrafted for the left end of the political spectrum, complete with what one user astutely labeled “a cult type vibe.” If progressives and liberals were provoked by Trumpers and Breitbart types on Twitter, on Threads they have the opportunity to be wounded by their own kind...Threads’ algorithm seems precision-tweaked to confront the user with posts devoted to whichever progressive position is slightly lefter-than-thou....There’s some kind of algorithm that’s dusting up the same kind of outrage that Twitter had. Threads feels like it’s splintering the left.

The fragmentation of social media may have been as inevitable as the fragmentation of broadcast media. Perhaps also inevitable, any social media app aiming to succeed financially must capitalize on the worst aspects of social behavior. And it may be that Hobbes, history’s cheery optimist, was right: “The condition of man is a condition of war of every one against every one.” Threads, it turns out, is just another battlefield.


 

Wednesday, March 20, 2024

Fundamentally changing the nature of war.

I generally try to keep a distance from 'the real world' and apocalyptic visions of what AI might do, but I decided to pass on some clips from this technology essay in The Wall Street Journal that makes some very plausible predictions about the future of armed conflicts between political entities:

The future of warfare won’t be decided by weapons systems but by systems of weapons, and those systems will cost less. Many of them already exist, whether they’re the Shahed drones attacking shipping in the Gulf of Aden or the Switchblade drones destroying Russian tanks in the Donbas or smart seaborne mines around Taiwan. What doesn’t yet exist are the AI-directed systems that will allow a nation to take unmanned warfare to scale. But they’re coming.

At its core, AI is a technology based on pattern recognition. In military theory, the interplay between pattern recognition and decision-making is known as the OODA loop—observe, orient, decide, act. The OODA loop theory, developed in the 1950s by Air Force fighter pilot John Boyd, contends that the side in a conflict that can move through its OODA loop fastest will possess a decisive battlefield advantage.

For example, of the more than 150 drone attacks on U.S. forces since the Oct. 7 attacks, in all but one case the OODA loop used by our forces was sufficient to subvert the attack. Our warships and bases were able to observe the incoming drones, orient against the threat, decide to launch countermeasures and then act. Deployed in AI-directed swarms, however, the same drones could overwhelm any human-directed OODA loop. It’s impossible to launch thousands of autonomous drones piloted by individuals, but the computational capacity of AI makes such swarms a possibility.
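
(A toy model from me, with invented numbers, of why swarm scale breaks a human-directed OODA loop - the point the authors are making above:)

```python
# Toy saturation model of the swarm-vs-OODA-loop argument.
# All figures below are assumptions for illustration only.

human_loop_sec = 20      # assumed seconds per human observe-orient-decide-act cycle
ai_loop_sec = 0.1        # assumed seconds per AI-directed cycle
attack_window_sec = 120  # assumed time incoming drones give the defender

for loop_sec, label in [(human_loop_sec, "human-directed"), (ai_loop_sec, "AI-directed")]:
    max_engaged = attack_window_sec / loop_sec
    print(f"{label:>14} defense: ~{max_engaged:.0f} threats handled in the window")

# ~6 threats for the human loop vs ~1200 for the AI loop: any swarm larger
# than the defender's loop capacity overwhelms it, which is the core of
# the 'war of OODA loops' claim.
```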

This will transform warfare. The race won’t be for the best platforms but for the best AI directing those platforms. It’s a war of OODA loops, swarm versus swarm. The winning side will be the one that’s developed the AI-based decision-making that can outpace their adversary. Warfare is headed toward a brain-on-brain conflict.

The Department of Defense is already researching a “brain-computer interface,” which is a direct communications pathway between the brain and an AI. A recent study by the RAND Corporation examining how such an interface could “support human-machine decision-making” raised the myriad ethical concerns that exist when humans become the weakest link in the wartime decision-making chain. To avoid a nightmare future with battlefields populated by fully autonomous killer robots, the U.S. has insisted that a human decision maker must always remain in the loop before any AI-based system might conduct a lethal strike.

But will our adversaries show similar restraint? Or would they be willing to remove the human to gain an edge on the battlefield? The first battles in this new age of warfare are only now being fought. It’s easy to imagine a future, however, where navies will cease to operate as fleets and will become schools of unmanned surface and submersible vessels, where air forces will stand down their squadrons and stand up their swarms, and where a conquering army will appear less like Alexander’s soldiers and more like a robotic infestation.

Much like the nuclear arms race of the last century, the AI arms race will define this current one. Whoever wins will possess a profound military advantage. Make no mistake, if placed in authoritarian hands, AI dominance will become a tool of conquest, just as Alexander expanded his empire with the new weapons and tactics of his age. The ancient historian Plutarch reminds us how that campaign ended: “When Alexander saw the breadth of his domain, he wept, for there were no more worlds to conquer.”

Elliot Ackerman and James Stavridis are the authors of “2054,” a novel that speculates about the role of AI in future conflicts, just published by Penguin Press. Ackerman, a Marine veteran, is the author of numerous books and a senior fellow at Yale’s Jackson School of Global Affairs. Admiral Stavridis, U.S. Navy (ret.), was the 16th Supreme Allied Commander of NATO and is a partner at the Carlyle Group.

 


Thursday, March 14, 2024

An inexpensive Helium Mobile 5G cellphone plan that pays you to use it?

This is a follow-up to the previous post describing my setting up of a 5G hotspot on Helium’s decentralized 5G infrastructure that earns MOBILE tokens. The cash value of the MOBILE tokens earned since July 2022 is ~7X the cost of the equipment needed to generate them.

Now I want to document some further facts for my future self and MindBlog’s techie readers.

Recently Helium has introduced Helium Mobile, a cell phone plan using this new 5G infrastructure which costs $20/month - much less expensive than other cellular providers like Verizon and AT&T. It has partnered with T-Mobile to fill in coverage areas its own 5G network hasn’t reached.

Nine days ago I downloaded the Helium Mobile app onto my iPhone 12 and set up an account with an eSIM and a new phone number, alongside my phone number of many years now in a Verizon account using a physical SIM card.  

My iPhone has been earning MOBILE tokens by sharing its location to allow better mapping of the Helium 5G network. As I am writing this, the app has earned 3,346 MOBILE tokens that could be sold and converted to $14.32 at this moment (the price of MOBILE, like other cryptocurrencies, is very volatile).

If this earning rate continues (a big ‘if’), the cellular account I am paying $20/month for will be generating MOBILE tokens each month worth ~$45. The $20 monthly cell phone plan charge can be paid with MOBILE tokens, leaving ~$25/month passive income from my subscribing to Helium Mobile and allowing anonymous tracking of my phone as I move about. (Apple sends a message every three days asking if I am sure I want to be allowing continuous tracking by this one App.)
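
(For my future self, the arithmetic behind that projection - a sketch; the token price is volatile, so every figure is a snapshot:)

```python
# Projecting monthly Helium Mobile earnings from my first nine days.
tokens_earned = 3346     # MOBILE tokens after 9 days of mapping
usd_value = 14.32        # cash value of those tokens at this moment
days = 9
plan_cost = 20.00        # monthly plan price, USD

price_per_token = usd_value / tokens_earned
monthly_tokens = tokens_earned / days * 30
monthly_usd = monthly_tokens * price_per_token

print(f"~{monthly_tokens:,.0f} tokens/month, worth ~${monthly_usd:.2f}")
print(f"net after paying the plan in tokens: ~${monthly_usd - plan_cost:.2f}/month")
# ~11,153 tokens/month, ~$48 gross and ~$28 net at today's price;
# the ~$45 / ~$25 figures in the post round these down a bit.
```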

So there you have it.  Any cautionary notes from techie readers about the cybersecurity implications of what I am doing would be welcome.  
 

Wednesday, March 13, 2024

MindBlog becomes a 5G cellular hotspot in the low-priced ‘People’s Cell Phone Network’ - Helium Mobile

I am writing this post, as is frequently the case, for myself to be able to look up in the future, as well as for MindBlog techie readers who might stumble across it. It describes my setup of a 5G hotspot in the new Helium Mobile 5G network. A post following this one will describe my becoming a user of this new cell phone network by putting the Helium Mobile App on my iPhone using an eSIM.

This becomes my third post describing my involvement in the part of the crypto movement seeking to 'return power to the people.' It attempts to bypass the large corporations that are the current gatekeepers and regulators of commerce and communications, and that are able to assert controls serving their own self-interests and profits more than the public good.

The two previous posts (here and here) describe my being seduced into crypto-world by my son's having made a six-hundred-fold return on investment by virtue of being one of the first cohort (during the "genesis" period) to put little black boxes and antennas on their window sills, earning HNT (Helium blockchain tokens) using LoRa 868 MHz antennas transmitting and receiving in the 'Internet of Things.' I was a latecomer, and in the 22 months since June of 2022 have earned ~$200 on an investment of ~$500 of equipment.

Helium next came up with the idea of setting up its own 5G cell phone network, called Helium Mobile. Individual Helium 5G hotspots (small cell phone antennas) use Citizens Broadband Radio Service (CBRS) radios to provide cellular coverage like that provided by telecom companies' more expensive networks of towers. (CBRS is a wide broadcast 3.5 GHz band in the United States that does not require a spectrum license for use.)

In July of 2022, I decided to set up the Helium 5G hotspot equipment shown in the picture below to be in the genesis period for the establishment of this new Helium 5G cellular network. I made my Abyssinian cat Martin, shown in front of the Bobber 500 miner, the system administrator. The 5G antenna seen on the sill in the middle of the window views ~170 degrees of the southern sky.

This system cost ~$2,500 and by early March 2024 had earned ~4.3 million MOBILE tokens worth ~$18,000. As in a Ponzi scheme, most of the rewards are from the genesis period; March 2024 earnings are ~$45/week. If this rate of earnings persists, it represents an annual ROI (return on investment) of ~100%.
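
(The ROI arithmetic, spelled out - again a snapshot at today's token price:)

```python
# ROI check for the Bobber 500 hotspot described above.
equipment_cost = 2500.0  # USD, July 2022
weekly_earnings = 45.0   # USD/week at the March 2024 rate

annual_earnings = weekly_earnings * 52
roi = annual_earnings / equipment_cost
print(f"${annual_earnings:,.0f}/year on ${equipment_cost:,.0f} -> ROI ~{roi:.0%}")
# $2,340/year on $2,500 -> ~94%, i.e. the ~100% figure above, ignoring
# the much larger genesis-period windfall already banked.
```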

The writing above is mine, and I decided just for the heck of it to recruit the assistance of several GPT engines (Google Gemini, OpenAI ChatGPT-4, Anthropic Claude, and Microsoft Bing) with the following prompt:

Could you please act as Deric Bownds writing a post for his Blog, Deric’s MindBlog at http://mindblog.dericbownds.net. Describe my setting up of a Helium G5 hot spot using the Bobber 500 miner with FreedomFi CBRS and an internal antenna during the genesis period for the establishment of the new Helium G5 cellular network. 

The responses of Gemini and Bing were excited, poetic, lyrical, overwrought hyperbole, very much in the spirit of the cyber-world pitches one sees online. ChatGPT-4 (the best response) and Claude did toss up some interesting paragraphs, and below I pass on some of ChatGPT-4's slightly edited and redundant text. I would be more sanguine about all of the hype over 'democratic participation' (or at least my participation) if I had not been completely flummoxed by the cacophony of the Helium Discord server, YouTube videos, and the complexity of the HIP (Helium Improvement Protocol) votes that ratify new changes and improvements to the system.

The Helium Network, known for pioneering decentralized wireless infrastructure, has embarked on an ambitious journey to extend its reach into 5G, leveraging the power of blockchain technology to create a community-powered network. The initiative is ambitious, aiming to democratize cellular connectivity, making it more accessible and affordable. This endeavor is not just about technology; it's about reimagining the fabric of our digital society.

This genesis period of the Helium 5G network represented a fascinating intersection of technology, psychology, and community. The network's decentralized nature challenges traditional models of infrastructure ownership and control, proposing a more democratic and participatory framework. It prompts us to reflect on our relationship with technology: Are we mere consumers, or can we also be contributors and custodians of the digital landscapes we inhabit?

As more individuals and communities join this initiative, setting up their hotspots and expanding the network, a unique form of collective intelligence emerges. This network, powered by the people, for the people, embodies a new kind of digital ecosystem—one that values collaboration over competition, and shared success over isolated gains.

The implications for our cognitive landscapes are profound. By democratizing access to 5G connectivity, we are not just enhancing our ability to communicate; we are potentially altering the way we think, learn, and engage with the world around us. The Helium 5G network invites us to imagine a future where connectivity is as natural and essential as the air we breathe, a future where our digital and physical realities are seamlessly intertwined.

The technology we embrace today shapes the world we live in tomorrow. Through initiatives like the Helium 5G network, we have the opportunity to craft a digital environment that reflects our highest aspirations for society—one that is open, inclusive, and fundamentally human.

In this journey of connectivity, we are all contributors, shaping the digital ethos of our era. The genesis of the Helium 5G network is not just a technical milestone; it is a call to action, inviting us to rethink our role in the digital age and to contribute to a network that embodies our collective values and aspirations.

The next post describes my becoming a customer of the new Helium Mobile cellular service plan costing $20/month.


 

Monday, March 11, 2024

How AI’s GPT engines work - Lanier’s forest and trees metaphor.

Jaron Lanier does a piece in The New Yorker titled "How to Picture A.I." (if you hit the paywall by clicking the link, try opening an 'empty tab' on your browser, then copy and paste in the URL that got you the paywall). I tried to do my usual sampling of small chunks of text to give the message, but found that very difficult, and so I pass on several early paragraphs and urge you to read the whole article. Lanier's metaphors give me a better sense of what is going on in a GPT engine, but I'm still largely mystified. Anyway, here's some text:
In this piece, I hope to explain how such A.I. works in a way that floats above the often mystifying technical details and instead emphasizes how the technology modifies—and depends on—human input.
Let’s try thinking, in a fanciful way, about distinguishing a picture of a cat from one of a dog. Digital images are made of pixels, and we need to do something to get beyond just a list of them. One approach is to lay a grid over the picture that measures something a little more than mere color. For example, we could start by measuring the degree to which colors change in each grid square—now we have a number in each square that might represent the prominence of sharp edges in that patch of the image. A single layer of such measurements still won’t distinguish cats from dogs. But we can lay down a second grid over the first, measuring something about the first grid, and then another, and another. We can build a tower of layers, the bottommost measuring patches of the image, and each subsequent layer measuring the layer beneath it. This basic idea has been around for half a century, but only recently have we found the right tweaks to get it to work well. No one really knows whether there might be a better way still.
Here I will make our cartoon almost like an illustration in a children’s book. You can think of a tall structure of these grids as a great tree trunk growing out of the image. (The trunk is probably rectangular instead of round, since most pictures are rectangular.) Inside the tree, each little square on each grid is adorned with a number. Picture yourself climbing the tree and looking inside with an X-ray as you ascend: numbers that you find at the highest reaches depend on numbers lower down.
Alas, what we have so far still won’t be able to tell cats from dogs. But now we can start “training” our tree. (As you know, I dislike the anthropomorphic term “training,” but we’ll let it go.) Imagine that the bottom of our tree is flat, and that you can slide pictures under it. Now take a collection of cat and dog pictures that are clearly and correctly labelled “cat” and “dog,” and slide them, one by one, beneath its lowest layer. Measurements will cascade upward toward the top layer of the tree—the canopy layer, if you like, which might be seen by people in helicopters. At first, the results displayed by the canopy won’t be coherent. But we can dive into the tree—with a magic laser, let’s say—to adjust the numbers in its various layers to get a better result. We can boost the numbers that turn out to be most helpful in distinguishing cats from dogs. The process is not straightforward, since changing a number on one layer might cause a ripple of changes on other layers. Eventually, if we succeed, the numbers on the leaves of the canopy will all be ones when there’s a dog in the photo, and they will all be twos when there’s a cat.
Now, amazingly, we have created a tool—a trained tree—that distinguishes cats from dogs. Computer scientists call the grid elements found at each level “neurons,” in order to suggest a connection with biological brains, but the similarity is limited. While biological neurons are sometimes organized in “layers,” such as in the cortex, they are not always; in fact, there are fewer layers in the cortex than in an artificial neural network. With A.I., however, it’s turned out that adding a lot of layers vastly improves performance, which is why you see the term “deep” so often, as in “deep learning”—it means a lot of layers.
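
(To make Lanier's metaphor concrete for techie readers, here is a tiny sketch of my own - not Lanier's - in Python/NumPy. A bottom 'grid' measures the prominence of sharp edges in patches of a synthetic image, and one upper layer of numbers is adjusted, his 'magic laser', until the canopy separates the two labeled classes. The images are synthetic stand-ins, not real cat and dog photos:)

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_grid(img, patches=4):
    """Bottom layer of the 'tree': per-patch prominence of sharp edges."""
    h = img.shape[0] // patches
    g = np.abs(np.diff(img, axis=0))  # color change between neighboring pixels
    return np.array([g[i*h:(i+1)*h].mean() for i in range(patches)])

def fake_image(label):
    # Stand-ins: class 0 ('dog') images are smoother, class 1 ('cat')
    # images have sharper texture. Real photos would use the same pipeline.
    return rng.normal(0, 1.0 if label else 0.3, size=(16, 16))

labels = [0] * 50 + [1] * 50
X = np.array([edge_grid(fake_image(l)) for l in labels])
y = np.array(labels)

# One upper layer of the tree: weighted measurements of the layer below,
# 'trained' by nudging the numbers until the canopy matches the labels.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # canopy output, between 0 and 1
    w -= 0.1 * X.T @ (p - y) / len(y)    # the 'magic laser' adjusting numbers
    b -= 0.1 * (p - y).mean()

acc = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.0%}")   # the two classes separate easily
```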

Monday, February 26, 2024

The "enjunkification" of our online lives

I want to pass on two articles I've pored over several times, that describe the increasing "complexification" or "enjunkification" of our online lives. The first is "The Year Millennials Aged Out of the Internet" by millennial writer Max Read. Here are some clips from the article.

Something is changing about the internet, and I am not the only person to have noticed. Everywhere I turned online this year, someone was mourning: Amazon is “making itself worse” (as New York magazine moaned); Google Search is a “bloated and overmonetized” tragedy (as The Atlantic lamented); “social media is doomed to die,” (as the tech news website The Verge proclaimed); even TikTok is becoming enjunkified (to bowdlerize an inventive coinage of the sci-fi writer Cory Doctorow, republished in Wired). But the main complaint I have heard was put best, and most bluntly, in The New Yorker: “The Internet Isn’t Fun Anymore.”

The heaviest users and most engaged American audience on the internet are no longer millennials but our successors in Generation Z. If the internet is no longer fun for millennials, it may simply be because it’s not our internet anymore. It belongs to zoomers now...zoomers, and the adolescents in Generation Alpha nipping at their generational heels, still seem to be having plenty of fun online. Even if I find it all inscrutable and a bit irritating, the creative expression and exuberant sociality that made the internet so fun to me a decade ago are booming among 20-somethings on TikTok, Instagram, Discord, Twitch and even X.

...even if you’re jealous of zoomers and their Discord chats and TikTok memes, consider that the combined inevitability of enjunkification and cognitive decline means that their internet will die, too, and Generation Alpha will have its own era of inscrutable memes and alienating influencers. And then the zoomers can join millennials in feeling what boomers have felt for decades: annoyed and uncomfortable at the computer.

The second article I mention is Jon Caramanica's "Have We Reached the End of TikTok’s Infinite Scroll?" Again, a few clips:

The app once offered seemingly endless chances to be charmed by music, dances, personalities and products. But in only a few short years, its promise of kismet is evaporating...increasingly in recent months, scrolling the feed has come to resemble fumbling in the junk drawer: navigating a collection of abandoned desires, who-put-that-here fluff and things that take up awkward space in a way that blocks access to what you’re actually looking for.
This has happened before, of course — the moment when Twitter turned from good-faith salon to sinister outrage derby, or when Instagram, and its army of influencers, learned to homogenize joy and beauty...the malaise that has begun to suffuse TikTok feels systemic, market-driven and also potentially existential, suggesting the end of a flourishing era and the precipice of a wasteland period.
It’s an unfortunate result of the confluence of a few crucial factors. Most glaring is the arrival of TikTok’s shopping platform, which has turned even small creators into spokespeople and the for-you page of recommendations into an unruly bazaar...The effect of seeing all of these quasi-ads — QVC in your pocket — is soul-deadening...The speed and volume of the shift has been startling. Over time, Instagram became glutted with sponsored content and buy links, but its shopping interface never derailed the overall experience of the app. TikTok Shop has done that in just a few months, spoiling a tremendous amount of good will in the process.


 

 

Friday, February 16, 2024

An agent-based vision for scaling modern AI - Why current efforts are misguided.

I pass on my edited clips from Venkatesh Rao’s most recent newsletter - substantially shortening its length and inserting a few definitions of techno-nerd-speak acronyms he uses in brackets [ ]. He suggests interesting analogies between the future evolution of AI and the evolutionary course taken by biological organisms:

…specific understandings of embodiment, boundary intelligence, temporality, and personhood, and their engineering implications, taken together, point to an agent-based vision of how to scale AI that I’ve started calling Massed Muddler Intelligence or MMI, that doesn’t look much like anything I’ve heard discussed.


…right now there’s only one option: monolithic scaling. Larger and larger models trained on larger and larger piles of compute and data…monolithic scaling is doomed. It is headed towards technical failure at a certain scale we are fast approaching


What sort of AI, in an engineering sense, should we attempt to build, in the same sense as one might ask, how should we attempt to build 2,500 foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick and mortar construction simply does not scale to those heights


…If we build AI datacenters that are 10x or 100x the scale of todays and train GPT-style models on them …problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable…current monolithic approaches to scaling AI are the equivalent of brick-and-mortar construction and fundamentally doomed…We need the equivalent of a reinforced concrete beam for AI…A distributed agent-based vision of modern AI is the scaling solution we need.

Scaling Precedents from Biology

There’s a precedent here in biology. Biological intelligence scales better with more agent-like organisms. For example: humans build organizations that are smarter than any individual, if you measure by complexity of outcomes, and also smarter than the scaling achieved by less agentic eusocial organisms…ants, bees, and sheep cannot build complex planet-scale civilizations. It takes much more sophisticated agent-like units to do that.

Agents are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (ie in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority…as in sociobiology, sustainably scalable AI agents will necessarily have the ability to govern and influence other agents (human or AI) in turn, through the same symmetric mechanisms that are used to govern and influence them…If you want to scale AI sustainably, governance and influence cannot be one way street from some privileged agents (humans) to other less privileged agents (AIs)….

If you want complexity and scaling, you cannot govern and influence a sophisticated agent without opening yourself up to being governed and influenced back. The reasoning here is similar to why liberal democracies generally scale human intelligence far better than autocracies. The MMI vision I’m going to outline could be considered “liberal democracy for mixed human-AI agent systems.” Rather than the autocratic idea of “alignment” associated with “AGI,” MMIs will call for something like the emergent mutualist harmony that characterizes functional liberal democracies. You don’t need an “alignment” theory. You need social contract theory.

The Road to Muddledom

Agents, and the distributed multiagent systems (MAS) that represent the corresponding scaling model, obviously aren’t a new idea in AI…MAS were often built as light architectural extensions of early object-oriented non-AI systems…none of this machinery works or is even particularly relevant for the problem of scaling modern AI, where the core source of computational intelligence is a large-X-model with fundamentally inscrutable input-output behavior. This is a new, oozy kind of intelligence we are building with for the first time…We’re in new regimes, dealing with fundamentally new building materials and aiming for new scales (orders of magnitude larger than anything imagined in the 1990s).

Muddling Doctrines

How do you build muddler agents? I don’t have a blueprint obviously, but here are four loose architectural doctrines, based on the four heterodoxies I noted at the start of this essay (see links there): embodiment, boundary intelligence, temporality, and personhood.

Embodiment matters: The physical form factor AI takes is highly relevant to its nature, behavior, and scaling potential.

Boundary intelligence matters. Past a threshold, intelligence is a function of the management of boundaries across which data flows, not the sophistication of the interiors where it is processed.

Temporality matters: The kind of time experienced by an AI matters for how it can scale sustainably.

Personhood matters: The attributes of an AI that enable humans and AIs to relate to each other as persons (I-you), rather than things (I-it), are necessary elements to being able to construct coherent scalably composable agents at all.


The first three principles require that AI computation involve real atoms, live in real time, and deal with the second law of thermodynamics.

The fourth heterodoxy turns personhood …into a load-bearing architectural element in getting to scaled AI via muddler agents. You cannot have scaled AI without agency, and you cannot have a scalable sort of agency without personhood.
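Rao offers no blueprint, but the four doctrines read like architectural requirements, so a toy sketch may help fix ideas. The Python fragment below is entirely my own illustration, not anything from the essay: the names (`Body`, `Boundary`, `Persona`, `MuddlerAgent`) are hypothetical stand-ins for how embodiment, boundary management, personhood, and temporality might factor into one agent.

```python
from dataclasses import dataclass
import time

@dataclass
class Body:
    """Embodiment: the agent's physical or virtual envelope."""
    envelope_intact: bool = True

class Boundary:
    """Boundary intelligence: manage what crosses the membrane,
    rather than making the interior ever more sophisticated."""
    def admit(self, message: dict) -> bool:
        # Admit only well-formed messages from an identified sender.
        return "sender" in message and "content" in message

@dataclass
class Persona:
    """Personhood: a stable facade other agents can relate to."""
    name: str
    def present(self) -> str:
        return f"[{self.name}]"

class MuddlerAgent:
    def __init__(self, body: Body, boundary: Boundary, persona: Persona):
        self.body, self.boundary, self.persona = body, boundary, persona

    def live(self, inbox: list, ticks: int = 2):
        """Temporality: the agent runs in real time, tick by tick,
        instead of reacting to prompts from outside of time."""
        for _ in range(ticks):
            for msg in inbox:
                if self.boundary.admit(msg):
                    print(self.persona.present(), "considering:", msg["content"])
            time.sleep(0.1)  # wall-clock time actually passes

agent = MuddlerAgent(Body(), Boundary(), Persona("muddler-1"))
agent.live([{"sender": "peer", "content": "shall we coordinate?"}])
```

The point of the sketch is only the factoring: the envelope, the membrane, the facade, and the clock are separate concerns the agent composes.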

As we go up the scale of biological complexity, we get much more programmable and flexible forms of communication and coordination. …we can start to distinguish individuals by their stable “personalities” (informationally, the identifiable signature of personhood). We go from army ants marching in death spirals to murmurations of starlings to formations of geese to wolf packs maneuvering tactically in pincer movements…to humans whose most sophisticated coordination patterns are so complex merely deciphering them stresses our intelligence to the limit.

Biology doesn’t scale to larger animals by making very large unicellular creatures. Instead it shifts to a multi-cellular strategy. Then it goes further: from simple reproduction of “mass produced” cells to specialized cells forming differentiated structures (tissues) via ontogeny (and later, in some mammals, through neoteny). Agents that scale well have to be complex and variegated agents internally, to achieve highly expressive and varied behaviors externally. But they must also present simplified facades — personas — to each other to enable the scaling and coordination.

Setting aside questions of philosophy (identity, consciousness), personhood is a scaling strategy. Personhood is the behavioral equivalent of a cell. “Persons” are stable behavioral units that can compose in “multicellular” ways because they communicate differently than simpler agents with weak or non-existent personal boundaries, and low-agency organisms like plants and insects.

When we form and perform “personas,” we offer a harder interface around our squishy interior psyches that composes well with the interfaces of other persons for scaling purposes. A personhood performance is something like a composability API [application programming interface] for intelligence scaling.
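The API metaphor is concrete enough to sketch. In the hypothetical fragment below (my illustration, not Rao’s), `SquishyInterior` stands in for the inscrutable model interior and `Persona` for the hard facade: whatever the interior does, other agents always see the same stable contract.

```python
import random

class SquishyInterior:
    """Stand-in for an inscrutable interior: outputs vary run to run."""
    def deliberate(self, request: str) -> str:
        mood = random.choice(["enthusiastic", "reluctant", "distracted"])
        return f"({mood}) mulling over {request!r}"

class Persona:
    """The hard facade: a fixed, predictable contract for other agents."""
    def __init__(self, interior: SquishyInterior):
        self._interior = interior

    def request(self, task: str) -> dict:
        # Whatever happens inside, callers always get back this shape.
        trace = self._interior.deliberate(task)
        return {"task": task, "status": "accepted", "trace": trace}

view = Persona(SquishyInterior())
print(view.request("schedule a meeting"))  # stable keys, variable interior
```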

Beyond Training Determinism

…Right now AIs experience most of their “time” during training, and then effectively enter a kind of stasis…They require versioned “updates” to get caught up again…GPT4 can’t simply grow or evolve its way to GPT5 by living life and learning from it. It needs to go through the human-assisted birth/death (or regeneration perhaps) singularity of a whole new training effort. And it’s not obvious how to automate this bottleneck in either a Darwinian or Lamarckian way.

…For all their power, modern AIs are still not able to live in real time and keep up with reality without human assistance outside of extremely controlled and stable environments…As far as temporality is concerned, we are in a “training determinism” regime that is very un-agentic and corresponds to genetic determinism in biology. What makes agents agents is that they live in real time, in a feedback loop with external reality unfolding at its actual pace of evolution.
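A minimal sketch of the contrast, under obviously toy assumptions (the names and the one-parameter learning rule are my own illustrative choices, not anything from the essay): `FrozenModel` gets its parameters once at training time and then sits outside of time, while `OnlineAgent` keeps adjusting its estimate as observations stream in.

```python
class FrozenModel:
    """Training determinism: parameters are fixed once, at training time."""
    def __init__(self, weight: float):
        self.weight = weight  # static until a whole new "version"

    def predict(self, x: float) -> float:
        return self.weight * x

class OnlineAgent:
    """An agent in time: updates its estimate as reality streams in."""
    def __init__(self, weight: float = 0.0, lr: float = 0.1):
        self.weight, self.lr = weight, lr

    def observe(self, x: float, y: float):
        error = y - self.weight * x
        self.weight += self.lr * error * x  # one step of online regression

frozen = FrozenModel(weight=1.0)  # trained when reality looked like y = 1*x
agent = OnlineAgent()
for x, y in [(1.0, 2.1), (2.0, 3.9), (1.5, 3.0)]:  # reality drifted to y ≈ 2x
    agent.observe(x, y)
print(frozen.predict(2.0), agent.weight * 2.0)  # the frozen model is stale
```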

Muddling Through vs. Godding Through

Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated, messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely. “Complex” here means things humans typically do in larger groups, like designing and implementing complex governance policies or undertaking complex engineering projects. The threshold for “complex” is roughly where explicit coordination protocols become necessary scaffolding. This often coincides with the threshold where reality gets too big to hold in one human head.

The root method attempts to fight limitations with brute, monolithic force. It aims to absorb all the relevant information regarding the circumstances a priori (analogous to training determinism), and discover the globally optimal solution through “rational” and “comprehensive” thinking. If the branch method is “muddling through,” we might say that the root, or rational-comprehensive approach, is an attempt to “god through.”…Lindblom’s thesis is basically that muddling through eats godding through for lunch.

To put it much more bluntly: Godding through doesn’t work at all beyond small scales and it’s not because the brains are too small. Reasoning backwards from complex goals in the context of an existing complex system evolving in real time doesn’t work. You have to discover forwards (not reason forwards) by muddling.
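Lindblom’s two methods can be caricatured in a few lines of code. In this hypothetical sketch (mine, not Lindblom’s or Rao’s), the objective drifts over time; `god_through` computes a global optimum once, a priori, while `muddle_through` only ever compares nearby alternatives, step by step, and ends up tracking the moving target.

```python
def landscape(x: int, t: int) -> float:
    """A toy objective whose peak drifts as time passes."""
    return -(x - (5 + 0.5 * t)) ** 2

def god_through(t0: int = 0) -> int:
    # Root method: absorb everything a priori, pick the global optimum once.
    return max(range(20), key=lambda x: landscape(x, t0))

def muddle_through(steps: int = 20) -> int:
    # Branch method: successive limited comparisons among nearby options.
    x = 0
    for t in range(steps):
        x = max([x - 1, x, x + 1], key=lambda c: landscape(c, t))
    return x

t_final = 19
plan = god_through()        # optimal for t=0, never revisited
muddled = muddle_through()  # discovered forwards, one comparison at a time
print("godding:", landscape(plan, t_final))   # far from the drifted peak
print("muddling:", landscape(muddled, t_final))
```

Running it, the one-shot global plan scores far worse against the final state of the landscape than the muddler that discovered forwards.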

…in thinking about humans, it is obvious that Lindblom was right…Even where godding through apparently prevails through brute force up to some scale, the costs are very high, and often those who pay the costs don’t survive to complain…Fear of Big Blundering Gods is the essential worry of traditional AI safety theology, but as I’ve been arguing since 2012 (see Hacking the Non-Disposable Planet), this is not an issue because these BBGs will collapse under their own weight long before they get big enough for such collapses to be exceptionally, existentially dangerous.

This worry is similar to the worry that a 2,500-foot brick-and-mortar building might collapse and kill everybody in the city…It’s not a problem because you can’t build a brick-and-mortar building to that height. You need reinforced concrete. And that gets you into entirely different sorts of safety concerns.

Protocols for Massed Muddling

How do you go from individual agents (AI or human) muddling through to masses of them muddling through together? What are the protocols of massed muddling? These are also the protocols of AI scaling towards MMIs (Massed Muddler Intelligences).

When you put a lot of them together using a mix of hard coordination protocols (including virtual-economic ones) and softer cultural protocols, you get a massed muddler intelligence, or MMI. Market economies and liberal democracies are loose, low-bandwidth examples of MMIs that use humans and mostly non-AI computers to scale muddler intelligence. The challenge now is to build far denser, higher bandwidth ones using modern AI agents.

I suspect at the scales we are talking about, we will have something that looks more like a market economy than like the internal command-economy structure of the human body. Both feature a lot of hierarchical structure and differentiation, but the former is much less planned, and more a result of emergent patterns of agglomeration around environmental circumstances (think how the large metros that anchor the global economy form around the natural geography of the planet, rather than how major organ systems of the human body are put together).

While I suspect MMIs will partly emerge via choreographed ontogenic roadmaps from a clump of “stem cells” (is that perhaps what LxMs [large X models, such as large language models] are??), the way market economies emerge from nationalist industrial policies, overall the emergent intelligences will be masses of muddling rather than coherent artificial leviathans. Scaling “plans” will help launch, but not determine, the nature of MMIs or their internal operating protocols at scale. Just as tax breaks and tariffs might help launch a market economy but not determine the sophistication of the economy that emerges or the transactional patterns that coordinate it. This also answers the regulation question: regulating modern AI MMIs will look like economic regulation, not technology regulation.

How the agentic nature of the individual muddler agent building block is preserved and protected is the critical piece of the puzzle, just as individual economic rights (such as property rights, contracting regimes) are the critical piece in the design of “free” markets.

Muddling produces a shell of behavioral uncertainty around what a muddler agent will do, and how it will react to new information, that creates an outward pressure on the compressive forces created by the dense aggregation required for scaling. This is something like the electron degeneracy pressure that resists the collapse of stars under their own gravity. Or how the individualist streak in even the most dedicated communist human resists the collapse of even the most powerful cults into pure hive minds. Or how exit/voice dynamics resist the compression forces of unaccountable organizational management.

…the fundamental intentional tendency of individual agents, on which all other tendencies, autonomous or not, socially influenceable or not, rest…[is] body envelope integrity.

…This is a familiar concern for biological organisms. Defending against your body being violently penetrated is probably the foundation of our entire personality. It’s the foundation of our personal safety priorities — don’t get stabbed, shot, bitten, clawed or raped. All politics and economics is an extension of envelope integrity preservation instincts. For example, strictures against theft (especially identity theft) are about protecting the body envelope integrity of your economic body. Habeas corpus is the bedrock of modern political systems for a reason. Your physical body is your political body…if you don’t have body envelope integrity you have nothing.

This is easiest to appreciate in one very visceral and vivid form of MMIs: distributed robot systems. Robots, like biological organisms, have an actual physical body envelope (though unlike biological organisms they can have high-bandwidth near-field telepathy). They must preserve the integrity of that envelope as a first order of business … But robot MMIs are not the only possible form factor. We can think of purely software agents that live in an AI datacenter, and maintain boundaries and personhood envelopes that are primarily informational rather than physical. The same fundamental drive applies. The integrity of the (virtual) body envelope is the first concern.

This is why embodiment is an axiomatic concern. The nature of the integrity problem depends on the nature of the embodiment. A robot can run away from danger. A software muddler agent in a shared memory space within a large datacenter must rely on memory protection, encryption, and other non-spatial affordances of computing environments.
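As a toy example of such a non-spatial affordance, the sketch below (my illustration; the essay names no mechanism) gives a software agent a sealed “envelope”: its state is signed with an HMAC, so legitimate updates re-seal it and any outside write becomes detectable.

```python
import hashlib, hmac, json

class VirtualEnvelope:
    """A toy 'body envelope' for a software agent: state is sealed with an
    HMAC so tampering from outside the envelope is detectable."""

    def __init__(self, secret: bytes, state: dict):
        self._secret = secret
        self.state = state
        self._seal = self._sign(state)

    def _sign(self, state: dict) -> str:
        blob = json.dumps(state, sort_keys=True).encode()
        return hmac.new(self._secret, blob, hashlib.sha256).hexdigest()

    def update(self, key: str, value) -> None:
        # Legitimate changes pass through the envelope and re-seal it.
        self.state[key] = value
        self._seal = self._sign(self.state)

    def intact(self) -> bool:
        return hmac.compare_digest(self._seal, self._sign(self.state))

agent = VirtualEnvelope(b"agent-secret", {"goal": "coordinate"})
agent.update("goal", "negotiate")  # through the envelope: still intact
agent.state["goal"] = "obey"       # outside write, bypassing the envelope
print(agent.intact())              # False: envelope violation detected
```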

Personhood is the emergent result of successfully solving the body-envelope-integrity problem over time, allowing an agent to present a coherent and hard mask model to other agents even in unpredictable environments. This is not about putting a smiley-faced RLHF [Reinforcement Learning from Human Feedback] mask on a shoggoth interior to superficially “align” it. This is about offering a predictable API for other agents to reliably interface with, so scaled structures in time and social space don’t collapse. [They have] hardness - the property or quality that allows agents with soft and squishy interiors to offer hard and unyielding interfaces to other agents, allowing for coordination at scale.

…We can go back to the analogy to reinforced concrete. MMIs are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and cement in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.

MMIs will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.

And then we can have a whole new culture war about whether that’s a good thing.

Monday, February 05, 2024

Functional human brain tissue produced by layering different neuronal types with 3D bioprinting

A very important advance by Su-Chun Zhang and collaborators at the University of Wisconsin that moves studies of nerve cells connecting in nutrient dishes from two to three dimensions:  

Highlights

  • Functional human neural tissues assembled by 3D bioprinting
  • Neural circuits formed between defined neural subtypes
  • Functional connections established between cortical-striatal tissues
  • Printed tissues for modeling neural network impairment

Summary

Probing how human neural networks operate is hindered by the lack of reliable human neural tissues amenable to the dynamic functional assessment of neural circuits. We developed a 3D bioprinting platform to assemble tissues with defined human neural cell types in a desired dimension using a commercial bioprinter. The printed neuronal progenitors differentiate into neurons and form functional neural circuits within and between tissue layers with specificity within weeks, evidenced by the cortical-to-striatal projection, spontaneous synaptic currents, and synaptic response to neuronal excitation. Printed astrocyte progenitors develop into mature astrocytes with elaborated processes and form functional neuron-astrocyte networks, indicated by calcium flux and glutamate uptake in response to neuronal excitation under physiological and pathological conditions. These designed human neural tissues will likely be useful for understanding the wiring of human neural networks, modeling pathological processes, and serving as platforms for drug testing.
 
Friday, February 02, 2024

Towards a Metaphysics of Worlds

I have a splitting headache from having just watched a 27-minute rapid-fire YouTube lecture by Venkatesh Rao, given last November at the Autonomous Worlds Assembly in Istanbul (part of DevConnect, a major Ethereum ecosystem event). His latest newsletter, “Towards a Metaphysics of Worlds,” adds some notes and context and gives a link to its slides. As Rao notes:

“This may seem like a glimpse into a very obscure and nerdy subculture for many (most?) of you, but I think something very important and interesting is brewing in this scene and more people should know about it.”

I would suggest that you skip the YouTube lecture and cherry-pick your way through his slides. Some are very simple and quite striking, clearly presenting interesting ideas about the epistemology, ontology, and definitions of worlds. Slide 11, for example, makes clearer what Rao means by “Worlds.”

Thursday, December 28, 2023

Origins of our current crises in the 1990s, the great malformation, and the illusion of race.

I'm passing on three clips I found most striking from David Brooks' recent NYTimes Sidney Awards column:

I generally don’t agree with the arguments of those on the populist right, but I have to admit there’s a lot of intellectual energy there these days. (The Sidneys go to essays that challenge readers, as well as to those that affirm.) With that, the first Sidney goes to Christopher Caldwell for his essay “The Fateful Nineties” in First Things. Most people see the 1990s as a golden moment for America — we’d won the Cold War, we enjoyed solid economic growth, the federal government sometimes ran surpluses, crime rates fell, tech took off.

Caldwell, on the other hand, describes the decade as one in which sensible people fell for a series of self-destructive illusions: Globalization means nation-states don’t matter. Cyberspace means the material world is less important. Capitalism can run on its own without a countervailing system of moral values. Elite technocrats can manage the world better than regular people. The world will be a better place if we cancel people for their linguistic infractions.

As Caldwell sums it up: “America’s discovery of world dominance might turn out in the 21st century to be what Spain’s discovery of gold had been in the 16th — a source of destabilization and decline disguised as a windfall.”

***************** 

In “The Great Malformation,” Talbot Brewer observes that parenthood comes with “an ironclad obligation to raise one’s children as best one can.” But these days parents have surrendered child rearing to the corporations that dominate the attention industry, TikTok, Facebook, Instagram and so on: “The work of cultural transmission is increasingly being conducted in such a way as to maximize the earnings of those who oversee it.”

He continues: “We would be astonished to discover a human community that did not attempt to pass along to its children a form of life that had won the affirmation of its elders. We would be utterly flabbergasted to discover a community that went to great lengths to pass along a form of life that its elders regarded as seriously deficient or mistaken. Yet we have slipped unawares into precisely this bizarre arrangement.” In most societies, the economy takes place in a historically rooted cultural setting. But in our world, he argues, the corporations own and determine the culture, shaping our preferences and forming, or not forming, our conception of the good.

*****************

It’s rare that an essay jolts my convictions on some major topic. But that happened with one by Subrena E. Smith and David Livingstone Smith, called “The Trouble With Race and Its Many Shades of Deceit,” in New Lines Magazine. The Smiths are, as they put it, a so-called mixed-race couple — she has brown skin, his is beige. They support the aims of diversity, equity and inclusion programs but argue that there is a fatal contradiction in many antiracism programs: “Although the purpose of anti-racist training is to vanquish racism, most of these initiatives are simultaneously committed to upholding and celebrating race.” They continue: “In the real world, can we have race without racism coming along for the ride? Trying to extinguish racism while shoring up race is like trying to put out a fire by pouring gasoline on it.”

I’ve heard this argument — that we should seek to get rid of the whole concept of race — before and dismissed it. I did so because too many people I know have formed their identity around racial solidarity — it’s a source of meaning and strength in their lives. The Smiths argue that this is a mistake because race is a myth: “The scientific study of human variation shows that race is not meaningfully understood as a biological grouping, and there are no such things as racial essences. There is now near consensus among scholars that race is an ideological construction rather than a biological fact. Race was fashioned for nothing that was good. History has shown us how groups of people ‘racialize’ other groups of people to justify their exploitation, oppression and annihilation.”

Monday, December 25, 2023

Large Language Models are not yet providing theories of human language.

 From Dentella et al. (open source):

Significance
The synthetic language generated by recent Large Language Models (LMs) strongly resembles the natural languages of humans. This resemblance has given rise to claims that LMs can serve as the basis of a theory of human language. Given the absence of transparency as to what drives the performance of LMs, the characteristics of their language competence remain vague. Through systematic testing, we demonstrate that LMs perform nearly at chance in some language judgment tasks, while revealing a stark absence of response stability and a bias toward yes-responses. Our results raise the question of how knowledge of language in LMs is engineered to have specific characteristics that are absent from human performance.
Abstract
Humans are universally good in providing stable and accurate judgments about what forms part of their language and what not. Large Language Models (LMs) are claimed to possess human-like language abilities; hence, they are expected to emulate this behavior by providing both stable and accurate answers, when asked whether a string of words complies with or deviates from their next-word predictions. This work tests whether stability and accuracy are showcased by GPT-3/text-davinci-002, GPT-3/text-davinci-003, and ChatGPT, using a series of judgment tasks that tap on 8 linguistic phenomena: plural attraction, anaphora, center embedding, comparatives, intrusive resumption, negative polarity items, order of adjectives, and order of adverbs. For every phenomenon, 10 sentences (5 grammatical and 5 ungrammatical) are tested, each randomly repeated 10 times, totaling 800 elicited judgments per LM (total n = 2,400). Our results reveal variable above-chance accuracy in the grammatical condition, below-chance accuracy in the ungrammatical condition, a significant instability of answers across phenomena, and a yes-response bias for all the tested LMs. Furthermore, we found no evidence that repetition aids the Models to converge on a processing strategy that culminates in stable answers, either accurate or inaccurate. We demonstrate that the LMs’ performance in identifying (un)grammatical word patterns is in stark contrast to what is observed in humans (n = 80, tested on the same tasks) and argue that adopting LMs as theories of human language is not motivated at their current stage of development.
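The elicitation protocol is simple enough to outline in code. The sketch below is a hypothetical reconstruction, not the authors’ released code: `ask_model` is a placeholder for an actual LM API call, and the stimuli shown are invented stand-ins for the paper’s 8 phenomena x 10 sentences x 10 repetitions design.

```python
import random
from collections import defaultdict

def ask_model(sentence: str) -> bool:
    """Placeholder for a real LM call asking: 'Is this sentence
    grammatical, yes or no?' Replace with an actual API request."""
    return random.choice([True, False])

# Invented stand-in stimuli: (sentence, is_grammatical) per phenomenon.
# The paper uses 8 phenomena x 10 sentences (5 grammatical, 5 not).
stimuli = {
    "anaphora": [("The boy said that he left.", True),
                 ("The boy said that herself left.", False)],
}

REPEATS = 10  # each sentence is judged 10 times, as in the paper
accuracy = defaultdict(list)
stability = defaultdict(list)

for phenomenon, items in stimuli.items():
    for sentence, is_grammatical in items:
        answers = [ask_model(sentence) for _ in range(REPEATS)]
        accuracy[phenomenon] += [a == is_grammatical for a in answers]
        stability[phenomenon].append(len(set(answers)) == 1)  # all 10 agree?

for p in stimuli:
    acc = sum(accuracy[p]) / len(accuracy[p])
    stab = sum(stability[p]) / len(stability[p])
    print(f"{p}: accuracy={acc:.2f}, stability={stab:.2f}")
```

With a placeholder model the numbers are noise; the structure simply mirrors the paper’s separation of accuracy (agreement with the gold label) from stability (agreement of repeated judgments with each other).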