Showing posts with label futures. Show all posts

Thursday, December 12, 2024

Sustainability of Animal-Sourced Foods - how to deal with farting cows...

I've just read through a number of articles in a Special Feature section of the most recent issue of PNAS on the future of animal- and plant-sourced food. After a balanced lead article by Qaim et al., an article that really caught my eye was "Mitigating methane emissions in grazing beef cattle with a seaweed-based feed additive: Implications for climate-smart agriculture." The first line of its abstract reads: "This study suggests that the addition of pelleted bromoform-containing seaweed (Asparagopsis taxiformis) to the diet of grazing beef cattle can potentially reduce enteric methane (CH4) emissions (g/d) by an average of 37.7% without adversely impacting animal performance."

Saturday, September 28, 2024

Networks of connectivity are the battleground of the future.

From Nathan Gardels, editor of Noema Magazine: "From Mass To Distributed Weapons Of Destruction" : 

The recent lethal attacks attributed to Israel that exploded pagers and walkie-talkies dispersed among thousands of Hezbollah militants announce a new capacity in the history of warfare for distributed destruction. Before the massive bombing raids that have since ensued, the terror-stricken population of Lebanon had been unplugging any device with batteries or a power source linked to a communication network for fear it might blow up in their faces.

The capability to simultaneously strike the far-flung tentacles of a network is only possible in this new era of connectivity that binds us all together. It stands alongside the first aerial bombing in World War I and the use of nuclear weapons by the U.S. in Japan at the end of World War II as a novel weapon of its technological times that will, sooner or later, proliferate globally.

Like these earlier inventions of warfare, the knowledge and technology that is at the outset the sole province of the clever first mover will inevitably spread to others with different, and even precisely opposite, interests and intentions. The genie is out of the bottle and can’t be put back. In time, it will be available to anyone with the wherewithal to summon it for their own purposes.

While Hezbollah reels, we can be sure that the defense establishments in every nation, from Iran to Russia, China and the U.S., are scrambling to get ahead of this new reality by seeking advantage over any adversary who is surely trying to do the same. 

Back in 1995, the Aum Shinrikyo cult released the deadly nerve agent, sarin, in a Tokyo subway, killing 13 and sickening some 5,500 commuters. In an interview at the time, the futurist Alvin Toffler observed that “what we’ve seen in Japan is the ultimate devolution of power: the demassification of mass-destruction weapons … where an individual or group can possess the means of mass destruction if he or she has the information to make them. And that information is increasingly available.”

Even that foresightful thinker could not envision then that not only can individuals or groups gain access to knowledge of the ways and means of mass destruction through information networks, but that the networks for accessing that knowledge and connecting individuals or groups can themselves serve as a delivery system for hostile intervention against their users.

Though the Israeli attacks reportedly involved low-tech logistical hacking of poorly monitored supply chains, it doesn’t take an AI scientist to see the potential of distributed warfare in today’s Internet of Things, where all devices are synced, from smartphones to home alarm systems to GPS in your car or at your bank’s ATM.

Ever-more powerful AI models will be able to algorithmically deploy programmed instructions back through the same network platforms from which they gather their vast amounts of data.

It is no longer a secret that the CIA and Israeli Mossad temporarily disabled Iran’s nuclear fuel centrifuges in 2009 by infecting their operating system with the Stuxnet malware. That such targeted attacks could also be scaled up and distributed across an array of devices through new AI tools is hardly a stretch of the imagination.

The writing, or code, is clearly on the wall after the Hezbollah attack. Dual-use networks will be weaponized as the battleground of the future. The very platforms that bring people together can also be what blows them apart.


Sunday, September 15, 2024

A caustic review of Yuval Harari's "Nexus"

I pass on the very cogent opinions of Dominic Green, a fellow of the Royal Historical Society, that appeared in the Sept. 13 issue of the Wall Street Journal. He offers several caustic comments on the ideas presented in Yuval Harari's most recent book, "Nexus."

Groucho Marx said there are two types of people in this world: “those who think people can be divided up into two types, and those who don’t.” In “Nexus,” the Israeli historian-philosopher Yuval Noah Harari divides us into a naive and populist type and another type that he prefers but does not name. This omission is not surprising. The opposite of naive and populist might be wise and pluralist, but it might also be cynical and elitist. Who would admit to that?

Mr. Harari is the author of the bestselling “Sapiens,” a history of our species written with an eye on present anxieties about our future. “Nexus,” a history of our society as a series of information networks and a warning about artificial intelligence, uses a similar recipe. A dollop of historical anecdote is seasoned with a pinch of social science and a spoonful of speculation, topped with a soggy crust of prescription, and lightly dusted with premonitions of the apocalypse that will overcome us if we refuse a second serving. “Nexus” goes down easily, but it isn’t as nourishing as it claims. Much of it leaves a sour taste.

Like the Victorian novel and Caesar’s Gaul, “Nexus” divides into three parts. The first part describes the development of complex societies through the creation and control of information networks. The second argues that the digital network is both quantitatively and qualitatively different from the print network that created modern democratic societies. The third presents the AI apocalypse. An “alien” information network gone rogue, Mr. Harari warns, could “supercharge existing human conflicts,” leading to an “AI arms race” and a digital Cold War, with rival powers divided by a Silicon Curtain of chips and code.

Information, Mr. Harari writes, creates a “social nexus” among its users. The “twin pillars” of society are bureaucracy, which creates power by centralizing information, and mythology, which creates power by controlling the dispersal of “stories” and “brands.” Societies cohere around stories such as the Bible and communism and “personality cults” and brands such as Jesus and Stalin. Religion is a fiction that stamps “superhuman legitimacy” on the social order. All “true believers” are delusional. Anyone who calls a religion “a true representation of reality” is “lying.” Mr. Harari is scathing about Judaism and Christianity but hardly criticizes Islam. In this much, he is not naive.

Mythologies of religion, history and ideology, Mr. Harari believes, exploit our naive tendency to mistake all information as “an attempt to represent reality.” When the attempt is convincing, the naive “call it truth.” Mr. Harari agrees that “truth is an accurate representation of reality” but argues that only “objective facts” such as scientific data are true. “Subjective facts” based on “beliefs and feelings” cannot be true. The collaborative cacophony of “intersubjective reality,” the darkling plain of social and political contention where all our minds meet, also cannot be fully true.

Digitizing our naivety has, Mr. Harari believes, made us uncontrollable and incorrigible. “Nexus” is most interesting, and most flawed, when it examines our current situation. Digital networks overwhelm us with information, but computers can only create “order,” not “truth” or “wisdom.” AI might take over without developing human-style consciousness: “Intelligence is enough.” The nexus of machine-learning, algorithmic “user engagement” and human nature could mean that “large-scale democracies may not survive the rise of computer technology.”

The “main split” in 20th-century information was between closed, pseudo-infallible “totalitarian” systems and open, self-correcting “democratic” systems. As Mr. Harari’s third section describes, after the flood of digital information, the split will be between humans and machines. The machines will still be fallible. Will they allow us to correct them? Though “we aren’t sure” why the “democratic information network is breaking down,” Mr. Harari nevertheless argues that “social media algorithms” play such a “divisive” role that free speech has become a naive luxury, unaffordable in the age of AI. He “strongly disagrees” with Louis Brandeis’s opinion in Whitney v. California (1927) that the best way to combat false speech is with more speech.

The survival of democracy requires “regulatory institutions” that will “vet algorithms,” counter “conspiracy theories” and prevent the rise of “charismatic leaders.” Mr. Harari never mentions the First Amendment, but “Nexus” amounts to a sustained argument for its suppression. Unfortunately, his grasp of politics is tenuous and hyperbolic. He seems to believe that populism was invented with the iPhone rather than being a recurring bug that appears when democratic operating systems become corrupted or fail to update their software. He consistently confuses democracy (a method of gauging opinion with a long history) with liberalism (a mostly Anglo-American legal philosophy with a short history). He defines democracy as “an ongoing conversation between diverse information nodes,” but the openness of the conversation and the independence of its nodes derive from liberalism’s rights of individual privacy and speech. Yet “liberalism” appears nowhere in “Nexus.” Mr. Harari isn’t much concerned with liberty and justice either.

In “On Naive and Sentimental Poetry” (1795-96), Friedrich Schiller divided poetry between two modes. The naive mode is ancient and assumes that language is a window into reality. The sentimental mode belongs to our “artificial age” and sees language as a mirror to our inner turmoil. As a reflection of our troubled age of transition, “Nexus” is a mirror to the unease of our experts and elites. It divides people into the cognitively unfit and the informationally pure and proposes we divide power over speech accordingly. Call me naive, but Mr. Harari’s technocratic TED-talking is not the way to save democracy. It is the royal road to tyranny.

 

The Fear of Diverse Intelligences Like AI

I want to suggest that you read the article by Michael Levin in the Sept. 3 issue of Noema Magazine on how our fear of AI’s potential is emblematic of humanity’s larger difficulty recognizing intelligence in unfamiliar guises. (One should be clear, however, that the AI of the GPT engines is not 'intelligence' in the broader sense of the term; they are large language models, LLMs.) Here are some clips from the later portions of his essay:

Why would natural evolution have an eternal monopoly on producing systems with preferences, goals and the intelligence to strive to meet them? How do you know that bodies whose construction includes engineered, rational input in addition to emergent physics, instead of exclusively random mutations (the mainstream picture of evolution), do not have what you mean by emotion, intelligence and an inner perspective? 

Do cyborgs (at various percentage combinations of human brain and tech) have the magic that you have? Do single cells? Do we have a convincing, progress-generating story of why the chemical system of our cells, which is compatible with emotion, would be inaccessible to construction by other intelligences in comparison to the random meanderings of evolution?

We have somewhat of a handle on emergent complexity, but we have only begun to understand emergent cognition, which appears in places that are hard for us to accept. The inner life of partially (or wholly) engineered embodied action-perception agents is no more obvious (or limited) by looking at the algorithms that its engineers wrote than is our inner life derivable from the laws of chemistry that reductionists see when they zoom into our cells. The algorithmic picture of a “machine” is no more the whole story of engineered constructs, even simple ones, than are the laws of chemistry the whole story of human minds.

Figuring out how to relate to minds of unconventional origin — not just AI and robotics but also cells, organs, hybrots, cyborgs and many others — is an existential-level task for humanity as it matures.

Our current educational materials give people the false idea that they understand the limits of what different types of matter can do.  The protagonist in the “Ex Machina” movie cuts himself to determine whether he is also a robotic being. Why does this matter so much to him? Because, like many people, if he were to find cogs and gears underneath his skin, he would suddenly feel lesser than, rather than considering the possibility that he embodied a leap-forward for non-organic matter.  He trusts the conventional story of what intelligently arranged cogs and gears cannot do (but randomly mutated, selected protein hardware can) so much that he’s willing to give up his personal experience as a real, majestic being with consciousness and agency in the world.

The correct conclusion from such a discovery — “Huh, cool, I guess cogs and gears can form true minds!” — is inaccessible to many because the reductive story of inorganic matter is so ingrained. People often assume that though they cannot articulate it, someone knows why consciousness inhabits brains and is nowhere else. Cognitive science must be more careful and honest when exporting to society a story of where the gaps in knowledge lie and which assumptions about the substrate and origin of minds are up for revision.

It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that any benevolent intelligence planned for us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want.

The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all these fears under macho facades of protecting real, honest-to-goodness humans and their relationships, but it’s transparent and it won’t hold.

Everything — not just technology, but also ethics — will change. Thus, my challenges to all of us are these. State your positive vision of the future — not just the ubiquitous lists of the fearful things you don’t want but specify what you do want. In 100 years, is humanity still burdened by disease, infirmity, the tyranny of deoxyribonucleic acid, and behavioral firmware developed for life on the savannah? What will a mature species’ mental frameworks look like?

“Other, unconventional minds are scary, if you are not sure of your own — its reality, its quality and its ability to offer value in ways that don’t depend on limiting others.”

Clarify your beliefs: Make explicit the reasons for your certainties about what different architectures can and cannot do; include cyborgs and aliens in the classifications that drive your ethics. I especially call upon anyone who is writing, reviewing or commenting on work in this field to be explicit about your stance on the cognitive status of the chemical system we call a paramecium, the ethical position of life-machine hybrids such as cyborgs, the specific magic thing that makes up “life” (if there is any), and the scientific and ethical utility of the crisp categories you wish to preserve.

Take your organicist ideas more seriously and find out how they enrich the world beyond the superficial, contingent limits of the products of random evolution. If you really think there is something in living beings that goes beyond all machine metaphors, commit to this idea and investigate what other systems, beyond our evo-neuro-chauvinist assumptions, might also have this emergent cognition.

Consider that the beautiful, ineffable qualities of inner perspective and goal-directedness may manifest far more broadly than is easily recognized. Question your unwarranted confidence in what “mere matter” can do, and entertain the humility of emergent cognition, not just emergent complexity. Recognize the kinship we have with other minds and the fact that all learning requires your past self to be modified and replaced by an improved, new version. Rejoice in the opportunity for growth and change and take responsibility for guiding the nature of that change.

Go further — past the facile stories of what could go wrong in the future and paint the future you do want to work toward. Transcend scarcity and redistribution of limited resources, and help grow the pot. It’s not just for you — it’s for your children and for future generations, who deserve the right to live in a world unbounded by ancient, pre-scientific ideas and their stranglehold on our imaginations, abilities, and ethics.

Friday, August 30, 2024

The delusions of transhumanists and life span extenders

One futurist story line is that we all will be seamlessly integrated with AI and the cloud, and bioengineered to live forever. This simplistic fantasy does not take into account the pervasive and fundamental relationships and fluxes between all living things. The cellular mass of individual humans, after all, is mainly composed of their diverse microbiotas (bacteria, fungi, protists) that influence how all organ systems interact with environmental input. Humans dominate the planet because their brains first evolved a largely unconscious, mirroring, collective, cooperative group intelligence from which different languages and cultures have arisen. The linguistic and cognitive brain being modeled by AI's large language models is just one component of a much larger working ensemble in continuous flux with the geosphere and biosphere.

(I've been meaning to develop the above sentiments for a longer post, but have decided to go ahead and pass them on here in case that doesn't happen.)  

 

Friday, August 02, 2024

The Ooze is Growing

In case you haven't seen enough pessimistic predictions about the next ~10-20 years, I pass on the following clips from a recent installment of my email subscription to Venkatesh Rao's "Breaking Smart" newsletter:

...world affairs are effectively on autopilot now, running on what in the control engineering world are called bang-bang laws. Bang-bang laws drive a control mechanism from one extreme to another discontinuously, like steering a car with only hard right/hard left limit steers, and regulating speed only with hard braking and floored accelerators. Or electing two opposed sets of wingnuts to govern in loose alternation...Bang-bang laws can often have nice properties like time-optimality, and making good use of limited control authority that “saturates.” And they’re simple and robust, like your thermostat, a bang-bang controller that’s in Chapter 1 of most control engineering textbooks.
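Rao's thermostat example can be sketched in a few lines of code — a minimal bang-bang controller with hysteresis. The setpoints and temperature dynamics below are illustrative assumptions of mine, not figures from the newsletter:

```python
def bang_bang(temp, heating, low=19.0, high=21.0):
    """Minimal bang-bang (on/off) controller with hysteresis.

    The actuator is driven to one of two extremes -- hard on or
    hard off -- with a dead band between `low` and `high` that
    prevents rapid switching right at the setpoint.
    """
    if temp < low:
        return True           # too cold: heater hard on
    if temp > high:
        return False          # too warm: heater hard off
    return heating            # inside the dead band: keep current state

# A short trajectory: the room warms while heating, cools while not,
# so the system oscillates between the two extremes forever.
temp, heating, history = 18.0, False, []
for _ in range(8):
    heating = bang_bang(temp, heating)
    temp += 0.8 if heating else -0.5
    history.append((round(temp, 1), heating))
```

The controller never settles at a target; it only lurches between its two limit positions — which is exactly the property Rao is using as a metaphor for alternating wingnut governance.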

...I don’t think the state of the world today is one that can be governed by the bang-bang control laws being applied to it by opposed tribes of wingnuts everywhere. It’s not the right sort of problem for the strategy. Nevertheless, it’s the strategy we’ve adopted, because it happens to be the only option available right now. Instead of an Overton window within which politics as an art and science of the possible operates, we have two narrow slits representing the bang-bang limit positions, separated by a stretch of impossible positions. This situation has been contrived by people who don’t like to think very much or at all. So we have control laws that don’t actually work in the current planetary condition...weird out-of-control behaviors unfold if you apply an ill-conceived bang-bang strategy to a system unsuited to it...you brace for weird fallouts. Just as if you’re a passenger in a car being driven in a bang-bang way, you brace for a crash...You prepare for a world that will go out of control when the bang-bang strategy fails. It’s a question of when, not if.

The world is becoming ungovernable even as visions and attempts to govern it are getting ever more outlandishly delusional and ineffective.

The only conclusion we can reach is that we’re in for a period of increasingly ungoverned anarchy. Sure, some sort of theater of governance will continue everywhere. There will be bang-Left and bang-Right theaters piously strutting about on their respective stages pretending to govern, to captive audiences desperate to believe they are being governed, confined within ever-shrinking and increasingly exclusionary islands of stable prosperity, secured with growing amounts of low-grade localized boundary-integrity-maintaining violence. Lots of individual lives will get destroyed in the process. There will also be marquee showcase things built just so there are things to point at as examples of governance doing something, even if not well. The facade will last longer than we expect, even if you’re expecting it.

But all around, the ooze will grow. The ungovernability will grow.

But the ooze is not inevitably good or bad. What will make it good or bad is the quality of the attention and care we bring to thinking and feeling our way into it. It only feels shitty-shitty right now because we are attending to it in bang-bang ways, blinded by too-simple ideas and a fear of words.

How do you ignore and sidestep the bang-bang governance theaters, and feel and think your way into the ooze, to begin co-evolving with it? That’s the question. Fortunately, it’s mostly not a question for you or me, but for the next generation.

Friday, July 19, 2024

MindBlog does a survey of futurists and recommends a few.

In the wild and woolly world of thinking about futures for the earth, its living forms, and humanity, there exists a tier of well-known public intellectuals. I prompted 5 different GPT engines (Google Gemini, ChatGPT 4o, Anthropic Claude, Perplexity, Microsoft Copilot) with: "Please act as an author who has chosen a list of the 10 most influential futurists - public intellectuals who speculate about the future of humanity. Write a brief essay describing each of them and their basic messages"

The names selected by more than one of the GPT engines were: Ray Kurzweil (5), Michio Kaku (5), Yuval Harari (4), Alvin Toffler (4), Nick Bostrom (3), Aubrey de Grey (3), William Gibson (2), Faith Popcorn (2), and Paul Roberts (2). The names chosen by only one of the engines were Peter Schwartz, Juan Enriquez, Elon Musk, Peter Diamandis, Kevin Kelly, Jane McGonigal, James Lovelock, Ari Wallach, Buckminster Fuller, Stewart Brand, Jacque Fresco, Donna Haraway, Hans Rosling, Jaron Lanier, Christopher Alberg, Dirk Helbing, and John Naisbitt.

My own favorite futurist has been Venkatesh Rao, a brilliant polymath whose ideas I have passed on in numerous MindBlog posts (see Ribbonfarm for his narratives). He is one of a cohort of younger thinkers, roughly 50 years old, generating their own newsletters and websites that I have stumbled onto recently. These include the Canadian Patrick Tanguay (Sentiers newsletter), Samuel Arbesman's site and its 'lists of lists', Parag Khanna, and Jason Crawford (Roots of Progress). I would encourage MindBlog readers to check out a few of these links.

Wednesday, July 17, 2024

An essay on Geopolitics and Globalization

I want to pass on one complete section of a larger essay addressing tribalism versus global consciousness, indicating only the titles of the other sections.

Untimely Meditations

Philosophy goes against the grain of political immediacy to insist on inconvenient truths.

 

Nathan Gardels, Noema  Magazine Editor-in-Chief

The Condition of Planetarity

The Politics of Planetary Realism

Conceptual demolition of the outmoded paradigm of nation-state realism, however, does not erase its still firmly rooted expression in present practice. Building the centripetal momentum toward binding planetary association against the weighty centrifugal pull of tribal identity is an endeavor as fraught as it is necessary.

While technology and advancing science may foster a universal understanding of the planetary condition, politics and culture have a different logic rooted in emotion and ways of life cultivated among one’s own kind. Far from moving ahead in lockstep, when they meet, they clash.

Indeed, the great paradox today is that the planetary imperative of mitigating climate change has become the province of renewed nationalism. Industrial policies designed to make the green energy transition are competing to protect and promote national self-interest instead of joining together at the level of all humanity. What we see instead is the battle of subsidies between the U.S. Inflation Reduction Act and the European Green New Deal, with both raising stiff tariff barriers to blunt China’s concerted conquest of core green technologies from storage batteries to electric vehicles, solar cells and their supply chains.

In short, rather than uniting as a threatened species to meet a challenge that knows no boundaries, competition has sidelined collaboration within the West while global warming has been weaponized in the new Cold War with China.

The new element of a greenlash, registered in the recent European elections, portends social resistance that is more about resentment of self-righteous Tesla-driving elites and the unequally borne costs of the energy transition than climate denial. The Diet-Coke imaginary of environmentalists — who sold climate policies as achievable without undue burdens on economies built around fossil fuels for more than a century — has been put to rest. As the heavy lift of the transition bites ever more deeply into the daily bread, we are learning the hard lesson that the future has a scarce constituency in consumer democracies as well as growth-oriented autocracies.

In this sense, “planetary realism” takes on a double meaning. It entails both a recognition of the interdependence of the planetary condition as well as a realistic grasp of what it will take to navigate through what remains a world of nations.

Immigration And Belonging

Becoming Universal




Friday, June 28, 2024

How AI will transform the physical world.

I pass on the text of wild-eyed speculations that futurist Ray Kurzweil recently sent to The Economist. Since 1990 he has been writing on how soon "The Singularity" - machine intelligence exceeding human intelligence - will arrive and transform our physical world in energy, manufacturing and medicine. I have too many reservations about the realistic details of his fantasizing to even begin to list them, but the article is a fun read:

By the time children  born today are in kindergarten, artificial intelligence (AI) will probably have surpassed humans at all cognitive tasks, from science to creativity. When I first predicted in 1999 that we would have such artificial general intelligence (AGI) by 2029, most experts thought I’d switched to writing fiction. But since the spectacular breakthroughs of the past few years, many experts think we will have AGI even sooner—so I’ve technically gone from being an optimist to a pessimist, without changing my prediction at all.

After working in the field for 61 years—longer than anyone else alive—I am gratified to see AI at the heart of global conversation. Yet most commentary misses how large language models like ChatGPT and Gemini fit into an even larger story. AI is about to make the leap from revolutionising just the digital world to transforming the physical world as well. This will bring countless benefits, but three areas have especially profound implications: energy, manufacturing and medicine.

Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?
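Kurzweil's 0.01% figure survives a back-of-envelope check. The inputs below are my own rough, order-of-magnitude assumptions (roughly 173,000 TW of sunlight reaching Earth, about 19 TW of average human primary power use), not numbers from the article:

```python
# Rough sanity check of the claim that ~0.01% of incident sunlight
# could cover all human energy consumption.
solar_flux_tw = 173_000   # sunlight reaching Earth, terawatts (approximate)
human_demand_tw = 19      # average human primary power use, terawatts (approximate)

fraction_needed = human_demand_tw / solar_flux_tw
print(f"{fraction_needed:.2%}")   # prints 0.01%
```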

The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.

By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free.

Energy abundance enables another revolution: in manufacturing. The costs of almost all goods—from food and clothing to electronics and cars—come largely from a few common factors such as energy, labour (including cognitive labour like R&D and design) and raw materials. AI is on course to vastly lower all these costs.

After cheap, abundant solar energy, the next component is human labour, which is often backbreaking and dangerous. AI is making big strides in robotics that can greatly reduce labour costs. Robotics will also reduce raw-material extraction costs, and AI is finding ways to replace expensive rare-earth elements with common ones like zirconium, silicon and carbon-based graphene. Together, this means that most kinds of goods will become amazingly cheap and abundant.

These advanced manufacturing capabilities will allow the price-performance of computing to maintain the exponential trajectory of the past century—a 75-quadrillion-fold improvement since 1939. This is due to a feedback loop: today’s cutting-edge AI chips are used to optimise designs for next-generation chips. In terms of calculations per second per constant dollar, the best hardware available last November could do 48bn. Nvidia’s new B200 GPUs exceed 500bn.
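The 75-quadrillion-fold figure implies a remarkably steady doubling rate. A quick computation (taking 1939-2023 as the interval, which is my assumption, not stated in the article) gives roughly one doubling every year and a half:

```python
import math

improvement = 75e15        # "75-quadrillion-fold" price-performance gain
years = 2023 - 1939        # assumed interval for the improvement

doublings = math.log2(improvement)       # about 56 doublings
years_per_doubling = years / doublings   # about 1.5 years per doubling
```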

As we build the titanic computing power needed to simulate biology, we’ll unlock the third physical revolution from AI: medicine. Despite 200 years of dramatic progress, our understanding of the human body is still built on messy approximations that are usually mostly right for most patients, but probably aren’t totally right for you. Tens of thousands of Americans a year die from reactions to drugs that studies said should help them.

Yet AI is starting to turn medicine into an exact science. Instead of painstaking trial-and-error in an experimental lab, molecular biosimulation—precise computer modelling that aids the study of the human body and how drugs work—can quickly assess billions of options to find the most promising medicines. Last summer the first drug designed end-to-end by AI entered phase-2 trials for treating idiopathic pulmonary fibrosis, a lung disease. Dozens of other AI-designed drugs are now entering trials.

Both the drug-discovery and trial pipelines will be supercharged as simulations incorporate the immensely richer data that AI makes possible. In all of history until 2022, science had determined the shapes of around 190,000 proteins. That year DeepMind’s AlphaFold 2 discovered over 200m, which have been released free of charge to researchers to help develop new treatments.

Much more laboratory research is needed to populate larger simulations accurately, but the roadmap is clear. Next, AI will simulate protein complexes, then organelles, cells, tissues, organs and—eventually—the whole body.

This will ultimately replace today’s clinical trials, which are expensive, risky, slow and statistically underpowered. Even in a phase-3 trial, there’s probably not one single subject who matches you on every relevant factor of genetics, lifestyle, comorbidities, drug interactions and disease variation.

Digital trials will let us tailor medicines to each individual patient. The potential is breathtaking: to cure not just diseases like cancer and Alzheimer’s, but the harmful effects of ageing itself.

Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.
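The "escape velocity" claim is, at bottom, a threshold in simple arithmetic. A toy model (entirely my own sketch, with an invented acceleration parameter, not Kurzweil's) of when annual gains cross the 52-weeks-per-year line:

```python
def years_until_escape(gain_weeks: float, acceleration: float) -> int:
    """Years until annual life-expectancy gains reach 52 weeks per year,
    assuming gains compound at `acceleration` per year (a made-up parameter)."""
    years = 0
    while gain_weeks < 52:
        gain_weeks *= 1 + acceleration
        years += 1
    return years

# From today's ~6.5 weeks/year of gain, a hypothetical 30%/year acceleration
# crosses the escape-velocity threshold in 8 years.
print(years_until_escape(6.5, 0.30))  # 8
```

The 30% figure is illustrative only; the point is that any sustained compounding of the annual gain reaches the threshold quickly, which is what puts Kurzweil's 2029-2035 window within arithmetic reach.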

This is AI’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings. ■


Wednesday, June 26, 2024

Off the rails - inequity and unfairness built into capitalism

I have largely withdrawn from posting items relevant to the details of our current political and social malaise, but I want to pass on a few clips from a piece by Bret Stephens that relays points made by Ruchir Sharma, the chairman of Rockefeller International and a Financial Times columnist, in his new book "What Went Wrong With Capitalism." Sharma makes a convincing case that easy money, or ultralow interest rates, is what has gotten us where we are. When the price of borrowing money is zero, everything goes bonkers.
In 2010, as the era of ultralow and even negative interest rates was getting started, the median sale price for a house in the United States hovered around $220,000. By the start of this year, it was more than $420,000.
Inflation is seen in global financial markets:
...In 1980 they were worth a total of $12 trillion — equal to the size of the global economy at the time. After the pandemic...those markets were worth $390 trillion, or around four times the world’s total gross domestic product.
In theory, easy money should have broad benefits for regular people, from employees with 401(k)s to consumers taking out cheap mortgages. In practice, it has destroyed much of what used to make capitalism an engine of middle-class prosperity in favor of the old and very rich.
First, there was inflation in real and financial assets, followed by inflation in consumer prices, followed by higher financing costs as interest rates have risen to fight inflation...for Americans who rely heavily on credit, it’s been devastating...
...The system is broken and rigged, particularly against the poor and the young. “A generation ago, it took the typical young family three years to save up to the down payment on a home,” Sharma observes in the book. “By 2019, thanks to no return on savings, it was taking 19 years.”
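Sharma's three-versus-19-years contrast is, mechanically, compound-interest arithmetic: with no return on savings, a saver chases house-price inflation on linear terms. A minimal sketch with illustrative numbers of my own (not Sharma's):

```python
def years_to_save(target: float, saved_per_year: float,
                  savings_return: float, house_inflation: float) -> int:
    """Whole years of saving until the balance catches the (possibly inflating) target."""
    balance, years = 0.0, 0
    while balance < target:
        balance = balance * (1 + savings_return) + saved_per_year
        target *= 1 + house_inflation
        years += 1
    return years

# Illustrative only: a $60k down payment, saving $5k/year, houses appreciating 3%/year.
print(years_to_save(60_000, 5_000, 0.05, 0.03))  # 13 years with a 5% return on savings
print(years_to_save(60_000, 5_000, 0.00, 0.03))  # 26 years with zero return
```

The exact numbers are invented, but the mechanism matches Sharma's: remove the return on savings and the timeline roughly doubles, because linear saving must overtake an exponentially inflating target.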
The social consequence of this is rage; the political consequence is populism.
“For all their policy differences, both leading U.S. candidates are committed and fearless statists, not friends of competitive capitalism.”
What happens when both major parties are wedded to two versions of the same failing ideas? And what happens when leading figures of both the progressive left and the populist right seek to compound the problem with even easier credit and more runaway spending?
The answer: We are wandering in fog. And the precipice is closer than we think.

Wednesday, June 19, 2024

Managing the human herd

This post is a dyspeptic random walk through thoughts triggered by the front-page photograph in the Wall Street Journal of June 17, 2024, showing masses of pilgrims engaged in a symbolic stoning of the devil in Saudi Arabia under soaring summer heat. Such enormous mobs of people are the ones most easily roused to strong emotions by charismatic speakers.

How can the emotions and behaviors of such enormous clans of humans be regulated in a sane and humane way? Can this be accomplished outside of authoritarian or oligarchical governance? Might such governance establish its control of the will and moods of millions through the unnoticed infiltration of AI into all aspects of their daily lives (cf. Apple's recent AI announcements)? Will the world come to be ruled by a "Book of Elon"?

Or might we be moving into a world of decentralized everything, a bottom-up emergence of consensus governance from the mosh pit of Web3, cryptocurrencies, and stablecoins? The noble sentiments of the Ethereum Foundation notwithstanding, the examples we have to date of 'rules of the commons' are the chaos of Discord, Reddit, and other social media, where the sentiments of idiots and experts jumble together in an impenetrable cacophony.

Who or what is going to emerge to control this mess? How long will the "permaweird" persist?  

 

Monday, June 17, 2024

Empty innovation: What are we even doing?

I came across an interesting commentary by "Tante" on innovation, invention, and progress (or the lack thereof) in the constant churning, rise, and fall of new ideas and products in the absence of questions like "Why are we doing this?" and "Who is profiting?" In spite of the speaker's arrogance and annoying style, I think it is worth viewing.

Friday, June 14, 2024

The future of life.

I want to pass on this Science magazine review of Jamie Metzl's new book "Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World". Metzl is the founder of the One Shared World organization. Check out its website here.
On the night of 4 July 1776, the Irish immigrant and official printer to the Continental Congress John Dunlap entered his Philadelphia print-shop and began to typeset the first printed version of a document that was to become the enduring North Star of the “American experiment.” It comprised an ideological handbook for its utopian aspirations and a codification of purported essential self-evident ground truths that included the equality of all men and the rights to life, liberty, and the pursuit of happiness. By the morning, Dunlap had produced an estimated 200 copies of the American Declaration of Independence, which Abraham Lincoln would later refer to as a “rebuke and a stumblingblock… to tyranny and oppression.”
In his erudite, optimistic, and timely book Superconvergence, the futurist Jamie Metzl laments the lack of any such authoritative reference to inform our exploration of an equally expansive, intriguing, and uncharted territory: humankind’s future. Replete with unprecedented opportunities and existential risks hitherto unimaginable in life’s history, the new world we are entering transcends geographical boundaries, and—as a result of humankind’s global interdependencies—it must, by necessity, exist in a no-man’s-land beyond the mandates of ideologies and nation-states. Its topography is defined not by geological events and evolution by natural selection so much as by the intersection of several exponential human-made technologies. Most notably, these include the generation of machine learning intelligence that can interrogate big data to define generative “rules” of biology and the post-Darwinian engineering of living systems through the systematic rewriting of their genetic code.
Acknowledging the intrinsic mutability of natural life and its ever-changing biochemistry and morphology, Metzl is unable to align himself with UNESCO’s 1997 Universal Declaration on the Human Genome and Human Rights. To argue that the current version of the human genome is sacred is to negate its prior iterations, including the multiple species of human that preceded us but disappeared along the way. The sequences of all Earth’s species are in a simultaneous state of being and becoming, Metzl argues. Life is intrinsically fluid.
Although we are still learning to write complex genomes rapidly, accurately, without sequence limitation, and at low cost, and our ability to author novel genomes remains stymied by our inability to unpick the generative laws of biology, it is just a matter of time before we transform biology into a predictable engineering material, at which point we will be able to recast life into desired forms. But while human-engineered living materials and biologically inspired devices offer potential solutions to the world’s most challenging problems, our rudimentary understanding of complex ecosystems and the darker sides of human nature cast long shadows, signaling the need for caution.
Metzl provides some wonderful examples of how artificial species and bioengineering, often perceived as adversaries of natural life, could help address several of the most important issues of the moment. These challenges include climate change, desertification, deforestation, pollution (including the 79,000-metric-ton patch of garbage the size of Alaska in the Pacific Ocean), the collapse of oceanic ecosystems, habitat loss, global population increase, and the diminution of species biodiversity. By rewriting the genomes of crops and increasing the efficiency of agriculture, we can reduce the need to convert additional wild habitats into farmland, he writes. Additionally, the use of bioengineering to make sustainable biofuels, biocomputing, bio foodstuffs, biodegradable plastics, and DNA information–storing materials will help reduce global warming.
Meanwhile, artificial intelligence (AI) can free up human time. By 2022, DeepMind’s AlphaFold program had predicted the structures of 214 million proteins—a feat that would have taken as long as 642 million years to achieve using conventional methods. As Metzl comments, this places “millions of years back into the pot of human innovation time.” The ability to hack human biology using AI will also have a tremendous impact on the human health span and life span, not least through AI-designed drugs, he predicts.
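The 642-million-year figure is worth a sanity check (my division, assuming the quoted numbers): it amounts to roughly three researcher-years per protein structure by conventional methods.

```python
structures = 214_000_000          # protein structures predicted by AlphaFold by 2022
conventional_years = 642_000_000  # quoted time to match that with lab methods

years_per_structure = conventional_years / structures
print(f"{years_per_structure:.1f} researcher-years per structure")  # 3.0
```

That per-structure cost is in line with the pre-AlphaFold rule of thumb that solving one structure experimentally could take years of work, which is what makes Metzl's "millions of years back into the pot" framing arithmetically fair.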
Metzl is right when he concludes that we have reached a “critical moment in human history” and that “reengineered biology will play a central role in the future of our species.” We will need to define a new North Star—a manifesto for life—to assist with its navigation. Metzl argues for the establishment of a new international body with depoliticized autonomy to focus on establishing common responses to shared global existential challenges. He suggests that this process could be kick-started by convening a summit aimed at establishing aligned governance guidelines for the revolutionary new technologies we are creating.

Monday, May 13, 2024

How the US is destroying young people's future.

When I wake up in the morning, I frequently remind myself to be grateful for the luck of having been born in 1942 and being able to ride the crest of a number of fortunate external circumstances that made my generation vastly better off than those who followed. I was in high school in the late 50s when Sputnik happened, fueling a huge increase in federal research funding that powered my laboratory research career studying how our vision works. Both my parents and I worked for state governments or universities that offered generous retirement plans and pensions, and the ability to set aside tax-deferred money to invest for later years.

This has led to the situation succinctly described in the following TED video by Scott Galloway, who teaches at NYU; a transcript of the talk is here. (It was sent to me by my 49-year-old son, a senior e-commerce digital solutions architect, whose expectations about the future are vastly more modest than mine were at his age.) One of the most striking graphics in the video shows that the share of household wealth held by those 70 and older has increased by 11% since 1989, while the share held by those under 40 has decreased by 5%.

 

Monday, May 06, 2024

Are we the cows of the future?

One of the questions posed by Yuval Harari in his writing on our possible futures is "What are we to do with all these humans who are, except for a small technocratic elite, no longer required as the means of production?" Esther Leslie, a professor of political aesthetics at Birkbeck College, University of London, has written an essay on this issue, pointing out that our potential futures in the pastures of digital dictatorship — crowded conditions, mass surveillance, virtual reality — are already here. You should read her essay; I pass on just a few striking clips of text:

...Cows’ bodies have historically served as test subjects — laboratories of future bio-intervention and all sorts of reproductive technologies. Today cows crowd together in megafarms, overseen by digital systems, including facial- and hide-recognition systems. These new factories are air-conditioned sheds where digital machinery monitors and logs the herd’s every move, emission and production. Every mouthful of milk can be traced to its source.
And it goes beyond monitoring. In 2019 on the RusMoloko research farm near Moscow, virtual reality headsets were strapped onto cattle. The cows were led, through the digital animation that played before their eyes, to imagine they were wandering in bright summer fields, not bleak wintry ones. The innovation, which was apparently successful, is designed to ward off stress: The calmer the cow, the higher the milk yield.
A cow sporting VR goggles is comedic as much as it is tragic. There’s horror, too, in that it may foretell our own alienated futures. After all, how different is our experience? We submit to emotion trackers. We log into biofeedback machines. We sign up for tracking and tracing. We let advertisers’ eyes watch us constantly and mappers store our coordinates.
Could we, like cows, be played by the machinery, our emotions swayed under ever-sunny skies, without us even knowing that we are inside the matrix? Will the rejected, unemployed and redundant be deluded into thinking that the world is beautiful, a land of milk and honey, as they interact minimally in stripped-back care homes? We may soon graze in the new pastures of digital dictatorship, frolicking while bound.
Leslie then describes the ideas of German philosopher and social critic Theodor Adorno:
Against the insistence that nature should not be ravished by technology, he argues that perhaps technology could enable nature to get what “it wants” on this sad earth. And we are included in that “it.”...Nature, in truth, is not just something external on which we work, but also within us. We too are nature.
For someone associated with the abstruseness of avant-garde music and critical theory, Adorno was surprisingly sentimental when it came to animals — for which he felt a powerful affinity. It is with them that he finds something worthy of the name Utopia. He imagines a properly human existence of doing nothing, like a beast, resting, cloud gazing, mindlessly and placidly chewing cud.
To dream, as so many Utopians do, of boundless production of goods, of busy activity in the ideal society reflects, Adorno claimed, an ingrained mentality of production as an end in itself. To detach from our historical form adapted solely to production, to work against work itself, to do nothing in a true society in which we embrace nature and ourselves as natural might deliver us to freedom.
Rejecting the notion of nature as something that would protect us, give us solace, reveals us to be inextricably within and of nature. From there, we might begin to save ourselves — along with everything else.
(The above is a repost of MindBlog's 1/7/21 post)

Wednesday, April 10, 2024

The world of decentralized everything.

Following up on my last post on the Summer of Protocols sessions, I want to pass on (again, to my future self, and possibly a few techie MindBlog readers) a few links to the world of decentralized grass-roots everything (commerce, communications, finance, etc.), which tries to bypass the traditional powers and gatekeepers in these areas by constructing distributed systems usually based on blockchains and cryptocurrencies. I am trying to learn more about this, taking things in small steps to avoid overload headaches... (One keeps stumbling on areas of worldwide engagement of thousands of very intelligent minds.)

Here is a worthwhile read of the general idea from the Ethereum Foundation.

I've described getting into one decentralized context by setting up a Helium Mobile network hotspot, as well as my own private Helium Mobile cellular account. To follow this up, I pass on a link in an email from Helium pointing to its participation in Consensus 2024, May 29-31 in Austin, TX (where I now live), sponsored by CoinDesk. A look at the agenda for that meeting gives you an impression of the multiple engagements among government regulatory agencies, business, and the crypto world that are occurring.

Wednesday, March 20, 2024

Fundamentally changing the nature of war.

I generally try to keep a distance from 'the real world' and apocalyptic visions of what AI might do, but I decided to pass on some clips from this technology essay in The Wall Street Journal that makes some very plausible predictions about the future of armed conflicts between political entities:

The future of warfare won’t be decided by weapons systems but by systems of weapons, and those systems will cost less. Many of them already exist, whether they’re the Shahed drones attacking shipping in the Gulf of Aden or the Switchblade drones destroying Russian tanks in the Donbas or smart seaborne mines around Taiwan. What doesn’t yet exist are the AI-directed systems that will allow a nation to take unmanned warfare to scale. But they’re coming.

At its core, AI is a technology based on pattern recognition. In military theory, the interplay between pattern recognition and decision-making is known as the OODA loop— observe, orient, decide, act. The OODA loop theory, developed in the 1950s by Air Force fighter pilot John Boyd, contends that the side in a conflict that can move through its OODA loop fastest will possess a decisive battlefield advantage.

For example, of the more than 150 drone attacks on U.S. forces since the Oct. 7 attacks, in all but one case the OODA loop used by our forces was sufficient to subvert the attack. Our warships and bases were able to observe the incoming drones, orient against the threat, decide to launch countermeasures and then act. Deployed in AI-directed swarms, however, the same drones could overwhelm any human-directed OODA loop. It’s impossible to launch thousands of autonomous drones piloted by individuals, but the computational capacity of AI makes such swarms a possibility.
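The saturation logic here can be reduced to a toy calculation (my own sketch, not the essay's): a defender that must complete one OODA cycle per incoming threat is fine below its cycle budget and helpless above it.

```python
def leakers(incoming: int, cycles_per_minute: int, raid_minutes: int) -> int:
    """Threats that get through when each one costs the defender one OODA cycle."""
    capacity = cycles_per_minute * raid_minutes
    return max(0, incoming - capacity)

# A handful of drones sits well inside a human-speed loop budget...
print(leakers(incoming=5, cycles_per_minute=6, raid_minutes=10))      # 0
# ...but an AI-directed swarm of thousands saturates the same defenses.
print(leakers(incoming=2_000, cycles_per_minute=6, raid_minutes=10))  # 1940
```

All the parameters are invented, but the asymmetry is the essay's point: the attacker scales the swarm linearly, while the defender's human-directed loop budget is roughly fixed.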

This will transform warfare. The race won’t be for the best platforms but for the best AI directing those platforms. It’s a war of OODA loops, swarm versus swarm. The winning side will be the one that’s developed the AI-based decision-making that can outpace their adversary. Warfare is headed toward a brain-on-brain conflict.

The Department of Defense is already researching a “brain-computer interface,” which is a direct communications pathway between the brain and an AI. A recent study by the RAND Corporation examining how such an interface could “support human-machine decision-making” raised the myriad ethical concerns that exist when humans become the weakest link in the wartime decision-making chain. To avoid a nightmare future with battlefields populated by fully autonomous killer robots, the U.S. has insisted that a human decision maker must always remain in the loop before any AI-based system might conduct a lethal strike.

But will our adversaries show similar restraint? Or would they be willing to remove the human to gain an edge on the battlefield? The first battles in this new age of warfare are only now being fought. It’s easy to imagine a future, however, where navies will cease to operate as fleets and will become schools of unmanned surface and submersible vessels, where air forces will stand down their squadrons and stand up their swarms, and where a conquering army will appear less like Alexander’s soldiers and more like a robotic infestation.

Much like the nuclear arms race of the last century, the AI arms race will define this current one. Whoever wins will possess a profound military advantage. Make no mistake, if placed in authoritarian hands, AI dominance will become a tool of conquest, just as Alexander expanded his empire with the new weapons and tactics of his age. The ancient historian Plutarch reminds us how that campaign ended: “When Alexander saw the breadth of his domain, he wept, for there were no more worlds to conquer.”

Elliot Ackerman and James Stavridis are the authors of “2054,” a novel that speculates about the role of AI in future conflicts, just published by Penguin Press. Ackerman, a Marine veteran, is the author of numerous books and a senior fellow at Yale’s Jackson School of Global Affairs. Admiral Stavridis, U.S. Navy (ret.), was the 16th Supreme Allied Commander of NATO and is a partner at the Carlyle Group.

 


Monday, January 08, 2024

The Importance of Not Being Earnest

I pass on some of the paragraphs from Venkatesh Rao's latest piece to archive them for myself here on MindBlog, and to make them available to other readers:

For my purposes, I will define earnestness as being helplessly locked into a single way of looking at what you’re doing, unaware of other ways.

I suspect there are only a few known and culturally familiar modes of being non-earnest…I think they are humor, irony, and surrealism. I’d guess humor is at least as old as civilization and possibly as old as life. Irony proper seems like an outgrowth of early modern conditions. Surrealism is the newest and youngest mode, barely a century old. I think this potted history is fun, but I won’t insist upon it. Maybe there are more modes, and maybe they appeared in a different sequence, or were all always-already present.

Here’s the core of my argument: the more complex the circumstances, the more dangerous it is to inhabit them from a single perspective; i.e., earnestly. The only really good reason to do so is when dealing with small children or deeply traumatized adults, who both need some earnestness in their environment to feel safe.

The importance of non-earnestness is evident even in the “simple” task of chopping vegetables. If you’re doing that for more than 15 minutes, you’ll likely get bored, and start to get sloppy and careless. Creative multi-modal engagement with chopping vegetables — seeing shapes perhaps, or noting colors and textures with an artist’s eye — keeps you mindfully absorbed for longer, more robustly.

In your brain there are two basic modes — mind wandering, sustained by the default-mode network, and focus, sustained by the task-positive network — and my assertion is that they should work together like a clock escapement, unfolding as little micro-fugues of fancy that depart from and return to a base literal mode, and trace out a kind of strange-attractor orbit around the nominal behavior. Something like this is visible at even more basic levels: A healthy heart exhibits high HRV (heart-rate variability). Fitness trackers use HRV as the primary indicator of cardiovascular health. Low variability is a mark of poor health or disease.
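The HRV analogy is concrete: fitness trackers commonly report RMSSD, the root-mean-square of successive differences between heartbeats. A minimal sketch with illustrative beat intervals of my own:

```python
import math

def rmssd(rr_ms: list[float]) -> float:
    """Root-mean-square of successive differences between RR intervals (ms),
    a standard time-domain heart-rate-variability measure."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

healthy = [850, 910, 870, 930, 860, 920]  # variable beat-to-beat timing
rigid = [880, 882, 879, 881, 880, 881]    # metronomic, low variability
print(round(rmssd(healthy)))  # 59 -> high HRV
print(round(rmssd(rigid)))    # 2  -> low HRV
```

The measure rewards exactly what Rao describes: constant small departures from, and returns to, a baseline rhythm score high, while lock-step regularity scores near zero.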

Now apply that same principle to complex, large-scale systems and problems. Can you afford to be on-the-nose earnest in thinking about them? Are humor, irony, and surrealism optional extras?

The more complex the circumstances, the more dangerous it is to act in ways that are entailed by only a single perspective. Such action is fragile and degenerate. Robust action contains multitudes. It contains obliquities that harbor strategic depth. It contains tempo variations that encode unsuspected depths of adjacent informational richness.

Action must be richer than thought, because phenomenology is always richer than any single theorization. Earnestness — action confined to the imagination of one theory of itself — is behavioral impoverishment. Non-earnestness is proof of richness. Proof of life.

There is more than one way of looking at complex systems, and action within a complex system must make sense in more than one way. There must be more than one categorical scheme through which an unfactored reality can be viewed and justified.

I think we’re currently caught between the retreat of irony and the advent of surrealism.

Unless you’ve been living under a rock, you’ve probably noticed that the last decade has been marked by a broad and intense backlash against irony, the dominant mode of non-earnestness between 1989-2010 or so (I think humor dominated the 70s and 80s). Now, after a transient decade of various sorts of unstable forays into deadening collective earnestness, it feels like we’re shifting en masse to a dominantly surreal mode.

I’ve decided to approach 2024 with a surreal orientation. I don’t quite know what the hell that means yet, but I plan to fuck around and find out.

Humor would be nice to have in what’s already shaping up to be a joyless year, and irony will provide, as it always does, some solace in the darkest, most joyless depths of it. But the workhorse modality is going to be surrealism. Beat-by-beat, breath-by-breath, the creativity of our responses to the year is going to be shaped by our ability to repeatedly escape into the adjacent impossible, and from that vantage point spot the germs of new possibilities. You cannot jailbreak the future from tyranny of the past without stepping outside of both.

It is hard to escape the thought that we are going to be unsurprisingly unlucky as a planet in 2024, with few and uncertain bright prospects to alleviate the general gloom. We are going to end up with a cognitively compromised geriatric as US President by December. We are going to let two bloody wars grind on. We are going to see weaponized AI compound myriad miseries.

If there is serendipity —surprising luck — to be found in 2024, it will be found and nurtured at the micro level. By people who understand what it means to chop vegetables non-earnestly, and escape the tyranny of the real with every breath and stroke. By people who are not too scared of life to stubbornly resist the temptations of humor, irony, and surrealism in service of the idiot gods of authenticity and earnestness.



Wednesday, December 20, 2023

In Search of Hardness - Protocol studies, the next crypto cycle, and the next age of the world

I’m using this posting to save for myself some clips of text from Venkatesh Rao’s most recent piece, to continue mulling over where I place it on the trivial-versus-sublime spectrum (some of his jargon you will only understand if you have followed the previous installments on Rao I've put in MindBlog; note the link at the end to the Summer of Protocols)... Here are the clips:
Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.
But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.
So what is hardness? Hardness is to protocols as information is to computing, or intelligence to AI. I’ll quote Josh Stark’s original take (specific to blockchains, but applicable to all kinds of protocols) here:
Although humans have been creating and using information technologies like writing, printing, and telegrams for hundreds or thousands of years, it was only in the last century that we articulated clearly what all of these things have in common, and realized that they can be understood as a category.
In the decades since, the idea of information has spread into mass culture. Today, it is intuitive to most people that speech, images, films, writing, DNA, and software are all just different kinds of information.
I believe that a similar situation exists today with respect to blockchains. A new technology has forced us to reconsider things we thought we understood. But instead of books, telephones, and voices, this time it is money, law, and government. We can sense the outline of a category that unites these seemingly disparate things.
Perhaps there is an analog to information hidden in the foundations of our civilization. An abstract property that once revealed, might help remake our understanding of the world, and help us answer plainly what problem blockchains are supposed to solve.
Call this property hardness.
Human civilization depends in part on our ability to make the future more certain in specific ways.
Fixed, hard points across time that let us make the world more predictable.
We need these hard points because it is impossible to coordinate at scale without them. Money doesn’t work unless there is a degree of certainty it will still be valuable in the future. Trade is very risky if there isn’t confidence that parties will follow their commitments.
The bonds of social and family ties can only reach so far through space and time, and so we have found other means of creating certainty and stability in relationships stretching far across the social graph. Throughout history we have found ways to make the future more certain, creating constants that are stable enough to rely upon.
It’s all hardness engineering, and the solution is always protocols that put the right amounts of hardness in the right places at the right times. And it’s almost always enlightening and useful to explicitly think of problems that way. … My favorite protocol in recent weeks has been the one implemented in ATMs that forces you to take your card back before dispensing cash. A simple re-ordering of actions to create a spot of hardness where there was previously an annoying softness (remembering to take your card).
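The ATM example shows how little "hardness engineering" can be: the entire fix is a fixed ordering of two steps. A sketch in code (hypothetical names, my own, not from Rao's piece):

```python
class Atm:
    """Toy ATM that just records the order of its actions."""
    def __init__(self):
        self.log = []
    def return_card(self):
        self.log.append("card")
    def dispense_cash(self):
        self.log.append("cash")

def withdraw_soft(atm: Atm) -> None:
    """Old ordering: cash first, so a distracted customer can forget the card."""
    atm.dispense_cash()
    atm.return_card()

def withdraw_hard(atm: Atm) -> None:
    """Re-ordered protocol: taking the card back is a precondition for the cash."""
    atm.return_card()
    atm.dispense_cash()

atm = Atm()
withdraw_hard(atm)
print(atm.log)  # ['card', 'cash'] -- the forget-your-card failure mode is designed out
```

The hardness lives in the sequencing, not in any component: neither action changed, but the protocol now makes the common human failure impossible rather than merely discouraged.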
I’ve been nursing this thought that AI and crypto are like the First and Second Foundations of our technological future, together building a pathway out of the desolation of the collapsing industrial age. I just came up with another metaphor for the relationship that I like: AI cuts, crypto chooses. It’s the balance-of-power protocol that will govern the planet in the coming decades.
In practically any domain, I find, thinking in terms of protocols and explicitly searching for hardness to work with is an immensely generative thing to do. It helps get immediate problems unstuck, and it helps you see creative and expansive options.
The world of protocols is a strangely hidden one. Even though there are hundreds of millions of people around the world who explicitly organize their work and thinking around protocols of various sorts, the language of protocols is not a familiar one. It is easier to look through protocols than at them. It is easier to act through protocols than on them.
The language of protocols is an esoteric one for navigating a hidden (and at the risk of sounding cliched, liminal) world that prefers to stay hidden, in part because it deals in the civilizational techno-unconscious. The invisibility of protocols is a core feature. There’s a reason A. N. Whitehead’s famous line, “Civilization advances by extending the number of important operations which we can perform without thinking of them” became the de facto motto of the Summer of Protocols. Thinking about protocols, you get a sense of a landscape of invisible, inviolable hardness all around us, that shapes our behaviors without our being quite aware of it. A landscape you can learn to see, shape, and violate if you learn the language.

Wednesday, December 13, 2023

AI is a camera, not an engine.

Yet another brilliant essay from Venkatesh Rao on how modern AI puts us into an age of exploration of computational reality. I got halfway through my usual gig of extracting clips from the article to archive its main points in a post for later lookup by myself or others. Halfway through this chore, I realized the material was so deep and dense that my clips contained more than half of the original text. So I halted the editing down and suggest that readers have a go at the entire original article. I also presented the entire text to ChatGPT-4, preceded by the following prompt:
Intelligence as a Measure of Data. Could you condense, rephrase, extract the main messages of the following text to be more comprehensible to college educated non-specialists? Here is the text:

Here is ChatGPT-4's take on the article, which does a vastly better job than I could have without spending many more hours on the material:

The text you've provided is a deep and philosophical exploration of modern artificial intelligence (AI), contrasting it with earlier forms of AI and other technological advancements. Here's a more accessible summary:

  1. Modern AI as a Discovery, Not an Invention: Unlike earlier AI (termed "Good Old-Fashioned AI"), modern AI is viewed not as a machine or an invention, but as a significant discovery. This new era of AI is compared to the James Webb Space Telescope, an instrument of exploration rather than a tool of production. Modern AI is seen as a way to delve deeply into vast amounts of data.

  2. AI as a Data Camera: The author suggests thinking of AI as a "data camera" that reveals hidden layers and insights within large datasets, much like telescopes reveal distant galaxies. This analogy emphasizes AI's role in uncovering hidden patterns and information in data, rather than creating it.

  3. The Nature of Intelligence in AI: The text challenges traditional views of intelligence, suggesting that what we consider intelligent behavior may arise from simple processes and large datasets. It proposes that AI is enabling us to understand the nature of intelligence itself, which is more about data interaction than complex processing.

  4. AI's Role in Understanding Computational Reality: Modern AI fits within a series of discoveries about computational reality, which lies between objective (physical) and subjective (mental) realities. This journey of discovery includes various fields like information theory, computability theory, and now, AI.

  5. The Broad Impact of AI: The author explores the potential of AI to transform our understanding of various domains, from language to natural phenomena. This includes using AI to interpret complex datasets in novel ways, such as understanding ocean currents or predicting solar flares.

  6. Reconceptualizing Intelligence: The idea is proposed that intelligence might be a property of data processed in certain ways, rather than an attribute of the processing technology itself. This view opens up new ways of understanding and measuring intelligence based on data characteristics.

  7. AI and the Universe's Memory: The author posits that AI reveals a natural, universal memory phenomenon inherent in all matter, suggesting a deep connection between information, space, time, and matter. This perspective considers AI as a tool that uncovers the universe's way of remembering and expressing itself.

  8. The Human Perspective: Finally, the text touches on the human aspect in this technological evolution, pondering how our understanding of ourselves and our place in the universe might change in the face of these AI-driven discoveries.

This summary simplifies the original text's complex ideas, focusing on the main themes of AI as a tool of discovery, the redefinition of intelligence, and the broader implications for understanding reality and our place within it.