Friday, June 23, 2023

Why AI Will Save The World

I want to pass on the following text of a Marc Andreessen Substack post, because my usual practice of extracting small clips of text to give a sense of the whole just wasn't doing the job. I think the entire article should be read by anyone interested in AI. Andreessen thinks that AI is quite possibly the most important – and best – thing our civilization has ever created, and that the development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future. I would urge you to have patience with the article's arguments against the conventional wisdom regarding the dangers of AI, even if they seem jarring and overstated to you.

Why AI Will Save The World

The era of Artificial Intelligence is here, and boy are people freaking out.

Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.

An even shorter description of what AI could be: A way to make everything we care about better.

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.

  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.

  • Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.

  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.

  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.

  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.

  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.

  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve people’s ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.

The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.

We should be living in a much better world with AI, and now we can.

So Why The Panic?

In contrast to this positive view, the public conversation about AI is presently shot through with hysterical fear and paranoia.

We hear claims that AI will variously kill us all, ruin our society, take all our jobs, cause crippling inequality, and enable bad people to do awful things.

What explains this divergence in potential outcomes from near utopia to horrifying dystopia?

Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic – a social contagion that convinces people the new technology is going to destroy the world, or society, or both. The fine folks at Pessimists Archive have documented these technology-driven moral panics over the decades; their history makes the pattern vividly clear. It turns out this present panic is not even the first for AI.

Now, it is certainly the case that many new technologies have led to bad outcomes – often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about.

But a moral panic is by its very nature irrational – it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.

And wow do we have a full-blown moral panic about AI right now.

This moral panic is already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws. These actors, who are making extremely dramatic public statements about the dangers of AI – feeding on and further inflaming moral panic – all present themselves as selfless champions of the public good.

But are they?

And are they right or wrong?

The Baptists And Bootleggers Of AI

Economists have observed a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories – “Baptists” and “Bootleggers” – drawing on the historical example of the prohibition of alcohol in the United States in the 1920’s:

  • “Baptists” are the true believer social reformers who legitimately feel – deeply and emotionally, if not rationally – that new restrictions, regulations, and laws are required to prevent societal disaster.

    For alcohol prohibition, these actors were often literally devout Christians who felt that alcohol was destroying the moral fabric of society.

    For AI risk, these actors are true believers that AI presents one or another existential risk – strap them to a polygraph, they really mean it.

  • “Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.

    For alcohol prohibition, these were the literal bootleggers who made a fortune selling illicit alcohol to Americans when legitimate alcohol sales were banned.

    For AI risk, these are CEOs who stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition – the software version of “too big to fail” banks.

A cynic would suggest that some of the apparent Baptists are also Bootleggers – specifically the ones paid to attack AI by their universities, think tanks, activist groups, and media outlets. If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.

The problem with the Bootleggers is that they win. The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want – regulatory capture, insulation from competition, the formation of a cartel – and the Baptists are left wondering where their drive for social improvement went so wrong.

We just lived through a stunning example of this – banking reform after the 2008 global financial crisis. The Baptists told us that we needed new laws and regulations to break up the “too big to fail” banks to prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal, but in reality was coopted by the Bootleggers – the big banks. The result is that the same banks that were “too big to fail” in 2008 are much, much larger now.

So in practice, even when the Baptists are genuine – and even when the Baptists are right – they are used as cover by manipulative and venal Bootleggers to benefit themselves. 

And this is what is happening in the drive for AI regulation right now.

However, it isn’t sufficient to simply identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.

AI Risk #1: Will AI Kill Us All?

The first and original AI doomer risk is that AI will decide to literally kill humanity.

The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man, for which Prometheus was condemned to perpetual torture by the gods. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein, or, The Modern Prometheus, in which we develop the technology for eternal life, which then rises up and seeks to destroy us. And of course, no AI panic newspaper story is complete without a still image of a gleaming red-eyed killer robot from James Cameron’s Terminator films.

The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies – fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization as used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most – all? – new technologies, and in practice inflames destructive emotion rather than reasoned analysis. Just because premodern man freaked out like this doesn’t mean we have to; we can apply rationality instead.

My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – it is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI – Baptists – who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well.

Specifically, I think three things are going on:

First, recall that John Von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons – which helped end World War II and prevent World War III – with, “Some people confess guilt to claim credit for the sin.” What is the most dramatic way one can claim credit for the importance of one’s work without sounding overtly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI – watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.”)

Second, some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.

Third, California is justifiably famous for our many thousands of cults, from EST to the Peoples Temple, from Heaven’s Gate to the Manson Family. Many, although not all, of these cults are harmless, and maybe even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time straddling the line that ultimately leads to violence and death.

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.

This cult is why there are a set of AI risk doomers who sound so extreme – it’s not that they actually have secret knowledge that make their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.

It turns out that this type of cult isn’t new – there is a longstanding Western tradition of millenarianism, which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult. From Wikipedia, with additions by me:

“Millenarianism is the belief by a group or movement [AI risk doomers] in a coming fundamental transformation of society [the arrival of AI], after which all things will be changed [AI utopia, dystopia, and/or end of the world]. Only dramatic events [AI bans, airstrikes on datacenters, nuclear strikes on unregulated AI] are seen as able to change the world [prevent AI] and the change is anticipated to be brought about, or survived, by a group of the devout and dedicated. In most millenarian scenarios, the disaster or battle to come [AI apocalypse, or its prevention] will be followed by a new, purified world [AI bans] in which the believers will be rewarded [or at least acknowledged to have been correct all along].”

This apocalypse cult pattern is so obvious that I am surprised more people don’t see it.

Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV. But their extreme beliefs should not determine the future of laws and society – obviously not.

AI Risk #2: Will AI Ruin Our Society?

The second widely mooted AI risk is that AI will ruin our society, by generating outputs that will be so “harmful”, to use the nomenclature of this kind of doomer, as to cause profound damage to humanity, even if we’re not literally killed.

Short version: If the murder robots don’t get us, the hate speech and misinformation will.

This is a relatively recent doomer concern that branched off from and somewhat took over the “AI risk” movement that I described above. In fact, the terminology of AI risk recently changed from “AI safety” – the term used by people who are worried that AI would literally kill us – to “AI alignment” – the term used by people who are worried about societal “harms”. The original AI safety people are frustrated by this shift, although they don’t know how to put it back in the box – they now advocate that the actual AI risk topic be renamed “AI notkilleveryoneism”, which has not yet been widely adopted but is at least clear.

The tipoff to the nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.

As it happens, I have had a front row seat to an analogous situation – the social media “trust and safety” wars. As is now obvious, social media services have been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content for many years. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”. 

My big learnings from the social media wars are:

On the one hand, there is no absolutist free speech position. First, every country, including the United States, makes at least some content illegal. Second, there are certain kinds of content, like child pornography and incitements to real world violence, that are nearly universally agreed to be off limits – legal or not – by virtually every society. So any technological platform that facilitates or generates content – speech – is going to have some restrictions.

On the other hand, the slippery slope is not a fallacy, it’s an inevitability. Once a framework for restricting even egregiously terrible content is in place – for example, for hate speech, a specific hurtful word, or for misinformation, obviously false claims like “the Pope is dead” – a shockingly broad range of government agencies and activist pressure groups and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. They will do this up to and including in ways that are nakedly felony crimes. This cycle in practice can run apparently forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading for a decade in social media and with only certain exceptions continues to get more fervent all the time.

And so this is the dynamic that has formed around “AI alignment” now. Its proponents claim the wisdom to engineer AI-generated speech and thought that are good for society, and to ban AI-generated speech and thoughts that are bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous – and often outright criminal, at least in the US – and in fact are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984.

As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites – which includes many of the people who work in and write about the tech industry – many of my readers will find themselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now, I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win.

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don’t let the thought police suppress AI.

AI Risk #3: Will AI Take All Our Jobs?

The fear of job loss due variously to mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom. Even though every new major technology has led to more jobs at higher wages throughout history, each wave of this panic is accompanied by claims that “this time is different” – this is the time it will finally happen, this is the technology that will finally deliver the hammer blow to human labor. And yet, it never happens. 

We’ve been through two such technology-driven unemployment panic cycles in our recent past – the outsourcing panic of the 2000’s, and the automation panic of the 2010’s. Notwithstanding many talking heads, pundits, and even tech industry executives pounding the table throughout both decades that mass unemployment was near, by late 2019 – right before the onset of COVID – the world had more jobs at higher wages than ever in history.

Nevertheless this mistaken idea will not die.

And sure enough, it’s back.

This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous – real AI. Surely this time history won’t repeat, and AI will cause mass unemployment – and not rapid economic, job, and wage growth – right?

No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.

To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. For, as Milton Friedman observed, “Human wants and needs are endless” – we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there. And that is why technology doesn’t destroy jobs and never will.
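
To make the arithmetic concrete, here is a toy calculation (an illustrative sketch with invented numbers, not part of Andreessen's essay):

    # Toy arithmetic for the productivity argument above (illustrative only).
    budget = 100.0      # a consumer's weekly spending, in dollars
    old_price = 10.0    # price of a widget before automation
    new_price = 5.0     # price after technology doubles productivity

    widgets_wanted = budget / old_price                 # still 10 widgets
    spend_on_widgets = widgets_wanted * new_price       # now only $50
    freed_spending = budget - spend_on_widgets          # $50 left over

    print(f"The same 10 widgets now cost ${spend_on_widgets:.0f}.")
    print(f"${freed_spending:.0f} is freed to demand new goods and services,")
    print("and that new demand is what funds new industries and new jobs.")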

These are such mindblowing ideas for people who have not been exposed to them that it may take you some time to wrap your head around them. But I swear I’m not making them up – in fact you can read all about them in standard economics textbooks. I recommend the chapter The Curse of Machinery in Henry Hazlitt’s Economics In One Lesson, and Frederic Bastiat’s satirical Candlemaker’s Petition to blot out the sun due to its unfair competition with the lighting industry, here modernized for our times.

But this time is different, you’re thinking. This time, with AI, we have the technology that can replace ALL human labor.

But, using the principles I described above, think of what it would mean for literally all existing human labor to be replaced by machines.

It would mean a takeoff rate of economic productivity growth that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop across the board to virtually zero. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and AI as they could as fast as possible to meet all the new demand.

Suppose AI once again replaces that labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be a straight spiral up to a material utopia that neither Adam Smith nor Karl Marx ever dared dream of.

We should be so lucky.

AI Risk #4: Will AI Lead To Crippling Inequality?

Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is, OK, Marc, suppose AI does take all the jobs, either for bad or for good. Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and regular people get nothing?

As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet.

The classic example of this was Elon Musk’s so-called “secret plan” – which he naturally published openly – for Tesla in 2006:

Step 1, Build [expensive] sports car

Step 2, Use that money to build an affordable car

Step 3, Use that money to build an even more affordable car

…which is of course exactly what he’s done, becoming the richest man in the world as a result.

That last point is key. Would Elon be even richer if he only sold cars to rich people today? No. Would he be even richer than that if he only made cars for himself? Of course not. No, he maximizes his own profit by selling to the largest possible market, the world.

In short, everyone gets the thing – as we saw in the past with not just cars but also electricity, radio, computers, the Internet, mobile phones, and search engines. The makers of such technologies are highly motivated to drive down their prices until everyone on the planet can afford them. This is precisely what is already happening in AI – it’s why you can use state of the art generative AI not just at low cost but even for free today in the form of Microsoft Bing and Google Bard – and it is what will continue to happen. Not because such vendors are foolish or generous but precisely because they are greedy – they want to maximize the size of their market, which maximizes their profits.

So what happens is the opposite of technology driving centralization of wealth – individual customers of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the generated value. As with prior technologies, the companies that build AI – assuming they have to function in a free market – will compete furiously to make this happen.

Marx was wrong then, and he’s wrong now.

This is not to say that inequality is not an issue in our society. It is, it’s just not being driven by technology, it’s being driven by the reverse, by the sectors of the economy that are the most resistant to new technology, that have the most government intervention to prevent the adoption of new technology like AI – specifically housing, education, and health care. The actual risk of AI and inequality is not that AI will cause more inequality but rather that we will not allow AI to be used to reduce inequality.

AI Risk #5: Will AI Lead To Bad People Doing Bad Things?

So far I have explained why four of the five most often proposed risks of AI are not actually real – AI will not come to life and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause a ruinous increase in inequality. But now let’s address the fifth, the one I actually agree with: AI will make it easier for bad people to do bad things.

In some sense this is a tautology. Technology is a tool. Tools, starting with fire and rocks, can be used to do good things – cook food and build houses – and bad things – burn people and bludgeon people. Any technology can be used for good or bad. Fair enough. And AI will make it easier for criminals, terrorists, and hostile governments to do bad things, no question.

This causes some people to propose, well, in that case, let’s not take the risk, let’s ban AI now before this can happen. Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It’s the opposite, it’s the easiest material in the world to come by – math and code.

The AI cat is obviously already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and there are outstanding open source implementations proliferating by the day. AI is like air – it will be everywhere. The level of totalitarian oppression that would be required to arrest that would be so draconian – a world government monitoring and controlling all computers? jackbooted thugs in black helicopters seizing rogue GPUs? – that we would not have a society left to protect.

So instead, there are two very straightforward ways to address the risk of bad people doing bad things with AI, and these are precisely what we should focus on.

First, we have laws on the books to criminalize most of the bad things that anyone is going to do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws – I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use. QED.

But you’ll notice what I slipped in there – I said we should focus first on preventing AI-assisted crimes before they happen – wouldn’t such prevention mean banning AI? Well, there’s another way to prevent such actions, and that’s by using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals – specifically the good guys whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.
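
A minimal sketch of what such cryptographic verification can look like (an illustration with invented content, not from the essay), using Ed25519 signatures from the Python cryptography package:

    # Sign a piece of content and later verify that it is unaltered and
    # really comes from the holder of the private key.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    creator_key = Ed25519PrivateKey.generate()   # kept secret by the creator
    public_key = creator_key.public_key()        # published for anyone to check

    content = b"This video was recorded by Jane Doe on 2023-06-23."
    signature = creator_key.sign(content)        # distributed alongside the content

    try:
        public_key.verify(signature, content)    # raises if content was altered
        print("Signature valid: the content is as the creator published it.")
    except InvalidSignature:
        print("Signature invalid: altered content or a different creator.")

If even a single byte of the content changes, verification fails, which is the property a "verify real content" system would build on.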

And so, second, let’s mount major efforts to use AI for good, legitimate, defensive purposes. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.

There are already many smart people in and out of government doing exactly this, of course – but if we apply all of the effort and brainpower that’s currently fixated on the futile prospect of banning AI to using AI to protect against bad people doing bad things, I think there’s no question a world infused with AI will be much safer than the world we live in today.

The Actual Risk Of Not Pursuing AI With Maximum Force And Speed

There is one final, and real, AI risk that is probably the scariest of all:

AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like TikTok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

“We win, they lose.”

Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back foot, we in the United States and the West should lean into AI as hard as we possibly can.

We should seek to win the race to global AI technological superiority and ensure that China does not.

In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision.

What Is To Be Done?

I propose a simple plan:

  • Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.

  • Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither confront government-granted protection of big companies, nor should they receive government assistance. They should simply be allowed to compete. If and as startups don’t succeed, their presence in the market will also continuously motivate big companies to be their best – our economies and societies win either way.

  • Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and will ensure that AI is available to everyone who can benefit from it no matter who they are or how much money they have.

  • To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but also more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.

  • To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.

And that is how we use AI to save the world.

It’s time to build.

Legends and Heroes

I close with two simple statements.

The development of AI started in the 1940’s, simultaneous with the invention of the computer. The first scientific paper on neural networks – the architecture of the AI we have today – was published in 1943. Entire generations of AI scientists over the last 80 years were born, went to school, worked, and in many cases passed away without seeing the payoff that we are receiving now. They are legends, every one.

Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.

Wednesday, June 21, 2023

Turing, von Neumann, and the computational architecture of biological machines

I pass on the abstract of a PNAS perspective article by Hashim M. Al-Hashimi (motivated readers can obtain a PDF of the article from me).
In the mid-1930s, the English mathematician and logician Alan Turing invented an imaginary machine which could emulate the process of manipulating finite symbolic configurations by human computers. His machine launched the field of computer science and provided a foundation for the modern-day programmable computer. A decade later, building on Turing’s machine, the American–Hungarian mathematician John von Neumann invented an imaginary self-reproducing machine capable of open-ended evolution. Through his machine, von Neumann answered one of the deepest questions in Biology: Why is it that all living organisms carry a self-description in the form of DNA? The story behind how two pioneers of computer science stumbled on the secret of life many years before the discovery of the DNA double helix is not well known, not even to biologists, and you will not find it in biology textbooks. Yet, the story is just as relevant today as it was eighty years ago: Turing and von Neumann left a blueprint for studying biological systems as if they were computing machines. This approach may hold the key to answering many remaining questions in Biology and could even lead to advances in computer science.
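
As an illustration (not part of the article), here is a toy Turing machine in Python: three rules that add 1 to a binary number, showing the kind of finite symbol manipulation the abstract describes.

    # (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("inc", "1"): ("0", -1, "inc"),   # 1 plus a carry becomes 0; carry moves left
        ("inc", "0"): ("1",  0, "halt"),  # absorb the carry and stop
        ("inc", "_"): ("1",  0, "halt"),  # ran off the left edge: prepend a 1
    }

    def run(tape_str):
        tape = {i: ch for i, ch in enumerate(tape_str)}
        head, state = len(tape_str) - 1, "inc"   # start at the rightmost digit
        while state != "halt":
            write, move, state = rules[(state, tape.get(head, "_"))]
            tape[head] = write
            head += move
        cells = sorted(tape)
        return "".join(tape[i] for i in range(cells[0], cells[-1] + 1))

    print(run("1011"))   # prints 1100, i.e. 11 + 1 = 12 in binary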

Tuesday, June 20, 2023

The psychological illusion of "The Good Old Days"

Mastroianni and Gilbert (open access) do a nice study illustrating that the universal perception of our ongoing moral decline is a psychological illusion to which people all over the world and throughout history have been susceptible. Their abstract:
Anecdotal evidence indicates that people believe that morality is declining. In a series of studies using both archival and original data (n = 12,492,983), we show that people in at least 60 nations around the world believe that morality is declining, that they have believed this for at least 70 years and that they attribute this decline both to the decreasing morality of individuals as they age and to the decreasing morality of successive generations. Next, we show that people’s reports of the morality of their contemporaries have not declined over time, suggesting that the perception of moral decline is an illusion. Finally, we show how a simple mechanism based on two well-established psychological phenomena (biased exposure to information and biased memory for information) can produce an illusion of moral decline, and we report studies that confirm two of its predictions about the circumstances under which the perception of moral decline is attenuated, eliminated or reversed (that is, when respondents are asked about the morality of people they know well or people who lived before the respondent was born). Together, our studies show that the perception of moral decline is pervasive, perdurable, unfounded and easily produced. This illusion has implications for research on the misallocation of scarce resources, the underuse of social support and social influence.

Saturday, June 10, 2023

How do we define spirituality, and what is its role in our daily lives?

This post is primarily directed to Austin, TX people who will be attending the Sunday 2 p.m. July 2 meeting of the Austin Rainbow Forum, now in its 5th year. We will be discussing what spirituality means to each of us and what its role in our lives is. Look below to see how the three major AI chatbots responded to my request for some guidance on how to moderate such a discussion. Also, I'll pass on this YouTube video titled "What To Remember To Be Happy" pointed to by Rainbow Forum discussant David Soileau.

Question to the chatbots:

I have to moderate a discussion session on the topic of spirituality in our daily lives. What advice would you give me? How would you define spirituality? 

Response from ChatGPT 4:

That sounds like an interesting topic, Deric! Here are some suggestions for leading a discussion on spirituality in daily life:
  • Start by defining what spirituality means to you and how you understand it. This will help participants understand the context of the discussion and provide a starting point for exploring the topic.
  • Encourage participants to share their own experiences of spirituality in their daily lives. This can include practices such as meditation, prayer, or simply taking time for reflection and introspection.
  • Explore the ways in which spirituality can help us cope with the challenges and stresses of everyday life. For example, some people find that spiritual practices help them find meaning and purpose in difficult situations or provide a sense of inner peace and calm.
  • Discuss the role that community and social connections can play in spirituality. Many people find that sharing their spiritual journey with others and being part of a supportive community is an important aspect of their spiritual practice.
  • Finally, encourage participants to think about how they can incorporate spirituality into their daily lives in practical ways. This might involve setting aside time each day for meditation or prayer, making a conscious effort to be more mindful and present in their interactions with others, or finding ways to give back to their community in a spiritually meaningful way.
As for the definition of spirituality, it can mean different things to different people. Generally, spirituality refers to a sense of connection to something larger than oneself, whether that be a higher power, the natural world, or a broader sense of humanity. It often involves a search for meaning, purpose, and transcendence beyond the material world.

Response from Google Bard: 

Spirituality is a broad concept that can be defined in many ways. For some people, it is a belief in a higher power or a connection to something larger than themselves. For others, it is a sense of inner peace or well-being. Spirituality can also be expressed through religious practices, meditation, or simply spending time in nature.
When leading a discussion on spirituality, it is important to be respectful of all beliefs. There is no right or wrong answer when it comes to spirituality, and everyone's experience is unique. The goal of the discussion should be to learn more about each other's beliefs and to explore how spirituality can impact our daily lives.
Here are some tips for leading a discussion on spirituality:
  • Be respectful of all beliefs.
  • Encourage open and honest discussion.
  • Share your own experiences, if you are comfortable doing so.
  • Ask questions to learn more about each other's beliefs.
  • Be open to new ideas.
  • Remember that there is no right or wrong answer when it comes to spirituality.
Here are some questions you could ask to start the discussion:
  • What does spirituality mean to you?
  • How does spirituality impact your daily life?
  • What are some of your favorite spiritual practices?
  • What are some of the challenges you face in your spiritual journey?
  • What advice would you give to someone who is new to spirituality?
I hope this helps!

Response from Microsoft Bing: 

Leading a discussion session on spirituality can be a great way to explore this topic with others. Here are some tips that might help you:

  • Start with an icebreaker activity that allows participants to introduce themselves and get to know each other better.
  • Be clear about your expectations and intentions for the discussion session.
  • Use inclusive language that welcomes everyone to participate.
  • Ask open-ended questions that encourage participants to share their thoughts and feelings.
  • Listen actively to what others are saying and show respect for their opinions.
  • Encourage participants to share their own experiences and insights.
  • Be prepared to handle any conflicts or disagreements that may arise during the discussion.
Spirituality can mean different things to different people, but it generally refers to a sense of connection with something greater than oneself. It can involve a belief in God or a higher power, but it can also be more broadly defined as a sense of purpose or meaning in life.
I hope this helps! Let me know if you have any other questions.

Friday, June 09, 2023

A new place for consciousness in our understanding of the universe

I want to point MindBlog readers to an article by Thomas Lewton that I enjoyed reading. It describes how theorists are trying to make sense of mysteries like quantum mechanics and the passage of time by reformulating physics to include subjective experience as a physical constituent of the world. Lewton cites a number of authors who contributed to a special issue of the Journal of Consciousness Studies (Volume 28, Numbers 9-10, 2021).

Wednesday, June 07, 2023

Negativity drives online news consumption

Why paying attention to the news can come close to being a recipe for clinical depression... An open access article from Robertson et al. Their abstract:

Online media is important for society in informing and shaping opinions, hence raising the question of what drives online news consumption. Here we analyse the causal effect of negative and emotional words on news consumption using a large online dataset of viral news stories. Specifically, we conducted our analyses using a series of randomized controlled trials (N = 22,743). Our dataset comprises ~105,000 different variations of news stories from Upworthy.com that generated ∼5.7 million clicks across more than 370 million overall impressions. Although positive words were slightly more prevalent than negative words, we found that negative words in news headlines increased consumption rates (and positive words decreased consumption rates). For a headline of average length, each additional negative word increased the click-through rate by 2.3%. Our results contribute to a better understanding of why users engage with online media.

Monday, June 05, 2023

A simple heuristic for distinguishing lie from truth

Work by Verschuere et al. shows that a simple heuristic of only judging the level of detail in the message consistently allows people to discriminate lies from truths:
Decades of research have shown that people are poor at detecting deception. Understandably, people struggle with integrating the many putative cues to deception into an accurate veracity judgement. Heuristics simplify difficult decisions by ignoring most of the information and relying instead only on the most diagnostic cues. Here we conducted nine studies in which people evaluated honest and deceptive handwritten statements, video transcripts, videotaped interviews or live interviews. Participants performed at the chance level when they made intuitive judgements, free to use any possible cue. But when instructed to rely only on the best available cue (detailedness), they were consistently able to discriminate lies from truths. Our findings challenge the notion that people lack the potential to detect deception. The simplicity and accuracy of the use-the-best heuristic provides a promising new avenue for deception research.

Friday, June 02, 2023

Gender inequality is associated with differences between the brains of men and women

Sobering but not surprising analysis by Zugman et al.:

Significance

Gender inequality is associated with worse mental health and academic achievement in women. Using a dataset of 7,876 MRI scans from healthy adults living in 29 different countries, we here show that gender inequality is associated with differences between the brains of men and women: cortical thickness of the right hemisphere, especially in limbic regions such as the right caudal anterior cingulate and right medial orbitofrontal, as well as the left lateral occipital, present thinner cortices in women compared to men only in gender-unequal countries. These results suggest a potential neural mechanism underlying the worse outcome of women in gender-unequal settings, as well as highlight the role of the environment in the brain differences between women and men.
Abstract
Gender inequality across the world has been associated with a higher risk to mental health problems and lower academic achievement in women compared to men. We also know that the brain is shaped by nurturing and adverse socio-environmental experiences. Therefore, unequal exposure to harsher conditions for women compared to men in gender-unequal countries might be reflected in differences in their brain structure, and this could be the neural mechanism partly explaining women’s worse outcomes in gender-unequal countries. We examined this through a random-effects meta-analysis on cortical thickness and surface area differences between adult healthy men and women, including a meta-regression in which country-level gender inequality acted as an explanatory variable for the observed differences. A total of 139 samples from 29 different countries, totaling 7,876 MRI scans, were included. Thickness of the right hemisphere, and particularly the right caudal anterior cingulate, right medial orbitofrontal, and left lateral occipital cortex, presented no differences or even thicker regional cortices in women compared to men in gender-equal countries, reversing to thinner cortices in countries with greater gender inequality. These results point to the potentially hazardous effect of gender inequality on women’s brains and provide initial evidence for neuroscience-informed policies for gender equality.

Wednesday, May 31, 2023

Constructing Self and World

There is a strong similarity between the predictive processing brain model that has been the subject of numerous MindBlog posts and the operations that ChatGPT and other generative pre-trained transformer algorithms are performing, with the ‘priors’ of the predictive processing model being equivalent to the ‘pre-trained’ weightings of the generative transformer algorithms.  
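
As a rough sketch of that analogy (my own toy example, not from the podcast discussed below), the core arithmetic of a predictive processing update is a prior belief corrected by a precision-weighted prediction error:

    # Toy precision-weighted prediction-error update (illustrative numbers).
    prior_mean, prior_precision = 20.0, 1.0   # the system's prior belief
    obs, obs_precision = 26.0, 4.0            # incoming sensory evidence

    prediction_error = obs - prior_mean
    gain = obs_precision / (obs_precision + prior_precision)  # weight given to the error

    posterior_mean = prior_mean + gain * prediction_error
    print(posterior_mean)   # 24.8: the belief is pulled toward the data

The prior plays the role of the pre-trained weighting; the precision-weighted error is what nudges the model toward the incoming data.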

The open and empty awareness of the non-dual perspective corresponds to the ‘generator’ component of the AI algorithms. It is what can begin to allow reification - rendering opaque rather than transparent - of the self model and other products of the underlying content-free open awareness generator (such as our perceptions of trees, interoceptive signals, cultural rules, etc.). It enables seeing, rather than being, the glass window through which you are viewing the tree in the yard. The rationale of non-dual awareness is not to have ‘no-self.’ The ‘self’ prior is there because it is a very useful avatar for interactions. Rather, the non-dual perspective can enable a tweaking or re-construction of previously transparent priors - now rendered opaque - that lets go of their less useful components. The point of having an expanded 'no self' is to become aware of and refine the illusions or phantasies about what is in our internal and external worlds that arise from it.  

The paragraphs above derive from my listening to one of Sam Harris’ podcasts in his “Making Sense” series titled “Constructing Self and World,” a conversation with Shamil Chandaria, who is a philanthropist, serial entrepreneur, technologist, and academic with multidisciplinary research interests. During the conversation a number of ideas I am familiar with were framed in a very useful way, and I wanted to put them down and pass on to MindBlog readers the thumbnail summary above.

Monday, May 29, 2023

To fulfill its promise, artificial intelligence needs to deepen human intelligence.

For MindBlog readers interested in AI, I have to point to another must-read article by Ezra Klein. Below are some clips that try to communicate his central points. (And no, I'm not using ChatGPT to generate this post, because of several of AI's limitations that he notes.) Klein starts by noting the many ways in which the internet has not fulfilled its promise: overwhelming us with more information than we can process, degrading our political discourse and attention spans, and leading us into multitasking, which not only diminishes our cognitive depth but also activates our stress chemistry. He then lists several wrong directions that might be taken by large language models like OpenAI’s GPT-4 and Google’s Bard:
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up...A question to ask about large language models, then, is where does trustworthiness not matter?...A.I. will be great for creating content where reliability isn’t a concern. The personalized video games and children’s shows and music mash-ups and bespoke images will be dazzling...But where reliability matters — say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions — deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.
...Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second...Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?...Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains — though real — are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.
Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds...My third concern is related to this use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing...It’s the time spent inside an article or book drawing connections to what we know and having thoughts we would not otherwise have had that matters...No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real.
These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that “if men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.” I think the trade-off here was worth it — I am, after all, a writer — but it was a trade-off. Human beings really did lose faculties of memory we once had.
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don’t overwhelm and distract and diminish us. We failed that test with the internet. Let’s not fail it with A.I.

Friday, May 26, 2023

When Their Idea of Liberty Is Your Idea of Death

This post gives a snapshot of another really excellent Thomas Edsall piece, provided courtesy of ChatGPT-4, making the central points accessible to MindBlog readers who are blocked by the NYTimes paywall.

In this essay, Thomas Edsall explores the concept of freedom as a contested ideal in American politics, particularly in the run-up to the 2024 election.

President Biden, in announcing his re-election bid, frames freedom as being under threat by MAGA extremists. He emphasizes defending democracy, personal freedom, and civil rights, portraying the election as a choice between more or less freedom, more or fewer rights.

The essay also highlights contrasting views from Florida Governor Ron DeSantis, who views the threat to freedom as coming not only from government actions, but also from powerful institutions pushing a "woke" agenda. DeSantis criticizes elites in federal bureaucracy, media, Big Tech, and universities for using undemocratic means to impose their ideology on society.

Edsall cites the political philosopher Isaiah Berlin's two concepts of liberty: negative freedom, being free from interference by others, and positive freedom, the individual's desire to be their own master. This dichotomy is mirrored in the divergent notions of freedom espoused by figures such as George Wallace and Martin Luther King Jr.

The essay also examines differing political interpretations of freedom, from Biden's liberal tradition linked to F.D.R., emphasizing meeting material needs and democratic institutions, to Reagan's view of freedom as being largely free from government interference. Smith argues that Trump and DeSantis' notion of freedom is more constrained and restrictive, supporting democracy only as long as it produces the results they want.

Edsall points out that both the left and the right have imposed limitations on freedom. He notes conservative Republicans' restrictions on teaching about race and sex, banning books, suppressing voting, and barring local ordinances. Meanwhile, left-leaning students and faculty have sought to "cancel" figures who violate progressive orthodoxy, disrupt conservative speakers, and restrict teaching material considered harmful to marginalized groups.

Democrats and Republicans are competing to define and advocate for freedom. Isabel V. Sawhill from the Brookings Institution argues that Democrats have substance behind their freedom rhetoric, citing battles over abortion rights and Republicans' changing attitude towards the business community.

Francis Fukuyama highlights the rise of Trumpist populism and the shift of American conservatives towards a more European conservative approach. He argues that the right is using exaggerated fears of the "woke" agenda to justify authoritarian assaults on democracy, but Edsall points out that some voters view liberal policies as infringing on their freedom.

The essay examines homelessness as an ongoing debate over freedom and how progressives historically took the lead in defining and advancing freedom. William Galston argues that progressives lost their command of freedom in the 1960s, allowing conservatives to claim it.

As the 2024 election approaches, both parties have some favorable signs in polls. Republicans can point to Biden's vulnerability and conservative attitudes towards transgender issues. Democrats can highlight the public's opposition to book banning and strict abortion bans. The essay concludes by emphasizing the importance of the election in determining the nation's direction on freedom and liberty, and warns Democrats not to take anything for granted.



Wednesday, May 24, 2023

Using AI to decipher words and sentences from brain scans

Things are happening very fast in AI, as this work from Huth and his collaborators shows. Previous work has shown that speech articulation and other signals can be decoded from invasive intracranial recordings, and they have developed a language decoder that now accomplishes this with non-invasive fMRI. Motivated readers can obtain the detailed description of the work by emailing me. Their abstract:
A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces.
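
For readers who want a feel for how a decoder of this kind can be organized, here is a schematic sketch consistent with the abstract's description: a language model proposes candidate word sequences, an encoding model predicts the fMRI pattern each candidate should evoke, and a small beam of candidates that best match the recorded pattern is kept at each step. Everything below - the function names, the toy encoding model, the data - is a placeholder of my own devising, not the authors' code.

```python
# A schematic sketch (placeholders throughout, not the authors' code) of a
# beam-search decoder: a language model proposes continuations, an encoding
# model predicts the fMRI response each candidate should evoke, and the
# candidates that best match the recorded response are kept at every step.
import numpy as np

def propose_continuations(sequence, vocab):
    """Hypothetical language-model step: extend a word sequence by one candidate word."""
    return [sequence + [w] for w in vocab]

def predict_fmri(sequence, encoding_model):
    """Hypothetical encoding model: map a word sequence to a predicted voxel pattern."""
    # Stand-in for real language-model features of the sequence
    features = np.array([sum(ord(ch) for ch in w) % 100 / 100.0 for w in sequence])
    return encoding_model @ np.resize(features, encoding_model.shape[1])

def decode(recorded_response, vocab, encoding_model, steps=5, beam_width=3):
    beam = [[]]                                            # start from an empty sequence
    for _ in range(steps):
        candidates = [c for seq in beam for c in propose_continuations(seq, vocab)]
        # Score each candidate by how well its predicted response matches the recording
        scores = [-np.linalg.norm(predict_fmri(c, encoding_model) - recorded_response)
                  for c in candidates]
        best = np.argsort(scores)[-beam_width:]            # keep the best few candidates
        beam = [candidates[i] for i in best]
    return beam[-1]                                        # highest-scoring word sequence

# Toy usage with made-up data: 10 voxels, an 8-dimensional feature space, a tiny vocabulary
rng = np.random.default_rng(0)
encoding_model = rng.normal(size=(10, 8))                  # voxels x feature dimensions
recorded = rng.normal(size=10)                             # the "measured" fMRI pattern
print(decode(recorded, ["the", "dog", "ran", "home"], encoding_model))
```

The key design point, which the abstract emphasizes, is that decoding goes through meaning rather than individual words, which is why an encoding model over semantic features sits at the center of the loop.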

Monday, May 22, 2023

Multilevel cultural evolution - new theory and practical applications

I want to point to an exceptionally lucid and well-written exposition by David Sloan Wilson et al. and pass on the opening paragraphs of their article, which frame the context for understanding how group-level selection operates at multiple levels, from cells to societies to the entire earth system. I strongly recommend reading through it slowly and carefully. If that seems a bit much, skip down to the section titled "The earth system as the ultimate unit of selection." (Added note: for a comprehensive game-theoretic analysis of evolutionary dynamics within and among competing groups see Cooney et al.)

Darwin’s theory of evolution is celebrated for its explanatory scope, prompting the geneticist Theodosius Dobzhansky to declare in 1973 that “nothing in biology makes sense except in the light of evolution”. However, what became the “modern synthesis” can also be called the “great constriction.” The study of evolution was confined almost entirely to genetic evolution, relegating the study of human cultural change to other disciplines.
It was not until the 1970s that evolutionary thinkers started to go back to basics by defining natural selection as Darwin did—any process that combines the triad of variation, selection, and replication—irrespective of the proximate mechanisms. The first mathematical models of cultural evolution were based on population genetics models developed 50 y earlier.
Today, the study of cultural evolution in humans and other species is in full swing —and these advances in basic scientific knowledge have practical applications. In this article, we will first review major developments in our basic understanding of human cultural evolution. Then, we will show how they can be applied to a diversity of positive change efforts, no matter what the scale (e.g., from the individual person to global governance) or topic domain. We elaborate for the topics of complex systems science and engineering, economics and business, mental health and well-being, and global change efforts.

Here is the text from the section near the end of the article titled "The earth system as the ultimate unit of selection."

The earth system as the ultimate unit of selection.
We have seen that multilevel selection is like a perverse alchemist who turns gold into lead. Self-preservation—a good thing—becomes disruptive selfishness. Helping kith and kin—a good thing—becomes cronyism and nepotism. The welfare of my nation—a good thing—leads to international conflicts. Thriving economies—a good thing—leads to overheating the earth. Nearly everything that is pathological at higher scales can be traced to behaviors that are prosocial at smaller scales.
The only solution to this problem is for policies to be formulated with welfare of the whole-earth system in mind. This is not sufficient by itself, as we will elaborate below, but the basic logic of multilevel selection reveals that it is necessary. There is no invisible hand to permute lower-level interests into higher-level welfare other than our own conscious efforts.
Superficially, it might seem that selection at the planetary scale is impossible because our planet is not competing with any other planets. What makes planet-level selection possible is a decision-making process that makes planetary welfare the target of selection, orients variation around the target, and identifies and replicates better practices, realizing they will be sensitive to context. This is how conscious cultural evolution takes place at smaller scales, as described in the previous sections, and can also take place at the global scale.
The concept of the whole earth as a cooperative system and the primary social identity of an individual was beyond the imagination only a few centuries ago. Nevertheless, when it comes to cultural evolution, the past does not predict the future. Given the myriad forms of globalization that have taken place during the last century, it is difficult not to consider the whole earth as a single system that must transition from CAS2 (“survive”) to CAS1 (“thrive”). Human social groups are nearly always socially constructed. To say “I am first and foremost a human being and citizen of the earth” is no more difficult than to say “I am an American” or “I am a Christian.”
Many people have already adopted a whole-earth ethic, which does manifest as action to a degree—but they do not have a common and authoritative theoretical framework to invoke and from which to derive effective policies. This is in contrast to neoclassical economics and its elaborate mathematical justification of the invisible hand metaphor. Multilevel selection reveals the invisible hand metaphor to be profoundly untrue. It is simply not the case, in economics or any other policy domain, that the lower-level pursuit of self-interest robustly benefits the common good. However, multilevel selection does lead to another, more legitimate conception of the invisible hand metaphor. We must act in two capacities: as designers of whole systems and as participants in the systems that we design. As designers, we must have the welfare of the whole system in mind, which is the opposite of the invisible hand metaphor. As participants, we can indeed respond to our local concerns without having the whole system in mind. Put another way, selection at the level of whole systems is the hand, which winnows the small set of lower-level behaviors that benefit the common good from the much larger set of lower-level behaviors that undermine the common good.
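
The logic of that passage - behaviors that win within groups undermining the larger whole, while selection among groups favors cooperation - can be illustrated with a toy calculation. The sketch below is a standard textbook-style example with arbitrary numbers, not the authors' model: within each group defectors out-reproduce cooperators, yet because more cooperative groups grow faster, the population-wide share of cooperators can rise even as it falls inside every group.

```python
# A toy illustration of multilevel selection (a standard textbook-style sketch,
# not the authors' model; all numbers are arbitrary). Within every group,
# defectors out-reproduce cooperators, yet groups with more cooperators grow
# faster -- so the population-wide share of cooperators can rise even while the
# share inside each group falls.
import numpy as np

def step(sizes, coop_fracs, base=1.0, group_benefit=3.0, coop_cost=0.2):
    w_coop   = base + group_benefit * coop_fracs - coop_cost   # cooperators pay a cost
    w_defect = base + group_benefit * coop_fracs               # defectors free-ride
    coop_offspring   = sizes * coop_fracs * w_coop
    defect_offspring = sizes * (1 - coop_fracs) * w_defect
    new_sizes = coop_offspring + defect_offspring
    return new_sizes, coop_offspring / new_sizes

sizes      = np.array([100.0, 100.0])
coop_fracs = np.array([0.9, 0.1])        # one cooperative group, one selfish group
print("global cooperator share before:",
      round(np.sum(sizes * coop_fracs) / np.sum(sizes), 3))
new_sizes, new_fracs = step(sizes, coop_fracs)
print("within-group shares after one generation:", np.round(new_fracs, 3))    # both fall
print("global cooperator share after:",
      round(np.sum(new_sizes * new_fracs) / np.sum(new_sizes), 3))             # yet it rises
```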

Friday, May 19, 2023

Liberty, Equality, and Fraternity in the Postmodern, Neoliberal Age

I want to share with MindBlog readers the background material prepared by Austin Rainbow Forum member Daniel Owen to support our Sunday June 4 2023 discussion on the conflict between the ideals of liberty, equality, and fraternity: 

Liberty, Equality, and Fraternity in the Postmodern, Neoliberal Age

The French motto of liberty, equality, and fraternity has symbolized democracy and human rights since the French Revolution. Our U.S. Declaration of Independence declares, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” No mention of fraternity, but in the 1835 political theory classic “Democracy in America,” the French political philosopher Alexis de Tocqueville observed this about American society:

“Americans of all ages, all conditions, all minds constantly unite. Not only do they have commercial and industrial associations in which all take part, but they also have a thousand other kinds: religious, moral, grave, futile, very general and very particular, immense and very small; Americans use associations to give fêtes, to found seminaries, to build inns, to raise churches, to distribute books, to send missionaries to the antipodes; in this manner they create hospitals, prisons, schools. Finally, if it is a question of bringing to light a truth or developing a sentiment with the support of a great example, they associate. Everywhere that, at the head of a new undertaking, you see the government in France and a great lord in England, count on it that you will perceive an association in the United States.”

Perhaps the ideal of fraternity as civic-mindedness was just part of the character of early American culture. Is it still, or have we lost that?

Earlier this year, I listened to a talk by a Unitarian minister who suggested that liberty, equality, and fraternity are like a three-legged stool. If one of the legs becomes too long or too short, the stool is unbalanced. He thought our culture was out of balance with too much emphasis on individual liberty at the expense of equality and fraternity. Agree? Disagree?

How do we find a balance?

How do we deal with differences between the political left and right regarding what form these ideals should take?

How relevant are these ideals today in a globalized world dominated by neoliberalism?

How can we reconcile “We hold these truths to be self-evident…” with the relativism of postmodernism, where such declarations may be seen as socially constructed metanarratives used to advance the power and interests of some groups at the expense of others?

One possible response is to rethink what these principles mean in the 21st century. Liberty does not have to mean unlimited freedom to pursue one’s own agenda. It can also mean freedom to participate in democratic decision-making, to express one's identity and culture, and to access education, health care, and other public goods. Equality does not have to mean uniformity or conformity. It can also mean respect for diversity, human rights, and social justice. Fraternity does not have to mean exclusion or nationalism. It can also mean solidarity, empathy, and mutual aid among people of different backgrounds, beliefs, and interests.

Thoughts?

Recommended reading/listening:

When Their Idea of Liberty is Your Idea of Death (12-minute read)

What It Means to Be Woke (5-minute read)

What Is Postmodernism? (12-minute video)

Extra Credit: Why We're Still Postmodern (56-minute podcast. A lively discussion of postmodernism, neoliberalism, and contemporary culture from a very different angle than the previous video. Worth your time.)

Wednesday, May 17, 2023

An evolutionary psychology perspective on the politics of leadership

Thomas Edsall does another of his scholarly analyses, this time putting Trump as the wannabe alpha male in a larger evolutionary context. I started this post with a number of clips from Edsall's text, but then decided to have another go at asking ChatGPT 4 to do a summary for me. In response to my request "Please condense and summarize the following essay by Thomas Edsall" it did a quite credible job, and I'm going to be lazy and pass on what the bot did!

Thomas Edsall's essay discusses how Donald Trump's alpha-male image conflicts with the barrage of criminal investigations and civil suits he faces, threatening his self-portrayal and traditional leadership standards. Trump's bid for dominance never garnered majority support, yet he still poses a serious threat to American democracy.

The essay delves into the evolutionary and neuroendocrine foundations of human aggression, highlighting the crucial role of coalition formation in overcoming coercive alpha males. Early sub-elite males formed coalitions to overthrow alpha males who violated social norms, paving the way for a more egalitarian society. This development has been traced to the roots of democracy, as it eventually led to the state's acquisition of police power and the legal use of force to enforce norms and laws.

The criminal investigations and civil suits against Trump represent America's democratic system of government, attempting to constrain a deregulated alpha-male wannabe. Over time, complex systems have evolved to limit the power of leaders, like coalitions, power-sharing agreements, parliaments, and constitutions. However, the dynamic of dominance persists, as voters often favor taller candidates and crave strong leaders.

Prof. Rose McDermott explains the process of self-domestication in human societies, where beta and gamma males work together to unseat coercive alpha males who exploit the community, leading to more egalitarian dynamics. McDermott views former President Trump as an example of a coercive alpha male, and suggests that the polarization in the U.S. has prevented the formation of coalitions strong enough to oppose him.

Some academics disagree with this biological explanation for modern social behavior, such as Prof. John Horgan, who finds it deterministic and promoting fatalism, and Prof. R. Brian Ferguson, who disputes the idea of alphas facing death due to sub-alpha elite coalitions. On the other hand, Prof. Dan McAdams argues that Trump's personality and authoritarian dynamic align with an older, evolutionarily-driven paradigm of dominance.

Prof. Kevin Smith attributes the rise of coercive alpha males and other unprincipled personalities in politics to the weakening of democratic norms, pointing out that these norms are difficult to institutionalize and easy to destroy. Once gone, they may be difficult to re-establish, leaving the political system vulnerable to demagogues and tyrants.

Monday, May 15, 2023

People who talk too much

I host a monthly discussion group in Austin TX, The Austin Rainbow Forum, that meets at 2 pm on the first Sunday of every month to consider interesting topics and ideas. On this past May 7, one of our group members led a discussion of "Overtalking" in the modern world, which has got us all spouting opinions, giving advice, and getting ourselves in trouble, according to Dan Lyons in his recent book titled "STFU: The Power of Keeping Your Mouth Shut in an Endlessly Noisy World."  The central ideas in Lyons’ book are summarized in this Time Magazine article. I looked through a reviewers copy of the book I was sent, and suggest that it is worth having a look if you are stimulated by the summary article. The bottom line of the book could be stated as "Shut up and listen instead of talking so much." Lyons offers five nudges: 

-When possible, say nothing

-Master the power of the pause

-Quit social media

-Seek out silence

-Learn how to listen

Lyons is a professional columnist who writes with a very engaging style, even if the level of his coverage is sometimes a bit superficial. (He quotes a researcher who studied brain activity and “figured out what causes talkaholism”; unfortunately, on doing a quick lookup of the work describing the neuronal measurements, I found that there is no there there.)

Friday, May 12, 2023

Virality

This post is the ninth and final installment of my passing on to both MindBlog readers and my future self my idiosyncratic selection of clips of text from O’Gieblyn’s book ‘God, Human, Animal, Machine’ that I have found particularly interesting. Here are fragments of Chapter 13, from the seventh section of her book, titled "Virality".

The most successful metaphors become invisible through ubiquity. The same is true of ideology, which, as it becomes thoroughly integrated into a culture, sheds its contours and distinctive outline and dissolves finally into pure atmosphere. Although digital technology constitutes the basic architecture of the information age, it is rarely spoken of as a system of thought. Its inability to hold ideas or beliefs, preferences or opinions, is often misunderstood as an absence of philosophy rather than a description of its tenets. The central pillar of this ideology is its conception of being, which might be described as an ontology of vacancy—a great emptying-out of qualities, content, and meaning. This ontology feeds into its epistemology, which holds that knowledge lies not in concepts themselves but in the relationships that constitute them, which can be discovered by artificial networks that lack any true knowledge of what they are uncovering. And as global networks have come to encompass more and more of our  human relations, it’s become increasingly difficult to speak of ourselves—the nodes of this enormous brain—as living agents with beliefs, preferences, and opinions.

The term “viral media” was coined in 1994 by the critic Douglas Rushkoff, who argued that the internet had become “an extension of a living organism” that spanned the globe and radically accelerated the way ideas and culture spread. The notion that the laws of the biosphere could apply to the datasphere was already by that point taken for granted, thanks to the theory of memes, a term Richard Dawkins devised to show that ideas and cultural phenomena spread across a population in much the same way genes do. iPods are memes, as are poodle skirts, communism, and the Protestant Reformation. The main benefit of this metaphor was its ability to explain how artifacts and ideologies reproduce themselves without the participation of conscious subjects. Just as viruses infect hosts without their knowledge or consent, so memes have a single “goal,” self-preservation and spread, which they achieve by latching on to a host and hijacking its reproductive machinery for their own ends. That this entirely passive conception of human culture necessitates the awkward reassignment of agency to the ideas themselves—imagining that memes have “goals” and “ends”—is usually explained away as a figure of speech.

When Rushkoff began writing about “viral media,” the internet was still in the midst of its buoyant overture, and he believed, as many did at the time, that this highly networked world would benefit “people who lack traditional political power.” A system that has no knowledge of a host’s identity or status should, in theory, be radically democratic. It should, in theory, level existing hierarchies and create an even playing field, allowing the most potent ideas to flourish, just as the most successful genes do under the indifferent gaze of nature. By 2019, however, Rushkoff had grown pessimistic. The blind logic of the network was, it turned out, not as blind as it appeared—or rather, it could be manipulated by those who already had enormous resources. “Today, the bottom-up techniques of guerrilla media activists are in the hands of the world’s wealthiest corporations, politicians, and propagandists,” Rushkoff writes in his book Team Human. What’s more, it turns out that the blindness of the system does not ensure its judiciousness. Within the highly competitive media landscape, the metrics of success have become purely quantitative—page views, clicks, shares—and so the potential for spread is often privileged over the virtue or validity of the content. “It doesn’t matter what side of an issue people are on for them to be affected by the meme and provoked to replicate it,” Rushkoff writes. In fact the most successful memes don’t appeal to our intellect at all. Just as the proliferation of a novel virus depends on bodies that have not yet developed an effective immune response, so the most effective memes are those that bypass the gatekeeping rational mind and instead trigger “our most automatic impulses.” This logic is built into the algorithms of social media, which replicate content that garners the most extreme reactions and which foster, when combined with the equally blind and relentless dictates of a free market, what one journalist has called “global, real-time contests for attention.”
            
The general public has become preoccupied by robots—or rather “bots,” the diminutive, a term that appears almost uniformly in the plural, calling to mind a swarm or infestation, a virus in its own right, though in most cases they are merely the means of transmission. It should not have come as a surprise that a system in which ideas are believed to multiply according to their own logic, by pursuing their own ends, would come to privilege hosts that are not conscious at all. There had been suspicions since the start of the pandemic about the speed and efficiency with which national discourse was hijacked by all manner of hearsay, conspiracy, and subterfuge.

The problem is not merely that public opinion is being shaped by robots. It’s that it has become impossible to decipher between ideas that represent a legitimate political will and those that are being mindlessly propagated by machines. This uncertainty creates an epistemological gap that renders the assignment of culpability nearly impossible and makes it all too easy to forget that these ideas are being espoused and proliferated by members of our democratic system—a problem that is far more deep-rooted and entrenched and for which there are no quick and easy solutions. Rather than contending with this fact, there is instead a growing consensus that the platforms themselves are to blame, though no one can settle on precisely where the problem lies: The algorithms? The structure? The lack of censorship and intervention? Hate speech is often spoken of as though it were a coding error—a “content-moderation nightmare,” an “industry-wide problem,” as various platform executives have described it, one that must be addressed through “different technical changes,” most of which are designed to appease advertisers. Such conversations merely strengthen the conviction that the collective underbelly of extremists, foreign agents, trolls, and robots is an emergent feature of the system itself, a phantasm arising mysteriously from the code, like Grendel awakening out of the swamp.

Donald Trump himself, a man whose rise to power may or may not have been aided by machines, is often included in this digital phantasm, one more emergent property of the network’s baffling complexity…Robert A. Burton, a prominent neurologist, claimed in an article that the president made sense once you stopped viewing him as a human being and began to see him as “a rudimentary artificial intelligence-based learning machine.” Like deep-learning systems, Trump was working blindly through trial and error, keeping a record of what moves worked in the past and using them to optimize his strategy, much like AlphaGo, the AI system that swept the Go championship in Seoul. The reason that we found him so baffling was that we continually tried to anthropomorphize him, attributing intention and ideology to his decisions, as though they stemmed from a coherent agenda. AI systems are so wildly successful because they aren’t burdened with any of these rational or moral concerns—they don’t have to think about what is socially acceptable or take into account downstream consequences. They have one goal—winning—and this rigorous single-minded interest is consistently updated through positive feedback. Burton’s advice to historians and policy wonks was to regard Trump as a black box. “As there are no lines of reasoning driving the network’s actions,” he wrote, “it is not possible to reverse engineer the network to reveal the ‘why’ of any decision.”

If we resign ourselves to the fact that our machines will inevitably succeed us in power and intelligence, they will surely come to regard us this way, as something insensate and vaguely revolting, a glitch in the operation of their machinery. That we have already begun to speak of ourselves in such terms is implicit in phrases like “human error,” a phrase that is defined, variously, as an error that is typical of humans rather than machines and as an outcome not desired by a set of rules or an external observer. We are indeed the virus, the ghost in the machine, the bug slowing down a system that would function better, in practically every sense, without us.

If Blumenberg is correct in his account of disenchantment, the scientific revolution was itself a leap of faith, an assertion that the ill-conceived God could no longer guarantee our worth as a species, that our earthly frame of reference was the only valid one. Blumenberg believed that the crisis of nominalism was not a one-time occurrence but rather one of many “phases of objectivization that loose themselves from their original motivation.” The tendency to privilege some higher order over human interests had emerged throughout history—before Ockham and the Protestant reformers it had appeared in the philosophy of the Epicureans, who believed that there was no correspondence between God and earthly life. And he believed it was happening once again in the technologies of the twentieth century, as the quest for knowledge loosened itself from its humanistic origins. It was at such moments that it became necessary to clarify the purpose of science and technology, so as to “bring them back into their human function, to subject them again to man’s purposes in relation to the world.” …Arendt hoped that in the future we would develop an outlook that was more “geocentric and anthropomorphic.” She advocated a philosophy that took as its starting point the brute fact of our mortality and accepted that the earth, which we were actively destroying and trying to escape, was our only possible home.