
Sunday, September 15, 2024

A caustic review of Yuval Harari's "Nexus"

I pass on the very cogent opinions of Dominic Green, fellow of the Royal Historical Society, that appeared in the Sept. 13 issue of the Wall Street Journal. He offers several caustic comments on ideas offered in Yuval Harari's most recent book, "Nexus."

Groucho Marx said there are two types of people in this world: “those who think people can be divided up into two types, and those who don’t.” In “Nexus,” the Israeli historian-philosopher Yuval Noah Harari divides us into a naive and populist type and another type that he prefers but does not name. This omission is not surprising. The opposite of naive and populist might be wise and pluralist, but it might also be cynical and elitist. Who would admit to that?

Mr. Harari is the author of the bestselling “Sapiens,” a history of our species written with an eye on present anxieties about our future. “Nexus,” a history of our society as a series of information networks and a warning about artificial intelligence, uses a similar recipe. A dollop of historical anecdote is seasoned with a pinch of social science and a spoonful of speculation, topped with a soggy crust of prescription, and lightly dusted with premonitions of the apocalypse that will overcome us if we refuse a second serving. “Nexus” goes down easily, but it isn’t as nourishing as it claims. Much of it leaves a sour taste.

Like the Victorian novel and Caesar’s Gaul, “Nexus” divides into three parts. The first part describes the development of complex societies through the creation and control of information networks. The second argues that the digital network is both quantitatively and qualitatively different from the print network that created modern democratic societies. The third presents the AI apocalypse. An “alien” information network gone rogue, Mr. Harari warns, could “supercharge existing human conflicts,” leading to an “AI arms race” and a digital Cold War, with rival powers divided by a Silicon Curtain of chips and code.

Information, Mr. Harari writes, creates a “social nexus” among its users. The “twin pillars” of society are bureaucracy, which creates power by centralizing information, and mythology, which creates power by controlling the dispersal of “stories” and “brands.” Societies cohere around stories such as the Bible and communism and “personality cults” and brands such as Jesus and Stalin. Religion is a fiction that stamps “superhuman legitimacy” on the social order. All “true believers” are delusional. Anyone who calls a religion “a true representation of reality” is “lying.” Mr. Harari is scathing about Judaism and Christianity but hardly criticizes Islam. In this much, he is not naive.

Mythologies of religion, history and ideology, Mr. Harari believes, exploit our naive tendency to mistake all information as “an attempt to represent reality.” When the attempt is convincing, the naive “call it truth.” Mr. Harari agrees that “truth is an accurate representation of reality” but argues that only “objective facts” such as scientific data are true. “Subjective facts” based on “beliefs and feelings” cannot be true. The collaborative cacophony of “intersubjective reality,” the darkling plain of social and political contention where all our minds meet, also cannot be fully true.

Digitizing our naivety has, Mr. Harari believes, made us uncontrollable and incorrigible. “Nexus” is most interesting, and most flawed, when it examines our current situation. Digital networks overwhelm us with information, but computers can only create “order,” not “truth” or “wisdom.” AI might take over without developing human-style consciousness: “Intelligence is enough.” The nexus of machine-learning, algorithmic “user engagement” and human nature could mean that “large-scale democracies may not survive the rise of computer technology.”

The “main split” in 20th-century information was between closed, pseudo-infallible “totalitarian” systems and open, self-correcting “democratic” systems. As Mr. Harari’s third section describes, after the flood of digital information, the split will be between humans and machines. The machines will still be fallible. Will they allow us to correct them? Though “we aren’t sure” why the “democratic information network is breaking down,” Mr. Harari nevertheless argues that “social media algorithms” play such a “divisive” role that free speech has become a naive luxury, unaffordable in the age of AI. He “strongly disagrees” with Louis Brandeis’s opinion in Whitney v. California (1927) that the best way to combat false speech is with more speech.

The survival of democracy requires “regulatory institutions” that will “vet algorithms,” counter “conspiracy theories” and prevent the rise of “charismatic leaders.” Mr. Harari never mentions the First Amendment, but “Nexus” amounts to a sustained argument for its suppression. Unfortunately, his grasp of politics is tenuous and hyperbolic. He seems to believe that populism was invented with the iPhone rather than being a recurring bug that appears when democratic operating systems become corrupted or fail to update their software. He consistently confuses democracy (a method of gauging opinion with a long history) with liberalism (a mostly Anglo-American legal philosophy with a short history). He defines democracy as “an ongoing conversation between diverse information nodes,” but the openness of the conversation and the independence of its nodes derive from liberalism’s rights of individual privacy and speech. Yet “liberalism” appears nowhere in “Nexus.” Mr. Harari isn’t much concerned with liberty and justice either.

In “On Naive and Sentimental Poetry” (1795-96), Friedrich Schiller divided poetry between two modes. The naive mode is ancient and assumes that language is a window into reality. The sentimental mode belongs to our “artificial age” and sees language as a mirror to our inner turmoil. As a reflection of our troubled age of transition, “Nexus” is a mirror to the unease of our experts and elites. It divides people into the cognitively unfit and the informationally pure and proposes we divide power over speech accordingly. Call me naive, but Mr. Harari’s technocratic TED-talking is not the way to save democracy. It is the royal road to tyranny.

 

The Fear of Diverse Intelligences Like AI

I want to suggest that you read the article by Michael Levin in the Sept. 3 issue of Noema Magazine on how our fear of AI’s potential is emblematic of humanity’s larger difficulty recognizing intelligence in unfamiliar guises. (One needs to be clear, however, that the AI of the GPT engines is not 'intelligence' in the broader sense of the term; they are large language models, LLMs.) Here are some clips from the later portions of his essay:

Why would natural evolution have an eternal monopoly on producing systems with preferences, goals and the intelligence to strive to meet them? How do you know that bodies whose construction includes engineered, rational input in addition to emergent physics, instead of exclusively random mutations (the mainstream picture of evolution), do not have what you mean by emotion, intelligence and an inner perspective? 

Do cyborgs (at various percentage combinations of human brain and tech) have the magic that you have? Do single cells? Do we have a convincing, progress-generating story of why the chemical system of our cells, which is compatible with emotion, would be inaccessible to construction by other intelligences in comparison to the random meanderings of evolution?

We have somewhat of a handle on emergent complexity, but we have only begun to understand emergent cognition, which appears in places that are hard for us to accept. The inner life of partially (or wholly) engineered embodied action-perception agents is no more obvious (or limited) by looking at the algorithms that its engineers wrote than is our inner life derivable from the laws of chemistry that reductionists see when they zoom into our cells. The algorithmic picture of a “machine” is no more the whole story of engineered constructs, even simple ones, than are the laws of chemistry the whole story of human minds.

Figuring out how to relate to minds of unconventional origin — not just AI and robotics but also cells, organs, hybrots, cyborgs and many others — is an existential-level task for humanity as it matures.

Our current educational materials give people the false idea that they understand the limits of what different types of matter can do.  The protagonist in the “Ex Machina” movie cuts himself to determine whether he is also a robotic being. Why does this matter so much to him? Because, like many people, if he were to find cogs and gears underneath his skin, he would suddenly feel lesser than, rather than considering the possibility that he embodied a leap-forward for non-organic matter.  He trusts the conventional story of what intelligently arranged cogs and gears cannot do (but randomly mutated, selected protein hardware can) so much that he’s willing to give up his personal experience as a real, majestic being with consciousness and agency in the world.

The correct conclusion from such a discovery — “Huh, cool, I guess cogs and gears can form true minds!” — is inaccessible to many because the reductive story of inorganic matter is so ingrained. People often assume that though they cannot articulate it, someone knows why consciousness inhabits brains and is nowhere else. Cognitive science must be more careful and honest when exporting to society a story of where the gaps in knowledge lie and which assumptions about the substrate and origin of minds are up for revision.

It’s terrifying to consider how people will free themselves, mentally and physically, once we really let go of the pre-scientific notion that any benevolent intelligence planned for us to live in the miserable state of embodiment many on Earth face today. Expanding our scientific wisdom and our moral compassion will give everyone the tools to have the embodiment they want.

The people of that phase of human development will be hard to control. Is that the scariest part? Or is it the fact that they will challenge all of us to raise our game, to go beyond coasting on our defaults, by showing us what is possible? One can hide all these fears under macho facades of protecting real, honest-to-goodness humans and their relationships, but it’s transparent and it won’t hold.

Everything — not just technology, but also ethics — will change. Thus, my challenges to all of us are these. State your positive vision of the future — not just the ubiquitous lists of the fearful things you don’t want but specify what you do want. In 100 years, is humanity still burdened by disease, infirmity, the tyranny of deoxyribonucleic acid, and behavioral firmware developed for life on the savannah? What will a mature species’ mental frameworks look like?

“Other, unconventional minds are scary, if you are not sure of your own — its reality, its quality and its ability to offer value in ways that don’t depend on limiting others.”

Clarify your beliefs: Make explicit the reasons for your certainties about what different architectures can and cannot do; include cyborgs and aliens in the classifications that drive your ethics. I especially call upon anyone who is writing, reviewing or commenting on work in this field to be explicit about your stance on the cognitive status of the chemical system we call a paramecium, the ethical position of life-machine hybrids such as cyborgs, the specific magic thing that makes up “life” (if there is any), and the scientific and ethical utility of the crisp categories you wish to preserve.

Take your organicist ideas more seriously and find out how they enrich the world beyond the superficial, contingent limits of the products of random evolution. If you really think there is something in living beings that goes beyond all machine metaphors, commit to this idea and investigate what other systems, beyond our evo-neuro-chauvinist assumptions, might also have this emergent cognition.

Consider that the beautiful, ineffable qualities of inner perspective and goal-directedness may manifest far more broadly than is easily recognized. Question your unwarranted confidence in what “mere matter” can do, and entertain the humility of emergent cognition, not just emergent complexity. Recognize the kinship we have with other minds and the fact that all learning requires your past self to be modified and replaced by an improved, new version. Rejoice in the opportunity for growth and change and take responsibility for guiding the nature of that change.

Go further — past the facile stories of what could go wrong in the future and paint the future you do want to work toward. Transcend scarcity and redistribution of limited resources, and help grow the pot. It’s not just for you — it’s for your children and for future generations, who deserve the right to live in a world unbounded by ancient, pre-scientific ideas and their stranglehold on our imaginations, abilities, and ethics.

Friday, August 30, 2024

The delusions of transhumanists and life span extenders

One futurist story line is that we all will be seamlessly integrated with AI and the cloud, and bioengineered to live forever. This simplistic fantasy does not take into account the pervasive and fundamental relationships and fluxes between all living things. The cellular mass of individual humans, after all, is mainly composed of their diverse microbiotas (bacteria, fungi, protists), which influence how all organ systems interact with environmental input. Humans dominate the planet because their brains first evolved a largely unconscious mirroring collective cooperative group intelligence from which different languages and cultures have risen. The linguistic and cognitive brain being modeled by AI's large language models is just one component of a much larger working ensemble in continuous flux with the geosphere and biosphere.

(I've been meaning to develop the above sentiments for a longer post, but have decided to go ahead and pass them on here in case that doesn't happen.)  

 

Wednesday, August 14, 2024

Human distinctiveness and Artificial Intelligence

I pass on some random thoughts occasioned by the previous post. What is distinctive about humans?
-There are millions of humans, there can be only a few LLMs, given the enormous amounts of material and energy required to make them.
-Many humans are required to generate and mirror shared illusions about value, purpose, and meaning that bind together and distinguish different cultures.
-For GPT engines to obtain such a capability would require that they be embodied, self-sufficient, energy efficient, replicable, and interactive.... In other words, like current human bodies.
-Mr. Musk's humanoid robot, and the Chinese robot that has beaten it to the assembly line, don't even come close.

 

Wednesday, July 10, 2024

From nematodes to humans, a common brain network motif intertwines hierarchy and modularity.

Pathak et al. (abstract below) suggest the evolved pattern they describe may apply to information processing networks in general, in particular to those of evolving AI implementations.

Significance
Nervous systems are often schematically represented in terms of hierarchically arranged layers with stimuli in the “input” layer sequentially transformed through successive layers, eventually giving rise to response in the “output” layer. Empirical investigations of hierarchy in specific brain regions, e.g., the visual cortex, typically employ detailed anatomical information. However, a general method for identifying the underlying hierarchy from the connectome alone has so far been elusive. By proposing an optimized index that quantifies the hierarchy extant in a network, we reveal an architectural motif underlying the mesoscopic organization of nervous systems across different species. It involves both modular partitioning and hierarchical layered arrangement, suggesting that brains employ an optimal mix of parallel (modular) and sequential (hierarchic) information processing.
Abstract
Networks involved in information processing often have their nodes arranged hierarchically, with the majority of connections occurring in adjacent levels. However, despite being an intuitively appealing concept, the hierarchical organization of large networks, such as those in the brain, is difficult to identify, especially in absence of additional information beyond that provided by the connectome. In this paper, we propose a framework to uncover the hierarchical structure of a given network, that identifies the nodes occupying each level as well as the sequential order of the levels. It involves optimizing a metric that we use to quantify the extent of hierarchy present in a network. Applying this measure to various brain networks, ranging from the nervous system of the nematode Caenorhabditis elegans to the human connectome, we unexpectedly find that they exhibit a common network architectural motif intertwining hierarchy and modularity. This suggests that brain networks may have evolved to simultaneously exploit the functional advantages of these two types of organizations, viz., relatively independent modules performing distributed processing in parallel and a hierarchical structure that allows sequential pooling of these multiple processing streams. An intriguing possibility is that this property we report may be common to information processing networks in general.
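The hierarchy metric Pathak et al. optimize is not spelled out in the abstract, but the core idea, scoring a proposed layered arrangement by how many connections run between adjacent levels, can be sketched in a few lines. Everything below (the toy graph, the level assignment, the scoring rule) is an illustrative assumption, not the authors' actual index:

```python
# Toy sketch of a hierarchy index: the fraction of edges in a network
# that link adjacent levels of a proposed layered arrangement.
# A perfect feed-forward hierarchy would score 1.0.

def hierarchy_index(edges, levels):
    """Score a level assignment: count edges whose endpoints sit one level apart."""
    adjacent = sum(1 for u, v in edges if abs(levels[u] - levels[v]) == 1)
    return adjacent / len(edges)

# A small network with two parallel modules feeding a shared output node,
# mimicking the "modular streams pooled hierarchically" motif.
edges = [("s1", "a"), ("s2", "a"),    # module 1 -> hub a
         ("s3", "b"), ("s4", "b"),    # module 2 -> hub b
         ("a", "out"), ("b", "out"),  # hubs -> output layer
         ("s1", "s2")]                # within-level (modular) link

levels = {"s1": 0, "s2": 0, "s3": 0, "s4": 0, "a": 1, "b": 1, "out": 2}

print(hierarchy_index(edges, levels))  # 6 of 7 edges cross adjacent levels
```

In the paper's framework, the level assignment itself would be searched over to maximize such an index; here it is fixed by hand to show what the score rewards.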

Friday, July 05, 2024

ChatGPT as a "lab rat" for understanding how our brains process language.

I've now read twice through a fascinating PNAS piece by Mitchell Waldrop (open source, with useful references), and urge MindBlog readers to have a look. Our brains, as well as all of the GPT (Generative Pretrained Transformer) engines, are prediction machines. The following slightly edited extract gives context.
...computer simulations of language [are] working in ways that [are] strikingly similar to the left-hemisphere language regions of our brains, using the same computational principles...The reasons for this AI–brain alignment are still up for debate. But its existence is a huge opportunity for neuroscientists struggling to pin down precisely how the brain’s language regions actually work...What’s made this so difficult in the past is that language is a brain function unique to humans. So, unlike their colleagues studying vision or motor control, language researchers have never had animal models that they can slice, probe, and manipulate to tease out all the neural details.
But now that the new AI models have given them the next best thing—an electronic lab rat for language—Fedorenko and many other neuroscientists around the world have eagerly put these models to work. This requires care, if only because the AI–brain alignment doesn’t seem to encompass many cognitive skills other than language...Language is separate in the brain...there are left-side regions of the brain that are always activated by language—and nothing but language...the system responds in the same way to speaking, writing—all the kinds of languages a person knows and speaks, including sign languages. It doesn't respond to things that aren’t language, like logical puzzles, mathematical problems, or music.
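The standard way such studies put an LLM to work as a "lab rat" (the extract doesn't name the method, so the details here are a generic illustration) is an encoding model: regress measured brain responses on the model's internal activations and ask how well held-out responses are predicted. A minimal numpy sketch with entirely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are word/sentence stimuli presented to both
# the language model and a (hypothetical) human subject.
n_stim, n_features, n_voxels = 200, 50, 10
X = rng.normal(size=(n_stim, n_features))                   # "LLM layer activations"
W_true = rng.normal(size=(n_features, n_voxels))
Y = X @ W_true + 0.5 * rng.normal(size=(n_stim, n_voxels))  # "brain responses"

# Ridge-regression encoding model, fit on half the stimuli.
train, test = slice(0, 100), slice(100, 200)
lam = 1.0
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_features),
                    X[train].T @ Y[train])

# Evaluate: correlation between predicted and actual responses per "voxel".
pred = X[test] @ W
r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(r):.2f}")
```

In real studies X would come from a transformer's hidden states and Y from fMRI or electrode recordings; the high held-out correlation here simply reflects that the toy data were generated from a linear model.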

Friday, June 28, 2024

How AI will transform the physical world.

I pass on the text of wild-eyed speculations that futurist Ray Kurzweil recently sent to The Economist; since 1990 he has been writing on how soon "The Singularity" - machine intelligence exceeding human intelligence - will arrive and transform our physical world in energy, manufacturing and medicine. I have too many reservations about the realistic details of his fantasizing to even begin to list them, but the article is a fun read:

By the time children born today are in kindergarten, artificial intelligence (AI) will probably have surpassed humans at all cognitive tasks, from science to creativity. When I first predicted in 1999 that we would have such artificial general intelligence (AGI) by 2029, most experts thought I’d switched to writing fiction. But since the spectacular breakthroughs of the past few years, many experts think we will have AGI even sooner—so I’ve technically gone from being an optimist to a pessimist, without changing my prediction at all.

After working in the field for 61 years—longer than anyone else alive—I am gratified to see AI at the heart of global conversation. Yet most commentary misses how large language models like ChatGPT and Gemini fit into an even larger story. AI is about to make the leap from revolutionising just the digital world to transforming the physical world as well. This will bring countless benefits, but three areas have especially profound implications: energy, manufacturing and medicine.

Sources of energy are among civilisation’s most fundamental resources. For two centuries the world has needed dirty, non-renewable fossil fuels. Yet harvesting just 0.01% of the sunlight the Earth receives would cover all human energy consumption. Since 1975, solar cells have become 99.7% cheaper per watt of capacity, allowing worldwide capacity to increase by around 2m times. So why doesn’t solar energy dominate yet?
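Kurzweil's "0.01% of the sunlight the Earth receives" figure can be sanity-checked with standard numbers (the solar constant, Earth's radius, and a rough ~580 EJ/yr for world primary energy use are my approximations, not figures from his article):

```python
import math

# Rough check of "harvesting 0.01% of sunlight would cover all human energy use".
solar_constant = 1361.0   # W/m^2 at top of atmosphere (standard value)
earth_radius = 6.371e6    # m
world_power = 18.5e12     # W, ~580 EJ/yr of primary energy (approximate)

# Sunlight intercepted by Earth's cross-sectional disc.
intercepted = solar_constant * math.pi * earth_radius ** 2  # ~1.7e17 W
fraction = world_power / intercepted

print(f"fraction of intercepted sunlight needed: {fraction:.2%}")  # ~0.01%
```

So the claim holds to within rounding: about one ten-thousandth of the sunlight striking Earth matches current total consumption.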

The problem is two-fold. First, photovoltaic materials remain too expensive and inefficient to replace coal and gas completely. Second, because solar generation varies on both diurnal (day/night) and annual (summer/winter) scales, huge amounts of energy need to be stored until needed—and today’s battery technology isn’t quite cost-effective enough. The laws of physics suggest that massive improvements are possible, but the range of chemical possibilities to explore is so enormous that scientists have made achingly slow progress.

By contrast, AI can rapidly sift through billions of chemistries in simulation, and is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. In all of history until November 2023, humans had discovered about 20,000 stable inorganic compounds for use across all technologies. Then, Google’s GNoME AI discovered far more, increasing that figure overnight to 421,000. Yet this barely scratches the surface of materials-science applications. Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free.

Energy abundance enables another revolution: in manufacturing. The costs of almost all goods—from food and clothing to electronics and cars—come largely from a few common factors such as energy, labour (including cognitive labour like R&D and design) and raw materials. AI is on course to vastly lower all these costs.

After cheap, abundant solar energy, the next component is human labour, which is often backbreaking and dangerous. AI is making big strides in robotics that can greatly reduce labour costs. Robotics will also reduce raw-material extraction costs, and AI is finding ways to replace expensive rare-earth elements with common ones like zirconium, silicon and carbon-based graphene. Together, this means that most kinds of goods will become amazingly cheap and abundant.

These advanced manufacturing capabilities will allow the price-performance of computing to maintain the exponential trajectory of the past century—a 75-quadrillion-fold improvement since 1939. This is due to a feedback loop: today’s cutting-edge AI chips are used to optimise designs for next-generation chips. In terms of calculations per second per constant dollar, the best hardware available last November could do 48bn. Nvidia’s new B200 GPUs exceed 500bn.
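The "75-quadrillion-fold improvement since 1939" implies a specific compound growth rate, which is easy to unpack (the factor and dates are his; treating "since 1939" as an 85-year span to 2024 is my assumption):

```python
import math

# Kurzweil's claim: 75-quadrillion-fold price-performance gain in computing, 1939 -> ~2024.
factor, years = 75e15, 2024 - 1939

annual_rate = factor ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_rate)

print(f"implied annual growth: {annual_rate:.0%}")        # 58%
print(f"implied doubling time: {doubling_time:.1f} yrs")  # 1.5
```

A sustained ~58% a year, doubling every eighteen months or so, is the familiar Moore's-law-like trajectory his feedback-loop argument assumes will continue.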

As we build the titanic computing power needed to simulate biology, we’ll unlock the third physical revolution from AI: medicine. Despite 200 years of dramatic progress, our understanding of the human body is still built on messy approximations that are usually mostly right for most patients, but probably aren’t totally right for you. Tens of thousands of Americans a year die from reactions to drugs that studies said should help them.

Yet AI is starting to turn medicine into an exact science. Instead of painstaking trial-and-error in an experimental lab, molecular biosimulation—precise computer modelling that aids the study of the human body and how drugs work—can quickly assess billions of options to find the most promising medicines. Last summer the first drug designed end-to-end by AI entered phase-2 trials for treating idiopathic pulmonary fibrosis, a lung disease. Dozens of other AI-designed drugs are now entering trials.

Both the drug-discovery and trial pipelines will be supercharged as simulations incorporate the immensely richer data that AI makes possible. In all of history until 2022, science had determined the shapes of around 190,000 proteins. That year DeepMind’s AlphaFold 2 discovered over 200m, which have been released free of charge to researchers to help develop new treatments.

Much more laboratory research is needed to populate larger simulations accurately, but the roadmap is clear. Next, AI will simulate protein complexes, then organelles, cells, tissues, organs and—eventually—the whole body.

This will ultimately replace today’s clinical trials, which are expensive, risky, slow and statistically underpowered. Even in a phase-3 trial, there’s probably not one single subject who matches you on every relevant factor of genetics, lifestyle, comorbidities, drug interactions and disease variation.

Digital trials will let us tailor medicines to each individual patient. The potential is breathtaking: to cure not just diseases like cancer and Alzheimer’s, but the harmful effects of ageing itself.

Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. Once annual increases in life expectancy reach 12 months, we’ll achieve “longevity escape velocity”. For people diligent about healthy habits and using new therapies, I believe this will happen between 2029 and 2035—at which point ageing will not increase their annual chance of dying. And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available.

This is AI’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings. ■


Friday, June 21, 2024

The other shoe is about to drop in 2024 - important closure moments in the state of the world.

I want to pass on a clip (to which I have added some definitions in parentheses) from Venkatesh Rao’s most recent Ribbonfarm Studio installment, in which he argues that the other shoe is about to drop for many narratives in 2024, a year that feels much more exciting than 1984, 1994, 2004, or 2014.
With the Trump/Biden election, the other shoe is about to drop on the arc that began with the Great Weirding (radical global transformations that unfolded between 2016 and 2020). This arc has fuzzy beginnings and endings globally, but is clearly heading towards closure everywhere. For instance, the recent election in India, with its chastening message for the BJP, has a 10-year span (2014-24). In the EU and UK, various arcs that began with events in Greece/Italy and Brexit are headed towards some sort of natural closure.
Crypto and AI, two strands in my mediocre computing series (the other two being robotics and the metaverse), also seem to be at an other-shoe-drops phase. Crypto, after experiencing 4-5 boom-bust cycles since 2009, is finally facing a triple test of geopolitical significance, economic significance, and “product” potential (as in international agreement that “stable coins” as well as fiat currencies are valid vehicles for managing debt and commerce). It feels like in the next year or two we’ll learn if it’s a technology that’s going to make history or remain a sideshow. AI is shifting gears from a rapid and accelerating installation phase of increasing foundational capabilities to a deployment phase of productization, marked by Apple’s entry into the fray and a sense of impending saturation in the foundational capabilities.
Various wars (Gaza, Ukraine) and tensions (Taiwan) are starting to stress the Westphalian model of the nation state for real now. That’s the other shoe dropping on a 400-year-long story (also a 75 year long story about the rules-based international order, but that’s relatively less interesting).
Economically, we’re clearly decisively past the ZIRPy (Zero Interest Rate Policy) end of the neoliberal globalization era that began in the mid-80s. That shoe has already dropped. What new arc is starting is unclear — the first shoe of the new story hasn’t dropped yet. Something about nonzero interest rates in an uncertain world marked by a mercantilist resource-grabbing geopolitical race unfolding in parallel to slowly reconfiguring global trade patterns.

Wednesday, June 19, 2024

Managing the human herd

This post is a dyspeptic random walk through thoughts triggered by the front-page photograph in the Wall Street Journal of June 17, 2024, showing masses of pilgrims engaged in a symbolic stoning of the devil in Saudi Arabia under soaring summer heat. Such enormous mobs of people are those most easily roused to strong emotions by charismatic speakers.

How can the emotions and behaviors of such enormous clans of humans be regulated in a sane and humane way? Can this be accomplished outside of authoritarian or oligarchical governance? Might such governance establish its control over the will and moods of millions through the unnoticed infiltration of AI into all aspects of their daily lives (cf. Apple's recent AI announcements)? Will the world come to be ruled by a "Book of Elon"?

Or might we be moving into a world of decentralized everything, a bottom-up emergence of consensus governance from the mosh pit of Web3, cryptocurrencies, and stablecoins? The noble sentiments of the Ethereum Foundation notwithstanding, the examples we have to date of 'rules of the commons' are the chaos of Discord, Reddit, and other social media, where the sentiments of idiots and experts jumble together in an impenetrable cacophony.

Who or what is going to emerge to control this mess? How long will the "permaweird" persist?  

 

Monday, June 17, 2024

Empty innovation: What are we even doing?

I came across an interesting commentary by "Tante" on innovation, invention, and progress (or the lack thereof) in the constant churning, and rise and fall, of new ideas and products in the absence of questions like "Why are we doing this?" and "Who is profiting?". In spite of the speaker's arrogance and annoying style, I think it is worth a viewing.

Monday, June 10, 2024

Protecting scientific integrity in an age of generative AI

I want to pass on the full text of an editorial by Blau et al in PNAS, the link points to the more complete open source online version containing acknowledgements and references:

Revolutionary advances in AI have brought us to a transformative moment for science. AI is accelerating scientific discoveries and analyses. At the same time, its tools and processes challenge core norms and values in the conduct of science, including accountability, transparency, replicability, and human responsibility (1–3). These difficulties are particularly apparent in recent advances with generative AI. Future innovations with AI may mitigate some of these or raise new concerns and challenges.
 
With scientific integrity and responsibility in mind, the National Academy of Sciences, the Annenberg Public Policy Center of the University of Pennsylvania, and the Annenberg Foundation Trust at Sunnylands recently convened an interdisciplinary panel of experts with experience in academia, industry, and government to explore rising challenges posed by the use of AI in research and to chart a path forward for the scientific community. The panel included experts in behavioral and social sciences, ethics, biology, physics, chemistry, mathematics, and computer science, as well as leaders in higher education, law, governance, and science publishing and communication. Discussions were informed by commissioned papers detailing the development and current state of AI technologies; the potential effects of AI advances on equality, justice, and research ethics; emerging governance issues; and lessons that can be learned from past instances where the scientific community addressed new technologies with significant societal implications (4–9).
 
Generative AI systems are constructed with computational procedures that learn from large bodies of human-authored and curated text, imagery, and analyses, including expansive collections of scientific literature. The systems are used to perform multiple operations, such as problem-solving, data analysis, interpretation of textual and visual content, and the generation of text, images, and other forms of data. In response to prompts and other directives, the systems can provide users with coherent text, compelling imagery, and analyses, while also possessing the capability to generate novel syntheses and ideas that push the expected boundaries of automated content creation.
 
Generative AI’s power to interact with scientists in a natural manner, to perform unprecedented types of problem-solving, and to generate novel ideas and content poses challenges to the long-held values and integrity of scientific endeavors. These challenges make it more difficult for scientists, the larger research community, and the public to 1) understand and confirm the veracity of generated content, reviews, and analyses; 2) maintain accurate attribution of machine- versus human-authored analyses and information; 3) ensure transparency and disclosure of uses of AI in producing research results or textual analyses; 4) enable the replication of studies and analyses; and 5) identify and mitigate biases and inequities introduced by AI algorithms and training data.

Five Principles of Human Accountability and Responsibility

To protect the integrity of science in the age of generative AI, we call upon the scientific community to remain steadfast in honoring the guiding norms and values of science. We endorse recommendations from a recent National Academies report that explores ethical issues in computing research and promoting responsible practices through education and training (3). We also reaffirm the findings of earlier work performed by the National Academies on responsible automated research workflows, which called for human review of algorithms, the need for transparency and reproducibility, and efforts to uncover and address bias (10).
 
Building upon the prior studies, we urge the scientific community to focus sustained attention on five principles of human accountability and responsibility for scientific efforts that employ AI:
1. Transparent disclosure and attribution
Scientists should clearly disclose the use of generative AI in research, including the specific tools, algorithms, and settings employed; accurately attribute the human and AI sources of information or ideas, distinguishing between the two and acknowledging their respective contributions; and ensure that human expertise and prior literature are appropriately cited, even when machines do not provide such citations in their output.
 
Model creators and refiners should provide publicly accessible details about models, including the data used to train or refine them; carefully manage and publish information about models and their variants so as to provide scientists with a means of citing the use of particular models with specificity; provide long-term archives of models to enable replication studies; disclose when proper attribution of generated content cannot be provided; and pursue innovations in learning, reasoning, and information retrieval machinery aimed at providing users of those models with the ability to attribute sources and authorship of the data employed in AI-generated content.
2. Verification of AI-generated content and analyses
Scientists are accountable for the accuracy of the data, imagery, and inferences that they draw from their uses of generative models. Accountability requires the use of appropriate methods to validate the accuracy and reliability of inferences made by or with the assistance of AI, along with a thorough disclosure of evidence relevant to such inferences. It includes monitoring and testing for biases in AI algorithms and output, with the goal of identifying and correcting biases that could skew research outcomes or interpretations.
 
Model creators should disclose limitations in the ability of systems to confirm the veracity of any data, text, or images generated by AI. When verification of the truthfulness of generated content is not possible, model output should provide clear, well-calibrated assessments of confidence. Model creators should proactively identify, report, and correct biases in AI algorithms that could skew research outcomes or interpretations.
3. Documentation of AI-generated data
Scientists should mark AI-generated or synthetic data, inferences, and imagery with provenance information about the role of AI in their generation, so that it is not mistaken for observations collected in the real world. Scientists should not present AI-generated content as observations collected in the real world.
 
Model creators should clearly identify, annotate, and maintain provenance about synthetic data used in their training procedures and monitor the issues, concerns, and behaviors arising from the reuse of computer-generated content in training future models.
4. A focus on ethics and equity
Scientists and model creators should take credible steps to ensure that their uses of AI produce scientifically sound and socially beneficial results while taking appropriate steps to mitigate the risk of harm. This includes advising scientists and the public on the handling of tradeoffs associated with making certain AI technologies available to the public, especially in light of potential risks stemming from inadvertent outcomes or malicious applications.
 
Scientists and model creators should adhere to ethical guidelines for AI use, particularly in terms of respect for clear attribution of observational versus AI-generated sources of data, intellectual property, privacy, disclosure, and consent, as well as the detection and mitigation of potential biases in the construction and use of AI systems. They should also continuously monitor other societal ramifications likely to arise as AI is further developed and deployed and update practices and rules that promote beneficial uses and mitigate the prospect of social harm.
 
Scientists, model creators, and policymakers should promote equity in the questions and needs that AI systems are used to address as well as equitable access to AI tools and educational opportunities. These efforts should empower a diverse community of scientific investigators to leverage AI systems effectively and to address the diverse needs of communities, including the needs of groups that are traditionally underserved or marginalized. In addition, methods for soliciting meaningful public participation in evaluating equity and fairness of AI technologies and uses should be studied and employed.
 
AI should not be used without careful human oversight in decisional steps of peer review processes or decisions around career advancement and funding allocations.
5. Continuous monitoring, oversight, and public engagement
Scientists, together with representatives from academia, industry, government, and civil society, should continuously monitor and evaluate the impact of AI on the scientific process, and with transparency, adapt strategies as necessary to maintain integrity. Because AI technologies are rapidly evolving, research communities must continue to examine and understand the powers, deficiencies, and influences of AI; work to anticipate and prevent harmful uses; and harness its potential to address critical societal challenges. AI scientists must at the same time work to improve the effectiveness of AI for the sciences, including addressing challenges with veracity, attribution, explanation, and transparency of training data and inference procedures. Efforts should be undertaken within and across sectors to pursue ongoing study of the status and dynamics of the use of AI in the sciences and pursue meaningful methods to solicit public participation and engagement as AI is developed, applied, and regulated. Results of this engagement and study should be broadly disseminated.

A New Strategic Council to Guide AI in Science

We call upon the scientific community to establish oversight structures capable of responding to the opportunities AI will afford science and to the unanticipated ways in which AI may undermine scientific integrity.
 
We propose that the National Academies of Sciences, Engineering, and Medicine establish a Strategic Council on the Responsible Use of Artificial Intelligence in Science. The council should coordinate with the scientific community and provide regularly updated guidance on the appropriate uses of AI, especially during this time of rapid change. The council should study, monitor, and address the evolving uses of AI in science; new ethical and societal concerns, including equity; and emerging threats to scientific norms. The council should share its insights across disciplines and develop and refine best practices.
 
More broadly, the scientific community should adhere to existing guidelines and regulations, while contributing to the ongoing development of public and private AI governance. Governance efforts must include engagement with the public about how AI is being used and should be used in the sciences.
 
With the advent of generative AI, all of us in the scientific community have a responsibility to be proactive in safeguarding the norms and values of science. That commitment—together with the five principles of human accountability and responsibility for the use of AI in science and the standing up of the council to provide ongoing guidance—will support the pursuit of trustworthy science for the benefit of all.
 

Friday, May 24, 2024

Think AI Can Perceive Emotion? Think Again.

Numerous MindBlog posts have presented the work and writing of Lisa Feldman Barrett (enter Barrett in the search box in the right column of this web page). Her book, "How Emotions Are Made," is the one I recommend when anyone asks me what I think is the best popular book on how our brains work.  Here I want to pass on her piece on AI and emotions in the Sat. May 18 Wall Street Journal. It collects together the various reasons that AI cannot, and should not, be used for detecting our emotional state from our facial expressions or other body language.  Here is her text: 

Imagine that you are interviewing for a job. The interviewer asks a question that makes you think. While concentrating, you furrow your brow and your face forms a scowl. A camera in the room feeds your scowling face to an AI model, which determines that you’ve become angry. The interview team decides not to hire you because, in their view, you are too quick to anger. Well, if you weren’t angry during the interview, you probably would be now.

This scenario is less hypothetical than you might realize. So-called emotion AI systems already exist, and some are specifically designed for job interviews. Other emotion AI products try to create more empathic chatbots, build more precise medical treatment plans and detect confused students in classrooms. But there’s a catch: The best available scientific evidence indicates that there are no universal expressions of emotion.

In real life, angry people don’t commonly scowl. Studies show that in Western cultures, they scowl about 35% of the time, which is more than chance but not enough to be a universal expression of anger. The other 65% of the time, they move their faces in other meaningful ways. They might pout or frown. They might cry. They might laugh. They might sit quietly and plot their enemy’s demise. Even when Westerners do scowl, half the time it isn’t in anger. They scowl when they concentrate, when they enjoy a bad pun or when they have gas.

Similar findings hold true for every so-called universal facial expression of emotion. Frowning in sadness, smiling in happiness, widening your eyes in fear, wrinkling your nose in disgust and yes, scowling in anger, are stereotypes—common but oversimplified notions about emotional expressions.

Where did these stereotypes come from? You may be surprised to learn that they were not discovered by observing how people move their faces during episodes of emotion in real life. They originated in a book by Charles Darwin, “The Expression of the Emotions in Man and Animals,” which proposed that humans evolved certain facial movements from ancient animals. But Darwin didn’t conduct careful observations for these ideas as he had for his masterwork, “On the Origin of Species.” Instead, he came up with them by studying photographs of people whose faces were stimulated with electricity, then asked his colleagues if they agreed.

In 2019, the journal Psychological Science in the Public Interest engaged five senior scientists, including me, to examine the scientific evidence for the idea that people express anger, sadness, fear, happiness, disgust and surprise in universal ways. We came from different fields—psychology, neuroscience, engineering and computer science—and began with opposing views. Yet, after reviewing more than a thousand papers during almost a hundred videoconferences, we reached a consensus: In the real world, an emotion like anger or sadness is a broad category full of variety. People express different emotions with the same facial movements and the same emotion with different facial movements. The variation is meaningfully tied to a person’s situation.

In short, we can’t train AI on stereotypes and expect the results to work in real life, no matter how big the data set or sophisticated the algorithm. Shortly after the paper was published, Microsoft retired the emotion AI features of their facial recognition software.

Other scientists have also demonstrated that faces are a poor indicator of a person’s emotional state. In a study published in the journal Psychological Science in 2008, scientists combined photographs of stereotypical but mismatched facial expressions and body poses, such as a scowling face attached to a body that’s holding a dirty diaper. Viewers asked to identify the emotion in each image typically chose what was implied by the body, not the face—in this case disgust, not anger. In a study published in the journal Science in 2012, the same lead scientist showed that winning and losing athletes, in the midst of their glory or defeat, make facial movements that are indistinguishable.

Nevertheless, these stereotypes are still widely assumed to be universal expressions of emotion. They’re in posters in U.S. preschools, spread through the media, designed into emojis and now enshrined in AI code. I recently asked two popular AI-based image generators, Midjourney and OpenAI’s DALL-E, to depict “an angry person.” I also asked two AI chatbots, OpenAI’s ChatGPT and Google’s Gemini, how to tell if a person is angry. The results were filled with scowls, furrowed brows, tense jaws and clenched teeth.

Even AI systems that appear to sidestep emotion stereotypes may still apply them in stealth. A 2021 study in the journal Nature trained an AI model with thousands of video clips from the internet and tested it on millions more. The authors concluded that 16 facial expressions are made worldwide in certain social contexts. Yet the trainers who labeled the clips with emotion words were all English speakers from a single country, India, so they effectively transmitted cultural stereotypes to a machine. Plus, there was no way to objectively confirm what the strangers in the videos were actually feeling at the time.

Clearly, large data sets alone cannot protect an AI system from applying preconceived assumptions about emotion. The European Union’s AI Act, passed in 2023, recognizes this reality by barring the use of emotion AI in policing, schools and workplaces.

So what is the path forward? If you encounter an emotion AI product that purports to hire skilled job candidates, diagnose anxiety and depression, assess guilt or innocence in court, detect terrorists in airports or analyze a person’s emotional state for any other purpose, it pays to be skeptical. Here are three questions you can ask about any emotion AI product to probe the scientific approach behind it.

Is the AI model trained to account for the huge variation of real-world emotional life? Any individual may express an emotion like anger differently at different times and in different situations, depending on context. People also use the same movements to express different states, even nonemotional ones. AI models must be trained to reflect this variety.

Does the AI model distinguish between observing facial movements and inferring meaning from these movements? Muscle movements are measurable; inferences are guesses. If a system or its designers confuse description with inference, like considering a scowl to be an “anger expression” or even calling a facial movement a “facial expression,” that’s a red flag.

Given that faces by themselves don’t reveal emotion, does the AI model include abundant context? I don’t mean just a couple of signals, such as a person’s voice and heart rate. In real life, when you perceive someone else as emotional, your brain combines signals from your eyes, ears, nose, mouth, skin, and the internal systems of your body and draws on a lifetime of experience. An AI model would need much more of this information to make reasonable guesses about a person’s emotional state.

AI promises to simplify decisions by providing quick answers, but these answers are helpful and justified only if they draw from the true richness and variety of experience. None of us wants important outcomes in our lives, or the lives of our loved ones, to be determined by a stereotype.

Monday, May 06, 2024

Are we the cows of the future?

One of the questions posed by Yuval Harari in his writing on our possible futures is "What are we to do with all these humans who are, except for a small technocratic elite, no longer required as the means of production?" Esther Leslie, a professor of political aesthetics at Birkbeck College, University of London, has written an essay on this issue, pointing out that our potential futures in the pastures of digital dictatorship — crowded conditions, mass surveillance, virtual reality — are already here. You should read her essay; I pass on just a few striking clips of text:

...Cows’ bodies have historically served as test subjects — laboratories of future bio-intervention and all sorts of reproductive technologies. Today cows crowd together in megafarms, overseen by digital systems, including facial- and hide-recognition systems. These new factories are air-conditioned sheds where digital machinery monitors and logs the herd’s every move, emission and production. Every mouthful of milk can be traced to its source.
And it goes beyond monitoring. In 2019 on the RusMoloko research farm near Moscow, virtual reality headsets were strapped onto cattle. The cows were led, through the digital animation that played before their eyes, to imagine they were wandering in bright summer fields, not bleak wintry ones. The innovation, which was apparently successful, is designed to ward off stress: The calmer the cow, the higher the milk yield.
A cow sporting VR goggles is comedic as much as it is tragic. There’s horror, too, in that it may foretell our own alienated futures. After all, how different is our experience? We submit to emotion trackers. We log into biofeedback machines. We sign up for tracking and tracing. We let advertisers’ eyes watch us constantly and mappers store our coordinates.
Could we, like cows, be played by the machinery, our emotions swayed under ever-sunny skies, without us even knowing that we are inside the matrix? Will the rejected, unemployed and redundant be deluded into thinking that the world is beautiful, a land of milk and honey, as they interact minimally in stripped-back care homes? We may soon graze in the new pastures of digital dictatorship, frolicking while bound.
Leslie then describes the ideas of German philosopher and social critic Theodor Adorno:
Against the insistence that nature should not be ravished by technology, he argues that perhaps technology could enable nature to get what “it wants” on this sad earth. And we are included in that “it.”...Nature, in truth, is not just something external on which we work, but also within us. We too are nature.
For someone associated with the abstruseness of avant-garde music and critical theory, Adorno was surprisingly sentimental when it came to animals — for which he felt a powerful affinity. It is with them that he finds something worthy of the name Utopia. He imagines a properly human existence of doing nothing, like a beast, resting, cloud gazing, mindlessly and placidly chewing cud.
To dream, as so many Utopians do, of boundless production of goods, of busy activity in the ideal society reflects, Adorno claimed, an ingrained mentality of production as an end in itself. To detach from our historical form adapted solely to production, to work against work itself, to do nothing in a true society in which we embrace nature and ourselves as natural might deliver us to freedom.
Rejecting the notion of nature as something that would protect us, give us solace, reveals us to be inextricably within and of nature. From there, we might begin to save ourselves — along with everything else.
(The above is a repost of MindBlog's 1/7/21 post)

Monday, April 29, 2024

An expanded view of human minds and their reality.

I want to pass on this recent essay by Venkatesh Rao in its entirety, because it has changed my mind about agreeing with Daniel Dennett that the “Hard Problem” of consciousness is a fabrication that doesn’t actually exist. There are so many interesting ideas in this essay that I will be returning to it frequently in the future.  

We Are All Dennettians Now

An homage riff on AI+mind+evolution in honor of Daniel Dennett

The philosopher Daniel Dennett (1942-2024) died last week. Dennett’s contributions to the 1981 book he co-edited with Douglas Hofstadter, The Mind’s I,¹ which I read in 1996 (rather appropriately while doing an undergrad internship at the Center for AI and Robotics in Bangalore), helped shape a lot of my early philosophical development. A few years later (around 1999 I think), I closely read his trollishly titled 1991 magnum opus, Consciousness Explained (alongside Steven Pinker’s similar volume How the Mind Works), and that ended up shaping a lot of my development as an engineer. Consciousness Explained is effectively a detailed neuro-realistic speculative engineering model of the architecture of the brain in a pseudo-code like idiom. I stopped following his work closely at that point, since my tastes took me in other directions, but I did take care to keep him on my radar loosely.

So in his honor, I’d like to (rather chaotically) riff on the interplay of the three big topics that form the through-lines of his life and work: AI, the philosophy of mind, and Darwinism. Long before we all turned into philosophers of AI overnight with the launch of ChatGPT, he defined what that even means.

When I say Dennett’s views shaped mine, I don’t mean I necessarily agreed with them. Arguably, your early philosophical development is not shaped by discovering thinkers you agree with. That’s for later-life refinements (Hannah Arendt, whom I first read only a few years ago, is probably the most influential agree-with philosopher for me). Your early development is shaped by discovering philosophers you disagree with.

But any old disagreement will not shape your thinking. I read Ayn Rand too (if you want to generously call her a philosopher) around the same time I discovered Dennett, and while I disagreed with her too, she basically had no effect on my thinking. I found her work to be too puerile to argue with. But Dennett — disagreeing with him forced me to grow, because it took serious work over years to decades — some of it still ongoing — to figure out how and why I disagreed. It was philosophical weight training. The work of disagreeing with Dennett led me to other contemporary philosophers of mind like David Chalmers and Ned Block, and various other more esoteric bunnytrails. This was all a quarter century ago, but by the time I exited what I think of as the path-dependent phase of my philosophical development circa 2003, my thinking bore indelible imprints of Dennett’s influence.

I think Dennett was right about nearly all the details of everything he touched, and also right (and more crucially, tasteful) in his choices of details to focus on as being illuminating and significant. This is why he was able to provide elegant philosophical accounts of various kinds of phenomenology that elevated the corresponding discourses in AI, psychology, neuroscience, and biology. His work made him a sort of patron philosopher of a variety of youngish scientific disciplines that lacked robust philosophical traditions of their own. It also made him a vastly more relevant philosopher than most of his peers in the philosophy world, who tend, through some mix of insecurity, lack of courage, and illiteracy, to stay away from the dirty details of technological modernity in their philosophizing (and therefore cut rather sorry figures when they attempt to weigh in on philosophy-of-technology issues with cartoon thought experiments about trolleys or drowning children). Even the few who came close, like John Searle, could rarely match Dennett’s mastery of vast oceans of modern techno-phenomenological detail, even if they tended to do better with clever thought experiments. As far as I am aware, Dennett has no clever but misleading Chinese Rooms or Trolley Problems to his credit, which to my mind makes him a superior rather than inferior philosopher.

I suspect he paid a cost for his wide-ranging, ecumenical curiosities in his home discipline. Academic philosophers like to speak in a precise code about the simplest possible things, to say what they believe to be the most robust things they can. Dennett on the other hand talked in common language about the most complex things the human mind has ever attempted to grasp. The fact that he got his hands (and mind!) dirty with vast amounts of technical detail, and dealt in facts with short half-lives from fast-evolving fields, and wrote in a style accessible to any intelligent reader willing to pay attention, made him barely recognizable as a philosopher at all. But despite the cosmetic similarities, it would be a serious mistake to class him with science popularizers or TED/television scientists with a flair for spectacle at the expense of substance.

Though he had a habit of being uncannily right about a lot of the details, I believe Dennett was almost certainly wrong about a few critical fundamental things. We’ll get to what and why later, but the big point to acknowledge is that if he was indeed wrong (and to his credit, I am not yet 100% sure he was), he was wrong in ways that forced even his opponents to elevate their games. He was as much a patron philosopher (or troll or bugbear) to his philosophical rivals as to the scientists of the fields he adopted. You could not even be an opponent of Dennett except in Dennettian ways. To disagree with the premises of Strong AI or Dennett’s theory of mind is to disagree in Dennettian ways.

If I were to caricature how I fit in the Dennettian universe, I suspect I’d be closest to what he called a “mysterian” (though I don’t think the term originated with him). Despite mysterian being something of a dismissive slur, it does point squarely at the core of why his opponents disagree with him, and the parts of their philosophies they must work to harden and make rigorous, to withstand the acid forces of the peculiarly Dennetian mode of scrutiny I want to talk about here.

So to adapt the line used by Milton Friedman to describe Keynes: We are all Dennettians now.

Let’s try and unpack what that means.

Mysterianism

As I said, in Dennettian terms, I am a “mysterian.” At a big commitments level, mysterianism is the polar opposite of the position Dennett consistently argued across his work, a version of what we generally call a “Strong AI” position. But at the detailed level, there are no serious disagreements. Mysterians and Strong AI people agree about most of the details of how the mind works. They just put the overall picture together differently because mysterians want to accommodate certain currently mysterious things that Strong AI people typically reject as either meaningless noise or shallow confusions/illusions.

Dennett’s version of Strong AI was both more robustly constructed than the sophomoric versions one typically encounters, and more broadly applied: beyond AI to human brains and seemingly intelligent processes like evolution. Most importantly, it was actually interesting. Reading his accounts of minds and computers, you do not come away with the vague suspicion of a non-neurotypical succumbing to the typical-mind fallacy and describing the inner life of a robot or philosophical zombie as “truth.” From his writing, it sounds like he had a fairly typical inner-life experience, so why did he seem to deny the apparent ineffable essence of it? Why didn’t he try to eff that essence the way Chalmers, for instance, does? Why did he seemingly dismiss it as irrelevant, unreal, or both?

To be a mysterian in Dennettian terms is to take ineffable, vitalist essences seriously. With AIs and minds, it means taking the hard problem of consciousness seriously. With evolution, it means believing that Darwinism is not the whole story. Dennett tended to use the term as a dismissive slur, but many (re)claim it as a term of approbation, and I count myself among them.

To be a rigorous mysterian, as opposed to one of the sillier sorts Dennett liked to stoop to conquer (naive dualists, intelligent-designers, theological literalists, overconfident mystics…), you have to take vitalist essences “seriously but not literally.” My version of doing that is to treat my vitalist inclinations as placeholder pointers to things that lurk in the dank, ungrokked margins of the thinkable, just beyond the reach of my conceptualizing mind. Things I suspect exist by the vague shapes of the barely sensed holes they leave in my ideas. In pursuit of such things, I happily traffic in literary probing of Labatutian/Lovecraftian/Ballardian varieties, self-consciously magical thinking, junk from various pre-paradigmatic alchemical thought spaces, constructs that uncannily resemble astrology, and so on. I suppose it’s a sort of intuitive-ironic cognitive kayfabe for the most part, but it’s not entirely so.

So for example, when I talk of elan vital, as I frequently do in this newsletter, I don’t mean to imply I believe in some sort of magical fluid flowing through living things or a Gaian planetary consciousness. Nor do I mean the sort of overwrought continental metaphysics of time and subjectivity associated with Henri Bergson (which made him the darling of modernist literary types and an object of contempt to Einstein). I simply mean I suspect there are invisible things going on in the experience and phenomenology of life that are currently beyond my ability to see, model, and talk about using recognizably rational concepts, and I’d rather talk about them as best I can with irrational concepts than pretend they don’t exist.

Or to take another example, when I say that “Darwin is not the whole story,” I don’t mean I believe in intelligent design or a creator god (I’m at least as strong an atheist as Dennett was). I mean that Darwinian principles of evolution constrain but do not determine the nature of nature, and we don’t yet fully grok what completes the picture except perhaps in hand-wavy magical-thinking ways. To fully determine what happens, you need to add more elements. For example, you can add ideas like those of Stuart Kauffman and other complexity theorists. You could add elements of what Maturana and Varela called autopoiesis. Or it might be none of these candidate hole-filling ideas, but something to be dreamt up years in the future. Or never. But just because there are only unsatisfactory candidate ways for talking about stuff doesn’t mean you should conclude the stuff doesn’t exist.

In all such cases, there are more things present in the phenomenology I can access than I can talk about using terms of reference that would be considered legitimate by everybody. These are neither known-unknowns (holes with shapes defined by concepts that seem rational), nor unknown-unknowns (things that have not yet appeared in your senses, and therefore, to apply a Gilbert Ryle principle, cannot be in your mind).

These are things that we might call magically known. Like chemistry was magically known through alchemy. For phenomenology to be worth magically knowing, the way-of-knowing must offer interesting agency, even if it doesn’t hang together conceptually.

Dennett seemed mostly to fiercely resist and reject such impulses. He genuinely seemed to think that belief in (say) the hard problem of consciousness was some sort of semantic confusion. Unlike, say, B. F. Skinner, whom critics accused of only pretending not to believe in inner processes, Dennett seemed actually to disbelieve in them.

Dennett seemed to disregard a cousin to the principle that absence of evidence is not evidence of absence: Presence of magical conceptualizations does not mean absence of phenomenology. A bad pointer does not disprove the existence of what it points to. This sort of error is easy to avoid in most cases. Lightning is obviously real even if some people seem to account for it in terms of Indra wielding his vajra. But when we try to talk of things that are on the phenomenological margins, barely within the grasp of sensory awareness, or worse, potentially exist as incommunicable but universal subjective phenomenology (such as the experience of the color “blue”), things get tricky.

Dennett was a successor of sorts to philosophers like Gilbert Ryle, and psychologists like B. F. Skinner. In evolutionary philosophy, his thinking aligned with people like Richard Dawkins and Steven Pinker, and against Noam Chomsky (often classified as a mysterian, though I think the unreasonable effectiveness of LLMs kinda vindicates Chomsky’s notions of an ineffable more-than-Darwin essence around universal grammars that we don’t yet understand).

I personally find it interesting to poke at why Dennett took the positions he took, given that he was contemplating the same phenomenological data and low-to-mid-level conceptual categories as the rest of us (indeed, he supplied much of it for the rest of us). One way to get at it is to ask: Was Dennett a phenomenologist? Are the limits of his ideas the limits of phenomenology?

I think the answers are yes and yes, but he wasn’t a traditional sort of phenomenologist, and he didn’t encounter the more familiar sorts of limits.

The Limits of Phenomenology

Let’s talk regular phenomenology first, before tackling what I think was Dennett’s version.

I think of phenomenology, as a working philosophical method, to be something like a conceited form of empiricism that aims to get away from any kind of conceptually mediated seeing.

When you begin to inquire into a complex question with any sort of fundamentally empirical approach, your philosophy can only be as good as a) the things you know now through your (potentially technologically augmented) senses and b) the ways in which you know those things.

The conceit of phenomenology begins with trying to “unknow” what is known to be known, and to contemplate the resulting presumed “pure” experiences “directly.” There are various flavors of this: Husserlian bracketing in the Western tradition, Zen-like “beginner mind” practices, Vipassana-style recursive examination of mental experiences, and so on. Some flavors apply only to sense-observations of external phenomena, others only to subjective introspection, and some to both. Given the current somewhat faddish uptick in Eastern-flavored disciplines of interiority, it is important to note that the phenomenological attitude is not necessarily inward-oriented. For example, the 19th-century quest to measure a tenth of a second, and to factor the “personal equation” out of astronomical observations, was a massive project in Western phenomenology. The abstract thought experiments with notional clocks in the theory of relativity began with the phenomenology of real clocks.

In “doing” phenomenology, you are assuming that you know what you know relatively completely (or can come to know it), and have a reliable procedure for either unknowing it, or systematically alloying it with skeptical doubt, to destabilize unreliable perceptions it might be contributing to. Such destabilizability of your default, familiar way of knowing, in pursuit of a more-perfect unknowing, is in many ways the essence of rationality and objectivity. It is the (usually undeclared) starting posture for doing “science,” among other things.

Crucially, for our purposes in this essay, you do not make a careful distinction between things you know in a rational way and things you know in a magical or mysterian way, but effectively assume that only the former matter; that the latter can be trivially brushed aside as noise signifying nothing that needs unknowing. I think the reverse is true. It is harder, to the point of near impossible, to root out magical ideas from your perceptions, and they signify the most important things you know. More to the point, it is not clear that trying to unknow things, especially magical things, is in fact a good idea, or that unknowing is clarifying rather than blinding. But phenomenology is committed to trying. This has consequences for “phenomenological projects” of any sort, be they Husserlian or Theravadan in spirit.

A relatively crude example: “life” becomes much less ineffable (and, depending on your standards, possibly entirely drained of mystery) once you view it through the lens of DNA. Not only do you see new things through new tools, you see phenomenology you could already see, such as Mendelian inheritance, in a fundamentally different way that feels phenomenologically “deeper,” when in fact it relies on more conceptual scaffolding, more things that are invisible to most of us, and instruments with elaborate theories attached just to render it intelligible. You do not see “ATCG” sequences when contemplating a pea flower. You could retreat up the toolchain and turn your attention to how instruments construct the “idea of DNA,” but to me that feels like a usually futile yak shave. The better thing to do is to ask why a more indirect way of knowing somehow seems to perceive more clearly than more direct ways.

It is obviously hard to “unsee” knowledge of DNA today when contemplating the nature of life. But it would have been even harder to recognize that something “DNA shaped” was missing in say 1850, regardless of your phenomenological skills, by unknowing things you knew then. In fact, clearing away magical ways of knowing might have swept away critical clues.

To become aware, as Mendel did, that there was a hidden order to inheritance in pea flowers takes a leap of imagination that cannot be purely phenomenological. To suspect in 1943, as Schrödinger did, the existence of “aperiodic crystals” of covalently bonded matter at the root of life, and to point the way to DNA, takes a blend of seeing and knowing that is greater than either. Magical knowing is pathfinder-knowing that connects what we know and can see to what we could know and see. It is the bootstrapping process of the mind.

Mendel and Schrödinger “saw” DNA before it was discovered, in terms of reference that would have been considered “rational” in their own time, but this has not always been the case. Newton, famously, had a lot of magical thinking going on in his successful quest for a theory of gravity. Kepler was a numerologist. Leibniz was a ball of mad ideas. One of Newton’s successful bits of thinking, the idea of “particles” of light, which faced off against Huygens’ “waves,” has still not exited the magical realm. The jury is still out in our time about whether quantized fields are phenomenologically “real” or merely a convenient mnemonic-metaphoric motif for some unexpected structure in some unreasonably effective math.

Arguably, none of these thinkers was a phenomenologist, though all had a disciplined empirical streak in their thinking. The history of their ideas suggests that phenomenology is no panacea for philosophical troubles with unruly conceptual universes that refuse to be meekly and rationally “bracketed” away. There is no systematic and magic-free way to march from current truths to better ones via phenomenological disciplines.

The fatal conceit of naive phenomenology (which Paul Feyerabend spotted) is the idea that there is a privileged, reliable (or meta-reliable) “technique” for relating to your sense experiences, independent of the concepts you hold, whether that “technique” is Husserlian bracketing or vipassana. Understood this way, theories of reality are not that different from physical instruments that extend our senses. Experiment and theory don’t always expose each other’s weaknesses. Sometimes they mutually reinforce them.

In fact, I would go so far as to suggest—and I suspect Dennett would have agreed—that there is no such thing as phenomenology per se. All we ever see is the most invisible of our theories (rational and magical), projected via our senses and instruments (which shape, and are shaped by, those theories), onto the seemingly underdetermined aspects of the universe. There are only incomplete ways of knowing and seeing within which ideas and experiences are inextricably intertwined. No phenomenological method can consistently outperform methodological anarchy.

To deny this is to be a traditional phenomenologist, striving to procedurally separate the realm of ideas and concepts from the realm of putatively unfactored and “directly perceived” (a favorite phrase of meditators) “real” experiences.

Husserlian bracketing — “suspending trust in the objectivity of the world” — is fine in theory, but not so easy in practice. How do you know that you’re setting aside preconceived notions, judgments, and biases and attending to a phenomenon as it truly is? How do you set aside the unconscious “theory” that the Sun revolves around the Earth, and open your mind to the possibility that it’s the other way around? How do you “see” DNA-shaped holes in current ways of seeing, especially if they currently manifest as strange demons that you might sweep away in a spasm of over-eager epistemic hygiene? How do you relate, as a phenomenologist, to intrinsically conceptual things like electrons and positrons that only exist behind layers of mathematics describing experimental data processed through layers of instrumentation conceived by existing theories? If you can’t check the math yourself, how can you trust that the light bulb turning on is powered by those “electrons” tracing arcs through cloud chambers?

In practice, we know how such shifts actually came about. Not because philosophers meditated dispassionately on the “phenomenology” with free minds seeing reality as it “truly is,” but because astronomers and biologists with heads full of weird magical notions looked through telescopes and microscopes, maintained careful notes of detailed measurements informed by those weird magical theories, and tried to account for discrepancies. Tycho Brahe, for instance, who provided the data that dethroned Ptolemy, believed in some sort of Frankenstein helio-geo-centric Ptolemy++ theory. Instead of explaining the discrepancies, as Kepler did later, Brahe attempted to explain them away using terms of reference he was attached to. He failed to resolve the tension. But he paved the way for Kepler to resolve that particular tension (Kepler, of course, introduced new ones, while lost in his own magical thinking about Platonic solids). Formally phenomenological postures were not just absent from the story, but would arguably have retarded it by being too methodologically conservative.

Phenomenology, in other words, is something of a procedural conceit. An uncritical trust in self-certifying ways of seeing based entirely on how compelling they seem to the seer. The self-certification follows some sort of seemingly rational procedure (which might be mystical but still rational in the sense of being coherent and disciplined and internally consistent) but ultimately derives its authority from the intuitive certainties and suspicions of the perceiving subject. Phenomenological procedures are a kind of rule-by-law for governing sense experience in a laissez-faire way, rather than the “objective” rule-of-law they are often presented as. Phenomenology is to empiricism as “socialism with Chinese characteristics” is to liberal democracy.

This is not to say phenomenology is hopelessly unreliable or useless. All methodologies have their conceits, which manifest as blindspots. With phenomenology, the blindspot manifests as an insistence on non-magicality. The phenomenologist fiercely rejects the Cartesian theater and the varied ghosts-in-machines that dance there. The meditator insists he is “directly perceiving” reality in a reproducible way, no magic necessary. I do not doubt that these convictions are utterly compelling to those who hold them; as compelling as the incommunicable reality of perceiving “blue” is to everybody. I have no particular argument with such insistence. What I actually have a problem with is the delegitimization of magical thinking in the process, which I suspect to be essential for progress.

My own solution is to simply add magical thinking back into the picture for my own use, without attempting to defend that choice, and accepting the consequences.

For example, I take Myers-Briggs and the Enneagram seriously (but not literally!). I believe in the hard problem of consciousness, and therefore think “upload” and “simulationism” ideas are not-even-wrong. I don’t believe in Gods or AGIs, and therefore don’t see the point of Pascal’s wager type contortions to avoid heaven/hell or future-simulated-torture scenarios. In each case my commitments rely on chains of thought that are at least partly magical thinking, and decidedly non-phenomenological, which has various social consequences in various places. I don’t attempt to justify any of it because I think all schemes of justification, whether they are labeled “science” or something else, rest on traditional phenomenology and its limits.

Does this mean solipsism is the best we can hope for? This is where we get back to Dennett.

To his credit, I don’t think Dennett was a traditional phenomenologist, and he mostly avoided all the traps I’ve pointed out, including the trap of solipsism. Nor was he what one might call a “phenomenologist of language,” like most modern analytical philosophers in the West. He was much too interested in technological modernity (and the limits of thought it has been exposing for a century) to be content with such a shrinking, traditionalist philosophical range.

But he was a phenomenologist in the broader sense of rejecting the possible reality of things that currently lack coherent non-magical modes of apprehension.

So how did he operate if not in traditional phenomenological ways?

Demiurge Phenomenology

I believe Dennett was what we might call a demiurge phenomenologist, which is a sort of late modernist version of traditional phenomenology. It will take a bit of work to explain what I mean by that.

I can’t recall if he ever said something like this (I’m definitely not a completist with his work and have only read a fraction of his voluminous output), but I suspect Dennett believed that the human experience of “mind” is itself subject to evolutionary processes (think Jaynes and bicameral mind theories for example — I seem to recall him saying something approving about that in an interview somewhere). He sought to construct philosophy in ways that did not derive authority from an absolute notion of the experience of mind. He tried to do relativity theory for minds, but without descending into solipsism.

It is easiest to appreciate this point by starting with body experience. For example, we are evolved from creatures with tails, but we do not currently possess tails. We possess vestigial “tail bones” and presumably bits of DNA relevant to tails, but we cannot know what it is like to have a tail (or, in the spirit of mysterian philosopher Thomas Nagel’s “What Is It Like to Be a Bat?” provocation, which I first read in The Mind’s I, what it is like for a tailed creature to have a tail).

We do catch tantalizing Lovecraftian-Ballardian glimpses of our genetic heritage, though. For example, the mammalian dive reflex (the involuntary breath-hold and shot of alertness that accompany a dunk in cold water) is a remnant of a more aquatic evolutionary past that far predates our primate mode of existence. Now apply that to the experience of “mind.”

Why does Jaynes’ bicameral mind theory sound so fundamentally crackpot to modern minds? It could be that the notion is actually crackpot, but you cannot easily dismiss the idea that it’s actually a well-posed notion that only appears crackpot because we are not currently possessed of bicameral mind-experiences (modulo cognitive marginalia like tulpas and internal family systems — one of my attention/taste biases is to index strongly on typical rather than rare mental experiences; I believe the significance of the latter is highly overstated due to the personal significance they acquire in individual lives).

I hope it is obvious why the possibility that the experience of mind is subject to evolution is fatal to traditional phenomenology. If despite all the sophistication of your cognitive toolchain (bracketing, jhanas, ketamine, whatever), it turns out that you’re only exploring the limits of the evolutionarily transient and arbitrary “variety of mind” that we happen to experience, what does that say about the reliability of the resulting supposedly objective or “direct” perceptions of reality itself that such a mind mediates?

This, by the way, is a problem that evolutionary terms of reference make elegantly obvious, but you can get there in other ways; Darwinian evolution is a convenient scaffolding (and the one I think Dennett used), but ultimately a dispensable one. However you get there, the possibility that experiences of mind are relative to contingent and arbitrary evolutionary circumstances is fatal to the conceits of traditional phenomenology. It reduces traditional phenomenology in status to any old sort of Cartesian or Platonic philosophizing with made-up bullshit schemas. You might as well make 2x2s all day like I sometimes do.

The Eastern response to this quandary has traditionally been rather defeatist — abandoning the project of trying to know reality entirely. Buddhist and Advaita philosophies, in particular, tend to dispense with “objective reality” as an ontologically meaningful characterization of anything. There is only nothing. Or only the perceiving subject. Everything else is maya-moh, a sentimental attachment to the ephemeral unreal. Snap out of it.


I suspect Western philosophy was starting to head that way in the 17th century (through the Spinoza-vs-Leibniz shadowboxing years), but was luckily steered down a less defeatist path to a somewhat uneasy detente between a sort of “probationary reality” accessed through technologically augmented senses, and a subjectivity resolutely bound to that probationary reality via the conceits of traditional phenomenology. This is a long-winded way of saying “science happened” to Western philosophy.

I think that detente is breaking down. One sign is the growing popularity of the relatively pedestrian metaphysics of cognitive scientists like Donald Hoffman (leading to a certain amount of unseemly glee among partisans of Eastern philosophies — “omigod you think quantum mechanics shows reality is an illusion? Welcome to advaita lol”).

But despite these marginally interesting conversations, and whether you get there via Husserl, Hoffman, or vipassana, we’re no closer to resolving what we might call the fundamental paradox of phenomenology. If our experience of mind is contingent, how can any notion of justifiable absolute knowledge be sustained? We are essentially stopped clocks trying to tell the time.

Dennett, I think, favored one sort of answer: that the experience of mind was too untrustworthy and transient to build on, but that the mind’s experience of mathematics was both trustworthy and absolute. Bicameral or monocameral, dolphin-brain or primate-brain, AI-brain or Hoffman-optimal ontological apparatus, one thing that is certain is that a prime number is a prime number in all the ways that reality (probationary or not, illusory or not) collides with minds (typical or atypical, bursting with exotic qualia or full of trash qualia). Even the 13- and 17-year cicadas agree. Prime numbers constitute a fixed point in all the ways mind-like things have experience-like things in relation to reality-like things, regardless of whether minds, experiences, and reality are real. Prime numbers are like a motif that shows up in multiple unreliable dreams. If you’re going to build up a philosophy of being, you should only use things like prime numbers.
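The cicada aside can be made concrete with a toy calculation. A brood that emerges every p years coincides with a predator whose population peaks every q years only once every lcm(p, q) years, so a prime period coprime to every shorter cycle pushes those coincidences out as far as possible. A minimal sketch, with predator cycle lengths invented purely for illustration:

```python
from math import lcm

# Toy version of the cicada observation: a brood emerging every p years meets a
# predator peaking every q years only once every lcm(p, q) years. Prime periods
# coprime to all shorter cycles push those meetings out furthest.

def mean_gap(p, predator_periods):
    """Average number of years between emergences that coincide with a predator peak."""
    return sum(lcm(p, q) for q in predator_periods) / len(predator_periods)

predators = range(2, 13)  # hypothetical predator cycle lengths: 2..12 years
scores = {p: mean_gap(p, predators) for p in range(10, 18)}

best = max(scores, key=scores.get)
print(best)  # 17: among the candidate periods, the larger prime dodges predators longest
```

Whatever one makes of the ecological hypothesis, the arithmetic is the point: primality is a fact about the cycle that holds for any mind, cicada or otherwise, that collides with it.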

This is not just the most charitable interpretation of Dennett’s philosophy, but the most interesting and powerful one. It’s not that he thought of the mysterian weakness for ineffable experiences as being particularly “illusory”. As far as he was concerned, you could dismiss the “experience of mind” in its entirety as irrelevant philosophically. Even the idea that it has an epiphenomenal reality need not be seriously entertained because the thing that wants to entertain that idea is not to be trusted.

You see signs of this approach in a lot of his writing. In his collaborative enquiries with Hofstadter, in his fundamentally algorithmic-mathematical account of evolution, in his seemingly perverse stances in debates both with reputable philosophers of mind and disreputable intelligent designers. As far as he was concerned, anyone who chose to build any theory of anything on the basis of anything other than mathematical constancy was trusting the experience of mind to an unjustifiable degree.

Again, I don’t know if he ever said as much explicitly (he probably did), but I suspect he had a basic metaphysics similar to that of another simpatico thinker on such matters, Roger Penrose: as a triad of physical/mental/platonic-mathematical worlds projecting on to each other in a strange loop. But unlike Penrose, who took the three realms to be equally real (or unreal) and entangled in an eternal dance of paradox, he chose to build almost entirely on the Platonic-mathematical vertex, with guarded phenomenological forays to the physical world, and strict avoidance of the mental world as a matter of epistemological hygiene.


The guarded phenomenological forays, unlike those of traditional phenomenologists, were governed by an allow list rather than a block list. Instead of trying to “block out” suspect conceptual commitments with bracketing or meditative discipline, he made sure to only work with allowable concepts and percepts that seemed to have some sort of mathematical bones to them. So Turing machines, algorithms, information theory, and the like, all made it into his thinking in load-bearing ways. Everything else was at best narrative flavor or useful communication metaphors. People who took anything else seriously were guilty of deep procedural illusions rather than shallow intellectual confusions.

If you think about it, his accounts of AI, evolution, and the human mind make a lot more sense if you see them as outcomes of philosophical construction processes governed by one very simple rule: Only use a building block if it looks mathematically real.

Regardless of what you believe about the reality of things other than mathematically underwritten ones, this is an intellectually powerful move. It is a kind of computational constructionism applied to philosophical inquiry, similar to what Wolfram does with physics on automata or hypergraphs, or what Grothendieck did with mathematics.

It is also far harder to do, because philosophy aims and claims to speak more broadly and deeply than either physics or mathematics.

I think Dennett landed where he did, philosophically, because he was essentially trying to rebuild the universe out of a very narrow admissible subset of the phenomenological experience of it. Mysterian musings didn’t make it in because they could not ride allowable percepts and concepts into the set of allowable construction materials.

In other words, he practiced demiurge phenomenology. Natural philosophy as an elaborate construction practice based on self-given rules of construction.

In adopting such an approach he was ahead of his time. We’re on the cusp of being able to literally do what he tried to do with words — build phenomenologically immersive virtual realities out of computational matter that seem to be defined by nothing more than mathematical absolutes, and have almost no connection even to physical reality, thanks to the seeming buffering universality of Turing-equivalent computation.

In that almost, I think, lies the key to my fundamental disagreement with Dennett, and my willingness to wander in magical realms of thought where mathematically sure-footed angels fear to tread. There are… phenomenological gaps between mathematical reconstructions of reality by energetic demiurges (whether they work with powerful arguments or VR headsets) and reality itself.

The biggest one, in my opinion, is the experience of time, which seems to oddly resist computational mathematization (though Stephen Wolfram claims to have one… but then he claims to have a lot of things). In an indirect way, disagreeing with Dennett at age 20 led me to my lifelong fascination with the philosophy of time.

Where to Next?

It is something of a cliche that over the last century or two, philosophy has gradually and reluctantly retreated from an increasing number of the domains it once claimed as its own, as scientific and technological advances rendered ungrounded philosophical ideas somewhere between moot and ridiculous. Bergson retreating in the face of the Einsteinian assault, ceding the question of the nature of time to physics, is probably as good a historical marker of the culmination of the process as any.

I would characterize Dennett as a late modernist philosopher in relation to this cliche. Unlike many philosophers, who simply gave up on trying to provide useful accounts of things that science and technology were beginning to describe in inexorably more powerful ways, he brought enormous energy to the task of simply keeping up. His methods were traditional, but his aim was radical: Instead of trying to provide accounts of things, he tried to provide constructions of things, aiming to arrive at a sense of the real through philosophical construction with admissible materials. He was something like Brouwer in mathematics, trying to do away with suspect building blocks to get to desirable places only using approved methods.

This actually worked very well, as far as it went. For example, I think his philosophy of mind was almost entirely correct as far as the mechanics of cognition go, and the findings of modern AI vindicate a lot of the specifics. In particular, his idea of a “multiple drafts” model of cognition (where one part of the brain generates a lot of behavioral options in a bottom-up, anarchic way, and another part chooses a behavior from among them) is broadly correct, not just as a description of how the brain works, but of how things like LLMs work. But unlike many other so-called philosophers of AI he disagreed with, like Nick Bostrom, Dennett’s views managed to be provocative without being simplistic, opinionated without being dogmatic. He appeared to have a Strong AI stance similar to many people I disagree with, but unlike most of those people, I found his views worth understanding with some care, and hard to dismiss as wrong, let alone not-even-wrong.

I like to think he died believing his philosophies — of mind, AI, and Darwinism — to be on the cusp of a triumphant redemption. There are worse ways to go than believing your ideas have been thoroughly vindicated. And indeed, there was a lot Dennett got right. RIP.

Where do we go next with Dennettian questions about AI, minds, and evolution?

Oddly enough, I think Dennett himself pointed the way: demiurge phenomenology. We just need to get more creative with it, and admit magical thinking into the process.

Dennett, I think, approached his questions the way some mathematicians originally approached Euclid’s fifth postulate: Discard it and try to either do without, or derive it from the other postulates. That led him to certain sorts of demiurgical constructions of AI, mind, and evolution.

There is another, equally valid way. Just as other mathematicians replaced the fifth postulate with alternatives and ended up with consistent non-Euclidean geometries, I think we could entertain different mysterian postulates and end up with a consistent non-Dennettian metaphysics of AI, mind, and evolution. You’d proceed by trying to do your own demiurgical constructing of a reality. An alternate reality.

For instance, what happens if you simply assume that there is human “mind stuff” that ends with death, cannot be uploaded or transferred to other matter, and can never emerge in silico? You don’t have to try accounting for it (no need to mess with speculations about the pineal gland like Descartes, or worry about microtubules and sub-Planck-length phenomena like Penrose). You could just assume that consciousness is a thing like space or time, and run with the idea and see where you land and what sort of consistent metaphysical geometries are possible. This is in fact what certain philosophers of mind like Ned Block do.

The procedure can be extended to other questions as well. For instance, if you think Darwin is not the whole story with evolution, you could simply assume there are additional mathematical selection factors having to do with fractals or prime numbers, and go look for them, as the Santa Fe biologists have done. Start simple and stupid, for example, by applying a rule that “evolution avoids rectangles” or “evolution cannot get to wheels made entirely of grown organic body parts” and see where you land (for the latter, note that the example in the His Dark Materials trilogy cheats — that’s an assembled wheel, not an evolved one).

But all these procedures follow the basic Dennettian approach of demiurgical constructionist phenomenology. Start with your experiences. Let in an allow-list of percepts as concepts. Add an arbitrarily constructed magical suspicion or two. Let your computer build out the entailments of those starter conditions. See what sort of realities you can conjure into being. Maybe one of them will be more real than your current experience of reality. That would be progress. Perhaps progress only you can experience, but still, progress.
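The recipe above can be sketched mechanically. Here is a deliberately silly toy of the procedure: ordinary selection pressure plus one arbitrary “magical” postulate (“evolution avoids multiples of 4,” standing in for “evolution avoids rectangles”), with the computer left to build out the entailments. Every detail is invented for illustration; this is nobody’s actual model of anything:

```python
import random

# A toy demiurgical construction: evolve integer genomes under a mundane
# fitness rule (bigger is fitter), plus one arbitrary "magical" postulate --
# genomes divisible by 4 never survive. Then inspect the conjured reality.

def evolve(generations=200, pop_size=50, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(1, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(reverse=True)              # selection: keep the "fittest" half
        parents = pop[: pop_size // 2]
        children = [max(1, p + rng.randint(-3, 3)) for p in parents]  # mutation
        pop = [x for x in parents + children if x % 4 != 0]  # the magical cull
        while len(pop) < pop_size:          # refill, still honoring the postulate
            x = rng.randint(1, 100)
            if x % 4 != 0:
                pop.append(x)
    return pop

world = evolve()
print(all(x % 4 != 0 for x in world))  # True: the world obeys its given postulate
```

The point is not the simulation but the posture: the postulate is assumed rather than justified, and the interesting question is what kind of consistent world survives it.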

Would such near-solipsistic activities constitute a collective philosophical search for truth? I don’t know. But then, I don’t know if we have ever been on a coherent collective philosophical search for truth. All we’ve ever had is more or less satisfying descriptions of the primal mystery of our own individual existence.

Why is there something, rather than nothing, it is like, to be me?

Ultimately, Dennett did not seem to find that question to be either interesting or serious. But he pointed the way for me to start figuring out why I do. And that’s why I too am a Dennettian.


Footnote 1
I found the book in my uncle’s library, and the only reason I picked it up was that I recognized Hofstadter’s name: Gödel, Escher, Bach had recently been recommended to me. I think it’s one of the happy accidents of my life that I read The Mind’s I before I read Hofstadter’s Gödel, Escher, Bach. I think that accident of path-dependence may have made me a truly philosophical engineer, as opposed to just an engineer with a side interest in philosophy. Hofstadter is of course much better known and familiar in the engineering world, and reading him is something of a rite of passage in the education of the more sentimental sorts of engineers. But Hofstadter’s ideas were mostly entertaining and informative for me, in the mode of popular science, rather than impactful. Dennett, on the other hand, was impactful.