
Friday, July 22, 2022

The End of the World is Just the Beginning

The title of this post repeats the title of Peter Zeihan's latest book, which I've just finished reading and found utterly fascinating and entertaining, in a gallows-humor sort of way. As I was waking this morning, my mind was generating words attempting to cook Zeihan's basic message down into a few sentences... Here they are:
In the new world that we are now entering, America is one of the few countries that can both feed itself and make all the widgets that it needs. Together with its partners in the NAFTA accord it is geographically and demographically secure, able to turn inward and still maintain much of its population and lifestyle. Almost all other countries must either export or import energy, food, materials, or manufactured products. The free-trade transport routes that have permitted this are crumbling as America continues its withdrawal from guaranteeing a world order formed to oppose a Soviet Union that fell in 1991. As the level of global trade diminishes, most countries outside the North American group must reduce their population levels and living standards.
I was pointed to this book by listening to a Sam Harris "Making Sense" podcast titled "The End of Global Order," an interview with Peter Zeihan and Ian Bremmer. Zeihan integrates geopolitical and demographic perspectives to make a compelling case that the past few decades have been the best it will ever be in our lifetime, because our world is breaking apart. For the past seventy-five years we have been living in a perfect moment made possible by post-World War II America's fostering of:
“an environment of global security so that any partner could go anywhere, anytime, interface with anyone, in any economic manner, participate in any supply chain and access any material input—all without needing a military escort. This butter side of the Americans’ guns-and-butter deal created what we today recognize as free trade. Globalization.”
But,
“Thirty years on from the Cold War’s end, the Americans have gone home. No one else has the military capacity to support global security, and from that, global trade. The American-led Order is giving way to Disorder. Global aging didn’t stop once we reached that perfect moment of growth...The global worker and consumer base is aging into mass retirement. In our rush to urbanize, no replacement generation was ever born...“The 2020s will see a collapse of consumption and production and investment and trade almost everywhere. Globalization will shatter into pieces. Some regional. Some national. Some smaller. It will be costly. It will make life slower. And above all, worse.”
Zeihan shows that America and its partners in the NAFTA accord, Canada and Mexico, enjoy a "Geography of Success" and demographics that will render them vastly better off than the rest of the world.
Perhaps the oddest thing of our soon-to-be present is that while the Americans revel in their petty, internal squabbles, they will barely notice that elsewhere the world is ending!!! Lights will flicker and go dark. Famine’s leathery claws will dig deep and hold tight. Access to the inputs—financial and material and labor—that define the modern world will cease existing in sufficient quantity to make modernity possible. The story will be different everywhere, but the overarching theme will be unmistakable: the last seventy-five years will long be remembered as a golden age, and one that didn’t last nearly long enough at that.
In the introduction of his book, from which the above quotes are taken, Zeihan states that the book's real focus...
...is to map out what everything looks like on the other side of this change in condition. What are the new parameters of the possible? In a world deglobalized, what are the new Geographies of Success?
The book's introduction and epilogue are useful summaries, and you should check out the very instructive graphics provided on Zeihan's website.

NOTE!! ADDENDUM TO POST 11/13/22 I am obliged to pass on a critique of Zeihan's shocking China predictions that points out some blatant errors in his numbers: Debunking Peter Zeihan’s Shocking and Popular China Predictions

Monday, November 08, 2021

It’s Quitting Season

I want to pass on two articles with similar themes of people taking stock of their lives and deciding to stop making themselves unhappy. The piece by Crouse and Ferguson is a video by, and directed towards, Millennials, with the following introductory text:
It’s been a brutal few years. But we’ve gritted through. We’ve spent time languishing. We’ve had one giant national burnout. And now, finally, we’re quitting...We are quitting our jobs. Our cities. Our marriages. Even our Twitter feeds...And as we argue in the video, we’re not quitting because we’re weak. We’re quitting because we’re smart...younger Americans like 18-year-old singer Olivia Rodrigo and the extraordinary Simone Biles are barely old enough to rent a car but they are already teaching us about boundaries. They’ve seen enough hollowed-out millennials to know what the rest of us are learning: Don’t be a martyr to grit.
I feel some personal resonance with points made about a whole career path in the piece by Arthur Brooks, To Be Happy, Hide From the Spotlight, because this clip nails a part of the reason I keep driving myself to performances (writing, lecturing, music) by rote habit:
Assuming that you aren’t a pop star or the president, fame might seem like an abstract problem. The thing is, fame is relative, and its cousin, prestige — fame among a particular group of people — is just as fervently chased in smaller communities and fields of expertise. In my own community of academia, honors and prestige can be highly esoteric but deeply desired.
I suggest you read the whole article, but here are a few further clips:
Even if a person’s motive for fame is to set a positive example, it mirrors the other, less flattering motives insofar as it depends on other people’s opinions. And therein lies the happiness problem. Thomas Aquinas wrote in the 13th century, “Happiness is in the happy. But honor is not in the honored.” ...research shows that fame ...based on what scholars call extrinsic rewards... brings less happiness than intrinsic rewards...fame has become a form of addiction. This is especially true in the era of social media, which allows almost anyone with enough motivation to achieve recognition by some number of strangers...this is not a new phenomenon. The 19th-century philosopher Arthur Schopenhauer said fame is like seawater: “The more we have, the thirstier we become.”
No social scientists I am aware of have created a quantitative misery index of fame. But the weight of the indirect evidence above, along with the testimonies of those who have tasted true fame in their time, should be enough to show us that it is poisonous. It is “like a river, that beareth up things light and swollen,” said Francis Bacon, “and drowns things weighty and solid.” Or take it from Lady Gaga: “Fame is prison.”
...Pay attention to when you are seeking fame, prestige, envy, or admiration—especially from strangers. Before you post on social media, for example, ask yourself what you hope to achieve with it...Say you want to share a bit of professional puffery or photos of your excellent beach body. The benefit you experience is probably the little hit of dopamine you will get as you fire it off while imagining the admiration or envy others experience as they see it. The cost is in the reality of how people will actually see your post (and you): Research shows that people will largely find your boasting to be annoying—even if you disguise it with a humblebrag—and thus admire you less, not more. As Shakespeare helpfully put it, “Who knows himself a braggart, / Let him fear this, for it will come to pass / that every braggart shall be found an ass.”
The poet Emily Dickinson called fame a “fickle food / Upon a shifting plate.” But far from a harmless meal, “Men eat of it and die.” It’s a good metaphor, because we have the urge to consume all kinds of things that appeal to some anachronistic neurochemical impulse but that nevertheless will harm us. In many cases—tobacco, drugs of abuse, and, to some extent, unhealthy foods—we as a society have recognized these tendencies and taken steps to combat them by educating others about their ill effects.
Why have we failed to do so with fame? None of us, nor our children, will ever find fulfillment through the judgment of strangers. The right rule of thumb is to treat fame like a dangerous drug: Never seek it for its own sake, teach your kids to avoid it, and shun those who offer it.

Wednesday, October 20, 2021

A debate over stewardship of global collective behavior

In this post I'm going to pass on the abstract of a PNAS perspective piece by Bak-Coleman et al., a critique by Cheong and Jones, and a reply to the critique by Bak-Coleman and Bergstrom. First the Bak-Coleman et al. abstract:
Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.
The critique by Cheong and Jones:
In vivid detail, Bak-Coleman et al. describe explosively multiplicative global pathologies of scale posing existential risk to humanity. They argue that the study of collective behavior in the age of digital social media must rise to a “crisis discipline” dedicated to averting global ruin through the adaptive manipulation of social dynamics and the emergent phenomenon of collective behavior. Their proposed remedy is a massive global, multidisciplinary coalition of scientific experts to discover how the “dispersed networks” of digital media can be expertly manipulated through “urgent, evidence-based research” to “steward” social dynamics into “rapid and effective collective behavioral responses,” analogous to “providing regulators with information” to guide the stewardship of ecosystems. They picture the enlightened harnessing of yet-to-be-discovered scale-dependent rules of internet-age social dynamics as a route to fostering the emergent phenomenon of adaptive swarm intelligence.
We wish to issue an urgent warning of our own: Responding to the self-evident fulminant, rampaging pathologies of scale ravaging the planet with yet another pathology of scale will, at best, be ineffective and, at worst, counterproductive. It is the same thing that got us here. The complex international coalition they propose would be like forming a new, ultramodern weather bureau to furnish consensus recommendations to policy makers while a megahurricane is already making landfall. This conjures images of foot dragging, floor fights, and consensus building while looking for actionable “mechanistic insight” into social dynamics on the deck of the Titanic. After lucidly spotlighting the urgent scale-dependent mechanistic nature of the crisis, Bak-Coleman et al. do not propose any immediate measures to reduce scale, but rather offer that there “is reason to be hopeful that well-designed systems can promote healthy collective action at scale...” Hope is neither a strategy nor an action.
Despite lofty goals, the coalition they propose does not match the urgency or promise a rapid and collective behavioral response to the existential threats they identify. Scale reduction may be “collective,” but achieving it will have to be local, authentic, and without delay—that is, a response conforming to the “all hands on deck” swarm intelligence phenomena that are well described in eusocial species already. When faced with the potential for imminent global ruin lurking ominously in the fat tail (5) of the future distribution, the precautionary principle dictates that we should respond with now-or-never urgency. This is a simple fact. A “weather bureau” for social dynamics would certainly be a valuable, if not indispensable, institution for future generations. But there is no reason that scientists around the world, acting as individuals within their own existing social networks and spheres of influence, observing what is already obvious with their own eyes, cannot immediately create a collective chorus to send this message through every digital channel instead of waiting for a green light from above. “Urgency” is euphemistic. It is now or never.
The Bak-Coleman and Bergstrom reply to the critique:
In our PNAS article “Stewardship of global collective behavior”, we describe the breakneck pace of recent innovations in information technology. This radical transformation has transpired not through a stewarded effort to improve information quality or to further human well-being. Rather, current technologies have been developed and deployed largely for the orthogonal purpose of keeping people engaged online. We cannot expect that an information ecology organized around ad sales will promote sustainability, equity, or global health. In the face of such impediments to rational democratic action, how can we hope to overcome threats such as global warming, habitat destruction, mass extinction, war, food security, and pandemic disease? We call for a concerted transdisciplinary response, analogous to other crisis disciplines such as conservation ecology and climate science.
In their letter, Cheong and Jones share our vision of the problem—but they express frustration at the absence of an immediately actionable solution to the enormity of challenges that we describe. They assert “swarm intelligence begins now or never” and advocate local, authentic, and immediate “scale reduction.” It’s an appealing thought: Let us counter pathologies of scale by somehow reversing course.
But it’s not clear what this would entail by way of practical, safe, ethical, and effective intervention. Have there ever been successful, voluntary, large-scale reductions in the scale of any aspect of human social life?
Nor is there reason to believe that an arbitrary, hasty, and heuristically decided large-scale restructuring of our social networks would reduce the long tail of existential risk. Rather, rapid shocks to complex systems are a canonical source of cascading failure. Moving fast and breaking things got us here. We can’t expect it to get us out.
Nor do we share the authors’ optimism about what scientists can accomplish with “a collective chorus … through every digital channel”. It is difficult to envision a louder, more vehement, and more cohesive scientific response than that to the COVID-19 pandemic. Yet this unified call for basic public health measures—grounded in centuries of scientific knowledge—nonetheless failed to mobilize political leadership and popular opinion.
Our views do align when it comes to the “now-or-never urgency” that Cheong and Jones highlight. Indeed, this is a key feature of a crisis discipline: We must act without delay to steer a complex system—while still lacking a complete understanding of how that system operates.
As scholars, our job is to call attention to underappreciated threats and to provide the knowledge base for informed decision-making. Academics do not—and should not—engage in large-scale social engineering. Our grounded view of what science can and should do in a crisis must not be mistaken for lassitude or unconcern. Worldwide, the unprecedented restructuring of human communication is having an enormous impact on issues of social choice, often to our detriment. Our paper is intended to raise the alarm. Providing the definitive solution will be a task for a much broader community of scientists, policy makers, technologists, ethicists, and other voices from around the globe.

Monday, August 16, 2021

What is our brain's spontaneous activity for?

Continuing in MindBlog's recent thread on the predictive brain (see here and here), I pass on highlights of an opinion piece by Pezzulo et al., who suggest that all that background brain noise has a very specific purpose - figuring out what to expect next:
Spontaneous brain dynamics are manifestations of top-down dynamics of generative models detached from action–perception cycles.
Generative models constantly produce top-down dynamics, but we call them expectations and attention during task engagement and spontaneous activity at rest.
Spontaneous brain dynamics during resting periods optimize generative models for future interactions by maximizing the entropy of explanations in the absence of specific data and reducing model complexity.
Low-frequency brain fluctuations during spontaneous activity reflect transitions between generic priors consisting of low-dimensional representations and connectivity patterns of the most frequent behavioral states.
High-frequency fluctuations during spontaneous activity in the hippocampus and other regions may support generative replay and model learning.
Brains at rest generate dynamical activity that is highly structured in space and time. We suggest that spontaneous activity, as in rest or dreaming, underlies top-down dynamics of generative models. During active tasks, generative models provide top-down predictive signals for perception, cognition, and action. When the brain is at rest and stimuli are weak or absent, top-down dynamics optimize the generative models for future interactions by maximizing the entropy of explanations and minimizing model complexity. Spontaneous fluctuations of correlated activity within and across brain regions may reflect transitions between ‘generic priors’ of the generative model: low dimensional latent variables and connectivity patterns of the most common perceptual, motor, cognitive, and interoceptive states. Even at rest, brains are proactive and predictive.

Wednesday, August 04, 2021

Historical language records reveal societal depression and anxiety in the past two decades higher than during the 20th century.

Fascinating work from Bollen et al. (open access):

Significance

Can entire societies become more or less depressed over time? Here, we look for the historical traces of cognitive distortions, thinking patterns that are strongly associated with internalizing disorders such as depression and anxiety, in millions of books published over the course of the last two centuries in English, Spanish, and German. We find a pronounced “hockey stick” pattern: Over the past two decades the textual analogs of cognitive distortions surged well above historical levels, including those of World War I and II, after declining or stabilizing for most of the 20th century. Our results point to the possibility that recent socioeconomic changes, new technology, and social media are associated with a surge of cognitive distortions.
Abstract
Individuals with depression are prone to maladaptive patterns of thinking, known as cognitive distortions, whereby they think about themselves, the world, and the future in overly negative and inaccurate ways. These distortions are associated with marked changes in an individual’s mood, behavior, and language. We hypothesize that societies can undergo similar changes in their collective psychology that are reflected in historical records of language use. Here, we investigate the prevalence of textual markers of cognitive distortions in over 14 million books for the past 125 y and observe a surge of their prevalence since the 1980s, to levels exceeding those of the Great Depression and both World Wars. This pattern does not seem to be driven by changes in word meaning, publishing and writing standards, or the Google Books sample. Our results suggest a recent societal shift toward language associated with cognitive distortions and internalizing disorders.
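The measurement behind this result is conceptually simple: for each year, count how often the distortion-marker n-grams appear and normalize by total n-gram volume. Here is a minimal Python sketch of that calculation; the marker phrases and sample records below are placeholders of mine, not the authors' actual lexicon or the Google Books data they analyzed.

```python
from collections import defaultdict

# Placeholder marker phrases; the paper derives its list of
# cognitive-distortion n-grams from the clinical literature.
MARKERS = {"everyone will", "no one ever", "i am a", "will never"}

def prevalence_by_year(ngram_records):
    """ngram_records: iterable of (year, ngram, count) tuples,
    e.g. parsed from yearly n-gram frequency files."""
    marked = defaultdict(int)
    total = defaultdict(int)
    for year, ngram, count in ngram_records:
        total[year] += count
        if ngram in MARKERS:
            marked[year] += count
    # Prevalence = marker volume as a fraction of all n-gram volume.
    return {year: marked[year] / total[year] for year in total}

sample = [(1980, "i am a", 120), (1980, "the cat sat", 5000),
          (2000, "i am a", 480), (2000, "the cat sat", 5200)]
print(prevalence_by_year(sample))  # prevalence rises from ~0.023 to ~0.085
```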

Wednesday, July 14, 2021

The A.I. Revolution, Trillionaires and the Future of Political Power.

I want to point to a fascinating Ezra Klein podcast - you can read the transcript here - that is an interview with Sam Altman, the C.E.O. of OpenAI, which is one of the biggest and most interesting of the companies trying to create general purpose artificial intelligence. His recent essay titled "Moore's Law for Everything" has received wide comment, and its topics are the focus of the interview. Klein notes: "what caught my eye about this essay, “Moore’s Law for Everything,” is Altman’s effort to try and imagine the political consequences of true artificial intelligence and the policies that could decide whether it ushers in utopia or dystopia." I'm going to pass on only clips from Klein's general introductions to give you a taste of the direction of the arguments, and urge you to read both the transcript of the podcast and Altman's essay.
“The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel...This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.”...Altman's argument is this: Since the 1970s, computers have gotten exponentially better even as they’ve gotten cheaper, a phenomenon known as Moore’s Law. Altman believes that A.I. could get us closer to Moore’s Law for everything: it could make everything better even as it makes it cheaper. Housing, health care, education, you name it.
A.I. will create phenomenal wealth, but it will do so by driving the price of a lot of labor to basically zero. That is how everything gets cheaper. It’s also how a lot of people lose their jobs...To make that world a good world for people, to make that a utopia rather than a dystopia, it requires really radical policy change to make sure the wealth A.I. creates is distributed broadly. But if we can do that, he says, well, then we can improve the standard of living for people more than we ever have before in less time. So Altman’s got some proposals here for how we can do that. They’re largely proposals to tax wealth and land. And I push on them here.
This is a conversation, then, about the political economy of the next technological age. Some of it is speculative, of course, but some of it isn’t. That shift of power and wealth is already underway. Altman is proposing an answer: a move toward taxing land and wealth, and distributing it to all. We talk about that idea, but also the political economy behind it: Are the people gaining all this power and wealth really going to offer themselves up for more taxation? Or will they fight it tooth-and-nail?
We also discuss who is funding the A.I. revolution, the business models these systems will use (and the dangers of those business models), how A.I. would change the geopolitical balance of power, whether we should allow trillionaires, why the political debate over A.I. is stuck, why a pro-technology progressivism would also need to be committed to a radical politics of equality, what global governance of A.I. could look like, whether I’m just “energy flowing through a neural network,” and much more.
(You can also listen to the whole conversation by following “The Ezra Klein Show” on Apple, Spotify, Google or wherever you get your podcasts.)

Thursday, January 07, 2021

Are we the cows of the future?

One of the questions posed by Yuval Harari in his writing on our possible futures is "What are we to do with all these humans who are, except for a small technocratic elite, no longer required as the means of production?" Esther Leslie, a professor of political aesthetics at Birkbeck College, University of London, has written an essay on this issue, pointing out that our potential futures in the pastures of digital dictatorship — crowded conditions, mass surveillance, virtual reality — are already here. You should read her essay, and I pass on just a few striking clips of text:
...Cows’ bodies have historically served as test subjects — laboratories of future bio-intervention and all sorts of reproductive technologies. Today cows crowd together in megafarms, overseen by digital systems, including facial- and hide-recognition systems. These new factories are air-conditioned sheds where digital machinery monitors and logs the herd’s every move, emission and production. Every mouthful of milk can be traced to its source.
And it goes beyond monitoring. In 2019 on the RusMoloko research farm near Moscow, virtual reality headsets were strapped onto cattle. The cows were led, through the digital animation that played before their eyes, to imagine they were wandering in bright summer fields, not bleak wintry ones. The innovation, which was apparently successful, is designed to ward off stress: The calmer the cow, the higher the milk yield.
A cow sporting VR goggles is comedic as much as it is tragic. There’s horror, too, in that it may foretell our own alienated futures. After all, how different is our experience? We submit to emotion trackers. We log into biofeedback machines. We sign up for tracking and tracing. We let advertisers’ eyes watch us constantly and mappers store our coordinates.
Could we, like cows, be played by the machinery, our emotions swayed under ever-sunny skies, without us even knowing that we are inside the matrix? Will the rejected, unemployed and redundant be deluded into thinking that the world is beautiful, a land of milk and honey, as they interact minimally in stripped-back care homes? We may soon graze in the new pastures of digital dictatorship, frolicking while bound.
Leslie then describes the ideas of German philosopher and social critic Theodor Adorno:
Against the insistence that nature should not be ravished by technology, he argues that perhaps technology could enable nature to get what “it wants” on this sad earth. And we are included in that “it.”...Nature, in truth, is not just something external on which we work, but also within us. We too are nature.
For someone associated with the abstruseness of avant-garde music and critical theory, Adorno was surprisingly sentimental when it came to animals — for which he felt a powerful affinity. It is with them that he finds something worthy of the name Utopia. He imagines a properly human existence of doing nothing, like a beast, resting, cloud gazing, mindlessly and placidly chewing cud.
To dream, as so many Utopians do, of boundless production of goods, of busy activity in the ideal society reflects, Adorno claimed, an ingrained mentality of production as an end in itself. To detach from our historical form adapted solely to production, to work against work itself, to do nothing in a true society in which we embrace nature and ourselves as natural might deliver us to freedom.
Rejecting the notion of nature as something that would protect us, give us solace, reveals us to be inextricably within and of nature. From there, we might begin to save ourselves — along with everything else.

Tuesday, October 27, 2020

The end of an expanding epidemic cannot be precisely forecast

A sobering analysis from Castro et al.:  

Significance

Susceptible–infected–removed (SIR) models and their extensions are widely used to describe the dynamics of infection spreading. Certain generic features of epidemics are well-illustrated by these models, which can be remarkably good at reproducing empirical data through suitably chosen parameters. However, this does not assure a good job anticipating the forthcoming stages of the process. To illustrate this point, we accurately describe the propagation of COVID-19 in Spain using one such model and show that predictions for its subsequent evolution are disparate, even contradictory. The future of ongoing epidemics is so sensitive to parameter values that predictions are only meaningful within a narrow time window and in probabilistic terms, much as what we are used to in weather forecasts.
Abstract
Epidemic spread is characterized by exponentially growing dynamics, which are intrinsically unpredictable. The time at which the growth in the number of infected individuals halts and starts decreasing cannot be calculated with certainty before the turning point is actually attained; neither can the end of the epidemic after the turning point. A susceptible–infected–removed (SIR) model with confinement (SCIR) illustrates how lockdown measures inhibit infection spread only above a threshold that we calculate. The existence of that threshold has major effects in predictability: A Bayesian fit to the COVID-19 pandemic in Spain shows that a slowdown in the number of newly infected individuals during the expansion phase allows one to infer neither the precise position of the maximum nor whether the measures taken will bring the propagation to the inhibition regime. There is a short horizon for reliable prediction, followed by a dispersion of the possible trajectories that grows extremely fast. The impossibility to predict in the midterm is not due to wrong or incomplete data, since it persists in error-free, synthetically produced datasets and does not necessarily improve by using larger datasets. Our study warns against precise forecasts of the evolution of epidemics based on mean-field, effective, or phenomenological models and supports that only probabilities of different outcomes can be confidently given.
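The forecasting fragility the authors describe is easy to reproduce. Below is a minimal Python sketch, my own toy rather than the authors' SCIR model and with invented parameter values: two SIR trajectories whose transmission rates differ by only two percent agree closely at first, then separate sharply as the epidemic peak approaches.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR dynamics with S, I, R as population fractions."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t = np.linspace(0, 120, 121)                     # days
y0 = [0.999, 0.001, 0.0]                         # initial S, I, R
base = odeint(sir, y0, t, args=(0.30, 0.10))     # R0 = 3.0
pert = odeint(sir, y0, t, args=(0.294, 0.10))    # beta lowered by 2%

for day in (20, 40, 60, 80):
    print(f"day {day}: infected fraction {base[day, 1]:.4f} vs {pert[day, 1]:.4f}")
```

Both parameterizations fit the early data about equally well, which is exactly why, as the abstract argues, a good early fit licenses only short-horizon, probabilistic predictions.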

Monday, October 05, 2020

Facing major changes that are a predictable and integral part of life.

I pass on clips from another of Arthur Brooks' biweekly articles on "How to Build a Life." Its discussion of major life changes begins with the obvious major transition that is being forced upon most of us by the COVID-19 pandemic.
We have been awakening to the reality that the coronavirus pandemic is not a temporary affliction, but an involuntary transition from one way of life to another. Our jobs and personal lives are shifting and, in many cases, will never fully return to “normal.” ...You may never go back to work like before. Dating may never be the same. Your alma mater might go broke and disappear. Will you hug your friends or even shake hands as much as you used to? Perhaps not.
...Even when a transition is completely voluntary, it can be the source of intense suffering, because it involves adapting to new surroundings and changing your self-conception.
If we understand transitions properly, however, we can curb our natural tendency to fight against them—a futile battle, given their inevitability. Indeed, with a shift in mindset, we can make transitions into a source of meaning and transcendence.
Psychologists call the state of being in transition “liminality” - you are neither in the state you left nor completely in your new state, at least not mentally. This provokes something of an identity crisis - it raises the question “Who am I?” - which can be emotionally destabilizing.
After interviewing hundreds of people about their life transitions, author Bruce Feiler found that a major change in life occurs, on average, every 12 to 18 months. Huge ones happen three to five times in each person’s life. Some are voluntary and joyful, such as getting married or having a child. Others are involuntary and unwelcome, such as unemployment or life-threatening illness.
...here’s the good news: Even difficult, unwanted transitions are usually seen differently in retrospect than in real time... research  shows that we tend to see past events—even unwanted ones—as net positives over time. Though our brains have a tendency to focus on negative emotions in the present, over the years unpleasant feelings fade more than pleasant feelings do, a phenomenon known as “fading affect bias.”
One of the things we learn by not resisting challenging transitions is how to cope with subsequent life changes - a sense of meaning gained through change makes the rest of life seem more stable.
Difficult periods can also stimulate innovation and ingenuity. A large amount of literature  talks about “post-traumatic growth,” in which people derive long-term benefits from painful experiences, including more appreciation for life, richer relationships, greater resilience, and deeper spirituality. Another manifestation of this growth, according to some newer scholarship, is heightened creativity.
Life changes are painful, but inevitable. And as hard as they may be, we only make things harder—and risk squandering the benefits and lessons they can bring—when we work against them instead of with them...those who benefit the most from painful periods are those who spend time experiencing and processing them. The right strategy is to accept transitions as an integral part of life, and lean into them.

Thursday, August 27, 2020

A brief history of risk

Li, Hills, and Hertwig offer an open access review with the title of this post. From their introductory paragraph:
...First, we examined how the frequency of the word risk has changed over historical time. Is the construct of risk playing an ever-increasing role in the public discourse, as the sociological notion of a ‘risk society’ suggests? Second, we investigated how the sentiments for the words co-occurring with risk have changed. Are the connotations of risk becoming increasingly ominous? Third, how has the meaning of risk changed relative to close associates such as danger and hazard? Is risk more subject to semantic change? Finally, we decompose the construct of risk into the specific topics with which it has been associated and track those topics over historical time. This brief history of the semantics of risk reveals new and surprising insights—a fourfold increase in frequency, increasingly negative sentiment, a semantic drift toward forecasting and prevention, and a shift away from war toward chronic disease—reflecting the conceptual evolution of risk in the archeological records of public discourse.
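Their second question, about the sentiment of words that co-occur with risk, can be approximated with a simple windowed co-occurrence score. The sketch below is a hypothetical miniature: the four-word valence lexicon and two-document corpus are my stand-ins for the far larger resources the authors used.

```python
# Toy valence lexicon; real analyses use large rated word lists.
VALENCE = {"danger": -0.8, "loss": -0.6, "opportunity": 0.5, "manage": 0.2}

def risk_context_sentiment(docs_by_year, window=3):
    """Average valence of words within `window` positions of 'risk'."""
    out = {}
    for year, docs in docs_by_year.items():
        scores = []
        for doc in docs:
            words = doc.lower().split()
            for i, w in enumerate(words):
                if w == "risk":
                    context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
                    scores += [VALENCE[c] for c in context if c in VALENCE]
        out[year] = sum(scores) / len(scores) if scores else None
    return out

corpus = {1900: ["we manage the risk as an opportunity"],
          2000: ["danger and loss surround the risk we now face"]}
print(risk_context_sentiment(corpus))  # sentiment drops from 0.35 to -0.6
```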

Tuesday, July 28, 2020

“Design fiction” skirts reality to provoke discussion and debate

I want to suggest that readers have a look at David Adams' Science and Culture essay in the June 16 issue of PNAS. Here is its beginning:
In October 2015, researchers presented an unusual paper at a computer science conference in London. The paper described the promising results of a pilot project in which a local community used surveillance drones to enforce car parking restrictions and to identify dog owners who failed to clean up after their pets. Controlled by four elderly retirees, the drones buzzed around the city and directed council officials on the ground.
The paper and its accompanying video generated lively discussion about the ethics and regulation of drone use among delegates at the CHI PLAY conference. But there was a catch: The paper, the video, and the pilot scheme were fictional, as the researchers admitted at the end of both the paper and the presentation.
The researchers had invented the scenario as a way to focus attention on how drone technology—a topic of study for some of the people in the room—could shape and change society. The team thought that presenting the idea as if it were real—for example, showing familiar street signs in the video warning drivers about a drone-controlled zone—would provoke discussion about a future in which such use of technology was considered mundane.
The practice is called design fiction. Originally used in product design, the approach is finding increasing use in scientific and medical fields as a way to explore the possible consequences of technological development. These projects are not so much experiments designed to test a hypothesis as they are orchestrated scenarios designed to provoke forward-thinking discussion and debate. From climate science and artificial intelligence to wearable technologies and healthcare, researchers are creating and sharing often dystopian tales about the near future. And they’re tracking people’s reactions to these scenarios to help reshape the way researchers conceive the technology they are developing.

Thursday, June 25, 2020

The University Is Like a CD in the Streaming Age

Having lectured in university classrooms for about 40 years, but not since 2005, I am blown away by changes in the academy that technology has wrought since then. In an Atlantic article, Michael D. Smith, Professor of information technology and marketing at Carnegie Mellon University, describes how colleges, like the entertainment industry, will need to embrace digital services in order to survive. I pass on a few clips, and suggest you read the whole article.
Universities have long been remarkably stable institutions — so stable that in 2001, by one account, they comprised an astonishing 70 of the 85 institutions in the West that have endured in recognizable form since the 1520s...That stability has ... bred overconfidence, overpricing, and an overreliance on business models tailored to a physical world. Like ... entertainment executives did, many of us in higher education dismiss the threats that digital technologies pose to the way we work. We diminish online-learning and credentialing platforms such as Khan Academy, Kaggle, and edX as poor substitutes for the “real thing.” We can’t imagine that “our” students would ever want to take a DIY approach to their education instead of paying us for the privilege of learning in our hallowed halls. We can’t imagine “our” employers hiring someone who doesn’t have one of our respected degrees.
But we’re going to have to start thinking differently.
...this past semester, the coronavirus pandemic transformed distance learning from a quaint side product that few elite schools took seriously to a central part of our degree-granting programs. Arguments for the inherent superiority of the residential college experience will be less convincing now that we’ve conferred the same credentials—and charged the same tuition—for education delivered remotely.
Do students think their pricey degrees [from prestigious private universities] are worth the cost when delivered remotely?
The Wall Street Journal asked that question in April, and one student responded with this zinger: “Would you pay $75,000 for front-row seats to a Beyoncé concert and be satisfied with a livestream instead?”
...the core mission of higher education...in my view...is simple: As educators, we strive to create opportunities for as many students as possible to discover and develop their talents, and to use those talents to make a difference in the world.
By that measure, our current model falls short. Elite colleges talk about helping our students flourish in society, but our tuition prices leave many of them drowning in debt—or unable to enroll in the first place. We talk about creating opportunities for students, but we measure our success based on selectivity, which is little more than a celebration of the number of students we exclude from the elite-campus experience. We talk about preparing students for careers after graduation, but a 2014 Gallup survey found that only 11 percent of business leaders believed “college graduates have the skills and competencies that their workplaces need.” We talk about creating diverse campuses, but, as recent admissions scandals have made painfully clear, our admissions processes overwhelmingly favor the privileged few.
What if new technologies could allow us to understand the varied backgrounds, goals, and learning styles of our students—and provide educational material customized to their unique needs? What if we could deliver education to students via on-demand platforms that allowed them to study whenever, wherever, and whatever they desired, instead of requiring them to conform to the “broadcast” schedule of today’s education model? What if the economies of scale available from digital delivery allowed us to radically lower the price of our educational resources, creating opportunities for learners we previously excluded from our finely manicured quads? Might we discover, as the entertainment industry has, a wealth of talented individuals with valuable contributions to make who just didn’t fit into the rigid constraints of our old model?
I believe we will, but that doesn’t mean the residential university will go away. Indeed, these changes may allow universities to jettison “anti-intellectual” professional-degree programs in favor of a renewed focus on a classical liberal-arts education. But as this happens, we might discover that the market for students interested in spending four years and thousands of dollars on a broad foundation in the humanities is smaller than we believe—certainly not large enough to support the 5,000 or so college campuses in the United States today. Soon, residential colleges may experience a decline similar to that of live theaters after the advent of movies and broadcast television. Broadway and local playhouses still exist, but they are now considered exclusive and expensive forms of entertainment, nowhere near the cultural force they once were.
But remember, just because new technology changed the way entertainment was delivered doesn’t mean it impeded the industry’s underlying mission. Instead of destroying TV, movies, and books, new technologies have produced an explosion in creative output, delivered through the convenience, personalization, and interactivity of Kindle libraries, Netflix recommendations, and Spotify playlists. Despite—or maybe because of—the digital disruption we’ve recently lived through, we’re now enjoying a golden age of entertainment.
Whether we like it or not, big changes are coming to higher education. Instead of dismissing them or denying that they’re happening, let’s embrace them and see where they can take us. We have a chance today to reimagine an old model that has fallen far behind the times. If we do it right, we might even usher in a new golden age of education.

Tuesday, June 23, 2020

10 reasons why a 'Greater Depression' for the 2020s is inevitable.

Try to hang on to any threads of optimism you might still have as I summarize, from an article in The Guardian by Nouriel Roubini, 10 ominous and risky trends:
-Soaring levels of public and private debts, and their corollary risks of defaults, all but ensure a more anemic recovery than the one that followed the Great Recession a decade ago.
-The demographic timebomb in advanced economies with ageing societies means more public spending (and debt) allocated to health systems.
-Deflation risk is increasing as the COVID crisis creates massive unused capacity, unemployment, and a collapse in commodity (oil, metals) prices, making debt deflation likely and increasing the risk of insolvency.
-As central banks run monetised fiscal deficits to avoid depression and deflation, currency will be debased and accelerated deglobalisation and renewed protectionism will make stagflation all but inevitable.
-Income and wealth gaps will widen as production is re-shored to guard against future supply-chain shocks, accelerating the rate of automation and downward pressure on wages, further fanning the flames of populism, nationalism, and xenophobia.
-The current deglobalisation trend, accelerated by the pandemic, will lead to tighter restrictions on the movement of goods, services, capital, labour, technology, data, and information.
-A populist backlash against democracy will reinforce this trend, as blue collar and middle class workers become more susceptible to populist rhetoric, scapegoating foreigners for the crisis, and supporting restriction of migration and trade.
-A geostrategic standoff between the US and China will cause decoupling in trade, technology, investment, data, and monetary arrangements to intensify.
-This diplomatic breakup will set the stage for a new cold war between the US and its rivals (China, Russia, Iran, North Korea), and because technology is the key weapon for controlling future industries and pandemics, the US private tech sector will be increasingly integrated into the national-security-industrial complex.
-A final risk that cannot be ignored is environmental disruption, which, as the Covid-19 crisis has shown, can wreak far more economic havoc than a financial crisis.
These 10 risks, already looming large before Covid-19 struck, now threaten to fuel a perfect storm that sweeps the entire global economy into a decade of despair. By the 2030s, technology and more competent political leadership may be able to reduce, resolve, or minimise many of these problems, giving rise to a more inclusive, cooperative, and stable international order. But any happy ending assumes that we find a way to survive the coming Greater Depression.

Friday, May 22, 2020

AI for social good: Well meaning gobbledegook

I pass on this link to a my-eyes-glaze-over open access perspective on international efforts to use artificial intelligence for social good. The effort is a noble one, and indeed tries to deal with a very complex domain. Here is the abstract, which introduces the first of an array of acronyms:
Advances in machine learning (ML) and artificial intelligence (AI) present an opportunity to build better tools and solutions to help address some of the world’s most pressing challenges, and deliver positive social impact in accordance with the priorities outlined in the United Nations’ 17 Sustainable Development Goals (SDGs). The AI for Social Good (AI4SG) movement aims to establish interdisciplinary partnerships centred around AI applications towards SDGs. We provide a set of guidelines for establishing successful long-term collaborations between AI researchers and application-domain experts, relate them to existing AI4SG projects and identify key opportunities for future AI applications targeted towards social good.
Added note: I almost never respond positively to emails requesting that I link to a particular advocacy or commercial site, but I make an exception on receiving this morning email on 8/12/2020:
My name is Sean from Don’t Panic. We’re a Creative Advertising agency in London passionate about creating cause-related advertising campaigns for charities and brands.
I am a fan of your website and I noticed that you linked to a Nature resource discussing social good on this page https://mindblog.dericbownds.net/2020/05/ai-for-social-good-well-meaning.html.
I have put together an article exploring what social good is. I think that your readers would find this very useful. You can find it here: https://www.dontpaniclondon.com/what-is-social-good/
I’d be very grateful if you considered linking to my article in addition to the Nature resource.

Wednesday, May 20, 2020

How Social Networks are destroying democracy.

In the December 2019 issue of The Atlantic, Haidt and Rose-Stockwell offer a must-read article titled "The Dark Psychology of Social Networks." It begins with a thought experiment that asks us to imagine what chaos would result if God became bored and decided to double the gravitational constant. Birds would fall from the sky, buildings would collapse, etc.
Let’s rerun this thought experiment in the social and political world, rather than the physical one. The U.S. Constitution was an exercise in intelligent design. The Founding Fathers knew that most previous democracies had been unstable and short-lived. But they were excellent psychologists, and they strove to create institutions and procedures that would work with human nature to resist the forces that had torn apart so many other attempts at self-governance...James Madison wrote about his fear of the power of “faction,” by which he meant strong partisanship or group interest that “inflamed [men] with mutual animosity” and made them forget about the common good...The Constitution included mechanisms to slow things down, let passions cool, and encourage reflection and deliberation.
Madison’s design has proved durable. But what would happen to American democracy if, one day in the early 21st century, a technology appeared that—over the course of a decade—changed several fundamental parameters of social and political life? What if this technology greatly increased the amount of “mutual animosity” and the speed at which outrage spread? Might we witness the political equivalent of buildings collapsing, birds falling from the sky, and the Earth moving closer to the sun?
What Social Media Changed....The problem may not be connectivity itself but rather the way social media turns so much communication into a public performance...social psychologist Mark Leary coined the term sociometer to describe the inner mental gauge that tells us, moment by moment, how we’re doing in the eyes of others...Social media, with its displays of likes, friends, followers, and retweets, has pulled our sociometers out of our private thoughts and posted them for all to see...Human beings evolved to gossip, preen, manipulate, and ostracize. We are easily lured into this new gladiatorial circus, even when we know that it can make us cruel and shallow...In other words, social media turns many of our most politically engaged citizens into Madison’s nightmare: arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant while their public sociometer displays how far their creations have traveled...Citizens are now more connected to one another, on platforms that have been designed to make outrage contagious.
Is There Any Way Back?...Social media has changed the lives of millions of Americans with a suddenness and force that few expected...citizens are now more connected to one another, in ways that increase public performance and foster moral grandstanding, on platforms that have been designed to make outrage contagious, all while focusing people’s minds on immediate conflicts and untested ideas, untethered from traditions, knowledge, and values that previously exerted a stabilizing effect. This, we believe, is why many Americans—and citizens of many other countries, too—experience democracy as a place where everything is going haywire...It doesn’t have to be this way...Many researchers, legislators, charitable foundations, and tech-industry insiders are now working together in search of ... improvements. We suggest three types of reform that might help:
(1) Reduce the frequency and intensity of public performance. If social media creates incentives for moral grandstanding rather than authentic communication, then we should look for ways to reduce those incentives. One such approach already being evaluated by some platforms is “demetrication,” the process of obscuring like and share counts so that individual pieces of content can be evaluated on their own merit, and so that social-media users are not subject to continual, public popularity contests.
(2) Reduce the reach of unverified accounts. Bad actors—trolls, foreign agents, and domestic provocateurs—benefit the most from the current system, where anyone can create hundreds of fake accounts and use them to manipulate millions of people. Social media would immediately become far less toxic, and democracies less hackable, if the major platforms required basic identity verification before anyone could open an account—or at least an account type that allowed the owner to reach large audiences. (Posting itself could remain anonymous, and registration would need to be done in a way that protected the information of users who live in countries where the government might punish dissent. For example, verification could be done in collaboration with an independent nonprofit organization.)
(3) Reduce the contagiousness of low-quality information. Social media has become more toxic as friction has been removed. Adding some friction back in has been shown to improve the quality of content. For example, just after a user submits a comment, AI can identify text that’s similar to comments previously flagged as toxic and ask, “Are you sure you want to post this?” This extra step has been shown to help Instagram users rethink hurtful messages. The quality of information that is spread by recommendation algorithms could likewise be improved by giving groups of experts the ability to audit the algorithms for harms and biases.
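As a toy illustration of the similarity check described in reform (3), here is a short Python sketch that compares a draft comment against comments previously flagged as toxic and, above an arbitrary threshold, returns the "Are you sure?" prompt. The flagged examples, the threshold, and the use of TF-IDF cosine similarity are all my assumptions; the article does not say what models platforms actually use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical examples standing in for a platform's moderation history.
FLAGGED = ["you are an idiot and everyone hates you",
           "nobody wants you here just leave"]

vectorizer = TfidfVectorizer().fit(FLAGGED)
flagged_vecs = vectorizer.transform(FLAGGED)

def precheck(draft, threshold=0.35):
    """Return a friction prompt if the draft resembles flagged comments."""
    similarity = cosine_similarity(vectorizer.transform([draft]), flagged_vecs).max()
    if similarity >= threshold:
        return f"Are you sure you want to post this? (similarity {similarity:.2f})"
    return "ok to post"

print(precheck("everyone hates you, just leave"))      # triggers the prompt
print(precheck("thanks, that was a helpful thread"))   # posts normally
```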
If we want our democracy to succeed—indeed, if we want the idea of democracy to regain respect in an age when dissatisfaction with democracies is rising—we’ll need to understand the many ways in which today’s social-media platforms create conditions that may be hostile to democracy’s success. And then we’ll have to take decisive action to improve social media.

Thursday, May 07, 2020

Future of the Human climate niche

Sobering analysis from Xu et al. :

Significance
We show that for thousands of years, humans have concentrated in a surprisingly narrow subset of Earth’s available climates, characterized by mean annual temperatures around ∼13 °C. This distribution likely reflects a human temperature niche related to fundamental constraints. We demonstrate that depending on scenarios of population growth and warming, over the coming 50 y, 1 to 3 billion people are projected to be left outside the climate conditions that have served humanity well over the past 6,000 y. Absent climate mitigation or migration, a substantial part of humanity will be exposed to mean annual temperatures warmer than nearly anywhere today.
Abstract
All species have an environmental niche, and despite technological advances, humans are unlikely to be an exception. Here, we demonstrate that for millennia, human populations have resided in the same narrow part of the climatic envelope available on the globe, characterized by a major mode around ∼11 °C to 15 °C mean annual temperature (MAT). Supporting the fundamental nature of this temperature niche, current production of crops and livestock is largely limited to the same conditions, and the same optimum has been found for agricultural and nonagricultural economic output of countries through analyses of year-to-year variation. We show that in a business-as-usual climate change scenario, the geographical position of this temperature niche is projected to shift more over the coming 50 y than it has moved since 6000 BP. Populations will not simply track the shifting climate, as adaptation in situ may address some of the challenges, and many other factors affect decisions to migrate. Nevertheless, in the absence of migration, one third of the global population is projected to experience a MAT >29 °C currently found in only 0.8% of the Earth’s land surface, mostly concentrated in the Sahara. As the potentially most affected regions are among the poorest in the world, where adaptive capacity is low, enhancing human development in those areas should be a priority alongside climate mitigation.

Friday, February 14, 2020

Mindfulness as an antidote to the intrusions of artificial intelligence?

I want to point to a very interesting New Yorker Magazine article by Ian Parker describing the life and ideas of Yuval Harari, whose work has been the subject of numerous MindBlog posts. A series of five sequential MindBlog posts, starting on 12/31/18, presented an abstracted version of his book "21 Lessons for the 21st Century". Here are some clips from the article that especially caught my attention:
His proposition, often repeated, is that humanity faces three primary threats: nuclear war, ecological collapse, and technological disruption. Other issues that politicians commonly talk about—terrorism, migration, inequality, poverty—are lesser worries, if not distractions... Harari highlights the technological one...“Think about a situation where somebody in Beijing or San Francisco knows what every citizen in Israel is doing at every moment—all the most intimate details about every mayor, member of the Knesset, and officer in the Army, from the age of zero.” He added, “Those who will control the world in the twenty-first century are those who will control data.”
The aspect of a technological dystopia that most preoccupies him—losing mental autonomy to A.I.—can be at least partly countered, in his view, by citizens cultivating greater mindfulness. He collects examples of A.I. threats. He refers, for instance, to recent research suggesting that it’s possible to measure people’s blood pressure by processing video of their faces.
...his writing underscores the importance of equanimity. In a section of “Sapiens” titled “Know Thyself,” Harari describes how the serenity achieved through meditation can be “so profound that those who spend their lives in the frenzied pursuit of pleasant feelings can hardly imagine it.” “21 Lessons” includes extended commentary on the life of the Buddha, who “taught that the three basic realities of the universe are that everything is constantly changing, nothing has any enduring essence, and nothing is completely satisfying.” Harari continues, “You can explore the furthest reaches of the galaxy, of your body, or of your mind, but you will never encounter something that does not change, that has an eternal essence, and that completely satisfies you... ‘What should I do?’ ask people, and the Buddha advises, ‘Do nothing. Absolutely nothing.’ ”
According to Harari's book “Sapiens,” progress is basically an illusion; the Agricultural Revolution was “history’s biggest fraud,” and liberal humanism is a religion no more founded on reality than any other...In the schema of “Sapiens,” money is a “fiction,” as are corporations and nations. Harari uses “fiction” where another might say “social construct.” (He explained to me, “I would almost always go for the day-to-day word, even if the nuance of the professional word is a bit more accurate.”) Harari further proposes that fictions require believers, and exert power only as long as a “communal belief” in them persists. Every social construct, then, is a kind of religion: a declaration of universal human rights is not a manifesto, or a program, but the expression of a benign delusion; an activity like using money, or obeying a stoplight, is a collective fantasy, not a ritual.

Wednesday, October 23, 2019

The Metamorphosis of the Western Soul

I want to point to an article by Will Storr, "The Metamorphosis of the Western Soul," that has been languishing for over a year in my list of references that might become the basis of a MindBlog post. Storr presents a nice distillation of how, between 1965 and 1985, the Western self was transformed; his basic point is that economic forces are the dominant reason for these changes.
We turned from anti-materialistic, stick-it-to-the-Man hippies into greed-is-good yuppies... While the origins of such changes cannot be reduced to a single source, I believe we can point to a dominant one: the economy. In the early 1980s, President Ronald Reagan and the British Prime Minister Margaret Thatcher rewrote the rules by which we had once lived. And that, with stunning rapidity, changed who we were.
Storr proceeds to review the history of how citizens of the individualistic West and the collectivist East came to view the world through different filters - fundamental cognitive differences that are largely adaptations to different physical landscapes. But there is plasticity:
Humans are born incomplete. The brain absorbs huge amounts of essential information throughout childhood and adolescence, which it uses to carry on building who we are. It’s as if the brain asks a single, vital question: Who do I have to be, in this place, to thrive? If it was a boastful hustler in ancient Greece and a humble team-player in ancient China, then who is it in the West today?
The answer is a neoliberal...After the economic chaos of the 1970s, it was decided that the United States and Britain had become too collective. Previous decades had seen the introduction of the New Deal, which included the Social Security Act, strict regulations on banking and business, and the rising power of the unions. This collectively tilted economy sired a collectively tilted people...For Mr. Reagan and Mrs. Thatcher, saving ourselves meant rediscovering our individualist roots.
They cut taxes and regulations; they battled unions; they shrunk the welfare state; they privatized assets and weakened the state’s safety nets. They pursued the neoliberal dream of globalization — one free market that covered the earth. As much of human life as possible was to become a competition of self versus self...In 1981, Margaret Thatcher said “Economics are the method: The object is to change the soul.” And that’s precisely what happened.
Before 2008, it felt as if neoliberalism was basically working for most people. But since the crash, millions have come to see the system as broken...We have seen the neoliberal Hillary Clinton falter and the antiglobalist Donald Trump triumph. Britain’s Brexit was secured by antiglobalist arguments...The perception of a broken, rigged economy has left us angry and increasingly tribal, which might explain this recent trend toward “us” over the narcissistic “me.”
If this is correct, it’s yet more evidence that who we are is powerfully influenced by where we are. Humans want to get along and get ahead and will become whoever they need to be in order to do so. In the 21st century, those rules are no longer set by our physical landscape. Today, the deep and enormously powerful controlling force is the economy.

Monday, September 16, 2019

Psychological adaptation to the apocalypse - meditate, or just be happy?

In this post, not exactly an upper, I point first to two in-your-face articles on why we ought to be afraid, very afraid, of humanity's future technological and ecological environment, and then note two pieces of writing on psychological adaptations that might damp down the full activation of our brains' fear machinery.

Novelist Jonathan Franzen delivers a screed that is very effective at scaring the bejesus out of us. His basic argument: “The climate apocalypse is coming. To prepare for it, we need to admit that we can’t prevent it.” A chorus of criticism has greeted Franzen's article: "Franzen is wrong on the science, on the politics, and on the psychology of human behavior as it pertains to climate change." (See also Chrobak.)

And, for alarm about our looming digital environment, the 6,000-word essay by Glenn S. Gerstell, general counsel of the National Security Agency, summarized by Warzel, should do the job. The first nation to crack quantum computing (China or the US) will rule the world!

So, how do we manage to wake up cheerful in the morning? Futurist Yuval Harari offers his approach in Chapter 21 of his book "21 Lessons for the 21st Century" by describing his experience of learning to meditate, starting with the initial instructions (to observe your process of breathing) in his first Vipassana meditation course. He now meditates two hours every day.
The point is that meditation is a tool for observing the mind directly...For at least two hours a day I actually observe reality as it is, while for the other twenty-two hours I get overwhelmed by emails and tweets and cute-puppy videos. Without the focus and clarity provided by this practice, I could not have written Sapiens or Homo Deus.
A glimmer of hopefulness can also be obtained by reading books in the vein of Pinker's "Enlightenment Now," which documents, again and again and across many areas, how dire predictions about the future have not come to pass. The injunction here would be to be optimistic - not a bad idea, given the recent PNAS article by Lee et al. documenting that the lifespan of optimistic people is, on average, 11 to 15% longer.

Monday, September 09, 2019

Training to reduce cognitive biases.

Sellier et al. show that students assigned to solve a business case exercise are less likely to choose an inferior confirmatory solution when they have previously undergone a debiasing-training intervention:
The primary objection to debiasing-training interventions is a lack of evidence that they improve decision making in field settings, where reminders of bias are absent. We gave graduate students in three professional programs (N = 290) a one-shot training intervention that reduces confirmation bias in laboratory experiments. Natural variance in the training schedule assigned participants to receive training before or after solving an unannounced business case modeled on the decision to launch the Space Shuttle Challenger. We used case solutions to surreptitiously measure participants’ susceptibility to confirmation bias. Trained participants were 29% less likely to choose the inferior hypothesis-confirming solution than untrained participants. Analysis of case write-ups suggests that a reduction in confirmatory hypothesis testing accounts for their improved decision making in the case. The results provide promising evidence that debiasing-training effects transfer to field settings and can improve decision making in professional and private life.
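
To unpack the headline number: "29% less likely" is a relative, not an absolute, reduction. A tiny sketch with invented group counts (the paper's raw tallies aren't reproduced here) shows the arithmetic:

```python
# Invented counts, for illustration only - the paper reports a 29% relative
# reduction across N = 290 participants, not these exact numbers.
untrained_inferior, untrained_n = 70, 145
trained_inferior, trained_n = 50, 145

p_untrained = untrained_inferior / untrained_n  # rate of choosing the inferior solution
p_trained = trained_inferior / trained_n

relative_reduction = 1 - p_trained / p_untrained
print(f"Untrained: {p_untrained:.1%}  Trained: {p_trained:.1%}")
print(f"Relative reduction: {relative_reduction:.0%}")  # 29% with these invented counts
```

So, with these made-up numbers, training would move the absolute rate from about 48% to about 34% - a reminder that relative reductions can sound larger or smaller than the underlying shift.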