
Monday, September 29, 2025

Platforms like X and TikTok are destroying our culture.

 I pass on most of a recent Sam Harris piece:

Charlie Kirk was a political prodigy on the Right and adored by a younger generation of Republicans...His murder was an especially terrible crime for several reasons—the fact that it occurred on a college campus in front of thousands of students, the manner in which it was immediately broadcast on social media, the presence of his wife and children at the scene, and the unavoidable sense that both the causes and consequences had to be political. Whatever the killer’s motives, he dropped a match onto an information landscape that was ready to burn.

...platforms like X and TikTok are destroying our culture. No metaphor does the problem justice. I’ve compared social media to a dangerous psychological experiment, a hallucination machine, a funhouse mirror, a digital sewer—but nothing captures the ludicrous insults, moral injuries, and delusions that millions of us avidly produce and consume online. If the medium is the message, the message is mass psychosis—and it will send us careening from one political emergency to the next. The fact that some of the most deranging and divisive content is being created (or amplified) by foreign adversaries—and that we have literally built and monetized their capacity to do this—beggars belief. We are poisoning ourselves and inviting others to poison us.

More disturbing still, the effects are self-reinforcing. Part of the reason for this is algorithmic—these platforms have been designed to raise the amplitude on our tribal hatreds, because this maximizes engagement. But the algorithms in our brains are little better: Seeing another person (or what appears to be another person) gleefully dance on a slain man’s grave, it is easy to conclude that they represent some significant faction of American society—and to feel the outrage appropriate to such a terrible discovery.

President Trump is a creature of social media, and his presidency would be unthinkable without it. Unfortunately, his address to the nation in response to Kirk’s murder evinced all the wisdom of an angry tweet. Rather than speak in a way that would be expected of a normal president, he produced a dangerous piece of gaslighting—suggesting that the threat of political violence in America came exclusively from the Left and ignoring recent examples of rightwing attacks, including those carried out in his name. Rather than calling for calm and unity, he accused his political opponents of being accessories to murder. And most ominously, he implied that the full power of the federal government would soon be turned against them.

It was the behavior of an arsonist, pretending to be a firefighter. Of course, some will insist that this observation just heaps more fuel on the fire. But serious criticism of President Trump and Trumpism isn’t part of the problem of hyperpolarization in America—no more than serious criticism of the far Left is.

When Elon Musk announced to his 225 million followers on X that “The Left is the party of murder,” he wasn’t describing our political reality, but he was greatly damaging it. And when he posted, “If they won’t leave us in peace, then our choice is fight or die,” he joined a deranged chorus of prominent people on the Right who seem committed to viewing Kirk’s murder as the first shot fired in a civil war.

And who, after all, are “they”?

No morally sane person, Left or Right, supports political assassination—or feels anything but horror over it.

Kirk’s killer is now in custody, and from the details that have been released, he doesn’t appear to be the far-Left golem conjured by the Right. He is a Utah native who grew up hunting with his Republican parents. We don’t yet know why he did what he did, but there is a very good chance that he represents no cause beyond his own mental illness. As for the frequency and character of political violence in America, we shouldn’t delude ourselves about it. It isn’t at all a common form of murder, nor is it more prevalent on the Left.

There is no “party of murder” in this country. And insisting that there is just adds energy to yet another moral panic. Social media amplifies extreme views as though they were representative of most Americans, and many of us are losing our sense of what other people are really like. Many seem completely unaware that their hold on reality is being steadily undermined by what they are seeing online, and that the business models of these platforms, as well as the livelihoods of countless “influencers,” depend on our continuing to gaze, and howl, into the digital abyss.

Get off social media.

Read good books and real journalism.

Find your friends.

And enjoy your life.

 

 

 

Wednesday, August 27, 2025

AI is a Mass-Delusion Event - and - Gen Z and the End of Predictable Progress

I want to recommend two articles whose titles together make up this post's title: the first by Charlie Warzel in The Atlantic and the second by Kyla Scanlon in her Substack newsletter. Here is the final portion of Warzel's essay:

What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?

The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures. 

Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane.
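
Warzel's description of what the models actually do, predicting and arranging "tokens of language to provide plausible responses to queries," can be made concrete in a few lines of code. The sketch below is a minimal, purely illustrative autoregressive sampling loop; the toy vocabulary and hand-made probability table are my own stand-ins for a trained model, not anything from the essay.

```python
import random

# Toy "language model": a hand-built table of next-token probabilities.
# Real LLMs learn these scores with billions of parameters, but the generation
# loop is the same idea: score candidate next tokens, pick one, append, repeat.
NEXT_TOKEN_PROBS = {
    "the":          {"model": 0.5, "answer": 0.3, "cure": 0.2},
    "model":        {"predicts": 0.7, "hallucinates": 0.3},
    "predicts":     {"the": 0.6, "plausibly": 0.4},
    "plausibly":    {"<eos>": 1.0},
    "hallucinates": {"<eos>": 1.0},
    "answer":       {"<eos>": 1.0},
    "cure":         {"<eos>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1], {"<eos>": 1.0})
        candidates, weights = zip(*dist.items())
        next_token = random.choices(candidates, weights=weights)[0]
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the model predicts plausibly" or "the model hallucinates"
```

Nothing in the loop understands anything; it only continues a sequence plausibly, which is the point Warzel is pressing on.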

 

Wednesday, August 20, 2025

A brain-computer interface that reads inner thoughts.

Inampudi describes work by Kunz et al., who isolated signals from a brain implant so that people with movement disorders could voice thoughts without trying to speak. Here are the highlights and summary of the work:

Highlights

• Attempted, inner, and perceived speech have a shared representation in motor cortex
• An inner-speech BCI decodes general sentences with improved user experience
• Aspects of private inner speech can be decoded during cognitive tasks like counting
• High-fidelity solutions can prevent a speech BCI from decoding private inner speech

Summary

Speech brain-computer interfaces (BCIs) show promise in restoring communication to people with paralysis but have also prompted discussions regarding their potential to decode private inner speech. Separately, inner speech may be a way to bypass the current approach of requiring speech BCI users to physically attempt speech, which is fatiguing and can slow communication. Using multi-unit recordings from four participants, we found that inner speech is robustly represented in the motor cortex and that imagined sentences can be decoded in real time. The representation of inner speech was highly correlated with attempted speech, though we also identified a neural “motor-intent” dimension that differentiates the two. We investigated the possibility of decoding private inner speech and found that some aspects of free-form inner speech could be decoded during sequence recall and counting tasks. Finally, we demonstrate high-fidelity strategies that prevent speech BCIs from unintentionally decoding private inner speech.
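
The real-time decoder in Kunz et al. is far more sophisticated than anything that fits here, but a toy sketch can show the basic shape of the problem: map a window of multi-unit firing rates to a word label. Everything below (the synthetic Poisson "firing rates," the 128-channel dimension, the scikit-learn logistic-regression classifier) is my own illustrative assumption, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for binned multi-unit activity:
# 200 trials x 128 electrode channels, each trial labeled with an imagined word.
WORDS = ["yes", "no", "water", "help"]
X = rng.poisson(lam=5.0, size=(200, 128)).astype(float)
y = rng.choice(len(WORDS), size=200)

# Inject a weak word-specific signature so the decoder has something to find.
for i, label in enumerate(y):
    X[i, label * 8:(label + 1) * 8] += 3.0

# Train on the first 150 trials, evaluate on the held-out 50.
decoder = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print(f"held-out decoding accuracy: {decoder.score(X[150:], y[150:]):.2f}")
print("decoded word for the last trial:", WORDS[decoder.predict(X[-1:])[0]])
```

The real study works from intracortical recordings, decodes whole sentences in real time, and adds safeguards (the "motor-intent" dimension and keyword gating) so private inner speech isn't decoded unintentionally.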

 

Monday, August 18, 2025

Polarization may be inherent in social media

Science news reports on a fascinating recent study suggesting that just the basic functions of social media—posting, reposting, and following—inevitably produce polarization. Here is the abstract of the article from Larooij and Törnberg: 

Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions? We address this question using a novel method - generative social simulation - that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting following-networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices - creating a 'social media prism' that distorts political discourse. We test six proposed interventions, from chronological feeds to bridging algorithms, finding only modest improvements - and in some cases, worsened outcomes. These results suggest that core dysfunctions may be rooted in the feedback between reactive engagement and network growth, raising the possibility that meaningful reform will require rethinking the foundational dynamics of platform architecture.  
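
The "generative social simulation" method is easy to picture in code: each agent is a persona-conditioned language model that decides whether to post, repost, or follow, and the following-network grows out of those reactive choices. Below is a minimal, hypothetical skeleton of that loop; the `llm_choose_action` function is a random stand-in for a real LLM call, and none of this is Larooij and Törnberg's actual implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str                        # e.g. a partisan leaning fed into the LLM prompt
    following: set = field(default_factory=set)

def llm_choose_action(agent, recent_feed):
    """Stand-in for an LLM call: given a persona and a recent feed, pick an action.
    A real generative social simulation would prompt a language model here."""
    if recent_feed and random.random() < 0.5:
        post = random.choice(recent_feed)
        # Agents mostly repost and follow voices aligned with their own persona.
        if post["persona"] == agent.persona or random.random() < 0.2:
            return "repost_and_follow", post
    return "post", {"author": agent.name, "persona": agent.persona}

agents = [Agent(f"a{i}", random.choice(["left", "right"])) for i in range(50)]
by_name = {a.name: a for a in agents}
feed = []

for _ in range(2000):
    agent = random.choice(agents)
    action, payload = llm_choose_action(agent, feed[-20:])
    feed.append(payload)                          # posting or reposting both amplify a voice
    if action == "repost_and_follow" and payload["author"] != agent.name:
        agent.following.add(payload["author"])    # the following-network grows from reactions

same_camp = sum(1 for a in agents for name in a.following
                if by_name[name].persona == a.persona)
total = sum(len(a.following) for a in agents) or 1
print(f"share of follows within the same camp: {same_camp / total:.2f}")
```

Even with this crude stand-in for engagement, most follows end up inside the agent's own camp, which is the echo-chamber dynamic the paper studies at far greater fidelity before testing its six interventions.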

Monday, November 25, 2024

Amazing.....the AI effect: Nearly 1 Billion Threats a Day

I have to pass on this piece from today's WSJ.  Makes me increasingly wonder when all of my financial savings held in electronic form in the cloud might vanish.....

*********

AI Effect: Amazon Sees Nearly 1 Billion Threats a Day

Amazon.com says it is seeing hundreds of millions more possible cyber threats across the web each day than it did earlier this year, a shift its security chief attributes in part to artificial intelligence.

Just as criminals have embraced AI, Amazon has turned to the technology to drastically scale up its threat-intelligence capabilities.

The company, given its presence online, can now view activity on around 25% of all IP addresses on the internet, it says, between its Amazon Web Services platform, its Project Kuiper satellite program and its other businesses, giving the company a sweeping view of hacker capabilities and techniques.

Amazon’s chief information security officer, CJ Moses, spoke with The Wall Street Journal on how the company is approaching threat intelligence in the AI era.

Prior to his current role, Moses ran security for Amazon Web Services, its cloud business, and before that investigated cybercrime at both the Federal Bureau of Investigation and the Air Force Office of Special Investigations.

Moses outlined how the company has built specialized tools using AI such as graph databases, which track threats and their relationships to each other; how that information has uncovered threats from nation-states that haven’t historically been known to have extensive cyber operations, and how its tools trick hackers into revealing their tactics.

He also discussed Amazon’s recent work with the U.S. Justice Department in taking down the platform used by cybercriminal group Anonymous Sudan to launch attacks on critical infrastructure globally.

This interview has been edited for length and clarity.

WSJ: How many attacks are you seeing these days?

C.J. Moses: We’re seeing billions of attempts coming our way. On average, we’re seeing 750 million attempts per day. Previously, we’d see about 100 million hits per day, and that number has grown to 750 million over six or seven months.

WSJ: Is that a sign hackers are using AI?

Moses: Without a doubt. Generative AI has provided access to those who previously didn’t have software-development engineers to do these things. Now, it’s more ubiquitous, such that normal humans can do things they couldn’t do before because they just ask the computer to do that for them.

We’re seeing a good bit of that, as well as the use of AI to increase the realness of phishing, and things like that. They’re still not there 100%. We still can find errors in every phishing message that goes out, but they’re getting cleaner.

WSJ: Are you applying AI on the defensive side as well?

Moses: When you have a large-scale environment, you need a large-scale system. We’ve created what is essentially a graph database that allows us to look at billions of interactions across the environment. That identifies, through machine learning, the things that we should be concerned about, and also the domains we’re seeing that could be problematic based upon past history as well as predictive analysis.
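
To make the "graph database" idea concrete: the underlying data structure is just nodes (IP addresses, domains, file hashes) and edges recording observed interactions, which lets an analyst walk outward from one known-bad indicator to everything connected to it. The sketch below uses networkx with made-up indicators; it illustrates the general technique, not Amazon's internal system, whose details aren't public beyond this interview.

```python
import networkx as nx

# Each node is an indicator (IP, domain, file hash); each edge is an observed interaction.
g = nx.Graph()
g.add_edge("203.0.113.7", "evil-login.example", kind="resolved_to")
g.add_edge("evil-login.example", "sha256:ab12f0", kind="served_payload")
g.add_edge("203.0.113.7", "198.51.100.23", kind="shared_c2_infrastructure")
g.add_edge("198.51.100.23", "phish-invoice.example", kind="resolved_to")

known_bad = "203.0.113.7"

# Everything within two hops of a known-bad indicator deserves a closer look;
# at Amazon's scale this walk runs over billions of edges instead of four.
nearby = nx.single_source_shortest_path_length(g, known_bad, cutoff=2)
for node, hops in sorted(nearby.items(), key=lambda kv: kv[1]):
    if node != known_bad:
        print(f"{node} is {hops} hop(s) from {known_bad}")
```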

WSJ: What are the other ways you’re learning about hacker tactics?

Moses: Probably the most interesting is MadPot. This is essentially a network of honey pots throughout our environment, which we use to glean intelligence from those that are acting on them. So, you have a bunch of semi-vulnerable systems that are presented in different ways, the threat actors act upon them, and then you can learn from their actions.

Once you become smarter, then you can look back at the data that you had from before and say: “Wait a second, we can determine that at this point in time we were seeing these interactions with these systems that now make sense to us.”

Pulling all that information together then gives us, in some cases, attribution.
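
MadPot, as described here, is a fleet of deliberately semi-vulnerable decoys whose only job is to record what attackers try against them. A single honeypot can be very little code; the sketch below is a bare-bones, hypothetical TCP listener that logs each connection and the first bytes sent, nothing like Amazon's actual MadPot, whose internals aren't public.

```python
import datetime
import socket

# Listen on a port that scanners commonly probe, accept anything,
# and record who connected and what they tried to send.
HOST, PORT = "0.0.0.0", 2323

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"honeypot listening on {HOST}:{PORT}")
    while True:
        conn, (ip, port) = srv.accept()
        with conn:
            conn.settimeout(5.0)
            try:
                first_bytes = conn.recv(1024)
            except socket.timeout:
                first_bytes = b""
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            # In a real deployment each record would feed a threat graph like the one above.
            print(f"{stamp} {ip}:{port} sent {first_bytes[:80]!r}")
```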

WSJ: What have you learned from all this?

Moses: We’ve definitely seen an increase of activity globally from threat actors over the last year, or even less. In the last eight months, we’ve seen nation-state actors that we previously weren’t tracking come onto the scene. I’m not saying they didn’t exist, but they definitely weren’t on the radar. You have China, Russia and North Korea, those types of threat actors. But then you start to see the Pakistanis, you see other nation-states. We have more players in the game than we ever did before.

Nation-states that haven’t been active in this space now realize that they have to be, because all of the big players are. That means that there is more activity, there are more threats, there are more things we have to look for, unfortunately.

WSJ: Amazon was recently credited with providing assistance to the Justice Department in an operation that seized hacking tools belonging to Anonymous Sudan. How are you finding cooperation with the government on threat intelligence today?

Moses: It’s working out better and better, which is a great thing. There were points in time where it didn’t work in the past. Now, we have a lot more people like myself that have been in the government, and are able to speak the same language, or convey the right information so they can be more effective in their jobs.

We worked very effectively together on that particular case. It was a really good example of those of us that have been there knowing exactly what things need to be tied up in a bow, to hand off to the right people, so they could actually do something about it.

Monday, July 01, 2024

There are no more human elites of any sort...

I want to pass on the conclusion of a great essay by Venkatesh Rao; I give the meanings of several acronyms in parentheses. You should read the entire piece.
 

"Let me cut to the conclusion: There are no more human elites of any sort. In the sense of natural rulers that is. There are certainly all sorts of privileged and entitled types who want the benefits of being elites, but no humans up to the task of actually being elite.

It is only our anthropocentric conceits that lead us to conclude that a complex system like “civilization” must necessarily have a legible “head,” and legible and governable internal processes for staffing that head. Preferably with the Right Sorts of People, People Like Us.

We’re all under the API (Application Programming Interface) in one way or another. What’s more, we have been for a while, since before the rise of modern AI (which just makes it embarrassingly obvious by paving the cowpaths of our subservience to technological modernity).

To know just how little you know about anything, be it car lightbulbs or national constitutions, whatever your degrees say, just ask ChatGPT to explain some deep knowledge areas to you. I don’t care if you’re a qualified automotive technician or Elon Musk or clerking for the Supreme Court. Whether you’re failson C-average George W. Bush or a DEI (Diversity-Equity-Inclusion) activist trying to swap out some Greek classics for modern lesbian classics in the canon.

What you don’t know about the world humanity has built up over millennia utterly dwarfs what you think you know. Whatever the source of your elite pretensions, they’re just that — pretensions. Whatever claims you have to being the most natural member of the governing class, it is somewhere between weak to non-existent. Your claim is really about suitability for casting in a governance LARP (Live Action Role-Playing), not aptitude for governing as a natural member of an elite.

Humans do not like this idea. We ultimately like the idea of a designated elite, and legible, just processes for choosing, installing, and removing them that legitimize our own fantasies of worth and agency. We want to believe that yes, we too can be President, and would deserve to be, and do a good job.

The alternative hypothesis is that modern civilization, with its millennia of evolved technological complexity crammed onto the cramped surface of the planet, does not admit any simple, just, and enduring notion of elite that we can use to govern ourselves. The knowledge, aptitudes, and talents required to govern the world are distributed all over, in unpredictable, unfair, constantly shifting, and messy ways. When a lightbulb fails, there is no default answer to the question of how to replace it, and what to do when mistakes are made.

The rise of modern AI is presenting us with seemingly new forms of these questions. Those who yearn for a reliable class of elites, even if they must both revere and fear that class, are predictably trying to cast AIs themselves as the new elites. Those attached to their anthropocentric conceits are trying to figure out cunning schemes to keep some group of humans reliably in charge.

But there is nobody in charge. No elites, natural or not, deserving or undeserving. And it’s been this way for longer than we care to admit.

And this is a good thing. Stop looking for elites, and look askance at anyone claiming to be part of any elite or muttering conspiratorially about any elites. The world runs itself in more complex and powerful ways than they are capable of imagining. To buy into their self-mythologizing and delusions of grandeur is to be blind to the power and complexity of the world as it actually is.

And if you ever need to remind yourself of this, try changing a car headlamp lightbulb."

Monday, April 08, 2024

New protocols for uncertain times.

I want to point to a project launched by Venkatesh Rao and others last year: “The Summer of Protocols.” Some background for this project can be found in his essay “In Search of Hardness”. Also, “The Unreasonable Sufficiency of Protocols” essay by Rao et al. is an excellent presentation of what protocols are about. I strongly recommend that you read it if nothing else.

Here is a description of the project: 

Over 18 weeks in Summer 2023, 33 researchers from diverse fields including architecture, law, game design, technology, media, art, and workplace safety engaged in collaborative speculation, discovery, design, invention, and creative production to explore protocols, broadly construed, from various angles.

Their findings, catalogued here in six modules, comprise a variety of textual and non-textual artifacts (including art works, game designs, and software), organized around a set of research themes: built environments, danger and safety, dense hypermedia, technical standards, web content addressability, authorship, swarms, protocol death, and (artificial) memory.

I have read through Module One for 2023, and it is solid, interesting, deep-dive stuff. Module 2 is also available. Modules 3-6 are said to be ‘coming soon’ (as of 4/4/24, four months into a year in which the Summer of Protocols 2024 program is already underway, with a proposal deadline of 4/12/24).

Here is one clip from the “In Search of Hardness” essay:

…it’s only in the last 50 years or so, with the rise of communications technologies, especially the internet and container shipping, and the emergence of unprecedented planet-scale coordination problems like climate action, that protocols truly came into focus as first-class phenomena in our world; the sine qua non of modernity. The word itself is less than a couple of centuries old.

And it wasn’t until the invention of blockchains in 2009 that they truly came into their own as phenomena with their own unique technological and social characteristics, distinct from other things like machines, institutions, processes, or even algorithms.

Protocols are engineered hardness, and in that, they’re similar to other hard, enduring things, ranging from diamonds and monuments to high-inertia institutions and constitutions.

But modern protocols are more than that. They’re not just engineered hardness, they are programmable, intangible hardness. They are dynamic and evolvable. And we hope they are systematically ossifiable for durability. They are the built environment of digital modernity.


Thursday, December 28, 2023

Origins of our current crises in the 1990s, the great malformation, and the illusion of race.

I'm passing on three clips I found most striking from David Brooks' recent NYTimes Sidney Awards column:

I generally don’t agree with the arguments of those on the populist right, but I have to admit there’s a lot of intellectual energy there these days. (The Sidneys go to essays that challenge readers, as well as to those that affirm.) With that, the first Sidney goes to Christopher Caldwell for his essay “The Fateful Nineties” in First Things. Most people see the 1990s as a golden moment for America — we’d won the Cold War, we enjoyed solid economic growth, the federal government sometimes ran surpluses, crime rates fell, tech took off.

Caldwell, on the other hand, describes the decade as one in which sensible people fell for a series of self-destructive illusions: Globalization means nation-states don’t matter. Cyberspace means the material world is less important. Capitalism can run on its own without a countervailing system of moral values. Elite technocrats can manage the world better than regular people. The world will be a better place if we cancel people for their linguistic infractions.

As Caldwell sums it up: “America’s discovery of world dominance might turn out in the 21st century to be what Spain’s discovery of gold had been in the 16th — a source of destabilization and decline disguised as a windfall.”

***************** 

In “The Great Malformation,” Talbot Brewer observes that parenthood comes with “an ironclad obligation to raise one’s children as best one can.” But these days parents have surrendered child rearing to the corporations that dominate the attention industry, TikTok, Facebook, Instagram and so on: “The work of cultural transmission is increasingly being conducted in such a way as to maximize the earnings of those who oversee it.”

He continues: “We would be astonished to discover a human community that did not attempt to pass along to its children a form of life that had won the affirmation of its elders. We would be utterly flabbergasted to discover a community that went to great lengths to pass along a form of life that its elders regarded as seriously deficient or mistaken. Yet we have slipped unawares into precisely this bizarre arrangement.” In most societies, the economy takes place in a historically rooted cultural setting. But in our world, he argues, the corporations own and determine the culture, shaping our preferences and forming, or not forming, our conception of the good.

*****************

It’s rare that an essay jolts my convictions on some major topic. But that happened with one by Subrena E. Smith and David Livingstone Smith, called “The Trouble With Race and Its Many Shades of Deceit,” in New Lines Magazine. The Smiths are, as they put it, a so-called mixed-race couple — she has brown skin, his is beige. They support the aims of diversity, equity and inclusion programs but argue that there is a fatal contradiction in many antiracism programs: “Although the purpose of anti-racist training is to vanquish racism, most of these initiatives are simultaneously committed to upholding and celebrating race.” They continue: “In the real world, can we have race without racism coming along for the ride? Trying to extinguish racism while shoring up race is like trying to put out a fire by pouring gasoline on it.”

I’ve heard this argument — that we should seek to get rid of the whole concept of race — before and dismissed it. I did so because too many people I know have formed their identity around racial solidarity — it’s a source of meaning and strength in their lives. The Smiths argue that this is a mistake because race is a myth: “The scientific study of human variation shows that race is not meaningfully understood as a biological grouping, and there are no such things as racial essences. There is now near consensus among scholars that race is an ideological construction rather than a biological fact. Race was fashioned for nothing that was good. History has shown us how groups of people ‘racialize’ other groups of people to justify their exploitation, oppression and annihilation.”

Friday, August 25, 2023

The promise and pitfalls of the metaverse for science

A curious open-source bit of hand waving and gibble-gabble about the metaverse. I pass on the first two paragraphs and links to its references.

Some technology companies and media have anointed the metaverse as the future of the internet. Advances in virtual reality devices and high-speed connections, combined with the acceptance of remote work during the COVID-19 pandemic, have brought considerable attention to the metaverse as more than a mere curiosity for gaming. Despite substantial investments and ambitiously optimistic pronouncements, the future of the metaverse remains uncertain: its definitions and boundaries alternate among dystopian visions, a mixture of technologies (for example, Web3 and blockchain) and entertainment playgrounds.
As a better-defined and more-coherent realization of the metaverse continues to evolve, scientists have already started bringing their laboratories to 3D virtual spaces, running experiments with virtual reality and augmenting knowledge by using immersive representations. We consider how scientists can flexibly and responsibly leverage the metaverse, prepare for its uncertain future and avoid some of its pitfalls.