Showing posts with label culture/politics.

Monday, January 13, 2025

Is critical race theory an inversion of history?

John Ellis, who says yes, is a professor emeritus of German literature at the University of California, Santa Cruz (and author of “A Short History of Relations Between Peoples: How the World Began to Move Beyond Tribalism”).

I've decided to share his recent WSJ essay, to archive it for myself and interested MindBlog readers, even though it is overly simplistic in its emphasis on the virtues of the Anglosphere versus its vices. His basic argument is that the so-called "White Privilege" of the Anglosphere is what originally freed us from a universal tribalism in which everyone, by today's standards, was racist. It did this by developing the idea of a common humanity. Here is the text:

It’s a tribute of sorts to critical race theory’s success that the Trump administration will make its eradication a priority. The Biden administration had quietly implemented policies throughout the federal government based on this theory, and it is being taught in colleges and schools throughout the country. It has overrun much of the corporate world, and it has even secured a place in the training of many professions. The accusations made in closed training sessions are astonishingly venomous: Arrogant white supremacy is ubiquitous; white rage results when that supremacy is challenged; whites hold money and power because they stole it from other races; systemic racism and capitalism keep the injustices going.

All of this is based on categorically false assumptions about the past. We need only look at how the modern idea of our common humanity originated and developed to see that critical race theory has everything backward. A realistic history tells us that the thinkers and engineers of the Anglosphere, principally England and the U.S., are the heroes, not the villains, of this story, while the rest were laggards, not leaders.

For most of recorded history, neighboring peoples regarded each other with apprehension if not outright fear and loathing. Tribal and racial attitudes were universal. That’s a long way from the orthodoxy of our own time, which holds that we are all one human family. Before that consensus arose, a charge of racism made no sense. By today’s standards, everyone was racist.

It’s not hard to understand why tribalism once reigned everywhere. Without modern transportation and communication, most people knew nothing about other societies. What contact there was between different peoples often involved warfare, and that made everyone fear strangers. The insecurity of life in earlier times added to this anxiety. Protections we now enjoy didn’t exist: policing, banking, competent medical care, social safety nets. The supply of food was uncertain before trucks and refrigeration. In a dangerous world people clung to their own kind for safety, and that was a natural and even necessary attitude.

How did we get from this mindset to the idea of a common humanity? The practical impediments to the world’s peoples getting to know and eventually respect each other were largely removed by British and American engineers. They invented the steam engine, then used it to develop the first railways. They followed this by inventing and mass-producing cars, trucks and finally airplanes. They pioneered radio, television, films, newspapers and the internet. The result was that ignorance of other peoples was turned around.

But in the 18th century the British did something even more important: They began to develop our modern outlook on race.

Why Britain? Liberalizing political developments beginning with the Magna Carta and the first representative Parliament, called by Simon de Montfort, fostered greater liberty for the British subject. Liberty led to increasing prosperity, and prosperity to a rapid increase in literacy. Widespread literacy created the first large reading public: By the beginning of the 18th century, dozens of newspapers and periodicals were being published in Britain. An extensive reading public allowed public opinion to become a powerful force, and that set the stage for manifestos and petitions, even campaigns about matters that offended the public’s conscience.

A series of British writers began to promote ideas about the conduct of life and the role of government. Among the most important was John Locke, who argued that every human life had its own rationale, none being created for the use of another. Another was David Hume, who wrote that all men are nearly equal “in their mental power and faculties, till cultivated by education.” These and many others were launching what would become the modern consensus that we are all one human family. The idea gained ground so quickly that in Britain, and there alone, a powerful campaign to abolish slavery arose. By the end of the 18th century that campaign was leading to prohibitions in many parts of the Anglosphere, while Africa and Asia remained as tribalist and racist as ever.

As this idea took hold it made the British see their empire differently. Like other European countries, Britain had initially sought empire to strengthen its position in the world—others would add territory if Britain didn’t, and Britain would be weakened. But if the peoples of the British Empire were one human family, how could some be subordinate to others? The British began to consider themselves responsible for the welfare and development of their subject peoples, and for giving them competent administration before they had learned to provide it themselves. That change inevitably led to the dissolution of empire, and to a consensus that the time for empires (of which there had been hundreds) was over. The world’s most influential anti-imperialists were British. The idea of a common humanity spread across the globe as the power and influence of the Anglosphere grew.

First, this new ideology spread throughout the quarter of the globe’s peoples that were in the British Empire, where different races were learning to live and work together. Next, the Anglosphere’s cultural influence went worldwide as Britain’s industrial revolution set off a culture of innovation that resulted in a universal civilization—that is, modernity. As that way of life spread throughout the world, it carried with it the idea of a common humanity. There’s a simple explanation for what critical race theory calls “white privilege.” Because the Anglosphere developed prosperous modernity and gave it to the world, English-speakers were naturally the first to enjoy it. People initially outside that culture of innovation are still catching up. Asians and Asian-Americans have done this with great success, but critical race theory impedes the progress of other groups by persuading them to demonize the people who created the modern values they have adopted. It betrays those values by stoking racial hatred. Critical race theory tells us that all was racial harmony until racist Europeans disturbed it, but the truth is rather that all was tribal hostility until the Anglosphere rescued us.

Monday, December 23, 2024

Stephen Fry: "AI: A Means to an End or a Means to Our End?"

I have to pass on to MindBlog readers and my future self this link to a brilliant lecture by Stephen Fry. It is an engaging and entertaining analysis, steeped in relevant history and precedents, of ways we might be heading into the future. Here is just one clip from the piece:

We cling on to the fierce hope that the one feature machines will never be able to match is our imagination, our ability to penetrate the minds and feelings of others. We feel immeasurably enriched by this as individuals and as social animals. An AI may know more about the history of the First World War than all human historians put together. Every detail of every battle, all the recorded facts of personnel and materiel that can be known. But in fact I know more about it because I have read the poems of Wilfred Owen. I’ve read All Quiet on the Western Front. I’ve seen Kubrick’s Paths of Glory. So I can smell, touch, hear, feel the war, the gas, the comradeship, the sudden deaths and terrible fear. I know its meaning. My consciousness and experience of perceptions and feelings allows me access to the consciousness and experiences of others; their voices reach me. These are data that machines can scrape, but they cannot — to use a good old 60s phrase — relate to. Empathy. Identification. Compassion. Connection. Belonging. Something denied a sociopathic machine. Is this the only little island, the only little circle of land left to us as the waters of AI lap around our ankles? And for how long? We absolutely cannot be certain that, just as psychopaths (who aren’t all serial killers) can entirely convincingly feign empathy and emotional understanding, so will machines and very soon. They will fool us, just as sociopaths can and do, and frankly just as we all do to some bore or nuisance when we smile and nod encouragement but actually feel nothing for them. No, we can hope that our sense of human exceptionalism is justified and that what we regard as unique and special to us will keep us separate and valuable, but we have to remember how much of our life and behaviour is performative, how many masks we wear and how the masks conceal only other masks. After all, is our acquisition of language any more conscious, real and worthy than the Bayesian parroting of the LLM?
Chomsky tells us linguistic structures are embedded within us. We pick up the vocabulary and the rules from the data we scrape from around us - our parents, older siblings and peers. Out the sentences roll from us syntagmatically, we’ve no real idea how we do it. For example, how do we know the difference in connotation between the verbs to saunter and to swagger? It is very unlikely anyone taught us. We picked it up from context. In other words, from Bayesian priors, just like an LLM.

The fact is we don’t truly understand ourselves or how we came to be how and who we are. But we know about genes and we know about natural selection, the gravity that drives our evolution. And we are already noticing that principle at work with machines.



Monday, December 16, 2024

Analysis of the dumbing down of language on social media over time

Di Marco et al. (open access) do a comparative analysis of 8 different social media platforms (Facebook, Twitter, YouTube, Voat, Reddit, Usenet, Gab, and Telegram), focusing on their complexity and temporal shifts in a dataset of ~300 million English comments over 34 years. Their abstract:

Understanding the impact of digital platforms on user behavior presents foundational challenges, including issues related to polarization, misinformation dynamics, and variation in news consumption. Comparative analyses across platforms and over different years can provide critical insights into these phenomena. This study investigates the linguistic characteristics of user comments over 34 y, focusing on their complexity and temporal shifts. Using a dataset of approximately 300 million English comments from eight diverse platforms and topics, we examine user communications’ vocabulary size and linguistic richness and their evolution over time. Our findings reveal consistent patterns of complexity across social media platforms and topics, characterized by a nearly universal reduction in text length, diminished lexical richness, and decreased repetitiveness. Despite these trends, users consistently introduce new words into their comments at a nearly constant rate. This analysis underscores that platforms only partially influence the complexity of user comments but, instead, it reflects a broader pattern of linguistic change driven by social triggers, suggesting intrinsic tendencies in users’ online interactions comparable to historically recognized linguistic hybridization and contamination processes.
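The abstract describes tracking comment length, vocabulary size, and lexical richness over time. As a rough illustration of what such metrics look like in practice (this is not the authors' pipeline; their exact preprocessing and richness measures are described in the paper), here is a minimal sketch using a simple type-token ratio:

```python
import re

def lexical_stats(comment: str) -> dict:
    """Compute crude complexity metrics for one comment:
    token count, vocabulary size (unique words), and
    type-token ratio as a simple proxy for lexical richness."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    vocab = set(tokens)
    return {
        "length": len(tokens),
        "vocab_size": len(vocab),
        "type_token_ratio": len(vocab) / len(tokens) if tokens else 0.0,
    }

def yearly_trends(comments_by_year: dict) -> dict:
    """Average the per-comment metrics within each year,
    exposing temporal shifts like the shrinking comment length
    and diminished richness the study reports."""
    trends = {}
    for year, comments in sorted(comments_by_year.items()):
        stats = [lexical_stats(c) for c in comments if c.strip()]
        n = len(stats) or 1
        trends[year] = {
            "mean_length": sum(s["length"] for s in stats) / n,
            "mean_ttr": sum(s["type_token_ratio"] for s in stats) / n,
        }
    return trends
```

A declining `mean_length` and `mean_ttr` across years in such an aggregation would correspond to the "nearly universal reduction in text length" and "diminished lexical richness" the abstract reports; note that raw type-token ratio is length-sensitive, which is one reason serious studies use more refined measures.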

Thursday, December 12, 2024

Sustainability of Animal-Sourced Foods - how to deal with farting cows...

I've just read through a number of articles in a Special Feature section of the most recent issue of PNAS on the future of animal- and plant-sourced food. After a balanced lead article by Qaim et al., a following article that really caught my eye was "Mitigating methane emissions in grazing beef cattle with a seaweed-based feed additive: Implications for climate-smart agriculture." The first line of its abstract is: "This study suggests that the addition of pelleted bromoform-containing seaweed (Asparagopsis taxiformis) to the diet of grazing beef cattle can potentially reduce enteric methane (CH4) emissions (g/d) by an average of 37.7% without adversely impacting animal performance."

Thursday, December 05, 2024

The Future of Warfare

Passing on an article from today's WSJ that I want to save, using MindBlog as my personal archive: 

OpenAI Forges Tie-Up To Defense Industry

OpenAI, the artificial-intelligence company behind ChatGPT, is getting into the business of war.

The world’s most valuable AI company has agreed to work with Anduril Industries, a leading defense-tech startup, to add its technology to systems the U.S. military uses to counter drone attacks. The partnership, which the companies announced Wednesday, marks OpenAI’s deepest involvement yet with the Defense Department and its first tie-up with a commercial weapons maker.

It is the latest example of Silicon Valley’s dramatic turn from shunning the Pentagon a few years ago to forging deeper ties with the national security complex.

OpenAI, valued at more than $150 billion, previously barred its AI from being used in military and warfare. In January, it changed its policies to allow some collaborations with the military.

While the company still prohibits the use of its technology in offensive weapons, it has made deals with the Defense Department for cybersecurity work and other projects. This year, OpenAI added former National Security Agency chief Paul Nakasone to its board and hired former Defense Department official Sasha Baker to create a team focused on national-security policy.

Other tech companies are making similar moves, arguing that the U.S. must treat AI technology as a strategic asset to bolster national security against countries like China. Last month, startup Anthropic said it would give access to its AI to the U.S. military through a partnership with Palantir Technologies.

OpenAI will incorporate its tech into Anduril’s counterdrone systems software, the companies said.

The Anduril systems detect, assess and track unmanned aircraft. If a threatening drone is identified, militaries can use electronic jamming, drones and other means to take it down.

The AI could improve the accuracy and speed of detecting and responding to drones, putting fewer people in harm’s way, Anduril said.

The Anduril deal ties OpenAI to some tech leaders who have espoused conservative ideals and backed Donald Trump. Anduril co-founder Palmer Luckey was an early and vocal Trump supporter from the tech industry. Luckey’s sister is married to Matt Gaetz, Trump’s pick to lead the Justice Department before he withdrew from consideration.

Luckey is also close to Trump’s ally, Elon Musk.

Musk has praised Luckey’s entrepreneurship and encouraged him to join the Trump transition team.

Luckey has, at times, fashioned himself as a younger Musk and references Musk as a pioneer in selling startup technology to the Pentagon.

The alliance between Anduril and OpenAI might also help buffer the AI company’s chief executive, Sam Altman, from possible backlash from Musk, who has openly disparaged Altman and sued his company. Musk was a co-founder of OpenAI but stepped away from the company in 2018 after clashing with Altman over the company’s direction. Last year, Musk founded a rival AI lab, xAI.

At an event on Wednesday, Altman said he didn’t think Musk would use his close relationship with Trump to undermine rivals.

“It would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors,” Altman said at the New York Times’s DealBook conference in New York City. “I don’t think people would tolerate that. I don’t think Elon would do it.”

Anduril is leading the push by venture-backed startups to sell high-tech, AI-powered systems to replace traditional tanks and attack helicopters. The company sells weapons to militaries around the world and AI software that enables the weapons to act autonomously.

Anduril Chief Executive Officer Brian Schimpf said in a statement that adding OpenAI technology to Anduril systems will “enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations.”

Anduril, valued at $14 billion, is one of the few success stories among a crowd of fledgling defense startups. In November, the company announced a $200 million contract to provide the U.S. Marine Corps with its counterdrone system. The company said the Defense Department uses the counterdrone systems to protect military installations.

As part of this agreement, OpenAI’s technology won’t be used with Anduril’s other weapons systems, the companies said.

Altman said in a statement that his company wants to “ensure the technology upholds democratic values.”

The companies declined to comment on the financial terms of the partnership.

Technology entrepreneurs, backed by billions of dollars in venture capital, have bet that future conflicts will hinge on large numbers of small, AI-powered autonomous systems to attack and defend. Defense-tech companies and some Pentagon leaders say the U.S. military needs better AI for a potential conflict with China and other sophisticated adversaries.

AI has proved increasingly important for keeping drones in the air after the rise of electronic warfare, which uses jammers to block GPS signals and radio frequencies that drones use to fly. AI can also help soldiers and military chiefs filter large amounts of battlefield data.

Wading deeper into defense opens another source of revenue for OpenAI, which seeks to evolve from the nonprofit lab of its roots to a moneymaking leader in the AI industry. The computing costs to develop and operate AI models are exorbitant, and the company is losing billions of dollars a year.