Monday, June 24, 2024

The tyranny of words

Some reflections during a wake period at 1:30 a.m. this morning... 

Mulling on the tyranny of thought as a ruminating mind calms down and refuge is found in a quiet space from which words rise like wisps or vapors, a space free of subjects and objects in which there can be no hurry. 

Grateful to be experiencing an aging process that enables a dedifferentiating 82-year-old brain to return toward its youth, letting go of the clouds of senolytic discourse that have come to clutter it and increasingly experiencing being the calm and quiet space from which everything rises.

Feeling sympathy for public intellectuals whose writing I follow, immersed in their addiction to words as they oblige themselves, many out of financial necessity, to keep writing a stream that includes mediocre as well as brilliant work, each piece a rivulet in the streams of discourse diverging and merging in an infosphere increasingly contaminated by the words of robots that duplicate and obscure their efforts. Grateful for the retired professor’s pension that permits optional association with, or dissociation from, this world of words.

Friday, June 21, 2024

The other shoe is about to drop in 2024: important closure moments in the state of the world.

I want to pass on a clip (to which I have added some definitions in parentheses) from Venkatesh Rao’s most recent Ribbonfarm Studio installment, in which he argues that the other shoe is about to drop for many narratives in 2024, a year that feels much more exciting than 1984, 1994, 2004, or 2014.
With the Trump/Biden election, the other shoe is about to drop on the arc that began with the Great Weirding (radical global transformations that unfolded between 2016 and 2020). This arc has fuzzy beginnings and endings globally, but is clearly heading towards closure everywhere. For instance, the recent election in India, with its chastening message for the BJP, has a 10-year span (2014-24). In the EU and UK, various arcs that began with events in Greece/Italy and Brexit are headed towards some sort of natural closure.
Crypto and AI, two strands in my mediocre computing series (the other two being robotics and the metaverse), also seem to be at an other-shoe-drops phase. Crypto, after experiencing 4-5 boom-bust cycles since 2009, is finally facing a triple test of geopolitical significance, economic significance, and “product” potential (as in international agreement that “stable coins” as well as fiat currencies are valid vehicles for managing debt and commerce). It feels like in the next year or two we’ll learn if it’s a technology that’s going to make history or remain a sideshow. AI is shifting gears from a rapid and accelerating installation phase of increasing foundational capabilities to a deployment phase of productization, marked by Apple’s entry into the fray and a sense of impending saturation in the foundational capabilities.
Various wars (Gaza, Ukraine) and tensions (Taiwan) are starting to stress the Westphalian model of the nation state for real now. That’s the other shoe dropping on a 400-year-long story (also a 75 year long story about the rules-based international order, but that’s relatively less interesting).
Economically, we’re clearly decisively past the ZIRPy (Zero Interest Rate Policy) end of the neoliberal globalization era that began in the mid-80s. That shoe has already dropped. What new arc is starting is unclear — the first shoe of the new story hasn’t dropped yet. Something about nonzero interest rates in an uncertain world marked by a mercantilist resource-grabbing geopolitical race unfolding in parallel to slowly reconfiguring global trade patterns.

Wednesday, June 19, 2024

Managing the human herd

This post is a dyspeptic random walk through thoughts triggered by the front-page photograph of the Wall Street Journal of June 17, 2024, showing masses of pilgrims embarking on a symbolic stoning of the devil in Saudi Arabia in soaring summer heat. Such enormous mobs of people are those most easily roused to strong emotions by charismatic speakers.

How are the emotions and behaviors of such enormous clans of humans to be regulated in a sane and humane way? Can this be accomplished outside of authoritarian or oligarchical governance? Might such governance establish its control of the will and moods of millions through the unnoticed infiltration of AI into all aspects of their daily life (cf. Apple's recent AI announcements)? Will the world come to be ruled by a "Book of Elon"?

Or might we be moving into a world of decentralized everything? A bottom-up emergence of consensus governance from the mosh pit of Web3, cryptocurrencies, and stablecoins? The noble sentiments of the Ethereum Foundation notwithstanding, the examples we have to date of 'rules of the commons' are the chaos of Discord, Reddit, and other social media, where the sentiments of idiots and experts jumble together in an impenetrable cacophony.

Who or what is going to emerge to control this mess? How long will the "permaweird" persist?  

 

Monday, June 17, 2024

Empty innovation: What are we even doing?

I came across an interesting commentary by "Tante" on innovation, invention, and progress (or the lack thereof) in the constant churning, rise, and fall of new ideas and products in the absence of questions like "Why are we doing this?" and "Who is profiting?" In spite of the speaker's arrogance and annoying style, I think it is worth viewing.

Friday, June 14, 2024

The future of life.

I want to pass on this Science magazine review of Jamie Metzl's new book "Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World". Metzl is founder of the One Shared World organization. Check out its website here.
On the night of 4 July 1776, the Irish immigrant and official printer to the Continental Congress John Dunlap entered his Philadelphia print-shop and began to typeset the first printed version of a document that was to become the enduring North Star of the “American experiment.” It comprised an ideological handbook for its utopian aspirations and a codification of purported essential self-evident ground truths that included the equality of all men and the rights to life, liberty, and the pursuit of happiness. By the morning, Dunlap had produced an estimated 200 copies of the American Declaration of Independence, which Abraham Lincoln would later refer to as a “rebuke and a stumblingblock… to tyranny and oppression.”
In his erudite, optimistic, and timely book Superconvergence, the futurist Jamie Metzl laments the lack of any such authoritative reference to inform our exploration of an equally expansive, intriguing, and uncharted territory: humankind’s future. Replete with unprecedented opportunities and existential risks hitherto unimaginable in life’s history, the new world we are entering transcends geographical boundaries, and—as a result of humankind’s global interdependencies—it must, by necessity, exist in a no-man’s-land beyond the mandates of ideologies and nation-states. Its topography is defined not by geological events and evolution by natural selection so much as by the intersection of several exponential human-made technologies. Most notably, these include the generation of machine learning intelligence that can interrogate big data to define generative “rules” of biology and the post-Darwinian engineering of living systems through the systematic rewriting of their genetic code.
Acknowledging the intrinsic mutability of natural life and its ever-changing biochemistry and morphology, Metzl is unable to align himself with UNESCO’s 1997 Universal Declaration on the Human Genome and Human Rights. To argue that the current version of the human genome is sacred is to negate its prior iterations, including the multiple species of human that preceded us but disappeared along the way. The sequences of all Earth’s species are in a simultaneous state of being and becoming, Metzl argues. Life is intrinsically fluid.
Although we are still learning to write complex genomes rapidly, accurately, without sequence limitation, and at low cost, and our ability to author novel genomes remains stymied by our inability to unpick the generative laws of biology, it is just a matter of time before we transform biology into a predictable engineering material, at which point we will be able to recast life into desired forms. But while human-engineered living materials and biologically inspired devices offer potential solutions to the world’s most challenging problems, our rudimentary understanding of complex ecosystems and the darker sides of human nature cast long shadows, signaling the need for caution.
Metzl provides some wonderful examples of how artificial species and bioengineering, often perceived as adversaries of natural life, could help address several of the most important issues of the moment. These challenges include climate change, desertification, deforestation, pollution (including the 79,000-metric-ton patch of garbage the size of Alaska in the Pacific Ocean), the collapse of oceanic ecosystems, habitat loss, global population increase, and the diminution of species biodiversity. By rewriting the genomes of crops and increasing the efficiency of agriculture, we can reduce the need to convert additional wild habitats into farmland, he writes. Additionally, the use of bioengineering to make sustainable biofuels, biocomputing, bio foodstuffs, biodegradable plastics, and DNA information–storing materials will help reduce global warming.
Meanwhile, artificial intelligence (AI) can free up human time. By 2022, DeepMind’s AlphaFold program had predicted the structures of 214 million proteins—a feat that would have taken as long as 642 million years to achieve using conventional methods. As Metzl comments, this places “millions of years back into the pot of human innovation time.” The ability to hack human biology using AI will also have a tremendous impact on the human health span and life span, not least through AI-designed drugs, he predicts.
Metzl is right when he concludes that we have reached a “critical moment in human history” and that “reengineered biology will play a central role in the future of our species.” We will need to define a new North Star—a manifesto for life—to assist with its navigation. Metzl argues for the establishment of a new international body with depoliticized autonomy to focus on establishing common responses to shared global existential challenges. He suggests that this process could be kick-started by convening a summit aimed at establishing aligned governance guidelines for the revolutionary new technologies we are creating.

Wednesday, June 12, 2024

The pathologies of the educated elites

Another really nice opinion piece from David Brooks, who notes the consequences, as we have moved from the industrial age to the information age, of progressive energy moving from the working class to the universities, especially the elite universities.

I’ve looked on with a kind of dismay as elite university dynamics have spread across national life and politics, making America worse in all sorts of ways. Let me try to be more specific about these dynamics.
The first is false consciousness. To be progressive is to be against privilege. But today progressives dominate elite institutions like the exclusive universities, the big foundations and the top cultural institutions...This is the contradiction of the educated class. Virtue is defined by being anti-elite. But today’s educated class constitutes the elite, or at least a big part of it...This sort of cognitive dissonance often has a radicalizing effect. When your identity is based on siding with the marginalized, but you work at Horace Mann or Princeton, you have to work really hard to make yourself and others believe you are really progressive. You’re bound to drift further and further to the left to prove you are standing up to the man...elite students...are often the ones talking most loudly about burning the system down.
The second socially harmful dynamic is what you might call the cultural consequences of elite overproduction...the marketplace isn’t producing enough of the kinds of jobs these graduates think they deserve...Peter Turchin argued that periods of elite overproduction lead to a rising tide of social decay as alienated educated-class types wage ever more ferocious power struggles with other elites...The spread of cancel culture and support for decriminalizing illegal immigration and “defunding the police” were among the quintessential luxury beliefs that seemed out of touch to people in less privileged parts of society. Those people often responded by making a sharp countershift in the populist direction, contributing to the election of Donald Trump and to his continued political viability today...elite overproduction induces people on the left and the right to form their political views around their own sense of personal grievance and alienation. It launches unhappy progressives and their populist enemies into culture war battles that help them feel engaged, purposeful and good about themselves, but ...these battles are often more about performative self-validation than they are about practical policies that might serve the common good.
The third dynamic is the inflammation of the discourse. The information age has produced a vast cohort of people (including me) who live by trafficking in ideas — academics, journalists, activists, foundation employees, consultants and the various other shapers of public opinion...Nothing is more unstable than a fashionable opinion. If your status is defined by your opinions, you’re living in a world of perpetual insecurity, perpetual mental and moral war...French sociologist Pierre Bourdieu...argued that just as economic capitalists use their resource — wealth — to amass prestige and power, people who form the educated class and the cultural elite, symbolic capitalists, use our resources — beliefs, fancy degrees, linguistic abilities — to amass prestige, power and, if we can get it, money...symbolic capitalists turned political postures into power tools that enable them to achieve social, cultural and economic might...battles for symbolic consecration are now the water in which many of us highly educated Americans swim. In the absence of religious beliefs, these moral wars give people a genuine sense of meaning and purpose.
Brooks notes a number of potential ways of countering these dynamics, all of which require the educated class, progressive or not, to address the social, political, and economic divides it has unwittingly created. But he also cites another, perhaps more likely, path:

Perhaps today’s educated elite is just like any other historical elite. We gained our status by exploiting or not even seeing others down below, and we are sure as hell not going to give up any of our status without a fight.

Brooks then points to a forthcoming book, Musa al-Gharbi’s “We Have Never Been Woke.” Al-Gharbi notes:

...today’s educated-class activists are conveniently content to restrict their political action to the realm of symbols. In his telling, land acknowledgments — when people open public events by naming the Indigenous peoples who had their land stolen from them — are the quintessential progressive gesture...It’s often non-Indigenous people signaling their virtue to other non-Indigenous people while doing little or nothing for the descendants of those who were actually displaced...while members of the educated class do a lot of moral preening, their lifestyles contribute to the immiseration of the people who have nearly been rendered invisible — the Amazon warehouse worker, the DoorDash driver making $1.75 an hour after taxes and expenses.

 Brooks concludes:

That rumbling sound you hear is the possibility of a multiracial, multiprong, right/left alliance against the educated class. Donald Trump has already created the nub of this kind of movement but is himself too polarizing to create a genuinely broad-based populist movement. After Trump is off the stage, it’s very possible to imagine such an uprising....The lesson for those of us in the educated class is to seriously reform the system we have created or be prepared to be run over.


Monday, June 10, 2024

Protecting scientific integrity in an age of generative AI

I want to pass on the full text of an editorial by Blau et al. in PNAS; the link points to the more complete open-access online version containing acknowledgements and references:

Revolutionary advances in AI have brought us to a transformative moment for science. AI is accelerating scientific discoveries and analyses. At the same time, its tools and processes challenge core norms and values in the conduct of science, including accountability, transparency, replicability, and human responsibility (1–3). These difficulties are particularly apparent in recent advances with generative AI. Future innovations with AI may mitigate some of these or raise new concerns and challenges.
 
With scientific integrity and responsibility in mind, the National Academy of Sciences, the Annenberg Public Policy Center of the University of Pennsylvania, and the Annenberg Foundation Trust at Sunnylands recently convened an interdisciplinary panel of experts with experience in academia, industry, and government to explore rising challenges posed by the use of AI in research and to chart a path forward for the scientific community. The panel included experts in behavioral and social sciences, ethics, biology, physics, chemistry, mathematics, and computer science, as well as leaders in higher education, law, governance, and science publishing and communication. Discussions were informed by commissioned papers detailing the development and current state of AI technologies; the potential effects of AI advances on equality, justice, and research ethics; emerging governance issues; and lessons that can be learned from past instances where the scientific community addressed new technologies with significant societal implications (4–9).
 
Generative AI systems are constructed with computational procedures that learn from large bodies of human-authored and curated text, imagery, and analyses, including expansive collections of scientific literature. The systems are used to perform multiple operations, such as problem-solving, data analysis, interpretation of textual and visual content, and the generation of text, images, and other forms of data. In response to prompts and other directives, the systems can provide users with coherent text, compelling imagery, and analyses, while also possessing the capability to generate novel syntheses and ideas that push the expected boundaries of automated content creation.
 
Generative AI’s power to interact with scientists in a natural manner, to perform unprecedented types of problem-solving, and to generate novel ideas and content poses challenges to the long-held values and integrity of scientific endeavors. These challenges make it more difficult for scientists, the larger research community, and the public to 1) understand and confirm the veracity of generated content, reviews, and analyses; 2) maintain accurate attribution of machine- versus human-authored analyses and information; 3) ensure transparency and disclosure of uses of AI in producing research results or textual analyses; 4) enable the replication of studies and analyses; and 5) identify and mitigate biases and inequities introduced by AI algorithms and training data.

Five Principles of Human Accountability and Responsibility

To protect the integrity of science in the age of generative AI, we call upon the scientific community to remain steadfast in honoring the guiding norms and values of science. We endorse recommendations from a recent National Academies report that explores ethical issues in computing research and promotes responsible practices through education and training (3). We also reaffirm the findings of earlier work performed by the National Academies on responsible automated research workflows, which called for human review of algorithms, the need for transparency and reproducibility, and efforts to uncover and address bias (10).
 
Building upon the prior studies, we urge the scientific community to focus sustained attention on five principles of human accountability and responsibility for scientific efforts that employ AI:
1. Transparent disclosure and attribution
Scientists should clearly disclose the use of generative AI in research, including the specific tools, algorithms, and settings employed; accurately attribute the human and AI sources of information or ideas, distinguishing between the two and acknowledging their respective contributions; and ensure that human expertise and prior literature are appropriately cited, even when machines do not provide such citations in their output.
 
Model creators and refiners should provide publicly accessible details about models, including the data used to train or refine them; carefully manage and publish information about models and their variants so as to provide scientists with a means of citing the use of particular models with specificity; provide long-term archives of models to enable replication studies; disclose when proper attribution of generated content cannot be provided; and pursue innovations in learning, reasoning, and information retrieval machinery aimed at providing users of those models with the ability to attribute sources and authorship of the data employed in AI-generated content.
2. Verification of AI-generated content and analyses
Scientists are accountable for the accuracy of the data, imagery, and inferences that they draw from their uses of generative models. Accountability requires the use of appropriate methods to validate the accuracy and reliability of inferences made by or with the assistance of AI, along with a thorough disclosure of evidence relevant to such inferences. It includes monitoring and testing for biases in AI algorithms and output, with the goal of identifying and correcting biases that could skew research outcomes or interpretations.
 
Model creators should disclose limitations in the ability of systems to confirm the veracity of any data, text, or images generated by AI. When verification of the truthfulness of generated content is not possible, model output should provide clear, well-calibrated assessments of confidence. Model creators should proactively identify, report, and correct biases in AI algorithms that could skew research outcomes or interpretations.
3. Documentation of AI-generated data
Scientists should mark AI-generated or synthetic data, inferences, and imagery with provenance information about the role of AI in their generation, so that it is not mistaken for observations collected in the real world. Scientists should not present AI-generated content as observations collected in the real world.
 
Model creators should clearly identify, annotate, and maintain provenance about synthetic data used in their training procedures and monitor the issues, concerns, and behaviors arising from the reuse of computer-generated content in training future models.
4. A focus on ethics and equity
Scientists and model creators should take credible steps to ensure that their uses of AI produce scientifically sound and socially beneficial results while taking appropriate steps to mitigate the risk of harm. This includes advising scientists and the public on the handling of tradeoffs associated with making certain AI technologies available to the public, especially in light of potential risks stemming from inadvertent outcomes or malicious applications.
 
Scientists and model creators should adhere to ethical guidelines for AI use, particularly in terms of respect for clear attribution of observational versus AI-generated sources of data, intellectual property, privacy, disclosure, and consent, as well as the detection and mitigation of potential biases in the construction and use of AI systems. They should also continuously monitor other societal ramifications likely to arise as AI is further developed and deployed and update practices and rules that promote beneficial uses and mitigate the prospect of social harm.
 
Scientists, model creators, and policymakers should promote equity in the questions and needs that AI systems are used to address as well as equitable access to AI tools and educational opportunities. These efforts should empower a diverse community of scientific investigators to leverage AI systems effectively and to address the diverse needs of communities, including the needs of groups that are traditionally underserved or marginalized. In addition, methods for soliciting meaningful public participation in evaluating equity and fairness of AI technologies and uses should be studied and employed.
 
AI should not be used without careful human oversight in decisional steps of peer review processes or decisions around career advancement and funding allocations.
5. Continuous monitoring, oversight, and public engagement
Scientists, together with representatives from academia, industry, government, and civil society, should continuously monitor and evaluate the impact of AI on the scientific process, and with transparency, adapt strategies as necessary to maintain integrity. Because AI technologies are rapidly evolving, research communities must continue to examine and understand the powers, deficiencies, and influences of AI; work to anticipate and prevent harmful uses; and harness its potential to address critical societal challenges. AI scientists must at the same time work to improve the effectiveness of AI for the sciences, including addressing challenges with veracity, attribution, explanation, and transparency of training data and inference procedures. Efforts should be undertaken within and across sectors to pursue ongoing study of the status and dynamics of the use of AI in the sciences and pursue meaningful methods to solicit public participation and engagement as AI is developed, applied, and regulated. Results of this engagement and study should be broadly disseminated.

A New Strategic Council to Guide AI in Science

We call upon the scientific community to establish oversight structures capable of responding to the opportunities AI will afford science and to the unanticipated ways in which AI may undermine scientific integrity.
 
We propose that the National Academies of Sciences, Engineering, and Medicine establish a Strategic Council on the Responsible Use of Artificial Intelligence in Science. The council should coordinate with the scientific community and provide regularly updated guidance on the appropriate uses of AI, especially during this time of rapid change. The council should study, monitor, and address the evolving uses of AI in science; new ethical and societal concerns, including equity; and emerging threats to scientific norms. The council should share its insights across disciplines and develop and refine best practices.
 
More broadly, the scientific community should adhere to existing guidelines and regulations, while contributing to the ongoing development of public and private AI governance. Governance efforts must include engagement with the public about how AI is being used and should be used in the sciences.
 
With the advent of generative AI, all of us in the scientific community have a responsibility to be proactive in safeguarding the norms and values of science. That commitment—together with the five principles of human accountability and responsibility for the use of AI in science and the standing up of the council to provide ongoing guidance—will support the pursuit of trustworthy science for the benefit of all.
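
Principle 3 strikes me as the most immediately actionable of the five. As a concrete illustration, here is a minimal sketch in Python of what a machine-readable provenance record for an AI-generated data point might look like. The schema and field names are my own invention; the editorial calls for provenance marking but does not prescribe a format.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(generator: str, version: str, prompt: str) -> dict:
    """Build a minimal provenance sidecar for one AI-generated item.

    All field names here are hypothetical -- the editorial calls for
    provenance marking but does not prescribe a schema.
    """
    return {
        "synthetic": True,  # flag: not an observation collected in the real world
        "generator": generator,  # model that produced the content
        "generator_version": version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: tag a generated data point before it enters an analysis pipeline.
record = {
    "value": 42.0,
    "provenance": provenance_record(
        generator="example-llm", version="2024.06",
        prompt="simulate a sensor reading"),
}
print(json.dumps(record, indent=2))
```

A sidecar record like this travels with the datum, so downstream analyses (and future training runs) can filter or weight synthetic observations explicitly rather than mistaking them for real-world data.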
 

Friday, June 07, 2024

Is it a fact? The epistemic force of language in news headlines.

From Chuey et al. in PNAS (open access):

Significance

Headlines are an influential source of information, especially because people often do not read beyond them. We investigated how subtle differences in epistemic language in headlines (e.g., “believe” vs. “know”) affect readers’ inferences about whether claims are perceived as matters of fact or mere opinion. We found, for example, saying “Scientists believe methane emissions soared to a record in 2021” led readers to view methane levels as more a matter of opinion compared to saying “Scientists know…” Our results provide insight into how epistemic verbs journalists use affect whether claims are perceived as matters of fact and suggest a mechanism contributing to the rise of alternative facts and “post-truth” politics.
Abstract
How we reason about objectivity—whether an assertion has a ground truth—has implications for belief formation on wide-ranging topics. For example, if someone perceives climate change to be a matter of subjective opinion similar to the best movie genre, they may consider empirical claims about climate change as mere opinion and irrelevant to their beliefs. Here, we investigate whether the language employed by journalists might influence the perceived objectivity of news claims. Specifically, we ask whether factive verb framing (e.g., "Scientists know climate change is happening") increases perceived objectivity compared to nonfactive framing (e.g., "Scientists believe [...]"). Across eight studies (N = 2,785), participants read news headlines about unique, noncontroversial topics (studies 1a–b, 2a–b) or a familiar, controversial topic (climate change; studies 3a–b, 4a–b) and rated the truth and objectivity of the headlines’ claims. Across all eight studies, when claims were presented as beliefs (e.g., “Tortoise breeders believe tortoises are becoming more popular pets”), people consistently judged those claims as more subjective than claims presented as knowledge (e.g., “Tortoise breeders know…”), as well as claims presented as unattributed generics (e.g., “Tortoises are becoming more popular pets”). Surprisingly, verb framing had relatively little, inconsistent influence over participants’ judgments of the truth of claims. These results demonstrate how, apart from shaping whether we believe a claim is true or false, epistemic language in media can influence whether we believe a claim has an objective answer at all.
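
To make the manipulation concrete, here is a toy sketch in Python of the kind of condition comparison the studies describe. This is not the authors' code or data: the headlines are adapted from the example above, and the 7-point ratings are invented for illustration (assuming scipy is available).

```python
# Toy version of the framing comparison: participants rate the objectivity
# of the same claim under factive ("know"), nonfactive ("believe"), and
# unattributed generic framings. Ratings below are invented.
from statistics import mean
from scipy.stats import ttest_ind

claim = "methane emissions soared to a record in 2021"
headlines = {
    "factive": f"Scientists know {claim}",
    "nonfactive": f"Scientists believe {claim}",
    "generic": claim.capitalize(),  # no epistemic verb at all
}

# Hypothetical objectivity ratings (1 = pure opinion, 7 = objective fact).
ratings = {
    "factive":    [6, 7, 5, 6, 6, 7, 5, 6],
    "nonfactive": [4, 3, 5, 4, 3, 4, 5, 4],
    "generic":    [5, 6, 6, 5, 7, 5, 6, 6],
}

for cond, vals in ratings.items():
    print(f"{cond:>10}: mean objectivity = {mean(vals):.2f}  ({headlines[cond]!r})")

# Independent-samples t-test on the two verb-framing conditions.
t, p = ttest_ind(ratings["factive"], ratings["nonfactive"])
print(f"factive vs. nonfactive: t = {t:.2f}, p = {p:.4f}")
```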

Wednesday, June 05, 2024

Impact of our built environment on our microbiome and health

Bosch et al. offer a perspective in PNAS (open access) pointing out that:
...contemporary built environments are steadily reducing the microbial diversity essential for human health, well-being, and resilience while accelerating the symptoms of human chronic diseases including environmental allergies, and other more life-altering diseases.
Here is their abstract:
There is increasing evidence that interactions between microbes and their hosts not only play a role in determining health and disease but also in emotions, thought, and behavior. Built environments greatly influence microbiome exposures because of their built-in highly specific microbiomes coproduced with myriad metaorganisms including humans, pets, plants, rodents, and insects. Seemingly static built structures host complex ecologies of microorganisms that are only starting to be mapped. These microbial ecologies of built environments are directly and interdependently affected by social, spatial, and technological norms. Advances in technology have made these organisms visible and forced the scientific community and architects to rethink gene–environment and microbe interactions respectively. Thus, built environment design must consider the microbiome, and research involving host–microbiome interaction must consider the built environment. This paradigm shift becomes increasingly important as evidence grows that contemporary built environments are steadily reducing the microbial diversity essential for human health, well-being, and resilience while accelerating the symptoms of human chronic diseases including environmental allergies, and other more life-altering diseases. New models of design are required to balance maximizing exposure to microbial diversity while minimizing exposure to human-associated diseases. Sustained trans-disciplinary research across time (evolutionary, historical, and generational) and space (cultural and geographical) is needed to develop experimental design protocols that address multigenerational multispecies health and health equity in built environments.

Monday, May 27, 2024

Ancient origins of aspects of instrumental and song melodies distinct from those of language.

A global collaboration spanning many cultures shows that songs and instrumental melodies are slower and higher in pitch and use more stable pitches than speech, suggesting evolutionary origins universal to all humans that cannot simply be explained by culture. The numerous samples of music collected could be arranged in a musi-linguistic continuum from instrumental music to spoken language. Here is the abstract:

Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a “musi-linguistic” continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.
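
For readers curious how such features are measured, here is a minimal sketch, assuming the librosa audio library and two hypothetical local recordings (song.wav, speech.wav), of how one might approximate two of the paper's features: pitch height and pitch stability. This is not the authors' pipeline, just one plausible way to extract comparable numbers.

```python
# Approximate pitch height (median f0) and pitch stability (spread of f0
# on a semitone scale) for a song and a speech recording. File paths are
# hypothetical; librosa's pyin tracker stands in for whatever pitch
# extraction the study actually used.
import numpy as np
import librosa

def pitch_features(path: str) -> tuple[float, float]:
    y, sr = librosa.load(path)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]        # keep only voiced, valid frames
    semitones = 12 * np.log2(f0 / 440.0)   # log (semitone) scale, A4 reference
    return float(np.median(f0)), float(np.std(semitones))

for label, path in [("song", "song.wav"), ("speech", "speech.wav")]:
    height_hz, spread = pitch_features(path)
    # A higher median f0 and a smaller semitone spread for the song would
    # match the finding that songs are higher and use more stable pitches.
    print(f"{label}: median f0 = {height_hz:.1f} Hz, semitone SD = {spread:.2f}")
```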