Friday, September 02, 2022

Increasingly capable artificial intelligence (A.I.) offers us both promise and peril.

I want to pass on a few items from my list of putative MindBlog posts on A.I. 

The first derives from a series of emails on A.I. sent to the listserv of the "Chaos and Complex Systems Discussion Group" at the Univ. of Wisconsin, Madison, which I attended until my move to Austin, TX. They point to the kerfuffle started by suspended Google engineer Blake Lemoine over whether one of the company's experimental A.I.s, called LaMDA, had achieved sentience, and to how belief in A.I. sentience is becoming a problem.

(By the way, if you want to set up your own chatbot for self-therapy, one that won't collect and sell any personal info you divulge to it, visit https://replika.com/.... I actually tried it out, was underwhelmed by its responses, and deleted my account.)

An article by Kevin Roose on the potential and risks of A.I. is worth reading. One amazing feat of A.I. has been solving the "protein-folding problem," which I (as a trained biochemist) have been following for over 50 years.

This summer, DeepMind announced that AlphaFold (an A.I. system descended from the Go-playing one) had made predictions of the three-dimensional structures of proteins from their one-dimensional amino acid sequences for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.
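
DeepMind made these predictions freely available through the AlphaFold Protein Structure Database, hosted with EMBL-EBI. As a minimal sketch of how a researcher might pull one predicted structure programmatically, the Python below assumes the database exposes a REST endpoint of the form /api/prediction/{uniprot_id} returning JSON entries with a "pdbUrl" field; the exact endpoint path and field name are assumptions to verify against the current API documentation, not a definitive recipe.

    # Minimal sketch: fetch an AlphaFold-predicted structure for one protein.
    # Assumes the AlphaFold Protein Structure Database (alphafold.ebi.ac.uk)
    # exposes /api/prediction/{uniprot_id} returning a JSON list whose entries
    # include a "pdbUrl" field -- check the current API docs before relying
    # on these names.
    import requests

    def fetch_alphafold_structure(uniprot_id: str, out_path: str) -> None:
        """Download the predicted 3-D structure (PDB format) for a UniProt entry."""
        api_url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"
        entries = requests.get(api_url, timeout=30).json()
        pdb_url = entries[0]["pdbUrl"]  # URL of the predicted model file
        pdb_text = requests.get(pdb_url, timeout=30).text
        with open(out_path, "w") as f:
            f.write(pdb_text)

    # Example: human hemoglobin subunit beta (UniProt P68871)
    fetch_alphafold_structure("P68871", "P68871_alphafold.pdb")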

Here are a few further clips:

Even if the skeptics are right, and A.I. doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA (language models) and DALL-E 2 (generating images from language descriptions) could become a powerful force in society. In a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.

Ajeya Cotra, a senior analyst with Open Philanthropy who studies A.I. risk, estimated two years ago that there was a 15 percent chance of “transformational A.I.” — which she and others have defined as A.I. that is good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs — emerging by 2036... But in a recent post, Ms. Cotra raised that to a 35 percent chance, citing the rapid improvement of systems like GPT-3.

Because of how new many of these A.I. systems are, few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the A.I. frontier... we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance, which resulted in nothing but gridlock and testy hearings... big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks.
