This post is a follow-up to MindBlog's 12/07/22 post on OpenAI's ChatGPT essay-generating system. ChatGPT was discussed in Venkatesh Rao's essay "The Dawn of Mediocre Computing," which my techie son, Jonathan Bownds, sent to me. Below I paste in the first two paragraphs of Rao's article and then pass on some essential clips that Jon extracted from Rao's overly long text, the sort of logorrheic writing that is responsible for the TL;DR (too long; didn't read) acronym.
Well, we all knew it was coming. Computers already easily overwhelm the best humans at chess and Go. Now they have done something far harder: achieved parity with David Brooks at writing.
OpenAI’s ChatGPT, released as a research beta two days ago, has done to the standard high-school essay what cameras did to photorealistic painting and pocket calculators did to basic arithmetic. It is open sign-up and free for now, but I suspect not for much longer, so go try it, and make sure to trawl social media for the interesting and revealing examples being posted by people.
Definitions:
Mediocre computing is computing that aims for parity with mediocre human performance in realish domains where notions of excellence are ill-posed. Excellent computing is computing that aims to surpass the best-performing humans in stylized, closed-world domains where notions of excellence are well-posed.
Most of us spend most of our time in realish domains. The urban built environment, workplaces, shopping, and modern systems of roads are all examples of realish domains. But I want to focus on two big and important ones in particular: language and money. Vast numbers of mediocre humans make good livings producing words and/or moving money around. These activities are also the home domains of the two frontiers of computing today, AI and crypto. The Second and First Foundations of the mediocre future of computing.
Via seemingly unrelated computational pathways, these two realish domains have succumbed to computerized automation. Incompletely, imperfectly, and unreliably, to be sure, but they definitely have succumbed. And in ways that seem conceptually roughly right rather than not even wrong. Large language models (LLMs) are the right way for software to eat language. Blockchains are the right way for software to eat money. And the two together are the right way to eat everything from contracts to code.
This reeks of real yin-yangery that extends to the roots of computing somehow. It's not just me hallucinating patterns where there are none.
I'm not trying to be cute here. I sincerely believe mediocre computing in realish domains is not just harder than excellent computing in stylized domains, but constitutes a whole higher category of hardness. There is an element of Moravec's paradox in my reasoning here. Roughly, the paradox states that tasks that look simple, and which all humans can do, are harder for AIs than tasks that look hard, and which seem like exceptional achievements among humans.