Monday, May 29, 2023

To fulfill its promise, artificial intelligence needs to deepen human intelligence.

For MindBlog readers interested in AI, I have to point to another must-read article by Ezra Klein. Below are some clips that try to communicate his central points. (And no, I'm not using ChatGPT to generate this post, because of several of AI's limitations that he notes.) Klein starts by noting the many ways in which the internet has not fulfilled its promise, overwhelming us with more information than we can process, degrading our political discourse and attention spans, and leading us into multitasking, which not only diminishes our cognitive depth but also activates our stress chemistry. He then lists several wrong directions that might be taken by large language models like OpenAI’s GPT-4 and Google’s Bard:
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up...A question to ask about large language models, then, is where does trustworthiness not matter?...A.I. will be great for creating content where reliability isn’t a concern. The personalized video games and children’s shows and music mash-ups and bespoke images will be dazzling...But where reliability matters — say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions — deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.
...Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second...Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?...Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains — though real — are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.
Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds...My third concern is related to this use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing...It’s the time spent inside an article or book drawing connections to what we know and having thoughts we would not otherwise have had that matters...No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real.
These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that “if men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.” I think the trade-off here was worth it — I am, after all, a writer — but it was a trade-off. Human beings really did lose faculties of memory we once had.
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don’t overwhelm and distract and diminish us. We failed that test with the internet. Let’s not fail it with A.I.
