On AI, and the risks of AI
[Wed May 31 08:43:18 CDT 2023]

Artificial Intelligence (AI) is all the rage lately in the news. The New York Times reported yesterday that a group of experts had written an open letter to warn us about the dangers of the new technology, which they say poses a "risk of extinction". As a matter of fact, they call for governments to think about ways to mitigate its effects in the same manner they would tackle other "societal-scale risks", such as pandemics and nuclear war. The interesting thing is that this letter was not written by a bunch of technophobes or conspiracy theorists. On the contrary, it was written by people involved in the leading AI companies, such as OpenAI, Google DeepMind, and Anthropic. My guess is that, just as we have always done with warnings from all sorts of experts in other areas of life, we will disregard this one too and move on. Then, when problems do show up, we will hear cries that "nobody saw it coming".

Also on the topic of AI, I recently read an article by Ezra Klein titled Beyond the ‘Matrix’ Theory of the Mind that makes a few interesting points. He starts by drawing a parallel between AI and the Internet itself. In particular, he stresses that the Internet was also seen at the beginning as a liberating tool that would spread knowledge around the planet. Yet, a few decades later, its record is a bit more mixed. Certainly, it did deliver on some of its promises, but it also created a few new problems that we didn't foresee. An example to illustrate this:

You can think of two ways the internet could have sped up productivity growth. The first way was obvious: by allowing us to do what we were already doing and do it more easily and quickly. And that happened. You can see a bump in productivity growth from roughly 1995 to 2005 as companies digitized their operations. But it’s the second way that was always more important: By connecting humanity to itself and to nearly its entire storehouse of information, the internet could have made us smarter and more capable as a collective.

I don’t think that promise proved false, exactly. Even in working on this article, it was true for me: The speed with which I could find information, sort through research, contact experts — it’s marvelous. Even so, I doubt I wrote this faster than I would have in 1970. Much of my mind was preoccupied by the constant effort needed just to hold a train of thought in a digital environment designed to distract, agitate and entertain me. And I am not alone.

And, as we all know, distractions are just one of the drawbacks. There's plenty more where that came from, including fake news, the creation of social "bubbles", digital addiction, social isolation, psychological problems associated with excessive use of the Internet, easy access to pornography...

Summing up, Klein thinks we may be headed in the wrong direction in at least three ways:

One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up. I suspect this will slow their widespread use in important industries much more than is being admitted, akin to the way driverless cars have been tough to roll out because they need to be perfectly reliable rather than just pretty good.

(...)

The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

(...)

My third concern is related to that use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing. Part of my job is reading 100-page Supreme Court documents and composing crummy first drafts of columns. It would certainly be faster for me to have A.I. do that work. But the increased efficiency would come at the cost of new ideas and deeper insights.

If I were to take a wild guess regarding Klein's second point above, I'd say there is a good chance that AI will dump a lot of mediocre text (even meaningless junk) onto the digital world. Perhaps more to the point, the reason we communicate is precisely to transmit information from one person to another. Yet, do we necessarily care about what AI has to say? Let's put it this way: if I want to learn the basics of a particular topic, I may want to ask AI. But do I care what AI's opinion is on a political topic? On a moral issue? Is it even relevant? After all, no AI entity is a citizen of any polity. Do we care about its opinions? The peril here is that we may cheapen communication even more. A new barrage of text and images may end up deepening our cynicism which, in turn, could dissolve the foundations of our human societies even further. I'm not sure this is even being considered in the debate. {link to this entry}