On AI, and the risks of AI
[Wed May 31 08:43:18 CDT 2023]
Artificial Intelligence (AI) is all the rage in the news lately. The New York Times reported yesterday that a group of experts had written an open letter warning us about the dangers of the new technology, which they say poses a "risk of extinction". As a matter of fact, they call on governments to think about ways to mitigate its effects in the same manner they would tackle other "societal-scale risks", such as pandemics and nuclear war. The interesting thing is that this letter was not written by a bunch of technophobes or conspiracy theorists. On the contrary, it was written by people at the top leading AI companies, such as OpenAI, Google DeepMind, and Anthropic. My guess is that, just as we have always done with warnings from all sorts of experts in other areas of life, we will also disregard this one and move on. Then, when problems do show up, we will hear cries that "nobody saw it coming".

Also on the topic of AI, I recently read an article by Ezra Klein titled Beyond the ‘Matrix’ Theory of the Mind that makes a few interesting points. He starts by drawing a parallel between AI and the Internet itself. In particular, he stresses that the Internet too was seen at the beginning as a liberating tool that would spread knowledge around the planet. Yet, a few decades later, its record is a bit more mixed. Certainly, it did deliver on some of its promises, but it also created a few new problems that we didn't foresee. He offers an example to illustrate this. And, as we all know, distractions are just one of the drawbacks. There's plenty more where that came from, including fake news, the creation of social "bubbles", digital addiction, social isolation, psychological problems associated with excessive use of the Internet, easy access to pornography... Summing up, Klein thinks we may be headed in the wrong direction in at least three ways:
If I were to take a wild guess regarding Klein's second point above, I'd say there is a good chance that AI will dump a lot of mediocre text (even meaningless junk) onto the digital world. Perhaps more to the point, the reason we communicate is precisely to transmit information from one person to another. Yet, do we necessarily care about what an AI has to say? Let's put it this way: if I want to learn the basics of a particular topic, I may want to ask an AI. But do I care what an AI's opinion is on a political topic? On a moral issue? Is it even relevant? After all, no AI entity is a citizen of any polity. Do we care about its opinions? The peril here is that we may cheapen communication even more. A new barrage of text and images may end up deepening our cynicism which, in turn, could dissolve the foundations of our human societies even further. I'm not sure this is even being considered in the debate. {link to this entry}