Will AI Achieve Consciousness? Wrong Question
POSTED ON FEBRUARY 19, 2019 BY DANIEL C. DENNETT
Norbert Wiener imagined the future we now contend with in impressive detail and with few clear mistakes. More than any other early philosopher of artificial intelligence, he recognized that AI would not just imitate—and replace—human beings in many intelligent activities but would change human beings in the process. It’s an old, old story, with many well-known chapters in evolutionary history. Most mammals can synthesize their own vitamin C, but primates, having opted for a diet composed largely of fruit, lost that innate ability. The self-perpetuating patterns that we call human beings are now dependent on clothes, cooked food, vitamins, vaccinations, credit cards, smartphones, and the internet. And—tomorrow if not already today—AI.

The real danger, Wiener warned, is that such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the race, or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically. Sure enough, these dangers are now pervasive.

As I have been arguing recently, we’re making tools, not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations. AI in its current manifestations is parasitic on human intelligence. It quite indiscriminately gorges on whatever has been produced by human creators and extracts the patterns to be found there—including some of our most pernicious habits. The gap between today’s systems and the science-fictional systems dominating the popular imagination is still huge, though many folks, both lay and expert, manage to underestimate it. Let’s consider IBM’s Watson, which can stand as a worthy landmark for our imaginations for the time being.
AI creators have attempted to paper over this gap with cutesy humanoid touches, Disneyfication effects that will enchant and disarm the uninitiated. Joseph Weizenbaum’s ELIZA, a very early chatbot, was the pioneer example of such superficial illusion making, and it was his dismay at the ease with which his laughably simple and shallow program could persuade people they were having a serious heart-to-heart conversation that first sent him on his mission. The attitudes of people in AI toward these methods of dissembling at the “user interface” have ranged from contempt to celebration, with a general appreciation that the tricks are not deep but can be potent. One shift in attitude that would be very welcome is a candid acknowledgment that humanoid embellishments are false advertising—something to condemn, not applaud.

Once we recognize that people are starting to make life-or-death decisions largely on the basis of “advice” from AI systems whose inner operations are unfathomable in practice, we can see a good reason why those who in any way encourage people to put more trust in these systems than they warrant should be held morally and legally accountable.
Copyright © 2019 EcoChi, LLC. All rights reserved.