Monday, February 15, 2021

What is the Mind, What is Consciousness?

Introduction: 
1. What is Consciousness? Nagel 
2. Reductionist Materialism vs. Phenomenology 
3. The Hard problem of Consciousness 
4. Artificial Intelligence (AI) 
5. Zombies 
6. The Self 
7. Free Will and Agency 
8. Humanity’s Future 

PART TWO: ARTIFICIAL INTELLIGENCE AND ZOMBIES 

4. Artificial Intelligence (AI) and Consciousness

Chapter 10 of Harris’ book, titled “Complexity and Stupidity,” is an interview with David Krakauer, a mathematical biologist. Harris and his guest stress that intelligence must not be confused with consciousness. 
Humans have managed to build highly intelligent machines. However, throughout the book, Harris repeatedly warns of the danger of creating machines that are more intelligent than we are and that then slip out of our control, a sort of Frankenstein’s monster. 
In chapter two, titled “Finding Our Way,” Harris interviews David Deutsch, the Oxford University quantum physicist, and expresses his misgivings about this possibility (misgivings which Deutsch does not share). 
For one thing, Harris argues, once machines become more intelligent than humans, they may take over even if they do not have consciousness. This might then be the end of consciousness. These future machines could be incredibly intelligent and able to do just about everything, but without consciousness they would be zombies. “The lights would not be on.” They would not have experiences. 

However, some scholars feel that once a system becomes as intelligent as a human, it is bound to have consciousness as well (p. 145). Who knows? 

Popular culture has offered many examples of “robots” (machines, programmable by a computer, that are capable of carrying out complex actions automatically), “cyborgs” (beings that combine organic and mechanical parts), “computers” (machines that can be instructed, via programming, to carry out arithmetic or logical operations automatically), and other devices that possess artificial intelligence and may or may not also possess consciousness. 

Think of Arnold Schwarzenegger’s Terminator series, HBO’s Westworld TV series, and, most brilliantly, Stanley Kubrick’s 2001: A Space Odyssey. 
Recall Hal, the spaceship’s main computer, who has to be disconnected after he begins to murder the ship’s human astronauts because of a disagreement with them. As astronaut Dave proceeds to disconnect Hal, the computer expresses human feelings, including fear (“Stop, Dave. I’m afraid.” “I can feel it. My mind is going.”), and he calls himself a “conscious entity.” And remember: if it is “like something” to be an information-processing creature, there IS consciousness. Hal is a masterful illustration of this.

But Hal is science fiction. Today’s reality is different. Some scientists believe that machines will never achieve human-level intelligence and/or consciousness (p. 430). Harris isn’t sure. 

So far, we have AI but not AGI, or Artificial General Intelligence, which may be decades away. The difference is that the latter includes general sentience and consciousness. Max Tegmark, a professor of physics at MIT whom Harris interviews in the final chapter of the book (titled “Our Future”), tells us that many of his colleagues feel that any talk about consciousness is nonsense (p. 427). His own primary concern is not to settle the consciousness issue one way or the other (p. 425). For now, the distinction between humans and machines is clear. We have values and morals. We give meaning to the universe; the universe does not give meaning to us (Tegmark). 

And Nick Bostrom, the Swedish neuroscientist and philosopher interviewed by Harris in chapter 9 (“Will We Destroy the Future?”), reminds us that the main danger of artificial intelligence is the creation of machines that are much smarter than we are and that then take over, because their goals, values, and interests no longer align with ours. They may decide to do their own thing and perhaps wipe us out, much as we do with ants (p. 350). 

This is called the “breakout” problem. Harris adds a moral dimension to the problem: we should never create machines that have AGI and consciousness, machines that can, for example, suffer. Unplugging such a system (e.g., Hal) becomes murder. Forcing them to serve us and do tedious things is slave labor. Life does not have to be just biological; life is about information processing (p. 419). 

A more immediate danger, already with us while this new technology is still only a tool, is the misuse of AI: for example, facial recognition used for surveillance in China, and the Cambridge Analytica scandal, in which data mining was used to influence political outcomes. 

In sum, all these experts agree that far too little energy and far too few resources are devoted to AI safety research. This is so even if the superintelligent machines we build in the future turn out to be no more than zombies. 

5. Zombies
Right now, imagining zombies, as Harris and his guests do throughout this book, is a thought experiment (p. 14). Zombies can function just as you or I do, but they lack consciousness (p. 10). They do not have phenomenological, subjective experiences. They do not have feelings. In Harris’ words, “there is no one home.” 

Max Tegmark, again, states that “we shouldn’t worry about AI’s potential malice or consciousness, but about its competence, or when the machine doesn’t want the same things we do” (p. 425), as happened with Hal. “The ultimate tragedy would be if in the future, there are all these seemingly intelligent life-forms throughout the cosmos doing all these cool things, but it turns out that they are all zombies, and there is nobody experiencing anything. That would be really, really sad. Before there was any life, there was no meaning...in our universe. And if we manage to extinguish all consciousness, our universe goes back to being a meaningless waste of space” (p. 426). 

Any discussion of consciousness also necessitates dealing with the concepts of Self and Free Will, or Agency. I do this in the third and final installment of this article. 


© Tom Kando 2021; All Rights Reserved