Monday, February 15, 2021

What is the Mind, What is Consciousness?

 

Introduction: 
1. What is Consciousness? Nagel 
2. Reductionist Materialism vs. Phenomenology 
3. The Hard problem of Consciousness 
4. Artificial Intelligence (AI) 
5. Zombies 
6. The Self 
7. Free Will and Agency 
8. Humanity’s Future 

PART TWO: ARTIFICIAL INTELLIGENCE AND ZOMBIES

4. Artificial Intelligence (AI) and Consciousness

Chapter 10 in Harris’ book, “Complexity and Stupidity,” is an interview with David Krakauer, a mathematical biologist. Harris and his guest stress that intelligence must not be confused with consciousness. 
Humans have managed to build highly intelligent machines. Yet throughout the book, Harris repeatedly warns against the danger of creating machines that are more intelligent than we are and that then get out of our control - a sort of Frankenstein’s monster. 
In chapter two, titled “Finding Our Way,” Harris interviews David Deutsch, the Oxford University quantum physicist, and expresses his misgivings about this possibility (misgivings which Deutsch does not share). 
For one thing, Harris argues, once machines become more intelligent than humans, they may take over even if they do not have consciousness. This might then be the end of consciousness. These future machines could be incredibly intelligent and able to do just about everything, but without consciousness they would be zombies. “The lights would not be on.” They would not have experiences. 

However, some scholars feel that once a system becomes as intelligent as a human, it is bound to have consciousness as well (p. 145). Who knows? 

Popular culture has offered many examples of “robots” (machines that are programmable by a computer and capable of carrying out complex actions automatically), “cyborgs” (beings that combine organic and mechanical parts), “computers” (machines that can be instructed to carry out arithmetic or logical operations automatically via programming), and other devices that possess artificial intelligence and may or may not also possess consciousness. 

Think of Arnold Schwarzenegger’s Terminator series, HBO’s Westworld TV series, and, most brilliantly, Stanley Kubrick’s 2001: A Space Odyssey. 
Recall Hal, the spaceship’s main computer, who has to be disconnected after he begins to murder the ship’s human astronauts because of a disagreement with them. As astronaut Dave proceeds to disconnect Hal, the computer expresses human feelings, including fear (“Stop, Dave. I’m afraid”; “I can feel it. My mind is going.”), and he calls himself a “conscious entity.” And remember: if “it’s like something” to be an information-processing creature, there IS consciousness. Hal is a masterful illustration of this.

But Hal is science fiction. Today’s reality is different. Some scientists believe that machines will never achieve human-level intelligence and/or consciousness (p. 430). Harris isn’t sure. 

So far, we have AI, but not AGI - Artificial General Intelligence - which may be decades away. The difference is that the latter includes general sentience and consciousness. Max Tegmark, a professor of physics at MIT whom Harris interviews in the final chapter of the book (titled “Our Future”), tells us that many of his colleagues feel that any talk about consciousness is nonsense (p. 427). His own primary concern is not to settle the consciousness issue one way or the other (p. 425). For now, the distinction between humans and machines is clear. We have values and morals. We give meaning to the universe. The universe does not give meaning to us (Tegmark). 

And Nick Bostrom, the Swedish-born philosopher interviewed by Harris in chapter 9 (“Will We Destroy the Future?”), reminds us that the main danger of artificial intelligence is the creation of machines that are much smarter than we are and that then take over, because their goals, values and interests no longer align with ours. They might then decide to do their own thing and perhaps wipe us out, as we do with ants (p. 350). 

This is called the “breakout” problem. Harris adds a moral dimension to the problem: we should never create machines that have AGI and consciousness - machines which can, for example, suffer. Unplugging such a system (e.g., Hal) becomes murder. Forcing them to serve us and do tedious things is slave labor. Life does not have to be just biological. Life is about information processing (p. 419). 

A more immediate danger is the misuse of AI that is already occurring, even while this new technology is still only a tool: for example, the facial recognition used for surveillance in China, and the Cambridge Analytica scandal, in which data mining was used to influence political outcomes. 

In sum, all these experts agree that far too little energy and far too few resources are devoted to AI safety research. This is so even if the superintelligent AI machines we build in the future are no more than zombies. 

5. Zombies
Right now, imagining zombies, as Harris and his guests do throughout this book, is a thought experiment (p. 14). Zombies can function just as you or I do, but they lack consciousness (p. 10). They do not have “phenomenological,” that is, subjective, experiences. They do not have feelings. In Harris’ words, “there is no one home.” 

Max Tegmark, again, states that “we shouldn’t worry about AI’s potential malice or consciousness, but about its competence, or when the machine doesn’t want the same things we do” (p. 425) (as happened with Hal). “The ultimate tragedy would be if in the future, there are all these seemingly intelligent life-forms throughout the cosmos doing all these cool things, but it turns out that they are all zombies, and there is nobody experiencing anything. That would be really, really sad. Before there was any life, there was no meaning...in our universe. And if we manage to extinguish all consciousness, our universe goes back to being a meaningless waste of space” (p. 426). 

Any discussion of consciousness also necessitates dealing with the concepts of Self and Free Will, or Agency. I do this in the third and final installment of this article. 


© Tom Kando 2021; All Rights Reserved

6 comments:

Don Price said...

Read Bostrom's book "Superintelligence" for a very persuasive and scary elaboration of his position on the danger of a computer which can dodge our efforts to make sure it won't control us.
Read Stanislaw Lem's story about the world which was just a game, in which the actors were digital creatures turned loose to do their thing with sad consequences, who then berated their creators. That's a sort of mythical fable foreshadowing Bostrom's speculation that there's an overwhelming probability that other entities a little smarter than we preceded us in time and created the game that we are in. Well, I got going on Bostrom here, not all of which has direct bearing on the consciousness issue.
But the best kind of panpsychism I've seen so far is Nagel's "Mind and Cosmos" (2012), in which he demolished Dennett. My son, who majored in philosophy, took a course in which they called Dennett's book, "Consciousness Explained," "Consciousness Explained Away." I think Dennett himself is the best argument for zombies. Their neurons make them do it without any need for consciousness. Pulling a trigger, or saying a prayer.

Ann Welldy said...

Hi, Tom. This is fascinating stuff! Have you found the website WaitButWhy.com? It has some provocative articles on AI by Tim Urban: "The AI Revolution: The Road to Superintelligence," Part I, and "The AI Revolution: Our Immortality or Extinction," Part II. These were published originally in Wait But Why in January 2015, but I think you can still pull them up online. I am no philosopher, but I've always thought the puzzle of consciousness would begin to yield up some answers if you knew what questions to ask. "How do I know what I am?" won't get you very far, but "How do I know THAT I am?" might at least get you started.

Gail said...

Wow! You’ve got me thinking on a much deeper level about all of this! I think that human empathy differentiates Homo sapiens from lower life forms. Also, what role does our intersubjectivity, or one’s definition of the situation, play in what we describe as our experience? Is being a human being intrinsic to having the ability to develop meaning and turn on conscious awareness? ... Interesting conversation.

Gail

Tom Kando said...

Great comments!
Gail raises the matters of human empathy, intersubjectivity, and the definition of the situation - concepts suspiciously reminiscent of Symbolic Interactionism. She thereby anticipates some of what I write in the 3rd and last installment of this article.
Ann is apparently quite well versed in some of this material.
Don, though, is the one who takes the cake, in the sense of displaying such great erudition. Thank you, Don, for your useful references on a subject which I broached so recklessly.

Nephew Tomi said...

"Information processing is life" seems a bit of a push. Some other version of energy, perhaps not yet named.
Sorry to post a video link, but it's followed by a short talk by Stuart Russell, professor of computer science at Berkeley. It's also very entertaining.

https://youtu.be/HipTO_7mUOw

To Gail: There are plenty of "lower" life forms that show empathy and plenty of Homo sapiens that show none.

Empathy has evolved in us as a survival tool. Could it be learned/understood by machines if they read enough of our history?

Gail said...

You got me thinking about consciousness vs. intelligence and the question of meaning. For example, meaning is analogous to the human taste buds. What is good about a cake is that it tastes sweet, and what is good about a steak is that we savor the taste. When we give meaning to the universe, we give it a flavor, and that makes living worth it. Meaning offers human satisfaction. It is interesting to consider artificial intelligence as a helpful add-on to human consciousness. Prior to social media and the flourishing of technology, I would have argued that there is no way that artificial intelligence can outsmart humans, but I could see AGI creating real problems for us as humans along the following lines. Who wants to end the life of a cyborg or AI machine who exhibits conscious awareness? Worse, the Supreme Court may need to introduce new laws on penalties for murdering a cyborg or a machine that exhibits consciousness. I could imagine the owner of a machine doing prison time for inappropriately killing the device after living with it for more than two years, if it is considered conscious to any sufficient degree. This idea would have fit nicely in Rod Serling’s masterpiece, The Twilight Zone. Only this could very well be human reality in the next 50 to 100 years.

Interesting!
Gail
