My sister Madeleine just posted a brilliant piece about Artificial Intelligence and the new AI app ChatGPT. Her post is described as follows in the comment section:
“This article is a fictional account of a conversation between the author and an AI language model named Andrea. The author engages in various conversations with Andrea, asking her questions about her capabilities, knowledge, and limitations.
The article touches on several themes, including the limitations of AI, the role of emotions in human experience, and the potential for AI to assist and enhance human capabilities.
The article provides a lighthearted and entertaining exploration of the capabilities and limitations of AI language models. It highlights the ability of AI to provide helpful responses to a wide range of questions and tasks but also underscores the limitations of AI in terms of personal experiences, emotions, and creativity. Overall, it is a fun and thought-provoking read.”
I admire the incredible dexterity Madeleine displays in her post, which includes poems in three languages and discussions of profound philosophical and practical questions.
I Googled ChatGPT. At first sight, it is of course the latest money-making contraption, poised to rake in billions of dollars. It is described as “the most powerful AI language tool available today.”
In a recent Washington Post article, Hugh Hewitt describes it as “the biggest event in tech since the launch of the iPhone” (“AI is the Future, Whether we are Ready for it or not,” Washington Post, Feb. 23, 2023).
Madeleine’s and Hewitt’s articles raised my already high anxiety level. I feel nothing but apprehension and questions: What is such a new AI device going to do for people, for me, for everyone else? Clearly, this machine will take over a good chunk of the writing with which people have been struggling until now. Hewitt mentions examples such as students’ papers, personal essays for college applications, and law school exam questions.
Will AI replace writers? Some, at least the mediocre ones. To me, Artificial Intelligence is scary, impressive, up-and-coming, already here, an abomination, a Godsend for some, all this and more…
Some argue that AI is no threat to humanity “because the fundamental building blocks of AI are completely different from human intelligence. Human intelligence is so intertwined with the need for power and control. But those nasty qualities that come so naturally to humans have to be purposely programmed into an artificial system.”
So what’s the difference? People, too, are largely “made” into what they become - good, evil or something else. And what prevents anyone from programming such nasty qualities into machines? Machines, by the way, also lack “spontaneous” feelings of compassion, remorse, shame and restraint, in short: “free will.”
The proponents of AI also argue that “in order for a machine to have destructive properties, it has to be conscious of itself. And AI does not have the ability to be conscious of itself.”
However, machines can have catastrophically destructive properties without being (self-)conscious, or having a need for power and control. Does the hydrogen bomb not have destructive properties? Or the AK-47 - with or without “consciousness,” free will, and emotions?
Of course there has to be human input to direct a machine to do whatever we want it to do. Yes, it’s always humans who USE the machines. We build and program them. But that’s exactly the problem: WE humans can and often do put our machines to horribly destructive use. This applies to any technology - AI, cars, guns, chemicals, nuclear bombs, you name it. The bombs that destroyed Hiroshima and Nagasaki didn’t DECIDE to do that. The decision was Harry Truman’s. But it happened.
As you can see, I am generalizing to ALL electronic technology, and even to all technology period:
I have always been a strong believer in progress. However, I find myself increasingly becoming a Luddite.
As far as electronic technology is concerned, it is not clear that it has improved my quality of life. Having to delete over a hundred emails and more than a hundred texts EVERY DAY (most of them junk), and receiving dozens of robocalls daily, is an unbelievable nuisance that did not exist fifty years ago.
Even more horrific is the “progress” of the technology of killing. This ranges from individual violence to potential nuclear Armageddon. The mass spread of highly sophisticated firearms is one aspect of this. The proliferation of nuclear weapons is another. Nobel Prize-winning physicist Isidor Rabi suggested that “It would have been a better world without Teller.” Who was Edward Teller? The Hungarian-born inventor of the hydrogen bomb, which can be a thousand times more powerful than the atomic bombs that pulverized Hiroshima and Nagasaki.
And then you have the mother of all downsides of technology: the industrialization of the world economy, which is threatening the planet’s very survival.
So I am not talking about the familiar but unlikely science fiction scenario of machines waging war against humans and taking over, as in the Terminator movies. Or HAL’s behavior in 2001: A Space Odyssey. HAL behaved badly because he had been programmed badly.
Luddism does not oppose technology because it fears that machines will make their own free-will decisions. It opposes technology because it fears that humans will invent, build, use and instruct machines to do things that harm humans.
I am talking about the uses to which humans put machines. ChatGPT has already been programmed in a way which undermines the teaching of creative writing in school.
Okay, maybe I am wrong. Maybe if I had been around in 1900 or so, when cars and airplanes appeared on the scene, I might have been one of those idiots who believed that these were stupid machines, that if God had meant humans to fly he would have given us wings, and that, no thank you, I’d stick with horses for my transportation (joke).
The 1971 apocalyptic movie The Omega Man depicts a barbaric future world in which most technology is taboo. Armageddon would be a costly lesson, and the ensuing anti-technology dystopia would be the wrong conclusion.
In sum: AI, like all technology, holds promise for good and potential for bad. I am just the Jeremiah type. I tend to fret about the potential bad (sometimes).
© Tom Kando 2023; All Rights Reserved