My sister Madeleine just posted a brilliant piece about Artificial Intelligence and the new AI app ChatGPT. Her post is described as follows in the comment section:
“This article is a fictional account of a conversation between the author and an AI language model named Andrea. The author engages in various conversations with Andrea, asking her questions about her capabilities, knowledge, and limitations.
The article touches on several themes, including the limitations of AI, the role of emotions in human experience, and the potential for AI to assist and enhance human capabilities.
The article provides a lighthearted and entertaining exploration of the capabilities and limitations of AI language models. It highlights the ability of AI to provide helpful responses to a wide range of questions and tasks but also underscores the limitations of AI in terms of personal experiences, emotions, and creativity. Overall, it is a fun and thought-provoking read.”
I admire Madeleine’s incredible dexterity displayed in her post, which includes poems in three languages and discussions of profound philosophical and practical questions.
I Googled ChatGPT. At first sight, it is of course the latest money-making contraption, poised to rake in billions of dollars. It is described as “the most powerful AI language tool available today.”
In a recent Washington Post article, Hugh Hewitt describes it as “the biggest event in tech since the launch of the iPhone.” (“AI Is the Future, Whether We Are Ready for It or Not,” Washington Post, Feb. 23, 2023)
Madeleine’s and Hewitt’s articles raised my already high anxiety level. I feel nothing but apprehension and questions: What is such a new AI device going to do for people, for me, and for whom else? Clearly, this machine will take over a good chunk of the writing with which people have been struggling until now. Hewitt mentions examples such as students’ papers, personal essays for college applications, and law school exam questions.
Will AI replace writers? Some, at least the mediocre ones. To me, Artificial Intelligence is scary, impressive, up-and-coming, already here, an abomination, a Godsend for some, all this and more…
Some argue that AI is no threat to humanity “because the fundamental building blocks of AI are completely different from human intelligence. Human intelligence is so intertwined with the need for power and control. But those nasty qualities that come so naturally to humans, have to be purposely programmed into an artificial system."
So what’s the difference? People, too, are largely “made” into what they become - good, evil or something else. And what prevents anyone from programming such nasty qualities into machines? Which, by the way, also lack “spontaneous” feelings of compassion, remorse, shame and restraint, in short: “free will.”
The proponents of AI also argue that “in order for a machine to have destructive properties, it has to be conscious of itself. And AI does not have the ability to be conscious of itself.”
However, machines can have catastrophically destructive properties without being (self-)conscious or having a need for power and control. Does the hydrogen bomb not have destructive properties? Or the AK-47? With or without “consciousness,” free will and emotions?
Of course there has to be human input to direct a machine to do whatever we want it to do. Yes, it’s always humans who USE the machines. We build and program them. But that’s exactly the problem: WE humans, can and often do put our machines to horribly destructive use. This applies to any technology - AI, cars, guns, chemicals, nuclear bombs, you name it. The bombs that destroyed Hiroshima and Nagasaki didn’t DECIDE to do that. The decision was Harry Truman’s. But it happened.
As you can see, I am generalizing to ALL electronic technology, and even to all technology period:
I have always been a strong believer in progress. However, I find myself increasingly becoming a Luddite.
As far as electronic technology is concerned, it is not clear that it has improved my quality of life. Having to delete, EVERY DAY, over a hundred emails and more than a hundred texts (most of them junk), and receiving dozens of robocalls, is an unbelievable nuisance that did not exist fifty years ago.
Even more horrific is the “progress” of the technology of killing. This ranges from individual violence to potential nuclear Armageddon. The mass spread of highly sophisticated firearms is one aspect of this. The proliferation of nuclear weapons is another. Nobel Prize-winning physicist Isidor Rabi suggested that “it would have been a better world without Teller.” Who was Edward Teller? The Hungarian-born inventor of the hydrogen bomb, which can be a thousand times more powerful than the nuclear bombs that pulverized Hiroshima and Nagasaki.
And then you have the mother of all downsides of technology: the industrialization of the world economy, which is threatening the planet’s very survival.
So I am not talking about the familiar but unlikely science-fiction scenario of machines waging war against humans and taking over, as in the Terminator movies. Or HAL’s behavior in 2001: A Space Odyssey. HAL behaved badly because he had been programmed badly.
Luddism does not oppose technology because it fears that machines will make their own free-will decisions. It opposes technology because it fears that humans will invent, build, use and instruct machines to do things that harm humans.
I am talking about the uses to which humans put machines. ChatGPT has already been programmed in a way which undermines the teaching of creative writing in school.
Okay, maybe I am wrong. Maybe if I had been around in 1900 or so, when cars and airplanes appeared on the scene, I might have been one of those idiots who believed that these are stupid machines, that if God had meant humans to fly he would have given us wings, that, no thank you, I’ll stick with horses for my transportation (joke).
The 1971 apocalyptic movie The Omega Man describes a barbaric future world in which most technology is taboo. Armageddon would be a costly lesson, and the ensuing anti-technology dystopia would be the wrong conclusion.
In sum: AI, like all technology, has promises for good and potential for bad. I am just the Jeremiah type. I tend to fret about the potential bad (sometimes).
© Tom Kando 2023; All Rights Reserved
8 comments:
Amen (as it were)!
Deep fakes, ChatGPT, junk email, robo calls, bots..."Oy vey!"
I think I'm reading a novel series that may have been written with AI.
And now Putin keeps referencing nuclear weapons. Perhaps this is it...
Dear Tom,
I enjoyed your ruminations on your sister’s blog and on AI in general.
Could you send me a link to her blog?
I couldn’t find it anywhere online including on the blog site itself.
My email is: daviddmarquis@gmail.com
Thanks,
Dave
My experience with Microsoft's AI was really positive. I was writing a long piece and a screen popped up asking me if I wanted a summary of what I'd written. I poked the button and immediately I had a summary. It wasn't the one I might have written, but it was grammatically correct and pretty good. Imagine if, early in your career, you could have had all of your abstracts written for you. What a bonus that would be.
Excellent piece, Tom. Thank you for sharing. You helped me better articulate my deep uneasiness around AI... Our world should aim toward more authenticity, not "artifice" or fakery. The human mind can get perverted enough by itself, without adding a morality-less, consciousness-less, emotionless entity to the mix...
Thank you for your comments.
They range from agreement with my “conservatism” regarding AI (and other “contraptions” such as computers), to an appreciation for them.
The best way to describe my attitude is “ambivalence.” I talked with Madeleine at length about AI. She focuses more on the possibilities, which is great.
I am more the “what if..” sort of worrier. What if a madman-genius were to program a lot of horrible evil directives into an AI mechanism?
But the French have a saying: “Avec des ‘si,’ on mettrait Paris en bouteille.” With “what ifs,” you could put Paris in a bottle. In other words: with “ifs,” anything can happen, even the highly unlikely.
I suppose my attitude is contrary to my firm belief in science and humanity’s ever expanding knowledge.
But Caroline also makes a valid point in her concern about dehumanization.
Dave: This IS my sister’s blog. We are the co-administrators. All her posts are as readily available as mine. I’ll ask her to e-mail you.
An interesting post. I understand the perspective that a lot of technological advances can have unintended side effects, and of course some advances are explicitly targeted for military use. As Tom noted, though, it’s really up to the people using the technology… whether they use it for good or for bad is up to them. Even that is a matter of perspective; the technology that enabled the industrial revolution, and as a result threatens the planet’s survival, was not intended to do harm. Instead, the goal was to improve people’s lives, and in that I think it has succeeded. Similarly, someone who automates the mailing of millions of spam emails probably considers that ability beneficial to themselves, as it helps pay the bills, even if it is detrimental to the recipients.
I tend to be more optimistic about technology. We have higher life expectancy, lower poverty, and higher levels of education than 50 years ago. Life expectancy dropped a bit due to COVID, but that the pandemic wasn’t much worse is entirely due to a new technology which gave us a vaccine in record time. While we aren’t there yet, we are well on our way to having solutions which can address climate issues. And I have no doubt that AI will be used to help reduce spam email & texts and block robocalls.
One thing that does make me feel like a Luddite is my feelings about new cars. While a writer is not required to lean on AI such as ChatGPT to write an article, there really are few options for avoiding the plethora of (in my opinion) over-eager safety controls in new cars: Lane Centering Assist, Traffic Sign Recognition, Blind Spot Monitoring, etc., etc. It seems to encourage less attention and larger vehicles. The basic safety features mandated over 20 years ago (seat belts, antilock brakes, airbags) seemed to be more than enough. Ah well.
Everyone seems to forget that dominance in technology, and in AI in particular, will be what safeguards our freedom and democracy. Technological innovation makes for global domination, as the invention of guns and other technologies has proven. If we don't fund and support innovation, someone else will dominate us. China is slowly taking the lead, even in AI development. My fear of who will control AI, and AGI in particular, far outweighs my fear of AI itself.
An excellent point. I couldn't agree more. Throughout history, the technologically most advanced countries have had the advantage. For example, Britannia ruled for a long time because that's where the industrial revolution began.