Saturday, June 24, 2023

Should we Worry about AI?

By Madeleine Kando

If you want to learn about Artificial Intelligence, there is enough material on the Internet to last you a lifetime: videos, articles, and blog posts discussing the latest advancements, the different types of AI, and their capabilities.

Artificial intelligence can be divided into two broad categories: Artificial Narrow Intelligence (ANI), which is the AI we have today, and Artificial General Intelligence (AGI), the level of intelligence we hope to achieve.

Types of AI

Most AI systems, including chess-playing computers, self-driving cars, and large language models like ChatGPT, rely on Deep Learning, built from neural networks loosely modeled on the human brain, and on Natural Language Processing (NLP), which enables computers to understand words in much the way humans do. While these advancements are impressive, they all fall under the category of Artificial Narrow Intelligence (ANI).

Companies such as OpenAI (backed by Microsoft), Google AI, and DeepMind (a Google subsidiary) are competing to dominate the market.

OpenAI has developed a hide-and-seek game using Reinforcement Learning, in which AI agents are set loose to learn over millions of games. The designers themselves were amazed at the strategies these ‘agents’ came up with to maximize their rewards.
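At its core, reinforcement learning is a trial-and-error loop: try actions, observe rewards, and gradually prefer whatever pays off. A minimal sketch in Python (the action names and reward values are invented for illustration; OpenAI's actual multi-agent system is vastly more complex):

```python
import random

# Toy reinforcement-learning loop (hypothetical actions and rewards,
# not OpenAI's system). The agent learns which action earns the most.
actions = ["hide_behind_wall", "run_in_open", "lock_door_with_box"]
value = {a: 0.0 for a in actions}  # learned estimate of each action's payoff

def reward(action):
    # Hypothetical reward signal: staying hidden pays off, being seen does not.
    return {"hide_behind_wall": 0.5, "run_in_open": -1.0,
            "lock_door_with_box": 1.0}[action]

random.seed(0)
for episode in range(1000):
    # Mostly exploit the best-known action; occasionally explore at random.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(value, key=value.get)
    # Nudge the running estimate toward the reward actually observed.
    value[a] += 0.1 * (reward(a) - value[a])

best = max(value, key=value.get)
print(best)  # the strategy the agent "came up with"
```

Nothing here tells the agent *how* to win; the surprising strategies emerge simply because the loop keeps whatever maximizes reward.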


The next step is Artificial General Intelligence (AGI), a system that can do everything humans can do. This phase does not exist (yet).

The third step is Artificial Superintelligence (ASI): systems that are smarter than humans.

Should we Worry?

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended, and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Elon Musk claims that AI is humanity’s “biggest existential threat”, surpassing the dangers of climate change.

More recently, Geoffrey Hinton, the AI scientist who pioneered artificial neural networks, has voiced deep concern about the risks of artificial intelligence. “These things are totally different from us,” he says. “Animal brains and neural networks are completely different forms of intelligence. AI is a new and better form of intelligence.”

Some Examples 

Hinton’s lab tried to teach simulated AI organisms to jump by measuring how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips: they excelled at what was being measured, but they didn’t do what the designers wanted them to do. “The systems are really good at achieving the goal they learned to pursue, but we’re building systems we don’t understand, which means we can’t always anticipate their behavior. Right now the harm is limited because the systems are limited. But it could have graver consequences in the future as AI systems become more advanced.”
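The failure described here is often called reward misspecification: the learner optimizes the metric it was given, not the goal its designers had in mind. A toy illustration (the strategies and numbers are invented for the example):

```python
# Toy illustration of reward misspecification (hypothetical numbers,
# not the actual experiment). The designers want jumping, but the
# reward only measures foot height, so a strategy that raises the
# feet without jumping scores best.
strategies = {
    "jump":           {"foot_height": 0.5, "actually_jumps": True},
    "grow_tall_pole": {"foot_height": 3.0, "actually_jumps": False},
    "do_nothing":     {"foot_height": 0.0, "actually_jumps": False},
}

# The learner optimizes only the measured reward...
best = max(strategies, key=lambda s: strategies[s]["foot_height"])
print(best)                                # grow_tall_pole
print(strategies[best]["actually_jumps"])  # False: reward maximized, goal missed
```

The optimizer is not misbehaving; it is doing exactly what it was told, which is precisely Hinton's point about systems we cannot fully anticipate.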

Consciousness

“For millions of years, intelligence and consciousness went together,” says historian and philosopher Yuval Noah Harari. Consciousness is the ability to feel things like pain and pleasure and love and hate, whereas intelligence is the ability to solve problems. But artificial intelligence does not have consciousness; it just has intelligence. “AI is evolving at breakneck speed in the realm of intelligence, but since its inception, it has not advanced one millimeter in the realm of consciousness,” says Harari.

Even so, AI is becoming so embedded in our culture that it is changing the way we perceive the world around us. We cannot even agree on who won an election, or whether climate change is real. We can no longer rely on our own perception of what is real and what is not. Our human brains have been hacked.

Netflix already tells us what to watch and Amazon tells us what to buy. Soon machines will tell us what to study and where to work, whom to marry, and even whom to vote for. Why would a self-learning, unconscious system use data to help us? Why not manipulate us for profit? “If we are not careful, we will be dominated by entities that are more different from us than we are different from chimpanzees,” warns Harari.

Since AIs are not conscious, we need to teach them to recognize when their actions cause suffering, and to stop.

Whom to kill?

“When I think of coders and engineers,” says Harari, “I don’t think of philosophers and poets, but they are increasingly solving philosophical and poetical riddles.”

For instance, suppose a child jumps in front of a self-driving car, and the only way to avoid running him over is to swerve into the path of a truck, killing the owner instead. Someone has to tell the algorithm what to do in that situation: the designer must actually solve a philosophical question.
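A dilemma like this cannot stay abstract; it has to be written down as an explicit rule somewhere in the code. A minimal sketch (the function name and the policy it encodes are invented, and a real system would be enormously more nuanced):

```python
# Hypothetical sketch of the "philosophical riddle" a self-driving-car
# designer must hard-code. The policy below is one invented choice,
# not a recommendation: the point is that some choice must be made.
def choose_maneuver(pedestrian_ahead: bool, swerve_is_fatal_to_owner: bool) -> str:
    if pedestrian_ahead and swerve_is_fatal_to_owner:
        # One possible policy: brake hard but never swerve into fatal harm.
        return "brake_straight"
    if pedestrian_ahead:
        return "swerve"
    return "continue"

print(choose_maneuver(True, True))
```

Whatever the function returns, somebody chose it in advance, which is exactly why engineers end up doing the philosophers' work.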

The United Nations suggested a moratorium on artificial intelligence systems that seriously threaten human rights until safeguards are agreed upon. In fact, advisers to President Biden are proposing what they call a bill of rights to guard against some of the new technologies. But instead of halting the development of AI, why not slow down its deployment until we are absolutely sure that it does no harm?

Conclusion

We are all trapped in a ship with huge blinders. We cannot see anything except the narrow path straight ahead of us: the path to more intelligence, more problem-solving capability. What if problems are part and parcel of being human? Do they all have to be solved? Is death a problem that has to be solved? Is happiness a problem because we want more of it? What about harmony? Acceptance? Peace? *

Why are we so fixated on intelligence? What will AI do for animals, for organisms that have no clue what intelligence means but still feel pain and happiness?

It is up to us, mere mortals, to steer this AI ship in the right direction before the ship steers itself over a cliff with all of humanity on board.

* I am quoting Harari here.
