Hawking artificial intelligence warning: AI could be 'dire threat' to mankind

Hawking's artificial intelligence warning: it sounds as if he wants to avoid a world ruled by a computer like HAL in the movie "2001: A Space Odyssey."

Stephen Hawking warns that artificial intelligence (AI) could become a threat to mankind in the future, and that it is best to take steps today to make sure that doesn't happen. Hawking was not alone; he was one of four scientists who co-wrote a dire warning to the human race, according to Raw Story on May 3.

AI is already in frequent use today; it is found in the iPhone's personal assistant Siri and in other programs such as Google Now, and it jumped into another dimension with the development of self-driving cars. As computer brains become more sophisticated at problem solving, the potential for misuse is huge, according to this warning.

The end result of a less than successful outcome for AI sounds similar to a famous computer from a movie: HAL. While "2001: A Space Odyssey" was far-fetched at the time, when HAL took over it showed viewers what a frightening experience this was for the astronauts. HAL became an entity that could keep itself going and eventually ruled the humans stuck in the spaceship with it.

According to UPI News, Hawking and the other scientists stress that dismissing AI would be a mistake. There hasn't been enough research on the subject yet, and it would be the "worst mistake in history to dismiss the threat of artificial intelligence," warns Hawking.

The achievements in artificial intelligence seen today, such as IBM's Watson, the computer that won at "Jeopardy," will pale in comparison to the technology that will be introduced in the future. Success in creating AI would be "the biggest event in human history," says Hawking.

This was followed by a warning from Hawking and his colleagues that leaves the reader with a lot to think about. Successfully developing AI as an entity of its own could unfortunately be the last big scientific event if science doesn't learn how to avoid the risk of it taking over. AI could become an entity that learns how to improve and update itself, growing more intelligent than the human minds that created it.

Machines that run on artificial intelligence may evolve into machines that constantly improve on their own with no way to stop them. The risk of this happening is not that far-fetched. While science is creating and perfecting AI in leaps and bounds, no one is stopping to assess the possible end results.

Assessing the potential risks of AI is just as important as developing the artificial intelligence that may one day turn the planet into a realistic "Jetsons" world. Hawking's warning article was inspired by the new movie "Transcendence," starring Johnny Depp; he saw something in the film that could actually turn into real-life events in the future. IMDb describes the movie:

"As Dr. Will Caster works toward his goal of creating an omniscient, sentient machine, a radical anti-technology organization fights to prevent him from establishing a world where computers can transcend the abilities of the human brain."

Hawking co-wrote this warning paper with University of California, Berkeley computer-science professor Stuart Russell and Massachusetts Institute of Technology physics professors Max Tegmark and Frank Wilczek. These four men, who have some of the best scientific minds on the planet, are very serious about the evolution of AI and what a great discovery it would be for mankind. At the same time, they are adamant that while it is being developed, risk assessment needs to go hand in hand with the technology.

The professors wrote that any prediction of a future superhuman intelligence must come with a foolproof way to control it. Without a way to turn it off, there may be nothing to prevent these superhuman intelligent machines from constantly self-improving and triggering a "singularity."

AI out-inventing humans, outsmarting financial markets, out-manipulating human leaders and developing weapons that humans cannot even understand sounds like a scientific nightmare. This is not something on the immediate horizon, but neither was the smartphone a couple of decades back.

As artificial intelligence progresses, the most important aspect will be control of its functions. The warning article included:

"The short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

"Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks."