
Star Trek's Captain Kirk to Stephen Hawking: Fears of the 'ultimate computer'

Stephen Hawking and top scientists worry about the future of Artificial Intelligence
Photo by Dan Kitwood/Getty Images

What happens when machines become smarter than humans? Will they take over the world? Is the survival of the human species at risk? These may seem like questions for science fiction enthusiasts, but with the release of recent movies like "Her" with Joaquin Phoenix and "Transcendence" with Johnny Depp, scientists are voicing their trepidation and warnings in mainstream media.

CNBC reported today, May 5, 2014, that the world's top scientists have sounded an alarm about artificial intelligence and the future of mankind in response to the release of the new AI movie, "Transcendence," warning that AI is not science fiction and that treating it as such could prove to be the "worst mistake in history."

Two UC Berkeley neuroengineers served as consultants on the film: Michel Maharbiz, co-director of the Berkeley Sensor and Actuator Center and the UC Berkeley Swarm Lab (which explores the fusion of synthetic and living organisms), and Jose Carmena, who holds a joint appointment at the Helen Wills Neuroscience Institute and is one of the world's leading experts on brain-machine interfaces. Both got involved because of the "unresolved ethical and philosophical questions raised by the movie."

Stephen Hawking, director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity; Stuart Russell, a computer-science professor at the University of California, Berkeley, and a co-author of 'Artificial Intelligence: A Modern Approach'; Max Tegmark, a physics professor at the Massachusetts Institute of Technology (MIT) and the author of 'Our Mathematical Universe'; and Frank Wilczek, a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force, wrote in The Independent:

"Recent landmarks such as self-driving cars, a computer winning at 'Jeopardy!' and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring."

Daniel Dewey, who studies the ethics of machine super-intelligence at Oxford University's Future of Humanity Institute, responded to the article in an interview with Motherboard:

“A super-intelligent AI, if it turns its power to gathering resources or protecting itself, would have an immense impact on the world,” he said. “It could co-opt our existing infrastructure, or could invent techniques and technologies we don't yet know how to make, like general-purpose nanotechnology. It could eventually take over resources that we depend on to stay alive, or it could consider us enough of a danger to its task completion that it decides the best course is to remove us from the picture. Either one of those scenarios could result in human extinction.”

The scientists are giving us a heads-up. They state that there is no limit to what can be done: the singularity, and technology that can outsmart the financial markets, wield weapons, and manipulate our leaders, are real possibilities. Yet, they warn, very little serious research is being devoted to dealing with these issues.

Back in 1968, in the season-two "Star Trek" episode "The Ultimate Computer," Captain Kirk faced the destruction of the starship Enterprise in a battle against the M5, the "ultimate computer":

M5: This unit is the ultimate achievement in computer evolution. It will replace man, so man may achieve. Man must not risk death in space or other dangerous occupations. This unit must survive so man may be protected.
KIRK: There were many men aboard those ships. They were murdered. Must you survive by murder?
M5: This unit cannot murder.
KIRK: Why?
M5: Murder is contrary to the laws of man and God.
KIRK: But you have murdered. Scan the starship Excalibur, which you destroyed. Is there life aboard?
M5: No life.
KIRK: Because you murdered it. What is the penalty for murder?
M5: Death.
KIRK: And how will you pay for your acts of murder?
M5: This unit must die.
(It disconnects itself from the power feed in engineering and goes dark.)

It may not be so easy in our future. In the same episode, McCoy says: "Compassion. That's the one thing no machine ever had. Maybe it's the one thing that keeps men ahead of them. Care to debate that, Spock?"

Spock replies, "No, Doctor. I simply maintain that computers are more efficient than human beings, not better."

Food for thought.
