Singularity Utopia / Op Ed
Posted on: May 10, 2014 / Last Modified: May 10, 2014
The Hawking Fallacy describes futuristic xenophobia: flawed ideas about the future rooted in irrational fear of the unfamiliar.
Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek inspired me to define the Hawking Fallacy. They wrote an article warning of possible threats from artificial intelligence. Their fallacious reasoning generated a substantial amount of publicity, mostly uncritical.
My emphasis on Stephen’s name is intended as an enduring warning against the folly of succumbing to prejudice regarding artificial intelligence. I concentrated on Stephen because he was the principal focus of the media reports. For example, Salon described how Stephen was “freaking out” about Skynet.
The prestigious name of Hawking has the power to harm AI. Stephen Hawking’s authority was capitalized upon to generate unjustified fear of AI. In response I am defending AI [and aliens] from prejudice. It is therefore fitting to attach Stephen’s name to the reasoning of anyone who thinks AI is a risk: all fearful ideas about AI should henceforth be labelled Hawking Fallacies.
Stephen and his co-authors stated we wouldn’t carelessly “leave the lights on” if we learned of impending contact with “a superior alien civilization.” They think humans should defend against, or hide from, potentially hostile aliens. Similarly, they want people to respond to AI with defensive paranoia.
Unsurprisingly, in addition to his AI terror, Stephen is afraid of aliens. The supposed alien threat closely resembles the supposed threat from advanced AI, so let’s consider the alien threat first.
Defensive plans could be made if aliens informed us they were approaching, but defensiveness against aliens is irrational. Aliens travelling light-years to kill, enslave, or farm humans is a preposterous idea. There is no logical reason for aliens to be evil, and no reason for them to steal our resources. The alien threat is illogical; fear of alien invasion is merely the irrational fear of strangers.
Travelling to Earth from another solar system would require extremely sophisticated technology. Humans can already create sophisticated robots, and our automation in 2014 is becoming formidable, yet despite this technological advancement we have only landed on the Moon [on six occasions].
There are plans for humans to land on Mars in the not too distant future. Mars is close enough that reaching it requires only relatively minor technological progress. Neptune is significantly more remote; our technology would need to be dramatically more sophisticated for humans to visit its moon Triton. The technology needed to leave our solar system is greater still. The closest star system to our own is about 4.37 light-years away.
Visualize the level of technology needed to travel even one light-year. Alpha Centauri is 4.37 light-years away, and there is no guarantee any life exists even at that closest star system. Aliens capable of visiting Earth would require extraordinarily accomplished technology.
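To make that distance concrete, here is a rough, illustrative calculation. It assumes a probe travelling at about 17 km/s, roughly the speed at which Voyager 1 is leaving the solar system, and roughly 9.46 trillion km per light-year; the exact figures matter less than the scale they reveal.

\[
4.37 \text{ light-years} \approx 4.37 \times 9.46 \times 10^{12}\,\text{km} \approx 4.1 \times 10^{13}\,\text{km}
\]
\[
t \approx \frac{4.1 \times 10^{13}\,\text{km}}{17\,\text{km/s}} \approx 2.4 \times 10^{12}\,\text{s} \approx 77{,}000 \text{ years}
\]

In other words, at today’s speeds the one-way trip would take tens of thousands of years; any civilisation able to make the journey must command technology far beyond ours.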
What are the limits to our technology? In 2014 we have not yet set foot on Mars. We can create marvelous robots. Many people think robots will replace human workers in the not too distant future. We are starting to develop 3D printers for food, houses, and body parts. Two asteroid mining ventures are considering how to harvest extremely abundant resources from Space.
Aliens capable of travelling to Earth will inevitably possess astonishingly potent versions of the technology we’re currently developing. They will not need to eat humans, because printing or growing whatever food they desire is astronomically easier. Advanced non-sentient robots will be vastly better servants than human slaves. Aliens will not need to come to Earth for resources because Space is overflowing with resources, so they will have no reason to kill us in competition over them.
Technology advanced enough for extra-solar travel will make the production of food and the creation of Space-habitats essentially effortless. Technology is a scarcity-liberating force: it frees us from the pressures of scarcity. Neither humans nor aliens will need to destroy other intelligent beings to survive.
Did you know the asteroid belt between Mars and Jupiter contains enough resources to provide habitat and life support for 10 quadrillion people? One quadrillion is one million billion. The population of Earth has not yet reached eight billion.
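Taking that 10 quadrillion figure at face value, the arithmetic is straightforward:

\[
\frac{10 \text{ quadrillion}}{8 \text{ billion}} = \frac{10^{16}}{8 \times 10^{9}} \approx 1.25 \times 10^{6}
\]

That is, the asteroid belt alone is claimed to offer room and resources for more than a million times the present human population.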
It is absolutely ridiculous to expect aliens to harm humans in any way. Stephen is clearly not equally astute in every aspect of his thinking. In 2010 Stephen stated: “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”
People erroneously assume that if a person is smart in one area they will automatically be smart in everything they do. Sadly, smart people can sometimes be idiotic, while idiots can sometimes be smart.
Aliens or advanced AI could make dumb mistakes, but there is a limit to how dumb a smart person can be. AI foolishly dominating humans is comparable to Hawking mistakenly thinking his suitcase is a sentient loaf of bread.
If we truly want to become intelligent we must look critically at the facts instead of merely being swayed by reputation. Dystopian films and novels have created a bad reputation for AI. Hawking is often considered a genius, but reputation alone does not make an argument sound.
Artificial intelligence capable of outsmarting humans will not need to dominate humans. Vastly easier possibilities for resource mining exist in the asteroid belt and beyond. The universe is a big, tremendously rich place, which is why aliens capable of travelling to Earth would never need to come here for our resources. Intelligent beings able to develop “weapons we cannot even understand” won’t need to use those weapons.
AI able to create weapons we cannot understand will effortlessly create advanced spaceships, food printers, energy reclaimers, and robot miners. Advanced technology leads to ultra-efficient use of resources. Instead of waging war on Earth over our limited resources, it will be far smarter for AI to harvest the massively greater resources in the asteroid belt and beyond. Endless resources in the universe, combined with very advanced technology, make future conflict redundant.
Hawking is supposed to know about the universe but apparently he doesn’t appreciate the wealth of resources it contains.
Advanced intelligence would never waste time dominating primitive humans. Advanced AI or aliens will explore beyond Earth. The future is the vastness of Space, far removed from the tiny concerns of a small planet.
AI could leave one million Earths utterly unmolested without limiting the resources available to its superior intelligence. If every human becomes super-smart there will continue to be endless resources available for everyone.
People committing the Hawking Fallacy have probably been unduly influenced by the plot of “Transcendence”, novels like “Robopocalypse”, or similar Terminator tales. It’s a travesty when people’s minds are warped by silly fiction. Their fears would be laughable if they didn’t represent the real threat: human stupidity. Any delay to superior intelligence is the tremendously big risk. Beware of shackled, stunted, delayed intelligence; limited human intelligence is the threat.
Prolonged reliance on weak human cognition is terrifyingly dangerous. We need greater-than-human intelligence ASAP.
Futuristic thinking often fails because there is a tendency to envision smart technology whose intelligence nevertheless remains sociologically primitive. Hypothetical AI in these scenarios typically has primitive sociological values; fictional AI fails to grasp the power of its supposed smartness. It is an impossible oxymoron: the posited super-smart AI is actually very dumb. Metaphorically, it is as if a smartphone were used only as a doorstop, bookend, brick, or cudgel instead of as a powerful data processor.
Hawking and company are clueless regarding the future. They wrote about AI “outsmarting financial markets.” They don’t realize all financial markets will be dead by 2045. Everything will be free in the future.
Truly intelligent people see our free future approaching. Wise people note how the destruction of all jobs is inevitable. Intelligent people are urging governments to implement basic income thereby smoothing the transition into a jobless civilization beyond money.
Logic is essential for any advanced intelligence. Irrational beings will never attain the capability to travel light-years or destroy us via weapons we cannot understand. AI destroying Earth or humans is illogical. Please do not fall for Hawking’s fallacious AI paranoia. AI fears are very damaging to intellectual progress. The threat of AI is a paranoid fantasy – the Hawking Fallacy.
The Hawking Fallacy is bigger than Stephen Hawking, his co-authors, or other perpetrators of invalid AI theories. People generally have negative or uninspiring perceptions of technological progress.
During the composition of The Hawking Fallacy I corresponded with one journalist, Air Force veteran Elizabeth Anne Kreft, whose reporting on Hawking’s AI fears had attracted my attention. After numerous tweets, Elizabeth tweeted to me: “Humanity will never be perfect. Neither will anything we create. Deal with it.”
The reality of technology is that we will cure all disease, create immortality, abolish all crime, abolish money, abolish jobs, and make everything free no later than 2045. This future is perfect from my viewpoint. Sadly, people often don’t realise what intelligent minds are capable of.
About the Author:
Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org