Fear not artificial intelligence. Be terrified of natural stupidity – which is us.
We’ve been sold a tired, binary vision of the future: on one side, a techno-utopia where benevolent digital gods grant us immortality; on the other, a dystopian wasteland where Skynet melts our bodies for fuel. In both cases, the machine inherits our worst traits. But that’s projection, not prophecy.
The truth is, the greatest threat to civilization isn’t a self-aware AI – it’s the same thing it’s always been: human idiocy, apathy, hypocrisy, intolerance, envy, arrogance, cruelty… Shall I continue?
The popular “AI doomsday” narrative assumes machines will develop human motivations like power, revenge, and status. But these aren’t lines of code; they are messy evolutionary impulses. We inherited these and other “gifts” – jealousy, greed, rage – from what Paul MacLean called the reptilian complex, topped by a truly convoluted cerebral structure, courtesy of the evolutionary process.
There! The very thing that drives us and makes us human has always been the biggest threat we face. Over millennia, we’ve refined these primal traits into the complex social emotions upon which our civilization was built. Yet, despite our genuine if naïve desire to improve, we live amid endless contradictions (pollution, inequality, racism, totalitarianism, etc.). I would venture to say that – obviously – there is much room for improvement.
But to project these shortcomings onto the machines? Please, show me the algorithm for spite. Perhaps one of you could show me the code for hate or ravenousness. World domination? Who is the fearmongering crowd referring to, Artificial General Intelligence or Marvin the Martian?
A purely rational cost-benefit analysis would see world domination as the logistical nightmare that it is – a truth, mind you, that empires past and present never seemed to grasp. Who would really want to ‘rule the world’ (Tears for Fears notwithstanding)? Every lunatic who attempted it got the same result: pain, death, and chaos!
Think about it for a moment: the much-feared superintelligence should – at the moment of the singularity – become, well… super intelligent, don’t you think? The machine would know better, precisely because it would learn from the totality of our history without being burdened by the ego, the tribalism, and the irrationality that royally screw us up every now and then.
Seriously now, technology doesn’t leap from harmless to apocalyptic overnight. It evolves in slow, messy steps. The car didn’t bring paradise or ruin – it gave us traffic jams, drive-throughs, accidents, and road trips. AI is no different. Expecting it to vault from ‘… tell me a story about rainbows and unicorns’ to global overlord is not a leap; it’s a chasm filled with Hollywood’s fears.
Even if some rogue AI emerged, it would face humanity’s greatest strength: paranoia. We watch, audit, suppress, patch, arrest, or kill-switch anything suspicious. Any would-be overlord would quickly learn the hazards of sharing a planet with a species that has survived ice ages, famines, plagues, wars, governments, and – what surprises me most when I think about it – each other.
We’re unpredictable, territorial, and occasionally petty enough to unplug a server to charge a phone. Yes, Walter, I remember, that was me – sorry dude!
If AI ever harms us, it won’t be because it became evil. It will be because we built it carelessly, deployed it recklessly, or aimed it at the wrong target. AI is a mirror reflecting its maker. When we see danger in it, we are seeing ourselves.
Face it! We are the danger. We set AI on this feared path every time we train it with biased data, teaching it our prejudices.
We do it for profit, gleefully rushing to embed it, biases and all, into critical infrastructure without robust safeguards. We do it for power, weaponizing it for autonomous warfare or programming it for mass surveillance.
We unleash it to sow disinformation and manipulate populations, all in service of the same ancient drives for dominance and control.
While it is true that we face a clear danger of unleashing our very own AI-driven demise through miscalculation or malice, how is that fundamentally different from many other unwise moves, such as a paranoid North Korean leader launching a nuclear missile, the Russian leadership miscalculating an adversary’s resolve, or some death-to-the-infidels religious extremist group deploying a bioweapon to hasten their arrival to paradise?
Regardless, as much as I hate to admit it, left unchecked, our human recklessness will inevitably task an AI with achieving some goal, leaving us blind to the civilization-ending methods the machine might logically employ to succeed. But maybe, just maybe, if we do act on it (act on ‘us’, that is), we could reach a Gene Roddenberry-style utopia, the kind we’ve seen on TV. Who knows?
In the end, in my opinion, one thing is clear: For now, The Ghost in the Machine is a fantasy; however, the mildly evolved ape at the controls is a very real and terrifying fact.
Don’t fear the machine gaining consciousness. Fear the humans who seldom awake to their own.