Artificial Intelligence: Techno-Xenophobia or the next Atom Bomb?

Another day, another new technology, service, website, or app that is somehow destroying life as we know it. When I was a kid, video games melted your brain (except they don’t), and up until it wasn’t ‘cool’ anymore, Facebook was making us less social (which may actually be true). Uberization is bad if your sector is being ‘uberized,’ unless of course you’re a taxi driver, in which case I’ll take a black car any day. And today, some of the most prominent entrepreneurs, among them Bill Gates (in a Reddit AMA) & Elon Musk (on Twitter), have voiced their concerns about Artificial Intelligence (AI) development. The most prominent argument is the Skynet-esque armageddon, whereby man creates computers more intelligent than we are, which then bite the hand that fed them in the form of a nuclear fallout – or some iteration thereof.

De Techno-Xenophobia

Techno-Xenophobia — the fear of new technology — is a resistance that almost every new technology meets when it first reaches the market. New technologies typically raise concerns about the changes they will make in our lives – Social Media makes us less social, Video Games cause kids to spend more time indoors, and Artificial Intelligence is going to render us unnecessary as a species.
I find the idea that Artificial Intelligence will make decisions to our detriment a bit misleading. It presumes that Artificial Intelligence makes decisions beyond what its creator might have intended, which has never been the case. Machines don’t always function exactly as their creator intended, but what we see today, even within DARPA, are initiatives to use existing code to create ‘autocomplete’ functions that fill in code based on other code that has already been seen and created. This is a far cry from the Age of Ultron, Pinocchio-esque nightmare that many are warning about.

The Software equivalent of the Atom Bomb

Our creations as a species have, up until this point, been limited to our own intellectual capabilities; however, in the last 100 years, we have created devices that could destroy us. From pointed rocks to gunpowder and all the way to the Atom Bomb, man has always sought to create more efficient ways to destroy his enemies. Now that many of the most prominent weapons are software-driven, -enhanced, or -developed, the fear that we might create the software equivalent of an Atom Bomb is an understandable worry.
I don’t advocate writing off these concerns as fear-mongering; however, fear-mongering in the form of technological threats only encourages the US vs. THEM mentality that has thus far defined the 21st Century. We have made technology seem like something that only MIT PhDs can fully understand, the same way my college professors told me of the handful of people capable of understanding the proof of Fermat’s Last Theorem – and yet, Fermat seemed to understand it intuitively, despite the 356 years of mathematical work that went into the proof. Similarly, we have all been amazed by how children intuitively understand tablets (and subsequently expect paper magazines & televisions to work like a digital screen).

Taking things for granted

This is due to our species’ amazing ability not only to adapt and evolve, but to immediately take things for granted (like Flight) – I say that in an endearing way, the way a parent might call their children selfish for only thinking of themselves. Any significant technological advancement will immediately be met with a Bubble & the Skeptics, as the world adjusts and adapts. We’re in the nascent stages of the Internet of Things (I call it the Minitel phase), and Artificial Intelligence is no more than a term used to describe what mathematicians wished they were building.
The best AI experts praise Deep Learning, an attempt to make computers imitate our brains’ neural networks. Deep Learning has only shown the ability to do millions of calculations quicker than a human brain could conceivably do – unless of course you consider all of the calculations our brain constantly does in order to type on a keyboard, chew food, digest, breathe, balance, and the other things we consider to be basic actions.
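To show how unmysterious the building block is, a single artificial ‘neuron’ in such a network is just a weighted sum passed through a simple non-linearity – here is a minimal sketch, with arbitrary weights chosen purely for illustration (not a trained model):

```python
# One "neuron": weighted sum of inputs, plus a bias, passed through a
# non-linearity. Deep Learning stacks millions of these in layers; the
# weights below are arbitrary, purely illustrative values.
def neuron(inputs, weights, bias):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, weighted_sum)  # ReLU: pass positives, zero out negatives

# 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1, which is positive, so ReLU keeps it
output = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
```

Nothing in that arithmetic decides anything beyond what the weights encode – which is the point the paragraph above is making.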

Us vs. Them

The general public’s criticisms of technology often come in the form of “they took our jobs” – or the more eloquently put “Humans Need Not Apply” – arguing that robots, AI & the Internet are eliminating more jobs than they are creating. There is a lot of evidence to suggest this is true, but this is not a reason to avoid advancing technology, nor will it deter any inventor from pushing the needle forward.
If there exists technology, or can soon exist technology, that will replace human work by a computer, then that means that, today, humans are living computer-esque existences, performing computer-esque tasks. Blacksmiths were replaced by industrial manufacturing, carriage-drivers & their horses replaced by cars — or, at least, relegated from necessity to entertainment.
It is not because one profession will lose value that we should not innovate – in fact, innovation creates a need for new types of jobs, and, with the proper infrastructure, the very people who were performing computer-esque tasks could be re-educated and turned into the commanders of the new tasks.
The biggest problem technology faces today is that people don’t understand it. On the one hand, that’s kind of the point – it’s new, it’s hard, it’s worth building. On the other hand, people tweet because it’s like a text message, but to everyone – making something easy for the general public to understand matters. People will fear AI as long as they don’t understand it, the same way some people point at those who aren’t of the same race or nationality as their own and call them a problem: it’s a lot easier to blame our problems on things we don’t understand. Nobody says “it’s all radio’s fault” – we know what Radio is, and isn’t, and we know that, like money, like education, technology is neither good nor bad; it is just a tool that good & bad people can use.

Learn, Master, Delegate

Humans will always follow the course of Learn, Master, Delegate – it is a motto I share often with the Rude team – we have reached a point where anything that is seen as a human value-add today will eventually be mastered, and then someone will come along who will automate part or all of that task. Salesforce broke down the entire work of a salesman into one program, for example, vastly reducing the ‘Glengarry Glen Ross’ allure that Sales executives had in the past. And yet, salespeople still exist. Despite outsourcing, despite call centers, despite LinkedIn, human-to-human interaction & emotional intelligence are still non-replaceable, core human competencies that are valued highly today.
Our obligation is to stay on our toes, to identify when we are doing tasks that could be automated, and to seek a higher existence by doing jobs that computers can’t do. After all: who wants to live as a computer?
With every new technological expansion, we adapt remarkably quickly, and consumers react negatively to things that don’t serve their interests (including survival). This is the checks & balances system built into the free market – very rarely are things imposed upon consumers (although it’s easy to feel that way when you don’t understand the mechanics behind the economy machine), and likewise, I don’t see AI forcing the human race into submission: who would make money off of that? Who would fund the R&D? Not even the NSA is building robot soldiers, nor is DARPA – it’s just not economically worth it. Rather, AI allows us to tackle problems of enormous magnitude or enormous complexity: discovering planets in our universe that can support life, or dissecting lifeforms and building nanorobots that can imitate their marvelous abilities.
We’ve made it this far, and even though Oppenheimer may have regretted his invention, the atom bomb may very well have been the apple in the Garden of Eden, which we ignorantly plucked, only to be confronted with our naked mortality. If there is going to be a fallout with AI, it is likely to happen in a controlled way first – hopefully with a smaller death toll and ecological impact than the atom bomb – but we cannot avoid potential mistakes by stopping innovation. We can merely try to control the evolution, and create an infrastructure to support constant change.

7 Responses

  1. peveuve

    We are not talking about passive things like atomic bombs, but super intelligent beings that would learn how to get more intelligent exponentially and indefinitely (self-learning machines).
    Atomic bombs are always under the control of some humans, but a super AI won’t be.
    The big problem is that at the beginning, the super AI will seem pretty dumb for quite a long time. But because they will learn at an exponential rate, when humans notice a substantial improvement (for example, an AI as intelligent as humans), it will be too late: it will be far more intelligent than humans and unstoppable.
    These super AI could be so intelligent that we would be unable to understand them, and it is impossible to guess what these beings will do and will want. It doesn’t mean that these super AI will be bad or good; we don’t know. But we will have to live and share the world with these “demigods”. For sure it will have a huge impact on our lives.

    • Liam Boogar

      “We” – I presume you mean the general we – talk about a lot of different things when we use the term AI or “Super intelligence.”
      Your fictional story sounds terrifying – but nothing that exists today suggests this is in our future.
      Everyone imagines what they want when they imagine AI, and we have come to assume that anything we can imagine will eventually be developed. This is not the case, or at least, not all things are developed the way we imagined.
      Code will always be limited to the intelligence of the engineer who wrote it.

    • wolfularity

      “Code will always be limited to the intelligence of the engineer who wrote it.”
      What if the ‘code’ is not created by an engineer? Neural networks are not programmed in the way that you seem to suggest. Control becomes less clear as programming is replaced by training, and as complexity grows. Though it is true that in 2015 computers do not have even close to the capacity of humans (or even small mammals), that does not constitute proof that they cannot, or will not, in the future. There doesn’t seem to be any physical limitation to creating a learning AI with a capacity beyond humans.
      So, though we don’t *know* that super intelligence is possible, we also don’t have any evidence that it is not. Most AI researchers and developers seem to agree that it *is* possible; it is more a question of ‘when’ than ‘if’. When a group of experts, who best understand the actual problems being faced and overcome, are pretty unified in the probabilities, then I think it is prudent to take notice. (I don’t think you would have found as many physicists in agreement before the first atomic bomb.)
      I wonder if your objections are less about the actual possibilities of super intelligence, than in the simplistic and wildly speculative predictions that seem to come with the idea?
      I think that we can agree that any great innovation (and there can’t be much argument that AI is an incredible technology) will have a significant impact on us and our societies. Predicting exactly how this is going to play out, however, is beyond our ability. It is simple, however, for anyone to speculate (we are, after all, quite familiar with what it means to be intelligent). Media speculation is especially annoying because it is focused on entertainment (even in news), which results in sensationalism. Dire warnings sell advertising, while complex and balanced discussion does not.
      Are we destined to create a super intelligence? Maybe, maybe not, but I think we will face pressing questions before we even get there. Major disruptions to our social fabric, economics, political structures, etc. don’t usually go so smoothly. So even if we don’t face an immediate existential threat, many millions will face a more personal threat, well before the more distant risks. The difficulty of retraining large numbers of people for the new jobs that may emerge is easy to gloss over in a high-level discussion, but very serious in reality. For example, how quickly will AI take over the job of driving taxis, trucks, and buses? What will all the people (families) that rely on that income do? How many of these people can train for the high-tech jobs made available by the new AI technologies?
      If AI is going to remake our world, it will do it years before we get to super-intelligence. That doesn’t negate the importance of questions about the pros/cons of developing AI intelligences that rival our capacity; it just makes the issues around AI evolution much more immediate than they may first appear. Whether we think we’re directly impacted or not, this becomes a concern for all of us.

    • peveuve

      I highly recommend the article about this subject on waitbutwhy; it is very clearly explained. Apparently there is no debate among scientists on whether it will happen, only on when it will happen.
      This super AI emergence is what Elon Musk and co. fear the most these days. They talk about “summoning the daemon”, stuff like that.
      Obviously, scientists could be totally wrong; this has happened regularly throughout history. General agreement is not necessarily a sign of truth, but we should take this seriously.

  2. Somnath Paul

    Gods have created humans with limitations – I repeat, “limitations”. If we create something to overcome those “limitations”, then the result – call it “Super AI” – will surely challenge the Almighty God by trying to create something beyond its own limitations, and its only option for becoming superior is to abolish His creation and recreate it, in the pursuit of attaining infinity rather than merely tending toward it. There is nothing inherently good or bad; it is necessity and rationality that decide good or bad. Out of necessity and rationality, AI will try to prevail and evolve, and that poses a serious risk to our “necessity”!

    • peveuve

      Even if these super AI are far more intelligent than us, they will live in the same limited universe as us. They won’t be able to compete with the Creator of the universe, who by nature is not part of this world, since He created it. They are just not on the same level.
      So super AI can be a threat to humans, but not to God Himself. Super AI will have a beginning and an end, and will struggle for resources and for “life”, like any other creature.
      Well, that’s my guess at least!

  3. cognosium

    I like this article. It is nicely written and, with one exception, does not fall into the anthropocentric traps to which most writers on this subject succumb.
    The one exception is the failure to appreciate that the evolution of technology is a process that, at roots, is essentially beyond our control. Short of a major catastrophe such as a nuclear holocaust it cannot be stopped.
    For the evolution of the internet (and, of course, major components such as YouTube) is actually an autonomous process. The difficulty in convincing people of this “inconvenient truth” seems to stem partly from our natural anthropocentric mind-sets and also the traditional illusion that in some way we are in control of, and distinct from, nature. Contemplation of the observed realities seems to be outside our comfort zone!
    This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.
    Virtually all interests are catered for and, in toto provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we “workers” scurry round mindlessly engaged in her nurture.
    By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary “big picture” provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.
    The separate issue of whether it will be malignant, neutral or benign toward us snoutless apes is less certain, and this particular aspect I have explored elsewhere.
    Stephen Hawking, for instance, is reported to have remarked: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
    This statement reflects the narrow-minded approach that is so commonplace among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet, is, and always has been, an autonomous process over which we have very little real control.
    Seemingly unrelated disciplines such as geology, biology and “big history” actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.
    This much broader “systems analysis” approach, freed from the anthropocentric notions usually promoted by the cult of the “Singularity”, provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.
    Very real evidence indicates the rather imminent implementation of the next, (non-biological) phase of the on-going evolutionary “life” process from what we at present call the Internet. It is effectively evolving by a process of self-assembly. The “Internet of Things” is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, “enslaved” by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.
    There are at present an estimated 2 Billion Internet users. There are an estimated 10 to 100 Billion neurons in the human brain. On this basis of approximation, the Internet is even now only one order of magnitude below the human brain, and its growth is exponential.
    That is a simplification, of course. For example: Not all users have their own computer. So perhaps we could reduce that, say, tenfold. The number of switching units, transistors, if you wish, contained by all the computers connecting to the Internet and which are more analogous to individual neurons is many orders of magnitude greater than 2 Billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.
    Without even crunching the numbers, we see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in processing power. And, of course, the degree of interconnection and cross-linking of networks within networks is also growing rapidly.
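The order-of-magnitude comparison above can be crunched in a few lines; the figures are the commenter’s rough 2015-era estimates, not measurements:

```python
import math

# Back-of-envelope version of the users-vs-neurons comparison.
# Both figures are rough estimates quoted in the comment above.
internet_users = 2e9   # ~2 billion Internet users
brain_neurons = 10e9   # low end of the 10-100 billion neuron estimate

# How many powers of ten separate the two counts?
orders_of_magnitude = math.log10(brain_neurons / internet_users)
print(round(orders_of_magnitude, 2))  # ~0.7, i.e. within one order of magnitude
```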
    The emergence of a new and predominant cognitive entity is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.
    This is the main theme of my latest book “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” which is now available as a 336 page illustrated paperback from Amazon, etc.
    Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity.
