Artificial Intelligence: Techno-Xenophobia or the next Atom Bomb?

Another day, another new technology, service, website, or app that is somehow destroying our lives as we know them. When I was a kid, video games melted your brain (except they don't), and up until it wasn't 'cool' anymore, Facebook was making us less social (which may actually be true). Uberization is bad if your sector is being 'uberized' – unless, of course, you're a taxi driver, in which case I'll take a black car any day. And today, some of the most prominent entrepreneurs, among them Bill Gates (in a Reddit AMA) & Elon Musk (on Twitter), have voiced their concerns about Artificial Intelligence (AI) development. The most prominent argument is the Skynet-esque armageddon, whereby man creates computers more intelligent than himself, which then bite the hand that fed them in the form of nuclear fallout – or some iteration thereof.

De Techno-Xenophobia

Techno-Xenophobia – the fear of new technology – is a resistance that almost every new technology meets when it first reaches the market. New technologies typically raise concerns about the changes they will make in our lives: Social Media makes us less social, Video Games cause kids to spend more time indoors, and Artificial Intelligence is going to render us unnecessary as a species.
I find the idea that Artificial Intelligence will make decisions to our detriment a bit misleading. It presumes that Artificial Intelligence makes decisions beyond what its creator intended, which has never been the case. Machines don't always function as their creators intended, but malfunction is not intent: what we see today, even within DARPA, are initiatives to use existing code to create 'autocomplete' functions that fill in new code based on code that has already been seen and written. This is a far cry from the Age of Ultron, Pinocchio-esque nightmare that many are warning about.

The Software equivalent of the Atom Bomb

Our creations as a species have, until this point, been limited by our own intellectual capabilities; however, in the last 100 years, we have created devices that could destroy us. From pointed rocks to gunpowder all the way to the Atom Bomb, man has always sought more efficient ways to destroy his enemies. Now that many of the most prominent weapons are software-driven, software-enhanced, or software-developed, the fear that we might create the software equivalent of an Atom Bomb is understandable.
I don't advocate writing off these concerns as fear-mongering; however, framing technology as a threat only encourages the US vs. THEM mentality that has thus far defined the 21st Century. We have made technology seem like something only MIT PhDs can fully understand, the same way my college professors told me of the handful of people capable of understanding the proof of Fermat's Last Theorem – and yet Fermat seemed to understand it intuitively, despite the 356 years of mathematical work that went into the proof. Similarly, we have all been amazed by how children intuitively understand tablets (and subsequently expect paper magazines & televisions to work like digital screens).

Taking things for granted


This is due to our species' amazing ability not only to adapt and evolve, but to immediately take things for granted (like Flight) – I say that in an endearing way, the way a parent might call their children selfish for only thinking of themselves. Any significant technological advancement is immediately met with a Bubble & the Skeptics as the world adjusts and adapts. We're in the nascent stages of the Internet of Things (I call it the Minitel phase), and Artificial Intelligence is no more than a term used to describe what mathematicians wished they were building.
The best AI experts praise Deep Learning, an attempt to make computers imitate our brains' neural networks. So far, Deep Learning has only shown the ability to do millions of calculations quicker than a human brain could conceivably do – unless, of course, you count all of the calculations our brains constantly perform in order to type on a keyboard, chew food, digest, breathe, balance, and do the other things we consider basic actions.
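To demystify the term a little: each 'neuron' in a deep network is nothing more exotic than a weighted sum passed through a squashing function. Here is a minimal sketch in Python (the function name, weights, and inputs are illustrative, not any real framework's API) – 'intelligence' emerges only from stacking millions of these trivial units and tuning their weights:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed by a sigmoid.

    Deep Learning layers millions of units like this one; the network's
    behavior comes entirely from the values of the weights, which are
    adjusted during training.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: maps any sum into (0, 1)

# Toy example: two inputs with hand-picked (untrained) weights.
output = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)
```

Each call is just arithmetic – the kind of calculation a computer does millions of times faster than we could on paper, and the kind our own neurons do constantly without our noticing.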

Us vs. Them


The general public's criticism of technology often comes in the form of "they took our jobs" – or, more eloquently put, "Humans Need Not Apply" – the argument that robots, AI & the Internet are eliminating more jobs than they create. There is plenty of evidence to suggest this is true, but it is not a reason to avoid advancing technology, nor will it deter any inventor from pushing the needle forward.
If there exists technology – or there can soon exist technology – that will replace human work with a computer, then that means that, today, humans are living computer-esque existences, performing computer-esque tasks. Blacksmiths were replaced by industrial manufacturing; carriage drivers & their horses were replaced by cars – or, at least, relegated from necessity to entertainment.
The fact that one profession will lose value is no reason not to innovate – in fact, innovation creates a need for new kinds of jobs, and, with the proper infrastructure, the very people who were performing computer-esque tasks could be re-educated and turned into the commanders of the new ones.
The biggest problem technology faces today is that people don't understand it. On the one hand, that's kind of the point – it's new, it's hard, it's worth building. On the other hand, people tweet because it's like a text message, but to everyone – making something easy for the general public to understand matters. People will fear AI as long as they don't understand it, the same way some people point at those of a different race or nationality and call them a problem: it's far easier to blame our problems on things we don't understand than on things we do. Nobody says "it's all Radio's fault" – we know what Radio is, and isn't. And we know that, like money, like education, technology is neither good nor bad; it is just a tool that good & bad people can use.

Learn, Master, Delegate

Humans will always follow the course of Learn, Master, Delegate – a motto I share often with the Rude team. We have reached a point where anything seen as a human value-add today will eventually be mastered, and then someone will come along to automate part or all of that task. Salesforce, for example, broke down the entire work of a salesman into one program, vastly reducing the 'Glengarry Glen Ross' allure that Sales executives had in the past. And yet, salespeople still exist. Despite outsourcing, despite call centers, despite LinkedIn, human-to-human interaction & emotional intelligence remain non-replaceable, core human competencies that are highly valued today.
Our obligation is to stay on our toes, to identify when we are doing tasks that could be automated, and to seek a higher existence by doing jobs that computers can’t do. After all: who wants to live as a computer?
With every new technological expansion, we adapt far too quickly, and consumers react negatively to things that don't serve their interests (including survival). This is the checks & balances system built into the free market – very rarely are things imposed upon consumers (although it's easy to feel that way when you don't understand the mechanics behind the economic machine) – and likewise, I don't see AI forcing the human race into submission: who would make money off of that? Who would fund the R&D? Even the NSA isn't building robot soldiers – nor is DARPA; it's just not economically worth it. Rather, AI allows us to tackle problems of immense magnitude or complexity: discovering planets in our universe that can support life, or dissecting lifeforms and building nanorobots that imitate their marvelous abilities.
We've made it this far, and even though Oppenheimer may have regretted his invention, the atom bomb may well have been the apple in the Garden of Eden: we plucked it ignorantly and were confronted with our naked mortality. If there is to be a fallout with AI, it will likely happen in a controlled way first – hopefully with a lower mortality rate and smaller ecological impact than the atom bomb's – but we cannot avoid potential mistakes by stopping innovation. We can only try to steer the evolution, and build an infrastructure that supports constant change.