It is a familiar scenario explored in science fiction novels and movies: the computing power of machines expands exponentially in every dimension. Machines are developed with a near-limitless capacity for processing information, immediate access to the totality of human knowledge, and senses that approach real-time omniscience. Based on the assumption that consciousness can be reduced to computation,[i] this process drives relentlessly toward a threshold where machines become self-aware and indistinguishable from the human mind. Consider the memorable characters this genre has produced: Skynet from The Terminator, Data from Star Trek, HAL 9000 from 2001: A Space Odyssey, the replicants from Blade Runner, and so many others.
Dreaming of Electric Sheep
In the film Blade Runner, based on Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, the world is populated with machines so sophisticated that they are indistinguishable from the creatures upon which they are based. Robot sheep and frogs have replaced species destroyed in the fallout from a nuclear holocaust. Androids called replicants are designed to do the bidding of their human creators. The replicants have a will to survive and thrive beyond their designed four-year lifespan, and this drives some of them to rebel. Bounty hunters called Blade Runners are hired to deal with rogue replicants. When a group of replicants violently escapes an off-world colony, a Blade Runner named Deckard is hired to hunt them down and destroy them. Replicants can be identified only by a special test: an interview administered on the premise that replicants lack empathy, and therefore lack the physiological responses that humans would present to certain questions.
While this neo-noir film represented pure, fanciful fiction when it was released in the early 1980s, today it serves as an extreme example of a growing concern in the technology space: the dangers of artificial intelligence and machine learning. Entrepreneurs and technologists such as Bill Gates and Elon Musk have already publicly sounded the alarm:
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well,” Gates wrote. “A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” (Mack)
While this plays well in popular culture and generates a lot of clickbait, serious thinkers, from philosophers to researchers, are also working to understand the implications of AI (Marcus).
Hippocrates and the Superintelligence
How can we ensure that a superintelligence reflects the values that we share and cherish? Can we prevent a superintelligence from driving inexorably towards its chosen goal, humanity be damned? With the proliferation of tools and platforms that bring AI and machine learning technologies to the marketplace, these questions are relevant right now. When we design and build systems using AI and machine learning, we must provide practical safeguards:
- AI and Machine Learning Don’t Solve Every Problem. The mathematician and philosopher Nassim Taleb says: “A mathematician starts with a problem and creates a solution; a consultant starts by offering a solution and creates a problem” (Taleb). We have all fallen victim to the manias surrounding the latest technology trends. Yesterday’s ‘there’s an app for that’ is today’s ‘let’s use AI and machine learning to mine our data and find hidden patterns’. The reality is that AI and machine learning are useful for very particular problem sets. Other types of problems can be solved more efficiently and cheaply with traditional techniques. Make sure your advisors have the experience and wisdom to know when to apply AI and when not to.
- Know Where AI and Machine Learning Excel. In medicine, triage and pre-screening can be offloaded to AI, freeing medical staff from bureaucracy and allowing doctors to focus on the hard problems and spend more time with patients. Machine learning algorithms can inject randomness into the decision search for diagnosticians, getting them ‘unstuck’ from familiar patterns and solutions. Finally, the big data revolution has created a state of permanent information overload in many fields. AI excels at pre-sorting information to surface what is most likely relevant in a specific context. In all cases, use AI and machine learning to assist people in making the best decisions possible.
- Leave the Ethics to Humanity. Part of what it means to be human is to see the value and dignity inherent in other people. It is our responsibility to judge between right and wrong. We should never delegate this responsibility to algorithms or AI, which tend to optimize for increased efficiency, lower costs, and other potentially dehumanizing factors. We must uphold a Hippocratic Oath for AI: not only should the systems we design never harm or injure, but our design of AIs should preclude their ability to make moral decisions that affect the lives of people. Of course, this won’t dissuade the evil genius. In the end, we may indeed need Deckards.
To connect with one of PointClear Solutions’ technology experts, or to learn more about this blog topic (or our digital strategy, design, development, and/or management services), Contact Us. (And don’t forget to follow us on LinkedIn for more great content!)
[i] The question of the nature of consciousness goes to the heart of what it means to be a human being. A matter of interest to philosophers, theologians, and (perhaps most importantly) poets, this topic is beyond the scope of this article.
Mack, Eric. “Bill Gates Says You Should Worry About Artificial Intelligence.” Forbes, 28 Jan. 2015. Accessed 10 Jan. 2017. <http://www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat/#61bf143d1036>.
Marcus, Gary. “Why We Should Think About the Threat of Artificial Intelligence.” The New Yorker, 24 Oct. 2013. Accessed 10 Jan. 2017. <http://www.newyorker.com/tech/elements/why-we-should-think-about-the-threat-of-artificial-intelligence>.
Taleb, Nassim. The Bed of Procrustes. Random House, 2010.