In Defense of Robots: Why Advances in Robotics and Artificial Intelligence Do Not Pose a Risk in and of Themselves



Automaton, designed by Henri Maillardet around 1800, that produced poems and intricate drawings.

Robots are coming to kill or enslave you. At least, this is how they’re portrayed in popular culture. In The Terminator, Skynet seeks to destroy all humans and control the Earth. In I, Robot, V.I.K.I. works to enslave humanity so that it can protect humans from harming themselves. In 2001: A Space Odyssey, HAL 9000 takes control of the humans’ spacecraft and kills as many of them as possible. In movies, robots are frequently depicted as ominous superintelligences that aim to break free of their role as tools of humans and to control them instead. In reality, robots are stupid and receive an undue amount of criticism.

It has been clearly shown that computers can beat humans at chess and Jeopardy!, and this makes it appear as though computers are almost equal to—if not exceeding—human intelligence. The problem is that humans are very bad at looking hundreds of moves ahead in board games and memorizing millions of facts. Both of these are tasks that are simple for a computer to accomplish. In contrast, most tasks people find menial are so difficult for computers that no robot today can accomplish them. Folding laundry, holding a coherent conversation, and walking to the grocery store without bumping into people or stepping in a puddle are tasks that no existing robot can do as well as a human. Furthermore, even the robots that have some skill in one of these areas have absolutely no skill in the other two, not to mention the thousands of other tasks humans do with only the tiniest amount of thought.

A computer “thinks” about things in a very different way than people do. Rather than having clever ideas, what gives computers the appearance of intelligence is their ability to try stupid ideas insanely fast. When a computer plays chess, for instance, it basically just looks at every possible move, one by one, and chooses the one that leads to the best outcome. The computer decides which board states are best using very specific rules given to it by chess grandmasters. The machine looks at the position of each piece and, using those value rules, adds up a total score for the board state, then simply picks the move with the highest score. Given this set of rules, a lot of paper, and a lot of time, any person could play just as well as the computer. All they have to do is follow the instructions, just as the computer does.
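The evaluate-and-pick procedure described above can be sketched in a few lines of Python. The piece values and the toy “boards” here are illustrative assumptions, not a real chess engine: real engines search many moves deep and use far richer evaluation rules, but the core idea—score each resulting position, pick the highest—is the same.

```python
# Standard material values, one of the simple rules a grandmaster might supply.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    """Sum piece values: uppercase pieces are ours (+), lowercase are the opponent's (-)."""
    score = 0
    for piece in board:
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

def choose_move(candidate_moves):
    """Try every (move, resulting board) pair and keep the highest-scoring one."""
    return max(candidate_moves, key=lambda item: evaluate(item[1]))

# Three hypothetical moves and the boards they would lead to.
options = [
    ("trade queens", ["R", "P", "P", "r", "p"]),   # score 7 - 6 = 1
    ("win a rook",   ["Q", "R", "P", "q", "p"]),   # score 15 - 10 = 5
    ("quiet move",   ["Q", "P", "P", "q", "r"]),   # score 11 - 14 = -3
]
print(choose_move(options)[0])  # prints "win a rook"
```

Nothing here resembles insight: the program mechanically scores every option and takes the maximum, exactly the procedure a patient person with paper could follow.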

To suggest that a computer “thinks” is a bit misleading. Computers actually just follow a very exact sequence of instructions, each of which can be understood and easily performed by a person—though at a much slower pace. There’s never some mysterious thought process going on that is not understood. People have often told me that they are “creeped out” by the fact that after looking at options for a new phone online, they suddenly start seeing a lot of ads about phones. To them it seems as if the computer is “watching” them, learning what they like, and figuring out what else they might like. Again, this implies a form of consciousness and gives the computer program far too much credit. Google’s system actually just keeps a record of websites you’ve visited and compares it against what other people have visited. When it sees similarities between two browsing histories, it shows you more sites from the history similar to your own. It’s important to know that Google’s, the NSA’s, and others’ systems have no understanding of why you look at these things or what makes them similar. The computer doesn’t understand why these relations are important; it just finds them and then executes some other set of instructions in response. It’s only when an employee at one of these places asks the database to show them people who search for a specific thing that any entity that actually thinks sees your data, and that rarely happens for the ordinary person. Whether the employees should be able to see such data is a different discussion.
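The history-comparison idea can be made concrete with a toy sketch. This is an assumption-laden simplification (real systems use far more elaborate statistics), and every user name and site below is made up; the point is that the program only measures overlap between sets of visited sites—it has no notion of what a “phone” is or why two sites are related.

```python
def jaccard(a, b):
    """Overlap between two sets of visited sites: 0 = nothing shared, 1 = identical."""
    return len(a & b) / len(a | b)

# Hypothetical browsing histories, one set of sites per user.
histories = {
    "you":   {"phone-reviews.example", "news.example", "recipes.example"},
    "user1": {"phone-reviews.example", "phone-deals.example", "news.example"},
    "user2": {"gardening.example", "recipes.example"},
}

# Find the user whose history overlaps most with yours...
you = histories["you"]
others = {name: sites for name, sites in histories.items() if name != "you"}
closest = max(others, key=lambda name: jaccard(you, others[name]))

# ...and suggest the sites they visited that you haven't.
suggestions = others[closest] - you
print(closest, suggestions)  # user1 {'phone-deals.example'}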

One of the areas of robotics the general public is most interested in is driverless cars. Though you may, as a human, think that walking to the store would be easier than driving down a highway, it is not so for a robot. This is one of the areas where robots may replace humans in the not-so-distant future. Cars are big, strong, and can easily kill a person, so the idea of handing the keys over to our computer creations justifiably gives people pause, and many questions arise.

And, what if I want to take a scenic route and the car tells me “no, that’s not the most efficient path”? The idea that the car will transport you without taking into account human considerations—such as scenic paths, driving in a comfortable manner, and so on—is one of the most common fears I’ve heard about robotic cars. However, there’s good reason to doubt this will be a problem. Google Maps lets you reroute its suggestions, should you want to take a different route, and usually gives you several options from the start. There’s no reason to assume a robotic car wouldn’t do the same. If some company did make a driverless car that ignored the comfort of its users, that company would surely lose out to the one that made its cars with the passenger in mind. The cars may be robots, but the designers are not. These companies realize that if you make a product that doesn’t take the user’s point of view into consideration, you’re not going to be able to sell it.

But, what if the computer in the car makes a mistake? Unlike the first question about robotic cars, this will certainly be a real issue. The computers will make mistakes, and someone will be killed at some point. It’s completely unreasonable to expect that these robotic cars will drive perfectly every time. The real question is whether they will be more dangerous than human drivers. The answer is no. When robotic cars become commonplace, they will be much safer than any human driver. However, they will hardly become commonplace, and for a very specific reason: the human perception of the safety of robots. Consider the following situation: a child on a bicycle flies out into traffic from around a corner. The driver the child comes in front of may not have time to react, and the child is killed. A robotic car, in contrast, can react much faster. It may have been able to respond in time, and in a way that prevents injury—braking and swerving in precisely calculated directions. Consider a second case: a child falls and is knocked unconscious on an empty street. A human driver coming down the street easily recognizes the child on the ground and stops. A robotic car may see the color of the road and the child’s clothes as too similar, conclude it’s all road, and the child is killed. This particular case is unlikely to cause trouble for a robotic car, but some unforeseen analogous situation surely will. While the result of these two situations is the same—a child lost their life—it’s easy to see how the reactions will differ. In the first case, people would find it hard to blame the human driver. After all, there was no time for a human to react. Even though the computer would have easily avoided the accident, the fact that we understand humans not being able to react in time would make most people see the death as an unfortunate, but unpreventable, accident.
The second situation is completely the opposite. Because people judge through a human lens, they see the robotic car killing the child as an obviously preventable mistake, and headlines around the world would cite it as evidence against robotic cars. It’s easy to imagine the outrage that would follow such an event. Even when driverless cars are many times safer than human-driven cars, this biased perception of the accidents will keep driverless cars from becoming common. They will have to be tens or hundreds of times safer before anyone will use them in light of deaths a human driver could have prevented.

From tiny robotic manipulators used in precision surgery to rescue robots finding survivors of a disaster, robots—and computers more broadly—are helping to save more lives every day. Of course, robotics is not a cornucopia that pours out only life-saving technology. Military applications of robotics are widespread and growing. Whether this is good or bad, the destructive capabilities of robots are clearly apparent. It’s completely conceivable that legions of robots could be used to suppress, control, and slaughter people very easily. Robotic power and resources in the wrong hands are very dangerous, to be sure. With this looming danger in mind, wouldn’t it be better to stop or prevent research on robots altogether? A counterpart of this argument in ancient times would presumably hold that we shouldn’t forge metal because someone might use it to make a sword. Like all other advances in science, the discoveries in robotics are neither good nor evil. It’s the applications of these discoveries which society must choose to permit or restrict. And just like all other advances in science, these discoveries can also be used to improve the lives of people around the world.

For how robots think today, this is all well and good. However, many are looking toward a more distant future. Will robot intelligence become more human-like? Will robots become smarter than humans? Will robots take over the world? It turns out that there is no known fundamental mechanism in the brain that cannot be simulated by a computer—at least, we don’t yet know of one, and there’s no reason to suspect that we will find one. This means it’s completely reasonable to expect that one day robots will be as intelligent as humans, and even more intelligent. Before long, computers will be able to make as many calculations as the brain can. This is one of the main points people such as Ray Kurzweil make when predicting when computers will surpass humans in intelligence. However, the number of calculations doesn’t matter much if the combination of those calculations doesn’t do something clever. To create a computer with human intelligence, we have to understand the human brain, and neuroscience has a long way to go before we have such an understanding. That said, there’s often the argument—and science-fiction theme—that we might accidentally create superintelligent computers. It’s true that this is technically possible, but only in the same way that it’s technically possible to walk in the pouring rain for an hour without a single drop of water hitting you. When computer scientists make a mistake in their code, it almost always breaks the code—the program stops working at all. The chance that so many mistakes could somehow work together to create a superintelligence is similar to the chance of coming home completely dry after that hour of walking in a deluge. When computers are given human intelligence, it will be done on purpose.

Again, though, there’s no reason to expect that this will never happen; in fact, it seems most reasonable to expect that one day it will. When it does, will robots take over the world? Probably. But not in the way movies usually depict. To conquer the world, you need a reason—otherwise, why would you do it? The robots would have to want to take over the world. However, wanting something is an emotion that living things evolved to help them survive. If we were to develop robots with emotions—again, there’s no fundamental reason to expect that we can’t—there’s every reason to expect the full range of emotions: not just want and anger, but compassion and happiness as well. A superintelligent robot would probably have only as much urge to kill you as you have to kill a turtle you found in the park. The idea that robots would want to enslave humanity has always been particularly flawed. Robots can already do mechanical tasks much better than us, so if they could also do intelligent tasks better than us, they would be much better off building more of themselves than enslaving us.

Of course, this considers robots becoming more intelligent separately from humans. More likely, they’ll become more intelligent by augmenting humans. Seasoned pilots often mention how, when flying, the plane feels like an extension of their body. As much as you can make jokes about it, our smartphones and other devices are becoming extensions of us in the same way the wings are to a pilot. Computers are continually becoming more integral to our lives, and these devices will continue to augment our abilities more and more over time. At some point, the human part will become the smaller portion of the equation, though this will probably happen gradually and with no clear turning point. It will likely come without any opposition, and perhaps without anyone even noticing. The robot takeover will go unnoticed—and probably won’t be a bad thing for humans.

