Symbolism, Imagery, Allegory
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
We're quoting these laws again because they're so important. Asimov himself noted (in his book Robot Visions), "If all that I have written is someday to be forgotten, the Three Laws of Robotics will surely be the last to go." And that seems to be a pretty fair statement, since the Three Laws can be found all over the place in other works of fiction.
But the reason we're talking about the Three Laws here is that they are all over the place in I, Robot. Mr. Weston hints at the First Law in "Robbie," Speedy is caught in a dilemma between the Second and Third Laws in "Runaround," in "Reason," Cutie disobeys orders (which is against the Second Law) but mostly in order to uphold the First Law, and so on. Not every story focuses on the Three Laws, but every story includes them. In fact, Asimov expects the reader to be so familiar with the Three Laws that by the end of the book, he doesn't need to repeat them. When Calvin is talking to Byerley in "The Evitable Conflict," she talks about the Laws without ever telling the reader what those Laws are (53).
OK, so let's assume that we're all familiar with what the Laws say. What do they mean? Well, we know what the Three Laws mean to Susan Calvin, since she tells us explicitly:
> The three Rules of Robotics are the essential guiding principles of a good many of the world's ethical systems. (Evidence.138)
So it's the Three Laws that make sure we have good robots.
But there's something about the Laws that almost everyone gets wrong: people think of the Three Laws as software that's just programmed into the robot's brain, as if you could program the Laws and have a good robot, or leave them out and have an evil robot. But check out when Calvin and Peter Bogert discuss the issue in "Little Lost Robot": if you modify the Three Laws, you'd be left with "complete instability, with no nonimaginary solutions to the positronic Field Equations" (64). The Laws aren't just programs; they're a necessary part of how you build a positronic brain. Calvin says so even more clearly in "Evidence": "A positronic brain can not be constructed without" the Laws (133). So if you leave the Laws out, you don't get an evil and intelligent robot, but rather a crazy robot or just a pile of scrap metal.
So, in Asimov's robot stories, the Three Laws are not just a guarantee that the robots are good. They seem to indicate that there's some connection between goodness and stability/sanity—or even between goodness and intelligence. That is, it's impossible to be truly intelligent unless you're truly good.