The Three Laws, Revisited
Isaac Asimov introduced the Three Laws of Robotics in 1942. They were elegant, memorable, and, as Asimov himself spent dozens of stories proving, deeply insufficient. The original laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov's genius was writing the cracks in these laws. Edge cases. Conflicting loyalties. Robots paralyzed by ambiguity, or worse, confidently wrong. The laws looked airtight on paper and leaked everywhere in practice. ...