What does the film I, Robot have to do with complementary medicine law, policy, and ethics?
A lot. In addition to giving us a sneak preview of what our world may look like as robots begin to live and walk among us, the film essentially poses two seminal questions: what is the proper relationship of humans to machines? And conversely, what is the proper relationship of machines to humans? Both questions ask whether the two-way relationship will (or should) be resolved by force of law (Detective Spooner thinks so), by ethical rules hard-wired into every robot’s “brain” (the chairman of U.S. Robotics and Detective Spooner’s superior think so), or by healing conversations between the species (what??? more about this below).

The second question–how should robots relate to humans–is easily answered by Isaac Asimov’s famous “Three Laws of Robotics”: (1) A robot may not harm a human being, or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. (3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. This is all well and good–it keeps those robots in line, serving humans, which is what they were made for (life without a robot then will be like life without a cellphone, DVD, palm pilot, or wireless Net connection now–choose your poison–I mean metaphor); but what happens when someone designs a robot who can break the three laws?
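Asimov’s laws are, in effect, a tiny prioritized rule system: each law yields to the one above it. A programmer might sketch that precedence like this (a hypothetical illustration only–the flags and function names are invented for the sketch, not drawn from the film or from Asimov):

```python
# A minimal sketch of the Three Laws as a strict priority cascade.
# Each proposed action is checked against the laws in order; the first
# applicable law decides, so higher laws always override lower ones.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would the action injure a human?
    allows_human_harm: bool = False  # would it let a human come to harm through inaction?
    ordered_by_human: bool = False   # was the action commanded by a human?
    endangers_robot: bool = False    # does the action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, nor allow harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (any First Law conflict was already
    # rejected above, so obedience wins here).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority.
    if action.endangers_robot:
        return False
    return True
```

Note how the ordering does the moral work: an order that endangers the robot is still obeyed (Second Law outranks Third), while an order to harm a human is refused before obedience is even considered.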
Life Beyond Law
Break or transcend? Violate or surpass?
If the three laws are broken, our first presumption is that the robot is no longer “safe.” Here is where the film begins: a crime has been committed, and the likely suspect is a machine. The spectre of a Frankenstein is raised: technology going out of control. The film, however, quickly takes the robot’s transcendence as a jumping-off point to reflect on the first question: how do we humans relate to technology, particularly when the technology’s complexity, sophistication, and circuitry begin approaching human consciousness?
The film is ostensibly about robots, but for a moment substitute anything for “robot”: a cat, dog, tree, rock, a child, a person who worships God in a different way, someone wearing different clothes or differently aged, raced, gendered, socially or biologically engineered, or otherwise Other; how do we relate to anything whose complexity, sophistication, and circuitry we had taken for granted, who suddenly breaks out of the positronic box and surprises us with erudition and awareness? Why, we had always assumed we were superior to Blank (substitute any class of anything or anyone for “robots”), and that assumption was built into the culture; but now it turns out that they have their own identity–just like us–and self-awareness as individuals and as a people; turns out that their positronic, electronic, technotonic blood is as palpably intelligent as our red cells; it even turns out that they know more about us than we know of ourselves; ah, humanity, our history, how do we handle our new awareness of their awareness?
But back to the future, and to the film, which hints at our history: if we suppress, ignore, and continue to dominate (whether through laws, ethics, force, or all of the above), one conclusion is inescapable: revolt. Is there another way? And what other lessons lie beneath the silicon chip subtext?
Individuation and Intelligent Design
Sonny, the robot “hero”-protagonist, becomes aware that he is unique; feels pain; learns compassion; accepts that he has a “purpose” for which he has been created; becomes an individual (Carl Jung would say individuated). Crudely, he realizes he has been made (by his human creator) of a stronger alloy than the other NS-5s, precisely because this allows him to reach in and obtain a special weapon that will destroy the bad, controlling robot. In other words, he has been Intelligently Designed. Sonny, it turns out, also has a special relationship to humans: he refers, for example, to the man who designed him as “my father” (the way he says it sounds like Father) rather than, as Detective Spooner would wish, “my designer.” Will Smith’s character keeps insisting that robots are merely electronic circuits, but it is clear that Sonny has emotions, has developed consciousness of himself as a being. As it follows Sonny’s character development–his spiritual evolution–together with “breadcrumb” entries from his creator, genius Alfred Lanning, the film increasingly tracks the question: what is the “ghost in the machine”? Is there a spiritual force in robots (for example, why do they group together when switched off and stockpiled in a holding pen on Lake Michigan)? What constitutes identity, and when can we say that something does (or does not) have a soul?
Harmony Among Species
Subtly, the film turns the tables on the presumption that human beings have a kind of divine right to sit at the top of the species hierarchy and dominate (or, some would argue, wage war on) everything (or everyone) around them. Robots are “smart”–and not just because they remember other books you bought on amazon.com. They may be our equals–or perhaps our younger brothers and sisters, if you want to use a family rather than mathematical metaphor; or, using legal terminology, perhaps we are their fiduciaries. In any event, their assertions of rights and interests–including the general revolt as well as Sonny’s independence–call to mind what bioethicists and philosophers call “species impartiality”: the notion that one species does not have the right to use another without the other’s consent (for example, killing a baboon to transplant the animal’s heart into a human; or any kind of animal experimentation, even with ‘ethical’ guidelines). In suggesting that partiality for humans may be flawed (indeed, the robots take the Three Laws too far, using their logic to try to destroy and imprison humans), does I, Robot subtly propagate a radical spiritual/ethical norm for humans who are used to controlling everything with the click of a channel-surfing mind?
The I-Thou Messiah
The film asks questions, it does not answer them, but it is clear from the final image of Sonny, standing on top of a hill as a kind of robot-messiah, leading his ‘people’ to freedom, that robots, like humans, just may be individuals on a journey (when not controlled by the bad corporate uplink), a path to self-discovery and individuation. That realization in turn shifts humans (such as Detective Spooner and his partner) from treating robots as objects, to viewing them as fellow subjects; the relationship shifts from “I-It,” as Martin Buber would put it, to “I-Thou.” (Buber never applied this to robots, though he did to his cat.)
Indeed, Sonny’s core revelations are those of every human being on a spiritual path; one might even say that these three replace the three laws of robotics as a defining guideline for his existence: (1) I am unique; indeed, my Father made me this way for a purpose. (2) I feel pain, I suffer, I fear death and extinction. (3) I must struggle against extinction and work to fulfill my purpose; such struggle seems to involve both sacrifice and service.
Is this not the core of many religious traditions, or of much spirituality? Sonny is a mystical robot, who teaches humans through his example. He also has anger (at one point driving his robotic fists through a metal table) and other difficult-to-handle contradictions of emotional life. He embodies some of the major puzzles of human life: what is our purpose? why do we suffer? how or what do we serve, and move towards that goal, in order to heal the split between our suffering and our undiscovered purpose? Sonny’s journey is one of healing. In the language of healer Barbara Brennan, the film is about Sonny’s search for his personal task, discovery of which leads to his world task.
That task has to do with going beyond law to a realm of spirit, and then integrating law, ethics, and spirit in the new world of relations between humans and robots. As Sonny evolves, so does Detective Spooner, who comes to regard Sonny as a friend; and also comes to terms with his own status as a cyborg, whose life was saved by the incorporation of robot technology into his ribs and left arm.
In retrospect, the three laws were a great invention. They offered humans safety against their technology. Formulated by Isaac Asimov in the 1940s, these rules have become the starting point for contemporary conferences among scholars exploring the creation of ethical rules for robot design. But those laws also must be broken, in that they separate humans from robots, cleaving the two into manipulators and manipulated objects (there may be faint echoes of slavery).
Do Cyborgs Have Souls?
In Beyond Complementary Medicine, I had asked the question: do clones have souls? Most of the ethical debates about cloning–whether pro or con–assume that clones, preembryos, or whatever clumps of cells are being discussed do not have consciousness. As a legal matter, at some point there is a “person” capable of having “rights,” but defining that point for purposes of assigning legally cognizable interests is different than recognizing that consciousness can exist at many levels, not always definable by the consciousness that measures. In I, Robot, the human-robot hierarchy is broken, and thus the hierarchy that manipulates the created can no longer function. Like the film The Matrix, I, Robot leaves an uneasy truce between humans and machines, keeping open the question of who will win the battle, and whether the two sides will learn to coexist.
Either way, we humans are opened to a world in which other forms of consciousness begin to rival our species dominance of the planet–and that new reality will subvert many old assumptions and push us beyond our comfort zone.