When is genetic manipulation morally permissible? For health? Beauty? Wit? What sorts of animals is it acceptable to clone? Should we ban stem cell research?
This year marks the 200th anniversary of Mary Shelley’s brilliant novel, Frankenstein. So it’s a good time to ask: can technologies be monstrous? Can human beings create devices and platforms that run beyond our intentions and out of our control? What dangerous technologies may be lurking on the horizon? And what, if anything, can we do to prevent them from doing damage?
These are difficult questions, and I think we need to do some really careful thinking about them. I can’t agree with those who say that there’s nothing to worry about, and that any concerns one might raise are just “techno-panic.” Such easygoing folks like to point out that whenever a new invention comes on the scene—the printing press, the mechanized loom, newspapers, electricity, you name it—people freak out, and then everything turns out to be perfectly fine. But there are plenty of examples on the other side. We only have to think about how vigorously we were assured that asbestos, DDT, thalidomide, tobacco, American football, and nuclear power stations were perfectly safe. (Or that our Facebook data are good and private!)
At the same time, we also shouldn’t go to the other extreme and decide that all new technology is dangerous. There’s plenty of technology that is straightforwardly beneficial, like pacemakers, reading glasses, and hearing aids.
Mary Shelley understood this complexity at a deep level. Unfortunately, most people don’t know this about her, since the standard way of reading Frankenstein is to see it as a simple morality tale: on this reading, it tells us that nature is always good and technology is always bad. But this way of reading the novel is completely mistaken. First of all, Frankenstein is much more than just a morality tale: it’s also an exploration of deeply buried antisocial impulses, a philosophical investigation into personal identity, and a brilliant experiment with literary form. (I highly recommend rereading it this year!) And second, it’s actually deeply ambivalent on the question of technology.
There’s a wonderful moment in the novel when the Creature learns about language. He is blown away by it, and calls it a “godlike science.” He is equally impressed by writing—another “science”—and ends up reading Plutarch, Milton, and Goethe, all of whom he loves. As Shelley recognizes, language is a technology; writing is a technology; printing is a technology. All of these technologies produce the books the Creature loves. And all of these technologies also produce Frankenstein, the book we have in our hands. Surely there’s nothing particularly wrong with those kinds of technology.
I’m with Mary Shelley: I don’t think all forms of technology are dangerous, and I don’t think all forms of technology are harmless. It’s going to depend on the specific nature of the technology, as well as on how it is used. (There’s a decent argument to be made that Victor Frankenstein was simply negligent: if he had produced a less terrifying-looking creature, or had stuck around to take care of it, like a good father, the disasters need not have ensued.) So the task before us is to try to determine which technologies are dangerous and which are not.
One possible rule of thumb is this: technology is dangerous when it produces effects that can’t easily be predicted or controlled. If that’s correct, then perhaps we should be particularly careful when it comes to complex systems and distributed networks. And when it comes to technologies like those, we had better be vigilant. We had better train engineers to predict the effects of their inventions; we had better consider new regulations and new incentive structures; and we had better create a different culture, one that cares more about society-wide effects than about clicks, shares, and the bottom line.