Can Technologies Be Monstrous?

08 April 2018

This year marks the 200th anniversary of Mary Shelley’s brilliant novel, Frankenstein. So it’s a good time to ask: can technologies be monstrous? Can human beings create devices and platforms that run beyond our intentions and out of our control? What dangerous technologies may be lurking on the horizon? And what, if anything, can we do to prevent them doing damage?

These are difficult questions, and I feel we need to do some really careful thinking about them. I can’t agree with those who say that there’s nothing to worry about, and that any concerns one might raise are just “techno-panic.” Such easygoing folks like to point out that whenever a new invention comes on the scene—the printing press, the mechanized loom, newspapers, electricity, you name it—people are always freaked out, and then it turns out everything was perfectly fine. But there are plenty of examples on the other side. We only have to think about how vigorously we were assured that asbestos, DDT, thalidomide, tobacco, American football, and nuclear power stations were perfectly safe. (Or that our Facebook data are good and private!)

At the same time, we also shouldn’t go to the other extreme and decide that all new technology is dangerous. There’s plenty of technology that is straightforwardly beneficial, like pacemakers, reading glasses, and hearing aids.

Mary Shelley understood this complexity at a deep level. Unfortunately, most people don’t know this about her, since the standard way of reading Frankenstein is to see it as a simple morality tale: ostensibly, it informs us that nature is always good and technology is always bad. But this way of reading the novel is completely mistaken. First of all, Frankenstein is much more than just a morality tale: it’s also an exploration of deeply buried antisocial impulses, a philosophical investigation into personal identity, and a brilliant experiment with literary form. (I highly recommend rereading it this year!) And second, it’s actually deeply ambivalent on the question of technology.

There’s a wonderful moment in the novel when the Creature learns about language. He is blown away by it, and calls it a “godlike science.” He is equally impressed by writing—another “science”—and ends up reading Plutarch, Milton, and Goethe, all of whom he loves. As Shelley recognizes, language is a technology; writing is a technology; printing is a technology. All of these technologies produce the books the Creature loves. And all of these technologies also produce Frankenstein, the book we have in our hands. Surely there’s nothing particularly wrong with those kinds of technology.

I’m with Mary Shelley: I don’t think all forms of technology are dangerous, and I don’t think all forms of technology are harmless. It’s going to depend on the specific nature of the technology, as well as on how it is used. (There’s a decent argument that Victor Frankenstein was simply negligent: if he had produced a less terrifying-looking creature, or had stuck around to take care of it, like a good father, the disasters need not have ensued.) So the task before us is to determine which technologies are dangerous and which are not.

One possible rule of thumb is this: technology is dangerous when it produces effects that can’t easily be predicted or controlled. If that’s correct, then perhaps we should be particularly careful when it comes to complex systems and distributed networks. And when it comes to technologies like those, we had better be vigilant. We had better train engineers to predict the effects of their inventions; we had better consider new regulations and new incentive structures; and we had better create a different culture, one that cares more about society-wide effects than about clicks, shares, and the bottom line.

Comments (3)


burke_johns

Sunday, April 8, 2018 -- 11:51 AM

A big yes to the need for widely educated engineers. But consider also wider education for business students. The big and potentially dangerous decisions about what we deploy are made by management. Consider the bad examples we're seeing already: e.g., corner-cutting in driverless cars.

Harold G. Neuman

Monday, April 9, 2018 -- 12:44 PM

I think if we go back a few hundred years, maybe more, factor out our knowledge of modern technological marvels, and focus on changes in technology relative to their own time, we might decide that dangerous technologies have always been with us, the potential for mass destruction notwithstanding. burke_johns seems to take such a relativistic approach, in a short but sweet comment. Steven Pinker was getting at this in THE BETTER ANGELS OF OUR NATURE, wherein he argues that violence has decreased over the ages. Yes, the matter of degree makes us more globally dangerous than we have ever been. Our powers of technology may be exceeding our control over them: OOPs...

Vico

Monday, April 9, 2018 -- 2:08 PM

In the latest "dilemma discussion" there seems to be a notable absence of consideration from the realms of the arts and religion. The quotidian disciplines of politics and commerce are legitimized by input from the greater disciplines of science, the arts, philosophy, and religion. In other words, morality, ethics, and facts must all be at work for decisions to be desirable for the future. The seemingly ineffable qualities contained within the arts and spiritual practices get short shrift in many discussions, I think, because there are no hard facts or propositions that can be referred to. And yet these "soft" studies are arguably the most important ones for deciding our future. Just a thought from a reader.