The rapid advance of computer technology in recent decades has produced a vast array of intelligent machines that far outstrip the human mind in speed and capacity.
AI takeover—the hypothetical event wherein computers or robots take over the world and obliterate humankind—is a common trope in science fiction books and apocalyptic movies. But is superintelligent AI really something we should fear?
In this TED Talk, scientist and philosopher Grady Booch thinks not. While movies like The Matrix, Metropolis, and The Terminator exacerbate humans' fears of being supplanted by technology—that is, the fear that we might develop technology that is much too advanced for our own good—we forget, in Booch's view, an important point. Engineers are not looking to build sentient machines; they are looking to build "simple brains" that can simply carry out tasks. And even if engineers did manage to develop the technology to make systems that have a theory of mind and ethical and moral foundations, he argues, we would teach them our own moral systems, not ones that would try to subvert us. Plus, we can be assured that we can always unplug what we have built.
But is Booch too optimistic about the innocence of superintelligent AI? Is there some technology whose development worries you? Enter your comments below, and check out his TED Talk here:
Will computers someday be able to have humanlike consciousness and intelligence?
At least some versions of artificial intelligence are attempts not merely to model human intelligence, but to make computers and robots that exhibit it: that have thoughts, use language, and even
Fear is an emotion, but it is one with a long history in both political theory and politics in the real world.