Turbo-charging the Mind

28 December 2012

With all the rapid advances in computer technology, are we humans moving toward a day when we will be able to “turbo-charge” the mind? Will we soon develop machine-enhanced super-human intelligence? I’m not sure if the prospect of us becoming prodigiously smart cyborgs is exciting or terrifying, but I’m also not sure it’s realistic.

Take our so-called “smart technology.” Not so long ago, you needed separate devices for taking photos, listening to music, surfing the web, checking email, and talking on the phone. These days, you can do all that and more on one attractively designed, lightweight device. We may call it a “smartphone,” but that doesn’t mean it’s anywhere close to being genuinely smart, not in the way humans are. It does lots of things in a small package, but that’s not enough to make it smart in any significant sense. What manufacturers call “smart technology” is not really smart, and we shouldn’t let the labeling mislead us.

Granted, we have developed some technology that has surpassed human intelligence in certain feats. Take Deep Blue, the chess machine that beat grandmaster Garry Kasparov back in the nineties. Although Kasparov is a remarkably smart human being, he was no match for the superior “brainpower” of Deep Blue. But what exactly does this power amount to? Certainly, Deep Blue is better and faster than even the smartest humans at calculating chess moves. That’s a very limited capacity—not something that, by itself, deserves to be called intelligence. If you asked Deep Blue to do something that any five-year-old could do, like get milk from the fridge, it would be stumped! How’s that intelligence?

You might think that although Deep Blue doesn’t have the kind of intelligence that surpasses human intelligence in all or even most domains, the fact that we can create machines superior to us, even in this limited capacity, suggests that we are moving in the direction of genuinely smart technology, and that someday soon we will develop machines that truly deserve to be called “intelligent.” Indeed, Deep Blue is old news by now. Today we have platforms like Siri that apparently “understand” natural language, which certainly looks like technological progress. But can we really say we’re moving any closer to something like human intelligence in machines?

Let’s return to the example of the five-year-old getting milk from the fridge, a pretty basic task by human standards. As simple as it is, it does involve a lot of very different capacities. First, the child has to be able to understand the request, which requires knowing English (or some other natural language). Then she has to be able to navigate her environment—she has to be able to get to the fridge without bumping into other objects, figure out how to open the door, and so on. On top of that, she has to be able to recognize milk among the many objects in the fridge. As far as I know, there’s no machine that can successfully accomplish basic tasks like that, never mind more complicated tasks.

What’s the point here? The point is that intelligence is not simply a matter of computing power. Deep Blue may be faster at retrieving information and calculating possibilities than humans, and if that’s all we mean by “intelligence” then sure, we’ve already built intelligent machines. But it’s nothing like human intelligence, so it’s not especially interesting in this context. Certainly, human intelligence is partly explained by computational speed and capacity, but that can’t be the full story or robots would already be fetching milk from the fridge, making tea, and asking if we’d like cookies with it.

The upshot is that if we are to build genuinely intelligent machines, we first need to figure out exactly what intelligence is, and what kinds of systems are capable of being intelligent. Only then can we realistically talk about “turbo-charging” the mind by incorporating intelligent technology into our bodies.

Of course, we already are incorporating technology into our bodies—we have pacemakers, artificial hips, cochlear implants, and so on. You can have a sub-dermal chip with your medical info implanted in your hand, if that’s your kind of thing. Google Glass, which lets us see the non-virtual world with all kinds of information virtually superimposed, has already been invented, and maybe they’ll soon create a contact-lens version. Perhaps it’s just a matter of time till we get nanotech phone or remote-control implants. And that could happen whether or not we ever build intelligent machines. So the more technology advances, whether it’s truly intelligent or not, the more we will be able to merge with machines. A cyborg future of sorts could definitely be in the cards for humans.

The question then is, will merging more and more with technology make us super-intelligent, or will it make us super-dumb? We already offload a lot of our cognitive work onto objects in our environment, which lets us be efficient but, it could be argued, also makes us stupid and lazy. Ever since I got a cell phone, for example, I haven’t been able to remember anyone’s number. I’ve also become a really bad speller, because I no longer need to remember exactly how to spell anything. If I get close enough, the spell-checker does the rest of the cognitive work for me. I even use a GPS device to get from my office to the bathroom down the corridor. Okay, it’s not quite that bad. Yet.

But there is a real danger of us becoming slaves to technology, less and less capable of doing things for ourselves. So, we need to ask whether all these technological advances ultimately increase or decrease our intelligence. And if it turns out they really can increase our intelligence, is that a goal we even ought to have? I mean, what’s the ultimate point here? Are we going to become wiser or happier as a result of becoming smarter? Will merging with the machine make us kinder to one another? Or is there a danger that we will lose our essential humanity the more we incorporate technology into our lives and into our bodies?

Comments (10)

Guest
Monday, December 31, 2012 -- 4:00 PM

The future will make clear which science fiction writer is the best prognosticator. Robert Heinlein's novel "Time Enough for Love" is based on a pseudo-immortality achieved through a combination of cloning, cryogenics and cybernetics. Aldous Huxley's "Brave New World" is based on a society in which chemical enslavement makes even drudgery pleasurable. In reality, the human organism is somewhat fragile and either electrical or chemical over-stimulation will simply cause a break-down or burn-out sooner or later.
Referring to breakdowns reminds me of an unrelated story from a few decades past when Affirmative Action was the incoming thing. It seems that the Pennsylvania Department of Commerce sent out a memo to each department requesting a list of all employees, broken down by sex. The first response came back almost immediately: "We have no one broken down by sex but we have two alcoholics."

Guest
Wednesday, January 2, 2013 -- 4:00 PM

Coffee turbo charges my mind!
Sometimes a little too much.

Harold G. Neuman

Friday, January 4, 2013 -- 4:00 PM

Thanks to Arvo for the insight into affirmative action history. I worked, for thirty years, in a government office, allegedly committed to equal opportunity for all. Logically, or not, affirmative action (AA) fell in and out of favor, depending upon which party held power, until about three years before I retired. Blacks who had "made it", eschewed AA as an artifice constructed by Uncle Tom, to make the white establishment appear sympathetic and supportive of the advancement of minorities (in those early days: negroes, and later, black people...) Thing is---some, if not many of those black naysayers had already benefitted from the AA policies they were now eschewing. I'll name two, just for examples: Justice Clarence Thomas---perhaps the most silent Supreme Court Justice of all time; and Thomas Sowell, a think-tank fellow who seems to live an insular life, based upon some perceived intellectual superiority. Or, whatever.
But--well, I have digressed from the post topic. Sorry for that indiscretion. It really does not matter too much if we do or do not turbo charge our minds, whatever-the-hell that means. Karl Popper said we would make mistakes---indeed, must make them---or fail to grow. I am only now examining his philosophy of critical rationalism. Ironic that he died the same year as my own father. A footnote to the previous rant: Silence can show a range of realities. The silent man/woman may be wise or witless. He/she may be positive, negative or neutral. Of course, there is another scenario: The silent person MAY be just smart enough to realize the efficacy of being quiet, lest he/she reveal his/her stupidity.
Your Friend;

Guest
Monday, January 7, 2013 -- 4:00 PM

Well, we don't have to turbo-charge our minds, do we? We have all these marvelous toys. Everything from blue-tooth devices, hanging from our ears; to smartphones, connecting us with the known universe; global positioning systems---so we don't get lost when going to the mall; and, now, cars that warn us when we are about to back into other cars, mangle bicycles, carelessly left in the driveway, or run over children who were too busy talking on their cellphones or texting to notice that the rest of the world is happening around them. And that they are not the center of it.
Some other commenters have danced around these things. I understand what it means to challenge and criticize popular culture. I do it everyday, wanting nothing more than for people to think. A cup of coffee? Sure.

mirugai
Monday, January 7, 2013 -- 4:00 PM


Last year's new years resolution was: don't be angry. I did a good job at that.
This year I resolved:1. Don't idealize, and 2. Keep my opinions to myself.

margaret
Sunday, January 13, 2013 -- 4:00 PM

Seems to me the limits of our mind are self-imposed; it is very anti-selective to be too smart. Seems like the brain is capable of doing a LOT more without adding this primitive technology; all that stops us is some kind of normalizing selection of behavior, probably derived from culture and economics.
Wouldn't it be nice if we could do all the thinking/performing we are already capable of?

Harold G. Neuman

Tuesday, January 15, 2013 -- 4:00 PM

I shall hazard two comments on Margaret's notions of 01/14/2013: the first compound sentence was, to me, virtually incomprehensible, compounded by the phrase "primitive technology". I have to say: huh? Her last line I can certainly identify with, given the 10% factor we have been imbued with for the last thirty or forty years.
It is all academic---or something or other... Advice to all of us: think, before you speak/write
Yours, (but not exclusively),

mirugai
Friday, January 18, 2013 -- 4:00 PM

Margaret, may I? I think you are saying that we impose a bunch of restrictions on our mental processes (when we have vastly more capabilities) that come from our social (i.e., our confirming group's) imperatives. To be "smart" outside the social box is not just frowned on, it is actively prohibited in many ways. Since we are actually so smart, by comparison the AI "brain" is dumb...it is "primitive" in that it can only do what we feed it to do. Now that said, the biggest unanswerable, yet necessarily acknowledged, question in philosophy is "What or who is "we""? Can the mind act on the mind? What is that? Is there any consciousness other than "ours" that "we" can say anything about?

Laura Maguire

Friday, January 18, 2013 -- 4:00 PM

I would add to what Margaret and Mirugai said that the very technology that is supposed to make us "smarter" can often be a way of diminishing cognitive faculties we already have. A person who always relies on a calculator to figure out simple arithmetic is not exercising certain natural capacities and so will start to lose them. Technology can make us lazy.
That's not to say that anytime we offload cognitive work onto objects in the environment, we're creating unhealthy dependencies. Take the way we write things down, for example: sometimes to aid memory, sometimes to communicate with others, and sometimes as creative expression. We need technology, even if it's as simple as pencil and paper, to be able to write things down at all. And I would say that the technologies that made us writers in the most basic sense have helped us develop specific cognitive capacities.
Now, if "smart" technology helped us to develop stronger and more flexible minds, it would be a great thing. I worry that it's going in the opposite direction, making us lazy and dependent.

Marc Bellario

Saturday, June 13, 2015 -- 5:00 PM

So we first had computer science, and then (which is an exceedingly brilliant oxymoron) it was followed by 10^^ computer science = artificial intelligence (and it is harder to spell). How is it POSSIBLE for intelligence to be "artificial"? No, if it is intelligent it is obviously "real". Then you get "natural", but, hey, I'll readily admit that a machine is smarter than me, so what is the target?
John von Neumann, who was heavily involved with this stuff, wrote a book about human intelligence versus machine intelligence and explained how much more vast human intelligence is than machine intelligence (but if you consider the internet as a single device, then you have a machine that could possibly be in that range).
The big question is the "appropriate use" of this potentially "destructive" machinery. The question is, do the machines become more like the people, or are the people becoming more like the machines? People should not ask the machine for guidance on that question. That question is only appropriate for people to ask themselves. Just so you know, the following is from Wikipedia:
In 1995, a very small silicon chip measuring 7.44 mm by 5.29 mm was built with the same functionality as ENIAC. Although this 20 MHz chip was many times faster than ENIAC, it had but a fraction of the speed of modern microprocessors of the late 1990s.