Alan Turing and the Limits of Computation

Sunday, February 9, 2025

What Is It

Alan Turing was a 20th-century English mathematician and cryptologist who is widely considered the father of theoretical computer science. In 1936, he published a definition of a computer that is both universal, general enough to apply to any specific computing architecture, and mathematically rigorous, letting us prove claims about what computers can and cannot do. What does Turing's writing teach us about the bounds of reason? Which thoughts are too complicated for a computer to express? Is the human brain just another kind of computer, or can it do things that machines can't? Josh and Ray calculate the answers with Juliet Floyd from Boston University, editor of Philosophical Explorations of the Legacy of Alan Turing.
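Turing's definition can be sketched in miniature: a finite table of rules, each mapping a (state, symbol) pair to a symbol to write, a direction to move, and a next state, acting on an unbounded tape. The following Python toy is illustrative only — the rule format and all names are mine, not Turing's notation.

```python
# A minimal sketch of a Turing machine: a finite rule table acting on
# an unbounded tape. All names here are illustrative, not from any
# particular source.

def run_turing_machine(rules, tape, state="start", accept="halt", max_steps=10_000):
    """Simulate until the machine enters `accept` or exceeds max_steps."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    # Return the tape contents as a string, trimmed of blanks.
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Example machine: flip every bit of a binary string, halting at the
# first blank cell.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

The point of the formalism is its universality: any specific architecture's behavior can be encoded as such a rule table, which is what lets one prove theorems about all computers at once.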


Transcript

Ray Briggs  
Can computers be like human minds?

Josh Landy  
Or is the human mind just a kind of computer?

Ray Briggs  
Is there anything computers can't do?

Comments (5)


beckyricee

Thursday, January 9, 2025 -- 12:16 AM

Alan Turing's concept of the Turing machine showed the fundamental limits of computation, proving some problems are unsolvable by any machine. His work remains a cornerstone of modern computer science.
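The unsolvable problem alluded to here is Turing's halting problem. The heart of the proof is a diagonal argument, which can be sketched using Python functions as stand-ins for machines; `halts` below is hypothetical — the argument shows no such total, correct function can exist.

```python
# A sketch of the diagonal argument behind the halting problem.
# `halts` is a claimed oracle: halts(f) is True iff f() eventually halts.

def paradox(halts):
    """Given a claimed halting oracle, build the self-defeating program."""
    def trouble():
        # Do the opposite of whatever the oracle predicts about trouble itself.
        if halts(trouble):
            while True:   # loop forever, contradicting "it halts"
                pass
        # else: fall through and halt, contradicting "it loops"
    return trouble

# Whatever a concrete oracle answers about `trouble`, it is wrong:
def always_says_halts(f):
    return True

t = paradox(always_says_halts)
# t() would now loop forever, so the oracle's "it halts" was wrong.
```

Since `trouble` inverts the oracle's verdict about itself, no oracle can be right about it — which is why no general algorithm can decide halting.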

Daniel

Thursday, January 16, 2025 -- 4:17 PM

Wait a minute. Are you saying there's a cornerstone of computer science which shows how, or at least where, it's not a science? Is showing that a calculator can't do psychoanalysis a cornerstone of mathematics too? As to the issue of what a machine can't do, the only certain candidate for such a sweeping claim seems to be that it cannot not be a machine.

Take the above post, to which I now respond. If it were purely machine-generated, the second-person singular pronoun used to address its author, while intended to refer to another person, that is, another "I" minus any unshared or individual characteristics, would in reality correspond only to a thought in the mind of the speaker (or in this case writer) who employs it. Because belief in the correctness of the correspondence with the intentional content would then be false, if causing this belief were itself intentional, it would amount to a lie. But in order to lie, the liar must know the truth, which cannot be the case here, since there is no subject which can hold something to be true.

But then let's say that a machine-generated appearance, absent the object which appears, is so convincing that it becomes impossible to doubt the existence of this object in any but a hypothetical manner. Would it matter that a speaker's intentional content could only refer to the thought of that content? And if knowing a thing to be false while allowing it to be believed true is understood as lying, while believing something false to be true is a kind of foolery, could the great promise of so-called "Artificial Intelligence", that is, the appearance of intelligence in unintelligent objects, be little more than a comprehensive training program in how society can be conveniently divided between liars and fools?

praz

Friday, January 10, 2025 -- 2:47 PM

How do I watch this podcast?

Devon

Monday, January 13, 2025 -- 7:54 AM

It will air on KALW 91.7 FM in San Francisco on Sunday, February 9, at 11 am Pacific, and then be available to listen to here.

Daniel

Thursday, January 23, 2025 -- 2:22 PM

Because the Turing Test for machine intelligence is based (by my reading) on the indemonstrable assumption of a categorical distinction between machine and human intelligence, it constitutes a test for whether thinking can be found anywhere in the pool of unproblematically existing machine intelligence. Apparently, thinking occurs as a single species of the wider genus of intelligence, which latter can be predicated of objects described as mechanized as well as of non-mechanized human beings.

If, in equal apparent measures of intelligence occurring in both sections (the human and the machine), no difference is detectable (or if one exceeds the other while each lacks perceptible defects), then thinking occurs in the machine-section. Contained in the operative assumption of a radical human/machine distinction is that no limit can be placed on where machine thinking can be observed, regardless of how closely it approximates the appearance of human thinking or exceeds it in apparent intelligence. The question of consciousness is thus extraneous to the question of thought-occurrences, since only the appearance of a thinking object is asserted, not any claim of its existence. Wherever perceptible intelligence approximates, to an undifferentiated degree, characteristically human intelligence (or exceeds it), the mechanized object is said to be thinking, without any commitment to a claim about what is doing the thinking.

So do the categorical distinction between humans and machines, the general concept of intelligence, and the particular reference to thinking, known before computational models to occur only in the human variety, indicate a direct opposition to later computational models of cognitive processes, which seek to confirm by replication a cognitive model already worked out? Can it plausibly be said that, because Turing insisted on a non-human status for machines (e.g. discrete contra continuous systems), his model could not tolerate the assertion of artificial intelligence, but only of real or non-artificial thinking?
