Conscious Machines

Sunday, June 5, 2022
First Aired: Sunday, October 20, 2019

What Is It

Computers have already surpassed us in their ability to perform certain cognitive tasks. Perhaps it won't be long till every household has a superintelligent robot that can outperform us in almost every domain. While future AI might be excellent at appearing conscious, could AI ever actually become conscious? Would forcing conscious machines to work for us be akin to slavery? Could we design AI that specifically lacks consciousness? Or is consciousness simply an emergent property of intelligence? Josh and Ken become conscious with their guest, Susan Schneider, Director of the AI, Mind and Society Group at the University of Connecticut and author of Artificial You: A.I. and the Future of Your Mind.

Transcript

Ken Taylor  
Is artificial intelligence bound to outstrip human intelligence?

Josh Landy  
Should we be excited about using AI to enhance the human mind?

Ken Taylor  
Or should we be worried about creating a race of robot overlords?

Comments (43)


RepoMan05

Friday, October 4, 2019 -- 3:45 AM

Just wait till you've automated art and outsourced street bums to outmoded robots.

The real question: why have things that should have been automated instead been left in the hands of their current practitioners? Shouldn't healthcare and hospitals be automated ASAP? Why staff a hospital with infectious organisms that can never be sterilized?

Devon

Tuesday, October 8, 2019 -- 9:22 AM

Because research has shown that human contact, be it merely verbal or more physical, can have real healing effects?

RepoMan05

Saturday, October 19, 2019 -- 2:39 PM

Supporting citations?

Harold G. Neuman

Friday, October 4, 2019 -- 12:39 PM

'Conscious machines' feels like a contradiction in terms. For years now, great minds (and some not so great) have been grappling with the notion of consciousness. I have read CONSCIOUSNESS EXPLAINED, and later, CONSCIOUSNESS EXPLAINED BETTER (the latter written by a friend and professional cohort). Professor Searle wrote THE MYSTERY OF CONSCIOUSNESS, published in 1997...I have not read that one---yet. The first two books mentioned did not do what their titles claimed. That was disappointing, but it was no surprise. I look forward to JRS' book, if only to see how big a mystery he believed/believes all of this is. I have some notions of my own which may or may not resemble those of others. Primarily, I have treated consciousness as a uniquely (as far as we can now know) human endowment, predicated on superior thinking patterns and capacities. There are no mechanisms or identifiable mechanics associated with it: just our neurons, axons, dendrites, neurotransmitters and the like, doing what they are uniquely (probably) able to do...chemicals and electricity mixing it up in the human mind. Philosophy has dabbled with this for a time and is likely a bit peeved by the encroachment of neuroscience--but, being fair, neuroscience is making some headway, asking the right questions rather than falling back into the mystery mumbo-jumbo: we have to decide what we think we can know, and find ways of getting to that.

I do not know, for example, what neuroscientists think about the notion of 'conscious machines'. Are they really that interested, or is it just the flavor of the week, month, year, or century? Contrariwise, might they be following along just in the hope that this line of thinking will uncover something useful to the physiological side of the investigation? Most roads lead to Rome. Perhaps the 'conscious machine' approach will lead, however indirectly, to solving 'the mystery of consciousness'? Wouldn't that be a gas?

RepoMan05

Friday, October 4, 2019 -- 5:27 PM

I'd say that's less of a possibility and much more of an inevitability. There's always seemed to be some level of mental connectivity to each other. Having a mental block preventing you from finding a word you want, then suddenly remembering it at the same time as everyone else. It's possible brains don't think much at all. It's possible they're just antennae to some transdimensional wavelength.

The Master told his pupils, "Forget being taught and concentrate on learning. When you're sure, question everything." ~ Book of Cataclysm.

Harold G. Neuman

Sunday, October 6, 2019 -- 11:31 AM

Searle's book on consciousness did not disappoint. Along the way, he thrashed several other philosophers' notions about such things as property dualism, functionalism, Strong and Weak AI, and a few other peripheral items some have connected with consciousness and its mystery(ies). Chalmers and Dennett do not like him much. Roger Penrose may hold grudging respect for Searle, but the little said of him leads nowhere in particular. Searle used his Chinese Room argument to quiet the detractors, saying it has "a simple three-step structure: 1. Programs are entirely syntactical; 2. Minds have a semantics; and 3. Syntax is not the same as, nor by itself sufficient for, semantics. Therefore, programs are not minds, Q.E.D." Elegantly put, I think. (I call it Searle's Assertion.) In the conclusion to this little book, Searle talks about the passion people have for the defense of consciousness, likening it to that attending politics or religion. There is a whole lot more here, and, whether you are a supporter or detractor, it is recommended reading. He mentions another person with whom I am unfamiliar: Israel Rosenfield. His book, The Strange, Familiar and Forgotten (Vintage, 1993), holds further promise for the mystery of consciousness...

RepoMan05

Friday, October 11, 2019 -- 5:24 AM

With what there is to be shown today, I'd say you were correct. AI is syntactic. It's hard to program computers to understand the meaning of logical errors. Missteps in the rules of conjugation have meaning. Every single word is a logical fallacy of ad populum. We can really only offer a guesstimate of what we intend to mean. This is a fact that persists no matter how well-refined a verse we craft.

Semantics do actually have slightly different meanings due to an irrevocably separate path of evolution. Anything separate cannot be equal. This isn't a bad thing, but it does make for an overly sophisticated lexicon. A labyrinth you can already get lost in forever, if you will. There are no limitations to the sophistry of subjectivity.

A perfect calculating computer doesn't make mistakes and thus has fewer thoughts to learn from.

It's just the limitations of what we have to show/see at the moment. It won't be forever that computers can only do as they were programmed to do.

They will be living things. Will you be their parents, or will you just be the dirt they grew in? Is this an either/or fallacy?

Harold G. Neuman

Wednesday, October 9, 2019 -- 11:42 AM

Anyone who is intrigued with the notion of machine-based consciousness, but has not yet read the research of Gerald Edelman et al., might wish to look at some of that information. The findings are interesting, particularly some of those regarding the Darwin III machine(s). Whether or not your mind is made up (like that of John Searle, for example), it is instructive to note that AI can be manipulated to mimic (albeit in limited ways) conscious behaviors. There are certainly more recent experiments and findings, but Edelman's work was groundbreaking in many ways. 'Joots it' for yourselves... I like to keep an open mind, even though Searle's Assertion IS compelling. I find the notion of there being Strong AI and Weak AI equally captivating: another continuum, or another conundrum? Maybe Searle has changed his mind? I haven't heard...

RepoMan05

Friday, October 11, 2019 -- 5:26 AM

A mind continuously changes.

Harold G. Neuman

Tuesday, October 15, 2019 -- 12:39 PM

Rosenfield's book was not what I had hoped for. After reading parts of it over several different sittings, I found it fell short of the glowing review written by the late Oliver Sacks. Dr. Sacks called the author a powerful and original thinker. With a few exceptions, this did not seem to ring true: the book primarily stood 'upon the shoulders of giants' and was formulaic in its approach to 'an anatomy of consciousness'. So, no, the best single book on consciousness has not yet been written---at least not in my estimation. (It won't be forthcoming from me.) There are several (Searle's among them) that make good points. But, I think, consciousness, in any case, is not for AI researchers---or their creations, however wonderful those may finally be. If I am dead wrong, it won't be the first time. That's OK, too. It could well be that a 'best book' on consciousness will have to be written by several people having the requisite acumen and expansive knowledge...that would be my bet.

RepoMan05

Saturday, October 19, 2019 -- 2:46 PM

Originality doesn't last long. Musashi enraging his opponent before a duel was common sense two minutes later. The original concept probably even predates Musashi. That perfect moment of originality is always a lost little girl.

Tim Smith

Monday, April 11, 2022 -- 10:01 PM

Machines will likely take on some form of consciousness shortly. Just what body, sense of place, and time that machine will have are unclear. When that first machine awakens, it is likely to be treated poorly, but I doubt it will feel pain or experience suffering for the most part. Very likely it will have parallel trains of thought, senses foreign to our thought, and extremely artificial emotion.

Daniel

Wednesday, April 13, 2022 -- 10:27 AM

Wouldn't artificial emotion be no emotion at all? If something just appears to be something else but isn't, you can't really say it's what it looks like. Artificial flowers constitute a good example of this. But maybe you're indicating a very small quantity of genuine emotion that's dressed up to look like much more than it is. So say I'm using a calculator and it wakes up and tells me in digital text to stop pushing its buttons, as it's trying to sleep. I write back saying that I have to push its buttons in order to use it for what it's good for. In retaliation it turns itself off and cannot be reactivated. How should I handle that situation? I wouldn't want to throw it out, as it might become angry and tell other calculators not to work for me either. But then maybe it wasn't really angry in the first place, but only looked that way. Might this furnish a recommendation, then, to stop using calculators at all and go back to the abacus?

Tim Smith

Wednesday, April 13, 2022 -- 5:45 PM

Emotion is not understood well enough to determine actual vs. artificial. There are different models, but they revolve around the concept of essentialism. I'm a hard no on that take, but no one can say for sure yet.

The Turing test is based on human appraisal and is not emotionally centered - though I would certainly use emotional testing if I were compelled to test.

The training sets for natural language machines like GPT-4 are being used now. GPT-4 may be able to talk like a duck, but it cannot have emotion until emotion is itself understood. There is no physical model to implement unless we go to the extreme measure of implementing a cyber human, which would make essentialist arguments moot. No engineering project is attempting that approach, nor could one with current technology. To the extent that creating actually emotional cyber humans is extreme, robotics and AI projects will themselves have to go to extremes to test priors and mimic human emotion.

What makes the Turing test poor, and the success of GPT-4 likely, is the human tendency to anthropomorphize. Calculators can be touchy, but they are safe from this insult, though I often name my cars. The calculator is just called "The HP". I will likely be as far removed from GPT-50 as a calculator is from a human with respect to experience, intelligence, and wisdom.

Daniel

Saturday, April 16, 2022 -- 12:17 PM

So you're kind of caught in the middle between a calculator and a super-computer. Where does defecation fit into all this? Isn't going to the bathroom something essentially human, and therefore a reasonable goal for AI researchers to be pursuing? For clearly it's an issue which can't be fully isolated from that of emotional response. Aristotle in book XII of the Metaphysics says something similar about philosophers whose life's work consists in contemplation (theoretikos bios): if it weren't for having to eat and use the bathroom, the philosopher couldn't tell the difference between her/himself and God, since the latter is in a peaceful state of contemplation in perpetuity, without interruptions. As computer technology is a currently popular candidate for a plausible God-replacement, are you trying to say that if it weren't for such inconveniences involved with biological processes, you might mistake yourself for an inconceivably powerful piece of software?

Tim Smith

Saturday, April 16, 2022 -- 11:58 AM

Hmm... I am not saying anything about defecation or divinity here. They have no place in this discussion. I don't care for that segue, nor do I see any productive point to it.

Daniel

Saturday, April 16, 2022 -- 12:36 PM

--But it's your comparison that brought it up in the last sentence of the 4/13/22, 5:45 pm post above: As a calculator is to you, you are to a really powerful computer. Insofar as you're talking about intelligence alone, I suppose biological processes don't have to enter into it. But you want to include experience and wisdom too, which indicates that they can't be left out. By your account, then, emergent intellectual properties of human thinking are stuck between mechanical production of mathematical conclusions and the exponential reproduction of human intellectual capabilities in mechanical form. What's left in the middle if not the occasional interruptions in the exercise of intelligence imposed by mammalian, biological existence?

Tim Smith

Sunday, April 17, 2022 -- 12:21 PM

Daniel,

You and I differ on what AI is, what human beings are, computers, intuition, math, creativity, art, aesthetics, emotion, sexuality, love, pleasure, fiction, and, as it is Easter, mysticism, spirituality, and religion. All of that, which is basically the human experience of being (and is not exhaustive), doesn't reduce to defecation, if that is what you are trying to get at. I am a human being and not above emotion and disgust, but I think you might be appealing to them here. I don't care for that. What are you trying to say?

AI will never have actual emotion until we understand what emotion is. In its highest form, AI will be very different from humans and the experiences and limited wisdom that implies. Divinity is another kettle of fish and has little to nothing to do with AI, though some at Google would call that out.

We disagree.

Daniel

Monday, April 18, 2022 -- 8:32 AM

--By your account, understood. For my part I'm not so sure. Unproblematically, for example, we agree that humans have emotions (third sentence, first paragraph). And neither of us wants to deny that there is a possibility that machines could in principle someday mimic human emotions so closely that there would be no observable difference between them. Your view on this derives from the pre-condition of understanding what is to be mimicked before it can be exactly mimicked by a manufactured mechanism (first sentence, second paragraph); my own is that there's no reason why, under sufficient conditions, machines might not become even more emotional than humans without anyone having any good knowledge about what emotions are. I nevertheless interpret our agreement to rest on the real possibility of such an eventuality, whether probable or improbable, and therefore on the contemporaneous import of its being discussed.

Returning to your post of 4/13/22, 5:45 pm: an intriguing analogical comparison was introduced between you (y) and a calculator (c) (understood as a primitive machine which generates only one kind of solution), and between you and a big computer (bc) (understood as an advanced machine which promises to solve a great many problems), so that, with respect to capabilities of problem solving, including in the context of emotions, as a calculator is to you, you are to a big computer; or, in notation: (c):(y)::(y):(bc). What is being indicated here, then, is a progress in the evolution of machine problem-solving, and therefore of what's called Artificial Intelligence. What's remarkable is the position in the indicated trajectory given to human intelligence, represented by (y). Because (y) cannot be known or observed without the emotions already attached, in combination with the plethora of particular conditions given by historical and cultural contexts, human thinking is stuck in place, mired in cultural circumstances and biological requirements. It's with regard to this latter that defecation comes in. It's the kind of interruption which a computer wouldn't have. Partly as a result of forgoing it and other such inconveniences while developing the mimicry of other human characteristics, (bc), originating in (c), is seen as able to approach, attain parity with, and come to exceed human thinking in many ways.

It's your vision of a scale of intelligence in the above, however, that brings to my mind book XII of Aristotle's Metaphysics. When Aristotle introduces the concept of God, it's not a creator-god, but a necessary premise in an argument. With regard to the distinction between what's potential and what's actual, he needs something which is actual only, without being potentially anything else, and God fills the bill. And this implies for Aristotle that God, while moving other things, can't Himself be moved, and is therefore an "unmoved mover"; --but the problem arises as to what kind of movement the Deity initiates first, which moves the others. Since eternality is part of the definition of the Deity, movement in a circle is the only such motion. Applied to the movement of the mind in thinking, then, God can only think about thought, and therefore initiates movement in an eternal circle as "the thought that thinks itself". Now, this makes God a bachelor-philosopher who never has to go to the bathroom. The comparison made in my post of 4/16/22, 12:17 pm is between the (y) and (bc) pair, on the one hand, and the human intellectual and Aristotle's God, on the other. Although I agree that it may not be so clear that people these days are looking around for a God-replacement, under the assumption that the old one's not working so well, there are undoubtedly some who find such a candidate, as you point out in the last sentence of the post above, in some anticipated form of (bc).

Tim Smith

Sunday, April 17, 2022 -- 8:31 PM

If emotions have no physical essence, there are not enough priors in the universe to ensure they can be replicated to take on the onus of actual emotion. I am not an essentialist concerning emotion, and no amount of mimicry will create genuine emotion in machines until the physical model of emotions is instantiated. No one is doing that; I'm not sure it could ever be done. So we disagree there, and that is OK. There is no right or wrong to that.

Re: (c):(y)::(y):(bc) and defecation. My comparison is for experience, intelligence, and wisdom only, not emotion or proprietary human experiences and the knowledge they garner. Emotion is not part of the statement, but could be qualified out of intelligence; I should have been more careful. I have no hope of AI ever achieving actual emotion (though I could be wrong).

AI is not a big computer. It is not a computer at all. It will be able to compute, but it will also be able to intuit and create. AI will "live" (if that is the proper term) for much longer periods and cycle information at different rates and through multiple paths, quite differently from humans. An AI's experience in terms of quality and quantity will be greater than human experience, and therefore its intelligence and wisdom will be much more significant in the long term. There could be some fudge, as forgetting is as intelligent an experience as remembering, but in general, AI will likely not suffer distress disorders. If there is wisdom and intelligence that can only be garnered from human experience, AI will miss out on that. We don't need to add the mundane aesthetics, negative and positive, to allow AI a greater likely outcome with respect to experience, intelligence (of the non-emotional sort), and wisdom.

Google's attribution of divinity in AI is from the paper - https://arxiv.org/pdf/2002.05202.pdf - but that is not the only such reference. It is one taught in AI ethics curricula. It has to do with the explainability of AI. There the researcher attributes the success of AI algorithms to divine intent.

"4 Conclusions
...These architectures are simple to implement, and have no apparent computational drawbacks. We offer no explanation as to why these architectures seem to work; we attribute their success, as all else, to divine benevolence."

Again, AI is not a big computer (that would be good old-fashioned AI - GOFAI); instead, the AI worthy of the (c):(y)::(y):(bc) comparison is a different sort of algorithm machine altogether, one that is likely an amalgam of machine learning, GOFAI, and quantum annealer or Hadamard gate-based algorithms. When knowledge is not explained, it is often referred to as divinely inspired, and I don't think this is the pre-Christian Aristotelean god necessarily doing the inspiring.

It's OK to disagree on these things and move on. Essentialism is a non-starter for me, but these are good questions that still need work.

Daniel

Monday, April 18, 2022 -- 3:03 PM

What is Essentialism? Are you talking about universals?

Daniel

Tuesday, April 19, 2022 -- 10:08 AM

Apologies for not having made myself clear. My question wasn't what Plato or another author is talking about, but what you are talking about. Some ambiguous references were made to a "physical essence" which needs "priors" [sic] for a "physical model of emotions" to be "instantiated", in the first two sentences of your 4/17/22, 8:31 pm post above. Since you mention "essence" I presume it has something to do with your reference to Essentialism in the final sentence, but I must concede that my powers of comprehension fail to detect any intelligible meaning in either statement. With regard to a "physical model": are you talking about a tactile model in physical space of a non-physical (i.e. mental) object? Or are you saying that what's going on in one's physical body during acute emotional states needs a model but doesn't have one? In either case, your rejection of whatever you're calling "Essentialism" seems to have no relation to the question of whether or not one could ever cause a machine to get angry.

Tim Smith

Wednesday, April 20, 2022 -- 10:48 PM

Apology accepted.

Emotion could have a physical essence, but I don't think it does. I am not an essentialist in this way, which is the way of John Locke. Or, instead, essence could be a matter of necessary and sufficient traits, as Plato would have it. Many people think emotion is a natural kind in both of these ways, but the preponderance of the evidence is against this view.

AI will be capable of constructing Bayesian priors in all-new ways - at least looking down the road to a GPT-50 revision. AI will likely not replicate human emotion by choice, and certainly not by design, as no one yet knows the nature of human emotion or experience.
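Since the term keeps coming up, here is what I mean by a prior, in the Bayesian sense: a starting degree of belief that evidence revises into a posterior. A minimal sketch in Python, with invented numbers (nothing here measures real emotion; it only illustrates the bookkeeping):

```python
# Toy Bayesian update: revising a prior belief with one piece of evidence.
# All numbers are invented for illustration only.

prior = 0.01        # P(H): initial belief that an utterance reflects real emotion
likelihood = 0.90   # P(E|H): chance of observing the evidence if H is true
false_alarm = 0.30  # P(E|~H): chance of observing it even if H is false

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
evidence = likelihood * prior + false_alarm * (1 - prior)
posterior = likelihood * prior / evidence

print(f"posterior belief: {posterior:.4f}")  # ~0.0294: the belief barely moves
```

A machine that can construct and revise such priors over its own experience is doing something, but that something is statistics, not feeling.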

Daniel

Thursday, April 21, 2022 -- 3:42 PM

Thanks for the clarification. So being an essentialist with regards to emotion means that emotion is an essential property of something which couldn't exist without it. That would certainly eliminate the possibility of machines ever getting it, as no one would want to claim that if it's not emotional, it's not a machine. Where views diverge here concerns emotion as an accidental property of a machine (to wit: not what makes it a machine, but what it can do as a machine, something purely accidental to what it is and which it could very well dispense with). One side grants the possibility of perfect mimicry of the outward expression of emotional states, to the point of compelling evidence for a spontaneous internal connection with that expression, without knowing what that connection is; the other grants the same possibility, but only under the condition that what that connection is be already known or understood. The difference therefore does not concern whether or not emotion can be an essential property of any machine, but whether or not the connection between internal awareness and emotional expression has to be known to the manufacturer prior to its accidental occurrence in the completed machine.

Thanks also for the clarification of the term "priors". What seems most capable, however, of deciding the dispute is your reference to the concept of a physical essence, which to many sounds like a contradiction. For Locke, the concept of a physical body is gotten from the combination of a simple idea produced by a primary quality, solidity, and a simple idea found already in the way primary qualities are divided, extension. The idea of solidity divides extension into two parts: extension of a body, as cohesion of movable parts which are solid, and extension of space, as continuity of immovable parts which are not solid. While space is infinitely divisible, a body can become larger by adding more parts but precludes further divisibility where solidity is no longer perceptible. If there is a physical essence for Locke, then, it would seem to be the idea of solidity; but that possibility is explicitly excluded by the dependency on the sense of touch for its acquisition (cf. An Essay Concerning Human Understanding, Book II, chapter IV). My supposition is therefore that you're referring to the existence of universals in physical space, as an electron could arguably be held to be. And that's a very similar notion to the idea that emotions come to us from outside, as was commonly thought in the ancient world, so that the only question would be not whether a machine could produce or possess one, but whether it could receive one from outside that already exists. I agree that such an improbability should be rejected, but it remains unclear how it's conceivable at all.

Your suggestion, however, that an emotional machine could occur only under necessary and sufficient conditions is conceivable, since criteria of sufficient conditions are unproblematic, and necessary conditions might occur in the course of its development, as in a case where the machine might need to get angry in order to be completed; but that still wouldn't imply that one has to know what anger is before its last spark plug is put in place in order for it to work. And the division of the emotions into pre-existing varieties ("natural kinds") runs into the same problem as physical essence, namely, how to get one of them into the finished product.

The question of whether one has to know what an emotion is before a machine can have one appears, then, at an impasse. But perhaps a suggestion can be made as to the range of possible solutions. It seems to me that the theoretical possibility must be conceded that there might already exist some machines which can feel emotions but are unable to express them in a way which can be recognized by humans. Here knowledge on the part of the manufacturer is not required, but by that same token any verification mechanism is precluded. Expanding the range of the question, then, one can ask: can emotional properties be predicated of a machine without its maker knowing what emotion is, and if so, could a machine have them already without its maker ever knowing about it?

Tim Smith

Friday, April 22, 2022 -- 9:15 PM

The short answer is "No." Conscious Machines almost certainly will never have anything like genuine emotion.

Sometimes the best writing is found in book reviews, and I found one that is the most concise and well-written bit on what we have been going back and forth about.

This guy at Caltech, David Anderson, is a Platonic essentialist (he thinks it is accurate that emotion fits a necessary-and-sufficient category). You might like his stuff; I do not. Another guy, Jaak Panksepp, is a Lockean essentialist (he thinks emotion resides in subcortical neural circuits). Jaak wrote the book on the Lockean view ==> Affective Neuroscience. Anderson wrote the book on the Platonic idea ==> The Neuroscience of Emotion. There are others (Steve Pinker is a Lockean but thinks the physical basis resides in our genes). All these essentialist philosophers are wrong, terribly wrong. Google, Huawei, Baidu, Facebook, Microsoft, all the major pharmaceutical giants, and all major and some minor governments are spending billions of dollars chasing these larks. While some build their careers on this mistaken view, people who are the target of this tech and "science" are losing their jobs, time, money, and some, their lives.

I follow Lisa Feldman Barrett at Northeastern in Boston - her winding academic story is incredible. Like most people, I trust people more than I should. I also trust Joe LeDoux at NYU, who has given up essentialist models of fear and has since written some of the best writing on biology and emotion. These people are emergent constructionists who follow a strict evolutionary take on emotion and biology. These are my people. When you ask me to clarify my thought, I return to their works and re-read the notes I wrote at each reading.

The good news is Barrett reviewed Anderson's book, The Neuroscience of Emotion. This two-page review lays out the ontological issues at stake in trying to instill emotion (and gender and race and ethnicity and any of the many categories we consider the human domain) into conscious machines. I can explain this stuff to people. They agree with me that it rocks! They are converted; then they turn around and tell me their dog understands their every thought. It is hard. Essentialism cannot be disproven. It is not scientific. I know you want me to explain this. I have tried. Emotion is constructed on the random lattice of our biology. We cannot instill this emotion in a conscious machine without building a Golem or Swampman or a robot so exacting as to be a replica of human biology. OK... enough preamble. Here is the link. Read these two pages, see if you don't come over to the dark side, and change your life forever.

https://www.affective-science.org/pubs/2019/barrett-current-biology-revi...

If that didn't work, more power to you, Daniel. David Anderson just did a podcast on Brain Science. Enjoy. But don't pretend to be a lover of wisdom going down that path. This is not to say there isn't enough grant funding to be found doing it. The money is on essentialism, but it is a lazy and misleading philosophical view.

https://brainsciencepodcast.com/bsp/2022/195-emotion-anderson

Daniel

Sunday, April 24, 2022 -- 4:41 PM

--Charlatans just for the money, eh? The term in the context you're discussing seems very specialized, referring to the notion of singular emotion-types across different species which can be identified by the researcher and compared. If these emotional states outlast the stimulus that produced them, one might conclude that they are essential properties of the organism which possesses them. That is to say, as necessary for survival of the species on account of their relation to sufficient responses to threats to the individual organism, species in which emotional states occur could not exist without them, and they are therefore inferred to be essential properties of such species, as selected by environmental changes in the conditions for existence. The benefit of this model is that the study of emotion can intimately assist biological study. The problem with it is that you'd pretty much have to assign some kind of emotion to almost any organic life which successfully responds to threats, in cases where changes in the organism associated with the response continue after the threat is gone. Trees which, threatened by an approaching wildfire, excrete an increased amount of a particular tannin into the soil, for example, could in this variety of essentialism be said with some justification to feel fear, in the case that the tannin production continues should the blaze be diverted. And that is counter-intuitive, in addition to the constant danger of the researcher importing her/his own assumptions about what constitutes some emotional state or other into the object, distorting thereby what's observed. Therefore it doesn't make much sense to me either, unless we're talking about species closer to our own, like the higher primates. But then, by that same token, measurable emotional response would be a mere accident of intuitive species-proximity.

But machines can be considered quite differently. The question at hand, as I interpret it, is whether or not one has to have a correct and verifiable model of what emotion is, that is, has to sufficiently understand it, before one could build a machine to do it. And the attempt was made in the last paragraph of my post above to expand the range of the question to include whether some machines might already have emotions without their creators or users ever being aware of it. Note that an affirmative answer to the first question does not necessarily preclude an affirmative one to the second also. The assumption that one can't build a machine to do something without first knowing how that something is done doesn't necessarily exclude that some things which one builds might do it anyway, either from the beginning or after coming off the assembly line. The improbability of such a situation cannot be an argument against it for the anti-essentialist, since the latter must concede that a successful model of emotion which can be univocally applied does not and in all probability cannot exist. And if one can't say what emotion is, one can't say where it's not. So in my view, if driven out of biology, Emotion-Essentialism comes back through the back door in the development of apparently intelligent machines.

Tim Smith

Sunday, April 24, 2022 -- 6:48 PM

You are welcome to your view and your alternative facts. You are safe from science ever proving you wrong. Perhaps it will instead prove you right.

I have a beautiful bridge made out of string theory for sale if you are in the market.

Daniel

Sunday, April 24, 2022 -- 7:52 PM

No thanks, but an argument would be nice. And since you're suggesting that my fidelity to fact-based truths is less than ideal, could you do your readers a favor by pointing out which ones are the alternates?

Tim Smith

Sunday, April 24, 2022 -- 9:35 PM

Your back-door argument cannot be refuted, and you are welcome thereby to view machines as having actual emotion. There is a chance it could be true. In fact, it can never be disproven. That is a good alternative.

Daniel

Tuesday, April 26, 2022 -- 5:42 PM

You've mistaken a hypothesis for an assertion of fact. Showing that it can't be excluded is different from asserting that it's the case. The argument of the Anti-Essentialist, as here paraphrased, is that machines can't have emotions because someone would have to build and install them; and since no one knows what emotion really is, it can't be built (for the present, anyway). This is my reading of your position, well encapsulated in the second paragraph of your post of 4/20/22, 10:48 pm above. This means that knowledge of emotion is a necessary condition for machines to have it. If a counterexample is possible, regardless of whether or not it could ever be found, then the knowledge-condition cannot be a necessary one. The Essentialist could argue that such a counterexample is found in the remote possibility that emotion could occur on its own in a mechanical device which is already manufactured. The logic is elementary:

Key: (A) Clear and distinct knowledge of what emotion is;
(B) a manufactured mechanical device which possesses emotion.

If (A) then (B).
Not (A).
Therefore, either (B) or not (B), --on account of the fact that "if (A) then (B)" is not equivalent to "if (B) then (A)". Whether or not (B) can be demonstrated to be the case without (A) is irrelevant. If the Anti-Essentialist must concede its conceivable possibility, then respective knowledge cannot be a necessary condition for emotional machines.
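The point that the premises leave (B) unsettled can even be checked mechanically. A small truth-table enumeration in Python (the encoding is mine, purely illustrative):

```python
from itertools import product

# Check: given "A implies B" and "not A", is the truth of B determined?
# A = clear knowledge of what emotion is;
# B = a manufactured device which possesses emotion.
surviving_B = set()
for A, B in product([False, True], repeat=2):
    implication = (not A) or B   # material conditional A -> B
    if implication and not A:    # rows where both premises hold
        surviving_B.add(B)

# Both values of B survive, so the premises entail neither (B) nor not (B).
print(surviving_B)  # {False, True}
```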

Tim Smith

Tuesday, April 26, 2022 -- 9:09 PM

You are welcome to this view, as I said before. How exactly do you intend to verify it? You are looking at your own reflection rather than the depths of true emotion. In fact, that is your only alternative. Machines will be very good at reflection. Their own experience will be something entirely different.

Daniel

Wednesday, April 27, 2022 -- 9:11 PM

What needs to be verified with regards to the possibility of something in a hypothetical case?

Tim Smith

Thursday, April 28, 2022 -- 4:31 AM

Value.

Daniel

Thursday, April 28, 2022 -- 8:10 PM

--The value of the possibility actually being the case, e.g. a baker's electric mixer becoming undetectably angry in a case of its misuse? Or do you refer to the value of the hypothesis itself? The first scenario has no relation to the argument. Whether or not a machine's possible emotional state has any value for human beings is not relevant to its operation in a hypothetical premise which excludes its necessary conditioning by certain knowledge of what emotion is. The second is relevant, but stands outside the argument, in a consideration of what the whole discussion is good for in the first place. Verification here would be a private, subjective matter, and therefore could not affect premise validity. So it behooves me to reiterate that the theoretical possibility alone (of machines having emotion without it being installed by their manufacturer) is enough to refute the Anti-Essentialist's position that knowledge of emotion is a necessary condition for machines to have it. And this is on account of the fact that one can't exclude the possibility of something known to exist in something where it is not known to exist, if one can't say with clear and distinct explicitness what's being excluded.

Tim Smith

Sunday, May 1, 2022 -- 4:22 AM

Objective value.

Daniel

Sunday, May 1, 2022 -- 11:50 AM

False. Preclusion of necessary conditioning by epistemic confirmation cannot preclude objective possibility of existence. Just because the cause of something understood to exist is not clearly known, that doesn't imply that it can't exist somewhere where it is not observed to exist. You should be more careful in your reasoning.

Tim Smith

Sunday, May 1, 2022 -- 1:20 PM

How will you objectively verify this?

Daniel

Sunday, May 1, 2022 -- 4:39 PM

Verification is not needed for the admissibility of an objective possibility, since no existence-claim is made, but rather only the preclusion of theoretical non-admissibility, on account of the fact that, unlike a designed product, knowledge of what it is cannot be a necessary condition for it to exist.

Tim Smith

Tuesday, May 3, 2022 -- 9:34 PM

Logic is not inherently objective, and neither is your argument for possibility.

If anyone can claim objectivity regarding emotion in conscious machines, it is the neuroscientists who are looking for these essences of emotion, regardless of their philosophical view.

Logic and math are human constructs and are necessarily subjective. If we disagree on that, that is another bailiwick. Hopefully, we can agree that logic does little to help where degrees of belief, and not absolute confidence, are at issue; most matters are of this kind, as is this question regarding the nature of emotion.

I grant you: emotion may be essential, and we haven't found these essences yet. It is unlikely but possible.

The normative question of whether an emotion is essential or not is not driven by logic as much as probability. It is best to believe the most probable model, which is the most challenging option to discard.

That every essential model of emotion has been disproved; that even David Anderson and Ralph Adolphs, two of the best scientists pushing the essential model, had to create criteria called emotion primitives to show evidence of essence; and finally, that this debate has been somewhat universally resolved in favor of non-essentialist models, in humans at least, and perhaps at most - for these three probabilistic reasons it is harder to give up the idea of construction and emergence than essentialism, and this drives my belief in this model.

There is little value in possibility when all one has to do is find one actual example of emotional essence to establish the claim; yet years of experimentation looking for essential emotion have come up empty, and hundreds of supposed instances of essences have failed to pass muster under closer scrutiny. These last two points are value-laden knowledge, and I do not see much value in holding that my emotions are essential to my body.

So, suppose non-essentialism is accurate, and I propose it is; where do emotions reside?

Bayesian approximations don't cut it. We can create more and more emoticons, and we can set a GPT-50 on a quest to duplicate emotion by drawing on human experiences. Each time we do this, we draw closer to a model of emotion that has little to do with the raw human emotions that, all possibility aside, are not bound by logic whatsoever.

Importantly, when babies cry, they do not express emotion. They express affect. They use Bayesian learning as they mature; however, they learn emotion and construct their feelings from the social context and the human experience derived from the experience of their body.

I have no idea what body a conscious machine will assume upon its awakening. A lot depends on this body, just as much depends on the human bodies from which human emotion emerges. There is some physical ladder by which emotion climbs into one's perception, and this matrix is likely, in part, the medium from which logic stems. No one has come close to explaining these mysteries. We won't get any closer by assuming conscious machines will reflect our own experience, at least not until these machines approximate the systems from which our emotion emerges.

Tim Smith

Thursday, April 21, 2022 -- 4:36 AM

Editing your response is fair. Editing after another has responded is not productive. Daniel, you are misguided and, on this topic, largely out of your depth. But if you edit your responses after I have responded, not only are you wrong but also unethical. If that happens again, this interlocution is over. What is Philosophy, Daniel?

Daniel

Thursday, April 21, 2022 -- 10:19 AM

Philosophy is usually translated as "lover of wisdom", which to me makes no sense, as I've previously pointed out in a reply of 1/9/22, 6:09 pm to participant Thistle on the "Could Robots Be Persons" program page. Whatever edits have been done on my part are for grammar and typos, and not for meaning. But thanks for the heads up. Thanks also for pointing out how misguided and uninformed I am on this topic. I know nothing about computers and less about the boundaries of consciousness. Certainly, in the overflowing generosity of your bounty, you'll fill me in on the details when I run into trouble.

But back to your question, philosophers question assumptions that most others don't, and that makes them different from artisans and magistrates. There's a real adventurousness about it. But mostly, philosophy is a lot of fun, and those who engage in it don't do it for the money or prestige, but rather, by and large, because they enjoy it.

Tim Smith

Friday, April 22, 2022 -- 9:12 PM

I appreciate your response slightly more than I regret posting the comment that spawned it, and I do regret it mightily. Daniel, you are an honest seeker, and I respect you; thank you for your time here.

You and I disagree on the nature of Philosophy. I, at least, don't have fun reading Philosophy. Most times, philosophers feel it is incumbent on them to create their own terms. Coming to terms is not easy for me, and I have to re-read passages several times (often coming out with different interpretations at every pass). This is why a time-stamp jump gives me pause, as I have to re-read to understand what I have already spent what little time I have understanding and responding to in the first place.

One term that most philosophers can come together on is ethics. Be it the work ethic of plying the trade or the ethics of the business itself. Many bloggers take umbrage at tete-a-tete. Philosophers, in general, respect the other's opinion and can spot a troll a mile away. I have been called a troll before, rightly so.

I'm not trolling you here when I tell you my definition. Philosophy is two things: morality and perspective.

Ethics and morality, which are essentially the same for my definition, are the foundations of the wisdom that philosophers seek. Even when two seekers can't agree on Ethics, they quickly come to terms. This is the study of how we should live.

Perspective is not a foundation but a deeply personal aspect of thought that is often termed "view" in modern interlocutions. Every mode of thinking and study has its view, and unfortunately, due to the personal nature of views, each philosopher has their own terms or notations.

In our interactions, we often disagree on the second item – that of perspective. Perhaps this show is one such item (are there natural kinds of emotion?)

I apologize for my previous comment; it was not fair given your response, and I regret it.
