The Power of Prediction

Sunday, April 30, 2023

What Is It

You’re standing at the top of a mountain, surveying the vast landscape below. The information your senses take in flows to your brain, which processes it to create a representation of the scene. Or does it? What if, instead of directly perceiving the world around us, the brain is more like a prediction machine that hallucinates a picture of the world? If that were the case, could we still rely on the so-called “evidence of our senses”? Would it be possible to avoid unpleasant sensory experiences, like hunger or pain, by simply changing our expectations? How can we harness the power of the predictive brain? Josh and Ray predict a fascinating conversation with Andy Clark from the University of Sussex, author of The Experience Machine: How Our Minds Predict and Shape Reality.


Transcript

Josh Landy  
Is the brain a prediction machine?

Ray Briggs  
What if our senses are just making things up as they go along?

Josh Landy  
Can we predict our way to a happier life?

Comments (21)


Tim Smith

Thursday, March 9, 2023 -- 12:49 PM

Prediction is an essential and dangerous mode of cognitive behavior that doesn’t hallucinate pictures of the world as much as construct models. Hallucination and pictures are the wrong terms here, and this comes with the caveat that no one knows the experience of others, even when they relate their experience without apparent filters. So, for me, I don’t see pictures when I predict the location of my alarm without opening my eyes in the morning or waking minutes before the alarm goes off. There are several examples of picture-less but model-happy constructions without which our brains and bodies couldn’t survive or would not have evolved.

We should not rely on predictions but use them as we must, with or without thought. Prediction is a part of our biology. The primary reasons we shouldn’t rely on prediction are the potential for bias and the dulled appreciation of new phenomena or insights; both follow when our predictions fence our perceptions (so yeah… natural language predictors, like ChatGPT and Bard, are only so helpful).

Changing expectations is not easy, nor is avoiding unpleasantness, hunger, pain, or dissonance. The focus should be on understanding the interplay between expectation, attention, and sensory processing in shaping our experiences. That isn’t easy. The science of perception, like that put forward in the episode on smell, and related work, like the philosophy of color, can all help.

We can best harness the power of prediction by placing its process in the brain, the brain in the body, and the body in a social context. There are ideas and experiences outside our own that I cannot predict, and those black swans are the ones that prediction can overlook.

Andy Clark is one of the better communicators of cognitive science, and I look forward to his much more nuanced take and probable push-back here. He's using hallucination for a reason; I'm just not sure why. Specifically, I would ask him to outline the subtlety between perception and prediction, the contribution prediction makes toward understanding, and the role cultural context plays in shaping our cognitive flow.

Tim Smith

Sunday, March 26, 2023 -- 10:46 AM

I've read more on Clark's use of the term 'hallucination' here. I don't have his book, but this looks to be about AI hallucinations. I'm not clear how far Andy is pushing the analogy of AI hallucinations to human thought, but here is a definition and use case breakdown of AI hallucinations.

AI hallucinations refer to instances when an AI model generates unrealistic, irrelevant, or nonsensical outputs. Several factors contribute to these hallucinations:

1. Data-related factors:
◦ Insufficient training data
◦ Inherent biases
◦ Noise in training data
◦ Human error in data annotation or preprocessing
2. Model-related factors:
◦ Overfitting
◦ Model architecture
◦ Limitations of the model
◦ Incomplete fine-tuning
3. Algorithm-related factors:
◦ Training objective misalignment
◦ Randomness
◦ Temperature settings
◦ Adversarial attacks
4. User-related factors:
◦ Ambiguity in input

These factors are not mutually exclusive or necessarily relevant here - I'm just qualifying my question about what Andy means by this term. He may have a completely different definition and take.
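
Of those factors, temperature is the easiest to make concrete. Here is a minimal, hypothetical sketch of temperature-scaled sampling (the logits are invented toy scores, not from any real model): dividing scores by a low temperature makes the sampler nearly deterministic, while a high temperature flattens the distribution so unlikely outputs surface more often.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Scale logits by 1/temperature before the softmax: low temperature
    # sharpens the distribution toward the top prediction; high temperature
    # flattens it, letting unlikely (possibly nonsensical) outputs through.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
toy_logits = [4.0, 2.0, 0.5, 0.1]  # invented next-token scores
for t in (0.2, 1.0, 2.0):
    draws = [sample_with_temperature(toy_logits, t, rng) for _ in range(1000)]
    print(t, np.bincount(draws, minlength=4) / 1000)
```

At t = 0.2 nearly every draw is the top-scoring option; at t = 2.0 the tail options get sampled regularly, which is the knob-level analogue of the hallucination risk listed above.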

Tim Smith

Tuesday, April 4, 2023 -- 4:17 AM

I don't want to take this analogy too far, and I'm still thinking about this show along other lines, but here is a rough mapping from AI hallucination factors to human prediction and thought.

1. Data-related factors (AI) and Experience-related factors (Human):
- Insufficient training data (AI) ↔ Insufficient experience or exposure, or abuse (child abuse even) (Human)
- Inherent biases (AI) ↔ Innate cognitive biases or cultural influences (Human)
- Noise in training data (AI) ↔ Misleading or ambiguous information (Human)
- Human error in data annotation or preprocessing (AI) ↔ Errors in learning or memory formation (Human)
2. Model-related factors (AI) and Cognitive/Neural elements (Human):
- Overfitting (AI) ↔ Overgeneralization or confirmation bias (Human)
- Model architecture (AI) ↔ Neural architecture or cognitive structure (Human)
- Limitations of the model (AI) ↔ Limitations of human mental capacity or neural processing (Human)
- Incomplete fine-tuning (AI) ↔ Incomplete learning or skill development (Human)
3. Algorithm-related factors (AI) and Cognitive/Decision-making factors (Human):
- Training objective misalignment (AI) ↔ Misaligned goals or values (Human)
- Randomness (AI) ↔ Noise or randomness in human decision-making (Human)
- Temperature settings (AI) ↔ Level of exploration vs. exploitation in decision-making (Human)
- Adversarial attacks (AI) ↔ Deceptive or manipulative information, let's say, blogging (Human)
4. User-related factors (AI) and Communication/Context factors (Human):
- Ambiguity in input (AI) ↔ Ambiguity in communication, context, maybe even poetry or metaphor? (Human)

Andy Clark's term "hallucination" refers to the brain's predictive processing framework, wherein our perceptual experiences are constructed from a combination of sensory input and prior knowledge or expectations. I don't want to get too off-base with the AI sense of the term, but I'm interested in the rage that AI has become with the advent of public large language models.

Note that some of the analogies I draw here are worth more than others. Bias and insufficient training are strong correlations, while temperature settings seem wholly inappropriate; and yet, collectively or in social contexts, temperature correlates well with brainstorming or mob behavior.

I'm still going back over 'Surfing Uncertainty,' which is the only book of Clark's I recall having read, and more and more I realize I could have read it better. I hope this adds something here. Hallucination seems the wrong term for what the embodied mind does, given its proximity to the real world and the 10:1 feedforward-to-feedback white matter ratio. The prompts that PT sent out by email are very interesting, and this show is going to be even more so.

Harold G. Neuman

Saturday, March 18, 2023 -- 10:03 AM

I wonder, opinions and positions aside for a few minutes: is our brain the predictor, or is it more practical to claim prediction is seated in an accumulation of past experience? Certainly, experience and concomitant memory have storage areas in the brain, and we experience and remember things every day, cognition permitting. I suppose I am splitting hairs here, insofar as these experiences and memories are like grain in a silo. The brain is a big one, for its size. Right now, I am trying to wrap mine around something called moral fluidity. See the Oxford University blog, if you are interested.

Tim Smith

Sunday, March 19, 2023 -- 3:06 AM

Harold,

Prediction could be premised on instinct or bias, neither of which is necessarily premised on experience. These are good insights I hadn't really considered until you suggested them, since the vast majority of the prediction I was considering would be based on Bayesian priors. Hopefully this will be discussed in the show. Practically, I'm not sure it matters, and that makes me a bit uneasy.

Best,

Tim

Daniel

Monday, April 10, 2023 -- 1:44 PM

Is prediction premised on objective probability or subjective perception of likelihood? There is a tendency to assume the latter because the predictor must also be a perceiver. But the likelihood that the predictor is correct is determined by its probability independent of its being perceived. Take the probability, before throwing a coin into the air, one side with the image of a male human's head embossed upon it and the other the date of its being struck, of which side, head or date, would be facing upward after falling to the ground. The objective probability is 50% for each throw. But let's say person x throws it five times and each time it lands with the head-side up. It is perceived by this person as much less likely that the coin will land head-side up after a sixth throw because the number of throws in the same series has increased, and the subjective perception of probability is applied to the series rather than the individual throw, adjusting to the objective decrease in probability of contiguous occurrences of identical outcomes. So let's say it lands with the date side up after the sixth throw as predicted. Person x is correct but only by overestimating the probability of a single throw's outcome. Does this make person x less likely to have been correct after the throw, making it less likely to have been successful and therefore specially informative if it is, and more likely to be successful before the throw on account of the unlikelihood of identical results in a random series? Has person x merely calibrated the unlikelihood of a post-hoc outcome to its likelihood beforehand? Is the posterior improbability of evidence for a given hypothesis, and therefore its informative value, inversely proportional to its prior probability of being found? If so, how could such an operation sufficiently confirm an hypothesis?
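
The objective half of this can be checked with a quick simulation (an illustrative sketch; a fair coin is the only assumption): the five-head streak carries no information about the sixth throw, so person x's perceived drop in likelihood belongs to the series judged in advance, not to the next throw.

```python
import random

random.seed(42)
conditioning_trials = 0
heads_on_sixth = 0

# Simulate many six-throw series and keep only those whose
# first five throws all landed head-side up.
for _ in range(1_000_000):
    series = [random.random() < 0.5 for _ in range(6)]  # True = head side up
    if all(series[:5]):
        conditioning_trials += 1
        heads_on_sixth += series[5]

# The streak is rare (about 1 in 32 series), but within it the
# sixth throw is still roughly 50/50.
print(heads_on_sixth / conditioning_trials)  # ≈ 0.5
```

Whatever is going on in person x's prediction, it is not tracking the objective probability of the individual throw.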

Harold G. Neuman

Monday, March 20, 2023 -- 6:19 AM

Hi, Tim. Good to read from you! I have been elsewhere, blogwise, but upon reading the introduction to this, I wanted to add a couple of pennies. Brain as prediction machine jogged something. I am not sure why, except to say the human brain may well be thought of as such, but its predictive power, if it has any, falls upon experience and memory of experience: if this, then that. Or, causation linked with previous occurrences. I don't think much of some of the claims and assertions now floating around AI. But now, I sorta understand why they are floating. And why people get agitated over transhumanism. Russell did not think much of causation, as I understand it. But he was more mathematician than philosopher and, if I recall, neuroscience was not part of philosophy in 1950. Of course, neither was I. Times are changing, as are our treatments of reality. I may even read Chalmers' book.

Tim Smith

Wednesday, March 29, 2023 -- 7:35 AM

Hey Harold,

I’ve been reading your other posts, and it is good to see you occasion back here as well. You and I share an interest in Clark, and Andy Clark is to prediction as David Chalmers is to consciousness.

It would be wise not to talk too much about alternatives to prediction, but that has never stopped me. As I have said above, not all thought is based on forecasts, even as the singularity approaches based mainly on predictive “thought,” if that is what we call what AI does. Innate behavior and bias, both personal and social, are crucial elements and will be fundamental for our lifetimes at least. Hopefully, reason will play a factor somewhere in there as well.

To say Bertrand Russell did not think much of causation is probably better phrased this way: he questioned causation as a force of nature and proposed that humans add causation to their observation of reality. I take it this is what he, and you, mean. That remains a minority view for the moment, though statistical science has intervened in history since Russell’s best day.

I don’t want to get too sidetracked on Russell because he was prolific, but one aspect of his work that interests me is his theory of descriptions of non-existent objects. It would be interesting to hear Clark’s take on whether prediction can discover and fundamentally create. I enjoy the mish-mash that AI can “create” as much as anyone, but in the limited interaction I’ve had with natural language processing and art bots, they don’t help create “new” objects and ideas. Prediction, as a human behavior, makes me appreciate the other human aspects that add zest and vitality. I don’t want to be harsh on prediction, but most of the goodness in its assumed persona lies in the hints that this zest lurks behind these predictions. In AI, at least for the moment, it does not. But, as you state, prediction is fundamental to the vital human.

Sorry to go on here; you touched on more than a few things. Transhumanism is undoubtedly real and an issue for this show to consider in the analogy between human predictive thought and AI algorithms. We are only just now appreciating the impact of smartphones. They have changed our social interactions and economy, but the change in our thinking and perspective is much more subtle. We are changing whether we choose to or not.

Best to you,

Tim

Daniel

Monday, April 10, 2023 -- 12:19 PM

The loss of public phones, the old "phone booths," was a major shift, describable as a transfer of communication resources from the public to the private sector. Now if communication is one of the tasks to which linguistic means are put, thinking can be said to be another. If replicas of linguistic thought-products can be produced at levels sufficient for wide distribution, would this not constitute a shift in thinking similar to the one which has occurred in communication resources? Is so-called "artificial intelligence" (a term which of course must exclude any association with the term "consciousness") little more than an effort by concentrations of private power within the financial and technology sectors to privatize the public mind? --But here's the thing which I find so interesting: If one's private experience internally or "epistemic action" externally (cf. "The Extended Mind", Clark/Chalmers, p. 8) is characterized by optional results which are possible only on the basis and in the context of non-optional foundations, the majority of which are shared collectively, and if this shared sector of mind (which might in passing be called the domain of "cultural objects") constitutes the largest part of it, or the part with the most contents, then is it not the case that such an attempt at mind-privatization is just the latest attempt to get rid of thinking altogether, insofar as such a thing is accessible in public contexts? Could one say that any deliberate creation of such a replica must necessarily be controlled by the will to replace what it imitates? Is intelligence generated artificially of its essence anti-intellectual?

Harold G. Neuman

Friday, March 31, 2023 -- 8:11 AM

Hey, back at you:
Read your account on Splintered Mind. Good stuff, for as much as I understand! Yes, I stir things up at times, trying to remain respectful. PT taught me a lot and I still come back here occasionally. If you have read any of the Oxford ethics blog, you may have also read comments from my brother. He was a programmer, then a systems professional. So he is engaged with AI and machine learning. Good to see you branching out a bit.

Tim Smith

Friday, March 31, 2023 -- 7:56 PM

Hey indeed,

I happened on that post from a sidebar 'Heap of Links' mention in the Daily Nous, which I chanced upon from a reference in one of your posts on the ethics blog to a petition on AI, a concern for my day job. The petition has issues and is a bit shortsighted, but I wouldn't have crossed it had you not mentioned it in one of your posts. Thanks for that.

You've found a sweet spot, Harold. You never know when things come around, but I won't fret not seeing you here as often, knowing you are engaged. Few people read through the comments to blogs. I do, and so do you perhaps.

Eric Schwitzgebel is an interesting character, and he always gets back on comments - a service the world doesn't deserve but is all the better for.

I've been back and forth with another poster in the Berit Brogaard show on Hate about emotion as expressed in the brain, only to come across Chalmers' joint work with Andy Clark in their 1998 paper titled "The Extended Mind" (https://consc.net/papers/extended.html). I need to tack hard on the wind to stay within the possibility of panpsychism in my wondering about emotion, but 'Active Externalism' is a much easier chew. I wanted to mention it to you as you considered going at Chalmers. It's much shorter and enjoyable.

I will comment back and post after this show in either case. Writing down what I learned is good, so I can remember it when the show repeats. My present-day brain doesn't always agree with my past one.

Daniel

Friday, April 7, 2023 -- 5:26 PM

So how do you know they're both yours? If lack of agreement is co-emergent with a distinction between a currently observable (observable in principle, anyway) functioning brain and the recollections and imaginings which generate cross-modal inferences of past experience and understanding, then how can one claim that each belongs to a singular subject which refers to them both, as you have above?

The solution seems to be to semantically disentangle the term "thinking" from the means of cognition. One way this is accomplished is by translating the phenomenon of consciousness of time described in the final sentence above into a model of how this phenomenon is produced, locating this process in the brain and, as a mechanism, understanding it as the means of perception of time-relations in the context of complex biological systems. The phenomenon can be described as holding on to something that's leaving, getting ready for something that's coming, and getting hit with something that's here. Although these are simultaneous intentions of the individual, at very minute levels this characteristic of being intended can be exaggerated, so that the distinction between memory and anticipation of individual experience, on the one hand, and retention and prediction of sensory input, on the other, is not categorical.

If this can be shown or, if not, heuristically assumed, the brain can be described in a biological sense as the mechanism by means of which time-awareness is generated in the organism. The first task would be to preclude any application of the designation of analytic judgement by the claim that all judgements are synthetic.* The second is to distinguish between those which have a posteriori or a priori grounds, i.e. with or without the aid of experience. If prediction can in a not less than strict sense be made a priori, then occurring sense-perception and retention of what's perceived can be said to be made on a posteriori grounds. Because immediate perception is limited in range, import capacity, etc., only a much smaller quantity of stimuli can be retained, as when material is pushed through a mesh. The a priori predictions must correspondingly be very selective, even if they are described as occurring so rapidly that they are sub-personal (to avoid using the term "sub-conscious"), and come to be made by a spontaneous and pervasive distrust of everything which is given without them. Because this process is describable without any deliberate intervention by the organism, it can be called a mechanism and as such be found in domain independence of brain functioning. Now if prediction functions to exclude almost everything that is given in reality outside the mechanism, continually readjusting to a posteriori post-mesh retention, the cognitive product, when compared to what prediction must dismiss, can not incorrectly be called an "hallucination".

Is this plausible? How does it compare to your sweeping claim in the post of 3/9/23 above that cognitive function generates models of reality rather than hallucinations which ignore it?
__________
* The distinction between analytic and synthetic propositions is of course not affected by the naturalization of truth conditions.

Tim Smith

Friday, April 7, 2023 -- 10:29 PM

Hey Daniel,

Physicists and philosophers both question time. PT has done a couple of shows on time already, and it may be, wait for it, time to do another; or maybe that is a construct in my head. Dave Albert spoke to the fundamentals in his show (https://www.philosophytalk.org/shows/time), and Julian Barbour talked to some of the same ideas you allude to in his (https://www.philosophytalk.org/shows/reality-time). Andy Clark, too, may speak of it in this show. Time as a human construct is profound, and sweeping even. My claims, however, are not so sweeping but subtle, and powerful all the same.

There isn’t much philosophical difference between prediction and modeling in the human brain, and not much changes using one view or the other. Not much is not none, however. Models have an external connotation that encompasses Clark’s idea of externalities, and this subtle difference describes human social behavior and other types of cognition. For example, human models such as reputation and trust have an external component that post-traumatic stress disorder doesn’t. Additionally, there are plenty of complex models, like quantum physics, which are flat-out counter-intuitive and “predict” as well as any model ever has, yet escape intuitive human prediction. The simplest case, weather forecasting based on past weather patterns vs. a person creating a model of the weather system and forecasting based on understanding (or, as is the modern case, exhaustive calculation) and not experience, suggests a broader view of thought; a toy version appears below. I think models are precisely this: a better view of what humans do, and, as I have said, not the only view. Prediction is pretty close to modeling in any case, and Clark’s task here is to speak to prediction in particular.
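
A toy version of that weather contrast, with everything invented for illustration (the seasonal formula, the noise level, the year of data): a forecaster who only repeats past patterns, versus one who models the system's structure.

```python
import math
import random

random.seed(0)

def true_temperature(day):
    # The "weather system": an assumed seasonal cycle plus daily noise.
    return 15 + 10 * math.sin(2 * math.pi * day / 365) + random.gauss(0, 2)

history = [true_temperature(d) for d in range(365)]

# Pattern forecaster: tomorrow will be like today (pure past experience).
pattern_error = sum(abs(history[d] - history[d - 1])
                    for d in range(1, 365)) / 364

# Model forecaster: reproduce the system's seasonal structure
# (understanding), without consulting yesterday's observation at all.
model_error = sum(abs(history[d] - (15 + 10 * math.sin(2 * math.pi * d / 365)))
                  for d in range(1, 365)) / 364

print(pattern_error, model_error)  # the model-based forecast errs less
```

The point survives the toy setup: experience-driven prediction tracks the recent past, while a model of the system generalizes past it.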

Externalism is the subject of Clark’s paper I shared with Harold above and have shared in other discussions. It is all of nine pages; just saying. Clark may talk about the concept of models in his chat with Josh and Ray, as well as time. One can only hope.

Regards,

Tim

Daniel

Monday, April 10, 2023 -- 11:17 AM

So does that mean that prediction is the best model, or is it the case that all models predict? Although it appears to me that you're opting for the latter, a bi-semiotic rivalry on my part must be conceded with respect to higher-level reception of what is conveyed. But for the sake of argument let's say that's the case. If predictions occur at the sub-personal level as populations associated with corresponding stimuli, where error-minimization generates favored individuals, which in turn generate new populations from the error-resistant ones, which in turn minimize the errors within the new population, and so on, then this results in a higher-level model of reality-comprehension in the person, arising from lower-level, sub-personal error-minimization within populations of minute probability judgements. The big predictions that can be talked about are products of all the smaller ones which are excluded without noticing it. Is that right? A model is just a big prediction made up of a lot of imperceptibly smaller ones, the majority of which are excluded from one's spontaneous beliefs without any intervention on the part of the person.
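
One very reduced way to picture that loop is the sketch below (an illustration only, not Clark's actual formalism; the hidden signal, noise level, and learning rate are all invented):

```python
import random

random.seed(1)

def minimize_error(prediction, samples, learning_rate=0.1):
    # Each step passes forward only the prediction error (what was
    # unpredicted) and corrects the running prediction by a fraction of it.
    for sample in samples:
        error = sample - prediction
        prediction += learning_rate * error
    return prediction

true_signal = 5.0
noisy_input = [true_signal + random.gauss(0, 1) for _ in range(200)]

# Starting from an uninformed prediction, repeated error minimization
# settles near the hidden regularity behind the noisy input.
print(minimize_error(prediction=0.0, samples=noisy_input))  # ≈ 5.0
```

What the loop throws away at each step never reaches the "big prediction" at the top, which is the exclusion-without-noticing described above.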

So how does one arrive at upper levels of model formation from lower levels of prediction-error minimization for sensory input, which have their own? The operative term, as you point out above, is "Bayesian inference". From the probability that an hypothesis is true anterior to its evidentiary confirmation, a post-evidentiary inference is drawn of the prior probability of the special nature of its confirmation. When a guest at a social gathering observes another with animated gestures while holding a cup containing a beverage, and it is predicted that this person will unintentionally spill the beverage which this person is holding, and this person does so in one particular manner rather than another, say by tipping it too far instead of dropping it, and an inference is drawn therefrom that it was more likely to happen this way instead of by its being dropped, the guest has made a Bayesian inference. If apologies for the crudeness of this account can be accepted (and corrections certainly invited), permission is requested to solicit an answer to a question which arises from the characterization of upper-level models as deriving from lower-level prior probabilities of particular sensory inputs: Is your brain lying to you? If one can ascribe prediction-error minimization for sensory input to spontaneous processes, expressible in a cortical measurability of optional exclusion of all input-predictions which are most likely to be wrong, then how could a hypothesis ever be shown to be true, if its evidentiary confirmation is a function of its prior probability of being confirmed?
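
For what it's worth, the party example can be run through Bayes' rule directly (the numbers below are invented for illustration, not drawn from the thread), and doing so makes the closing worry vivid: the posterior is an explicit function of the prior, so the confirmation is conditioned on what was assumed going in.

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), with the evidence term
    # P(E) expanded over the two ways the evidence can come about.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative assumptions: H = "the gesturing guest will spill",
# E = "the cup is seen tipping past the horizontal".
prior = 0.3            # P(H) before any evidence
p_e_given_h = 0.8      # P(E|H): most such spills start with a tip
p_e_given_not_h = 0.1  # P(E|~H): careful guests rarely tip that far

print(bayes_posterior(prior, p_e_given_h, p_e_given_not_h))  # ≈ 0.77
```

Raise or lower the prior and the posterior follows it, which is one way of restating the question of whether such an operation can ever sufficiently confirm an hypothesis.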

Harold G. Neuman

Friday, March 31, 2023 -- 9:03 AM

After reading and more closely parsing your 3/29 remarks above, I had a couple of random neuron firings on prediction and predictivity. We get some of our ideas about these things through outcomes of scientific inquiry. Repeatability gives a level of comfort/confidence in predictability, like my little maxim of doing the best you can with what you have and know. Then there are the wild cards, found among Taleb's 'black swans'. Matt and Mark Ridley have said some things about this, if memory serves. Anyway, according to NN Taleb, black swans are mostly, if not entirely, UNpredictable. 'Mostly' attains because any slim chance of predictability rides on chance or probability. For some events/occurrences, like astronautical disasters, we don't want to play poker or spin the wheel. In a sweeping generalization, then, the brain's efficacy as prediction machine seems less than tentative here. The generalization may be too broad. But when do we take a chance?

Tim Smith

Friday, March 31, 2023 -- 11:30 AM

This would be good for a check-in with Andy Clark. Black Swans are detected through feed-forward neural predictions but realized in cross-modal cognition and failures in mental modeling of reality. When your ears don't match your eyes, or when Superman is discovered underneath Clark Kent's button-up. I don't know about the power of metaphor to periscope new lands creatively, but a prediction might be able to crash the submarine into a black swan or two at least. I'd like to hear Andy get back on this. The power of prediction is not infinite.

Daniel

Thursday, April 6, 2023 -- 4:03 PM

--Nor can it be limited by properties belonging to objects which are not perceived. Compare seeing a grey dog with imagining one. In both cases the same judgement is made with regard to the object's color; they differ only in how the object is already there, or its "given-ness". One satisfies a criterion of plenary intuitive correspondence and the other of comparatively empty correspondence, but they are distinguished only by degrees, where one predominates more than the other, and therefore do not constitute different kinds of objects, but rather describe a sequence where imaginative form passes over into intuitive content, and intuitive content in turn reoccurs in conceptually regenerated form, retaining the judgement of its intuited properties originally predicted. Is this compatible with Russell's theory of descriptions mentioned in the fourth paragraph of your 3/29/23 7:35 am post? Let's say the name of the grey dog in the example is "Fido". A correct description of that situation would be stated as "Fido is a dog which is grey". Because "dog" describes a set of objects to which Fido belongs, it constitutes an indication of Fido which is qualitatively degenerate, in that it can only pick out a few basic qualities of the individual, while at the same time it represents a greater quantitative extension of the reference. What is lost in qualitative intension is gained by quantitative extension. But to understand the relevance of the individual to the set, such sources of intuitive contents must be both spontaneously, or "sub-personally", predicted and retained by imaginative means. Rather than describing human cognitive resources as a controlled hallucination, would it be better stated, along the lines of Russell's theory, as a constrained imagination?

Harold G. Neuman

Friday, March 31, 2023 -- 12:36 PM

I think we are both getting better with diversification. Chance is not overrated, only undervalued. Taleb forgot grey swans, seems to me. His life background and history are indicative. He and Pinker are antimatter. Thanks for your support.
Be well, do good where you can. Why not.

Daniel

Friday, April 28, 2023 -- 5:49 PM

With respect to philosopher Daniel Dennett's very interesting published commentary on Clark's 2013 article on prediction in cognitive science* in the Open Peer Commentary section, a question can be seen to arise which should perhaps be tossed out for possible arbitration. This involves a characteristic task which he claims philosophers are expected to undertake: reconcile the specialized scientific view with the general world of ordinary experience. Applied to the study of the brain, this refers to the view of cognitive science as compared with the commonplace experience of thinking and perceiving. One attempt at reconciliation is to apply the relation of identity to both areas by stating that a mental event, e.g. perception of a sunrise, is simply the same thing as an event in the brain. At first glance this doesn't pass critical muster, since in order to confirm such a fact one would have to observe the brain-event from outside of it, which would preclude any identity with what is observed. The solution Dennett offers is to bring Clark's distinction, between top-down uninformed predictions and bottom-up minimization of errors in these predictions, into relation with Locke's distinction between primary and secondary qualities, or formal qualities of objects and those associated with material responsiveness of sensory stimulation. These latter constitute the error-minimizers whose upward or "forward" progress against the downward (or backward) flow of diverse predictions produces the objects of everyday experience. Is this what the Identity-theorist most needs to prove the thesis? If experience consists of its contents, and perceiving only exists insofar as something exists which is perceived, then could it be the case that, because the perceived objects involve a judgement about their location, that location could be in the mechanism of perception just as much as in the visible and tactile objects traditionally understood as outside typical corporeal boundaries? And if the cascade of downward-flowing predictions are all inaccurate in the sense of correspondence, while the worst errors are progressively minimized by the upward flow of (in effect) selected sensory qualities, could it be correctly said that the object of perception consists in fact just in this physiological event, and not in the objects of traditional experience, redefined accordingly as the least wrong of possible objects which can be generated encephalically?
_______
* Andy Clark, "Whatever next? Predictive brains, situated agents, and the future of cognitive science"; Behavioral and Brain Sciences (2013), p. 29.

MayeConsidine

Tuesday, November 7, 2023 -- 7:07 PM

However, understanding the predictive nature of the brain can have practical implications. By recognizing that our perceptions are not fixed representations of reality, we can potentially harness the power of the predictive brain to improve our lives. For example, in the field of mental health, therapies such as cognitive-behavioral therapy (CBT) utilize the concept of reshaping expectations and beliefs to promote positive change.

hueljannie

Tuesday, December 26, 2023 -- 7:19 PM

There may be real-world applications to learning about the brain's predictive abilities, though. One way to tap the predictive brain's capacity for personal growth is to acknowledge that our perceptions are not static depictions of reality. As an example, cognitive-behavioral therapy (CBT) is a mental health treatment that uses the idea of changing one's expectations and beliefs to improve one's mental health.
