Mind Sharing

Sunday, March 31, 2024

What Is It

Mind reading might sound like the stuff of science fiction. But in philosophy and psychology, mind reading is something that human beings do whenever we try to guess what another person is thinking. Could it be that people are also natural born mind sharers, unconsciously shaping our behavior to be understood by others? How do we change or exaggerate our actions when others are present? And how can we use these insights to communicate better with our loved ones? Josh and Ray share their mind(s) with Julian Jara-Ettinger, Director of the Computational Social Cognition Lab at Yale University.

Comments (3)


Harold G. Neuman

Monday, January 29, 2024 -- 11:48 AM

Interesting this comes

Interesting this comes forward. There is another blog I visit, where the author revisits an older film in his examination of "the dematerialization of sex". The film was Demolition Man, starring Bullock, Snipes, and Stallone. Futurist society; the pacifist culture deems sex dirty, so don't do it. Sex became a mind game, literally AND totally. I guess reproduction was artificial, and therefore fully under government control. Underlying motive: sex begets violence, which is not good for public order. Sex is, of course, mostly in the mind; how it feels matters less if one is not thinking about it. I just found it interesting that you are doing something on it now. How is NV doing? I communicate with ES often.

That z, in "dematerialization," second line above? I never can keep it straight, and KNOW an 's' is better.


Daniel

Wednesday, February 14, 2024 -- 6:07 AM

Theory is often understood as

Theory is often understood as the branch of a discipline which is about a practice but not part of it. Hence, e.g., music theory is not musical. But there is also a sense of the term which consists in a capacity, whether or not it is exercised. These theories might be called innate models, latent and exercised only upon some external stimulus, the response to which is an inference. Only because an inference is made not just from the particular stimulus but from other potential ones of a similar type can the network of sub-personally held relations upon which the response depends be called, it seems to me, a "theory". I interpret Jara-Ettinger to have deployed the term in this way when he describes an interpretation of the result of a behavior which is unexpected.* If one succeeds in making sense of such behavior, one must, as I read his analysis, make use of an ability to form a picture of the mind of the agent under consideration. The ability to picture is the theory; and since the picture is of someone's mind, it is called "Theory of Mind".

In contrast with theories that are collaboratively constructed, then, Theory of Mind, understanding that it is already there, has to be discovered. For this task data-processing machines are suggested: start with the assumption that all mind-generated actions are modifications of reward-maximization and cost-minimization processes, add a world model, which is the picture an agent has of an anticipated action's context, and combine them to generate a sequence of actions which a human observer can make sense of, or its "policy". As combination-diversity increases to include as many instances as possible, some policies are reinforced by the reward-function and others are made weaker by the cost-function, with the result that the statistical regularity of a small number of policies out of all possible ones should resemble human behaviors and therefore furnish strong support for a Theory of Mind predicated on the same axiomatic assumptions.
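The kind of forward model described above can be illustrated with a toy sketch. To be clear, this is not Jara-Ettinger's implementation; every number and name here is invented for illustration. An agent on a number line earns a reward for reaching a goal and pays a cost for each movement; enumerating all action sequences and weighting them by a softmax over reward-minus-cost yields exactly the "statistical regularity of a small number of policies" the paragraph mentions:

```python
import itertools
import math

# Toy world: an agent takes SEQ_LEN steps on a number line, starting at 0.
# All parameters (GOAL, reward of 10, cost of 1 per move) are made up.
GOAL = 2
ACTIONS = [-1, 0, 1]          # step left, stay put, step right
SEQ_LEN = 3

def reward(final_pos):
    return 10.0 if final_pos == GOAL else 0.0

def cost(seq):
    return sum(1.0 for a in seq if a != 0)   # each movement costs 1

def utility(seq):
    return reward(sum(seq)) - cost(seq)

# Enumerate every possible action sequence (a "policy", loosely speaking)
# and score it; a softmax turns the scores into a probability distribution
# that concentrates on efficient, goal-reaching behaviors.
seqs = list(itertools.product(ACTIONS, repeat=SEQ_LEN))
weights = [math.exp(utility(s)) for s in seqs]
total = sum(weights)
probs = {s: w / total for s, w in zip(seqs, weights)}

best = max(probs, key=probs.get)
print(best)  # a most-probable behavior: reaches the goal with minimal moves
```

The point of the sketch is the shape of the computation: the high-probability policies are precisely the ones a human observer would recognize as sensible goal-directed behavior.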

Although Jara-Ettinger doesn't appear to fully endorse this proposal, and suggests instead an inverse form of it based on Bayesian inference, the use of data-processing mechanisms raises a question about the use of scientific instruments in hypothesis-confirmation: how is agency related to their reliability? If human-resembling behaviors are produced mechanically from a few fundamental axioms of favored and disfavored actions and of how they fit together in a series, a procedure is undergone which must in principle be describable. In the case of very rapid procedures which exceed the range of possible human perception, however, an account of them might not be possible, insofar as the human developer of the tool, in addition to the researcher, shares the preclusion of procedure-perception and hence the impossibility of observation. The reliability of its results, e.g. in confirmation of a hypothesis of agent-based effects, becomes detached from trust in its developer. This doesn't occur, for example, in the case of a boat, where functional reliability is tied at each point to its manufacturer: an accounting of the boat's optional and generation-specific characteristics, though not always possible for topical reasons, is always possible where such reasons are removed.
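The inverse, Bayesian direction mentioned above can also be sketched as a toy, again with invented names and numbers rather than the researcher's actual model. Instead of generating behavior from a known goal, the observer watches a behavior and infers the goal: each candidate goal assigns a likelihood to the observed action sequence (under a softmax, "noisily rational" agent), and Bayes' rule with a uniform prior turns those likelihoods into a posterior over goals:

```python
import itertools
import math

ACTIONS = [-1, 0, 1]
SEQ_LEN = 3

# Observed behavior: the agent stepped right twice, then stayed put.
observed = (1, 1, 0)

# Candidate goals the observer entertains, with a uniform prior.
candidate_goals = [-2, -1, 0, 1, 2]

def utility(seq, goal):
    reward = 10.0 if sum(seq) == goal else 0.0
    cost = sum(1.0 for a in seq if a != 0)   # each movement costs 1
    return reward - cost

def likelihood(seq, goal):
    # Probability of this sequence under a softmax ("noisily rational")
    # agent pursuing the given goal.
    z = sum(math.exp(utility(s, goal))
            for s in itertools.product(ACTIONS, repeat=SEQ_LEN))
    return math.exp(utility(seq, goal)) / z

# Bayes' rule with a uniform prior: the posterior over goals is the
# normalized likelihood of the observed actions under each goal.
post = {g: likelihood(observed, g) for g in candidate_goals}
total = sum(post.values())
post = {g: p / total for g, p in post.items()}

inferred = max(post, key=post.get)
print(inferred)  # the goal that best explains the observed behavior: 2
```

Notice that every step of this toy is inspectable, which is exactly what the comment argues may fail to hold for opaque, very rapid confirmation mechanisms.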

So is it necessary for the reliability of a scientific instrument to be connected to trust in its manufacturer's accountability? Do systems of extra-perceptible hypothesis-verification drive research into areas of narrower responsibility for confirmation-mechanisms? In the case above, does the Theory of Mind, if confirmed computationally, hinder the exportability of epistemic grounds for the confirmation of its claims?
__________
*Julian Jara-Ettinger, "Theory of Mind as Inverse Reinforcement Learning"; Current Opinion in Behavioral Sciences, 2019, 29: 105-110; p.105.


Daniel

Thursday, February 15, 2024 -- 4:56 PM

My remarks above do not

My remarks above do not address the contents or results of Professor Jara-Ettinger's research directly which, in my brief acquaintance with his fascinating work for the purpose of commentary regarding this broadcast's topic, required some special definition of terms. The use of the term "theory" as cited in the 2/14/24 post above, for example, is revisionist in the sense that it refers to the object of a theory, or a theory of how one possesses and revises an innate theory of other minds.*

Of particular interest on the side of epistemic justification is how results are confirmed by computational models.** The concern raised is whether the reliability of test-results contravenes the sharing of epistemic grounds where no full account of the mechanism of confirmation is possible. If individual steps in the procedure remain essentially opaque to both the machine's manufacturer and its user,*** the capacity to share the epistemic grounds of such results, and therefore their export into other fields, may be impeded. If that's the case, a broad question about the ethical component of collective epistemic enterprises arises: does result-reliability undermine trust in the individual researcher's capacity to account for the procedure? If the responsibility to vouch for such an accounting is overridden by the reliability of the essentially hidden functioning of the instrument of confirmation, is the glue of collaboration which holds the scientific project together weakened to some degree?
_____________
*Another term I found interesting is "granularity": the granularity of something (and of course this is just my layperson's interpretation) is a predicate of systems, or things with many components which work together, and it is either coarse or fine. Plato's system of uninstantiated universals, for example, might be one of coarse granularity, as its components are maximally discrete, whereas the atomic system of Democritus can be described as one of fine granularity, the components of which are minute enough to escape component-perceptibility and can only be inferred from the emergent properties generated by their combination. Perception and spontaneous motor impulses in biological organisms are of this latter type, whereas the question is open as to whether or not linguistic behaviors share this "high level" of granularity (cf. the Introduction to the article cited below). One consequence of my acquaintance with this concept was the question of whether there is anything to which the predicate of granularity cannot apply. Are there systems with only a single component? The definition seems to exclude it, which indicates that the distinction between granularity and non-granularity is non-granular and is not self-predicating.
**Julian Jara-Ettinger and Paula Rubio-Fernandez, "Quantitative Mental State Attributions in Language Understanding"; Science Advances, 17 Nov. 2021, vol. 7, issue 47; in the Abstract: "These judgements matched the predictions made by our computational model..."
***A potential objection to this on the part of the manufacturer is noted. Agreed, however, is that some results of procedures under the rubric of "Artificial Intelligence" cannot be observed, or given an account which excludes alternate ways in which a particular result might have been arrived at.
