Logic For Everyone

Sunday, May 26, 2024

What Is It

Logic may seem like a dry, abstract discipline that only the nerdiest of philosophers study. After all, logic textbooks are full of weird symbols and proofs about abstruse entities, like "the set of all sets." On the other hand, don’t we all try to think logically, at least in some contexts? Why do we believe, for example, it’s bad to contradict yourself and good to be coherent? And what’s the connection between the abstract rules of logic and the everyday practice of poking holes in each other's arguments? Josh and Ray entail their guest, Patrick Girard from the University of Auckland, author of Logic in the Wild.

Transcript


Josh Landy  
Coming up on Philosophy Talk...

Star Trek  
"I am a logical man, Doctor."
"It'll take more than logic to get us out of this."
"Perhaps, Doctor, but I know of no better way to begin."

Josh Landy  
Logic for Everyone.

Comments (3)



Chiaramandres

Tuesday, April 30, 2024 -- 10:36 AM


Prof. Briggs does a beautiful job of infusing classically "logical" subjects, like math, with a pinch of mystery, a dash of poetry. For those of us with more "artistic" minds, what do you find most compelling about the dance between logic and our creative/emotional selves?


Daniel

Sunday, May 26, 2024 -- 4:08 PM


While logic is derived from everyone as a descriptive science, it's also prescribed to everyone as a prescriptive practice. The practice of valid inference is a special area of study designed to be exported to other areas; without that export, there can be no collaborative overlap of rigorous methods. In its prescriptive form, valid inferential practice constitutes a recommendation which requires special effort to be accepted, and is therefore different from ordinary inferential practice. There is thus a special element in such practices which is not formally neutral with respect to its contents and effects. As such it must carry one of three distinct biases: moral, ontological, or technical. The moral neutrality of logic is shown by its indifference to the potential for its immoral use. Logic's ontological neutrality is more controversial, but can be plausibly established by the absence of counterexamples to the assumption that it is not limited by terrestrial conditions. That leaves logic's technical non-neutrality. As a specialized practice rather than an effortless or sub-personal possession, logic and its distributed material effects (papers, written equations, the invention of conventional symbols, etc.) could be considered in singular form, under particular historical conditions, as an artifact produced by the human species, namely, by what's called "the mind".

Logic described as an artifact of the mind has the advantage of emphasizing the question of what it's supposed to be good for, and the grounds of its academic export-recommendation. Those grounds include the claim that without it, the contents and results of research in a particular field could not be shared with others. As the use-recommendation is issued together with its result-recommendation, the recipient participates in the maintenance of its productive functioning. It can be observed, however, that a nexus between the areas of computation and cognitive science contains a recommendation for inferential practices which overrides the value of use with that of result, and eclipses the activity of producing the artifact with the automated distribution of its products. This is done not by logical inference but by computational inference, which may produce a result practically identical to the former, but without any deployment of the practice of inference itself. This is most pronounced around the issue of functional universality. The idea of "redness" applies to all red things, but is not adequate to intuitive experience, with regard to which one refers rather to "hues". Computational inference, however, does not recognize or include this distinction as it applies to intuition; instead it designates the universal as one member of a coarse-grained set of colors, and the particular hues as members of a fine-grained set of visual stimuli. Granularity of reference contents therefore generates a replicated result of inferential practices, and effectively overrides them in terms of export recommendation and curricular priority.

My thesis here is that one artifact is replacing another: computational inference hampers or altogether eliminates the active use of inferential practice. Whereas logic is a learnable skill recommended to other areas of study for the purpose of interdepartmental interchange and research-result shareability, computation is a mechanistic disposition whose specialization eludes shareability with other areas, even with its own developers, to whom the process of specific result-generation often remains opaque. Its effect is and has been to reduce the quantity of distributable contents of research products, and to move many elements away from areas of collaborative overlap. Skill in logic and inferential practice differs most prominently from computational inference in that logic can't be privately owned, while the means of computational inference constitute marketable products. In this sense I think it is probably not far off to assert that the latter's use, in confirming hypotheses by a process the researcher cannot recognize, amounts to what could accurately be described as an attempt by a very small class of specialists to translate the human mind, a public asset, into a tradable commodity which can be privately owned. This is effected by the usurpation of functional inferential practice by dysfunctional computational inference.
___________________
Bibliography:
Barceló Aspeitia, Axel Arturo; What Is Logical Form? (2024).
Barceló Aspeitia, Axel Arturo; An Intensional Definition of the Intrinsic/Extrinsic Distinction (2023).
Christiaens, Tim; Nationalize AI!, AI and Society (2024).
Dung, Leonard; Understanding Artificial Agency (2024).
Elgin, Samuel Z.; Monism and the Ontology of Logic (2024).
Heyndels, Sybren; Technology and Neutrality, Philosophy & Technology, vol. 36, art. 75 (2024).
Weiss, Bernhard and Kürbis, Nils; Molecularity in the Theory of Meaning and the Topic Neutrality of Logic (2024).


Daniel

Sunday, June 2, 2024 -- 3:49 PM


Because the usual edit function is currently unavailable in my case, and in a last-ditch effort to garner participant input, in particular from supporters of the computational model of cognitive processes, some clarification of the translation of concept-analysis into data-storage granularity mentioned above is perhaps due. First, the universal functions to maximize the coarseness of data-input without intuitive input, so that fine-grained outputs, e.g. a detailed image, result in an intuitively adequate stimulus. Where the connections between the coarse-grained and the fine-grained data sets are plannable before the production of the stimulus and traceable afterwards, there is sufficient transparency to ensure responsibility for the product (called being "in the loop" in the literature). In moving from image replication to decision-making and conclusion replication, however, the data sets become too large for their coarseness to be planned for, or traced to, their fineness (in my limited understanding of the matter), so that the process by which the stimulus is produced cannot be discovered or adequately detected either by the agent who optionally initiates it or by its perceiving recipient, who are therefore "out of the loop" of its production.
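The coarse-/fine-grained contrast above can be sketched in a few lines of code. This is only an illustrative toy (the hue names, angles, and category boundaries are my own hypothetical choices, not anything from the comment): many fine-grained hues are mapped onto one coarse-grained color category, and because the mapping is an explicit, inspectable function, every output can be traced back to its input, which is the "in the loop" transparency being contrasted with opaque large-scale models.

```python
# Fine-grained set: individual hues as (name, hue-angle-in-degrees) pairs.
# These particular values are illustrative assumptions.
FINE_GRAINED_HUES = {
    "scarlet": 8,
    "crimson": 348,
    "vermilion": 15,
    "teal": 180,
    "azure": 210,
}

def coarse_category(hue_angle: int) -> str:
    """Map a fine-grained hue angle onto a coarse-grained color category."""
    if hue_angle < 45 or hue_angle >= 315:
        return "red"
    if 135 <= hue_angle < 270:
        return "blue-green"
    return "other"

# Group each fine-grained hue under its coarse-grained universal.
# The plan (the function above) exists before any output is produced,
# and each grouping is traceable back to a specific hue afterwards.
grouped: dict[str, list[str]] = {}
for name, angle in FINE_GRAINED_HUES.items():
    grouped.setdefault(coarse_category(angle), []).append(name)

print(grouped)
```

The point of the sketch is only that here the connection between the two granularities is both plannable and traceable; in the large statistical systems the comment describes, no such explicit function is available for inspection.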

There's been a lot of discussion about who is responsible for such products and to whom their authorship can or cannot be ascribed. Here, however, I am concerned only to point out that authorship cannot be ascribed to an independent agent, and is therefore characterizable as fundamentally non- or ir-responsible. For this reason it seems to me that the term "artificial intelligence" is an exceptionally good one with a precise referent. Like artificial flowers, it constitutes an example of replication or embodied resemblance which looks like something it isn't. It looks like intelligence, but nothing there is in fact intelligent (since the element of intuition is missing), even though it has intelligence considered as a faculty which generates task-completions. If one speaks of artificial knowledge, however, the situation is not replicated: because knowledge presupposes a knower, it cannot be generated artificially.

Reliable knowledge of machine automation in general, and of artificial intelligence in particular, is in part constituted by the fact that they are artifacts of industrial production whose use is optional and therefore attributable to responsible agency. But this cannot be said of their operation after the decision for their use is made, which makes any contract for the sale of these artifacts legally problematic. In the sale of automated weapons, for example, the effects of their legitimate use might differ from the use intended by their manufacturer, which would preclude retailer-understanding of the sale-contract's terms in cases of potentially dangerous products. In the extreme case of automated target-generation governing the use of a purchased weapon, the retailer can and should be held liable for any damaging effects of that use under the branch of tort liability which began with Thomas v. Winchester (N.Y. 1852). Because no adequate care can be taken by the purchaser to prevent unintended harm to a third party, the retailer, along with the manufacturer, cannot be shielded from liability by privity of the sales contract.

Perhaps Heraclitus was right when he wrote (Diels-Kranz B48): "the name of the bow is life", that is, one can give a dangerous product a harmless name, "but its function is death", that is, if one knew the full consequences of its use, one would be morally obligated to prevent its manufacture. Can the mere appearance of conformity to logical rules, in automated systems which function without intuitive input so as to closely resemble intelligent operations with intuitive input, be accurately characterized as having an effect opposite to the one upon which their manufacture is recommended? This seems difficult to deny in cases of the sale of automated weaponry whose operation is hidden from the contracting parties.
