The Soul and the Brain

  • Thread starter: scameter18
Status: Not open for further replies.
The common sense conclusion is that studies in this genre cannot be applied to the functions/abilities of human nature.
Here’s a study that’s come up in other conversations that I think is relevant to the issue of abstraction and conceptualization as natural phenomena:

Discriminating the Relation Between Relations: The Role of Entropy in Abstract Conceptualization by Baboons (Papio papio) and Humans (Homo sapiens)

The link is to the PDF of the entire article. Here’s an interesting fragment:
We have recently discovered that, when they are trained and tested under similar experimental conditions, animals from widely different species can acquire a same-different concept. In one illustrative example (Wasserman, Hugart, & Kirkpatrick-Steger, 1995; Young & Wasserman, 1997, Experiment 1), pigeons were first taught to peck one button when they viewed an array of computer icons that comprised 16 copies of the same icon and to peck a second button when they viewed an array that comprised one copy of 16 different icons (a same-different discrimination task). These same and different training displays were created from one set of 16 computer icons. The pigeons were later tested with new same and new different displays that were created from a second set of 16 computer icons that had never before been shown during discrimination training. Accuracy to the training stimuli averaged from 83% to 93% correct, and accuracy to the testing stimuli averaged from 71% to 79% correct; in each case, choice accuracy reliably exceeded the chance score of 50% correct. Such robust discrimination learning and stimulus generalization attest to the pigeon’s acquisition of an abstract same-different concept (for more on the nature of this concept, see Wasserman, Young, & Nolan, 2000; Young & Wasserman, 1997; Young, Wasserman, & Dalrymple, 1997; Young, Wasserman, & Garner, 1997).
In a second illustrative example (Wasserman, Fagot, & Young, 2001), baboons were similarly trained and tested with the same visual stimuli. Accuracy to the training stimuli averaged 91% correct, and accuracy to the testing stimuli averaged 81% correct; in each case, choice accuracy reliably exceeded the chance score of 50% correct.
It is a highly advanced intellectual feat for animals like pigeons and baboons to detect the sameness or differentness of a collection of visual stimuli and to make two distinctively different responses in order to report those same-different relations (Delius, 1994). An even more advanced feat would be for animals to match the relation between relations—in other words, to exhibit the essence of analogical reasoning (Premack, 1983; Thompson & Oden, 2000).
It’s important to grasp the distinction being made here between simply identifying “difference” and abstracting the concepts of “sameness” and “difference”. When the baboon (or human) is shown a grid of icons that are each different with respect to each other, the concept of “internal difference” is learned, and transferred to new displays. That means that when the baboon subject is presented with a new grid of icons – icons NOT seen before – the baboon is able to apply the abstraction of “internal difference” or “instance variance”: images that are different with respect to each other.

This is not visual recognition of the prior images. The sample is shown, then a pair of choices is presented. The “correct answer” is NOT a duplicate of the images in the sample when the icons in the sample are different. The “answer” icons are different with respect to each other, but ALSO different with respect to the icons in the sample. This means the baboon is not recognizing the original (sample) icons, but is instead recognizing the icons’ relationship to each other – “difference”. An abstraction – a conceptual isomorphism where the icons themselves are irrelevant, and what is salient for the cognitive process is the visual relationship between them.

The “General Discussion” section of the paper opens thus:
General Discussion
“There is no evidence that monkeys can perceive, let alone judge, relations-between-relations. This analogical conceptual capacity is found only in chimpanzees and humans.” (Thompson & Oden, 2000, p. 363)
The results of the present series of experiments suggest that we reconsider Thompson and Oden’s (2000) recent appraisal of the species generality of abstract relational conceptualization. Perhaps baboons too can judge the relation between relations. However, might their successful relational matching behavior have been based solely on a perceptual attribute of the displays?
There’s a page of discussion provided in anticipation of the objection that the results can be accounted for by perceptual mechanisms without abstraction, worth a read if you’re inclined to protest along those lines (page 326).

My reason for posting this study is to draw attention to the experimental evidence that the natural brain can and does possess abstraction faculties, in this case the ability to judge “relations between relations”, a second-order judgment, observed in baboons (and humans), baboons ostensibly being without an “intellective soul”, or whatever immaterial thing is supposed to account for conceptual processing in humans by subscribers to supernaturalist models of mind.

-TS
 
I am not sure what you mean by “the intellect does not use the brain to abstract from the phantasms”. It can be interpreted in different ways. So I thought I would articulate the Aristotelian-Thomistic position for the sake of clarity.

The phantasm, as recognized by Thomistic psychology, is the product of sense knowledge. The content of the phantasm is always of what is particular. Hence, it must be accounted for by the activities of the sense organs and brain. Sense knowledge is inherently dependent on matter for all of its operations.

In contrast, the extension of the intellectual power to the apprehension of universals is due to its purely psychic nature. It is inherently independent of matter in all of its operations. The intellect depends on the body merely for its object, since it does not act through a material instrument. As Aristotle said, images are the objects of intellect. And since one cannot have imagery without a material organ, there can be no intellectual operation without the cooperation of matter.

Sense and intellect must work together in the production of the idea. The dependency of the latter on the former is merely of an objective or extrinsic sort, inasmuch as the senses furnish the data from which the intellect abstracts the intelligible species.
That’s what I thought. It’s this model which is disproved by neuroscience.
 
Touchstone, if you can’t see the difference between physical representations, as in the activation of neural networks, and the PHENOMENOLOGICAL EXPERIENCE OF MEANING, then I give up with you on that argument. One last time: the thought, the abstraction, is not the activity of a neural net. They are associated, and it may be the case that without the activation we are unable to do anything with the concept. But that still doesn’t make it the same thing! Even NowAgnostic has moved to a position of accepting an interactionist mind-body dualism. I must have seriously misunderstood him/her at some point, because I didn’t think that s/he would support that.

I feel like I’m trying to explain the difference between wet and dry to a fish. I mean no insult btw, it just expresses the feeling I have on this issue. I expect that you feel much the same way 😉
 
All right, now to Dehaene…
OK. There have been lots of posts on this, so I won’t be able to get to all of them. And, my apologies for not linking to a full version of the papers, but there is no freely-available version online. Pretty much all the objections either try (unsuccessfully) to foist the Thomistic model, in Procrustean fashion, onto the data, or misunderstand the meaning of “representation”, or worry about replicability (there are many, many studies implicating the parietal lobes in numerical representation).
I’m suspicious how a number could be “conveyed” by “nonsymbolic” dot patterns. If a dot pattern is made to express a number … then it’s symbolic.
This just means there are no other symbols: e.g. you could put seven dots together to form the letter “A”. Rather, they were put in a random configuration so as not to show any other symbol. Yet, there are seven dots. That’s how the number is “conveyed”.
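The point being argued here can be sketched in a few lines of code (my own illustration, with hypothetical names; it is not from the paper under discussion): a “nonsymbolic” display carries its quantity purely in its cardinality, with the dots placed randomly precisely so that they form no learned shape, whereas a numeral like “7” conveys the same quantity only through a learned mapping.

```python
import random

def nonsymbolic_display(n, grid=100):
    """n dots at random grid positions: no learned shape, no convention.
    The quantity is 'conveyed' only by how many dots there are."""
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    return random.sample(cells, n)  # n distinct random positions

# Symbolic: the shape "7" means seven only via a learned association.
NUMERAL_MEANING = {"7": 7}

dots = nonsymbolic_display(7)
assert len(dots) == 7                     # quantity read off directly
assert NUMERAL_MEANING["7"] == len(dots)  # the symbol needs a lookup
```

The design point of the sketch is just the asymmetry: reading the quantity off the dots requires nothing but counting, while reading it off the numeral requires the learned table.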
Yet, why were these phantasm being arranged in this way?
Aristotle and the Scholastics said (I’m pretty sure at least) that phantasms should become organized. If phantasms are more organized they can be retrieved more effectively, and thus so too their corresponding concepts.
What does that have to do with anything regarding a hypothesis regarding abstract coding of numerical magnitude?
And that last sentence is the killer (the killer of your theory, NowAgnostic).
Saying two phantasms are linked together doesn’t kill my theory at all. My theory doesn’t say that’s impossible.
“They support the idea the symbols acquire meaning by linking neural populations coding symbol shapes to those holding nonsymbolic representations of quantities.”
Correct me if I’m wrong, but it says that the phantasms of number symbols are linked to the phantasms of nonsymbolic representations of quantities.
How can there be a phantasm for a quantity?

What is a phantasm for the quantity “two”? There can’t be one. It’s not something physical; it’s an abstract quantity.

Even if you were correct, the last sentence of the abstract is only an ancillary conclusion regarding how the brain gets meaning from symbols; the main one is that the hypothesis of abstract coding of numerical magnitude residing in the intraparietal sulcus was critically tested.
I’m sorry, but isn’t “nonsymbolic representations” kind of an oxymoron? Maybe I’m wrong. It seems to suggest that the dot patterns (the “nonsymbolic” stuff) are actually not the quantities themselves but represent (or SYMBOLIZE!) the quantities.
No, the patterns don’t symbolize anything. They represent the quantities simply by the number of dots.
Perhaps more natural symbols (like dot patterns) are organized in the brain such that the phantasms of the more conventional symbols are linked to them subserviently … or something.
Again, there’s no pattern to the dots.
The fact is though, this explanation doesn’t aid in disproving immaterial abstraction at all. It clearly says that the “neural populations” hold representations. And where oh where do we find the meaning of those representations?
In the fact that the representations are the representations of meanings or semantic representations.

You’re missing the fact that the study is showing the same brain region to be involved in modality-independent quantity representation. So, you can have one region for the phantasms of numerical symbols, another for the visual phantasm of the dots, and we’ll see that on fMRI. But why do deviants (numbers of dots, or Arabic numbers vastly different in quantity from the baseline) activate the same additional region? If all the immaterial mind needs to do is to abstract the quantities (which involves no brain activation) and then compare (which involves no brain activation) then why does this additional activation show up, and in precisely the same area?
 
Touchstone, if you can’t see the difference between physical representations, as in the activation of neural networks, and the PHENOMENOLOGICAL EXPERIENCE OF MEANING, then I give up with you on that argument. One last time: the thought, the abstraction, is not the activity of a neural net. They are associated, and it may be the case that without the activation we are unable to do anything with the concept. But that still doesn’t make it the same thing! Even NowAgnostic has moved to a position of accepting an interactionist mind-body dualism. I must have seriously misunderstood him/her at some point, because I didn’t think that s/he would support that.
Assuming my usual stance here, I’m not convinced there’s a positivist way to show any difference, so I instead favor the monist view as a function of parsimony, and ask: if the monist view were false – if the phenomenological experience were something more than cognition predicated on physical embodiments of abstractions – how would this be shown? That is, if you were incorrect in asserting “they are not the same thing!”, how would that become known to you? It seems to me you are relying entirely on some subjective/intuitive sense of immateriality, an intuition that can itself be accounted for as one conducive to our self-identification.

If monism does obtain here, what would you expect the phenomenological experience to be, and how would it be different from your current experience? I can’t think of any basis for establishing what that difference would be, and it occurs to me that on monism, we would expect to have the intuitions we have – the brain is optimized for functional performance: survival, propagation, goal seeking, rendering the extramental world intelligible to as great a degree as possible. But there’s no imperative, and no innate mechanism for the brain to understand its own mechanisms – it takes scientific discovery to uncover and untangle what has been unseen and unfathomable for the last million years up until just a few decades ago.

So, on a naturalist model, we’d expect the mind, the “self”, to appear “disembodied” – a “given”, so to speak, operating on the intuition that our “mind is not our body” just because the mind has always been so inscrutable to itself.
I feel like I’m trying to explain the difference between wet and dry to a fish. I mean no insult btw, it just expresses the feeling I have on this issue. I expect that you feel much the same way 😉
I have a long career as a dualist, remember. I get the intuition, and agree it’s a strong one. Science is the best friend and validator of intuition on many fronts. But on many others, the evidence it uncovers overturns and repudiates our intuitions right and left. The more we learn, the more the immaterial mind gets pushed back into a belief underwritten by intuition alone, against the increasing scope and depth of natural models of cognition based on evidence and empirical observation.

Who knew that time was relative to motion? What’s more counter-intuitive than that? Or that a rock isn’t “really” solid, but is mostly empty space? Doesn’t fit our intuitions! Science is an unusual enterprise that way, as it provides, in its collective pooling of observation and analysis, a kind of “mirror” on our intuitions, a way to judge the merits of what we suppose we “just know”. As I said, a great many of those intuitions pass scientific testing with flying colors. But many of our intuitions, even some of the strongest ones, run into serious trouble when the instrumentation of science as a collective, disciplined enterprise is aimed at them. What we “just knew”, we sometimes didn’t really know. On the evidence that’s emerged in the last two decades in neuroscience, the case for the intuition of the immaterial mind grows weaker and weaker…

-TS
 
Here’s a study that’s come up in other conversations that I think is relevant to the issue of abstraction and conceptualization as natural phenomena:

Discriminating the Relation Between Relations: The Role of Entropy in Abstract Conceptualization by Baboons (Papio papio) and Humans (Homo sapiens)

The link is to the PDF of the entire article.
Very interesting link and definitely worth a better read than a quick scan. Thank you for thinking of me.

I did read the discussion on page 320 which began: “Our primary objective in this first experiment was to see if a nonape, primate species, the baboon, could successfully solve a task that appears to require the discrimination of an abstract same-different relation.”

Also read this interesting bit on page 326, under “Baboons Versus Humans”:

“Is relational matching-to-sample similarly performed by humans and baboons? Other data that we collected in these experiments help us answer this question. However, the answer is not a simple yes-no matter.

“Baboons and humans both learned the relational matching-to-sample task, in which physical identity between the sample and testing stimuli was eliminated as a controlling stimulus. However, humans learned far faster than baboons, and their final level of discriminative performance was much higher.”

Note: Both above quotes are taken out of context.
My reason for posting this study is to draw attention to the experimental evidence that the natural brain can and does possess abstraction faculties, in this case the ability to judge “relations between relations”, a second-order judgment, observed in baboons (and humans), baboons ostensibly being without an “intellective soul”, or whatever immaterial thing is supposed to account for conceptual processing in humans by subscribers to supernaturalist models of mind.

-TS
I hesitate to comment because I have not truly studied this research. However, this statement (page 324) taken out of context has fueled my curiosity.

From the “Method” section, under “Subjects”: “The same 2 baboons served here as had served in Experiments 1, 2, and 3. Approximately 2 months passed between Experiments 3 and 4, during which time the baboons were on holiday from the experiments.”

Obviously, the statement was written in a strict, serious, scientific manner. Thus, I will leave to others the matter of demonstrating how the results overcame all of the inherent limitations found in various parts of the research.

From page 316: “The results suggest that animals other than humans and chimpanzees can discriminate the relation between relations. They further suggest that entropy detection may underlie same-different conceptualization, but that additional processes may participate in human conceptualization.”

The honesty of using the word “suggest” is sincerely appreciated.
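The entropy account quoted above (page 316) can be made concrete with a short sketch. This is my own illustration, not code or data from the paper: measuring Shannon entropy over icon identities, a “same” display (16 copies of one icon) scores 0 bits, a “different” display (16 distinct icons) scores 4 bits, and a relational match can be made by choosing the display whose entropy is closest to the sample’s, even when no icon in any choice ever appeared in the sample.

```python
from collections import Counter
from math import log2

def display_entropy(icons):
    """Shannon entropy (in bits) of the icon identities in a display."""
    n = len(icons)
    return -sum((c / n) * log2(c / n) for c in Counter(icons).values())

def relational_match(sample, choices):
    """Pick the choice display whose internal same/different relation
    (measured as entropy) is closest to the sample's relation."""
    target = display_entropy(sample)
    return min(choices, key=lambda d: abs(display_entropy(d) - target))

same_sample = ["star"] * 16                    # 16 copies: entropy 0.0 bits
diff_choice = [f"icon{i}" for i in range(16)]  # 16 distinct: entropy 4.0 bits
same_choice = ["moon"] * 16                    # novel icon, still 0.0 bits

# A "same" sample is matched to the "same" choice despite novel icons.
assert relational_match(same_sample, [diff_choice, same_choice]) == same_choice
```

Note how this matches the paper’s framing: such an entropy detector needs no memory of particular icons, only a sensitivity to display variability, which is why the authors treat it as a candidate mechanism rather than settled proof of fully abstract analogical reasoning.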
 
TS,

You refer to evolutionary pressures that led to the development of mental processes - this is how the brain has evolved.

Yet, the sense of “I”, the sense of a disembodied self, the self-reflective nature of mind, has no survival value. Blindsight demonstrates, for example, that people are able to perform ‘visual’ tasks without conscious awareness of vision. Thus, the mental processes that are needed for survival – memory, planning, decision making, spatial awareness, etc. – are possible without consciousness. In which case, why would an organ evolve a biological function that has little or no benefit in terms of survival? This is especially problematic as consciousness takes an enormous amount of energy and neuronal activity to run (the brain consumes more energy than any other organ). In addition, great apes, dolphins, elephants and corvids (the closest species to us in terms of cognition, and who appear to be able to recognise themselves in the mirror task) do not have consciousness to the same extent as us, and yet they survive and thrive as we do. This evolved, physiological function appears very expensive and of little benefit. That sounds contrary to natural selection to me. At least male peacocks have increased opportunities to mate; while we get existential angst, egocentrism, a huge variety of mental illness and addiction – all traceable to consciousness!

Before you go there, if you try to explain the hypothetical survival value of consciousness in terms of social groups remember that you are embarking on making two arguments - that of biological determinism for psychological traits and that for sociobiology. Sociobiology has little empirical evidence and can be accused of constructing and retrofitting convenient narratives on current observations.

Another question that I have - how do you propose the immaterial is embodied in the material? Or are you saying that consciousness and all that pertains to it is concrete?
 
Yet, the sense of “I”, the sense of a disembodied self, the self reflective nature of mind has no survival value. Blindsight demonstrates, for example, that people are able to perform ‘visual’ tasks without conscious awareness of vision. Thus, the mental processes that are needed for survival - memory, planning, decision making, spatial awareness etc are possible without consciousness.
Blindsight represents an abnormal state of functioning; the subject’s sight is temporarily or otherwise disabled, yet some remnants of vision enable the performance of contrived visual tasks. The person would be unable to use their eyes in a normal way, i.e. to guide their body around. This does not address the role of consciousness in all the other aspects of living a human life.

It’s not obvious to me, anyhow, that “consciousness” involves a “disembodied self”. ISTM that experiencers of human consciousness experience themselves as very much embodied; YMMV. The “disembodied self” arises, IMO, from classical philosophy and the fear of death, rather than from anything innate. Our mind makes us a thinking body; not a disembodied self.

ICXC NIKA.
 
TS,

You refer to evolutionary pressures that led to the development of mental processes - this is how the brain has evolved.

Yet, the sense of “I”, the sense of a disembodied self, the self reflective nature of mind has no survival value.
I think you have the dynamics of natural selection switched around, there. Between a system that omits or just doesn’t address a “physical sense of mind as brain”, and a brain that does establish those faculties, which is the more efficient in terms of survival? The “physical sense of mind as brain”, however that might be arrived at neurologically, would be the gratuitous development, not leaving it out. You are thinking it’s more streamlined to have this physical sense of mind as brain activity, but it’s clearly not – think how much more machinery and network would have to be deployed to that end above the minimal case (“the disembodied self”), and to what advantage in terms of survival or fecundity? “Disembodied self” is what you get for free, from the absence of awareness and cognitive machinery, not from the presence of an “illusory sensation”. We just infer “disembodiment” because we have a vacuum in our cognitive landscape on that issue. As a neuroscience researcher friend says, “fish forget they are in the water for very good and practical reasons”. Being the “I”, being the cognitive self, is so utterly pervasive, like the water to a fish, that it is effectively transparent. It’s “out of mind”, something the mind takes for granted – which it should.
Blindsight demonstrates, for example, that people are able to perform ‘visual’ tasks without conscious awareness of vision. Thus, the mental processes that are needed for survival - memory, planning, decision making, spatial awareness etc are possible without consciousness.
This is equivocating between casual usages of “consciousness” and scientific ones. You remember “B.D.” and Dr. Weiskrantz, surely. B.D., missing his left half of his field of vision due to a tumor that damaged his visual cortex, could “see” some stimuli. His extraordinary case and others like it that came after demonstrate awareness and facility in processing external information outside of the conventional visual processing system, or more precisely, the obvious parts of that system.

When Weiskrantz would hold up a stick in B.D.'s “blind field”, B.D. was able to guess whether the stick was being held vertically or horizontally, even though he couldn’t see the stick. B.D. was still processing visual stimuli, just in a deprecated, “unconscious” (in the casual use of the term) fashion. Much testing on this phenomenon since has shown that blindsight is much more sensitive to oscillating, flickering, or strobing stimuli – dynamic, large-scale stimuli – than it is to static, steady, smaller-scale stimuli. Note that this is a description of prototypes of human visual processing – the kinds of early developments in organisms with rudimentary nervous systems that made profitable use of much more basic input analysis: light to dark, dark to light, dark/light patterns that line up on some axis of the field of view, etc.

However that works, spatial awareness is consciousness – it’s just a form of processing we aren’t commonly aware of; the visual acuity of a working pair of eyes completely drowns out and obviates the utility of the blindsight faculties the famous Mr. “B.D.” demonstrated.

-TS

(con’t)
 
In which case why would an organ evolve a biological function that has no or little benefit in terms of survival?
It wouldn’t, in all likelihood. Remember, here, that we are talking about cortical blindness in cases of blindsight, not loss of eye function (B.D.'s eyes, optic nerves, etc. were anatomically fine and functional). In the case of blindsight, then, you have working input (the eyes are still “seeing”) and the loss of the visual processing that the patient has DECLARATIVE KNOWLEDGE of. Blindsight is visual processing, then – demonstrably so, by the performance of subjects in experiments – that “flies under the radar” of our conscious thoughts.

This is backed up by our experiments and anatomical investigations. As you know, blindsight is the result of damage to the striate cortex, part of the visual cortex. Research on both humans and monkeys with lesions on the striate cortex reveals a “residual vision” outside the striate cortex (that is, in an undamaged nearby part of the brain), where “vision” indicates a much more rudimentary processing network, centered around coarse pattern and motion stimuli.

That’s not so amazing. Lots of functions are distributed beyond the “basic locus” we identify for them in the brain. What’s unusual here is that we can use this “residual vision”, our blindsight, without being aware that we are using it.

All of that functionality is functional toward survival and fecundity in a straightforward way. It’s just an unusual artifact of particular traumas that we have insight into our “unconscious awareness”, in terms of rudimentary visual processing of pattern and motion stimuli that lie outside of the normal visual processing we do with the visual cortex.
This is especially problematic as consciousness takes an enormous amount of energy and neuronal acitivity to run (the brain consumes more energy than any other organ). In addition, great apes, dolphins, elephants and corvids (the closest species to us in terms of cognition and who appear to be able to recognise themselves in the mirror task) do not have consciousness to the same extent as us and yet they survive and thrive as we do.
Yeah, big brains like humans have are a luxurious asset in some ways. It’s a tricky trade-off, one that works quite well for us but would be a disaster for other niches in the environment. Something like 20% of the energy in everything you eat goes straight to servicing your brain. But your objection is your answer: the kind of encephalization quotient humans have puts large demands on your metabolic processes, and aside from the obvious reason that many lineages just have not had the enabling variations arise and get a chance at fixation in the population, the environmental niche many species exploit simply doesn’t make that a profitable trade. As an extreme example, one of the “wisdoms” of being a bug is that an individual can survive and propagate on very small amounts of energy, meaning viability obtains in all sorts of nooks and crannies of the natural environment that would not support an organism with a 100cc brain to power and lug around.
This evolved, physiological function appears very expensive and with little benefit. That sounds contrary to natural selection to me. At least male peacocks have increased opportunities to mate; while we get existential angst, egocentrism, a huge variety of mental illness and addiction - all traceable to consciousness!
Yeah, I understand where you’re coming from on that now, and suggest again that you have it precisely backwards. A “physical sense of mind as brain” is the luxury that is hard to justify in terms of fitness, and it’s bewildering to imagine how it would emerge directly, rather than indirectly – the indirect route being the one we are using to look at it now (we are discussing this on an Internet forum).

If you think about where man came from, he came from “non-self-contemplation”, clearly. Our ancestors way back had no theory of mind, no meta-representational faculties. So while theory of mind and meta-representational abilities have been fabulously successful in not just surviving, but ascending to a level of dominance over every other kind of organism (OK, we can’t beat the bugs!), “monism” is not a feature our evolutionary path would take as a matter of efficiency. Dualism, the intuition drawn from a “cognitive cavity”, is just way cheaper, leaner, meaner.

Yes, it’s a drag, the existential angst, the religious intuitions, the superstitious teleo-centrism we have as a result, but it’s a trifling price to pay for being able to skip the exotic project of “physical sense of mind as brain”. It’s a pittance compared to what that would cost.

-TS

(con’t)
 
Before you go there, if you try to explain the hypothetical survival value of consciousness in terms of social groups remember that you are embarking on making two arguments - that of biological determinism for psychological traits and that for sociobiology. Sociobiology has little empirical evidence and can be accused of constructing and retrofitting convenient narratives on current observations.
 
But I’m not saying the mind is purely material! What I am saying is that spirit and matter, brain and mind, are so intricately linked that there is no mental activity of any sort without physical activity, and vice versa. Such a “brain-mind” entity is capable of grasping immaterial concepts, and yet when it does there will be physical correlates to that activity.
My question is “Why don’t you think the mind is purely material?” I’m not sure what your thesis is anymore. You might have explained it in the earlier tomes of this thread … but I didn’t look very hard (my fault, of course). So … in what way do you believe in “spirit”?

Also, of course, I still don’t know what you mean by “correlate.”
Maybe. I think Thomism really claims more than the possibility of disembodied spirits. I think the claim is that abstraction and what follows are themselves purely immaterial processes.
Once again, I never saw him affirm or deny that.
Well if you agree that there can be some intrinsic relation between mind and brain such that higher-order mind activity doesn’t occur without a brain correlate then maybe we aren’t that far apart.
Perhaps … perhaps. In fact, I hope so. Once again (and I apologize) it depends upon what is being claimed about the physical correlative of abstraction.

If you mean, for example, that the correlative of the immaterial concept is “the physical concept in the brain” (that is, you think concepts can be material, in addition to being immaterial), then perhaps we disagree still.
And the idea of a form (e.g. understanding what is meant by the word “form”) is itself a concept, yes? So then you need yet another form in the intellect to have the concept of an idea of a form. But that’s yet another concept, and yet another form, off to infinity.
To have the form of a tree in your head doesn’t require an understanding of “form.” Our knowledge is about things through having the concepts of those things. Our knowledge is not about concepts of things (but about things themselves) … otherwise, yes, you would end up in that silly infinite disaster. I think this idea is called the “Third Man” paradox (or something … I might be off).
You need to show why these co-divisions or co-classifications aren’t concepts, and how you can pick the “right” classification that actually corresponds to the concept.
The divisions and classifications are indeed concepts … in fact they are logical concepts (that is, of things that do not exist outside the mind). Even though the classifications are not real, the concepts they deal with are very much real.

An example of this is the relation we put on a symbol. We signify “a stick man” with “men’s bathroom” (for example). Both the sign of the stick man, as well as the men’s bathroom are things that both exist. However, the relation we put between the two does not exist in reality … but in our minds. Likewise, the classifications of things are purely mental inventions, but the things they are classifying are objective concepts (opposed to purely logical concepts). If this is not accepted, it appears that we can have no knowledge of reality.
That argues against your idea, and you’ve contradicted yourself. Is classification necessary or not? Which is it? We need a definition, you say, to avoid the ridiculous circular definition of essence which explains nothing. The definition entails classification. But classification can occur in several different ways, and thus you can get several different results. Thus classification can’t be necessary, you say, because then you wouldn’t have an immutable concept. So just how do you get a concept without a definition? And without a definition, how do you avoid a circular “concept” of a tree as that which makes a tree what it is?
I suppose you’re asking “how do we sufficiently express an essence using language?” The fact is, we can’t. No physical thing can show the entire truth of an essence, which is why concepts (which are essences of things as they have been grasped by the mind) are immaterial and thus not in the brain. Language is likewise deficient. But that is why symbols, images, words are just symbols … and not the concepts themselves. They refer to concepts but can’t be concepts themselves, since symbols are physical.

Definitions are composed of words and thus symbols. Hence definitions are not essences. They express in a limited way some truth about them by comparing and contrasting them to some other essence. The only way to do that is to have concepts of those essences.

If all concepts are just classifications, comparing and contrasting them to other classifications, then everything is meaningless. For there to be any kind of relativity, there need to be absolutes. Why are certain things classified a certain way? Because of what they are. Essences are known, but they cannot be expressed completely.

I hope that made sense. Honestly, if you don’t accept this, how is reality known?
 
Does a deciduous tree have the same substantial form (causing “treeness”) as an evergreen tree, leaves falling in the fall being a mere accident? Or does a deciduous tree actually have a different substantial form from an evergreen, and we’ve merely mislabeled them both as “trees” when in fact they belong in different categories? I’m honest enough to say we don’t know; tree is just a category that we have created.
Both trees have the substantial forms of tree, but it doesn’t mean they are the same specific substance. Likewise, humans and dogs are both animals, but they have different specific differences. They share in a generic substantial form. (if someone knows any better, please speak up)
This just means the dots form no other symbol: e.g., you could put seven dots together to form the letter “A”. Rather, they were put in a random configuration so as not to suggest any other symbol. Yet, there are seven dots. That’s how the number is “conveyed”.
In this case, they are practically the same thing. They both stand for something else, which was my point.
What does that have to do with anything regarding a hypothesis regarding abstract coding of numerical magnitude?
I’m just saying that this doesn’t conflict with what Aquinas said.
How can there be a phantasm for a quantity?

What is a phantasm for the quantity “two”? There can’t be one. It’s not something physical; it’s an abstract quantity.
You see, this confused me. I thought you have been saying that abstract things like “quantity” were not immaterial concepts. But you’re obviously not saying that. I was wrong to infer that. So … I’m confused.
No, the patterns don’t symbolize anything. They represent the quantities simply by the number of dots.
I’m glad we agree.
You’re missing the fact that the study is showing the same brain region to be involved in modality-independent quantity representation. So, you can have one region for the phantasms of numerical symbols, another for the visual phantasm of the dots, and we’ll see that on fMRI.
This doesn’t conflict with Thomistic Epistemology. The Scholastics said (perhaps cryptically) that phantasms could be organized together.
But why do deviants (numbers of dots, or Arabic numbers vastly different in quantity from the baseline) activate the same additional region? If all the immaterial mind needs to do is to abstract the quantities (which involves no brain activation) and then compare (which involves no brain activation) then why does this additional activation show up, and in precisely the same area?
Okay, I think I understand what your confusion is (but … probably not). The two key things to look at: the interior senses of imagination and memory.

The memory stores phantasms. The imagination brings phantasms into consciousness (or at least into active use), and can then drop them back into dormant memory. In order for abstraction from a phantasm to occur, the phantasm must be in the imagination (not simply in the unconscious memory). So, yes, the brain provides the intellect with phantasms, but it’s not as if the intellect looks at a sleeping brain and picks whatever phantasms it wants for its abstracting pleasure (without the brain doing anything) … the brain must be operating such that the relevant phantasms are viewed in the imagination, for it is only there that abstraction is possible.

Hence, brain activation is necessary for abstraction.

Does that make sense? You might have already known that. I perhaps missed your point.
 
I believe meaning is representational. Do you suppose meaning is not representational? The “representationality” of meaning is straightforward to show, I think.
Hmm.
Consider: is the meaning of the words “Halle Berry” representational in nature, that is to say, symbolic? Well, “Halle Berry” as words is certainly representational – the referent (the person named “Halle Berry”) is not the symbol. So we can agree that Halle Berry’s name is representational. “Halle Berry” as text has symbolic meaning in pointing to the person. Same goes for a picture of Halle Berry – the representation is not the referent, the picture is not the person.
I agree.
But this holds in our minds too, whether or not you embrace a dualist idea of mind, and no matter how meaning is stored and processed. Halle Berry, the person, is not “in your mind”. She’s a person that exists outside our minds. The meaning, then, of “Halle Berry” is NECESSARILY representational. Whether it’s neurons and axons or “immaterial mind non-substance” doesn’t matter; the meaning a mind accepts, recalls and uses is representational, symbolic.

This is true for all meaning. Meaning is symbolic, and provably so, just by noting that the anchors of meaning – the referents – are extramental. Meaning is a semantic representation of referents outside the mind.
I agree that Halle Berry exists outside of my mind (and thus actually exists and exists in reality, etc.). However, in some way, she must exist in your mind, as well.

First of all, you say that the “meaning” (of a symbol) is symbolic. Thus, symbols symbolize symbols. And of course, those symbols symbolize other symbols that symbolize other symbols. Really, there are only symbols and no meaning. Right?

Now, you say there are anchors … the extramental referents. What of numbers though? And other abstract things? They have no extramental reference.

To know an apple, we can’t simply have an image of an apple in our head, since an image does not equal meaning. We actually need the apple in our head (crazy! I know!). That is, the form that makes an apple an apple must also exist in our intellect. It took me awhile to understand this. But I realized it was the only way to explain how we can really know anything.
 
Hmm.
I agree that Halle Berry exists outside of my mind (and thus actually exists and exists in reality, etc.). However, in some way, she must exist in your mind, as well.
Well, no. “She” doesn’t exist in my mind, even in a little bit. To say such is just to confuse the symbol and its referent. All that exists of her in my mind are indirections that point to her (or my percepts of her).
First of all, you say that the “meaning” (of a symbol) is symbolic. Thus, symbols symbolize symbols. And of course, those symbols symbolize other symbols that symbolize other symbols. Really, there are only symbols and no meaning. Right?
Symbols are the bearers/containers of meaning. Where there is meaning, there are symbols, and where there are symbols, there is meaning. This is tautologous. “Symbol” is a symbol for “bearer of meaning”. No meaning, it’s not a symbol (it could be meaningless to you, but meaningful to someone else, of course).
Now, you say there are anchors … the extramental referents. What of numbers though? And other abstract things? They have no extramental reference.
I take a nominalist view on universals. Universals are names. Some concepts have internal referents, which are derived by transforming references to concrete extramental referents, but those internal referents are just as real and physical as any other natural phenomenon; they are brain-states. A “chartreuse gorilla”, for example, would not have a real, extramental referent itself, but would be an internal concept created in my mind (and – heh – now in yours! now it’s “extramental” to me, in a way…) as a name for a collection or derivation of concepts that do have real referents.
To know an apple, we can’t simply have an image of an apple in our head, since an image does not equal meaning. We actually need the apple in our head (crazy! I know!).
That is crazy! I’ll just chalk it up to Thomistic flamboyance. 😉
That is, the form that makes an apple an apple must also exist in our intellect. It took me awhile to understand this. But I realized it was the only way to explain how we can really know anything.
That seems a crazy, and futile, path to take toward that goal. “A little apple in my head” doesn’t get you closer to an explanation, but puts you farther away from one, I suggest. Now you have to explain an apple in your head, which is a way stronger problem than the challenge of universals (which is a confused, feeble problem anyway) was to begin with.

-TS
 
Well, no. “She” doesn’t exist in my mind, even in a little bit. To say such is just to confuse the symbol and its referent. All that exists of her in my mind are indirections that point to her (or my percepts of her).
No, I can say “I have her in my mind.” Read on.
Symbols are the bearers/containers of meaning. Where there is meaning, there are symbols, and where there are symbols, there is meaning. This is tautologous. “Symbol” is a symbol for “bearer of meaning”. No meaning, it’s not a symbol (it could be meaningless to you, but meaningful to someone else, of course).
Um … I’m not sure if my point was addressed. I agree that a symbol must have meaning, and that when we say “meaning” we are talking about a symbol. But that doesn’t mean that the meaning of a symbol is just another symbol. This is what you seemed to be saying. Right?
I take a nominalist view on universals. Universals are names.
Is there an objective way to … use those names? Or can we just make stuff up with them? Is there any way to misname things? What are the criteria regarding how names are applied to certain things?

I think my question is … universals are names FOR WHAT?
Some concepts have internal referents, which are derived transforming references to concrete extramental referents, but those internal referents are just as real and physical as any other natural phenomemon; they are brain-states.
So, the number “2” is physical? Because “2” is a brain state?
A “chartreuse gorilla”, for example, would not have a real, extramental referent itself, but would be an internal concept created in my mind (and – heh – now in yours! now it’s “extramental” to me, in a way…) as a name for a collection or derivation of concepts that do have real referents.
What would constitute a chartreuse gorilla? Would any mental image do? What is the concept like?
That is crazy! I’ll just chalk it up to Thomistic flamboyance. 😉
You know us Thomists … bouncing off the walls.
That seems crazy, and futile, path to go toward that goal. “A little apple in my head” doesn’t get you closer to an explanation, but puts you farther away from one, I suggest.
It’s actually the only explanation for knowledge whatsoever. In your system, there is no such thing as knowledge (or rather, no knowledge about reality at least).
Now you have to explain an apple in your head, which is a way stronger problem than the challenge of universals (which is a confused, feeble problem anyway) was to begin with.
I’m not saying the apple is physically in your head, but its form exists immaterially in your intellect. The form of a thing is what gives it its meaning/definition/essence/etc. Thus, the only way to have that meaning in your head … is … well … to have the form of the thing in your intellect. Right? Otherwise, you simply have symbols in your head to deal with. How do you know what the symbols refer to? You need to know what they refer to … hence you need to know about the things themselves (and we start over again). So simply having symbols in your head is not sufficient. You need to know things themselves … you need to have them in your intellect.

I’ll just throw that out there. See what happens.
 
Um … I’m not sure if my point was addressed. I agree that a symbol must have meaning, and that when we say “meaning” we are talking about a symbol. But that doesn’t mean that the meaning of a symbol is just another symbol. This is what you seemed to be saying. Right?
No. I think I was clear in saying that symbols can point to real referents. That would terminate the symbolic chain, in that case. Symbols can point at other symbols as their referents, or non-symbol referents. But in either case, the symbol bears the semantic freight.
Is there an objective way to … use those names. Or can we just make stuff up with them? Is there any way to misname things? What is the criteria regarding how names are applied to certain things?
I’m not sure what “misnaming” involves, beyond establishing symbols that are ineffective or confusing to the self or others (when they get invoked), but a name is just a name. And just to make things more confusing, our symbols are not “names”, physiologically. But that’s a digression.

If I am mistaken, and have in my mind the image of Oprah Winfrey associated with the name “Halle Berry”, I think there’s meaning in saying that symbol has a bogus referent, as it doesn’t match the image with the real-world face and person it belongs to. To the extent that mismatch is a problem for me, it’s a problem.
I think my question is … universals are names FOR WHAT?
A math example: 4 eggs may exist, and 4 chickens may exist, but “4” itself does not exist. This isn’t strange, especially when we note that “chickens” don’t exist per se, either – there are no extant generic, abstract chickens, only particular individual chickens. “Chicken” and “chickens” are abstractions in the same way that “4” is, a symbol that points at a list of referents that embody “4-ness”, that give “4” meaning, like 4 chickens clucking on your desk, or 4 fingers hanging from your hand next to your thumb.
So, the number “2” is physical? Because “2” is a brain state?
The concept for “2” is real and physical – a brain-state. The “physical number 2” is an incoherent phrase, akin to “the smell of the color nine”.
What would constitute a cartreuse gorilla? Would any mental image do? What is the concept like?
Well, just thinking about it in my head, it’s my concept of gorilla – including some prototypical image of a gorilla I have access to, rendered in the color chartreuse. I can’t say what your particular formulation would be, of course, but the image that phrase conjures up for me is predictably similar to yours, I guess.
You know us Thomists … bouncing off the walls.
Ok, it’s a brooding, sober kind of crazy, then.
It’s actually the only explanation for knowledge whatsoever. In your system, there is no such thing as knowledge (or rather, no knowledge about reality at least).
Aww, now you’re just being parochial about what knowledge is. If it doesn’t line up with your construal of the term, then nobody gets theirs, either. Not only is knowledge real in this system – physically real – but it’s demonstrable, and in objective terms.
I’m not saying the apple is physically in your head, but its form exists immaterially in your intellect. The form of a thing is what gives it its meaning/definition/essence/etc.
Well, if something can “exist immaterially”, then of course. You don’t even need to worry about “form”, in that case. Just assert that the apple “exists immaterially” in my intellect. Save a step, and no more vacuous.
Thus, the only way to have that meaning in your head … is … well … to have the form of the thing in your intellect. Right? Otherwise, you simply have symbols in your head to deal with.
I thought we covered this above? To have symbols standing in relation to each other and to real entities is to have meaning.
How do you know what the symbols refer to? You need to know what they refer to … hence you need to know about the things themselves (and we start over again). So simply having symbols in your head is not sufficient. You need to know things themselves … you need to have them in your intellect.
I’ll just throw that out there. See what happens.
Well, if we need to “know things in themselves”, we’re totally hosed, because that isn’t any more coherent or practical as the basis for knowledge than “immaterially exists”. This is the same axle Searle gets wrapped around with his Chinese Room *gedankenexperiment* – an *a priori* intuition of what “to know” entails (where Searle focused on ‘mentality’)… it’s an intractable, perfectly subjective complaint. When all you have to build on in terms of meaning for “know” is the intuition of … “know[ing] things in themselves”, you’ve deprived me of any meaningful way to answer.

-TS
 
My question is “Why don’t you think the mind is purely material?” I’m not sure what your thesis is anymore. You might have explained it in the earlier tomes of this thread … but I didn’t look very hard (my fault of course). So … in what way do you believe in “spirit”?
Most neuroscientists don’t think the mind is purely material either AFAIK. Only reductive physicalism posits a purely material mind. Non-reductive physicalism is the popular choice because of the “hard problem” of consciousness. We experience things and think things. While this involves neurons firing, there is more going on than that, something non-physical.
Also, of course, I still don’t know what you mean by “correlate.”
Put simply, A is correlated with B if when you have A, you also have B and vice versa. Correlation doesn’t say or imply anything about causation. So mind being correlated with brain doesn’t allow us to say whether mind causes brain, brain causes mind, or neither.
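(As a loose aside: that is the everyday sense of correlation, but the statistical version makes the same point. In the toy sketch below, all variable names and numbers are invented for illustration; the two series co-vary perfectly, and the coefficient still says nothing about which causes which.)

```python
# Toy illustration: two perfectly correlated series. Correlation tells
# us that A and B track each other; it is silent on causation.
# All names and numbers here are invented for illustration.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

brain_activity = [1.0, 2.0, 3.0, 4.0]        # some hypothetical neural signal
reported_experience = [2.0, 4.0, 6.0, 8.0]   # some hypothetical mental report

r = pearson(brain_activity, reported_experience)
print(r)  # ~1.0: perfect correlation, yet no direction of causation implied
```

The coefficient is symmetric in its two arguments, which is exactly the point: mind correlated with brain leaves open whether mind causes brain, brain causes mind, or neither.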
Perhaps … perhaps. In fact, I hope so. Once again (and I apologize) it depends upon what is being claimed about the physical correlative of abstraction.
If you mean, for example, that the correlative of the immaterial concept is “the physical concept in the brain” (that is, you think concepts can be material, in addition to being immaterial), then perhaps we disagree still.
A physical correlative of abstraction means brain activity associated with the process. When there is abstraction, there is that brain activity and vice versa; the brain activity is not a “physical concept”. This does not refer however to the brain making the phantasms present because, while that is necessary for abstraction, that is a different process and has different physical correlates.
To have the form of a tree in your head doesn’t require an understanding of “form.” Our knowledge is about things through having the concepts of those things. Our knowledge is not about concepts of things (but about things themselves) … otherwise, yes, you would end up in that silly infinite disaster. I think this idea is called the “Third Man Argument” (or something … I might be off)
How do we “have” the concepts of things but not “know about” the concepts of things? And what, exactly, do we know about things? We have the form of a tree in our heads but don’t actually know anything about that form. So how on earth do we know anything about a tree?

I’m actually going to skip the neuroscience for a bit and talk philosophy, because I think it is relevant. Here is what I think happened. Aristotle happened to note that the brain is very good at making classifications. (A neural network classifier with 100 billion neurons would be.) We recognize a “tree” instantly under normal conditions (clear sight, etc.) Aha, he said, since his metaphysics posits a substantial form, the form must be in our heads - that’s how we recognize it so fast, we “abstract” to the “essence” of a tree based on our sensory inputs. But, of course, it’s not necessary to do that to arrive at the classification of a tree; computers can be trained to do that. Yes, you will say computers can’t understand concepts, and you will be correct, but the point is that the classification can precede the generation of the concept. You don’t need a concept to do the classification.
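(To make that last point concrete, here is a minimal sketch of a nearest-centroid classifier; all the feature names and numbers are invented for illustration. It sorts inputs into “tree” / “not tree” purely by distance to the average of its training examples, with nothing concept-like anywhere in the machinery.)

```python
# Toy nearest-centroid classifier: it labels an input by whichever
# class's training-average it is closest to. There is no concept,
# definition, or essence anywhere inside it -- only arithmetic.
# The features (height in meters, leaf area index) are made up.

def centroid(points):
    """Component-wise average of a list of feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    """Return the label whose centroid is nearest to x (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Invented training data: (height_m, leaf_area_index)
trees = [(12.0, 4.0), (20.0, 5.0), (8.0, 3.5)]
not_trees = [(0.3, 0.2), (1.0, 0.8), (0.1, 0.1)]

centroids = {"tree": centroid(trees), "not tree": centroid(not_trees)}

print(classify((15.0, 4.5), centroids))  # "tree"
print(classify((0.5, 0.3), centroids))   # "not tree"
```

The classifier generalizes to inputs it has never seen, much like the pigeons in the study above, yet it would be odd to say it possesses the concept “tree”.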
The divisions and classifications are indeed concepts … in fact they are logical concepts (that is, of things that do not exist outside the mind). Even though the classifications are not real, the concepts they deal with are very much real.
But the problem is there’s no way to distinguish between a classification/concept which refers to a real essence, and one which does not.
An example of this is the relation we put on a symbol. We signify “a stick man” with “men’s bathroom” (for example). Both the sign of the stick man, as well as the men’s bathroom are things that both exist. However, the relation we put between the two does not exist in reality … but in our minds.
OK.
 
Likewise, the classifications of things are purely mental inventions, but the things they are classifying are objective concepts (opposed to purely logical concepts). If this is not accepted, it appears that we can have no knowledge of reality.
You’ve taken the problem even one step further, indeed. How do we know that a “tree” is a single ontological entity, or anything else for that matter, given quantum physics? Is a bound electron a real ontological entity? If not, maybe the atom, but atoms are bound in molecules, but molecules are also bound to each other, etc., etc. My claim is that we don’t have an absolutely certain knowledge of objective reality; we can know that it exists, but not exactly what it is. One might naively think on his first trip to the mountains that each aspen tree is a separate ontological entity, but in fact their root systems grow completely together over time and thus they could be considered a single plant. When a cell undergoes mitosis, when, exactly, do we say there are two cells instead of one?

Moreover, you say our classifications don’t refer to a reality, but are only conceptual - but you want to make an exception when it comes to something you think refers to an essence. But you don’t know what those essences are! The fact that your brain acts fast in making the classification doesn’t mean anything. Your brain instantly recognizes an automobile, and yet you wouldn’t say “automobile” is an essence with a substantial form. Or would you?

You have no reason for assuming “tree” is the substantial form, instantiated in individual instances of deciduous and evergreen trees. “Plant” could be the substantial form, instantiated in individual instances of everything in the plant kingdom. I’ll give you the taxonomic chart: can you tell which classifications are the real essences and which are mere human classifications? And what about crossovers, if we mix plant with animal DNA? Etc., etc.
I suppose you’re asking “how do we sufficiently express an essence using language?” The fact is, we can’t. No physical thing can show the entire truth of an essence, which is why concepts (which are essences of things as they have been grasped by the mind) are immaterial and thus not in the brain. Language is likewise deficient. But that is why symbols, images, words are just symbols … and not the concepts themselves. They refer to concepts but can’t be concepts themselves, since symbols are physical.
Definitions are composed of words and thus symbols. Hence definitions are not essences. They express in a limited way some truth about them by comparing and contrasting them to some other essence. The only way to do that is to have concepts of those essences.
No, what I’m asking is: when we allegedly understand an essence, what exactly is it that we understand? That’s why I brought up the definition/classification - that’s something we can understand. Otherwise, we go around in a circle again. We understand the essence of water as that which makes water water. And what’s that which makes water water? Its essence. Thus we actually understand nothing about the essence of water. Or, we understand what water is in itself? And what is water in itself? Why, what it is in itself!

Even if you say we just “know” the essence, we don’t know we know it. I could say the same for a mere human classification. Let’s say automobile is just an artifact without a real substantial form. We can still recognize them, but that would be just a classification. I can still claim we just “know” the essence of an automobile and you would have no way to refute it.
If all concepts are just classifications, comparing and contrasting them to other classifications, then everything is meaningless. For there to be any kind of relativity, there need to be absolutes. Why are certain things classified a certain way? Because of what they are. Essences are known, but they cannot be expressed completely.
I hope that made sense. Honestly, if you don’t accept this, how is reality known?
But even if essences are known, how is their knowledge known? And if it is not, again how is reality known? You’ve just pushed back the question one step. You have no reliable way of distinguishing between human classifications and abstracting to essences.
 
First of all, you say that the “meaning” (of a symbol) is symbolic. Thus, symbols symbolize symbols. And of course, those symbols symbolize other symbols that symbolize other symbols. Really, there are only symbols and no meaning.
Thank you for explaining this. I had a sense of this but hadn’t articulated it to myself and couldn’t therefore explain it to anyone else either.

TS,

Really interesting series of posts from you. I’m enjoying reading them and will come back once I’ve thought them through. Thank you.
 