Something like the experience of the color Red (in general, everything that is called ‘qualia’ in philosophy) lies outside science. Yes, we know that it is the result of light of a certain wavelength falling on the retina, which then via nerve pulses transfers a signal to the brain that then is processed etc. But this scientific, technical explanation still does not describe the experience itself. Even if science could map with precision and in the finest detail all the neuronal firings in the brain that are connected with this experience, it would still not describe the subjective experience of the color Red itself.
Hi Al,
To be frank, I find the idea of qualia about as interesting as the idea of phlogiston. It’s pretty neat from an historical perspective.
We aren’t born understanding what red is; we have to learn it. As we are “learning our colors,” we associate the perceptual input of redness with other memories: the word red, and red things like blood and tomatoes. The subjective experience of red arises when that perceptual input stimulates memory and cognitive processes in an integrative manner. The neurons that receive the input of “red” from the eye send the data to an integrative center in the visual cortex, which links memories of red with cognitive processes related to redness, and the “experience” of redness thus emerges.
These neural pathways are unique to each individual, but I predict they would be replicable for that individual. Every time a person perceives redness through the visual system, the same neurons in the visual cortex would fire, the same integrative neural networks would activate, and the same integrated memories and cognitive processes would follow. That’s my understanding of the neural network structure of the brain.
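The prediction above — that the same cue settles the same individual network into the same integrated state every time — can be illustrated with a toy Hopfield-style associative memory. Everything here is a stand-in: the pattern, the network size, and the "redness" label are illustrative, not real neural data.

```python
# Toy Hopfield-style associative memory: a stored "redness" pattern is
# reactivated identically each time the same perceptual cue is presented.
# Patterns and sizes are illustrative, not real neural data.

def train(patterns):
    """Hebbian outer-product learning over +/-1 patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    """Synchronously update units until the network settles."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

# One stored pattern standing in for the integrated "experience of red".
red = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([red])

# A degraded cue (partial perceptual input) settles back to the same stored
# pattern on every presentation -- the same integrative network activates.
cue = [1, -1, 1, 1, -1, -1, -1, -1]   # one unit flipped
print(recall(w, cue) == red)           # True
print(recall(w, cue) == recall(w, cue))  # deterministic: True
```

The point of the toy model is only the determinism: given the same weights and the same cue, the same attractor state follows every time, which is the network-level version of "the same neurons would fire."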
As for transferring those neural firings to another person, I don’t think it would be possible, since we map our learning onto our existing neurons and everyone’s micro-neural architecture should be different. However, if a machine learned, via high-fidelity fMRI, which neurons fire when we perceive red, which fire when we hear the word red, which fire when we think about redness, and so on, then the machine would be able to learn about our perception of redness. It would learn how we experience redness by “seeing” it (via fMRI). Such a machine would need to learn exactly how our brain thinks, but with enough calibrated data it would hypothetically learn how we think and be able to predict our behavior.
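The calibration idea above amounts to a decoding problem: collect labelled activity recordings, then classify new recordings. A minimal sketch, using synthetic stand-in "voxel" vectors and a nearest-centroid decoder — no real scanner produces labelled vectors this cleanly, and the labels and templates are invented for illustration:

```python
# Hypothetical sketch of "calibrated" mind-reading: pair activity patterns
# with labels during a calibration phase, then decode new recordings.
# The data are synthetic stand-ins for fMRI voxel activations.
import random

random.seed(0)

def noisy(template, noise=0.2):
    """A recording = the person's stable pattern plus measurement noise."""
    return [x + random.uniform(-noise, noise) for x in template]

# Each concept gets its own (individual-specific) activation template.
templates = {
    "see red":    [1.0, 0.1, 0.9, 0.0],
    "hear 'red'": [0.1, 1.0, 0.0, 0.8],
    "see blue":   [0.0, 0.2, 0.1, 1.0],
}

# Calibration phase: collect labelled recordings, average them per label.
centroids = {}
for label, t in templates.items():
    recs = [noisy(t) for _ in range(20)]
    centroids[label] = [sum(col) / len(recs) for col in zip(*recs)]

def decode(recording):
    """Nearest-centroid decoding of a new, unlabelled recording."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], recording))

print(decode(noisy(templates["see red"])))  # 'see red'
```

Note what the decoder does and does not do: it predicts *which* concept the person is entertaining, which is exactly the kind of behavioral prediction described above — it says nothing about what the experience is like for the person.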
It’s scary stuff, in my opinion. But not wanting it to be true doesn’t make it not true.
“. . . Suppose a scientist were crazy enough to try to observe your experience of tasting chocolate by licking your brain while you ate a chocolate bar. . . your brain probably wouldn’t taste like chocolate to him at all. But even if it did, he wouldn’t have succeeded in getting into your mind and observing your experience of tasting chocolate. He would have discovered. . . that when you taste chocolate, your brain changes so that it tastes like chocolate to other people. He would have his taste of chocolate and you would have yours.”
Our experiences of chocolate are different because we have different memories associated with our perception of chocolate. I don’t see how complex experiences could be translated to another mind. But experiences could hypothetically be understood, with varying degrees of accuracy, by a machine intelligence.
That is irrelevant to the problem. Simulations of the brain can only illuminate the workings of the brain. But note what I said before: even if science could map with precision and in the finest detail all the neuronal firings in the brain that are connected with this experience, it would still not describe the subjective experience of the color Red itself (see also Thomas Nagel above).
This stuff has been in the mass media for years, I’m surprised you haven’t heard of it.
IBM Says We’ll Have Mind-Reading Computers Within Five Years
by Peter Pachal, December 19, 2011
Mind-reading program translates brain activity into words
The research paves the way for brain implants that would translate the thoughts of people who have lost power of speech
Ian Sample, science correspondent
guardian.co.uk, Tuesday 31 January 2012 16.59 EST
Computers that read minds are being developed by Intel
New technology could allow people to dictate letters and search the internet simply by thinking, according to researchers at Intel who are behind the project.
Richard Gray
7:50AM BST 22 Aug 2010