Wednesday, October 14, 2009

Intentionality, Water, and the Obama Configuration: A Reply to Clayton and BDK

I was going to respond to Clayton as soon as I got the time. And the water=H2O example occurred to me when I read what he wrote.
But look at how the water case went historically. We started calling something water that had certain phenomenal properties. We were prepared to call anything water that had those properties. We figured out that the only thing that ever had those properties had a particular chemical structure. We melded the idea of water with the idea of having just that chemical structure. In other words, it looks as if when the discovery was made, language made a shift to the chemical structure conception. The term "water" changed in intension, but its extension remained the same.

When we get to intentional matters, we are being told that we are not going to get a single brain-structure that will be present in all cases of, say, thinking about Obama. Correct me if I'm wrong about this, but if we examined the brains of everyone who is thinking about Obama right now, neuroscience would not be able to go in and say "Aha. There it is. The Obama configuration. That's what he's thinking about. Hmmm. This looks like a Republican brain, since his endorphins drop every time he thinks about Obama. You don't get that with a Democratic brain."

So I'm not sure these water examples help at all.

And then you have the problem of non-normative information entailing something normative. If you don't think you can derive "morally good" from physical configurations, why do you think you can get "thinking about Obama" from those same configurations?

I think the prospects of getting an entailment from the left hand side to the right hand side are not good.

30 comments:

JSA said...

Actually, there *are* specific neurons that fire when someone thinks of a close friend's face, or a very famous person's face -- and these neurons are specific to the person being observed.

Marco Iacoboni is one of the researchers who discovered these neurons. His book, Mirroring People, describes the research.

However, this does not disprove your point. Neuroscience has simply shown that the process of *identification* resolves to single neurons, in certain cases.

JSA said...

BTW, the book also has a chapter specifically on political junkies, and how you can, in fact, tell when they are thinking about stuff like this. Again, it doesn't disprove your point, but using examples of famous people and politicians combined is not necessarily the best way to make the case, since we do in fact have a good deal of very recent neurobiological research about these.

Blue Devil Knight said...

I don't really understand your Obama example Victor. What you described is pretty much what neuroscience already does. We show different stimuli and observe the neuronal representations of those stimuli, representations which can be quite easy to individuate.

The question of normativity is a red herring.

Victor Reppert said...

But is there a natural state that is identifiable in everyone who thinks about Obama? The strict and exceptionless correlation between water and H2O made identification plausible. Do you really think neuroscience is going to get it down to this level of precision?

I don't see that the normative is a red herring at all. Many claim that intentional attribution has normative content.

Doctor Logic said...

But is there a natural state that is identifiable in everyone who thinks about Obama? The strict and exceptionless correlation between water and H2O made identification plausible. Do you really think neuroscience is going to get it down to this level of precision?

Three points:

1) Meaning is imprecise. Is a panda a bear? Is Pluto a planet? Whatever the mechanism of intentionality, it won't be "exceptionless" across minds.

2) There's the common phenomenon of pareidolia - seeing meaning in noise.

3) How would you tell if a device was a radio scanner? Not all radio receivers or scanners have identical structure. You would deduce that the device was a radio scanner by showing that it functioned as a radio scanner.

If a machine has structures that recognize Obama and what he represents, then the machine has intentionality. There's the possibility that the machine could be faked out by certain inputs, but the same can be said for humans.
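
Here is a loose sketch of that functional test in code; the tune method, the probe frequencies, and the two devices are invented purely for illustration, and the point is only that identification goes by behavior, not by internal structure.

```python
# Functional identification by behavior: rather than inspecting the device's
# internals, probe it and check whether it responds the way a radio scanner would.
# The interface, frequencies, and devices below are hypothetical.

def behaves_like_scanner(device, test_frequencies):
    """Return True if the device reports a signal on every probed frequency."""
    return all(device.tune(f) is not None for f in test_frequencies)

class BoxA:  # internally one design (the details don't matter)
    def tune(self, freq):
        return f"signal at {freq} MHz"

class BoxB:  # internally something else entirely
    def tune(self, freq):
        return None

print(behaves_like_scanner(BoxA(), [88.1, 101.5]))  # True: it functions as a scanner
print(behaves_like_scanner(BoxB(), [88.1, 101.5]))  # False: it does not
```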

If you don't think you can derive "morally good" from physical configurations, why do you think you can get "thinking about Obama" from those same configurations?

I'm not seeing a connection. Normativity predicts nothing. There's no reason to believe that anything (e.g., the "good" or rationality) is fundamentally normative. "Good" behaviors and "rational" behaviors are only normative in the sense that we call them "good" and "rational" respectively.

I can subjectively consider X to be normative, and we could find a mechanism that makes me feel X is normative, and then we're set.

Victor Reppert said...

What is it to recognize Obama? What does the word mean here?

Clayton Littlejohn said...

I think that the water example is one example, but it's one among many. You are right that there are some examples where we can make an empirical discovery and thereby discover the specific necessary connection between two things that were not previously known to be connected (e.g., water and H2O molecules). I don't think that the lesson to take from this, however, is that with full empirical knowledge we are in a position to know all the necessary truths or truths necessitated by the empirical truths. The normative propositions come to mind here, whether they be epistemic or moral. I think there are necessary connections between mental and moral propositions, but I doubt that we can know all of them a priori. I also think that there are necessary connections between large conjunctions of empirical and mental propositions and epistemic ones, but I don't think we can know all of them a priori. Because these propositions will be, by hypothesis, necessary truths but truths that are epistemically open, I guess I just don't get all excited by the prospect of a metaphysical discovery when we discover that we can't discern the necessary connection between things.

Dinner is ready, GTG as the kids used to say in the late 90's.

Doctor Logic said...

Victor,

Let's say, for argument's sake, that recognition in this context is a neuron turning on for Obama, and another turning on for liberal, and another for Democrat, etc. That is, recognition is a particular output.

Isn't that adequate for fixing reference?

Of course, when I see Obama, my one Obama neuron won't be the only neuron turning on. There may be neurons for every kind of relevant abstraction firing at the same time. Neurons for man, African, tall, Democrat, etc. might all fire at the same time as the Obama neuron. Indeed, they may be the very inputs that cause the Obama neuron to fire.

The mechanism that causes the "man" neuron to fire was forged by finding the "man" pattern in my sensory inputs. There's a clear causal link, so I don't see how it can fail to fix meaning. As I said, there might be some false inputs that could fake out the system, but then any such inputs would fake out your dualistic minds too.

Moreover, when I refer to "man" in a proposition, I can simply be referring to "whatever would fire my man neuron". This allows me to refer to counterfactuals or potential inputs, e.g., I can refer to "the first man to land on Mars" without said man actually existing yet. I simply refer to the class of inputs that would fire my "first" "man" "on" "Mars" neurons simultaneously.
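
To make that concrete, here is a minimal sketch of a "concept neuron" as a threshold unit over feature inputs; the feature names, weights, and threshold are made up for illustration and are not a claim about how the cortex actually codes.

```python
# Toy "concept neuron": fires when the weighted sum of its active feature
# inputs crosses a threshold. Features, weights, and threshold are made up.

def concept_neuron(weights, threshold, active_features):
    total = sum(w for feature, w in weights.items() if feature in active_features)
    return total >= threshold

# Hypothetical "Obama neuron" driven by more generic feature neurons.
obama_weights = {"man": 1.0, "tall": 0.5, "democrat": 1.0, "president_2009": 2.0}

print(concept_neuron(obama_weights, 3.0,
                     {"man", "tall", "democrat", "president_2009"}))  # True: it fires
print(concept_neuron(obama_weights, 3.0, {"man", "tall"}))            # False: it doesn't
```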

Blue Devil Knight said...

The water-H2O analogy isn't meant to map over in every way. Nobody would say that there is one and only one brain state that represents Obama. Similarly, nobody would say that there is one and only one microstate that instantiates a particular temperature (in fact temperature-mean kinetic energy is another a posteriori identification that may be more apt than water-H2O).

Or, closer to home, consider the property of having a heart. Look at the hearts in dogs, humans, and the leech. They are very different. That doesn't mean hearts are nonphysical, or that human hearts aren't reducible to biology. The question is how local the reduction needs to be to be applicable. In color vision, for instance, we find the same rules apply across subjects (with variations following predictable patterns for certain photoreceptor variants, as in the red-green colorblind). Even monkeys have a similar color system to humans.

In our "face region" in the brain, there is a population of neurons each responsive to different types of faces (some even to individual faces). The principles of operation of these population codes is likely similar across individuals, even if there isn't a 1:1 mapping from neuron to neuron.

Churchland discusses this problem in depth: how to identify concepts/representations across subjects with different numbers of neurons. I think it is in his book 'Neurophilosophy at Work'.

William said...

Sorry, finding a unique output neuron is NOT finding a concept. What happens if you lyse that unique neuron? Very little, I suspect.

The Churchlands would like to re-define the "left hand side" concepts as folklore, but I think that is just wishful thinking on their part.

Doctor Logic said...

William,

What happens if you lyse that unique neuron?

Well, I said that, for argument's sake, Obama would be represented by a single neuron. In reality, it's probably more than one, but that doesn't make much difference to your question. Lyse the neuron (or group of neurons) and you no longer know about Obama. You may remember the name, and the face will be familiar, but the concept for President Obama in particular will be lost. Your brain can probably partially reconstruct it from its own memories by finding patterns in memories relating all your experiences, and can fully reconstruct it with more experience.

Neurons find patterns in experience. This is a fact. The found patterns are stored in the neurons and their interconnections. Destroy the neuron and the concept has to be re-learned.

Sorry, finding a unique output neuron is NOT finding a concept.

Then, what does a concept have that a neuron doesn't? The neuron can feed back into its inputs and regurgitate the memories that programmed the output. When I recall Obama, I trigger the networks that programmed the neuron, so I see Obama's face in my mind, hear his voice, and I recall prominent events, e.g., his election. Moreover, I can imagine "Obama asleep in bed" even if I have never actually seen him asleep in bed. I can do this because my neurons relate to my sensory inputs, and I consequently know roughly what my sensory inputs would be like if I saw Obama asleep in bed. (What comes to mind are the most prominent memories of Obama and a bedroom combined.)
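
A toy way to picture the "lyse it and re-learn it" claim, with a single perceptron-style unit standing in for the concept; the features, data, and learning rule are fabricated for the example.

```python
# Toy perceptron standing in for a concept unit. "Lysing" it zeroes the learned
# weights; retraining on the stored experiences rebuilds the concept.
# Features, data, and the learning rule are illustrative only.

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

experiences = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]  # stored "memories"
weights, bias = train(experiences)       # concept learned from experience
weights, bias = [0.0, 0.0], 0.0          # "lyse" the unit: the concept is gone
weights, bias = train(experiences)       # re-learned from the same experiences
print(weights, bias)
```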

J said...

If a machine has structures that recognize Obama and what he represents, then the machine has intentionality

Nyet. At best some recognition software--SnitchWare--hooked to a video cam associates a pic (or retinal scan, DNA profile, etc.) with a file of bio-data. A machine--or CPU--at no time thinks, or "has intention": it merely follows routines/parameters inputted by humans.

There's no magic to computing: a CPU and peripherals (hard drive, monitor, etc.) are just high-powered, digitalized adding machines, made possible by electronics.

Circuits are not neurology, whatsoever; a CPU has no idea what "sour" means, or blue, or democracy (tho' it could pull up a definition, really quick). Or the meaning of Debussy's La Mer. A geek might write a program simulating qualia of various sorts (already done--an action figure in a game shrieks when his head's cut off, etc), but that's hardly an organic human brain.

So the identification (mind/brain if you will) has not been established by neuroscience. They can simulate, maybe translate some brain firings (handicapped people flick on lights, etc) but have not at all simulated or reproduced human consciousness; that may not prove a Cartesian mind, or substance dualism as they say, but does suggest non-reducibility (property dualism).

When you can interface your brain via a USB port, then maybe one might revise the assumption of non-reducibility.

(The Churchlands sound mo' Mengele like each year).

Doctor Logic said...

J,

Circuits are not neurology, whatsoever; a CPU has no idea what "sour" means, or blue, or democracy (tho' it could pull up a definition, really quick).

But the difference is abstraction. The kinds of simplistic systems you cite have no ability to abstract. Neural networks can. Consequently, CPU-driven simulations of neural networks can.

Imagine that you are given a drug which prevents new abstractions from forming. Maybe you can still recognize a cat, but you would be unable to learn to recognize a new species. For example, if you had never seen a wombat before, you would not be able to form a concept for wombats after seeing examples of them. Instead, you would be inflexible, and sometimes classify a wombat as a squirrel or a cat or a dog. Though you might be able to refer to specific memories of specific wombats, you wouldn't be able to create a concept for a wombat that applied to all instances of wombat which you might see in the future. You would lack a way to refer to wombats in a general sense.
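
A rough sketch of that thought experiment, using a nearest-centroid classifier as a crude stand-in for concept formation; the two feature dimensions and all the numbers are fabricated for illustration.

```python
# Nearest-centroid classifier as a stand-in for concept formation.
# A flexible learner can form a new "wombat" category from examples;
# a frozen learner must force wombats into its existing categories.
# Feature vectors are fabricated purely for illustration.

def centroid(examples):
    return tuple(sum(v) / len(examples) for v in zip(*examples))

def classify(categories, x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(categories, key=lambda name: dist(categories[name]))

categories = {"cat": centroid([(2, 8), (3, 7)]), "squirrel": centroid([(1, 2), (2, 1)])}
wombat_sightings = [(7, 3), (8, 2), (7, 2)]

print(classify(categories, (7, 3)))                # frozen learner: forced into "cat" or "squirrel"

categories["wombat"] = centroid(wombat_sightings)  # flexible learner forms a new concept
print(classify(categories, (8, 3)))                # now classified as "wombat"
```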

William said...

J and Doctor Logic: your oversimplified use of computer metaphors for the brain, and the erroneous conclusions you draw from those metaphors, are a good example of the use of reason, but one that is (ironically) not supported by neurological reality. In the neurosurgeon Penfield's words, "I question whether we should speak of a 'physiology of the mind'...until we discover more about the nature of the mind."

J said...

William: your superficial reaction to a few computing terms missed the point. I do not agree with the AI types, and never claimed that brains or neurology were CPUs (see above: "circuits aren't neurology"). Precisely the opposite. I said Mind was irreducible at this stage, did I not?

Dr Log: Simulation of human consciousness is not human consciousness itself. And the sim stuff's not that powerful at this stage. A computer may duplicate humans' quantitative skills to a large extent--say playing chess, calculation, number crunching of various sorts, etc.--but it doesn't understand meaning whatsoever--one might say connotation. Adding machines are not conscious. And a computer is simply a sophisticated adding machine. It doesn't understand anything, or form concepts, or experience the world.

A great part of being human relates to qualia--memories, sensations, impressions, connotations, etc; thus a computer is not a human--it doesn't grow up, learn from experience, create things, go to the malt shoppe with Daisy Mae, etc.

Blue Devil Knight said...

The Churchlands would like to re-define the "left hand side" concepts as folklore, but I think that is just wishful thinking on their part.

No, they both have theories of consciousness, mental representation, and concept use in brains. What they don't like are theories holding that the brain uses a language of thought, or propositional attitude psychology. The realm of the mental is much broader than the language of thought.

People do not read the Churchlands nearly as often as they read about them through rather biased filters, so I can't blame folks for incorrectly thinking the Churchlands want to get rid of mind (the left hand side).

If people would just pick up any of their books they would find theories of mental representations, consciousness, concepts, all sorts of mental entities.

Anonymous said...

They have theories of the mental in the way atheists have theories of God. They're called eliminative for a reason, y'know.

Blue Devil Knight said...

This comment has been removed by the author.

Doctor Logic said...

J,

Adding machines are not conscious. And a computer is simply a sophisticated adding machine. It doesn't understand anything, or form concepts, or experience the world.


But what is the difference between a mind and an adding machine?

The difference is the ability to abstract!! Only a few specialized software applications abstract, and they run on PC hardware which is a million times less powerful than the brain. At the same time, no one here seems to see that abstraction is vital to our own intentionality. I suppose it's no wonder that you think it would be irrelevant to machine intentionality.

You and others are arguing that the inability of anemic, non-abstracting computers to think intentionally proves that no possible computer can have intentionality. That's like arguing that the inability of the Etch-a-Sketch to get input at a distance disproves the possibility of making a TV.

There's no need for non-reductive anything. In almost every possible world, if the mind had a non-material functional element, it would be possible to break that function of the brain without breaking the mind. For example, non-physical minds might not need memory, or recognition, or systems to correlate what we see with how we feel about what we see. Yet every one of these functions is part of the physical brain. Given what we know, it has to be millions to one against any element of mind being non-physical.

Blue Devil Knight said...

Anon wrote, of the Churchlands:
They have theories of the mental in the way atheists have theories of God. They're called eliminative for a reason, y'know.

Actually, no. An atheist wants to eliminate gods, while the Churchlands do not want to eliminate mind, as I said in my previous comment that you ignored. You've been reading too much internet commentary from people who need a straw man. What I presented wasn't a straw man, but what they actually say.

If you read their books, what they actually advocate, it is clear (even from the chapter titles, for goodness' sake) that they both think minds are real, that consciousness is real, that we have internal mental representational states (indeed, the most annoyed I have ever seen Pat Churchland at a talk when I was at grad school at UCSD was when Tim van Gelder suggested we could do away with mental representations!).

So, you are simply wrong, and don't know what you are talking about. However, they are eliminativists. About what? What they want to eliminate is a certain model of the mind (the 'language of thought' model espoused by Fodor, or propositional attitude psychology). They think a more nuanced, neurally based theory of mental representation, consciousness, and concepts is at hand. Their theory can only be considered radical by proponents of propositional attitude psychology (of course, for nonnaturalists their theory would be radical, but so would every naturalistic theory, including those that do like propositional attitude psychology, from Dretske to Fodor, so that is a different issue).

However, by all means continue to beat up on a straw man if it makes you feel smart.

The one quote I have seen that might make people misinterpret their intentions is a summary of eliminative materialism from an introductory philosophy of mind book, where Paul was laying out the theoretical landscape. What he ends up actually advocating is a very restricted eliminativism, not about the mind tout court, but about a specific theory.

If anyone actually wanted to take the time to learn what they actually say, rather than the crap you find about them on the internet, Pat's book 'Brain-Wise' is a good popularization, while Paul's book 'Studies in Neurophilosophy' is a good technical overview (and 'The Engine of Reason, the Seat of the Soul' is a good popularization from a decade or so ago).

I know people prefer straw to meat, so I won't hold my breath. You could just read the chapter titles though, to see what I'm talking about.

J said...

The difference is the ability to abstract!! Only a few specialized software applications abstract, and they run on PC hardware which is a million times less powerful than the brain. At the same time, no one here seems to see that abstraction is vital to our own intentionality.

That's a difference, but not the only difference. Computers, however powerful, don't abstract--or shall we say conceptualize. Computing's about quantitative information, really, even when humans input some "fuzzy logic," or something. Computers and software perform excellently with accounting, stats, chess playing, calculations. But they don't experience the world. They are not persons. They do not have even a rat-level of intelligence: no qualia, no sensation, no concept-forming (except in a basic deductive sense, perhaps--still a routine)--and of course no specifically human emotions, desires, memories.

A computer does not get the opening of the 5th symphony, and will not, until consciousness is itself synthesized, or duplicated. However, corporate-industrial AI, when it comes about in the next century or two, will probably be applied to drones, soldier-automatons, etc. (as it already is). Not for Beethovens. Governments need Kill-bots, not Kurzweils.

Doctor Logic said...

J,

Computers can abstract, and Numenta is doing so today.

However, even if contemporary computers don't have some (or even all) of the attributes you mention, that's not a good argument against reductionism or the possibility of superior AI's.

Overwhelmingly, the evidence points to us being biological machines. Physics is quantitative, too, but that doesn't stop us from thinking. So why shouldn't a simulation be as conscious as we are?

I'd bet that, within 25 years, the smartest, most creative intelligence on this planet will be artificial.

Blue Devil Knight said...

A simulation of a tornado won't blow your house down, and I won't be surprised if a simulation of consciousness isn't conscious. Obviously that doesn't imply anything about whether tornadoes or consciousness are natural processes.

I think Dr Logic has latched onto a very interesting theory, a cool thought playground in which to play around with ideas, the theoretical possibilities of (artificial) neural network models. It is far from an established theory in neuroscience.

J said...

However, even if contemporary computers don't have some (or even all) of the attributes you mention, that's not a good argument against reductionism or the possibility of superior AI's.

Au contraire. Quite a sound argument--they only have quantitative abilities, which were parameters set by humans anyway. They have no qualia, except those few new fuzzy-logic schemes, also inputted by humans. As I said, computers can out-number-crunch humans--the post-Deep Blue chess engines now reportedly defeat grandmasters regularly. So what??? Merely speed of processing, not actual thought or conceptual knowledge. The chess engine can grind out possible moves at 100x (or more, of course) what a Kasparov does. But that's not thinking--at best it's a simulation of human thinking--and the examples you mention are the same. Simulation.

Now, I agree eventually there will probably be AI which does conceptualize, plan, scheme, invent its own programs--or viruses. And it will (if 20th century history is any clue) probably lead to Matrix-like scenarios, malware invasions, etc. The optimism of most AI types should offend us as much as the reductionism. Ever read PK Dick, or William Gibson? I think those writers offer a better vision of our dystopian future than Heinlein or Asimov space opera, or the Minsky/AI geeks.

Doctor Logic said...

BDK,

A simulation of a tornado won't blow your house down, and I won't be surprised if a simulation of consciousness isn't conscious.

Yet a virtual tornado can blow down a virtual house. I don't see why consciousness would be any different.

I agree that HTMs are "far from an established theory in neuroscience," but they do abstract, and the neocortex is known to be built out of similar structures. So, HTMs are perfectly adequate to refute the AfR. The AfR says "No material systems except human minds have intentionality; therefore, in spite of overwhelming evidence to the contrary, human minds can't be material." Even in the absence of an abstracting machine, the AfR is weak. With an example, the AfR is just refuted.

Doctor Logic said...

J,

What's so special about the qualitative that makes it impossible to represent with the quantitative? (If anything, it seems to me it would be the other way around.)

And what do you have against simulation? You appear to deny that simulations will have qualia (to the extent that qualia exist) merely by assuming your conclusion.

If qualia are generated by physics (e.g., if physicalism is true), why won't a simulation of the physics produce qualia?

And what if our universe is a simulation?

Blue Devil Knight said...

Dr L: I agree that it is useful for providing existence proofs about the range of possibilities. While the 'hierarchical temporal memory' (HTM) models are cool, and loosely inspired by neuronal models, they haven't been of much help in neuroscience. I'd be curious to see experimental papers showing that the cortex uses something like HTM. I know that when I saw Hawkins speak about this at a neuroscience conference, those of us in the audience who were actual neuroscientists thought it was fun as engineering, but clearly not neuroscience.

The same can be said about most work in artificial neural networks. Interesting explorations of possibility space, but not neuroscience. It's more like theoretical psychology than neuroscience. This has always been my main concern with Paul Churchland's work as well.

Blue Devil Knight said...

Yet a virtual tornado can blow down a virtual house.

Yes, but there is still no wind blowing in the computer doing the simulating. I take a much more biological approach, so consciousness is more like digestion than computation. A computer simulation of flatulence doesn't smell, and a computer simulation of perception doesn't see.

You'd need an argument that consciousness is somehow special, that merely by simulating it we create something with the property being simulated. An argument that makes it different from all other known biological properties that aren't reproduced merely by simulating them.

I suppose you could argue that consciousness is a kind of computational phenomenon. For instance, a Turing machine running a program that performs addition is an adding machine. So if consciousness is a computational process, perhaps a Turing machine running a simulation of thinking is a thinking machine.
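
For what it's worth, here is a minimal sketch of that point: a toy Turing machine whose transition table (invented for this example) adds two unary numbers by pure symbol manipulation.

```python
# Toy Turing machine that adds two unary numbers (e.g., "111+11" -> "11111").
# The state/transition table is made up for this illustration; the point is
# only that a machine defined purely by symbol manipulation ends up adding.

def turing_add(tape):
    tape = list(tape) + ["_"]
    state, head = "scan", 0
    while state != "halt":
        sym = tape[head]
        if state == "scan":          # move right looking for the '+'
            if sym == "+":
                tape[head] = "1"     # join the two blocks of 1s
                state = "to_end"
            head += 1
        elif state == "to_end":      # move right to the end of the tape
            if sym == "_":
                state = "erase"
                head -= 1
            else:
                head += 1
        elif state == "erase":       # erase one '1' to correct the count
            tape[head] = "_"
            state = "halt"
    return "".join(tape).strip("_")

print(turing_add("111+11"))  # "11111"
```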

Doctor Logic said...

BDK,

You'd need an argument that consciousness is somehow special, that merely by simulating it we create something with the property being simulated. An argument that makes it different from all other known biological properties that aren't reproduced merely by simulating them.

It seems to me that either consciousness is what consciousness does or else consciousness is a sort of substance.

If consciousness is what consciousness does then consciousness is analogous to adding. A simulation of an adding machine is an adding machine. A simulation of a heart (connected to physical actuators) is a heart.

If consciousness is a substance, then we're talking about vitalism, and it's vitalism that has fallen by the wayside. It would be consciousness that would be exceptional.

If consciousness is not what consciousness does, if consciousness is a substance, why is it valuable to us?

I can put it another way. If consciousness is a substance, why does that imply I ought not upload myself and run in a virtual paradise? Functionally, I would be identical in simulation, but I would lose my conscious substance. What's wrong with this? It can't be because I would lose self-awareness because self-awareness is a function, and it's already stipulated that consciousness is not functional.

Blue Devil Knight said...

If consciousness is what consciousness does then consciousness is analogous to adding. A simulation of an adding machine is an adding machine. A simulation of a heart (connected to physical actuators) is a heart.

A simulation of a heart is not a heart. It will not pump blood. Adding physical actuators means the model plus the actuators make up an artificial heart. Perhaps at that point the hardware interacting with the actuators would play the role of the pacemaker, but not of a heart (this assumes the simulation can run fast enough to hook up with the world).

You seem to be arguing for a kind of computational functionalism, which is dandy and could turn out to be true despite the fact that more biological approaches are all the rage nowadays. I'd rather study brains and do the biology; that's precisely why I left philosophy.

Despite your incorrect claims about simulations in general (hearts, tornadoes, and such), for the case of consciousness you could say consciousness is special in that it doesn't need actuators, that it is more like the 'pacemaker' in that it only depends on the relations to other elements within the body, not to behavior or whatever. E.g., during dream sleep we don't move; we are effectively paralyzed.

As I said, that is a possibility that has some merits, I would leave it to the naturalistic philosophers to fight amongst themselves about that.