Thursday, February 03, 2011

Darek Barefoot's defense of the Argument from Reason

A re-redated post.

I have redated this post because Darek has responded to Jim, but it has been a while (13 days) since it was up.

A few years ago Darek Barefoot sent me a defense of the Argument from Reason. I ran across it lately, and asked him if I could share it on the blog. He said yes, so here it is.

Simulating Sleep: A Thought Experiment to Demonstrate the Argument from Reason

by Darek Barefoot

Under a physicalist model of reality, all connections are instances of cause and effect (quantum events having a peculiar "disconnectedness"). Whatever the mind or brain comes to know about reality must be through such cause-and-effect connections with the events and objects that are known. We know that an object is red, with a high degree of probability, because it looks red to us. Some connections are less direct, and might be called "detections." We come to know the barometric pressure at a given moment because what is obvious to our senses when we look at the dial of a barometer is causally connected with a physical state difficult or impossible to sense directly. The cause and effect relationship between what we see on the dial and actual atmospheric pressure is critical. To repeat, under physicalistic assumptions all acts of knowing must be causally connected with the state of the object being known; no other type of connection is available.

Physicalists such as Daniel Dennett insist that even consciousness, to the extent that the word describes anything real, is a causally driven physical activity and nothing more. What we think of as conscious functions such as sensation and reason are interpreted as behaviors. For convenience we might divide these behaviors into those that are external and therefore obvious to any observer, such as walking and talking, and those that are internal actions of the brain that must be detected, such as the electrical changes measured by an EEG.

Logical connections, however, defy being equated with or reduced to chains of cause and effect. We can demonstrate this incompatibility with the following thought experiment. Suppose I lie down on a bed, close my eyes, relax my limbs and deliberately begin to breathe in the deep, regular pattern that we associate with sleep. Now, as I lie on the bed under these conditions, how do I know that I am awake and not asleep? I can sense that the position of my body and my external behavior are consistent with sleep. If a third party were to walk into the room and observe my behavior, they would think it more likely than not that I was asleep.

If I know myself to be awake, in spite of sensing my bodily behavior to be that typical of sleep, I must under physicalist assumptions be sensing or detecting the other behavioral component, my brain activity. After all, these two types of behavior allegedly comprise everything we call the state of being awake, and my act of knowing myself to be awake must be linked to my "awake" behavior, uninterruptedly, by means of cause and effect. The trouble is, I can offer no sensory account of what my brain is doing. I cannot see or hear any part of my brain, nor can I claim to see or hear electrical or chemical changes going on in it. I cannot "feel" my brain in the somatosensory way I feel the position of my limbs or the way I feel my heart beating when my pulse is rapid.

To know things about the state of my brain by causal means through detection, as opposed to direct sensation, would require special instruments. For example, suppose an EEG were set up in the room where I am lying down and that I were connected to it. Suppose further that instead of just a graph it generated an audio signal in the form of beeps and that from the timing of these beeps it could be determined whether the person hooked up to the machine was awake or asleep. Even lying on the bed with my eyes closed I could listen to the beeps and detect my brain activity to be that of someone who is awake. It is important to note that this is public, not private, information. A person walking into the room could also detect from the beeps that I was awake in spite of observing that my bodily behavior was consistent with sleep.

The ability of an EEG to convey to me my brain state is well and good, but returning to my original situation in the room, I have no EEG and cannot detect my brain activity. I have only the messages I am receiving from my body as effects in the form of sensations. To the extent that these sensations "say" anything about my condition with respect to my being asleep or awake, they say "probably asleep." How, then, do I nevertheless know myself to be awake? I do so by logical means. I am feeling the state of my body, and since I can only feel things when I am awake, it is overwhelmingly probable that I am awake. Logic transforms "probably asleep" to "probably awake," and I am content to put faith in this mysterious alchemy. How is this happening?

Perhaps I was mistaken about the causal inputs, the "sensations," I am receiving. Perhaps the logically active part of my brain, in the cerebral cortex, is actually connected by cause-and-effect means with "awake" activity in some other part such as the mid-brain. In reality, then, the way I know I am awake is by sensing the awake state of my brain, or at least part of it. But what about the syllogism that I (mistakenly) thought was the means by which I know I am awake: "I only experience bodily sensations when I am awake; I am experiencing bodily sensations; therefore I am awake." Is the syllogism still sound? If the syllogism is sound and if it is different from the sensing of an awake state of one part of my brain by another part, then I was not mistaken after all. But the implication is dire for any physicalist model of the situation, because it means that I can know about the awake state of my brain without any sensory, causal connection to that state. It should go without saying that the brain activity entailed by my act of knowing cannot be causally connected to itself; the event sequence involved cannot be its own cause.
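For readers who want the inference spelled out mechanically, the syllogism in the paragraph above can be encoded as a one-rule forward-chaining program. This is purely illustrative; the fact and rule labels are mine, not Barefoot's:

```python
# Illustrative only: the syllogism from the post encoded as a tiny
# forward-chaining inference. Fact and rule names are invented labels.

# Facts established by sensation alone.
facts = {"experiencing_bodily_sensations"}

# The premise "I only experience bodily sensations when I am awake,"
# read as the conditional: sensations -> awake.
rules = [({"experiencing_bodily_sensations"}, "awake")]

# Apply the rules until no new fact is derivable (repeated modus ponens).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("awake" in facts)  # True: the conclusion follows from the premise
```

The sketch only displays the logical form (modus ponens); whether such a mechanical process could constitute *knowing* is exactly what the essay goes on to dispute.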

Few options are left, physicalistically speaking. It might be claimed that I do not in fact know that I am awake in the circumstances specified. Physicalists who wish to ply that argument are free to do so, and the rest of us may be excused for paying them little attention. The other strategy is to identify the syllogism with the act of sensing an awake state in some part of my brain. This would have strange implications, to say the least. My means of knowing that I am awake would actually be not reason distinct from sensation but reason as sensation--and as a sensation different from the one the syllogism identifies, to boot. Remember, I thought that I arrived at my knowledge by reasoning based on sensations originating in my body. Can any of our knowledge, including cognitive science, survive if syllogisms are equated with sensations? Is logic, as distinct from simple sensation, a kind of illusion? That can hardly be true if the word "true" is to retain any meaning.

It is pretty routine for us to differentiate between sensations and syllogisms. Are the sensations entailed in seeing and smelling a rose the equivalent of propositions inferred logically about a rose? For one thing, syllogisms and propositions may be sound or unsound, true or false. Sensations, on the other hand, are not true or false or correct or incorrect; they simply "are." As Ayer observed, "A sensation is not the sort of thing which can be doubtful or not doubtful. A sensation simply occurs. What are doubtful are the propositions which refer to our sensations." To claim that there is no sharp divide between sensations and syllogisms is, at the bare minimum, to assume a heavy burden of explanation.

Let's imagine, instead of me lying on a bed, an electrical device with a meter that displays the level of incoming voltage. The voltage meter might, due to unexpected mechanical events, read "0" when in fact there was still an electrical current present in the device. Let's roughly compare this circumstance to my appearing to be asleep when I am not. Let's also suppose that, foreseeing this possibility, the makers of the device have installed a small red indicator light that stays lit when any current is present, even if the voltage meter happens to read "0." There might even be a small sign beneath the light reading, "power on." Is there any distinction in principle to be made between the manner of operation of the voltage meter and that of the indicator light? Both undergo an obvious physical change when electrical current flows through them. We may call the branching circuit to the indicator light a "logic circuit," but only as a subjective attribution. One circuit in the device is no more "logical" than another. Nor does a distinction in principle appear with increasing complexity of the device. Even if the indicator light were replaced by a chip with an LCD display that read, "Beware, I have determined that current is present in my circuits!," it would remain a sensor. In principle, a sensor is a sensor. It is impossible to distinguish between types of sensors in the same way we can between sensations and syllogisms.
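The device analogy can be made concrete with a small sketch; the function names and the 0.5-amp threshold are invented for illustration, not taken from the essay. Both the faulty meter and the indicator light are modeled as nothing more than functions of the incoming current, which is the author's point: neither circuit is intrinsically more "logical" than the other.

```python
# Illustrative sketch of the voltage-meter/indicator-light analogy.
# Each component is just a causal function of the current flowing in.

def voltage_meter(current_amps: float) -> str:
    # A faulty meter: mechanically sticks at "0" below a threshold,
    # even though current is actually present.
    return "0" if current_amps < 0.5 else f"{current_amps:.1f}"

def indicator_light(current_amps: float) -> bool:
    # The "power on" light: lit whenever any current is present at all.
    return current_amps > 0.0

current = 0.3                    # current present, but below the meter's threshold
print(voltage_meter(current))    # the meter "says" no power
print(indicator_light(current))  # the light "says" power present
```

Calling one function a "logic circuit" and the other a "meter" changes nothing about how either operates; both simply register a physical cause.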

The simulated sleep experiment is designed to make obvious the gulf between sensation and reason that always exists but may be harder to differentiate when impression and conclusion coincide more closely. Other examples can be offered, however. If I look through red-tinted glasses I understand that objects that are not red will look red and red objects will look white. My knowledge does not cause me to experience red when I see white, however, nor do I have to stop and picture a white-appearing object as red in my mind to understand logically that it must really be red. Nor does my experience of knowingly viewing objects through red-tinted glasses amount to sensory dissonance. It is not the same as, say, looking at what appears to be a melting ice cube and discovering with surprise that it is warm to the touch. "Looks white through red-tinted glasses, is therefore red," is in terms of raw sensory experience unfathomable, but it is a phenomenon that I have no difficulty coming to terms with as long as logic is available to me.

The simulated sleep experiment points up the difficulty physicalism has accounting not only for reason, but for consciousness as well. Consciousness in the context of the experiment is substantially synonymous with the state of being awake and is defined behaviorally along the same lines. As I point out above, there are three means of determining that I am conscious. Means No. 1 is to observe my waking or conscious behavior. Someone who sees me walking, talking and otherwise responding will know that I am conscious. I myself have access to that behavior by sensory means as well. I can feel my body moving and hear myself speaking. Means No. 2 is to detect my conscious brain activity, as by the EEG. Both I and an observer potentially have access to this information as well. Means No. 3 is my private experience as the basis of a logic transaction that obtains the information otherwise available by Means Nos. 1 and 2. The problem for physicalistic theories of consciousness is that they cannot accommodate "private" knowledge as objective, and Means No. 3 of determining whether I am conscious necessarily is private. If I decide to give any physical clues at all about my state to an observer, the observer would receive the information through Means No. 1. If the observer, on the other hand, resorts to instrumentation or any causally-based investigative method to determine my brain activity, the information he obtains will owe to Means No. 2. No advances in technology or cognitive science can remedy the problem unless they endow test subjects and observers with mental telepathy.

The irreconcilability of private knowledge with physicalism can be grasped through analogy. Suppose I am an astronomer who comes to know, by some power of my brain not entirely understandable to me, the precise position and orbit of a heretofore uncharted moon of Jupiter. I know this fact apart from anything that may be construed as observation of the moon or its effects. If I train my telescope on the right portion of the sky, I can say I have observed the moon to exist. If someone else happens to do the same, they will make the discovery as well. Before either observation is made, however, I and I alone know the moon exists, and I know it with the same degree of certainty that I know myself to exist. If, after another astronomer observes the moon, I were to tell him that I knew it was there previous to his discovery without being able to give him any observational account to justify this supposed insight of mine, he would likely regard me as a crank. Presumably the same astronomer would, on the other hand, have a different opinion if the object of knowledge were not a hypothetical moon of Jupiter but my own unobserved state of consciousness.

There must be a factor associated with what we call consciousness that is absent from physical events and objects as we encounter them externally. This factor has most often been associated with so-called qualia, the "what it's like" of experience. Famous expositions of "what it's like" to exist as a member of a particular animal species (Nagel, 1974) or to see colors (Jackson, 1982) make the fairly obvious point that there is something to experience that cannot be captured in a factual description. The ineffable "something" that comprises qualia is amorphous, objective in its existence but subjective as to its essence. Whether I know myself to be conscious as I lie on the bed is a question of hard fact. Either I know my condition or I do not. If I know myself to be conscious, then the source of my knowing is as objectively real as the knowledge itself.

To say that there is something more than physical cause and effect occurring in consciousness and reason is not to deny that in our experience they are conditional on certain physical states. Too much should not be made of the association. If I tampered with the components of a radio, the quality of the sound it produced would be affected. If I tampered with the radio forcefully, it would cease to produce sound at all. Only through a fallacy could I conclude from these facts that the broadcast heard over the radio originated in the radio itself. The analogy is imperfect, as all analogies are, but it remains true that the observed necessity of certain conditions for the occurrence of certain events does not confine the explanation of those events to the conditions alone. The existence of a body of water of a certain size may be a condition of having high and low tides, but a body of water does not cause tides to occur simply by existing. If the brain is more like an interface than a stand-alone machine, then what it is in contact with cannot be merely the "mindless" entities we know as physical substances. It is easy to appreciate that this unsettling implication is more congruent with theism than with atheistic materialism.

The Argument from Reason was popularized by C.S. Lewis in his book Miracles (1960) and developed by others since. The experiment described here is an effort briefly to illustrate his thesis.


A. J. Ayer, Language, Truth and Logic (1952) 93.

Thomas Nagel, 'What Is It Like to Be a Bat?,' The Philosophical Review LXXXIII, 4 (October 1974) 435-50.

Frank Jackson, 'Epiphenomenal Qualia,' Philosophical Quarterly (April 1982) 127-136.

C. S. Lewis, Miracles (1960) 12-24.

For a good overview of the Argument from Reason, see Victor Reppert's article of the same name at


Lippard said...

This appears to me to be a pseudo-problem.

Which physicalists (except behaviorists) say that we can't have some kind of private access to information about our brain states? This argument seems to presume that, according to the physicalist, there is no evidence of brain states except from perceiving parts of my body other than my brain.

The premise "I only experience bodily sensations when I am awake" is also a false premise, but can be made true by restricting it to an appropriate set of perceptions (probably including some temporal continuity of perceptions).

I would also point the author to the work of John Pollock in support of having some basic knowledge directly on the basis of perception rather than inference. It's not a matter of replacing logic with sensation--there is still knowledge on the basis of inference. It's a mistake to conclude, from the possibility of knowledge on the basis of direct experience or perception, that there is therefore no knowledge on the basis of reason or inference.

BTW, lying and resting is a distinct state from lying and sleeping, and there are differences in both brain activity and perceptions. Note also that there are *intermediary* stages between sleep and being awake, which include states of being semi-conscious but unable to move (hypnopompic or hypnagogic sleep), being ambulatory but asleep (somnambulism), and having lucid dreams where one is fully aware of the fact that one is dreaming. All of these states have specific physical differences, some of which are perceivable or inferable by the subject.

I've frequently had both lucid dreams (where I am aware that I am dreaming and have the ability to manipulate the course of the dream) and hypnopompic sleep (where I think I'm waking up, feel completely conscious, but something is not quite right and then I realize I'm not really awake, or where I feel awake but am immobile). These states can result in a mistaken belief that I'm fully awake and conscious when I'm not, which I can determine is not the case by oddities in the experience or by really waking from it.

Lippard said...

The following is of possible relevance (from

States of Consciousness

Hobson, in his talk "Dreaming and the Brain," took another approach to understanding consciousness, addressing "states" of consciousness between sleeping and waking. Hobson said that levels of consciousness change with levels of activation in the brain. Different regions of the brain have also been linked to different states of consciousness by comparing positron emission tomography brain images of sleeping and waking people. Different sets of chemicals are associated with the sleeping and waking states as well.

"The kind of consciousness we feel," Hobson said, "is actively controlled by the brain stem," the lower stalk of the brain that connects it to the spinal cord. In different sleep states, Hobson explained, the sensory inputs are blocked; at other times the abstract thinking inputs are blocked. Different waking states such as daydreaming, being vigilant, relaxed, or drowsy, are also governed by the brain stem in this way.

"So now we're beginning to get a rather more complete picture of how consciousness changes with changes in the brain state," said Hobson. "Consciousness is the forebrain's representation of the world, our bodies and ourselves, and of course this is the great mystery...we still haven't said how this happens. What we can say is it is always a construction whose level, focus and form depend on the brain stem."

Lippard said...

Here's another fascinating article from earlier this year:

Researchers are reporting what they describe as the first clear picture of how and why consciousness fades as someone falls asleep.

A technique called transcranial magnetic stimulation shows that "the brain breaks down into little islands that can't talk to one another," explained Dr. Giulio Tononi, a professor of psychiatry at the University of Wisconsin-Madison, and lead author of a report on the work in the Sept. 30 issue of Science.

[There are critical comments from Stickgold and positive comments from Massimini in the article.]

Clayton Littlejohn said...

"The premises of the argument in the article are (1) that all knowledge entails a connection between the knower and the object/event/state that is known, (2) that physicalism only allows for physical, causal connections between objects/events/states, and (3) that it seems impossible to provide such a physical connection between my brain as knower and the awake state of my brain as the object of knowledge under the circumstances I describe."

Two points. First, this is a causal requirement on knowledge that is independent of physicalism: "Whatever the mind or brain comes to know about reality must be through such cause-and-effect connections with the events and objects that are known." There might be independent motivation for it, but if there is, it would apply to non-physicalist accounts of knowledge. If there's no motivation for it, the argument is a very weak _external_ critique of physicalism (i.e., the causal requirement on K, which physicalism isn't committed to and which isn't independently motivated, causes trouble for physicalism when combined with that view).

Now, suppose you think that the causal requirement is independently motivated. Then other views have to accommodate it, too. Does dualism accommodate it? I agree that, "Logical connections, however, defy being equated with or reduced to chains of cause and effect." Logical connections cannot be equated with/reduced to causal connections between states of material or immaterial substances, so the objection would apply with equal force to dualism.

Second, does the argument assume that a purely material being cannot represent states independent from it? If you agree that a material being could have beliefs and experiences, why can't the physicalist say that reasoning involves reasoning in accordance with external rules? Isn't that just what the dualists think? Or, do you think dualists haven't read Frege and accept psychologism about logic?

My guess is that the argument only has force if it can show that a perfectly material being doesn't have beliefs. The rest of it, I think, is a distraction.

D3U7ujaw said...

Darek Barefoot,

But what about the syllogism that I (mistakenly) thought was means by which I know I am awake: "I only experience bodily sensations when I am awake; I am experiencing bodily sensations; therefore I am awake." Is the syllogism still sound? If the syllogism is sound and if it is different than the sensing of an awake state of one part of my brain by another part

The syllogism could be sound but we have trained ourselves (to the extent that it doesn't come naturally) to pass the abstractions of bodily sensations and memories through mental logical/linguistic rules to arrive at beliefs.
Crudely, it would be as if I typed all my sensations (symptoms) into WebMD, hit submit, and then believed everything the software tells me about myself because it's rarely wrong. That would demonstrate a continuous causal flow from my sensations to my beliefs, with a complex mathematical, logical, rule-based and "physicalistically operating" (I hope we would agree) software program linking the two. Can't the brain work similarly?
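A minimal sketch of the kind of pipeline I have in mind, with invented sensation labels and rules: sensations flow causally through hand-coded rules to a belief, in the spirit of the WebMD analogy.

```python
# Illustrative only: a purely causal, rule-following pipeline from
# sensations to beliefs. All labels and rules are invented.

SENSATIONS = {"feels_limbs", "hears_room", "breathing_deep"}

# Hand-coded diagnostic rules, analogous to the symptom-checker software.
RULES = [
    (lambda s: "feels_limbs" in s or "hears_room" in s, "I am awake"),
    (lambda s: "breathing_deep" in s and "feels_limbs" not in s, "I am asleep"),
]

def beliefs_from(sensations):
    # Each rule fires mechanically; every step here is cause and effect.
    return [belief for test, belief in RULES if test(sensations)]

print(beliefs_from(SENSATIONS))  # ['I am awake']
```

The program embodies the syllogism's conditional ("sensations only when awake") as a rule, yet nothing in its operation is anything other than one physical state causing the next.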

So I'm not getting exactly where the thought experiment is challenging physicalism.

Darek Barefoot said...


"First this is a causal requirement on knowledge that is independent from physicalism"

It has been a while since I wrote that piece and thought about it. On a practical level, physicalism enshrines scientific knowledge, and scientific knowledge requires some objective, observational basis. I was trying to get at an acquaintance relation, the purely subjective character of the knowledge we have of our own thoughts and state of thinking.

At the very least, I think that we can separate objective, observational knowledge from the kind of knowledge we have of our own thoughts. If both kinds of knowledge are deserving of the word, then we have at the very least a descriptive dualism--two separable but valid ways of knowing.

Does that kind of divide sit more comfortably with physicalism or dualism? Obviously that is a judgment call.

"If you agree that a material being could have beliefs and experiences"

I suppose that is exactly the question to which I was trying to tease out the answer: can purely material relations, as they are understood from an objective, observational perspective, capture the reflectivity with respect to thinking states that I was describing?


I suppose the point is not whether a computer can process symbols (or objects that we can perceive as symbols) but whether we can infer from symbol processing that subjective experience of conscious thinking states is occurring. Does the computer "experience" that it is thinking in virtue of processing symbols? I think many of us can be forgiven for doubting that symbol processing alone (whether representing mathematical equations or syllogisms) is evidence of the kind of Cartesian knowledge we have of our own thought processes.

Again, if there is a divide between objective, observational knowledge of the world and subjective, experiential knowledge of our own thinking states, then we have a categorical divide, i.e., a dualistic rather than monistic situation, at least with respect to knowledge.

Clayton Littlejohn said...

Hey Darek,

Thanks for the response. So, I was pushing on two points. The first is simply that physicalism is a metaphysical thesis and whatever epistemological implications it has aren't obvious. The causal theory of knowledge is the sort of thing people often subscribe to on grounds that have nothing to do with physicalism, so that's why I was worried.

On the issue of whether material beings can have thoughts/beliefs, this is one of those issues that is difficult to sort out. My worry is simply this. If you can show that they cannot, _that_ seems like the real problem with physicalism. We have beliefs/thoughts. If, however, we want the argument from reason to do some real work against the materialist/physicalist who thinks that material/physical beings can have thoughts, it's not clear if the argument you've offered points to any additional problem or just amounts to the claim that material things cannot reason because they don't have beliefs. On this reading, the argument from reason is _really_ just the argument that all materialists are committed to eliminativism.

Darek Barefoot said...


A highly nuanced physicalist account will of course attempt to include mental states. I suppose part of my point is that most secularists have a general notion of naturalism/physicalism that includes a strongly science-based epistemology: what we know of the world is either directly accessed through the senses or arrived at by inference to the properties of unobservables based on direct sensory information.

This is a simple and appealing picture that blithely leaves out of account the acquaintance information we have of our own mental states. At the very least, this creates a more complex epistemological situation than garden-variety secularists might expect. Mental states seem to put a large and mysterious (in some degree) wrinkle in the epistemological picture.

To the deeply committed and conversant physicalist, this may indeed seem to be just an oddity. But it deserves more recognition than it gets.

Another way to come at the issue is through the other minds problem. I'm not up to date on philosophy of mind, but I doubt that the other minds problem has been completely put to bed.

Edwardtbabinski said...

DAREK: "Logical connections, however, defy being equated with or reduced to chains of cause and effect."

ED'S RESPONSE: Depends on how you define "reduced to." Both you and Vic appear to define "reduced to" as invalidating the very existence of logical connections. But neither of you tackle the opposite question, namely, how "logical" are thoughts that are NOT connected to sufficient causes?

In naturalism and physicalism the brain is embedded in the causal flow of cosmic forces (admittedly, forces in physics can be quite weird, but they are still part of all cosmic forces), as an integral part thereof. So the brain-mind makes sense by being a part of that whole. But brain-mind substance dualism leaves the "mind" "free floating," as it were, and with behavioral possibilities that have no sufficient antecedent cause and hence no reason, nor any reliability when it comes to future plans, predictions, or promises.

The very ideas of logic and reason imply that one is embedded in nature, that one can tell what is like and unlike other things. But the definition of libertarian free will is its inherent unpredictability even if a person is put into exactly the same situation in time and space for a second time. Such "freedom" is more like spinning a wheel of fortune than anything else. It provides no sufficient causality lying behind one's decisions, nor sufficient connectedness with nature.

Edwardtbabinski said...


And on this same theme: If you were assigned the task of trying to design and build the perfect "free-will" model (let us say the perfect, all-wise, decision-making machine to top all competitors' decision-making machines), consider the possibility that your aim might not be so much to "free" the machinery from causal contact, as the opposite, that is, to try to incorporate into your model the potential value of universal causal contact; in other words, contact with all related information in proper proportion -- past, present, and future.

It is clear that the human brain has come a long way in evolution in exactly that direction when you consider the amount and the kind of causal factors that this multidimensional intracranial vortex draws into itself, scans, and brings to bear on the process of turning out one of its preordained decisions. Potentially included, thanks to memory, are the events and collected wisdom of most of a human lifetime. We can also include, given a trip to the library, the accumulated knowledge of all recorded history. And we must add to all the foregoing, thanks to memory and foresight, much of the future forecast and predictive value extractable from all these data. Maybe the total falls a bit short of universal causal contact; maybe it is not even quite up to the kind of thing that evolution has going for itself over on Galaxy Nine; and maybe in spite of all, any decision that comes out is still predetermined. Nevertheless it still represents a very long jump in the direction of "freedom" from the primeval slime-mold, the Jurassic sand dollar, or even the latest model orangutan.

JSA said...

Sounds like Darek has never had a lucid dream, or a dream within a dream.