Saturday, November 07, 2009

Reason's debt to Freedom? Why I'm skeptical

Aristophanes wrote: I have a question for Professor Reppert, and I'm not sure where to direct it. It pertains to the argument Lewis gives against Naturalism in Miracles: Lewis's argument seems to me to be that determinism gives us a defeater for reason. Can anyone point out where to direct questions for Professor Reppert's attention?



Well, it depends on the kind of determinism. If it is determination by a non-rational source, then this is a problem. On the other hand, I have no choice about whether to believe that 2 + 2 = 4. If I am rightly situated with respect to that truth, then I don't choose whether to accept it.




Although there have been papers like Warner Wick's "Truth's Debt to Freedom" (Mind 73, no. 292 (1964): 527-537), that is not the line I would be especially inclined to pursue.

38 comments:

Anonymous said...

While you're taking requests, would you be interested in doing a brief post on Galen Strawson's Basic Argument against moral responsibility?

Steven Carr said...

What are 'rational' sources and what are 'irrational' sources?

Is ectoplasm rational, or whatever it is that a mind is made out of?

Jesse said...

I agree, Victor. In fact, it seems to me the whole of Lewis' argument is not just (CE) vs. (GC), but also includes the one unique type of (CE) involved in (GC) vs. the other type of (CE). Lewis states that knowledge must be caused by the thing known, which he concedes might be considered a (CE) cause, but a special one, having no parallel in physical (CE). Given the unique type of (CE), it is the thing itself which we know about that is the cause of our ideas (I don't directly know my ideas, I know through my ideas; my ideas are about the thing I know, they are not the thing I know, and that aboutness involves a "wholly immaterial relation"). Given the non-rational (CE), devoid of "aboutness", however, it must be our mental state itself, that is, our own mind, of which we are directly aware. In that case, our starting point is a self-refuting conjecture; self-refuting because it cuts us off from the ability to know anything outside our own mind while continuing to assert that physical causes, outside our own mind, exist.

The argument, therefore (and in sum), starts at the fact of knowledge, that "I know" something. What Lewis seems to be saying, looking at his argument as a whole, is that in light of that fact Naturalism both reduces knowledge to knowledge of mental representations and, what's worse, premises the formation of those representations on "perpetual happy coincidence throughout the whole of recorded time."


Doctor Logic said...

Victor,

I'm trying a new approach in explaining my view in the hope that it will make my position clearer.

I'm sure you agree that it is possible for a mechanical system to find patterns in its inputs.

This is called pattern recognition in computer science, and, for the rest of this comment, I will refer to it as "recognition". You don't have to take this to mean conscious recognition in the same form we humans experience it, but it is recognition nonetheless. I'm not trying to sneak anything in by my use of the word.

Now, there are some mechanical recognizers that are very brittle. For example, early character recognition systems would break if the scanned letters were too large or too small, or if the letters were rotated, or in a different place in the visual field, or if the letters were slightly misprinted, etc.

However, there are learning pattern recognizers that are very flexible. In terms of optical character recognition, they can recognize a letter A if it is anywhere in the visual field, at any size, somewhat rotated, and even with parts missing.

What can we say about a flexible pattern recognizer? We can say that it doesn't only recognize a specific input. It recognizes a whole class of inputs as matching a pattern. In terms of character recognition, it doesn't only match a single letter "A", but the letter "A" in any font, at any scale or rotation, and even in degraded print.
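A minimal sketch of what such flexibility amounts to, assuming a toy glyph representation (a list of 2-D points) and a made-up tolerance; this is illustrative only, not any real OCR system:

    # Python sketch: one template recognizes a whole class of inputs,
    # because position and scale are normalized away before comparison.
    import math

    def normalize(points):
        # Translate to the centroid, then rescale to unit spread.
        cx = sum(x for x, y in points) / len(points)
        cy = sum(y for x, y in points) / len(points)
        centered = [(x - cx, y - cy) for x, y in points]
        spread = math.sqrt(sum(x * x + y * y for x, y in centered) / len(centered))
        return [(x / spread, y / spread) for x, y in centered] if spread else centered

    def matches(template, candidate, tol=0.1):
        # Crude point-by-point comparison of two normalized glyphs.
        a, b = normalize(template), normalize(candidate)
        return len(a) == len(b) and all(math.dist(p, q) < tol for p, q in zip(a, b))

    letter_a = [(0, 0), (1, 2), (2, 0), (0.5, 1), (1.5, 1)]
    shifted_and_doubled = [(2 * x + 5, 2 * y + 3) for x, y in letter_a]
    print(matches(letter_a, shifted_and_doubled))  # True: same pattern, new input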

continued...

Doctor Logic said...

This learned pattern is an ABSTRACTION. Again, so as not to sneak in anything via terminology, I'm defining abstraction as a class defined by a filter. The recognizer is the filter... characters that it recognizes belong to the class.

For us humans, "rabbit" is an abstraction. It doesn't refer to a particular, but to a class of things we would call "rabbit". This includes non-biological rabbits, and even rabbit-shaped jello desserts.

There are two ways to look at this. I think a Thomist view is to say that there is this real class of things with rabbitness in them, and that perhaps we humans somehow have a rabbitness detector in our minds.

A better approach is to say that our prior experience with rabbit-shaped things creates a rabbit filter in our minds. There is no real class of rabbits outside of minds, there are only individuals, and the fact that a mind builds a filter based on its experience of rabbit-shaped external entities is what gives rise to the perception of a class of rabbits.

Abstractions have the property that they would recognize a pattern, even if that pattern is not ever actually presented. A flexible, trained, mechanical rabbit recognizer would recognize a purple rabbit, even if 1) it had never seen a purple rabbit before, and 2) even if no purple rabbits exist in the universe. This is a simple statement of a counterfactual. If the system's inputs were as if generated by a purple rabbit, the system would recognize those inputs as a rabbit.
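Here is the same point as a minimal sketch, with made-up shape features; the filter keys on shape and ignores color, so the counterfactual purple case matches even though no such instance was ever presented:

    # Python sketch: an abstraction as a class defined by a filter.
    def rabbit_filter(thing):
        # Membership test learned from experience: shape features only.
        return (thing["ear_length"] > 5.0 and
                thing["tail_length"] < 3.0 and
                thing["legs"] == 4)

    brown_rabbit = {"ear_length": 6.0, "tail_length": 2.0, "legs": 4, "color": "brown"}
    purple_rabbit = {"ear_length": 6.5, "tail_length": 1.5, "legs": 4, "color": "purple"}
    print(rabbit_filter(brown_rabbit))   # True
    print(rabbit_filter(purple_rabbit))  # True, though no purple rabbit was ever seen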

continued...

Doctor Logic said...

Okay, so, back to mechanical recognizers. Our flexible character recognizer defines a class of stuff. Stuff that is recognized, and stuff that isn't. In that sense it is an abstraction.

The next level is to consider patterns within patterns. Surely, it is possible for a system to recognize two rabbits in its visual field, as opposed to one rabbit. And two squirrels in its visual field, as opposed to just one squirrel. If a pattern recognizer can recognize patterns in patterns, then there's no reason why it cannot learn to recognize that it is currently seeing two of X, where X is any input.

In that case, the system has learned, from nothing but its inputs, that there are rabbits, squirrels, pairs of each, and that there is a pattern called "TWO". This "TWO" pattern is an abstraction of the number two. It recognizes two of anything. It will even recognize two of something it doesn't recognize yet. If I hand you two bridge rectifiers, you know I'm handing you two of something even before you know what it is I'm handing you.
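To make the "patterns within patterns" point concrete, a minimal sketch with a one-dimensional "visual field" and a crude segmenter (all of it illustrative, not a proposal about real vision):

    # Python sketch: "TWO" as an abstraction over any recognized blobs,
    # including blobs the system has no label for yet.
    from itertools import groupby

    def count_blobs(field):
        # Count maximal runs of 1s in a 1-D visual field.
        return sum(1 for value, _ in groupby(field) if value == 1)

    def is_two_of_something(field):
        # The "TWO" abstraction: fires on two blobs of anything whatsoever.
        return count_blobs(field) == 2

    print(is_two_of_something([1, 1, 0, 0, 1, 1]))  # True: two known shapes
    print(is_two_of_something([1, 0, 0, 0, 0, 1]))  # True: two of... something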

continued...

Doctor Logic said...

Where does this leave us?

First, we can see how a mechanical system is causally linked to mathematical abstractions. That is, any mechanical system that can find patterns in its inputs, and that can find patterns in patterns can find numbers. It's not a long step from this to arithmetic and from arithmetic to formal systems. If aboutness requires a causal link, we have found one. Not just for mathematics, but for any pattern that exists directly in sensation, or within patterns, or patterns of patterns of patterns, etc.

Second, semantic meaning can be cashed out in terms of these pattern recognizers. When I speak of rabbits, I don't need to be referring to some Platonic class of rabbits. I can be referring to that pattern recognizer in my head whose matches I refer to as rabbits. I can mechanically and meaningfully say that "that crunching sound I hear is a rabbit behind the sofa munching on a carrot." And this all makes perfect sense, even if the sound is actually being caused by a wombat eating bonbons. It makes sense because the references can be cashed out in terms of what would be recognized were it the case.

I know this is a complicated issue, and blogs seem to have a limit on how complex an idea they can convey. I just want some confirmation that you folks understand what I'm saying, and I would like to hear your direct answer to what I am saying.

Note: I'm not solving the "hard problem" here, but I AM solving the aboutness problem. A robot will be able to have intentional thought, and will be able to refer to mathematical and logical rules, even with no input from us humans, and even if this mechanism does not produce consciousness in the same way it does in humans. There is no problem of meaning or intentionality. I don't want to hear about how some other argument means that naturalism won't work - I want to stay focused on intentionality.

William said...

Doctor Logic:

I agree that intentionality as you define it (mechanistic abstraction) is linked to basic mechanisms of attention and as such is much more a feature of life in general than is consciousness: the sperm has an intentionality about the egg, for example :).

That said, there is in normal human wakefulness always a bit of P-consciousness mixed in with our A-consciousness. So you are explaining something rather short of what Lewis was talking about.

Doctor Logic said...

William,

Are you saying that Lewis would agree that mechanical minds could be A-conscious of numbers, mathematics, "the good", subjective morality, etc., through normal physical causation and self-organization, but that Lewis merely rejects the possibility of finding corresponding P-consciousness in those mechanical minds?

That's not what I've understood over the years I've been reading posts here. Rather, it seems that the argument is that machines cannot come upon A-consciousness of abstractions by themselves through self-organization. For example, the argument is that we won't ever create a simple mechanism that evolves/learns to think "A-about" what we can think about. And the fact that minds can think about instances of things which do not actually exist is supposed to indicate that there can't be a natural, causal way to explain intentional thought.

I suspect Lewis (& Victor) and all the other supernatural lovers here would say that qualia pose some sort of hurdle for naturalism (for unrelated reasons), but is that really the intentionality argument they are making in the AfR?

If qualia can never be explained by naturalism, there's no point in making that "disproof" of naturalism a premise in some other argument (the AfR) against naturalism.

Anonymous said...

I agree with William that p-consciousness is what makes the problem of intentionality a real problem. There are p-consciousness states of intentionality that wouldn't qualify as intentionality in the a-consciousness sense: e.g., if I think (as Descartes proposed) about a polygon with 1000 sides, my brain never was in a causal relationship with any such thing, and this brain state is probably not regularly linked to certain states of my environment. So it's clearly not an abstraction of anything I've ever seen. If anything, it's a heavy modification of an abstraction. But how does this fit into your picture of a-consciousness intentionality?

Second, it seems that intentionality as understood by Dennett and you (a-consciousness, as you call it) poses a real problem for anyone who takes scientific reductionism seriously. What exactly is the intentional object of a mechanical system? How do we determine it? What mechanical systems are candidates for having an intentional object? Why should my brain have an intentional object and not a part of it, or a part of that part? It seems that we can only answer these questions by appealing to a folk-ontology that seems incompatible with scientific reductionism.

Doctor Logic said...

Anonymous,

e.g., if I think (as Descartes proposed) about a polygon with 1000 sides, my brain never was in a causal relationship with any such thing, and this brain state is probably not regularly linked to certain states of my environment. So it's clearly not an abstraction of anything I've ever seen.

But it clearly IS an abstraction of something you have seen. You've seen triangles and squares and pentagons. All of these instances are flat with sides of equal length. You can abstract from numbers to counting, and then to counting sides on polygons. A 1000-sided polygon is a flat geometric figure, with sides of equal length, and when you count those sides you count to 1000. So it is a straightforward abstraction from things you're familiar with.

What exactly is the intentional object of a mechanical system? How do we determine it? What mechanical systems are candidates for having a intentional object?

Intentionality requires several things. It requires the ability to learn to recognize patterns. It requires the ability to create abstractions, i.e., flexible pattern recognition that can recognize classes of patterns by general features (e.g., the pattern of a polygon). Finally, it requires the ability to flexibly recognize patterns within abstractions. Intentionality may also require the ability to create propositions about these abstractions, but let's leave that out for a moment.

Bricks, Deep Blue, sperm and pocket watches do not have these things, so they lack intentionality. However, the ability to flexibly recognize patterns is fairly straightforward to identify. It may be fuzzy, but it's fuzzy like the difference between a lake, a sea, and an ocean is fuzzy.

Attention may also be required for intentionality, but even attention is a concept that is understood in the study of neural networks.

For the AfR to have any bite, it must be more than just an argument from ignorance. If the AfR showed that it was impossible for material minds to be causally connected to ideas, then it would be interesting. But if material minds are connected to the relevant concepts by abstract pattern matching, then the AfR has nothing to say. The AfR gets nowhere if it asks "how does the brain do that?" It has to make an argument like "it is impossible for the brain to do that!"

Anonymous said...

@Dr. Logic
But it clearly IS an abstraction of something you have seen. You've seen triangles and squares and pentagons. All of these instances are flat with sides of equal length. You can abstract from numbers to counting, and then to counting sides on polygons. A 1000-sided polygon is a flat geometric figure, with sides of equal length, and when you count those sides you count to 1000. So it is a straightforward abstraction from things you're familiar with.

It seems to me that you first claim that it's an abstraction and then argue directly against this claim by pointing out that it's a heavy modification of an idea that was an abstraction.

Intentionality requires several things. It requires the ability to learn to recognize patterns. It requires the ability to create abstractions, i.e., flexible pattern recognition that can recognize classes of patterns by general features (e.g., the pattern of a polygon). Finally, it requires the ability to flexibly recognize patterns within abstractions. Intentionality may also require the ability to create propositions about these abstractions, but let's leave that out for a moment.

This is exactly where the folk-ontology comes into play. What or who is a candidate to fulfil these requirements? Quine said that anything is an object if we can quantify over it. This is a clear definition, but it inflates our ontology. With Quine's definition, nations, families, parts of hard discs, hard discs, computers containing hard discs, clouds, parts of clouds, clouds and rain, ant colonies, ant colonies minus one ant, ant colonies plus a spider, and so on fulfil your requirements for intentionality. (And if they don't, it's easy to find other absurd examples by playing with Quine's idea.)

You need either an additional theory that claims patterns to be ontologically fundamental (which would be a huge philosophical surprise), or you must admit that patterns and pattern-recognition are observer-relative. And this would mean that intentionality is observer-relative. But clearly many cases of intentionality are not observer-relative.

It seems to me that the fatal flaw of your definition of intentionality is the crude folk-ontology it's based upon.

William said...

See Searle's Minds, Brains and Programs (1980). Basically, he argues that if a machine is made to be intentional, its intentionality is borrowed from its designer or user. I think Lewis might say that a machine reasons the way a slide rule multiplies: as an extracorporeal extension of our intrinsic reasoning. So, for Lewis, machine consciousness as a human artifact would beg the original question.

William said...

anonymous said:
There are p-consciousness states of intentionality that wouldn't qualify as intentionality in the a-consciousness sense...

Yes... This is almost too abstract to easily grapple with, but let me take an initial stab at it.


In particular, the AfR requires that we look at the system of reason, of science or logic, as an abstract object of p-consciousness. We need to do this in order to evaluate our logic as being valid in its basis or not. This is not just automatic pattern recognition with the behavioural readiness that characterizes a-consciousness. It is actually p-consciousness of what a-consciousness can do. This is not just a higher-order thought because it is not just a pattern of patterns, but rather involves a subjective experience of looking at the reasoning to decide on what is true or valid (p-consciousness of the abstraction).

Anonymous said...

@ Dr. Logic:

For the AfR to have any bite, it must be more than just an argument from ignorance. If the AfR showed that it was impossible for material minds to be causally connected to ideas, then it would be interesting.

The AfR works fine with a causal connection between material minds and ideas. (See, e.g., Plantinga's "Content and Natural Selection": http://philosophy.nd.edu/people/all/profiles/plantinga-alvin/documents/CONTENTANDNATURALSELECTION.pdf)

All the AfR needs is a lack of truth-related constraint imposed on the mental content by the physical correlate. The only attempt to show that there is such a constraint has been made by people like Dretske or Millikan with their theories of indicator semantics. But their attempt seems question-begging, for they simply assume that there is such a constraint and then try to formulate it.

@William: Everybody has his preferred terminology, which sometimes makes it hard to understand each other :-) But I think I agree with you.

Doctor Logic said...

Anonymous,

It seems to me that you first claim that it's an abstraction and then argue directly against this claim by pointing out that it's a heavy modification of an idea that was an abstraction.

You'll have to make this objection more explicit. Everything I've described is a case of finding flexible pattern matches (abstractions) or flexible pattern matches within flexible pattern matches (abstractions of abstractions).

With Quine's definition, nations, families, parts of hard discs, hard discs, computers containing hard discs, clouds, parts of clouds, clouds and rain, ant colonies, ant colonies minus one ant, ant colonies plus a spider, and so on fulfil your requirements for intentionality.

Gosh, I really don't see this at all. I assume I haven't been clear.

In what way are clouds flexible pattern matchers? They surely contain patterns, but they don't find patterns themselves. They don't regurgitate a complete pattern when exposed to a part of it, for example. If you see an elephant's trunk, your mind recognizes it as an elephant. You see a complete elephant in your mind's eye. You have associative memory. A cloud has no memory at all, let alone associative memory.

Doctor Logic said...

William,

Basically, he argues that if a machine is made to be intentional, its intentionality is borrowed from its designer or user.

But I can create a machine that doesn't look for any specific pattern, but which finds patterns in its environment. I have not, in creating this machine, infused it with any intentionality about anything in particular.

In particular, the AfR requires that we look at the system of reason, of science or logic, as an abstract object of p-consciousness. We need to do this in order to evaluate our logic as being valid in its basis or not.

I disagree. Validity is a pattern, just like any other. An a-conscious system is perfectly capable of knowing whether its reflections have validity. A p-consciousness of this, independent of the a-conscious event, is not necessary. If p-consciousness is a byproduct of a-consciousness, we get a very pretty picture.

I think your argument is not valid. :) You are saying that, since your experience of decision-making always involves your subjective experience of what it is like to make a judgment, judgment must require this subjective experience independent of an a-conscious event. I can find no hint of justification for this claim. At best, you might be able to say that p-consciousness might be necessary for making judgments, but for the AfR to argue against the unknown, it needs a strong claim. It is at least as reasonable to say that p-conscious states are corresponding byproducts of a-conscious states.

Take A=B, B=C, A!=C in arithmetic or in some other system described by simple mathematical rules. Do you think that an a-conscious system could not recognize that such a set of statements is inconsistent, and that it could not abstract from that invalidity to a more general notion of invalidity? It seems obvious to me that it could.
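As a minimal sketch of how mechanical the check can be (the three-element domain and the lambda encoding are my own illustration, not anyone's proposal):

    # Python sketch: flagging A=B, B=C, A!=C as jointly unsatisfiable.
    from itertools import product

    constraints = [lambda a, b, c: a == b,
                   lambda a, b, c: b == c,
                   lambda a, b, c: a != c]

    def satisfiable(constraints, domain=range(3)):
        # Search every assignment; True if some model satisfies all constraints.
        return any(all(f(a, b, c) for f in constraints)
                   for a, b, c in product(domain, repeat=3))

    print(satisfiable(constraints))  # False: no model exists, so the set is inconsistent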

Doctor Logic said...

Anonymous,

The AfR works fine with a causal connection between material minds and ideas...

All the AfR needs is a lack of truth-related constraint imposed on the mental content by the physical correlate.


This is a separate point, IMO. I want to establish that material systems can have a-conscious thoughts about things, that their references make sense, that they can mentally process abstractions, thoughts about things that might not physically exist.

Victor is often asking what makes one piece of matter about another. I'm answering that. A machine can form its own concepts and abstractions and think about things without humans implanting that intentionality. This should be uncontroversial, IMO.

If you want to argue (like Plantinga) that what it is like to run from a lion is actually what it is like to paint your house, go right ahead, but it's a different argument.

Personally, I think Plantinga's argument is absurd. Plantinga is supposing that in a natural system, p-conscious states are wired to the wrong a-conscious states. But where did the p-conscious states come from? Plantinga seems to think we independently evolved physical mechanisms for both kinds of states because there need be no correlation between them. Even if I were an epiphenomenalist, this would not make sense to me. I can't believe so many folks out there think this is a plausible idea.

If I am a baby and I have my first a-conscious brain state, which p-conscious state do I get?

If my first a-conscious experience is of seeing red, my first p-conscious experience can't be that of feeling cold because I haven't yet experienced cold, so the false p-conscious state I'm supposedly mapping to doesn't exist for me to map to yet.

The only thing it can be like to have my first a-conscious state is what it is actually like, by definition. P-conscious states are trivial, and cannot be mistaken (this is what Wittgenstein says in his private language argument).

William said...

Doctor Logic:
---
Validity is a pattern, just like any other.
---

Pattern matching is what the system does AFTER it has criteria of validity. You are assuming the system knows what validity _is_ in order to match the pattern to the valid. For example, a digital system has to know a 0 from a 1. Arithmetic logic is assumed prior to the pattern matching. The AfR is looking at the rules behind your matching.

Anonymous said...

To Doctor Logic:

This is a separate point, IMO.
I agree that it's, strictly speaking, a different point from establishing a-consciousness at the level of material objects. But usually it is assumed that once we have established the a-consciousness meaning of terms like "intentionality" and "representation", it's a small step to p-consciousness. Or sometimes it's denied that it's any step at all (Dennett and others).

I want to establish that material systems can have a-conscious thoughts about things, that their references make sense, that they can mentally process abstractions, thoughts about things that might not physically exist.
I understand your task, but I think my objection stands: it's based on folk-ontology and ignores scientific reductionism. You have only demonstrated that we can look at agglomerates of particles as if they were objects and as if they represented other objects. But that's not a very interesting conclusion.

Personally, I think Plantinga's argument is absurd. Plantinga is supposing that in a natural system, p-conscious states are wired to the wrong a-conscious states
Once you admit that there is a distinction between a-consciousness and p-consciousness, you're in Plantinga's game. There are presumably laws that link a-consciousness to p-consciousness. Laws can have any value that is logically possible. Why should we assume that the laws guarantee the truth of the content of states of p-consciousness? Plantinga doesn't think p-consciousness has evolved independently; he thinks God has fixed the psycho-physical laws in such a way that true beliefs would be selected for.

I think his argument is ingenious because it puts doubts about our epistemic status that go back to Descartes into the context of modern philosophy of mind.

Doctor Logic said...

William,

Pattern matching is what the system does AFTER it has criteria of validity. You are assuming the system knows what validity _is_ in order to match the pattern to the valid.

I think you are assuming that I programmed the system to match validity. A system that does nothing but discover new and novel patterns will find "validity" as a pattern in its inputs.

Here's an analogy. Children learn to recognize objects in their visual field. We're not born with a built-in validation rule for identifying squares, triangles or orange toothpaste tubes. We start with a general system for matching patterns in our inputs, and then we self-organize recognition of particulars based on past experience.

Similarly, a general pattern matching system (which is fed no prior information about the kinds of patterns it might find in its inputs) will learn to recognize a pattern of logical validity. The developer of this system would not have to embed a rule for logical validity into the pattern matcher.

You could argue that the general pattern matching algorithm contained rules of its own, but what is at stake here is where those rules came from. There's nothing controversial about saying that a pattern matching algorithm could evolve because even simple PM systems have adaptive advantages.
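A minimal sketch of what "fed no prior information" can look like, with a one-dimensional input stream and an arbitrary novelty radius (both assumptions of mine, purely for illustration):

    # Python sketch: a matcher that starts with no categories and grows
    # one recognizer per novel cluster of inputs it encounters.
    def assign(clusters, point, radius=1.0):
        # Match the point to an existing cluster, or found a new one.
        for i, center in enumerate(clusters):
            if abs(center - point) < radius:
                clusters[i] = (center + point) / 2  # crude center update
                return i
        clusters.append(point)  # novelty: a new pattern is born
        return len(clusters) - 1

    clusters = []
    for observation in [0.1, 0.2, 5.0, 5.3, 0.15, 9.9]:
        assign(clusters, observation)
    print(len(clusters))  # 3 patterns discovered, none pre-programmed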

Doctor Logic said...

Anonymous,

Plantinga doesn't think p-consciousness has evolved independently; he thinks God has fixed the psycho-physical laws in such a way that true beliefs would be selected for.

Sorry, I meant what Plantinga thinks in the context of naturalism.

There are presumably laws that link a-consciousness to p-consciousness. Laws can have any value that is logically possible. Why should we assume that the laws guarantee the truth of the content of states of p-consciousness?

Because the content is trivial. (This was where I connected with Wittgenstein's PL argument.)

In the following, the a-conscious stuff is capitalized.

We generally think that an a-conscious experience leads to a corresponding p-conscious experience:

X -> x
Y -> y
Z -> z

Plantinga says that the causal connection can be scrambled:

X -> y
Y -> z
Z -> x

However, this won't work. To see why, we have to go back to the definitions of the symbols. A-conscious experience of X relates to X in the real, physical world. My A-conscious experience of running from a lion involves the lion, and my escaping from it.

But how is x defined? x is defined as "what it is like to have the A-conscious experience of X." In the lion example, it is defined as "what it is like to run from a lion". And it cannot possibly be wrong because whatever p-conscious state I am in when X happens is x by definition.

Also, if X is the only a-conscious state I recognize (e.g., I am an infant), then not only is x defined to be correctly correspondent to X, there are no other self-organized y's or z's yet existing for my X to be mapped to.

What Wittgenstein says is that our private language statements (which involve "what it is like") are incorrigible. They are what they are.

So, Plantinga fails on two counts. First, he forgets the definition of p-conscious states and assumes they can be made independent of a-conscious states, and then he neglects to consider what sort of causal mechanism would cause the mapping to get scrambled.

Your statement that "Laws can have any value that is logically possible" is true only in the absence of any constraints. If the mind is self-organizing, then there is a big constraint. Both a-conscious states and p-conscious states are self-organizing based on experience. When X is learned, X can only be mapped to p-conscious states that already formed through prior self-organization. I can't map "typing on my keyboard" to "running from a lion" if I have never run from a lion.

I think it's pretty clear that while Plantinga claims to assume naturalism, he continues to think of p-conscious states as something supernatural. Plantinga ignores the simple naturalistic model of how minds work. If p-conscious states are physically generated by the firing of neurons, what it is like to X is what it is like to fire the neurons corresponding to X, and so it is silly to think that what it is like to fire the neurons for X is what it is like to fire the neurons for Y.

Maybe Plantinga's mistake in thinking "laws can have any value" is to assume there is some law of mapping particulars of consciousness, when the actual physical law would be indifferent to particulars. The law that maps a-conscious states to p-conscious states is independent of the particular state, but instead must have something to do with the implementation of the a-conscious state.

Gregory said...

I'm not skeptical about reason's debt to freedom. Here's why:

Basic algorithmic instructions follow a single course. In computer programming and electrical circuitry, there are two basic switch states: 0 and 1.

The algorithm will designate the arrangement/pattern of those two states. AND gates and OR gates are the basic instructions that specify whether a switch, or chain of switches, is "on" or "off". In terms of household electricity, for example, current flows either as direct current (DC) or alternating current (AC), the difference being that AC allows the current to change/alternate direction along the conduction pathway.
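A minimal sketch of those two gate types as truth functions over the 0/1 switch states (purely illustrative):

    # Python sketch: AND and OR gates over switch states 0 and 1.
    def AND(a, b):
        # Output is on only when both inputs are on.
        return 1 if (a, b) == (1, 1) else 0

    def OR(a, b):
        # Output is on when at least one input is on.
        return 0 if (a, b) == (0, 0) else 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "-> AND:", AND(a, b), " OR:", OR(a, b))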

In terms of computer technology, the arrangement of the switches is more complex. We call the "switch", in computer parlance, a "bit"... not to be confused with a "byte" (a unit of data storage). My PC, for instance, is a 32-bit machine: its processor and motherboard handle data in 32-bit words, that is, 32 distinct on/off channels at a time. The more bits you have, the more data you can move at once and the more memory you can address (like RAM). The Basic Input/Output System (BIOS) tells the motherboard which on-board devices to turn on or off, designates the order of operation in the boot sequence, and regulates the current sent to those devices. The computer screen/monitor is nothing but an electronic tablet of pixels that light up in certain regions because of the synergy of the BIOS, the motherboard, the various components, the external devices that allow user interface, the operating system, and the various programs loaded onto the hard drive.

The PC's algorithm, otherwise known as its "programming code", is the master plan for the function of the PC. It makes the PC more than simply an assemblage of electronic parts and devices which can transmit electrical currents. The algorithm designates a specific set of instructions mandating the input/output pattern of the PC so that you and I can use the internet to post on this blog. And unless the PC malfunctions/glitches, or the programming code is manually altered, the PC will continue to operate along its predestined pathways.

Humans, by contrast, are not anything like computers, in that we don't follow any set of programming instructions. If it's argued that we do, in fact, happen to follow such an algorithm, then that would only mean that there is a strong case to be made for an Intelligent Programmer. Rather, human beings are the creators of algorithms. Algorithms neither create, communicate nor innovate; they simply operate. Human beings do, if nothing else, create, communicate and innovate.

And for the skeptic to attempt to provide a refutation by counter-example or some type of "reason", let's say, is simply to prove my point: that humans are not causally determined... unlike computers and machines. Human beings, on every level, go "against the grain". Gravity pulls objects towards the earth, but men build planes, shuttles and rockets. Disease afflicts biological agents, but men invent medicines and cures. Weather can be harsh and unforgiving, but men build houses and sew garments. Nature is harmonically cacophonous, but men build musical instruments and sing. Animals simply sleep, hunt and eat, but men philosophize and write poetry. The world seems completely indifferent to our needs, and supposedly confers benefits on those "fit to survive", but men build churches, hospitals and homeless shelters.

To deny all this, ironically, is to deny "humanism" and the existence of "Humanities" departments at Universities.

Mankind provides, on so many levels, a "reason" to believe in God. Atheism might have some plausibility... that is, if, and only if, there were a complete absence of human life!!

Anonymous said...

Different anonymous here, but I think one point is being omitted.

Plantinga's argument isn't against evolution. It's against the conjunction of naturalism and evolution. It's oft-repeated that for naturalistic evolution, "the only final product of evolution is survival". Even Plantinga doesn't argue that on naturalism + evolution, having true beliefs is impossible - it just becomes tremendously unlikely. In part because "survival" is committed to as the only final product.

Now, if we start to argue that the laws which govern evolution, or minds, etc., make it so the reliable final product of evolution is not just "survival" but "truth", that's fine. But we're now introducing some very obvious teleology into evolution (even more than is already there, and any intrinsic teleology in nature, certainly in evolution, is problematic to say the least on naturalism). Doubly so if the innate ordering of evolution towards attaining truth is extended beyond "truth that aids survival" to "truth that does not aid survival".

In other words, if the way to argue against Plantinga's EAAN is to argue that evolutionary laws and/or psycho-physical laws are set up in such a way that makes the correspondence between thought and truth to be very likely, it's just another way of saying "Plantinga won this one".

Doctor Logic said...

Anonymouses,

I have a suggestion... why not sign the bottom of your comments with a name (fictitious name, if you like)?

It's a bit disconcerting when I don't know who's responding and how to address you. Even when there's only one Anonymous, I can still be uncertain as to whether I'm responding to the same person.

Doctor Logic said...

Gregory,

I could not find a formal argument in your comment. Bacteria "go against the grain" as much as we do. Bacteria even go against our human grain.

Doctor Logic said...

Anonymous #2,

Plantinga's argument seems to be taken in two ways.

One way argues, as you have, that survival and truth are not necessarily the same thing. In that approach a-conscious beliefs and p-conscious beliefs may coincide, but there's no reason for a-conscious beliefs to correspond to truth. This approach doesn't work because truth is prediction, and prediction is survival. The adaptive advantage of the human mind isn't contained in specific, evolved beliefs, but in its ability to discern the rules in new or changing environments. It's like evolution on steroids because we can adapt within a single generation. Yes, there can still be unreliable beliefs, but the kinds of beliefs that are more likely to be unreliable are the beliefs about unverifiable claims, e.g., the supernatural.

The other way Plantinga's argument is taken is in the fashion I was responding to above, i.e., that a-conscious states need have no connection to p-conscious states, and I've given my rebuttal of that argument above.

Anonymous said...

DL, that really seems like equivocation of the highest order. In fact, you didn't even deny anything I've said - you've justified it by saying "Well, truth = survival, so it's all good". Now, you're free to do that - but ultimately, if we're going to expand "survival" in such a way, you're back to conceding to Plantinga. You're insisting the laws of evolution and the psycho-physical laws are such that guarantees of great reliability / "truth" are built right into them. There's a price to pay for that kind of guarantee, and the price is an explanation that is a tremendously awkward fit with naturalism - a concern which goes away when naturalism is given up. (Hence the typical naturalist focus on linking evolution not to truth or true beliefs, but simply to actions regardless of the associated beliefs. But it's precisely that conjunction of E&N that Plantinga targets.)

What's more, "unverifiable claims" go vastly beyond "the supernatural", whatever that may be. Naturalism, idealism, neutral monism, dualism, etc.. all deal with "unverifiable claims" in large part, along with "theories" in general.

Doctor Logic said...

Anonymous,

It's not equivocation, but it was abbreviated.

There is an adaptive advantage to being able to adapt to new and changing environments in less than a generation. This advantage comes into play when 1) humans migrate into new environments, 2) environments change around us, e.g., through climate changes, 3) when humans are responsible for establishing key features of the environment, e.g., through social customs, warfare, etc.

If our ability to adapt within a generation is adaptive (which it is), then humans need to be able to predict things about their environment without dying first. Whereas crocodiles with maladaptive behaviors and responses die out and breed maladaptive behaviors out of the population over generations, humans need to be able to do the same within a single lifetime using brainpower.

This means humans need to infer the correct behavior from observations. Where A implied B in the last forest, perhaps A implies C instead in the new forest. The use of inference means humans must not only learn specific facts, but also must learn to create a coherent web of inferences gluing them together.

It is the need for these experiences to form a coherent picture that dooms the first interpretation of Plantinga's argument. If I lack the ability to make halfway decent inferences or form coherent belief systems, I lose my brain's adaptive advantage over species that learn through genetics. Survival is a matter of creating a coherent web of testable propositions and inferences.

Now, the other interpretation of Plantinga is to say that, well, yeah, the a-conscious inferences and behaviors form a coherent web that connects with reality, but the p-conscious states corresponding to all the a-conscious states might be mapped incorrectly.

My post above addresses this latter problem scientifically and philosophically. Scientifically, there's no way to map onto p-conscious states that don't exist yet.

Philosophically, there is no possible "incorrect" p-conscious state corresponding to an a-conscious state. Whatever it is like to have the a-conscious state X is the corresponding p-conscious state for X by definition.

Anonymous said...

No, it's an equivocation - you've inflated "survival" from a mere material end-result into a built-in guarantee of and bias for true beliefs. If you're going to argue that evolution and psycho-physical laws are inevitably oriented towards producing true beliefs, well done - you've now given an argument that is vastly more likely to fit an Alfred Russel Wallace view of evolution than a Charles Darwin view of evolution. Again, you've surrendered to the Evolutionary Argument Against Naturalism - your defense of evolution comes at the expense of crippling Naturalism horribly by imbuing it with tremendous latent teleology. And you're a hair away from talking about omega points or final causes. Just cram all that under "survival" as well, I suppose.

It's not the beliefs that evolution operates on, but the behavior, period. "Predicting things about our environment" is only useful insofar as it leads to actions that are beneficial for our survival and propagation by orthodox evolutionary theory - but that's precisely because those actions are all that matter. Any belief that leads to the beneficial action will suffice, true or false, rational or irrational. That's what you have to defend to get naturalism out of this unscathed, and Plantinga's argument illustrates why this won't work.

So you're (wisely) not making this argument. But you're (not so wisely) making an argument that gives away the store anyway due to the bias you're building into evolution, and the dramatic expansion you're giving to survival. If you back off and say that humans only need to "infer the correct behavior", well then you're out of the truth game and back into the behavior game, and Plantinga wins on that horn instead.

Remember: "With me [says Darwin] the horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would anyone trust in the convictions of a monkey's mind..." Darwin isn't worried about the ability of man to survive. It's specifically about the convictions of man's mind, owing precisely to his evolutionary theory - and his theory is not Wallace's, or de Chardin's, or any others.

Doctor Logic said...

Anonymous,

I've responded with a specific argument, but you seem to be stuck at 50,000 feet, unable to see any details.

So let's get into the details.

Do you agree that, for humans to have an evolutionary advantage from their big brains and frail bodies, they need to be able to learn new rules of behavior in their environment?

For example, a bushman sees a migrating herd of animals walking through the valley. He has learned that this kind of animal always stops at the water hole. He has learned he can catch and kill smaller individuals of this species, and that he must do so before sunset, or the lions will steal his prize (and eat him too). But he has also learned that this species always stops at the water hole, and this will give him enough time to organize a hunting party that will catch up with the herd. He knows this because he has learned how long it takes to organize a hunting party, how long a day lasts based on the location of the Sun in the sky, etc.

This is a complex web of inferences and predictions about what actions will work. For the bushman to have an evolutionary advantage, he can't just learn them genetically. It's not as if there were a million tribes, and the ones who happened to come up with these behavioral responses survived while the others died out. Any species can do that. What makes humans adaptive is their ability to infer and create sophisticated predictive models.

Now, you are saying that he doesn't actually have the corresponding conscious beliefs about the world, but that he does learn the corresponding behaviors through subconscious or non-conscious inference. He still has to make the correct behavioral inference, even if his conscious beliefs are out of whack. Right?

Again, do you agree that the bushman needs to have a behavior learning mechanism that responds to his sensory inputs with (at least sub-/non-conscious) inferences that lead to complex behaviors, and that these behaviors will be substantially the same as if he had beliefs corresponding to reality, even if his actual beliefs are wildly different?

(You should agree because you say Plantinga does not dispute evolution will work, he just disputes the idea that our beliefs will be reliable if we evolved.)

Anon1 said...

Hey Doctor Logic

This concerns your answer to my post:


But how is x defined? x is defined as "what it is like to have the A-conscious experience of X." In the lion example, it is defined as "what it is like to run from a lion". And it cannot possibly be wrong because whatever p-conscious state I am in when X happens is x by definition.


I think I can see your reasons for opposing Plantinga's argument. I think, however, that they are based on a major confusion widespread among a certain group of physicalists. P-conscious states are not defined in any way (as you claim); they are either ontologically fundamental or in some sense derived from the physical. In neither scenario is there a place for definition. There is a fact of the matter about correlations between a-consciousness and p-consciousness. Given this, there is a meaningful question about the correlations of a- and p-consciousness:

Can we correlate x with Y instead of X? Given any sort of dualism this is logically possible, though not nomologically. Given reductionism, it's not even logically possible. But in both scenarios the content of p-consciousness can or cannot be a representation of the world. This is all that is needed for Plantinga's argument, and most people agree that this is the case.

Also, I think you're making a category mistake with your talk about truth. Of course a conscious state can't be true, in much the same way a physical state can't be true. But nobody claims this to be so; I only said that the content of a mental state can be true, namely the meaning of a sentence or a proposition. And this seems beyond doubt.

I find it interesting that you defend your argument by appeal to Wittgenstein. Most philosophers nowadays agree that his approach to meaning in the Philosophical Investigations ignores much about the phenomena of introspection and language. Many even think that his argument is unintelligible. And the few who actually use the argument in this context (e.g., Beckerman) deny that it's conclusive. I think your defense against Plantinga is obviously weak.

It's also noteworthy that you ignored my ontological argument. It seems to me that this part of my argument is even stronger and easier to talk about.

Doctor Logic said...

Anon1,

Can we correlate x with Y instead of X? Given any sort of dualism this is logically possible, though not nomologically. Given reductionism, it's not even logically possible. But in both scenarios the content of p-consciousness can or cannot be a representation of the world.

But what is the content of p-consciousness? Suppose my infant brain is in the a-conscious state of seeing red. That is, upon sensing red, my brain self-organizes around the pattern of red versus non-red. Supposing I have no other abilities of a-conscious recognition yet, what is the corresponding p-conscious state, and what is its content?

I think those who side with Plantinga assume that there's a whole other world of p-conscious states that are somehow defined independently, in a sort of fantasy world. That, maybe, my seeing red in the a-conscious world might correspond to my licking an ice cream cone at the beach in the p-conscious world. However, this cannot really work, can it? If I don't recognize anything but red as yet, how can the content of the corresponding p-conscious state (which is also physically generated and self-organized) correspond to licking an ice cream? For that matter, how can it correspond to anything at all except the a-conscious state that caused the p-conscious state?

I see what you're saying about the p-conscious state being determined by a physical mechanism, but the content of p-conscious states is trivial. The content of the p-conscious state is "what it is like to X". There's no content there other than the awareness of the a-conscious state of X.

The only way I can see for there to be any further p-conscious content is if there are testable relationships between different p-conscious states.

So, I ask you, do you think p-conscious states have logical relationships? Are there apparent nomological relationships between events in the p-conscious world (even if those p-conscious states don't necessarily correspond to the "truth" of the a-conscious world)?

If so, how is the logic of these states established? If the logic is generated by the logic of the a-conscious states, then the only thing free to float are the mappings from one mental subject to another, e.g., lion = bus, or something like that.

This is where the parallel with Wittgenstein comes in. The relationships between the p-conscious states are public language, not private language. And the private language is linguistically trivial.

This is all that is needed for Plantinga's argument, and most people agree that this is the case.

Well, I don't think most people agree with Plantinga. Most apologists do, but not most philosophers.

It's also noteworthy that you ignored my ontological argument. It seems to me that this part of my argument is even stronger and easier to talk about.

Hmm. If I understand what you were saying, then I think this was where I started in this thread, talking about abstractions. I have a pretty firm definition of these things, so I don't see any folk-ontology or hand-waving.

The dualist assumes that intentionality cannot be defined, and then argues that no physicalist theory of intentionality suffices. But if the dualist refuses to define intentionality (just says he knows it when he sees it), then he's begging the question.

anon1 said...

But what is the content of p-consciousness?

That's the central question here. I think a little phenomenology is important here. My p-consciousness has various contents: sometimes they are related directly to the world (in the case of sensations), sometimes they are loosely related, if they are related at all (dreams, the first glimpse of consciousness in the universe). So it seems to be a brute fact of the universe that there are different contents of p-conscious states. No definition of content is involved here; it's a matter of fact.

I think those who side with Plantinga assume that there's a whole other world of p-conscious states that are somehow defined independently, in a sort of fantasy world.

"Whole other world" hits the nail on the head in my opinion. Why is there such a big fuzz about consciousness? Because, in the words of McGinn, it seems like a radical novelty in the universe. Or in the words of Chalmers: it seems we could now every physical fact of the universe and we'd still not know anything at all about consciousness. I think most philosophers agree that consciousness seems like "a whole new world".

However, this cannot really work, can it? If I don't recognize anything but red as yet, how can the content of the corresponding p-conscious state (which is also physically generated and self-organized) correspond to licking an ice cream?

Why can this not work? How can the content of my p-conscious state when being exposed to red for the first time be red, if I haven't been exposed to red before? The sensation of seeing red is an absolute novelty. It seems to depend upon a brute fact of the universe, a psycho-physical law, that might have any value. This is the point you need to be attacking. Why should there be this interesting correlation and not anything else?

I have a pretty firm definition of these things, so I don't see any folk-ontology or hand-waving.

My point was that your definitions involve fictional objects. You talk about systems, computers, objects, sensors, and so on. But physics doesn't know these things. Physics is about elementary particles. And any reasonable ontology should respect physical reductionism and thus deny the existence of objects like computers, systems, sensors, and so on. Systems are like animal-shaped clouds. We can certainly look at clouds that way, but would we really want to commit ourselves to the existence of cloud-animals? I think you built your theory of consciousness on cloud-animals when it should be built on ontological facts about the universe.

anon1 said...

So, I ask you, do you think p-conscious states have logical relationships?
Are you asking whether there are logical relationships between p-conscious states or between some of their contents?

To the first question my answer would be that I don't know what that means. Are there logical relationships between physical states? Well, physical states don't seem to violate any logical laws, but are there direct relationships? I don't know whether this question makes sense.

To the second question I'd answer yes. Any argument involves states of p-consciousness and logical relationships.


Are there apparent nomological relationships between events in the p-conscious world (even if those p-conscious states don't necessarily correspond to the "truth" of the a-conscious world)?

It certainly seems like that, but my judgment about that matter depends on the existence of such nomological relationships, so it's quite obviously question-begging.

anon1 said...

The dualist assumes that intentionality cannot be defined, and then argues that no physicalist theory of intentionality suffices. But if the dualist refuses to define intentionality (just says he knows it when he sees it), then he's begging the question.

I don't see why he is begging the question. If he claims intentionality to be impossible to define and then doesn't define it, how is this question-begging? It seems to be a prime example of being consistent. It also seems consistent to attack definitions of intentionality if you think it can't be defined. Some parts of the world are necessarily brute facts; possibly intentionality is one of them, and it certainly seems consistent to think it is.

Anonymous said...

DL,

The details are precisely what I see, and they're favoring Plantinga. That's why most of your effort is being put into how to categorize the details (by fixing "evolution" in a way that drastically inflates "naturalism" until it becomes meaningless), rather than fighting on the details themselves.

And no, I disagree that humans "need to learn new rules of behaviors about their environment", because putting it that way obscures what they really do "need". "Learning new behaviors" is a contradiction in terms here, because if the only thing that really matters is the behaviors, the "learning" is superfluous. What is "learned" can be utterly disjoint from the behaviors associated with such learning, so long as those behaviors are beneficial. (AI in video games can "learn" all kinds of "beneficial behavior", but I wouldn't trust the "beliefs" it has. Indeed, I doubt it has any.)

As well, no, I wouldn't agree that the bushman needs to have "a behavior learning mechanism that responds ... substantially the same", since we'd have to talk about what "substantially the same" means here, when it enters the equation, and under what view of evolution. Do you mean the person with true beliefs and the person with false beliefs would both need behavior that contributes to fitness? Sure, why not. Are you asking me, if the behaviors of the deluded specimen and the 'true-seeing' specimen are identical, that... their behaviors are identical? It goes without saying.

I've pointed out repeatedly that sure, you can "solve" Plantinga's problem by insisting that there's something about evolution whereby the production of "true beliefs" is intimately worked into the system. You just happen to deal a hammer blow to naturalism in the process, which is part of the point. Now, Plantinga clearly isn't arguing against evolution here, since he argues the theist has no built-in problem with an evolutionary theory. He's arguing against the conjunction of naturalism with evolution. That's where you have to make your stand: Trying to think of some way, any possible way, that evolution could theoretically lead to true beliefs misses the point entirely. Just make evolution teleological!