Thursday, March 24, 2011

The Steve argument and the AFR

I need to go back over what the Steve argument is. It is a response to the "anti-causal" reply to the argument from reason, or, more particularly, the problem of mental causation. Actually, it's one of a few arguments I use against this position, which goes back to Anscombe's critique of Lewis. I have to give kudos to Clayton for keeping focused on what this post is about.

The Steve argument is an illustration of the fact that when we say that someone is rational, we are saying that evidential relationships are relevant to the actual occurrence of beliefs as psychological events. In particular, when we try to explain why we are rational in believing something, we make counterfactual claims about the relationship between evidence and our beliefs, such as "If the evidence for evolution weren't so strong, I wouldn't believe in it." A typical way in which people impugn the rationality of others is to say that smart people believe things for not-so-smart reasons, and then use their reasoning powers to justify what they have already committed themselves emotionally to believe. In fact, people like Loftus very often charge that Christians are something like Steve; that is, they form a belief in Christianity through processes that could just as easily produce a false belief as a true belief, and then find whatever arguments they can to undergird those beliefs, which were really reached in a non-truth-conducive way.

But what we are saying when we say we believe something for a good reason is that the presence of reasons is relevant to the production of our beliefs, that, unlike those benighted folks over there, we have actually paid attention to the evidence and are following it wherever it leads, whether it makes us feel good or not.

But what that means is that evidential relationships are relevant to what beliefs we hold, and therefore, what states our brains get themselves into. But evidential and logical relationships are abstract states. They are not physical things, and they do not have particular locations in space and time. Yet they are, apparently, causally relevant to the beliefs we form. Or, at least they can be.

But these same people will say that the mind is the brain, and that what goes on in the brain is simply physical causation playing itself out. Abstract objects don't, they say, cause anything to happen in the brain, since the brain is a physical system and events in the brain are caused just like any other events. It's just the laws of physics playing themselves out.

Lewis wrote: But even if grounds do exist, what exactly have they got to do with the actual occurrence of the belief as a psychological event? If it is an event it must be caused. It must in fact be simply one link in a causal chain which stretches back to the beginning and forward to the end of time. How could such a trifle as lack of logical grounds prevent the belief's occurrence or how could the existence of grounds promote it? (1960b: 20)

(1960b) Miracles: A Preliminary Study, 2nd edn. (London: Fontana, 1974)

It seems to me that this points to something paradoxical in the naturalist's view of reasoning.

96 comments:

Anonymous said...

But maybe the counterfactual requirement is simply irrelevant? I mean, it isn't self-refuting to admit that one would still believe even if the evidence weren't there. You can still be actually rational.

That said, the non-causal view does seem counterintuitive. We often say things like 'I believe because of the evidence!', but that can't be true given naturalism.

Either way, a lot of atheistic rhetoric shall have to be dropped: they cannot say that anyone believes anything because of the evidence, nor that they are counterfactually rational.

Farewell, O outsider test!

One Brow said...

One of the simplifications we all like to engage in is to think of systems of causes as being chains. A1 causes A2 causes A3, while some unconnected B1 causes B2 causes B3. Under such a mode of thinking, it can certainly seem odd that something like a belief can be a cause. At a physical level, a belief will not be something that forms or responds to a single phenomenon, because any individual manifestation of a belief would be a combination of several different phenomena.

However, systems of causes can also be lattices, with multiple phenomena jointly causing multiple phenomena in various combinations. A belief itself is an abstraction for a group of very similar manifestations. However, so are the abstract concepts we refer to as "logical grounds", which form the putative basis for the belief. So, Lewis is basically asking how one group of manifestations of groups of phenomena can be causally linked to another group of manifestations of groups of phenomena. I don't see the paradox.

William said...

The problem for One Brow, who claims that logical reasons are the latticed epiphenomena of grounding physical phenomena, is that such mental epiphenomena are either causal or they are not. If causal, how then, under materialism? If not causal, then the AFR does create a paradox.

Crude said...

But these same people will say that the mind is the brain, and that what goes on in the brain is simply physical causation playing itself out. Abstract objects don't, they say, cause anything to happen in the brain, since the brain is a physical system and events in the brain are caused just like any other events. It's just the laws of physics playing themselves out.

I think what may trip some people up here is that they switch between accepting and rejecting things like 'abstract objects in the mind' without even realizing they're doing it.

There's the calculator example. If I type "1 + 1 =" into a calculator, did the calculator reason its way to 2? Does it know that "1 + 1 = 2", or put another way, did reflection on the truths of mathematics play a role in its answer? Even if it announces it in audio? I think it's clear the answer is no, and I think most people would accept that. Calculators are tools whose purposes are derived, and they aren't reasoning.

But in another context I think some people will forget that and think something like, "A physical system reasoning isn't a problem. Look at a calculator! You ask it what 1 + 1 is, and it tells you 2! And calculators are totally physical!"

Maybe that's where part of the problem lies: forgetting about the 'why' of an answer being given, and sticking with 'Did it produce a sound that, if I read it this way and in this language, counts as a correct answer to the question that was just asked?' as a placeholder for 'Can a purely physical system be rational?'
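The calculator point can be made concrete with a toy sketch (hypothetical Python, purely my own illustration): the device maps input symbol patterns to output symbol patterns by blind lookup, with no reflection on arithmetic truth anywhere in the process.

```python
# Hypothetical "calculator": a blind table lookup from symbol patterns to
# symbol patterns. Nothing here knows that 1 + 1 = 2; it only shuffles tokens.
SUM_TABLE = {("1", "+", "1"): "2", ("2", "+", "2"): "4"}

def calculate(tokens):
    # Pure pattern-to-pattern substitution; no understanding involved.
    return SUM_TABLE.get(tuple(tokens), "?")

print(calculate(["1", "+", "1"]))  # prints 2
```

The table and function names are invented for illustration; the point is only that a correct-looking output can be produced with no reasoning behind it.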

One Brow said...

If causal, how then, under materialism?

So, you're asking me how the group effect of the firing of neurons in a particular pattern, as opposed to the individual firing, can be causal (possibly indirectly) of other neurons firing in a particular pattern as a group, as if this is a difficulty for materialism?

Why would it be a problem for group effects with specific characteristics to cause other group effects with specific characteristics?

Crude said...

So, you're asking me how the group effect of the firing of neurons in a particular pattern, as opposed to the individual firing, can be causal (possibly indirectly) of other neurons firing in a particular pattern as a group, as if this is a difficulty for materialism?

I think William is asking what it is about neurons firing, either in a group or individually (take your pick), that constitutes abstract thought or reason being a cause or playing a role in the eventual effect.

One Brow said...

Crude,

Calculators are extremely simple devices. Their inability to rationally understand the results of their labor is in no small part due to an inability to put those results into the context of the general notion of when the various abstractions are applicable. This is a difference in quality between calculators as they exist today and humans.

However, while I don't think Watson thinks like a human, it is evidence that we have begun to bridge this divide. Watson does translate the general ideas into related ideas.

I don't know if any mechanical device can ever think like a human, simply because our biology allows for so many fine, distinct, precise, multifactored data inputs and interactions, and our instincts would be hard to duplicate. But, I still have not seen evidence this is a matter of quality as opposed to quantity.

One Brow said...

I think William is asking what it is about neurons firing, either in a group or individually (take your pick), that constitutes abstract thought or reason being a cause or playing a role in the eventual effect.

The recognition of the group pattern by another set of neurons acting as a group.

Shackleman said...

Mr. One Brow, it seems you're trying to convince your audience, using *non-physical* "arguments" and "reason", that "reason" is somehow still physical.

How much does your argument weigh? How big is it? What are the physical properties of your argument?

How does your argument physically get inside my brain to manipulate my neurons into a pattern that would trigger something that resembled the physical neuronal pattern known as "agreement"?

William said...

"The recognition of the group pattern by another set of neurons acting as a group."

But where are the logical reasons and reasonings here? In a computer, the logical reasons are from or in the tool user.

In the brain, is the intentionality of the logic (its being about the reasons) present in the neurons themselves? In the patterns? How?

Crude said...

Calculators are extremely simple devices.

So? Does that mean they're reasoning, it's just a tiny bit of reason?

However, while I don't think Watson thinks like a human, it is evidence that we have begun to bridge this divide. Watson does translate the general ideas into related ideas.

Does Watson think, period? Is it conscious at all? When Watson correctly shows "Who is Hyde?" on screen after the appropriate question (well, answer) is given, what knowledge and reason on Watson's part played a role?

The recognition of the group pattern by another set of neurons acting as a group.

Who, or what, is doing any recognizing?

If I throw a handful of a stones at a stack of building blocks, you can argue that no single stone caused the blocks to topple over, and what fell over was not a single block but a group of blocks. I suppose you can further frame that as one pattern (group of stones) causing another pattern (stack of blocks) to make yet another pattern (tumble of blocks). Did something recognize a pattern there?

Shackleman said...

For what it's worth, computers are quite literally mere abacuses. Really, really, fast abacuses, with a gazillion beads, but still just abacuses.

One Brow said...

Shackleman,
Mr. One Brow, it seems you're trying to convince your audience, using *non-physical* "arguments" and "reason", that "reason" is somehow still physical.

No need to be so formal. Only my students refer to me as Mr. One Brow.

Your initial sentence contained one error and one unproven assumption. The error is that I'm not trying to convince anyone of anything stronger than the position that a particular argument against materialism is invalid. The assumption is that I'm using something non-physical to do so.

How much does your argument weigh?

Nothing.

How big is it?

I presume you are asking what its volume is (0) instead of how important or authoritative it is (I don't have sufficient expertise to judge such a thing).

What are the physical properties of your argument?

That depends on the location in which we are discussing it. In my brain, and yours, it's a pattern made of neurons, one that is subject to the constant change biological organisms undergo. On the screen I presume you read it from, it's the pattern of electrons that is constantly being refreshed onto that screen.

How does your argument physically get inside my brain to manipulate my neurons into a pattern that would trigger something that resembled the physical neuronal pattern known as "agreement"?

I believe I have better reason to expect "disagreement". :) Outside of that, your eyes see light, your retina translates the light into neural impulses, your hindbrain assembles the neural impulses into a picture, and your forebrain compares that picture to a storehouse of known pictures to determine the best matches in terms of letters, then words, then concepts.

One Brow said...

William said...
But where are the logical reasons and reasonings here?

They are the patterns that the groups of neurons form.

In a computer, the logical reasons are from or in the tool user.

In a computer, the logical reasons are encoded into a pattern of ones and zeros, which is why I don't need an engineer from Microsoft physically present to run Word.

In the brain, is the intentionality of the logic (its being about the reasons) present in the neurons themselves? In the patterns? How?

It would be present in the patterns, and their attempt to match them to similarly encountered, stored patterns.

Mind you, I don't think that's the whole story, but this is just the general flavor.
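One way to picture logic persisting as an encoded pattern, with no author present at run time, is this hypothetical Python sketch (the table and names are mine, invented for illustration):

```python
# Hypothetical sketch: a programmer's logic encoded once as a stored
# pattern; afterwards the encoding alone drives execution, with no
# programmer present, just as Word runs without a Microsoft engineer.
import operator

PROGRAM = {"add": operator.add, "mul": operator.mul}  # the encoded instructions

def run(op_name, a, b):
    # Blindly dispatch on the stored pattern; the author is long gone.
    return PROGRAM[op_name](a, b)

print(run("add", 2, 3))  # prints 5
```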

One Brow said...

So? Does that mean they're reasoning, it's just a tiny bit of reason?

It depends on what you call reasoning. Also, are you referring to "tiny bit" qualitatively or quantitatively?

Does Watson think, period?

Is "think" an absolute that deserves a period?

Is it conscious at all?

Not by any reasonable definition I can think of.

When Watson correctly shows "Who is Hyde?" on screen after the appropriate question (well, answer) is given, what knowledge and reason on Watson's part played a role?

The knowledge of its resources, and the reason of matching patterns in an abstract way.

Who, or what, is doing any recognizing?

Groups of neurons.

Did something recognize a pattern there?

A group of neurons.

Crude said...

It depends on what you call reasoning. Also, are you referring to "tiny bit" qualitatively or quantitatively?

Alright: If "it depends", then there's a way. Tell me the way in which calculators reason. Qualify it as you need to.

Is "think" an absolute that deserves a period?

Yes.

The knowledge of its resources, and the reason of matching patterns in an abstract way.

Alright: Neither Watson nor its components are conscious. So it certainly is not aware of any of its resources, or that it is matching any patterns, or that there are any patterns to match.

"Knowledge" here is nothing but a collection of 1s and 0s with no intrinsic meaning.

"Matching patterns in an abstract way" is nothing but "determinately responding to a stimulus in a way we extrinsically label as appropriate". There's no consciousness, there's no conscious intentions, and "unconscious intentions" ("matching" "patterns") are just a label for stimulus and response.

Correct?

Groups of neurons.

1: What definition of "recognize" are you using? Here's a helpful list.

2: Is there any difference here between "recognizing a pattern" and blindly, mechanically reacting to a stimulus?

William said...

"In a computer, the logical reasons are encoded into a pattern of ones and zeros, which is why I don't need an engineer from Microsoft physically present to run Word."

We have many Tin Man and Data robots in our popular culture, and too many people think they are true representations of computers. We personify machines easily, and objectify persons all too easily, I'm afraid.

cl said...

Victor,

"...evidential relationships are relevant to what beliefs we hold, and therefore, what states our brains get themselves into. But evidential and logical relationships are abstract states. They are not physical things, and they do not have particular locations in space and time. Yet they are, apparently, causally relevant to the beliefs we form."

Given physicalism, wouldn't evidential and logical relationships reduce to something like ad hoc afterthoughts? I mean, if the mind is what the physical brain does, then isn't matter simply leading the way, producing an illusion of rationality? It seems to me this must be the case, which prompts me to agree when you say,

"...this points to something paradoxical in the naturalist's view of reasoning."

One Brow said...

Alright: If "it depends", then there's a way. Tell me the way in which calculators reason. Qualify it as you need to.

One aspect of reasoning is the formal ability to correctly process inputs into outputs according to the rules determined by the system. In a small way, using methods that seem to be different from the methods of humans, that's what calculators do.

Yes.

I disagree that "think" is an on/off attribute. If it were, you could never say one person thinks better than another person, because either both could think or one could not. Does Watson think as well as a philosophically aware, sober human? No. Does Watson think better than a falling-down drunk who's about to drive? Watson could probably advise them not to drive.

Alright: Neither Watson nor its components are conscious. ... Correct?

I agree.

1: What definition of "recognize" are you using? Here's a helpful list.

I suppose you can go with either
--spot: detect with the senses;
--perceive to be the same

Since one-celled animals are capable of either, I hope you are not going to claim such faculties require consciousness.

2: Is there any difference here between "recognizing a pattern" and blindly, mechanically reacting to a stimulus?

I don't know with certainty. The existence of that difference is basically what you're trying to prove, isn't it?

One Brow said...

Given physicalism, wouldn't evidential and logical relationships reduce to something like ad hoc afterthoughts? I mean, if the mind is what the physical brain does, then isn't matter simply leading the way, producing an illusion of rationality?

Your statements seem to carry the presumption that evidential and logical relationships themselves are not grounded in physical phenomena, nor is rationality. If rationality is a part of the physical (as a manifestation of group activity among neurons), why should it not be able to engage in physical causation?

woodchuck64 said...

William,

We personify machines easily, and objectify persons all too easily, I'm afraid.

Nevertheless, the "firing" of transistors in computer hardware to process logical or mathematical abstractions defined by its software seems a lot like the way we process abstractions. Would you say a computer challenges naturalism? If yes then how about a computer designed by evolutionary processes, theoretically? If no then doesn't it demonstrate how abstractions (software) can cause physical events (transistor switching, and eventually something like graphical output)?

Anticipating the response that a computer does not really experience abstractions, beliefs or consciousness, I agree, but I'm not sure how that supports the AFR. True, the existence of a computer doesn't seem to help us much with the hard problem of consciousness, but it does offer some unique insights into what abstractions may be-- they are literally "soft"-ware-- and how they can interact with the physical, it seems to me.

Shackleman said...

Mr Woodchuck,

"Nevertheless, the "firing" of transistors in computer hardware to process logical or mathematical abstractions defined by its software seems a lot like the way we process abstractions."

Computers don't process logical or mathematical abstractions. They quite literally play shuffleboard with electrical impulses. That's all they do. It is only the programmer's mind that does the logical processing.

*All* the computer does is put a string of electrical impulses (impulse=1, no impulse=0) on one "shelf" (register), then the next string of electrical impulses tells the computer to place the *next* string of impulses on a different shelf. Then, another string of impulses tells the computer to combine (in this example---sum) the two shelves, and place the result onto the output bus (your monitor, for example).

There is nothing whatsoever that even resembles reasoning that happens inside the CPU.

Unless you think the abacus can perform "abstract logical processing", then neither can computers. They don't. They move impulses in and out of shelves. They literally function very much like an abacus functions.

*Minds* can use an abacus to assign "value" to the otherwise arbitrary patterns of beads. And it's in this assignment of value to the arbitrary where reason starts to take place. The CPU is really an abacus on steroids.
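The shuffleboard picture can be put in a toy sketch (hypothetical Python of my own; no real CPU works this way, the "shelves" and names are invented for illustration):

```python
# Hypothetical toy machine: bit patterns are loaded onto "shelves"
# (registers), blindly combined, and pushed to an output bus.
# No step involves grasping what the number "2" means.
registers = {"A": 0, "B": 0}
output_bus = []

def load(reg, bits):
    registers[reg] = bits  # place a string of impulses on a shelf

def combine_and_emit():
    # combine the two shelves (here: sum) and move the result out
    output_bus.append(registers["A"] + registers["B"])

load("A", 0b01)  # the impulse pattern we read as "1"
load("B", 0b01)
combine_and_emit()
print(output_bus[0])  # prints 2 -- a value meaningful only to the mind reading it
```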

One Brow said...

Shackleman said...

Computers don't process logical or mathematical abstractions. They quite literally play shuffleboard with electrical impulses. That's all they do. It is only the programmer's mind that does the logical processing.

I agree with you that computers basically play electronic shuffleboard. However, if the logical processing were not stored in that shuffleboard, but present only in the mind of the programmer, then the programmer would need to be present to guide the program.

I agree that the logical processing originates in the mind of the programmer, but you are saying something beyond that.

Shackleman said...

Mr. One Brow,

(Thanks for the permission to drop the formalities, but in my experience, it helps to keep the online banter friendly, helpful, and respectful :-)

It might seem like splitting hairs, but the distinction, I think, is important. The "logic" is not stored in the CPU. A basic set of instructions is stored. The CPU blindly moves bits into and out of its registers and sends output out to the external buses.

An analogy would be an old phone switching operator, who sees Line A light up, requesting connection to Line Z. The operator then moves the switching cable to connect the two. Granted, the operator had to receive and then follow instructions on what to do when he sees certain bulbs light up, but the operator is in no way *performing* any "logical abstractions". He or she is just moving a cable. That's all the CPU is doing... moving bits to and from registers.

I think that *we*...minds...are so well-equipped to *see* the logic of the output that we think the computer is *doing* the logic for us. It isn't. We are.

It's like reading a math book and thinking the printing press that put the ink to paper did the calculations. It didn't. It just put it to paper.

William said...

"If rationality is a part of the physical (as a manifestation of group activity among neurons), why should it not be able to engage in physical causation?"
---

If all MUST be purely the physical as defined by 20th century physics, of COURSE it can :).

Once you assume what the AFR is arguing against, you have denied the AFR before it starts, by contradicting it as an assumption of your metaphysics.

Such assertions don't address the argument except by assuming that it is wrong :)

One Brow said...

Mr. Shackleman said...
It might seem like splitting hairs, but the distinction, I think, is important. The "logic" is not stored in the CPU. A basic set of instructions is stored. The CPU blindly moves bits into and out of its registers and sends output out to the external buses.

Again, I find myself in agreement with you, to a limited degree. The logic is certainly not directly stored in the circuits. The logic arises out of the patterns that the instruction set creates. If it didn't, then the particulars of the instruction set could not perform according to the programmer's logic.

Granted, the operator had to receive and then follow instructions on what to do when he sees certain bulbs light up, but the operator is in no way *performing* any "logical abstractions". He or she is just moving a cable. That's all the CPU is doing... moving bits to and from registers.

How demeaning to our poor operator. Is not the recognition of a particular light being on, and the connection of that particular instance of light with the general notion of lights being on, the use of an abstraction?

Still, I understand the point you are trying to make in separating the creative act of putting logic into a program from that of following instructions.
Where we seem to differ, perhaps, is whether this creatively derived logic persists in the program after said creation is completed and the program is in use. Are you saying that the logic of the creator is in no way present in the computer program?

I think that *we*...minds...are so well-equipped to *see* the logic of the output that we think the computer is *doing* the logic for us. It isn't. We are.

I agree, and I certainly hope I am not making that error. To encode logic is not to perform logic.

On the other hand, properly programmed computers seem to be able to generate geometric proofs, a task that surely calls for them to use logic creatively. Or, if you take the contrary view that no creativity is demonstrated in the solving of geometric proofs, why should any be attributed to the logic of computer programmers?

One Brow said...

If all MUST be purely the physical as defined by 20th century physics, of COURSE it can :).

I am merely arguing that such is possible, and that the AFR is not able to remove this possibility.

William said...

"Anticipating the response that a computer does not really experience abstractions, beliefs or consciousness, I agree, but I'm not sure how that supports the AFR."
---

It does not, except by denying that a computer is a counter example to the AFR.

Since Lewis would have seen computer AI as derivative of human intelligence, and his argument is about the origins of reasoning and what that means for its reliability, he might also have seen computer logic as begging the origin question as long as the computer design was somehow derived from a human designer.

William said...

"I am merely arguing that such is possible, and that the AFR is not able to remove this possibility."

So, you assert without proof:

Axiom 1. Reliable reasoning is purely a part of the physical as defined by 20th century physics.


Your axiom contradicts the AFR by defining it as wrong. Lewis was trying to show that Axiom 1 was not very reasonable to assume. You argue against the AFR by simply re-asserting the axiom. We have nothing to argue about, then :)

Shackleman said...

Hi again, Mr. One Brow,

"How demeaning to our poor operator. Is not the recognition of a particular light being on, and the connection of that particular instance of light with the general notion of lights being on, the use of an abstraction?"

In the case of our human operator, yes. However, I used him/her as an example for explanatory purposes only (and for onlookers at that since I see from your profile that you are in the field).

We can replace our human operator with an analog system to perform the same cable connections. We can create a circuit such that when Line A lights up, the circuit completes and then connects to Line Z. In this case nothing is "recognizing" anything.

"Are you saying that the logic of the creator is in no way present in the computer program?"

The logic of the creator does *not* persist in the *program*, but only *emerges* when the program is executed. Until then, any logic is still only in the programmer's mind. And, as we've shown in our explanation of the inner workings of the CPU, the execution itself is not the logic either, as it is merely shuffling the bits to and fro.

Therefore since the program itself is not the logic, and the act of executing it is not the logic, the logic exists in the mind of the programmer. *All* the rest, is decoding as output what the programmer's mind put in as input.

Again, would you say that the mathematical logic is somehow "in" or "on" the sheets of paper of your math books? I would argue that the ink is just ink and it has no logic. Minds assign abstract value to the particular arrangements of the ink on the paper, and then minds "execute" the logic derived from their encoded values.

William said...

woodchuck64:
"how about a computer designed by evolutionary processes, theoretically?"

Fun to speculate about that?

I remember an essay by Dawkins (here, I think) where he tried to show that any possible emergence of that type of complexity would require natural selection. This was part of his attempt to elevate the laws of biology to the level of the laws of physics. A nice try, but...

Anyway, if Dawkins is right about the principles of evolution in the absence of true design, I don't see how a tool like a computer could evolve, except as some kind of incidental digital appendage of some more intentional and goal-directed life form.

cl said...

One Brow,

"Your statements seem to carry the presumption that evidential and logical relationships themselves are not grounded in physical phenomena, nor is rationality."

False. My statements presume that if epiphenomenalism is true, rationality is an illusion. If you can show why that's not so, I'm interested.

Crude said...

One aspect of reasoning is the formal ability to correctly process inputs into outputs according to the rules determined by the system. In a small way, using methods that seem to be different from the methods of humans, that's what calculators do.

Calculators no more "follow rules" than a rock does when it tumbles down the side of a hill after being blown by a gust of wind. Unless you're putting some kind of intentionality - even unconscious - into the fundamental physical world, in which case you're ditching materialism anyway, and taking on some very unique view of the material in the process.

If it were, you could never say one person thinks better than another person, because either both could think or one could not.

Sure I can, because 'thought' in this case is analogous to 'consciousness'. One person or another may do this or that particular mental activity better, but no one is 'thinking better' in an appropriate sense of the term. As someone else, I think, said - it's like being 'a little pregnant'.

Since one-celled animals are capable of either, I hope you are not going to claim such facilities require consciousness.

If you're claiming that sense and perception are equivalent to what blind, senseless automata do, you're taking a vastly more problematic position than someone who would claim one-celled animals are conscious.

I don't know with only certainty. The existence of that difference is basically what you're trying to prove, isn't it?

I'm pointing out what's entailed by the position you are taking on machines and computers. Words like "recognize" and "perceive" don't fit - they're intentional concepts themselves. But stripping them away leaves blind automata that do not reason, but simply react.

But yeah, I suppose one way to blunt the AfR is to question whether there's any "R" anyway.

woodchuck64 said...

William and Shackleman,

A computer has logic/abstractions encoded into transistor circuitry by the work of a programmer, but then couldn't human beings also have logic/abstractions encoded into neural circuitry by the (indirect) work of evolution?

I'm also not certain why we must adopt the position that logic/abstractions exist independently from their encoding (in brains/silicon). Isn't that committing to dualism in the first place? I would think that the experience of logic/abstractions is as neural encodings and, therefore, the causal effects of logic/abstractions come only from physical neural encodings.

Anticipating a reply, I agree that the above leaves out the hard problem of consciousness -- why is there something it is like to experience and perform logical analysis -- but I think that should be kept separate from the AFR or it seems that the AFR morphs into the hard problem of consciousness.

William,


Anyway, if Dawkins is right about the principles of evolution in the absence of true design, I don't see how a tool like a computer could evolve, except as some kind of incidental digital appendage of some more intentional and goal-directed life form.


All organisms, even bacteria, are like computers in that they use simple logical rules and process simple abstractions to get food, avoid predators, and reproduce. It could be that all organisms are an extension of the intentionality of a creator, but they could also be the result of mutation and natural selection, it seems to me.

One Brow said...

William said...
It does not, except by denying that a computer is a counter example to the AFR.

I don't think anyone is claiming that computers can currently reason sufficiently to be a counterexample to the AFR.

..., he might also have seen computer logic as begging the origin question as long as the computer design was somehow derived from a human designer.

By my understanding the AFR is different from the origin question.

One Brow said...

William said...
So, you assert without proof:

Axiom 1. Reliable reasoning is purely a part of the physical as defined by 20th century physics.


Not at all. I am only noting that this proposition must be false for the AFR to be a sound argument, but there has been no proof that this proposition is false.

One Brow said...

Mr. Shackleman
We can replace our human operator with an analog system to perform the same cable connections. We can create a circuit such that when Line A lights up, the circuit completes and then connects to Line Z. In this case nothing is "recognizing" anything.

I cannot speak for every programmer, but I would find it very inconvenient to program the computer with a separate instruction for every possible instantiation of the blinking light (say, on Jan 1 2011 at 00:00:000, then again at 00:00:003, again at 00:00:007, etc.) when it needs to perform the exact same function regardless. So, I usually program so that the computer will treat each instance of a light blinking in exactly the same fashion, even though they are separate light blinks. What is that, if not an abstraction of different light blinks based upon a common feature?
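
One Brow's abstraction point can be sketched in a few lines of Python (the handler name and the timestamps here are illustrative, not from the thread): one handler treats every physically distinct blink event identically, which is exactly the "common feature" abstraction being described.

```python
# Sketch: one handler for every instance of "Line A lit up", rather than a
# separate instruction per possible blink time. Names/timestamps are invented
# for illustration.
def on_line_a_blink(timestamp):
    """Respond the same way to every blink, whatever its particular time."""
    return "connect Line Z"

# Three distinct physical events, one abstract category:
events = ["00:00:000", "00:00:003", "00:00:007"]
responses = [on_line_a_blink(t) for t in events]
assert responses == ["connect Line Z"] * 3
```

The abstraction lives in the fact that the function ignores everything about the event except its membership in the category "Line A blinked".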

The logic of the creator does *not* persist in the *program*, but only *emerges* when the program is executed. Until then, any logic is still only in the programmer's mind.

I don't remember if it was on this blog (it may have been Dr. Feser's), but I have been told that using such an emergence in a description, when applied to the human mind, was akin to invoking magic. Do you think there is a difference in quality, or merely quantity?

At any rate, if the logic emerges when the program is executed, and I think we agree this is not random, how can you say the logic is not encoded into the program?

And, as we've shown in our explanation of the inner workings of the CPU, the execution itself is not the logic either, as it is merely shuffling the bits to and fro.

Just as I would agree that even in a purely physical interpretation of thought, the logic is not the firing of a given neuron.

*All* the rest, is decoding as output what the programmer's mind put in as input.

Yet, you acknowledge the logic emerges from the storage of the program, so it is present in this decoding, unless I have misunderstood something.

Again, would you say that the mathematical logic is somehow "in" or "on" the sheets of paper of your math books? I would argue that the ink is just ink and it has no logic. Minds assign abstract value to the particular arrangements of the ink on the paper, and then minds "execute" the logic derived from their encoded values.

I agree the ability to accurately interpret the pattern that the logic is encoded with relies on the facility of an accurate interpreter, but if such an interpreter were to pull similar reasoning from random blots of ink, we would consider it unreliable. The need for an interpreter does not seem to alter the existence of the logic in the pattern.

One Brow said...

cl said...
False. My statements presume that if epiphenomenalism is true, rationalism is an illusion. If you can show why that's not so, I'm interested.

Can you define/accept a definition for "rationalism" that does not inherently rely on it being non-physical? I can, but I'm not sure you would find it acceptable, and I would hate to be accused of trying to rig the answer into the question.

Of course, if the only acceptable definitions of "rationalism" imply something not rooted in the physical, I certainly make no claim to being able to refute your statement.

William said...

woodchuck64:
I'm also not certain why we must adopt the position that logic/abstractions exist independently from their encoding (in brains/silicon). Isn't that committing to dualism in the first place? I would think that the experience of logic/abstractions is as neural encodings and, therefore, the causal effects of logic/abstractions are only from physical neural encodings.

Anticipating a reply, I agree that the above leaves out the hard problem of consciousness -- why is there something it is like to experience and perform logical analysis -- but I think that should be kept separate from the AFR or it seems that the AFR morphs into the hard problem of consciousness.


The higher order kind of indirection the AfR uses, to reason about reason, seems to require that we have intentionality about our reasoning, and this in turn seems to link to the hard problem, yes, but only in contrast to the mechanical things computers do with syntax, as opposed to semantics, as Searle has said elsewhere.

One Brow said...

Crude said...
Calculators no more "follow rules" than a rock does when it tumbles down the side of a hill after being blown by a gust of wind. Unless you're putting some kind of intentionality - even unconscious - into the fundamental physical world, in which case you're ditching materialism anyway, and taking on some very unique view of the material in the process.

I was referring to behavior, not intent. You asked me to "qualify it as you need to", and never made any condition on the ability to respond to something as an abstraction of that something.

Sure I can,

Well, I don't see the ability to think that way, but I will try to adjust my usage of it in my conversations with you in the future, in light of your statement that if something thinks, it thinks equally to anyone else.

In any case, in the particular activity of taking in symbols and producing the correct symbol as a result, calculators do better than humans.

If you're claiming that sense and perception are equivalent to blind, senseless automata,

That statement seems oxymoronic. If sense and perception exist, by definition they are not senseless. Do you mean, "if I am equating aspects of a process found in blind, senseless automata to aspects of a process found in the use of sense and perception, ..."? Because I don't see why that would be troubling.

I'm pointing out what's entailed by taking the position you are on machines and computers.

My position is that there is no good way to show the differences are ones of quality, as opposed to function and quantity.

Words like "recognize" and "perceive" don't fit - they're intentional concepts themselves.

So the words are defined in a way that cannot be applied to the physical, and then are applied to what people do as an assumption. You are free to make such assumptions, and I'm not going to try to disprove them, but that does not constitute evidence.

But yeah, I suppose one way to blunt the AfR is to question whether there's any "R" anyway.

Under your definition, I'm sure that's how you see it.

Shackleman said...

Hi Mr. One Brow,

I think the rest of your comment can be addressed if we just focus on this part:

The need for an interpreter does not seem to alter the existence of the logic in the pattern.

The logic doesn't exist in the pattern, it exists in the *interpretation* of the pattern.

The very word logic means "inference". Ink blots do not infer. Neither do transistors. They just are.

This is the fundamental difference between our positions. Without the _interpretation_, ie the assignment of abstract values upon the patterns, there *is* no logic.

You may not agree, but unless and until you understand this fundamental difference in our positions, we will be talking past each other.

Let's tackle this from the other side. Show me in an inkblot where the logic lives. I'll bet you a beer you can't. You can have as many as you need to. But here's the trick. You're not allowed to assign any abstract value or meaning to any of the inkblots, for if you do, you'll be interpreting them, and by your account, interpretation isn't a requirement for the logic in the blots.

cl said...

One Brow,

I'm open to your definition. Give it your best shot and let's see where it leads.

woodchuck64 said...

William,

The higher order kind of indirection the AfR uses, to reason about reason, seems to require that we have intentionality about our reasoning, and this in turn seems to link to the hard problem, yes, but only in contrast to the mechanical things computers do with syntax, as opposed to semantics

Do you see computers as being able to approach the reasoning sophistication of a philosophical (human) zombie with sufficient technological innovation or is there a barrier? If there is no barrier, then it seems to me the AFR is merely saying that materialism can't account for the experience of reason (i.e. hard problem), not the practice, origin and appearance of it from the 3rd person perspective. If there is a barrier, that is an interesting claim as it implies there is something we do with reason that can not be encoded or represented with physical symbols.

One Brow said...

Mr. Shackleman said...

The logic doesn't exist in the pattern, it exists in the *interpretation* of the pattern.

In that the interpretation of the pattern is as essential to the use of the logic as the pattern itself is, I agree. I'm not sure if you mean something different, though.

The very word logic means "inference". Ink blots do not infer. Neither do transistors. They just are.

Ink blots and individual transistors do not run computer programs, either. I find your analogy ineffective.

This is the fundamental difference between our positions. Without the _interpretation_, ie the assignment of abstract values upon the patterns, there *is* no logic.

How is it transferred, if it is not present? If you don't want to call the part stored in the pattern the "logic", what is stored that is different from random noise?

Show me in an inkblot where the logic lives. ... But here's the trick. You're not allowed to assign any abstract value or meaning to any of the inkblots, for if you do, you'll be interpreting them, and by your account, interpretation isn't a requirement for the logic in the blots.

Is there a difference between the logic being put into a pattern and the logic of the pattern being understood? If not, what do you call what is in the pattern?

One Brow said...

cl said...
I'm open to your definition. Give it your best shot and let's see where it leads.

One notion of being rational is the ability to accurately use the tools of a formal system (usually consisting of axioms and an acceptable calculus regarding statements) to derive conclusions. For example, when Dr. Feser says he thinks teaching kids that Santa Claus is real is wrong, he is being rational based upon his beliefs and the system he uses for deciding what is correct.

This would be a definition that would not require intentionality, but it is also one that computers have already achieved for some very simple formal systems.
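
This formal-system sense of "rational" can be sketched concretely (the function and statement names below are mine, not from the thread): given some axioms and a single inference rule, modus ponens, a short loop derives everything the system licenses, with no intentionality anywhere in sight.

```python
# Sketch of derivation in a toy formal system: axioms plus modus ponens
# (from P and "P implies Q", conclude Q). Names are illustrative only.
def derive(axioms, implications):
    """Close a set of known statements under modus ponens."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# From axiom P and the rules P->Q, Q->R, the system derives Q and R:
theorems = derive({"P"}, [("P", "Q"), ("Q", "R")])
assert theorems == {"P", "Q", "R"}
```

Whether executing such a loop counts as "being rational" is, of course, exactly the point in dispute in this thread.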

William said...

Do you see computers as being able to approach the reasoning sophistication of a philosophical (human) zombie with sufficient technological innovation or is there a barrier?

Invoking future technology is like invoking future physics: it allows us to escape any limits put in front of us and then return to claim the barrier is gone :).


Zombie concepts are tricky. Most people are talking about sensory qualia when they are talking about zombies. But computers don't just lack qualia, they lack the semantics of reasoning, due to lack of intentionality.

In people, access and phenomenal consciousness are always mixed, but I think in some concepts of zombie-dom :-) they are separable.

Shackleman said...

Hi Mr. One Brow,

"Is there a difference between the logic being put into a pattern and the logic of the pattern being understood? If not, what do you call what is in the pattern?"

This is excellent. The logic isn't being put into the pattern. It's being assigned in the abstract, and here we will find our fundamental differences.

Imagine three unique and randomly oriented inkblot splatters. Let's agree to call them blotA, blotB, blotC.

Is there any logic "in" the blots yet? Any remnants? I'm sure you'll agree that the answer is emphatically no---all we have so far are three meaninglessly oriented blobs of ink with no intrinsic value.

Now, let's agree to assign the value "10" to blotA, the value "20" to blotB, and the operator "addition" to blotC.

*Now* we will see the logic. But where in space/time is it? It's not in the blots themselves....we've already agreed to that. It is only in our minds and exists solely in the abstract.

Only after we applied abstract symbolic meaning to the blots did the logic emerge from them.

So it is also with computers. There is no logic somehow secretly "embedded" in them. The power on cycle of a modern computer begins the task of "emerging" the logic encoded in the symbolism human minds have applied and then standardized, to the *otherwise* random and meaningless electrical impulses. No power, no decoding. No decoding, no logic. Turn off a PC and literally, the entire house of cards upon which the virtual, abstract world is built doesn't just "go dark"---it ceases to exist.
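
Shackleman's "no decoding, no logic" claim, as I read it, can be illustrated in a few lines (the variable names are mine): a stored program is just a pattern of characters until an interpreter actually executes it.

```python
# Sketch: the stored pattern is inert text; the addition only "happens"
# when the interpreter executes it. Names are illustrative only.
stored_pattern = "result = 2 + 3"   # just characters; nothing is being added yet

namespace = {}
exec(stored_pattern, namespace)      # the interpreter runs; only now is 2 + 3 computed
assert namespace["result"] == 5
```

Whether the logic was therefore "in" the string all along, or only emerged at execution, is precisely what the two sides here disagree about.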

William said...

If there is a barrier, that is an interesting claim as it implies there is something we do with reason that can not be encoded or represented with physical symbols.

That this might surprise us means that we are caught too deeply in our monistic metaphysics, presumably in your case including some kind of mere computationalism about intentionality.

We need to pull back and let our empirical understanding show the way.

Some analogies:

Isn't there something about gravity that is not captured by the mathematics of Newton and Einstein alone? Think about your everyday experience of gravity :)

Isn't there something about the radiation disaster in Japan that is not captured by physical theories alone?

One Brow said...

Mr. Shackleman,

You didn't answer my question. If you don't want to say that the pattern the programmer creates, and the computer reads, contains logic, then what does it contain.

I believe I understand your inkblot example. Most skeptics agree humans can find patterns where none exist.

So it is also with computers. There is no logic somehow secretly "embedded" in them. The power on cycle of a modern computer begins the task of "emerging" the logic encoded in the symbolism human minds have applied and then standardized, to the *otherwise* random and meaningless electrical impulses.

We must have different notions of random. The positions of the circuits in the computer may be indistinguishable from a random array to the uninitiated, but they are not random by any understanding I have of the word. In the case of computers, they have often been plotted out in great detail to work carefully with the interpreters. If you really think "random" is the best word to describe such a result, I find I have little else to say on the matter. The differences are too fundamental. I would think that should be clear from the fact that a truly randomized disk can't be used by the interpreter. You seem to acknowledge no difference between a disk containing programming and one that has merely been magnetized. Is that your true position, or am I reading too much into your words?

woodchuck64 said...

William,

Isn't there something about gravity that is not captured by the mathematics Newton and Einstein alone? Think about your everyday experience of gravity :)

Isn't there something about the radiation disaster in Japan that is not captured by physical theories alone?

Experience/consciousness/1st-person perspective is not captured by accounts of gravity/mathematics/radiation/etc., agreed. That's the hard problem of consciousness. The question I have is whether computers can reason as well as human beings in the absence of conscious experience or if there is some barrier.

But computers don't just lack qualia, they lack the semantics of reasoning, due to lack of intentionality.

What do you think about Watson? Surely Watson understands some semantics and exhibits some intentionality or it couldn't win at Jeopardy.

It could be said that Watson gets all its semantic processing and intentionality from its human programmers, but that doesn't change the fact that Watson is demonstrating the ability to do some semantics of reasoning in pure silicon. If Watson can correctly process some semantics, why wouldn't more processing power, more algorithms, etc., lead to better semantic processing, leading eventually to (the appearance of) human-like understanding and intentionality?

Shackleman said...

Mr. One Brow,

I did answer your question, and as I feared, we are talking past each other. You keep looking for *logic* in the physical where none exists. A pattern "may" exist in the physical, but a mere pattern isn't logic. Logic is inference and can only be extrapolated from the pattern *when* abstract values are applied to the patterns. Patterns, all by themselves, carry with them no intrinsic meaning, value, or logic. My inkblot example tried to show this.

Your question regarding a completely zeroed out, magnetized disk confuses the issue. We can get back to it in a bit, but for now, let's instead simply look at a very basic logical expression, represented in binary.


0011000100101011001100010011110100110010

What does this mean?

I can tell you that it is a logical expression that every grade-schooler will recognize, but ONLY if they properly decode it.

Now, I'll give you the proper decoder wheel.....it's ASCII.

Now is the logic "in" the 1's and 0's? No. The logic is in the ASCII standard encoding!

Change the encoding, and you will change the expression, or, more probably, change our logical expression into something that is....wait for it....a meaningless random jumble of gobbledygook.

Try it yourself by entering in a random array of 1's and 0's into your favorite ASCII converter, and you will get random junk in return.
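
The decoding Shackleman describes is easy to do mechanically (this sketch is mine, not part of the original exchange): chop the bit string into 8-bit groups and pass each through the ASCII "filter".

```python
# Decode the bit string from the comment above through the ASCII filter:
# split into 8-bit groups, convert each to its character code.
bits = "0011000100101011001100010011110100110010"
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
decoded = "".join(chars)
assert decoded == "1+1=2"   # the grade-school expression, once decoded
```

Feeding arbitrary 0s and 1s through the same filter does indeed tend to produce unprintable or meaningless characters, which is the point both sides seem to accept: the output depends jointly on the filter and on what the string was encoded to be.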

Now, let us go back to your magnetized disk.

We will in essence get a pattern of all zeros. All zeros is STILL a pattern.

00000000000000000000000000...

If you and I so chose, we could arbitrarily assign the value "hello" to a pattern of 1 trillion sequential zeros, and thereby we can still make LOGIC out of a different pattern.

William said...

Surely Watson understands some semantics and exhibits some intentionality or it couldn't win at Jeopardy.

Interesting claim. Would you mind showing how this is so?

cl said...

One Brow,

How does anything you've said in your comment March 27, 2011 11:03 AM relate to any comments I've made? I don't see that you've engaged with my question, although, I could be missing something.

One Brow said...

Mr. Shackleman,

It's odd that I thought I not only understood your point, but essentially granted it.

Your question regarding a completely zeroed out, magnetized disk confuses the issue.

I was unaware that magnetized disks actually had all zeros. I had thought that, due to the variability inherent in waving a magnet over something, the combination of 0s and 1s would be random (in the sense of uncontrolled and unpredictable). I apologize if that caused confusion.

Try it yourself by entering in a random array of 1's and 0's into your favorite ASCII converter, and you will get random junk in return.

So, what word do you use to describe the difference between the contents of the random array and the one that has been encoded, since you don't like to say that content is the logic used to create the post-encoded string?

If you and I so chose, we could arbitrarily assign the value "hello" to a pattern of 1 trillion sequential zeros, and thereby we can still make LOGIC out of a different pattern.

Of course. What does that have to do with what is present in a string that has already been encoded according to a specific procedure, and whether it contains something besides the circuits?

One Brow said...

How does anything you've said in your comment March 27, 2011 11:03 AM relate to any comments I've made? I don't see that you've engaged with my question, although, I could be missing something.

You said if epiphenomenalism is true, rationalism is an illusion. I asked if you had a definition for being rational that did not make this true by definition, and you suggested I present one of mine. I certainly could have misunderstood.

William said...

Experience/consciousness/1st-person perspective is not captured by accounts of gravity/mathematics/radiation/etc., agreed. That's the hard problem of consciousness.

That is not quite what I meant, though since I am talking about things in the world that are not capturable in abstract symbols, it is hard to write about it.

Do you think that there is a difference between the noise made by a falling tree in the forest and a complete computer simulation of the fall of the tree, even if no one is there to hear it?

It is that difference I am trying to indicate.

Shackleman said...

Mr. One Brow,

"I was unaware that magnetized disks actually had all zeros. I had thought that, due to the variability inherent in waving a magnet over something, the combination of 0s and 1s would be random (in the sense of uncontrolled and unpredictable). I apologize if that caused confusion."

There *are* no ones or zeros in/on/around the disk. What we *interpret* as a one or zero is a positional difference between two magnetic domains (in the case of a hard disk drive), or the positional difference between electron grids in a solid state flash drive.

We can impart the abstract value "1" or "0" or any value we want for that matter, onto **anything**, which is why the inkblot example works to show that neither the *data*, nor the inference derived thereof (logic), is inherent in the medium.

I'm not sure how else I can describe this.

"So, what word do you use to describe the difference between the contents of the random array and the one that has been encoded, since you don't like to say that contents is the logic used to create the post-encoded-string?"

Again, the content doesn't contain the logic! The logic emerges as output. Remove any part of the input/output process and the logic ceases to be.

Again, Let "00110001" be the content.

The content itself is meaningless.

Now, pass "00110001" through an ASCII filter and you get the logical expression: "1"

Pass the *same* string through a binary numeric filter and you get the logical expression: "49".

What changed? The string didn't. That's because there is no logic in the string. It's in the filter we apply to it.
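
The two "filters" in question are easy to exhibit directly (this sketch is an editorial illustration, not part of the original comment); note that the results cohere, since the character "1" has ASCII code 49.

```python
# The same eight bits read through two different "filters":
s = "00110001"
as_number = int(s, 2)      # binary-numeric filter: the integer the bits denote
as_text = chr(as_number)   # ASCII filter: the character with that code
assert as_number == 49
assert as_text == "1"
```

The string is identical in both cases; only the interpretive convention applied to it differs, which is the point being argued.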

So, how would I describe the pattern of magnetic domains stored on a hard disk drive, separated from the execution process (including the encoding filters) of the corresponding computer components?

A meaningless pattern of magnetic domains.

"Of course. What does that have to do with what is present in a string that has already been encoded according to a specific procedure, and whether it contains something besides the circuits?"

By itself the string is a meaningless array of magnetic domains.

Take your HDD out of the computer. Look at it. Now, point to the logic. If you can't, you owe me a beer.

One Brow said...

Mr. Shackleman

We can impart the abstract value "1" or "0" or any value we want for that matter, onto **anything**,

Yes, but that requires a change in the interpreter.

which is why the inkblot example works to show that neither the *data*, nor the inference derived thereof (logic), is inherent in the medium.

So, what is inherent to the medium? What property of the medium causes the same behavior from the disk when it is put into different computers, when the programmer is not present? You have gone to great lengths to say what is not there, but what is your terminology for what is there? You have already said it is not random, so what is it?

I'm not sure how else I can describe this.

I'm not sure why you are describing something I have already agreed to as if I have not.

"So, what word do you use to describe the difference between the contents of the random array and the one that has been encoded, since you don't like to say that contents is the logic used to create the post-encoded-string?"

Again, the content doesn't contain the logic!


I left my quote in to emphasize this: I acknowledged we won't say the logic is on the disk, and asked you what you would say is on the disk. Your response, again, is to tell me the logic is not on the disk. So, it's not randomness and it's not the logic of the program. What is it? Repeating that it is not the logic seems counter-productive.

What changed? The string didn't. That's because there is no logic in the string. It's in the filter we apply to it.

Yet, you have also said that using the filter on a random string will produce a random output. So, it's not just in the filter. The filter has something, and so does the storage medium. Do you have names for them?

A meaningless pattern of magnetic domains.

It's not meaningless to the interpreter it was designed to be translated by. If it truly had no meaning, why would the interpreter get something meaningful out of it? I agree electrons on a screen have no inherent meaning as electrons on a screen. But I am writing what will appear as a meaningful pattern of electrons to you. Is it a coincidence that what I write is meaningful to what you read? If not, what do you call the non-coincidental aspects of this pattern from which you draw meaning?

By itself the string is a meaningless array of magnetic domains.

Take your HDD out of the computer. Look at it. Now, point to the logic. If you can't, you owe me a beer.


Because my unaided eyes and brain are an insufficient interpreter?

Shackleman said...

Mr. One Brow,

"So, what is inherent to the medium"

For the third time: There is nothing inherent to the medium save a meaningless pattern of magnetic domains.

"Yes, but that requires a change in the interpreter. "

Precisely.

"Wh[at] property of the medium causes the same behavior in the disk when put into different computers, when the programmer is not present."

Huh? You need more than the disk and that's precisely the point! The order of magnetic domains on the HDD *must* be passed through an interpreter, otherwise there is nothing there *but* the magnetic domains.

"The filter has something, and so does the storage medium. Do you have names for them?"

The name of the "something" depends on the medium. It could be "ink blots", or "text", or "electron grids", or "magnetic domains".

I think you want me to call them "logic", but they're not. They're just *symbols* to which we apply our logic.

"Because my unaided eyes and brain are an insufficient interpretor?"

01011001011001010111001100100001

Shackleman said...

What do YOU call the following WITHOUT the aid of an interpreter?


01011001011001010111001100100001

I call it a meaningless pattern of ones and zeros, yet, you seem to be claiming it's something else. What is it?

cl said...

One Brow,

"I certainly could have misunderstood."

As could I. How does your definition of rational falsify the claim that given epiphenomenalism, rationalism is an illusion?

cl said...

One Brow / Shackelman / William,

Of course, anybody can answer, it just seems that we're the last ones standing here. So, let me bounce this off y'all... for me at least, the question has been along the lines of, "can we be truly rational?" or "is rationalism an illusion?" given materialist determinism and/or epiphenomenalism?

It seems to me that this question parallels Galen Strawson's handling of ultimate moral responsibility. Strawson argues that if free will does not exist, we cannot be held ultimately responsible for our actions. Okay, well... if that's true, then nobody is ultimately responsible for their rationality or their irrationality. Seems straight-forward enough, right?

However, I see a catch, one that potentially falsifies the claim I've been making to One Brow. Strawsonian morality boils down to luck. Okay, well... then perhaps we say that one *is* truly rational or irrational, that rationality is actually *not* an illusion, it just all boils down to luck.

The obvious ramification I see is that those who fancy themselves more rational than others really don't have much to brag about, whereas those less endowed with rationality become about as guilty of sin as a guy who loses his hair.

Your thoughts? This is a really interesting question to me.

William said...


the question has been along the lines of, "can we be truly rational?" or "is rationalism an illusion?" given materialist determinism and/or epiphenomenalism?

It seems to me that this question parallels Galen Strawson's handling of ultimate moral responsibility


Interesting. So you think that the type of deterministic, reductive materialism that the AfR is directed against would also leave moral responsibility without a decent basis?

woodchuck64 said...

William,

My claim is that Watson is processing semantics, not just syntax, because I think it's obvious that syntax only would never allow winning at Jeopardy. Do you see this as a controversial claim?

Watson's processing of semantics looks from the outside like reasoning and intentionality but is not anything like it necessarily. I'm only interested, for the moment, in how well computers can eventually appear to reason, given unlimited processing power, not what actually goes on inside.

Do you think that there is a difference between the noise made by a falling tree in the forest and a complete computer simulation of the fall of the tree, even if no one is there to hear it?

For purposes of determining the acoustic wave pattern of the sound of a tree falling, there is no difference (assuming the simulation closely matches the physical environment it's modeling).

If this analogy is appropriate for the AFR, does that mean the appearance of flawless reason in silicon in no way challenges the argument?

One Brow said...

Mr. Shackleman said:
For the third time: There is nothing inherent to the medium save a meaningless pattern of magnetic domains.

This is how I am understanding your position: when I put disk A in my computer, it plays Tropico. When I put disk B in my computer, it plays Aretha Franklin songs. Then I go to a second computer, and disk A will play Tropico, and disk B will play Franklin; likewise at a third computer. However, what's on the disk is completely meaningless; the computer just randomly chooses to play Tropico or the Queen for no particular reason.

If you think the last clause is an accurate depiction of your views, please acknowledge so directly. No big deal to me. However, if it is not accurate, and you want me to understand your position, please elucidate on the difference between the meaninglessness that plays Tropico and the meaninglessness that plays soul music, and how they differ from a randomized meaninglessness.

Huh? You need more than the disk and that's precisely the point!

A point I have already acknowledged multiple times.

I think you want me to call them "logic", but they're not.

I just would like a term we could use to distinguish them. Feel free to use a different term.

One Brow said...

cl,

As a materialist, I'm not sure what it would be to hold somebody "ultimately responsible" for a transgression. I mean, does it prevent us from putting people in jail for crimes? Executing them? I'm opposed to capital punishment, but not for that reason (a discussion for another day). Certainly the notion of a sin plays no part in my decision-making.

Does being rational, and having the ability to think rationally, boil down to being lucky? I often see it that way. I don't find it any more praiseworthy to be intelligent than to be tall.

On the other hand, praise does act as positive reinforcement. Even if people behaving rationally is a circumstance of luck, those of us who through other circumstances of luck see this behavior as valuable will still use praise to encourage more of what we desire to be increased.

Shackleman said...

Mr. One Brow,

"what's on the disk is completely meaningless; the computer just randomly chooses to play Tropico or the Queen for no particular reason.

If you think the last clause is an accurate depiction of your views, please acknowledge so directly."


No. Not only is it inaccurate, it is so far removed from what I've said as to cause me to think we're just talking completely past each other.

"However, if it is not accurate, and you want me to understand your position, please elucidate on the difference between the meaninglessness that plays Tropico and the meaninglessness that plays soul music, and how they differ from a randomized meaninglessness."

I've already done this with the inkblot example. The source, inkblots in that example, has no inherent meaning. Meaning is derived from them, or rather, imparted to them by minds. I don't even know what you mean by "randomized meaninglessness". There can be randomized sequences of patterns which, by themselves, are meaningless. However, minds can impart meaning to the *otherwise* meaningless pattern if they so desire, and that's precisely what computers do. They produce as meaningful output what human minds encode as meaningful input. If there is no mind doing the inputting, and no mind interpreting the output, then there exists no meaning and no logic within any part of them.

That was my last effort. Thanks for the discussion. If it hasn't been helpful for you, then perhaps it's been helpful at least to some onlookers who are not themselves versed in the actual inner workings of computers.

Computers are digital abacuses. Without a human mind assigning abstract meaning to the arrangement of the beads on the abacus, there would be no logic derived from them. It is *exactly* so with computers. So, computers are not merely inferior reasoners. They don't reason at all. And, more importantly, they *can't* reason. They're bead pushers...nothing more, nothing less.

I wish you nothing but good cheer, sir. Until next time then....

01100010011110010110010100100001
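[Editor's note: the binary sign-off above can be checked exactly the way Shackleman describes elsewhere in the thread: chunk the bits into bytes and read each byte as an ASCII code. A minimal Python sketch:]

```python
# Decode the binary sign-off: split into 8-bit chunks, read each as an ASCII code.
bits = "01100010011110010110010100100001"
text = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(text)  # bye!
```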

Shackleman said...

"Interesting. So you think that the type of "deterministic reductive "materialism" that the AfR is directed against also would leave moral responsibility without a decent basis?"

This is exactly right.

Shackleman said...

Mr. CL,

"The obvious ramification I see is that those who fancy themselves more rational than others really don't have much to brag about, whereas those less endowed with rationality become about as guilty of sin as a guy who loses his hair.

Your thoughts? This is a really interesting question to me."


This too is exactly right. If all is matter, blindly playing out the laws of physics, then having a "right and true thought" has nothing whatsoever to do with "reasons", or one's ability "to reason", but instead is a necessary and pre-determined result of the physical forces, which began at the big bang, playing themselves out, as determined by the laws of physics.

"Materialism", as a rational position, is therefore self-refuting because "rationality" is an illusion, a mere epiphenomenon of a particular arrangement of atoms.

Al Moritz said...

"Materialism", as a rational position, is therefore self-refuting because "rationality" is an illusion, a mere epiphenomenon of a particular arrangement of atoms.

I agree. And a materialist (a naturalist) can never know if naturalism is true, which is also why the position is self-refuting, see:

http://home.earthlink.net/~almoritz/naturalism_is_true.htm

cl said...

William,

"Interesting. So you think that the type of "deterministic reductive "materialism" that the AfR is directed against also would leave moral responsibility without a decent basis?"

In the sense atheists typically propose, yes, absolutely. However, I don't see that determinism in all forms is 100% incompatible with ultimate moral responsibility. For example, one could imagine a scenario in which all our choices were made prior to our conception, and that our lives are a simple "playing out" of said choices. This would circumvent P2 of Strawson's Basic Argument, because one would become causa sui. But, yeah: Strawson and those who accept his Basic Argument under the traditional determinist rubric must deny ultimate moral responsibility.

One Brow,

"On the other hand, praise does act as positive reinforcement."

The question for me then becomes, Does the "effectiveness" of praise and condemnation reduce to illusion? How does the determinist explain this? I suppose they might say that the sound waves produced by kind/mean words actually change the structure of brain matter, leading to different arrangements of brain matter for the agent in question. I'm skeptical, but couldn't reject it out of hand.

Also, if I might cut in on your discussion with Shackleman:

"If there is no mind doing the inputting, and no mind interpreting the output, then there exists no meaning and no logic within any part of them."

While I'm sympathetic to your argument because I don't think value can exist without a valuer, I don't think that's right. A dictionary has inherent meaning. Even if you took away all observers, a dictionary by itself would have inherent meaning. It would have purposefulness or telos infused into it. I suppose this might be because I think information is a field unto itself, something that actually exists, as opposed to a mere mental construct.

Shackelman,

"Materialism", as a rational position, is therefore self-refuting because "rationality" is an illusion, a mere epiphenomenon of a particular arrangement of atoms.

I tend to agree, but do you think this holds even if praise and condemnation *can* actually influence agents? Why or why not? Or, if you don't have an opinion just yet, what further questions might we wish to explore?

One Brow said...

Mr. Shackleman said...
No. Not only is it inaccurate, it is so far removed from what I've said as to cause me to think we're just talking completely past each other.

Perhaps. We agree that the computer doesn't just choose to play something randomly upon the insertion of a disk, but you seem to have a cognitive block against the notion that there has to be some feature of the disk that is the reason for this. Instead, all disks are the same, but produce different results.

"However, if it is not accurate, and you want me to understand your position, please elucidate on the difference between the meaninglessness that plays Tropico and the meaninglessness that plays soul music, and how they differ from a randomized meaninglessness."

I've already done this with the inkblot example.


The inkblot example gave no indication that there is any difference between any particular collections of inkblots, and suggested that all the meaning comes from the interpreter. How is that different from saying that the interpreter is randomly deriving meaning from the inkblots? Since it can't get meaning from them, what does it get from the inkblots?

That was my last effort. Thanks for the discussion.

Thank you as well. I apologize that it must have seemed difficult at times.

One Brow said...

cl said,

The question for me then becomes, Does the "effectiveness" of praise and condemnation reduce to illusion? How does the determinist explain this?

Speaking only for myself, "praise" and "condemnation" are generalizations of phenomena that are uttered and heard by beings with a sense of abstraction that includes these notions.

I suppose they might say that the sound waves produced by kind/mean words actually change the structure of brain matter, leading to different arrangements of brain matter for the agent in question. I'm skeptical, but couldn't reject it out of hand.

I would not envision a chain quite that direct or linear, but that would be one summary.

A dictionary has inherent meaning.

I agree with that, in a limited sense of the word "meaning". It was Mr. Shackleman who offered that inkblots were meaningless.

William said...

Hm, I think that there is one thing at least missing from the inkblot discussions: culture.


I've already done this with the inkblot example.

The inkblot example gave no indication that there is any difference between any particular collections of inkblots, and suggested that all the meaning comes from the interpreter. How is that different from saying that the interpreter is randomly deriving meaning from the inkblots? Since it can't get meaning from them, what does it get from the inkblots?


Although the interpreter creates the meaning, the meaning the interpreter makes is shaped in many cases when the inkblots have an intelligent designer.

Shackleman said...

Mr. CL,
"I tend to agree, but do you think this holds even if praise and condemnation *can* actually influence agents? Why or why not? Or, if you don't have an opinion just yet, what further questions might we wish to explore?"

What is "praise" and "condemnation" in reference to the physical? It'd be equivalent to the notion that if you tell a rock "good job" then it will decide to roll down the hill. It's nonsensical to talk about "praise" in a mechanistic, material cosmos.

Shackleman said...

"Instead, all disks are the same, but produce different results."

No, all disks are not the same. They have differences in the magnetic domains on the disk, and it is those which we interpret as 1's and 0's, the patterns of which we can encode to produce meaningful output when passed through an appropriately encoded interpreter.
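[Editor's note: Shackleman's point here, that the bit pattern on the disk is physically real while its meaning depends on the decoder, can be sketched in Python. The byte values below are invented for illustration, not taken from any actual disk:]

```python
import struct

# The same physical byte pattern, handed to three different "interpreters".
raw = bytes([72, 105, 33, 0])

as_text = raw[:3].decode("ascii")     # a text reader sees the string "Hi!"
as_bytes = list(raw)                  # a hex-dump tool sees raw byte values
as_int = struct.unpack("<I", raw)[0]  # another program sees one 32-bit integer

print(as_text)   # Hi!
print(as_bytes)  # [72, 105, 33, 0]
print(as_int)
```

None of the three readings is "in" the bytes; each comes from the decoding convention applied to them.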

Shackleman said...

Mr. CL,

"A dictionary has inherent meaning. Even if you took away all observers, a dictionary by itself would have inherent meaning. "

I disagree. As a thought experiment:

Assume the existence of an Oxford English Dictionary.

Now assume the language "English" had never existed.

Assume also that there has never existed anything such as human language.

What happens to your Oxford Dictionary?

It loses all information and meaning, because all interpreters have been removed from existence. Don't agree? Well, let's see....

Now assume several million years pass and an organism evolves which constructs something akin to language, and it even evolves a similar text-based notation.

Assume also they find your dictionary, and assign *different* values to the ink blots on the paper.

*All* of the information would be completely and utterly **changed**.

For example, we interpret the ink configuration: YES

As "an affirmation".

However our new life forms interpret the ink configuration: YES

As the notation for what you and I would call, say, an apple.

So, when we write "apple", they would write "yes".

Where did the original information, known as "An affirmation" go? It seems to have vanished, and has been replaced with "an apple"! But how can this be if the information was somehow "in" the Dictionary?

It didn't go anywhere because it never existed in the dictionary in the first place. The Dictionary is just an *otherwise* meaningless compilation of ink blots. Not unless and until a mind assigns *abstract* value to the inkblots will there exist any information therein.

The information existed in the abstract. It didn't exist anywhere in the physical Oxford Dictionary.
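[Editor's note: the thought experiment above amounts to saying that a mark is only a key, and its value comes from the interpreter's codebook. A minimal sketch; the two mappings are, of course, invented for illustration:]

```python
# Two interpreters assign different values to the identical ink configuration.
mark = "YES"
english_reader = {"YES": "an affirmation"}
alien_reader = {"YES": "an apple"}

print(english_reader[mark])  # an affirmation
print(alien_reader[mark])    # an apple
```

The mark on the page never changes; only the codebook consulted does.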

Shackleman said...

William,

"Although the interpreter creates the meaning, the meaning the interpreter makes is shaped in many cases when the inkblots have an intelligent designer."

The interpreter *is* the intelligent designer.

Shackleman said...

I've already shared a more streamlined example of my thought experiment, but perhaps it's worth repeating.

In English, the following:

00110001

contains the *information* (or meaning if you prefer):

zero zero one one zero zero zero one

However, in ASCII, that same pattern contains the information:

1

However, in numeric binary, that same pattern contains the information:

25

If the information were somehow native to, or "in", the physical ones and zeros themselves, how in the world have we arrived at three completely *different* sets of information from the same exact sequence of blots?

Shackleman said...

Oops. 25 should be 49. :-)

There are 10 kinds of people in the world. Those who understand binary, and those who don't.

For a second there I was the latter. :-)
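[Editor's note: Shackleman's corrected figures can be checked mechanically. A short Python sketch of the three readings of the same eight-character pattern:]

```python
bits = "00110001"

# Reading 1: English, character by character.
as_digits = " ".join("one" if b == "1" else "zero" for b in bits)
# Reading 2: the byte as an ASCII code point.
as_ascii = chr(int(bits, 2))
# Reading 3: the byte as a base-2 integer.
as_number = int(bits, 2)

print(as_digits)  # zero zero one one zero zero zero one
print(as_ascii)   # 1
print(as_number)  # 49
```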

William said...

woodchuck64:

My claim is that Watson is processing semantics, not just syntax, because I think it's obvious that syntax only would never allow winning at Jeopardy. Do you see this as a controversial claim?


My calculator can beat most people at exponentiation. I see no essential difference, given unlimited computing power.



Do you think that there is a difference between the noise made by a falling tree in the forest and a complete computer simulation of the fall of the tree, even if no one is there to hear it?

For purposes of determining the acoustic wave pattern of the sound of a tree falling, there is no difference (assuming the simulation closely matches the physical environment it's modeling).



Is an acoustic wave the same as the sound the tree made? Isn't the wave-form you refer to a set of computer parameters referring to a mathematical abstraction?

If you really think that a computer simulation of a thing can be completely equivalent in your own life to the thing itself, this is where our differences lie, and I don't think I can go further that direction.

One Brow said...

William said...
Although the interpreter creates the meaning, the meaning the interpreter makes is shaped in many cases when the inkblots have an intelligent designer.

I agree, even if the designer is not intelligent. However, the question remains regarding what is in the inkblots that allows the transfer of knowledge when said designer is not present.

One Brow said...

Mr. Shackleman said...
... the patterns of which we can encode to produce meaningful output when passed through an appropriately encoded interpreter.

I think I finally have my answer. The difference between the disks used to store information and those with random circuits is the presence of an encoded pattern, which you contend is also meaningless in and of itself. Is that accurate?

One Brow said...

Mr. Shackleman said...
Where did the original information, known as "An affirmation" go? It seems to have vanished, and has been replaced with "an apple"! But how can this be if the information was somehow "in" the Dictionary?

Is it ever possible to learn a language that has been lost, to any degree at all? I think this has been done, but perhaps only by comparison to existing languages.

Shackleman said...

Mr. One Brow,

"I think I finally have my answer. The difference between the disks used to store information and those with random circuits is the presence of an encoded pattern, which you contend is also meaningless in and of itself. Is that accurate?"

It's very close. I'd quibble with the semantics a bit, but I think we can go with it for now and see where it leads if you so desire.

I'll make my intentions clear. My aim throughout this entire discussion is to dispel what I think is a myth about computers, what they do, how they work, and how they relate to human minds and/or reasoning.

This is why I hesitate to assume too much with respect to your disks. If *all* you have is the disk, and you take away the rest of the computer, and you take away all of the coding languages and all of the parts that would execute output as a result of the input of bits stored on the disk, I argue that you are left with nothing but a piece of plastic with no intrinsic value or meaning.

You said that anytime someone invokes "emergence" they are retreating to magic. There is nothing magical about computers; however, my position clearly is that there is no information unless and until all of the code is executed. Once executed, the information will emerge, but until then, it doesn't exist. Can you prove me wrong?

Shackleman said...

Mr. One Brow,

"Is it ever possible to learn a language that has been lost, to any degree at all? I think this has been done, but perhaps only by comparison to existing languages."

Only partially. We can make excellent and somewhat accurate guesses as to the general meaning behind certain inscriptions left behind by *human* markings. But this is because humans are humans. Modern humans are equipped with the *exact* same input mechanisms (senses) as ancient humans. We are also equipped with the exact same *output* mechanisms as ancient humans (the ability to produce sound and leave markings). So, we can in some cases reverse engineer the input/output encoding of the markings.

But if you think carefully about this, when we do so we are *executing* the "code", and we are only able to do so because we are all equipped at birth with the same basic encoding/decoding mechanisms.

But, could Helen Keller ever recreate on her own, without assistance from a hearing/sighted person, the language that hearing/sighted people developed? I would say it'd be utterly impossible because she did not have the same input/output mechanisms. She learned our language because she had an *interpreter* who could translate the inputs hearing/sighted people could understand, into the inputs of touch which Helen could understand.

woodchuck64 said...

William,

If you really think that a computer simulation of a thing can be completely equivalent in your own life to the thing itself, this is where our differences lie, and I don't think I can go further that direction.

While that's an interesting issue and I can see several different ways I'd address it, I don't think it's relevant to my questions regarding the AFR. I'm interested in knowing whether the hypothetical appearance of flawless reason in silicon challenges the AFR in any way. Put another way, I'm trying to understand if the AFR concludes or implies that intelligent machines that reason indistinguishably from human beings (from 3rd-person perspective of course) are impossible.

Shackleman said...

Mr. Woodchuck,

The fact is we *do* know how computers work, and we know that any "appearance" of rationality they show is purely illusory. So, what does it matter if we can envision a perfect illusion of rationality?

Unless you're prepared to say that human rationality is *also* an illusion, which some (honest?) materialists do, then computers have no bearing on the AfR because they don't reason, which has been my point all along.

A magician can make it look like he's floating in mid-air. The fact that it appears so doesn't make it so.

Watson appears to be reasoning. The fact that it appears so, doesn't make it so.

One Brow said...

Mr. Shackleman said...
It's very close. I'd quibble with the semantics a bit, but I think we can go with it for now and see where it leads if you so desire.

Well, that tangent came out of a discussion you were having with Mr. woodchuck64, but I don't think I want to engage in a general support of his positions.

I don't recall reading a response after I answered your questions at March 25, 2011 7:43 AM (at least, it displays as such on my screen). If you have no further responses or questions, I think I understand your position on what is stored on disks, and agree with it to a large degree.

You said that anytime someone invokes "emergence" they are retreating to magic.

If you were under the impression that I endorsed that view, I am very sorry to have misled you. That's what I was told by another commenter (it may have been on Dr. Feser's blog) when I described consciousness as possibly arising from physical phenomena.

Can you prove me wrong?

I may not understand your position on the difference between encoded data streams and information in the exact same way you do, but I agree with the version I understand you to hold.

I will point out that generally, even though they may not reason (does Polya reason?), computers do use abstraction. Do you disagree?

But if you think carefully about this, when we do so we are *executing* the "code", and we are only able to do so because we are all equipped at birth with the same basic encoding/decoding mechanisms.

An excellent point.

William said...


I'm trying to understand if the AFR concludes or implies that intelligent machines that reason indistinguishably from human beings (from 3rd-person perspective of course) are impossible.


The AfR says nothing of the sort. However, if a robot that was sufficiently like a person that one would reasonably conclude it was a reasoning person were shown to be entirely, mechanically deterministic in its reasoning and communication of its reasons about its personal behavior, this would, I think, disprove the AfR.

I think that 'Watson' does not fit this description.

And a promise of what you think future robotics could do does not convince me. In fact, it's possible that a Commander-Data-like robot would be shown to be indeterministic, and would itself not see itself as deterministic. Who knows? That future would tend to support the AfR.

Shackleman said...

Mr One Brow,

It can sometimes be difficult to discern the difference between a question stemming from curiosity, versus a question offered as a challenge.

Both are quite good and fair in debate/discussions, but if confused, can result in miscommunication. Sorry if I confused the two during the course of our dialogue. In any event, I enjoyed it very much, thank you.

Shackleman said...

Mr. One Brow,

"I will point out that generally, even though they may not reason (does Polya reason?), computers do use abstraction. Do you disagree?"

It depends on what you mean by "use" and "abstraction".

woodchuck64 said...

Shackleman,

The fact is we *do* know how computers work, and we know that any "appearance" of rationality they show is purely illusory. So, what does it matter if we can envision a perfect illusion of rationality?

If a computer could demonstrate a perfect illusion of rationality (i.e. passes the Turing test), wouldn't the AFR be essentially restating the hard problem of consciousness? Then, the only difference between reason and the appearance of reason is that "there is something it is like to reason" which the computer does not possess. I'm trying to understand if the AFR is intended to be separate from or saying something more than the hard problem of consciousness.

Take a computer demonstrating a perfect illusion of rationality, now solve the hard problem of consciousness by magically giving it a rich inner subjective experience of rationality; bang, AFR solved, too, right?

William,

However, if a robot that was sufficiently like a person that one would reasonably conclude it was a reasoning person were shown to be entirely, mechanically deterministic in its reasoning and communication of its reasons about its personal behavior, this would, I think, disprove the AfR.

An interesting point, thank you.

William said...


Take a computer demonstrating a perfect illusion of rationality


This presupposes that such a computer would be mechanistically deterministic in the way Lewis specified. Would it be, I wonder?

Edward T. Babinski said...

Vic, you wrote, "However, suppose that on all disputed questions Steve rolled dice to fix his positions permanently."

RESPONSE: No brain-mind functions purely like tossing dice. Though I bet some interesting things happen at the earliest stages of neuron formation and sensation development in the womb and in babies, probably a lot of sensations get weeded out via feedback mechanisms so that they eventually develop in a way that functions in the world and makes sense out of it. We know that lots of neurons die from birth onwards (probably due to lack of feedback along particular neuronal pathways/connections). If they didn't, I suppose the brain-mind probably wouldn't be able to make much sense of anything at all, not if all neuronal connections remained intact from birth onwards.

You talk about the sub-stratum, but it's not irrational, just non-rational. Atoms are not rational, which is different from irrational. That's about all one can say or needs to say about that. That doesn't mean rationality cannot arise in a brain-mind system that takes in enormous loads of sensory data on a far more comprehensive and larger level than the purely "atomic" level. We learn how to react to nature and other brain-minds on a macrolevel. Reasoning has to evolve. Reasoning is simply a word for our ability to distinguish between things, combined with memory and foresight that makes better distinctions in time as well as space.

How are beliefs produced and sustained? That is a crucial question. You think you know how? Have you studied the evolutionary history of the brain-mind? Or the development of the brain-mind from embryo to adult? Cognitive science? Etc.?

Edward T. Babinski said...

There is no definition of what a Turing Test might include, is there? It's a test of whether or not one can be fooled by a machine into thinking that machine has a human-like mind, right? Well, some people are far more easily fooled than others. The computer that won on Jeopardy did a pretty good job of fooling people, but only in the limited sense of reacting to Jeopardy questions. Some people might have been fooled into thinking it was conscious since it could react to complex questions (in Jeopardy they receive the answers) in a human length of time.

What I'm saying is that the Turing Test is limited only by how many tests one may decide to put the machine through, and some people can come up with tests and further tests all day: not just asking it questions, but asking the machine to share firsthand experiences, like seeing things for the first time, tasting them, touching what's inside a drawer and naming it, etc. For a machine to interact with the environment, a highly variegated environment filled with diverse sensory inputs, and not just spit out strings of words in reaction to other strings of words, is something else. Even getting a machine to indulge in highly metaphorical conversations and poetry, or to forge analogies that make sense, is extremely difficult, since language itself is slippery; only by understanding things firsthand, via sensory input, by knowing how such things are seen and felt and taste, only by experiencing those things firsthand, are connections forged inside the human brain-mind. But IF a machine can do all that, experience things firsthand, drink in the same sensory world we do, and also forge connections between such sensations and language, it could truly pass a Turing Test. I suppose such a machine would also have to learn via trial and error and neural-net reinforcement of certain patterns, and it would take it time to do so, to interact with nature and maintain the most reinforced patterns. But that's basically what the human brain-mind appears to be doing, too.