This is a blog to discuss philosophy, chess, politics, C. S. Lewis, or whatever it is that I'm in the mood to discuss.
Saturday, January 05, 2008
17 comments:
- One Brow said...
-
Computers have no first-person perspective. Therefore, they do not literally add 2 + 2. They do not perceive the relationship amongst the meanings.
Computers operate in the same formal system, applying the same rules we do. What is the evidence any difference in meaning is qualitative instead of quantitative?
We perceive those relationships. However, physical facts are not perspectival. If my perspective determines how atoms go in my brain, we have a non-publicly accessible fact that determines physical states. That's not considered good naturalism.
Of course, if your perspective is merely the accumulation of physical facts into a larger whole, it is quite possible that this accumulation of facts can have effects that subcollections cannot have. - January 06, 2008 9:43 AM
- Ilíon said...
-
(It's too bad that the only way to "edit" one's comment is to delete it and re-enter it.)
VR: “Computers have no first-person perspective. Therefore, they do not literally add 2 + 2. They do not perceive the relationship amongst the meanings.”
One Brow: “Computers operate in the same formal system, applying the same rules we do.”
A computer is nothing but ("Nothing Buttery" isn't inherently fallacious) a glorified abacus. A computer no more “operate[s] in the same formal system, applying the same rules we do” than does an abacus.
*We* don’t actually use that formal system when we add 2 + 2 -- because, as Mr Reppert says, we *understand* the meanings and the relationships between the meanings.
On the other hand, computers don’t understand anything, and are therefore quite unable to add 2 + 2. Instead, they are designed to use that formal system to mechanically simulate what we do when we add 2 + 2.
The number “2” is the name (if you will) for the equation “1 + 1” -- which equation, one may notice, itself contains the “+” operation. But computers no more understand “1 + 1” than they do “2 + 2.”
And for that matter, computers no more understand “1” than they do “2” or “+.” If one insists upon thinking of computers as understanding anything, what they understand is “something” and “nothing;” “something” can stand for “1” or for “true,” “nothing” can stand for “0” or for “false.”
But -- and how fortunate for us, since our modern world could not function without computers -- the equation “1 + 1” is equivalent to counting up one from one (which I will write as “1 & 1” to differentiate it from “1 + 1”).
Computers don’t add (or subtract or multiply or divide); they count, just as an abacus “adds” by counting up -- counting is what that formal system is about. That is, whereas we add “2 + 2,” a computer counts “(1 & 1) & (1 & 1).”
An abacus “adds” because a human being (who understands) is physically moving physical beads around. A computer “adds” because human beings (who understand) figured out how to mechanize moving the beads, and eventually figured out how to electronically *simulate* moving beads around.
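To make that concrete, here is a minimal Python sketch -- purely illustrative, not anything from the original comment -- in which “addition” is built from nothing but a count-up-one primitive:

```python
# A minimal illustrative sketch: "addition" built entirely from counting.
# The machine's only primitive is "count up one" -- it never sees a "sum."

def count_up(n):
    """Move one more 'bead': the single primitive operation."""
    return n + 1  # in real hardware, even this is just bit-flipping

def add_by_counting(a, b):
    """Produce a + b by counting up from a, b times."""
    total = a
    for _ in range(b):
        total = count_up(total)
    return total

print(add_by_counting(2, 2))  # 4 -- "(1 & 1) & (1 & 1)" in the notation above
```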
One Brow: “What is the evidence any difference in meaning is qualitative instead of quantitative?”
LOL. Please! By definition, all “difference in meaning is qualitative instead of quantitative”. - January 07, 2008 8:50 AM
- One Brow said...
-
A computer is nothing but ("Nothing Buttery" isn't inherently fallacious) a glorified abacus. A computer no more “operate[s] in the same formal system, applying the same rules we do” than does an abacus.
When you start out disagreeing with a position, do you always give evidence to support the position you are disagreeing with?
*We* don’t actually use that formal system when we add 2 + 2 -- because, as Mr Reppert says, we *understand* the meanings and the relationships between the meanings.
What does the computer not understand?
On the other hand, computers don’t understand anything, and are therefore quite unable to add 2 + 2. Instead, they are designed to use that formal system to mechanically simulate what we do when we add 2 + 2.
Since all humans do is use a formal system to add 2 + 2, you haven’t provided a difference.
The number “2” is the name (if you will) for the equation “1 + 1” -- which equation, one may notice, itself contains the “+” operation. But computers no more understand “1 + 1” than they do “2 + 2.”
Repeating an assertion is not proof.
And for that matter, computers no more understand “1” than they do “2” or “+.” If one insists upon thinking of computers as understanding anything, what they understand is “something” and “nothing;” “something” can stand for “1” or for “true,” “nothing” can stand for “0” or for “false.”
Again, I don’t see anything for computers to not understand.
But -- and how fortunate for us, since our modern world could not function without computers -- the equation “1 + 1” is equivalent to counting up one from one (which I will write as “1 & 1” to differentiate it from “1 + 1”).
Computers don’t add (or subtract or multiply or divide); they count, just as an abacus “adds” by counting up -- counting is what that formal system is about. That is, whereas we add “2 + 2,” a computer counts “(1 & 1) & (1 & 1).”
That’s one way humans learn to add, as well. In my daughter’s first grade class, they use the “counting points” technique. Of course, another way humans learn is by rote. The computer equivalent is a look-up table, as in the sketch below.
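(A toy Python illustration of the rote approach -- the table here is hypothetical, not any particular machine's implementation:)

```python
# Rote learning, machine style: answers are retrieved, not computed.
# A precomputed table of "addition facts," like a child's memorization.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_rote(a, b):
    """Return the memorized sum -- no counting happens at answer time."""
    return ADD_TABLE[(a, b)]

print(add_by_rote(2, 2))  # 4
```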
An abacus “adds” because a human being (who understands) is physically moving physical beads around. A computer “adds” because human beings (who understand) figured out how to mechanize moving the beads, and eventually figured out how to electronically *simulate* moving beads around.
A human “understands” because a prior human taught them by moving around beads, or by rote.
One Brow: “What is the evidence any difference in meaning is qualitative instead of quantitative?”
LOL. Please! By definition, all “difference in meaning is qualitative instead of quantitative”.
So, there is no difference in the meaning of “1 gram of water” versus “1 kilogram of water”, or there is a qualitative difference between “gram” and “kilogram”? - January 07, 2008 3:04 PM
-
-
I am wondering whether the AFR should be taken as merely showing that naturalism is irrational or, in fact, false. Strictly speaking, of course, it only shows the former, but isn't this a case of special pleading?
Suppose that there was a worldview, let’s call it worldview x, that made the claim that Al Gore didn’t exist and then you found out that Al Gore did exist. In arguing against that worldview, you might proceed like this:
1. According to worldview x, Al Gore doesn't exist.
2. But Al Gore does exist.
3. Therefore, worldview x is false.
The proponent of worldview x could always respond by saying that you did not actually discover the existence of Al Gore, your cognitive faculties merely caused you to form the belief that you did. That is, the proponent of worldview x could choose nonrationality; he could abandon the primary role of our cognitive faculties in forming our beliefs. But would we say that you had only showed worldview x to be irrational or that you also showed it to be false? If we answer the former, aren’t we excluding the possibility, in any case, of actual falsification? - January 08, 2008 10:47 AM
-
-
How does the physicalist account for understanding? Or... how does the physicalist account for the fact that two people can read the exact same thing (such as the post that these comments are related to) and one is able to 'understand' what is being discussed, while another (who understands what the individual words that compose the paragraphs mean) grasps neither the topic nor the point that either side is trying to make?
They are reading the exact same passage. They both know what the words that constitute the passage mean (if not, they can look up the definition of an individual word)... but they don't get the 'point', or they don't 'understand' what is being said. If intention doesn't exist, or it exists and it can be accounted for via materialistic explanations... how do we account for this example of two people reading the same passage - one understanding, one confused as to the nature of the discussion? - January 08, 2008 3:26 PM
- One Brow said...
-
Hopefully I'm not abusing the near-metaphor too much.
How does the physicalist account for understanding?
Additional processing power in our biological computers.
Or... how does the physicalist account for the fact that two people can read the exact same thing (such as the post that these comments are related to) and one is able to 'understand' what is being discussed, while another (who understands what the individual words that compose the paragraphs mean) grasps neither the topic nor the point that either side is trying to make?
They are reading the exact same passage. They both know what the words that constitute the passage mean (if not, they can look up the definition of an individual word)... but they don't get the 'point', or they don't 'understand' what is being said. If intention doesn't exist, or it exists and it can be accounted for via materialistic explanations... how do we account for this example of two people reading the same passage - one understanding, one confused as to the nature of the discussion?
Differing structure and inputs. Because of the method of construction, no two human biological computers are ever identical, and because of differences in the efficiency and environment of the 15 input modes (senses), no two ever receive the same set of lookup tables, algorithms, etc., so why should you expect any two to get the same information from a particular input, unless their programming regarding that input string has been very carefully regulated and adjusted? - January 08, 2008 6:54 PM
-
-
Hi one brow, thanks for the response.
Regarding your answer then; how do you account for the one (whose understanding was lacking in my situation above) being able to later understand the topic? Let's say he started to read more on philosophy of mind/intention/etc... and developed the ability to better comprehend the post (to which all of these comments relate) so that he would be able to carry on a discussion with person A (who had an initial understanding of the post in question).
You stated:
Because of the method of construction, no two human biological computers are ever identical, and because of differences in the efficiency and environment of the 15 input modes (senses), no two ever receive the same set of lookup tables, algorithms, etc.,
Which sounds kind of damning for person B. However, person B was able to later develop an understanding.
why should you expect any two to get the same information from a particular input, unless their programming regarding that input string has been very carefully regulated and adjusted?
Do mechanical computers actually 'understand' what they are receiving? But I think this is beside the point. Who/what does the programming? Am I able to use my mental faculties (abstract/immaterial) to modify my brain states (physical structure)? I think you will probably say, "Of course not... immaterial mental states don't exist". But what is that element of volition that allows my brain states to change - allowing me to further understand a particular topic? - January 09, 2008 7:08 AM
- One Brow said...
-
Hi one brow, thanks for the response.
The pleasure was mine.
Regarding your answer then; how do you account for the one (whose understanding was lacking in my situation above) being able to later understand the topic? Let's say he started to read more on philosophy of mind/intention/etc... and developed the ability to better comprehend the post (to which all of these comments relate) so that he would be able to carry on a discussion with person A (who had an initial understanding of the post in question).
When you add more information, more algorithms, more coding to a biological computer, you get an improved ability to deal with different sorts of inputs. Much like installing Word lets you do things and read documents you can't see correctly in Wordpad or Notepad.
Which sounds kind of damning for person B. However, person B was able to later develop an understanding.
There is no reason to stop learning, as long as your memory apparatus is functioning correctly. It's nature and nurture.
Do mechanical computers actually 'understand' what they are receiving?
I believe what I was trying to say earlier is that our level of understanding seems to me to be a quantitative difference more than a qualitative difference with the understanding of a computer. I won't pretend I can prove this.
But I think this is beside the point. Who/what does the programming?
Certainly, a great deal is pre-programmed in the construction of the brain itself. Who programs a wasp to puncture the brain of a roach and in just the right place inject a toxin that makes the roach follow the commands of the wasp? Do wasps have souls, or is instinct a physical phenomenon? If it can be physical for a wasp, a human could certainly have a great deal of pre-programming.
Am I able to use my mental faculties (abstract/immaterial) to modify my brain states (physical structure)? I think you will probably say, "Of course not... immaterial mental states don't exist".
I am a naturalist, but I'm not a reductionist. I think it is much more useful to say mental states exist, with the understanding that mental states are the overall sum of a large number of physical effects. In the same way, all biology is at a very basic level chemical interactions, but who would want to describe the social behavior of wildebeest solely in terms of chemical elements?
So yes, mental states can affect your brain.
But what is that element of volition that allows my brain states to change - allowing me to further understand a particular topic?
The subject of much ongoing research, as I understand it. I believe we know it is affected by many of the physical characteristics of the brain, such as the thickness of the frontal lobes, but even the most knowledgeable don't have all the details, and I'm strictly an amateur. - January 09, 2008 8:10 AM
- Shackleman said...
-
I sometimes grow frustrated with arguments that relate the brain and mind to the computer. Especially from those who are not computer professionals (note, I'm not suggesting anyone who commented here is *not* a computer professional---I wouldn't know one way or another).
Computers do one thing, and one thing *only*. They move electrons to one "side" or another. Period.
To insinuate that a computer understands anything at all is pure nonsense.
Please people, try to remember that only human minds can associate "Meaning" with this:
01001101011001010110000101101110011010010110111001100111
The computer does NOTHING more than store that combination of ones and zeros. (Imagine a zero being an electron on the "left" and a one being an electron on the "right".) Human minds are the decoder wheels that give that otherwise random display of zeros and ones meaning. The computer is utterly and completely blind to it.
Oh, and in case you didn't know, the above ones and zeros are the binary equivalent of the word "Meaning".
The computer didn't "see" "Meaning", it only "saw" the ones and zeros.
Enough of the comparisons between computers and human minds already! - January 09, 2008 11:12 AM
- One Brow said...
-
I sometimes grow frustrated with arguments that relate the brain and mind to the computer.
It's not a comparison most people find to be flattering.
Computers do one thing, and one thing *only*. They move electrons to one "side" or another. Period.
Certainly the information transference mechanics in the brain are more complex.
To insinuate that a computer understands anything at all is pure nonsense.
I'd be happy to evaluate, and even possibly agree with, a non-circular, qualitative, verifiable description of this "understanding" that is innately not available to computers.
Please people, try to remember that only human minds can associate "Meaning" with this:
01001101011001010110000101101110011010010110111001100111
The computer does NOTHING more than store that combination of ones and zeros. (Imagine a zero being an electron on the "left" and a one being an electron on the "right".) Human minds are the decoder wheels that give that otherwise random display of zeros and ones meaning. The computer is utterly and completely blind to it.
No argument. The question is, is that a function of degree and more sophisticated programming, or due to a fundamental, unbridgable difference? - January 09, 2008 12:08 PM
- Ilíon said...
-
Shackleman "I sometimes grow frustrated with arguments that relate the brain and mind to the computer. Especially from those who are not computer professionals ..."
It's even worse when "computer professionals" try to equate the human mind with a computer; they *know* better.
Shackleman "The computer does NOTHING more than store that combination of ones and zeros. (Imagine the zero being an electron on the "left" and the ones being an electron on the "right"). Human minds are the decoder wheels that give that otherwise random display of zeros and ones meaning. The computer is utterly and completely blind to it."
one brow "No argument. The question is, is that a function of degree and more sophisticated programming or a due to a fundamental, unbridgable difference?"
I'ts a dncferefie of knid, not of drgeee; 'its a dneeifcfre of cosmphreenion, not of cimpuotaton. Or, who wtore the "pgarorm" taht is you? Who econded taht "parrgom" itno a snigle clel? - January 09, 2008 1:16 PM
- One Brow said...
-
It's even worse when "computer professionals" try to equate the human mind with a computer; they *know* better.
So they are saying things they know to be false? That’s an interesting thing to say. Why would you assume people are deliberately saying false things in this argument?
I'ts a dncferefie of knid, not of drgeee; 'its a dneeifcfre of cosmphreenion, not of cimpuotaton.
Do you have anything besides bald assertion to support that?
Or, who wtore the "pgarorm" taht is you?
Like very many complex programs, no one person or group of persons.
Who econded taht "parrgom" itno a snigle clel?
Do you think it’s not possible to program computers to take garbled input and assign a most likely output based upon a number of factors, or was there some other point you were trying to make? - January 09, 2008 2:25 PM
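(As a toy illustration of that possibility, here is a hypothetical Python sketch that recovers anagram-garbled words by matching letter multisets against a small, made-up word list. Real spelling correctors are far more sophisticated, but the point stands: the recovery is entirely mechanical.)

```python
# Illustrative only: recover scrambled words by comparing letter multisets
# against a (hypothetical) vocabulary. No comprehension required.

VOCAB = ["it's", "a", "difference", "of", "kind", "not", "degree",
         "comprehension", "computation"]

def signature(word):
    """A word's letters, sorted -- identical for any anagram of the word."""
    return "".join(sorted(word.lower()))

LOOKUP = {signature(w): w for w in VOCAB}

def unscramble(text):
    """Replace each token with the vocabulary word sharing its letters."""
    out = []
    for token in text.split():
        cleaned = token.strip('".,;?')  # crude punctuation handling
        out.append(LOOKUP.get(signature(cleaned), token))
    return " ".join(out)

print(unscramble("I'ts a dncferefie of knid, not of drgeee"))
# -> it's a difference of kind not of degree
```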
- Shackleman said...
-
Hi one brow. Thanks for the reply.
one brow: It's not a comparison most people find to be flattering.
This is veiled ad hominem. My reason for the frustration is that the comparisons are simply inaccurate and are frequently repeated as a point of argument.
one brow: Certainly the information transference mechanics in the brain are more complex.
Here's where we differ. My position is that there is *no* innate "information" in an otherwise random series of ones and zeros. The computer is not transferring information within itself and is never a causal agent. From the computer's "perspective" it's shuffling around meaningless electrons at the express command of the end-user. That's it. End-users are the causal agents, using programmers as their proxy to the filing cabinet that is the computer. No matter how complicated the programming, the computer is *never* more than a filing cabinet for the information that is stored in the end-user's mind, and the programmer's role in that is to encode and decode that information on behalf of the end-user in the form of representational zeros and ones. The onus is on you to prove that the computer does more than this if you want to imply that computers have comprehension and understand meaning.
one brow:I'd be happy to evaluate, and even possibly agree with, a non-circular, qualitative, verifiable description of this "understanding" that is innately not available to computers.
That's fair. Why don't we start with *your* definition of what you think *my* definition of "understanding" is. Second, why don't you offer to us what *your* definition is and how it differs. From there we can discuss whether or not computers have it.
one brow: No argument. The question is, is that a function of degree and more sophisticated programming, or due to a fundamental, unbridgable difference?
Unabridgeable difference. See Ilíon's reply. - January 09, 2008 4:07 PM
- Shackleman said...
-
"Unabridgeable". Funny. I assume you knew what I "meant" regardless of the typo {smile}. Further, I think you'd agree in principle that the computer did *not* understand what I meant either way!
- January 09, 2008 4:17 PM
- One Brow said...
-
Hi one brow. Thanks for the reply.
It is pleasurable to engage in debate with people who feel rationality is the primary field of battle.
one brow: It's not a comparison most people find to be flattering.
This is veiled ad hominem.
My abject apologies. After all, I don’t find the comparison personally flattering, either, and I meant no offense nor intended any denigration in your direction. I was merely pointing out the human tendency we all have to disagree with arguments we dislike emotionally. I’m no different in that regard.
My reason for the frustration is that the comparisons are simply inaccurate and are frequently repeated as a point of argument.
I have mentioned before that I regard the comparison as at least partly metaphorical.
Here's where we differ. My position is that there is *no* innate "information" in an otherwise random series of ones and zeros. … The onus is on you to prove that the computer does more than this if you want to imply that computers have comprehension and understand meaning.
Well, I firstly don’t recall saying that computers have comprehension and meaning, whatever they may be. I do recall comparing how a computer performs addition operations to how my first-grader is learning them, and how people learn them in general. However, I would not say my daughter currently has comprehension of addition or understands the meaning of addition (as I use the terms, anyhow). That will come later for her. As for computers, who can say?
one brow:I'd be happy to evaluate, and even possibly agree with, a non-circular, qualitative, verifiable description of this "understanding" that is innately not available to computers.
That's fair. Why don't we start with *your* definition of what you think *my* definition of "understanding" is. Second, why don't you offer to us what *your* definition is and how it differs. From there we can discuss whether or not computers have it.
I would not try to put words in your mouth. To me, "understanding" would mean that an applier can use a concept correctly in an environment where it has not been used previously in the experience of the applier, without prior instruction.
one brow: No argument. The question is, is that a function of degree and more sophisticated programming, or due to a fundamental, unbridgable difference?
"Unabridgeable". Funny. I assume you knew what I "meant" regardless of the typo {smile}. Further, I think you'd agree in principle that the computer did *not* understand what I meant either way!
Actually, I meant unbridgeable. A river can be bridgeable (you can build a bridge across it), but is seldom abridgeable (you generally can’t shorten or narrow it to any degree). Sorry for the typo. I was asking what you think is the nature of the gap that computers can never cross, and would this be a testable, qualitative, and non-circularly defined gap? - January 09, 2008 5:25 PM
- Shackleman said...
-
Hi one brow,
one brow: It is pleasurable to engage in debate with people who feel rationality is the primary field of battle.
Me also! I appreciate your responses very much and am enjoying the discussion.
one brow: My abject apologies. After all, I don’t find the comparison personally flattering, either, and I meant no offense nor intended any denigration in your direction. I was merely pointing out the human tendency we all have to disagree with arguments we dislike emotionally. I’m no different in that regard.
Apologies unnecessary but warmly appreciated and accepted. I take issue with the implication that my frustration is an emotional one. It's truly not. It's true that I would dislike it if my perception of my own "self" and my own "comprehension" were nothing more than an illusion. But my frustration has nothing to do with that. The frustration stems from what I perceive as repetitive misinformation being offered up as points of argument in favor of the position that there is no mind/matter problem, and that computers show how the AfR fails. My position is that the "computers" argument (to assign a category to it if you will) is a red herring and typically is framed erroneously. Hmmm, I'm still not sure if I'm being clear. I hope you can gather my meaning from this. I welcome your or others' reworking or rewording of it, though, if you please.
one brow: Well, I firstly don’t recall saying that computers have comprehension and meaning, whatever they may be...
Sorry, I thought when you said that "[you'd] be happy to evaluate, and even possibly agree with, a non-circular, qualitative, verifiable description of this "understanding" that is innately not available to computers", that you were arguing the point that human minds and computers either share the power of "understanding" with one another, or that *neither* human minds nor computers "understand"---that all either do is compute-in-the-dark so to speak. Did I get the wrong meaning then from your posts?
one brow: As for computers, who can say?
Well, that's sort of my point, and what I suspect is the disconnect between our positions. *I* am saying computers do *not* "understand" and supported that position by showing what computers *do*. You called into question (so I thought) what "understanding" is in the first place. And if we can't agree what "understanding" is without first defining it, then my entire post is moot. I don't think my post is moot, but I'm willing to back track a bit to first define our terms (understanding and comprehension) if you need to. Or, we can agree, without proper definitions, on the meaning of those terms from a common sense perspective and a qualia about them that you and I share, and move on from there. In which case I think my points regarding computers, and how they shouldn't be allowed into the discussion in order to refute the AfR, have weight and merit.
one brow: I would not try to put words in your mouth. To me, "understanding" would mean that an applier can use a concept correctly in an environment where it has not been used previously in the experience of the applier, without prior instruction.
First, I'm genuinely asking you to put words in my mouth {smile} because I think you'd do a better job articulating it for me than I would for myself. I mean that as a sincere compliment. Your calling into question how I used the term "understanding" implies that you understand how I'm *meaning* the term. If so, I'm inviting you to frame the definition and I'll come and play in the sandbox you construct there so that we can move forward on the points about computers and how they don't *have* any understanding {smiles}. As for your definition, I honestly am not following it. Would you mind trying to phrase it differently for me?
one brow: Actually, I meant unbridgeable. A river can be bridgeable...
No, no, no...I was poking fun at *myself* not at you! *I* was the originator of the typo! {smiles} - January 09, 2008 7:50 PM
- One Brow said...
-
The frustration stems from what I perceive as repetitive misinformation being offered up as points of argument in favor of the position that there is no mind/matter problem, and that computers show how the AfR fails. My position is that the "computers" argument (to assign a category to it if you will) is a red herring and typically is framed erroneously. Hmmm, I'm still not sure if I'm being clear. I hope you can gather my meaning from this. I welcome your or others' reworking or rewording of it, though, if you please.
Actually, I agree computers are a red herring, and go further to say this is true from both sides of the issue. Computers are neither a help nor a hindrance to the AfR, as far as I can tell. However, some proponents of the AfR do try to use computers to bolster their position. Anything I am saying about computers under this post has the point of saying that the difference between computers and humans does nothing to support the AfR.
Sorry, I thought when you said that "[you'd] be happy to evaluate, and even possibly agree with, a non-circular, qualitative, verifiable description of this "understanding" that is innately not available to computers", that you were arguing the point that human minds and computers either share the power of "understanding" with one another, or that *neither* human minds nor computers "understand"---that all either do is compute-in-the-dark so to speak. Did I get the wrong meaning then from your posts?
Partially, perhaps. Certainly all computers do is compute-in-the-dark. Human reasoning is more than that. For me, the open question is whether this is innate to being human, or a result of our having a much different construction, much more sophisticated input/output mechanisms, etc. Is reason a different thing from performing operations, or an emergent property of many of these operations on some very complex, biological hardware? Until we get to the point where our computers approach the capacity of the brain, our input methods approach the richness of the human senses, and our programming approaches the complexity of the brain connections, we really won’t know.
Your calling into question how I used the term "understanding" implies that you understand how I'm *meaning* the term. If so, I'm inviting you to frame the definition and I'll come and play in the sandbox you construct there so that we can move forward on the points about computers and how they don't *have* any understanding {smiles}. As for your definition, I honestly am not following it. Would you mind trying to phrase it differently for me?
Let’s try an example of that phrase. When you teach your kid to count, you pull out checkers. If they have no other input on counting, all they see is the counting of checkers. After a while, they’ll nonetheless start counting marbles, apples, etc., without any prompting from you. At that point, they have shown they understand counting. By “understanding”, I mean taking the process and correctly applying it in a completely separate context. As far as I know, computers don’t do that yet.
one brow: Actually, I meant unbridgeable. A river can be bridgeable...
No, no, no...I was poking fun at *myself* not at you! *I* was the originator of the typo! {smiles}
I thought you really meant unabridgeable, and that I was talking about shortening the distance between us and the computer. I’m over-literal at times. - January 10, 2008 9:19 AM
VR: So when we say mental states are brain states, what do we mean?
Shygetz: We mean that each mental state corresponds to one and only one brain state. However, you labor under the misconception that when you think "apple" and I think "apple" we have the same brain state. We do not--when you think "apple" what color fruit are you imagining? What size, what exact shape, shiny or dull, alone or in a context? If "apple" doesn't refer to a single unique physical state, what makes you think it refers to a single unique mental one?
VR: There are, of course, various chairs, some made of different stuff and some different colors, but there is something that makes them all chairs. Doesn't everyone's thought of an apple have to have something physical in common if it is a physical state?
Moreover, A can correspond to B without being identical to B, so there has to be more to identity than correspondence.
Causal role is determined by physical structure. If there is nothing about the property "being a thought about a pencil" that is identical to some particular physical state-type, then the mental state-type cannot be causally relevant.
Shygetz: And the only way you can do this is to posit something that interacts intimately with physical matter and can be strongly affected by physical matter while remaining somehow distinct from physical matter in some manner you have yet to even attempt to explain. I hope you are not trying to imply that dualism is simpler than physicalism, because it is not, by some undefined but doubtlessly large amount. You are positing an entire new branch of physics based on a substance that violates its current laws.
VR: No problem. We need this one in order to preserve the logical foundations of science.
Shygetz: Yet again, you make a bald assertion that actually flies in the face of (admittedly incomplete) evidence. Do you have any evidence that this is true? If so, by all means present it. If not, you are not making an argument--you are declaring by fiat. Argument is necessary, sir; I don't think anyone here will be convinced by raw audacity. Show me a reason to think that physical data are insufficient to determine mental states--otherwise, you merely continue to beg the question.
VR: It's very simple really. Identity claims are necessary truths. In order for physical states to determine intentional states uniquely, it must be logically contradictory to deny the mental state once the physical information is given. Postulate any amount of physicalistic information you want, and you will never get anything that logically entails the existence of a mental state. The only way to get an argument that has a conclusion "X is about B" is to have intentional states in the premises. It doesn't matter how much physical information you give, it will always be logically possible for me to deny the existence of the mental state without logical contradiction.
The irreducibility of intentional states to physical states is held by many philosophers, many of whom, like Donald Davidson, are philosophical naturalists. There is also the argument that intentional-state attributions involve normative elements and therefore cannot follow necessarily from the existence of physical states. Many naturalists accept a dualism of properties but try to avoid a dualism of substances. The problem then arises as to how those nonphysical properties fit into a physical world, and also how non-physical properties can possibly be causally relevant.
Shygetz: Ah, now you are at least starting down the right path. Have you ever, and I mean EVER, added, subtracted, or manipulated numbers in a mental vacuum? No; you always bring along your "unique perspective" which changes your mental state. Computers can add in a vacuum; if I take two identical computers and have one add 2 + 2, then take another computer and manipulate its physical states so they replicate the first one exactly, the second computer will have added 2 + 2. What is the reason to think that the human brain is different when adding 2 + 2?
VR: Computers have no first-person perspective. Therefore, they do not literally add 2 + 2. They do not perceive the relationship amongst the meanings. We perceive those relationships. However, physical facts are not perspectival. If my perspective determines how atoms go in my brain, we have a non-publicly accessible fact that determines physical states. That's not considered good naturalism.
It's like taking a bunch of indicative facts about the world and concluding the existence of an objectively binding moral obligation. You have the wrong type of facts on the one side to draw the proper conclusions on the other.
You have to go from facts that are not subjective or perspectival, not normative, not intentional, and not purposive, and yet these facts have to entail truths that are subjective/perspectival, normative, intentional, and purposive. That is a good deal more than just a question about how the bacterial flagellum got engineered.
See this on the god of the gaps complaint: http://dangerousidea.blogspot.com/2008/01/blog-post.html