Wednesday, January 09, 2008

Computers and the argument from reason

A. The Argument from Computers

Some people think it is easy to refute any argument from reason just by appealing to the existence of computers. Computers, according to the objection, reason; they are undeniably physical systems, but they are also rational. So whatever incompatibility there might be between mechanism and reason must be illusory. However, in the case of computers, the compatibility is the result of mental states in the background that deliberately create this compatibility. Thus, the chess computer Deep Blue was able to defeat the world champion Garry Kasparov in their 1997 chess match. However, Deep Blue’s ability to defeat Kasparov was not the exclusive result of physical causation, unless the people on the programming team (such as Grandmaster Joel Benjamin) are entirely physical results of physical causation. To assume that, however, is to beg the question against the advocate of the Argument from Reason. As Hasker points out:

Computers function as they do because they have been constructed by human beings endowed with rational insight. A computer, in other words, is merely an extension of the rationality of its designers and users, it is no more an independent source of rational insight than a television set is an independent source of news and entertainment.81

The argument from reason says that reason cannot emerge from a closed, mechanistic system. The computer is, narrowly speaking, a mechanistic system, and it does “follow” rational rules. But not only was the computer made by humans, the framework of meaning that makes the computer’s actions intelligible is supplied by humans. As a set of physical events, the actions of a computer are just as subject as anything else to the indeterminacy of the physical. If a computer plays the move Rf6, and we see it on the screen, it is our perception and understanding that gives that move a definite meaning. In fact, the move has no meaning to the computer itself; it only means something to persons playing and watching the game. Suppose we lived in a world without chess, and two computers were to magically materialize in the middle of the Gobi desert and go through all the physical states that the computers went through the last time Fritz played Shredder. If that were true, they would not be playing a chess game at all, since there would be no humans around to impose the context that made those physical processes a chess game and not something else. Hence I think that we can safely regard the computer objection as a red herring.

27 comments:

Anonymous said...

Hello Victor,

“Computers function as they do because they have been constructed by human beings endowed with rational insight. A computer, in other words, is merely an extension of the rationality of its designers and users, it is no more an independent source of rational insight than a television set is an independent source of news and entertainment.81
The argument from reason says that reason cannot emerge from a closed, mechanistic system. The computer is, narrowly speaking, a mechanistic system, and it does “follow” rational rules.”

In an essay readily available on the internet, CHOICE AND RATIONALITY, Flew argues that Peter Geach was correct that reasoning, language and choice cannot be explained naturalistically. Stewart Goetz recently has argued from the teleology involved in human practical reasoning that materialism is false. Searle also provided the famous (or infamous, depending upon your perspective) Chinese Room argument which shows language use/rationality cannot come from a purely mechanistic being and process (i.e., a computer). Another prong which attacks materialism and shows that reasoning cannot come from a purely mechanistic being and process is linguistics. The smallest child, with ordinary and properly functioning faculties, has no difficulty with human language. So far we are not even close to building a computer that can engage in conversation or tell a story.

What these and other sources like this have in common is that they all recognize that practical reasoning “cannot emerge from a closed, mechanistic system.” The gap between solely mechanistic processes/beings, and conscious, rational, freely choosing, and language using humans, at present, is vast.

Hopeful materialists believe the chasm will be bridged someday, Christians see this gap as never being bridged. In my opinion the argument from reason comes straight out of this chasm and so either the gap will be closed (or in principle can be closed) or it never will. I will with great confidence put my bets on the “never will” option.

Robert

One Brow said...

Some people think it is easy to refute any argument from reason just by appealing to the existence of computers.
Those would be remarkably foolish people, I should think. Do you know of any such people?

Now, I know of one person who, upon reading your essay on Internet Infidels, pointed out that the argument presented rested on a false notion that reason was some Platonic ideal waiting to be discovered, as opposed to the invention of people over time. You can respond to that idea at http://lifetheuniverseandonebrow.blogspot.com/2008/01/logic-everywhere-or-nowhere.html or elsewhere, should you so desire. I don’t know of anyone who says the computers we have today can reason.

The argument from reason says that reason cannot emerge from a closed, mechanistic system.
That is certainly the goal, I agree.

In fact, the move has no meaning to the computer itself; it only means something to persons playing and watching the game. Suppose we lived in a world without chess, and two computers were to magically materialize in the middle of the Gobi desert and go through all the physical states that the computers went through the last time Fritz played Shredder. If that were true, they would not be playing a chess game at all, since there would be no humans around to impose the context that made those physical processes a chess game and not something else.
Did the physical states include a display of a chessboard on a screen? Let’s add such a screen to your thought experiment. Would people be able to learn the game of chess by watching the computers? Would they then see the meaning in the moves? If so, is not the meaning being transferred by the computers? Are you saying they can store the meaning, and relate it to people who have no experience with it, yet can't absorb the meaning?

Also, what is this meaning to which you refer, anyhow? After the programming is complete, the computer sees Rf6 as being a move that gets it closer to winning the game or further from losing it. There needs to be no intervening act by the programmer to tell the computer the game has ended. What are you saying needs to exist for understanding, but does not?
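For concreteness, the kind of scoring I have in mind can be sketched in a few lines. This is only a toy minimax for tic-tac-toe, not any real chess engine's code, but it shows the sense in which a program "prefers" a move: it simply picks whichever move maximizes a number.

def winner(board):
    # board is a 9-character string; return 'X', 'O', or None
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # score a position from X's point of view: +1 win, -1 loss, 0 draw
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player='X'):
    # choose the move with the best numeric score -- nothing more
    nxt = 'O' if player == 'X' else 'X'
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    score = lambda i: minimax(board[:i] + player + board[i + 1:], nxt)
    return max(moves, key=score) if player == 'X' else min(moves, key=score)

print(best_move("XX OO    "))  # prints 2: completing the top row scores highest

The program never handles the meaning of "winning"; it just propagates +1, -1, and 0 up a game tree.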

One Brow said...

In an essay readily available on the internet, CHOICE AND RATIONALITY, Flew argues that Peter Geach was correct that reasoning, language and choice cannot be explained naturalistically.
I’ll give it a read.

Stewart Goetz recently has argued from the teleology involved in human practical reasoning that materialism is false.
Is there teleology in the behavior of insects? How do you measure teleology, to determine which behaviors entail it and which do not?

Searle also provided the famous (or infamous, depending upon your perspective) Chinese Room argument which shows language use/rationality cannot come from a purely mechanistic being and process (i.e., a computer).
If the Chinese Room is functional, it shows that language use and rationality can be saved in a material way.

Another prong which attacks materialism and shows that reasoning cannot come from a purely mechanistic being and process is linguistics. The smallest child, with ordinary and properly functioning faculties, has no difficulty with human language.
How many children do you have? My children, and all their friends, regularly make mistakes of vocabulary, gender, tense, etc.

So far we are not even close to building a computer that can engage in conversation or tell a story.
Do you provide any reason to say this is a qualitative issue, as opposed to a function of capacity and input processes?

The gap between solely mechanistic processes/beings, and conscious, rational, freely choosing, and language using humans, at present, is vast.

Hopeful materialists believe the chasm will be bridged someday, Christians see this gap as never being bridged. In my opinion the argument from reason comes straight out of this chasm and so either the gap will be closed (or in principle can be closed) or it never will.

I believe I have read on this blog that the Argument from Reason is most assuredly not a “God of the Gaps” argument. I think you are misconstruing it, so I will refrain from addressing your point here directly.

Anonymous said...

Victor,

Can a computer spot a subtle equivocation in an argument?

Can a computer spot a non sequitur in a valid argument:

If I comb my hair, then I'll get the girls. I combed it. I'll get 'em.

This is enough to show the disanalogy between the mind and the computer, I think.

One Brow said...

Can a computer spot a subtle equivocation in an argument?
Humans often miss subtle equivocations.

Can a computer spot a non sequitur in a valid argument:

If I comb my hair, then I'll get the girls. I combed it. I'll get 'em.

How much information does the computer have regarding the process of getting the girls and the ways in which hair combing may or may not play a part?

This is enough to show the disanalogy between the mind and the computer, I think.
Which of those functions requires a process that is impossible to program, and what is that process?

Anonymous said...

Regarding these arguments there always seems to be a problem.
It's like arguing against the validity of IC inferences regarding structures like a bacterial flagellum by positing a scenario in which something we know to be designed (a mouse trap) could have come about via blind/unguided naturalistic accounts.
Possibly the mousetrap served initially as a clipboard, then a tie-clip. All intermediates being functional. However, there is no question about the fact that mousetraps were indeed designed.
If such mental gymnastics can 'solve' a problem that never needed a solution to begin with, how can we trust that a similar scenario can truly account for the potential IC at the cellular level?

Same with the computer-mind analogies.

One Brow said...

It's like arguing against the validity of IC inferences regarding structures like a bacterial flagellum by positing a scenario in which something we know to be designed (a mouse trap) could have come about via blind/unguided naturalistic accounts.
I’m not sure what “IC inferences” are supposed to be. A structure either exhibits IC or it does not. There are a couple of IC versions of the flagellum, if I recall correctly.

However, I think if you investigate, you will find there are actual pathways for the bacteria that lead up to the flagellum. Of the 20 or so essential proteins, 18 have known homologues in other species of bacteria. For bacteria, generating a couple of new proteins and combining 18 others is no big feat. This is just one example showing that being IC is not an indicator of being designed.

If such mental gymnastics can 'solve' a problem that never needed a solution to begin with, how can we trust that a similar scenario can truly account for the potential IC at the cellular level?
Well, we all ultimately choose what we are comfortable with trusting, I suppose.

Anonymous said...

Hi One Brow,
I have to disagree with you regarding the avenues leading up to the bacterial flagellum. I'm assuming you are referring to the cooption of the TTSS export system (then through cooption and mutation - intermediate functional steps - eventually we achieved a BF).
The problem with this scenario is that there is no reason to believe that the TTSS is ancestral to the BF. TTSS are found predominantly in gram negative bacteria. However, BF is found in gram positive and negative, thermophilic and mesophilic bacteria, high and low GC content bacteria.
The universality of the BF should lead one to believe that the BF is ancestral to the TTSS.

Also, the TTSS has the main function of allowing prokaryotes to form symbiotic relationships with eukaryotes, further supporting the notion of a later appearance for the TTSS.


For bacteria, generating a couple of new proteins and combining 18 others is no big feat.

As can be seen, we are back to a bigger feat in need of being pulled off.
But even if this wasn't the case - 18 proteins aren't usually just stumbled upon. Considering how highly coordinated the intra-cellular environment is, if you have a step that isn't functional or a pre-existing protein with 'hopes' of being coopted for a future, novel function... the cell needs to have plans that state what the modified shape of the protein is going to be and how it plans on attaching to and interacting with another set to yield a novel, selectable function.

One Brow said...

I'm assuming you are referring to the cooption of the TTSS export system (then through cooption and mutation - intermediate functional steps - eventually we achieved a BF).
No, I don’t recall mentioning the TTSS, although it is an example of a differing structure that uses many of the homologous proteins.

The problem with this scenario is that there is no reason to believe that the TTSS is ancestral to the BF. TTSS are found predominantly in gram negative bacteria. However, BF is found in gram positive and negative, thermophilic and mesophilic bacteria, high and low GC content bacteria.
The universality of the BF should lead one to believe that the BF is ancestral to the TTSS.

It’s also possible neither is ancestral to the other.

For bacteria, generating a couple of new proteins and combining 18 others is no big feat.
As can be seen, we are back to a bigger feat in need of being pulled off.

What would that be?

But even if this wasn't the case - 18 proteins aren't usually just stumbled upon.
It wouldn’t need to be usually, or even occasionally, stumbled upon. The way that bacteria share their genomes horizontally, it’s enough that it would happen very rarely. When there are trillions of living bacteria, and something along the lines of a quadrillion splits every year, plus all the genome exchanges, unlikely things happen regularly.
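As a back-of-the-envelope illustration (the per-split probability below is made up purely for the arithmetic, not a measured rate), the expected number of occurrences of a rare event is just the number of trials times the probability per trial:

\[
\mathbb{E}[\text{occurrences}] = N p, \qquad N = 10^{15}\ \text{splits/year},\quad p = 10^{-12} \;\Longrightarrow\; Np = 1000\ \text{per year.}
\]

An event that is astronomically unlikely for any single bacterium can still be routine across the population as a whole.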

Considering how highly coordinated the intra-cellular environment is, if you have a step that isn't functional or a pre-existing protein with 'hopes' of being coopted for a future, novel function... the cell needs to have plans that state what the modified shape of the protein is going to be and how it plans on attaching to and interacting with another set to yield a novel, selectable function.
The reasoning here seems a little fuzzy. Currently cellular biology seems to have many inefficiencies. I don’t see the issue with carrying around a protein for a few generations that isn’t doing much.

Of course, this is getting a little off-topic.

Anonymous said...

Hi One Brow,

No, I don’t recall mentioning the TTSS, although it is an example of a differing structure that uses many of the homologous proteins.

That is exactly why I said "referring" and not that you "mentioned" the TTSS.
I am unaware of any other bacterial process or mechanism that usually gets mentioned in regards to being ancestral to the BF (I think even the wikipedia entry mentions it, as well as being used as evidence in Dover).

But, regardless of it being a different structure utilizing some similar proteins, there is no reason to assume that it is ancestral to the BF for the reasons stated in my post prior to this one.

It’s also possible neither is ancestral to the other.

Granted, but the thrust of the argument directed at its IC potential is that the TTSS is indeed ancestral.

What would that be?

Just the fact that there is currently no evidential support for the claim that the TTSS is ancestral to the BF; therefore, it can't be used to account for a portion of the proteins utilized in the context of the BF. Because of this, we are back to a bigger problem - since the cooption of the TTSS is now less plausible.

It wouldn’t need to be usually, or even occasionally, stumbled upon. The way that bacteria share their genomes horizontally, it’s enough that it would happen very rarely. When there are trillions of living bacteria, and something along the lines of a quadrillion splits every year, plus all the genome exchanges, unlikely things happen regularly.

This assumes that there are beneficial mutations that can be selected, and that those beneficial mutations aren't just very slightly beneficial (falling into Kimura's range of ultimately neutral mutations); not allowing them to even be selected to begin with (assuming they happen randomly).
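For reference, the usual rule of thumb behind this point (stated roughly, glossing over ploidy and population-structure details) is that drift overwhelms selection when the selection coefficient is small compared to the reciprocal of the effective population size:

\[
|s| \;\lesssim\; \frac{1}{2N_e} \quad\Longrightarrow\quad \text{effectively neutral.}
\]

So how wide that "effectively neutral" band is depends on the effective population size in question.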

I don’t see the issue with carrying around a protein for a few generations that isn’t doing much.

Depends on the mutation rate. A few generations for a gene that isn't strongly conserved (or conserved at all) will lead to a gene that has undergone some deleterious mutations.

Of course, this is getting a little off-topic.

Agreed, but my initial point was that hypothesizing evolutionary scenarios for contrivances that we know for certain were designed doesn't bode too well for the mental gymnastics that were utilized. Because if such a scenario can account for something that wasn't brought about via a naturalistic account (the mousetrap), how do we know that it can provide true insight into something that we are uncertain about - a biological structure that appears to be IC?

One Brow said...

That is exactly why I said "referring" and not that you "mentioned" the TTSS.
I am unaware of any other bacterial process or mechanism that usually gets mentioned in regards to being ancestral to the BF (I think even the wikipedia entry mentions it, as well as being used as evidence in Dover).

Well, if you want a detailed, serious link, you can look at Matzke’s article from 2003, available in a couple of forms at http://pandasthumb.org/archives/2007/10/science-v-intel-1.html and choose your link. You’ll note that there is no claim the TTSS is ancestral to the flagellum.

Granted, but the thrust of the argument directed at its IC potential is that the TTSS is indeed ancestral.
That seems to be a common misunderstanding.

Just the fact that there is currently no evidential support for the claim that the TTSS is ancestral to the BF;
Since I was not making that claim, there is no extra burden when it is removed.

This assumes that there are beneficial mutations that can be selected, and that those beneficial mutations aren't just very slightly beneficial (falling into Kimura's range of ultimately neutral mutations); not allowing them to even be selected to begin with (assuming they happen randomly).
It’s interesting that you are focusing on mutations. You realize there are 10-15 different ways an organism can change its DNA or the way it is interpreted?

Depends on the mutation rate. A few generations for a gene that isn't strongly conserved (or conserved at all) will lead to a gene that has undergone some deleterious mutations.
Sure, the unused proteins will tend to come and go over time.

Agreed, but my initial point was that hypothesizing evolutionary scenarios for contrivances that we know for certain were designed doesn't bode too well for the mental gymnastics that were utilized.
It bodes nothing at all. It’s a tongue-in-cheek response to material that is designed to fool the public. I hope you have not mistaken it for a serious thing.

Anonymous said...

Hi One Brow,

Since I was not making that claim, there is no extra burden when it is removed

Okay, then what were you postulating as being that which was coopted into the BF if not pre-extant export machinery?

One Brow said...

Okay, then what were you postulating as being that which was coopted into the BF if not pre-extant export machinery?
I don't think I could improve on Matzke's article.

Anonymous said...

Hi One Brow,

I don't think I could improve on Matzke's article.

It's still too simplistic.
I hope that you did read the paper because you'd see that the model is still built around an initial export system (like I stated).
Let's assume that the BF was indeed designed. And that the plans to develop this BF were stored in various genes. How do you propose that the protruding portion of the BF would be constructed?
A BF is too big to construct within the confines of the intracellular environment (and then subsequently shipped to the membrane - and finally penetrating the membrane); it would be too big of a structure to transport around.
The BF would need to be constructed from the inside out & from the bottom (cellular cytoplasmic side) up (outside of the cell membrane).

The flagellar shaft is hollow, allowing the bacteria to build this structure from the bottom up. You would need an export system as the base of the BF to even allow this. How else could it conceivably be constructed?
So, pointing at an export system (which N.Matzke does) and then hypothesizing a mutation in a protein that initially was exported (allowing it to stick to the export system - along with many other proteins.... eventually forming the body) doesn't solve the problem. Because an export system would need to be there in the 1st place to even allow the bacteria to construct the BF - there would be no other way for the bacteria to construct such a structure.
Now, if that protruding portion were NOT hollow you might be on to something.

One Brow said...

I don't think I could improve on Matzke's article.
It's still too simplistic.

Really? Because below you make it sound as if you didn’t read it.

I hope that you did read the paper because you'd see that the model is still built around an initial export system (like I stated).
All the previous references I have seen in these comments refer to a TTSS export system. I did not realize you meant the primitive Type Three export system. You should probably drop the SS to avoid confusing others with the TTSS.

Let's assume that the BF was indeed designed.
You mean, that it developed, or that you want to assume what you’re trying to prove?

And that the plans to develop this BF were stored in various genes. How do you propose that the protruding portion of the BF would be constructed?
A BF is too big to construct within the confines of the intracellular environment (and then subsequently shipped to the membrane - and finally penetrating the membrane); it would be too big of a structure to transport around.
The BF would need to be constructed from the inside out & from the bottom (cellular cytoplasmic side) up (outside of the cell membrane).

The flagellar shaft is hollow, allowing the bacteria to build this structure from the bottom up. You would need an export system as the base of the BF to even allow this. How else could it conceivably be constructed?

This is the topic of section 3.4 of Matzke’s paper. While it is possible the flagellum began as a solid structure or a cap, the most plausible explanation is that it began as a ring structure that was later capped.

So, pointing at an export system (which N.Matzke does) and then hypothesizing a mutation in a protein that initially was exported (allowing it to stick to the export system - along with many other proteins.... eventually forming the body) doesn't solve the problem. Because an export system would need to be there in the 1st place to even allow the bacteria to construct the BF - there would be no other way for the bacteria to construct such a structure.
The proposed primitive Type Three export system would be highly similar to the known, current Type II export system. This is discussed in section 3.2.

Anonymous said...

Hi One Brow,

All the previous references I have seen in these comments refer to a TTSS export system.

Sure, I was referring to the TTSS initially... specifically. Many others have made that claim.

And I am aware that N. Matzke wasn't claiming that. And, as soon as you mentioned Nick's paper I referred generally to:

Okay, then what were you postulating as being that which was coopted into the BF if not pre-extant export machinery?

pre-extant export machinery.
Also, in my reply I am not assuming the needle nose section of the TTSS when I stated:

So, pointing at an export system (which N.Matzke does) and then hypothesizing a mutation in a protein that initially was exported (allowing it to stick to the export system - along with many other proteins.... eventually forming the body) doesn't solve the problem.

'Forming the body' from mutated proteins which form a filament.

You stated:
Really? Because below you make it sound as if you didn’t read it.

Really?

You mean, that it developed, or that you want to assume what you’re trying to prove?

You should probably read my whole post before taking snippets to comment on. I think you'd see where I was going with this.

This is the topic of section 3.4 of Matzke’s paper. While it is possible the flagellum began as a solid structure or a cap, the most plausible explanation is that it began as a ring structure that was later capped.

How does this address the point I was making about the need to have it constructed from the bottom up even if it were purposefully intended?

The proposed primitive Type Three export system would be highly similar to the known, current Type II export system. This is discussed in section 3.2.

Not to be rude, but what's your point? How does this address what I stated?

One Brow said...

And I am aware that N. Matzke wasn't claiming that. And, as soon as you mentioned Nick's paper I referred generally to:

Okay, then what were you postulating as being that which was coopted into the BF if not pre-extant export machinery?

pre-extant export machinery.

Well, in context I found your switch from TTSS to Type Three export to be difficult to see. Let's just leave it at that. At any rate, apparently we now agree that the proposed mechanism is not based on the TTSS.

You should probably read my whole post before taking snippets to comment on. I think you'd see where I was going with this.
Just trying to be cautious in language.

Not to be rude, but what's your point? How does this address what I stated?
Actually, I don’t think it did. I misread your objection as concerning the hollow nature of the flagellum. However, I can’t really see what your objection is.

The flagellar shaft is hollow, allowing the bacteria to build this structure from the bottom up. You would need an export system as the base of the BF to even allow this. How else could it conceivably be constructed?
So, pointing at an export system (which N.Matzke does) and then hypothesizing a mutation in a protein that initially was exported (allowing it to stick to the export system - along with many other proteins.... eventually forming the body) doesn't solve the problem. Because an export system would need to be there in the 1st place to even allow the bacteria to construct the BF - there would be no other way for the bacteria to construct such a structure.
Now, if that protruding portion were NOT hollow you might be on to something.

Right, you need an export system there before the flagellum, and Matzke hypothesizes an export system very similar to the Type II we currently see in bacteria was there before the flagellum. What’s the objection? Do you think you need an export process before there is an export process?

Ilíon said...

VR: "Some people think it is easy to refute any argument from reason just by appealing to the existence of computers. Computers, according to the objection, reason; they are undeniably physical systems, but they are also rational. So whatever incompatibility there might be between mechanism and reason must be illusory. However, in the case of computers, the compatibility is the result of mental states in the background that deliberately create this compatibility. ..."

The computer -- the physical hardware -- is an "undeniably physical system." However, when people try to float these silly claims/assertions about computers being a refutation of the AfR, they don't have in mind the *computer,* they have in mind the computer software, which is not at all physical.

Moreover, neither the hardware nor the software is itself 'rational,' in the sense of "possessing rationality" (which is the false idea that persons so asserting are attempting to slip unnoticed into others' minds) -- the humans who create both hardware and software are rational ("possessing rationality"), but the computer hardware no more possesses rationality than does a shovel, and the software no more possesses rationality than do the instructions on a bottle of shampoo.

Computer hardware is "rational" in exactly the same sense that a shovel is rational -- both are physical systems created by rational beings as tools to extend some inherent capability; both are tools with which we leverage off our ability to reason and thereby execute some specific task more efficiently than without the tool.

Computer software is "rational" in exactly the same sense that the instructions on a bottle of shampoo are rational -- both are sets of rules, which exist nowhere except "within" minds, for carrying out some carefully specified task in a rational manner (though, it must be noted that the instructions on the average bottle of shampoo are usually written as an infinite loop, which isn't a very "carefully specified" thing to do).



Computer software doesn't physically exist anywhere (it isn't located anywhere, except "within" a mind or minds), it doesn't occupy a volume of space, and it isn't physically made of anything (it is "made" of ideas).


When people try to convince themselves and others that "computers" are proof that minds are (or can be) simply the results of states/state-changes of physical systems, they're falsely attributing physicality to the immaterial software and falsely equating the physical hardware with the non-physical and immaterial software.

Anonymous said...

Synthesis of Urea

Vitalism says that organic compounds can only be made by life.

Some organic compounds were synthesised by chemists.

After that some idiots claimed that vitalism was refuted.

How does that refute vitalism?

These compounds were synthesised by a life-form!

Victor's logic is entirely similar to claims that vitalism was not refuted by the synthesis of urea, and that there is something magical about organic chemistry.

Ilíon said...

Robert: "... Hopeful materialists believe the chasm will be bridged someday, Christians see this gap as never being bridged. In my opinion the argument from reason comes straight out of this chasm and so either the gap will be closed (or in principle can be closed) or it never will. I will with great confidence put my bets on the “never will” option."

This gap will never be closed; it cannot in principle be closed, which means that it cannot in practice be closed.

The 'materialists' have actually *two* false hopes here. These are tightly related and very similar, but not exactly the same. The first "hope" is that an artificial mind can be constructed of computer hardware and software. The second "hope" is that artificial minds "superior" (whatever that word is supposed to mean in such a context) to human minds can be constructed.

These "hopes" are false on a multitude of levels and are maintained by clinging to (known!) false premises and "arguments" and other logical errors. And no amount of attempting to reason with them will get those clinging to these false hopes to admit to reality. These false hopes are held precisely as a means of avoiding ultimate reality -- that there is a God, that there is more to reality than "nature;" i.e., that we, ourselves, are "supernatural" when "nature" is understood in terms of 'naturalism' (that is, that *we* refute 'naturalism' merely by existing).
.
.
Now as to showing some of the erroneous thinking that goes into these false hopes:

The immediate support for these 'materialist' false hopes consists of the erroneous assertions (and accompanying bombast) that 'mind' is nothing but a complex collection of algorithms and 'thinking' nothing but computation or execution of algorithms (i.e. a 'thought' is the result of execution of one or more of these algorithms), and that these algorithms arise naturalistically (i.e. they "just happen" deterministically due to the nature of "nature," but are in no wise intentionally caused by an ontologically prior 'mind').

Now, were it indeed true that 'mind' is nothing but algorithm, then, at least in principle, we ought to be able to construct artificial minds. The careful thinker will notice that were we ever able to construct an artificial mind, it would not actually support, but would tend to de-support, the other notion, which is that minds "arise" naturalistically. Algorithms are not naturalistic and they don't "arise;" quite the opposite on both counts.

But let us ignore that, for now, and pretend that 'mind' really is nothing but (a complex collection of) algorithms and that 'thinking' really is nothing but the execution of these algorithms, and that we can in principle, therefore, build artificial minds. That is, let's examine the second false "hope" of the 'materialists,' which is that we can construct artificial minds "superior" to our own.

Now, if "superior" means merely faster, that's trivial and uninteresting: computers can already execute algorithms far faster and with far less chance of error than we can; executing an algorithm faster changes the nature of neither the algorithm nor of the result. If "superior" is to mean anything interesting in this context, it must refer to artificial minds which can think thoughts we cannot think.

But, there is an immediate problem with this: (assuming, as we are, that 'mind' really is nothing but algorithm and 'thinking' really is nothing but execution of algorithms) it's logically and mathematically impossible for us to build an artificial mind that is able to think thoughts we cannot think. The mathematician Gregory Chaitin has (to the best of my admittedly limited knowledge) most fully worked this out (Andrei Kolmogorov's work figures into this, too), but it goes back to Gödel's work. To put it into a physical analogy: you can't fit ten pounds of sugar into a five-pound bag.

You see, these artificial minds are (as we are assuming/pretending) nothing but the algorithms we assemble of which to comprise them (and disregarding whence came the collections of algorithms of which we are comprised). But, we cannot include in those collections of algorithms of which we are constructing these artificial minds any algorithms we do not already possess (i.e. any algorithm which is not already included in the collections of algorithms which are us)! Therefore, these artificial minds can never include an algorithm which does not already exist within our own minds. Therefore, these artificial minds can never think a thought we cannot think.
.
.
The average 'materialist' -- because he *refuses* to think clearly, logically, rationally -- will mindlessly object that what I've just said is false. His reasoning (may I be forgiven for so abusing that word) will be something like this: "Well, sure, *we* can't directly build an artificial mind which can think thoughts we cannot think. But, we can build an artificial mind which can build the artificial mind which can build that "superior" artificial mind."

'Materialism,' 'naturalism,' 'atheism' -- whatever one wants to call it -- is inherently irrational. And its adherents must continuously resort to illogic and irrationality to shore up their false belief-system. We "theists" do neither ourselves nor them any favors at all by pretending otherwise; we must meet their irrationality *as* irrationality and stop trying to treat it as rational/logical argumentation when it is not. If an 'atheist' makes a rational/logical argument against "theism," that is a different thing; but it's also a very rare thing.
.
.
So, as we've seen, even were it true that 'mind' really is nothing but a complex collection of algorithms and that 'thinking' really is nothing but the execution of these algorithms, and even were it true that we could someday build artificial minds, we can never build a mind which is "superior" to our own -- we can never build an artificial mind which can think thoughts we cannot think. Indeed, even a moment's thought will make it clear that we can never in principle know that these artificial minds are even equal to us, for we can never know that we have identified all the algorithms which comprise us.

But, what about the first false hope of the 'materialists:' that it is possible in principle to someday build an artificial mind? As pointed out above, this "hope" is based on "defining" 'mind' as a complex collection of algorithms. But this is false; for this is not what minds are.

*IF* it were true that 'mind' really is nothing but a complex collection of algorithms (i.e. that 'thinking' is nothing but the execution of these algorithms and that a 'thought' is nothing but the resulting computation), THEN it would be utterly impossible for us to have *new* thoughts and ideas. This was demonstrated above in showing that it is impossible for us to build an artificial mind "superior" to ourselves. I don't mean that it would be impossible for you, the individual, to have a new thought -- provided a new algorithm could somehow be added to the collection of algorithms that is you. I mean that it would be impossible for new algorithms to be added to the sum total collection of all algorithms which are all human 'minds.'

Unless, of course, there is an "over-mind" who is capable of injecting "new" algorithms into the pool. But, this is something 'materialists' need to reject, for it is to admit that there is a Mind which is ontologically prior to us. (Moreover, it is ultimately to admit that this Mind is ontologically prior to "Nature.")

But, we humans have new thoughts all the time: all our cultural and technological "advances" for these past thousands of years have been the results of new ideas. Either we really *do* have new ideas, either we really can *exceed* what we knew and thought before (in which case 'mind' is not a collection of algorithms), OR there is a "superior" Algorithm which periodically injects new-to-us algorithms into the pool of algorithms which is us.
.
.
Either way you look at it, 'naturalism' is refuted by the existence of 'mind.' And not one 'naturalist' will ever admit this; he will never show that it is false and he will never admit that he cannot show it false.

One Brow said...

Well, ilion, that’s just way too much to deal with in a blog comment. I’ll pick out a couple of the choicer bits, though.

Moreover, neither the hardware nor the software is itself 'rational,' in the sense of "possessing rationality" …
That definition strikes me as somewhat circular. What is your test for determining when something is rational?

Computer software doesn't physically exist anywhere (it isn't located anywhere, except "within" a mind or minds), it doesn't occupy a volume of space, and it isn't physically made of anything (it is "made" of ideas).
It strikes me that larger amounts of software code tend to require more physical space to store.

When people try to convince themselves and others that "computers" are proof that minds are (or can be) simply the results of states/state-changes of physical systems, they're falsely attributing physicality to the immaterial software and falsely equating the physical hardware with the non-physical and immaterial software.
Well, you’re certainly certain. What’s your proof that it is false?

This gap will never be closed; it cannot in principle be closed, which means that it cannot in practice be closed.
More certainty, no proof.

Now, were it indeed true that 'mind' is nothing but algorithm, then, at least in principle, we ought to be able to construct artificial minds. The careful thinker will notice that were we ever able to construct an artificial mind, it would not actually support, but would tend to de-support, the other notion, which is that minds "arise" naturalistically. Algorithms are not naturalistic and they don't "arise;" quite the opposite on both counts.
So, you’re claiming that the instincts of a wasp are not natural?

If "superior" is to mean anything interesting in this context, it must refer to artificial minds which can think thoughts we cannot think.

But, there is an immediate problem with this: (assuming, as we are, that 'mind' really is nothing but algorithm and 'thinking' really is nothing but execution of algorithms) it's logically and mathematically impossible for us to build an artificial mind that is able to think thoughts we cannot think.

We built a computer that designed an antenna no human could design. Does that count?
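For context, that antenna came out of an evolutionary search. The sketch below is only a toy version of that kind of algorithm (the fitness function is a made-up stand-in, not the actual antenna simulator), but it shows the whole loop: random variation plus selection on a numeric score.

import random

def fitness(design):
    # hypothetical stand-in for an antenna simulator's score; best when every value is 0.7
    return -sum((x - 0.7) ** 2 for x in design)

def mutate(design, sigma=0.1):
    # random variation: jiggle each number a little
    return [x + random.gauss(0, sigma) for x in design]

def evolve(pop_size=50, length=8, generations=200):
    population = [[random.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 5]  # keep the top 20%
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print([round(x, 2) for x in evolve()])  # values creep toward 0.7 without anyone dictating them

Whether the output counts as a thought its programmers could not think is, of course, exactly the question at issue.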

The mathematician Gregory Chaitin has (to the best of my admittedly limited knowledge) most fully worked this out (Andrei Kolmogorov's work figures into this, too), but it goes back to Gödel's work. To put it into a physical analogy: you can't fit ten pounds of sugar into a five-pound bag.
Too many people think Goedel said much more than he did.

amandalaine said...

Ilion: "...However, when people try to float these silly claims/assertions about computers being a refutation of the AfR, they don't have in mind the *computer,* they have in mind the computer software, which is not at all physical."

Thanks for your thoughts Ilion. They are very, very interesting. One Brow, your answers are great too.

Here's my question, and it may be super stupid: if computer software is not physical, then what is it? And what is the definition of physical? Can it truly be defined? And if not, can we have this conversation (meaning in a beneficial way)?

Ilíon said...

Amanda Laine: "Thanks for your thoughts Ilion. They are very, very interesting. ...
.
Here's my question, and it may be super stupid: if computer software is not physical, then what is it? And what is the definition of physical? Can it truly be defined? And if not, can we have this conversation (meaning in a beneficial way)?
"

Amanda, as we all know, "there are no (super) stupid questions." The refusal to think rationally and/or admit truth, which is where One Brow is at, and from which he evidences no inclination to budge, is a different matter.


Amanda Laine: "if computer software is not physical, then what is it?"
Non-physical, of course; which is to say, immaterial (i.e. not "made of matter"), intangible. Computer software is "made of" ideas/concepts.


Amanda Laine: "And what is the definition of physical?"
physical, tangible, touchable
having substance or material existence; perceptible to the senses; "a physical manifestation"; "surrounded by tangible objects"

The computer hardware is tangible, material, physical; the computer software is intangible, immaterial, non-physical.

Amanda Laine: "Can it truly be defined? And if not, can we have this conversation (meaning in a beneficial way)?"
Can *any* concept be defined if some person is set on denying that it has been defined? To be more precise, can any meaningful conversation occur where one party is intent upon denying either obvious truth(s) or commonly known definitions of the words in use?

You may have noticed that I mostly ignore One Brow's comments; there is a reason for this.


Amanda Laine: "if computer software is not physical, then what is it?"
Is either my post or your post physical -- that is, is this particular exchange you and I are having a physical thing? And, if it is, then where is it located, what is it made of, how great a volume of space does it occupy? How big a box do we need to put this exchange into? To safely store this exchange for some period of time, does the box need to be made of some special material (say, lead, if the exchange were made of radioactive matter)? Will the exchange still be in the box a year from now (assuming no one has tampered with it)?

No! Of course not. Even if we were physically speaking to one another, the *conversation* is not physical. The *conversation* is the exchange of ideas -- which are immaterial -- the spoken and/or written words which we use in this communication are not the message, they are the signal. Words, spoken or written, are inherently meaningless symbols by which we humans convey messages. Because the words are symbols, any other symbol can stand in the place of any given symbol; all that matters is that both parties agree on the meaning to be associated (in some context) with the symbols being used.


Computer software, specifically the low-level software with which few persons ever interact, is a Universal Turing Machine (see this article on Turing Machines, in general). The people who try to claim that the human mind *is* equivalent to a computer are claiming that the human mind *is* a (Universal) Turing Machine.

Notice this about a TM: "5. What Cannot Be Computed
.
A crucial observation about Turing machine is that there are only countably many machines (a set is countable if it is finite, or may be placed in one-to-one correspondence with the integers.) It follows immediately that there are uncountably many real numbers that are not computable, since the reals are not countable. There are simply not enough Turing machines to compute all of those numbers.
.
Like the real numbers, the number of functions on the natural numbers is not countable. It follows therefore that there are uncountably many such functions that are not computable by any Turing machine.
"

Ilíon said...

Ilíon: "Computer software, specifically the low-level software with which few persons ever interact, is a Universal Turing Machine ..."

Technically (which does matter) speaking, this is incorrect; the definition of a "Turing Machine" includes the concept of an infinite memory "tape." Nevertheless, people *do* speak of "the electronic computer" as being a "Turing Machine" and/or a "Universal Turing Machine."
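In any case, the counting observation quoted above can be stated compactly; it is a standard result in computability theory:

\[
|\{\text{Turing machines}\}| = \aleph_0,
\qquad
|\{\,f:\mathbb{N}\to\mathbb{N}\,\}| = \aleph_0^{\aleph_0} = 2^{\aleph_0} > \aleph_0,
\]

so there are functions on the natural numbers that no Turing machine computes.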

One Brow said...

Here's my question, and it may be super stupid:
Not at all.

if computer software is not physical, then what is it?
It's not "just" physical; it is also a pattern in which the physical things are arranged according to a specific interpreter.
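A small illustration of what I mean by "pattern plus interpreter" (just a toy, using Python's standard struct module): the very same four bytes read as different things depending on how they are decoded.

import struct

raw = b"\x41\x42\x43\x44"   # one physical pattern of four bytes

print(raw.decode("ascii"))          # read as text:             ABCD
print(struct.unpack("<I", raw)[0])  # read as a 32-bit integer: 1145258561
print(struct.unpack("<f", raw)[0])  # read as a 32-bit float:   roughly 781.04

The bytes are entirely physical; which "thing" they are is fixed only relative to an interpreter.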

And what is the definition of physical?
I can go with "something that can be objectively measured".

Can it truly be defined?
Well, ultimately all definitions are circular, and also subject to debate and exceptions. It can be defined as well as anything else can be.

And if not, can we have this conversation (meaning in a beneficial way)?
That's a good question of any discussion.

One Brow said...

Amanda, as we all know, "there are no (super) stupid questions." The refusal to think rationally and/or admit truth, which is where One Brow is at, and from which he evidences no inclination to budge, is a different matter.
So, since I don't grant your very questionable and unproven basis for your arguments, I'm irrational. How unkind.

Can *any* concept be defined if some person is set on denying that it has been defined?
It becomes much more difficult when people substitute definition for proof.

To be more precise, can any meaningful conversation occur where one party is intent upon denying either obvious truth(s) or commonly known definitions of the words in use?
Well, I persevere nonetheless.

You may have noticed that I mostly ignore One Brow's comments; there is a reason for this.
I have no doubt there are many reasons, including at least one you will admit.

Amanda Laine: "if computer software is not physical, then what is it?"
Is either my post or your post physical -- that is, is this particular exchange you and I are having a physical thing? And, if it is, then where is it located, what is it made of, how great a volume of space does it occupy?

It's located on the eblogger server, it's made of silicon and other elements in storage chips, and it occupies a very small space on a hard disk. Of course, it is also much more than these physical properties.

How big a box do we need to put this exchange into?
Smaller than a pin, these days.

To safely store this exchange for some period of time, does the box need to be made of some special material (say, lead, if the exchange were made of radioactive matter)?
Any material that stores computer information will do.

Will the exchange still be in the box a year from now (assuming no one has tampered with it)?
Yes.

No! Of course not. Even if we were physically speaking to one another, the *conversation* is not physical. The *conversation* is the exchange of ideas -- which are immaterial -- the spoken and/or written words which we use in this communication are not the message, they are the signal.
You have confused "physical" with "permanent", apparently. Something which requires physical space to store, which causes physical actions, would be physical.

Words, spoken or written, are inherently meaningless symbols by which we humans convey messages. Because the words are symbols, any other symbol can stand in the place of any given symbol; all that matters is that both parties agree on the meaning to be associated (in some context) with the symbols being used.
Ideas do indeed have a non-physical component, as well as a physical one.

Notice this about a TM: "5. What Cannot Be Computed
.
A crucial observation about Turing machine is that there are only countably many machines (a set is countable if it is finite, or may be placed in one-to-one correspondence with the integers.) It follows immediately that there are uncountably many real numbers that are not computable, since the reals are not countable. There are simply not enough Turing machines to compute all of those numbers.
.
Like the real numbers, the number of functions on the natural numbers is not countable. It follows therefore that there are uncountably many such functions that are not computable by any Turing machine."

This disproves naturalism how? Humans have only calculated a finite number of decimal places on a finite set of numbers, much less a countable set.

Anonymous said...

To Ilion and One Brow,

Thanks for your responses! I totally appreciate it.

I see a small fuzzy line between what is physical and what is non-physical. Very simply, I think to a large extent, we don't know what we're talking about. So much is assumed to discuss what we "know". For example, I looked up the definition of physical and some definitions just said it is that which is other than non-physical. However, that assumes the existence of the non-physical so that definition is no good. Often, the definition of physical was "material" but then the definition of material was "physical". I didn't see a strong demarcation line. If you look up the word matter, it also uses a circular reference saying it is "physical" or that it is other than mind or spirit (but then again this assumes the existence of mind and spirit).

In science, we say we are studying the physical world, but light, which is part of what science studies, doesn't seem to fit into this category.

Computer software is not made of ideas/concepts; it is made of commands that get transmitted to the hardware of a computer. That command is itself physical; it is just very small. As far as I can tell, all portions of the software/hardware of a computer are physical. If you wish, you could make the claim that what created the software/hardware is immaterial (the thoughts of the programmer), but that's different. I don't see the case for saying that software is immaterial. Where does this idea spring from? Do you have a response, Ilion?

One last comment: the word "physical" is often equated with something that is "tangible" but I can't touch a cell or an atom. Are these not physical?