10 comments:
-William A. Dembski, author of The Design Inference
Ha Ha Ha
-Gary Smith, Fletcher Jones Professor of Economics, Pomona College
If I want to buy a Mercedes I might ask him about it.
-J. P. Moreland, PhD, Distinguished Professor of Philosophy, Talbot School of Theology, Biola University, and co-editor of The Blackwell Companion to Substance Dualism
An expert on theology, how impressive, yawn.
Ok, so I admit: lining up a few kooks willing to put their names on your work does not logically prove the work is wrong, and criticizing the work by pointing out that they are kooks is guilt by association. But still, sometimes one does make certain judgements by the company a person keeps.
In my defense, Victor, it is just a link to an Amazon book sales advert. Not sure where you are going with this one...
"What You Do That Artificial Intelligence Never Will"
Man will never fly.
Traveling faster than 25mph will kill you.
Those little flip communicators on Star Trek will never happen.
Science fiction never becomes science reality.
Victor, at least Searle made arguments that are publicly available and can be used as a starting point for discussion.
But recall, Searle is a materialist. His point was not that material systems are incapable of manifesting consciousness, only that certain architectures of computing machines are not capable of manifesting consciousness.
For Searle software alone running on an arbitrary computing platform is just a model, and those who say digital computers are conscious (Carnegie Mellon) are confusing the model with the thing itself, thus committing the fallacy of reification.
He is arguably committing the fallacy of composition himself; or perhaps his inductive reasoning holds, and he is correct that present day digital computers have the wrong hardware architecture to manifest consciousness.
But modern digital computers are by no means the only sort of hardware that can be constructed. Back in the day, some computers were analog: to multiply by a constant, an amplifier would put out a voltage that was a fixed multiple of its input voltage.
Networks can be built in hardware. There is a lot of hype right now about quantum computing. There has been some development with optical computers.
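As an illustrative toy model (not from anything Searle or the book's author wrote), the analog multiply stage mentioned above can be sketched in a few lines of Python; the `amplifier` function is purely hypothetical:

```python
# Toy model of an analog computer's multiply stage: an ideal amplifier
# with a fixed gain scales its input voltage continuously -- arithmetic
# done by physics, with no binary representation anywhere.
def amplifier(gain):
    """Return an ideal amplifier: output voltage = gain * input voltage."""
    return lambda v_in: gain * v_in

times_three = amplifier(3.0)
print(times_three(2.0))  # a 2 V input comes out as 6 V
```

The point of the sketch is only that "computer" is not synonymous with "digital computer": the same operation can be realized by entirely different physical architectures.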
Human beings have lived essentially as we presently are for at least 100,000 years. Stone tools have been around for over 3,000,000 years.
Electronic computers were invented less than 100 years ago. What sort of computers will we have in 100,000 or 3,000,000 years? It is pretty ridiculous that this guy thinks he can tell us what will never be.
I'm not enough of a New Atheist or Trump worshipper to feel the need to insult the intelligence of everyone stupid enough to disagree with me (yes, I know not all Trump supporters do this), but I see little reason to believe this guy has any basis to declare what AI will NEVER do, NO MATTER WHAT.
Usually when people say that things are impossible they are assuming certain things as factual (the law of non-contradiction, for instance), so I doubt the author is claiming NO MATTER WHAT. Instead I assume he is stating that, given what we know, it is impossible.
I suppose some people believe anything is possible.
Sometimes I wonder if we are getting "AI" commentators posting here:
Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.
Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don't know when they're making mistakes. Sometimes it's even more difficult for an AI system to realise when it's making a mistake than to produce a correct result...
Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles' heel of modern AI and that a mathematical paradox shows AI's limitations...
The paradox identified by the researchers traces back to two 20th century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And, whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.
https://www.sciencedaily.com/releases/2022/03/220317120356.htm
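The Turing half of the paradox quoted above can be sketched in a few lines of Python. This is only an illustration of the diagonal argument; the `halts` oracle is hypothetical, and the whole point is that no correct one can exist:

```python
# Sketch of Turing's diagonal argument. Assume, for contradiction, that
# some total function halts(f, x) correctly predicts whether f(x) halts.
def diagonal(halts):
    """Build a program d that does the opposite of what halts predicts."""
    def d(f):
        if halts(f, f):      # oracle says f(f) halts...
            while True:      # ...so loop forever instead
                pass
        # oracle says f(f) loops forever, so halt immediately
    return d

# Any candidate oracle is refuted by its own diagonal program. For example,
# an oracle that always answers "loops forever" is wrong about d itself:
def always_loops(f, x):
    return False

d = diagonal(always_loops)
d(d)  # halts immediately, contradicting the oracle's own prediction
```

Whatever `halts` answers about `d(d)`, the program `d` does the opposite, so no algorithm can decide halting in general. Note, though, that this limits what any algorithm can do, human-run or machine-run; whether it says anything special about AI versus human reasoning is exactly what's in dispute.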
so I doubt the author is claiming NO MATTER WHAT
I suppose if the book description takes liberties beyond what the book actually says, then that could be true. The description is all I have though.
https://thefederalist.com/2023/02/20/why-artificial-intelligence-can-never-outpace-humans/
A review of the book with some explanations of the claims.
Victor,
Still don't see where you are going with this post. Ok, somebody claims, after a mere matter of decades, that AI will never do XYZ. What is that supposed to demonstrate?
Well, one can easily claim just the opposite in terms of naturalism.
"In his influential book, The Rediscovery of Mind (RM) (Searle 1992), John Searle declares that "the famous mind-body problem, the source of so much controversy over the past two millennia, has a simple solution." (1992, 1) His proposal is simply to acknowledge that "Mental phenomena are caused by neurophysiological processes in the brain and are themselves features of the brain.""
https://www.ucd.ie/philosophy/perspectives/resources/issue3/Perspectives_volumeIII_SearleMaterialismMindBody.pdf
Searle, of course, is well known for the Chinese room thought experiment, as well as his rather snarky put downs of those claiming that computers think. He is also known for his claim that software running on a digital computer in principle cannot think.
Yet Searle is a naturalist. For Searle mental phenomena are features of the brain. The inability of digital computers to exhibit the full range of mental phenomena is irrelevant to Searle's biological naturalism.
In very broad terms one can speak of hardware, software, and wetware. For Searle the architecture of software running on hardware cannot emulate wetware.
But that does not rule out human constructed hardware/software that can emulate wetware, because wetware is a sort of hardware/software platform.
I guess it shows that people who understand how things work also understand the limitations of what those things can do and what they cannot do. People who don't understand how things work or who lack critical thinking ability may think anything is possible. Looks like Searle understands at least how computers work and is intelligent enough to see what they can't do.
For Searle the architecture of software running on hardware cannot emulate wetware.
But that does not rule out human constructed hardware/software that can emulate wetware
So Searle thinks non-human things construct computers and so once humans start making them they can start to think? Wow. Guess he never thought of that.