Wednesday, December 05, 2018

I could have done otherwise

Shouldn't it at least be possible that we could have done otherwise than what we did? If we murdered someone, shouldn't it at least have been possible that we thought better of it and refrained? Otherwise, is the murder really our fault?

234 comments:

John Moore said...

a) About "could have done otherwise":

If the universe were different, then you would certainly have acted differently. No choice about it. If the universe is the same one you actually live in, then you certainly end up acting the way you do. No choice.

If you lived in the universe in which you thought better of it, then you would certainly have thought better of it.


b) About being at fault:

You will experience the natural consequences of your actions, regardless of the fact that you have no free will. Many actions carry their own punishments, like jumping off cliffs.

Sometimes the natural consequences of people's actions don't seem good to other people, so those other people might pile on social consequences. For example, when somebody seems to be getting away with murder, society can take extra steps to punish the murderer. This is morally fine.

And no, people in society don't have the free choice about whether to punish people or not. See above, under "could have done otherwise."

Legion of Logic said...

Given that I experience decision making throughout the day, every day, it would take an extraordinary amount of evidence to tell me that no, I'm in fact not experiencing it.

And given that we lack the ability to rewind and allow the decision to be made over and over, all we have to factor is our subjective experience filtered through our foundational worldview. And a worldview I disagree with (philosophical materialism) is hardly sufficient evidence to deny free will. So long as I experience it, I have no reason to deny its existence.

oozzielionel said...

No choice we make is free from external influences. Nor is any choice free from the inclinations of our character.

Legion of Logic said...

If free will is defined as existing in a vacuum, then sure I deny it as well. Such a thing can't exist.

Hal said...

Interesting book I stumbled upon regarding differing Christian views of free will:
The Battle Over Free Will


Haven't read enough of it to give an accurate overview, but I find it interesting that it is such a contentious issue regardless of one's metaphysical beliefs.

One Brow said...

Legion of Logic said...
Given that I experience decision making throughout the day, every day, it would take an extraordinary amount of evidence to tell me that no, I'm in fact not experiencing it.

Is free will the ability to make decisions, or some hypothetical ability to have made a different decision? If the former, computers have free will. If the latter, how can you ever know? Don't we always decide according to our judgments and preferences?

Legion of Logic said...

Is free will the ability to make decisions, or some hypothetical ability to have made a different decision?

I don't really know terminology in this topic, but I guess I would fall under some sort of compatibilist heading? A self-driving car must stop ten times out of ten for a pedestrian. A person may stop ten times out of ten and probably will, but he can freely choose not to. The car cannot.

Diving into the theoretical questions of "could it have possibly gone different given the exact same history from creation to present" is a fun exercise maybe, but so long as people experience and resist temptations (while others give in), so long as we weigh pros and cons and make decisions based upon our reasoning, I don't find it particularly useful or compelling to say the word "decision" is an illusion.

Hugo Pelland said...

" A self-driving car must stop ten times out of ten for a pedestrian. A person may stop ten times out of ten and probably will, but he can freely choose not to. The car cannot."

But that's also less and less true as machine learning and complex algorithms give software ways to estimate the odds that a detection was a false positive, so that 1 time out of 10, let's say, the car won't stop.

Taking it even further, the car could one day realize that it's better to hit 1 pedestrian to avoid a crash that would kill its 3 occupants. It's not that far from a human "freely" choosing to hit a pedestrian, as we would instantly ask: but WHY did the human choose that? We would never say, oh, it doesn't matter why, it was just their free choice.

I.e., truly free choices make less sense than determined choices. We can't really explain it when we make a choice that appears to be free, like picking a random number. Think of one now, any number, and explain why you picked it... can you?

One Brow said...

Legion of Logic said...
I don't really know terminology in this topic, but I guess I would fall under some sort of compatibilist heading? A self-driving car must stop ten times out of ten for a pedestrian. A person may stop ten times out of ten and probably will, but he can freely choose not to. The car cannot.

If the car is programmed with a random number generator, it may not stop for the pedestrian, even in otherwise identical conditions.
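The point can be sketched in a few lines of Python (a toy stochastic policy, not any real vehicle software; `should_stop` and `defect_rate` are made-up names for illustration):

```python
import random

# Toy sketch: a policy whose input is identical from run to run,
# yet whose output can differ because of a random draw.

def should_stop(pedestrian_detected, defect_rate=0.1):
    """Stop for a detected pedestrian, except in a random
    fraction (defect_rate) of otherwise identical cases."""
    if pedestrian_detected and random.random() > defect_rate:
        return True
    return False

# Same input every time, yet both outcomes occur across many runs.
outcomes = {should_stop(True) for _ in range(1000)}
```

Whether that randomness amounts to "choosing" is of course the very question under debate; the sketch only shows that identical conditions need not force identical behavior.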

Diving into the theoretical questions of "could it have possibly gone different given the exact same history from creation to present" is a fun exercise maybe, but so long as people experience and resist temptations (while others give in), so long as we weigh pros and cons and make decisions based upon our reasoning, I don't find it particularly useful or compelling to say the word "decision" is an illusion.

I agree that the word "decision" is not an illusion. My point is computers also make decisions, e.g., games between chess-playing AIs are not identical game-to-game. Is the making of decisions enough to say the entity has free will?

bmiller said...

Computers only make the decisions their programmer programmed them to make.
A computer is just a souped-up abacus.

Hugo Pelland said...

bmiller, that's not true anymore. See neural networks; humans can't tell what's happening exactly in the software.

Plus, we also are souped-up abacuses, so that doesn't change much. We have one goal: survive to reproduce. And we've become so good at it that we have time to write on blogs on the side, among other things.

One Brow said...

bmiller said...
Computers only make the decisions their programmer programmed them to make.
A computer is just a souped-up abacus.


As one who programs, I can assure you I am sometimes surprised by the outputs. :)

However, this just takes me back to my previous questions:
Is free will the ability to make decisions, or some hypothetical ability to have made a different decision? If the former, computers have free will. If the latter, how can you ever know? Don't we always decide according to our judgments and preferences?

Hal said...

Hugo,
I wouldn't put it quite as crudely as bmiller, but in principle he is correct. Machines like computers may act in accordance with rules, but they cannot follow rules. The programmer decides what rules she intends to follow and programs the computer to act in accordance with those rules.

One Brow,
Even if I do calculations by hand I can be surprised by the outcome.


Computers are fantastic tools invented by humans. I'm very skeptical that they can be used to help illuminate or shed light on the actual capacities humans have. Though at one time I held the view that the human mind was like a computer, I now find that view to be rather perverse.:-)

Hugo Pelland said...

Hal,

Neural networks follow rules the same way humans do, afaik. Why do you disagree with that?

bmiller said...

Crude gets the point across concisely. đŸ˜‰

Hal said...

Hugo,

Neural networks are based on human brains. Human brains do not follow rules. Human beings follow rules.

One Brow said...

Hal said...
Computers are fantastic tools invented by humans. I'm very skeptical that they can be used to help illuminate or shed light on the actual capacities humans have. Though at one time I held the view that the human mind was like a computer, I now find that view to be rather perverse.:-)

I agree with this sentiment generally. I'm not trying to equate humans and computers, rather, I'm saying that if we equate free will to making decisions, that definition also applies to the type of processing computers do, that is, both have the property of being able to consider alternatives and make decisions. Analogously, I can say an apple and a lime are the same shade of green without saying I think apples are just like limes.

Hal said...

One Brow,

I agree with your view that there is more to the question of free will than the capacity to make a decision.

I do have difficulty understanding how computers can give any insight into the question of human free will. Any decision making performed by a computer is simply an extension of human decision making. It is a tool designed by humans.

To make a crude analogy: equating the decision making of a computer to a human making decisions is like equating the crying of a barbie doll to a human crying.

Hugo Pelland said...

Hal,
"Any decision making performed by a computer is simply an extension of human decision making."
But that's not true in the case of neural networks! We genuinely don't know how they make decisions.
So yes, it's like a brain, which also has many mysteries when it comes to 'how' decisions are made.
Again, where's the real difference? Complexities, sure, by leaps and bounds, but beyond that... what else?

Hal said...

Hugo,

I don’t see how that negates my original point.

And, to repeat, the human brain does not make decisions. Nor do human brains follow rules.

One Brow said...

Hal said...
And, to repeat, the human brain does not make decisions. Nor do human brains follow rules.

What does?

To make a crude analogy: equating the decision making of a computer to a human making decisions is like equating the crying of a barbie doll to a human crying.

If the doll has been programmed to use crying to indicate error or inadequate resources, how is that different?

Again, I'm with you that silicon computing is not a good facsimile of biological construction.

Hal said...

One Brow,

It is the human being who makes decisions and is capable of following rules. It doesn't make sense to ascribe those capacities to a part of the human (such as the brain). A brain can't satisfy the behavioral criteria we use to identify such powers.

As to your second question, the tears of a human express the pain or emotional distress or joy that human is feeling. A doll has no feelings.

Based on our knowledge of humans, we can make very sophisticated dolls that can mimic human behavior. But that shouldn't fool us into thinking that such dolls are conscious and feel the pain that a crying human does.

bmiller said...

Hal,

Though at one time I held the view that the human mind was like a computer, I now find that view to be rather perverse.:-)

From our prior discussion I think I know how you ended up where you are, but if I understood correctly you originally didn't think the human mind was like a computer, then you did, and now you again don't think so.

I think it's an interesting story. Would you care to elaborate?

Hal said...

bmiller,

Well I was a Christian for much of my early life. So it never occurred to me then that the mind might be like a computer.:-) I did read many of C.S. Lewis's books during that time.

After I'd pretty much abandoned my belief in the Christian God (about 20 or more years ago) I spent more time reading books that focused on biological evolution and philosophy of mind. Also spent a lot of time on the Internet Infidels forum. Reductionism and the concept of the mind as a computer were pretty much the de facto assumption of many of the posters there. If I recall correctly, it was my involvement in a very long thread started by someone who questioned reductionism that began my move away from that position. Also, Ernst Mayr's book What Makes Biology Unique?: Considerations on the Autonomy of a Scientific Discipline was a big influence.

It was Peter Hacker's book Philosophical Foundations of Neuroscience that finally caused me to reject reductionism and many of the ideas that were popular in the philosophy of mind at that time. Of course, that has led me into a more in-depth study of Wittgenstein and analytic philosophy in general.

Have also been influenced by the writings of Bede Rundle, John Dupre and Mario Bunge. Bunge is a hardcore materialist but he has written an interesting book presenting a good defense of emergence: Emergence and Convergence: Qualitative Novelty and the Unity of Knowledge

Not sure how interesting this all is. Most of the changes in my philosophical views have taken place over a long period of time. Can't really go into much detail over the actual arguments that led to those changes and that is why I listed some of the books that impacted my views.

Hugo Pelland said...

Hal,
You referenced your earlier point, was it: "Machines like computers may act in accordance with rules, but they cannot follow rules. The programmer decides what rules she intends to follow and programs the computer to act in accordance with those rules."
?

That's what I am directly addressing by bringing up neural networks, because they are literally machines that "follow" rules; they do not do what a programmer told them to do. The only thing given to a neural network is the same thing you would give a human: rules to follow, not a step-by-step recipe.

This is really insightful because it could have failed... nobody knew that these machines would become so good at precisely what you're claiming they are not doing: following rules without any direction at all.
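The contrast Hugo is drawing can be shown with a toy example (a bare single-neuron "network" in plain Python, nowhere near a real neural net): the program never states the rule for logical AND anywhere; it only adjusts weights in response to examples, and the rule-conforming behavior emerges from training.

```python
# Minimal perceptron: learns logical AND from examples alone.
# No line of this code encodes "output 1 iff both inputs are 1".

def train(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # learn only from mistakes
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Training data for AND -- examples, not instructions.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train(data)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

A real network differs from this by many orders of magnitude, but the principle is the same, which is why nobody can point to the line of code where the learned "rule" lives.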

One more thing to clarify: this has nothing to do with 99.99..% of the software we use. You get that, right?

bmiller said...

Hal,

Well I was a Christian for much of my early life. So it never occurred to me then that the mind might be like a computer.:-)

Computers were not yet invented way back then? :-)

Reductionism has never made sense to me at all, but it seems it sounded convincing to you at one point. How about your earlier beliefs and what caused you to abandon those beliefs, if you don't mind my asking?

Hal said...

Hugo,

Perhaps I need to clarify the difference I am making between ‘following a rule’ and ‘acting in accordance with a rule’. Hopefully this example will help:

Two friends are playing a game of chess. They have been doing this since they learned chess in their youth. One of the players recently acquired a pet chimpanzee. While they are playing the chimp comes in the room and pushes one of the bishops diagonally along the board. Is the chimp acting in accordance with the rules of chess or is it following a chess rule?

Hugo Pelland said...

Interesting, I don't know what you mean in this case, so can you elaborate?
My gut reaction was to say neither, because the chimp might just be copying what it just saw happen.

Hal said...

Hugo,

Well, the move was in accordance with the rules of chess. After all, the bishop was moved diagonally. But the chimp wasn't following a rule of chess. One can only follow a rule of chess if he understands the rules.

Suppose you are teaching a young child to play chess. After playing for a while you notice that the child does move the bishop diagonally, but only 2 squares at a time. When asked why, he explains that a bishop can only move two squares at a time. But that is not a rule of chess. Even though he has been moving the bishop in accordance with the rules, he has not yet acquired an understanding of the rules. He is not yet following the chess rules due to his lack of understanding of those rules.

Hal said...

bmiller,

How about your earlier beliefs and what caused you to abandon those beliefs if you don't mind my asking?

I'm sorry, I'm not sure what earlier beliefs you are referring to.

bmiller said...

Hal,

You mentioned you were a Christian for much of your early life. Were you brought up in a Christian home and never questioned things? Or did you find your way there? Then what made you leave your early faith?

Hal said...

bmiller,
I was raised in the Lutheran faith. Of course, like most people I had questions about things that a young child takes for granted. For a number of years in my twenties I attended non-denominational churches. Around age thirty I converted to Catholicism. Then as time went on it became more and more difficult to believe in God. Am not a hardcore atheist. It certainly is possible that God may exist. But I simply lack the belief that there is such a being.

One Brow said...

Blogger Hal said...
It is the human being who makes decisions and is capable of following rules. It doesn't make sense to ascribe those capacities to a part of the human (such as the brain). A brain can't satisfy the behavioral criteria we use to identify such powers.

Would you agree that the brain is the only indispensable part of a human for this purpose?

As to your second question, the tears of a human express the pain or emotional distress or joy that human is feeling. A doll has no feelings.

Given only the exterior behavior of a given doll, with no knowledge of the internal construction, how can you tell?

Hal said...

One Brow,
I would agree that a brain is necessary. But why on earth would I want to reduce what it is to be a human to the physiology and anatomy of a brain? A brain can't see or hear or feel without the rest of the human body. A brain by itself can't exhibit any of the behavior we associate with thinking or feeling.

Is it possible given the right circumstances that we could be fooled into thinking the doll was actually expressing real feelings or emotions? Sure, just as moviegoers in the early 20th century were fooled into thinking what they saw in a moving picture was really taking place before them.

We know the doll's history. We know it was constructed by humans in order to mimic or copy some of the behavior found in humans. We know the difference between living organisms and machines. Of course we can imagine such characters as Data or Pinocchio or Mr. Toad existing, but that doesn't mean such imaginary characters can actually exist.

Human behavior doesn't take place in a vacuum. It is part of a community of other humans, part of a particular culture. Some of that behavior is spontaneous but much of it is also learned from the culture in which one lives.

bmiller said...

Hal,

Then as time went on it became more and more difficult to believe in God.

Interesting story. Most people lose their faith in their 20's, but it sounds like you kept interest much longer.
Was there a series of events that made you change your mind or did your faith just become less and less important in your life?

Hal said...

bmiller,

Well, I don't recall particular events that made me change my mind. I still enjoy reading the Bible. Am currently reading Robert Alter's excellent translation of the Hebrew Bible. I just can't find myself able to believe in God. Just as I don't believe in the Greek gods but still enjoy reading about them.

bmiller said...

Hal,

I liked the Greek myths too. Of course the Prime Mover of Aristotle and the One of Plato are completely different from the Greek gods.

Hal said...

bmiller,

Good point. Yet the myths, especially as presented in Greek drama, can provide some interesting philosophical positions. For example, in Oedipus Rex, there is no conflict between fate and free will.

I think the God of the Old Testament is more like Zeus than he is Aristotle’s Prime Mover.

bmiller said...

Hal,

Well Zeus and the other Greek gods were never considered the eternal cause of all existence. They were more like comic book superheroes. In fact Thor actually is a comic book superhero. I like the Norse myths also.

One Brow said...

The Genesis story is about the Old Testament Yahoweh taking a preexisting, featureless land and creating the world and life, not about being the eternal cause of all existence. Much more similar to Zeus.

Legion of Logic said...

"In the beginning, God created the heaven and the earth."

Zeus seems closer to Michael than God, head of the angelic host. Gaia is the closest Greek deity to the Genesis description of God's activities that I know of, though I'm no expert in Greek mythology.

bmiller said...

The Old Testament considers Yahweh not only the creator but also the sustainer of all things. Also the only eternally existing being.

You are Yahweh, you alone. You have made heaven, the heaven of heavens, with all their host, the earth and all that is on it, the seas and all that is in them; and you preserve all of them; and the host of heaven worships you (Nehemiah 9:6).

Zeus on the other hand was not a creator and came from Rhea who came from Gaia who was preceded by Chaos.

When I started reading Greek mythology I wondered how the ancient Greeks could worship Zeus since he was not considered an eternal creator. He had a beginning so he could also have an end as did the other Greek gods.

Legion of Logic said...

Seems similar to political lobbyists of today. Doesn't matter which politician holds the office, if you need something you go to the office with the power to grant your desire or need and you gain its favor. Probably wouldn't matter to them if Zeus got overthrown, so long as there was someone in place to bring the rains.

One Brow said...

Blogger Hal said...
I would agree that a brain is necessary. But why on earth would I want to reduce what it is to be a human to the physiology and anatomy of a brain? A brain can't see or hear or feel without the rest of the human body. A brain by itself can't exhibit any of the behavior we associate with thinking or feeling.
...
We know the doll's history. We know it was constructed by humans in order to mimic or copy some of the behavior found in humans. We know the difference between living organisms and machines. Of course we can imagine such characters as Data or Pinocchio or Mr. Toad existing, but that doesn't mean such imaginary characters can actually exist.


Hal, I see this as an argument that the feelings of a doll/computer/other sentient mechanical life would be vastly different from what we experience. However, it is another step entirely to say they would have no feelings at all.

Hal said...

One Brow,

The reason for attributing feelings to a mechanical device is that it mimics the behavior that we use to attribute feelings to other humans. But now you apparently want to use these behavioral criteria to identify feelings that are 'vastly different'. Sorry, but that makes no sense to me.

If a talking doll's lips move and fake tears roll down its plastic cheeks while a recorded message the doll-maker has put in it says "I feel sorry.", what is this vastly different feeling that you believe we can attribute to it?

One Brow said...

Hal,

I don't know how we can know what, or if, there is a feeling. As we have agreed all along, the comparison between how our brains work and how computers work is thin and often over-stretched.

However, the thinness of that comparison does not mean there can never be a neural network of sufficient complexity that it has its own decision-making process and its own version of feelings that are not simply programmed responses. We already have neural networks that teach themselves how to play sophisticated games not only better than humans can play them, but even better than humans can program computers to play them. What happens when an AI like that gets combined with a few different ways to learn about the environment, another couple of orders of magnitude of processing power, and time enough to learn to interact with humans on its own?
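For what it's worth, the "teaches itself" part doesn't require anything exotic. Here is about the simplest self-teaching learner that can be sketched (an epsilon-greedy bandit in plain Python; not a neural network, and nowhere near the game-playing systems mentioned above): it discovers which action pays best purely from trial and reward, with nothing about action quality programmed in advance.

```python
import random

# Toy self-teaching agent: learns which of three "moves" pays off
# best purely by trying them and observing rewards.

def learn(payoffs, steps=5000, eps=0.1, seed=1):
    rng = random.Random(seed)
    values = [0.0] * len(payoffs)   # estimated value of each move
    counts = [0] * len(payoffs)
    for _ in range(steps):
        if rng.random() < eps:      # occasionally explore at random
            move = rng.randrange(len(payoffs))
        else:                       # otherwise exploit best estimate
            move = max(range(len(payoffs)), key=lambda m: values[m])
        reward = 1.0 if rng.random() < payoffs[move] else 0.0
        counts[move] += 1
        # incremental average of observed rewards for this move
        values[move] += (reward - values[move]) / counts[move]
    return values

# The agent converges on move 2 (payoff 0.8) without being told.
estimates = learn([0.2, 0.5, 0.8])
```

The open question in this thread is not whether such learning happens, but whether scaling it up could ever yield anything like an inner life.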

Hal said...

One Brow,

I don't know how we can know what, or if, there is a feeling. As we have agreed all along, the comparison between how our brains work and how computers work is thin and often over-stretched.

Have we agreed to that? I wrote this earlier:
Computers are fantastic tools invented by humans. I'm very skeptical that they can be used to help illuminate or shed light on the actual capacities humans have. Though at one time I held the view that the human mind was like a computer, I now find that view to be rather perverse.:-)

And you responded: I agree with this sentiment generally.

To clarify: I find the idea of comparing the brain to a computer to be completely misguided. Computers are designed to mimic some of the things humans can do, but that does not entail that computers are really like brains even if they (computers) are designed with neural networks.

I admit to being somewhat confused by your apparent assumption that there would be any kind of connection between AI and the feelings or sensations that human beings have. Why should the development of more advanced AI entail the acquisition of sensations or feelings? Would you mind explaining this connection?

One Brow said...

Hal,

Sensations are the ways that we interact with the world and within ourselves. We have marvelously intricate and capable interaction mechanisms, but most of them would be duplicable with sufficient technology.

Feelings are reactions to these sensations and the various thoughts they trigger. A self-raised computer would not react to its sensations in the same manner as humans, but it would react.

I'm not sure what you mean by "connection" here, since I have said there would be very little similarity.

Hal said...

One Brow,

I'm not sure what you mean by "connection" here, since I have said there would be very little similarity.

Earlier you wrote

a neural network of sufficient complexity that it has its own decision-making process and its own version of feelings that are not simply programmed responses.

You seemed to imply that increasing complexity in a neural network in order to result in a more powerful AI will also result in the neural network having feelings. The 'connection' was in reference to that complexity.

In any case, it makes no sense to ascribe consciousness or rational powers to the brain. It is the human being who has sensations, perceives the world around him, reasons and acts for reasons, not the human brain. So it makes no sense to think that a neural network modelled on the brain is going to have capacities that we cannot logically attribute to the brain itself.

Hugo Pelland said...

Hello, online again after a sudden emergency trip...

Hal, based on what you just said, what do you think of someone with locked in syndrome? Are they not human anymore? All they have is basically a functioning brain.

At the same time, I agree it isn't the brain on its own that makes a person, but a piece of software on its own wouldn't be considered conscious either. It might be possible only if the software has some form of inputs/outputs, presumably some hardware or, somehow, some other software decoupled enough from it. But why wouldn't this be a possible sentient being just like human beings?

And again, neural networks are not anywhere near that yet, but they aren't software programmed to do something, anything, by humans. They just respond to their (extremely limited) environment and follow some (extremely simple) rules, just like human beings do by feeling their environment and their own thoughts, and focusing on them selectively based on limits imposed by their own bodies, brains and that same environment. Where is that fundamental difference that makes humans humans?

The theists have a straightforward answer here: humans have a soul. But I still don't get what your explanation is, Hal?

Hal said...

Hugo,
Sorry to hear about the emergency. Hope all went ok.

A computer is a machine, a non-living substance.

Why are you making the assumption that a non-living substance like a computer can have the same capacities found in a human being? Sentience is widespread throughout the animal kingdom. There are myriad forms of living substances that are conscious and have sensations and perceive and interact with their environment. We can use non-living substances to mimic those living animals. But that does not imply that those non-living substances are conscious entities.

I have an Aristotelian conception of the mind. So I don't think consciousness is a mark of the mental. As I just mentioned, many animals are conscious beings but they do not have minds. It is because humans have rational powers, can retain knowledge, use language and act for reasons that they can be said to have a mind. But the mind is not a separate entity. The mind does not interact with the body. The mind is not an agent. It is the human being that is an agent and has the capacities I just mentioned.

Most modern materialists are dualists. They identify the mind or the self with the brain and see the brain and body interacting with each other. That sort of dualism doesn't make any more sense to me than the Christian dualism which views the soul (or mind) as an entity interacting with the human body. Not all Christians share that form of dualism.

One Brow said...

Hal said...
A computer is a machine, a non-living substance.

Only living things can ever experience feelings? Why?

Most modern materialists are dualists. They identify the mind or the self with the brain and see the brain and body interacting with each other. That sort of dualism doesn't make any more sense to me than the Christian dualism which views the soul (or mind) as an entity interacting with the human body. Not all Christians share that form of dualism.

I see the mind as the patterns of impulses on the brain, in the same way that a cross is a pattern of positions for sticks. The brain interacts with other parts of the body, but you could say that about the heart, liver, kidneys, leukocytes, or any other individual cell/organ. The brain is definitely where any activities we think of as indicating a mind occur, as is evidenced by the fact that the loss of different parts of the brain means a loss of different types of these functions.

Hal said...

Hugo,
Hugo Pelland said...
Hello, online again after a sudden emergency trip...

Hal, based on what you just said, what do you think of someone with locked in syndrome? Are they not human anymore? All they have is basically a functioning brain.


Of course a person suffering from locked in syndrome is still a human being. I'm not a behaviorist. One may not be able to satisfy the behavioral criteria we use for attributing the rational powers humans have and yet still have the capacity to reason.

I disagree with your view that all they have is basically a functioning brain. They still have a living body. They are still able to communicate with others.

Hal said...

One Brow,

Only living things can ever experience feelings? Why?

Why is reality the way it is? Why is there something rather than nothing?

Not sure how to answer your question. Different substances have different powers. Inanimate substances can't have feelings. After all, that is why we call them inanimate.


The brain is definitely where any activities we think of as indicating a mind occur, as is evidenced by the fact that the loss of different parts of the brain means a loss of different types of these functions.

Giving reasons for one's actions, describing one's plans for tomorrow, arguing over philosophical issues are some of the behavioral criteria we use to attribute minds to human beings. They are partly constitutive of what it is to be a rational creature. There is a logical connection between those activities and our concept of rationality.
It is only because we already know what it is for a human being to reason and act for a reason that we are able to inductively (non-logically) correlate brain activity with those rational powers.

Neuroscience is great. Hopefully, it can contribute to treatments of neurological disorders that many people suffer from. But it seems to me to be mainly useless in understanding human nature, what it is for creatures such as ourselves to be human beings.

One Brow said...

Hal said...
Why is reality the way it is? Why is there something rather than nothing?

Not sure how to answer your question. Different substances have different powers. Inanimate substances can't have feelings. After all, that is why we call them inanimate.


Is a current lack of a specific power a limitation such that the power can never be had? 1,000,000 years ago the ancestors of the human population had no spoken language; did humans at that point have the power to acquire human language, or was that power gained over time? If the former, and a power can be unexpressed over thousands of generations, how can you be certain any type of thing lacks that power? If the latter, how can you be certain this particular power cannot be gained by computers?


Giving reasons for one's actions, describing one's plans for tomorrow, arguing over philosophical issues are some of the behavioral criteria we use to attribute minds to human beings. They are partly constitutive of what it is to be a rational creature.

These are all abilities that are lost subsequent to damage of specific, different parts of the brain.

It is only because we already know what it is for a human being to reason and act for a reason that we are able to inductively (non-logically) correlate brain activity with those rational powers.

I am not disagreeing with this, but I don't see how it forwards an argument that we could not in the future correlate machine activity with these rational powers.

Neuroscience is great. Hopefully, it can contribute to treatments of neurological disorders that many people suffer from. But it seems to me to be mainly useless in understanding human nature, what it is for creatures such as ourselves to be human beings.

I agree that neuroscience can not replace philosophy, but again, I don't see how that says we can place hard limits on whether machines could experience an internal world that would include things like feelings.

Hal said...

One Brow,
Is a current lack of a specific power a limitation such that the power can never be had?

It is not simply a question of the possibility of acquiring new capacities. It also depends on the substance one is referring to.
Simply because it is possible for living things to evolve and grow and acquire new abilities or powers does not entail that it is possible for non-living things like rocks or machines to acquire consciousness, let alone the rational powers needed to have a mind.

The gamma camera I use for acquiring scans has a device on it that allows the camera heads to move very close to a patient without touching her. It does this automatically if I set up the imaging protocol correctly. When I describe this to my patients I tell them not to worry, they can't get hurt because the camera will know when to stop because it can see how close it is. Even though I use the words "know" and "see" when referring to the camera, I don't mean to imply that the camera is conscious of what is happening or that it actually sees the patient. It is simply a machine. No more conscious than a toaster, or the Tin Man or Data.

Sorry, but it looks to me like you are proposing a fantasy scenario when you talk about machines having sensations or feelings just as humans do. Obviously, there is nothing I can say that will dissuade you from believing in it. But I cannot believe in it. It looks quite impossible to me.

One Brow said...

Hal,

It is not simply a question of the possibility of acquiring new capacities. It also depends on the substance one is referring to.
...
Sorry, but it looks to me like you are proposing a fantasy scenario when you talk about machines having sensations or feelings just as humans do. Obviously, there is nothing I can say that will dissuade you from believing in it. But I cannot believe in it. It looks quite impossible to me.


Actually, I am proposing machines having sensations/feelings that are completely different from those that humans do. Ultimately, we are of the same substances as machines, just in different proportions and with different organization.

Hal said...

One Brow,
Yes, I understand that is what you are proposing. It doesn't make sense to me. You are claiming that these machine sensations/feelings are completely different from human sensations/feelings. But you still want to call them sensations/feelings. What are they? How do you identify them?

It is true that the subatomic particles that make up a machine can also be used to make up a human being. But as we move up the scale from atoms to molecules and more complex combinations of matter, we can see the different forms that substances can take, with vastly different properties and capacities. Non-living matter lacks many of the properties and powers we find in living substances.

It is because substances have "different proportions and with different organization" that machines cannot have sensations while humans and other sentient animals can.

I do recall that a lot of the reductive materialists on the philosophical discussion forums shared the view you are proposing regarding computers. Perhaps because they also shared something like your conception of the mind: "I see the mind as the patterns of impulses on the brain, in the same way that a cross is a pattern of positions for sticks."

Hugo Pelland said...

Thanks for your message Hal,
Things went relatively fine but the circumstances were really bad as my father-in-law passed away... my mother-in-law lost her mom and husband within a month so it's been tough. She has a lot of support thankfully.

Got to go offline again so no time to comment again on the interesting topic at hand, later...

Hal said...

Hugo,
Wow, that is a lot to deal with. Always very hard when close ones pass away. Take care.

One Brow said...

Hugo,

Condolences and well wishes. Sorry we can't do more.

One Brow said...

Hal,

I don't know that we could identify the feelings/sensations of a machine, any more than I can identify feelings inside you. They are internal to the being experiencing them.

Invoking the notion of patterns as something identifiable takes my position out of reductive materialism, to my understanding, but I'm not hung up on the categorization of my thoughts, as long as the concept is passed on.

It is because substances have "different proportions and with different organization" that machines cannot have sensations while humans and other sentient animals can.

Part of the issue with Aristotelianism is the tendency to forbid things from happening based on non-evidenced categorizations. We have no way to disprove that electronic machines can develop an internal experience of the world, and have the rough equivalent of sentience. Talk of the limits of capacities, which limits can't be proven, dismisses rather than addresses the argument.

Hal said...

One Brow,
We have no way to disprove that electronic machines can develop an internal experience of the world, and have the rough equivalent of sentience. Talk of the limits of capacities, which limits can't be proven, dismisses rather than addresses the argument.

But you just said you don't know how these internal feelings could be identified. So where is your argument that can show us that machines have such internal feelings?

One Brow said...

Hal,

The same way that you would know my feelings, or I would know yours. The machine would tell us.

bmiller said...

Hugo,

Condolences to you and your family.

bmiller said...

The same way that you would know my feelings, or I would know yours. The machine would tell us.

"There's a snake in my boot!"

Machines are different things than humans.

Hal said...

One Brow,

The same way that you would know my feelings, or I would know yours. The machine would tell us.

So there is a way to tell how others are feeling. Do you think there are other ways?

Also, you wrote earlier:
We have no way to disprove that electronic machines can develop an internal experience ...

That cuts both ways. If you can’t disprove it you can’t prove it.

And you said:
Part of the issue with Aristotelianism is the tendency to forbid things from happening based on non-evidenced categorizations

Where did you get that idea? Our conceptual categories like sentient / non-sentient and biotic / abiotic are based on our observations of the world. Isn’t that evidence?

One Brow said...

Hal said...
So there is a way to tell how others are feeling. Do you think there are other ways?

All the ways I can think of involve signaling of one fashion or another.

That cuts both ways. If you can’t disprove it you can’t prove it.

Agreed.

Where did you get that idea.? Our conceptual categories like sentient / non-sentient and biotic / abiotic are based on our observations of the world. Isn’t that evidence?

What's the observation that says electronic machines are unable to develop an internal world?

Hugo Pelland said...

Thank you for the kind words folks.

Hal,
Regarding locked in patients, yes, of course they are still humans, but why? Is it really because they have a body, even when they can't use it at all? Is it really because they can communicate, when I'm pretty sure some of them have no way of communicating with others? Therefore, they are the closest to what a machine with consciousness "could" be, and that seems to me to be a strong argument in favor of the "possibility" of that happening. It's not proof that it definitely can be done, and certainly not proof that it will, but it's not far.

Second, our ability to use reason isn't that incredible in my understanding. We don't really decide what to reason about and certainly not how. Our conscious experience is one of thoughts being generated by the brain, without our direct control, and the "I" just focuses on them, or not.

Therefore, we are really closer to machines already than it seems on the surface. Sure, we're complex and wouldn't be who we are without our bodies; locked in patients, to close the loop, wouldn't think about much, if anything, if they had not experienced the world prior to becoming locked in that state. But what they think about is just that: complex combinations of past experiences and the ability to focus on these thoughts as they arise in consciousness, or not.

Creating a piece of software with some hardware for sensing the world could in principle yield something similar. It's not an assumption, it follows from what we know about the brain and our own self-awareness.

Hal said...

One Brow,
All the ways I can think of involve signaling of one fashion or another.

Would you mind providing some examples? You mentioned language earlier. I'm looking for some other specific ways you can tell that substances have sensations.


What's the observation that says electronic machines are unable to develop an internal world?

They are artefacts composed of non-sentient, non-living substances and designed by humans to process information. As I mentioned above, only living, sentient substances are known to have sensations. That is why we call them sentient substances.

Hal said...

Hugo,
I think we agree that there are no spiritual or mental substances. The only substance that exists is matter. But there are myriad forms of matter. There are living substances, there are non-living substances, there are sentient and non-sentient substances. Some of those sentient substances are also rational substances. We are rational substances: beings with the capacity to not only observe and interact with the world but to act for reasons. And we are moral beings able to distinguish between good and evil.

Since we are rational substances, we have somatic and mental properties. In other words, we are bodies with the capacity to feel pain or pleasure, to observe and interact with the world, to act based on reasons.

As with all other living substances we are subject to the effects of aging and injury. We don't cease to be humans because of that. That locked in patient is still the same substance.

If I understand you correctly, you share the common reductive materialist view that a human is really just a brain encased in a human body. That it is really the brain that thinks and feels and it moves the body sort of like one moves a puppet on a string. That to me seems as misconceived as mind-body or soul-body dualism.

You wrote:
Creating a piece of software with some hardware for sensing the world could in principle yield something similar. It's not an assumption, it follows from what we know about the brain and our own self-awareness.

No it doesn't follow from what we know. It follows from your dualistic conception of human nature.

What we do know is that these machines are composed of non-sentient, non-living substances. That they are designed by humans to mimic or copy some of the things that humans can do. That is no reason to assume that they can feel the pains or enjoy the pleasures humans do based on that knowledge of what a machine is.

Why do you want to turn humans into machines?

One Brow said...

Hal said...
Would you mind providing some examples? You mentioned language earlier. I'm looking for some other specific ways you can tell how substances have sensations.

Smiling, body posture, arm positions, reactions to stimuli, etc.


They are artefacts composed of non-sentient, non-living substances and designed by humans to process information. As I mentioned above, only living, sentient substances are known to have sensations. That is why we call them sentient substances.

I am composed of non-living, non-sentient substances organized in a way that allows for sentience. I find this argument somewhat circular.

Hal said...

One Brow,
Smiling, body posture, arm positions, reactions to stimuli, etc.

That's fine but you are leaving out the contexts in which those occur.

I am composed of non-living, non-sentient substances organized in a way that allows for sentience. I find this argument somewhat circular.

I already acknowledged that all matter at the subatomic level is the same. We only differentiate between sentience/non-sentience and living/non-living at higher levels when a wide array of different substances like living cells and organs and bodies are formed.

Humans are in the category of living, sentient beings. Computers are in the category of non-sentient, non-living artefacts created by humans. I don't see the fact that computers can process large amounts of data as providing a reason to think those categories are flawed. I can no more credit a computer with having sensations than the calculator on my desktop.

I suppose you could try and make a computer out of living cells. But then I don't think it would be a computer any longer. Probably closer to Frankenstein's monster.

Hal said...

Just throwing this out there:

I don't think this question concerning the possibility of an electronic machine such as a computer having the sensations that animals have correlates with atheism vs theism. I'm an atheist and bmiller is a theist but we seem to be in complete agreement on this issue. However I have encountered a large number of atheists on the web who are firm believers in the possibility of making machines that are conscious sentient beings.

Considering the vast differences between machines and all living, sentient beings, let alone rational beings such as ourselves, I am surprised that more atheists aren't skeptical of this possibility. Especially, those who pride themselves on only going where the evidence leads.

Hugo Pelland said...

Hello Hal,

Starting from the latest: it is true that the topic of this thread has little, if anything, to do with atheism vs theism. But it’s probably more interesting than that question anyway!

One thing I must point out at this point is that I see you use language which I find to be quite rhetorical, instead of just presenting arguments, and it makes it look like you’re attached to some romantic view of what it means to be a human, rather than just looking at the cold hard facts. Let me give some examples:

"I have encountered a large number of atheists on the web who are firm believers in the possibility of making machines that are conscious sentient beings"
Firm believers? But it’s just a possibility… and did many of these individuals tell you that they are that attached to that “strong belief”? From a less emotional standpoint, it’s again just about looking at what we are made of and how we came to be that way. As One Brow pointed out, we’re made of the same stuff as machines are, or that sentient machines would be made of. It’s just the arrangement that differs. You have not explained what is so ‘magical’ about human beings. Your latest answer was actually another example of something more rhetorical:

" I suppose you could try and make a computer out of living cells. But then I don't think it would any longer be a computer. Probably closer to Frankenstein's monster."
Why the comparison to some fantastical grotesque creature? It could just be something else we don’t know of... and that thing could well be sentient and even morally superior to us flawed human beings. We just don’t know. But what we do know is that it would be made of the same material, just arranged differently. Going back up a bit more:

" Computers are in the category of non-sentient, non-living artefacts created by humans. I don't see the fact that computers can process large amounts of data as providing a reason to think those categories are flawed. I can no more credit a computer of having sensations than the caclulator on my desktop."
That is again more rhetorical than logical, and it goes back to the example of neural networks. I must repeat, they are NOT software made by humans, not in the sense that you imply at least. These very special kinds of software are more self-made than human-made; they are closer to how our brains evolved than to how regular software is made. Humans did not program them to do anything at all; they just try to reach some goals, like humans do. Again, and obviously, they are nowhere near human beings in terms of complexity, experiences, interactions, etc… but they are also nowhere near traditional software either. They could have failed, they could have proved that machines must be made by humans to do specific tasks. Surprisingly, and so freaking quickly, these systems became efficient at reaching goals without any instructions at all.
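Hugo's "goal, not instructions" point can be sketched in a few lines of code. Everything here (the data, the one-weight linear model, the learning rate) is invented purely for illustration; real neural networks are enormously more complex, but the spirit is the same:

```python
# Toy "goal-driven" learning: we never tell the model the rule mapping
# inputs to outputs. We only give it a goal (shrink the error) and let
# repeated small adjustments discover the rule on their own.

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # hidden rule: y = 3x

w = 0.0  # the model starts out knowing nothing
for _ in range(200):
    for x, y in data:
        error = w * x - y       # how far we are from the goal
        w -= 0.05 * error * x   # nudge the weight to reduce the error

# The weight converges to roughly 3.0; the rule was never programmed in.
print(round(w, 2))
```

Nothing in the loop encodes "multiply by 3"; the rule emerges from chasing the goal, which is the vastly simplified spirit of how a system like AlphaGo Zero learns from self-play rather than from human examples.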

Hugo Pelland said...

Next example:
" What we do know is that these machines are composed of non-sentient, non-living substances. That they are designed by humans to mimic or copy some of the things that humans can do. That is no reason to assume that they can feel the pains or enjoy the pleasures humans do based on that knowledge of what a machine is.

Why do you want to turn humans into machines?
"
Why are you afraid to recognize that humans are in fact a kind of machine, a biological one? What are you afraid of losing exactly? I don’t see it as limiting or dehumanizing. If anything, it’s aggrandizing to better understand ourselves. It makes us more aware of our true capacities and limitations.

Plus, it’s important to repeat what I just said above: the potential sentient machines we are talking about are NOT designed by humans to mimic or copy some of the things humans can do. That is not how we might create sentient machines at all. E.g., see neural networks again; they don’t mimic or copy what humans do at all. They reach goals their own way, just like different living things manage to reproduce in many different ways, for example, and evolved intelligence as a fantastic way to increase the success rate of reproduction.

Hugo Pelland said...

Now, going back up to the rest of your answer to my last post:
" I think we agree that there are no spiritual or mental substances. The only substance that exists is matter. But there are myriad forms of matter. There are living substances, there are non-living substances, there are sentient and non-sentient substances."
Sure, not bad to confirm our common ground! And given that we discuss just what we disagree on here, I hope we can remember that we probably agree on more things than not, but it’s more fun to discuss what we disagree on and teach each other about what we may not know. At least that’s why I am here on this blog. It’s not to win a debate or convert someone to something, it’s always to test ideas and learn from it. Anyway...

" Some of those sentient substances are also rational substances. We are rational substances: beings with the capacity to not only observe and interact with the world but to act for reasons."
I have to be really picky with the words here. In the specific context of this thread on "I could have done otherwise", it’s not obvious to me that we have the “capacity” to act for reasons, because I don’t think we really choose to reason. We can’t decide not to reason. We are compelled by things we hear or see, or by our own thoughts that arise in consciousness. We don’t decide them; they just show up, and we can focus on them, or not.

" If I understand you correctly, you share the common reductive materialist view that a human is really just a brain encased in a human body. "
Not at all; that makes no sense to me.

" No it doesn't follow from what we know. It follows from your dualistic conception of human nature."
Dualistic? But you’re the one who insists that humans are something more, and I still don’t know what exactly. You label my opposing view as ‘reductionist’, basically implying that I am denying something, reducing something from a greater state to something smaller. That’s what that word means, no? And every time you use it I wonder exactly what you mean. I have heard it in other philosophical contexts so I usually assume that it goes back to the notion that the non-material can exist independently of the material. That’s where you see some ‘reduction’, as the non-material things we talk about are “reduced” to an existence that depends on the material.

From my point of view, it’s the alternative that is dualistic: the non-material exists, no matter what, independently of the material. It’s possible, but I don’t see how we can justify that position, it’s just that. I don’t find it silly and it doesn’t make a big difference either in my opinion, since we can use the same concepts anyway.

But it does make a difference in the way we approach consciousness, it seems, as it makes consciousness out to be something more than just the result of some material process driven by the brain (but no, not only the brain…)

Hugo Pelland said...

Finally, I could summarize the issue of "I could have done otherwise" and this long comment one more way, which has to do with what I put in bold above, by referring to the best tool we have to study the nature of consciousness: meditation. That’s the most straightforward way to understand this notion of thoughts coming into consciousness. Does that mean anything to you? I would be very curious to know... it really changed a lot of my perspective, and even more drastically in the last year or two. Even if I am a super novice at it, the understanding of what it is, how it works and what we can feel is remarkable. In just a few minutes, you can truly perceive the thoughts arising in consciousness, completely out of your control. The “I” that we are so used to living with fades away, leaving something else in its place, something more mysterious.

But anyway, I have not done enough to really go further than that, and that’s enough for the point I am making here: we are not the author of our thoughts. We perceive them.

Hal said...

Hugo,
it’s again just about looking at what we are made of and how we came to be that way. As One Brow pointed out, we’re made of the same stuff as machines are, or that sentient machines would be made of.

No. We are not made of the same stuff. Different things are made of different stuff.

I hope we can remember that we probably agree on more things than not, but it’s more fun to discuss what we disagree on and teach other about what we may not know. At least that’s why I am here on this blog.

Ok, let me teach you about substances. :-)

A substance can be a kind of thing. There are many individual kinds of things in my house: me, a book, a goldfish, a rose, a lamp, a peach, etc. These individual items can be placed or assigned to a general kind of thing: I am a kind of human being, the goldfish is a kind of fish, a rose is a kind of plant, etc.

A substance is also a kind of stuff. The material substances I differentiated in the above list are made of different kinds of stuff. In this sense, we can say something is a sticky substance or a chemical substance. Other kinds of stuff are iron, steel, sand, butter, water, etc.

All of these kinds of things and stuff are substances. All of those substances are placed in the category of material substances. So, yes, a computer is a material substance and a human being is a material substance.
We use the term ‘matter’ to refer to all these kinds of things and stuff. But matter itself is not a kind of substance. It is the overarching category into which we place these various things and stuff.
So, to repeat, it is clearly false to claim as you and One Brow do, that machines and humans are made of the same stuff.


But you’re the one who insist that humans are something more, and I still don’t know what exactly.

I don't understand. I've said that human beings are a kind of substance that consists of different stuff from a machine. I've denied that there are mental or spiritual substances.
I have pointed out what differentiates a human being from an artifact such as a computer.

What is this 'something more' that you think I am referring to?

One Brow said...

I already acknowledged that all matter at the subatomic level is the same. We only differentiate between sentience/non-sentience and living/non-living at higher levels when a wide array of different substances like living cells and organs and bodies are formed.

So, the first area of agreement is that the difference between non-living, non-sentient stone; living, non-sentient wood; and living, sentient apes is the organization of the matter, not the matter itself. Is that correct?

Humans are in the category of living, sentient beings. Computers are in the category of non-sentient, non-living artefacts created by humans.

To quote myself, "Part of the issue with Aristotelianism is the tendency to forbid things from happening based on non-evidenced categorizations." Simply saying 'computers are in the category of things that cannot be sentient' is not evidence that they cannot be sentient; it is circular reasoning.

I don't see the fact that computers can process large amounts of data as providing a reason to think those categories are flawed. I can no more credit a computer of having sensations than the caclulator on my desktop.

I agree that the ability to process a large amount of data is not proof of sentience, nor the ability to acquire sentience.

I suppose you could try and make a computer out of living cells.

That's not relevant for this discussion, to me.

I don't think this question concerning the possibility of an electronic machine such as a computer having the sensations that animals have correlates with atheism vs theism.

Again, if a computer were to have sensations, they would not be the same sensations that animals have.

However I have encountered a large number of atheists on the web who are firm believers in the possibility of making machines that are conscious sentient beings.

I'm not sure how to "believe" in a "possibility". I would think that it is the impossibilities that require belief.

Considering the vast differences between machines and all living, sentient beings, let alone rational beings such as ourselves, I am surprised that more atheists aren't skeptical of this possibility. Especially, those who pride themselves on only going where the evidence leads.

You need to have evidence for it to lead somewhere. What is your (non-circular) evidence that there can be no non-living, sentient being?

Hugo Pelland said...

Hello Hal and One Brow,

Hal said:
" Ok, let me teach you about substances. :-)"
Yes, great, this was helpful information as to what ‘substances’ means here in this context. So to be clear, what I was referring to is something more fundamental then. One Brow just quoted you on that:
" I already acknowledged that all matter at the subatomic level is the same. "
This is what I meant by “we’re made of the same stuff as machines are, or that sentient machines would be made of.” Looks like the word ‘stuff’ can mean different things so that was helpful, thanks.

One Brow added:
" So, the first area of agreement is that the difference between non-living, non-sentient stone; living, non-sentient wood; and living, sentient apes is the organization of the matter, not the matter itself. Is that correct?"
Obviously, I think this is correct, and I would expand on this a little by pointing out that the structure of the brain is understood well enough to know that neurons are essentially tiny switches that let electrical impulses flow, or not. They are almost identical to transistors. And I almost wrote that they are “literally” like transistors, honestly, but I want to remain a bit cautious here, because it is extremely complicated after all.

Therefore, I am willing to agree with you, Hal, that machines and humans are "not" made of the same stuff, when using that definition of “stuff” you presented. But then, I would also say that a wooden spoon is not made of the same stuff as a metal spoon. Yet both are solid objects that can look almost identical and serve the same purpose, and more importantly, are made of atoms that bind to each other because of the same physical forces at the atomic level. With computers, that’s the same kind of similarity we get: the circuits are made of tons of tiny electric switches, just like the brain is made of tiny electric switches.

Now, there is a crucial distinction here though, as our everyday computers are made to follow sets of instructions programmed by humans. Brains obviously don’t do that. But why conclude that machines cannot also evolve to become like human beings? It’s the machines of “today” that are not like brains at all, but that doesn’t mean they never will be.

And that’s where this statement about machines that “mimic” human behavior is misguided, and you have mentioned it 6 times in this thread already, Hal... If a machine were to become sentient, it’s not because someone would have programmed it to mimic human behavior. The only thing that might “mimic” something else is the machine’s internal switches, which would mimic the brain’s internal switches. In both cases, it’s really just electricity flowing through complicated networks of switches, and the hard problem here is that we don’t know how consciousness arises from these electrical switches in our brain. Nobody knows; it’s truly mysterious and fascinating to try to figure it out.
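The "tiny electric switches" analogy can be made concrete with a single artificial neuron, the basic unit of the networks discussed here. The weights and threshold below are hand-picked purely for illustration (in a trained network they would be set by learning, not by hand):

```python
# A single artificial neuron acts as a threshold switch: it sums its
# weighted inputs and either fires (1) or stays off (0), loosely the way
# a transistor gates current.

def neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, this one "switch" behaves like an
# AND gate: it fires only when both inputs are on.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron((a, b), (1, 1), 2))
```

Chaining enormous numbers of such switches is all that either a brain-like network or a silicon circuit amounts to at this level of description, which is exactly why the open question is how consciousness could arise from such networks.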

Hugo Pelland said...

Regarding that related-but-slightly-different topic:
" I have pointed out what differentiates a human being from an artifact such as a computer.

What is this 'something more' that you think I am referring to?
"

I don’t know what more you are referring to, that’s why I am asking what is being ‘removed’ when labeling my objections to libertarian free will as ‘Reductionism’. Does it mean something else then? But why is it called ‘Reductionism’ then? I wrote a paragraph on what I think it means above...

One Brow said:
" I'm not sure how to "believe" in a "possibility". I would think that it is the impossibilities that require belief."
That’s an interesting statement, but I think that “possibilities” need to be justified too. And that’s why on this specific topic, we cannot just sit back and say that machines could become sentient no matter what. But we do have reasons to explain why it’s possible, so it’s rational to believe that this is a real possibility. If we had less knowledge of the brain and had never developed computers, as was the case not that long ago after all, then we would not be justified in believing it either; the rational position would be to claim that we don’t know whether it’s possible. But today, I think it’s rational to claim it’s possible, and this is a form of belief I think.

This means that your question is completely justified imho:
" What is your (non-circular) evidence that there can be no non-living, sentient being?"

Hugo Pelland said...

After re-reading, I feel like adding even more to this:
If we had less knowledge of the brain and had never developed computers, as was the case not that long ago after all, then we would not be justified in believing it either; the rational position would be to claim that we don’t know whether it’s possible.
Because, and I know I am repeating myself, neural networks did teach us something surprising, and that was developed really recently. I was personally shocked to learn how much better AlphaGo Zero was than AlphaGo, the version that beat the Go champion Lee Sedol. Most people have heard about the latter in the news, but I think very few people realize the incredible difference: AlphaGo Zero did not look at any games played by humans, yet it ended up being much, much better than the version that learned from human play! This is not something that was predicted with confidence, as far as I know. The machine could have failed to find ways to play the game, but no. I don’t remember the exact sources I read a few months ago, but the Wikipedia article should be a good starting point if you’re interested:
https://en.wikipedia.org/wiki/AlphaGo_Zero
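The self-play idea behind AlphaGo Zero — learning a game from nothing but the rules and games against itself, no human examples — can be illustrated at toy scale. This is only a minimal sketch, not AlphaGo Zero's actual method (which used a deep network plus Monte Carlo tree search): here, plain tabular Q-learning teaches itself a tiny subtraction game purely through self-play. The game, parameters, and variable names are all illustrative.

```python
import random

# Toy game: a pile of N stones, players alternate taking 1-3 stones,
# whoever takes the last stone wins. Optimal play: take (s % 4) stones.
N = 10
ACTIONS = (1, 2, 3)
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in ACTIONS if a <= s}

def greedy(s):
    # best known action for the player to move in state s
    return max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])

random.seed(0)
alpha, epsilon = 0.5, 0.3
for _ in range(30000):                 # self-play episodes: no human games used
    s = N
    while s > 0:
        legal = [a for a in ACTIONS if a <= s]
        a = random.choice(legal) if random.random() < epsilon else greedy(s)
        s2 = s - a
        # negamax-style target: a winning move scores +1; otherwise the
        # negation of the opponent's best value from the resulting state
        target = 1.0 if s2 == 0 else -max(Q[(s2, b)] for b in ACTIONS if b <= s2)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned policy rediscovers "take s % 4 stones" wherever s % 4 != 0
policy = {s: greedy(s) for s in range(1, N + 1)}
print(policy)
```

Nothing here ever sees an "expert" game; the table converges to optimal play purely from the agent punishing and rewarding itself, which is the (vastly scaled-down) point Hugo is making about AlphaGo Zero.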

Hal said...

Hugo,

This is what I meant by “we’re made of the same stuff as machines are, or that sentient machines would be made of.”

I should correct and clarify what I said earlier. Atoms are not made of stuff. They are made of particles. The things we refer to as being made of stuff are the entities that exist above the atomic level.

I also see you keep comparing computers to brains as a reason for thinking they can be sentient. But in my conception of the human being, brains are not sentient beings. It is the human being who has a brain that is a sentient being. It makes no more sense to say that a brain thinks or feels than it does to say the engine of a jet plane flies. It is not the engine that flies, but the plane that flies. That is a conceptual truth not an empirical one.

Hal said...

One Brow,
So, the first area of agreement is that the difference between non-living, non-sentient stone; living, non-sentient wood; and living, sentient apes is the organization of the matter, not the matter itself. Is that correct?

I don't understand your question. Matter itself is not a kind of substance. All the different kinds of substances fall under the overarching category we call "matter".

As I pointed out to Hugo, there are many kinds of substances in the world: there are many kinds of things (or entities) and many kinds of stuff.

Simply saying, 'computers are in the category of things that can not be sentient' is not evidence they can not be sentient, it is circular reasoning.

Taxonomy is to one degree or another arbitrary. But any type of classification is going to have some reasons for setting it up. We could choose, like many ancient cultures did, to place things like rocks and plants in the category of sentience. Or place what we call a mammal, like a whale, in the non-mammalian category of fish. Descartes placed non-human animals in the category of machines. He didn't think they were really conscious or felt pain.

I happen to agree with Descartes' view that machines are not conscious beings with the capacity to feel things like pain or pleasure. I disagree with him regarding animals.

So ultimately, I think, this is a disagreement over how we think the substances in this world should be classified. I'm having trouble seeing a good reason why we should re-classify a non-living substance such as an electronic machine as a sentient being. I know of no other non-living substances that are sentient. Do you?

Hal said...

Hugo,
I don’t know what 'more' you are referring to; that’s why I am asking what is being ‘removed’ when labeling my objections to libertarian free will as ‘Reductionism’. Does it mean something else? But then why is it called ‘Reductionism’? I wrote a paragraph on what I think it means above...

Good question. I keep thinking of your position as being reductionistic because you keep referring to brains rather than the human being. Humans are not brains.

Also, because you seem to place so much emphasis on the subatomic level. I see no reason for thinking that the subatomic level can play any role in making the distinctions we do between sentient, living substances and non-sentient, non-living substances.

Hal said...

Hugo,
Why are you afraid to recognize that humans are in fact a kind of machines, a biological one? What are you afraid of losing exactly? I don’t see it as limiting nor dehumanizing. If anything, it’s aggrandizing to better understand ourselves. It makes us more aware of our true capacities and limitations.

I wouldn't say I am afraid, rather concerned. It is de-humanizing to place humans in the category we call machines. We can turn a machine off or on. Machines do things automatically. They are tools meant to be used by humans.

Are humans tools? Only of value if they can be used by someone else?

Humans do things for reasons. They are responsible for their actions.

What new understanding of humans are you acquiring by calling them machines?

Hal said...

Hugo and One Brow or anyone else reading this thread,

Next couple days are going to be kinda busy for me. Gonna check out of here for awhile.

Wishing you all have a good time with family and friends during this holiday season!!! :-)

Hugo Pelland said...

Thanks Hal, wishing you the same! And similar situation here; no more time to write reasoned comments today and I'll be traveling like crazy yet again over the next few days... see you later.

One Brow said...

Hal said...
I don't understand your question. Matter itself is not a kind of substance. All the different kinds of substances fall under the overarching category we call "matter".

For you, is there anything to being a specific sort of substance besides the matter and the way it is organized?

Taxonomy is to one degree or another arbitrary. But any type of classification is going to have some reasons for setting it up. We could choose, like many ancient cultures did, to place things like rocks and plants in the category of sentience. Or place what we call a mammal, like a whale, in the non-mammalian category of fish.

Biologically, all mammals are fish.

I know of no other non-living substances that are sentient. Do you?

Not yet. Is a minimum count required?

Hugo Pelland said...

Merry Christmas again! Got some time on my 1-day transition after all...

Hal said: "The things we refer to as being made of stuff are the entities that exist above the atomic level."

Ah ok, then sentient machines, should we be able to make them, will most likely not be made of the same stuff, as it seems more likely that we will continue to build machines out of non-biological material; we're mostly building on silicon afaik.

"I also see you keep comparing computers to brains as a reason for thinking they can be sentient. But in my conception of the human being, brains are not sentient beings. It is the human being who has a brain that is a sentient being. It makes no more sense to say that a brain thinks or feels than it does to say the engine of a jet plane flies. It is not the engine that flies, but the plane that flies. That is a conceptual truth not an empirical one."

We agree here, so I don't know why you insist that I keep making a false comparison. The plane flies, but without an engine, it can't. Humans are sentient, but without a brain, they couldn't. At least regarding humans we see around us today... remove the brain, you remove sentience.

So, to re-phrase what we do agree on: a brain, alone, wouldn’t be sentient; that makes no sense. It is the human being who has a brain that is a sentient being. Where do you see a disagreement?

Next, you mentioned that "Atoms are not made of stuff. They are made of particles."
and that
"Matter itself is not a kind of substance. All the different kinds of substances fall under the overarching category we call "matter"."
along with that line quoted above.

This was clear before, but now I am confused by your labeling again... let me try to summarize. And as I started to write this, I had to go back to previous messages. Writing a separate comment...

Hugo Pelland said...

From "let me teach you about substances" above, I will try to translate your labelling system into something that doesn't depend on English. Does that make sense? Basically, it's hard to keep track of what labels are interchangeable with other language-dependent concepts or synonyms, versus contrasting things that we experience versus think about, for example.

The latter is universal; regardless of language we all have a basic understanding of what it means to refer to things we think about versus things we have in front of us. It's more obvious in person but I hope that's not controversial when doing it in writing...

But first, before digging in, there's one word that is problematic here, so I want to clarify it. Maybe everything I wrote here is wrong because of that starting point... Every "thing" is some thing, otherwise it's nothing, literally. Therefore, each individual material object around us, and each concept or idea we can think of, is a "thing", since being not-a-thing is being nothing. Something that exists, in the sense that we can talk about it, is necessarily a "thing". Any label is a "thing" itself and can be used to group other things: other labels, other things we experience, other things we think about, other concepts we agree or disagree on, etc. Regardless of which language we speak, we experience things and we think about things. It's one of the simplest unitary labels we universally use, even if we have different words, symbols or even gestures for it.

1a) A substance can be a kind of thing.
1b) There are many individual kinds of things
Given examples: "me, a book, a goldfish, a rose, a lamp, a peach, etc."
1c) There are general kinds of things
Given examples: "I am a kind of human being, the goldfish is a kind of fish, a rose is a kind of plant, etc."

Translation into the language-independent experience versus thinking:
Together, these statements mean that individual items can be placed in, or assigned to, a general kind of thing. If we were together in the same room, we could look at an aquarium, point to it, and talk about a specific object, "a fish", currently swimming in it, which is a substance of the general kind "fish". There are many other fish of all shapes and sizes, but we have a general understanding of what the label "fish" refers to, and the one in front of us is an individual example. We can assign tons of other labels to talk about that fish specifically, and fish in general, as a kind of thing. They live in water, they lack limbs with digits, they are not mammals, etc., generally speaking. But assigning the label "fish" to a specific object doesn't imply that it has every specific characteristic that another thing called "fish" has. We label both as "fish" because they have something in common, but not necessarily everything, and the similarities can vary across the different kinds of fish.

Hugo Pelland said...

"2) A substance is also a kind of stuff."
Given examples: "me, a book, a goldfish, a rose, a lamp, a peach, etc." are material substances.
More examples: "sticky, chemical" (descriptive adjectives)
More examples: " iron, steel, sand, butter, water, etc." (kind of things)

Translation into the language-independent experience versus thinking:
The "suff" here represents attributes of the things we are talking about. A book is a material things while the number two is not a material thing. The same book is not sticky, by default. Something like "honey" is sticky because of how it behaves when we touch it. A book can become sticky if it touches honey, and someone picking up a book that has a bit of honey on it would feel it to be sticky. This is an example of and "adjective", some label that describe a specific thing, or a kind of things, a group of things. The kind of "stuff" that is associated with a single thing can also refer to other kind of things, such as when stating that a skyscraper is made of steel, glass, and a lot of other simple things, combined together.
When "describing" a particular material object, we are saying what kind of stuff it is.

So combining 1 and 2, we can have a lot of different types of "fish", which we either think of or experience directly:
A) Individual item that we experience: a fish that is swimming in front of us right now, in an aquarium.
B) Collective label for what we experience, or can experience: the kind of thing that the fish in front of us is labeled as; the label "fish" can refer to all living fish, present alongside that fish in the same aquarium, or in general in the world's lakes, rivers and oceans.
C) Individual item we can think of: that fish we saw in person, I can think about it later on. What I think of is not that fish anymore, it's a representation I can think of. We can describe it, talk about it, and remember the same exact fish if we both saw it.
D) Collective label for what we can think of: we can discuss what things are labeled as "fish" and why, we can also think of some fish that cannot possibly exist materially, such as a 1km-long pink salmon with blue & green dots. We can think about it relatively easily, it's just a giant salmon with weird colors, but it doesn't have a material form we can point to.

Hugo Pelland said...

"3a) All of these kinds of things and stuff are substances."
"All of those substances are placed in the category of material substances. [...] We use the term ‘matter’ to refer to all these kinds of things and stuff."
"3b) But matter itself is not a kind of substance. It is the overarching category into which we place these various things and stuff."
"3c) There is no mental substance.
3d) All the different kinds of substances fall under the overarching category we call "matter".
"
" I've denied that there are mental or spiritual substances"

Translation into the language-independent experience versus thinking:
Given that "matter" here is described as a category, into which we place the things mentioned above, and given that it explicitly excludes the mental, it is a label assigned to things that we experience, and contrasts with things we think of.

It seems to me that, in English, we can thus use the label 'material' to refer to the kind of things we experience and 'non-material' to the kind of things we think of. This is the most fundamental distinction we can make among all of the things we can talk about.

This also seems to imply that 'substance' is a synonym for 'material' or 'matter'. It's pretty clear from 3d, which was a bit below in the thread. These are the things that we experience. We can then think about them too, in the form of a concept, but that image in our minds, these thoughts, they are a model, not the actual physical material substance we interact with. We experience/interact with substances, material things, and can think about them.

There was a further clarification:
"4a) Atoms are not made of stuff. They are made of particles.
4b) The things we refer to as being made of stuff are the entities that exist above the atomic level.
"

Translation into the language-independent experience versus thinking:
Among the things we experience, be it directly or indirectly by using physical instruments, some things are 'made of stuff' and some are not. Atoms are not considered to be made of stuff; atoms are a kind of thing that is not part of the 'made of stuff' kind. The kind of things that are 'made of stuff' includes only objects above the atomic level.

This means that something like an atom of hydrogen is not 'made of stuff', because it is at the atomic level. An atom of oxygen is also not 'made of stuff', for the same reason. A molecule of water is something in between: being above the atomic level it would count as 'made of stuff', yet it is made of 3 things that are themselves not 'made of stuff'. The molecule of water is thus at the transition point, and collectively, molecules of water constitute a kind of stuff: water.

Next, combining more of these kinds of things, we get something like a glass full of water, which contains an astronomical number of water molecules (on the order of 10^24). The contents of the glass, the liquid we can experience, is 'made of stuff', which is water. We can add some sugar, some flavor, some coloring and make Pepsi out of it, something a bit more complex, 'made of stuff'.

Hugo Pelland said...

Wow, that took me forever to go through... and it made me realize what the real problem is here. I think One Brow had already nailed it when he stated: "Simply saying, 'computers are in the category of things that can not be sentient' is not evidence they can not be sentient, it is circular reasoning." That points to the issue of placing labels on certain things in order to then conclude that they must be something else.

I understand the appeal, because it's useful in so many different ways, but it doesn't always work. Let's say that there is a basket of fruits in front of me. I take one of them, which looks like an apple, and I bite into it. Sure enough, it looked like an apple and it tastes like an apple, so I conclude that it is an apple. I look at the rest of the fruits and some look like bananas, oranges, pears, etc., so I conclude that because the one I ate looked like a fruit, the others must all be fruits too. Specifically, the ones which are red and spherical look exactly the same as the one I ate, and I would thus label them as 'apple' for sure. But what if one of them turned out to be made of plastic? The fact that I labeled it as 'apple' to begin with did not, in any way, ensure that it was in fact an apple that I can eat.

You already agree with that point as you said:
"Taxonomy is to one degree or another arbitrary. But any type of classification is going to have some reasons for setting it up. We could choose, like many ancient cultures did to place things like rocks and plants in the category of sentience."
Exactly, labels are conventions, based on the actual material things we are looking at. Therefore, the following is easy to answer:
"I'm having trouble seeing a good reason why we should re-classify a non-living substance such as an electronic machine as a sentient being. I know of no other non-living substances that are sentient. Do you?"
No, they don't exist yet... but why should we thus conclude that it's impossible? What is so unique about living substances that they alone can potentially be what experiences feelings, consciousness, self-awareness?

You already hinted at why it's actually possible:
" Also, because you seem to place so much emphases on the subatomic level. I see no reason for thinking that the subatomic level can play any role in making the distinctions we do between sentient, living substances and non-sentient, non-living substances."
Exactly, because it's the same at that level! It's interesting that you didn't realize how writing that actually supports the point that machines could become sentient, precisely because the subatomic level is the same and thus has no role to play directly. The electrons flowing in the brains of sentient beings (who are more than just their brains) are the same electrons flowing in the CPU of a computer, which would be a lot more than just its isolated CPU, should the computer become sentient. The computer would need to have a way to communicate to other sentient beings about why it thinks it is sentient, and we would only be able to tell because we, somehow, can communicate with it and, somehow, determine that it really is sentient.

Hugo Pelland said...

Hal said:
"I wouldn't say I am afraid, rather concerned. It is de-humanizing to place humans in the category we call machines."
First off, it's just a label. It doesn't remove anything from humanity.
Second, I don't think it's de-humanizing to better understand how we work and why.
" We can turn a machine off or on."
Well, we can certainly turn humans off or on so I am not sure how that's so different.
" Machines do things automatically."
Almost everything we do is automatic.
"They are tools meant to be used by humans. Are humans tools? Only of value if they can be used by someone else? "
As if humans were not tools used by other humans... of course we can be, and often are. It would be very hypocritical of you to pretend that the anonymous person who takes your order at a fast food restaurant is less of a tool than the ATM that gives you cash. Of course the fast food worker is a lot more, as a human being, but for you he/she really is just a tool. You expect them to act like that, to just do what you want them to do, no more, no less. The extra smile and small talk are just reminders that the person is indeed human.

"Humans do things for reasons. They are responsible for their actions."
Indeed.

"What new understanding of humans are you acquiring by calling them machines?"
We don't acquire anything by calling humans machines. You're the one who places labels on things to then try to infer other things about them... You place the label 'non-sentient' on machines, and because they are not made of the same stuff (living cells vs silicon), you conclude that machines cannot be sentient. So it's you who needs to explain why you think you gain some understanding of certain kinds of things simply by labelling them a certain way.

In other words, it's the other way around. It's because of what we can learn about humans that we can point to their similarities, and countless differences, with machines. A lack of complete libertarian free will is one of them, and that was the point of this thread.

I will need to go back to that point about meditation, for instance (too much text here already though), but that's the conclusion here once again. Thoughts come into our consciousness; we are not their author. We can only focus on them. We are responsible for our actions because we have that ability to focus, and nothing else.

And if I had to summarize all of these long comments, we disagree on just a few small things, I think, really: whether we actually have some form of libertarian free will, and whether machines could one day be actually sentient like we are today. The reasons why we disagree are so subtle, though, that it takes forever to get to the details... and I think language is the key. I wonder whether English not being my first language makes a difference.

Hal said...

Hugo,
"Simply saying, 'computers are in the category of things that can not be sentient' is not evidence they can not be sentient, it is circular reasoning."

Unfortunately, that is a distortion of my position. I pointed out that taxonomy is to some degree or other arbitrary. I also pointed out that we usually have reasons for the categories we use. One very good reason for placing different substances in different categories is that substances display a wide variety of different properties and capacities.

I wrote:"Also, because you seem to place so much emphases on the subatomic level. I see no reason for thinking that the subatomic level can play any role in making the distinctions we do between sentient, living substances and non-sentient, non-living substances."
You replied: Exactly, because it's the same at that level! It's interesting that you didn't realize how writing that actually supports the point that machines could become sentient, precisely because the subatomic level is the same and thus has no role to play directly.

Sorry but I don't understand how this supports the claim that a non-living substance such as a machine could be sentient. Sentience doesn't exist at the subatomic level. Sentience in the world only emerged after certain substances in the world converged during the long evolutionary process that has resulted in the wide variety of living things we encounter in the world.

No, they don't exist yet... but why should we thus conclude that it's impossible? What is so unique about living substances that they alone can potentially be what experiences feelings, consciousness, self-awareness?

Living substances are different from non-living substances. It is because we can perceive those differences that we place them in the categories we do.

I wonder whether English not being my first language makes a difference.

You seem to me to have a very good understanding of English, so I doubt that is what makes the difference.

Hal said...

Hugo,
I will try to translate your labelling system into something that doesn't depend on English. Does that make sense? Basically, it's hard to keep track of what labels are interchangeable with other language-dependent concepts or synonyms, versus contrasting things that we experience versus think about...

Sorry, but any description you use is going to have to rely on language. We use words to express our concepts. If you know the meaning of the word "fish" then you know what a fish is, what our concept of fish is.

If you don't understand my usage of a word, all you need do is ask me to explain it.

You place the label 'non-sentient' on machines, and because they are not made of the same stuff (living cells vs silicon), you conclude that machines cannot be sentient.

Would you care to explain how a machine composed only of inanimate substances could have sensations?

Hugo Pelland said...

Hello Hal,
"Sorry but I don't understand how this supports the claim that a non-living substance such as a machine could be sentient. Sentience doesn't exist at the subatomic level. Sentience in the world only emerged after certain substances in the world converged during the long evolutionary process that has resulted in the wide varity of living things we encounter in the world."

The point is that living things are not that different from non-living things, as they are made of the same subatomic particles, which are necessary but not sufficient to yield the one kind of sentience we know of, the one that requires a brain and neurons. Therefore, given what we know about the brain’s electric circuitry, it yields the possibility that another system, made of another kind of substance, non-living matter, could be sentient too, as the process is, in principle, reproducible via any kind of switches. Why do you reject that possibility?

Plus, there could be other ways for sentience to evolve, in non-living things or even other living things; nobody can know for sure as it’s impossible to imagine what it would be like to be a sentient machine. We can only try to figure out while communicating with that other thing or being whether the lights truly are on.

" You seem to me to have a very good understanding of English, so I doubt that is what makes the difference.
[...]
any description you use is going to have to rely on language. We use words to express our concepts. If you know the meaning of the word "fish" then you know what a fish is, what our concept of fish is.

If you don't understand my usage of a word, all you need do is ask me to explain it.
"

You didn’t get my point, which probably means that you don’t know another language fluently (?). The idea here is that whatever something is, logically, it is that thing regardless of the language used.

My wife speaks Bengali, for example, and to my surprise, I learned that they only have 1 verb for eating/drinking/smoking. More recently, when I was chatting with my wife’s 4yo niece, who started to go to daycare in English, she said she wanted to ‘eat’ water. Obviously, we know which action she was referring to, and it was cute to see her use the wrong word... but it doesn’t change anything about the fact that she wanted to consume a liquid, not solid food.

Similarly, I find it really confusing when you write things like what I put in bold above. Let me put them all next to each other:

1a) A substance can be a kind of thing.
1b) There are many individual kinds of things
1c) There are general kinds of things
2) A substance is also a kind of stuff."
"3a) All of these kinds of things and stuff are substances."
"3b) But matter itself is not a kind of substance. It is the overarching category into which we place these various things and stuff."
"3c) There is no mental substance.
3d) All the different kinds of substances fall under the overarching category we call "matter"."
"4a) Atoms are not made of stuff. They are made of particles.
4b) The things we refer to as being made of stuff are the entities that exist above the atomic level."

Things, stuff, substance, kinds, matter, etc. are all words that have general usage in our everyday lives, and they are straightforward, but in this kind of philosophical discussion, it is really not that obvious. As I detailed above, for instance, you put together under ‘kind of stuff’ adjectives such as ‘sticky’ and basic materials such as ‘iron’. I see these two as completely different things, and that’s where I wonder whether there’s a slight mental-model difference based on the language we spoke growing up. I don’t see them as the same at all, yet you did lump them together, like my niece confusing drinking and eating. And I must be making similar distinctions, or lack thereof, on other occasions.

" Would you care to explain how a machine composed only of inanimate substances could have sensations?"
Sensors...

Hal said...

Hugo,
The point is that living things are not that different from non-living things, as they are made of the same subatomic particles, which are necessary but not sufficient to yield the one kind of sentience we know of, the one that requires a brain and neurons. Therefore, given what we know about the brain’s electric circuitry, it yields the possibility that another system, made of another kind of substance, non-living matter, could be sentient too, as the process is, in principle, reproducible via any kind of switches. Why do you reject that possibility?


I reject the possibility because there is actually a huge difference between living and non-living things. Metals, plastics, etc., the kinds of things that go into making up a computer, no more have the capacity to feel sensations than they have the capacity to die. It doesn't matter how you combine the parts of a computer that are composed of those non-living substances, you are not going to be able to create a living being that has the capacity to feel sensations.

Hal said...

Hugo,
I know you've put a lot of effort into writing some of your above posts. Unfortunately, you put so many different things into them that I find it too time-consuming and difficult to try to address them all.

Hal said...

Hugo,
Similarly, I find it really confusing when you write things like what I put in bold above. Let me put them all next to each other:

1a) A substance can be a kind of thing.
1b) There are many individual kinds of things
1c) There are general kinds of things
2) A substance is also a kind of stuff."


What the bolded amounts to is that we can use the word "substance" to refer to objects and to refer to what the objects are made of. As an example, a computer is a machine. It is made up of silicon and plastic and various metallic substances.

Hal said...

Hugo,

I wrote: " Would you care to explain how a machine composed only of inanimate substances could have sensations?".
You responded: Sensors...

The gamma camera I use at work has PSDs - pressure sensitive devices - on the camera heads. If they are pressed on, the camera stops moving. Great for preventing patients from getting crushed. Not so great for generating sensations in the camera. It doesn't feel anything when the sensors are pressed.

Hal said...

Hugo,

Things, stuff, substance, kinds, matter, etc... are all words that have general usage in our everyday lives, and they are straightforward, but in that kind of philosophical discussion, it is really not that obvious. As I detailed above for instance, you put together under ‘kind of stuff’ adjectives such as ‘sticky’ and more basic material such as ‘iron’. I see these 2 as completely different things …

That was the point of my comparison. There are completely different kinds of stuff that make up the things in the world.

One Brow said...

Hal said...
The gamma camera I use at work has PSDs, pressure-sensitive devices, on the camera heads. If they are pressed, the camera stops moving. Great for preventing patients from getting crushed. Not so great for generating sensations in the camera. It doesn't feel anything when the sensors are pressed.

If the camera doesn't detect anything when the sensors are pressed, why would they stop the camera?

Now, if you were to say that the camera does not have the mental capacity for self-reflection, and so has no internal experience of having the sensor pressed, I agree. However, what happens when we combine a neural network of sufficient sophistication to have an internal experience, and these sensors? It's not a feeling like humans have, but that doesn't mean it's not a feeling.

Hugo Pelland said...

Hello Hal,
"I know you've put a lot of effort into writing some of your above posts. Unfortunately, you put so many different things into them that I find it too time-consuming and difficult to try to address them."
Not a problem at all, but thanks for saying that. It’s mostly for myself when I write long posts, honestly; it’s a way to put thoughts in writing and re-read myself to see whether they make sense. I often change quite a lot of details, but nobody can tell... If something that was very important gets ignored, I can always bring it back up again.

"I reject the possibility because there is actually a huge difference between living and non-living things. Metals, plastics, etc., the kinds of things that go into making up a computer, no more have the capacity to feel sensations than they have the capacity to die. It doesn't matter how you combine the parts of a computer that are composed of those non-living substances; you are not going to be able to create a living being that has the capacity to feel sensations."

There are several points in that 1 paragraph:

1) There is a huge difference between living and non-living things.
Yes and no, depending on what we are talking about, and that’s why we have been running in circles on this topic. Because even though living and non-living things are very different, they do have a lot in common too, including one crucial thing: they can carry electricity in exactly the same way.

2) Things that go into making a computer don’t have the capacity to feel sensations.
Yes and no, again. No, plastic doesn’t have the capacity to feel sensations, but neither do carbon, proteins, keratin, etc... On their own, cells don’t feel anything. As one wise person once said, it’s the human that feels sensations, not the brain; it’s because the brain is connected to the cells, and because they can react to electrical impulses that send signals to the brain, that the emerging human person can feel sensations.

3) Things that go into making a computer don’t have the capacity to die.
Yes and no, yet again. Living cells live and die, but bones are part of the human body and don’t really fit that description. Similarly, computers can be made not only of parts that never “die”, such as their hard drive, but also of parts that can “die”, such as the RAM that requires permanent electricity to retain information. Plus, the hard drive may be able to sustain information, but it still requires some active electrical components to have its contents read.

4) It doesn't matter how non-living substances are combined, you are not going to be able to create a living being.
I didn’t know you were a Creationist! Joking aside, we know that living things are combinations of non-living substances combined in such a way that the living organism can feel sensations. It’s absurd to claim that it doesn’t matter how the non-living substances are combined; it matters tremendously how they are combined. That’s pretty much the only thing that matters... biological evolution is what arranged them in such a way that we evolved sentience.

Hugo Pelland said...

Moreover, this part of the paragraph is worse than just that, as it also implies that only a living being can have the capacity to feel sensations. But that’s exactly what One Brow and I have been asking you to justify! Why is it that only living things could feel sensations? Too different? Nope, just electricity flowing in reaction to stimuli. Capacity to feel sensations? Nope, living and non-living things can both have sensors that detect changes around them. Capacity to die? Nope, living and non-living things both have parts that are either passive or active, and thus permanent or not, like dying or not. Capacity to be combined in a specific way? Well, that’s the whole question...

We don’t know exactly how we could combine non-living things to get them to be sentient, but there is nothing, in principle, that prevents a machine from sensing its environment and, somehow, having a conscious experience of what’s happening, along with self-awareness and a bunch of other things we may not even have the proper language to describe.

Hal said...

Hugo,
"4) It doesn't matter how non-living substances are combined, you are not going to be able to create a living being."

That is not what I said in the paragraph you are responding to.

I said: It doesn't matter how you combine the parts of a computer that are composed of those non-living substances; you are not going to be able to create a living being that has the capacity to feel sensations.

See the difference?

Also, I would suggest you read up on bone tissue. Bones can die just as other parts of the human body can. I do scanning for avascular necrosis. And I think your claim that "electricity is carried in exactly the same way" by living and non-living substances is false. Nerve cells are quite different than the wires that are used to carry electricity in a computer.

Hal said...

Hugo,
I didn’t know you were a Creationist! Joking aside, we know that living things are combinations of non-living substances combined in such a way that the living organism can feel sensations. It’s absurd to claim that it doesn’t matter how the non-living substances are combined; it matters tremendously how they are combined. That’s pretty much the only thing that matters... biological evolution is what arranged them in such a way that we evolved sentience.

At the molecular level non-living substances can combine to form living things: cells.

Once you have cells, living substances, then those substances can combine to form substances such as conscious beings with the capacity to feel pain or other sensations.

Why would you think the combinations of non-living substances above the molecular level could result in the emergence of conscious beings?

Hugo Pelland said...

Hello again Hal,
"See the difference?"
Yes, but...

Yes, I see the difference. It is obvious we cannot combine "computer parts" such as a CPU or memory chips to create a living sentient being. That's just silly.

But I never saw this conversation as being about making a machine "alive"; we are not talking about re-creating biological things using silicon, and that's just one example. I have actually repeated many times that it is not about things like a crying doll emulating human sadness.

So, again, the parts I was talking about are simpler. That's why I rephrased your objections: it's about the underlying mechanisms. To me, it is wrong to assume too much about what making a sentient machine entails, because we don't know exactly what parts we have at our disposal yet. One Brow even went one step further by claiming we don't even need to justify why it's possible; you need to explain why it's not, and your statements don't meet that burden.

Therefore, your last question is not relevant. You're again arguing that we cannot mimic how biological things work using computer-like parts, and concluding that we thus cannot do it at all. But we can't tell, yet, how a sentient machine would work exactly, so that's beside the point. It's a lack of imagination, in a way. What you have said in this thread could be re-phrased as: "I can't see how a non-living thing could be conscious."

What we do know, however, is that electrical switches, in the form of neurons, are what the brain uses to create that sense of self, from which consciousness emerged. And that's what living and non-living things have in common, because it's literally the same thing: electrical current causing tiny things, neurons or non-living things like transistors, to change states. You mentioned that there's a difference, somehow, but I don't see it.

In other words, to use your last question's words, the combination of non-living substances above the molecular level that could result in the emergence of conscious beings is some undefined arrangement of switches and their connections to external sensors, just like human brains are.

What is so different about human brains and the body they are connected to? Why couldn't a machine evolve the same kind of outcome and become aware of its own existence?

Finally, preemptively, it seems to me that your answer is again and again that it can't happen because only living things are conscious, and we thus can't have non-living conscious things, because conscious is a subset of living.

Overall, it's like saying that mammals can't possibly fly because birds are the only animals that can fly. Yet flying bats, a convergent case of evolution, prove that statement wrong. One day, a machine could prove you wrong...

Hal said...

Hugo,

Well it is true that non-living substances are not conscious. A thing that is not conscious cannot have sensations or feelings.

So, yes it is silly to think you could take artificial parts that are not living and combine them in some way that gives them the capacity to be conscious entities. Not sure why you think zapping those parts with electricity is going to accomplish that.

And, as I mentioned earlier, brains are not conscious entities. It is the human being that is conscious. And to get back to the OP, it is the human being with the capacity to act or refrain from acting for a reason.

Hal said...

Hugo,
But we can't tell, yet, how a sentient machine would work exactly..

When you are able to, let me know.

We don’t know exactly how we could combine non-living things to get them to be sentient, but there is nothing, in principle, that prevents a machine from sensing its environment and, somehow, have a conscious experience of what’s happening, along with self-awareness and a bunch of other things we may not even have the proper language to describe.

Since you don't even know how to describe it, how can you claim there is nothing in principle preventing it?

Plus, there could be other ways for sentience to evolve, in non-living things or even other living things; nobody can know for sure as it’s impossible to imagine what it would be like to be a sentient machine.

Why should I believe in something that no one can know for sure and it is impossible to imagine?

And if I had to summarize all of these long comments, we disagree on just a few small things, I think...

No, we disagree on some pretty important things.

Hugo Pelland said...

Hello Hal,
"No, we disagree on some pretty important things."
Ok, well, yes, we disagree then, because I don’t consider this thread as being about anything important! For example, it doesn’t matter to me whether we really have full-on libertarian free will; it has no impact on how we should approach morality imho, as we are human beings making choices. What we are discussing here is more entertaining than worldview-altering.

" Well it is true that non-living substances are not conscious. A thing that is not conscious cannot have sensations or feelings."
One does not follow from the other. It’s again and again your only argument, really. It’s what I would label an argument ‘by definition’. Non-living substances are not conscious; therefore, they can’t be. It doesn’t say why it’s not possible; it just restates that it can’t, because of what it’s not, today.

"So, yes it is silly to think you could take artificial parts that are not living and combine them in some way that gives them the capacity to be conscious entities."
Except that it happened already. Unless you don’t believe that consciousness evolved naturally. And we can trace so far back: we know that the heavy atoms in our bodies came from stars that exploded. These were not living for billions of years before life could start. It’s not so silly to think that consciousness could arise again, starting with tiny parts made of similar atoms, but arranged differently. Perhaps it does require something akin to biological evolution, and that’s why something like neural networks is more likely to spawn a new form of consciousness than software programmed entirely by humans.

" Not sure why you think zapping those parts with electricity is going to accomplish that."
Wow... zapping parts with electricity... really?

So all of these times that I mentioned electric switches, you had no idea what I was talking about, it seems... Do you not understand the parallel between transistors and neurons, for example? Or how electrical current propagates through the body the same way it does through electronic circuits? The parts are different but the principle is identical, and the sub-atomic particles are literally the same. Again, what’s so special in living things that non-living things couldn’t possibly emulate?

So I have to flip the question:
Since you don't even know how to describe it, how can you claim there is something in principle preventing it?
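As an editorial aside, the "neuron as electrical switch" parallel drawn above can be made concrete with a toy model. The sketch below is purely illustrative (a McCulloch-Pitts style threshold unit, not anything proposed by the commenters): a single unit fires when its weighted inputs cross a threshold, and the same switch acts as different logic gates depending only on its settings.

```python
# Toy model: a neuron abstracted as a threshold switch (illustrative only).
def threshold_switch(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of the inputs reaches the threshold."""
    signal = sum(i * w for i, w in zip(inputs, weights))
    return 1 if signal >= threshold else 0

# The same switch implements different logic gates with different settings:
def and_gate(a, b):
    return threshold_switch([a, b], [1, 1], 2)

def or_gate(a, b):
    return threshold_switch([a, b], [1, 1], 1)
```

Whether chaining enough such switches could ever amount to sentience is exactly the question under dispute in this thread; the sketch only shows that "neuron as switch" is a well-defined abstraction rather than a metaphor.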

bmiller said...

Hugo,

Have you ever heard of the fallacy "Argument from ignorance"? You should look it up.

Hugo Pelland said...

bmiller,
Have you looked up trolling yet?

bmiller said...

Hugo,

Sorry if you cannot see the relevance of my question. Let me spell it out.

Just because someone can imagine something that has never happened doesn't entail that it can actually happen. To use the argument that 'no one can know it's impossible' is using the fallacy in question.

Now if you disagree, please go ahead and explain yourself. But trolling? Please.

Hugo Pelland said...

bmiller,

I know exactly what you meant, obviously. You're not hard to follow...

I call you a troll because you decided to put your comment in the form of a provocative question, as you often do.

Hal said...

Hugo,
I wrote:"So, yes it is silly to think you could take artificial parts that are not living and combine them in some way that gives them the capacity to be conscious entities."
You replied:Except that it happened already.

No it has not. I already clarified that I was talking about substances above the cellular level. And artificial parts are not the same as molecular parts that form a living cell.
This is the second time you have misrepresented my position. The first time you deliberately misquoted me and then tried to joke about me being a Creationist. I don't appreciate such behavior. And I am not joking.

Perhaps it does require something akin to biological evolution, and that’s why something like neural networks is more likely to spawn a new form of consciousness than software programmed entirely by humans.
ANNs still rely on algorithms. Do you know of any algorithms that can be written to 'spawn a new form of consciousness'?
By the way, what is a "new form of consciousness"?

I call you a troll because you decided to put your comment in the form of a provocative question, as you often do.
The question was relevant to this discussion. I see nothing trollish in him pointing out the fallacy you are making. I noticed you simply ignored his point and made a snarky reply instead.

Since you don't even know how to describe it, how can you claim there is something in principle preventing it?
You are the one claiming it was indescribable, not me. It is not my question to answer.

Hal said...

bmiller,
Just because someone can imagine something that has never happened doesn't entail that it can actually happen. To use the argument that 'no one can know it's impossible' is using the fallacy in question.

Well put! :-)

Hugo Pelland said...

Hal said:
"I already clarified that I was talking about substances above the cellular level. And artificial parts are not the same as molecular parts that form a living cell."
True, scratch the word artificial.
Now, can we get living conscious beings out of non-living parts?

"ANN's still rely on algorithms. Do you know of any algorithms that can be written to 'span a new form of consciousness'?"
They're not algorithms in the typical software sense, no. Algorithms usually follow something a human can describe, some sort of rules to follow to get an output. That's not what neural networks do, afaik.
But I don't see why some code for processing external inputs, combined with a capacity to evolve akin to biological evolution, couldn't possibly lead to a machine becoming sentient once it develops an ability to conceive of itself, react to...
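As an editorial aside, the distinction being gestured at, explicit rules versus learned behaviour, can be sketched in a few lines. This is a hypothetical toy (a bare perceptron learning the OR function), not a claim about sentience: the rule-based version encodes the answer directly, while the trained version arrives at equivalent behaviour without any rule being written down.

```python
# Explicit, human-written rule:
def rule_based_or(a, b):
    return 1 if (a or b) else 0

# A bare perceptron: no rule is stated; weights are nudged toward examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0 = w1 = bias = 0.0
    for _ in range(epochs):
        for (a, b), target in samples:
            out = 1 if (w0 * a + w1 * b + bias) > 0 else 0
            err = target - out          # correction signal from each example
            w0 += lr * err * a
            w1 += lr * err * b
            bias += lr * err
    return w0, w1, bias

# Training examples for OR: inputs paired with the desired output.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, bias = train_perceptron(samples)

def learned_or(a, b):
    return 1 if (w0 * a + w1 * b + bias) > 0 else 0
```

Nothing in the training loop mentions OR; the behaviour emerges from the adjusted weights. That is the sense in which a network "arrives at" its behaviour rather than following a human-described rule, though of course this says nothing either way about consciousness.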

"You are the one claiming it was indescribable, not me. It is not my question to answer."
Claiming that you don't know how something can be done, therefore it can't be done, is also an example of the argument from ignorance. It's obvious that you've been doing that for days now, on this thread. But I didn't call you out on it because it's more interesting to discuss the reasons why we hold certain positions, instead of calling something we see as wrong a logical fallacy.

The bar is much higher to claim that something is impossible, as you need to demonstrate that you have either ruled out all possibilities, or that it's impossible because of a specific reason.

You have done neither here. You have not explored all possibilities, because we don't know what they are, and you have not proven why it's impossible. You fall more on the latter, as you claim that sentience is only seen in living things, artificial parts can't be made to be living, and therefore can't be sentient.

I think both might be possible. A human-made thing made out of living parts could become sentient. You dismissed that as some Dr. Frankenstein creation; I'm not sure why it couldn't be sentient. A human-made thing made out of non-living parts could become sentient too. Something like an artificial body, with a replica as close as possible to a brain and all the sensors a human body has, and a capacity to learn and grow, could become sentient. The switches found in the brain use electric current, just like transistors do, so in principle, one could be replaced by the other.

You never addressed the notion of switches, especially after your bizarre "zapping" comment. Why can't neurons be replaced, in principle, by artificial switches?

Do I think it's likely, by the way? Not really; it seems too difficult, useless, and perhaps hard to prove anyway. But possible, sure; the bar is much lower for that.

"I don't appreciate such behavior. And I am not joking."
Calm down, I missed one word and never misrepresented you on purpose. In case you missed it, you're siding with the idiot that asked me whether I would encourage my mom to prostitute herself after I merely said that I am fine with a woman deciding for herself, regardless of what we think about prostitution. And now he has just called out a fallacy name without saying anything. That's the definition of trolling; it's provocation without substance for the sake of getting a reaction.

"Well put! :-)"
Ya, it was obvious... that's why I said that I disagreed with One Brow on that specific point: I do think we need to justify why something is possible if we are to accept it as possible. Otherwise, the rational position is "I don't know" whether it's possible. I don't take that position because of the hundreds of words I wrote about it, and you also don't take that position and explained why. That's why I didn't call your position an argument from ignorance, but, as I explained, you're so much closer to it than my position possibly could be...

Hal said...

Hugo,
In case you missed it, you're siding with the idiot that asked me whether I would encourage my mom to prostitute herself after I merely said that I am fine with a woman deciding for herself, regardless of what we think about prostitution.

Sorry, but you need to apologize to bmiller for calling him an idiot and a troll.

I see little use in engaging further with you on this issue until you do so.

Hugo Pelland said...

Haha, happy new year, bye!

bmiller said...

Maybe ad hominem will be the last fallacy of 2018?

Hugo Pelland said...

No, it's not an ad hominem, I am not saying you're wrong because you're an idiot. You're just an idiot and a troll.

bmiller said...

In case you missed it, you're siding with the idiot that asked me whether I would encourage my mom to prostitute herself after I merely said that I am fine with a woman deciding for herself, regardless of what we think about prostitution.

Don't side with bmiller....because why? Sure looks like ad hominem to me.

One Brow said...

bmiller said...
Just because someone can imagine something that has never happened doesn't entail that it can actually happen.

If something has never happened, can you rule out the possibility that it can happen in the future? We have been asking Hal's basis for claiming computers can never be sentient, not trying to prove it is possible.

bmiller said...

If something has never happened, can you rule out the possibility that it can happen in the future? We have been asking Hal's basis for claiming computers can never be sentient, not trying to prove it is possible.

How strange that an atheist would rule out the possibility of God existing but not Pinocchio.

Happy New Year One Brow.

Hugo Pelland said...

"Don't side with bmiller....because why? Sure looks like ad hominem to me."

You're a troll because of how you made your point, not what it was, and you're stupid for not understanding simple things like that. They need to be spelled out. But I still can't tell whether you're doing it on purpose; you're either a smart-ass, or just an ass. Impossible to tell. But you're hiding your smarts very well if you are. Nothing to do with disagreement on anything. And it was shocking to me that Hal thought that stating 'heard of that fallacy? you should look it up' is an engaging critique. It's a snarky jab without any content, which is what a troll does.

Hal said...

bmiller,

How strange that an atheist would rule out the possibility of God existing but not Pinocchio.

Yes, isn't it.:-) And he apparently thinks he does not need to provide any evidence for believing in this possibility. The burden of proof lies entirely with anyone failing to believe it.

Happy New Year!

Hugo Pelland said...

Hal, you should look up what the burden of proof entails when claiming something is impossible. Unless you've now switched to "I don't know"? Or you'll just ignore this important thread now, given that you got your easy way out; how convenient.

Hal said...

One Brow,
We have been asking Hal's basis for claiming computers can never be sentient, not trying to prove it is possible.

Early in this discussion you wrote:
Actually, I am proposing machines having sensations/feelings that are completely different from those that humans do. Ultimately, we are of the same substances as machines, just in different proportions and with different organization.

The first sentence claims it is possible for machines to have sensations/feelings.
Looks to me like the second sentence is your reason for believing in that possibility.

If you wish a skeptic like myself to accept your claim, you need to provide substantially more credible evidence.

Unfortunately, there is even a bigger problem with your claim than lack of credible evidence: it doesn't make sense. Unless you can explain what these sensations/feelings are I don't know what you are referring to. All you've said so far is that they are completely different from the feelings/sensations humans have.

Happy New Year!

Hugo Pelland said...

And once again, Hal uses the argument from ignorance fallacy AND shifted the burden of proof.

One Brow said...

bmiller said...
How strange that an atheist would rule out the possibility of God existing but not Pinocchio.

How typical that you would confuse a lack of belief with a ruling out of possibilities. I fully acknowledge that it is possible that God exists, or Vishnu, or reincarnation into nirvana. I merely see no reason to believe in any of them.

May we both see many more New Years.

One Brow said...

Hugo Pelland said...
Ya, it was obvious... that's why I said that I disagreed with One Brow on that specific point: I do think we need to justify why something is possible if we are to accept it as possible.

Perhaps we are discussing two different meanings of "possible" here. For me, "I don't know if it can be done" is a subset of "It is possible".

In my case, the inability to identify a non-circular reason that forbids machine sentience means machine sentience is possible.

One Brow said...

Hal said...
The first sentence claims it is possible for machines to have sensations/feelings.
Looks to me like the second sentence is your reason for believing in that possibility.


That which is not impossible is possible. My reason for believing it is possible is that the impossibility has not been demonstrated.

If you wish a skeptic like myself to accept your claim, you need to provide substantially more credible evidence.

I make no claim beyond "not impossible", that is, "possible". Possible does not imply provable nor constructible.

Unfortunately, there is even a bigger problem with your claim than lack of credible evidence: it doesn't make sense. Unless you can explain what these sensations/feelings are I don't know what you are referring to. All you've said so far is that they are completely different from the feelings/sensations humans have.

Fair point. Still, we can't even be sure how a bat experiences its sensations, so how could I describe how a machine would do so in positive terms?

Happy New Year!

May we see many more.

bmiller said...

Hugo,

I wanted to call your attention to the fallacy you were engaging in. I assumed you were arguing in good faith and were unaware of the fallacy so merely pointing it out would be enough.

Here you say I'm a troll because I did not include details of how you were engaging in the fallacy:
You're a troll because of how you made your point, not what it was, and you're stupid for not understanding simple things like that. They need to be spelled out.

But when I did post the details you told me that it was unnecessary:
I know exactly what you meant, obviously. You're not hard to follow...

So I'm a troll for being concise and also a troll for spelling things out. What's a troll to do?

Hugo Pelland said...

One Brow said...
"Perhaps we are discussing two different meanings of "possible" here. For me, "I don't know if it can be done" is a subset of "It is possible"."
Correct, those are two different meanings, actually, so maybe we don’t disagree and the three of us just have different viewpoints.
1) I believe it is impossible
2) I don’t know whether it is possible
3) I believe it is possible

My position is 3), mostly because of what we know about the brain. In principle, we could replace each and every one of a brain’s neurons with some other artificial electric switch, and we would get, in this case, a sentient human being with a normal body but a different kind of brain. It’s not a strong statement, as I don’t know whether it’s possible to build it in practice; that’s why I keep insisting that this is just a response. But there are also other options that I discussed before, and I am not sure which one could be built, so there’s a bit of both 2) and 3) as part of my opinion, if we get more specific.

In other words, 2) and 3) are way closer to each other than to 1), and that’s why I thought we disagreed, but just a little bit. Perhaps we don’t disagree at all, after all, as you were just arguing for 2), seen as a subset of “not-1”, which both 2) and 3) are part of, by definition.

To contrast, Hal’s position is 1), to reject the possibility, because there is actually a huge difference between living and non-living things. I don’t think this meets the burden of proof for 1), as the bar is so much higher in this case. Proving that something is impossible when we have so many possible options already listed is difficult, because each possibility has to be ruled out explicitly, all other unnamed possibilities have to be ruled out too, or some principle has to be laid out to show why it’s impossible in all cases.

Hugo Pelland said...

bmiller,
I don't believe you are arguing in good faith; that's the main problem. And if you are, you make a lot of stupid statements, so that's just even worse, except that then I feel slightly bad for calling you a troll. Again, go read the explanations I gave you already, for the two examples of you acting like a complete troll. You're either a smart-ass, or just an ass; sorry, I don't know...

bmiller said...

Hugo,

As far as I can see, you are just mad that I pointed out your fallacious reasoning. If you had any counter arguments other than calling people names I haven't seen them.

And it's not just me that you've been snotty to. It seems you've reached that point with Hal also.

Hugo Pelland said...
This comment has been removed by the author.
Hugo Pelland said...

bmiller,

See, you just did it again! You poke to trigger a reaction. I already explained how it had nothing to do with your critique, but you claim that I am mad because of it, and if I don't reply, it looks like I am proving you right, because you stated I have no counter-arguments. Obviously, this thread has over 140 comments to prove you wrong by now, but you don’t care about that; you just like to insert your super-short blurbs here and there to see if there’s a fish that will bite.

And again, I don’t know whether you are smart enough to do it on purpose. Just now, did you really think that I was mad because someone pointed out a fallacy? That’s ridiculous; if you had paid any attention, you would notice that this is what I am looking for. I explained in this thread, and you’ve seen me comment elsewhere, that this is the main reason for commenting on a blog, for me. It’s not about finding people who agree; it’s about finding people who don’t. Therefore, your comment is absurd, and you either know it and troll anyway, or you don’t know it because you’re just too stupid to understand these subtleties of social interaction.

Plus, yes, I did the same, on purpose, twice in the last few comments, just to make a point as to how silly and annoying these types of comments can be. You might have spotted the two comments in question; that’s what you called being ‘snotty’, because you couldn’t fathom that this was me emulating your silly trollish behavior on purpose, just to make a point. They were addressed at Hal to show him what it feels like to have a troll just poke instead of engaging, but he’s already annoyed, so he, wisely, just ignored them... I admit I am not that good; I find it hard not to point out the absurdity of such behavior, even when it’s obviously just trolling.

Psychoanalyzing is fun, isn’t it?

bmiller said...

Hugo,

you just like to insert your super-short blurbs here and there to see if there’s a fish that will bite.

No. I'm just making the observation that you tend not to engage rationally and get snotty when people don't agree with you. Another observation is that you accuse others of the exact fallacy that you are engaged in. For example, you started accusing Hal of argumentum ad ignorantiam after it was pointed out that you were employing it.

However, regarding my short comments to you, there really isn't much to discuss with someone who can only name-call rather than present actual rational arguments.

Hugo Pelland said...

bmiller said...
" I'm just making the observation that you tend not to engage rationally and get snotty when people don't agree with you."
Complete lie. We had tons of comments before you showed up with your snotty comment of ‘do you know about that fallacy? You should look it up’. If you had any genuine interest in engaging rationally, you wouldn’t add the blurb about ‘you should look it up’. That serves no purpose other than to insult, to imply that the person you are addressing doesn’t know what you’re talking about. What’s fascinating is that perhaps you really do think that this is something others might not know, as if it was some great insight, which makes you the stupid one. So that’s why I keep saying that I don’t know whether you’re a smart ass or just an ass. If you’re giggling after writing these little jabs and getting a response, good for you, if not, you’re not a smart-ass. And I have told you that before.

" Another observation is that you accuse others of the exact fallacy that you are engaged in. For example, you started accusing Hal of argumentum ad ignorantiam after it was pointed out that you were employing it."
I explained in detail what I meant, and I explained why I was NOT accusing him of it. I explained how a superficial look at his argument would lead to calling him out for using that fallacy. So again, you’re either too stupid to understand the subtlety presented here, or you’re just playing me. If it’s the latter, good, at least you’re not stupid. So are you trolling me or not? Seriously, I wish you would just say ‘yes’ and I could laugh at it...

" However, regarding my short comments to you, there really isn't much to discuss with someone who can only name-call rather than present actual rational arguments."
I am calling you, and only you, names because of what you wrote here and on previous threads. You’re the one who thought it was appropriate to ask me whether I would encourage my mother to prostitute herself. This alone is enough to never take you seriously, but it’s still interesting to analyze your nonsense and try to figure you out.

Moreover, in the same vein of analysis, searching for your alias here on this thread shows that you literally made no argument whatsoever; most comments were just chit-chat with Hal, nothing wrong with that, but not relevant to the topic. So isn’t it interesting that you would then mention that there really isn’t much to discuss; indeed, you offered no argument at all. Plus, it’s even worse than that actually, as your very first comment was just flat out false, and you didn’t bother to follow up. Once again, what a troll does.

Finally, here’s what I did wrong: care about your comments. I was having an interesting discussion with Hal and I got really annoyed by your jab, called you stupid for no good reason (even though it’s justified, it’s never good, I admit) and now, because Hal takes this very seriously apparently, he decided not to engage at all. Too bad, I should have just ignored you, but I am not apologizing for pointing out these subtle, yet annoying, idiotic jabs you keep making, be it on purpose or not. You're one of those online folks who like to add color to their commentaries, and that derails things more than it helps. Again, congrats if that's your goal; you should be embarrassed if it's not.

Again, quite interesting to psychoanalyze some random person online, isn’t it? So much we can infer but so much we cannot know for sure... not a bad way to spend some time off.

bmiller said...

Hugo,

I pointed out that you were employing argumentum ad ignorantiam in this thread and the fallacy of using a double standard in the other. Your only response to me has been to call me names.

You say this now:
I explained in detail what I meant, and I explained why I was NOT accusing him of it.

But you said this then:
Claiming that you don't know how something can be done, therefore it can't be done, also is an example of the argument from ignorance. It's obvious that you've been doing that for days now, on this thread.

This is clearly an accusation of Hal even if you went on to mention that you were being generous on not calling him out earlier. And for the record, he is not guilty of the fallacy.

Maybe you like to psychoanalyze people, but I'd rather consider their arguments and whether they are sound and valid.

Don't feel you need to respond. That's the best course of action if you think you are engaging a troll and it makes you upset.

Hugo Pelland said...

bmiller said...
" Don't feel you need to respond. That's the best course of action if you think you are engaging a troll and it makes you upset."
Got to repeat... since I am not sure whether you are trolling, and you’re trying hard to claim you are not:
I find it interesting to engage and figure out why you say certain things, and I am also trying to explain what you’re doing wrong, if you’re truly trying to engage rationally. What was upsetting is only the snarky comments you make, the silly callouts I mentioned above. I am certainly not upset now, but rather quite amazed by how you really don’t get it!

Going backward, for instance:
" Maybe you like to psychoanalyze people, but I'd rather consider their arguments and whether they are sound and valid."
Of course, arguments are more interesting. Where are yours? Nowhere... you preferred to interject with the empty mention of a logical fallacy, combined with a provocative ‘you should look it up’. If you don’t see the latter as such, I repeat, you got a problem with social cues.

"This is clearly an accusation of Hal even if you went on to mention that you were being generous on not calling him out earlier. And for the record, he is not guilty of the fallacy."
The 2 lines you quoted are the accusation, yes, but the part you didn’t quote was explaining why it was not relevant to throw that accusation. The logical fallacy would be to "just" claim that we don’t know how it can be done, therefore it can’t. Hal gave reasons why he thinks it can’t be done. That’s why he is not guilty of the fallacy.

Similarly:
" I pointed out that you were employing argumentum ad ignorantiam in this thread "
That is also a wrong accusation for the exact same reason. I even clarified which part I am not sure about and why I cannot claim with certainty that everything about a sentient machine is possible to be done. There are tons of paragraphs above explaining why, why not, and how.

" Your only response to me has been to call me names."
You presented nothing, so you "would" deserve no answer at all indeed. Yet, as I am doing now, I am trying to explain why your accusation is false, even though you did not try to support your own accusation. The only reason why I am doing so is because I actually thought that it was "partially" a good flaw in my arguments and that it was thus relevant to clarify some points. Because you present nothing though, it really didn’t change my mind on anything. Too bad, it could have happened. I did adjust some details of my positions over the several days this has been going on, though; but certainly NOT because of the 1-liner "Have you ever heard of the fallacy "Argument from ignorance"? You should look it up."

Hugo Pelland said...

Finally, I am going to be truly snotty here, since you did accuse me of that after all... because you don’t seem to realize that I have good reasons to know that my intellect is not just average. Hal did mention his medical credentials a couple of times after all, and I don’t feel like I am publicly bragging much as there’s probably no more than 5 people who might read this. So anyway, here it goes...

I managed to graduate in Computer Engineering from one of the best schools in North America, during my first 4 years of studying in English, a language I was not even using daily at the time. I took the GMAT twice and scored in the top 1% of the critical reasoning section each time. This helped me get admitted to one of the best business schools in the world, in the #1 part-time MBA program in the US. My job requires logical thinking and pays me well for it, really well. And on a personal level, I have been enjoying analyzing complicated arguments like the ones we discussed here for over 15 years now.

None of this is relevant to what is true, of course, and I definitely make mistakes nonetheless; that’s why I don’t normally need to mention it, and I find it crass to do so. But today, this might be one of the very few times when it is actually relevant, because it may just make you understand why it looks so incredibly dumb to me to be told to look up a logical fallacy, as a response, without anything else attached to it. It’s so benign, so insignificant; it completely lacks class, reasoning, effort, etc... It’s really just what a troll would do, or a not-so-bright person. I am sorry to hear you insist on convincing me you fall under the latter category.

Hal said...

One Brow,

How typical that you would confuse a lack of belief with a ruling out of possibilities. I fully acknowledge that it is possible that God exists, or Vishnu, or reincarnation into nirvana. I merely see no reason to believe in any of them.

I’m a little confused. Is it your position that you lack a belief in the future existence of sentient machines? That is, you lack a belief that such a machine will ever be built?

bmiller said...

Hugo,

My first comment in this thread was that:
"A computer is just a souped-up abacus." It is a tool created by humans and used to augment certain tasks humans do.

You seem to think they can become conscious (or something analogous to consciousness) without knowing or being able to explain exactly what consciousness is in the first place. When faced with this contention, you employed the argument from ignorance fallacy against Hal, challenging him to show it was impossible. Hal pointed this move out although he did not explicitly mention the fallacy. So I did. There was no need for me to go into details that Hal already went into.

Hal was not guilty of the fallacy so it was wrong for you to accuse him of it. It is not up to him to show how consciousness, something that no one understands, can somehow not become present in a machine just because you claim it can.

I try to be concise and especially with you since I find you tend to post long-winded meandering posts that never seem to get to a point. I've noticed this in the past discussions I've had with you so when I engage you I want to keep the discussion focused on a single thing. Also as I mentioned you tend to get snotty as the discussion goes on, so I don't feel a need to be overly polite.

bmiller said...

Now regarding my claim that a computer is just a souped-up abacus: I am not the first, nor alone.

1. Computers are just a version of an abacus.
2. An abacus, like all machines, cannot understand anything.
3. Claims of machine learning examined: "unsupervised" learning is really "supervised".
4. ANN back propagation: curve fitting does not make a machine intelligent.

The CPU of a computer is like an abacus in that it takes numbers (binary) and instructions and makes a computation. Then it takes the next set of numbers and instructions and carries out the next computation, and so on. The CPU does not understand what it is doing. It is a tool designed by engineers that no more understands that it is in a state of 0001 than a hydrogen atom knows it has one electron in an unexcited state.
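[Editor's note: the fetch-and-execute loop described above can be sketched as a toy in a few lines of Python. This is an illustration only, not any real instruction set; the `ADD`/`MUL` opcodes are made up for the example.]

```python
# Toy "CPU": it mechanically applies each instruction to a number.
# There is no representation anywhere of what the numbers mean.
def run(program, x):
    for op, n in program:      # fetch the next instruction
        if op == "ADD":        # execute it blindly
            x = x + n
        elif op == "MUL":
            x = x * n
    return x

# (3 + 2) * 4 -- the loop "computes" 20 without understanding anything.
result = run([("ADD", 2), ("MUL", 4)], 3)
print(result)  # 20
```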

Engineers design and test these machines to ensure they do what they are supposed to do. Even supposedly "unsupervised learning" machines are evaluated to ensure they do their expected task.

The fact that a machine can adjust its output depending on its input (back propagation) is not a new concept. Machine control theory has been around a long time... in fact since the first machines (with gears and such).
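[Editor's note: the "curve fitting" point can be illustrated with a minimal gradient-descent fit, the same error-feedback idea that back propagation generalizes. A hypothetical toy, assuming made-up data lying on y = 2x; it is not anyone's actual training code.]

```python
# Fit y = w*x to data by repeatedly nudging w against the error.
# The output feeds back to adjust the machine; nothing is "understood".
data = [(1, 2), (2, 4), (3, 6)]    # points on the line y = 2x
w = 0.0
for _ in range(100):               # 100 passes over the data
    for x, y in data:
        error = w * x - y          # how far off the current guess is
        w -= 0.1 * error * x       # gradient step on the squared error
print(round(w, 3))  # 2.0
```

The loop converges on w = 2 purely by mechanical error correction, which is the sense in which curve fitting, however elaborate, is still just a machine adjusting a knob.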

Regardless of the hype of marketeers (or software people) computers are just machines that can aid humans in doing what they want to do. They cannot want to do anything themselves.

Hal said...

One Brow,
That which is not impossible is possible. My reason for believing it is possible is that the impossibility has not been demonstrated.

And that which is impossible is not possible. All that amounts to is that it is a contradiction to claim that something is both possible and not possible. It doesn't entail that possibility is the default position when there is a question or dispute over whether or not a possibility actually exists.

One Brow said...

Hal said...

I’m a little confused. Is it your position that you lack a belief in the future existence of sentient machines? That is, you lack a belief that such a machine will ever be built?

I would describe that as 'I see no reason to rule out our ability to build a sentient machine. I have no firm reason to accept it is possible.'

One Brow said...

bmiller said...
Now regarding my claim that a computer is just a souped up abacus. I am not the first, nor alone.

Now, all you need to do is prove that a souped-up abacus can never experience a form of sentience.

One Brow said...

Hal said...
It doesn't entail that possibility is the default position when there is a question or dispute over whether or not a possibility actually exists.

I find circular arguments insufficient to settle the default position as 'impossible'. That's the importance of "demonstrated".

Hal said...

One Brow,
But that is no reason to assume that the default position is 'it is possible'. It is simply a moot question at that point.

Hal said...

One Brow,
Now, all you need to do is prove that a souped-up abacus can never experience a form of sentience.

According to your earlier post you have no firm reason for accepting such a possibility.

If you want to substantiate the claim that there is a possibility here you have to provide some good reasons to accept it.

Hal said...

One Brow,

Also, you keep insisting that my argument is circular. I don't understand why you would think so. I've provided reasons for why I think it is not possible for a computer to be conscious and have sensations. I don't simply assume that these mechanical devices are non-sentient.

Can you clarify why you think my argument is circular?

One Brow said...

Hal said...
But that is no reason to assume that the default position is 'it is possible'. It is simply a moot question at that point.

Again, "possible" does not imply provable, constructable, believable, nor demonstrable. It's simply a statement that the phenomenon in question has not been disproven or demonstrated false. "Possible" is the neutral position, and is my default position on any phenomenon until I see a reason to move to "impossible" or "demonstrated".

According to your earlier post you have no firm reason for accepting such a possibility.

If you want to substantiate the claim that there is a possibility here you have to provide some good reasons to accept it.


I accept the possibility of many things without having any reason at all. Do you always assume any particular phenomenon is impossible until proven otherwise?

Also, you keep insisting that my argument is circular. I don't understand why you would think so. I've provided reasons for why I think it is not possible for a computer to be conscious and have sensations. I don't simply assume that these mechanical devices are non-sentient.

Can you clarify why you think my argument is circular?


My understanding of your argument is that you assign mechanical things to the category of the nonliving, and say that sentience can never be the property of nonliving things. You have built the conclusion you wish to draw (machines can not be sentient) by creating a larger category (nonliving things) and arbitrarily assigning them the property (anti-property?) you want the smaller group to have (can not be sentient). It is the arbitrariness of the assignment of this property that makes your argument circular.

If you were able to defend the notion that being nonliving rules out sentience in some manner, you could remove the circularity. However, you have not tried to defend this notion, besides falling back on other assigned properties of the categories you created (different substances, for example).

Of course, this is even assuming creating these categories is a valid way to approach the universe at all, but that's a discussion for another comment thread.

Hal said...

One Brow,
If there is a dispute over there being a possibility, then it is not the neutral position. If you wish to assert or claim something to be possible then you need to provide reasons to support that claim.

Hugo Pelland said...

Hal and One Brow,
I think you talked passed each other in these last few comments... As I mentioned before, there are 3 options:
1) Believe it is impossible
2) Don’t know whether it is possible
3) Believe it is possible

It seems to me that I am mostly a 3), One Brow is a 2) and Hal is (mostly?) a 1).
Earlier, I thought One Brow was also a 3), but the latest comment confirms something that falls under 2):
'I see no reason to rule out our ability to build a sentient machine. I have no firm reason to accept it is possible.'

To Hal’s point, the default position is definitely not 3).
‘...that is no reason to assume that the default position is 'it is possible'...’
But it’s also definitely no 1) either, for the reasons listed a few comments before.

The default is thus 2), and this is the general principle anyway; not believing a claim is the default. We know that either ‘it’s possible’ or ‘it’s not possible’ must be true, but it doesn’t mean that we must accept one or the other. It is perfectly rational to not be convinced by either because of a lack of evidence.

Hugo Pelland said...

bmiller,
"...I don't feel a need to be overly polite."
You were so far from polite; what you wrote was a stupid trollish comment of ‘look up that fallacy’. How many times do I need to repeat that to you? Why can’t you see that it had nothing to do with the content of what you wrote?
Just like it was stupid to ask me whether I would encourage my mom to prostitute herself.
Unlike Hal, I did not completely ignore you because I thought it was interesting to address the points you bring up nonetheless. But it seems that if I do, the reasons of why I called you well-deserved names are lost on you, so I will stop there this time, even if I have already read your links and started to reply to the points above that line...

Hugo Pelland said...

I had not realized there were new comments before I posted mine...
I think this 1-line by One Brow is particularly interesting:
"Again, "possible" does not imply provable, constructable, believable, nor demonstrable. It's simply a statement that the phenomenon in question has not been disproven or demonstrated false. "Possible" is the neutral position, and is my default position on any phenomenon until I see a reason to move to "impossible" or "demonstrated"."
Makes me wonder whether my 1), 2), 3) above is actually badly defined, because I do agree that this version of "possible" is pretty much synonymous with the neutral position.

bmiller said...

Hugo,

Just like it was stupid to ask me whether I would encourage my mom to prostitute herself.

I asked you that question to see if you held a double-standard. It appears you do.
I find it hard to believe that you've been on boards discussing morality for 15 years and never ran across this type of objection. One Brow said he had no problem with his daughter choosing to prostitute herself so at least he is consistent.

Regarding my comments pointing out your fallacy, I can't help it if you react emotionally to blunt and concise comments. I don't like to write long posts re-wording some Wikipedia article. I thought about including a link, but decided against it since there are plenty of other sites explaining it.

Hugo Pelland said...

bmiller, ok, you really don’t understand what I am saying...

(1) Regarding the prostitution comment, I was pointing out that it doesn’t matter what your opinion (or mine; that was implied) is about prostitution. I was making a point about how I think this is about the woman being able to choose what she wants to do. Because you didn’t understand that, you thought it was about me perhaps being fine with a family member prostituting herself. That would make me consistent in your view, as you just said. That was already wrong of you but that was not the stupid part. The stupid part is that YOU ASKED WHETHER I WOULD ENCOURAGE MY MOTHER. Can’t you see the giant jump from being ‘ok’ with strangers, or even a daughter, choosing to prostitute herself, to actively encouraging my mother? To me, this looks like nothing but provocation, just like the other mildly provocative snippets you inserted in that same paragraph, which I will ignore like I do 90% of the time. For instance, the last time I had just asked ‘what’s wrong with you?’ and didn’t bother commenting again. But apparently, if I don’t spell out the exact reasons why it was incredibly stupid of you to say something like that, you think it was just a benign question about being consistent.

(2) Regarding the fallacy comment, again, this has nothing to do with you being blunt or concise, it has everything to do with YOU ASKING ME TO LOOK UP THE MEANING OF A FALLACY. Fallacies are not complicated; it is logic 101, something we covered in my first philosophy class when I was 17. Maybe you don’t know that precisely, but it’s still so stupid to assume I might not know what it means. It is thus nothing but an attack on my intellect, a comment that implies that this is something I must not know. Or, you knew that I knew but wrote that as a comical insult, i.e. trolling. But you keep insisting that you are not trolling, so it means that you really think I am too dumb to know what the fallacy from ignorance is, and that’s what makes me call you stupid. Btw, it’s ironic that you accused me of just name-calling when this was my point basically; you’re throwing useless insults instead of arguments. And to be clear, linking to a Wikipedia article or describing what the fallacy is wouldn’t be better, at all. It might show more effort on your part, but it would have been just as stupid since it would imply the same misconception. What Hal was doing otoh was the correct approach: debating the details of the positions, figuring out what we agree on, or not, and seeing where it leads. That was never the problem; I welcome such criticism.

Now, do these somewhat long explanations mean that I am reacting emotionally to these things or that it matters? Of course not... it’s not important at all. But it seems that if I just say that you’re a stupid troll, you (and Hal, who might not have known the full context) conclude that I am just calling you names... hence the explanations yet again, because these are 2 super clear examples of when you were truly stupid, bmiller, and you still didn’t understand why apparently, based on your very last comment.

bmiller said...

Hugo,

To me, this looks like nothing but provocation,

I don't think you've had many discussions on prostitution if this is the first time you've heard this type of challenge. It's meant to frame the question in terms of real people we care about as opposed to abstract women in the street out for fun.

I've done all I can to explain my reasoning. If you want to continue to call me names there's nothing I can do about that and frankly I don't care. It says more about your than me.

bmiller said...

your sb you

One Brow said...

Hal said...
If there is a dispute over there being a possibility, then it is not the neutral position.

Please. A lack of belief in any God, while not denying the possibility of their existence, is the neutral position, and that position is constantly disputed.

If you wish to assert or claim something to be possible then you need to provide reasons to support that claim.

How could such a claim be supported? If I were to provide a general overview of the process, I would be claiming the AI was constructable. If I were to give a formal, step-by-step proof from hypotheses to conclusion, I would be claiming the AI was provable. How do you demonstrate something is possible?

Hal said...

One Brow,
A lack of belief in any God, while not denying the possibility of their existence, is the neutral position, and that position is constantly disputed.

That is not a neutral position. If you believe it is possible for any God to exist then those who believe it is not possible are going to dispute your belief and ask you to justify it.

How could such a claim be supported? If I were to provide a general overview of the process, I would be claiming the AI was constructable. If I were to give a formal, step-by-step proof from hypotheses to conclusion, I would be claiming the AI was provable. How do you demonstrate something is possible?

We demonstrate possibilities all the time. Roger Bannister demonstrated that humans could run the mile in less than 4 minutes. The NASA space program demonstrated that it is possible to land a man on the moon.

You are essentially claiming that non-living substances can be conscious entities. There are no known non-living entities that are conscious. Even though living entities can be conscious, only a small subset of that category actually are conscious. We do have a fairly good idea of how those living things can be conscious. And you appear to be acknowledging you know of no way that a non-living entity could be conscious since you can't demonstrate it.

So why should I believe in this possibility? Because you can imagine it? I can do that to. We can also imagine toads singing in English even though we know such a thing is physically impossible. So, I see no good reason for believing it to be possible simply because you can imagine such a thing.

You are making a rather extraordinary claim. I actually think it more likely that God exists than that a non-living machine can be conscious.

bmiller said...

We can also imagine toads singing in English even though we know such a thing is physically impossible.

Well, there's Kermit :-)

Hal said...

The stupid part is that YOU ASKED WHETHER I WOULD ENCOURAGE MY MOTHER.

This is another reason for not engaging with Hugo.

Bmiller actually said: Would you approve of or encourage your mother into prostituting herself? Or any other female relative?

It was obviously a hypothetical question to elicit whether or not he really believed prostitution was an ok thing. One Brow got it and answered the question honestly.

Hal said...

One Brow,
My understanding of your argument is that you assign mechanical things to the category of the nonliving, and say that sentience can never be the property of nonliving things. You have built the conclusion you wish to draw (machines can not be sentient) by creating a larger category (nonliving things) and arbitrarily assigning them the property (anti-property?) you want the smaller group to have (can not be sentient). It is the arbitrariness of the assignment of this property that makes your argument circular.

But I didn't create these categories in order to prove that non-living things can't be conscious. It is because of the capacities and powers we know to exist among the various substances that we have placed them in the categories we have. Are you claiming that all taxonomies are circular? That doesn't seem right to me.

I do agree with you that any system of taxonomy is to some extent arbitrary. I pointed it out in an earlier post. Here is what I said on December 24, 2018 10:05 AM:
Taxonomy is to one degree or another arbitrary. But any type of classification is going to have some reasons for setting it up. We could choose, like many ancient cultures did, to place things like rocks and plants in the category of sentience. Or place what we call a mammal, like a whale, in the non-mammalian category of fish. Descartes placed non-human animals in the category of machines. He didn't think they were really conscious or felt pain.

I happen to agree with Descartes' view that machines are not conscious beings with the capacity to feel things like pain or pleasure. I disagree with him regarding animals.

So ultimately, I think, this is a disagreement over how we think the substances in this world should be classified. I'm having trouble seeing a good reason why we should re-classify a non-living substance such as an electronic machine as a sentient being. I know of no other non-living substances that are sentient. Do you?

Hal said...

One Brow,
Well, there's Kermit :-)

He's a frog. Not a toad. Big difference. :-)

Hal said...

Oops! That last post should have been addressed to bmiller. Sorry about that.

bmiller said...

Hal,

Haha! I stand corrected.

Hugo Pelland said...

Hal said...
" It was obviously a hypothetical question to elicit whether or not he really believed prostitution was an ok thing."

bmiller said...
" Hugo,
The question was a serious one prompted by your remark...
"

Yet Hal, you're the one who got annoyed ("I don't appreciate such behavior. And I am not joking.") at me for misrepresenting your position by mistake... something I acknowledged and said I would never do on purpose. So, in all good intentions, can't you see this is the same thing here? And don't you read the explanations I am giving to bmiller to explain why, even if the question were hypothetical, it would still be irrelevant (or stupid...)?

Hugo Pelland said...

bmiller said...
" Hugo,

To me, this looks like nothing but provocation,

I don't think you've had many discussions on prostitution if this is the first time you've heard this type of challenge. It's meant to frame the question in terms of real people we care about as opposed to abstract women in the street out for fun.

I've done all I can to explain my reasoning. If you want to continue to call me names there's nothing I can do about that and frankly I don't care. It says more about your than me.
"

- True, I never said I had many discussions about prostitution. If you’re thinking of the comment about 15 years of argumentation, that was referring only to the mechanics of logical reasoning, not any particular topic.

- Yes, I get what you were trying to do. Again, that was not hard to follow... But it was wrong to make that comment for the 2 reasons I specified already.
First, our personal opinion of whether prostitution is morally acceptable was not relevant; go read the thread again, I was clear on that. It was about the liberty of the consenting adults involved and how they get vilified by others.
Second, there is a big jump between ‘tolerating’ something and ‘encouraging’ something. That’s why I found it provocative. Tolerating would be like saying “dear daughter, I don’t think it’s a great idea, but if you really need to, and are certain that’s what you want, I won’t stop you” while encouraging would be like “mom, I think you should sell your body for sex.” Asking whether I would do the latter is... stupid, no? But again, I gave you way too much credit here apparently because you simply did not see these differences, and that was my mistake. I really thought you were asking that 2nd question to taunt, to provoke.

- I am calling you names about 2 very specific things only, so it really doesn’t mean much. Plus, so what if I use an explicit word? Don’t pretend that explicitly calling you ‘stupid’ was worse than all the subtle-yet-not-so-subtle comments that you and Hal have been making here. As if Hal stating “This is another reason for not engaging with Hugo” was not an insult like saying I am stupid... As if you telling me to look up the definition of a fallacy was not an insult... being passive-aggressive is not nicer in any way.

One Brow said...

Hal said...
A lack of belief in any God, while not denying the possibility of their existence, is the neutral position, and that position is constantly disputed.

That is not a neutral position. If you believe it is possible for any God to exist then those who believe it is not possible are going to dispute your belief and ask you to justify it.

So, your understanding of neutral positions is that no one will ever dispute them? Seriously?

We demonstrate possibilities all the time. Roger Bannister demonstrated that humans could run the mile in less than 4 minutes. The NASA space program demonstrated that it is possible to land a man on the moon.

Do you think everyone thought a 4 minute mile was impossible before Bannister? Why did he train to do the impossible? Did everyone think landing a person on the moon was impossible before 1969? Why did NASA spend years and millions of dollars to attempt the impossible?

You are essentially claiming that non-living substances can be conscious entities.

I am saying that there is no good reason to rule out their achieving consciousness at some point.

There are no known non-living entities that are conscious. Even though living entities can be conscious, only a small subset of that category actually are conscious. We do have a fairly good idea of how those living things can be conscious. And you appear to be acknowledging you know of no way that a non-living entity could be conscious since you can't demonstrate it.

All true.

So why should I believe in this possibility?

I'm not asking you to believe in anything. Accepting a possibility does not require belief.

Because you can imagine it?

Because we have no reason to rule it out.

I can do that too. We can also imagine toads singing in English even though we know such a thing is physically impossible. So, I see no good reason for believing it to be possible simply because you can imagine such a thing.

Toads lack sufficient brain size to process English, and lack the physical structures (lips, teeth, a hyoid bone) to speak it.

You are making a rather extraordinary claim. I actually think it more likely that God exists than that a non-living machine can be conscious.

When did this discussion move into likelihoods?

One Brow said...

Hal said...
But I didn't create these categories in order to prove that non-living things can't be conscious. It is because of the capacities and powers we know to exist among the various substances that we have placed them in the categories we have. Are you claiming that all taxonomies are circular? That doesn't seem right to me.

You can't create (in a metaphysical sense) categories because they don't exist, outside of being mental shortcuts that allow our brain to process the world more easily. Yes, we use categories to identify and sort by capacities/powers/properties/etc., but the reality of things is not required to conform to the categories we identify. Using these mental shortcuts is not circular. Putting something into a mental shortcut, and using that placement to limit the reality for that something, is circular.

Hal said...

One Brow,
So, your understanding of neutral positions is that no one will ever dispute them? Seriously?

The only thing I can think of that might be a neutral position is to take a position like an agnostic: to say that there is simply not enough information to make a decision regarding whether or not it is possible for a computer to have sensations.

Is that your position? Or are you going to keep claiming that it is possible for a computer to have sensations even without any good reason for making that claim?

I'm not asking you to believe in anything. Accepting a possibility does not require belief.

Yes, you are. You are asking me to believe that this possibility exists.

Hal said...

One Brow,
Do you think everyone thought a 4 minute mile was impossible before Bannister? Why did he train to do the impossible?

Some people believed it to be possible, some believed it to be impossible. Obviously, Bannister believed in the possibility or he wouldn't have made the attempt. Once he accomplished that feat, the possibility to run a mile in 4 minutes or less was established. It is no longer a question of believing in that possibility for we now know that possibility actually exists.

One Brow said...

Hal said...
The only thing I can think of that might be a neutral position is to take a position like an agnostic: to say that there is simply not enough information to make a decision regarding whether or not it is possible for a computer to have sensations.

Why does the possibility of some phenomenon require a decision? I don't see how its possibility can depend on our current knowledge/understanding/determination.

Yes, you are. You are asking me to believe that this possibility exists.

I'm not sure what it means for a "possibility" to "exist", in a non-idiomatic sense.

One Brow said...

Hal said...
Some people believed it to be possible, some believed it to be impossible. Obviously, Bannister believed in the possibility or he wouldn't have made the attempt. Once he accomplished that feat, the possibility to run a mile in 4 minutes or less was established. It is no longer a question of believing in that possibility for we now know that possibility actually exists.

So, we agree that a phenomenon being heretofore without precedent is not a deterrent to it being possible.

So, I find myself asking again, how would you provide evidence that something is possible, as opposed to constructable/provable/demonstrable/etc.?

Hal said...

One Brow,
I'm not sure what it means for a "possibility" to "exist", in a non-idiomatic sense.

It means that there actually is such a possibility.

We know it is possible for a person to drive a car from Oakland to San Francisco in less than an hour or two.
In technical terminology, I would call this the "actuality of a possibility" or an "existential possibility".

Joe, who lives in Oakland, has a brand new Mustang that he wishes to show off to his buddy in San Francisco. Because we know that there actually is such a possibility we know that he could do that later today.


Is that an adequate explanation?

Hal said...

One Brow,
So, your understanding of neutral positions is that no one will ever dispute them? Seriously?

”NEUTRAL"

You keep using that word. I do not think it means what you think it means. :-)

Neutral (adjective): not helping or supporting either side in a conflict, disagreement, etc.;

When there is a dispute over whether or not it is possible for something to happen, it is not the neutral position to assume it is possible.

One Brow said...

Hal said...
It means that there actually is such a possibility.

We know it is possible for a person to drive a car from Oakland to San Francisco in less than an hour or two.
In technical terminology, I would call this the "actuality of a possiblity" or an "existential possibility".

Joe, who lives in Oakland, has a brand new Mustang that he wishes to show off to his buddy in San Francisco. Because we know that there actually is such a possibility we know that he could do that later today.


Is that an adequate explanation?


In the sense that "possibility" is an insufficient description of the full level of the support for the claim 'a person can drive a car from Oakland to San Francisco in less than an hour or two', sure. It's not just possible, it's both provable (based on the number of miles involved and the top speed of cars) and demonstrable (having someone actually perform the drive).

How do you show something is possible without it being provable/demonstrable/constructable/etc.? For example, in mathematics we have several statements that we know can neither be proven nor disproven from the standard axioms. One of the more well-known ones is the Axiom of Choice, which is independent of ZF set theory. It is possible that the Axiom of Choice is true, but this can never be demonstrated. Saying that the Axiom of Choice is true is taking a position; saying that it is false is taking a position; saying it is possibly true is the neutral position.

One Brow said...

Hal said...
You keep using that word. I do not think it means what you think it means. :-)

Bad news, I'm not left-handed either.

Neutral (adjective): not helping or supporting either side in a conflict, disagreement, etc.;

When there is a dispute over whether or not it is possible for something to happen, it is not the neutral position to assume it is possible.


By this interpretation of the definition, there can be no such thing as a position that is neutral on its own merits. Any position you can take will be neutral, or not neutral, depending on the discussion in question. Saying that God exists is a neutral position, as long as the disagreement is whether Christianity or Islam is the One True Faith, but is not a neutral position when debating whether Shinto or Judaism is true.

I interpret neutrality as a property of a position, not a relation between a position and some disagreement. For me, "possible" means "I see no reason why it should not occur", nothing more. This discussion feels, to me, like you are asking me why I see no reason to rule out the existence of machine sentience. I'm not sure how to prove that I don't see any reasons to rule it out. What might such a proof look like, to you?

Hal said...

One Brow,
By this interpretation of the definition, there can be no such thing as a position that is neutral on its own merits.

It is not an interpretation. That is the definition of a neutral position: it doesn't take sides in a dispute.

If the dispute is over possibility and one takes a neutral position they assume neither that a thing is possible nor that it is impossible. They don't insist, as you are doing, that one of those disputed claims is the 'neutral' one.


There are two claims being disputed:
"It is possible for a computer to be conscious."
"It is impossible for a computer to be conscious."

As long as you are taking one side in this dispute, I am going to keep demanding you provide a good reason for accepting it.

Hal said...

One Brow,
I interpret neutrality as a property of a position, not a relation to a position and some disagreement.

Yes, it is a property of a position. A neutral position implies that there are disputed positions and it refuses to take sides in that dispute.
There is no neutral position unless you have positions being disputed.

Hal said...

One Brow,
In the sense that "possibility" is an insufficient description of the full level of the support for the claim 'a person can drive a car from Oakland to San Francisco in less than an hour or two', sure.

I don’t understand your response. In the post I was replying to you said:
I'm not sure what it means for a "possibility" to "exist", in a non-idiomatic sense.

I was simply trying to explain what it means for a possibility to exist. Do you now understand what it means? I can’t tell from your reply.

One Brow said...

Hal said...
It is not an interpretation. That is the definition of a neutral position: it doesn't take sides in a dispute.

I hate arguments by definition, especially when they only bother to quote one. Another definition of neutral is "having no strongly marked or positive characteristics or features". Taking the position that something is possible is a position with no positive features.

If the dispute is over possibility and one takes a neutral position they assume neither that a thing is possible nor that it is impossible.

Accepting a possibility for a phenomenon is not an assumption.

There are two claims being disputed:
"It is possible for a computer to be conscious."
"It is impossible for a computer to be conscious."


"It is possible for a computer to be conscious." is not a claim.

As long as you are taking one side in this dispute, I am going to keep demanding you provide a good reason for accepting it.

I accept it because there is no good reason to reject it. I have all the proof that Bannister or the Apollo team had.

I interpret neutrality as a property of a position, not a relation to a position and some disagreement.

Yes, it is a property of a position. A neutral position implies that there are disputed positions and it refuses to take sides in that dispute.

You just contradicted yourself. If a position can only be neutral in relation to a particular dispute, it is not a property of the proposition itself.

I don’t understand your response. ... I was simply trying to explain what it means for a possibility to exist. Do you now understand what it means? I can’t tell from your reply.

I was asking what it meant when the possibility in question has not been proven or demonstrated. You responded with an example of a possibility that was demonstrated true.

Hal said...

One Brow,
I hate arguments by definition, especially when they only bother to quote one. Another definition of neutral is "having no strongly marked or positive characteristics or features". Taking the position that something is possible is a position with no positive features.

It is not an argument. That is what the word means when used in the context of disputed claims.
Any other usage in this context is irrelevant.

"It is possible for a computer to be conscious." is not a claim.

Yes it is. Just as "It is not possible for a computer to be conscious." is a claim.

You just contradicted yourself. If a position can only be neutral in relation to a particular dispute, it is not a property of the proposition itself.

The 'property' of the position is that it is neutral. And being neutral it does not take sides in the issue being disputed. You cannot have a neutral position unless there is a dispute or conflict.

To illustrate:
There are two claims being disputed:
"It is possible for a computer to be conscious."
"It is impossible for a computer to be conscious."

Neutral position: "I don't know. Maybe it is possible maybe it is not possible. I can't see a reason for supporting either claim."

Another example:
"It is possible for a man to run the mile in less than 4 minutes."
"It is impossible for a man to run the mile in less than 4 minutes."

Neutral position: "Maybe it is impossible maybe it is possible. I simply don't know."

Hal said...

One Brow,
I was asking what it meant when the possibility in question has not been proven or demonstrated. You responded with an example of a possibility that was demonstrated true.


If you look at my post dated January 05, 2019 6:34 AM you will see that I was responding to this:
I'm not sure what it means for a "possibility" to "exist", in a non-idiomatic sense.

So I am wondering if you now understand what it means for a possibility to exist?

Hal said...

One Brow,
This discussion feels, to me, like you are asking me why I see no reason to rule out the existence of machine sentience. I'm not sure how to prove that I don't see any reasons to rule it out.

I don't see it that way. What I am trying to understand is why you think taking one side in a dispute is a neutral position. It makes no sense because when there are conflicts or disputations one can only be neutral if they refuse to take sides in the conflict.

If you really take a neutral stance to this particular dispute then there is no obligation for you to have reasons for believing or disbelieving in machine sentience. I wouldn't expect you to prove anything.

One Brow said...

Hal,
It is not an argument. That is what the word means when used in the context of disputed claims.

Right. Neutral has a different definition between the context of discussing propositions that are being considered in light of disputed claims, and the context of discussing propositions in and of themselves. In the former context, there are no inherently neutral positions; any proposition may be a subject of dispute. In the latter, some propositions are inherently neutral: they propose no positive assertions or features.

Any other usage in this context is irrelevant.

OK, you got it. For the remainder of our discussion, there are no inherently neutral positions.

"It is possible for a computer to be conscious." is not a claim.

Yes it is. Just as "It is not possible for a computer to be conscious." is a claim.

I interpret "It is not possible for a computer to be conscious." to mean that, regardless of how much effort, capability, etc. we put into a computer, there can never be one where it achieves consciousness. That is indeed a claim.

So, what is the equivalent claim for "It is possible for a computer to be conscious."?

The 'property' of the position is that it is neutral.

No, positions are only neutral with regard to specific arguments. If you look at a different argument, the position may no longer be neutral. That's the definition you are insisting upon using.

And being neutral it does not take sides in the issue being disputed. You cannot have a neutral position unless there is a dispute or conflict.

Exactly. It is a property of the relationship between a proposition and a dispute.

To illustrate:
There are two claims being disputed:
"It is possible for a computer to be conscious."
"It is impossible for a computer to be conscious."

Neutral position: "I don't know. Maybe it is possible maybe it is not possible. I can't see a reason for supporting either claim."


What's the difference, to you, between "It is possible for a computer to be conscious." and
"I don't know. Maybe it is possible maybe it is not possible. I can't see a reason for supporting either claim."? What does one say about the world that the other does not?

So I am wondering if you now understand what it means for a possibility to exist?

So, when you say a possibility exists, you mean it has been confirmed by a real-world example or been proven true? If no one has ever done something, the possibility does not exist?

If that's not what you meant, you need to find a different example.

This discussion feels, to me, like you are asking me why I see no reason to rule out the existence of machine sentience. I'm not sure how to prove that I don't see any reasons to rule it out.

I don't see it that way. What I am trying to understand is why you think taking one side in a dispute is a neutral position. It makes no sense because when there are conflicts or disputations one can only be neutral if they refuse to take sides in the conflict.

OK, in relationship to our discussion, my position is not neutral.

Now, why do you expect me to prove that I don't see any reasons to rule out machine sentience?

If you really take a neutral stance to this particular dispute then there is no obligation for you to have reasons for believing or disbelieving in machine sentience. I wouldn't expect you to prove anything.

There is nothing about a stance being neutral towards a discussion that means it does not require proof. In fact, I'm still not sure what you think the difference is between my position and what you have stated the neutral position would be.

Hal said...

One Brow,
What's the difference, to you, between "It is possible for a computer to be conscious." and
"I don't know. Maybe it is possible maybe it is not possible. I can't see a reason for supporting either claim."? What does one say about the world that the other does not?


What it says is that you are not taking a position regarding the question of the possibility for a computer to be conscious. It lets the disputants know why you are not going to favor one position over the other.
If you had said that at the beginning of our discussion I would have known that you take no position on this question of machine sentience, and so would not have tried to explain why I didn't agree with you holding the opposing position.

Hal said...

One Brow,
Now, why do you expect me to prove that I don't see any reasons to rule out machine sentience?

You are the one claiming it is possible for a computer to be conscious. The burden of supporting that claim lies with you.

In the same way I have the burden of supporting my position.

Hal said...

One Brow,
OK, you got it.

And what is with the snarky comment? You were the one using the word 'neutral' inappropriately in this context. I'm glad to see that you recognize it is not really a neutral position. Maybe we can move on to more important issues.

Hal said...

One Brow,

To say, "It is possible for a computer to be conscious." is to say that you believe that possibility exists.

To say, "I don't know. Maybe it is possible maybe it is not possible. I can't see a reason for supporting either claim." is to say you take no position regarding the existence of that possibility. You neither believe it is possible for a computer to be conscious. Nor do you believe it is impossible for a computer to be conscious.

Hal said...

One Brow,
So, when you say a possibility exists, you mean it has been confirmed by a real-world example or been proven true? If no one has ever done something, the possibility does not exist?

If we know a possibility exists then, yes, it has been shown to be possible.
Simply because we don't know a possibility exists does not imply it doesn't. Nor does it imply that it does.

If we don't know a possibility exists, then it is a matter of belief. Typically when we believe something exists or doesn't exist we try to provide reasons for our belief.

You believe it is possible for a computer to be conscious. That is, you believe that possibility exists. I don't.

In retrospect my example was a poor one. Sorry about that.

One Brow said...

Hal said...
And what is with the snarky comment?

Because rather than address what I meant by referring to "the neutral position" (which would be along the lines of the second definition), you insisted that only definition 1 would be appropriate for this conversation. Sometimes, when people don't put in the effort to understand me, I get snarky.

To say, "It is possible for a computer to be conscious." is to say that you believe that possibility exists.

To say, "I don't know. Maybe it is possible maybe it is not possible. I can't see a reason for supporting either claim." is to say you take no position regarding the existence of that possibility. You neither believe it is possible for a computer to be conscious. Nor do you believe it is impossible for a computer to be conscious.


Can you explain what you understand by '"It is possible for a computer to be conscious." is to say that you believe that possibility exists.' without using the word "possible" or its derivatives?

If we know a possibility exists then, yes, it has been shown to be possible.

So, only possibilities that have been proven/demonstrated/constructed/etc. can exist?

Consider this page. Based on our current understanding of training abilities, a 3:36 mile seems unlikely to be achieved. However, we don't know what will happen with human training in the next 100 years. Is it possible that, in the next 1000 years, a human will run a 3:30 mile?

You believe it is possible for a computer to be conscious. That is, you believe that possibility exists. I don't.

Perhaps it is usage, but I just don't agree with the equivalence as stated there.
