Comments on dangerous idea: Lippard on Original vs. Derived Intentionality (blog by Victor Reppert)

-----------
I would say these robots' utterances have semantic properties (i.e., the individual terms have referents) which confer upon their utterances truth values. This would still be the case if all humans were killed, if all non-robots were eradicated.

If you would not want to say their utterances have truth values, what epistemic or semantic properties would you give the utterances? Are their utterances no different than the babblings of a brook?
-----------

I missed the first part of this conversation, so I'm not really sure how you defined "robot". But if you're talking about a robot in the everyday sense of the word (i.e., a programmed computer), then it CANNOT have semantics (by the *very definition* of a computer). If your robot was designed to have an artificial brain, then it CAN have semantics. With this, you can answer your own question.

— Anonymous, 2008-07-22

Let's say the robot has internal states that were designed (using some nice external sensors) to reliably covary with temperature as well as object identity (e.g., rocks, other robots, plants, animals). It meets another robot after walking a bit, and says, "There is a big warm rock around the corner, behind the plant."

The other robot goes around the corner for a bit, and comes back saying, "No, it is a big warm lion, not a rock.
I lifted the plant out of the way and saw it was a lion."

The first robot replies, "Oh, thank you: there is a big lion around the corner."

*******

The robots have internal states that were designed to covary with states of the world (in fact they have the function of picking out or referring to things in the world); these internal models can be *wrong* (i.e., these processes can malfunction); and the internal states can be revised in light of new evidence.

I would say these robots' utterances have semantic properties (i.e., the individual terms have referents) which confer upon their utterances truth values. This would still be the case if all humans were killed, if all non-robots were eradicated.

If you would not want to say their utterances have truth values, what epistemic or semantic properties would you give the utterances? Are their utterances no different than the babblings of a brook?

— Giordano Sagredo, 2005-07-26
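The mechanism Sagredo describes — internal states with the function of referring, which can malfunction and can be revised in light of another robot's report — can be rendered as a minimal sketch. All class and method names below are hypothetical illustrations, not anything from the original discussion:

```python
# A minimal sketch (an illustration, not from the original comments) of
# robots whose internal states are designed to covary with the world,
# can be wrong, and can be revised in light of new evidence.

class Robot:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # location -> description, e.g. "big warm rock"

    def perceive(self, location, description):
        # Sensor reading: the internal state is *designed* to covary with
        # what is actually at the location (its function is to refer to it).
        self.beliefs[location] = description

    def report(self, location):
        return f"There is a {self.beliefs[location]} around the {location}."

    def revise(self, location, description):
        # Belief revision: replace a (possibly mistaken) internal state
        # when another robot's report provides better evidence.
        self.beliefs[location] = description

a, b = Robot("A"), Robot("B")
a.perceive("corner", "big warm rock")     # a misperception (malfunction)
print(a.report("corner"))                 # There is a big warm rock around the corner.
b.perceive("corner", "big warm lion")     # B looks behind the plant
a.revise("corner", b.beliefs["corner"])   # A updates on B's testimony
print(a.report("corner"))                 # There is a big warm lion around the corner.
```

The point the sketch makes concrete is that the truth value of each `report` depends on whether the stored state matches the world, not on any human observer.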