Alan Turing, moralist

Strong AI. The Turing Test. The Chinese room. As I’m sure you’ll agree, not nearly enough has been written about these topics. So when an anonymous commenter told me there’s a new polemic arguing that computers will never think — and that this polemic, by one Mark Halpern, is “being blogged about in a positive way (getting reviews like ‘thoughtful’ and ‘fascinating’)” — of course I had to read it immediately.

Halpern’s thesis, to oversimplify a bit, is that artificial intelligence research is a pile of shit. Like the fabled restaurant patron who complains that the food is terrible and the portions are too small, Halpern both denigrates a half-century of academic computer science for not producing a machine that can pass the Turing Test, and argues that, even if a machine did pass the Test, it wouldn’t really be “thinking.” After all, it’s just a machine!

(For readers with social lives: the Turing Test, introduced by Alan Turing in one of the most famous philosophy papers ever written, is a game where you type back and forth with an unknown entity in another room, and then have to decide whether you’re talking to a human or a machine. The details are less important than most people make them out to be. Turing says that the question “Can machines think?” is too meaningless to deserve discussion, and proposes that we instead ask whether a machine can be built that can’t be distinguished from human via a test such as his.)

If you haven’t read Halpern’s essay, the following excerpts should help you simulate a person who has.

Turing does not argue for the premise that the ability to convince an unspecified number of observers, of unspecified qualifications, for some unspecified length of time, and on an unspecified number of occasions, would justify the conclusion that the computer was thinking — he simply asserts it.

A conversation may allow us to judge the quality or depth of another’s thought, but not whether he is a thinking being at all; his membership in the species Homo sapiens settles that question — or rather, prevents it from even arising.

…the relationship of the AI community to Turing is much like that of adolescents to their parents: abject dependence alternating with embarrassed repudiation. For AI workers, to be able to present themselves as “Turing’s Men” is invaluable; his status is that of a von Neumann, Fermi, or Gell-Mann, just one step below that of immortals like Newton and Einstein. He is the one undoubted genius whose name is associated with the AI project … When members of the AI community need some illustrious forebear to lend dignity to their position, Turing’s name is regularly invoked, and his paper referred to as if holy writ. But when the specifics of that paper are brought up, and when critics ask why the Test has not yet been successfully performed, he is brushed aside as an early and rather unsophisticated enthusiast.

Apart from [the Turing test], no one has proposed any compelling alternative for judging the success or failure of AI, leaving the field in a state of utter confusion.

[W]hen a machine does something “intelligent,” it is because some extraordinarily brilliant person or persons, sometime in the past, found a way to preserve some fragment of intelligent action in the form of an artifact. Computers are general-purpose algorithm executors, and their apparent intelligent activity is simply an illusion suffered by those who do not fully appreciate the way in which algorithms capture and preserve not intelligence itself but the fruits of intelligence.

Of course, Halpern never asks whether the brain’s apparent intelligence is merely a preserved fragment of its billion-year evolutionary past. That would be ridiculous! Indeed, Halpern seems to think that if human intelligence is open to question, then the Turing Test is meaningless:

One AI champion, Yorick Wilks … has questioned how we can even be sure that other humans think, and suggests that something like the Test is what we actually, if unconsciously, employ to reassure ourselves that they do. Wilks … offers us here a reductio ad absurdum: the Turing Test asks us to evaluate an unknown entity by comparing its performance, at least implicitly, with that of a known quantity, a human being. But if Wilks is to be believed, we have unknowns on both sides of the comparison; with what do we compare a human being to learn if he thinks?

I think Halpern is simply mistaken here. The correct analogy is not between computers and humans; it’s between computers and humans other than oneself. For example, I have no direct evidence that the commenters on this blog think. I assume they think, since they’re so darned witty and insightful, and my own experience leads me to believe that that requires thinking. So why should this conclusion change if it turns out that, say, Greg Kuperberg is a robot (the KuperBlogPoster3000)?

Turing himself put the point as well as anyone:

According to the most extreme form of [the argument from consciousness] the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.

There’s a story that A. Lawrence Lowell, the president of Harvard in the 1920’s, wanted to impose a Jew quota because “Jews cheat.” When someone pointed out that non-Jews also cheat, Lowell replied: “You’re changing the subject. We’re talking about Jews.” Likewise, when one asks the strong-AI skeptic how a grayish-white clump of meat can think, the response often boils down to: “You’re changing the subject. We’re talking about computers.”

And this leads to my central thesis: that the Turing Test isn’t “really” about computers or consciousness or AI. Take away the futuristic trappings, and what you’re left with is a moral exhortation — a plea to judge others, not by their “inner essences” (which we can never presume to know), but by their relevant observed behavior.

It doesn’t take a hermeneutic acrobat to tease this out of Turing’s text. Consider the following passages:

The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.

It will not be possible to apply exactly the same teaching process to the machine as to a normal child. It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these deficiencies might be overcome by clever engineering, one could not send the creature to school without the other children making excessive fun of it.

If you want to know why Turing is such a hero of mine (besides his invention of the Turing machine, his role in winning World War II, and so on), the second passage above contains the answer. Let others debate whether a robotic child would have “qualia” or “aboutness” — Turing is worried that the other kids would make fun of it at school.

Look, once you adopt the “moral” stance, this whole could-a-computer-think business is really not complicated. Let me lay it out for you, in convenient question-and-answer format.

Q. If a computer passed the Turing Test, would we be obligated to regard it as conscious?
A. Yes.
Q. But how would we know it was conscious?
A. How do I know you’re conscious?
Q. But how could a bunch of transistors be conscious?
A. How could a bunch of neurons be conscious?
Q. Why do you always answer a question with a question?
A. Why shouldn’t I?
Q. So you’re saying there’s no mystery about consciousness?
A. No, just that the mystery seems no different in the one case than the other.
Q. But you can’t just evade a mystery by pointing to something else that’s equally mysterious!
A. Clearly you’re not a theoretical computer scientist.

As most of you know, in 1952 — a decade after his contributions to breaking the U-boat Enigma saved the Battle of the Atlantic — Turing was convicted of “gross homosexual indecency,” stripped of his security clearance, and forced to take estrogen treatments that caused him to grow breasts (it was thought, paradoxically, that this would “cure” him of homosexuality). Two years later, at age 41, the founder of computer science killed himself by biting the infamous cyanide-laced apple.

I agree with what I take to be Turing’s basic moral principle: that we should judge others by their relevant words and actions, not by what they “really are” (as if the latter were knowable to us). But I fear that, like Turing, I don’t have any argument for this principle that isn’t ultimately circular. All I can do is assert it, and assert it, and assert it.

46 Responses to “Alan Turing, moralist”

  1. Niel Says:

    Excellent post, Scott.

    Ultimately, it seems to me that the two major camps in discussions about AI are those who believe that a person is defined by their “essential being”, and those who believe that a person is defined by their “acts”. It’s the ancient moral debate all over again.

  2. Bram Says:

    The purpose of estrogen treatments wasn’t so much to attempt to alter orientation as to cause impotence, on the theory that that could stop homosexual behavior if not homosexual urges. I read this online someplace, and unfortunately don’t remember the reference.

    Martin Davis says that the manner of Turing’s death was inconsistent with cyanide ingestion, and that he probably left some cyanide out to give the appearance that he might have accidentally ingested some and died of that, so that his mother could have some hope that her son had died accidentally instead of committing suicide, which has some significant stigma. This is from Davis’s excellent book ‘The Universal Computer’.

  3. Scott Says:

    Thanks, Niel!

    Bram: Yeah, you’re right about the estrogen treatments. I’ll correct that in the post.

    Apparently the coroner declared that the cause of death was cyanide poisoning, though the apple itself wasn’t tested for cyanide. I have heard the theory that Turing chose this manner of suicide to give his mother the out that it was accidental. If you read his mom’s biography of him (yes, she actually wrote one!), you’ll see that if that was his goal, he succeeded.

  4. Anonymous Says:

    I know next-to-nothing about AI, but my second-hand understanding is that the field of AI has largely given up on the goal of constructing computers that could pass the Turing test, focusing instead on more realistic (more concrete?) goals. Is that not the case?

  5. Howie Says:

    I think modern chatbots already pass a weak form of the Turing test, i.e. they are good enough for various commercial purposes.

  6. Anonymous Says:

    Good writing, sir!

  7. Scott Says:

    anonymous 2:27: Yeah, building a computer that could pass the Turing Test is a lot like proving P!=NP, in that the more you understand about the problem, the smaller the steps you hope to take toward solving it.

    The trouble, of course, is that natural selection has had a billion-year head start. Our brains are at their most impressive when what they’re doing comes so “naturally” that we never reflect on it. At least, that’s the standard explanation for why tasks that are trivial for a five-year-old — walking on two legs, recognizing faces, etc. — have turned out to be harder than beating Kasparov at chess.

    There’s much more to say about this topic, but being a product of natural selection, I want some lunch now. Maybe I’ll blog about this another time.

  8. Paul Crowley Says:

    After reading that, I felt obliged to go and read the original article to see if it was really as bad as you claimed. I can save other readers the trouble. It is if anything worse. If you enjoy reading obnoxious rubbish, it seems there’s an even longer version on his website; I do not intend to waste the time!

    His failure to understand any of the salient points of the Chinese room or the counterarguments against it is particularly striking.

  9. Scott Says:

    I think modern chatbots already pass a weak form of the Turing test, i.e. they are good enough for various commercial purposes.

    That’s true. If you really want to be amused, look at AOLiza — Joseph Weizenbaum’s ELIZA program unleashed on AOL chat rooms, where horny guys repeatedly try to pick it up.

    Excerpt:

    fortyeight: No, You’ve got one hot body.

    AOLiza: i’m not sure i understand you fully.

    fortyeight: I’d have wood all night long.

    AOLiza: why do you say you’d have wood all night long?

    Perhaps, in defining the Turing Test, one needs to stipulate that the human interrogator is smarter than a trout.
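
    (For anyone curious how little machinery that takes: here’s a toy sketch of the pattern-and-echo trick visible in the excerpt above. It’s nothing like Weizenbaum’s actual code; the two patterns below are made up purely for illustration.)

        import re

        # Toy ELIZA-style responder: match a canned pattern, echo the user's
        # own words back as a question, and otherwise punt.
        PATTERNS = [
            (re.compile(r"i'?d (.*)", re.I), "why do you say you'd {0}?"),
            (re.compile(r"i am (.*)", re.I), "how long have you been {0}?"),
        ]

        def respond(line):
            for pattern, template in PATTERNS:
                m = pattern.match(line)
                if m:
                    return template.format(m.group(1).rstrip(".!?"))
            return "i'm not sure i understand you fully."

        print(respond("No, You've got one hot body."))    # the punt
        print(respond("I'd have wood all night long."))   # the echo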

  10. Dave Bacon Says:

    I don’t understand this whole debate about computers being conscious or whatever. The question that fascinates me is why a bag of neurons can’t calculate the first 10 billion digits of PI. Or at least give me a couple billion within the neurons’ lifetime. (“They’re made of meat.” “Meat?” “Meat.”)

  11. Ijon Tichy Says:

    You can read Part 1 and Part 2 of a more detailed and fully documented version of that article. I think that commenter niel’s perception about the “two major camps” of AI (is such a division warranted in the first place?) confuses the issue. Why bring morality into it? Without contradiction, one can hold one opinion on the AI debate, and quite another on the “ancient moral” debate. They’re two different debates.

  12. zoss Says:

    judge others, not by their “inner essences” (which we can never presume to know), but by their relevant observed behavior

    Maybe I’m arguing semantics here; but I just want to suggest that “judging others” implies judging their very essence, which we don’t have access to (and which their actions do not necessarily reflect). Thus, I would maybe rephrase the above statement as: judge others’ behaviour; not their essence.

  13. Eyal Rozenman Says:

    The Turing test is the best you can do to test intelligence in a black box model, I think, but you can do better with humans than with computers. I can compare my brain activity and someone else’s brain activity when we do/experience the same things. This becomes more interesting when you think of more basic brain activities which are shared with other animals. Which animals feel pain the way we do? Is a Turing test for this convincing? Like trying to run away when poked, say? I am not sure, but since pain is a basic feeling which correlates with certain activity of the small brain, which exists already in the reptilian brain, we can learn something about feeling pain by looking inside animals’ brains.

    Hopefully the small brain/reptilian brain info is not nonsense – I did not double check before posting. Another merit of posting anonymously…

  14. Niel Says:

    I think that commenter niel’s perception about the “two major camps” of AI (is such a division warranted in the first place?) confuses the issue. Why bring morality into it? Without contradiction, one can hold one opinion on the AI debate, and quite another on the “ancient moral” debate.

    True. However, that doesn’t necessarily mean that one is being consistent if/when one does so.

    In earlier periods of history, people have thought that people with dark skin color were “not people”, that women were “not people” — not just in a legal sense, but in the sense that they were presumed to not have the same richness of mental states as white males. What is the strong AI debate but a question about whether it is possible for a computer to one day develop rich enough “mental” states to warrant being considered a person for practical (if not legal) purposes?

    In Western society, humans (other than white males) are regarded as persons because they campaigned for it, and in the process managed to convince people that they were not so different from those humans who already had civil rights. What is this if not a Turing-like test, where the victory of the candidate intelligence is that much greater because people could see that it wasn’t a white male who was speaking?

    Let me present an alternative to the Turing test, and see if the two debates are so “different” when you consider it. If computers (or computer programs) become capable of organising and demanding civil rights, should we then regard them as intelligent, or do we presume that they are incapable of the same inner life as humans because their construction is different?

  15. Osias Says:

    >>If you haven’t read Halpern’s essay,
    >>the following excerpts should help you
    >>simulate a person who has.

    heheheheheheheh, LOL LOL LOL !!

    Scott, what was the last time I told you I love you? 🙂

  16. Nagesh Adluru Says:

    This post should act as a wonderful additional explanation of why it is time for people to adopt computational thinking, a post on Lance’s blog.

  17. Scott Says:

    Scott, what was the last time I told you I love you?

    I don’t believe you ever have.

  18. Jud Says:

    I’m going to take the unpopular side of the discussion and argue that you’ve done something quite similar to what you accuse Halpern of doing – oversimplifying in order to make a (perhaps unfair) point.

    Re Halpern’s argument that a machine that passes the “Turing test” might not be truly capable of thought: As I read Halpern’s piece, this is not a moral judgment, but a criticism of the efficacy of the test in answering the question of whether one is dealing with intelligence – or to put it more simply, it’s at least in part intended to be a discussion of the Chinese room problem. (Halpern doesn’t help himself by raising the Chinese room problem explicitly only after he’s already spent several paragraphs putting forth the problem with considerably less precision than Searle.)

    A college psychology experiment in which I participated may help elucidate one of the difficulties with the Turing test that Halpern alludes to: its utter dependence on the (presumably) human half of the conversation. The experiment, I and the other participants were told, was to measure how people’s attitudes are modified through interaction. We were given a series of propositions and asked to indicate our feelings about them on a scale from 1-5, strongly agree to strongly disagree. Other participants – our experimental “partners” – would indicate their feelings about these propositions in the same way. Our task was to write arguments on behalf of our positions where we differed from our partners and see, through successive iterations, whether this modified their positions.

    On the second iteration, I noted that my partner’s response to what I thought was a calm and well-reasoned argument was to more strongly disagree with my position. Thus, on the third iteration, instead of continuing to make arguments, I sent a question to my partner: Was the previous response to my argument an honest one, or was it done only to see how I would respond?

    We stopped as planned after the third iteration. The responses we’d been told were coming from other participants had in fact been chosen at random by a machine. I was told (to my surprise) that thus far I had been the only participant to guess the general outlines of the experiment. So – why was I the only person of a few dozen to think of the possibility that my interlocutor was not responding “intelligently”? Was I so incisive that I easily saw through the experiment, or was I so egotistical as to think no intelligent person could more strongly disagree with me after reading my argument? The important points are that Turing tests may without much difficulty (certainly less than paying someone to sit in a room with a Chinese phrase book) be “wired” in such a way that a large majority of persons would suppose themselves to be dealing with intelligent entities when in fact they were not, and the feelings of the minority (that there were no intelligent entities) might not result from any special powers of discernment.

    ISTM that it is a legitimate question whether a Turing test can differentiate non-random from random responses in some defined number of iterations, let alone non-random from “intelligent” responses. (Voight-Kampff tests, anyone?)

  19. Scott Says:

    Jud: Thanks for your comments.

    Firstly, I didn’t accuse Halpern of “oversimplifying” — I accused him of being just plain wrong. 🙂

    Secondly, I think a key point about the experiment you describe is that it had only three iterations. In conversations, we tend to give people the benefit of the doubt — it can take a long time before we realize that someone isn’t actually responding to what we said! (This can happen not only because the “someone” is a chatbot, but also because he or she wasn’t paying attention.)

    That’s why I think that in the Turing test, either (1) the human interlocutor should be someone who’s seen many chatbots in the past and is skilled at exposing them quickly, or (2) there should be generous time available (several hours, or even several days).

  20. Osias Says:

    >>Scott, what was the last time
    >>I told you I love you?
    >
    >I don’t believe you ever have.

    uh… and do you want me to do that now? 🙂

    Tsk, instead of making these unfunny comments, I should be asking something useful. (remembering you told us to ask things here)

    Like: Why can’t I understand what an NP-hard language is? I re-read the definition several times in several places. I understand what P, NP, and NP-complete are; I even understood the “SAT is NP-complete” proof. Do you know any site where it’s explained very, very simply?

  21. Scott Says:

    A problem is NP-hard if it’s “at least as hard as NP-complete” — in other words, if NP-complete problems can be efficiently reduced to it (but not necessarily vice versa). For example, the halting problem is NP-hard.

    If you understand NP-completeness, then you should certainly be able to understand that. (I’m claiming a reduction. 🙂 ) Indeed, the usual way one defines NP-completeness is to say a problem is NP-complete iff it’s both NP-hard and in NP.

    (To avoid circularity, one can define NP-hard as “any NP problem can be reduced to this,” which amounts to the same thing as “any NP-complete problem can be reduced to this.”)

  22. Osias Says:

    >A problem is NP-hard if it’s “at
    >least as hard as NP-complete” —

    I think I got it now. I think I didn’t understand it before because I saw no point in defining a class “upwards” like this.

    >in other words, if NP-complete
    >problems can be reduced to it
    >(but not necessarily vice versa).

    This “reduced” means “non-optimally reduced”, right? Like “padding” a simpler problem?

    >If you understand
    >NP-completeness,
    >then you should certainly be
    >able to understand that.
    >Indeed, the usual way one
    >defines NP-completeness is to
    >say a problem is
    >NP-complete iff it’s both
    >NP-hard and in NP.

    I had understood NP-completeness in terms of reductions to SAT and SAT as complete because of that Cook-Levin tableau. When reading things like “both NP-hard and in NP” I was astonished…

    As an undergraduate, my professor told us at the beginning of a course: “at the end of this course we will learn the difference between NP-complete and NP-hard”. The course ended, but he must have forgotten that. 🙂

    >For example, the halting
    >problem is NP-hard.

    Now I’m lost. “Impossible” counts as “as hard as NP-complete”?

    Well, thanks a lot for the attention and answers.

  23. Jonathan Katz Says:

    Osias: at the risk of seeming egotistical, I can answer your question by referring to my lecture notes. (Of course, there are many other fine lecture notes out there also…)

  24. Scott Says:

    This “reduced” means “non-optimally reduced”, right? Like “padding” a simpler problem?

    I don’t know what you mean by “non-optimally reduced.”

    Problem A is (Karp-)reducible to Problem B if there exists a polynomial-time algorithm that transforms any instance of A into an instance of B having the same answer.

    (One can also define Cook reductions, which involve multiple instances of B, but there’s no need for that right now.)

    >For example, the halting
    >problem is NP-hard.

    Now I’m lost. “Impossible” counts as “as hard as NP-complete”?

    Look, there exists a polynomial-time algorithm that, given any Boolean formula phi, produces a program A(phi) that halts if and only if phi has a satisfying assignment. This A(phi) simply loops through all possible assignments, halts if it finds one that’s satisfying, and otherwise runs forever.

    That’s why the halting problem is NP-hard — not because of anyone’s personal intuition about what should “count” as a hard problem.
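
    Since we’re already in the weeds, here’s the same reduction spelled out as a little Python sketch. (The encoding is my own arbitrary choice, purely for illustration: phi is given in CNF as a list of clauses, each clause a list of signed variable indices.)

        from itertools import product

        def make_A(phi, num_vars):
            """Build the program A(phi) described above: it halts if and
            only if the CNF formula phi has a satisfying assignment."""
            def A():
                while True:  # run forever if no assignment satisfies phi
                    for assignment in product([False, True], repeat=num_vars):
                        if all(any(assignment[abs(lit) - 1] == (lit > 0)
                                   for lit in clause)
                               for clause in phi):
                            return  # halt: found a satisfying assignment
            return A

        # (x1 OR NOT x2) AND x2 is satisfied by x1 = x2 = True, so this halts:
        make_A([[1, -2], [2]], num_vars=2)()

    Constructing A(phi) clearly takes time polynomial in the size of phi, so a subroutine for the halting problem would let you solve SAT. That’s all “NP-hard” means here.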

  25. Anonymous Says:

    All these “Chatbot” type examples do not count — if you do the Turing test you know you are doing the Turing test, you are not an unsuspecting party.

  26. Scott Says:

    All these “Chatbot” type examples do not count — if you do the Turing test you know you are doing the Turing test, you are not an unsuspecting party.

    Unfortunately, the Loebner Prize competition (which Halpern discusses) showed that, even if people know they’re supposed to be distinguishing humans from computers, they’re often incredibly naive about it. For example, they often interpret nonsensical or off-topic remarks as attempts to be witty.

    Again, the Turing Test only works if the human interlocutor is NOT A DOOFUS.

  27. Anonymous Says:

    First, nice post, Scott. I like your perspective.

    Second, I think Halpern fails to understand that the Turing test demonstrates a functionalist perspective of intelligence that is useful (nay, necessary) as a foundation for a computational perspective of intelligence. I’m not sure making a machine that passes the test is a goal of AI, and many of Halpern’s quotes of AI researchers, which are comically misinterpreted, suggest this (indeed, what is the point of research that makes a conversational machine with no expertise and that doesn’t solve any problems but can fool humans?*). Rather, the test sets up a philosophical framework in which AI can happen.

    Halpern tries to attack this framework (which is legit, since there are alternate views from various philosophy of mind camps), but instead of looking at these interesting perspectives (his inability to understand the Chinese room rebuttals is rather humorous, I agree with PC at 4:13), he leads us down a path that denies science the ability to discuss intelligence by making it mystical, unknowable, soul-ful, etc. It’s like he’s a proponent of ID for AI.

    * On research into passing the Test: how would it be formalized? Does the machine have to fool most people most of the time, some people all of the time, etc.? As soon as a solution is announced, it would be met with a line of skeptics huffing “well, I bet /I/ can tell the difference!” Indeed, I don’t think this was ever meant as a subject for research. Turing thought a chatbot with a big enough input-output dictionary would be a reasonable machine, and so supposed that such a machine might come to exist or at least was not completely unreasonable. But I don’t think he was saying “build me such a machine!”

  28. Anonymous Says:

    Roger Penrose (in a book that I have purchased but [he confessed] read only a little of) has explored this issue and others that may be of interest: See http://www.amazon.com/gp/reader/0140145346/ref=sib_rdr_next1_3/002-6170696-2199209?%5Fencoding=UTF8&p=S009&ns=1#reader-page et seq.

  29. Wolfgang Says:

    > even if people know they’re supposed to be distinguishing humans from computers, they’re often incredibly naive about it.

    I disagree a bit. The real issue is that making a ‘normal’ conversation does not require as much intelligence as one would assume. (And advanced chatterbots are much more convincing than you might assume. By the way, Google is nothing but a chatterbot with a simplified user interface that ‘knows’ quite a lot.)

    For a while I was playing around with chatterbots and my conclusion is that it is indeed easy to tell the difference; but only by repeatedly making statements you would not make in a ‘normal’ conversation. (The best way is to make inconsistent statements and watch how the bot/human reacts.)

    However, if the rules were ‘fair’, you could not make such statements; otherwise the bot/human would conclude that you are not an intelligent human being.

  30. Scott Says:

    However, if the rules were ‘fair’, you could not make such statements; otherwise the bot/human would conclude that you are not an intelligent human being.

    No, you’re allowed to say anything you want. You’re not the one being tested! 🙂

  31. Scott Says:

    Roger Penrose (in a book that I have purchased but [he confessed] read only a little of) has explored this issue and others that may be of interest

    Oh boy — that’s a whole ‘nother can of microtubules.

    See here for some of my thoughts about Penrose’s Gödel argument, and here for my “application” of his ideas.

    To his great credit, Penrose at least understands what sort of thing would have to be the case for computer intelligence to be impossible in principle.

  32. Anonymous Says:

    It’s a shame what happened to Turing. I don’t understand why he didn’t move to the U.S.? It might have been a little more tolerant than Britain.

    As for machine intelligence, I would distinguish between intelligence and consciousness. I think that a machine could be intelligent and yet not be conscious with an awareness of itself. I suspect that something is going on in our wetware that may not be reproducible in your common variety computer.

  33. Anonymous Says:

    This may be beside the point of morality, but there’s actually a big difference from the complexity point of view between Searle’s experiment and the way it’s described by Halpern.

    Halpern describes an experiment where a person who does not know Chinese gets questions in Chinese, finds the questions and the corresponding answers in a giant dictionary/lookup table, and responds with the answers.

    The complexity-theoretic response to this thought experiment is that one would never be able to build a lookup table that can cover the exponential number of possible questions, and hence that if a computer were to pass the test it would have to “think”.

    It’s not clear that this objection holds for Searle’s original experiment; it seems that there the person in the room does not use a lookup table, but actually simulates the execution of a Turing machine on the given input. In this sense Searle is a true theorist – the fact that the person will likely be a constant factor slower than the machine (roughly 10^{11} with current computers) does not bother him one bit.

    Boaz

    p.s. I took the description of Searle’s experiment from
    http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html

    Unfortunately the Wikipedia entry for Chinese room contains the text
    “[the computer] consults a large look-up table (as all computers can be described as doing)”

  34. Scott Says:

    It’s a shame what happened to Turing. I don’t understand why he didn’t move to the U.S.? It might have been a little more tolerant than Britain.

    Maybe, maybe not. Turing did spend time in the US (in the 1930’s for his PhD at Princeton, and during WWII to work with Shannon at Bell Labs), so he knew what it was like. I don’t think attitudes liberalized in the US until the late 60’s, roughly the same time as in Britain — maybe someone else can offer more insight? (From a legal standpoint, the last state antisodomy laws were only struck down in 2003!)

    It’s difficult to enter the mind of someone in suicidal despair — especially in Turing’s case, since he left no note and, according to Hodges’ biography, seemed mostly unchanged in the months before his death (which in any case, was two years after the estrogen treatments had ended). One has to focus on how things would have looked to him then — and not on what we know now, which is that if he’d stayed alive, he would have lived to see Turing machines change the world.

  35. Scott Says:

    I suspect that something is going on in our wetware that may not be reproducible in your common variety computer.

    Wetware chauvinism plain and simple, that’s what I says it is! 🙂

  36. Scott Says:

    Boaz: Thanks for the insight! In debating this topic, I’ve noticed two complementary confusions:

    (1) People who don’t realize that a human can always be simulated on a finite set of questions, if necessary using a giant lookup table.

    (2) People who think a giant lookup table is more or less the only way to do it (i.e., who don’t recognize a difference between a lookup table and a Turing machine).

    In other words, some people don’t understand that every language is in EXP/exp; others don’t understand that some languages are in P/poly. 🙂
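
    (If you want numbers, here’s a back-of-the-envelope count; the 30-symbol alphabet and the 1000-character conversation length are both pulled out of a hat.)

        from math import log10

        alphabet = 30   # letters, space, a little punctuation (made-up figure)
        length = 1000   # characters in one short typed conversation (made-up figure)

        # Roughly one lookup-table entry per possible conversation of that length:
        print(f"about 10^{length * log10(alphabet):.0f} entries")  # about 10^1477
        # ...compared to roughly 10^80 atoms in the observable universe.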

  37. Luca Says:

    Turing died precisely when the McCarthy witch hunt was peaking.

    In the McCarthy perspective, homosexuals were indistinguishable from communists, and they were equally persecuted:

    http://members.aol.com/matrixwerx/glbthistory/mccarthyism.htm
    (I couldn’t find a more scholarly reference, but this is well-documented stuff. The famous play “Angels in America” touches on some aspects of that period.)

    Apart from the early 1950s hysteria, homosexuals were routinely fired from sensitive government jobs in the US, under the theory that they were vulnerable to blackmail. (Same in the UK, and Turing lost his security clearance after his trial.)

    The irony in this is that the people who were discovered to be homosexuals, and who therefore were no longer exposed to blackmail, would be fired, while those who were successfully hiding (and so exposed to blackmail) could keep their jobs.

    By the way, according to what I remember from Hodges’ biography, what really got Turing in trouble is that he was hanging out with a 19-year-old, while the age of consent was 21, and that his naivety was in part to blame. After there was a burglary in his apartment, he called the police, and they asked him if he had any leads for them. He said maybe a friend of his who had been to the apartment might have given information to other people who actually broke into the apartment. Then the police asked, how come you let someone whom you suspect of associating with criminals into your apartment. And Turing told them they were dating. So the police forgot about the burglary and hauled Turing away. (That’s pretty much how Hodges tells the story.)

  38. Anonymous Says:

    In the McCarthy perspective, homosexuals were indistinguishable from communists, and they were equally persecuted:

    That is kind of ironic because J. Edgar Hoover, Director of the FBI, was probably a homosexual. He left his life insurance policy to his second in command at the bureau, who had also been a close friend for many years. Allegedly, Hoover kept his grip on his job for 50 years because he had so much dirt on everybody in his files that no one dared touch him.

  39. Osias Says:

    >That’s why the halting problem
    >is NP-hard — not because
    >of anyone’s personal intuition
    >about what should “count” as
    >a hard problem.

    Thanks, man. And relax: I know my personal intuition has no place in the TCS world; that’s why I’m trying to understand things correctly, reading books and asking questions here.

  40. Anonymous Says:

    Perhaps Searle’s experiment can be viewed as a demonstration that to a large extent we consider humans as individuals with thought, consciousness and free will, because we do not understand the brain and cannot predict it.

    The experiment can be seen as telling us that any device for which we have the “program”/“rule book”, regardless of whether it is human or machine, will no longer be considered an individual.

    Boaz

  41. Anonymous Says:

    p.s. I guess the corresponding thought experiment to what I wrote above would be: what if neurobiologists find out that the best way to describe what is going on inside an English-speaking person’s brain is that there’s a small being inside it, speaking only Chinese and working with a book of instructions, that produces all the responses to the questions…

    Boaz

  42. Scott Says:

    what if neurobiologists find out that the best way to describe what is going on inside an English-speaking person’s brain is that there’s a small being inside it, speaking only Chinese and working with a book of instructions, that produces all the responses to the questions…

    I like that!

    In the case of the Pentium chip, there really is such a “small being inside” — one that speaks only RISC opcodes, not x86 ones. (That’s something from my architecture class that I haven’t yet managed to forget.)

    (And actually, I guess it’s the reverse of your thought experiment, since English is a RISC language compared to Chinese.)

  43. Chad Okere Says:

    Bleh, I can’t stand people who don’t think computers will ever “think.” They never bother to define what it means to “think”.

    I bet that in 50 years, the idea that computers can “think” will be completely obvious to anyone, and only people who read up on the history of philosophy will wonder why anyone bothered to argue that they could (since they probably won’t even see the arguments about why they couldn’t)

  44. Aaron Denney Says:

    I’m surprised that no one has brought up Dijkstra’s quip: “The question of whether a computer can think is no more interesting than whether a submarine can swim.”

  45. L Says:

    English is a RISC language compared to Chinese.

    In some written sense, that’s obviously true, but I thought that once you parsed into syllables, English and Chinese were both pretty far on the RISC side.

  46. A.R.Yngve Says:

    Which is the most important goal: to create a “smart” machine or a conscious one?

    My money would be on a “smart” machine that could do work which requires rudimentary intelligence… its “level of consciousness” is of lesser importance.
    :-S

    If a machine had intelligence per se, this could be proved and tested. If a machine was conscious, this couldn’t be proved. But we need conscious machines like we need a hole in the head.

    Scratch that: PEOPLE need consciousness like they need a…