October 5, 2008
Computers can finally “think” for themselves, or can they?
By Justin Montgomery
Not since the IBM supercomputer “Deep Blue” beat world chess champion Garry Kasparov in 1997 have computers been regarded as able to “think” at a human’s capacity. Very soon, six computers will converse with human interrogators in an experiment intended to prove once and for all that they can.
How can this happen, you ask? In what is called a “Turing test,” a machine (or group of machines) tries to fool a panel of judges into believing its responses could have come from a human. The test is conducted as a text-based conversation on any subject. If the computer’s responses are indistinguishable from those of a human, it has passed the Turing test and can be said to be “thinking,” according to an explanation in the Guardian.
Next week, six computer programs, or “artificial conversational entities” as they’re being called, will answer questions posed by human volunteers at the University of Reading in an attempt to become the first machine recognized as ‘thinking’ since IBM’s Deep Blue over a decade ago. The interrogators will hold separate, simultaneous text-based conversations, on any subject they choose, with the programs as well as with human respondents. After five minutes they will be asked to judge which is which. If they guess wrong, or are not sure, the program will have fooled them.
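For the curious, the judging protocol is simple enough to sketch in a few lines of Python. This is only an illustrative toy: the respondent functions and the judge below are invented placeholders, not anything actually used in the Reading experiment.

```python
import random

# Hypothetical stand-ins for the two hidden respondents; at Reading these would
# be a human volunteer and an "artificial conversational entity".
def human_reply(prompt):
    return "a human-typed answer to: " + prompt

def program_reply(prompt):
    return "a program-generated answer to: " + prompt

def run_trial(questions, judge_guess):
    """One judge questions two hidden respondents in parallel for ~5 minutes,
    then names which channel ("A" or "B") was the machine."""
    channels = {"A": human_reply, "B": program_reply}
    if random.random() < 0.5:                      # randomize which side is which
        channels = {"A": program_reply, "B": human_reply}
    transcript = {"A": [], "B": []}
    for q in questions:
        for label, respond in channels.items():
            transcript[label].append((q, respond(q)))
    guess = judge_guess(transcript)                # judge returns "A", "B", or "unsure"
    machine = "A" if channels["A"] is program_reply else "B"
    return guess != machine                        # wrong or unsure: the program fooled the judge

# Example: a judge who is always unsure is fooled every time.
print(run_trial(["What did you have for breakfast?"], lambda t: "unsure"))   # True
```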
Professor Kevin Warwick, a cyberneticist at the university hosting the test, said: “I would say now that machines are conscious, but in a machine-like way, just as you see a bat or a rat is conscious like a bat or rat, which is different from a human. I think the reason Alan Turing set this game up was that maybe to him consciousness was not that important; it’s more the appearance of it, and this test is an important aspect of appearance.”
Whatever you take from the experiment, it’s still pretty cool. Having a computer carry on a relevant conversation is something nobody really thought was possible. On the other hand, while a computer can hold a basic conversation, the computer itself has no idea what it’s really doing. What can we do with this type of technology? I think Professor AC Grayling of Birkbeck College summed up the circumstances well: “The test is misguided. Everyone thinks it’s you pitting yourself against a computer and a human, but it’s you pitting yourself against a computer and a computer programmer.”
Source: http://tech.blorge.com/
The test will determine how stupid the humans questioning the computer are, not whether the machine can successfully simulate thinking. Just ask the machine to identify itself, its education, social origin, knowledge of foreign languages, etc., and then ask questions to see whether it can respond in a way appropriate to its “habitus” or milieu. I don’t think anyone has been able to program a computer to do that.
I don’t think IBM claimed that Deep Blue was ‘thinking’
I think, therefore I am.
When Susan first launched the “chat” feature of ChessDiscussion.com, I logged on and someone had a program of this type you could chat with. I forget the name (it was female), but I was easily able to tell it was a program.
The prior reason to believe that a program can pass the Turing Test is the “Strong AI Hypothesis” that the human brain and mind is a computer program to begin with, implemented mainly in carbon rather than silicon. The information capacity of the brain and nervous system is estimated to be on the order of several terabytes, i.e. 1000s of gigabytes. For comparison:
(a) The set of Nalimov Tablebases for playing perfect chess with 6 or fewer pieces is about 1.1TB, i.e. 1100 GB, on the same order as the human brain. But they are “inert” rather than “active”—the information just sits there and is read. Moreover, their information content is much less, because the rules of chess and the Nalimov file format can be specified in a couple of pages, and then everything else follows. So though they have the size of a brain, they are nowhere near the content of a brain.
(b) Most large computer programs today take up a few hundred megabytes of memory, and have on the order of 10MB of active information content. ChessBase (recent versions) is in this range. Individual chess engines have active information content that’s 10 times smaller, and while you can set their hash tables for game play to be 10 times bigger, that doesn’t put them any closer to the content level of a brain.
(c) However, (b) is certainly an advance on the size of programs around 1980, when Bill Gates is famously said to have said, “640 kilobytes ought to be enough for anybody.” Since much of the 640K was needed for data structures like hash tables, it’s fair to say that the active information content of large programs today is 100–1000 times bigger than that of 25 years ago.
Since it takes “only” another 1000x factor to get into “brain range,” people like Ray Kurzweil are forecasting a “technological singularity” by 2030 to 2040. The scaling issue is that as programs get N times bigger, the difficulty of making them cohere and be bug-free grows faster than N. Thus putting programs on a human plane is believed to require enabling them to self-correct. On this I am skeptical, and I’m running a seminar this term on the simpler but nowhere-near-fulfilled task of helping humans express their programs’ correctness requirements in program code itself.
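For concreteness, here is a rough back-of-envelope check of these factors in Python. It uses only the round figures quoted above, plus one loudly labeled guess for how much of a 1980 program’s 640K was “active” content; treat it as a sanity check on orders of magnitude, nothing more.

```python
# Rough order-of-magnitude check of the size factors cited above. All figures
# are the round numbers from this comment, except active_1980, which is a guess.
KB, MB, TB = 10**3, 10**6, 10**12   # decimal units

brain         = 3 * TB      # "several terabytes" for the brain and nervous system
nalimov       = 1.1 * TB    # (a) the six-piece Nalimov tablebases
footprint_now = 300 * MB    # (b) memory footprint of a large program today
active_now    = 10 * MB     # (b) its "active" information content
active_1980   = 64 * KB     # guess: ~10% of the 640K ceiling was active code

print(nalimov / brain)           # ~0.37 -> same order of magnitude as the brain
print(active_now / active_1980)  # ~156  -> inside the 100-1000x growth band
print(brain / footprint_now)     # ~10,000 -> the remaining factor to "brain range"
```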
Some scientists disagree with the imminent ability of programs to simulate humans on stronger grounds than mine of complexity: they believe that the human mind is a fundamentally different kind of computer, if a “computer” at all. Roger Penrose’s 1989 book The Emperor’s New Mind argues that the brain incorporates quantum physics, which Peter Shor later showed can solve problems believed to be “infeasible” on standard computers. However, the specific brain hypothesis in his follow-up book is now generally regarded as refuted. Others disagree based on the nature of knowledge and mind (and soul) themselves, notably John Searle (see the strong-AI reference above) and J. R. Lucas, the latter of whom I knew personally at Oxford and whose arguments Penrose rehabilitated.
Ironically, while being equivalent to a “Universal Turing Machine” is seen as de-humanizing, our ability to carry out and appreciate universal TM computation strikes me as the clearest distinction between our cognitive capacity and that of animals, even the smartest apes. So I could subtitle the other course I’m teaching this term as, “How Not To Be An Ape” :-).
I do, however, regard developments in computation as taking force away from Descartes’ maxim “I think, therefore I am.” For a definition of personal identity I hence prefer “I remember myself, therefore I am” (which sounds nice in French as Je me souviens, donc je suis), in which the element of self-circularity is a feature not a bug.
Although KWRegan mentions it, I would like to emphasize that until science learns, specifically, how the human brain works, it won’t be possible to make computers actually think like humans. Thinking can be emulated, probably better and better as technology progresses, but making an electronic brain before we solve the human brain will not be possible.
Hello and thanks, Gabor—your comment prompts me to add to my remarks—at least for my courses, posting here rather than to our internal course forums.
First, my own statement of the “strong AI hypothesis” was stronger than that in the link I gave, which says only that computer programs can be written that [we judge to] think and have a mind of their own. This is possible even if the program works nothing like the brain, even if the human brain and mind [and soul] are “of a second substance”. This difference spans Gabor’s point—my selecting the stronger statement and expressing skepticism of Kurzweil et al. means I agree with Gabor.
Weak AI, however, sidesteps the question of mind and even thinking. It concerns expanding the range of “human” tasks that are performable by computers. For example, most humans in most cases can tell whether a US political blog comment comes from an Obama or McCain supporter. Can we train a machine to do this, much faster than we ever could, by “data-mining” blog comments via Google’s mapping of the Web? Can we also tell when a comment is “new” rather than a response to an earlier comment? If so, then I predict most such “proselytizing” comments would be found to come from right-wingers, because I believe an organized marketing campaign on the right sustains a much higher per-capita rate of emotional involvement than its liberal counterparts ever manage. At least that’s what I observe here…
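To make the weak-AI example concrete, here is a minimal sketch of the kind of bag-of-words classifier one might train for the Obama-vs-McCain task: a multinomial Naive Bayes model in Python. The training snippets are invented toy phrases, not real blog data, and a real system would of course need far more data and preprocessing.

```python
import math
from collections import Counter, defaultdict

# Toy labeled comments; invented placeholders, not real blog data.
training_data = [
    ("yes we can time for change and hope", "obama"),
    ("change we can believe in health care for all", "obama"),
    ("country first drill baby drill lower taxes", "mccain"),
    ("maverick straight talk strong on national security", "mccain"),
]

def train(data):
    """Count word frequencies per label for a multinomial Naive Bayes model."""
    word_counts = defaultdict(Counter)   # label -> bag-of-words counts
    label_counts = Counter()
    for text, label in data:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word|label),
    using add-one (Laplace) smoothing over the vocabulary."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(training_data)
print(classify("hope and change for health care", *model))   # -> "obama" on this toy data
```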
kwregan said:
This difference spans Gabor’s point—my selecting the stronger statement and expressing skepticism of Kurzweil et al. means I agree with Gabor.
Kurzweil was always a bit too optimistic. He is famous because some of his predictions came true, but by no means all of them did. Anyway, here I am suggesting a “true” Turing test:
Set up a computer, with the program but without any data, to accept and store any input the programmer desires, via real-time (non-accelerated) optical and auditory inputs (kind of like a newborn baby). If the computer is able to speak fluently, in any language, say 5 years later (any speech synthesizer is allowed), it may have gotten near to the human brain. This idea can be modified; it just popped into my mind.
Gabor, alas there is already a counterexample to your proposal.
——————————
Turing Test Questioner: “Have you ever been involved in any negotiations, for example, with the Russians?”
Respondent: We have trade missions back and forth, we do. It’s very important when you consider even national security issues with Russia. As Putin rears his head and comes into the air space of the United States of America, where do they go? It’s Alaska. It’s just right over the border. It is from Alaska that we send those out to make sure that an eye is being kept on this very powerful nation, Russia, because they are right there, they are right next to our state. (reference)
—————————
Given that the respondent has had not 5 but 44 years of the kind of optical and auditory inputs you postulate, it is possible for your experiment to re-create the experience of a human brain and yet not meet your condition of “fluently speaking”. Hence your particular methodology is not well-posed ;-> (that wink was supplied by the respondent).