Can a Computer Have a Mind?
The question of whether a mechanical device could ever be said to think – to experience feelings, or to have a mind – is not really a new one. But it has been given a new impetus, even an urgency, by the advent of modern computer technology.
The question touches upon deep issues of philosophy. What does it mean to think or to feel? What is a mind? Do minds really exist? Assuming that they do, to what extent are minds functionally dependent upon the physical structures with which they are associated? Might minds be able to exist quite independently of such structures? Or are they simply the functionings of physical structures? In any case, must the relevant structures be biological in nature (brains), or might minds equally well be associated with pieces of electronic equipment? These are among the issues I shall be attempting to address in this article.
Consider a hypothetical case in which a new model of computer has come on the market, with a memory store and a number of arithmetic and logical units in excess of those of a human brain. Suppose also that these computers have been carefully programmed and fed with great quantities of data of an appropriate kind. The manufacturers claim that the devices actually think. They also claim that the devices are genuinely intelligent. They may go further and suggest that the devices actually feel – pleasure, pain, compassion, pride, and so on – and that they are aware of, and actually understand, what they are doing. Indeed, the claim seems to be that they are conscious.
How are we to tell whether or not the manufacturers’ claims are to be believed? Ordinarily, when we purchase a piece of machinery, we judge its worth solely according to the service it provides us. If it satisfactorily performs the task we set it, then we are well pleased. To test the manufacturers’ claim that such a device actually has the asserted human attributes we would, according to this criterion, simply ask that it behave as a human being would in these respects. In other words, we ask the computer to produce human-like answers to any question that we may care to put to it, and if it answers our questions in a way indistinguishable from a human being, then the claim that it indeed thinks (or feels, understands, etc.) is satisfied.
To verify this claim, the computer, together with some human volunteer, are both hidden from the view of some (perceptive) interrogator. The interrogator has to try to decide which of the two is the computer and which is the human being merely by putting probing questions to each of them. These questions, and more importantly the answers that the interrogator receives, are all transmitted in an impersonal fashion, say typed on a keyboard and displayed on a screen. The interrogator is allowed no information about either party other than what is obtained from this question-and-answer session. The human subject answers the questions truthfully and tries to persuade the interrogator that he is indeed the human being; the computer, meanwhile, has been programmed to ‘lie’ so as to try to convince the interrogator that it is the human being. If in the course of a series of such tests the interrogator is unable to identify the real human subject in any consistent way, then the computer is deemed to have passed the test.
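The procedure just described can be sketched as a short simulation. Everything here is hypothetical scaffolding for illustration only: the `human`, `machine` and `judge` callables stand in for the two hidden subjects and the interrogator's verdict.

```python
import random

def imitation_game(questions, human, machine, judge, rng=random):
    """One blind question-and-answer session, as described above.

    `human` and `machine` map a question to a typed answer; `judge`
    receives only the anonymised transcript and names the label
    ('A' or 'B') it believes belongs to the human being.
    """
    # Hide the two subjects behind randomly assigned labels.
    pair = [("human", human), ("machine", machine)]
    rng.shuffle(pair)
    hidden = dict(zip("AB", pair))

    # Every exchange is transmitted impersonally: label, question, answer.
    transcript = [(label, q, answerer(q))
                  for q in questions
                  for label, (_, answerer) in hidden.items()]

    verdict = judge(transcript)
    return hidden[verdict][0] == "human"  # True iff the interrogator was right
```

If the machine's answers are genuinely indistinguishable from the human's, the judge can do no better than chance over many such sessions, and that is precisely the criterion for passing the test.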
Now, it might be argued that this test is actually quite unfair to the computer. For if the roles were reversed, so that the human subject were asked to pretend to be a computer and the computer instead to answer truthfully, then it would be only too easy for the interrogator to find out which is which. All that he would need to do would be to ask each subject to perform some very complicated arithmetical calculation. A good computer should be able to answer accurately at once, but a human would be easily stumped. (One might have to be a little careful about this, however. There are human ‘calculating prodigies’ who can perform very remarkable feats of mental arithmetic with unfailing accuracy and apparent effortlessness. For example, Tathagat Avtar Tulsi, a PhD student in the department of physics, Indian Institute of Science, is able to multiply any two random numbers in less than a minute.)
Thus, part of the task for the computer’s programmers is to make the computer appear to be ‘stupider’ than it actually is in certain respects. If the interrogator were to ask the computer a complicated arithmetical question, the computer must pretend to be unable to answer it; and though this may give an eerie impression that the computer has some understanding, in fact it has none, and is merely following some simple mechanical rules. Making the computer ‘stupider’ in this way is not a serious problem for its programmers. The main difficulty is to make it answer some of the simplest ‘common sense’ types of question – questions that the human subject would have no difficulty with whatever!
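As a toy illustration of this ‘playing stupid’ rule-following (entirely hypothetical: the question format, the difficulty threshold and the feigned delay are all invented for the sketch), a program might deliberately hesitate over, or even fumble, a sum it could do instantly:

```python
import random
import re
import time

def humanlike_arithmetic(question, rng=random):
    """Answer 'What is A x B?' the way a machine pretending to be
    human might: instantly for easy sums, evasively for hard ones."""
    m = re.match(r"what is (\d+) x (\d+)\?", question.strip().lower())
    if not m:
        return "Sorry, I don't follow."
    a, b = int(m.group(1)), int(m.group(2))
    exact = a * b                     # the machine knows this at once

    if len(str(a)) > 3 or len(str(b)) > 3:
        # A question too hard for a typical human: feign a long
        # struggle, then plead failure or slip in a small error.
        time.sleep(rng.uniform(5, 15))
        if rng.random() < 0.5:
            return "I'd need pencil and paper for that one."
        return str(exact + rng.choice([-10, -1, 1, 10]))

    return str(exact)                 # easy enough to answer honestly
```

The point, exactly as above, is that such rules create only an appearance of human limitation; the program has no understanding of why the hard sums are hard.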
Chess-playing computers probably provide the best examples of such blunders – mistakes of the simplest ‘common sense’ kind that a human player would never make. In the figure below, which represents a chess board, “OO” stands for an empty square, “BK” is the Black King, “BR” a Black Rook, “BB” a Black Bishop and “BP” a Black Pawn (all the queens and knights are gone); the pieces marked with “W” are the white pieces, which are all pawns (“WP”) apart from the White King (“WK”).
OO OO OO OO OO OO BK BR
OO OO OO OO OO BB OO BP
OO OO BP OO OO OO BP WP
BR BP WP BP OO BP WP OO
BP WP OO WP BP WP OO OO
WP OO OO OO WP OO OO OO
OO WK OO OO OO OO OO OO
OO OO OO OO OO OO OO OO
The situation here is that the white pawns form an impenetrable barrier for their king, boxing in all the black pieces so that they cannot make any moves to take any of the white pieces. Even though the black side has the more powerful pieces left (two rooks and a bishop), there is no way for them to get through the white pawns and place the white king in checkmate – they cannot take any of the white pawns – and the white king is safe as long as he remains behind the pawns, moving around freely. (Well, yes, it is only a draw, but that is better than utter defeat.) But the Deep Thought computer***, playing the white side, did not grasp this. Instead, it took the black rook, thus opening up the barrier of pawns, and the game was hopelessly lost from there.
It is worth remarking that chess-playing machines fare better on the whole, relative to comparable human players, when the moves are required to be made very quickly; the human players perform relatively better against the machines when a good measure of time is allowed for each move (please refer to the earlier article entitled “Are Computer Games Really Conscious” for a detailed mathematical treatment). This is because the computer’s decisions are made on the basis of precise and rapid extended computations, whereas the human player takes advantage of ‘judgements’ that rely upon comparatively slow conscious assessments. These human judgements serve to cut down drastically the number of serious possibilities that need be considered at each stage of calculation, and when time is available, much greater depth can be achieved in the analysis than by a machine simply calculating and directly eliminating possibilities without the benefit of such judgements.
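The arithmetic behind this trade-off can be made concrete with a toy node count (a sketch only: the branching factor of 30 and the three-candidate ‘judgement’ are illustrative numbers, not a model of any actual chess program):

```python
def count_nodes(branching, depth, keep=None):
    """Positions examined by a depth-limited game-tree search.

    With keep=None the search is full-width, examining every legal
    move, as a brute-force machine does; with keep=k a 'judgement'
    discards all but the k most promising moves at each position,
    as a strong human player does.
    """
    if depth == 0:
        return 1
    width = branching if keep is None else min(keep, branching)
    return 1 + width * count_nodes(branching, depth - 1, keep)

# With about 30 legal moves per position, full-width search to a mere
# 4 plies already examines far more positions than a judgement-guided
# search keeping 3 candidates per position and looking twice as deep:
brute = count_nodes(branching=30, depth=4)           # 837,931 positions
judged = count_nodes(branching=30, depth=8, keep=3)  # 9,841 positions
```

Deeper analysis for a small fraction of the examined positions: that is the advantage which the human’s slow, conscious assessments buy.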
The essential point here is that the quality of human judgement and understanding, which springs from consciousness, is precisely what the computer lacks.
O son of Bharata, as the sun alone illuminates all this universe, so does the soul, one within the body, illuminate the entire body by consciousness – Lord Krsna [Bg 13.34]
***Deep Thought, programmed by Hsiung Hsu of Carnegie Mellon University, has a rating of about 2500 Elo, and recently achieved the remarkable feat of sharing first prize (with Grandmaster Tony Miles) in a chess tournament (Long Beach, California, November 1988), actually defeating a Grandmaster (Bent Larsen) for the first time!