Sunday, October 27, 2019
Are Computers Really Intelligent?
Computer intelligence has been hotly debated since the 1950s, when Alan Turing devised the Turing Test. Over the years the argument has taken two forms: strong AI versus weak AI. Strong AI hypothesises that some forms of artificial intelligence can truly reason and solve problems, with computers having an element of self-awareness, though not necessarily exhibiting human-like thought processes (http://en.wikipedia.org/wiki/Strong_AI). Weak AI argues that computers can only appear to think and are not actually conscious in the way that human brains are (http://www.philosophyonline.co.uk/pom/pom_functionalism_AI.htm). These positions raise fundamental questions, such as: 'Can a man-made artefact be conscious?' and 'What constitutes consciousness?'

Turing's 1948 and 1950 papers followed the construction of universal logical computing machines, introducing the prospect that computers could be programmed to execute tasks which would be called intelligent when performed by humans (Warner 1994: 118). Turing's idea was an imitation game on which to base the concept of a computer having its own intelligence. A man (A) and a woman (B) are separated from an interrogator, who has to work out who is the man and who is the woman. A's objective is to trick the interrogator, while B tries to help the interrogator discover the identities of the other two players (Goldkind 1987: 4). Turing asks: 'What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?' (quoted from Goldkind 1987: 4). Turing's test offered a simple means of testing for computer intelligence, one that neatly avoided dealing with the mind-body problem (Millican 1996: 11).
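The imitation game described above is, at bottom, a simple protocol: an interrogator questions two unseen players and must decide which is which. The sketch below is a toy illustration only; the player functions, canned answers, and question strings are all invented for the example, and a trivial rule-based responder stands in for the machine taking part A.

```python
import random

def machine_player(question):
    # The "machine" taking part A: a trivial rule-based responder.
    canned = {
        "Are you human?": "Of course I am.",
        "What is 2 + 2?": "4, obviously.",
    }
    return canned.get(question, "I'd rather not say.")

def human_player(question):
    # Stand-in for player B, who tries to help the interrogator.
    return "I am the human; my answers come from lived experience."

def play_round(question):
    # Answers are returned in random order, so the interrogator cannot
    # identify the players by position alone -- only by what they say.
    answers = [("A", machine_player(question)),
               ("B", human_player(question))]
    random.shuffle(answers)
    return answers

for label, answer in play_round("Are you human?"):
    print(f"Player {label}: {answer}")
```

The point of the game survives even in this toy form: the interrogator has nothing to go on except the text of the answers.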
The fact that Turing's test introduced no variables and was conducted in a controlled environment were just some of its shortfalls. Robert French, evaluating the test in 1996, stated: 'The philosophical claim translates elegantly into an operational definition of intelligence: whatever acts sufficiently intelligent is intelligent.' However, as he observed, the test failed to explore fundamental areas of human cognition, and could be passed 'only by things that have experienced the world as we have experienced it.' He thus concluded that 'the Test provides a guarantee not of intelligence but of culturally-oriented human intelligence' (ibid.: 12). Turing postulated that a machine would one day be created that passed his test and would thus be considered intelligent. However, as years of research have revealed the complexities of the human brain, the pioneer scientists who promoted the idea of the 'electronic brain' have had to re-scale their ambitions towards machines that assist human activity rather than challenge or equal our intelligence.

John Searle, in his 1980 Chinese Room thought experiment, argued that a computer could not be credited with the intelligence of a human brain because the processes are too different. In an interview he described the original experiment: 'Just imagine that you're the computer, and you're carrying out the steps in a program for something you don't understand. I don't understand Chinese, so I imagine I'm locked in a room shuffling Chinese symbols according to a computer program, and I can give the right answers to the right questions in Chinese, but all the same, I don't understand Chinese. All I'm doing is shuffling symbols. And now, and this is the crucial point: if I don't understand Chinese on the basis of implementing the program for understanding Chinese, then neither does any other digital computer on that basis, because no computer's got anything I don't have' (Free Inquiry 1998: 39).
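Searle's "rule book" can be made concrete in a few lines. The sketch below is purely illustrative: the Chinese phrases and the replies paired with them are invented for the example, and the whole program is nothing but a lookup table. It produces plausible answers while consulting no meaning at all, which is exactly the point of the thought experiment.

```python
# Searle's room as a lookup table: incoming symbol strings are matched
# against rules and the prescribed output is returned. Nothing in this
# program understands Chinese; it only shuffles symbols.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗?": "当然懂.",      # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(symbols):
    # Return the rule book's answer, or a stock reply for unknown input.
    return RULE_BOOK.get(symbols, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(chinese_room("你懂中文吗?"))  # the room claims understanding it does not have
```

A real conversational program is vastly larger, but Searle's claim is that scale changes nothing: however big the rule book, it is still only symbol manipulation.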
Searle does not believe that consciousness can be reproduced to an equivalent of human capacity; instead, it is biological processes that are responsible for our unique make-up. He says that 'consciousness is a biological phenomenon like any other and ultimately our understanding of it is most likely to come through biological investigation' (Searle 1990: 58-59, quoted from McCarthy 2001, http://www-formal.stanford.edu/jmc/). Considered this way, it is indeed far-fetched to think that the product of millions of years of biological adaptation can be equalled by the product of a few decades of human thinking. John McCarthy, Professor Emeritus of Computer Science at Stanford University, advocates the potential for computational systems to reproduce a state of consciousness, viewing the latter as an 'abstract phenomenon, currently best realized in biology,' but arguing that consciousness can be realised by 'causal systems of the right structure' (McCarthy 2001, http://www-formal.stanford.edu/jmc/).

The famous defeat of the world chess champion Garry Kasparov in 1997 by IBM's computer Deep Blue prompted a flurry of debate about whether Deep Blue could be considered intelligent. Asked for his opinion, Herbert Simon, a Carnegie Mellon psychology professor who helped originate the fields of AI and computer chess in the 1950s, said it depended on the definition of intelligence used. AI uses two definitions of intelligence: what are the tasks which, when done by humans, lead us to impute intelligence? And what are the processes humans use to act intelligently? Measured against the first definition, Simon says, Deep Blue certainly is intelligent (http://whyfiles.org/040chess/main3.html); against the second, he claims it partly qualifies (ibid.). The trouble with the latter definition is that scientists do not yet know exactly what mechanisms constitute consciousness.
McCarthy explains that intelligence is the 'computational part of the ability to attain goals in the world.' He emphasises that problems in AI arise because 'we cannot yet characterise in general what computational procedures we want to call intelligent' (McCarthy 2003: 3). To date, computers can demonstrate a good grasp of specific mechanisms through the running of certain programs; what McCarthy deems 'somewhat intelligent' (McCarthy 2004: 3). Programming languages have made leaps and bounds, from the first machine code through mnemonic 'words' to the so-called high-level languages, with Fortran being the first compiler language. Considering the rapid progress of computer technology since its beginnings over a hundred years ago, it is likely that unforeseeable developments will occur over the next decade. A simulation of the human imagination might go a long way towards convincing people of computer intelligence. However, many believe it is unlikely that a machine will ever equal the intelligence of the beings who created it.

Arguably it is the way that computers process information, and the speed with which they do it, that constitutes their intelligence, causing computer performance to appear more impressive than it really is. Programs trace pathways at an amazing rate: each move in a game of chess, or each section of a maze, can be evaluated almost instantly. Yet the relatively simple process of trying each potential path fails to impress once it is understood (Reed 2003: B09). Thus the intelligence is not in the computer but in the program. For practical purposes, and certainly in the business world, the answer seems to be that if it seems to be intelligent, it doesn't matter whether it really is (Reed 2003: B09).
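The 'trying each potential path' that Reed describes can be written down in a dozen lines. The sketch below is a minimal illustration, not any particular program's method; the grid and coordinates are invented for the example. It solves a small maze by depth-first search, mechanically trying every open direction and backing up at dead ends.

```python
def solve(maze, pos, goal, path=None):
    # maze: list of strings, '#' marks a wall; pos and goal are (row, col).
    path = (path or []) + [pos]
    if pos == goal:
        return path
    r, c = pos
    # Try every neighbouring cell in turn: down, up, right, left.
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                and maze[nr][nc] != '#' and (nr, nc) not in path):
            found = solve(maze, (nr, nc), goal, path)
            if found:
                return found
    return None  # dead end: back up and try the next branch

MAZE = [".#.",
        ".#.",
        "..."]
print(solve(MAZE, (0, 0), (0, 2)))
```

The speed is real, but the method is exhaustive trial, which is the article's point: the impressiveness fades once the simplicity of the procedure is understood.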
However, computational research faces a difficult task in exploring the simulation, or emulation, of the areas of human cognition. Research continues into the relationship between mathematical descriptions of human thought and computer thought, in the hope of creating an identical form (Wagman 1991: 2). Yet the limits of computer intelligence are still very much at the surface of the technology. In contrast, the flexibility of the human imagination that created the computer has little or no limitation. What does this mean for computer intelligence? It means that scientists need to go beyond the mechanisms of the human psyche, and perhaps beyond programming, if they are to identify a type of machine consciousness that would correlate with that of a human.

References

Free Inquiry. Council for Democratic and Secular Humanism. Volume 18, Issue 4, p. 39+.
Goldkind, J., 1987, Machines and Intelligence: A Critique of Arguments against the Possibility of Artificial Intelligence. New York: Greenwood Press.
McCarthy, J., 2001, 'What is Artificial Intelligence?' Available online from: http://www-formal.stanford.edu/jmc/ [Accessed 14/11/06]
Millican, P.J.R., 1996, The Legacy of Alan Turing (Volume 1). Oxford: Clarendon Press.
Online Encyclopedia. Available online from: http://en.wikipedia.org/wiki/Strong_AI [Accessed 17/11/06]
Reed, F., 2003, 'Artificial Intellect Really Thinking?' The Washington Times, May 1, 2003, p. B09.
Wagman, M., 1991, Artificial Intelligence and Human Cognition: A Theoretical Intercomparison of Two Realms of Intellect. New York: Praeger.
Warner, J., 1994, From Writing to Computers. New York: Routledge.

URLs

http://www.philosophyonline.co.uk/pom/pom_functionalism_AI.htm [Accessed 17/11/06]
http://whyfiles.org/040chess/main3.html [Accessed 14/11/06]

Further Reading

DeLancey, C., 2002, Passionate Engines: What Emotions Reveal about Mind and Artificial Intelligence. New York: Oxford University Press.
Wagman, M., 2000, Scientific Discovery Processes in Humans and Computers: Theory and Research in Psychology and Artificial Intelligence. Westport, CT: Praeger.