How “intelligent” can Artificial Intelligence get?

Richard Benjamins    13 December, 2016

This post is the second in a series of three posts, each of which discusses the fundamental concepts of Artificial Intelligence. In our first post we discussed AI definitions, helping our readers to understand the basic concepts behind AI, giving them the tools required to sift through the many AI articles out there and form their own opinion. In this second post, we will discuss several notions which are important in understanding the limits of AI.

Strong and weak AI

When we speak about how far AI can go, there are two "philosophies": strong AI and weak AI. The most commonly followed philosophy is that of weak AI, which holds that machines can manifest certain intelligent behavior to solve specific (hard) tasks, but that they will never equal the human mind. Proponents of strong AI, however, believe that it is indeed possible. The difference hinges on the distinction between simulating a mind and actually having a mind. In the words of John Searle, "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind."

The Turing Test

Figure 2. The setup of the original Turing Test.

The Turing Test was proposed by Alan Turing in 1950 and was designed to evaluate the intelligence of a computer holding a conversation with a human. The human cannot see the computer and interacts with it through an interface (at that time by typing on a keyboard with a screen). In the test, a person asks questions and either another person or a computer program responds. There are no limitations as to what the conversation can be about. The computer passes the test if the person cannot distinguish whether the answers come from the computer or from the other person.
ELIZA was the first program to challenge the Turing Test, even though it unquestionably failed. A modern take on the Turing Test was featured in the 2015 movie Ex Machina. So far, no computer or machine has passed the test.

The Chinese Room Argument

A very interesting thought experiment in the context of the Turing Test is the so-called "Chinese Room Experiment", devised by John Searle in 1980. This experiment argues that a program can never give a computer the ability to really "understand", regardless of how human-like or intelligent its behavior is. It goes as follows: imagine you are inside a closed room with a door. Outside the room there is a Chinese person who slips a note with Chinese characters under the door. You pick up the note and follow the instructions in a large book that tells you exactly, for the symbols on the note, which symbols to write down on a blank piece of paper. You follow the instructions in the book, processing each symbol from the note, and you produce a new note, which you slip under the door. The note is picked up by the Chinese person, who perfectly understands what is written, writes back, and the whole process starts again, meaning that a real conversation is taking place.
Figure 3. The Chinese Room mental experiment. Does the person in the room understand Chinese?

The key question here is whether you understand the Chinese language. What you have done is receive an input note and follow instructions to produce the output, without understanding anything about Chinese. The argument is that a computer can never understand what it does, because, like you, it just executes the instructions of a software program. The point Searle wanted to make is that even if the behavior of a machine seems intelligent, it will never be really intelligent. And as such, Searle claimed that the Turing Test was invalid.
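To make the mechanism concrete, here is a minimal sketch of the room as a program (in Python; the symbol pairs are made up for illustration). The "rule book" is a pure lookup table from input symbols to output symbols, and meaning is represented nowhere:

```python
# The "rule book": a lookup table mapping input notes to output notes.
# The symbol pairs are invented for illustration.
RULE_BOOK = {
    "你好吗": "我很好",
    "再见": "再见",
}

def room_occupant(note: str) -> str:
    """Follow the book's instructions without understanding the symbols."""
    return RULE_BOOK.get(note, "???")  # no matching rule: give up

print(room_occupant("你好吗"))  # prints a fluent-looking reply: 我很好
```

The program, like the person in the room, manipulates symbols it does not understand; the fluency of the output says nothing about understanding.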

The Intentional Stance

Related to the Turing Test and the Chinese Room argument, the Intentional Stance, coined by philosopher Daniel Dennett in the seventies, is also of relevance for this discussion. The Intentional Stance means that "intelligent behavior" of machines is not a consequence of how machines come to manifest that behavior (whether it is you following instructions in the Chinese Room or a computer following program instructions). Rather, it is an effect of people attributing intelligence to a machine because the behavior they observe would require intelligence if a person did it. A very simple example is that we say that our personal computer is "thinking" when it takes more time than we expect to perform an action. The fact that ELIZA was able to fool some people refers to the same phenomenon: due to the reasonable answers that ELIZA sometimes gives, people assume it must have some intelligence. But we know that ELIZA is a simple pattern-matching, rule-based algorithm with no understanding whatsoever of the conversation it is engaging in. The more sophisticated software becomes, the more likely we are to attribute intelligence to that software. From the Intentional Stance perspective, people attribute intelligence to machines when they recognize intelligent behavior in them.
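For a sense of how little machinery is needed to trigger that attribution, here is a toy ELIZA-style responder, a minimal sketch with a few invented rules rather than Weizenbaum's original script:

```python
import re

# A few invented pattern/response rules; matching and substitution is all
# there is, with no model of what the conversation means.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # default when nothing matches

print(respond("I am worried about my exam"))
# -> Why do you say you are worried about my exam?
```

Reasonable-looking replies fall out of a handful of string rules, which is exactly why observers were tempted to attribute intelligence to the program.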

To what extent can machines have "general intelligence"?

One of the main aspects of human intelligence is that we have a general intelligence which always works to some extent. Even if we don't have much knowledge about a specific domain, we are still able to make sense out of situations and communicate about them. Computers are usually programmed for specific tasks, such as planning a space trip or diagnosing a specific type of cancer. Within the scope of the subject, computers can exhibit a high degree of knowledge and intelligence, but performance degrades rapidly outside that specific scope.
In AI, this phenomenon is called brittleness (as opposed to graceful degradation, which is how humans perform). Computer programs perform very well in the areas they are designed for, outperforming humans, but don't perform well outside of that specific domain. This is one of the main reasons why it is so difficult to pass the Turing Test, as this would require the computer to be able to "fool" the human tester in any conversation, regardless of the subject area.

In the history of AI, several attempts have been made to solve the brittleness problem. The first expert systems were based on the rule-based paradigm, representing associations of the type "if X and Y then Z; if Z then A and B", etc. For example, in the area of car diagnostics: if the car doesn't start, then the battery may be flat or the starter motor may be broken. In this case, the expert system would ask the user (who has the problem) to check the battery or the starter motor.
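A minimal sketch of such a rule engine (the rules and observations are invented for illustration, not taken from a real diagnostic system):

```python
# Rules of the form: if all conditions hold, conclude the consequent.
RULES = [
    ({"car does not start", "lights are dim"}, "battery is flat"),
    ({"car does not start", "clicking noise"}, "starter motor is broken"),
]

def diagnose(observations):
    """Forward-chain over the rules: fire every rule whose conditions
    are all confirmed by the user's observations."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observations]

# In a real expert system the program would ask the user to confirm each
# observation; here we supply the answers directly.
print(diagnose({"car does not start", "lights are dim"}))
# -> ['battery is flat']
```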
The computer drives the conversation with the user to confirm observations, and based on the answers, the rule engine leads to the solution of the problem. This type of reasoning was called heuristic or shallow reasoning. However, the program doesn't have any deeper understanding of how a car works; it only has the knowledge embedded in the rules, and cannot reflect on this knowledge. Based on the experience of those limitations, researchers started thinking about ways to equip a computer with more profound knowledge so that it could still perform (to some extent) even if the specific knowledge was not fully coded. This capability was coined "deep reasoning" or "model-based reasoning", and a new generation of AI systems emerged, called "Knowledge-Based Systems".
In addition to specific association rules about the domain, such systems have an explicit model about the subject domain. If the domain is a car, then the model would represent a structural model of the parts of a car and their connections, and a functional model of how the different parts work together to represent the behavior of the car. In the case of the medical domain, the model would represent the structure of the part of the body involved and a functional model of how it works. With such models the computer can reason about the domain and come to specific conclusions, or can conclude that it doesn't know the answer.
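As an illustration, a minimal sketch of what reasoning over such a model might look like for the car example (the components and their dependencies are invented):

```python
# A toy structural/functional model: each component lists the components
# it depends on in order to work.
DEPENDS_ON = {
    "engine": ["starter_motor", "fuel_pump"],
    "starter_motor": ["battery"],
    "fuel_pump": ["battery"],
    "battery": [],
}

def possible_causes(broken):
    """Reason over the model: any component the broken part depends on,
    directly or transitively, could explain the failure."""
    causes = set()
    stack = list(DEPENDS_ON.get(broken, []))
    while stack:
        part = stack.pop()
        if part not in causes:
            causes.add(part)
            stack.extend(DEPENDS_ON.get(part, []))
    return causes

# If the engine does not run, the suspects follow from the model itself,
# rather than from pre-coded if-then associations:
print(possible_causes("engine"))  # {'starter_motor', 'fuel_pump', 'battery'}
```

Unlike the rule-based version, nothing here enumerates failure symptoms in advance; new conclusions follow from the structure of the model.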
The more profound the model a computer can reason about, the less superficial the system becomes and the more it approaches the notion of general intelligence. There are two additional important aspects of general intelligence where humans excel compared to computers: qualitative reasoning and reflective reasoning.
Figure 4. Both qualitative reasoning and reflective reasoning differentiate us from computers.

Qualitative reasoning

Qualitative reasoning refers to the ability to reason about continuous aspects of the physical world, such as space, time, and quantity, for the purpose of problem solving and planning. Computers usually calculate things in a quantitative manner, while humans often use a more qualitative way of reasoning (if X increases, then Y also increases, thus …). The qualitative reasoning area of AI studies formalisms and processes that enable a computer to perform such qualitative reasoning steps.
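A minimal sketch of this style of reasoning, computing only with the signs of quantities rather than their numeric values (the tank scenario is invented; the propagation rule is standard sign algebra):

```python
# Qualitative values: a quantity is increasing (+1), steady (0),
# or decreasing (-1); no actual numbers are ever computed.
INC, STEADY, DEC = +1, 0, -1

def q_add(a, b):
    """Sign of a sum of influences: if they agree (or one is steady) the
    result is determined; opposing influences are ambiguous (None)."""
    if a == STEADY:
        return b
    if b == STEADY:
        return a
    return a if a == b else None

# Invented scenario: inflow raises a tank's level, outflow lowers it.
inflow, outflow = INC, STEADY
level_change = q_add(inflow, -outflow)  # outflow influences the level negatively
print(level_change)  # +1: the level is rising
```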

Reflective reasoning

Another important aspect of general intelligence is reflective reasoning. During problem solving, people are able to take a step back and reflect on their own problem-solving process, for instance, if they find a dead end and need to backtrack to try another approach. Computers usually just execute a fixed sequence of steps which the programmer has coded, with no ability to reflect on the steps they take. To enable a computer to reflect on its own reasoning process, it needs to have knowledge about itself: some kind of meta-knowledge. For my PhD research, I built an AI program for diagnostic reasoning that was able to reflect on its own reasoning process and select the optimal method depending on the context of the situation.
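A minimal sketch of the idea (not the actual PhD system, whose details are not given here): a solver that consults meta-knowledge about its own methods, skips those whose prerequisites fail, and falls back when a method reports a dead end:

```python
# Meta-knowledge: what each (hypothetical) diagnostic method requires.
METHODS = [
    {"name": "model_based", "needs_model": True},
    {"name": "rule_based", "needs_model": False},
]

def run(name, problem):
    """Stand-in for actually executing a diagnostic method."""
    return f"diagnosis via {name}" if problem.get("solvable") else None

def solve(problem):
    """Reflect before acting: choose a method from its meta-description,
    and backtrack to the next one when it fails."""
    for method in METHODS:
        if method["needs_model"] and not problem.get("model_available"):
            continue  # prerequisite not met: skip this method
        result = run(method["name"], problem)
        if result is not None:
            return result
    return "all methods exhausted"

print(solve({"model_available": False, "solvable": True}))
# -> diagnosis via rule_based
```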

Conclusion

Having explained the above concepts, it should be somewhat clearer that there is no concrete answer to the question posed in the title of this post; it depends on what one wants to believe and accept. By reading this series, you will have learned some basic concepts which will enable you to feel more comfortable talking about the rapidly growing world of AI. The third and last post will discuss the question of whether machines can think, or whether humans are indeed machines. Stay tuned and visit our blog soon to find out.
