By Dr Richard Benjamins, VP for External Positioning and Big Data for Social Good at LUCA.
Artificial Intelligence is a hot topic at the moment. We are definitely living in an AI summer, as opposed to the AI winter of the 1970s, when AI research suffered a decline in interest and funding due to undelivered expectations. Today, AI is back, and chatbots in particular are at the centre of every analyst’s attention.
Facebook recently launched a platform for developing chatbots, Google launched Allo, IBM has Watson, and there are of course Siri and Cortana. There are also hundreds of start-ups building their own chatbots, as you can see in this post from Venture Radar.
Chatbots are able to hold conversations with people in a relatively “natural way”. The business promise of chatbots is that they are able to automate human interaction, which is one of the biggest cost factors to organizations (for example, in customer service).
So what’s the history of AI? The first of what is now called a “chatbot” was ELIZA, a computer program written by Joseph Weizenbaum at the MIT AI Lab between 1964 and 1966. ELIZA simulated a Rogerian psychotherapist with which people interacted by typing. ELIZA was able to fool many people into believing that they were speaking with a real person rather than a computer program. This also generated one of the first discussions on passing the Turing Test: building a computer program whose output humans judge as coming from another human. ELIZA has been implemented thousands of times by students of AI courses (including myself), and there are still online implementations available. But how does ELIZA work?
Figure 1: Example of a conversation with ELIZA
Basically, ELIZA is a rule-based system using pattern matching. The program reads the input from the command line and parses the sentence looking for relevant keywords. When it finds a keyword, it plays back an appropriate answer to the user, often in the form of a new question (the Rogerian approach), and the cycle repeats. When ELIZA cannot make sense of the input, it returns a general answer such as “Tell me more about X” (where X matches a word from the user’s input) or “What do you mean by that?”. Moreover, ELIZA stores several alternative formulations of the same answer, so it doesn’t repeat itself all the time.
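This keyword-and-playback loop can be sketched in a few lines of Python. This is an illustrative sketch, not Weizenbaum’s original program; the keyword patterns and replies below are invented examples:

```python
import random
import re

# Each rule pairs a keyword pattern with several alternative replies;
# "%s" is filled with the text captured after the keyword, echoing
# the user's own words back in Rogerian style.
RULES = [
    (re.compile(r"\bI am (.*)", re.I),
     ["Why do you say you are %s?", "How long have you been %s?"]),
    (re.compile(r"\bI feel (.*)", re.I),
     ["Tell me more about feeling %s.", "Do you often feel %s?"]),
]

# General answers used when no keyword matches the input.
FALLBACKS = ["Please, tell me more.", "What do you mean by that?"]

def respond(user_input: str) -> str:
    """Scan the input for a keyword pattern and play back a reply."""
    for pattern, answers in RULES:
        match = pattern.search(user_input)
        if match:
            # Several alternative formulations are stored for each
            # keyword, so the program does not repeat itself.
            template = random.choice(answers)
            return template % match.group(1).rstrip(".!?")
    # No keyword matched: return a general answer.
    return random.choice(FALLBACKS)
```

For example, `respond("I am sad")` plays back a question ending in “sad?”, while input with no matching keyword gets one of the general fallbacks.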
The goal of Rogerian Therapy is to provide clients with an opportunity to develop a sense of self through which they can realize how their attitudes, feelings and behaviour are being negatively affected. In this sense, ELIZA just “listens” and plays back questions to let the user say more. After all, in today’s society, aren’t many of us often longing for someone who simply listens?
Below, you can see some of the code for when the user inputs something about his or her dreams. The code is written in Prolog, a high-level programming language historically associated with Artificial Intelligence:
```prolog
rules([[dreamt,4],[
  [1,[_,you,dreamt,Y],0,
    [really,',',Y,?],
    [have,you,ever,fantasied,Y,while,you,were,awake,?],
    [have,you,dreamt,Y,before,?],
    [equal,[dream,3]],
    [newkey]]]]).

rules([[dream,3],[
  [1,[_],0,
    [what,does,that,dream,suggest,to,you,?],
    [do,you,dream,often,?],
    [what,persons,appear,in,your,dreams,?],
    [do,you,believe,that,dreaming,has,something,to,do,with,your,problem,?],
    [newkey]]]]).
```
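To unpack the Prolog structure above: each rule pairs a keyword and its priority (dreamt, 4) with a decomposition pattern ([_,you,dreamt,Y]) and a list of reassembly answers tried in turn; [equal,[dream,3]] chains to the dream keyword, and [newkey] falls through to the next keyword. A rough, hypothetical Python rendering of the “dreamt” rule (the names below are my own, not from the original code) might be:

```python
from itertools import cycle

# Hypothetical rendering of the "dreamt" rule: the keyword carries a
# priority (4); the decomposition pattern captures everything after
# "you dreamt"; the reassembly templates are cycled through so that
# consecutive answers differ.
DREAMT_RULE = {
    "keyword": "dreamt",
    "priority": 4,
    "reassembly": cycle([
        "really, {0}?",
        "have you ever fantasied {0} while you were awake?",
        "have you dreamt {0} before?",
    ]),
}

def apply_dreamt(words):
    """Match '... you dreamt Y' and play back the captured tail Y."""
    if "you" in words:
        i = words.index("you")
        if i + 1 < len(words) and words[i + 1] == "dreamt":
            tail = " ".join(words[i + 2:])  # this is the variable Y
            return next(DREAMT_RULE["reassembly"]).format(tail)
    return None  # no match: ELIZA would fall through to [newkey]
```

Calling `apply_dreamt("last night you dreamt about flying".split())` fills the first template with the captured tail, giving “really, about flying?”; a second matching call would use the next template in the cycle.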
And if you want to listen to an extended conversation with ELIZA, check out this video:
After ELIZA, many other attempts were made to write computer programs capable of performing human tasks that require intelligence, for example MYCIN for diagnosing bacterial infections such as meningitis, and DENDRAL for analyzing organic compounds.
The problem with these early AI systems was that they only had shallow knowledge: either the relevant knowledge was captured in the rule base, or the system didn’t know what to do. This phenomenon was referred to as the “brittleness” of AI systems. AI systems were brittle compared with robust human intelligence: ask a person something at the edge of a certain domain, and he or she will still be able to give a reasonable answer. Computers weren’t able to do the same.
Later attempts tried to deal with this issue in part by including so-called “deep knowledge” in the knowledge base. With such knowledge, an AI system remained capable of some reasoning even when the subject fell outside the system’s direct scope. A seminal article on this subject was Randall Davis’s “Reasoning from first principles in electronic troubleshooting”, published in 1983, which tried to encode some understanding of how devices work and to draw on that knowledge when solving unfamiliar problems.
Real Artificial Intelligence, however, requires much more and has to include abilities such as Reasoning, Knowledge Representation, Planning, Natural Language Processing, Perception, and General Intelligence. Technology has changed and improved enormously since those early attempts, and new AI tools like Siri and Watson are streets ahead of ELIZA or MYCIN. However, there is still a long way to go for AIs to exhibit real human-like intelligence. We can all keep our jobs in the meantime.