Can machines think? Or are humans machines?

Richard Benjamins    20 December, 2016
This is the last in a series of three posts about some fundamental notions of AI. The objective of this series is to equip readers with sufficient understanding of where AI comes from, so they can form their own judgement when reading about the hype around AI. If you missed either of the two previous posts, you can read the first one, about what Artificial Intelligence is, here, and the second one, on how "intelligent" Artificial Intelligence can get, here.

Symbolic vs non-symbolic AI

This dimension for understanding AI refers to how a computer program reaches its conclusions. Symbolic AI refers to the fact that all steps are based on "symbolic", human-readable representations of the problem, which are manipulated using logic and search to find solutions. Expert Systems are a typical example of symbolic AI, as the knowledge is encoded in IF-THEN rules which are understandable by people. NLP systems which use grammars to parse language are also symbolic AI systems; here the symbolic representation is the grammar of the language. The main advantage of symbolic AI is that the reasoning process can be understood by people, which is a very important factor when taking important decisions. A symbolic AI program can explain why a certain conclusion is reached and what the intermediate reasoning steps have been. This is key for AI systems that give advice on medical diagnosis: if doctors cannot understand why an AI system comes to its conclusion, it is harder for them to accept the advice.
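To make this concrete, here is a minimal sketch of a symbolic, rule-based classifier in Python. The facts and IF-THEN rules are toy examples I made up for illustration (not from any real expert system), but they show how the knowledge stays human-readable and how an explanation falls out of the reasoning trace for free.

```python
# A minimal, illustrative sketch of a symbolic (rule-based) system.
# The facts and rules are hypothetical toy examples.

# Knowledge is encoded as human-readable IF-THEN rules over symbolic facts.
RULES = [
    ({"round", "red", "has_stem"}, "apple"),
    ({"long", "yellow", "peelable"}, "banana"),
]

def classify(facts):
    """Fire the first rule whose conditions all hold, and return the
    conclusion together with a human-readable explanation."""
    for conditions, conclusion in RULES:
        if conditions <= facts:                      # all conditions satisfied
            trace = " AND ".join(sorted(conditions))
            return conclusion, f"IF {trace} THEN {conclusion}"
    return None, "no rule fired"

label, explanation = classify({"round", "red", "has_stem"})
print(label)        # apple
print(explanation)  # IF has_stem AND red AND round THEN apple
```

The point is not the toy domain but the explanation: every conclusion can be traced back to rules a person can read and argue with.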

Non-symbolic AI systems do not manipulate a symbolic representation to find solutions to problems. Instead, they perform calculations according to principles that have demonstrated their ability to solve problems, without it being exactly understood how they arrive at their solutions. Examples include genetic algorithms, neural networks and deep learning. The origin of non-symbolic AI lies in the attempt to mimic the workings of the human brain: a complex network of highly interconnected cells whose electrical signal flows determine how we, humans, behave. Figure 2 illustrates the difference between a symbolic and a non-symbolic representation of an apple. Obviously, the symbolic representation is easy for humans to understand, whereas the non-symbolic representation isn't.
Symbolic and non-symbolic representation
Figure 2: A symbolic and non-symbolic representation of an apple (source http://web.media.mit.edu/~minsky/papers/SymbolicVs.Connectionist.html).
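By way of contrast, here is a minimal sketch of a non-symbolic approach: a tiny neural network (a single logistic unit) trained on a made-up numeric encoding of the apple example. The features, data and learning rate are illustrative assumptions; the point is that what the system "knows" ends up as a matrix of numbers, with no human-readable rules to inspect.

```python
# A minimal, illustrative sketch of a non-symbolic system.
# The feature encoding, data and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Each row encodes made-up features [roundness, redness, elongation];
# label 1.0 = apple, 0.0 = not an apple.
X = np.array([[0.9, 0.8, 0.1],
              [0.8, 0.9, 0.2],
              [0.1, 0.2, 0.9],
              [0.2, 0.1, 0.8]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])

W = rng.normal(size=(3, 1))          # the learned "representation": just numbers
b = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                # plain gradient descent on cross-entropy loss
    p = sigmoid(X @ W + b)           # forward pass: predicted probability of "apple"
    grad_z = (p - y) / len(X)        # gradient of the loss w.r.t. the logits
    W -= 1.0 * (X.T @ grad_z)        # update weights and bias
    b -= 1.0 * grad_z.sum()

print(sigmoid(np.array([[0.85, 0.9, 0.15]]) @ W + b))  # close to 1: "apple"
print(W.ravel())                     # nothing here reads like an IF-THEN rule
```

The trained model classifies the example correctly, but the weights it prints explain nothing by themselves; that opacity is exactly the issue discussed next.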
Today, non-symbolic AI, through deep learning and other machine learning algorithms, is achieving very promising results, championed by IBM's Watson, Google's work on automatic translation (which has no understanding of the language itself; it "just" looks at co-occurring patterns), Facebook's algorithm for face recognition, self-driving cars, and the popularity of deep learning. The main disadvantage of non-symbolic AI systems is that no "normal" person can understand how those systems come to their conclusions or actions, or take their decisions. See for example Figure 2: from the left part we can easily understand why something is an apple, but looking at the right part, we cannot easily understand why the system concludes that it's an apple. When non-symbolic (aka connectionist) systems are applied to critical tasks such as medical diagnosis, self-driving cars, legal decisions, etc., understanding why they come to a certain conclusion, through a human-understandable explanation, is very important. In the end, in the real world, somebody needs to be accountable or liable for the decisions taken. But when an AI program takes a decision and no-one understands why, then our society has an issue (see FATML, an initiative that investigates Fairness, Accountability, and Transparency in Machine Learning). Probably the most powerful AI systems will come from a combination of both approaches.

The final question: Can machines think? Are humans machines?

It is now clear that machines can certainly perform complex tasks that would require "thinking" if performed by people. But can computers have consciousness? Can they have, feel or express emotions? Or are we, people, machines? After all, our bodies and brains are based on a very complex "machinery" of mechanical, physical and chemical processes that, so far, nobody has fully understood. There is a research field called "computational emotions" which tries to build programs that are able to express emotions. But maybe expressing emotions is different from feeling them? (See Intentional Stance in this post.)
Computers and emotions
Figure 3: Can computers express or feel emotions?
Another critical issue for the final question is whether machines can have consciousness. This is an even trickier question than whether machines can think. I will leave you with this MIT Technology Review interview with Christof Koch about “What It Will Take for Computers to Be Conscious”, where he says: “Consciousness is a property of complex systems that have a particular “cause-effect” repertoire. They have a particular way of interacting with the world, such as the brain does, or in principle, such as a computer could.”

In my opinion, there are currently no scientific answers to those questions, and whatever you may think about it is more a belief or conviction than a commonly accepted truth or a scientific result. Maybe we have to wait until 2045, which is when Ray Kurzweil predicts the technological singularity will occur: the point when machines become more intelligent than humans. While this point is still far away and many believe it will never happen, it is a very intriguing theme, evidenced by movies such as 2001: A Space Odyssey, A.I. (Spielberg), Ex Machina and Her, among others.
