Artificial Intelligence or Cognitive Intelligence? The buzz words of business

Paloma Recuero de los Santos    21 January, 2020

(Original post in Spanish: ¿Inteligencia artificial o cognitiva?)

In the last five years, Artificial Intelligence has become the biggest buzzword in business, with various spin-offs including “Cognitive Intelligence”, “Smart Technologies” and “Predictive Technologies”. The often-negative associations that accompany the idea of Artificial Intelligence mean that some companies are shying away from declaring themselves AI pioneers and are instead coining their own buzzwords. But what is the real difference between these ideas, and how do AI companies deal with possible negative connotations?

What is AI?

The Encyclopædia Britannica defines Artificial Intelligence as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience”. The problem with the word “artificial” is that it connotes something inauthentic, robotic and unnatural, when its aim is quite the opposite.

To summarize this rather lengthy explanation in fewer words, one could simply describe AI as building a computer that can solve complex problems the way a human would. It is a vital part of economic sectors such as information technology, health, life sciences, data analysis, digital transformation and security, and now of the consumer sector, with the development of smart homes and similar products.

Cognitive Intelligence and Machine Learning

Cognitive Intelligence is an important part of AI that encompasses the technologies and tools allowing our apps, websites and bots to see, hear, speak, understand and interpret a user’s needs in a natural way. That is to say, they are the applications of AI that let machines learn their users’ language so that users don’t have to learn the language of machines. AI is a much wider concept that includes technologies and innovations such as robotics, Machine Learning, Deep Learning, neural networks, NLP and more.

Machine Learning is one branch of Artificial Intelligence that allows researchers, data scientists, data engineers and analysts to build algorithms that learn and can make “data-driven” predictions. Instead of following a fixed series of rules and instructions, these algorithms are trained to identify patterns in large quantities of data. Deep Learning takes this idea one step further and processes information in layers, so that the output of one layer becomes the input of the next.
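The contrast between hand-coded rules and “data-driven” prediction can be illustrated with a toy sketch in plain Python (made-up numbers, no real ML library): we fit a straight line to a few example points by least squares, then use the learned line to predict an outcome for an input the model never saw.

```python
# Illustrative toy: "learning" a rule from data instead of hand-coding it.
# We fit y = a*x + b to example points by least squares, then make a
# data-driven prediction for an unseen input.

def fit_line(points):
    """Return slope a and intercept b minimising the squared error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical training data: (hours of use, observed energy cost).
data = [(1, 2.1), (2, 4.0), (3, 6.2), (4, 7.9)]
a, b = fit_line(data)
predicted = a * 5 + b  # prediction for an input not in the training data
```

No rule about energy costs was ever written by hand; the relationship was extracted from the examples, which is the essential shape of any Machine Learning system, however much larger the real models and datasets are.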

So, if AI is so important, why is the term so often tiptoed around?

The origin of negative connotations for AI

Firstly, it seems to have become a “worn-out” term. It has been used so widely that everyone seems to know about (and have an opinion on) the subject! This widespread use has been accompanied by a lack of real information. Many people can only base their understanding on what Hollywood has taught them: that AI is limited to robots and strong AI. Others think they are talking about AI when, in reality, they are talking about Machine Learning.

Secondly, there is the fact that Artificial Intelligence is not a new concept; it has existed since 1956, which means misconceptions have had many decades to form. Over these years there have been different waves (such as the introduction of expert systems in the 80s and the explosion of the internet in the 90s). In each period, expectations have outrun reality, producing the “troughs of disillusionment” that form the third phase of Gartner’s “Hype Cycle”.

Figure 1: Gartner Hype Cycle. (From IOTpreneur, CC BY-SA 4.0)

We currently find ourselves in a period of high expectations for AI. Big companies are promising innovation beyond what we could have imagined 20 years ago. Some technology leaders warn about the dangers and the impact that automation, robotics and AI might have on our lives and future jobs. Despite this, every day we see more and more technologies that make our lives easier. These advances, which help people see the reality of the technology, may help reduce the “baggage” of negative connotations surrounding the future of AI.

What areas does AI encompass?

AI is an ecosystem in which we can include technologies such as data mining, natural language processing (NLP), Deep Learning, predictive and prescriptive analytics, and many more. In this ecosystem we also find technologies that assist us daily, such as the recommendation systems on which Netflix and Airbnb rely.

All of these technologies are characterized by generating data which, when analyzed correctly, can offer great value and insight. For this reason, one can say that Artificial Intelligence lies at the convergence of all these solutions.

Additionally, AI is closely linked to the four pillars of innovation and digital transformation (cloud computing, mobility, social analytics and Big Data), as it powers some of the main accelerators of this transformation, including cognitive systems, the Internet of Things (IoT), cybersecurity and Big Data technologies.

Digital transformation pillars

The technology sector is transforming into a sector of knowledge. To extract value from that knowledge, it is important to have technologies and “real-life” applications that are deeply connected. This is what we call the “digital economy”. As mentioned earlier, this transformation rests on four fundamental pillars:

  • Cloud computing
  • Mobility
  • Social Analytics
  • Big Data Analytics

These technologies and innovations are the true driving forces behind digital transformation, and they are so closely tied to AI that they are sometimes confused with AI itself. These four pillars support the “accelerators” of innovation.

The main accelerators are:

  • Cognitive services
  • Cybersecurity
  • IoT
  • Big Data

Cognitive services

All of these technologies are ever-present in our daily lives. Cognitive services aim to imitate human reasoning processes. They analyze the large amounts of data created by connected systems and offer diagnostic, predictive and prescriptive tools capable of observing, learning and offering insights. They are strongly oriented towards context and human interaction. The challenge for Artificial Intelligence here is to design the technology so that people can interact with it naturally. This involves developing applications with “human behavior”, such as:

  • Listening and speaking, that is, the ability to turn audio into text and text into audio.
  • Natural Language Processing (NLP). Text is not just a combination of keywords; a computer needs to understand grammatical and contextual connections too.
  • Understanding emotions and feelings (“sentiment analysis”), to create empathetic systems capable of recognizing a person’s emotional state and making decisions based on it.
  • Image recognition, which consists of finding and identifying objects in an image or video sequence. It is a simple task for humans, but a real challenge for machines.
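To make the sentiment-analysis idea concrete, here is a deliberately naive sketch in Python using a tiny hand-made word list (real cognitive services use trained models, not keyword lookups; the lexicon below is invented for illustration). It only shows the shape of the task: text in, emotional polarity out, decision based on it.

```python
# Toy "sentiment analysis": score text against a tiny hand-made lexicon.
# Real systems learn these associations from data; this only illustrates
# the input -> emotional state -> decision pipeline.

POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "angry"}

def sentiment(text):
    """Classify text as positive, negative or neutral by keyword counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

An empathetic system would route the result onward, for example escalating a “negative” customer message to a human agent.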

Cybersecurity

Cybersecurity is also moving towards a more holistic focus, one that considers its environment and a more human dimension. Above all, it is becoming more proactive. Rather than waiting for a cyber-attack to happen, the key lies in prediction and prevention. AI can now be used to detect patterns in the data and act when alerts arise.
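The pattern-then-alert idea can be sketched with a simple statistical rule (a z-score on made-up failed-login counts): establish what “normal” looks like from historical data, then flag values that deviate strongly from it. Production security tools use far richer models, but the proactive shape is the same.

```python
# Minimal sketch of pattern-based alerting: learn "normal" from history,
# then flag points far from the mean (a z-score rule). Illustrative only.

from statistics import mean, stdev

def anomalies(history, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [x for x in history if abs(x - mu) > threshold * sigma]

# Hypothetical daily failed-login counts: the final spike stands out.
logins = [12, 9, 11, 10, 13, 8, 11, 95]
```

Calling `anomalies(logins)` flags only the spike, the kind of signal that would trigger an alert before an attack succeeds rather than after.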

Internet of Things and Big Data

What about the Internet of Things and Big Data? Here, the sheer amount of data being created is clear, as is the fact that it arrives rapidly and often in unstructured forms: data from IoT sensors, social networks, text files, images, videos and sound. AI tools such as data mining, Machine Learning and NLP now make it possible to turn this data into useful information.
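A small Python sketch can show that “unstructured to useful” step on invented sensor output: free-text log lines are mined with a regular expression and reduced to a summary statistic, a toy stand-in for the data-mining stage of an IoT pipeline.

```python
# Sketch: turning messy, semi-structured IoT output into information.
# Temperature readings are extracted from free-text log lines with a
# regular expression, then summarised. Log format is invented.

import re

LOGS = [
    "sensor-3 2020-01-21T10:00 temp=21.5C ok",
    "sensor-7 offline",
    "sensor-3 2020-01-21T11:00 temp=22.1C ok",
    "sensor-9 2020-01-21T11:02 temp=19.4C low-battery",
]

def temperatures(lines):
    """Pull every temp=<value>C reading out of the raw log lines."""
    pattern = re.compile(r"temp=([\d.]+)C")
    return [float(m.group(1)) for line in lines if (m := pattern.search(line))]

temps = temperatures(LOGS)
avg = sum(temps) / len(temps)  # one useful number distilled from raw text
```

The offline sensor contributes nothing, malformed noise is ignored, and the fleet’s average temperature emerges from lines no spreadsheet could ingest directly.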

Artificial Intelligence is a very broad term that encompasses many processes and technologies applicable across industries. Companies must be able to explain simply the type of AI they are incorporating in order to dispel the confusion surrounding these terms, which will make AI a more accessible technology.

To stay up to date with LUCA, visit our webpage, subscribe to LUCA Data Speaks and follow us on Twitter, LinkedIn and YouTube.
