A Brief History of Machine Learning
LUCA, 5 November 2018

As members of the Machine Learning community, it is worth knowing the history of the field we work in. Although we are currently living through a genuine boom in Machine Learning, the field has not always been so prolific, cycling through periods of high expectations and rapid advances as well as "winters" of severe stagnation.

Birth [1950 – 1967]

1950 — Alan Turing proposes the "Turing Test" to determine whether or not a machine is truly intelligent. To pass the test, the machine must be capable of making a human believe that it is another human instead of a computer.

1952 — Arthur Samuel writes the first computer program capable of learning: a checkers-playing program that improved with each game it played.

1956 — Marvin Minsky and John McCarthy, with the help of Claude Shannon and Nathaniel Rochester, hold a conference at Dartmouth, which is considered to be where the field of Artificial Intelligence was born. Minsky convinced the attendees to adopt the term "Artificial Intelligence" as the name for the new field.

1958 — Frank Rosenblatt designs the Perceptron, the first artificial neural network.

1967 — The "Nearest Neighbor" algorithm is written, a milestone considered to be the birth of the field of pattern recognition in computers.

First Winter of AI [1974 – 1980]

In the second half of the 1970s, the field suffered its first "winter." Various agencies that had been financing AI research cut funds after years of high expectations and few actual advances.

1979 — Students at Stanford University build the "Stanford Cart," a mobile robot capable of moving autonomously around a room while avoiding obstacles.

The Explosion of the 1980s [1980 – 1987]

The 1980s are known for the birth of rule-based expert systems, which were rapidly adopted by the corporate sector and generated new interest in Machine Learning.

1981 — Gerald Dejong introduces the concept of "Explanation Based Learning" (EBL), in which a computer analyzes the training data and creates general rules that allow the less important data to be discarded.

1985 — Terry Sejnowski invents NetTalk, which learns to pronounce words in the same way a child would learn to do.

Second AI Winter [1987 – 1993]

At the end of the 1980s and the beginning of the 1990s, AI experienced a second "winter." This time its effects lasted for several years, and the reputation of the field did not fully recover until the early 2000s.

1990s — Work in Machine Learning moves from a knowledge-driven focus to a data-driven one. Scientists begin to create programs that analyze large quantities of data and extract conclusions from the results.

1997 — IBM's Deep Blue computer beats world chess champion Garry Kasparov.

Explosion and Commercial Adoption [2006 – Present Day]

The growth in computing power, together with the great abundance of available data, has relaunched the field of Machine Learning. Many businesses are reorganizing around data and incorporating Machine Learning into their processes, products and services in order to gain an edge over their competition.

2006 — Geoffrey Hinton coins the term "Deep Learning" to describe new architectures of deep neural networks capable of learning much better models.

2011 — IBM's Watson computer beats its human competitors at Jeopardy!, a game show in which contestants answer questions posed in natural language.
2012 — Jeff Dean at Google, with the assistance of Andrew Ng (Stanford University), leads the Google Brain project, which developed a deep neural network using the full capacity of Google's infrastructure to detect patterns in videos and images.

2012 — Geoffrey Hinton leads the winning team in the ImageNet computer vision competition using a deep neural network (DNN). The team won by a large margin, giving rise to the current explosion of Machine Learning based on DNNs.

2012 — The research laboratory Google X uses Google Brain to autonomously analyze YouTube videos and detect those containing cats.

2014 — Facebook develops DeepFace, a DNN-based algorithm capable of recognizing people with the same precision as a human being.

2014 — Google buys DeepMind, a British deep learning startup that had recently demonstrated the capabilities of DNNs with an algorithm that learned to play Atari games simply by viewing the pixels on the screen, the same way a person would. After hours of training, the algorithm was capable of beating human experts at those games.

2015 — Amazon launches its own Machine Learning platform.

2015 — Microsoft creates the "Distributed Machine Learning Toolkit", which allows machine learning problems to be distributed efficiently across multiple computers.

2015 — Elon Musk and Sam Altman, among others, found the non-profit organization OpenAI, endowing it with one billion dollars with the objective of ensuring that artificial intelligence has a positive impact on humanity.

2016 — Google DeepMind's AlphaGo beats professional Go player Lee Sedol four games to one at what is considered to be one of the most complex board games. Expert Go players confirmed that the algorithm was capable of making "creative" moves that they had never seen before.

Today, we are experiencing a third explosion in artificial intelligence. Although there are skeptics who say we cannot rule out a third winter, this time the advances in the sector are being applied to business to the point of creating whole new markets and producing significant changes in the strategies of both big and small businesses. The wide availability of data seems to be the fuel behind these algorithmic engines, which in turn are overcoming the computational limitations that existed before distributed computing. All of this suggests that we will continue to have access to more and more data to feed our algorithms, while the scientific community shows no sign of running out of ideas for advancing the field. The coming years promise to be truly frenetic.

Written by Víctor González Pacheco, Team Leader and Data Scientist at LUCA Consulting Analytics.

Don't miss out on a single post. Subscribe to LUCA Data Speaks. You can also follow us on Twitter, YouTube and LinkedIn.