Edge Computing and Machine Learning, a strategic partnership

LUCA    24 February, 2020

Written by Alfonso Ibañez and Aitor Landete

Terms like Artificial Intelligence or Machine Learning are nothing new. Society, companies and governments are increasingly aware of techniques such as deep learning, semi-supervised learning, reinforcement learning or transfer learning, among others. However, they are not yet aware of the many benefits of combining such techniques with other emerging technologies such as the Internet of Things (IoT), quantum computing or blockchain.

IoT and Machine Learning are two of the most interesting disciplines in current technology, as they are generating a profound impact on both companies and individuals. There are already millions of small devices integrated in factories, cities, vehicles, telephones and in our homes, collecting the necessary information to make intelligent decisions in areas such as the optimization of industrial processes, predictive maintenance in offices, mobility of people, energy management in the home and facial recognition of people, among others.

The focus of most of these applications is to capture information from the environment and transmit it via the Internet to powerful remote servers, where the intelligence and decision making reside. However, applications such as autonomous vehicles are safety critical and require fast and accurate responses. These performance requirements play a critical role in decision making, so relying on remote servers outside the vehicle is not appropriate. The main reasons are the time spent transferring data to external servers and the permanent need for Internet connectivity to process the information.

Edge computing

To help alleviate some of the above problems, a new computing paradigm is emerging. This approach brings data processing and storage closer to the devices that generate it, eliminating dependence on servers in the cloud or in data centers located thousands of miles away. Edge computing is transforming the way data is processed, improving response times and solving connectivity, scalability and security issues inherent in remote servers.

The proliferation of IoT devices, the rise of edge computing and the advantages of cloud services are enabling the emergence of hybrid computing, where the strengths of edge and cloud are maximized. This hybrid approach allows tasks to be carried out at the optimal location to achieve the goal, whether on local devices, on cloud servers, or on both. Depending on where execution takes place, the hybrid architecture coordinates tasks between edge devices, edge servers and cloud servers, as sketched after the list below:

  • Edge devices: devices equipped with small processors that can generate and store information and even execute certain computing tasks in real time. Tasks that require greater complexity are moved to more powerful servers at higher levels of the architecture. Some examples of edge devices are ATMs, smart cameras and cars.
  • Edge servers: servers with the capacity to process some of the more complex tasks sent up by the devices lower in the architecture. These servers are in continuous communication with the edge devices and can act as a gateway to the cloud servers. Some examples are the rack servers located in the operations rooms of factories, offices, banks, etc.
  • Cloud servers: servers with large storage and computing capacity that handle the tasks the lower tiers cannot complete. These systems make it possible to manage all the devices in the system and run numerous business applications, among many other services.
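As a rough illustration of how this edge/cloud coordination might work, the following Python sketch escalates a task to the next tier whenever it exceeds a local processing budget. The class names, cost values and latencies are invented for the example and do not correspond to any particular platform.

```python
# Minimal sketch of hybrid edge/cloud task coordination (illustrative only;
# the classes, costs and latencies below are hypothetical).


class CloudServer:
    """Large storage and compute; handles whatever the lower tiers could not."""

    def handle(self, task):
        return {"result": f"cloud processed {task['name']}", "latency_ms": 300}


class EdgeServer:
    """Intermediate tier: heavier processing, gateway to the cloud."""

    def handle(self, task):
        if task["cost"] <= 100:
            return {"result": f"edge-server processed {task['name']}", "latency_ms": 40}
        return CloudServer().handle(task)  # escalate to the cloud


class EdgeDevice:
    """Runs lightweight tasks locally and defers heavy ones upward."""

    def __init__(self, max_local_cost):
        self.max_local_cost = max_local_cost  # rough budget for on-device work

    def handle(self, task):
        if task["cost"] <= self.max_local_cost:
            return {"result": f"edge-device processed {task['name']}", "latency_ms": 5}
        return EdgeServer().handle(task)  # escalate to the edge server


if __name__ == "__main__":
    device = EdgeDevice(max_local_cost=10)
    print(device.handle({"name": "keyword spotting", "cost": 3}))     # stays on-device
    print(device.handle({"name": "video analytics", "cost": 80}))     # goes to the edge server
    print(device.handle({"name": "model retraining", "cost": 5000}))  # goes to the cloud
```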

Edge Analytics

Today, research in the field of Machine Learning has enabled the development of novel algorithms in the context of the Internet of Things. Although the execution of these algorithms is usually associated with powerful cloud servers because of their computational requirements, the future of this discipline is linked to running analytical models inside edge devices. These new algorithms must be able to run on devices with low-power processors and limited memory, and without the need for an Internet connection.

Bonsai and ProtoNN are two examples of new algorithms designed to run analytical models on edge devices. These algorithms follow the supervised learning philosophy and are capable of solving problems in real time on very simple devices with limited computing resources. One application of this type of algorithm is smart speakers. These devices integrate a trained model that analyzes every detected word and identifies which one is the wake word (“Alexa”, “Hey Siri”, “OK Google”, …). Once the keyword is recognized, the system starts transmitting the audio data to a remote server to determine the required action and proceed with its execution.
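The sketch below illustrates that wake-word pattern: a compact local model screens every audio frame, and only after the keyword is detected does audio leave the device. The function names (detect_wake_word, stream_to_cloud) and the string-matching “model” are placeholders for the example, not the actual implementation used by these speakers.

```python
# Illustrative sketch of on-device wake-word gating: everything before the
# keyword is processed locally; only the follow-up audio goes to the cloud.

WAKE_WORDS = {"alexa", "hey siri", "ok google"}


def detect_wake_word(transcribed_frame: str) -> bool:
    """Stand-in for a compact on-device classifier (e.g. a Bonsai/ProtoNN-style model)."""
    return transcribed_frame.strip().lower() in WAKE_WORDS


def stream_to_cloud(audio_frames):
    """Placeholder for sending the follow-up audio to a remote server for full analysis."""
    print(f"streaming {len(audio_frames)} frames to the cloud for interpretation")


def run_speaker(frames):
    listening = False
    buffered = []
    for frame in frames:
        if not listening:
            # Runs entirely on the device: no audio leaves it at this stage.
            if detect_wake_word(frame["text"]):
                listening = True
        else:
            buffered.append(frame)
    if buffered:
        stream_to_cloud(buffered)


run_speaker([
    {"text": "random chatter"},
    {"text": "alexa"},
    {"text": "what is the weather tomorrow"},
])
```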

Unlike the previous algorithms, in which models are trained on cloud servers, the Federated Learning approach orchestrates the training of analytical models between the edge and the cloud. In this approach, each edge device in the system is responsible for training an analytical model with the data it has stored locally. After this training phase, each device sends its local model to a central cloud server, where all the models are combined into a single master model. As new information is collected, the devices download the latest version of the master model, retrain it with the new information and send the resulting model back to the central server. This approach does not require the information collected on the edge devices to be transferred to the cloud server for processing, since only the trained models are transferred.
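A minimal federated averaging round can be simulated in a few lines of NumPy, as below. The linear model, the plain weighted average used for aggregation and the synthetic device datasets are simplifying assumptions made for the example; real deployments add mechanisms such as secure aggregation and client selection.

```python
# Federated averaging sketch: each device trains on its own private data and
# only the model weights travel to the cloud, never the raw data.
import numpy as np


def local_train(weights, X, y, lr=0.1, epochs=20):
    """One device: a few gradient steps of linear regression on its local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_average(local_weights, sizes):
    """Cloud server: combine local models, weighting each device by its data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Simulate three edge devices, each holding its own private dataset.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

for _ in range(10):
    # Each device downloads the latest master model and retrains it locally...
    local_models = [local_train(global_w, X, y) for X, y in devices]
    # ...and the cloud aggregates the returned models into a new master model.
    global_w = federated_average(local_models, [len(y) for _, y in devices])

print("learned weights:", global_w)  # should approach [2.0, -1.0]
```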

All the new algorithms proposed in the literature attempt to optimize three important metrics: latency, performance and accuracy. Latency refers to the time needed to infer a single data record, performance is the number of inferences made per second, and accuracy is the level of confidence in the prediction result. The power consumed by the device is another aspect to consider. In this context, Apple has recently acquired Xnor.ai, a startup that promotes the development of new efficient algorithms designed to improve battery efficiency.
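These three metrics can be measured with a very simple harness like the one below. The tiny threshold “model” is a stand-in for whatever model actually runs on the device; the point is only how latency, throughput and accuracy would be computed.

```python
# Illustrative benchmark of the three metrics discussed above.
import time


def tiny_model(x):
    """Placeholder on-device model: flag a reading as anomalous if above 0.5."""
    return 1 if x > 0.5 else 0


def benchmark(model, samples, labels):
    start = time.perf_counter()
    predictions = [model(x) for x in samples]
    elapsed = time.perf_counter() - start

    latency_ms = 1000 * elapsed / len(samples)   # average time per inference
    throughput = len(samples) / elapsed          # inferences per second
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    return latency_ms, throughput, accuracy


samples = [i / 1000 for i in range(1000)]
labels = [1 if x > 0.5 else 0 for x in samples]
latency_ms, throughput, accuracy = benchmark(tiny_model, samples, labels)
print(f"latency: {latency_ms:.4f} ms  throughput: {throughput:,.0f}/s  accuracy: {accuracy:.2%}")
```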

Edge adoption

Companies are embracing new technologies to drive the digital transformation and improve their performance. Although there is no reference guide for the integration of these technologies, many companies follow the same pattern for the implementation of Edge-related projects:

  • The first phase consists of developing the most basic scenario. The sensors of the edge devices collect the information from the environment and send it to the cloud servers where the main alerts and metrics are analysed and reported via dashboards.
  • The second phase extends the functionality by adding an additional processing layer to the edge device. Before the information is sent to the cloud, the device performs a small analysis of the information and, depending on the values detected, can initiate various actions through edge computing.
  • The most mature phase consists of the incorporation of edge analytics. In this case, the edge devices process the information and execute their embedded analytical models to generate intelligent responses in real time. These results are also sent to the cloud servers to be processed by other applications (a sketch of this phase follows the list).
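A sketch of that mature phase might look like the following, where the device runs its embedded model, reacts locally in real time and still forwards the results to the cloud. The sensor readings, thresholds and function names are hypothetical.

```python
# Edge analytics phase: inference and response happen on the device itself,
# while results are still forwarded to the cloud for other applications.


def embedded_model(reading):
    """Stand-in for an analytical model deployed on the edge device."""
    return "overheating" if reading["temperature"] > 90 else "normal"


def act_locally(prediction):
    if prediction == "overheating":
        print("edge action: slowing down the machine immediately")


def forward_to_cloud(reading, prediction):
    print(f"sent to cloud: {reading} -> {prediction}")


for reading in [{"temperature": 72}, {"temperature": 95}]:
    prediction = embedded_model(reading)   # inference on the device, no round trip
    act_locally(prediction)                # real-time local response
    forward_to_cloud(reading, prediction)  # results still reach the cloud for reuse
```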

Another, newer approach associated with edge analytics applications is to enrich the predictions generated by the edge devices with additional predictions provided by the cloud servers. Going forward, the scientific community faces the challenge of developing systems that decide, dynamically, when to invoke that additional cloud intelligence and how to combine the predictions made with both approaches.
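One possible (hypothetical) policy for that decision is a confidence-threshold cascade: accept the edge prediction when the local model is confident enough, and only pay the cloud round trip when it is not. The models and threshold below are invented for illustration.

```python
# Confidence-based cascade between an edge model and a cloud model.


def edge_predict(x):
    """Small on-device model: returns (label, confidence)."""
    return ("cat", 0.62) if x == "blurry image" else ("dog", 0.97)


def cloud_predict(x):
    """Larger remote model: slower, but assumed more accurate."""
    return ("cat", 0.99)


def predict(x, confidence_threshold=0.8):
    label, confidence = edge_predict(x)
    if confidence >= confidence_threshold:
        return label, "edge"
    # Low confidence: pay the latency cost of the cloud round trip.
    label, _ = cloud_predict(x)
    return label, "cloud"


print(predict("sharp image"))   # ('dog', 'edge')
print(predict("blurry image"))  # ('cat', 'cloud')
```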

To stay up to date with LUCA, visit our Webpage, subscribe to LUCA Data Speaks and follow us on Twitter, LinkedIn and YouTube.
