For many years we have been in a race to increase the speed of our connections. Ever since the days of dial-up modems that treated us to a symphony of beeps while we waited anxiously for the connection speed to finally be confirmed, higher speeds have always been the goal.
The incorporation of new technologies, such as ADSL, fibre optics, 3G and 4G mobile communications, and private MPLS networks, has gradually brought higher and higher speeds. In many cases the commercial pitch has been to promise more kilobits, then more megabits, in a technical and commercial race to enable new services. For example, mobile internet consumption did not become widespread until the arrival of 3G, and HD or UHD video is unthinkable without these higher bandwidths.
But bandwidth is not the only parameter that is important when consuming digital services. This is where latency comes in.
Latency, the great protagonist
Latency measures the time that elapses between a client initiating a communication and receiving the response. The order of magnitude we are dealing with is milliseconds.
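As an illustration, round-trip latency can be approximated by timing how long a TCP connection takes to establish. The sketch below is a minimal example, assuming a reachable host and port; it is a rough proxy, not a full measurement tool.

```python
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a TCP handshake as a rough proxy for network round-trip latency."""
    start = time.perf_counter()
    # Establishing the connection requires a full round trip (SYN / SYN-ACK),
    # so the elapsed time approximates the RTT plus some OS overhead.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000  # milliseconds

# Usage (hypothetical host): print(f"{tcp_latency_ms('example.com'):.1f} ms")
```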
Latency, even if it has not always been visible, has always been there, and some of its consequences are occasionally perceptible. When transatlantic communications were carried out via satellites in geostationary orbit, more than 35,000 km above the earth’s surface, the time taken for the signal to travel from the earth station up to the satellite and back down to another earth station added enough delay to complicate conversation, with pauses and speakers talking over each other. Here the latency is on the order of hundreds of milliseconds.
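Those hundreds of milliseconds follow directly from geometry. A back-of-the-envelope calculation, assuming the signal travels at the speed of light along a best-case path straight up to the satellite and back:

```python
# Propagation delay for a geostationary satellite link (illustrative figures:
# ~35,786 km orbital altitude, signal at the speed of light in vacuum).
SPEED_OF_LIGHT_KM_S = 299_792  # km/s
GEO_ALTITUDE_KM = 35_786       # geostationary orbit above the equator

def one_way_delay_ms(path_km: float) -> float:
    """Propagation delay in milliseconds for a given path length."""
    return path_km / SPEED_OF_LIGHT_KM_S * 1000

# Ground station -> satellite -> ground station (one hop, best case)
hop_km = 2 * GEO_ALTITUDE_KM
print(f"one-way hop: {one_way_delay_ms(hop_km):.0f} ms")      # ~239 ms
print(f"round trip:  {one_way_delay_ms(2 * hop_km):.0f} ms")  # ~477 ms
```

A real link is longer still, since the ground stations are rarely directly beneath the satellite, which is why round trips of half a second were common.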
Another example is found in data centres when replicating data between two locations. There are hardware solutions that do not acknowledge a write operation until the remote system has committed the equivalent write, to guarantee that the copy has been performed correctly. This is why many vendors operate at least two data centres in the same metropolitan area to offer synchronous replication solutions.
In contrast, there are many other situations where latency is not relevant, because the communication response time is much shorter than the processing time, or than the reaction time of a human being. For example, most web query applications are not particularly sensitive to latency.
In mobile communications, the advent of 5G marks a major departure from previous generations. While this technology promises further growth in speed, it puts latency at the centre: on the one hand achieving much lower values, and on the other guaranteeing stable, tightly controlled values with little variation. But this is not happening only in mobile communications: fibre networks also allow lower and more stable latency.
And it is latency where Edge Computing truly shines. Edge means, in simplified terms, that we bring computing capabilities to the edge. To the edge of the network.
Why bring this compute capacity to the edge of the network?
The main advantage is to improve the latency perceived by the consumer of this capacity. If, instead of the hundreds or thousands of kilometres the signal would have to travel to reach a traditional data centre, it only has to travel a few kilometres, the latency drops to just a few milliseconds.
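To put figures on that, here is a rough comparison of pure propagation delay over optical fibre, where light travels at roughly two thirds of its vacuum speed (about 200,000 km/s). The distances are illustrative assumptions; real routes add routing and processing time on top.

```python
# Round-trip propagation delay over fibre: distant data centre vs edge node.
# Light in optical fibre travels at roughly 200,000 km/s (about 2/3 of c).
FIBRE_SPEED_KM_S = 200_000

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fibre, ignoring routing/processing."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

print(f"distant DC (1,500 km): {rtt_ms(1500):.1f} ms")  # 15.0 ms
print(f"edge node (15 km):     {rtt_ms(15):.3f} ms")    # 0.150 ms
```

Even in this idealised model, moving the compute node two orders of magnitude closer cuts the propagation floor by the same factor, which is the headroom Edge Computing exploits.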
But is it really worth the effort to deploy multiple nodes to bring computing closer to end users? For some use cases, it certainly is. And this is where one of the most important lines of work begins: identifying the use cases that really need a very low latency value.
Along these lines, at Telefónica we have been working for some time with our customers and partners to identify use cases that are only feasible on an Edge Computing infrastructure. Many of them come from the most advanced lines of research and are still at a very preliminary stage. We can mention some of them: augmented reality, Smart Industry, real-time image recognition, gaming, drone management, etc.
For this reason, Next Generation Networks (5G and fibre) combined with Edge Computing are the winning combination for developing latency-sensitive solutions optimally.