Park your car with IoT

Beatriz Sanz Baños    9 March, 2019

Parking your car has become a difficult task in many cities, and looking for a space is a constant source of stress for citizens. Luckily, the Internet of Things was born to make life easier for people. Its solutions are already being applied every day in cities to help people park more efficiently.

On the one hand, IoT parking solutions work thanks to sensor frameworks that collect information about the environment related to mobility, such as free parking spaces, the state of traffic or roads.

On the other hand, intelligent parking uses Big Data technologies to optimally analyse the large amount of data collected, with the aim of offering personalized solutions for each user in real time.

Intelligent parking has become a key element in improving mobility on the streets. Spanish cities such as Madrid, Barcelona, Santander and Malaga have sensor systems that help their inhabitants park much more easily.

Intelligent parking has become a key element to improve mobility on the streets

The increase in pollution is causing more and more cities to consider limiting the circulation of vehicles. An example of this is the Madrid Central project, which restricts traffic in the central area of the Spanish capital and establishes that polluting cars may only enter the downtown area to park in a parking lot. In these circumstances, intelligent parking search becomes a very useful tool for drivers. In fact, the City of Madrid itself offers real-time data on available free spaces through an app.

The IoT parking solutions provide the following benefits:

Reduction of the time citizens spend in the car. This contributes to improving productivity in our lives, as it decreases delays in arriving at work or class. The lower volume of vehicles circulating also helps reduce the number of road accidents.

Fighting against pollution: vehicles circulating at low speed in search of parking congest traffic, which translates into high levels of gas emissions that are harmful to public health. More fluid traffic would lessen these effects and avoid the excess fuel consumption involved in circling the same block in search of a space.

Improving society's welfare. The reduction of traffic jams reduces the stress of sitting in a car that is running without getting any closer to its destination. Likewise, it means more time available to enjoy leisure and rest.

Urbiótica Smart Parking sensor technology is an example of autonomous, wireless parking based on the new NB-IoT communication standard, which allows the sensor to communicate directly with the cloud without the need to install dedicated gateways. It relies on a network of sensors that are installed underground in just 10 minutes, without cables and with minimal work, and that are powered by batteries lasting up to 7 years. Each sensor calibrates automatically and can be located on the curb or sidewalk, without the need to remove parked vehicles, which greatly facilitates the deployment of the solution.

The system works using magnetic detection: its sensors read the variations that occur in the magnetic field when the metal mass of a car parks above them. The entry and exit of vehicles in the parking spaces, and the duration of their stay, are detected in real time. This data is sent to an application so that drivers always have updated information on their smartphone about free parking spaces, allowing them to find a space quickly and efficiently. This reduces both the time lost searching for parking and the pollution generated.
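As an illustration of that detection principle (our own minimal sketch, not Urbiótica's actual firmware, whose internals are not public), occupancy can be flagged when the magnetic field deviates from a calibrated baseline; the field values and threshold below are hypothetical:

```python
# Hypothetical sketch of magnetic occupancy detection: a parked car's
# metal mass shifts the local magnetic field away from a calibrated baseline.

def is_occupied(field_reading: float, baseline: float, threshold: float = 15.0) -> bool:
    """Flag the space as occupied when the field deviates beyond the threshold.

    field_reading and baseline are magnetometer magnitudes in microtesla;
    the 15 uT threshold is an illustrative value, not a real calibration.
    """
    return abs(field_reading - baseline) > threshold

# Example: baseline calibrated at 48.0 uT for an empty space
readings = [48.2, 47.9, 63.5, 64.1, 48.1]  # a car arrives, then leaves
for r in readings:
    print(r, "->", "occupied" if is_occupied(r, baseline=48.0) else "free")
```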

The parking of the future is here thanks to IoT

A solution that goes one step further in connected parking is Automated Valet Parking, a system designed by Bosch in which the car itself is connected to the smartphone and to the sensors of the parking spaces. With this solution, it is the car, not the driver, that receives the information from the sensors and decides where to park. Upon entering the parking lot, the driver gets out of the car and presses a button in an app that orders the vehicle to park. The vehicle then receives the information about the parking spaces and automatically moves to one of the available places.

The development of connected solutions applied to parking already plays a leading role in many people's daily lives, favouring more sustainable circulation in Smart Cities. The parking of the future is here thanks to IoT.

IoT is becoming increasingly secure

Beatriz Sanz Baños    7 March, 2019

Technology is constantly evolving and offering new applications to users. Innovation always carries risks, and putting inventions into practice is riskier when those risks are unknown or have not been taken into account. In this sense, norms, standards and regulations are intended to make those risks public, explain how they can be treated, and encourage (or force) designers, producers and engineers to put in place the means that can mitigate them. For these standards to be effective it is vital that they adapt to the times, and this is precisely what is happening with those that apply to the Internet of Things: two very relevant ones have recently been published, the first international ISO/IEC standard for the Internet of Things and an ETSI technical specification for consumer IoT devices and services.

The new regulation is ISO/IEC 30141, which establishes a common vocabulary for the design of IoT products, enabling the development of reliable, safe, privacy-friendly systems capable of withstanding cyber-attacks. It joins the more than 600 international standards, published or awaiting review and approval, that regulate Industry 4.0, with the aim of reducing the invasive nature of new technologies.

Regarding the ETSI technical specification (ETSI TS 103 645), it is a standard that establishes a series of security requirements for IoT products aimed at the general public, with the intention of establishing a basis on which future certifications will be defined. Some of the most relevant requirements included in this specification are the prohibition of default passwords on all devices (e.g. admin/admin) and the requirement that there be an official channel through which users can report vulnerabilities to manufacturers.
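To make the first requirement concrete, here is a minimal sketch (our own illustration, not code from the ETSI specification) of a pre-shipment check against known default credentials; the credential list and device record are hypothetical:

```python
# Hypothetical pre-shipment check inspired by ETSI TS 103 645's ban on
# universal default passwords; the list and device record are illustrative.

KNOWN_DEFAULTS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def violates_default_password_rule(device: dict) -> bool:
    """Return True if the device ships with a well-known default credential."""
    return (device["username"], device["password"]) in KNOWN_DEFAULTS

device = {"model": "ip-camera-x", "username": "admin", "password": "admin"}
if violates_default_password_rule(device):
    print("Reject: device ships with a default credential pair")
```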

The main players in the technology industry are fully aware of the importance of taking care of security

The main players in the technology industry are fully aware of the importance of taking care of security. This year the RSA Conference 2019 is being held in San Francisco: a world trade fair attended every year by technology companies from all over the world, which come to show the advances and innovations they can contribute to the cybersecurity market. In addition to the companies' stands, there are practical sessions, keynotes and informal meetings where the most current topics in the world of security and technology are discussed.

Telefónica will attend for the fourth year in a row to show the technology community a wide variety of its most relevant and innovative products, most of which incorporate IoT technology, including CapaciCard, Stela FileTrack and Dinoflux.

CapaciCard is a plastic card that allows you to make payments and purchases online much more safely. Its function is the authentication, identification and authorization of users, preventing third parties from making unauthorized payments in case of loss. With a single card it is possible to authenticate yourself with different providers and, to increase security further, you can pair the card with your most commonly used devices. The technology behind this invention is the multitouch screen available on almost all mobile phones and laptops: the card can be verified without additional hardware, Bluetooth, NFC or any other type of connection.

Many companies have problems with the management and administration of the thousands of documents stored on their computers. Stela FileTrack is the perfect solution for tracking those documents and classifying those that contain sensitive, confidential or personal information. Through a traceability layer, FileTrack shows online the life cycle of a company's most sensitive documents.

The implementation of standards and the development of new cybersecurity technologies allow us to feel more secure

There will also be a demo of an IoT honeypot. This system consists of a series of IoT devices (such as routers, IP cameras and sensors of various types) connected to the Internet; when they receive attacks from cybercriminals, automated systems collect information about the attackers' techniques and procedures, generating cyber intelligence that can detect the same actions being taken against systems in production.
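Conceptually, the core of a honeypot is tiny. The following minimal sketch (our own illustration, not the demoed system) listens on a Telnet-like port and logs every connection attempt along with the attacker's first bytes; the port number and log format are arbitrary choices:

```python
# Minimal TCP honeypot sketch: accept connections on a Telnet-like port
# and log every connection attempt and the first bytes the client sends.
import socket
from datetime import datetime, timezone

def run_honeypot(host: str = "0.0.0.0", port: int = 2323) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        while True:
            conn, addr = server.accept()
            with conn:
                conn.settimeout(5.0)
                try:
                    data = conn.recv(1024)  # capture the attacker's first probe
                except socket.timeout:
                    data = b""
                stamp = datetime.now(timezone.utc).isoformat()
                print(f"{stamp} connection from {addr[0]}:{addr[1]} sent {data!r}")

if __name__ == "__main__":
    run_honeypot()  # real deployments feed these logs into threat intelligence
```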

The increase in the use of technology, especially IoT, increases the number of services available to users. The implementation of standards and measures such as those mentioned above, and the development of new cybersecurity technologies, allow us to feel more secure when integrating these devices into our homes or workplaces.

What are Blockchain and Smart Contracts?

Diego Martín Moreno    4 March, 2019

Blockchain is the new data technology everyone is talking about, but what exactly is blockchain?

Blockchain is a chain of blocks in which we store information, and where the blocks are connected to each other by means of hashes. A hash is a fixed-size output computed from an input (normally a text string) that always gives the same result for the same input; in Ethereum it is generated with the KECCAK-256 function. Since each block carries the hash of the block before it, the hashes are what link the chain together.

In this way, if someone wanted to maliciously modify a block, they would have to recompute the chain from that point onwards, at a computational cost so high that it prevents any attack.

Blockchain therefore serves to store information that cannot be manipulated and that does not need to be validated by a central entity, but rather by the network members themselves.
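To make the linking concrete, here is a toy chain in Python: a minimal sketch that uses SHA-256 from the standard library rather than Ethereum's KECCAK-256. Each block commits to the previous block's hash, so altering any block breaks every link after it.

```python
# Toy hash-linked chain: each block commits to the previous block's hash.
import hashlib

def block_hash(index: int, data: str, prev_hash: str) -> str:
    return hashlib.sha256(f"{index}|{data}|{prev_hash}".encode()).hexdigest()

def build_chain(records: list) -> list:
    chain, prev = [], "0" * 64  # genesis block points at an all-zero hash
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain: list) -> bool:
    for i, blk in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if blk["prev_hash"] != prev or blk["hash"] != block_hash(i, blk["data"], prev):
            return False
    return True

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(is_valid(chain))                   # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with an early block...
print(is_valid(chain))                   # False: every later link is broken
```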

Before we continue, it is interesting to review a little of blockchain's history to understand where we stand. There have been several important milestones:

In 1991, Stuart Haber and W. Scott Stornetta published the first document about the blockchain concept (“How to Time-Stamp a Digital Document”).

In 2008, Satoshi Nakamoto published the Bitcoin whitepaper, “Bitcoin: A Peer-to-Peer Electronic Cash System”, which came out in October of that year.

And in 2009, version 0.1 of Bitcoin was published on SourceForge.

As we can see, blockchain as a concept dates from the early 90s, but the first implementation is less than 10 years old.

On the other hand we have the Smart Contract, the second element that has given blockchain all its potential. But what is a smart contract? A Smart Contract, a name first used by Nick Szabo in 1996 in his article “Smart Contracts: Building Blocks for Digital Markets”, is a computer program that executes an agreement between two parties in a system not controlled by either party.

The conjunction of these two technologies, blockchain and smart contracts, leads us to decentralized applications (Dapps): autonomous applications, not controlled by any entity, whose data and transactions are stored on a blockchain.

For the Dapp we have three types of applications:

Type 1. Dapps with their own blockchain, such as Bitcoin. These are like the operating system of our computer.

Type 2. Dapps that run on a type 1 dapp, with their own protocols and tokens, like the Omni Protocol. These are general-purpose applications, like Excel.

Type 3. Dapps that use type 2 dapps for their operation, like µRaiden. These applications would be like an Excel macro.

All this opens a path that did not exist before and allows us to build applications that could not be built before. The blockchain boom lets us distribute information while ensuring that it cannot be unduly modified.

In addition, it allows us to autonomously execute contractual conditions between two parties, without the need for arbitration or control from one entity above all others (such as a bank today).

What does blockchain offer the business models of the future? It opens the way to new forms of integration between business models, facilitating exchanges between different companies and simplifying functional processes, above all because transactions are executed automatically, in an environment that cannot be manipulated, neither in its data nor in its contracts, and with traceability of everything that happens. Think of applications for the traceability of goods, the management of digital assets or the simplification of processes between companies.


Don't confuse the frequency of an incident with the ease with which you remember it

ElevenPaths    4 March, 2019
Imagine that there have been a few robberies in two parks in your town that have captured everyone's attention for days. This afternoon you would like to go running in the park next to your home, so these incidents will quickly come to mind, and that will make you think about the probability of being the victim of a robbery (or something worse) in that park. Your mind will make the following association:
Park = Danger!!!
The images you have seen on TV and the Internet will make you overestimate the probability that you may be the next victim in any park in any town. As a consequence, you could avoid going running in the park near your home (or any other park) until the media echo fades. Only when you stop thinking “Park = Danger!!” will you frequent parks again.
It is clearly irrational behavior. In fact, your mind is using the following heuristic: if examples of something come easily to mind, then that “something” must be common. Thus, given that violent images come to mind when I think of “park”, the probability of suffering a violent attack must be high. After all, who checks official statistics on muggings in parks? If two different people have been assaulted in parks, that means parks are dangerous places, no matter what the statistics show, right?

Well, that's not right. Psychologists call this error the availability bias: the easier an event is to remember, the more probable we think it is.

We tend to overestimate the frequency of sensationalist causes and underestimate the frequency of mundane causes
Humans are really bad with numbers, let alone at estimating probabilities. Our perception of risk seldom matches reality. We tend to exaggerate spectacular, novel, vivid, recent and emotional risks. The final result: we worry about risks that we could safely ignore, and we do not pay enough attention to those risks the evidence warns us about.

The following table, adapted by Bruce Schneier from the scientific literature on the subject, summarizes how people perceive risks in general terms:

The availability heuristic explains most of the behaviors listed in the previous table. Similarly, we make decisions (big and small) in our everyday life that have direct implications for security:
  • Do I connect to this public Wi-Fi?
  • Do I plug my pen drive into the USB port?
  • Do I send this confidential file as an e-mail attachment?
We estimate risks automatically, without paying much conscious attention: we do not use a calculator or incident frequency rates to determine probabilities, so we let ourselves be guided by the availability heuristic. Does an incident related to this security challenge come quickly to mind? No? Then it must be unlikely, so the risk is low. Yes? Then it must be quite likely, so the risk is high.
The point is: why are some events easier to remember than others? The answer to this question will help us make better security decisions and avoid being so easily influenced by others: sellers, bloggers, the press, friends, etc.
Vivid stories are etched on our memory
In particular, researchers in the field have identified a number of factors that make an event remain etched on our memory longer than others:
  • Any emotional content makes memories last longer. One of the most powerful emotions in this regard is precisely fear. You may have noticed this in many sensationalist news items and advertisements on cybersecurity.
  • Concrete words are better remembered than abstractions such as numbers. This is why anecdotes have a greater impact than statistics. Even if it pains us to accept it (weren't we rational animals?), our decisions are more affected by vivid information than by pale, abstract or statistical information.
  • Human faces tend to be easily remembered, at least if they express emotions. For this reason, the main characters of the most successful advertisements and campaigns have their own identity.
  • Events that have taken place recently are more easily remembered than old ones. Memory degrades over time. If you are driving along a road and pass close to an accident, you will be very aware of the risk of suffering one, so you will slow down and drive carefully for a few kilometers… until the conversation moves on to a different subject and you completely forget the accident.
  • Similarly, the novelty of an event helps it to be etched on our memory. Everyday events go unnoticed, but extraordinary ones catch our attention.
  • As all students know very well, concentration and repetition help with memorization. The more times information is presented, the better it will be retained. How well publicists know this!

All these effects are cumulative. In summary, and according to the social psychologist Scott Plous, in very general terms: (1) the more available an event is, the more frequent or probable it will seem; (2) the more vivid a piece of information is, the more easily recalled and convincing it will be; and (3) the more salient something is, the more likely it will be to appear causal.

Where do you think we can find stories matching all these requirements? In the media!

If you see it on the news, don’t worry!
As with many other biases and thought shortcuts, the availability heuristic is valid in most everyday situations: if many examples of something come to mind, it is usually because it has actually happened many times.
I'm sure that male scientists spring to mind more easily than female scientists, in the same way that our thoughts go first to U.S. global franchises rather than Spanish ones, or to Champions League footballers from Spain rather than from Malta. This is because there are many more examples of the first category than of the second. The availability heuristic is therefore useful most of the time, since the ease with which we remember relevant examples is a good shortcut for estimating their probability or frequency.
Nevertheless, this shortcut is not infallible. Some events may simply be more remarkable than others, so their availability is a poor indicator of their probability. Negative information reported in the news is to a great extent responsible for feeding this heuristic. By definition, an event must happen rarely to be newsworthy; in fact, it must be really prominent to catch people's attention. News thus reports facts that are statistically irrelevant, biasing our perception of how frequent events are.
As a result, if people evaluate risk based on the ease with which they remember various dangers, they will worry especially about the dangers reported in the media, rather than about those that receive less attention, even if the latter are equally or more lethal.
This is why we tend to believe that we are more likely to die from an accident than from a disease: the brutal crash of two vehicles on a bridge over a cliff gets far more media coverage than a death from asthma, even though 17 times more people die from diseases than from accidents. But, of course, we see news of accidents every day, while we only hear of deaths from asthma when they happen to a friend or relative.
What's more, some researchers have asserted that for this heuristic to work, the event need not even have actually occurred. It may be pure fiction: we only need to have watched it in a film or series.
And, of course, audiovisual media are more vivid than written ones (and they have more human faces!). Over time, we tend to forget where we saw the event (at the cinema, on the news…). The source of the information fades out and only the example itself, whether real or fictitious, survives. So much for the reliability of an available example!
According to Daniel Kahneman: the world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.
How to survive the availability heuristic in the field of cybersecurity
The first step in fighting a bias is to be aware of its existence. If you have reached this point, you should have a clear idea of how the availability heuristic works. From now on, what can you do?
  • As you already know, under the influence of the availability heuristic, users tend to overestimate the probability of vivid and surprising events and will focus on easy-to-remember information. As a security manager you can take advantage of this effect by giving users simple, easy-to-remember stories instead of quoting statistics: for instance, by sharing the story of how the theft of an unencrypted USB device holding a secret prototype's data led to a major case of industrial espionage, instead of presenting the evidence that “more than half of employees have reported that they copied confidential information to USB flash drives, although 87% of these companies had policies forbidding this practice”.
  • Use repetition: the more you repeat a message (with good examples whenever possible), the more easily those examples will spring to users' minds and, with them, the message itself.
  • Take advantage of the media noise caused by security incidents and use it as a spreading vector for your security messages. Keep away from abstractions and impersonal data: anchor your message to the latest example everybody is talking about.
  • Pay more attention to statistics than to the danger of the day. Don't base your judgments on small samples of representative cases, but on big numbers. The fact that something is currently appearing a lot in the media does not mean that it is frequent or high-risk, but merely that it is newsworthy; that is to say, it makes a good story.
  • Don’t trust your memory either. Draw upon data before deciding on an event’s frequency or magnitude.
  • Under this heuristic, we feel more driven to implement security countermeasures after having suffered an incident than before. Check the statistics to understand the real risks we are exposed to, and don't wait to be hit before protecting yourself. If the risk is high, protect yourself now, regardless of how much media coverage the danger receives.
  • We remember an incident more easily than the absence of incidents. After all, each incident is a story in itself, while the absence of incidents doesn't make such an attractive story. For instance, at the casino, the music of the slot machines sounds at full volume when they pay out a jackpot, while the machines that don't pay out make no sound at all. This asymmetry will make you think that jackpots are much more frequent than they actually are. Pay attention not only to what you see, but also to what you don't see: it is easy to remember a successful virus, but difficult to keep in mind the millions of viruses that were not so successful.
  • Surround yourself with a team with diverse experiences and points of view. Diversity itself will limit the availability heuristic, since your team members will naturally challenge each other.
  • Use your contact network beyond your organization when making decisions. Let others provide you with points of view that simply could not exist within your organization. Those groups will have their own stories biasing their judgments in different directions.
Hence, the next time you make a decision, pause and ask yourself: “Am I making this decision because a recent event has come to mind, or am I really considering other factors that I cannot remember so easily?”. The better we understand our personal biases, the better the decisions we make will be.
Gonzalo Álvarez Marañón
Innovation and Labs (ElevenPaths)

Blockchain and IoT predictions for 2019

AI of Things    1 March, 2019

Blockchain and IoT have been dominant players in the technology industry over the last 5 years, their popularity and reputation growing in line with the explosion of Big Data and Artificial Intelligence. Experts continually speculate on the fate of these technologies, often using 2020 as their benchmark year. Before we reach that acclaimed year, we're going to run you through the top predictions for these technologies in 2019.

IoT predictions 2019

Continued growth of voice-controlled devices:

Thanks to the ever-increasing popularity of voice-controlled smart devices, such as Amazon's Alexa, consumers are relying more and more on voice-controlled software; by the end of this year, we may see over 50% of households in the US using a voice-controlled home assistant. Experts expect an influx of sales of devices with a voice interface over 2019, and a new wave of voice-powered devices and accessories launching onto the market throughout the year.

5G becomes a reality:

5G has been on the horizon for years, but it is becoming a reality as we speak. Over the next two years, over 66% of businesses plan to deploy 5G technology in their business practices, with operational efficiency as the key driver. Although 5G has the potential to become the backbone of IoT, public infrastructure will need to be adjusted accordingly beforehand.

Exponential growth:

It is no surprise that the IoT is growing fast; it is the predicted rate of this growth that comes as news. Statista forecasts that over 27 billion devices will be connected by the end of this year, rising to over 75 billion by 2025. Thanks to the low cost of manufacturing connected devices and the increasing power of the networks that connect them, IoT growth is exploding.

Growing security risk:

The huge expected increase in the number of IoT devices being used leads to a parallel rise in security vulnerabilities. The more devices in use, the greater the opportunity for hackers and cyber criminals to carry out attacks and compromise private information, which is especially risky for governments and other organizations that work with classified information.

Figure 1.  Blockchain technology is growing at a rapid rate

Blockchain predictions for 2019

Transparency across industries:

Blockchain is built on the premise of ensuring the complete control and privacy of all user data. The future use of a single publicly available digital ledger will make it easier to reduce successful hacking and to ensure transparency and accessibility to the public.

Decentralisation of apps exchanges to play a key role:

Ethereum continues to be the biggest and most important platform for blockchain technology applications such as dApps and smart contracts. It is expected that in 2019 the world's leading dApps will reach a million daily users.

Autonomous trade: 

Thanks to the foundations of distributed consensus and value exchange, the opportunity for autonomous negotiation is readily available, making trade among applications a reality that will improve market efficiency.

Increased demand for Blockchain experts:

As this technology becomes more widely recognised, more companies will look to apply it. This increased demand will result in a bigger need for blockchain experts, a relatively niche set of skills due to the newness of the technology. 

Decentralised crypto exchanges will grow:

Cryptocurrencies look set for a good year ahead. In markets where cross-border payments and investing exist, these exchanges look set to become more prominent. Their main focus will be to meet the quality standards of their centralized counterparts.

Distributed Data Models:

Over the coming decades, data will become more widely distributed as geographies and cloud data centres multiply. Blockchain proponents have highlighted that this will be a critical aspect of how data works in the future.

Ecosystem of specialized chains:

Due to the significant “public technical debt” generated by defunct cryptocurrencies, it is likely that we will see security and utility chains operating in parallel as use cases are confirmed and calibrated.

Overall, the application and popularity of these technologies look set to rise at unprecedented rates over the next year and beyond, while the issue of security remains prominent as the technology evolves.


Take control of your vehicles with Fleet Optimise

Beatriz Sanz Baños    28 February, 2019

Mobility has become more intelligent thanks to IoT. In the same way that users employ connected solutions to make journeys more efficient or to park more easily, companies can also benefit from this technology by optimising the management of their vehicles. Fleet Optimise is one of the best options for this.

How does it work?

Fleet Optimise is a B2B fleet-management service based on the quick and easy installation of a small IoT device in the OBD port of each vehicle. Once installed, it provides real-time information on GPS location, driving habits and vehicle engine data, such as mileage or fuel level.

As a cloud-integrated service, Fleet Optimise allows users to access the information anytime and anywhere, facilitating management through personalized reports and alerts. Thanks to this, they can know the use and status of the fleet, its GPS location, fuel consumption, mileage, vehicle breakdowns and driver behaviour behind the wheel, and even detect fraud attempts. Its main features also include configurable control panels to support decision-making, the creation of alerts, integration with customer systems, and various accessories such as a panic button for emergencies.
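As an illustration of the kind of engine data such a device reads, here is a minimal sketch using the open-source python-OBD library; this is our own example, not Fleet Optimise's actual firmware or API, which are proprietary:

```python
# Sketch of polling engine data from a vehicle's OBD-II port with the
# open-source python-OBD library (pip install obd); Fleet Optimise's own
# device and backend are proprietary, so this only illustrates the idea.
import obd

connection = obd.OBD()  # auto-detects the OBD-II adapter on a serial/USB port

for command in (obd.commands.SPEED, obd.commands.RPM, obd.commands.FUEL_LEVEL):
    response = connection.query(command)
    if not response.is_null():
        # response.value is a unit-aware quantity, e.g. 52 kph or 41.2 percent
        print(command.name, "->", response.value)
```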

Who is it for?

Fleet Optimise is aimed at most sectors and types of companies, especially those with vehicle fleets or businesses where transport is an essential asset, such as rent-a-car and renting/leasing companies, transportation, sales forces, field forces or security.

Fleet Optimise brings the following benefits to companies:

  • Reduce operating costs and optimize the use of your vehicles
  • Protect your vehicles, employees and cargo by monitoring their status in real time and detecting anomalous situations
  • Maximize the availability of your fleet, thanks to preventive maintenance and the anticipation of breakdowns based on real vehicle data: revolutions (rpm), speed, consumption and downtime
  • Improve driver habits by tracking navigation routes, speed, acceleration, braking and sudden turns

All types of vehicles, from light vehicles to industrial machinery, can benefit from these advantages, since it is a plug & play device that does not require professional installation and works by simply connecting it to the vehicle's OBD port. In addition, the service is offered globally in all the countries where Telefónica is present; it is a modular service that adapts to the needs of each client and offers a customized solution that can be integrated with the client's own systems.

For these reasons, Fleet Optimise is one of the global leaders and a reference in the industry, with more than one and a half million vehicles connected. Europcar, Ferrovial, Fujitsu, Hertz, Honda and Telefónica itself are some of the companies that already enjoy the service. Some of them have even received international recognition for incorporating these types of solutions into their businesses, such as Ferrovial, which was awarded by EnerTic for its commitment to innovation in energy efficiency.

Connectivity enables driving monitoring, cost optimization and increased productivity and safety in vehicle fleets. More sustainable mobility is possible with Fleet Optimise.

GSMA IoT Security Champion: Award to our IoT Security team

ElevenPaths    27 February, 2019
We have a lot to be happy about! Our IoT Security team, dedicated to cybersecurity in the increasingly relevant world of the Internet of Things, has received a well-deserved award for its contribution to the dissemination and application of the GSMA's IoT security guidelines. The GSMA represents the interests of the most important mobile operators around the world and runs the Mobile World Congress events, among them the Barcelona event taking place this week.

For several years, Telefónica has collaborated with other companies in the sector in an initiative led by the GSMA. The interest and support shown by the company during these years has paid off with this award, which rewards the work and dedication demonstrated. 2018 was a very important year, as several KPIs were defined and fully met, including the use of the GSMA checklist in service security assessments and its incorporation into RFPs, among others.
As part of this initiative, a set of documents has been prepared describing the security requirements that must be taken into account when designing and implementing services, as well as a checklist to assess to what extent these requirements have been considered. You can consult the documents in the following links:
The three documents on security correspond to the three essential components of any IoT service: the device, the network, and the platform to which the device connects to send information or receive orders.
In 2016, a case study was published describing how these documents were applied in a project carried out for the Port of Seville. In addition, during the past year we also contributed one of the case studies published within the framework of this GSMA-led initiative.
The entire ElevenPaths team would like to congratulate the IoT Security team on this award, the fruit of their great work, and encourage them to keep it up. Vicente Segura, Telefónica's Head of IoT Security, accepted the award on behalf of the team.

Python for all (5): Finishing your first Machine Learning experiment with Python

Paloma Recuero de los Santos    25 February, 2019

We have finally come to the last part of our Machine Learning experiment with Python for all. We have been taking it step by step, and in this last post we will clear up any remaining doubts and carry on through to the end. We will select the algorithms, construct the models, and put them to the test against our validation dataset. By the end we will have built a good model and, above all, lost our fear of Python. And so… we'll keep learning!

The steps that we have taken in the previous post are as follows:

  • Load the data and modules/libraries we need for this example
  • Explore the data

We will now go through the following:

  • Evaluation of different algorithms, to select the most suitable model for this case
  • Application of the model to make predictions from what it has “learnt”

3. Selecting the algorithms

The moment has come to create models from the known data and estimate their precision on new data. For this, we are going to take the following steps:

  • We will separate part of the data to create a validation dataset
  • We will use 10-fold cross-validation to estimate accuracy
  • We will build 6 different models to predict (from the measurements of the flowers collected in the dataset) which species a new flower belongs to
  • We will select the best model 

3.1 Creation of the validation dataset

How do we know if our model is good? To learn what type of metrics we can use to evaluate the quality of a Machine-Learning-based model, we recommend reading the post we published recently about the Confusion Matrix. We use statistical methods to estimate the precision of the models, but we also have to evaluate them on new data. For this, just as we did in our previous Machine Learning experiment in Azure Machine Learning Studio, we will reserve 20% of the data from the original dataset as a validation set. Applying the model to this held-out part lets us check how well the model we generated, and the algorithm we chose, work with the remaining 80%. This procedure is known as the holdout method.

With the following code, which, as before, we can type or copy and paste into our Jupyter Notebook, we separate the data into the training sets X_train and Y_train and the validation sets X_validation and Y_validation.
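A sketch of that cell follows. One assumption: the earlier post loaded the iris CSV with pandas, while here, to keep the snippet self-contained, we take the same data from scikit-learn's bundled copy; the 80/20 split and the fixed seed follow the post.

```python
# Reconstruction of the train/validation split described above.
# Assumption: we load iris from scikit-learn instead of the CSV used
# in the previous post; X holds the 4 measurements, Y the species label.
from sklearn import datasets, model_selection

iris = datasets.load_iris()
X, Y = iris.data, iris.target

validation_size = 0.20   # reserve 20% of the data for final validation
seed = 7                 # fixed seed so the split is reproducible

X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(
    X, Y, test_size=validation_size, random_state=seed)
```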

This method is useful because it is quick to compute. However, it is not very precise, since the results can vary a lot when we choose different training data. To overcome these issues, the concept of cross-validation emerged.

3.2 Cross-validation 

The objective of cross-validation is to guarantee that the results we obtain are independent of the partition between training and validation data, which is why it is often used to validate models generated in AI projects. It consists of repeating the evaluation on different partitions and calculating the arithmetic mean of the metrics obtained on each. In this case, we are going to use 10-fold cross-validation: our training data is divided into 10 parts, the model is trained on 9 and validated on the remaining 1, and the process is repeated 10 times. The image shows a visual example of the process with 4 folds.

Figure 1: Cross-validation (by Joan.domenech91, CC BY-SA 3.0)

To evaluate the model, we choose accuracy as the scoring metric: the ratio between the number of instances the model predicts correctly and the total number of instances in the dataset, multiplied by 100 to give a percentage.

For this, we add the following code:
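A minimal sketch of this setup, reusing the seed from the previous cell, fixes the accuracy metric and prepares the 10-fold splitter:

```python
# Test harness: accuracy as the evaluation metric and a 10-fold splitter.
# Reuses `seed` from the previous cell.
from sklearn import model_selection

scoring = 'accuracy'  # proportion of correctly classified instances
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
```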

3.3 Constructing the models 

As we don't know beforehand which algorithms will work best for this problem, we will try 6 different ones: linear (LR, LDA) as well as non-linear (KNN, CART, NB and SVM). The initial graphs suggested that some of the classes are linearly separable in some dimension, so we expect generally good results. We will evaluate the following algorithms:

  • Logistic Regression (LR)
  • Linear Discriminant Analysis (LDA)
  • K-Nearest Neighbours (KNN)
  • Classification and Regression Trees (CART)
  • Naïve Bayes (NB)
  • Support Vector Machines (SVM)

Before each execution we reset the seed value, to make sure that each algorithm is evaluated on exactly the same data splits and that the results are directly comparable. We add the following code:
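A sketch of that cell, building on the variables defined above (X_train, Y_train, seed and scoring); the exact hyperparameters are assumptions:

```python
# Evaluate each algorithm with 10-fold cross-validation on the training set.
# Builds on X_train, Y_train, seed and scoring from the previous cells.
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

models = [
    ('LR', LogisticRegression(max_iter=200)),
    ('LDA', LinearDiscriminantAnalysis()),
    ('KNN', KNeighborsClassifier()),
    ('CART', DecisionTreeClassifier(random_state=seed)),
    ('NB', GaussianNB()),
    ('SVM', SVC(gamma='auto')),
]

results, names = [], []
for name, model in models:
    # Re-create the splitter each time so every model sees the same folds
    kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = model_selection.cross_val_score(
        model, X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
```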

3.4 Choosing the model that works best

If we execute the cells (Cell/Run Cells) we can observe the estimations for each model, so that we can compare them and choose the best one. Looking at the results obtained, we can see that the model with the highest precision value is KNN (98%).

Figure 2: Precision results of the distinct algorithms.

We can also create a graph of the model evaluation results and compare the distribution and average precision of each model (each algorithm is evaluated 10 times due to the 10-fold cross-validation we chose). For this, we add the following code:
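A sketch of the plotting cell, using matplotlib on the results and names lists collected in the previous step:

```python
# Compare the cross-validation score distributions with box-and-whisker plots.
import matplotlib.pyplot as plt

fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)          # one box per algorithm, 10 scores each
ax.set_xticklabels(names)
plt.show()
```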

We get this result:

Figure 3: Box and whisker plots comparing the algorithms.

In the box-and-whisker diagram it is clear that the precision of several of the models (KNN, NB and SVM) reaches 100%, whilst the model that offers the least precision is logistic regression (LR).

4. Applying the model to make predictions  

The moment has come to put the model we created from the training data to the test. To do so, we apply it to the part of the original dataset that we separated at the start as the validation dataset. Since we have the correct classification values and they were not used to train the model, comparing the real values with those predicted by the model tells us whether the model is good or not. We apply the chosen model (the one that gave us the best accuracy in the previous step) directly to this dataset, and we summarise the results with a final validation score, a confusion matrix and a classification report.

To apply the model based on the SVM algorithm, we just need to run the following code:
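A sketch of that final cell (the SVC hyperparameters are an assumption), reusing the training and validation sets from the first step:

```python
# Train SVM on the full training set and evaluate it on the held-out
# validation set; reuses X_train, Y_train, X_validation, Y_validation.
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.svm import SVC

svm = SVC(gamma='auto')
svm.fit(X_train, Y_train)
predictions = svm.predict(X_validation)

print(accuracy_score(Y_validation, predictions))    # final validation score
print(confusion_matrix(Y_validation, predictions))  # correct hits on the diagonal
print(classification_report(Y_validation, predictions))
```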

We get something like this:

Figure 4: Evaluation of the algorithm on the validation set.

As we can see, accuracy is 0.93, or 93%: a good result. The confusion matrix shows the number of points the prediction model got correct (the diagonal values: 7+10+11=28), while the elements outside the diagonal are the prediction errors (2). From this, we can conclude that it is a good model and we can apply it with confidence to new data.

We have based the model on the SVM algorithm, but the precision values for KNN are also very good. Are you up for repeating this last step with the other algorithm?

With this step, we can officially say that we have finished our first Machine Learning experiment with Python. Our recommendation: redo the whole experiment, take note of any doubts that arise, try to find the answers, and try small changes to the code, like the one we just proposed. On platforms such as Coursera, edX, DataCamp or Codecademy you can find free courses to keep improving. Never stop learning!



IoT, the new ally of renewable energies?

Beatriz Sanz Baños    21 February, 2019

Since its arrival, IoT has been pushing society towards a future in which most devices will be connected. Climate change and pollution levels also push us towards a future of greater care for the planet, and even, to the extent that we can, of reversing the damage caused by so many years of emissions.

Has the time come to unify both impulses to conquer a more technological future, nourished by clean energies?

Although photovoltaic and wind installations in Spain are currently more widespread in the industrial and agricultural sectors, energy self-consumption in households will become widespread in the foreseeable future.

The main disadvantage of renewable energies is that they are very variable, since they depend to a large extent on environmental factors. This variability makes energy distribution difficult: when too much energy is generated, it cannot all be stored and a large part is wasted; when energy is scarce, alternative fuel sources are needed. The perfect system would require greater energy storage capacity at moments of maximum generation. This is where IoT can help, by monitoring systems and acting on them remotely.

Foreseeing that clean energies are here to stay, IoT technology is already working its way into this sector, seeking to promote energies such as solar and wind, and to improve their management in order to curb problems such as climate change.

IoT seeks to promote energies such as solar and wind

IoT is especially good at getting systems located in distant places to communicate with each other and interact as if they were a single unit. In this sense, IoT allows the network of storage systems to be controlled in a centralized manner, as a solution to the supply imbalance generated by the variability of renewable generation.

Thus, each solar panel or wind turbine can be monitored and controlled remotely through sensors, allowing greater savings in consumption as well as predicting the machinery repairs that will be necessary, avoiding energy waste.

On the residential side, applying this model of self-consumption in smart buildings will undoubtedly achieve greater efficiency and savings. Lighting and temperature in homes can be controlled through IoT technology depending on the stored energy. For example, artificial light will adapt to the level of illumination detected by the sensors, switching on only when necessary. In the same way, heating will be regulated automatically to provide only the energy needed to reach the ideal temperature.
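As a toy illustration of that kind of rule (a sketch with hypothetical sensor values and thresholds, not a real building-management system):

```python
# Hypothetical smart-lighting rule: switch artificial light on only when
# the ambient illuminance measured by a sensor falls below a target level.

TARGET_LUX = 300.0   # illustrative comfort level for a living room

def lamp_should_be_on(ambient_lux: float, room_occupied: bool) -> bool:
    """Light up only if someone is in the room and natural light is low."""
    return room_occupied and ambient_lux < TARGET_LUX

# Example: evening readings from a (hypothetical) illuminance sensor
for lux, occupied in [(520.0, True), (210.0, True), (180.0, False)]:
    state = "on" if lamp_should_be_on(lux, occupied) else "off"
    print(f"{lux:6.1f} lux, occupied={occupied} -> lamp {state}")
```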

In Hawaii we find a good example of an IoT solution applied to renewable energy in the project carried out by Steffes Corp. The company has set up a system that manages hot water heaters distributed across 500 homes. Connectivity makes it possible, in this case, to match supply to demand as closely as possible so that no energy is wasted.

Florida Power and Light has also opted for IoT for renewable energy, creating smart energy meters for solar panels that collect information through sensors and, through networks, manage consumption and provide customers with this information in real time. The implementation of this technology generated savings of 3.4 million dollars and a reliability of 99%: a very beneficial result for the company and the customers but, above all, for the environment. Another Smart Metering success story is the installation by Telefónica of a network of smart meters to measure and manage in real time the energy consumption of household appliances in the United Kingdom.

IoT is a great ally of renewable energies

Cases such as Steffes Corp, Florida Power and Light and Telefónica demonstrate the potential the Internet of Things has to help achieve more sustainable energy management. The application of this technology will be fundamental for the development and consolidation of renewable energies and, therefore, for the fight against pollution and climate change.

With all that has been shown, we can clearly confirm that IoT is a great ally of renewable energies.

The Forrester Wave™: Specialized Insights Service Providers

AI of Things    21 February, 2019

2018 was an excellent year for us: Forrester recognised us as a leader amongst specialized insights service providers. The independent review highlights the importance of insight-driven digital transformation within organisations.

The report defines insights service providers as organisations that “offer access to advanced data and analytics skills, methodologies, and technology not always available internally.” We know that many companies look for external help when dealing with data, and Forrester records that 49% of decision makers reported engaging a strategic insights partner in 2016, while 62% of high-growth (10% or more) companies engaged with insights services organisations in 2018.

The report affirms that the vast and varied nature of problems that businesses face is mirrored in the landscape of insight service providers, with each one offering tailored services that are specialised to individual business needs.

Forrester identifies two broad categories of insights providers: enterprise insights service providers, which come from a traditional services background, and specialised insights service providers, which extend existing offerings to deliver outcomes.

The review summarises the benefits to businesses of engaging an insights provider in a fast-paced market, which include improving business acumen and services, expanding data and analytics assets to accelerate service delivery, ensuring insights are implemented to deliver value, investing in data and analytics talent, and building a broad ecosystem.

Forrester assessed the strengths and weaknesses of the top service providers using the following evaluation criteria:

  • Current offering 
  • Strategy 
  • Market presence

And selected the vendors evaluated according to the following criteria:

  • Advanced data competency 
  • Advanced analytics competency 
  • Functional breadth 
  • Vertical penetration 
  • Heritage

The report announces Telefónica as one of the two clear leaders in the sector, with both “strong current offerings and compelling strategies.” It recognises our unique position within the market, with our wealth of network and subscriber data powering our insights.

Figure 2. Scorecard Q3 2018, The Forrester Wave™

Forrester also recognises our recent and successful acquisition of Synergic Partners, whose rich data assets allow us to deliver insights drawing on the strength of the data of Telefónica's 350 million subscribers across 17 countries. We ensure this data is always aggregated and anonymised to guarantee its responsible use by all parties. As the report notes, “Customer references report high levels of satisfaction with the quality and sophistication of analytics services.”

Using three data sources, Forrester was able to weigh the strengths and weaknesses of each solution:

  • Vendor surveys. Forrester surveyed vendors on their capabilities as they related to the evaluation criteria.
  • Executive strategy briefings. Forrester held in-person and virtual meetings where participants described the company’s background, positioning, value proposition, customer base, and strategic services vision.
  • Customer reference calls and survey. To validate service and vendor qualifications, Forrester also conducted reference calls and fielded a short online survey to three of each vendor’s current customers.
Figure 3. The Forrester Wave™

Overall, the independent review recognises Telefónica as a leader amongst the most significant companies in the sector. According to the analysts, Telefónica is one of two companies that “stand out as clear Leaders with both strong current offerings and compelling strategies.” Our climbing market position leaves us well placed to continue growing and to take advantage of the insights we gain from our unrivalled database.
