From Automation To Inclusion: Why Banks Are Embracing Sponsored Data

AI of Things    12 July, 2017


Content written by David Nowicki, CMO and Head of Business Development at Datami, and published through LinkedIn.

A wave of Sponsored Data deployments from leading retail banks is the latest in a long line of innovations aimed at increasing the accessibility of banking services while driving down the cost of provision.

Fifty years ago this month, retail banking changed forever with the introduction of the first Automated Teller Machines. Designed to serve customers outside of traditional banking hours, and outside of the branch, the ATM enabled a far higher number of transactions at a fraction of the cost of in-branch services.

It was a technology landmark, setting the scene for half a century of strategic innovation aimed at giving customers greater control over their personal finances while reducing banks’ cost of service provision.

The sophisticated app-based smartphone banking experience many of us enjoy today as part of the digital transformation of retail banking is both a direct descendant of the ATM and destined to replace it. 
Mobile banking services offer the same upsides as ATMs on a far greater scale. They put sophisticated banking capabilities directly into the hands of anyone with a smartphone, wherever they go. The technology scales far better than expensive ATMs, and the cost of mobile banking transactions can be as little as five percent of comparable in-person transactions, according to a report published by Frost & Sullivan. The potential for cost reduction is enormous.

Figure 1: The cost of mobile banking transactions can be as little as five percent of comparable in-person transactions.

But to drive change in behavior and generate economic efficiencies, technology must be accessible. Not until banks had built ATM networks that reached significant numbers of their customers did the machines begin to deliver returns. 

Mobile banking services are subject to the same restraints in terms of accessibility: Only if consumers are able to use these services do they or their banking providers derive any benefit. In the case of mobile banking, the availability of the technology itself is not the problem. Mobile penetration is high, even in emerging markets, and retail banks have some of the best-engineered smartphone apps and customer websites available. 
Instead, it is the cost of the mobile data associated with banking services that, for many consumers today, puts those services out of reach or limits usage. Just as consumers resent having to pay a transaction charge to withdraw cash from an ATM, many do not want to incur additional costs to access mobile services.
It is a particular problem in low-income segments and developing markets where the majority of consumers depend on prepaid tariffs and where, at the same time, regulatory pressure to extend financial services to the ‘unbanked’ is intensifying. The solution to this new accessibility challenge lies in yet more innovation: Banks are now absorbing the cost of the mobile data consumed by customers using their services.
By partnering with mobile operators to sponsor the mobile data associated with their services, banks are able to increase financial inclusion by encouraging uptake of their mobile services, improve the customer experience and drive economic efficiencies at ever greater scale.
Among the first banks to pioneer this approach was Brazil’s Bradesco. Within a month of introducing sponsored data for its mobile services, Bradesco doubled its monthly mobile banking registrations, signing up 400,000 customers.

In a little over a year following launch, the number of mobile banking customers increased from three million to seven million, with mobile banking growing to represent 29% of all transactions, according to Frost and Sullivan. Bradesco reported ROI of 3x within the same period, also revealing that—by enabling more customers to check their balance using their mobile phone—average ATM visit length has been reduced by 25 seconds.

Figure 2: Mobile banking represents 29% of all transactions.

The past two years have seen some of the largest and most influential banks in the world embrace sponsored data as part of their digital transformation and financial inclusion strategies, with a particular emphasis on the Latin American region. For example, Santander in Brazil, BBVA Bancomer in Mexico, and Davivienda in Colombia have all made significant early moves to enable customers to use their mobile apps without using their mobile data, pushing financial services to the customer, wherever they might be. 

This is a trend that began with the launch of the ATM. But as pivotal as the ATM was all those years ago, banks in 2017 are becoming mobile-first businesses. Sponsored data – the culmination of 50 years of innovation in customer-facing banking technology – is set to far surpass the ATM in the drive to improve service, drive financial inclusion and reduce cost.

Executive Insight Series by Glyn Povah, Director of Product Development at LUCA

AI of Things    10 July, 2017


This content was originally published by the Mobile Ecosystem Forum. Below, Glyn Povah, Director of Global Product Development for Smart Digits, gives his insights:
 
If data is the new oil, how are operators unlocking their deep reserves? In the latest Executive Insights video, supported by Mahindra Comviva, MEF talks to Glyn Povah, founder and director of global product development for Smart Digits at Telefónica’s Data Unit, LUCA. He explains why Telefónica started an entirely new business unit to explore the community…

In the north of England, they have an expression: “where there’s muck, there’s brass”. It means you always turn trash (muck) into money (brass).
 
For a growing number of mobile operators, there’s a growing realisation that this applies to them. The muck in question? Their network data.
 
 
 

 

 
 
For years, telcos did nothing with the information that showed how customers were moving between their cell towers. It was useless. Waste product, they frequently called it.
 
But then came the information economy and the idea that “data is the new oil”. Soon, it became clear that lots of companies – transport providers, retailers, health authorities and more – would be very interested in finding out from telcos how crowds were moving. 
 
Telefónica was one of the first to act.
 
In 2012, it formally entered the big data space with Dynamic Insights. Its first product was Smart Steps, which promised to “measure, compare, and understand what factors influence the number of people visiting a location at any time.”
 
Later came Smart Digits, which gave businesses the option to access business insights related to individual customers (with consent) in order to speed up transactions or reduce fraud. 
 
In October 2016, Telefónica wrapped up these various activities inside LUCA, a dedicated unit selling BDaaS (Big Data as a Service) using the Telefónica cloud infrastructure. At the time, Chema Alonso, Chief Data Officer of Telefónica, said: “Big data has helped us at Telefónica, and we strongly believe it will help our clients in decision-making, more efficient resource management and in returning the benefits of this wealth of information not only to their clients and direct users, but also to society.” 
 
To find out more about Telefónica’s future as a data insights company, MEF spoke to Glyn Povah, founder and director of global product development for Smart Digits at Telefónica’s Data Unit, LUCA.
 

Telefónica is quite advanced in thinking about the value of its data. What’s behind this?

 
Well, if you look at Telefónica/o2 as a brand, we were early into the idea of using Big Data to drive better offers for customers. And in time we realised that the same data sets could drive new commercial business revenue too. It became pretty obvious that our data insights could help a wide range of third parties, while at the same time offering benefits to our customers.
 

What kind of data forms the basis of these insights?

 
If you think about a mobile operator, it has a cellular network and people are moving between these different cell sites constantly. In the case of o2 in the UK, that’s 25 million people. We can also see what kind of devices people are using, and trends that help us understand the type of mobile content and apps they are interested in.
 
This used to be thought of as waste product. But actually it’s incredibly valuable – and it’s ubiquitous, so we have instant scale. Even a company like Google can’t compete with that scale.
 
That led us to think about how we could become an information company, using anonymised and aggregated data to help businesses understand their markets and their customers better.
 

Can you give examples? 

 
Our first use case was probably the MBNA bank, which we helped with overseas fraud alerts. MBNA found that it was declining lots of legitimate transactions just because a card holder was trying to make a purchase abroad.
 
Clearly, if you’re a card company, this is the last thing you want. It’s lost revenue but also a huge customer experience problem. You have card holders who can’t make payments and maybe can’t even call you because of the time difference.
 
After the card holder gave consent, we were able to offer an insight that helped MBNA match the country where a transaction was being made with the country where the phone was located. MBNA can combine the result with other data sets and make a better decision about whether to approve, decline or refer a purchase.
 
In time, we will be able to extend this into the domestic region. So the bank could do the same kind of query when there’s a big ticket purchase in a trusted retail location.
 
Another big focus for us has been reducing the incidence of account takeover fraud. What happens here is that organised criminals use phishing to take over a person’s bank account; the next step is to move money from the victim’s account into a bank account they control. To do that they need to take over the outbound authentication, which could be a PIN sent by text.
 
What we do is let the bank respond to a flag, such as a new payee requesting a $10,000 withdrawal. They can check whether a SIM swap has been made in the last few days, and if it has, they can decline the transaction or ask the customer to visit a branch.
 
It’s an extremely simple tool but it can have a big impact. In the UK, we’re working with four of the five big banks on this. Millions of financial transactions, where SMS authentication is used, are protected in this way every month, protecting both the banks and their customers from this type of fraud.
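As a rough illustration of the decision flow Glyn describes, the bank-side logic might be sketched as follows. This is purely illustrative: the recent_sim_swap() lookup, the function names and the thresholds are hypothetical stand-ins, not Telefónica’s actual Smart Digits API.

from datetime import timedelta

SIM_SWAP_WINDOW = timedelta(days=3)   # "the last few days"
HIGH_VALUE_THRESHOLD = 10_000         # flag large new-payee withdrawals

def recent_sim_swap(msisdn: str, window: timedelta) -> bool:
    """Placeholder for the operator lookup: has this number's SIM been swapped within the window?"""
    raise NotImplementedError("illustrative stub - replace with the real operator API")

def review_payment(msisdn: str, amount: float, new_payee: bool) -> str:
    """Return 'approve' or 'refer' for a payment protected by SMS authentication."""
    if new_payee and amount >= HIGH_VALUE_THRESHOLD and recent_sim_swap(msisdn, SIM_SWAP_WINDOW):
        # SMS one-time codes may be compromised: decline the transaction
        # or ask the customer to visit a branch instead.
        return "refer"
    return "approve"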

 

Figure 1: LUCA Sign-Up is another exciting product in our portfolio.

How is Telefónica extending these ideas directly to customers?

 
We’re just starting to roll out services that empower people to use their own data for their own benefit. The best example is probably LUCA Sign-Up.
 
If you’re trying to register for a mobile service of any kind, a big challenge is completing forms. You’re on the move, your keyboard is small, you’re pressed for time. LUCA Sign-Up recognises you’re an o2 customer and asks if you want to fill in the forms with subscriber information you have already registered with o2, if available.
 
Obviously, we do the lookup in a very secure way – you have to enter your phone number to verify – but it’s a very simple idea that reduces sign-up time dramatically, removes frustration for our customers when signing up to services on their mobile devices, and can significantly improve conversion rates on sign-ups for service providers.
 

Do you think customers will distrust and misunderstand your motives when it comes to personal data? 

 
They might. However, we’ve been very clear that we are developing tools and enablers as part of our fourth platform strategy to put trust and transparency at the heart of all data products and services. 
 
We announced at MWC this year that all our services are underpinned by three core principles: Security, Transparency and Empowerment. It’s essential that customers feel in control of how their data is used and that we create easy and simple tools for them to do so.
 
But we’re not like some Silicon Valley companies that earn all their revenue from selling people’s data. We are not in the business of harvesting and selling private information. We sell tariffs and connectivity – and we’re very good at it.
 

How big an opportunity is it? 

 
This will not replace our core business. We are not trying to create a $6 billion big data division. We want to give customers back some genuine value and empower them, so that they can see more benefits from staying with Telefónica operators.
 

How can your data help with wider social issues?

 
I think there are any number of ways to help societies make better decisions around things like traffic management, tourism and retail spending. And I believe there could also be a real impact in healthcare.
 
If we can observe patterns of movement, health authorities might be able to make better decisions about the spread of epidemics for example. And you can extend this down to the individual level. If grandma hasn’t left the house for a few days, that could trigger an alert to relatives.
 
So big data is one of our key themes and we are strong believers in using data for social good, so that societies can benefit in all the countries where we operate.
 
The Executive Insights Video Series is supported by Mahindra Comviva and looks at network operators’ views on the drivers and trends of the Business of Tomorrow.
 
From LUCA, we hope that this external coverage gives an insight into the work we carry out. Keep up with our next steps on both our Twitter and LinkedIn.

ElevenPaths participates in AMBER (“enhAnced Mobile BiomEtRics”) project

ElevenPaths    10 July, 2017
ElevenPaths has participated in the AMBER (“enhAnced Mobile BiomEtRics”) project as an Industrial Partner since 1st January 2017. AMBER is a Marie Skłodowska-Curie Innovative Training Network under Grant Agreement No. 675087, addressing a range of current issues facing biometric solutions on mobile devices. The project will run until 31st December 2020 and will lead the training and development of the next generation of researchers in the biometrics area, helping them to align their research activities with both academic goals and the requirements of the industrial and professional market.

The AMBER project will host ten Marie Skłodowska-Curie Early Stage Researcher (ESR) projects across five EU universities, with direct support from seven industrial partners who mentor project development and check its alignment with market needs. The aim of the Network is to collate Europe-wide complementary academic and industrial expertise, and to train and equip the next generation of researchers to define, investigate and implement solutions that ensure secure, ubiquitous and efficient authentication whilst protecting the privacy of citizens.

Over recent years the ubiquity of mobile platforms such as smartphones and tablets has rapidly increased. These devices provide a range of untethered interaction unimaginable a decade ago. With this ability to interact with services and individuals comes the need to accurately authenticate the identity of the person requesting the transaction, many of which carry financial or legally-binding instructions.

Biometric solutions have also seen increased prominence over the past decade with large-scale implementations in areas such as passport and national ID systems. The adoption of specific biometric sensors by mobile vendors indicates a long-term strategy for authentication. This adoption is at a critical point: users need to be confident in biometrics in terms of usability, privacy and performance; a compromise in any one of these categories will lead to mistrust and a reluctance to adopt them over conventional forms of authentication. The design, implementation and assessment of biometrics on mobile devices therefore requires a range of solutions to aid initial and continued adoption. The EU needs experts trained specifically in this field to ensure that it participates, competes and succeeds in the global market.

AMBER comprises four core elements to provide the training to recruited Early Stage Researchers (ESRs):

  • a host Beneficiary institution will provide resources and expertise directly associated with each of the projects
  • a secondment to a ‘link’ academic institution (another of the Academic Beneficiaries) working in a complementary sub-discipline providing additional expertise and resources
  • an industrial secondment within a company (a Partner Organisation such as ElevenPaths) that will provide an understanding of current and future market demands on solutions, access to industrial and customer resources, and possible integration of solutions into market-leading technology implementations
  • a series of coordinated training events linking the various projects within AMBER and providing a range of transferable skills to ensure effective future research and development within the field.

ElevenPaths will support the University Carlos III of Madrid (UC3M) in ESR9, “Vulnerability assessment in the use of biometrics in unsupervised environments”:

Using biometrics on mobile devices means that authentication will be carried out without any kind of supervision. As there is no supervision, the user (or anyone who has obtained access to the device) is able to perform any kind of attack on the authentication process without restriction. Therefore, mechanisms to detect those attacks and avoid misuse of the device must be implemented. Although this goal is common to many other kinds of authentication system, new challenges appear when considering the use of mobile devices. The first is the variety of manufacturers, models and operating systems of the devices owned by citizens, which means that the solutions obtained must be as multi-platform as possible. Another challenge is that mobile devices have not been designed with biometric authentication in mind, or even authentication itself, but rather to provide other kinds of services to users (e.g. calls, data connection, web browsing, etc.). This means that the researcher should not, a priori, count on any help from device manufacturers; indeed, some manufacturers may initially resist any suggestion to integrate new sensors because of the potential increase in cost. On the other hand, mobile devices have many other sensors that could be exploited by the authentication process in order to mitigate vulnerabilities, so a further challenge is to analyse how these can be used for the benefit of the citizen at low cost.

This three-year project will start by studying biometrics, mobile technologies and security. Following this, the ESR will perform security analysis and risk assessment targeting different use cases. With the results obtained, in particular all the vulnerabilities detected, R&D will be conducted to develop a quantifiable framework and tools to identify and mitigate vulnerabilities, while keeping universality at a viable level (i.e. not significantly reducing the user population through the introduction of these mechanisms). The mechanisms developed will be integrated into some of the most common applications to check performance, robustness and user acceptance, promoting the use of the device and framework by industry.

Innovación y laboratorio
www.elevenpaths.com

What are the 5 principles of joined-up data?

Paloma Recuero de los Santos    7 July, 2017


The definition and principles of ‘open data’ are quite clear and simple, but the principles of joined-up data are less so. Can you enunciate five principles of joined-up data that could serve as a practical guide for others?

The Joined-Up Data Standards (JUDS) project enunciated these five principles as concrete guidance towards a commonly recognised list of principles for interoperability – the ability to access and process data from multiple sources without losing meaning, and to integrate them for mapping, visualisation and other forms of analysis – at a global level.

Figure 2: The 5 Principles of joined-up data infographic (self production)

Movies and Big Data: how to make your heart pound

AI of Things    5 July, 2017
Since movie theaters were invented, there have been two types of people who go to them. The first group arrives just in time for the movie, rushing to their seat as it is about to start. The second group gets to the theater forty-five minutes early so that they can watch every single trailer shown before the screening. Regardless of which group you belong to, if you’re a movie lover, Big Data is starting to change movies and trailer editing forever.
Trailers have been with us since the beginning of cinema; however, they have changed quite a lot over the years. You can check the video below for a quick summary of the history of trailers, courtesy of The Verge.

In recent years, trailers have tended to show most of the plot of the movie, keeping back no surprises, so that we actually want to go to the movies – even though, once in the cinema, we expect there to be more than the trailer actually showed. As the video states, the trailer for the Star Wars movie “The Force Awakens” reversed that trend of revealing all the key plot points: it showed as little as possible, leaving the rest to the imagination. We are finally being properly teased. But why this sudden change? As Wayne Peacock, Disney’s vice-president of analytics insights, explains in this Financial Times article, the trailer was studied from every social media perspective, gathering data on Star Wars fans. It was then crafted until they got what they thought could be a great hit on social media. And so it was: on the day of its release, 112 million people watched the trailer in 24 hours. And based only on this trailer, fans created further content such as trailer reaction videos, cardboard recreations and more.

Here is where Big Data plays its role in this story. Movie studios take all the data from profiles likely to enjoy a given movie and analyze it so they can craft a campaign down to the last detail. For example, in this study carried out by CitizenNet, a company specialized in the analysis of social media data for advertising, a correlation is found between Facebook likes and domestic box office returns.

If this study was done with Facebook and box office data, imagine what could be done with more detailed datasets. That is exactly what the next example shows. You have probably heard about the popular game World of Warcraft. Last summer, the movie was released worldwide and was a complete hit. But if you followed its box office numbers, you would notice that it wasn’t in the US where WoW performed best; it was actually in China. It made $47 million in the US and $213 million in China. The FT article explains that the reason for its success in China was not simply that there are more fans there.

WoW was produced by Legendary Films, which has a 70-person team dedicated exclusively to marketing its productions. This team is run by Matthew Marolda, Chief Analytics Officer. Legendary was acquired last year by the Chinese giant Wanda, which helps explain why the film did so well in that market. They had access to detailed data on that market, so they were able to closely follow advance ticket sales. Also, the Chinese population is much more digital than the American one, so they could study it far more thoroughly and target their commercial actions accurately.
But beyond sales strategies, Big Data has recently started to help in other areas of film production. If you have seen the Oscar-winning “The Revenant” you might have found it heart-pounding. But how do the producers find out how you reacted to the movie? “The Revenant”, directed by Alejandro G. Iñárritu, carried out an unusual experiment in one of its pre-release screenings. Twentieth Century Fox, the producer of “The Revenant”, partnered with Lightwave, a bioanalytics technology company, and used its technology to measure the audience’s reaction with a wearable device. These are the results they got from it:

Figure 1: The Revenant bioanalytics results to the experiment.
Imagine what film studios can do with this data. Not only can they use it for commercial ends, but they could also change the way a film is edited in order to fine-tune it and make the audience feel more immersed in the movie. There are endless possibilities.

Stories have been part of our culture since the very beginning. They have been used to carry information when there was no written language, to pass culture from generation to generation, and so on. Crafting the perfect story relies on human creativity. Being able to scientifically measure the emotional response to a story could help us better understand ourselves and our emotions. At LUCA, we are eager to see how this industry develops new uses of data to help us find out what makes our hearts pound.

It’s raining data: How Big Data is changing weather forecasting

AI of Things    4 July, 2017
Be it the crazy hot Madrid summers or the ever-changing nature of British weather, the weather is a constant topic of conversation for people around the world. Historically, weather forecasting has also always leveraged data: record highs, record lows and averages are frequently provided alongside daily forecasts, and all of this was recorded and provided before the advent of Big Data. But how has Big Data changed the way weather is predicted?

On a personal level, Big Data is helping make weather prediction much more accurate and localized. Rather than having to check the weather for a large geographic area, companies like Dark Sky have developed apps that allow users to get weather forecasts specifically tailored to their exact neighborhoods. Or as Dark Sky puts it on their website, “With down-to-the-minute forecasts, you’ll know exactly when the rain will start or stop, right where you’re standing.” As Forbes highlighted in a recent article on different applications of Big Data, weather prediction is becoming more accurate because of the proliferation of sensors that can provide real-time weather updates. As the Internet of Things continues to expand, more and more everyday items, from fire hydrants to traffic lights, are equipped with smart technology that allows them to collect data and report back about conditions in their environment. This data can then be collated together with the available satellite weather data to provide incredibly accurate weather predictions. 
Big Data is also being used to reduce the impact of extreme weather occurrences. For example, it allows forecasters to more accurately predict where and when massive storm systems will hit. As an interesting video from Datafloq highlights, when Hurricane Sandy hit the eastern coast of the United States in 2012, experts were able to predict landfall within 10 miles. This is contrasted with a 1990 storm prediction, which could target landfall within 345 miles – a rather large distance range! This level of accuracy allows people to prepare more efficiently and for any necessary evacuations to be more targeted. The financial implications of this increased accuracy are also significant. As the video points out, losses from extreme weather occurrences amount to over $200 billion annually and more than a third of global GDP is subject to impact from natural disasters.

Figure 2: For major storms like Hurricane Sandy, landfall predictions are increasingly accurate. 
As climate change causes an increasing frequency of extreme weather occurrences such as Hurricane Sandy, climate scientists are calling on Big Data to join the fight. On the blog, we have previously looked at cases of how Big Data is being used to map the impact of climate change, but it is also being used to try to reduce that impact before it happens. Tools like Surging Seas, a project from the organization Climate Central, map and track rising sea levels around the world. This allows people to learn about their flood zones and prepare if they are at risk. The organization Cloud to Street focuses on the same topic, combining Big Data, floodplain information and demographic vulnerability statistics to help those most at risk from devastating floods. As Cloud to Street describes it, this results in “Smarter planning, local resilience, and data empowered communities.”

Figure 3: Organizations like Cloud to Street are helping those most vulnerable to climate change weather events.

Whether it is making everyday life easier by telling people when they need an umbrella, or providing life-saving information about extreme storms, Big Data is changing what we know about weather. The elevated accuracy means that people can make better decisions around the weather, and this accuracy is only going to increase as more data is collected through the internet of things. At LUCA, we are excited to watch the development of how Big Data can be used to have both big and small impacts on how we interact with weather. 

New tool: PySCTChecker

ElevenPaths    3 July, 2017
This is a “Quick and dirty” Python script for checking if a domain properly implements Certificate Transparency. If so, it is possible to observe how Certificate Transparency is implemented on the server side.

When a server implements Certificate Transparency, it must offer at least one SCT (a proof of inclusion of the server TLS Certificate into a Transparency Log).
An SCT can be offered in three different ways:

  • Embedded in the certificate
  • As a TLS extension
  • Via OCSP Stapling

Using PySCTChecker, it is possible to identify the delivery options the server uses and the logs the certificate has been sent to. It is also possible to check whether the offered SCTs are valid and legitimately signed by the logs.

This script needs just a list of domains as input. For each domain, it will check whether the server implements Certificate Transparency. If the server offers any SCT, the script will show extra information about it, such as the logs the TLS certificate has been sent to and the method the server uses to deliver the SCT.

Usage: 

python PySCTChecker/ct_domains_sct_checker.py [domain1 domain2 …] 

Output example:

This is a quick and dirty implementation, since it uses OpenSSL for some features, but we hope it helps to explain how Certificate Transparency works.
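For readers who want to see the idea in code, the following minimal sketch (not the PySCTChecker source itself) shows how the first delivery method – SCTs embedded in the certificate – could be detected using Python’s ssl module and a recent version of the third-party cryptography package. It does not cover the TLS-extension or OCSP-stapling cases, nor does it validate the log signatures.

import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

def embedded_scts(domain: str, port: int = 443):
    """Fetch the server certificate and return (log_id, timestamp) pairs for any embedded SCTs."""
    pem = ssl.get_server_certificate((domain, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    try:
        ext = cert.extensions.get_extension_for_oid(
            ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS)
    except x509.ExtensionNotFound:
        # No embedded SCTs: the server may still offer them via TLS extension or OCSP stapling
        return []
    return [(sct.log_id.hex(), sct.timestamp) for sct in ext.value]

if __name__ == "__main__":
    for log_id, ts in embedded_scts("www.google.com"):
        print("SCT from log %s at %s" % (log_id, ts))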

You can download and check source code from here.

This tool reinforces the set of tools related to Certificate Transparency developed by ElevenPaths:

Innovación y laboratorio
www.elevenpaths.com

Brand Sponsored Data: A creative tool for satisfying customers while meeting tangible customer needs

AI of Things    30 June, 2017




Cell phone data usage has been increasing steadily over the last several years. However, in certain areas of the world, such as Latin America, that usage has exploded. This increase presents a unique opportunity for brands to engage with customers by meeting their growing demand for data.

Figure 1: Mobile data usage in Latin America is rapidly exploding.
In a recent post, we talked about how using Data Rewards as a tool creates a win-win scenario for both companies and customers. In this post, we are going to look at another tool that creates the same scenario: Brand Sponsored Data.
Basically, Brand Sponsored Data gives customers access to a company’s content without them having to worry about data costs. When a company sponsors its app or mobile website, it lets customers browse them without consuming any of their own mobile data.
Like Data Rewards, the success of Brand Sponsored Data as a tool rests on the fact that it meets the needs of consumers. Especially in the LatAm region, the increasing demand for mobile data and the limited access to it present a good opportunity to solve a consumer problem. In this market, most consumers are on prepaid data plans and over 30% of consumers run out of data each month. In fact, users develop data avoidance strategies: on average, one-third of users will only use or download apps when they are connected to wifi – and this percentage is even higher among consumers on prepaid data plans. This means that consumers are less likely to engage with a company’s app or website unless that app is WhatsApp, Facebook, Instagram or similar. Companies whose apps need to sell, offer services and face heavy competition therefore have to invest a lot of money to promote their app, increase downloads and transactions, improve the customer experience, grow monthly active users and more.
brand sponsored data
Figure 2: Customers on prepaid data plans are hesitant to use apps unless connected to wifi.
This is where sponsored data comes in as a new marketing tool, helping companies to drive many of these KPIs simultaneously.
Because sponsored data helps customers save mobile data and lets companies give a direct benefit back to their users, customers tend to engage more often with sponsored apps than with non-sponsored ones. For many users, it is difficult to assess how much data each activity uses. How much data does it take to watch a video, play a mobile game, download an app or complete a shopping transaction? Companies that sponsor their app remove these anxieties and create a worry-free customer experience.
Netshoes, a popular online fashion app in LatAm, is a good example of the multiple benefits of Brand Sponsored Data in action. Netshoes saw that mobile traffic was increasing heavily on both its app and its mobile website, but conversion didn’t follow. So Netshoes started to sponsor both – app and mobile website – and communicated this to its customer base and to all potential new users. After only a short period of time, the results were impressive:
Netshoes started with Brand Sponsored Data in November 2015. Due to the success, Brand Sponsored Data remains a crucial and regular part of their marketing mix every year.
Since then, more companies have invested in sponsored data to support their marketing and sales goals, including:
  • More app usage
  • More time spent within the app
  • More sales conversions/transactions
  • More downloads
  • More video views

To learn more about other showcases and how this tool can help increase customer engagement while also meeting tangible customer needs, visit our website.

Interview: Angela Shen-Hsieh demystifies the “black box” of Big Data

AI of Things    28 June, 2017


Angela Shen-Hsieh is Director of Human Behavior Prediction in Product Innovation at Telefónica. In this role, she leads internal innovation.


Before joining Telefónica in November, Angela worked at IBM Watson, managing the conversation intelligence and data discovery product lines, and was the founder and CEO of several data visualization startups. Angela trained as an architect, receiving her Master of Architecture from Harvard University.

We sat down with Angela to talk about her role at the intersection of Artificial Intelligence, Machine Learning and Big Data.

How did you get started with technologies like AI, Machine Learning and Big Data, especially given your background in architecture?


Actually, I got started through design. I ran several companies that designed user interfaces for data, and that turned into a competency around demystifying the black box of analytics so that decision-makers could act on data with confidence.
One of the problems is that analytics can be mysterious, so we did a lot of work on internal decision-support systems. For example, if you put a number on a dashboard, or hand it to a decision-maker who can’t unpack it, it is hard for them to have confidence in it. They can’t look at it and understand all its implications. Say, for example, that a particular number is red. Why is it red? How long has it been red? Will it still be red if we restrict the data and exclude this project, a country or this product line?
This is a fundamental interactive visualization problem, and solving it is necessary to really understand how people use analytics. That’s where the design aspect comes in. Architects don’t build buildings; we make images of buildings. So architects are experts in visual representation. That is the same approach we took to make data easier for business people to read, and it led to applications that could provide analytics to consumers to help them improve their health or other kinds of behavior.
Figure 1: AI and Machine Learning are key tools for Big Data.

What are some of the biggest obstacles you encounter in using AI and Machine Learning to innovate with Big Data?


The biggest obstacle is Big Data itself. The term “Big Data” has been around for a long time, but that doesn’t mean the data is really as accessible as you might think. The same business intelligence challenges that have always existed are still there: data that isn’t standardized, isn’t harmonized, isn’t clean or isn’t complete. It is hard to figure out and very manual. So Machine Learning and AI offer automation, but in reality there is still a lot of manual work. That is one of the challenges.
Another challenge is that, in the product organization, we have to work out how to monetize Big Data and these kinds of technologies. It is one thing to build a bespoke solution, and quite another to commercialize a product that can be sold to many customers at volume and scale: instead of 50 customers each paying €100,000, how do we build a business with 500,000 customers each spending €100 a month? There are all sorts of challenges beyond just solving the technical machine learning problem of whether the data can tell the story you hope it can. I saw this up close at IBM Watson.
We were trying to take this incredible technology that won the TV quiz show Jeopardy! and build a big business answering customer service questions, searching for the impossible in the medical literature and much more. There were many false starts, a lot of challenges around user expectations and around focus, huge pressure to make money doing all of this, and all while the market was moving to make these natural-language query capabilities commonplace. To commercialize data and AI/ML, you have to solve a broader problem.

What brought you to Telefónica?

I came to Telefónica for two reasons: the data and the innovation process. Although IBM has incredible reach and resources as a company of 450,000 people, there are things Telefónica has that IBM does not. The first is the data.
Basically, these kinds of technologies need a constant diet of data to survive and improve. I was very intrigued by the possibilities and by the breadth and richness of the data Telefónica has at its disposal.
The second is the innovation process at Telefónica. I was very impressed by how well thought out it is: structured, but with plenty of room to experiment and to bring in new ideas and people. Not every large company thinks this way, and it takes a very particular approach and different kinds of people to take things that might remain research tools and turn them into commercial products. That process impressed me, and it is why the area I most enjoy working in is innovation, which I place between more academic research and product development.
Tell us more about your role as Director of Human Behavior Prediction at Telefónica. Can you tell us about any current projects that particularly excite you?
There are many things that excite me! This area started out quite broadly, asking the whole organization for ideas (which is the right way). We are now at a point where, with the fourth platform, we can build products more easily and we need to bring the projects under a common focus.
We keep the good things about our innovation process, but make them more focused so that we can have a greater impact. Our focus in predicting human behavior is what we call the “cognitive customer experience”, that is, how we bring in contextual data to improve interactions with customers across the whole customer engagement lifecycle.
These can be both internal and external things, all with an eye on the next business market that Telefónica can capitalize on around data. What is the future of advertising as we know it, of intelligent agents, of bots, of how purchases and transactions are made, of how customer service is delivered…?
All of these things are going to improve. The intelligence comes from artificial intelligence, and it will come through a better understanding of the customer and their context, which comes through data. So we are focusing on enabling that to happen.

How do you see AI and Machine Learning changing the way we use Big Data?

In simple terms, most traditional analytics have been focused on the past. They look backwards and try to tell you things about the past that might perhaps apply to the future. Because these technologies learn over time and can make correlations and draw insights that aren’t really visible otherwise, we should be able to react faster, be more proactive and predict the future more, and then we can be more prescriptive. As someone who spent decades building dashboards that were mostly reactive, I see this as a major evolutionary shift.
The other thing is that, when we talk about AI and Machine Learning, there is a very fine line between structured and unstructured data, and this is very important. The industry has been trying to find a way to bring those two things together, but they come from very different technological foundations.
Structured and unstructured data do not work well together in traditional business intelligence or enterprise search methods. Content and data do not talk to each other. However, to the end user there is little difference between the answer to a question like “What is the name of Harrison Ford’s wife?” and one like “How tall is Harrison Ford?”
People think of both as information or data and, in fact, most people use those words interchangeably. But in terms of technology, we store those answers very differently (in a database versus a content management system) and we retrieve them depending on how we store the data (query versus search). The goal of using AI and Machine Learning together with Big Data is to bring structured and unstructured data together.
Figure 2: The prospect of bringing structured and unstructured data together could greatly improve data capabilities.

What are the biggest areas of untapped potential for Big Data?

In Machine Learning and AI there are many emerging techniques, such as deep learning and neural networks. On the business side, I think there will be syndication of data through a marketplace or network system. That, or we let Google and Amazon take over the world! In terms of use cases and tangible things we will feel, I am most interested in how these things can help people.
We are in a battle in the attention economy, so that means making things more frictionless, more personal and relevant, and looking at how things can be done with less intervention and less manual connecting of the dots. Right now, as consumers we still have to connect a lot of dots to get end to end through any transaction. How can that involve less friction and be done in a safer, easier way?
I am also interested in the kinds of use cases where Big Data can really help me with something I would struggle to achieve on my own. Things like eating better and doing the right exercise. We are researching other areas, such as applying this to online behavior and mobile phone usage. People talk about how disappointed they are with themselves because they can’t quit Snapchat or Facebook. Tristan Harris, best known for his role in design ethics at Google, has created a “movement” called Time Well Spent.
He talks about how we are compelled because techniques have been developed to make us addicted, and once one app uses such a technique, the other apps have to as well because they are all competing for our scarce attention. So maybe we can use Big Data to turn that around. If we can compel and predict human behavior, we should be able to give that data back to people to help us improve our behavior and other aspects of our lives and ourselves.

The new Spain: redrawing the country using mobility data

AI of Things    22 June, 2017

One of the most exciting fields that the use of data opens up is the study of networks. Our societies, communications, infrastructure, businesses and many other areas of life can be represented as a series of interconnected elements or networks that are constantly fed by data. 

The birth of platforms such as Facebook, Twitter or Instagram has turned social networks into a regular part of citizens’ daily lives, and the data generated by these networks has grown to become a subject of great interest. The analysis of these networks allows us to determine the most relevant people in a network, communities of users, people who are part of multiple groupings, and so on. For example, in Figure 1 you can see an analysis of my Twitter activity.

Figure 1 : Example of my network of ‘followers’ and ‘following’ on Twitter using the Gephi tool, showing (clockwise from ‘música’) Music, Friends and Celebrities, Work Colleagues & Gaming and Internet.

These techniques are not just reserved for social networks. Any network can be subject to this analysis, meaning that we end up with examples that are as varied as a network of interactions between different proteins and a network that represents the exact connections of the neurons in the brain of an insect. With the data available to us at LUCA we can carry out a similar study using live mobility data. Driven by curiosity, we decided to build a map of the regions (of Spain) based on how we move, whilst leaving out the administrative divisions of the autonomous regions or provinces.

Obtaining the data

At LUCA, thanks to the Smart Steps platform, we have all the mobility data we need available in a simple form. For this study, we used data from January 2016 in Spain at both a provincial and a municipal level. From there, we built a network in which the vertices are the provinces or municipalities and the arcs that connect them are given a weight proportional to the number of people who have travelled between the two points (Figure 2).

Figure 2 : Representative network of movement between Madrid and Toledo (500 people moving from Madrid to Toledo, and 200 in the opposite direction).

This visual representation of the network can be made using freely available tools such as Gephi and igraph. After this, we decided to apply a community detection algorithm. These algorithms detect natural groups of nodes (in our case the provinces or municipalities) based on the interconnectivity between them, in such a way that highly connected nodes tend to end up in the same region. We used the “Infomap” algorithm. The interesting thing is that the algorithm itself decides how many regions ought to be generated. With this information we can then visualize the data, gathering it on a map so that the analysis is easier and more natural.
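As a small illustration of this step (with made-up flow numbers rather than the real Smart Steps data), the weighted, directed mobility graph and the Infomap community detection could be sketched with python-igraph like this:

import igraph as ig

# Each tuple: (origin, destination, people moving in that direction) - illustrative values only
flows = [
    ("Madrid", "Toledo", 500),
    ("Toledo", "Madrid", 200),
    ("Madrid", "Guadalajara", 350),
    ("Toledo", "Ciudad Real", 120),
]

# Build a directed graph whose edge weights are the number of travellers
g = ig.Graph.TupleList(((o, d) for o, d, _ in flows), directed=True)
g.es["weight"] = [w for _, _, w in flows]

# Infomap groups strongly interconnected provinces/municipalities into the same region
regions = g.community_infomap(edge_weights="weight")
for idx, members in enumerate(regions):
    print("Region %d: %s" % (idx, [g.vs[m]["name"] for m in members]))

The algorithm returns the number of regions by itself; on the real data these clusters are then coloured on the map as described below.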

Analysis

Firstly, we ran the algorithm on the network of mobility between provinces. The provinces are colored based on the region they were assigned to, and the province in bold in each region is the one with the highest mobility. The results of this calculation are shown below:

Figure 3 : Regions at a provincial level.

The first thing that can be seen is that the country is divided into 7 compact regions with a radial structure (perhaps influenced by the layout of the major highways), that is to say, one central region and then regions that surround it. Except in the case of Extremadura, no mobility region lines up exactly with the known administrative limits, due to unions (Aragón and Cataluña) and separations (Castilla-La Mancha).

The next step is to increase the detail of the map to a far more local level to obtain information about the behavior of the municipalities. The data already obtained allows us to reach this precision without any difficulty. In the same way as in the previous map, we colored the municipalities according to their region and marked the municipality with the highest mobility as the most important.

Figure 4 : Regions at a municipal level

We can draw various conclusions from this map. The first is that many regions are formed around large municipalities, which are on the whole provincial capitals. One can understand the region as the area of influence that these main municipalities have. We can also see that the regions are strongly influenced by the network of highways. The most important municipality in every region is found at the intersection of the main roads (highways and freeways) and this quality helps them to develop and have a growing importance in the area.

It is also interesting to observe that the regions mostly respect the regional boundaries, which indicates that, despite being able to travel to the nearest city, the population usually moves about within its own region. Nevertheless, it is common for a region to be fragmented internally into various communities, mainly due to the existence of several important population centres.

Applications

The knowledge extracted from this analysis can serve many uses:

  • Transport and infrastructure: since the map is obtained using information about mobility, it is evident that this would be the first use. By understanding the distribution of these regions we could re-plan the road network. For example, it would be interesting to have good connections between all the municipalities that are capitals of their region, or between the towns of the region and their “capital”. It would also be interesting to study the boundaries of the regions: by showing us that two regions are separated the algorithm indicates that there is low traffic between them which could suggest that there is poor infrastructure that forces the population to use other routes. We can also use the analysis to study the impact on mobility before and after a new highway is built.
  • Institutional buildings: these maps highlight the regional capitals that could be perfect locations for housing institutional buildings, such as local Police and Internal Revenue offices. This would result in a better service for citizens, a reduced need for travel and a relief of the saturation of these services in the capitals of the provinces or autonomous regions.
  • Healthcare and Emergency Services: within the institutional buildings, we would like to highlight the case of medical services. Due to the need for fast responses, a geographic distribution is more than necessary. Using the above information can help to locate these services not just in the areas around which the populations are based, but in those areas that bring various mobility benefits. This is because the users will not only be closer to their hospital but they will also know the area better, as it will be in the municipality that they are most accustomed to visiting. 
  • Tourism: one can adopt a strategy of advertising the municipalities in the zone of influence of the region, knowing in advance that tourists will probably travel to spend a night in the capital of the region. This would improve the overall income of the region, since tourists would have more sites to visit and would spend more time in the area.
  • Trade: the zone of influence of a municipality tells us which ones will potentially be the beneficiaries of a new business, allowing for a more precise study of its clientele. 

In summary, the analysis of mobility data using networks offers us a new vision of our concept of regions and provinces, and gives a complementary view of the dynamics of our society. Apart from satisfying our curiosity, it helps us to understand the behavior of the population, the regional groupings that emerge from movement, and the reference province or municipality of each area. This allows us to act in a more intelligent way, maximizing the benefit for the population.

What do you see in the maps?
