Finding the next festival headliner using Big Data

AI of Things    15 September, 2017
The music industry is always evolving and, like many other industries, has realised that it ought to be harnessing the power of Big Data. Historically, human efforts have been the driving force behind this sector. A fan would go to local venues to see an unknown band that their friend had recommended. An artist would gig tirelessly to build up their following. A label would painstakingly listen to samples from artists in the search for their next superstar. Now, the music industry is more data-driven than ever. In 2016, music streaming overtook downloads and CD purchases as the leading way in which music is listened to, and companies such as ReverbNation are using this data to uncover the next global superstar.


The homepage of the website features the claim that “Artists launch careers here”, and with household names such as Imagine Dragons and Alabama Shakes starting their careers on this platform, the claim seems credible. ReverbNation is less of an agency and more of a social media platform (when artists join, they can link their social media accounts to the site). The tools used to achieve exposure harness the power of Big Data. For example, the Crowd Review tool sends an artist’s song to a select group of music fans, who then provide feedback. The report for the artist features data-driven analysis ranging from audience retention to commercial potential. ReverbNation also offers promotional tools, targeted fan interaction and website creation.
Figure 1: ReverbNation tools available to artists.
 

In our previous blog post from March, we showed how Big Data could cut out the intermediaries between label and artist in order to help artists financially. Now, ReverbNation is showing how data science can also bring efficiency to the process of finding the next big artist. The team at ReverbNation listens to artist uploads daily, so it still offers ‘human’ suggestions to labels based on what it hears. Additionally, labels and festival organizers alike can submit a request to the site and receive detailed statistics about artists’ music and marketability in order to make the perfect picks for their situation. For example, a rock festival in Paris could find an artist of that genre with a growing following in that region.

Artists have received festival slots at Bonnaroo and Ultra Music Festival
through ReverbNation.

Music fans are evolving too. Now more than ever they want a close connection with their favourite artists as well as instant access to content. Using Big Data, this is easy to achieve. According to data from the Recording Industry Association of America, streaming services such as Spotify, Pandora and ReverbNation accounted for 51% of revenues in the US market in 2016, and their growth shows no sign of stopping. The billions of data points created by the users of such services allow ‘Discover’ features to suggest new music based not only on your listening habits but also on those of your friends. The true power of data science in this area is that the more music people listen to, the more information they create, and the more accurate the suggestions become.

Figure 2: Spotify’s Discover feature.

Of course, the thrill of stumbling across new music by chance will always exist. However, using the data available can bring efficiency to all areas of the music industry. The ‘lucky break’ that artists search for will involve less ‘luck’, and more data science. These processes can also be applied to other industries. For example, an art gallery could base their decision to showcase an artist’s work not on intuition, but on historic data of their performance. Here at LUCA, we are excited to see what Big Data can bring to the arts industries. What changes do you think we will see in the coming years?

 

Don’t miss out on a single post. Subscribe to LUCA Data Speaks.

Telefónica promotes the digital transformation towards ‘Industria Conectada 4.0’

ElevenPaths    15 September, 2017

* This post was translated and originally published here (Spanish) within the framework of the I Congreso de Industria Conectada taking place in Madrid on 21 September. The Congress is organized by the Ministry of Economy, Industry and Competitiveness of Spain and is linked to its Connected Industry 4.0 strategy.


The term Industry 4.0 is used in reference to the fourth industrial revolution. Digitalization is transforming everything at an unparalleled speed. Therefore, all the agents involved (companies, citizens and public administrations) must adapt to this reality in order to be competitive in this new environment. There is no doubt that digitalization is creating uncertainties, which will have to be addressed, but it will also provide new opportunities for economic growth and social well-being, enabling us to move towards a better society.

We have an excellent opportunity to transform our society, institutions and the industry in particular. This reality, together with our capabilities, inspires us to promote this transformation hand in hand with the industrial network of the country, technological companies and the Administration.
At Telefónica we are undergoing a similar process. A process during which, in a short time, we have had to transform our “factory” (the design, distribution and marketing of our services) to better serve our customers’ needs.
As a result, we have acquired technological and innovative capabilities that allow us to compete in this new business ecosystem. Among other things, we have proven experience in projects involving Big Data, Artificial Intelligence, IoT (Internet of Things), Cloud and Security. We are working with other companies and collaborating in their transformation thanks to the adoption of these technologies. We are also contributing to the development, together with the Public Administrations, of cooperation and interoperability platforms. We are fostering innovation in an open ecosystem, through initiatives such as Open Future or Wayra. And, we continue to be at the forefront in the deployment of new-generation networks to meet the high connectivity demands of the industrial sector.
At Telefónica, we want to contribute to this process from the perspective and knowledge of being the leading operator of connectivity services and digital solutions in Spain, a country that leads the deployment of fibre in Europe, which places us in a privileged position for the future deployment of 5G.
We are firmly committed to ‘Industria Conectada 4.0’. We have to take advantage of all the opportunities offered by digitalization and new technologies. We will continue working together with the Spanish Ministry of Economy, Industry and Competitiveness on this initiative, which is key to the future of Spanish industry.






Chairman & CEO, Telefónica S.A.



This post can also be found in Telefónica Public Policy.

Telefónica Business Solutions Reinforces the Security of its Network with Clean Pipes 2.0

ElevenPaths    14 September, 2017

MADRID, 14 September 2017. ElevenPaths, Telefónica’s cyber security unit, today announced the launch of Clean Pipes 2.0, a software-based security service, to prevent known and unknown threats across the Telefónica Business Solutions’ network. The service has been jointly designed by ElevenPaths, Telefónica’s cyber security unit; Telefónica Business Solutions; and Palo Alto Networks® (NYSE: PANW), the next-generation security company.

Delivered as a service to customers via Telefónica Business Solutions’ virtual network infrastructure, Clean Pipes 2.0 (video) is natively embedded into the Company’s platforms to deliver a breach prevention-oriented architecture that provides superior security at a low total cost of ownership.

The Palo Alto Networks Next-Generation Security Platform was built from the ground up for breach prevention, with threat information shared across security functions system-wide, and designed to operate in increasingly mobile, modern networks. By combining network, cloud and endpoint security with advanced threat prevention capabilities in a natively integrated security platform, Palo Alto Networks safely enables all applications and enables Clean Pipes 2.0 to deliver highly automated, preventive protection against cyberthreats at all stages in the attack lifecycle without compromising performance.

Clean Pipes 2.0 is immediately available for Telefónica’s multinational customers across the globe. The service meets different organizations’ needs by providing features such as application-layer firewall, advanced protection against known and unknown threats, web filtering, and intrusion prevention, amongst others.

Its cloud-based Network Function Virtualization (NFV) architecture brings reduced operational costs to organizations in terms of purchasing, deployment and maintenance, without sacrificing effectiveness or performance. At the same time, Telefónica Business Solutions’ customers will benefit from a managed service that eliminates the need for hardware investment, while providing an ever-updated solution. Clean Pipes 2.0 enables unprecedented flexibility for delivery of Security-as-a-Service.

Pedro Pablo Pérez, CEO of ElevenPaths, commented: “Nowadays, the pressure of zero-day attacks and targeted malware exploits requires state-of-the-art analytics and precise execution to improve customers’ network security and cyber-resilience. Together with Palo Alto Networks, we have deployed this advanced security infrastructure within our networks, enabling clients to employ sophisticated next-generation protection and to simplify the provisioning and management of their cyber security needs”.


“With the implementation of this solution, Telefónica Business Solutions further reinforces the security of Internet services for large customers. Additionally, it allows us to enrich the value proposition we offer to the Large Enterprise segment with a product that combines great effectiveness with a low investment”, explains Hugo de los Santos, CEO B2B Products & Services of Telefónica Business Solutions.


“We are delighted to join forces with Telefónica to deliver unparalleled next-generation, breach-prevention security capabilities to its customers”, said Mark Anderson, President of Palo Alto Networks. “Our collaboration gives organizations peace of mind, with complete visibility and control over the usage of their digital services, while knowing that cyberattacks on their most valued data assets and critical control systems can be prevented”.

Cycling in the city? Analysing Cyclist Safety in Madrid with Excel

AI of Things    13 September, 2017
Written by Paloma Recuero de los Santos; original post in Spanish

In this post we will see an example of how we can perform simple descriptive analytics on a dataset without having to resort to specialised tools. Excel is a very widespread tool, but we are often not aware of its great power. Whereas in a previous post we used it for data preparation and filtering tasks, in this example it is used as an analytical tool that allows us to answer the questions we ask about BiciMAD.



In 2014, the BiciMAD public bicycle service became operational. Like many other people who suffer Madrid’s daily traffic jams, we thought it was good news to have another public transport alternative. But we were worried about one issue: safety. Would there be more accidents? Would this information be transparently communicated to citizens?


We decided to investigate this issue, so we searched the Open Data portal of the Madrid City Council for information about accidents related to bicycles. We found the dataset that collects information on “Traffic Accidents involving bicycles”. Although the service started in 2014, we only have information from January 2017 onwards, meaning the volume of data is not very large. However, this information is updated monthly. Although the sample is small and, therefore, the conclusions we obtain will not be very conclusive, we are interested in seeing how we can “talk” to this data to answer the questions that concern us.
Figure 1: Open Data Portal of the Madrid City Council.

Unfortunately, in July 2017 the first fatal BiciMAD accident occurred, at a location whose danger had already been documented by another user (via his blog) a year before. It is very important to detect these “danger spots” to see what measures can be taken to avoid accidents. We are going to analyse this dataset with Excel to find the answers to these questions about the safety of BiciMAD for ourselves.
First, we downloaded the data set in Excel format. This file contains information about traffic accidents where at least one bicycle is involved, indicating day, time, number of victims, district, name of the road and type of accident.
The types of accident considered are as follows:
  • Double Collision: Traffic accident between two vehicles in motion.
  • Multiple collision: Traffic accident between more than two vehicles in motion.
  • Impact with stationary object: Accident between a vehicle in motion with a driver and an immobile object that occupies the road or area away from it, whether a parked vehicle, tree, street lamp, etc.
  • Pedestrian collision (atropello): Accident between a vehicle and a pedestrian occupying the roadway or passing through sidewalks, shelters, walkways or areas of public roads not intended for the movement of vehicles.
  • Rollover: Accident suffered by a vehicle with more than two wheels which, for some reason, loses tyre contact with the road and ends up resting on its side or roof.
  • Motorcycle crash: Accident suffered by a motorcycle, which at some point loses its balance, due to the driver or due to the circumstances of the road.
  • Moped fall: Accident suffered by a moped, which at a certain moment loses its balance, due to the driver or the circumstances of the road.
  • Bicycle fall: Accident suffered by a bicycle, which at a certain moment loses its balance, due to the driver or the circumstances of the road.
The downloaded file looks like this:
Figure 2: File downloaded in Excel format.
Let’s start by creating a table.
It is as simple as choosing the “Insert Table” option in the ‘What do you want to do?’ (Tell Me) box. Excel automatically preselects the entire range and detects that it has headers; we just have to confirm. (We removed the first row beforehand because its information is now redundant.)
Figure 3: We create a table.
We have something like this:
Figure 4: Appearance of the table.
We can add total values. To do this, we tick the “Total Row” box, which can be found under Table Tools / Design.
Figure 5: Insert row of totals.
Then, in the column we need, we open the drop-down and choose the function (sum, average, maximum, etc.). In this case, we choose Count to count the accidents.
As we are going to analyse the distribution of cases by time slot, we will assign a label to each record:
  • Morning: from 7 to 12
  • Afternoon: from 13 to 20
  • Night: from 21 to 6
To do this, we first remove the “DE ” prefix with Find & Replace (from the Home menu).
Figure 6: Using Find & Replace.
Then we extract the hour from the “Hourly section” column. Since the interval is always one hour, we are only interested in the first numeric value in the column. We insert two additional columns and, from the Data menu, select Text to Columns, delimited by “:”. We keep the first column, which contains the hour, and delete the other two.
Now we want to assign a Morning/Afternoon/Night tag, depending on the value of that column. For that, we can use nested IF functions. As they are sometimes a little complex, if you have any problems with them, these videos will be very useful.
Insert a new column, and with the nested IF function assign the corresponding labels:
=IF(OR([@[Hourly section]]>=21,[@[Hourly section]]<=6),"Night",IF([@[Hourly section]]>=13,"Afternoon","Morning"))
Figure 7: Use of the nested IF function.
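For readers who prefer a scripting approach, the same preparation (stripping the “DE ” prefix, extracting the hour and labelling the time band) can be reproduced in a few lines of pandas. This is only a sketch under assumptions: the file name and the English column names (“Hourly section”, “Hour”, “Time band”) are hypothetical, since the real download from the portal uses Spanish headers.

import pandas as pd

# Hypothetical file/column names; adjust to the actual download from the portal
df = pd.read_excel("AccidentesBicicletas_2017.xlsx")

# "Hourly section" looks like "DE 7:00 A 7:59"; keep only the starting hour
df["Hour"] = (df["Hourly section"]
              .str.replace("DE ", "", regex=False)   # the Find & Replace step
              .str.split(":").str[0]                 # the Text to Columns step
              .astype(int))

# Same banding as the nested IF formula
def time_band(hour):
    if hour >= 21 or hour <= 6:
        return "Night"
    if hour >= 13:
        return "Afternoon"
    return "Morning"

df["Time band"] = df["Hour"].apply(time_band)
print(df[["Hourly section", "Hour", "Time band"]].head())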
Once the time bands are labelled, we will insert a pivot table (“tabla dinámica” in the Spanish version of Excel). Pivot tables will allow us to create a simple “dashboard” to analyse the data dynamically.
Before inserting the pivot table, we make sure that the column names are correct, to facilitate the selection of fields in the next phase. Then we select Insert, PivotTable, and choose a new sheet.
Figure 8: Create the pivot table.
Now we have to add the fields that interest us to our pivot table. For example, we add the time band and the month as rows and, as values, Type of Vehicle (since it is always “bicycle”, it lets us count the cases):
Figure 9: We choose the fields.
Figure 10: Incidents per month.
We can begin to see how the number of accidents registered in January (44) almost doubled by the summer months, with a maximum of 80 incidents in June.
Remember that our goal is to create a “dashboard” where we can answer questions such as:
  • What are the most frequent types of accident?
  • When do they occur? Are there more at some times of the year than others? More in certain time slots than others?
  • Where do they happen? Are they more frequent in some districts than in others? Are there “black spots” where accidents accumulate?
1. We start by creating the first visualization. We add the fields:
  • Type of accident (Rows)
  • District (Filter)
  • Hourly section (Columns)
  • Number of incidents (values)
We can see how many accidents of each type occur by time slot or district, or we can filter the ones that interest us most.
Figure 11: Accidents by time slot and district.
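The same cross-tabulation can be sketched with pandas, continuing the hypothetical DataFrame from the previous snippet; “Type of accident”, “District” and “Month” are again assumed column names rather than the real Spanish headers.

# Accidents of each type per time band (rows = type of accident, columns = time band)
pivot = df.pivot_table(index="Type of accident",
                       columns="Time band",
                       values="Hour",      # any always-populated column works for counting
                       aggfunc="count",
                       fill_value=0)
print(pivot)

# Equivalent of the District filter: restrict to one district first
centro = df[df["District"] == "CENTRO"]
print(centro.groupby("Type of accident").size().sort_values(ascending=False))

# And the monthly evolution shown in Figure 10
print(df.groupby("Month").size())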
2. We create a second pivot table on the same sheet.
Remember that our intention is to build a dashboard. For this, when adding the new pivot table from the “accident list…” sheet, instead of “new sheet” we indicate the name of the sheet on which we created the first pivot table and its position (simply by choosing a cell).
In this second table we will analyse the accidents by district. Therefore, we are going to add the following fields:
  • District (Rows)
  • Months (Filter)
  • Number of incidents (values)
We obtain this result:
Figure 12: Incidents by district.
Now, let’s add a visual filter by time slot. We want to create some “buttons” labelled “Morning”, “Afternoon” and “Night”, and place them on each of the pivot tables that we have created.
For this, in the field selector of the pivot table, we right-click the time-band field and choose “Add as Slicer” (“data segmentation” in the Spanish version of Excel) from the context menu.
Figure 13: We add a slicer for the time band.
With this we obtain the “buttons”. Now, to make them look better on the panel, we place the buttons over the pivot table and, using the “Slicer Tools” menu, choose the “3 columns” option to display them horizontally.

Figure 14: Configuration of columns for the “buttons”.

And it looks like this:
Figure 15: Slicer buttons on the table.
Thus, simply by pressing the button that interests us, we can see the data in the table filtered by that specific value. Now we want that filter to take effect on the other table as well. Since we are building a panel, we want all tables and charts to be “synchronized” in terms of time slot.
To do this, in the “Slicer Tools” menu, select “Report Connections” and also tick the first pivot table (PivotTable 8). Note that a slicer can be synchronized even with pivot tables located on other sheets, which allows us to build multi-page reports.
Figure 16: Report connections.
3. Insert pivot charts.
Now that we have the two tables and a slicer, we can insert pivot charts that help us analyse the information in a more visual way. For example, on the first table we insert a pie chart.
To insert the pivot chart, we click on the table that interests us and select “PivotChart” from the PivotTable Tools / Analyze menu.
Figure 17: We insert a pivot chart.
Thus, as we click the “buttons” that filter by time slot, the pivot tables and charts are updated. We can see that, in the mornings, across all districts, the most common type of accident is the double collision.
Figure 18: Type of incident in the morning.
We also see that, at night, the type of incidents diversifies.
Figure 19: Pivot chart of type of incident at night.
Now we insert a second pivot chart that shows the number of cases per district. From the second table we have created, we add a pivot chart as we did in the previous case; this time we choose, for example, a bar chart. As before, we also create a new slicer, this time segmenting by type of accident. With this, we already have a panel in which we can analyse and visualise the data by choosing the filters that interest us most.
We reposition the charts and slicer buttons in whatever way suits us best, and we now have a control panel that allows us to analyse the data with ease. When placing them, keep in mind that the tables cannot overlap one another, so you need to leave each table the maximum space it may occupy. These are the kinds of details that the dedicated data analytics tools on the market optimise for us, as well as producing results that are easier to share and more visually attractive. But for the purpose of this example, Excel is more than enough.
Figure 20: Example of the panel.
Using the two sets of buttons, the charts and pivot tables are updated automatically. Now we can answer the questions we raised at the beginning of this post.
  • Which accident is the most frequent?
  • At what times do the most accidents occur?
  • In which district do more accidents occur? Is there a specific street?
  • Does the number of accidents vary according to the season of the year?
We will answer these questions, but then I invite you to raise your own.
Which accident is the most frequent? At what times do the most accidents occur?
Selecting all accident types and all time slots, we can see that the most frequent are falls and double collisions, and that the afternoon is when the most incidents occur.
In the mornings, the number of falls is slightly lower than that of double collisions, but falls increase as the day progresses and are the most common type at night. At night, the total number of accidents decreases, but collisions with fixed objects increase.
In which district do more accidents occur? Any specific street?
If we look at the second pivot table and its associated chart (we have changed it to vertical bars, since the chart type can be modified very easily), we see, without any prior filtering by time or district, that the highest number of incidents is concentrated in the Centro district, followed at some distance by the districts of Retiro and Arganzuela.
Figure 21: Number of accidents per district.
Interestingly, if we analyse this same graph according to the time slot, we see how the pattern of incidents per district changes. The Centro district is always in first place, but the rest of the positions vary. This surely has to do with the fact that in some districts there is a greater concentration of leisure facilities, or commercial areas, offices etc.
For example, as a whole, the districts with the highest number of accidents are Centro, Arganzuela and Chamberí. However, in the afternoons, the districts with the highest number of incidents are, in this order, Centro, Retiro, Chamberí and Carabanchel. And in the mornings, they are Centro, Moncloa and Arganzuela.
Does the number of accidents vary according to the season of the year?
In the first pivot table we created, we already saw the answer to this question. The number of incidents has gradually increased from 30 in January to a maximum of 58, registered in June. It is logical that, as weather conditions improve, the number of incidents increases, since good weather encourages more people to use the service.
We have left one question unanswered. A question that may be the most important, since it can help prevent accidents.
Can we detect any “black spots”? Is there a street where there are more accidents?
To answer this question, we go back to the initial “Bicycle Accident List” table. Next to the “Address” column we insert a new column with a function that tells us how many times a value is repeated in a column. The formula we will use is COUNTIF (CONTAR.SI in the Spanish version):

=COUNTIF([Address],[@Address])
Figure 22: Using COUNTIF to count repeated records.
Then, in the result column, we can filter for the streets with the highest number of occurrences. The highest frequency, 11, corresponds to Calle Alcalá. However, when checking the street-number column, we see that none of the numbers coincide, so the data does not point to any specific spot of greater danger.
The same happens if we filter by occurrence numbers “6” and “5”. We find the streets Bravo Murillo and Paseo de la Castellana, which, after Calle Alcalá, are the second and third longest streets in Madrid.
Figure 23: Incidents occurred on Calle Alcalá.
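The COUNTIF step also has a direct pandas equivalent: count how often each address appears and then check whether any repeated address also repeats the street number. Again, this is a sketch on the same hypothetical DataFrame, with “Address” and “Number” as assumed column names.

# How many times each street appears in the accident list (the COUNTIF column)
occurrences = df["Address"].value_counts()
print(occurrences.head(10))          # in the post, Calle Alcalá tops the list with 11

# A true "black spot" would need the same street AND street number to repeat
by_location = df.groupby(["Address", "Number"]).size().sort_values(ascending=False)
print(by_location[by_location > 1])  # empty if no exact location repeats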


In conclusion:
It will be necessary to follow the incidents recorded for BiciMAD carefully in order to draw conclusions from this data that may lead to improvements in the service. Today, the data collected does not point to any specific black spot with a higher concentration of accidents. However, it would be advisable to investigate the locations reported by the users themselves since, although incidents have not yet been recorded at those points, they may end up happening in the future.


You can also follow us on Twitter, YouTube and LinkedIn

ProTrain Project: Big Data to optimise regional traffic in Germany

AI of Things    8 September, 2017
Transport planning within major cities is becoming a worldwide issue as more and more people move to cities like Berlin, which in turn puts urban transport under growing pressure. In order to optimise transport planning, city councils and transport operators need precise data. Telefónica NEXT, a subsidiary of Telefónica Deutschland, has joined forces with eight project partners on the ProTrain project, which aims to optimise local public transport in Berlin-Brandenburg.

Figure 1: ProTrain aims to bring a more effective public transport system

ProTrain is a three-year project funded by the German Federal Ministry of Transport and Digital Infrastructure. The number of public transport users is increasing yearly. The combination of rising population numbers and increasing commuter traffic has brought regional rail capacity to its limits, especially during peak times. As well as increased pressure on transport services, passengers have rising expectations: they want a more tailored service with extensive information, comfort and individual services.
Working alongside the eight project partners involved with ProTrain, we aim to allocate rail passengers more effectively on the basis of big data analyses in order to make optimum use of available rail transport capacity. The information is intended to enable transport operators to determine the actual as well as the expected demand in more detail so that they can respond proactively. Leading on from this, travellers are to receive information on alternative connections or railcars with vacant seats in real time.

Figure 2: ProTrain uses the potential of large volumes of existing data to perform transport analyses.

ProTrain uses the potential of large volumes of existing data to perform transport analyses. Telefónica NEXT manages the analysis of anonymised mobile network data. This data is generated during normal operations when mobile phones communicate with mobile cell sites while using the internet or making calls. The data is then anonymised via a three-level TÜV-certified process which removes all personal data. This data is then combined with further datasets from the project partners, which comprise historic and current data on passenger numbers per railcar, information on rail operations, and the effects of weather and current events. ProTrain will analyse and predict demand in accordance with these factors.

Smart analysis of large volumes of data should make it possible to provide travellers with information in the app or directly on the platform, enabling users to avoid full trains, for example. The aim of this project is to optimise the planning and handling of passenger volumes in local and regional transport, especially at peak times.
Figure 3: The data is anonymised via a three-level TÜV-certified process.
At LUCA we hope to see more projects that aim to benefit society; making people’s daily commute easier is yet another impressive benefit of the use of Big Data.

Don’t miss out on a single post. Subscribe to LUCA Data Speaks.

How can Brand Sponsored Data be used as a marketing tool for video advertising?

Paloma Recuero de los Santos    7 September, 2017

(Content written by Cambria HAYASHINO)


Connecting with customers while they are on the go is crucial for every successful marketing and sales campaign. More and more companies have realized that the mobile screen is a key element in every media plan for landing their marketing messages.

However, as we have seen in previous blogs, many mobile phone users in LATAM are on restricted mobile data plans. This leads to customers who carefully monitor data usage and usually avoid using most apps or even websites without being connected to Wi-Fi. This fact also impacts marketing campaigns, especially video campaigns. Users are less likely to watch a video-ad when on the go compared to when they are on Wi-Fi. Brand Sponsored Data can help to overcome this issue.

Showcase: Brand Sponsored Data as a tool to improve the video-advertising experience

Movistar Colombia planned a big mobile video campaign promoting their new service Movistar Música. In order to create a positive brand experience, they decided to use Brand Sponsored Data to support this goal.

So, Movistar decided to run their whole mobile video campaign in sponsored mode, to encourage more customers to click on the video immediately (whether on Wi-Fi or not) and to reach more users with their marketing message.

 

A seamless customer experience

With the help of a targeted messaging campaign, customers received a direct (and free) SMS informing them about the new music service. An embedded link took them to a dedicated landing page, where they were invited to watch a video about the service. At the end of the video, customers could sign up for the service if they were interested. The entire customer journey was free for users in terms of data costs.

Figure 1: Process description.

Impressive results

In order to see the impact of Brand Sponsored Data, Movistar set up a control group which received the same message, but with a non-sponsored video. The uplifts of the sponsored vs. the non-sponsored group were impressive:

Sponsored users showed:

  • 2.3x higher Click-Rate
  • 1.9x higher video starting rate
  • 3x more video completions (65%! outstream format)
  • 4x higher CTR

Furthermore, digging into the detail, Movistar found that not only did more users respond to the sponsored message, but that these were especially users connected to the cellular network, i.e. on the go, whereas the majority of the non-sponsored group was connected to Wi-Fi when clicking on the video. So Brand Sponsored Data encouraged more users to click on the link immediately and watch the video.

Figure 2: Results.


Conclusions


With this sponsored video approach, Movistar could not only reach more customers but also achieve more video completions, making sure that more users received the full marketing message.

This campaign’s success demonstrated that customers showed a higher interest in watching a video ad once the obstacle of data costs was removed. So, with a single and very easy adaptation of the video campaign, the results could be improved significantly for both customers and the video advertiser.
To learn more about Brand Sponsored Data and to see other case studies, visit our website.

Don’t miss out on a single post. Subscribe to LUCA Data Speaks.

How can Big Data help to improve the financial scoring process?

AI of Things    4 September, 2017
Content written by Daniel Torres Laguardia, Head of Scoring (data product)
Michael wants to apply for financing for his recent TV purchase, and wait… he needs to be checked first. Only after that may he get his financing over a certain number of months with a specific monthly installment. The check is not like being checked at the airport; it is quick and transparent for Michael, but waiting for the outcome is somewhat dizzying…

Scoring someone sounds like a procedure that you need to go through and that you don’t want to know about. However, it facilitates access to financing: companies that finance end users make informed decisions to lower the risk and keep these credits accessible.
In some cases or markets, these are negative scores, which can lead to a credit being denied or granted on harder conditions. In other countries there are positive scores, and end users can check their scores and improve them. And if, in the past, a fraudster took over someone’s identity and made some purchases, the real person’s score might go down, which might affect important future transactions for the affected person.

Figure 1: Mobile usage data can improve financial scoring processes.

How you use your mobile phone subscription can help in these situations, or even directly help get a credit approved by the financing entity. There are concepts in the industry showing how Big Data and business intelligence can help individuals get access to small credits, for example “Big Data, Small Credit”. It is a reality that there are new data sources which can be analysed to make the business decision to grant a credit, and telco data can play its role, always with user consent.

Don’t miss out on a single post. Subscribe to LUCA Data Speaks.

How is Big Data influencing the recruitment process?

AI of Things    29 August, 2017
Big Data has become a phenomenon affecting almost every business and sector around the globe. From financial services to optimising city services and even monitoring the weather, Big Data is invading all sectors, and one of the most recent is recruitment.

Figure 1: Big Data can be used to speed up the recruitment process. 

With the rise of platforms like LinkedIn, which now has more than 238 million members, it is no wonder recruitment experts are starting to feel inundated with potential candidates. This expanse of data has led to even more time being wasted during the traditional hiring process.
Big Data can allow recruiters to find information about candidates to ensure they are suitable for a role in technology or marketing, for example. It is possible to look at a candidate’s publicly available source code, their LinkedIn profile and other social media channels, the websites they frequent and even the way they talk about the industry they are applying to work in. Big Data can then make the hiring process faster due to the speed at which information can be gathered and evaluated. The average hiring process takes between 25 and 45 days, varying with the seniority of the position. This allows recruiters to connect with top candidates more rapidly and, as a result, less time is wasted pursuing the wrong people.
Big Data gives HR managers the possibility of looking at candidates from several angles: instead of basing everything on a CV, we can find a wealth of information about the candidate on the Internet. Candidates can demonstrate their knowledge and expertise on social media channels like Twitter and LinkedIn, as well as through thought leadership and industry insights.
Figure 2: Online footprints can give recruiters an idea about potential candidates.
Another trend in today’s recruitment process is that of company culture; this normally relates to how well a potential candidate fits the ethos and branding of the company. The power of social media and personal blogs means that you can get to know a candidate’s personality before the interview. This can allow you to skip straight to candidates who are a good fit for your organizational culture. Big Data looks at things in a very black-and-white way and removes much of the personal bias that can creep into the recruitment process, and we all know how much hiring still depends on personal criteria.
To conclude, it is clear that Big Data has given recruiters access to an increased volume of information which can be used throughout the lifecycle of a position within a company. These various sources of data will put companies in a better position to make smarter and more profitable recruitment decisions that will make a difference to the staff turnover rate.

What do Big Data, Algorithms and Netflix have in common?

AI of Things    22 August, 2017
Netflix has become a worldwide phenomenon with over 98 million customers streaming worldwide. The data generated by that many users can be directly reinvested in the service they provide, which ultimately makes users worldwide happier with the service. As Netflix is an internet company, it can get to know its customers even better than a traditional television network. Big Data is at the core of Netflix’s success; let’s analyse how.

Figure 1: Netflix has access to a wealth of data from its extensive customer base.

The main algorithm that Netflix has at its disposal is the “recommendation algorithm“. Netflix starts getting to know its customers from the get-go: new users are asked to rate their interest in movie genres and to rate any movies they have already seen. Doing this up front allows Netflix to incorporate the user’s preferences into the service, which in turn increases engagement.
The following example highlights the importance of Big Data and how Netflix has been able to use it to its advantage. Before giving the go-ahead for House of Cards, Netflix already knew that, firstly, the British version of House of Cards had been widely watched. Secondly, those who watched the British version of House of Cards also watched Kevin Spacey films and/or films directed by David Fincher. This combination of factors carried a lot of weight in Netflix’s decision to make the $100 million investment in creating a U.S. version of House of Cards. Jonathan Friedland, Netflix Chief Communications Officer, says: “Because we have a direct relationship with consumers, we know what people like to watch and that helps us understand how big the interest is going to be for a given show. It gave us some confidence that we could find an audience for a show like House of Cards.” Big Data had the power to give Netflix factual information that allowed them to make strategic and well-informed business decisions.
Another interesting analysis that Netflix can carry out is the completion rate of a series. Netflix could ask itself how many people started watching Daredevil and continued watching the rest of the series. This can then lead to further questions: where was the common cut-off point for users? What did the 30% of users who stopped do instead? How big a time gap was there between when consumers watched one episode and when they watched the next? This can give Netflix a good idea of the overall engagement with the show.
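As a purely illustrative sketch (the viewing log and column names below are hypothetical, not Netflix data or Netflix’s actual method), this kind of completion-rate and drop-off analysis could look like the following in pandas:

import pandas as pd

# Hypothetical viewing log: one row per episode a user started, for a given series
views = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3],
    "episode": [1, 2, 3, 1, 2, 1],
})
n_episodes = 3

starters = views.loc[views["episode"] == 1, "user_id"].nunique()
finishers = views.loc[views["episode"] == n_episodes, "user_id"].nunique()
print(f"Completion rate: {finishers / starters:.0%}")

# Common cut-off point: how many distinct viewers reached each episode
reach = views.groupby("episode")["user_id"].nunique()
print(reach)  # the biggest drop between consecutive episodes marks the cut-off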
Although the success of every show on Netflix was not predictable, what analytics and data can do is give you insight so you can run a better business and offer a superior product. People with data have an advantage over those who run on intuition or “what feels right”. Of course at LUCA we stand behind this mantra and hope to collaborate with businesses who want to start making data-driven decisions.

Quantum computing in the future of IoT

Cascajo Sastre María    9 August, 2017

Connected devices, the people who use them, and the places that harbour them are growing at extraordinary rates. In order to maintain this current growth rate, we must look to the future. The scalability of our current digital architecture is not enough. Needs far surpass possibilities. The world needs more computation, more calculating power. This is the foundation needed to be able to work with the incredible number of connections and the massive agglomeration of data that IoT is facing. Fortunately, we have already found the solution and are starting to explore it.

Quantum computing, the necessary change

When we speak about quantum computing, a kind of “mysticism” hangs over the topic of conversation. For now, quantum computing is not well known beyond specific circles. However, thanks to quantum properties, computing is reaching solutions that were impossible until now. Quantum computing is a paradigm shift and a change of architecture in the construction of digital environments and hardware. In “traditional” computing, capacity is measured in bits, information units that can take two possible values: 0 and 1. In quantum computing, however, the information units, known as qubits, can be 0, 1, or a superposition of both at the same time. Without going into the details of how it works, qubits dramatically increase calculating and computational capacity.
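In standard quantum-computing notation (our addition, not from the original post), that superposition is usually written as

\[ |\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle, \qquad \alpha,\beta \in \mathbb{C}, \quad |\alpha|^2 + |\beta|^2 = 1, \]

and a register of n qubits can hold a superposition over all 2^n classical bit strings at once, which is where the dramatic jump in computational capacity comes from.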

“There are endless problems that cannot be solved in terms of human time [with current computing],” said Serguei Beloussov, CEO of Acronis at the 4th International Quantum Computing Congress in Moscow last July. “New materials, engineering problems, artificial intelligence… thanks to quantum computing, these problems can be solved more quickly, which is fascinating.” All this effort plays a fundamental role in another regard, namely the possibility of reducing computing devices to nanometric levels, offering more computing power and smaller sizes, both on exponential scales. This also results in another very important fact: lower energy consumption. These three basics are essential in building a connected world; they are the most important aspects to give the Internet of Things value. And therefore, the physical properties at the quantum level can solve a scalability problem that seemed insurmountable. The IoT can continue to evolve!

Quantum IoT, the future that’s just around the corner

How will this technology affect the evolution of the Internet of Things? Artificial intelligence and Big Data processing are issues inherent to the nature of the IoT. In many cases, they pose a limit, a barrier to overcome. So what if we can overcome it? “To solve problems of the neural networks  used in machine learning, you have to be able to mathematically optimize certain functions with a huge amount of data,” explained John Martinis, director of Google’s Quantum Computation Laboratory, at the congress. “With quantum computing, we want to explore a larger number of parameters with which we can deal with this type of problem more efficiently in order to find better solutions. These problems are perfect for quantum computing.” Thanks to this paradigm shift, we can now process larger amounts of data faster and more efficiently. The result is expanded possibilities of connected devices, the creation of  new devices and nodes of information processing, as well as better information transfer.

Another interesting example of “quantum IoT” involves security. Quantum mechanics has a series of properties that, if used properly, make it possible to create a virtually inviolable communication environment. Theoretically, thanks to these properties (quantum entanglement, in particular), you can create completely instantaneous communication without any means of transmission. This means creating a 100% secure method of communication. Another important aspect is quantum encryption, something that is already being put into practice. While encrypted messages can be violated in a considerable but feasible amount of time, by means of supercomputing, quantum encryption and the computation associated with decrypting such messages make this encryption impossible to break. “The idea of transmitting simple quantum objects (such as a photon) like signals in the classical sense means that no one can steal or destroy information. This is based on a host of quantum principles of transmission”, explained Alex Fedorov, a young doctoral student at the Russian Center for Quantum Computing, at the conference. “This is used in encryption or BlockChain, which allows us to ensure the soundness and permanence of the information since you cannot modify or capture it without that action being recorded.”

Quantum encryption is a discipline that is currently booming since it allows information to be secured beyond what we could have imagined. As we know, in a world in which cyberattacks are growing, it is extremely important to safeguard our information. Thanks to quantum computing, we can build smaller, more efficient and more secure devices. But that’s not all. As we said above, smart objects will be even more intelligent. Smart Cities will reach levels never imagined. Communication will be even faster, and we will achieve even more efficient energy management. Of course, there is still a great deal of work to do. “The main problem for quantum computing is decoherence,” Martinis explained. This means that the creation of quantum computers is still limited to a certain number of qubits. “What we have shown this morning,” Beloussov explained, referring to one of the discoveries announced during the congress, “is a quantum computer with fifty-one real qubits. Until now only ten qubits had been achieved.” However, for the time being, about a hundred qubits or more are needed before these processors can be implemented in everyday devices. “When we have them, we will see direct applications like autonomous cars, wearables and Smart City services that operate thanks to quantum computing.”
