Macro data, macro responsibility

Álvaro Alegría  12 May, 2019

On 11 April 1945, the day before his death, Franklin D. Roosevelt wrote into the draft of his Jefferson Day address a phrase that would remain forever in history: “great power involves great responsibility”.

As strange as it may sound, many people (including the youth of today) associate this quote not with the American president but with another character from the same country: Uncle Ben, Spiderman’s uncle, who uses a version of it (“with great power comes great responsibility”) in one of the films of the saga. This anecdote, simple as it is, illustrates that people do not always know how things really are, nor how they reached us. Hence another famous saying: “I am responsible for what I say, not for what you understand.”

In the business environment, however, maintaining such an attitude would be a serious mistake, because it is essential for companies to ensure that the messages they transmit reach their recipients clearly and without delay. And when we talk about messages, we mean not only explicit communications, but also the image companies project through their actions.

One area that has recently taken on special relevance is the way companies make use of big data. Society is beginning to understand the power that data confers and, consequently, is beginning to demand accountability in equal measure.

As a recent example, recall the scandal Facebook had to face over the misuse of user data by Cambridge Analytica. For those unfamiliar with the case: in the spring of 2018 it was revealed that users of the social network had received personalized content in their feeds aimed at influencing their political position. This content was tailored to be as effective as possible, based on the knowledge the social network held about each of its users. The scandal forced Facebook’s founder to appear before both the United States Senate and the European Parliament.

As a consequence of this and other scandals, society is demanding limits on the enormous power that data confers on large companies. A good example is the restriction established in the European data protection regulation (the GDPR) on decisions based solely on automated processing, that is, on individuals being subject to decisions taken entirely by an algorithm. The example most often used to illustrate this kind of decision is the assessment of an individual’s financial capacity when applying for a loan. If the bank denies the request based solely on an automated decision, the citizen has the right to express their point of view, to challenge the decision and even to request human intervention in the process. This is a very important safeguard because, without it, individual injustices based on collective data could occur: let us not forget that the decision made by the algorithm is not based solely on our own information, but also on that of other people with profiles similar to ours.
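As a rough illustration of how such a safeguard might look in practice, here is a minimal sketch in Python. It is entirely hypothetical (the threshold, field names and workflow are invented for illustration, not taken from the GDPR text or any real lender): the idea is simply that a purely automated denial is never final, and a contested case is routed to a human reviewer.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.7  # hypothetical score cut-off for automatic approval


@dataclass
class LoanApplication:
    applicant_id: str
    score: float                   # output of an automated credit-scoring model
    review_requested: bool = False  # applicant exercised their right to contest


def decide(application: LoanApplication) -> str:
    """Automated decision with a human-in-the-loop escape hatch."""
    if application.score >= APPROVAL_THRESHOLD:
        return "approved"
    # A purely automated denial must remain contestable:
    # on request, the case is escalated to a person.
    if application.review_requested:
        return "escalated_to_human_reviewer"
    return "denied_pending_possible_review"


print(decide(LoanApplication("A-123", score=0.55, review_requested=True)))
# -> escalated_to_human_reviewer
```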

I remember a very interesting conversation on this subject some time ago, in which my interlocutor argued that, thanks to algorithms, the decisions made today are fairer, because they eliminate the danger of being influenced by the prejudices we humans sometimes have. Basically, he argued that algorithms, unlike people, could not be accused of discriminating. Sometimes this is true, and sometimes it is not.

Algorithms discriminate. Of course they do. In fact, discriminating, in the statistical sense of separating cases, is one of the main practical applications of big data. The difference is that algorithms discriminate in an objective way (based on data), while humans do so in a subjective way (based on experiences, tastes or beliefs). From this it is sometimes argued that algorithms cannot be blamed for their decisions, because they are “pure”. However, the matter is more complicated. In my opinion, the only real way to avoid discrimination based on certain critical attributes (gender, race, etc.) is to remove this information completely from the data sets the algorithms are trained on and make decisions with. Although, in reality, even that is not enough, since many of these attributes can be inferred from combinations of other data.
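A toy demonstration of that last point, using entirely synthetic data: even after the protected column is removed, an apparently innocent feature that happens to correlate with it (a postal code, say) can reconstruct it with high accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: a protected attribute and an innocuous-looking proxy
# feature (think: postal code) that agrees with it 90% of the time.
protected = rng.integers(0, 2, size=n)                            # binary group label
proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)   # correlated proxy

# Even with the protected column "dropped" from the training set, the
# simplest possible classifier — just reading off the proxy — recovers it.
recovered = proxy
accuracy = (recovered == protected).mean()
print(f"Protected attribute recovered from proxy alone: {accuracy:.0%}")  # ~90%
```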

Another area that demands special attention is the digital divide, which, in its most extreme cases, leads to digital poverty. The digital divide refers to the inequalities that arise from differences in access to technology, whether for economic reasons (lack of resources to acquire devices or connect to the network), for reasons of knowledge (for example, elderly people or those without the training needed to use a given technology) or even for reasons of personal attitude (people who voluntarily choose not to live connected).

Figure 2. Companies must work to ensure the responsible use of data

Economic actors and public administrations have a moral duty to combat the digital divide. The creation of analogue ghettos, whose members enjoy fewer benefits or opportunities because they are less attractive to companies, or because it is harder for administrations to deliver public services to them, should be avoided at all costs. In the same way, mechanisms must be provided so that all those who, as digital conscientious objectors, choose not to be digital individuals can still lead a dignified life. The tricky part is how to achieve these objectives without holding back the progress of the rest of society. An example showing that these are not future issues but entirely current ones is the debate under way in Sweden about eliminating physical cash in favour of digital means of payment: voices there have begun to highlight the significant harm this measure would cause certain groups.

For everything mentioned so far, and for many other examples that do not fit into a single article, we as a society do the right thing by demanding that those who hold the power of big data honour their responsibility to us.

Telefónica, as a digital leader in Spain and Latin America, has taken good note of this. The clearest evidence is the push, led by the company’s chairman, for the creation of a new digital pact. In the manifesto published in favour of this pact, the company states that “a digitalization focused on people must ensure that citizens are its main beneficiaries and feel in control.”

The mere fact of having published a manifesto on these issues highlights the Telefónica group’s commitment. But this commitment is not limited to making proposals and inviting change; the company leads by example, through concrete measures such as its dedicated “Big Data 4 Social Good” area, led by Pedro Antonio de Alarcón, whose objective is to use Telefónica’s internal data, together with other external data, to return the value of that data to the world, thereby contributing to the UN’s Sustainable Development Goals for 2030. Likewise, the Digital Transformation area of Fundación Telefónica, directed by Lucila Ballarino, seeks to combine technology with social action and has its own Big Data unit, which aims to deliver innovative projects with high social impact, managed in a data-driven way to maximize the efficiency of its processes and increase its results.

Telefónica is an important player, and its drive is, without a doubt, a call to action for the rest of the economic, political and social actors. Let us all make the most of this power together, but do so responsibly.

Don’t miss out on a single post. Subscribe to LUCA Data Speaks.

You can also follow us on Twitter, YouTube and LinkedIn.
