Success Story: Achieving an Audience-Based Strategy

AI of Things    26 May, 2020

In our success story today, we tell you about one of our key projects using Big Data to improve communication between a business and its audience in the out-of-home (OOH) advertising sector. This time, we worked with Exterion Media, transforming their digital strategy with the latest technological trends, such as Big Data and IoT, to better identify their target audience and design high-impact campaigns on the London Underground network.

Investment in OOH is expected to grow to £1 billion by 2021, but current market metrics did not allow Exterion Media to profile audiences in offline environments as well as it could online. To secure that expected growth, outdoor advertising companies are looking for ways to embrace the latest technological trends as advertising shifts towards a more audience-focused model.

Combining more traditional sources of information (panels and surveys) with data generated by mobile devices offers a unique opportunity for companies seeking to better understand their users, who in 90% of cases have a mobile phone at their fingertips 24 hours a day. The anonymized and aggregated data obtained through our Crowd Analytics platform provides a better understanding of the target audience.

Our team of Data Scientists and Data Engineers in the UK uses the mobility analysis platform to process and analyse over 4 million mobile data points every day. Through this analysis of extrapolated data, insights are gained into the mobility patterns of the London Underground audience, based on the most frequented locations and peak times. This information is highly relevant for companies in the retail sector looking for the ideal places and moments to capture their audience.

Mick Ridley, director of data strategy at Exterion Media, says:

“After this project we can speak with confidence about how we capture audiences through advertising, and offer our clients the ability to better define their target audience.”


Exterion Media now knows how audiences move and has access to Big Data analytics tools that allow it to improve its sales pitch to clients. Thanks to Big Data, we can transform information into value and enable data-driven decisions.

https://youtu.be/vcFofTVmi9o

To stay up to date with LUCA, visit our Webpage, subscribe to LUCA Data Speaks and follow us on Twitter, LinkedIn and YouTube.

ElevenPaths has achieved AWS Security Competency status

ElevenPaths    26 May, 2020

Telefónica Tech’s cybersecurity company has demonstrated deep technical and consulting expertise in helping large enterprises adopt, develop and deploy complex cloud security projects that protect their AWS environments, establishing and maintaining a suitable security posture in the cloud.

ElevenPaths, the Cyber Security Company part of Telefónica Tech, announced today that it has achieved Amazon Web Services (AWS) Security Competency status. This designation recognizes that ElevenPaths has demonstrated a strong overall AWS practice and deep expertise that helps customers achieve their cloud security goals.

AWS enables scalable, flexible and cost-effective solutions for customers ranging from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify AWS Partner Network (APN) Consulting and Technology Partners with deep industry experience and expertise.

AWS Security Competency Partners have demonstrated success in building products and solutions on AWS to support customers in multiple areas, including infrastructure security, policy management, identity management, security monitoring, vulnerability management, and data protection. Achieving the AWS Security Competency differentiates ElevenPaths as an APN member that provides specialized consulting services designed to help enterprises adopt, develop and deploy complex security projects on AWS. To receive the designation, APN Partners must possess deep AWS expertise and deliver solutions seamlessly on AWS. As an AWS Security Consulting Partner, ElevenPaths helps large enterprises adopt, develop, and deploy complex cloud security projects.

Moreover, Telefónica Tech has a strategic collaboration with Amazon Web Services to enable an easier journey to the cloud for enterprise customers. Telefónica includes AWS in its cloud services offering for the B2B market and has teams of trained and certified AWS specialists. It has also created two Cloud Centers of Excellence, in Spain and Brazil, that provide professional and managed services to help customers on their path to adopting the public cloud, and these will be launched in the remaining countries of the Telefónica Hispam region. In the last year, dozens of Telefónica professionals in Spain, Brazil and several Hispam countries have been trained and specialized in AWS cloud technologies.

“We are very proud to be recognized by AWS. It proves that we are going in the right direction and encourages us to continue working to help our customers enhance their cloud security posture and therefore reduce their risk exposure in their digital transformation”, said Alberto Sempere, Director of Product and Go-to-market at ElevenPaths. “Our Cloud Security experts are fully skilled to design, deploy and manage innovative AWS cloud-native security features, helping our customers move critical workloads securely to the public cloud while maintaining compliance and governance.”

ElevenPaths offers an integrated, end-to-end approach to cloud security, guiding and accompanying its customers throughout the whole secure cloud adoption process. With the clear ambition of being the benchmark cloud security service provider in its markets, the company has spent the last two years developing a complete value proposition, internally transforming its technology, processes and people through the training and certification of cybersecurity professionals in AWS architectures and the specialization of certified AWS security experts in Spain and Brazil. This proposition allows it to respond to the new challenges arising from the paradigm shift of cloud adoption. It includes Professional Cloud Security Services to assist customers in designing a secure AWS environment, following security best practices for the AWS architecture and designing the cloud security platform that best meets their needs, combining native AWS services and advanced ISV technology. The proposal also includes the ElevenPaths Managed Security Service for the cloud (Cloud MSS), which manages the security of clients’ AWS environments from the ElevenPaths SOC, providing complete visibility of cloud assets, network security and security posture, identifying inherent risks and detecting cyber-attacks and security incidents, while taking into account compliance requirements as well as customer governance standards.

As an AWS Security Consulting Partner, ElevenPaths, with its cybersecurity professionals certified as AWS Security Specialists, is well qualified to guide customers through all phases of security project development: design, deployment, integration of native AWS services, and maintenance of the AWS infrastructure, customer assets, applications and the tools used to protect them adequately. This recognition encourages ElevenPaths to continue its strategy and to constantly improve and evolve its capabilities to anticipate and respond to its customers’ current and future challenges in the safe adoption of AWS.


Full press release:

Bestiary of a Poorly Managed Memory (IV)

David García    25 May, 2020

If we must choose a particularly damaging vulnerability, it would most likely be arbitrary code execution, and even more so if it can be exploited remotely. In the first blog entry we introduced the issues that can be caused by a poorly managed memory. The second one was about double free, and the third one was focused on dangling pointers and memory leaks. Let’s close this set of blog posts with the use of uninitialized memory and the conclusions.

Use of Uninitialized Memory

For efficiency purposes, when we call ‘malloc’ or use the ‘new’ operator in C++, the memory area allocated is not initialized. What does this mean? That it doesn’t contain a default value, but data that will seem random to us and doesn’t make sense in the context of our process. Let’s see it:

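The original listing appears as an image in the post and is not reproduced here. A minimal C sketch of the same idea (variable names, buffer size constant and the printed format are our own, for illustration) could look like this:

#include <stdio.h>
#include <stdlib.h>

#define N 10000

int main(void)
{
    /* First allocation: fill it with random integers and free it. */
    int *p = malloc(N * sizeof(int));
    if (p == NULL)
        return 1;
    for (int i = 0; i < N; i++)
        p[i] = rand();
    free(p);

    /* Second allocation of the same size: the allocator may hand us back
       the very same block, still holding the old data. */
    int *q = malloc(N * sizeof(int));
    if (q == NULL)
        return 1;
    for (int i = 0; i < 5; i++)
        printf("q[%d] = %d\n", i, q[i]); /* reads whatever was left there */
    free(q);
    return 0;
}
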
We get a block of 10,000 integers, fill it with random integers and free it up. According to the C library standard, the memory returned by ‘malloc’ is not required to be initialized, but on certain systems (particularly modern ones) it is likely to arrive initialized to zero, that is, with the whole reserved area full of zeros.

In the program, we make use of a reserved memory area and then free it up. But when we request memory of this kind again, the system may return that same block with the content it already had. This content probably does not make sense in the current execution context.

Let’s see the output:

What would happen if we used that data accidentally? Let’s see it, again, in code. We modify the program so that the second allocation is made for a structure that we have defined:

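Again, the modified program is shown as an image in the original post. A hedged reconstruction in C (the structure and field names are our own; the post only states that it holds a pointer to a string and two integer values) might be:

#include <stdio.h>
#include <stdlib.h>

/* A structure with a pointer to a string and two integer values
   (field names are ours, for illustration). */
struct record {
    char *name;
    int   a;
    int   b;
};

int main(void)
{
    /* Fill a block with random data and free it. */
    int *p = malloc(10000 * sizeof(int));
    if (p == NULL)
        return 1;
    for (int i = 0; i < 10000; i++)
        p[i] = rand();
    free(p);

    /* The new allocation may land on that recycled memory, so the
       structure starts its life full of "garbage". */
    struct record *r = malloc(sizeof(struct record));
    if (r == NULL)
        return 1;
    printf("name=%p a=%d b=%d\n", (void *)r->name, r->a, r->b);
    free(r);
    return 0;
}
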
As we can see, we make use of ‘p’ by filling that area with random data. We free that block and then request one for a structure that should contain a pointer to a string and two integer values. Let’s check a series of executions:

As we see, the structure is initialized with “garbage”, and making use of this “garbage” is problematic, if not worrying, and completely insecure. Imagine these values being used in a critical application.

The issues related to manual memory management do not end with those already mentioned. What we have seen is just a small sample, and the list could go on: pointer arithmetic errors, out-of-bounds writes, and so on.

New Management Mechanisms

C++ has greatly improved manual management. Early versions of the language already replaced the allocation functions with the “new” and “delete” operators, and the modern standard extends and improves memory management through “smart” pointers: memory containers that call their own destructor when they detect they are no longer needed, for example when an object goes out of scope and is no longer referenced by any other variable or object.

Still, even with smart pointers there is room for surprises, and even for cases where we must fall back on the traditional method, whether for efficiency or because of limitations in the libraries used by a given application.

Another method of memory management that does not require a collector is the one used by languages such as Swift or Rust. The first uses ARC (Automatic Reference Counting), an “immediate” form of memory reclamation that does not require pauses: it relies on the compiler to insert into the code the appropriate instructions to free memory when it is no longer going to be used. Rust, a relatively modern language, uses a method based on the concepts of “borrowing” and “ownership” of objects created with dynamic memory: a compromise between not carrying the burden of a garbage collector and the inconvenience of the programmer having to reason, at least minimally, about the logic of “borrowing” an object to other methods.

Conclusions

It is clear that manual memory management is a tremendous source of issues that can (and usually do) lead to serious security problems. It also demands considerable skill, attention and experience from programmers who use languages like C or C++. This doesn’t mean these languages should be abandoned because they are complex in certain respects; as we said at the beginning, they are unavoidable in certain types of applications.


Don’t miss the previous entries in this series:

A new organizational role for Artificial Intelligence: the Responsible AI Champion

Richard Benjamins    22 May, 2020

With the increasing uptake of Artificial Intelligence (AI), more attention is being given to its potential unintended negative consequences. This has led to a proliferation of voluntary ethical guidelines or AI Principles through which organizations publicly declare that they want to use AI in a way that is fair, transparent, safe, robust, human-centric, and so on, avoiding any negative consequences or harm.

Harvard University has analyzed the AI Principles of the first 36 organizations in the world to publish such guidelines and found 8+1 most frequently used categories[1]: human values, professional responsibility, human control, fairness & non-discrimination, transparency & explainability, safety & security, accountability and privacy, plus human rights.

The figure below shows the timeline of publication date of the AI Guidelines for the 36 organizations. The non-profit organization Algorithm Watch maintains an open inventory of AI Guidelines with currently over 160 organizations[2].

Timeline of publication date of the AI Guidelines for the 36 organizations

From Principles to practice

While much work is dedicated to formulating and analyzing AI Guidelines or Principles, much less is known about the process of turning those principles into organizational practice (Business as Usual, BAU). Initial experiences are being shared and published[3],[4], and experience is building up[5],[6],[7], with technology and consultancy companies leading the way. Telefónica’s methodology, coined “Responsible AI by Design”[3], includes various ingredients:

  • The AI principles setting the values and boundaries[8]
  • A set of questions and recommendations, ensuring that all AI principles are duly considered in the product lifecycle
  • Tools that help answering some of the questions, and help mitigating any problems identified
  • Training, both technical and non-technical
  • A governance model assigning responsibilities and accountabilities

Here we focus on a new organizational role that is essential for implementing the responsible use of AI in an organization. The role is critical to the governance model, and we have coined it “Responsible AI Champion”[9] (RAI Champion).

Introducing the Responsible AI Champion

Why do we need champions? AI & ethics is a new area in many organizations, and identifying champions is a proven strategy for establishing new areas. A champion is knowledgeable about the area, is available to fellow employees in a given geography or business unit, and provides awareness, advice, assistance and escalation if needed. Champions are also crucial to turning new practices into BAU, and as such are agents of change. In particular, the responsibilities of a Responsible AI Champion are to inform, educate, advise & escalate, coordinate, connect and manage change.

Inform

A RAI Champion informs fellow employees about the importance of applying ethics to AI and data to avoid unintended harm. He or she raises awareness of the organization’s AI Principles.

Educate

A RAI Champion provides and organizes training, both online and face-to-face, for the corresponding business unit or geography, explaining how to apply the principles to the product lifecycle. He or she also explains the governance model and encourages self-educated experts to form a voluntary community of practice where “local” employees can get first-hand advice.

Advise & escalate

A RAI Champion is the final “local” contact for ethical questions about AI and Big Data applications. If neither the experts of the community of practice nor the RAI Champion can address the issue at hand, it is escalated to a multi-disciplinary group of senior experts.

Coordinate

Given that AI and Big Data touch on issues dealt with by several other areas of the organization, RAI Champions need to coordinate with all of them. Coordination is needed with the DPO (data protection officer) for privacy-related issues; with the CISO (chief information security officer) for security-related aspects; with the CDO (chief data officer) for data and AI related topics; with CSR (corporate social responsibility) for reputational and sustainability issues; with the Regulation area for possible future AI regulations; and with the Legal area for other legal issues.

In some organizations, the responsible use of AI and Big Data is part of a wider “responsibility” initiative including topics such as sustainability (SDGs), climate change, human rights, fair supply chain and reputation. In this case, the RAI Champion should coordinate and be fully aligned with the respective leaders.

Connect

RAI Champions need to connect relevant people to form communities of experts on the subject matter. Those communities are the first place to go if ethical doubts cannot be resolved within a product or project team. RAI Champions also need to form a community among themselves, connecting different geographies and business units of the organization in an active learning and sharing network. Finally, more mature organizations may also consider setting up or joining an external RAI Champion (or similar) network where experiences and practices are shared with other organizations, either from the same sector or across different sectors.

Manage change

Finally, RAI Champions are agents of change. They have to ensure that, over time, ethical considerations become an integral and natural part of any business activity touching AI and Big Data, including design, development, procurement and sales. They have to implement the governance model and turn it into BAU.

 

The RAI Champion profile

For organizations that are just starting, the RAI Champion is more a role than a full-time job. Typically, the role is taken up by AI or Big Data enthusiasts who have researched ethics topics on their own initiative and keep up with the latest developments. But the RAI Champion role is not necessarily the realm of technical people only; champions also come from areas such as regulation, CSR, and data protection. Indeed, a “good” candidate to take up the role is the DPO.

RAI Champions need to be communicative, with an interest in teaching and convincing. As with any new role of an interdisciplinary character, RAI Champions will need to be trained before they can exercise their role.

To stay up to date with LUCA, visit our Webpage, subscribe to LUCA Data Speaks and follow us on Twitter, LinkedIn and YouTube.


The Pharmaceutical Retail Industry and Their Mobile Applications

Carlos Ávila    21 May, 2020

The pharmaceutical retail industry has been forced to move much faster in the race of so-called “digital transformation” due to the global pandemic society is currently going through. Pharmaceutical companies have had to rely on applications they had already deployed, or deploy new ones quickly. These are the same applications that run their business, managing prescriptions, drug orders, discounts and so on, and that make their services attractive to customers in this period of high demand for medicines.

On the other hand, many governments around the world established mandatory quarantines, which led people to make greater use of digital media for the purchase of medicines, food and other products. As a result, mobile applications and the infrastructure supporting them play a key role today and are likely to become part of our daily lives more than ever before.

What Are the Implications of This?

All the data generated by customers is handled by their mobile devices and by the technological infrastructure (in-house or third-party) of the pharmaceutical companies. As you might expect, these applications could have vulnerabilities and pose a risk to customer data.

Many of these applications have direct communication with company devices and systems running internal processes, creating an additional attack vector for cybercriminals seeking this type of information.

Image 1: Description and functionalities of pharmaceutical applications

For this analysis, we have selected the latest version of 29 applications (iOS/Android) from pharmaceutical companies where the user can access various services. These include, mainly, online purchase of drugs and management of medical prescriptions. The applications were randomly selected from pharmaceutical companies in South America, Spain, and the United States.

Within this set of application samples, we focused on analysing only the mobile application. Although weaknesses were discovered on the server side (backend), these were not included.

For this analysis, we employed an Android device (rooted), an iPhone 5S (no jailbreak) and our platforms mASAPP (continuous security analysis of mobile applications) and Tacyt (mobile threat cyberintelligence tool).

Analysis Results

We performed general tests based on the OWASP Mobile Top 10 security controls. These give only an overview of the number of tests that could be carried out on these mobile applications in a comprehensive assessment.

In our case, the results showed that, although security controls were implemented in the development of these types of applications, several weaknesses were found that should be fixed and, above all, that continuous improvement should be maintained in the development process. The vulnerabilities found, grouped by the controls evaluated, are shown in the following summary matrix:

General summary of analysed control results
(-) Feature applicable only on Android platforms

Firstly, we wish to highlight several weaknesses found in easily readable structures such as XML files, API keys, or configuration files, which denotes insecure local storage.

Image 2: Certificate/Key Hardcoded files
Image 3: Readable API Keys Hardcoded Files

While a large number of these applications establish secure communication channels (HTTPS) with their backends, some unencrypted HTTP channels are still in use, as shown in our results table. We also found applications that do not verify the authenticity of their certificates, or that use self-signed certificates. This shows that security needs to be improved in this regard.

Image 4: Use of Self-Signed Certificates

Also, among other insecure programming practices, we noted the lack of code obfuscation (depersonalization) to make the reversing process harder in almost all the Android applications.

Image 5: Review of java classes after reversing process
Image 6: Documentation and technical comments in detail

A not-insignificant finding in this analysis is that 5 of the applications were found by Tacyt on unofficial markets. In many cases they were uploaded by users who did not necessarily own the application (we do not know for what purpose).

Image 7: Sample of an application found on other unofficial markets

Conclusions

We believe that these findings are a further contribution to the progress towards enhanced security and hope that they will help application developers from the pharmaceutical sector.

In this global health crisis, there have been many other cases where industries have had to abruptly transform many of their traditional services into digital ones, with all the IT risks that this entails.

Managing the security and privacy of user data in pharmaceutical applications is essential, since these applications store private health data. It is important for companies in this sector to be aware that their customer data is exposed to computer risks and that they should protect it through appropriate controls and continuous evaluations, also keeping their technological infrastructure safe from potential cyberthreats.

Business Continuity Plan: From Paper to Action

Diego Samuel Espitia    20 May, 2020

Medium and large companies that must comply with industry or national standards and controls have had to develop what is known as a BCP (Business Continuity Plan). Through it, experts in the company’s operations or specialised consultants define the course of action to be taken in different scenarios where business continuity is threatened. Many small companies, in turn, have had to implement one in order to do business with the companies that are required by law to demand it.

BCPs emerged after the attacks of September 11th, 2001, when it became clear that many companies did not know how to react if their headquarters became inaccessible. Disaster scenarios were therefore drawn up for one business area or for the whole business, looking for alternatives to bridge the gap for a period of time. Some of these plans considered earthquakes, tsunamis, and access closures due to social circumstances, among others. But how many of them included a pandemic among the potential causes of a business blockage?

Not many companies took it into account. However, this is the simplest problem: even if some of the approaches designed for natural disasters or blocked access to headquarters were followed, we cannot know exactly when it would be possible to go back to work.

The Technology and Security that a BCP Should Include When Facing a Pandemic

Let’s start by explaining what should have been done beforehand to be prepared. It is essential to have run a pilot of how our services and employees would respond to teleworking. Why? Because even if we use a VPN that lets the worker appear to be directly connected to the company’s network, the services and the network are not necessarily ready to receive requests through that connection.

Looking at behaviour on the Internet, validation of exposed services shows growth of more than 40% in the use of RDP, as shown by Shodan on its blog. A simple search finds computers with known vulnerabilities:

In fact, not all companies have the technology required to deploy enough VPNs to get the entire company connected remotely. However, this should have been taken into account in order to avoid exposing vulnerable services. To this end, there are many comparisons and resources on the Internet to help you make secure decisions that fit within the budget.

Secondly, companies must know what they expose to the Internet and what the regular use of those services looks like. With just this basic data it is possible to identify when use from external networks is exceeding the capacity of each service, or when we are under cyberattack.

So, What’s the Next Step?

Once the exposed services are clear, information security measures can be taken. These should be implemented the moment the continuity plan is activated; in other words, by this time they should be fully operational and under review.

These measures must be oriented towards the full identification of users. When working remotely, local identification measures such as the network, the computer’s MAC address and its configuration are not available. In most cases only the username and password are checked, and this has proven not to be a mechanism that guarantees identification.

Once you have this control, you must start monitoring events in all services and have fine-tuned alerts to detect external threats, since at this time all connections will be made from outside the company network. For this reason, all perimeter security controls must be adjusted to what was calculated in the continuity plan.

What to Do Next?

The last measure that must be covered by the continuity plan concerns the technological tools that will be used to manage the operations and work of the different groups within the company. This must include training for the staff, and to this end it is essential to have strategic allies in the world of technology.

This is because of the endless number of tools available on the Internet today, not all of which comply with the information protection measures required to ensure business continuity. One of the main examples is cloud services. In recent years, cloud-based tools have grown exponentially in options and implementations, but not always with sufficient security measures. This is critical considering that the cloud is almost the cornerstone of digital transformation, as well as of a good continuity plan, which today must be operating at maximum capacity.

Conclusions

After the first month of measures at a global level, it has been possible to verify that the business continuity plans of some companies have worked properly in terms of their essential objective: keeping employees performing their functions and able to access information. Nevertheless, the growth of services exposed on the Internet and the vulnerabilities detected in them show that information security was not taken into account when these plans were designed.

This is evidenced by the control reports made from our SOC (Security Operations Centre), which have been widely analysed in different media by our ElevenPaths experts and published in a guide: Risk Guide and Recommendations on Cyber Security in times of COVID-19.

For this reason, companies must begin to align their plans with the new circumstances and to implement controls and mechanisms that allow their employees not only to carry out their tasks, but also to guarantee the security of the information that, in the near future, will sustain the continuity of the business.

How to Make API REST Requests to Tor Hidden Services in an Android APK

Rafael Ortiz    19 May, 2020

We were building a proof of concept in the Innovation and Laboratory Area as part of the architecture needed to create a Tor hidden service, and we also needed a mobile application to interact with that hidden service through a JSON API. As it turns out, there are not a lot of well-documented ways to do this seemingly straightforward task. We are sharing our notes here in case anyone else wants to add this support to their application.

If you don’t care about the background, go ahead and skip to the “Implementation” part below.

Background

First, let’s take a look at the different building blocks we’ll need to make calls to a hidden service from our app. We’ll assume you have a basic familiarity with Tor and Android app development.

Orbot, NetCipher, and the Guardian Project

Orbot is a free application for Android that acts as a Tor proxy for your device. You can think of it as running the Tor service on your phone, the same as you would on any other Linux system. Orbot is developed by the Guardian Project, who create and maintain many privacy-oriented apps for Android. They are the team behind the officially endorsed Tor Browser for Android, and the Orfox+Orbot combo that came before it.

However, forcing a user to install and launch Orbot before running your app is not a friendly experience. To address this they created NetCipher. NetCipher provides, among other things, an OrbotHelper utility class that lets your app check if Orbot is installed, prompt the user to install it easily, and automatically launch Orbot in the background when your app launches. It’s analogous to how the Tor Browser bundle launches a Tor service in the background.

It’s not quite the same, though. The current official Tor Browser for Android does away with NetCipher and Orbot as a requirement, opting to bundle Tor within the application itself. This gives Tor Browser users across different platforms a familiar all-in-one experience. However, since Orbot integration is much simpler than adding a Tor daemon to our app, we will use that instead.

Volley Library and ProxiedHurlStack

On the NetCipher library GitLab page you can see examples provided for many different Android HTTP libraries. The main supported methods are HttpURLConnection, OkHttp3, HttpClient, and Volley, and you can see sample implementations for each of these techniques.

Unfortunately, these examples and the artifacts associated with them for other HTTP clients did not work out of the box. Most of them haven’t really been touched in at least a year, and it appears the standard method of implementing Tor has gone from NetCipher+Orbot (analogous to proxying your local Firefox install through Tor) to an integrated Tor service in the APK itself (analogous to the Tor Browser bundle).

After some trial and error, it turned out you don’t really need the info.guardianproject.netcipher:netcipher-volley artifact to get Tor working in your app. If you look at the StrongHurlStack.java source you can see it’s pretty straightforward to reimplement. We also came across this Stack Overflow post describing the same concept. The example doesn’t include an SSLSocketFactory like the StrongHurlStack does, but we can rely on Tor to provide the end-to-end encryption and identity assurance that SSL would; SSL for Tor hidden services is redundant.

Implementation

We will assume you already have an API accessible as a hidden service at somesite.onion.

The dependencies you need to add to your app level build.gradle file are the following:

dependencies {
    implementation 'com.android.volley:volley:1.1.1'
    implementation 'info.guardianproject.netcipher:netcipher:2.1.0'
}

Be sure to change the versions to the latest available at the time of implementation.

Next, create a ProxiedHurlStack.java file and class as described in both the NetCipher examples and the Stack Overflow post, and add it to your project.

package your.app.here;

import com.android.volley.toolbox.HurlStack;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxiedHurlStack extends HurlStack {
    @Override
    protected HttpURLConnection createConnection(URL url) throws IOException {
        // Route every connection Volley opens through Orbot's local
        // SOCKS proxy (Tor listens on 127.0.0.1:9050 by default).
        Proxy proxy = new Proxy(
                Proxy.Type.SOCKS,
                InetSocketAddress.createUnresolved("127.0.0.1", 9050)
        );
        return (HttpURLConnection) url.openConnection(proxy);
    }
}

Now in our MainActivity.java file we can import all the relevant libraries.

package your.app.here;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.JsonObjectRequest;
import com.android.volley.toolbox.Volley;

import org.json.JSONObject;

import info.guardianproject.netcipher.proxy.OrbotHelper;

Next, we call init() and installOrbot() from our onCreate() method to spin up Orbot in the background. If Orbot is already installed, init() will return true and prompt Orbot to connect to the Tor network. If Orbot is not already installed, init() will return false and the user will be taken to the Play Store and prompted to install Orbot. When installation finishes, the app will tell Orbot to create a connection to the Tor network.

@Override
protected void onCreate(Bundle savedInstanceState) {

    // ... other actions here ...

    if (!OrbotHelper.get(this).init()) {
        OrbotHelper.get(this).installOrbot(this);
    }
}

Now we can build a JSON request to our hidden service. You would add this next part wherever you send requests to your API.

JSONObject jsonBody = new JSONObject("{\"your payload\": \"goes here\"}");
RequestQueue queue = Volley.newRequestQueue(this, new ProxiedHurlStack());
String url = "http://somesite.onion/your/api/endpoint/here";

JsonObjectRequest jsonRequest = new JsonObjectRequest(
    Request.Method.POST, url, jsonBody,
    new Response.Listener<JSONObject>() {
        @Override
        public void onResponse(JSONObject response) {
            // do something with the response
        }
    },
    new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            // do something with the error
        }
    }
);

queue.add(jsonRequest);

And that’s it! Now you can test your app and see API calls being made to your hidden service.

CapaciCard Is Already Working on iPhone

Innovation and Laboratory Area in ElevenPaths    18 May, 2020

CapaciCard has been further developed to work on iPhone. This expands its utility, since until now it could only be used on Android. CapaciCard is a project of the ElevenPaths Innovation and Laboratory Area that has caught the attention of several logistics areas, both within Telefónica and in other external companies.

As before, CapaciCard allows authentication from any platform without the need for complex integrations.

What CapaciCard Is

CapaciCard is an in-house and patented identification and authorisation technology designed to allow users to authenticate and/or authorise any operation on a computer system. Can you imagine authenticating yourself or authorising a payment by simply swiping a plastic card on your smartphone screen (without NFC or additional hardware)? So now try to imagine the same thing but this time by placing that same card on the touchpad of your laptop.

Thanks to this update, CapaciCard can now be used on both Android and iOS. But do not forget that it can also be used on a laptop touchpad, as well as in dedicated IoT beacons like our iDoT.

Advantages

CapaciCard enables authentication, identification or authorisation of users by taking advantage of the inherent capacitive features of multitouch surfaces, such as smartphone screens or touchpads. No NFC, network connection, Bluetooth or additional hardware is required, just a cost-effective card.

  • It can be used on your laptop touchpad (video in Spanish).
https://youtu.be/iiTuQGSONuw
  • Cost-effective: CapaciCard is a simple plastic card with capacitive points inside forming a unique graph for each user. Any capacitive surface (like those of multitouch smartphone screens or laptop touchpads) can read them.
  • Authenticate to many services from the same device: CapaciCard supports many point configurations, so with just one card you can authenticate to any website. The website only needs to be slightly modified to take advantage of this technology, as is usually done to incorporate any identity provider.
  • Paired with your device: Leave passwords and coordinate cards behind. CapaciCard is simple and easy to use. Don’t worry about losing your card: it is paired with your devices beforehand through a simple process, which prevents it from being used by a third party.

More information on https://capacicard.e-paths.com

Bestiary of a Poorly Managed Memory (III)

David García    14 May, 2020

If we must choose a particularly damaging vulnerability, it would most likely be arbitrary code execution, and even more so if it can be exploited remotely. In the first blog entry we introduced the issues that can be caused by a poorly managed memory. The second one was on double free. Now we are going to see more examples. 

Dangling Pointers 

Manual memory management is complex, so attention must be paid to the order of operations, to where resources are obtained and to where we stop using them, in order to free them properly.

It also requires tracking copies of pointers or references that, if freed too early, may cause pointers to become “dangling”. That is, making use of a resource that has already been freed. Let’s see an example:

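The example in the original post is an image. A minimal C sketch along the same lines (the pointer names p and p2 follow the output described below; the string contents and buffer size are our own) could be:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(32);
    if (p == NULL)
        return 1;
    strcpy(p, "una cadena en el heap");

    char *p2 = p;                     /* a second pointer to the same block */
    printf("(p2) apunta a %s\n", p2); /* still valid here */

    free(p);                          /* the block is released too early... */

    /* ...but p2 still holds the old address: it is now a dangling pointer.
       Dereferencing it is undefined behaviour; it may print nothing,
       print garbage or crash. */
    printf("(p2) apunta a %s\n", p2);
    return 0;
}
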
Let’s run:

This leaves us with a pointer pointing to a heap memory area that is no longer valid (note that nothing is printed after the second “(p2) apunta a…”). Moreover, there is no way to know whether a resource whose address has been copied is still valid, just as it is not possible to recover leaked memory once its reference is lost (we will see this later).

To tag a pointer as no longer valid, we assign the NULL macro to it (in “modern” C++ we would assign nullptr) to signal that it no longer points at anything. But if that NULL is never checked, this is useless. Therefore, every pointer must be verified to be non-NULL before the resource it refers to is used.

The good practice is therefore: once we free memory, we assign NULL (or nullptr in C++) to tag that the pointer no longer points at anything valid. Also, before making use of it, whether to copy it or to de-reference it, we must verify that it is valid.
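
A tiny C sketch of this discipline (names and string contents are ours, for illustration) could be:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(16);
    if (p == NULL)
        return 1;
    strcpy(p, "valid data");

    free(p);
    p = NULL;          /* tag: the pointer no longer points at anything valid */

    if (p != NULL)     /* always check before copying or dereferencing */
        printf("%s\n", p);
    else
        printf("p is NULL, nothing to use\n");

    return 0;
}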

Memory Leaks  

The opposite of using a memory area that is no longer valid is having no pointer left pointing at a memory area that is still valid. Once the reference is lost, we can no longer free that reserved memory, and it will occupy that space until the program ends. This is a big issue if the program never finishes, such as a server that normally runs until the machine is shut down or some other unavoidable interruption occurs.

An example (if you want to replicate it, do it in a virtualised system for testing):

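The original code is shown as a screenshot in the post. A minimal C sketch of the same kind of leak (block size and names are ours, chosen for illustration; again, run it only in a disposable virtual machine) might be:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p;

    /* Endless loop: each iteration reserves a new block, touches it so the
       pages are really committed, and then overwrites the only reference to
       the previous one. The lost blocks can never be freed, so memory use
       grows until the system swaps and the OOM-killer ends the process. */
    for (;;) {
        p = malloc(100 * 1024 * 1024);       /* 100 MB per iteration */
        if (p == NULL)
            continue;                        /* keep pressing even on failure */
        memset(p, 0xAA, 100 * 1024 * 1024);  /* force the memory to be used */
    }
}
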
The code keeps requesting chunks of memory until all the heap memory is used up. This causes the system to run out of RAM, start swapping and, finally, the OOM-killer kills the process for overconsuming memory.

What is the OOM-killer? It is a special kernel procedure (on Linux systems) that terminates processes so that the system is not destabilised. In the screenshot we can see the output of the ‘dmesg’ command, which shows our process being killed because of the resources it was consuming.

If we analyse the code, we see that we enter an endless loop where memory is reserved and the same pointer is reassigned to each new block. The previous references are never freed and are lost, which produces a relentless memory leak (much like a burst pipe) that ends drastically.

This is obviously a dramatization of what would happen in a real program, but that is how it actually occurs: the reserved memory stops being controlled at some point, lost references accumulate, and it ends up becoming a problem. In applications with memory leaks that we only use for a few hours, we may only notice a slowdown (more evident in times when RAM was more limited) or growing memory use; on servers, however, the issue commonly leads to a service outage.

In the next post we will see the use of uninitialized memory.


Don’t forget to read the previous entries in this series:

Advances of Industry 4.0 and its impact on society

Olivia Brookhouse    14 May, 2020

We are currently living at a time where technology is heavily present in our lives. One of the greatest advances has to be the development of Industry 4.0, known as the fourth industrial revolution. It is a term that refers to how computers, data and process automation are connected.

All of this has transformed the way you can run a business, especially in the manufacturing area. Have you ever thought about how manufacturing could be operated remotely? Well, this is now becoming a reality. So what does it all mean?

Forbes Magazine reveals that the most relevant feature is the interconnection of computers; in fact, they can make decisions without any human participation. Combined with cyberphysical systems and Internet of Things (IoT) technology, all of this results in more efficient and productive factories. However, its impact extends beyond manufacturing, and without a doubt Industry 4.0 has had a strong impact on everyday life.

In fact, KPMG Consulting indicates that, according to Gartner reports, the Internet of Things market will be valued at approximately 3.2 billion euros in 2020. Similarly, the market for Industry 4.0 is projected to reach 3.4 billion euros next year.

Discover more about the history

Prior to Industry 4.0, there were three industrial revolutions, each of which modified the paradigms of the manufacturing environment (Ing Tay et al., 2018). Some notable features are:

  1. Industry 1.0 began approximately in the 1780s. It was dominated by water and steam energy in mechanical production.
  2. Industry 2.0 is a period where mass production excelled and was the main means of production. For example, production in the steel industry allowed the manufacturing of railways in the industrial system.
  3. In the middle of the 20th century, the third industrial revolution began. This is where we started to see the introduction of digital technologies. Computers, information and communication technology defined this period.
  4. And finally, industry 4.0, which has forced people to use high-tech devices. This period includes the transformation through digitalization and business automation.

Elements of Industry 4.0

Industry 4.0 represents the future of manufacturing worldwide. Among the relevant aspects that are on par with technological trends (Ing Tay et al., 2018), we present the following:

Cyber-physical systems (CPS)

CPSs comprise sensors installed in all elements of Industry 4.0. You must ensure that each of these systems behaves in a stable manner, especially when using artificial intelligence and Internet of Things technologies. This feature enables your company to build global networks that link storage systems, machinery and production facilities.

Internet of Things (IoT)

What is IoT? It represents the advanced connectivity of systems, services, physical objects, object-to-object communication and data exchange. The Internet of Things is a digital transformation enabler that offers your company infinite possibilities. Thanks to Industry 4.0, you can obtain control and automation of elements such as heating, lighting, machining and remote monitoring in industrial processes.

Big data and analytics

In Industry 4.0, having a solid Big Data strategy is beneficial for improving predictive manufacturing, as well as providing a vital guide to the development of industrial technology through the evolution of the Internet. Analytics processes the huge amounts of information generated daily, which would not be possible with traditional methods.

Autonomous robots

A few years ago you probably only envisaged their presence in movies. Today, however, you can find robots with greater versatility, advanced functions and easier operability in various fields. In industry 4.0, robots will be able to interact with each other and actively collaborate with human personnel. They will become increasingly sophisticated and therefore a vital tool in the field of manufacturing.

3D printing

The use of advanced data technologies is promoted throughout Industry 4.0. Thus, 3D printing, or additive manufacturing, tends to make use of newly available materials. Some products may require a combination of metallic components and smart materials.

Finally, it is important to know that the fourth industrial revolution presents an enormous opportunity. The greatest potential of Industry 4.0 lies in improving the world’s manufacturing output so that it can meet the needs of society.

To keep up to date with Telefónica’s Internet of Things area, visit our webpage or follow us on Twitter, LinkedIn and YouTube.