Pillars of a data-driven organization and how not to fail in the selection of use cases

Antonio Pita Lozano    25 October, 2022

The selection of use cases is a common approach to the data-driven transformation of companies, but in most cases these use cases do not achieve the expected results, generating disappointment and slowing down the necessary transformation.

In some cases we even find failures that undermine the company’s entire transformation effort. For this reason, I always recommend carrying out an analysis after each use case to identify possible areas for improvement that will allow us to improve the results of the following ones.

Pillars of a data-driven organization

After analysing many use cases implemented in different companies, we can identify the main cause of failure: poor management of expectations. If we delve a little deeper into the underlying reasons, we find that most failures stem from an inadequate alignment between the use case and the maturity of the organisation’s data-driven pillars, which, as we may recall, are three: technology, talent, and organisation and culture.

  • Technology: the use case should be solved with the technologies already identified and implemented in the company. If the use case requires a technology that is new to the company, it is advisable to proceed in two steps: first introduce the technology with a simple, well-known and easy-to-implement use case, and only then tackle the analytical use case, once sufficient experience has been gained. It is important to remember that tackling an analytical use case with a brand-new technology maximises the risk of failure.
  • Talent: there should be an internal capability map covering all the activities required by the use case. Otherwise, new capabilities should be brought into the company, internally or externally, and tested first on a simple, well-known and easy-to-evaluate use case to build professional confidence among all team members. If a problem appears in a new use case carried out by a newly formed team, mistrust will arise among its members.
  • Organisation and culture: it is necessary to ensure that the company is prepared to put the knowledge extracted from the data to good use by having the right processes and culture in place. It must be remembered that everything new generates reluctance, even more so when it is not fully understood, especially since we know that every model we build will fail in specific cases.

A company’s data-driven pillars will dictate which use cases will succeed and which will fail.

Use case selection

If the organisation is in an incipient stage of its data-driven journey, it is advisable to select simple use cases that require easy or non-existent technological implementation and that affect the smallest number of functional processes.

In addition, it is advisable to select from among those use cases that have already been successful in other companies in the sector. On the other hand, if the maturity of the organisation’s data-driven pillars is high, more innovative and risky use cases can be chosen, as the company will take on and reward this risk.

Sometimes we find companies with heterogeneous maturity levels: for example, strong in technology and talent but weak in cultural transformation. In these cases, it is essential to identify the strengths and select the use cases that build on them, rejecting those that rely on the less developed pillars.

If you want to succeed in developing analytical use cases and become a data-driven company, remember to align the use cases with the maturity of the company’s data-driven pillars.

■ Originally published in the book “A Data-Driven Company” by Dr. Richard Benjamins. If you want to know more about the book, I recommend a previous post.

The 7 priorities of a company when adopting Blockchain

José Luis Núñez Díaz    24 October, 2022

Last week I shared with the 101Blockchains community of professionals and digital innovators the Blockchain vision we have at Telefónica and how companies can create value by adopting this technology. If you are curious you can check the full presentation at this link.

In the debate with the attendees, the common challenges companies face when adopting Blockchain came up again. I had the opportunity to test them with the audience by asking two questions. The results, without claiming any statistical validity, do point to certain trends we can analyse. Above all, they suggest the first of the priorities we must address in a Blockchain project.

The technology

On the type of blockchain preferred for business applications, the majority options were private and permissioned networks. Bearing in mind that the options were not mutually exclusive, we could even conclude that most participants prefer this type of network. The third option, chosen by a third of the audience, was the public Ethereum network.

Preferred types of blockchain networks for business applications

With the second question we tried to find out the attendees’ main concerns when adopting blockchain in their operations. Again, the answer left no doubt. In this case we were presented with the business version of the blockchain trilemma, that is, having to choose between:

  • the traceability and transparency of public or permissioned networks, to the detriment of scalability and performance
  • the scalability and performance of private networks, while giving up a degree of transparency.

Challenges in bringing a blockchain solution into production

We can easily relate the two answers. A minimum transparency of the operations that are recorded on the network is guaranteed by the technology itself. The more participants, the more transparency and guarantees about the immutability of the recorded information.

In use cases that are intrinsically connected to the business, or with demanding performance requirements, the preferred technologies are those that offer more control and trust between the network participants; Hyperledger Fabric in particular is the de facto standard in these cases. We are talking about environments with few, known participants, such as supply chains or data reconciliation platforms. However, in cases where transparency is key, companies find in the public Ethereum network, with thousands of independent participants, the perfect ecosystem to trace their operations and allow third-party verification.

The importance of a PoC and Minimum Viable Ecosystem

We have already chosen the technology that best suits the requirements of our use case. Now we must ensure that, once deployed, the solution will help solve the challenges of the case. To do this, the best option is to design a proof of concept and validate as soon as possible which minimum functionalities will create value for the business. Its scope is limited, and it usually aims only to demonstrate quickly that this minimum functionality can be implemented. We place ourselves in the field of technology, and technology always works. However, important as it is, a proof of concept is not sufficient in most cases.

We need to look beyond the purely functional level and analyse which attributes or components of the project will actually determine its feasibility. This is what we call not the minimum viable product, but the minimum viable ecosystem. The value of a blockchain project lies in the value captured by all the participants. We have to identify the right participants to create sufficient value, and understand the relationships between them, the governance, the operating model, how new participants can easily join, the systems with which they must interact and the integration interfaces with them. In short, it is a question of mapping out the interactions and the chain of creation and transmission of value: where, how and when it is created, and who captures it, how and when.

Elements to take into account when scaling a proof of concept

However, identifying all these components and relationships does not mean implementing them. For example, think of systems that record, process and make decisions based on information collected by IoT devices: that input information can be simulated. Exactly the same applies to the information received from other information systems.

The important thing about the minimum viable ecosystem is to understand what information our solution is going to deliver, to whom, how and when. And furthermore, to assess whether this scenario is sufficient to approve the project.

A minimum viable ecosystem is never functional. It offers a complete vision of the impact that the solution will have on the processes on which it acts. We are not simulating the solution, we are presenting in the greatest possible detail all the potential scenarios and opportunities for the participants to capture part of the value created.

We can think of the minimum viable ecosystem as the intermediate step between a proof of concept and a pilot: the first is a conceptual approach, while the second is already a functional exercise. A pilot can be made productive and scaled up into a production system; the minimum viable ecosystem, by contrast, need not be implemented.

Return on investment

As a result of the minimum viable ecosystem we talked about the value captured by the participants. In any evaluation committee where the continuity of a project is to be decided upon, this value must be estimated. In blockchain projects we talk about a network and decisions are more complicated. The viability of the project may depend on the decisions on continuity taken by another of the participants. If one does not go ahead, the rest may not be able to generate enough value to make the necessary investments viable.

Therefore, the parameters of profitability and return on investment have more than one dimension in these projects. As part of building the minimum viable ecosystem, it is necessary to understand the motivations of each participant and to assess the benefits each of them will obtain from the project. In addition, it is common for the different participants in the same project to play different roles. A typical example is a supply chain involving distributors and suppliers. Each can obtain benefits of a different nature, and even in different financial years.

The benefit generated can be translated, directly or indirectly, into book value for the different participants. For example, consider a food traceability project designed to convey greater confidence to the final consumer. In the medium term, this confidence may both retain the customer and justify a price premium. Now imagine a small producer: perhaps the project allows them to demonstrate their excellence by meeting delivery deadlines or quality parameters. With this evidence, they could renegotiate their contracts and obtain direct benefits in the short term.

This variability and asymmetry in the benefits obtained means that each project, depending on the specific use case and the participants involved, combines different returns and even different levels of investment to achieve the expected results. Characterising this map of benefits in as much detail as possible should be a priority before embarking on costly integration projects or migrating legacy systems and applications to a new blockchain-based solution.

Interoperability between networks

The development of business projects based on blockchain is accelerating in parallel with the crypto economy. Use cases such as Bitcoin and other cryptocurrencies have developed an ecosystem concentrated in a few public networks. However, when a company decides to launch a blockchain project, it tends to create its own private network. The result is a multitude of networks deployed independently as silos, even though many of them may share technology.

There is an ongoing debate about the interoperability of blockchain networks. How is interoperability ensured between different blockchain networks? Before answering this question, we need to ask what we mean by interoperability and whether it is necessary for our specific use case.

A priori, two blockchain networks can interoperate at different levels. They could share data or allow a smart contract deployed on one network to write or interact with another. They could also validate transactions and reach consensus between them. However, what use cases need these levels of interoperability? Let’s think of two business applications, one for payroll management and one for expense management. Both probably need to be aware of employee data. These could be replicated in each application or available in a common repository. However, they do not talk to each other. They do not interoperate.

Each application simply uses the information it obtains from available sources. The same happens with Blockchain. The information stored in a network or the tasks (smart contracts) are self-contained. They do not need to interact with another network. In any case, it will be the application that integrates with two networks simultaneously. From each of them it will retrieve or register information of different nature that allows the implementation of the use case.
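This application-level integration can be sketched in a few lines of Python. All class and method names below are hypothetical stand-ins, not any real blockchain SDK; the point is only that the application talks to both networks while the networks never talk to each other.

```python
class NetworkClient:
    """Minimal in-memory stand-in for a blockchain network client (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.records = {}

    def write(self, key, value):
        self.records[key] = value

    def read(self, key):
        return self.records.get(key)


class CertificationApp:
    """The application integrates with two networks simultaneously;
    the networks themselves never interoperate."""
    def __init__(self, private_net, public_net):
        self.private_net = private_net
        self.public_net = public_net

    def register_document(self, doc_id, content_hash):
        # The full operational record goes to the private network...
        self.private_net.write(doc_id, {"hash": content_hash, "status": "registered"})
        # ...while only the verifiable fingerprint goes to the public one.
        self.public_net.write(doc_id, content_hash)

    def verify(self, doc_id):
        # The app retrieves information of a different nature from each network.
        record = self.private_net.read(doc_id)
        evidence = self.public_net.read(doc_id)
        return record is not None and record["hash"] == evidence


app = CertificationApp(NetworkClient("private"), NetworkClient("public"))
app.register_document("doc-1", "abc123")
print(app.verify("doc-1"))  # True
```

Each client here is self-contained, mirroring the point above: the use case is implemented by the integrating application, not by cross-network interoperability.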

Reuse of components or ad-hoc developments?

Many of the business applications based on blockchain make use of the same basic functionality of the technology. At least two out of three use cases are based on the immutability of the information stored or exchanged. The rest are split between asset tokenisation (rights of use, intangible assets, digital twins, etc.) and information source reconciliation.

Let’s analyse the most common type of blockchain project: traceability and certification. Thanks to immutability, we can create irrefutable digital evidence that can also be dated and attributed accurately. With this evidence we can leave a trace of a piece of information or an event so that it can be verified by a third party. Now let’s think about a specific case of document certification. If we atomise the necessary operations, we can create a catalogue of actions that we could reuse in another case.

These actions would include creating the digital asset that represents the document, associating the document’s intrinsic data with it, signing it digitally to attribute it to its holder, creating a unique fingerprint that allows subsequent verification, and so on. The same operations could be applied in an industrial traceability project to monitor the condition of a specific part: in this case we would create the asset, assign it to the operator responsible for the part, and attach intrinsic data to identify it.

We have therefore managed to reuse components across projects of a different nature. If we think about the implementation, each operation can surely be translated into a generic smart contract that can be parameterised according to the specific process we monitor and trace. These generic smart contracts are the reusable components that make it possible to significantly reduce the development time of blockchain solutions. In some cases we will need to develop specific components (i.e. new smart contracts), but the majority of use cases can be built with these reusable components.
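As an illustration of such a reusable, parameterisable component, here is a minimal Python sketch. The function names and fields are invented for the example; in a real project these operations would live in a generic smart contract, but the reuse pattern is the same: identical operations serve both a document-certification case and an industrial-traceability case.

```python
import hashlib
import json

def fingerprint(data: dict) -> str:
    """Create a unique, reproducible fingerprint for later verification."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

def create_asset(asset_id, holder, intrinsic_data):
    """Generic 'create digital asset' operation, parameterised by use case:
    a document, an industrial part, etc."""
    asset = {
        "id": asset_id,
        "holder": holder,          # attribution to the responsible party
        "data": intrinsic_data,    # intrinsic data of the document/part
    }
    asset["fingerprint"] = fingerprint(
        {k: asset[k] for k in ("id", "holder", "data")}
    )
    return asset

def verify_asset(asset) -> bool:
    """Third-party verification: recompute the fingerprint and compare."""
    expected = fingerprint({k: asset[k] for k in ("id", "holder", "data")})
    return asset["fingerprint"] == expected

# The same two operations cover two projects of a different nature:
doc = create_asset("contract-42", "alice", {"pages": 10})
part = create_asset("piece-007", "operator-3", {"batch": "B12"})
print(verify_asset(doc), verify_asset(part))  # True True
```

Only the parameters change between use cases, which is exactly why packaging these operations as generic components shortens development so much.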

Need for decentralisation

Another recurrent debate among blockchain advocates raises the extent to which it is necessary to decentralise the operation of a network. In fact, many experts claim that a private network does not respect the underlying value of having a decentralised platform. From this position only public networks with thousands of independent nodes would be true blockchain networks. Mass replication in thousands of nodes without any one node being able to influence the rest guarantees immutability and integrity.

In cases of consortia where several partners operate the network, a minimum of decentralisation is guaranteed. However, as we said, each company is deploying its own private network. How do we guarantee immutability and integrity in these cases? As long as there is cryptographically recorded evidence that can be verified by unrelated third parties, both attributes can be guaranteed.

The basic cryptography that links the blocks of stored information makes it unfeasible to alter historical information without invalidating the distributed verification evidence. In any case, using public networks to record snapshots of the system at a given moment as evidence is a common procedure to guarantee the integrity and verifiability demanded by blockchain defenders.
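This chaining of blocks can be illustrated with a minimal Python sketch (illustrative only; real networks use far richer block structures, Merkle trees and distributed consensus). Each block’s hash commits to the previous block’s hash, so altering any historical entry invalidates everything after it.

```python
import hashlib

GENESIS = "0" * 64  # conventional all-zero hash for the first block

def block_hash(prev_hash: str, payload: str) -> str:
    """Each block's hash commits to the previous block's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], GENESIS
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Any third party can recompute the links and detect tampering."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["payload"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
print(verify_chain(chain))          # True
chain[0]["payload"] = "tx1-altered"  # tamper with historical information
print(verify_chain(chain))          # False: the evidence no longer verifies
```

Anchoring the final hash of such a chain in a public network is the snapshot procedure mentioned above: a single published value lets unrelated third parties verify the integrity of the whole history.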

Veracity of the IoT

Finally, we can reflect on what the immutability of information means. In essence, the information we record on a network cannot be altered and we can guarantee its integrity. What happens if that information is false? We are effectively “building” a lie that people will be able to verify. Therefore, we have to be careful with the information we store in blockchain. We must never believe something recorded in the blockchain without having a guarantee of how that information is being recorded.

The easiest way to guarantee not only the integrity but also the veracity of that information is to record it as close to the source as possible. In many business processes, that place is a reliable IoT device acting as an interface that automatically loads the information into the blockchain. Even so, the parties must ensure that the devices have not been tampered with before trusting them.
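A minimal sketch of recording evidence at the source: the device authenticates its reading before the data ever leaves it, so any later alteration is detectable. This is illustrative only; real devices would typically use a secure element and asymmetric signatures rather than the shared HMAC secret shown here.

```python
import hashlib
import hmac
import json

# Illustrative shared secret; real devices keep keys in tamper-resistant hardware.
DEVICE_KEY = b"factory-provisioned-secret"

def sign_reading(reading: dict) -> dict:
    """The device authenticates the reading at the source,
    before submitting it to the blockchain."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(signed: dict) -> bool:
    """A verifier holding the key can check that the reading was not
    altered between the device and the ledger."""
    payload = json.dumps(signed["reading"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["tag"], expected)

msg = sign_reading({"sensor": "temp-01", "value": 4.2})
print(verify_reading(msg))        # True
msg["reading"]["value"] = 99.9    # tampering after the device signed it
print(verify_reading(msg))        # False
```

Note what this does and does not give you: it protects the path from device to ledger, but if the device itself is compromised, the blockchain will faithfully immortalise a lie, which is exactly the caution raised above.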

TrustOS: Quick and easy Blockchain

From Telefónica we have been working for several years so that our customers can implement Blockchain without worrying about all these challenges. Our proposal is TrustOS, a simple network service that makes it easy to invoke the most in-demand blockchain functionality. Following the thesis explained above, TrustOS provides the reusable components of any Blockchain project, which we have packaged and made available to our clients. Thanks to TrustOS, a company can:

  • Add blockchain to its systems, services and applications at a low cost in time and resources. It can abstract away the underlying blockchain technology and use the TrustOS APIs to combine the best of public and private blockchain networks.
  • Simulate its minimum viable ecosystem without worrying about the network topology or developing complex integrations between its systems and Blockchain.
  • Present managers with positive business cases from the very beginning, since the investment in network deployment is minimised and the service can be used immediately.
  • Develop applications that can simultaneously interact with several blockchain networks even when they are based on different technologies.
  • Reuse the basic components of TrustOS to implement traceability or certification use cases with very few lines of code.
  • Trust in the real decentralisation of the solution thanks to the federation of networks, a novel concept that allows the creation of meshes of different networks that act as verifiers of the integrity of the information exchanged in the other networks of the mesh.
  • Guarantee the data exchanged and its integrity, thanks to the IoT modules that natively register information and evidence in blockchain through TrustOS.

Cyber Security Weekly Briefing, 15-21 October

Telefónica Tech    21 October, 2022

The Noname057(16) group attacks the Spanish Ministry of Defense

Last Friday, threat actor Noname057(16) carried out an attack against the website of the Spanish Ministry of Defense, rendering it unavailable for a short period of time.

Noname057(16) is a politically motivated group that tends to carry out denial-of-service attacks against its victims, usually institutions and companies from EU or NATO countries, especially in the public, transport and telecommunications sectors.

The group has been carrying out this type of attack since March 2022, when its Telegram channel was created, but has stepped up its activity since last summer.

Additionally, the group has recently claimed that they are not to be confused with the Killnet hacktivist group, which has a similar profile and modus operandi.

More info

* * *

Microsoft reports a misconfigured endpoint of its own

Microsoft Security Response Center has reported the remediation of a misconfigured endpoint, which could have resulted in unauthorised access to data contained on the endpoint.

The information that could have been exposed involved business transactions between Microsoft and customers, including sensitive information such as personal names, email addresses, email content, company names, phone numbers, or document attachments.

Microsoft became aware of the misconfigured endpoint on 24 September thanks to a tip-off from SOCRadar, and then proceeded to address the risk. According to the information published by Microsoft, there is no indication that customer accounts or systems have been compromised, and they have indicated that all affected customers have been notified directly.

More info

* * *

Critical vulnerability in Apache Commons Text

A critical vulnerability in Apache Commons Text has recently been disclosed. It would allow an unauthenticated attacker to remotely execute code (RCE) on servers running applications with the affected component.

Identified as CVE-2022-42889 with a CVSS score of 9.8, the flaw affects Apache Commons Text versions 1.5 to 1.9 and stems from insecure defaults when Apache Commons Text performs variable interpolation, which could lead to arbitrary code execution on remote servers.

According to the Apache Foundation, the Apache Commons Text library is present in more than 2,500 projects; the foundation recommends upgrading as soon as possible to Apache Commons Text 1.10.0, which disables the problematic interpolators by default.
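The pattern behind both the flaw and the fix can be illustrated with a deliberately simplified Python analogy (this is not the Apache Commons Text API): a template engine that resolves `${prefix:value}` lookups should only enable safe resolvers by default, refusing dangerous ones such as script execution.

```python
import re

# Safe lookups enabled by default (both resolvers are illustrative).
SAFE_RESOLVERS = {
    "const": lambda v: v,
    "upper": lambda v: v.upper(),
}

# Dangerous resolvers (script execution, URL fetches, DNS lookups...) are
# NOT enabled by default -- analogous to the change in version 1.10.0.
DISABLED = {"script", "url", "dns"}

def interpolate(template: str) -> str:
    """Resolve ${prefix:value} placeholders using only safe resolvers."""
    def resolve(match):
        prefix, value = match.group(1), match.group(2)
        if prefix in DISABLED or prefix not in SAFE_RESOLVERS:
            raise ValueError(f"interpolator '{prefix}' is not enabled")
        return SAFE_RESOLVERS[prefix](value)
    return re.sub(r"\$\{(\w+):([^}]*)\}", resolve, template)

print(interpolate("version ${const:1.10.0}"))  # version 1.10.0
try:
    # Untrusted input trying to reach a code-executing resolver is rejected.
    interpolate("${script:some-payload}")
except ValueError as e:
    print(e)  # interpolator 'script' is not enabled
```

The vulnerable versions effectively shipped with the dangerous resolvers in the enabled set when untrusted input reached the interpolator, which is why upgrading (or filtering untrusted input out of templates) matters.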

On the other hand, several security researchers have pointed out the public availability of a proof of concept (PoC) for this vulnerability, a fact that considerably increases the risk.

Other sources have even compared this bug to the well-known Log4j vulnerability, although its impact seems likely to be less widespread, and for the time being there are no reports of active exploitation in the wild.

More info

* * *

BlackLotus: highly sophisticated malware for sale in underground forums

Security researchers have reportedly detected a threat actor selling a tool called BlackLotus on underground forums, with capabilities that have so far only been observed in state-sponsored groups and actors.

This tool, a type of UEFI bootkit, would be installed in the computer’s firmware and would evade detection by security solutions by loading itself early in the device’s boot sequence.

According to the author of the tool in his publication, BlackLotus is said to have features to detect activity in virtual machines and has protections against removal, thus making malware analysis more difficult.

Finally, security researcher Scheferman says that until a sample of the malware has been fully analysed, it cannot be ruled out that BlackLotus could be used to carry out a Bring Your Own Vulnerable Driver (BYOVD) attack.

More info

* * *

​PoC available for critical Fortinet vulnerability

Over the past few days, a proof of concept (PoC) has been published on GitHub that exploits the critical security flaw affecting Fortinet FortiOS, FortiProxy and FortiSwitchManager products, reported last week under the identifier CVE-2022-40684.

Specifically, exploitation of this vulnerability could allow a remote attacker to bypass authentication and perform malicious operations on the administrative interface via HTTP(S) requests.

In addition, according to Horizon3.ai’s analysis of the PoC, FortiOS would expose a management web portal that allows the user to configure the system.

It is worth noting that by the time the PoC was published as open source, Fortinet had already reported active exploitation of the vulnerability. On Friday, the company issued an advisory that included mitigation guidance, as well as updates and fixes for customers.

Finally, it is worth noting that researchers from GreyNoise and Wordfence have published detection of exploitation attempts.

More info

World Energy Saving Day: Efficiency to drive progress

Nacho Palou    20 October, 2022

The 21st of October is World Energy Saving Day. During the last ten years, this day has raised awareness of the impact that energy consumption has on the environment and natural resources, to enable more efficient use of energy and offer “universal access” to “affordable, reliable and modern” energy.

Energy is one of the pillars of any civilization. There is a relationship between energy and progress. In fact, the Kardashev scale measures the degree of technological evolution of a civilization based on the amount of energy it uses. Or rather, based on its ability to harness the energy resources at its disposal, both on and off Earth.

According to the latest calculation, humanity is a type 0.7 civilization on the Kardashev scale. That means we can still make much more effective use of the energy available in our environment.

Energy efficiency to accelerate progress and sustainability

However, while energy consumption implies progress, its use also consumes limited natural resources. This means that energy is usually harnessed at a high economic, social, and environmental cost.

This trade-off can be solved by promoting energy efficiency: using as little energy as possible to get the intended result. Energy efficiency reduces environmental impact and carbon footprint, while boosting “economic growth, human development and environmental sustainability”, says the World Energy Forum.

Photo: Erik Dungan / Unsplash

Digitalization for efficient use of energy resources

Digitalization is one of the levers at our disposal to increase energy efficiency. The convergence of technologies such as the Internet of Things (IoT), Big Data, and Artificial Intelligence, among others, makes it possible to better manage energy from production to consumption, for example:

  • Improving the management of energy and natural resources such as water, gas, or electricity, achieving enhanced operational efficiency and process reliability.
    Adding an additional layer of intelligence to infrastructures also allows for the proper distribution and use of these resources, and reduces losses.
  • Optimizing energy production with conventional and clean energy sources (such as wind and solar) by incorporating sensors and smart hardware at different stages of production.
    Also, by applying predictive production systems and solutions for the integration of distributed energy resources, among other possibilities.
  • Reducing energy consumption in homes, public and commercial spaces and businesses by sensing the environment, taking into account the context and learning from user habits.
    Monitoring energy consumption enables the implementation of data-driven energy saving strategies and the design of accurate and customized savings plans.
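As a simple illustration of such a data-driven saving strategy (the readings and threshold below are made up for the example), monitored consumption that deviates from a learned baseline can be flagged for targeted saving actions:

```python
from statistics import mean, stdev

# Hourly consumption readings in kWh (illustrative sensor history)
history = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 1.1]
baseline, spread = mean(history), stdev(history)

def flag_anomalies(readings, k=2.0):
    """Flag readings more than k standard deviations above the learned
    baseline -- candidates for a targeted saving action."""
    return [r for r in readings if r > baseline + k * spread]

new_readings = [1.0, 1.1, 2.4, 1.2]
print(flag_anomalies(new_readings))  # [2.4]
```

Real deployments would learn per-device, time-of-day baselines from far more data, but the principle is the same: measurement first, then a data-driven rule that turns readings into savings actions.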

Technological advances that improve efficiency and sustainability in digitization

Digital solutions of this type can improve energy efficiency at different levels of energy resource management.

Telefónica Tech also develops most of these solutions using ultra-efficient technologies with lower resource consumption, which reduces their environmental footprint.

Thus, the products and services in our portfolio are designed to reduce their impact to promote green digitization in any type of sector, field, and activity.

Secure Digital Workplace: chronicle of a foretold (and necessary) evolution

Juan Carlos Vigo López    19 October, 2022

The changes that the Digital Workplace has undergone lately have put some technology areas under stress, as we have had to adapt to developments during and after COVID, which altered the timing at which the observed trends were adopted.

When workers had to go home, technology measures had to be fast-tracked to achieve digital and operational resilience in our work environments. For this reason, CIOs and CTOs have had to evolve their approach: from technology stacks that solved short-term (pandemic) problems to stacks that build medium-term digital resilience and support the trend towards a hybrid working model.

According to the Gartner Digital Workplace Survey, 68% of respondents agreed that “more C-level executives expressed interest in the digital workplace since COVID-19,” said Matt Cain, vice president analyst at Gartner.

Source: Gartner.

This shifted the positioning of meeting, collaborative work and chat solutions from interesting to mandatory. And I include cyber security and resilience in these areas.

So, if we were to describe the trends in this area and complement them with our vision, we would have the following:

1. New digital work Core

A collection of communication, collaboration and personal productivity tools in SaaS, combined in a cloud office suite.

It typically includes email, instant messaging, file storage and sharing, conferencing, document management and editing, search and discovery, and task prioritisation and collaboration.

This Digital Work Core is the cornerstone of Digital Workplace infrastructures.

2. Aligning the Core with the cloud

The increased use of cloud office solutions, together with reduced costs, greater simplicity and more functionality for employees, has led to the upgrade of cloud services with new mobility, content discovery and Artificial Intelligence (AI) functionalities, which are shaping the future.

3. Evolution from BYOD to BYOT

More personal Internet of Things (IoT) or wearable devices are starting to be used in the workplace, in a trend known as BYOT (Bring Your Own Technology).

This involves a wide range of connected objects being brought closer to the workplace: smartwatches, fitness wristbands, smart lamps, air purifiers, voice assistants, smart headsets, and virtual reality (VR) headsets. In the future, it could even include more sophisticated devices such as robots and drones.

As home technology becomes more intelligent and IoT-enabled, an increasing range of tools will be brought into the Digital Workplace and used in remote or hybrid work.

4. Economics of distance

Virtual and hybrid meetings proliferated during COVID-19. The pandemic drove the emergence of the “distance economy”: business activities that do not rely on face-to-face interaction. Organisations whose operating models relied on physical, face-to-face events have shifted to virtual or hybrid alternatives.

At the same time, as internal meetings, customer interactions, new-employee interviews and a variety of other business activities have become virtualised, the distance economy has given rise to a new generation of meeting solutions that mimic a face-to-face meeting, reinforcing telecommunications as a lifeline at all times.

5. New digital workspace

A smart digital workspace incorporates the digitisation of physical objects to offer new ways of working and improve work productivity. The technologies incorporated are: IoT, digital signage, integrated workplace management systems, virtual workspaces, motion sensors and facial recognition.

Any place where people collaborate is a smart digital workspace, such as office buildings, desks, meeting rooms, conference rooms, public places and even people’s homes.

The development of hybrid work models, with the incorporation of remote working, implies a review of design strategies to better understand how people participate in physical spaces and their social relationship.

6. Desktop as a Service

Desktop as a Service (DaaS) provides users with a virtualised, on-demand desktop experience from a remote location. It includes provisioning, patching and maintaining the management plane and the resources to host workloads.

Organisations have been interested in adopting a virtual desktop infrastructure in the past, but complexity and capital investment have made implementations difficult. The pandemic has accelerated the DaaS adoption model.

7. Democratisation of services associated with the Digital Workplace

There is a trend towards user participation in the technological services of the future:

  • Employees will participate more actively in the models for resolving incidents and problems and in the knowledge of digital workplaces, through their own empowerment and their own interactions. Different levels of intensity are available, including low-code development (no-code application development tools, etc.).
  • Collaborative integration tools, where expert users with IT skills handle relatively simple application, data, and process integration tasks on their own through intuitive, codeless development environments.
  • User data science, allowing analytical insights to be extracted from data without the need for extensive data science expertise.

8. Resilience and Cyber Security in Digital Workplaces

One aspect to take particular care of is the resilience of these Digital Workplaces: in the face of increasingly sophisticated and industrialised attacks, this characteristic becomes a necessity for an almost indestructible Digital Workplace.

A Digital Operational Resilience model can be defined and, going deeper, broken down into several separate strands:

Plans, programmes and controls

These can take their cue from what is already being done in the financial sector, along the following broad lines:

  • Incident and employee response plans, and how they affect the Digital Workplace.
  • Assessment of the risks posed by cyber-attacks and an action plan to mitigate them.
  • Appropriate security controls in the digital infrastructure, which could include encryption at rest and in transit, authentication, access controls, audit trails, monitoring systems, event management systems and incident response plans.
  • Incident notification when incidents occur, so that regulators can assess vulnerabilities and make recommendations for improving the security posture.
  • Service continuity plan during outages that may occur.

Training and simulations

People are widely identified as the weakest link in digital ecosystems, and so must be trained and coached to face attacks and incidents. Here we must not forget the entire ecosystem of collaborators and third parties involved in companies' day-to-day operations.

Digital Workplace security architecture

Developing a security architecture before and during the life of the Digital Workplace can be based on the following strategies:

  • Security as an element to be included in the design of the Digital Workplace, through the participation of security teams in all phases of design, implementation, operation, innovation, etc.
  • Include security in the management of Digital Workplace assets.
  • Within integration, consider micro-segmentation architectures.
  • Develop different security layers.
  • Development of Zero-Trust strategies.

Security technologies and areas to consider

  • Access management, including specific management of privileged users (PIM and PAM)
  • Two-factor authentication
  • Biometric elements
  • Encryption of information at rest and in transit
  • Data redundancy
  • EDR (Endpoint Detection and Response)
  • NDR (Network Detection and Response)
  • XDR (Extended Detection and Response) / NGIPS (Next-Generation Intrusion Prevention System)
  • CASB (Cloud Access Security Broker), DLP (Data Loss Prevention) and IRM (Information Rights Management)
  • Deception technologies

Security operations to consider

  • Vulnerability management and patching strategies, including virtual patching
  • Management of traditional attack vectors: mail, browsing, file exchange, etc.
  • Hardening of endpoints
  • Password management
  • Data leakage control
  • Threat hunting through EDR telemetry
  • Threat intelligence

Security supply chain monitoring

It is necessary to know not only our own company's security score, through a third-party scorer, but also those of the suppliers and partners who make up the security value chain.

The result is Digital Workplace management as an element of our business value chain.

Conclusion

The incorporation of visions from different points of view, such as Workplace, Cloud, IT Operations and Cybersecurity, makes for a holistic approach. This calls for technological partners who can propose such approaches to their customers because they have lived the union of Cloud, Cybersecurity and IoT, as is the case of Telefónica Tech.

Selecting a managed security service provider (MSSP): 5 key factors to keep in mind

Telefónica Tech    18 October, 2022

A Managed Security Service Provider (MSSP) offers you a team of seasoned security experts who will work for you at a fraction of the cost of building a security team in-house. Previously these providers served only large-scale industries or businesses, but now many MSSPs also offer their services to small and medium-sized businesses.

According to Gartner research, in 2021 the Managed Security Services (MSS) market grew 9.8% in U.S. Dollars, reaching $13.9 billion in revenue. The managed detection and response (MDR) segment witnessed a strong growth at 48.9% in U.S. Dollars. By 2024, more than 90% of buyers looking to outsource to security services providers will focus on threat detection and response services.

But not every business has the workforce to find and resolve vulnerabilities and threats, and selecting a qualified, good-fit managed security service provider (MSSP) is a challenge.

To help you choose a Managed Security Service provider, here are 5 things you should consider:

1. Managed Security Services Rankings

Managed security service providers offer diverse interpretations of what Managed Security is, making it difficult to directly compare what providers deliver.

Security and risk management leaders should recognize fundamental deliverables and align requirements to offerings.

To facilitate comparison between the different MSSPs, there are specialised rankings such as the one published by MSSP Alert: Top 250 MSSPs (2022 Edition). The rankings are based on:

  • Annual recurring revenues
  • Profitability
  • Business Growth Rate
  • Cyber professional headcount
  • Managed security services offered
  • MSSP Alert’s editorial coverage of MSSPs worldwide
  • Third-party industry honors (e.g. Gartner, Forrester, IDC)

Telefónica Tech USA ranked 5th in MSSP Alert's Top 250 Global MSSPs list for 2022, a CyberRisk Alliance resource that identifies the main providers of managed security services worldwide.

The curated list identifies and honors the top MSSPs worldwide and will help enterprises evaluate and choose the MSSP that best fits their needs.

2. Simplify today’s complex cyber ecosystem

As the security provider landscape has become a vast field with multiple actors, enterprises look to reduce the time spent on integrations, vendor selection and qualification, thereby simplifying the decision-making process.

Having access to the best technologies and partners (and thus delegating updates, patches, bug fixing, etc.) is key to success.

3. Peace of mind of trusting an experienced partner

Finding an MSSP partner with the right expertise for your business can help you identify your vulnerabilities and mitigate them quickly. Telefónica Tech is a Managed Security Services Provider (MSSP) with a heritage in managed services across data center, workplace, communications, and cloud.

As such, we have deep subject-matter expertise across the entire threat landscape and operate security as a core discipline, from advisory services through to managed security engineering and operations.

4. Proprietary threat intelligence, advanced technology, and standardized procedures

A managed security service provider (MSSP) provides outsourced monitoring and management of security devices and systems. Common services include managed firewall, intrusion detection, virtual private network, vulnerability scanning and anti-viral services.

Hundreds of MSSPs now offer MDR services — however, Gartner says customers should be careful about pretenders in the market that have incomplete offerings.

On the other hand, Telefónica Tech’s NextDefense Managed Service integrates Managed Detection and Response (MDR), Vulnerability Risk Management (VRM) and Cyber Threat Intelligence (CTI) with Digital Risk Protection (DRP) into a single solution that defends your cloud, corporate network, remote employees, digital assets, brand, and reputation.

5. Cost Control of your security operations and technologies

When deciding on which MSSP to use, your top priority should be to find an MSSP who is both budget-friendly and provides value for your money.

Great MSSPs will provide customizable pricing, with tailored solutions, specific to your business needs. Picking and choosing specific services can help you keep your budget contained and avoid paying for unnecessary products.

Selecting an MSSP such as Telefónica Tech provides the technology, the experts and the processes at a fixed, predictable monthly cost and SLA, with no CAPEX investment required.

🔵 Interested in talking to an expert? Contact our team.

AI of Things (XI) Preventive maintenance on sensors: anticipating sensor failures, predicting battery replacement

Víctor Vallejo Carballo    17 October, 2022

We are immersed in a historic technological revolution in which data analysis has taken centre stage and will, sooner rather than later, lead all business organisations, or at least those that want to remain competitive and profitable, to become fully Data Driven organisations.

This technological revolution has given rise within the industrial sector to the term Industry 4.0, or fourth industrial revolution, a new scenario that leverages both the automation of processes and the interconnection of data based on IIoT technology (Internet of Things applied to Industry).

This is a set of tools, devices and, of course, sensors, which are responsible for both data collection and analysis for subsequent decision-making at the operational and management level within the organisation itself.

Sensors, an essential component

Sensors, therefore, have become a basic component, since, through the detection, measurement and analysis of factors, they enable greater automation of industrial processes. Their measurements are subsequently translated into commands, which are then executed by the actuating/executing components within a well-defined action/response plan.
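
This measure-then-act loop can be roughly illustrated with the Python sketch below. All names, variables and thresholds here are hypothetical, chosen only for the example: a reading that falls outside its allowed band is translated into a corrective command for the corresponding actuator, following a predefined action/response plan.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    variable: str   # e.g. "temperature", "vibration"
    value: float

def command_for(reading, limits):
    """Translate a sensor measurement into an actuator command.

    `limits` maps each variable to its allowed (low, high) band; outside
    the band, a corrective command is issued per the action/response plan.
    """
    low, high = limits[reading.variable]
    if reading.value > high:
        return f"{reading.sensor_id}:reduce"
    if reading.value < low:
        return f"{reading.sensor_id}:increase"
    return f"{reading.sensor_id}:hold"
```

In a real plant, the command side would of course be a PLC or SCADA integration rather than a string, but the mapping from measurement to executed response is the same idea.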

However, the functionality offered by sensors is not limited to increasing process automation. Their use has become essential for industrial maintenance: these assets can deliver significant savings in the maintenance and repair costs caused by unplanned production stoppages, improve profitability through constant monitoring of the manufacturing process (generating higher performance rates on production lines), and improve the safety of industrial workers themselves.

So, what exactly do the sensors measure?

Photo: Arshad Pooloo / Unsplash

As might be expected, the answer to this question is a wide range of variables, which will depend on the specific characteristics of what is manufactured. We can group them into environmental variables (temperature, humidity, light, vibrations, etc.), mechanical variables derived from the machinery itself (position, proximity, speed, etc.), electrical variables from energy consumption (voltage, current, resistance, power, etc.) and process variables covering the physical or chemical conditions generated during manufacture (fluid level, temperature increase in machines and cooling times, waste level, densities, etc.).

Given such heterogeneity of available variables, it follows that sensors have become one of the most sensitive parts in the process of capturing early information to provide an adequate response in time and form during manufacturing.

This is why identifying the type of sensor to install, its location within the chain and the maintenance of these sensors are all crucial to ensuring that measurements are reliable and significant. Incorrect measurements due to a defect or fault in a sensor can lead to imbalances in the composition of the manufactured goods, or even a total shutdown due to a critical error, for example because too many or too few of the components or ingredients needed in the right proportion were used to maintain the quality expected and approved in standards, protocols and certifications.

So, what type of maintenance should be carried out?

Photo: Mech Mind / Unsplash

There are different approaches to address this question, which can be summarised in 4 different types of maintenance, depending on the implementation strategy.

  1. Corrective, where the sensor can work until it fails, at which point it is repaired or replaced.
  2. Preventive, which is carried out systematically through inspections, whether or not the asset has failed. Together with corrective maintenance, these are the most widespread strategies to date.
  3. Predictive maintenance, which makes use of predictive algorithms to estimate in advance the moment of sensor failure, so that maintenance will only be carried out when necessary, anticipating the incident.
  4. Prescriptive strategy, which is based on predictive maintenance and incorporates elements of maintenance management, costs, etc.

As sensors become cheaper, their implementation continues to be promoted throughout the production chain, and this interconnection of data generated during manufacturing, in combination with Artificial Intelligence techniques within the Big Data technological environment, is causing a shift from prevention to forecasting in maintenance processes.

It will be less and less necessary to stop processes to analyse errors and/or solve problems once constant predictive maintenance is deployed: predictive models executed in real time over historical, inventory and process data learn the patterns that precede failures in a machine, sensor or other asset and, consequently, predict when maintenance or replacement of the sensor or part will be necessary before the functional failure occurs.
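
The pattern learning described above is normally done with models trained on historical failure data. As a minimal, standard-library-only illustration of the underlying idea (the window size and threshold here are arbitrary choices for the example, not recommendations), the sketch below flags readings that drift sharply away from their recent rolling baseline, the kind of early-warning signal that tends to precede a functional failure:

```python
from statistics import mean, stdev

def rolling_zscores(readings, window=5, threshold=3.0):
    """Flag readings that drift far from the recent rolling baseline.

    Returns a list of (index, z-score, is_anomalous) tuples. A sustained
    drift in e.g. vibration or temperature often precedes a functional
    failure; surfacing it early lets maintenance be scheduled in advance.
    """
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        z = (readings[i] - mu) / sigma if sigma else 0.0
        flags.append((i, round(z, 2), abs(z) > threshold))
    return flags
```

A production system would replace this heuristic with a trained model over labelled failure histories, but the shape of the problem, comparing live telemetry against learned normal behaviour, is the same.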

In other words, we will be implementing a proactive strategy focused on anticipating and correcting, one that will determine with greater precision the useful life of equipment, the risks of failure and the potential impact on the system.

Photo: Vaclav / Unsplash

This proactive strategy, based on the 'sensorisation' of the plant and the adoption of machine-learning techniques, means that in the long term predictive maintenance offers lower recurring costs than other maintenance strategies: the higher initial investment is repaid through increased ROI as the number of incidents detected in advance grows, thus reducing the rate of critical failures in the chain.

A clear example of this change in maintenance strategy can be seen in those industries that have an intensive use of electric batteries, both in controlled static environments (industrial facilities, telephone systems, etc.) and in dynamic mobility environments (railway environment, electrified transport, etc.), where it is vital to estimate acceptance-rejection values for batteries with a projected useful life of several years to ensure that they will not be operating in the near future within critical ranges that compromise their integrity.

Photo: Lenny Kuhne / Unsplash

In the automotive sector, more and more car manufacturers are relying on predictive maintenance to continuously monitor the performance of electric vehicle batteries. Sensors installed in the car constantly feed data to a virtual model of the battery, known as a digital twin, which enables large-scale modelling of the service performance and estimation of optimal battery life under different usage conditions in a laboratory environment.

This approach to creating digital batteries leads to significant time savings, as physical testing of different conditions is a handicap due to the long lifetime of batteries, while allowing multiple simulations in parallel without the need to deploy complex and costly physical test environments.

Knowing how long it will take to reach critical values that compromise performance will allow specific actions to be deployed to extend battery life by replacing parts and improving the design of new cells and batteries.
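
As a deliberately simplified sketch of that kind of estimate (real digital twins use much richer electrochemical models), one can fit a straight line to measured capacity versus charge cycles and extrapolate when capacity will cross a critical threshold:

```python
def cycles_until_threshold(cycles, capacities, threshold=0.8):
    """Fit a least-squares line to capacity fade and extrapolate the
    cycle count at which capacity drops to `threshold` of nominal.

    Returns None when no downward fade is measurable yet.
    """
    n = len(cycles)
    mx = sum(cycles) / n
    my = sum(capacities) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(cycles, capacities)) / \
            sum((x - mx) ** 2 for x in cycles)
    intercept = my - slope * mx
    if slope >= 0:
        return None  # capacity flat or improving: no fade to extrapolate
    return (threshold - intercept) / slope
```

For example, a battery measured at 100%, 95%, 90% and 85% of nominal capacity at 0, 100, 200 and 300 cycles extrapolates linearly to the 80% threshold at 400 cycles; the digital-twin approach runs many such projections, under different usage conditions, in parallel.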

Moreover, this optimisation of performance has the additional positive effect of reducing the environmental impact, as less and less waste will be discarded and reused more frequently, extending vehicle lifetime and allowing batteries to be a real key lever for change in the decarbonisation of transport and part of the industrial processes.

🔵 More content on IoT and Artificial Intelligence can be found in other articles in our series, the first of which can be found here.

Cyber Security Weekly Briefing, 7 — 14 October

Telefónica Tech    17 October, 2022

Critical vulnerability in Fortinet 

Fortinet has issued a security advisory to its customers urging them to update their FortiGate firewalls and FortiProxy web proxy, in order to fix a critical authentication bypass vulnerability that could allow remote attackers to log into unpatched devices. The vulnerability has been identified as CVE-2022-40684.

The vendor has not yet assigned a CVSS score to the vulnerability, although some researchers estimate that it could reach 9.8.

The flaw resides in the administrative interface where, using alternative routes or channels in FortiOS and FortiProxy, an unauthenticated attacker could perform operations via specially crafted HTTP or HTTPS requests. The vulnerable versions are FortiOS 7.0.0 to 7.0.6 and 7.2.0 to 7.2.1, and FortiProxy 7.0.0 to 7.0.6 and 7.2.0; the vulnerability is fixed in FortiOS 7.0.7 and 7.2.2 and in FortiProxy 7.0.7 and 7.2.1.
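
For triage purposes, the affected ranges can be encoded in a short script. The sketch below is illustrative, with the inclusive vulnerable ranges taken from Fortinet's advisory for CVE-2022-40684 (FortiOS 7.0.0 to 7.0.6 and 7.2.0 to 7.2.1; FortiProxy 7.0.0 to 7.0.6 and 7.2.0):

```python
def parse_version(v):
    """Parse a dotted version string like '7.0.3' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

# Inclusive vulnerable ranges for CVE-2022-40684, per the advisory figures
VULNERABLE = {
    "FortiOS":    [((7, 0, 0), (7, 0, 6)), ((7, 2, 0), (7, 2, 1))],
    "FortiProxy": [((7, 0, 0), (7, 0, 6)), ((7, 2, 0), (7, 2, 0))],
}

def is_vulnerable(product, version):
    """Return True when the product/version pair falls in a vulnerable range."""
    v = parse_version(version)
    return any(lo <= v <= hi for lo, hi in VULNERABLE.get(product, []))
```

For example, `is_vulnerable("FortiOS", "7.2.1")` is True, while the fixed 7.2.2 release is not flagged.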

Also, in case it is not possible to implement these updates, Fortinet has recommended limiting the IP addresses that can reach the administrative interface through a local policy, and even disabling remote administration interfaces to ensure that potential attacks are blocked until the update can be implemented.

There are no reports of possible active exploitation of this flaw by threat actors so far, although according to the search engine Shodan, there are more than 100,000 FortiGate firewalls accessible from the Internet. 

More info → 

* * *

LofyGang focuses on supply chain attacks 

Researchers at Checkmarx have published a report on a threat actor focused on supply chain attacks, known as LofyGang.

According to Checkmarx, the group’s latest campaign, active since 2021, is focused on infecting open-source software supply chains with malicious NPM packages.

The attackers’ objectives centre on obtaining credit card information and stealing user accounts, including premium accounts for Discord and for services such as Disney+ or Minecraft. In executing the attacks, they use all kinds of TTPs, including typosquatting, which targets typos made in the supply chain, and “StarJacking”, which links the URL of a legitimate package to an unrelated GitHub repository.

The group, which is believed to be of Brazilian origin, communicates mainly via Discord. They also have a YouTube channel and contribute to several underground forums under the nickname DyPolarLofy, promoting their tools and selling the credentials they have obtained.

The group also maintains a GitHub account with open-source repositories offering tools and bots for Discord. It is worth noting that the Checkmarx researchers have created a website to track updates on their findings, along with a repository of the malicious packages discovered so far.

More info →  

* * *

Emotet resurfaces with new evasion mechanisms 

Researchers at VMware Threat Analysis Unit have published a report analysing the resurrection of the group behind the Emotet malware-as-a-service (MaaS), known as Mummy Spider, MealyBug or TA542.

This new resurgence of the malware comes on the heels of its dismantling by international law enforcement in January 2021. Researchers analysed data from spam emails, URLs and attachments collected from campaigns earlier this year, concluding that Emotet botnets are constantly evolving to make detection and blocking by defence teams more difficult.

They do this by hiding their configurations, creating more complex execution chains and constantly modifying their command and control (C2) infrastructure. In addition, they have expanded and improved their credit card theft capabilities and their mechanism for lateral propagation.

The distribution of the malware is based on mass mailings of emails with malicious links or attachments. 

More info → 

* * *

Microsoft fixes 84 vulnerabilities in its Patch Tuesday, including two 0-day vulnerabilities 

Microsoft has fixed 84 vulnerabilities in its October Patch Tuesday, including two 0-day vulnerabilities, one of them actively exploited, and 13 critical flaws that would allow privilege escalation, impersonation or remote code execution.

The actively exploited 0-day, identified as CVE-2022-41033 and CVSS 6.8, was discovered by an anonymous researcher and affects the Windows COM+ event system service, allowing an attacker to gain system privileges. On the other hand, the second 0-day, which, according to Microsoft, has only been publicly disclosed, has been catalogued as CVE-2022-41043 and with a temporary CVSS of 2.9.

In this case, the bug consists of an information disclosure vulnerability in Microsoft Office that could allow an attacker to gain access to user authentication tokens.

Regarding the other two recently known 0-days in the Exchange server (CVE-2022-41040 and CVE-2022-41082), Microsoft clarifies that it has not yet released security updates to address them and refers to its 30 September release, which includes guidance on how to apply mitigations for these vulnerabilities. 

More info

* * *

Alchimist: new attack framework targeting Windows, Linux and macOS 

Cisco Talos researchers have discovered a new attack tool, with command and control (C2) capabilities, designed to target Windows, Linux and macOS systems.

Named “Alchimist”, the tool, according to the Cisco release, consists entirely of 64-bit executables developed in the Go programming language, a feature that facilitates compatibility with different operating systems.

Its operation is based on a web interface that allows it to generate and configure payloads deployed on infected devices to take screenshots, launch arbitrary commands and even execute code remotely.

In addition, Alchimist is able to deploy a new remote access Trojan (RAT) called “Insekt” via PowerShell code on Windows and wget on Linux systems; on macOS this is replaced by a privilege escalation exploit (CVE-2021-4034) in Polkit’s pkexec utility.

Once implemented, the Trojan will establish communication with the attackers’ C2 infrastructure via the Alchimist interface and different communication protocols such as TLS, SNI, WSS/WS, its main purposes being information gathering and command execution. 

More info

How can we bring Internet of Things to the rural world?

Miguel Maroto    13 October, 2022

This article examines the barriers to IoT entering the rural world with solutions that help farmers maximise the productivity of their farms.

It all started from a conversation I had a few days ago with my father, a small olive oil producer, and several of his friends, who are also small producers.

The conversation centred on the problems they are having with the irrigation systems on their farms. These problems were the most important ones:

  1. When it comes to irrigating olive groves in the irrigation community that manages all these farms, the farmer has no power to decide when it is the best time to irrigate, as the decision is taken centrally. As a result, there are farms where olive trees are drying up due to an excess of water rather than a lack of water.
  2. On the other hand, the costs associated with irrigation are very high, which means that many producers are not as profitable as they would like to be.

As an IoT expert, I explained that there are solutions on the market that can help solve both problems. On the one hand, to make a better decision on the best time to irrigate, and on the other hand, to improve the overall cost.

After having this conversation, I had the impression that, although the audience I was addressing was interested, there was a certain mistrust among them that led to reluctance to change.

Companies that provide IoT solutions must consider the agri-food sector in general, since, according to the Cajamar report on the Spanish agri-food sector in the European context, its weight in the Spanish GDP has increased to 9.7% in 2020. Moreover, according to Caixabank Research, this industry has surpassed its pre-crisis production level.

Despite the above, this industry is still facing two major problems, which were reflected in my conversation, where IoT solutions can provide a great help to mitigate them:

  • Climate change: in the case of Spain, one of the consequences is a reduction in the water available for irrigation in the different river basins. According to the PricewaterhouseCoopers study on the future of the agri-food sector, two thirds of the country is at risk of desertification.
  • Cost efficiency: in a scenario in which the price paid at origin to the farmer for some crops is ever lower, due, among other things, to competition from emerging countries where labour costs are much lower (PricewaterhouseCoopers), it is very important for farmers to be cost-efficient in order to maintain the profitability of the farm.

What kind of agriculture-oriented IoT solutions are on the market?

These solutions currently focus on monitoring farms with the help of sensors (light, humidity, temperature, soil moisture, crop health, etc.) and using this information to automate certain farm components such as irrigation.

The final goals are:

  • To improve crop yields
  • To increase crop productivity
  • To reduce the consumption of agricultural inputs such as fertilisers, copper, etc.
  • To reduce water consumption
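
As a minimal sketch of what such irrigation automation might look like (all thresholds and coefficients here are hypothetical, for illustration only), a controller can combine a soil-moisture reading with the rain forecast to decide how long to irrigate:

```python
def irrigation_minutes(soil_moisture_pct, forecast_rain_mm,
                       target_pct=30.0, minutes_per_pct=4.0):
    """Decide how long to irrigate from sensor data and the weather forecast.

    Skips irrigation entirely when soil moisture already meets the target
    or meaningful rain is expected; otherwise irrigates in proportion to
    the moisture deficit. All parameters are illustrative defaults.
    """
    if soil_moisture_pct >= target_pct or forecast_rain_mm >= 5.0:
        return 0.0
    deficit = target_pct - soil_moisture_pct
    return deficit * minutes_per_pct
```

Even a simple rule like this addresses both problems from the opening conversation: watering is triggered by the state of each farm's own soil rather than a central schedule, and water is not spent when rain will do the job.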

The benefits that this type of solution brings to the farmer include:

  • Improved crop yields
  • Help in planning activities
  • Increased product quality
  • Reduced costs for the farmer
  • Improved control of crop growth and yield factors
  • Increased food security
  • Climate change mitigation

What is Telefónica Tech doing in the Agro world?

The Telefónica Tech team, led by Andrés Escribano, is working on smart farming solutions, offering products and services that help improve productivity and sustainability while reducing the cost of the resources used.

Smart agriculture solutions combine different technologies such as IoT devices, Cloud platforms, Artificial Intelligence and Blockchain. These solutions focus on several product lines: digitalisation of the field, either through IoT devices or by flying drones; a management platform for decision support and automation of agricultural tasks; smart irrigation management; a product for industrial agriculture based on indoor vertical farming systems; and solutions for traceability of production and certification of origin of agricultural products.

In short, I believe that companies that market IoT solutions and help farmers to solve their problems need to build a simple yet powerful message so that this profile, which does not usually have a technological background, understands and identifies what these types of solutions can do for their farm.

There must also be collaboration with the public administration to get this message across to all rural areas.

How to protect your social media accounts

Diego Samuel Espitia    11 October, 2022

Companies and individuals use social networks today to generate new revenue or to sell their services and products, much more than just to communicate with other people or to post likes and dislikes.

However, few people know how to secure the social networks they use, and when they are attacked they lose control of the account and find themselves in serious trouble trying to regain it.

Let’s learn about a few tips to be prepared in case we become victims of a cyber-attack on our social network accounts.

As it is difficult to tackle all social networks, we will take some of the most common in the world and give the most generic advice possible.

Understanding what is on offer

All social networks work every day to ensure the identification and authentication of their users, providing multiple ways to authenticate and mechanisms to guarantee the user’s identity. However, most users only set a password and are unaware of what the network offers to help recover the account in case of loss.

In the case of Facebook, there is a page with advice on how to set up security, divided into three fundamental steps.

These steps establish the minimum access control that any user should have, but beyond this it is necessary to know what will be requested if you lose control of your account. In the specific case of Facebook, the system asks you to validate your information with a series of photos, including one of your ID card or passport.

This is done for identity validation: the names on the social network are compared with those registered on the document, and the network may request it simply to run an identity check.

Considering the above, it is vital to have names and images that the network can use for recovery: in the case of account theft, no matter what changes the criminals have made, this allows Facebook to confirm your identity against its historical records.

This same procedure is valid for almost all other social networks, such as Instagram, YouTube, LinkedIn and Twitter. However, it does not work for TikTok, where it is not even possible to set up two-factor authentication.

Photo: Solen Feyissa / Unsplash

TikTok has become one of the platforms most used by companies, entrepreneurs and individuals, but little has been done to analyse the security the platform provides, where the only configurable parameter is whether or not the account is private.

If you forget your password or it is changed, your phone number is requested and a 6-digit PIN is sent, but there is no procedure for losing control of the account itself; the only documented procedure is for recovering an account that has been deleted, and it works only within 30 days of deletion.

Knowing who discloses your data

Another big problem with social networks is that we end up flooded with advertising in our inboxes, or managing several email addresses to cope with it. Here, public email systems can help us with a relatively simple trick.

The trick consists of adding the name of the social network to the username part of your email address, without this meaning you need a separate mailbox for each account. For example, if the address you registered on Instagram with were user@gmail.com, you would change it to user+instagram@gmail.com; this also works with outlook.com and hotmail.com accounts.

This change means that advertising or data sent from these platforms reaches your inbox addressed with this ID, giving you evidence of who disclosed your data.
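
The trick can be sketched in a few lines of Python; the function names are illustrative, and the "+" convention is the plus-addressing supported by Gmail and Outlook.com:

```python
def tagged_address(address, service):
    """Derive a per-service 'plus' address, e.g. user+instagram@gmail.com."""
    local, _, domain = address.partition("@")
    return f"{local}+{service.lower()}@{domain}"

def leaking_service(received_address):
    """Given the address an unsolicited email arrived at, recover the tag
    identifying which service the address was originally given to."""
    local = received_address.split("@")[0]
    return local.split("+", 1)[1] if "+" in local else None
```

If unsolicited mail later arrives at one of these tagged addresses, the tag tells you which service passed your address on.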

Additionally, many session-theft attacks are carried out by automated systems that take databases from information leaks and launch processes to break passwords, but this “new” email address is not a valid username for opening the social network.

Conclusion

Remember that it is always better to be safe than sorry and that criminals are constantly looking for weaknesses to hijack social media accounts, especially now that they have become a popular buying and selling channel for people.

Understanding the controls and protections they provide and knowing what to do in the event of an incident is vital to ensuring the security of your information and your environment.