Metaverse (I): threats in an immersive, multi-sensory environment

Estevenson Solano    11 April, 2023

While the discussion and excitement around the metaverse is growing, there are also feelings of doubt, fear, concern and uncertainty about the potential risks in an environment where the boundaries between the physical and virtual worlds will become increasingly blurred.

The metaverse, put simply, can be thought of as the next iteration of the internet: what began as a set of independent, isolated online destinations is evolving over time into a shared virtual space.

“Metaverse is a shared virtual collective space created by the convergence of physical and enhanced digital reality” —Gartner.

Is the metaverse betting on Cyber Security?

The WEF (World Economic Forum) argues that the metaverse is a persistent and interconnected virtual environment in which social and economic elements mirror reality. Users can interact with it and with each other through immersive devices and technologies, while also interacting with digital assets and properties.

The metaverse is device-independent and is not owned by a single provider. It is an independent virtual economy, enabled by digital currencies and non-fungible tokens (NFTs).

The metaverse is a combinatorial innovation: for it to work, multiple technologies and trends need to come together.

These include virtual reality (VR), augmented reality (AR), flexible work styles, HMD viewers, Cloud, IoT (Internet of Things), 5G connectivity and programmable networks, Artificial Intelligence (AI)…and, of course, Cyber Security.

Challenges of the metaverse for organisations

The Cyber Security challenges that organisations face when operating in the metaverse can have significant implications for the security and privacy of their assets and users.

These challenges include:

  • The theft of virtual assets can lead to significant financial losses for organisations.
  • Identity theft can compromise sensitive information and resources.
  • Malware attacks can infect entire virtual environments, causing widespread damage.
  • Social engineering attempts to trick users into revealing confidential information or performing unauthorised actions.
  • The lack of standardisation in the metaverse can make it difficult for organisations to develop consistent security protocols and ensure interoperability between different virtual environments.
  • The novelty of the metaverse and the low security awareness of users can lead to poor security practices, making them more vulnerable to cyber-attacks.

The Metaverse Alliance suggests that, in traditional internet use, users do not have a complete digital identity that belongs to them. Instead, they provide their personal information to websites and applications that can use it for a variety of purposes, including making money from it.

In the metaverse, however, users will need a single, complete digital identity that they control and can use across platforms. This will require new systems and rules to ensure users’ privacy and security.


In short, users need to own and control their digital identity in the metaverse, rather than leaving it to third-party websites and applications. It is very worrying that most internet users do not have a digital identity of their own.

Understanding the new virtual environment and its risks

Statista expects the metaverse market to grow significantly in the coming years, with revenues increasing at a compound annual growth rate (CAGR) of more than 40% between 2021 and 2025.

It also expects growth to be driven by the increasing adoption of virtual and augmented reality (VR and AR) technologies, with gaming and eSports industries dominating, and growing interest in virtual social experiences.

In the aftermath of the Covid-19 pandemic, the shift towards digital and virtual experiences has accelerated, further driving growth in the metaverse market.

These forecasts present significant opportunities for companies and investors, particularly in the entertainment and social networking sectors. However, the market also poses several challenges in terms of regulation, Cyber Security and standardisation that need to be addressed to ensure its sustainable growth.

The metaverse is not immune to cyberthreats

The metaverse, like any technology, is not immune to risks and vulnerabilities. Here are some of the technological and cyber risks associated with the metaverse:

  • In the metaverse, users will create and share large amounts of personal data. This includes information such as biometric data, personal preferences and behavioural patterns. Ensuring the security and privacy of this data will be critical to prevent leaks and unauthorised access to sensitive information.
  • As the metaverse becomes more popular, it will attract the attention of cybercriminals. Cybercrimes such as distributed denial of service (DDoS) attacks, malware and phishing scams could compromise the security of the virtual world and its users.
  • The metaverse is likely to involve the exchange of virtual currencies and assets. If these assets are not adequately protected, they could be vulnerable to theft, fraud, and hacking.
  • With the immersive nature of the metaverse, users may become addicted and spend excessive time in the virtual world. This could lead to physical and mental health problems, as well as social isolation.
  • The metaverse could become dominated by a few powerful companies or individuals, resulting in a centralised and controlled virtual world. This could limit users’ freedom and innovation.

As we will see in the next article within this series, it is essential to develop robust security protocols and regulations to prevent these risks and ensure that the metaverse remains a safe environment for all users.

Featured photo: Julien Tromeur / Unsplash

Satellites with 5G technology to provide IoT coverage worldwide

Nacho Palou    10 April, 2023

During the last edition of MWC, our colleague Javier Zorzano participated in the “5G IoT Summit: Hybrid NB-IoT and Satellite Solutions” alongside Shahbaz Ali, from Sateliot.

In their talk, Javier and Shahbaz discussed the benefits and challenges around satellite connectivity for IoT devices. This technology, which we are jointly developing, expands and complements our portfolio of NB-IoT connectivity solutions via 5G and LPWA networks. This way, we are configuring a hybrid solution, with both terrestrial and non-terrestrial (NTN) satellite networks, to offer our customers global NB-IoT coverage.

“Only 30 percent of the world’s surface has coverage from terrestrial networks” according to IoT For All. For this publication, satellite connectivity was one of the “dominant trends” at Mobile World Congress 2023.

“The next step in NB-IoT connectivity is to provide global coverage, worldwide. And that step is what we are taking now.” —Javier Zorzano (Telefónica Tech)

The growing number of industries and sectors adopting IoT solutions worldwide makes it necessary to develop this hybrid IoT connectivity that provides coverage worldwide. Otherwise, there will be more and more IoT use cases that cannot be deployed due to lack of coverage.

For example, for the livestock sector operating in rural or remote regions, for the logistics sector when it needs to accurately track the status and location of its goods on transoceanic routes, or for the renewable energy sector that manages wind farms or solar farms in hard-to-reach places.

Convergence of terrestrial and non-terrestrial networks

As explained by Shahbaz Ali during the meeting, Sateliot is developing the first 5G NB-IoT constellation of LEO (Low Earth Orbit) nanosatellites.

This constellation is made up of small, efficient satellites, typically located at an altitude of between 500 and 600 kilometers, forming a Non-Terrestrial Network (NTN) for IoT connectivity that is capable of integrating with our terrestrial 5G network.

Sateliot’s nanosatellite recreation. Image: SATELIOT

Our collaboration with Sateliot consists of developing a technological solution that includes satellite IoT connectivity to offer an affordable and transparent solution for our customers: satellite IoT connectivity based on the same 3GPP standard as 5G and NB-IoT networks.

This connectivity is also compatible with the same IoT devices currently used thanks to the development and certification of hybrid connectivity modules.

“Thanks to standardization, our hybrid IoT connectivity technology is affordable and scalable, and will reduce frictions when adopting it.” – Shahbaz Ali (Sateliot)

This way any conventional IoT device can simultaneously work with terrestrial 5G NB-IoT networks and satellite networks. So service providers “will be able to connect with the nanosatellite network using a roaming service when they need 5G coverage to offer connectivity and follow, for example, the cargo of a moving ship, the trajectory of a mountain biker or alert emergency services in the case of an accident,” explains Sateliot.

Benefits and challenges of our satellite IoT connectivity

The development of IoT connectivity via satellite provides two important benefits to any IoT solution:

  • Global coverage in remote areas and in territories without network infrastructure or mobile coverage, helping to close the digital divide between regions.
  • Backup coverage to reinforce mobile network coverage and to ensure service continuity in case of disruption due to incidents or natural disasters.

The solution we are developing with Sateliot not only allows the use of the same IoT devices already on the market, but also the same SIM card. Whether connected through a conventional network or through Sateliot’s network, connectivity can be managed through our Kite platform.

Satellite connectivity managed with Kite Platform

Kite Platform is our managed connectivity solution through which our customers can easily control and monitor their data line SIM cards in real time, via web or API. In this way, “our solution is equivalent to having a conventional roaming agreement, but with a satellite operator,” explains Javier Zorzano.

From a technical point of view, this simplifies and reduces the adoption time of this technology. From a commercial point of view, this roaming agreement is more affordable than existing solutions in the market.
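Since Kite exposes its functionality via web and API, a customer could automate this kind of monitoring. As a purely illustrative sketch (the endpoint, fields and credentials below are hypothetical placeholders, not Kite Platform's actual API), this is the sort of programmatic check that could report a SIM's connectivity status:

```python
import requests

# Hypothetical placeholders -- not the real Kite Platform endpoints or schema.
API_BASE = "https://api.example.com/kite/v1"   # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                   # placeholder credential
SIM_ICCID = "8934071234567890123"              # example SIM identifier

response = requests.get(
    f"{API_BASE}/sims/{SIM_ICCID}/status",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
status = response.json()

# e.g. whether the SIM is currently attached via a terrestrial or satellite (NTN) network
print(status.get("network_type"), status.get("connected"))
```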

The first commercial pilots of satellite NB-IoT with customers are planned for the end of the year.

With our technology and in collaboration with Sateliot, we address the major challenges of satellite connectivity:

  • Connectivity management, with our Kite Platform technology.
  • Service cost, a “very sensitive” aspect for the IoT B2B market.
  • The massive adoption of IoT that demands coverage in places not covered by terrestrial networks.

Conclusion

Our partnership with Sateliot helps provide NB-IoT coverage on a global scale. It also contributes to the massive deployment of our IoT solutions and devices, especially in the B2B sector.

This technological solution opens up new possibilities and use cases for IoT technology. From precise tracking of goods or fleet management anywhere in the world to the development of smart livestock and agriculture solutions in rural areas, as well as environmental monitoring projects or the management of natural resources such as water or energy. “Possibilities that we could not consider before are now on the table,” concludes Javier Zorzano.

Watch the talk with Javier Zorzano (Telefónica Tech) and Shahbaz Ali (Sateliot) in which they discuss the benefits and challenges of satellite connectivity for IoT devices (Hybrid NB-IoT), a solution that expands and complements our IoT connectivity portfolio.

Featured photo: Stefan Stefancik / Unsplash

Rethinking consensus for a more sustainable Blockchain: from PoW to PoS

Jorge Ordovás    5 April, 2023

Keeping a record of information on public Blockchain networks (such as Bitcoin, Ethereum or others) involves solving the so-called “Byzantine Generals’ Problem”: several contingents of the same army surround an enemy city and have to agree, in a hostile environment, to attack the city in a coordinated way. Emissaries sent between camps may be killed by the enemy and messages lost; or worse, there may be traitors among them who kill the emissaries or alter the orders they carry.

If they can get the sum of most of the forces at their disposal to attack at once, they can win the battle (but if they cannot get a minimum number of forces to attack in a coordinated way, they will lose).

Reaching consensus without a central authority

In the field of public Blockchain networks, the challenge is how to get the different users of the network to make a coordinated decision (what the next block in the chain will be and, therefore, which transactions are recognised as valid) without a centralised authority directing the process. These users are connected to each other over an insecure medium (P2P connections over the Internet) in which information may not travel uniformly, and may even be altered or lost along the way.

The key to solving this problem is to apply game theory, implementing a mechanism called “Proof of Work” (PoW). Its objective is to incentivise certain actors, the miners, to use their computational capacity to generate new blocks containing information accepted by consensus across the entire network, following the rules of a competition that involves performing very computationally expensive calculations.

In exchange for this work, the miners receive compensation (in the case of Bitcoin, they receive 6.25 bitcoins plus the commissions from the transactions included in the blocks they generate).
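As a rough illustration (a minimal sketch, not Bitcoin's actual implementation, which hashes a binary block header with double SHA-256 against an adjustable compact target), the following Python snippet captures the essence of the PoW competition: miners vary a nonce until the block's hash falls below a difficulty threshold.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int, max_nonce: int = 10_000_000):
    """Toy Proof-of-Work: find a nonce whose SHA-256 hash is below a target."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # a valid "block": proof that work was done
    return None, None  # no solution found within max_nonce attempts

nonce, digest = mine("block containing pending transactions", difficulty_bits=20)
print(nonce, digest)
```

The asymmetry is the point: finding a valid nonce takes a huge number of attempts on average, while verifying it takes a single hash, which is what lets the rest of the network cheaply check the miner's work.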

The “51% problem”

Potentially, a miner with a majority of the global computing power of these networks (for example, by having quantum computers…) could alter their functioning by causing unexpected events (slowing down the confirmation of new transactions, blocking payments related to a specific entity, or causing forks in the blockchain in a premeditated way that would allow him to spend the same cryptocurrency twice in exchange for goods or services already consumed). This situation is known as the “51% problem”.

However, in practice, if a miner were to obtain this computational capacity and use it to generate anomalous behaviour of this kind, he himself would be the main loser. He would cause a crisis in the network that would put at risk his ability to keep obtaining rewards for generating new blocks, not to mention the probable market crash that would collapse the price of the network's cryptocurrency (and, therefore, the income he obtains from his activity).

It is much more productive for this miner to follow the defined rules and get that return than to “cheat” and risk the return on his investment in the infrastructure needed to achieve that computing capacity.

The environmental impact of Blockchain technologies

To guarantee this result, the “network effect” (the number of miners working on the network to compete for the generation of new blocks) is key. The higher the cost of obtaining a relevant percentage of the network’s computational capacity, the greater the incentive to dedicate this infrastructure to complying with the rules established in the network.

And this is where we run into a problem: the environmental impact of these technologies. The computational capacity of networks such as Bitcoin has kept growing, and it relies on specialised hardware that consumes large amounts of electricity to operate.

The chart shows how the curve representing the computational capacity of the Bitcoin network continues to grow over time, currently exceeding 200 exahashes per second (200 quintillion hash operations per second).

Chart: Bitcoin network computational capacity, in https://www.blockchain.com/charts/hash-rate

There is an open debate about the impact of mining on public Blockchain networks compared with other activities that also make intensive use of infrastructure and leave a significant environmental footprint, and a growing number of projects are trying to power mining with renewable energy. Even so, there is a real problem derived from the energy needed to sustain the PoW process, and different initiatives propose radically changing the way a Blockchain network reaches consensus. The most prominent alternative is the one we call Proof of Stake (PoS).

More and more Blockchain projects seek to reduce energy consumption and use renewable energy sources

The aim of replacing PoW with PoS in public Blockchain networks is to achieve a mechanism that is as secure as mining but does not depend on computational capacity to participate in the validation of new blocks. Instead, it relies on a random process in which the more of the network's cryptocurrency one owns, the more likely one is to be chosen to generate a new block (and receive a reward for it).

The most relevant case of transition from PoW to PoS is Ethereum, the public network par excellence for the use of smart contracts in decentralised services. Since Ethereum’s inception in 2015, this transition was already considered for the future, but it did not begin until 2020.

Beacon Chain, a new consensus mechanism for Ethereum

That year, a network called Beacon Chain began to be deployed to manage this new consensus mechanism in Ethereum. This network, in tests since August 2020, will be integrated with the main Ethereum network in the coming months, in an event called The Merge, which will transform the way this network reaches consensus on the generation of new blocks without affecting the smart contracts, data and services based on them that currently exist. A more than considerable challenge, considering the volume of business it manages.

When this happens, there will no longer be miners, but validators who will have previously made a deposit of 32 Ether (Ethereum’s cryptocurrency) as a guarantee to participate in the process of selecting the actors who will generate and validate new blocks in the Ethereum network. At the time of writing, the value of that deposit of 32 Ether amounts to about $98,000, and there are already more than 351,000 validators who have deposited more than 11 million Ether to be selected to participate in the process.

Chart: Beacon Chain statistics, in https://beaconcha.in/

In each round, the Beacon Chain randomly chooses, from among all the available validators, one known as the proposer, who decides the new block to be generated in Ethereum and the transactions it will include. It also selects a set of validators who act as attesters, reviewing and accepting (or rejecting) the proposed block so that it can be incorporated into the network, at a rate of one new block every 12 seconds.

How Beacon Chain works

This process does not require complicated calculations as in the case of PoW, so no significant computational capacity is needed to be a validator: only the ability to run specific software that connects to the Beacon Chain and participates in the consensus mechanism.

If during this process both proposer and attester behave correctly, they will receive a reward for their work (higher in the first case). If they do not respond when elected, or behave incorrectly, they will receive a penalty (the amount of which depends on the damage their behaviour may cause to the functioning of the network), which will be deducted from the deposit they made to be eligible for the process.

Beacon Chain incentivises compliance with Ethereum’s rules: fair play maximises profits

Should an actor be penalised on a recurring basis, they may even lose their entire deposit and be expelled as a validator, so this mechanism encourages behaviour according to the defined rules, in order to generate new blocks in the network in a coordinated and secure manner.

The probability of being selected in the process is directly proportional to the amount deposited: whoever has contributed the most Ether as collateral is the most likely to be selected and, therefore, obtains the greatest return from the new PoS process.

Pure game theory: since behaving correctly maximises profit, and cheating minimises profit, whoever has the most Ether will have the most incentive to play by the rules (for whatever it’s worth).
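As a simplified sketch of that idea (not the Beacon Chain's actual algorithm, which uses RANDAO-based randomness and fixed 32-Ether validator slots; the validator names and stakes below are made up), stake-weighted selection can be illustrated like this:

```python
import random

# Hypothetical validators and their deposited collateral (in Ether).
validators = {"alice": 32, "bob": 64, "carol": 320}

def pick_proposer(stakes: dict) -> str:
    """Choose a proposer with probability proportional to stake."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names], k=1)[0]

# Over many rounds, carol (10x alice's stake) is chosen roughly 10x as often.
counts = {name: 0 for name in validators}
for _ in range(10_000):
    counts[pick_proposer(validators)] += 1
print(counts)
```

Penalties deducted from the deposit then make misbehaviour costly in proportion to that same stake, which is where the game-theoretic incentive comes from.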

Ethereum evolution

The evolution of Ethereum does not stop at the migration from PoW to PoS. Further phases are planned to increase the capacity of the network through multiple blockchains, operating in parallel, on which smart contracts will be deployed, with the Beacon Chain in charge of guaranteeing security and coordinating the generation of new blocks across all the chains that will make up Ethereum.

However, there is not yet a definite date for this final phase; there are still many decisions to be made and developments and tests to be carried out. And, first of all, it remains to be confirmed that 2022 will indeed be the year of The Merge, if nothing unforeseen happens in the process of implementing Proof of Stake on the current network. Rien ne va plus, or as they say in this decentralised ecosystem, WAGMI (We're All Gonna Make It).

Featured photo: Bastian Riccardi / Unsplash


Technology and the young: how to turn dangers into opportunities

María Riesgo    4 April, 2023

The technology world is moving extremely fast. Looking back, we would never have imagined the possibility of connecting a computer to the Internet without tying up the landline phone at home, or answering a work email from our mobile phone.

We live alongside technology from the moment we wake up to the moment we go to sleep. What is the first thing we usually do as soon as we open our eyes? Almost certainly check our mobile phones, or even our social networks.

Years ago, when we were younger, we did not live dependent on our mobile phones, social media and everything they entail. In fact, we had to ask permission to connect to the Internet and use the social networks of the time.

Today's young people present a twofold reality:

  • They are purely technological and have been born with a phone in their hands.
  • They are too exposed to technology and this could be a danger, or perhaps an opportunity.

Why could technology become a danger?

Exposure to social media can be harmful if it is not used responsibly. To this end, it is essential to educate children and teenagers on good practices in the use of technology.

How could we avoid this?

Children should be taught not to depend on technology: it should be used as a game, with established schedules and under supervision. Technology is a complement to their growth, but they do not have to invest all their time in it. In fact, they should be encouraged to use their imagination and not get used to the immediacy provided by the Internet.

As they move through the stages of childhood, and reach a more mature age, they should be made aware of the safety issues they could be exposed to if they use social networks in an unsafe way, for example, by giving out personal details, sending photos to strangers, or trusting people they do not know in the real world.

In fact, this concept is very important: we must make them understand that not everything that happens on the internet is real and that they should not believe everything they see. Even an image of a person can be altered, for example through a hologram, so someone could impersonate another person and deceive their victim to achieve a goal that is never going to be good.

As well as teaching them to avoid conversations with strangers, we also need to raise awareness about bullying: alongside empathy and respect, emphasis must also be placed on cyberbullying.

As they develop, social networks can be an impetus for socialising, learning and developing the imagination. However, they can also present several dangers, including cyberbullying, so it is very important to teach young people to respect themselves and not neglect the privacy of their data and image and, on the other hand, to respect others and not share content about peers or friends in their social environment.

In short, focus on the use of technology as something positive and from which much can be learned, but without forgetting the dangers that can arise if we do not “put an antivirus”, through education, to the youngest.

Why could technology be an opportunity?

Because if an interest in technology is awakened from a young age, while they are already in full contact with it, it can become an opportunity: they could start programming very early, for example, approaching the task as a game.

How could they get started?

Through programming, for example, which is one of the languages of the digital society. If they approach programming as a learning game from an early age and end up enjoying it, they will be able to specialise and move on to more specific courses. A good starting point is code.org, which José María Álvarez-Pallete promotes alongside figures such as Obama, Zuckerberg, Jeff Bezos and Susan Wojcicki.

In this way, through platforms such as code.org, they could transform an interest, which from the beginning can be instilled as learning through play, into a challenge, advancing in technological knowledge as they grow.

Thus, knowing the dangers, promoting knowledge and continuous learning, who knows whether in the future they may turn this game into an impulse to train and end up working in the field. Moreover, the world of cybersecurity offers a huge and growing job market, expanding every year.

We need to encourage young people from an early age to play and get excited about learning everything that technology can offer them, warning them about the dangers.

Also teach them that hackers are not the bad guys in the story; the dangerous ones are those who use technology to do evil. But it is important to differentiate between these two figures because, who knows, in the future they may end up being hackers and developing tools to detect security flaws and protect clients, or users, from the dangers they were taught from an early age.

In this way, if young people develop an interest in protecting themselves and others, they could also develop an interest in analysing, for example, social networks: how a simple tweet can go viral, and how to check day by day whether the trend around that message keeps rising or whether it is no longer being talked about. Through this little game, or visual curiosity, they would be acting as cyber intelligence analysts without realising it, carrying out a basic study of a trend on social networks.

On the other hand, having taught them about cyberbullying, we can take their learning about social networks further: if they spot and flag a negative comment aimed at a person, or even a commercial brand, they will also be learning about offensive comments on social media, a very relevant topic for companies, since brand image on social networks has to be carefully monitored.

Conclusion

Technology education has to go through stages of learning as children grow up. There is little point in explaining to a very young child the dangers of phishing or ransomware without a prior context. That is why in the early stages it will be necessary to accompany them in the use of technology, to supervise what they do and what they learn.

In short, education and protection should be used to awaken their interest in how to use the Internet safely, and in how to advance in knowledge in order to gradually improve.

It is always said that children are the future of the world, and in this case, they are the future of technology and cybersecurity as we understand it now, and how it will progress thanks to them in a few years’ time.

Featured photo: Robo Wunderkind / Unsplash

Big Data in basic research: from elementary particles to black holes

Javier Coronado Blazquez    3 April, 2023

The Big Data paradigm has profoundly penetrated all layers of our society, changing the way we interact with each other and the way technological projects are carried out.

Basic research, specifically in the field of physics, has not been immune to this change in the last two decades and has been able to adapt to incorporate this new model to the exploitation of data from leading experiments.

We will talk here about the impact of Big Data on three of the major milestones in modern physics.

(1) Large Hadron Collider: the precursor of Big Data

One of the buzzwords of 2012 was the “Higgs boson”, that mysterious particle that we were told was responsible for the mass of all other known particles (more or less) and that had been discovered that same year. But in terms of media hype, the focus was on the instrument that enabled the discovery, the Large Hadron Collider, or LHC, at the European Organization for Nuclear Research (CERN).

The LHC is a particle accelerator and is probably the most complex machine ever built by humans, costing some €7.5 billion. A 27 km long ring buried at an average depth of 100 metres under the border between Switzerland and France, it uses superconducting electromagnets to accelerate protons to 99.9999991% of the speed of light (i.e., in one second they go around the ring more than 11,000 times).

By colliding protons at these dizzying speeds, we can create new particles and study their properties. One such particle was the Higgs boson.

To make sure that the protons, which are subatomic particles and therefore unimaginably small, actually collide with each other, they are not launched one by one but in large bunches, resulting in about 1 billion collisions per second.

All these collisions are recorded as single events. Thousands of individual particles can be produced from a single collision, which are characterised in real time (well below a millisecond) by detectors, collecting information such as trajectory, energy, momentum, etc.

Massive amounts of data

As we can imagine, this produces an enormous amount of data: over 50,000-70,000 TB of raw data per year, and that is just from the main detectors, as there are other secondary experiments at the LHC. Since it does not operate every day of the year, this works out at an average of 200-300 TB per day of operation; a complicated, but feasible, volume to handle today. The problem is that the LHC came into operation in 2008, when Big Data was a very new concept, so a lot of the technology had to be developed ad hoc. It would not be the first time: the World Wide Web itself was born at CERN.
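A quick back-of-the-envelope check of those figures (the number of data-taking days is an assumption for illustration, not an official value):

```python
# Rough sanity check of the LHC data-rate figures quoted above (all approximate).
raw_per_year_tb = 60_000   # midpoint of the 50,000-70,000 TB/year raw-data estimate
operating_days = 250       # assumed data-taking days per year (illustrative assumption)

avg_per_day_tb = raw_per_year_tb / operating_days
print(f"~{avg_per_day_tb:.0f} TB per day of operation")  # ~240 TB/day, within the 200-300 TB range
```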

The Worldwide LHC Computer Grid (WLCG), a network of 170 computing centres in 42 countries, was established in 2003, with a total of 250,000 available cores allowing more than 1 billion hours of computing per year.

Depending on the technical characteristics, each of the nodes in this network can be dedicated to data storage, processing or analysis. To ensure good coordination between them, a three-tier hierarchical system was chosen: Tier 0 at CERN, Tier 1 at several regional sites, and Tier 2 at centres with very good connectivity between them.

LHC control room / Brice, Maximilien, CERN

Spain hosts several of these computing centres, both Tier 1 and Tier 2, located in Barcelona, Cantabria, Madrid, Santiago de Compostela and Valencia. One of the aspects that has fostered this large volume of data is the application of machine learning and artificial intelligence algorithms to search for physics beyond what is known, but that is a story for another day…

(2) James Webb Space Telescope: the present and future of astrophysics

The LHC explores the basic building blocks of our Universe: the elementary particles. Now we are going to travel to the opposite extreme, studying stars and entire galaxies. Except for the remarkable advances in neutrino and gravitational-wave astronomy in recent years, if we want to observe the Universe, we will do so with a telescope.

Due to the Earth’s rotation, a “traditional” telescope will only be able to observe at night. In addition, the atmospheric effect will reduce the quality of the images when we are looking for sharpness in very small or faint signals. Wouldn’t it be wonderful to have a telescope in space, where these factors disappear?

That was what NASA thought in the late 1980s, launching the Hubble space telescope in 1990, which has produced (and continues to produce) the most spectacular images of the cosmos. A couple of decades ago NASA considered what the next step should be and began designing its successor, the James Webb Space Telescope (JWST), launched on 25 December 2021 and currently undergoing calibration.

With a large number of technical innovations and patents, it was decided to place JWST at the L2 Lagrange point, 4 times further away from us than the Moon. At such a distance, it is completely unfeasible to send a manned mission to make repairs, as was the case with Hubble, which orbits at “only” 559 km from the Earth’s surface.

NASA’s James Webb Telescope main mirror. Image Credit: NASA/MSFC/David Higginbotham

One of the biggest design challenges was data transmission. Although the JWST carries shields to thermally insulate the telescope, because it is so far from the Earth’s magnetosphere, the hard disk that records the data must be an SSD (to ensure transmission speed) with high protection against solar radiation and cosmic rays, since it must be able to operate continuously for at least 10 years. This compromises the capacity of such a hard disk, which is a modest 60 GB.

With the large volume of data collected in observations, after about 3 hours of measurements this capacity may be reached.

The JWST is expected to perform two data downloads per day, in addition to receiving pointing instructions and sensor readings from the various components, with a transmission rate of about 30 Mbit/s.

Compared to the LHC's figures this may seem insignificant, but we must not forget that JWST orbits 1.5 million kilometres from Earth, in a tremendously hostile environment, with temperatures of about 30°C on the Sun-facing side and -220°C on the shadow side. It is an unparalleled technical prodigy that will produce more than 20 TB of raw data per year, which will keep the astrophysical community busy for years to come, with robust and sophisticated machine learning algorithms already in place to exploit all this data.
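To get a feel for why two downlink sessions per day are needed, here is a rough calculation with the figures quoted above (all of them approximate):

```python
# Approximate downlink arithmetic using the JWST figures quoted above.
ssd_capacity_gb = 60      # on-board solid-state recorder capacity
downlink_mbit_s = 30      # approximate transmission rate to Earth

ssd_capacity_bits = ssd_capacity_gb * 8e9
hours_to_empty = ssd_capacity_bits / (downlink_mbit_s * 1e6) / 3600
print(f"~{hours_to_empty:.1f} h to transmit a full {ssd_capacity_gb} GB buffer")  # ~4.4 h

# Filling 60 GB in about 3 hours of observation implies an average science data rate of:
fill_rate_mbit_s = ssd_capacity_bits / (3 * 3600) / 1e6
print(f"~{fill_rate_mbit_s:.0f} Mbit/s while observing")  # ~44 Mbit/s
```

In other words, the telescope generates data faster than it can send it down, so it has to buffer observations and rely on regular contact windows.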

(3) Event Horizon Telescope: old-school Big Data

Both the LHC and JWST are characterised by fast and efficient data transmission for processing. However, sometimes it is not so easy to get all five Wi-Fi bars. How many times have we been frustrated when a YouTube video freezes and buffers because of a poor connection? Now let's imagine that instead of a simple video we need to download 5 PB of data.

This is the problem encountered by the Event Horizon Telescope (EHT), which in 2019 published the first picture of a black hole. This instrument is actually a network of seven radio telescopes around the world (one of them in Spain), which joined forces to perform a simultaneous observation of the supermassive black hole at the centre of the galaxy M87 for 4 days in 2017. Over the course of the observations, each telescope generated about 700 TB of data, resulting in a total of 5 PB of data scattered over three continents. The challenge was to combine all this information in one place for analysis, which it was decided to centralise in Germany.

In contrast to the LHC, the infrastructure for data transfer at this level did not exist, nor was it worth developing as it was a one-off use case. It was therefore decided to physically transport the hard disks by air, sea and land.
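A rough comparison shows why shipping disks beat the network here (the 1 Gbit/s sustained link speed is an assumption for illustration; real connectivity at some of the sites, notably the South Pole, is far more limited):

```python
# Rough comparison of network transfer vs. physically shipping the EHT disks.
total_data_pb = 5        # total volume recorded in the 2017 observing campaign
link_gbit_s = 1          # assumed sustained network throughput (illustrative only)

total_bits = total_data_pb * 8e15
transfer_days = total_bits / (link_gbit_s * 1e9) / 86400
print(f"~{transfer_days:.0f} days to send {total_data_pb} PB over a {link_gbit_s} Gbit/s link")  # ~463 days

# Air-freighting the disks, by contrast, takes on the order of days to weeks,
# which is why the "sneakernet" approach was the pragmatic choice.
```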

One of the radio telescopes was located in Antarctica, and we had to wait until the summer for the partial thaw to allow physical access to its hard disks.

Researcher Katie Bouman (MIT), who led the development of the algorithm to obtain the black hole photo with the EHT, proudly poses with the project's hard disks.

In total, half a tonne of storage media was transported, processed and analysed to generate the familiar sub-1 MB image. Explaining the technique required to achieve this would take several individual posts.

What is important here is that sometimes it is more important to be pragmatic than hyper-technological. Although our world has changed radically in so many ways thanks to Big Data, sometimes it is worth giving our project a vintage touch and imitating those observatories of a century ago that transported huge photographic plates from telescopes to universities to be properly studied and analysed.

Featured image shows the polarised view of the black hole in M87. The lines mark the orientation of polarisation, which is related to the magnetic field around the shadow of the black hole. Photo: EHT Collaboration

Cyber Security Weekly Briefing, 25 – 31 March

Telefónica Tech    31 March, 2023

GitHub exposes its RSA SSH host key by mistake

GitHub announced last Friday that they had replaced their RSA SSH host key used to protect Git operations.

According to the company, this key was accidentally exposed in a public GitHub repository last week. They acted quickly to contain the exposure and an investigation was launched to discover the cause and impact.

While this key does not give access to GitHub infrastructure or user data, this action has been taken to prevent potential spoofing. Users are advised to remove the key and replace it with the new one.

More info

* * *

Apple fixes an actively exploited 0-day

Apple has released security updates fixing an actively exploited 0-day vulnerability in older iPhone, macOS and iPad devices.

The flaw, identified as CVE-2023-23529, is a type confusion bug in WebKit with a CVSS of 8.8 that could lead to arbitrary code execution, data theft, access to Bluetooth data, etc.

It should be noted that, in terms of devices, the vulnerability affects iPhone 6s, iPhone 7, iPhone SE, iPad Air 2, iPad mini and iPod touch, in addition to Safari 16.3 on macOS Big Sur and Monterey, macOS Ventura, tvOS and watchOS. The company recommends updating as soon as possible to avoid possible exploitation attempts.

More info

* * *

Supply chain attack via 3CX video conferencing platform

Researchers from various security firms, such as SentinelOne, Sophos and CrowdStrike, have warned of a supply chain attack via the 3CX video conferencing programme.

While the investigation into the attack is still ongoing, it has been confirmed to affect Windows platforms where the compromised 3CXDesktopApp application would download ICO files from GitHub, ultimately leading to the installation of a stealer malware.

The first detections of the app’s suspicious behaviour in security solutions were reportedly in mid-March 2023, but researchers have identified infrastructure used in the attack with registration dates in February last year.

The campaign, which SentinelOne has dubbed SmoothOperator, has no clear attribution, although some researchers point to possible connections to Labyrinth Chollima, part of the North Korean Lazarus Group. 3CX has not made any statement regarding the campaign.

More info

* * *

Analysis of campaigns exploiting 0-days on Android, iOS and Chrome

Google’s Threat Analysis Group has published a report sharing details about two campaigns that used 0-day exploits against Android, iOS and Chrome.

In the first campaign, 0-day exploit chains targeting Android and iOS were detected, distributed via shortened links sent by SMS to users located in Italy, Malaysia and Kazakhstan. The vulnerability affecting iOS versions prior to 15.1, already fixed in 2022, is identified as CVE-2022-42856 (CVSS 8.8) and refers to a type confusion bug in the JIT compiler that can lead to arbitrary code execution.

On the other hand, the one identified as CVE-2021-30900 (CVSS 7.8), also fixed, is an out-of-bounds write and privilege escalation bug. As for the Android exploit chain, it targeted users of phones with an ARM GPU running Chrome versions earlier than 106. The bugs, all now fixed, are CVE-2022-3723 (CVSS 8.8), a type confusion in Chrome; CVE-2022-4135 (CVSS 9.6), a buffer overflow in Chrome's GPU component; and CVE-2022-38181 (CVSS 8.8), a privilege escalation. It is worth noting that the latter vulnerability was found to be actively exploited.

The second campaign, targeting devices in the United Arab Emirates via SMS, consists of several 0-days and n-days targeting Samsung’s web browser.

The link redirects users to a page developed by spyware vendor Variston and exploits vulnerabilities CVE-2022-4262, CVE-2022-3038, CVE-2022-22706 and CVE-2023-0266.

More info

Cloud terms you can’t miss

Roberto García Esteban    30 March, 2023

It was George Favaloro and Sean O’Sullivan, managers of Compaq Computer, who first used the expression “Cloud Computing” in 1996, and since then, the term has become so popular that I already meet primary school children who know, for example, that Siri does not live inside iPads but much further away, “in the cloud”.

However, as technology develops, a hodgepodge of terms and acronyms appear that are difficult for non-technologists to understand. So the intention of this post is to try to explain these terms in a simple way in order to explain the details of the very general concept of “the cloud”.

Cloud terms glossary

  • API: “Application Programming Interface”. It is the standard mechanism for communication between applications. It is an interface that allows different applications to request data and deliver it in a predefined format and according to specific rules.
  • Cloud Computing: Here I borrow the definition given by Salesforce, which in 1999 was the first company to market enterprise services from the cloud: “Cloud computing is a technology that enables remote access to software, file storage and data processing over the Internet, thus providing an alternative to running on a personal computer or local server. In the cloud model, there is no need to install applications locally on computers. Cloud computing offers individuals and businesses the capability of a well-maintained, secure, easily accessible, on-demand pool of computing resources”.
  • Hybrid Cloud: Cloud deployment model that combines the dedicated computing resources of a private cloud for critical data and applications with the shared resources of a public cloud to meet peak demand.  
  • Private Cloud: In this case, the computing resources and environment are for the exclusive use of an organisation. It is comparable to having one’s own data centre within an organisation, but with the advantages of delegating its management and dimensioning it on demand thanks to virtualisation.
  • Public Cloud: A deployment model in which an internet service provider offers computing resources over the internet on an infrastructure shared by several organisations on a pay-per-use basis.
  • Cluster: It is a collection of servers that are connected to each other through a network, and which behaves like a single server in many respects.
  • Colocation or Housing: Service offered by companies that provide data centres in advanced and secure facilities to host the technology platforms owned by their customers. These facilities offer high quality services and connectivity.
  • DPC: Data Processing Centres. These are the physical locations where all the electronic equipment necessary for the processing and storage of a company’s information is located.
  • Hypervisor: A hypervisor, also known as a virtual machine monitor (VMM), is software that creates and runs virtual machines, isolating the operating system and hypervisor resources from the virtual machines and allowing them to be created and managed. The physical machine on which the hypervisor runs is referred to as the ‘host’, and the multiple virtual machines that use its resources are referred to as ‘guests’. The hypervisor treats resources such as CPU, memory and storage as a pool that can be easily redistributed among its guests.
  • IaaS: Infrastructure as a Service. With IaaS, a virtualisation-based solution is available where the customer pays for resource consumption such as disk space used, CPU time, database space or data transfer.
  • Latency: Or network latency, the time it takes for a data packet to be transferred between a server and a user over a network.
  • Virtual Machine: A virtual machine (VM) is a virtual environment created on physical hardware using a hypervisor that has its own operating system, CPU, memory, network interface and storage.
  • Metacloud: Tools for the management and administration of multiple clouds, also managing resources in the cloud and exposing APIs to applications.
  • Multicloud: A cloud deployment model in which services from multiple cloud providers are combined to take advantage of the specific benefits of each provider.
  • On-demand: In the technology field, it expresses the flexibility of cloud products, based on a pay-per-use model in which the provider makes all its resources available to the customer as needed, so that the customer can respond to peaks and troughs in demand.
  • On-premises: This is the traditional licensing scheme, i.e. the company acquires the licences that grant it the right to use the provider’s systems, integrates them into its own installations and maintains its data within its own infrastructure.
  • Open Source: Not to be confused with freeware, since free (libre) software does not have to be free of charge. Open Source software makes its source code openly available, and programs licensed under the GPL (“General Public License”), once acquired, can be freely used, copied, modified and redistributed.
  • PaaS or Platform as a Service is a cloud computing service model that provides a ready-to-use development environment over the Internet in which developers can develop, manage, distribute and test their software applications.
  • PUE: Power Usage Effectiveness is the value that results from dividing the total amount of energy used by a data centre facility by the energy supplied to the data centre’s IT equipment. Items such as lighting or cooling fall into the category of energy used by the facility. The closer the PUE value is to 1, the more efficient the data centre is (see the short example after this glossary).
  • Disaster Recovery: A method of recovering data and functionality after a system outage due to a disaster, whether natural or human-induced.
  • SaaS or Software as a Service is a cloud computing service model that consists of distributing software applications hosted in the cloud to users via the Internet through a subscription or purchase payment model, while maintaining the privacy of their data and the personalisation of the application.
  • Bare-metal server: A bare-metal server is a physical server with a single tenant, i.e. for the exclusive use of the client that contracts it and which is not shared with other organisations or users.
  • SLA: Service Level Agreement. This is a protocol, usually set out in a legal document, whereby a company that provides a service to another company undertakes to do so under certain service conditions.
  • Oversubscription: Occurs when a shared hosting or Public Cloud provider offers more computing resources than its available capacity, on the assumption that customers will not use 100% of the resources offered.
  • VPN: A Virtual Private Network is a network that creates a private, encrypted and secure connection between two points over the Internet. VPN communication tunnels allow encrypted and secure traffic to be sent and allow company employees to access the information they need from their company, even if it is private.
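As a quick worked example of the PUE metric defined above (the energy figures are made up purely for illustration):

```python
# Illustrative PUE calculation with made-up annual energy figures (MWh).
total_facility_energy_mwh = 12_000   # IT load + cooling + lighting + electrical losses
it_equipment_energy_mwh = 8_000      # energy actually delivered to servers, storage and network gear

pue = total_facility_energy_mwh / it_equipment_energy_mwh
print(f"PUE = {pue:.2f}")  # 1.50: half a kWh of overhead for every kWh of IT load
```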

Cybercrime, a constant threat to all types of companies

Nacho Palou    29 March, 2023

Cyber threats have existed since technology began to be used in companies and organizations. But the evolution of the technology world in the 21st century has changed the landscape: the famous “security perimeter” no longer exists, and our digital data and assets are located in different places and constantly moving, making it difficult to protect against threats.

Mobile devices, cloud services, or the location of digital assets in changing places, sometimes outside our borders, have blurred that perimeter. This has led to a new era in which organizations face global risks.

Cybercrime as a service (CaaS)

Today, malicious actors have professionalized and many operate as international organized crime groups.

These groups “rent out” their attack and encryption tools in affiliate models, meaning that criminals with lower levels of preparation can access powerful attack tools in exchange for sharing their profits.

At the same time, the technological advances that protect organizations have been matched by malicious actors who stay at the forefront of the latest technologies and techniques. And on some key legal issues, such as the practical impossibility of attributing criminal offenses in certain areas of the Internet like the dark web, these actors continue to enjoy impunity.

Main threats today

Among the main threats faced by organizations today are:

  • Ransomware: Destructive attacks that encrypt an organization’s data and demand ransom in exchange for the tools and secret keys that allow its recovery.
  • Denial of Service (DDoS): Attacks aimed at stopping or deteriorating an organization’s websites or systems. They can be motivated by activism, commissioned, rewarded, etc. The environment is artificially overloaded until it stops working or does so very poorly.
  • Email-related attacks and identity theft: Phishing is one of the most used methods. Criminals send “deceptive” messages with links or malicious files that, once opened, infect systems and allow malicious actors to access valuable organization information.
  • Data theft: Malicious actors take over large amounts of an organization’s data and exfiltrate it (possibly using the company’s own legitimate mechanisms) to be sold, auctioned, etc.
  • Malware: Other families of malicious software are frequently used to harm systems (viruses), spy (backdoors, keyloggers, etc.), or profit. For example, “miners” are programs that mine cryptocurrencies in the infrastructure without the company being aware, generating economic benefits for the malicious actor.
  • Insiders: Sometimes the “enemy is at home” and they are employees or collaborators who act out of revenge or to obtain economic benefit.

How to protect yourself against these threats?

For any company, SME, and organization, protection against these threats must be approached from a holistic and comprehensive perspective, considering all relevant and interrelated aspects. A solid Cybersecurity strategy must take into account both prevention and detection and response to incidents.

Therefore, for companies and organizations, it is essential to:

  • Carry out good information security management, which includes identifying the organization’s critical assets, assessing risks, defining security measures, and implementing appropriate controls.
  • Have clear security policies and procedures that establish the responsibilities and obligations of employees and other actors related to the organization, as well as how to act in case of an incident.
  • Offer good training and awareness in Cybersecurity for all employees of the organization, so that they are aware of the risks and know how to act in case of an incident.
  • Have monitoring and analysis systems for network and system activity that allow early detection of possible security incidents and enable quick action to minimize damage.
  • Design a Cybersecurity incident response plan that establishes the procedures to be followed in case of an incident, including notification to authorities and the management of communication with customers and other stakeholders.

Other measures that can help protect an organization against cybersecurity threats include the use of advanced technological security solutions, such as firewalls, antivirus software, intrusion detection systems, and vulnerability management solutions.

Conclusion

Cyber threats are real for all types of organizations, and it is essential to protect a company's assets and data.

Threats are becoming increasingly sophisticated and dangerous, and organizations must stay up-to-date with the latest trends and threats in the field of cybersecurity to ensure adequate protection and be prepared to effectively face them.

A comprehensive security approach, including technological measures, security policies, and staff training, is essential to minimize risks.

Featured photo: Stefano Pollio / Unsplash

Cyber Security Weekly Briefing, 18 – 24 March

Telefónica Tech    24 March, 2023

HinataBot: new botnet dedicated to DDoS attacks

Researchers at Akamai have published a report stating that they have identified a new botnet called HinataBot with the capability to perform DDoS attacks of more than 3.3 Tbps.

Experts have indicated that the malware was discovered in mid-January, while being distributed on the company’s HTTP and SSH honeypots.

HinataBot uses compromised user credentials to infect its victims and exploits old vulnerabilities in Realtek SDK devices (CVE-2014-8361), Huawei HG532 routers (CVE-2017-17215) and/or exposed Hadoop YARN servers. Once the devices are infected, the malware executes and waits for the Command & Control server to send commands.

Akamai warns that HinataBot is still under development and that it could implement more exploits, and thus expand its entry vector to more victims and increase its capabilities to carry out attacks with a greater impact.

More info

* * *

CISA issues eight security advisories on industrial control systems

CISA has recently issued a total of eight security advisories warning of critical vulnerabilities in industrial control systems. These new vulnerabilities affect several products from different companies, such as Siemens, Rockwell Automation, Delta Electronics, VISAM, Hitachi Energy and Keysight Technologies.

The most significant of these vulnerabilities are those affecting the Siemens brand, of which three warnings have been collected affecting its SCALANCE W-700 assets, RADIUS client of SIPROTEC 5 devices and the RUGGEDCOM APE1808 product family, with a total of 25 vulnerabilities with CVSSv3 scores ranging from 4.1 to 8.2.

Also notable, due to their impact, are the advisories for Rockwell Automation's ThinManager ThinServer equipment, one of whose three bugs has a CVSSv3 of 9.8, and for the InfraSuite Device Master asset from Delta Electronics, for which a total of 13 vulnerabilities have been reported.

More info

* * *

Mispadu: banking trojan targeting Latin America

Researchers at Metabase Q Team have published a report on an ongoing campaign targeting banking users in Latin American countries using the Mispadu trojan. According to Metabase Q Team, the trojan has been spread through phishing emails loaded with fake invoices in HTML or PDF format, protected with passwords.

Another strategy involves compromising legitimate websites looking for vulnerable versions of WordPress to turn them into its C2 server and spread malware from there. According to the research, the campaign started in August 2022 and remains active, affecting banking users mainly in Chile, Mexico and Peru.

In November 2019, ESET first documented the existence of Mispadu (also known as URSA), a malware capable of stealing money and credentials, as well as acting as a backdoor, taking screenshots and logging keystrokes.

More info

* * *

New 0-day vulnerabilities against different manufacturers during Pwn2Own contest

The Pwn2Own hacking contest is taking place this week in the Canadian city of Vancouver until Friday 24 March. After the first day, participants had already shown how to hack multiple products, including the Windows 11 operating system, Microsoft SharePoint, Ubuntu, VirtualBox, the Tesla Gateway and Adobe Reader.

It is worth noting that, according to the event’s schedule, security researchers will today and tomorrow reveal other 0-days that affect these assets, as well as others such as Microsoft Teams and VMWare Workstation.

Last but not least, it is important to point out that after these new 0-day vulnerabilities are demonstrated and disclosed during Pwn2Own, vendors have 90 days to release security patches for these security flaws before the Zero Day Initiative discloses the information publicly.

More info

* * *

Critical vulnerability in WooCommerce Payments fixed

Researcher Michael Mazzolini of GoldNetwork reported a vulnerability in WooCommerce Payments this week, which has resulted in a forced security update being rolled out.

The vulnerability does not yet have a CVE identifier, although it has been assigned a CVSSv3 criticality of 9.8, being a privilege escalation and authentication bypass vulnerability, which could allow an unauthenticated attacker to impersonate an administrator and take control of the online retailer’s website.

It should be noted that no active exploitation has been detected so far, although Patchstack has warned that since no authentication is required for exploitation, it is likely to be detected in the near future. The affected versions range from 4.8.0 to 5.6.1, and the vulnerability has been fixed in version 5.6.2.

More info

5G connectivity: Four real and practical use cases

Nacho Palou    22 March, 2023

According to GSMA data, collected by the publication Redes & Telecom, the number of 5G connections worldwide surpassed one billion by the end of 2022; this figure will reach two billion in 2025, providing coverage to a third of the population in Europe.

5G connectivity is experiencing progressive deployment that is faster than its predecessors, 3G and 4G.

Real use cases for 5G connectivity

At Telefónica Tech, we are already implementing solutions to transform sectors such as industry by launching different projects that successfully leverage the advantages and enormous potential of 5G connectivity, such as:

  • Gestamp: The smart factory of Gestamp is based on a digital twin. A digital twin consists of a virtual model of the factory that optimizes production and helps with decision-making. The physical elements of the plant are connected via 5G to generate a virtual copy of the entire factory that allows industrial processes to be validated, different scenarios to be tested, and decisions based on data to be made.
  • APM Terminals: One of the largest operators of ports, maritime and land terminals in the world uses 5G connectivity to coordinate port traffic and improve safety through the deployment of 5G coverage at the APM Terminals terminal in the Port of Barcelona. The provision of 5G connectivity in cranes, trucks, and mobile staff allows all active actors to be located and visible in real-time, whether they are in motion or not, within the terminal. This helps reduce accidents among facilities, workers, vehicles, and goods.
  • Navantia: The Spanish company of reference in the manufacture of advanced ships uses 5G connectivity to remotely assist maintenance officers through Augmented Reality (AR) glasses. It also uses 5G to support the shipbuilding process, including processing 3D scanning in real-time, thus optimizing its production processes.
  • IE University: It has an immersive teaching center on its Segovia Campus that uses 5G connectivity in its implementation of virtual classes through streaming and from personal devices. This way, it can incorporate new educational resources such as Virtual Reality (VR), which allows specialized classes to be taught through immersive experiences for its students.

5G Connectivity: Key Advantages

The three key advantages of 5G connectivity are its transmission capacity and speed, imperceptible latency, and high concurrency of devices connected simultaneously in specific geographical areas.

  • Capacity and speed: 5G can reach download speeds of up to 10 gigabits per second (Gbps), allowing for the transmission of large amounts of data in less time.
  • Latency: Latency is the time that elapses from when a connection or request is initiated on one end until a response is received from the other end. For example, it is the time that elapses from when an industrial robot requests instructions to operate until it receives those instructions. With 5G, latency can be as low as 5 milliseconds, allowing for almost real-time communication (see the quick calculation after this list).
  • Concurrency: The high capacity and concurrency that 5G allows make it possible to connect multiple devices simultaneously, including IoT sensors and actuators: 5G supports up to one million connected devices per square kilometer.
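To put those headline figures in perspective, here is a rough calculation (the file size is an illustrative assumption, and real-world throughput is typically well below the 10 Gbps theoretical peak):

```python
# Illustrative arithmetic with the headline 5G figures quoted above.
peak_speed_gbps = 10    # theoretical peak download speed
latency_ms = 5          # best-case 5G latency
file_gb = 4             # assumed size of, e.g., a large industrial 3D scan or an HD film

seconds = file_gb * 8 / peak_speed_gbps
print(f"~{seconds:.1f} s to download {file_gb} GB at {peak_speed_gbps} Gbps")  # ~3.2 s

# A 5 ms round trip allows a remote controller to complete roughly
# 200 request/response cycles per second with a machine on the factory floor.
print(f"{1000 // latency_ms} round trips per second")
```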

5G technology is up to 90% more efficient in terms of energy consumption per unit of traffic.

Because of its characteristics and advantages, 5G connectivity also has significant implications for other new-generation digital technologies, including Big Data, the Internet of Things (IoT), and Artificial Intelligence (AI).

  • IoT (Internet of Things): 5G enables the reliable, secure connection of numerous devices without sacrificing its low latency.
  • Big Data: Thanks to 5G’s high data transfer capacity, it is possible to send and receive large volumes of information almost in real-time.
  • Artificial Intelligence (AI): 5G connectivity enables automated systems to respond almost instantly.