Is AI key to successful Real Estate investment?

Patrick Buckley    27 November, 2020

As Artificial Intelligence (AI) continues to shape the world around us, in today’s post we explore the impact of AI on commercial Real Estate investment. To what extent is AI the key to investment success on the property market?

The Applications of AI on Real Estate Investment

Real estate investment can be highly lucrative if it is done right. But investors must be aware that, like any market, property markets do suffer from a degree of volatility and are vulnerable to demand and supply-side ‘shocks’.

A clear example of such a ‘shock’ is the COVID-19 pandemic, which saw net Real Estate investment volume drop dramatically in certain global markets in the first quarter of 2020, with the Spanish market alone suffering a 40% reduction in sales. This will come as no surprise, given the inevitable decrease in wealth and access to finance arising from the complex financial side effects of state intervention.

As CBRE (a leading commercial Real Estate services and investment firm) predicts, investment volume in the global Real Estate market will fall by 38% in 2020. The important question facing investors now is whether markets around the world will crash or boom as a result of the pandemic.

In answering this question, AI and Big Data can provide helpful insights. Data trends can help us predict the future state of Real Estate markets around the world to an increasingly accurate extent.

Skyline AI, a New York based property investment company, offers commercial investors the opportunity to make investment decisions with the help of unique software that compiles and analyses data on a broad set of market indicators. These indicators include interest rates, property data and stock market trends, to predict the future value of property investments in specific areas at specific times.

Skyline AI’s algorithms are also able to monitor potential off-market investment opportunities and predict when they will come to market. This gives investors the edge when it comes to accessing lucrative deals.

Other commercial property investment companies focus their analysis on social data, such as neighbourhood crime rates, school ratings and access to public transport, to offer a similar analysis to both commercial investors and regular households who wish to make an informed property-buying decision.

So is AI the future of Real Estate?

Not exactly. Whilst AI can offer increased insight into the future state of property markets, and can be an extremely valuable tool for commercial investors, algorithms can only predict the effects of shock events on markets to a certain degree of accuracy; no algorithm could have predicted COVID-19. Markets are simply too unpredictable, and human intuition and evaluation of shock events remain fundamental. AI insights are only really valuable when combined with the expertise of analysts, who may be able to predict more accurately the impact of ‘new events’ on markets.

We must also remember that humans are far better at selling than any bot or algorithm. The best salesman may be able to persuade even the most prudent investor to purchase a property.

Conclusion

The commercial Real Estate investment market has been and continues to be revolutionised by AI-driven insights. Investment companies are able to use Big Data to justify and influence purchasing decisions, greatly improving the odds of a good investment return. However, we must remember that AI-generated insights cannot always predict surprise shocks, and they must be combined with analysts’ experience to provide a more accurate picture of future valuations. Furthermore, AI will never be able to replicate the selling powers of a human being.

To keep up to date with LUCA, visit our website, subscribe to LUCA Data Speaks or follow us on Twitter, LinkedIn or YouTube.

Nonces, Salts, Paddings and Other Random Herbs for Cryptographic Salad Dressing

Gonzalo Álvarez Marañón    24 November, 2020

The chronicles of the kings of Norway have it that King Olaf Haraldsson the Saint disputed possession of the island of Hísing with his neighbour, the King of Sweden. They decided to settle the dispute peacefully with a game of dice. After agreeing that the winner would be the one with the highest score, the King of Sweden threw the dice first.

–Twelve! I won! No need to throw the dice, King Olaf.

As he shook the dice in his hands, Olaf the Saint replied:

–There are still two sixes on the dice, and it will not be difficult for God, my Lord, to make them appear again on my behalf.

The dice flew and two sixes came out again. Once more, the king of Sweden rolled the dice and again he rolled two sixes. When it was Olaf the Saint’s turn, one of the dice rolled broke in two, one half showing a 6 and the other a 1, making a total of 13. As a result, the ownership of the island was awarded to Norway and both kings remained friends.

Randomness plays a fundamental role in all games of chance. And, what might surprise you more: cryptography could not exist without randomness. In this article you will discover how randomness is used in cryptography and how to obtain random numbers, a task that, as you will see, is not easy at all.

What is Randomness?

There are so many definitions of randomness that we could fill a book with them. In cryptography the following interpretation is common, which I quote from Bruce Schneier:

Randomness refers to the result of a probabilistic process that produces independent, evenly distributed and unpredictable values that cannot be reliably reproduced.

I would like to highlight the following three ingredients that every randomness generator must exhibit in order to be used with guarantees in “cryptographic salads”:

  • Independence of values: there is no correlation between the values generated. For example, if you toss a coin (without trickery) into the air and it comes up heads nine times in a row, what is more likely, heads or tails on the tenth toss? Well, the probability is still 1/2, because the result of one toss is independent of the previous toss.
  • Unpredictability: even if you get bored looking at values and more values, you can’t predict the next value with a higher probability than random, no matter how long the preceding sequence was. Again, coins, dice and roulettes are excellent random generators because, no matter how many theories you come up with, you won’t know what’s going to happen next (assuming they’re not loaded).
  • Uniform distribution: I’m sure that while you were reading the chronicle of King Olaf the Saint you were thinking: “Impossible! How can double sixes come up four times in a row?” And you are right to doubt, because the probability of this sequence is (1/36)·(1/36)·(1/36)·(1/36) = (1/36)⁴ = 0.00000059537…, or about 1 in 1.68 million. It is not likely that this sequence will occur, but it is possible. In fact, if you repeated the four throws a billion times, it would appear about 600 times on average. Randomness as we imagine it manifests itself in large numbers, not in small ones. The more values generated, the more we expect to see all possible sequences, distributed evenly, without any kind of bias.
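
As a quick illustration (not part of the original chronicle, of course), a few lines of Python reproduce these numbers: the exact probability of four double sixes in a row and the roughly 600 expected occurrences in a billion repetitions.

```python
import random

p_double_six = 1 / 36            # probability of two sixes in one throw of two fair dice
p_streak = p_double_six ** 4     # four double sixes in a row

print(f"P(four double sixes in a row) = {p_streak:.11f} (about 1 in {1 / p_streak:,.0f})")
print(f"Expected occurrences in 10^9 repetitions of the experiment: {p_streak * 1e9:.0f}")

# Randomness shows up in large numbers: estimate P(double six) empirically.
trials = 1_000_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) == 6 and random.randint(1, 6) == 6)
print(f"Empirical P(double six) over {trials:,} throws: {hits / trials:.5f} (exact: {p_double_six:.5f})")
```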

The problem with randomness is that you can never be sure. Were the Nordic kings’ dice loaded? Or did an improbable sequence simply happen that day, just by chance? There are randomness tests that indicate with very high confidence whether or not a generator is random, but you can never be absolutely sure. In fact, there is a wide range of statistical test suites (NIST, TestU01, Diehard, ENT, etc.) that try to rule out sequences that do not satisfy certain statistical properties, although they cannot guarantee perfect randomness.

How Are Random Numbers Generated?

Yes, but how do you generate random numbers on a computer? In order not to complicate things, let’s limit ourselves to two approaches:

  • True Random Number Generators (TRNG): these require a natural source of randomness. Designing a hardware device or software program that exploits this randomness and produces a number sequence free of bias and correlation is a difficult task. For example, thermal noise from a resistor is known to be a good source of randomness. TRNGs can also collect entropy in a running operating system through connected sensors, I/O devices, network or disk activity, system registers, running processes, and user activities such as keystrokes and mouse movements. These system- and human-generated activities can be a good source of entropy, but they can be fragile and manipulated by an attacker. In addition, they are slow to produce random numbers.
  • Pseudo-random number generators (PRNG): unfortunately, most natural sources are impractical due to the inherent slowness of sampling the process and the difficulty of ensuring that an adversary does not observe it. Moreover, a natural sequence would be impossible to reproduce, which would require keeping two copies of it, one for Alice and one for Bob, with the almost insurmountable difficulty of getting it to both. Therefore, a method is required to generate randomness that can be implemented in software and reproduced as many times as necessary. The answer lies in pseudo-random number generators: an efficient, deterministic mathematical algorithm that transforms a short, uniform string of length k, called the seed, into a longer, uniform-looking (or pseudo-random) output string of length l >> k (a sketch follows the quote below). In other words:

 “A pseudo-randomness generator uses a small amount of true randomness to generate a large amount of pseudo-randomness”
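
As a minimal sketch of this idea (an illustration, not a recommendation of any particular library), the following Python snippet takes a short seed from the operating system’s entropy pool, the TRNG side, and stretches it deterministically with a PRNG. Python’s random module is used only to make the determinism visible and is not cryptographically secure; the secrets module is the interface intended for keys, salts and nonces.

```python
import os
import random
import secrets

# "True" randomness: the OS entropy pool mixes hardware and system events.
seed = os.urandom(16)                        # 128-bit seed from the TRNG side
print("seed:", seed.hex())

# A PRNG stretches that short seed into a long, uniform-looking stream.
# The output is completely determined by the seed (demo only, NOT crypto-secure).
prng = random.Random(seed)
stream = bytes(prng.randrange(256) for _ in range(64))
print("pseudo-random stream:", stream.hex())

# Re-seeding with the same value reproduces exactly the same stream:
# this is the "reproducible as many times as necessary" property.
assert bytes(random.Random(seed).randrange(256) for _ in range(64)) == stream

# For actual keys, nonces or salts, use a CSPRNG interface instead:
print("256-bit key from the CSPRNG:", secrets.token_bytes(32).hex())
```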

What Is the Use of Randomness in Cryptography?

Randomness is difficult to generate and difficult to measure. Nevertheless, it is a key ingredient for the success of any cryptographic algorithm. Look at the different roles that randomness can play in making cryptography secure:

  • Encryption keys: to encrypt a message you need an encryption key, both for secret key algorithms and public key algorithms. If this key is easy to guess, what a rip-off! A fundamental requirement for the secure use of any encryption algorithm is that the key is selected randomly (or as randomly as possible). In fact, one problem faced by ransomware is how to generate random keys to encrypt victims’ files. The best encryption algorithm in the world is worthless if the key is revealed. It is recommended to use a hardware device to generate them, such as the TPM on Windows systems or an HSM.
  • Initialization Vectors: Block cipher algorithms use a random initial value, called the initialization vector (IV), to start the cipher of the first block and ensure that the same message encrypted with the same key will never yield the same value, as long as a different IV is used. This value can be known, but not predictable. Again, it is therefore critical to use random (and unpredictable) values to avoid repetition. And once again, it is recommended to use hardware devices to generate them.
  • Nonces: a nonce is a number used only once in a secure communication protocol. And what use can these fleeting nonces be? In a similar way to initialisation vectors, nonces ensure that even if the same messages are transmitted during a communication, they will be encrypted in a completely different way, which prevents replay or re-injection attacks. In fact, nonces often work as IVs: a nonce is generated and encrypted with the secret key to create the IV. They are also used in authentication protocols, such as HTTPS, and in proof-of-work systems, such as Bitcoin.
  • Salts: a salt is another random value, commonly used when storing passwords in a database. As you may know, passwords should never be stored in the clear: any attacker who accesses the user table would see them all! The password hash can be stored instead. But what if two passwords are the same? They will produce the same hash! If an attacker steals the database and sees many identical password hashes, bingo: he knows that they are easy passwords, the ones everyone chooses when they are not careful. He can also pre-compute huge tables of hashes of known passwords and look those hashes up in the stolen database. To avoid these problems, a random value is added: the salt. From now on, what is saved is not the password hash, but the salt and the hash of the password concatenated with the salt: H( password || salt ). Therefore, two identical passwords will result in two different hashes as long as the salt is random. Likewise, attacks that pre-compute hashes of known passwords are no longer useful. Like nonces, salts do not have to be secret, but they do have to be random. Another typical application of salts is in password-based key derivation functions (KDFs); a very simple scheme consists of repeating n times the hash of a password and a salt (a sketch follows this list):

key = Hⁿ( password || salt )

  • Padding: the famous RSA public key encryption algorithm is deterministic, i.e. the same message encrypted with the same key will always yield the same ciphertext. That cannot be! The plaintext message must be randomised. How? By adding random bits very carefully, in what is known as the OAEP scheme, which turns textbook RSA into a probabilistic scheme. Similarly, to avoid the malleability of RSA in digital signatures, the PSS scheme adds randomness.
  • Blind signatures: to get a person to digitally sign a document, but without being able to see the content of the signed document, random values that “blind” the signer are also used, hiding the content of the message to be signed. Subsequently, once the random value is known, the value of the signature can be verified. This is the digital equivalent of signing a document by placing a tracing paper over it: it prevents the document to be signed from being seen, but perfectly transfers the signature to the document. And who would want to sign something without seeing it first? These blind signature protocols are used, for example, in electronic voting and digital money applications.
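
As announced in the salts bullet above, here is a minimal Python sketch of salted password storage and of a key derivation in the spirit of key = Hⁿ(password || salt). It is only an illustration of the idea; real systems should use a dedicated password hashing or key derivation function such as PBKDF2, scrypt, bcrypt or Argon2 (PBKDF2 is shown at the end via hashlib.pbkdf2_hmac).

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, H(password || salt)) as it would be stored in the database."""
    salt = os.urandom(16)                        # random, but not secret
    return salt, hashlib.sha256(password.encode() + salt).digest()

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.sha256(password.encode() + salt).digest()
    return hmac.compare_digest(digest, stored)   # constant-time comparison

# Two users with the same weak password end up with different hashes:
salt1, h1 = store_password("123456")
salt2, h2 = store_password("123456")
assert h1 != h2 and verify_password("123456", salt1, h1)

def naive_kdf(password: str, salt: bytes, n: int = 100_000) -> bytes:
    """Toy iterated KDF in the spirit of key = H^n(password || salt)."""
    value = password.encode() + salt
    for _ in range(n):
        value = hashlib.sha256(value).digest()
    return value

key = naive_kdf("123456", salt1)

# In practice, prefer a standard construction (PBKDF2 here; Argon2/scrypt are stronger):
key = hashlib.pbkdf2_hmac("sha256", b"123456", salt1, 100_000, dklen=32)
print("derived key:", key.hex())
```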

Without Randomness There Is No Security

Random numbers are of critical importance in cryptography: they are the very basis of security. Cryptography cannot be incorporated into products and services without random numbers. An inadequate random number generator can easily compromise the security of the entire system, as confirmed by the long list of vulnerabilities due to poor randomness. Therefore, the choice of the random generator must be taken carefully when designing any security solution. Without good randomness there is no security.

A Simple Explanation About SAD DNS and Why It Is a Disaster (or a Blessing)

Sergio de los Santos    23 November, 2020

In 2008, Kaminsky shook the foundations of the Internet. A design flaw in the DNS made it possible to fake responses and send a victim wherever the attacker wanted. 12 years later, a similar and very interesting formula has been found to poison the cache of DNS servers. Even worse than Kaminsky’s, fundamentally because the attacker does not need to be on the same network as the victim, and because it has been announced while many servers, operating systems and programs are still unpatched. Let’s see how it works in an understandable way.

In order to fake a DNS response and return a lie to the client, the attacker must know the TxID (transaction ID) and the UDP source port. This implies 32 bits of entropy (two 16-bit fields to guess). SAD DNS consists (basically, because the paper is very complex) in inferring the UDP port through an ingenious method that uses ICMP error messages. Once the port is inferred, only the 16 bits of entropy of the TxID remain, which is feasible for a brute-force attack. With these two pieces of data, the attacker builds the packet and bombards the name server.

How to Infer the Open UDP Port

The necessary preliminaries are that, due to the way UDP works, the server opens ephemeral UDP ports through which it communicates with other name servers. Knowing these ephemeral ports is vital because, together with the TxID, they are everything an attacker needs to fake a response. In other words, if a server (resolver or forwarder) asks another server a question, it expects a specific TxID and UDP port in the response, and whoever returns a packet with that data will be taken at face value. It could be fooled with a false IP-to-domain resolution. The attacker only needs to know the open UDP port, guess the TxID by brute force and bombard the victim.

When you contact a UDP port and ask whether it is open or not, servers return a “port closed” (port unreachable) ICMP message. To avoid being overloaded with answers, they have a global limit of 1000 such replies per second. A global limit means that it does not matter whether 10 or 100 hosts are asking at once: among all of them, the server will send at most 1000 “port closed” answers in one second. This mechanism, introduced to avoid overloading the system, is what actually causes the whole problem.

The global limit is 1000 on Linux, 200 on Windows and FreeBSD and 250 on macOS. In reality, the whole paper is built on exploiting this fixed global limit. It needs to be revised, because the dangers of this design have been warned about before, but never with such a practical attack and application. It is also important because not only DNS but also QUIC and HTTP/3, which are based on UDP, can be vulnerable. The attack is complex and each step has its own mitigating details, but fundamentally the basic steps are (with potential inaccuracies for the sake of simplicity) the following:

  • Send 1000 UDP probes to the victim resolver with spoofed source IPs, testing 1000 ports. This is actually done in batches of 50 every 20 ms to overcome another per-IP response limit in the Linux operating system.
  • If all 1000 ports are closed, the victim will return (to the spoofed IPs) 1000 ICMP error packets indicating that the port is not open. If a port is open, nothing happens: the packet is handed to the corresponding application on that port. It does not matter that the attacker never sees the ICMP responses (they go to the spoofed IPs). What matters is how much of the global limit of 1000 responses per second is “used up” by that batch.
  • Before that second elapses, the attacker queries, from his real IP, a UDP port that he knows is closed. If the server returns a “port closed” ICMP, then the budget of 1000 “port closed” ICMP errors had not been used up, and therefore at least one port in that range of 1000 was open. Bingo. Because the ICMP response limit is global, a single “port closed” response means the limit of 1000 per second was not exhausted, so some of the probed ports were open. This verification query is made from the attacker’s real, unspoofed IP precisely so that he can receive the response (or its absence).

Thus, in batches of 1000 queries per second, and by checking whether or not the limit of “port closed” error packets is used up, the attacker can deduce which ports are open. In a short time, he will have mapped the server’s open ports. Naturally, the attacker combines this with “intelligent” binary searches, splitting the ranges of “potentially open” ports in each batch, to go faster and pin down the exact port.
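
To make the side channel more tangible, below is a toy, purely in-memory Python simulation (no networking at all; the per-IP limit, noise, timing and the final binary search are deliberately ignored, so this is a sketch of the logic rather than the real attack): burn the global “port closed” budget with a spoofed burst, then check from a real IP whether any budget is left.

```python
import random

GLOBAL_ICMP_LIMIT = 1000    # fixed per-second budget of "port unreachable" replies (Linux)

class ToyServer:
    """In-memory stand-in for a resolver's UDP stack. Purely illustrative."""
    def __init__(self, open_ports):
        self.open_ports = set(open_ports)
        self.icmp_budget = GLOBAL_ICMP_LIMIT

    def new_second(self):                         # the budget refills every second
        self.icmp_budget = GLOBAL_ICMP_LIMIT

    def probe(self, port) -> bool:
        """True if the server would answer with an ICMP 'port unreachable'."""
        if port in self.open_ports:
            return False                          # open port: no ICMP, no budget spent
        if self.icmp_budget > 0:
            self.icmp_budget -= 1
            return True                           # closed port: ICMP reply, budget spent
        return False                              # budget exhausted: silence

def block_contains_open_port(server, block) -> bool:
    server.new_second()
    for port in block:                            # burst of spoofed probes: the attacker never
        server.probe(port)                        # sees these replies, he only burns the budget
    # Verification probe from the attacker's real IP to a known-closed port (port 1 here):
    # an ICMP reply means the budget was NOT exhausted, so some probe hit an open port.
    return server.probe(1)

if __name__ == "__main__":
    secret_port = random.randrange(1024, 65024)   # keep full-sized blocks for simplicity
    server = ToyServer(open_ports={secret_port})
    for start in range(1024, 65024, GLOBAL_ICMP_LIMIT):
        block = range(start, start + GLOBAL_ICMP_LIMIT)
        if block_contains_open_port(server, block):
            print(f"open ephemeral port is somewhere in {block.start}-{block.stop - 1}")
            break
```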

The researchers also had to eliminate the “noise” from other open ports or from scans being made against the system while the attack is in progress, and in the paper they explain some formulas to achieve this.

More Failures and Problems

It all comes from a perfect storm of failures: in the UDP implementation, in the implementation of the 1000-response limit… The explanation above is simplistic, because the researchers detected other implementation problems that sometimes even benefited them and at other times led to slight variations of the attack.

Because the failure is not only in the implementation of the global ICMP limit; the UDP implementation itself does not get off lightly either. According to the RFC, on a single UDP socket applications can receive connections from different source IPs. The verification of who is given what is left by the RFC to the application handling the incoming connection. This, which is supposed to apply to servers (theirs are receiving sockets), also applies to clients. Thus, according to the experiments in the paper, it also applies to the UDP client that opens ports for queries, which makes the attack much easier by allowing “open” query ports to be scanned from any source IP address.

And something very important: what happens if the UDP implementation marks a response port as “private”, so that only the initiator of the connection can talk to it (and others cannot see whether it is open or closed)? This would pose a problem for the first step of the attack, in which the source IPs are spoofed to speed up the process. Opening “public” or “private” ports depends on the DNS server, and only BIND does this well; dnsmasq and Unbound do not. With private ports, the attacker cannot spoof the source IPs of the bursts (the ones used to use up the global limit, whose replies he does not care about receiving); the bursts can only be sent from a single source IP, which makes the scan slower. But no problem: even if the ports are private, there is also a flaw in Linux that “mitigates” this obstacle. The global-limit check is performed before the per-IP limit count. This was originally done because checking the global limit is faster than checking the per-IP limit, but in practice it means the scan does not take much longer, and the technique remains valid even with private ports.

The paper continues with recommendations for forwarders, resolvers… a thorough review of DNS security.

Solutions?

Linux already has a patch ready, but there is much more to address: from DNSSEC, which is always recommended but never quite takes off, to disabling ICMP responses, which can be complex. The kernel patch makes the limit no longer a fixed value of 1000 responses per second but a random value between 500 and 2000. The attacker will therefore not be able to calculate reliably whether the limit has been used up within one second and so deduce the open UDP ports.

It seems that the ultimate origin of the problem is the implementation, not the design. The RFC describes that response rate limit and leaves the specific number open. Choosing it fixed, at 1000, as was done in the kernel in 2014, is part of the problem.

By the way, with this BlueCatLabs script scheduled to run every minute, you can mitigate the problem on a DNS server by doing by hand what the SAD DNS patch will do.

So let’s wait for patches for everyone: the operating systems and the main DNS servers. Many public servers are already patched, but many more are not. This attack is particularly interesting because it is very clean for the attacker: he does not need to be on the victim’s network and can do everything from the outside, confusing the servers. A disaster. Or a blessing, since thanks to it quite a few “loose ends” in the UDP and DNS protocols will be tied up.

Deep Learning and satellite images to estimate the impact of COVID19

AI of Things    23 November, 2020

Motivated by the fact that the Coronavirus Disease (COVID-19) pandemic has caused worldwide turmoil in a short period of time since December 2019, we estimate the negative impact of COVID-19 lockdown in the capital of Spain, Madrid, using commercial satellite imagery courtesy of Maxar Technologies©. The authorities in Spain are adopting all necessary measures, including urban mobility restrictions, to contain the spread of the virus and mitigate its impact on the national economy. These restrictions leave signatures in satellite images that can be automatically detected and classified.

Monitoring vehicles

We focus on the development of a car-counting solution to monitor the presence of visible cars within high-resolution images. Recent studies reveal up to a 90% increase in car traffic when comparing the autumn of 2018 with that of 2019 at several hospitals in Wuhan, China. This observation could mean that an infection was spreading in the community and people required health care services. Similarly, we hypothesize that the number of vehicles decreased drastically during the COVID-19 lockdown in Madrid, so we further investigate how to accurately detect these vehicles using computer vision techniques.

Figure 1: Satellite images courtesy of RSMetrics© suggest that COVID-19 may have been present and spreading through China before the outbreak was first reported to the world.

Deep Learning

For this reason, we research how to accurately detect these vehicles using computer vision techniques. Recently, driven by the success of deep learning-based algorithms, most of the literature has pursued approaches based on Convolutional Neural Networks (CNNs). The main reason for this popularity is that CNNs can automatically learn feature representations, so there is no need for manual feature extraction. As a result, CNNs are attracting widespread interest because of their robustness to appearance changes under “in-the-wild” conditions.

Current object detection approaches typically fail or lose precision due to the relatively small size of the target objects and the vast amount of data to be processed in the presence of multiple “in-the-wild” factors, such as different cities and countries, viewpoint changes, occlusions, illumination, blurriness, and so on.

Figure 2: Challenging appearance variability due to different factors including viewpoint changes (nadir angle), shadows, daylight changes marked by weather and seasons, etc.

Labelled data

We categorize existing approaches into two groups according to whether they estimate the number of cars directly from the image (counting by regression) or learn to detect individual cars first and then count the occurrences to obtain the overall number of small vehicles in the image (counting by detection). We reach the conclusion that the latter approach achieves superior performance (a minimal sketch follows Figure 3).

Figure 3: Both supervised approaches need a set of training images with annotations. Counting by regression requires the overall number of cars as label. Counting by detection requires the vehicle position by setting the bounding box coordinates on each instance.
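
As a purely illustrative sketch (the data structures and threshold below are hypothetical, not the authors’ code), counting by detection boils down to thresholding a detector’s output and counting the surviving boxes, whereas counting by regression would make a single model predict the total directly.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    score: float          # detector confidence
    label: str            # e.g. "small-vehicle"

def count_by_detection(detections: List[Detection], score_threshold: float = 0.5) -> int:
    """Counting by detection: localise every car, then count the instances kept."""
    return sum(1 for d in detections
               if d.label == "small-vehicle" and d.score >= score_threshold)

# Counting by regression would instead be a single scalar output per image,
# e.g. a CNN head trained to predict the overall number of cars directly:
#   predicted_count = regression_model(image)    # hypothetical model
```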

Besides the aforementioned difficulties, studies of object detection in satellite imagery are also challenged by the dataset bias problem, which means that learned models are usually constrained to the same scene on which they were trained. To alleviate such biases, we also train our model using vehicle annotations at different spatial resolutions from the COWC and DOTA benchmarks, reflecting the demands of real-world applications. As far as we know, this is the first time that an algorithm successfully combines images at different resolutions to deal with the lack of properly annotated satellite data.

Madrid dataset

As we expected, we need the highest resolution commercially available in order to detect small vehicles. For this reason, we download 153 satellite images from 22 hot spots around the autonomous community of Madrid with a spatial resolution of 30 cm (WorldView-4 satellite data accessible using SecureWatch©). We select specific areas in Madrid where car-counting is a proxy of activity, such as shopping centres, highway crossings, hospitals, industrial areas and universities, among others.

In the video below we visually observe the reduction in the total number of cars before and during the COVID-19 restrictions. Thus, it seems reasonable to study the overall impact of the lockdown on the traffic volume.

Walkthrough over some processed images to visually perceive the dramatic reduction in the presence of vehicles over Madrid (audio in Spanish with English subtitles).

Results

In the experiments we measure the performance of our proposal and compute car-counting statistics to quantify the dramatic drop in the number of vehicles during the lockdown. We corroborate these statistics using additional indicators such as telco data and traffic sensor data. We reach the conclusion that these insights correlate with official statistics on economic activity, so car-counting statistics can complement traditional measures of economic activity and help policy makers tailor their responses to flatten the recession curve.

Figure 4: Timeline curves of how the COVID-19 outbreak is evolving in Madrid since 2020. Red, yellow and blue colours compare curves obtained using anonymized and aggregated telco data from Telefónica Movistar antennas, traffic statistics acquired from the City Council of Madrid sensors, and by estimating the presence of visible cars with our satellite technology respectively.

Additional information about the vehicle detection technology, the downloaded high-resolution satellite images, a market analysis, and comparative results for each region of interest are also submitted in the supplementary material.

Written by Roberto Valle Fernández.

Don’t miss the next webinar (in Spanish) “Deep Learning and AI to improve traffic in times of Covid19″ that will take place on November 25th. To schedule this event click here. (Remember to follow the registration steps to watch it live)

Other LUCA POCs here.

To keep up to date with LUCA, visit our website, subscribe to LUCA Data Speaks or follow us on Twitter, LinkedIn or YouTube.

Cybersecurity Weekly Briefing November 14-20

ElevenPaths    20 November, 2020

Malware distribution campaign impersonates Spanish ministries

ESET researchers warn of a malware distribution campaign that is impersonating Spanish ministries to distribute a malicious Android application through links sent by WhatsApp. The link provided in the messaging application takes users to a recently created domain, gobiernoeconomica[.]com, which offers information about alleged financial aid. When the website is accessed, an alleged PDF file is automatically downloaded which is, in fact, a malicious application for Android.

More info: https://blogs.protegerse.com/2020/11/18/web-fraudulenta-con-supuestas-ayudas-economicas-del-gobierno-espanol-descarga-troyano-bancario-para-android/

Campaign against organizations in Japan

Symantec researchers have discovered a campaign against Japanese companies in different sectors located in 17 different countries. This campaign had been active for a year, from October 2019 to October 2020, and, according to the researchers, could be attributed to the APT Cicada, also known as APT10, Stone Panda or Cloud Hopper, with espionage as its final purpose. Among the techniques used by Cicada are the use of malicious DLLs and the exploitation of the ZeroLogon vulnerability (CVE-2020-1472). It is worth highlighting that the APT remained within the network of some of the victims for almost a year, which shows the wide range of resources and skills available to this group.

All the details: https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/cicada-apt10-japan-espionage

Vulnerabilities in industrial control systems

Industrial control system providers Real Time Automation (RTA) and Paradox have recently warned of critical vulnerabilities that expose their systems to remote attacks by threat actors. Likewise, the supplier Schneider Electric has addressed nine highly critical flaws in its SCADA systems. According to Claroty researchers, the RTA flaw, assigned CVE-2020-25159, is located in the ENIP stack (versions prior to 2.28), which is used in up to 11 devices from six different suppliers. The vulnerability in Paradox, assigned CVE-2020-25189, is due to a buffer overflow affecting its IP150 internet module; this system is also affected by a second high-severity vulnerability, CVE-2020-25185. Finally, Schneider’s vulnerabilities affect its Interactive Graphical SCADA System and include read and write errors, as well as an incorrect restriction of operations within the memory buffer limits. CISA has also issued alerts on these critical vulnerabilities, as they could allow remote code execution.

More: https://threatpost.com/ics-vendors-warn-critical-bugs/161333/

New cyberespionage campaign called CostaRicto

For the past six months, the Blackberry Intelligence team has been monitoring a cyberespionage campaign targeting a number of victims around the world. The campaign, called CostaRicto, appears to be operated by “hackers-for-hire”, a group of APT mercenaries who use tailored malware and complex VPN proxy and SSH tunnelling capabilities. This type of cybercriminal offering services on demand is becoming popular in sophisticated state-funded campaigns, although on this occasion the diversity of targets makes it impossible to identify the interests of a single group. The campaign has been directed against entities from various sectors, particularly financial institutions, located in Europe, America, Asia, Australia, Africa and, especially, Southeast Asia. Among the set of tools used in the CostaRicto campaign, a custom-designed piece of malware was identified that first appeared in October 2019 and has hardly been seen elsewhere, so it could be exclusive to this operator.

All the details: https://blogs.blackberry.com/en/2020/11/the-costaricto-campaign-cyber-espionage-outsourced

The Challenge of Online Identity (I): Identity Is the New Perimeter

Andrés Naranjo    19 November, 2020

We often find ourselves in situations where we are faced with a mission and, as the mission goes on, we realise that the first choices we made were not good. At that point, we have two options: start over from scratch or make up for that poor decision making with extra work and effort.

The Internet Was Created in an Unsecured Way

This is a phrase that you will surely have heard from cyber security professionals at some point. This exciting change that the digital transformation has meant, one of the most drastic in the history of humanity, has always been built on things that had not been done before. Along the way, we have made choices that have sometimes proved to be wrong. But, as we said before, we find ourselves in the situation where we cannot “reset” the Internet and start from scratch.

One of those bad choices we have been making since the beginning of the Internet is identity management, or in other words, how a system knows which user is using it: for example, who the person accessing an email account is, and how they are distinguished from anyone else. Traditionally this has been based on the use of passwords, which only the user in question should (I must repeat this: should) know. But, either because of the security of the systems themselves (where the password can be intercepted or stolen both on the network and on the device where it is used) or because the user does not make responsible use of them, this system has proved to be extremely fragile.

The User: The Weakest Link in The Chain

This point is extremely easy to check: just take a walk through the “underground” channels of Telegram or the Deep Web (or Dark Net) to see how many premium service accounts are for sale: Netflix, Spotify, Prime Video, Twitch and almost any other type. It is logical to infer that these cheap, unexpired accounts have been stolen from users whose management of credentials, and therefore of identity, clearly leaves room for improvement, whether on the side of the online service or in their personal use.

Prestigious studies on the subject, such as Forrester’s The Identity And Access Management Playbook for 2020, warn us that 81% of security breaches are caused by a weak, stolen or default password. This happens for a multitude of reasons, both because of the user’s responsibility and because of defects in the design or implementation of security.

To a large extent, this is due to the ignorance or carelessness of users who think that nobody could be interested in violating their security as private users. The impact is much greater when a user’s identity is the gateway to a larger entity such as a company or organisation. So much so that the user has currently proved to be the weakest link in the chain: a user vulnerable to cyberattacks makes a company vulnerable. This can be caused, for example, by using a password that is too easy to guess, too common, or reused across more than one site or online service.

I also recommend reading this other article on the proper use of passwords. To exploit these inappropriate uses of passwords, cybercriminals have designed techniques that we will talk about in the second part of this series of articles. So, in order not to delegate that trust to the end user, cyber security companies like ElevenPaths strive to avoid this type of risk by designing products and services that add an additional layer of security to the traditional and obsolete user/password pair, as well as innovative improvements to identity services that we will discuss in the third part of this series.

SmartPattern: The Path to Robust Identity Management

So, where is the challenge? In being able to develop technology that guarantees that extra layer of security without harming the user experience and without forcing the user to learn or adapt to a new identification system. We call robust identity, or level 3 authentication, the combination of:

  • Something you have: for example, a physical device or card.
  • Something you know: for example, a pin or password.
  • Something you are: for example, your fingerprint.

So, as a culmination and, in a way, a spoiler of where this journey of digital transformation is taking us in terms of identity management, we will put forward an example of robust identity management that is both convenient and usable by the common user of technology: SmartPattern.

SmartPattern is a new concept in the process of robust authentication, as well as in the authorisation and signature of documents through a simple mobile pattern gesture, which can be used in any smartphone, tablet or touchpad laptop as an identity service.

In other words, the user does not need to remember or save hundreds of passwords, but simply remembers a single pattern for all online services. The service uses a machine learning engine capable of detecting unique features in the way the pattern is drawn, so that even if the pattern is intentionally shown to another user, impersonation fails in 96% of cases. We were able to verify this in a field study at the University of Piraeus, Greece.

Artificial Intelligence training from SmartPattern

Thanks to its versatility, SmartPattern can be integrated with a multitude of authentication and authorisation services. For example, logging in and/or authorizing a banking transaction, as we have already demonstrated in Nevele Bank’s demo portal: a bank without passwords!

The SmartPattern website offers more information on this subject, but let this innovative and advanced element show that the path to a secure identity will have many more avenues beyond the well-known user/password pairs that we have hitherto considered secure.

This is all for the moment. In the next part we will talk about cybercrime and the market for stolen credentials that continues to grow, both on the Deep Web and on underground Telegram channels.

Rock, Paper, Scissors and Other Ways to Commit Now and Reveal Later

Gonzalo Álvarez Marañón    17 November, 2020

Have you ever played rock, paper, scissors? I bet you have. Well, let’s make it harder: how would you play it over the phone? One thing is clear: the first one to reveal his choice is sure to lose. If you shout “rock”, the other will say “paper” and you will lose again and again. The question is: how do you get both players to commit to a value without revealing it to the other party?

In real life, paper envelopes and boxes are used to commit to a value without revealing it. For example, a judge writes his verdict on a piece of paper and puts it in a sealed envelope. When the envelopes are opened, there is no backing out. Can cryptography create an even more secure digital envelope or box? What a question, of course it can! Let’s see how we can play rock, paper, scissors over the phone thanks to cryptography.

Creating Commitment Schemes Through Cryptographic Protocols

In cryptography, a commitment scheme allows one to commit to a value that will remain hidden until the moment it must be revealed, with no going back. Using the box analogy: you keep your message in a locked box and give the box to someone else. Without the key, they cannot read your message. Even you cannot change it, because the box is in their possession. When you give them the key, they will open the box and then, yes, they can read the message. As you can see from this simple example, a commitment scheme consists of two phases:

  1. The commitment phase: you keep the message under lock in a box and send the box.
  2. The disclosure phase: the receiver of the box opens it with your key and reads the message.

Mathematically, this scheme can be represented as follows:

c = C( r, m )

In other words, the commitment c is the result of applying a public function C to a random value r and a message m. When the sender subsequently discloses the values of r and m, the receiver can recompute c and, if it matches the original, will know that there has been no cheating.

To consider it safe, any commitment scheme must satisfy the following two requirements:

  1. Secrecy (or hiding): at the end of the first phase, the receiver does not obtain any information about the committed value. This requirement must be met even if the receiver is cheating. Hiding protects the interests of the sender.
  2. Unambiguity (or binding): given the value committed in the first phase, the receiver will find that same value in the second phase, after the legal “opening” of the commitment. This requirement must be met even if the sender is cheating. Binding protects the interests of the receiver.

A simple way to implement this scheme digitally is by using a cryptographically secure hash function as the commitment function C. Imagine that our good old friends Alice and Bob want to play rock, paper, scissors over the phone. They can send each other the following information using a generic hash function H and a random value rA, as shown in the picture:

At the end of the protocol, Bob needs to verify that the hash value hA sent by Alice equals the value H( rA || “rock” ) that he computes himself. If both values match, he knows that Alice has not cheated. The result of this protocol is that Alice loses the game, because paper wraps rock.

Let’s follow the above protocol from Alice’s perspective. She first commits to the value “rock” by sending Bob the hash value hA. For his part, Bob cannot yet determine which value Alice has committed to, since he does not know the random value rA that was used and he is unable to reverse the hash function. The fact that Bob cannot determine which value has been committed is the “hiding” (or “secrecy”) property of the commitment scheme.

As soon as Bob sends his own value, “paper”, to Alice, she knows that she has lost, but she is unable to cheat: to trick Bob she would have to invent a different random value, say rA’, such that H( rA’ || “scissors” ) = H( rA || “rock” ). But this fraud would imply that Alice can find collisions in the hash function, which is considered (computationally) infeasible (technically, the hash function is required to be second-preimage resistant). This is the “binding” (or “unambiguity”) property of the commitment scheme: Alice cannot change her mind at the disclosure phase.
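
A minimal Python sketch of this hash-based commitment (an illustration of the protocol above, not a hardened implementation) could look like this:

```python
import hashlib
import secrets

def commit(value: str) -> tuple[bytes, bytes]:
    """Commitment phase: return (c, r) with c = H(r || value) and r a fresh random nonce."""
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + value.encode()).digest(), r

def verify(c: bytes, r: bytes, value: str) -> bool:
    """Disclosure phase: recompute the hash and compare it with the commitment."""
    return hashlib.sha256(r + value.encode()).digest() == c

# Rock, paper, scissors over the phone:
alice_commitment, alice_r = commit("rock")
# Alice sends alice_commitment to Bob; it reveals nothing about her choice (hiding).

bob_choice = "paper"
# Bob announces his choice in the clear; only then does Alice reveal (alice_r, "rock").

assert verify(alice_commitment, alice_r, "rock")   # Bob checks Alice did not cheat (binding)
print(f"Alice played rock, Bob played {bob_choice}: paper wraps rock, Bob wins.")
```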

The use of hashes as a commitment function is the simplest way to implement a commitment scheme. However, in real applications the commitment may be required to exhibit special properties, such as homomorphism, which call for more sophisticated mathematical functions based on variants of Diffie-Hellman, ElGamal or Schnorr. One of the best known examples is the Pedersen commitment, which is very simple and has the following property, very useful in many situations: if you have committed two messages m1 and m2 in the values c1 and c2, respectively, then the product of the two commitments, c1 × c2, is a commitment to the sum of the messages, m1 + m2.
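
For illustration only, here is a toy Pedersen commitment over a deliberately tiny multiplicative group (real deployments use much larger groups or elliptic curves, and h must be generated so that nobody knows log_g h, which this demo does not guarantee). It shows the homomorphic property just described: the product of two commitments opens to the sum of the messages.

```python
import secrets

# Toy parameters: p = 2q + 1 with q prime; g = 4 generates the subgroup of order q.
q = 1019
p = 2 * q + 1                                   # 2039, a safe prime
g = 4
h = pow(g, secrets.randbelow(q - 1) + 1, p)     # demo only: log_g(h) must be unknown in practice

def pedersen_commit(m: int) -> tuple[int, int]:
    """Return (c, r) with c = g^m * h^r mod p and r a random blinding factor."""
    r = secrets.randbelow(q)
    return (pow(g, m % q, p) * pow(h, r, p)) % p, r

def pedersen_open(c: int, m: int, r: int) -> bool:
    return c == (pow(g, m % q, p) * pow(h, r, p)) % p

# Homomorphic property: c1 * c2 is a commitment to m1 + m2 (with blinding r1 + r2).
m1, m2 = 123, 456
c1, r1 = pedersen_commit(m1)
c2, r2 = pedersen_commit(m2)

assert pedersen_open((c1 * c2) % p, m1 + m2, (r1 + r2) % q)
print("c1 * c2 opens as a valid commitment to m1 + m2")
```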

Applications of Commitment Schemes

Commitment schemes are experiencing new applications thanks to the recent development of new cryptographic protocols:

  • Just as we started the article by playing rock, paper, scissors over the phone, we could use versions of the commitment schemes for any other game of chance played remotely: from flipping a coin to playing mental poker and others.
  • Another application of the commitment schemes together with the Zero-Knowledge Proof is the construction of zero knowledge databases, in which queries can be made that only reveal whether a property consulted is true or not, such as whether a person is of age or has a sufficient balance in his account, but without revealing either his age or his balance. In this application a special scheme called mercurial commitment is used.
  • Commitment schemes are also a key part of verifiable secret sharing: the distribution of a secret among several individuals is accompanied by commitments to the shares held by each of them. The commitments reveal nothing that could help a dishonest party, but once they have been opened, each individual can verify that the shares are correct.
  • Commitment schemes are also being used in optimal and credible auctions, where the bidder must commit to the value of a bid without the possibility of backing out.
  • Polyswarm uses commitment schemes along with smart contracts on Ethereum. Polyswarm is a decentralised threat intelligence marketplace where threat detection is encouraged by putting money into the game, in the form of its NCT token. Different manufacturers’ engines can bet money based on the confidence in their own detections. They commit to their verdict by calculating the Keccak hash of the potential malware sample together with a random number; this hash represents their commitment on the artefact, and once an engine has pronounced its verdict there is no turning back. Once the commitment window, during which the engines make their predictions, has closed, they reveal their original verdict and their random number. Those who made correct evaluations are rewarded, while those who failed in their prediction lose their bets. In this way, Polyswarm rewards honest market participation, encouraging quality and unique malware detection.

Online auctions, Internet gambling, smart contract signing, secure database searches, secure multiparty computing, Zero-Knowledge Proof, … there are many applications that require information to be securely hidden until it can be revealed. As interest in public computing environments, such as blockchain, grows, commitment schemes will continue to provide confidence and encourage new secure technologies to flourish.

How Traditional CA’s Are Losing Control of Certificates and Possible Reasons Why Chrome Will Have a New Root Store

Sergio de los Santos    16 November, 2020

It’s all about trust. This phrase is valid in any field. Money, for example, is nothing more than a transfer of trust, because obviously we trust that for a piece of paper that is physically worthless, we can get goods and services. Confidence in surfing comes from root certificates. As long as we trust them, we know that our navigation is not being intervened and we can visit and introduce data in websites with certain guarantees. But who should we trust when choosing these root certificates? Do we trust those provided by the browser or those offered by the operating system? Google has made a move and wants to have its own Root Store system.

At First It Was the CAB/Forum

It is the forum of relevant Internet entities (mainly CAs and browsers) that votes and decides on the future of the use of certificates and TLS in general. Or not, because this year we have seen how a relevant manufacturer acted independently of the result of a vote and unilaterally applied its own criteria. In 2019 the forum voted on whether to reduce the lifespan of TLS/SSL certificates to one year. The result was no, but it made no difference: the browsers took the floor. In February 2020, Safari unilaterally stated that, from 1 September, it would mark certificates issued for more than 398 days as invalid. Firefox and Chrome followed suit. The vote among the parties involved (mainly CAs and browsers) was useless.

Another example is how Chrome led, in a certain way, the deprecation of certificates using SHA-1 by being more and more visually aggressive about the validity of these certificates (red crosses, alerts…), sometimes without being aligned with the deadlines set by the CAB/Forum.

Nothing bad, really; it should not be misunderstood. Browsers can provide a certain agility in transitions. The problem is that the interests of the certification authorities, with a clear business plan, do not always coincide with those of the browsers (represented by companies with sometimes opposite interests). In the end, it seems that whoever is closest to the user calls the shots. There is no point in a CA deciding to issue certificates with a duration of more than one year if the browser used by 60% of users is going to mark them as untrustworthy. Popularity, closeness to the user, is a value in itself that Chrome and others exploit (as Internet Explorer did in its time) in order to impose “standards”.

The Root Store… Everyone Had Their Own and Now Chrome Wants One

Windows has always had a Root Store with the root certificates it trusts; Internet Explorer and Edge feed on it, and Apple and Android do exactly the same. The most popular browser with its own independent Root Store was Firefox. And this, sometimes, caused problems. In 2016, Firefox was the first to stop trusting WoSign and StartCom because it did not trust their practices; the rest followed immediately. In 2018, Apple, Google and Firefox stopped trusting Symantec certificates. They used traditional blocking (by various means), not necessarily removing them from their Root Stores.

In general, browsers were moving in this direction. If Edge wanted to stop trusting something, Microsoft would take care of it in Windows. If it was Safari, Apple would remove it from the Root Store of the Mac and the iPhone. If Chrome wanted to control whom to trust, it could do so on Android, but what about Chrome on Windows, on Mac, on iPhone… and Chrome on Linux? That piece was missing from the puzzle and made Chrome dependent on the criteria of a third party.

Now Chrome wants its own Root Store so that it does not have to depend on anyone. In its statement it defends the move mainly by explaining how this brings homogeneity across platforms. Not all of them, because it specifically mentions that on iOS this step is forbidden, so Chrome will continue using the root store imposed by Apple there. For the rest, it explains its criteria for inclusion as a trusted root certificate (which, in principle, are the standard ones), and states that it will of course respond to incidents that undermine trust in a CA.

But why would a browser want its own Root Store? In 2019, Mozilla once again recalled why it had always had one and why it was necessary: mainly to “reflect its values” (which others may also translate as “interests”). But apart from the homogeneity that Mozilla also mentions, one sentence in its explanation hits the nail on the head: “In addition, OS vendors often serve customers in government and industry in addition to their end users, putting them in a position to sometimes make root store decisions that Mozilla would not consider to be in the best interest of individuals.” Mozilla does not trust them. It also mentions that the fact that the operating system inserts certificates into its Root Store to analyse traffic (as antivirus software does) does not affect it. Always putting individual freedoms first, as it did by imposing DoH and forcing a certain choice between security and privacy.

What about Google’s motivations? Will they be similar? On paper, yes: they want homogeneity. But let’s not forget that, as Mozilla subtly reminded us, whoever controls the Root Store independently of the operating system can also choose who, at any given time, can access the encrypted traffic. Apart from being a headache for the administrator.

So, in the end it seems to be, again, a question of trust… or perhaps mistrust? Chrome, now mature and with great influence on the market, wants us to trust it and its policy of access to its Root Store. In turn (in the light of the reasons given by Mozilla), could this not be interpreted as a slight mistrust of the platform where Chrome runs? Is this not a further step in the distancing from the CAs themselves? An attempt, after all, to have more control?

Cybersecurity Weekly Briefing November 7-13

ElevenPaths    13 November, 2020

Links between Vatet, PyXie and Defray777

Researchers from Palo Alto Networks have investigated the malware families and operational methodologies used by a threat actor that has managed to go unnoticed while compromising entities in the health, education, technology and institutional sectors. The group, active since 2018 and financially motivated, is responsible for the creation of Vatet, a loader that allows the execution of payloads such as PyXie RAT and Cobalt Strike. In some intrusions, a previous step can be observed in which typical banking Trojans such as IcedID or Trickbot are used as an entry point to subsequently download Vatet and its payloads, carrying out reconnaissance and information exfiltration tasks before running the Defray777 ransomware in memory. The researchers estimate that this group is responsible for the creation and maintenance of Vatet, PyXie and Defray777.

Microsoft Security Newsletter

Microsoft has published its monthly security update, known as Patch Tuesday, in which the company has fixed 112 vulnerabilities in several of its products. 17 vulnerabilities have been classified as critical, 12 of which are remote code execution (RCE) flaws. Among the vulnerabilities published by the Redmond company, CVE-2020-17087 (CVSS 7.8) stands out: a local privilege escalation vulnerability in the Windows kernel, which had already been discovered by Google Project Zero and actively exploited. Likewise, the critical vulnerability CVE-2020-17051 (CVSS 9.8) allows remote code execution in the Windows Network File System (NFS). The Automox research team warns that, in the coming days, it expects an increase in scanning of port 2049 as a result of this vulnerability. Finally, they highlight the vulnerabilities CVE-2020-17052 (CVSS 7.5) and CVE-2020-17053 (CVSS 7.5), memory corruption flaws that could lead to remote code execution in Microsoft’s Scripting Engine and Internet Explorer.

Two new 0-day in Chrome

Yesterday, Google published the correction of two new 0-day vulnerabilities in its Chrome browser that would be actively exploited. The first of these (CVE-2020-16013) is due to an incorrect implementation of its JavaScript V8 engine. The second one (CVE-2020-16017) is a use-after-free memory corruption bug in the Site Isolation security component. Google indicates that they have evidence of the existence of exploits for these vulnerabilities. With the release of this new browser version (86.0.4240.198), Google has corrected five 0-day bugs in less than three weeks.

Distribution of malware through fake Microsoft Teams updates

According to Bleeping Computer, Microsoft is allegedly alerting its users through a private note about a campaign of fake Microsoft Teams updates carried out by ransomware operators. In this campaign, threat agents are reportedly exploiting malicious advertisements so that, when searching for the Teams application in search engines, the main results lead to a domain under the control of the attacker. By accessing the malicious link, the payload would be downloaded hidden under a legitimate Teams update. According to Microsoft, in most cases, the initial payload was the infostealer Predator the Thief, which allows the exfiltration of sensitive information from the victim. However, Bladabindi and ZLoader malware have also been detected, as well as Cobalt Strike to perform lateral movement on the infected network and subsequently launch the ransomware.

New malware against the hospitality sector

ESET researchers have discovered a new modular backdoor, called ModPipe, which targets point-of-sale (POS) management software with the aim of stealing sensitive information stored on these devices. This backdoor affects the RES 3700 POS systems from Oracle MICROS, a software used in many restaurants, bars and other hospitality establishments worldwide. The malware consists of a dropper through which a loader is installed to gain persistence. The next step is to implement the main module in charge of establishing communications with other downloadable modules that would allow, among other actions, deciphering and stealing passwords from the databases, obtaining the running processes or scanning IP addresses.

Smart Stadiums: How 5G is revolutionising live Sports

Patrick Buckley    13 November, 2020

As we all look forward to returning to live events in a post-pandemic world, in today’s post we share with you the latest exciting innovation in smart stadiums, the implementation of in-stadium 5G coverage.

5G Is set to be a game-changer for fans at home and in the stadium, allowing for enhanced IoT connectivity, faster internet speeds and better quality live streams for those unable to make it to the match. 

This is no distant dream! In 2019, Telefónica teamed up with FC Barcelona to deliver Europe’s first 5G connected stadium at Camp Nou, and many football clubs and stadiums worldwide are following suit. In the US alone, there are 13 NFL stadiums connected by a 5G network powered by Verizon.

Powering the passion for the game 

Stadiums have always been more than simply physical spaces, they can be defined better by the atmosphere created in them as fans and players come together to celebrate their love of the game. Promoting this passion that spectators have is the main driver behind sports clubs seeking 5G coverage in their stadiums.

The atmosphere of camaraderie is promoted by allowing fans to connect easily to social media platforms. Previously, low bandwidth and high latency often prevented game-goers from accessing the internet quickly or to any usable extent, due to the extremely high levels of traffic within the stadium. Now, thanks to the upgraded network, fans are able to tweet, share and react to game updates in real time, allowing for a more interactive game experience for those both inside and outside the stadium.

Enhanced IoT Connectivity 

Over the last few years, many football clubs have invested in IoT technologies in their stadiums to improve the game experience for fans and allow for better crowd management. Back in 2018, Telefónica teamed up with Atlético Madrid to deliver the world’s first smart stadium at the Civitas Metropolitano, implementing enhanced IoT-powered security management systems, smart scoreboards and a wrap-around IoT-powered LED lighting system.

By upgrading network coverage within the stadium, sports clubs are laying the infrastructure for the next generation of IoT technology powered by 5G. 5G networks can support far more connected devices than previous network generations, paving the way for even more innovations to be introduced in the future, such as security cameras, crowd control technology and smart information displays.

And for those stuck at home… 

As we are currently unable to go to live games due to the Covid-19 pandemic, it has become more important than ever for sports clubs and broadcasters to deliver the best live streaming capabilities possible. Thanks to the increased capabilities of IoT-connected devices, the quality of live streams has increased dramatically. According to FC Barcelona, 5G technology allows for the live streaming of 4K 360º footage, allowing fans to experience the game in real-time virtual reality and totally redefining how we watch games from home.

Conclusion

We cannot underestimate the power of 5G to revolutionise the way in which we experience live sporting events. As more stadiums implement this technology, we will soon take for granted the ability to connect to social media easily whilst in the stadium and remotely view games in 360º 4k ultra-high definition. Further to this, the capabilities of IoT devices within stadiums are significantly enhanced thanks to 5G, bringing never before seen innovations to stadiums around the world. 

To keep up to date with Telefónica’s Internet of Things area, visit our website or follow us on Twitter, LinkedIn and YouTube.