
An update on MD5 poisoning

Last year we published a proof-of-concept tool to demonstrate bypasses against security products that still rely on the obsolete MD5 cryptographic hash function.

Summary: The method bypasses malicious-executable detection and whitelists by creating two executables with colliding MD5 hashes. One of them (the “sheep”) is harmless, may even perform some useful task, and is expected to be categorized as goodware by the victim. After the sheep is accepted by the victim, the colliding malicious version (the “wolf”) is sent. Because affected products rely solely on MD5 fingerprints to identify known good executables, the wolf is already whitelisted and can run.
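A quick way to verify such a pair is to compare digests with a few lines of Python (file names are illustrative): the MD5 values are identical, while any modern hash tells the two files apart:

import hashlib

def digests(path):
    data = open(path, "rb").read()
    return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()

# Colliding pair produced by a collision generator: the MD5 digests
# match, while the SHA-256 digests (and the behavior) differ.
print(digests("sheep.exe"))
print(digests("wolf.exe"))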

The reception of the research was generally positive, but some were skeptical about the extent and even the validity of the issue. Although we have since received information about more affected products, NDAs prevented us from further demonstrating that the problem indeed exists and affects multiple vendors.

Today we are able to share a demonstration of the problem affecting Panda Adaptive Defense 360. The issue is demonstrated against the stricter “Lock mode” of the product, in which the Panda agent only allows known good executables to run (application whitelisting). For the sake of this video we manually unblock the harmless executable version (sheep4.exe) to speed up the process, as otherwise the analysis could take several hours to complete (it was confirmed that the “sheep” executables aren’t detected as malicious by the cloud scanner when they are not manually unblocked):


(You can skip 01:00-01:55 if you are not interested in the policy update)

We notified Panda Security about this issue through their Hungarian partner (see the timeline at the end of this post). Panda responded that this is a known issue that is expected to be fixed in the next major version, but no ETA was provided. Panda stated that MD5 was chosen for performance reasons. We informed Panda that the BLAKE2 hash function provides a higher level of security at better performance than MD5 (thanks to Tony Arcieri for this update!).
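For reference, BLAKE2 is already widely available; Python's standard hashlib module, for example, exposes it directly (file name illustrative):

import hashlib

data = open("sample.exe", "rb").read()

# BLAKE2 is faster than MD5 in most benchmarks while offering full
# cryptographic collision resistance.
print(hashlib.blake2b(data).hexdigest())
print(hashlib.md5(data).hexdigest())  # the legacy fingerprint, for comparison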

We’d like to stress that this research is not about individual vendors but about bad practices prevalent in the security industry. We now know of at least four vendors affected by the above problem, and several others still provide MD5 fingerprints only in their tools and public reports. It is shameful that while hard work is put into phasing out SHA-1, the security industry still finds it generally acceptable to use MD5, even after it was exploited in a real-world incident. We understand that there are more straightforward ways of evasion, but we think this issue is a good indicator of how security product development is often approached.

We should do better than this!

Timeline

2016.08.30: Sending technical information to vendor.
2016.09.05: Vendor requests more information, including PCOPInfo logs collected during retest.
2016.09.06: Sending demo video and identification information about product instance. Requesting more information about PCOPInfo usage.
2016.09.06: Vendor responds with instructions about PCOPInfo.
2016.09.08: Sending PCOPInfo logs to vendor.
2016.09.19: Vendor responds that this is a known issue, replacement algorithm is expected to be implemented in the next version.
2016.09.27: Requesting negotiation about issue publication date.
2016.10.12: Requesting negotiation about issue publication date. Including notification about 90-day disclosure deadline in case no agreement would be achieved.
2016.10.19: Vendor responds, internal discussion is still in progress.
2016.11.16: Requesting information about acceptance of publication date.
2016.11.28: Public release.

Bake your own EXTRABACON

In the last couple of days we took a closer look at the supposed NSA exploit EXTRABACON, leaked by the Shadow Brokers. As an initial analysis by XORcat concluded, the code is capable of bypassing authentication on Cisco ASA devices after exploiting a memory corruption vulnerability in the SNMP service. We managed to analyze and test the code in our lab and even add support for version 9.2(4) (which created quite a bit of hype :). While we don’t plan to release the upgraded code until an official patch is available for all affected versions, in this post we try to give a detailed description of the porting process: what the prerequisites are and how much effort is required to extend its capabilities. We also hope that this summary will serve as a good resource for those who want to get started with researching Cisco ASA.

Continue reading Bake your own EXTRABACON

Finding the salt with SQL inception

Introduction

Web application penetration testing is a well-researched area with proven tools and methodologies. Still, new techniques and interesting scenarios come up all the time, creating new challenges even after a hundred projects.

In this case study we start with a relatively simple blind SQL injection situation and show how this issue could be exploited to achieve remote code execution. The post will also serve as a reference for using Duncan, our simple framework created to facilitate blind exploitation.
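As a taste of what such a framework automates, here is a minimal sketch of the classic bisection trick used in blind exploitation (illustrative only; the oracle function and the Oracle-style SQL are placeholders, not Duncan's actual API):

def oracle(condition):
    # Placeholder for one blind request: returns True when the page
    # content (or the response time) shows that `condition` held.
    raise NotImplementedError

def extract_char(query, pos):
    # Binary search on the ASCII code of one character: about 7 requests
    # per character instead of about 95 with linear guessing.
    lo, hi = 32, 126
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle("ASCII(SUBSTR((%s),%d,1)) > %d" % (query, pos, mid)):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)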

Continue reading Finding the salt with SQL inception

Poisonous MD5 – Wolves Among the Sheep

MD5 has been known to be broken for more than a decade now. Practical attacks have been demonstrated since 2006, and public collision generator tools have also been available since that time. The dangers of the developed collision attacks were demonstrated by academia and white-hat hackers too, but in the case of the Flame malware we have also seen malicious parties exploiting the weaknesses in the wild.

And while most have already moved away from MD5, there is still a notable group that heavily uses this obsolete algorithm: security vendors. MD5 seems to have become the de facto standard for fingerprinting malware samples, and the industry doesn’t seem willing to move away from this practice. Our friend Zoltán Balázs collected a surprisingly long list of security vendors using MD5, including the biggest names of the field.

The list includes, for example, Kaspersky, the discoverer of Flame, who just recently reminded us that MD5 is dead, but only a few weeks earlier released a report including MD5 fingerprints only – ironically, even the malware they analysed uses SHA-1 internally…

And in case you think that MD5 is “good enough” for malware identification, let’s take another example. The following picture shows the management console of a FireEye MAS – take a good look at the MD5 hashes, the time delays and the status indicators:

[Screenshot: FireEye MAS management console showing duplicate submissions]

Download sample files

As you can see, binaries submitted for analysis are identified by their MD5 sums and no sandboxed execution is recorded if there is a duplicate (thus the shorter time delay). This means that if I can create two files with the same MD5 sum – one that behaves in a malicious way while the other doesn’t – I can “poison” the database of the product so that it won’t even try to analyze the malicious sample!
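The underlying logic is easy to model (illustrative Python, not FireEye's actual code; run_in_sandbox is a stand-in for the analysis step): a verdict cache keyed on MD5 alone means a colliding wolf submitted after a benign sheep inherits the sheep's verdict and is never detonated:

import hashlib

def run_in_sandbox(path):
    # Placeholder for the expensive dynamic analysis step.
    return "clean"

verdict_cache = {}

def analyze(path):
    fingerprint = hashlib.md5(open(path, "rb").read()).hexdigest()
    if fingerprint not in verdict_cache:  # duplicates skip the sandbox
        verdict_cache[fingerprint] = run_in_sandbox(path)
    return verdict_cache[fingerprint]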

After reading Nat McHugh’s post about creating colliding binaries I decided to create a proof-of-concept for this “attack”. Although Nat demonstrated the issue with ELF binaries, the concept is basically the same for the Windows (PE) binaries that security products mostly target. The original example works by diverting the program execution flow based on the comparison of two string constants. The collision is achieved by adjusting these constants so that they match in one case, but not in the other.

My goal was to create two binaries with the same MD5 hash; one that executes arbitrary shellcode (wolf) and another that does something completely different (sheep). My implementation is based on the earlier work of Peter Selinger (the PHP script by Nat turned out to be unreliable across platforms…), with some useful additions (a rough sketch of the resulting template logic follows the list):

  • A general template for shellcode hiding and execution;
  • RC4 encryption of the shellcode so that the real payload only appears in the memory of the wolf but not on the disk or in the memory of the sheep;
  • Simplified toolchain for Windows, making use of Marc Stevens’ fastcoll (Peter used a much slower attack; fastcoll reduces collision generation from hours to minutes).
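In heavily simplified form the template logic looks like this (illustrative Python standing in for the real C template; which binary carries the matching block pair is a build-time choice, here the wolf does):

def rc4(key, data):
    # Minimal RC4: key scheduling followed by the keystream XOR.
    S, j = list(range(256)), 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def run(block_a, block_b, encrypted_payload):
    # The two executables differ only in these embedded blocks (fastcoll
    # output). Only the binary with the matching pair reconstructs the
    # payload; the sheep never holds the plaintext, on disk or in memory.
    if block_a == block_b:
        payload = rc4(block_a, encrypted_payload)  # wolf: decrypt...
        print("wolf: executing %d payload bytes" % len(payload))  # ...and run
    else:
        print("sheep: performing the advertised harmless task")

# Example: with matching blocks the payload round-trips through RC4.
run(b"A" * 8, b"A" * 8, rc4(b"A" * 8, b"demo"))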

The approach may work with traditional AV software too, as many of these also use fingerprinting (not necessarily MD5) to avoid wasting resources on scanning the same files over and over (although thanks to the RC4 encryption the samples score 0/57 on VT anyway…). It would also be interesting to see whether “threat intelligence” feeds or reputation databases can be poisoned this way.

The code is available on GitHub. Please use it to test the security solutions within your reach and to persuade vendors to implement up-to-date algorithms before compiling their next marketing APT report!

For the affected vendors: Stop using MD5 now! Even if you need MD5 as a common denominator, include stronger hashes in your reports, and don’t rely solely on MD5 for fingerprinting!

Testing Oracle Forms

SANS Institute accepted my GWAPT Gold Paper about testing Oracle Forms applications; the paper is now published in the Reading Room.

Forms is a typical example of proprietary technology that back in the day might have looked like a good idea from a business perspective, but years later causes serious headaches on both the operational and security sides:

  • Forms uses naively implemented crypto with (effectively) 32-bit RC4
  • The key exchange is trivial to attack to achieve full key recovery
  • Bit-flipping is possible since no integrity checking is implemented (see the sketch after this list)
  • The database password set on the server side is sent to all clients (you read that correctly)
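The bit-flipping issue deserves a quick illustration, as it is generic to any stream cipher used without integrity protection (toy Python example, not the Forms wire format):

from itertools import cycle

keystream  = bytes(range(1, 17))  # stand-in for the RC4 keystream
plaintext  = b"TRANSFER=0001000"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, cycle(keystream)))

# C = P XOR keystream, so flipping ciphertext bits flips the same
# plaintext bits, no key knowledge needed.
tampered = bytearray(ciphertext)
tampered[9] ^= ord("0") ^ ord("9")  # rewrite the first digit of the amount

decrypted = bytes(c ^ k for c, k in zip(tampered, cycle(keystream)))
print(decrypted)  # b'TRANSFER=9001000'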

And in case you’re wondering: applications based on Oracle Forms are still in use, thanks to vendor lock-in…

The full Gold Paper can be downloaded from the website of SANS Institute:

Automated Security Testing of Oracle Forms Applications

The accompanying code is available on GitHub.

CVE-2014-3440 – Symantec Critical System Protection Remote Code Execution

Today we release the details of CVE-2014-3440, a remote code execution vulnerability in Symantec Critical System Protection. You can get the detailed advisory on the following link:

CVE-2014-3440 – Symantec Critical System Protection Remote Code Execution

We reported the vulnerability with the help of Beyond Security, and Symantec fixed it on 19.01.2015. Although we didn’t manage to get a nice logo :), in this blog post we give some additional information regarding the issue. This is another example that, just like any other software, security products can introduce new security risks to their environment too.

First of all, here’s a short video demonstrating our exploit in action:

The exploit consists of two parts: first we need to register ourselves as an Agent on the SCSP Server. In default installations Agents can be registered without any authentication or prior knowledge. Before the official Symantec advisory was available I asked some of my peers if this step can be hardened or mitigated somehow, but they couldn’t present any method to do this. And even if Agent registration were somehow restricted, the attack could be executed from a compromised host with an Agent installed.

During the registration we obtain a GUID that allows us to make use of the bulk log upload feature. This feature is affected by a path traversal issue that allows arbitrary file writes leading to RCE. The exploitation method is detailed in our advisory.
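The pattern behind such bugs is simple (illustrative Python, not Symantec's actual code; LOG_DIR is hypothetical): joining a client-supplied file name to a storage directory without normalization lets the client write anywhere the service account can:

import os

LOG_DIR = r"C:\ProgramData\SCSP\logs"  # hypothetical storage directory

def save_log_unsafe(filename, content):
    # Vulnerable: a name like "..\\..\\inetpub\\wwwroot\\shell.aspx"
    # escapes LOG_DIR, turning log upload into arbitrary file write.
    with open(os.path.join(LOG_DIR, filename), "wb") as f:
        f.write(content)

def save_log_safe(filename, content):
    # Fix: normalize first, then verify the result stays inside LOG_DIR.
    path = os.path.realpath(os.path.join(LOG_DIR, filename))
    if not path.startswith(os.path.realpath(LOG_DIR) + os.sep):
        raise ValueError("path traversal attempt")
    with open(path, "wb") as f:
        f.write(content)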

Regarding the official workarounds: I actually giggled when I read about placing the log directory on a different physical drive, but quickly realized that this is not the first case where software exploitation is mitigated by hardware :) The workaround definitely would work, and you only need a different logical partition (not a full physical drive) so that the drive letters differ.

As for the Prevention policies, let’s just say that I studied SCSP to measure the behavioral detection capabilities of different endpoint products, and wasn’t really impressed (by any of the tested products)…

Bonus

While studying SCSP I stumbled upon another interesting thing that I wouldn’t consider a vulnerability, more like a really strange design decision. In a nutshell, the SCSP management interface allows connecting to different servers, and the server list can be edited without authentication. This way an attacker can replace the address of a legitimate server with her own and wait for administrators to log in:

And if you are wondering what that binary thing at the end is, here are some clues:

// Decompiled from the management console: "enciphering" just XORs in place.
public byte[] encipherPassword(byte pwd[])
{
    XOREncoding.encode(pwd);
    return pwd;
}

// Every byte is XORed with the constant 0x5a ('Z'), which is trivially reversible.
public static void encode(byte data[])
{
    for(int i = 0; i < data.length; i++) data[i] ^= 0x5a;
}
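XOR with a constant is its own inverse, so recovering a captured password is a one-liner; for example in Python (the blob value is hypothetical):

# Applying the same XOR again "decrypts" the stored password.
blob = bytes.fromhex("296939283f2e")   # hypothetical captured bytes
print(bytes(b ^ 0x5a for b in blob))   # -> b's3cret'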

Code Review on the Cheap

At the 31st Chaos Communication Congress I had the pleasure of watching Fabian Yamaguchi’s presentation about the code analysis platform Joern. I had heard about this tool before at Hacktivity, but this time I got a deeper view of the internals and capabilities of the software, which inspired me to clean up and release some code I have used for years in code review projects.

My idea was quite similar to that of Fabs: recognize potentially vulnerable code patterns, and create a tool that is aware of the syntax in order to look for anomalies in the source efficiently. Yes, I could use regular expressions and grep, but eventually I faced several problems like:

  • Ugly source code 
  • Keywords inside comments or strings
  • The complexity of the regex itself

Since I have to deal with several different languages, I also had to come up with a solution that is easily applicable to new grammars. In the end I settled on a solution somewhere between grep and Joern and implemented it in CROTCH (Code Review On The CHeap):

Instead of full-blown parsers, CROTCH only uses lexers, which are readily available in all flavors in the Pygments Python package (intended for source code highlighting), providing support for almost every language one can think of. Other lexers can also be created very easily, although installing them requires some setuptools-fu. With these lexers we can tokenize the source code, splitting it into meaningful parts and assigning types (like scalar, string, comment, keyword, etc.) to them. This is of course far from having an AST, for example, but it allows us to start creating small, targeted “mini-parsers” which focus only on the relevant parts of the source.

These “mini-parsers” can be implemented through a general purpose state machine, usually in a few dozen lines of Python. The example provided on GitHub implements a state machine similar to the following to identify simple format string bugs without bothering with most of the grammar:

[State machine diagram: matching printf calls to check their format arguments]
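A minimal Python sketch of such a mini-parser (illustrative; the example on GitHub may differ in detail): flag printf calls whose first argument is not a string literal, i.e. potential format string bugs:

from pygments.lexers import CLexer
from pygments.token import Comment, Name, String

SINKS = {"printf"}  # real rules would track argument positions per sink

def find_format_string_bugs(source):
    state, findings = "IDLE", []
    for tokentype, value in CLexer().get_tokens(source):
        if tokentype in Comment or not value.strip():
            continue  # comments and whitespace never change the state
        if state == "IDLE" and tokentype in Name and value in SINKS:
            state = "CALL"  # saw a sink name
        elif state == "CALL":
            state = "ARGS" if value == "(" else "IDLE"
        elif state == "ARGS":
            if tokentype not in String:
                findings.append(value)  # first argument is not a literal
            state = "IDLE"
    return findings

print(find_format_string_bugs('printf(buf); printf("ok %d", x);'))
# -> ['buf']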

We successfully utilized this approach to find bugs at scales from enterprise Java applications to large PL/SQL projects; we hope you’ll find CROTCH useful too! As always, issues, feature requests and pull requests are welcome on our GitHub!

Trend Micro OfficeScan – A chain of bugs

Analyzing the security of security software is one of my favorite research areas: it is always ironic to see software originally meant to protect your systems open a gaping door for attackers. Earlier this year I stumbled upon the OfficeScan security suite by Trend Micro, a probably lesser-known host protection (AV) solution still used on some interesting networks. Since this software looked quite complex (big attack surface), I decided to take a closer look at it. After installing a trial version (10.6 SP1) I could already tell that this software would be worth the effort:

  • The server component (which provides centralized management for the clients that actually implement the host protection functionality) is mostly implemented through binary CGIs (.EXE and .DLL files)
  • The server updates itself through HTTP
  • The clients install ActiveX controls into Internet Explorer

And there are possibly many other fragile parts of the system. Now I would like to share a series of little issues which can be chained together to achieve remote code execution. The issues are logic and/or cryptographic flaws, not standard memory corruption issues. As such, it is not trivial to fix them, or even to decide whether they are in fact vulnerabilities. This publication comes after months of discussion with the vendor, in accordance with the disclosure policy of the HP Zero Day Initiative.

Continue reading Trend Micro OfficeScan – A chain of bugs

OWASP Top 10 is overrated

OWASP Top 10 doesn’t need an introduction: it’s certainly the most well-known project of the Open Web Application Security Project (OWASP), referenced by every single presentation, paper, brochure and blog post that is at least slightly related to web application security. 

But unfortunately, in my experience, most of the time the OWASP Top 10 is the only thing people can think of when it comes to (web)app security, even though OWASP and other groups provide materials of much higher professional value. This affects not only novice programmers, but also people in corporate IT security teams, and even quite a few security vendors and consulting firms.

In the next sections, I will try to shed light on the misunderstandings around the Top 10, the weaknesses of the project, and reasons why I think it doesn’t fit many purposes it is (ab)used for right now.

Good Definitions

To see what the Top 10 is good for, we can simply take a look at how the Top 10 project defines its goals:

"The goal of the Top 10 project is to raise awareness about application security by identifying some of the most critical risks facing organizations."

This is a very accurate description that deserves some more attention. 

Raising awareness is a really important task that needs to be taken care of. However, raising awareness in itself was never meant to prevent, identify or solve any problem. The HIV problem is not solved by simply telling people that they should only have sex in a safe way (whatever that means…).

Moving forward we read that the Top 10 helps “identifying some of the most critical risks”. Not all the risks. Not all the critical risks.

All in all, with the Top 10 we may get a sneak peek into the problems of web application security, but we will definitely not see the big picture. The Top 10 can’t be used as a silver bullet when it comes to web application security; this is what the creators themselves tell you. Unfortunately, as it turns out, people prefer short, simple lists as the solution for their overly complex problems.

Risk Rating

We are most afraid of “critical” vulnerabilities, and the Top 10 project provides a seemingly good way to put up a fight against them. But what vulnerabilities are critical anyway?

“Top 10-only” people are usually unaware of the fact that OWASP also maintains its own Risk Rating Methodology. This methodology helps us assess the risk of vulnerabilities by measuring different factors of a potential attack, like its impact on our business or the motivation of the attacker. What we have to keep in mind is that these factors depend strongly on the wider context of the application.

The Top 10 is created by measuring the usual impact of some common vulnerability classes. But web applications are difficult (from both security and development standpoints) because they are mostly developed “from scratch” (hopefully at least based on some kind of framework…) to meet the custom specifications of the customer. Business processes of big companies run through series of independent webapps developed by different companies on different platforms at different times. Apart from the vulnerabilities arising from the interaction of these components (which are obviously not covered in the Top 10), assessing the severity of even simple-looking issues can be far from obvious if you take the application context into account.

How severe is an SQL injection issue in an application with functionality similar to that of phpMyAdmin? Have you ever tried exploiting an SQL injection issue with a 10-character input limit? How often would you associate critical impact with an open redirect issue (No. 10 in the Top 10 2013)? Are insecure file uploads (which usually result in RCE) even covered by the Top 10?

For the uninitiated reader, the Top 10 could suggest that the severity of security vulnerabilities can be measured merely by some of their technical properties; this contradicts the basic considerations of any serious security concept. The truth is that the Top 10 is about web applications in general, and not about your web application.

Bad Definitions

After the honest and clear self-definition of the project, you could assume that it will at least provide you with a useful classification for the vulnerabilities you can encounter. But it does not. Some examples:

If we create a big set of injection problems (A1) and say that “injection occurs when untrusted data is sent to an interpreter as part of a command or query”, why don’t we drop XSS (A3) in the bag too? I can think of the browser as an interpreter and HTML tags or embedded scripts as its commands.

Doesn’t the presence of insecure direct object references (A4) imply the lack of function level access control (A7)?

Can’t we just think about every single security issue as Security Misconfiguration (A5)?

I suspect that in order to raise awareness about some undoubtedly important issues, the Top 10 team tried to include too many things in this short, 10-element list. As a result, the classification became unclear, and instead of shedding light on fundamental issues, it left its readers alone to fight the complexity of application security.

A developer friend of mine asked me the other day if I could point him to some useful resources about web application security. First I wanted to give him a list including the OWASP Top 10, but then I decided not to, because I didn’t want to provide him with confusing material at the very first step on his way into security.

Some common misunderstandings

Finally I would like to answer some common questions regarding the above thoughts and the Top 10 in general:

– “You are bashing OWASP”

– I’m not bashing OWASP. I’m only bashing the Top 10 project and those who are lazy enough to see it as a comprehensive solution for their appsec problems. I’m actually an admirer of OWASP, and use their fine materials every single day. I also do respect the work of the Top 10 project members, and I’m open for discussions about how we could make things better.

– “At least the Top 10 gives us a QA baseline to hold on to”

– The Top 10 was never meant to be a baseline for anything (there is even a separate OWASP project for that). A baseline should be something that you can enforce and verify, but the Top 10 doesn’t give you anything to do such things (other OWASP projects can help you, though). You should also consider whether such a baseline could ever be created, taking the high diversity of web applications into account.

– “The Top 10 gives you all the references to really useful materials”

– This is true, and this is what I find most valuable in the project. But people are lazy and tend to ignore those materials. Why would someone waste time learning robust development methodologies and frameworks when he can solve all his problems by “using bind variables in all prepared statements and stored procedures, avoiding dynamic queries”? And although the Top 10 tells its readers “Don’t stop at 10”, they actually do, because it is always easier to check the items of a list than to think through the problem we are facing.

The OWASP Top 10 can be a good starting point for less tech-savvy people (like journalists), but it shouldn’t be thought of as professional material. I hope that an audience already familiar with the Top 10 will also find the more valuable OWASP materials, which can effectively help create and maintain more secure systems.

Update: here are some links to other OWASP projects that can give you more useful guidance on dealing with web application security: