
Proxying nonstandard HTTPS traffic

Depending on the time they have spent in IT, most professionals have seen an instance or two where developers based their implementations on specific quirks and other non-standard behaviors. A well-known example is greylisting; another oft-used but less-known one is Wi-Fi band steering. In all these cases, the solution works within a range of implementations, which usually covers most client needs. However, just one step outside that range can result in lengthy investigations into how such a simple thing as sending an e-mail or joining a Wi-Fi network can go wrong.

During one of our application security assessments, we found an implementation that abused the way HTTP works. The functionality required server push, but the application came from the age before WebSockets and HTTP/2, and its developers probably chose HTTP anyway to get through corporate firewalls. So they came up with the idea of sending headers like the ones below, then keeping the connection open without sending any data.

HTTP/1.0 200 OK
Server: IDealRelay
Date: Wed, 16 Sep 2015 18:03:24 GMT
Content-length: 10000000
Connection: close
Content-type: text/plain

Googling the value of the Server header revealed mostly false positives; however, as it turned out, there is even a patent, US 7734791 B2, titled Asynchronous hypertext messaging, that describes this behavior. Sending data to the server happened over separate HTTP channels, which themselves returned worthless responses, while the real response arrived on the first HTTP channel, the one promising to deliver 10 megabytes in its Content-length header.

Some corporate proxies might have handled this well; the Proxy module of Burp Suite, however, just hung there for all eternity, waiting for the full 10 MB to arrive – and even if it had managed to receive some data, its Scanner module couldn’t have handled the correlation between sending a payload in one request and getting the response in another. To solve the problem, I decided to throw together a simple HTTPS proxy that doesn’t assume much about the contents: it just accepts CONNECT requests, performs a man-in-the-middle (MITM) attack and dumps the plaintext traffic into a file.

Multiple streams needed to be handled asynchronously, so I chose Erlang for its unique properties and used built-in modules for everything except generating the fake certificates for MITM. Since such certificates are cached, I chose to sign them using the command-line version of OpenSSL for the sake of simplicity, and a whitelist is applied to the hostname to avoid command injection attacks – even though this is supposed to be a tool for debugging “well-behaving” applications, it never hurts to be protected and to set a good example. As the output format, PCAP was chosen, as it’s simple (the PCAP writer itself is 32 lines of Erlang code) and widely supported by tools such as Wireshark.
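The whitelist check is simple enough to sketch. The snippet below is only an illustration under my own naming (safe_hostname/1 is not the actual function from the repository): a hostname may only reach the OpenSSL command line if it consists of conservative hostname characters. Anything failing this check is rejected before any external command is built.

%% Illustrative sketch, not the ssl-proxy source: allow only conservative
%% hostname characters before the name is ever placed on an openssl
%% command line.
-module(hostname_whitelist).
-export([safe_hostname/1]).

safe_hostname(Host) when is_list(Host), Host =/= [] ->
    lists:all(fun(C) ->
        (C >= $a andalso C =< $z) orelse
        (C >= $A andalso C =< $Z) orelse
        (C >= $0 andalso C =< $9) orelse
        C =:= $. orelse C =:= $-
    end, Host);
safe_hostname(_) ->
    false.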

The proxy logic fits into 124 lines of code and waits for new TCP connections. The socket is put into HTTP mode, so the standard library does all the parsing of the CONNECT request. A standard response is sent, the mode is switched back to raw, and from this point both the server and the client get an SSL handshake from the proxy. Server certificates used for MITM are cached in an ETS table (a built-in high-performance in-memory database) shared between the lightweight, thread-like Erlang processes, so certificates are signed only once per hostname, and the private key is the same for the CA and all the certificates.
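To make the flow more concrete, here is an illustrative sketch of these two tricks; it is not the actual 124 lines, and the module, function and table names are made up. The built-in HTTP packet mode parses the CONNECT request line for us, and a shared ETS table hands out at most one certificate per hostname.

%% Illustrative sketch, not the actual proxy code.
-module(connect_sketch).
-export([init_cache/0, handle_connect/1, cert_for/1]).

init_cache() ->
    %% named, public table so every handler process can share it
    ets:new(cert_cache, [named_table, public, set]).

handle_connect(Socket) ->
    %% let the standard library parse the request line and the headers
    ok = inet:setopts(Socket, [{packet, http_bin}, {active, false}]),
    %% CONNECT is not among the pre-parsed method atoms, so it arrives as a binary
    {ok, {http_request, <<"CONNECT">>, Target, _Version}} = gen_tcp:recv(Socket, 0),
    ok = skip_headers(Socket),
    ok = gen_tcp:send(Socket, <<"HTTP/1.1 200 Connection established\r\n\r\n">>),
    %% from here on both peers speak TLS, so switch back to raw bytes
    ok = inet:setopts(Socket, [{packet, raw}]),
    Target.

skip_headers(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, http_eoh} -> ok;
        {ok, {http_header, _, _, _, _}} -> skip_headers(Socket)
    end.

cert_for(Host) ->
    %% sign at most once per hostname; generate_cert/1 is just a stand-in
    %% for the OpenSSL-based generation described above
    case ets:lookup(cert_cache, Host) of
        [{Host, Cert}] -> Cert;
        [] ->
            Cert = generate_cert(Host),
            true = ets:insert(cert_cache, {Host, Cert}),
            Cert
    end.

generate_cert(Host) ->
    {placeholder_cert, Host}. %% placeholder for the real certificate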

After the handshake, traffic is simply forwarded between the peers and simultaneously written into the PCAP file. TCP/IP headers are added with the same port/address tuples as the original stream, while sequence/acknowledgement values are adjusted to reflect the plaintext content. This way, Wireshark has no problem reassembling the stream and can even detect and dissect known application protocols. The source code of the SSL proxy is available in our GitHub repository under the MIT license; pull requests are welcome.
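For reference, the PCAP framing itself is tiny: a file is just a global header followed by a record header and the raw bytes of each packet. The sketch below is illustrative rather than the 32-line writer from the repository, and it assumes link type 101 (raw IP, i.e. packets start with the IP header and carry no Ethernet framing); timestamps are expected in the {MegaSecs, Secs, MicroSecs} form returned by os:timestamp/0.

%% Illustrative sketch of the PCAP framing (all fields little-endian).
-module(pcap_sketch).
-export([global_header/0, packet_record/2]).

%% magic, version 2.4, thiszone, sigfigs, snaplen, link type 101 (raw IP)
global_header() ->
    <<16#a1b2c3d4:32/little, 2:16/little, 4:16/little,
      0:32/little, 0:32/little, 65535:32/little, 101:32/little>>.

%% per-packet header: seconds, microseconds, captured length, original length
packet_record({MegaSecs, Secs, MicroSecs}, Packet) when is_binary(Packet) ->
    Len = byte_size(Packet),
    <<(MegaSecs * 1000000 + Secs):32/little, MicroSecs:32/little,
      Len:32/little, Len:32/little, Packet/binary>>.

Writing global_header() once and then one packet_record/2 per synthesized packet should be all Wireshark needs to open the capture.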

With the plaintext traffic in our hands, we were able to develop another tool to experiment with the service, and as a result we found several vulnerabilities, including critical ones. So the lesson to learn is that just because tools cannot intercept traffic out of the box, it doesn’t mean that the application is secure – existing tools are great for lots of purposes, but one big difference between hackers and script kiddies is the ability of the former to develop their own tools.

Testing websites using ASP.NET Forms Authentication with Burp Suite

Testing a website is usually considered just another day at work, and Burp Suite is usually our tool of choice for automating some of the scans that apply in this field. Assessing the authenticated part of a site is also common, and since Burp can be used as an HTTP proxy, it can capture our session tokens (usually HTTP cookies) and perform scans just as we would as humans. Such a token usually remains unchanged for the lifetime of the session, and the session itself is kept alive by the scanner activity.

However, a few weeks ago we encountered an ASP.NET site that offered the regular ASP.NET_SessionId cookie, and after a successful login we got another one called .ASPXFORMSAUTH, indicating that the application uses Forms Authentication. We started issuing requests that included these two cookies, but within 5 minutes the server started responding as if we were not logged in. Investigation followed, and we could reproduce that even when sending a request every 10 seconds, requests carrying the two original cookies were denied once 2 minutes had passed since login. The operators confirmed that the session timeout is 2 minutes (which is reasonable given the normal use-cases of the application), and looking at the responses revealed that around halfway between login and the 2-minute deadline, a new .ASPXFORMSAUTH cookie is set by the server.

Apparently, Burp Suite at the time ignored such Set-Cookie headers in both its Scanner and Intruder modules, so I wrote a simple plugin that hooks HTTP requests within Burp and behaves like a browser for this specific cookie. If such a cookie is received, it gets stored in a protected field of the class, and subsequent requests are modified to include the latest value. Since Burp uses multiple threads, this value also needs locking in order to avoid race conditions. Burp also offers some help with parsing and mangling HTTP headers, so during initialization, besides registering the HTTP listener, the plugin also stores a reference to an instance of the IExtensionHelpers interface.

public class BurpExtender implements IBurpExtender, IHttpListener
{
  protected IExtensionHelpers helpers;
  protected String cookie_value = null;
  public static final String COOKIE_NAME = ".ASPXFORMSAUTH";
 
  @Override
  public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
  {
    helpers = callbacks.getHelpers();
    callbacks.setExtensionName("AspxFormAuth");
    callbacks.registerHttpListener(this);
  }

Since the plugin registers an HTTP listener, its processHttpMessage method will be called by Burp every time an HTTP request or response passes through, regardless of its source, including the browser, the Scanner or the Intruder module. The messageIsRequest parameter can be used to define different behavior for requests and responses; in case of the former, any Cookie: header with the matching name in the request will be updated to the latest value (if there is one, hence the null check).

@Override
public void processHttpMessage(int toolFlag, boolean messageIsRequest, IHttpRequestResponse messageInfo)
{
  if (messageIsRequest)
  {
    synchronized (this) {
      if (cookie_value == null) return;
    }
    byte[] request = messageInfo.getRequest();
    IRequestInfo ri = helpers.analyzeRequest(request);
    List<IParameter> parameters = ri.getParameters();
    for (IParameter parameter : parameters) {
      if (parameter.getType() == IParameter.PARAM_COOKIE &&
          parameter.getName().equals(COOKIE_NAME)) {
        synchronized (this) {
          messageInfo.setRequest(helpers.updateParameter(request,
                helpers.buildParameter(COOKIE_NAME, cookie_value,
                  IParameter.PARAM_COOKIE)));
        }
      }
    }
  }

The only remaining task is feeding the cookie_value member with tokens by parsing responses. Again, the Burp helpers are used to analyze the headers and if a cookie with a matching name is found, its value is stored, taking special care to let only a single thread access the value at a time.

else
{
  IResponseInfo ri = helpers.analyzeResponse(messageInfo.getResponse());
  List<ICookie> cookies = ri.getCookies();
  for (ICookie cookie : cookies) {
    if (cookie.getName().equals(COOKIE_NAME)) {
      synchronized (this) {
        cookie_value = cookie.getValue();
      }
    }
  }
}

Using this technique, we were able to perform the assessment; the full source code can be found in a branch of our GitHub repository. If you don’t want to compile it yourself, a binary JAR is also available on the release page. We’re also proud that since we released the Burp OAuth plugin this one is based on, the former has been built upon and improved by SecureIdeas as part of their Co2 plugin.