It’s been a steady trend that most of our pentest projects revolve around web applications and/or involve database backends. The former is usually made much easier by Burp Suite, which has a built-in scanner capable of identifying (among other things) injections targeting the latter. However, detection is only half the work: a good pentester will use a SQL injection or a similar database-related security hole to widen the coverage of the test (within the project scope, of course). Burp continually improves its scanning engine but provides no means for further exploitation of these vulnerabilities, so in addition to manual testing, most pentesters use standalone tools. With the new features available since Burp Suite 1.7.09, we’ve found a way to combine the unique talents of Burp with our database exploitation framework, resulting in some pretty interesting functionality.
During a web application test, one of the most precious bugs you can find is a good old SQL injection: these vulnerabilities can let you bypass the security controls of the application, elevate your privileges, discover new (possibly vulnerable) functionality, and in the end take control of the entire database server and possibly pivot your attack to the depths of the target network.
Fortunately for application operators, SQL injection vulnerabilities are not that easy to come by, thanks to robust development frameworks and smart testing tools. That being said, these vulnerabilities are here to stay, although many times you can only discover them the hard, manual way, hidden deep inside some complex feature. After discovery, exploitation can be even more painful, since information emitted by the database often disappears as it passes through the layers of the application, leaving you with a fully blind scenario. And although there are great generic tools (like SQLmap or SQLninja) for retrieving meaningful information by taking advantage of these bugs, there are many cases where configuring or even patching your favorite software takes more effort in the end than writing your specific exploit from scratch.
But we computer people are lazy and hate to reinvent the wheel every time we stumble upon some exotic injection, not to mention the debugging frenzy that a speed-coded search algorithm or a multi-threaded program can cause.
This is why I created Duncan (named after the blind, weak but loyal attendant of the Prince of Thieves): a simple but powerful Python program skeleton/module that saves you from the most daunting tasks of SQLi exploit development. It takes care of threading, implements tested search algorithms and provides a convenient command line interface (which might be an oxymoron…) for configuration. You only have to implement a single method that checks the value of one character of a custom query result (typically doable in 5-8 lines of code) and you can start looting; Duncan takes care of everything else.
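To give a feel for what that single method looks like, here is a minimal, self-contained sketch of the idea. The names (`char_at_least`, `extract_char`) and the local `SECRET` stand-in are my own illustration, not Duncan's actual API: in a real exploit the 1-bit oracle would send an HTTP request carrying an injected `ASCII(SUBSTRING(...)) >= n` comparison and inspect the response, and Duncan would drive the search and threading around it.

```python
# Hypothetical 1-bit oracle for a blind SQLi exploit. In practice this
# would issue a request with an injected comparison; here a local secret
# string stands in for the database so the sketch is runnable.
SECRET = "s3cr3t_admin_hash"

def char_at_least(position: int, value: int) -> bool:
    """Oracle: is ASCII(SUBSTRING(secret, position, 1)) >= value?"""
    return ord(SECRET[position]) >= value

def extract_char(position: int) -> str:
    """Binary-search one character using only the 1-bit oracle."""
    lo, hi = 32, 126  # printable ASCII range
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if char_at_least(position, mid):
            lo = mid      # secret char is at least mid
        else:
            hi = mid - 1  # secret char is below mid
    return chr(lo)

def extract(length: int) -> str:
    return "".join(extract_char(i) for i in range(length))

print(extract(len(SECRET)))  # recovers the secret character by character
```

With the oracle swapped out for a real HTTP check, everything else (threading, length detection, search strategy) is exactly the kind of boilerplate the skeleton is meant to absorb.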
Time == Money
Besides the simple blind injection – which was really just an exercise to clean the rust off the coder part of my brain – I also wanted to support time-based injections, which may look identical to a standard blind injection, but there is a catch:
While typical blind injections are usually exploitable through some kind of 1-bit information leak and are basically identical to the well-known number guessing game, time-based injection has an interesting property: you have to “pay some price” (spend a considerable amount of time) every time you guess (in)correctly. Optimizing an exploit to minimize this cost sounded like a fun idea to play with, so I brought up the topic at Camp0, where we created a simple test program and some proof-of-concept algorithms (the algorithms are named after my participating friends, CJ and Synapse – kudos to them!).
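A back-of-the-envelope cost model makes the trade-off concrete. Assume every query costs a base round-trip `t`, and every query where the injected condition is true costs an extra delay `d` (the injected `SLEEP()`). Linear search (`char == c`) pays the delay exactly once but makes many requests; binary search (`char >= mid`) makes about log2(N) requests, roughly half of which hit the delay. The concrete numbers below are made up purely for illustration:

```python
import math

def linear_cost(n: int, t: float, d: float) -> float:
    # On average n/2 requests until the hit, exactly one of them delayed.
    return (n / 2) * t + d

def binary_cost(n: int, t: float, d: float) -> float:
    steps = math.ceil(math.log2(n))
    # Each step is one request; roughly half the comparisons come back true.
    return steps * t + (steps / 2) * d

t, d = 0.2, 1.0          # 200 ms round-trip, 1 s injected delay (illustrative)
for n in (95, 16, 4):    # shrinking candidate set
    print(n, round(linear_cost(n, t, d), 2), round(binary_cost(n, t, d), 2))
```

With these parameters, binary search wins for the full printable-ASCII set (95 candidates), but once the set shrinks to 16 candidates linear search is already cheaper, which is exactly the behavior discussed below.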
We concluded that as the set of possible guesses shrinks, linear search becomes more beneficial than binary search: in the former case the number of expensive guesses is constant (but you have to make more cheap requests overall), while in the latter case the number of expensive requests is expected to be proportional to the logarithm of the set size (though you make fewer requests overall).
So basically, both implemented algorithms fall back to linear search under different conditions related to the set size and the ratio of the costs of correct and incorrect guesses.
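The fallback idea can be sketched in a few lines. This is not Duncan's (or CJ's or Synapse's) actual code, just a minimal model under the same cost assumptions as before: bisect while bisecting is expected to be cheaper, and switch to a linear tail as soon as the shrinking interval makes linear search win.

```python
import math

def expected_linear(n: int, t: float, d: float) -> float:
    return (n / 2) * t + d                      # one expensive hit, n/2 cheap tries

def expected_binary(n: int, t: float, d: float) -> float:
    steps = math.ceil(math.log2(n))
    return steps * t + (steps / 2) * d          # ~half the steps are expensive

def hybrid_search(oracle_ge, oracle_eq, lo: int, hi: int,
                  t: float, d: float) -> int:
    """Find the unknown value in [lo, hi] using >= and == oracles."""
    while lo < hi:
        n = hi - lo + 1
        if expected_linear(n, t, d) <= expected_binary(n, t, d):
            for v in range(lo, hi):             # linear tail over the remainder
                if oracle_eq(v):
                    return v
            return hi                           # last candidate, by elimination
        mid = (lo + hi + 1) // 2                # otherwise keep bisecting
        if oracle_ge(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

secret = 101  # pretend ASCII code of the unknown character
found = hybrid_search(lambda x: secret >= x, lambda x: secret == x,
                      32, 126, t=0.2, d=1.0)
print(found)
```

The switch-over point depends only on the remaining set size and the `d`/`t` ratio, so the same skeleton covers both fallback conditions mentioned above.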
This worked pretty well in model implementations, saving 20-30% time on average, so I decided to implement a similar algorithm in Duncan too.
Unfortunately, as it turned out, the optimizations only made a (positive) difference in some unlikely edge cases, and implementing a simple auto-tuning mechanism for the timing could bring much bigger benefits. Luckily, Duncan lets you implement such advanced algorithms pretty easily :)
Update 2017.01.03: After 3 years, Erick Wong from Mathematics Stack Exchange provided us with a comprehensive answer to this problem. As it turned out, “the optimized algorithm can yield significant performance gain in case the penalty is big (e.g. querying a large dataset)”.
So feel free to use Duncan in your own pentest projects or experiment with more advanced algorithms; the source code, along with some implementation and usage examples, is available in our GitHub repository.