Decrypting Eazfuscator.NET encrypted symbol names

Author: dnet

There are many obfuscators for different languages, and some of them offer reversible options for easier field debugging. Eazfuscator.NET is one of these, and with a bit of reverse engineering, whole files can be restored with the original symbols once you have the password.

(more…)


Drop-by-Drop: Bleeding through libvips

Author: b

During a recent engagement we encountered a quite common web application feature: profile image uploads. One of the tools we used for the tests was the UploadScanner Burp Suite extension, which reported no vulnerabilities. However, we noticed that the profile picture of our test user showed seemingly random pixels. This reminded us of the Yahoobleed bugs published by Chris Evans, so we decided to investigate further.

(more…)


Our take on social engineering

Author: dnet

Like many other offensive IT security companies, we also offer social engineering assessments. And like in other areas of our portfolio, we try to steer client needs so that they order something that actually matters. This blog post summarizes what we experienced and how we see things in this field. While many things work the same way around the globe, our starting point is our experience here in Hungary, where many people in the local IT security scene think social engineering means walking into buildings dressed as a pizza delivery guy and calling targets on the phone.

(more…)


The curious case of encrypted URL parameters

Author: dnet

As intra-app URLs used in web applications are generated and parsed by the same code base, there’s no external force pushing developers towards using a human-readable form of serialization. Sure, it’s easier to do debugging and development, but that’s why I used the word “external”. Many frameworks use custom encodings, but one of the most extreme things a developer can do in this regard is completely encrypting request parameters. We encountered such a setup during a recent web app security assessment, let’s see how it worked out.

(more…)


Snow cannon vs. unique snowflakes — testing registration forms

Author: dnet

Many of the web application tests we conducted had a registration form in scope. In such cases, there’s usually a field that needs to be unique for each invocation: sometimes it’s called username, in other cases the e-mail address is used as such. However, launching the Scanner or Intruder of Burp Suite or a similar tool will send the same username over and over again, resulting in possible false negatives. We faced this problem often enough that we came up with a solution for it, and now you can use it too!

(more…)


Bare Knuckled Antivirus Breaking

Author: b

Endpoint security products provide an attractive target for attackers because of their widespread use and high-privileged access to system resources. Researchers have already demonstrated the risks of complex input parsing with unmanaged code, and even sloppy implementation of client- and server-side components of these products. While these attacks are still relevant, it is generally overlooked how security software breaches some important security boundaries of the operating system. In this research we first present a generic self-defense bypass technique that allows deactivation of multiple endpoint security products. Then we demonstrate that self-defense can hide exploitable attack surface by exploiting local privilege escalation vulnerabilities in six products of three different vendors. You can download our whitepaper here:

Bare-Knuckled Antivirus Breaking (PDF)

The following part of this blog post contains demonstration videos and some additional notes about the exploits described in the paper. We will also use this post to publish up-to-date information about affected vendors and fixes.

(more…)


Emulating custom cryptography with ripr

Author: b

Custom cryptography and obfuscation are recurring patterns that we encounter during our engagements and research projects. Our experience shows that despite industry best practices and a long history of failures, these constructs don’t get fixed without a clear demonstration of their flaws. Most of the time, demonstration requires instrumenting the original software or reimplementing the algorithms from scratch. This way we can create specially crafted encrypted messages, find hash collisions, etc.

Ripr is a really exciting tool “that automatically extracts and packages snippets of machine code into a functionally identical python class backed by Unicorn-Engine”. I was really curious how effectively this tool could be used, so I decided to create a new sample that models some of the algorithms we’ve seen and write up my experiences as a reference for others.

The test program

To put ripr to the test, I grabbed the first decent-looking RC4 implementation in C* and added an extra XOR step with a hardcoded 4-byte key to it. This small addition simulates hardcoded keys, lookup tables and other constants that are commonly used in standard and non-standard algorithms alike. As we will see, resolution of these structures is not a trivial task for a static analyzer.
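
The scheme is easy to model in pure Python. The sketch below is my approximation of the modified algorithm (standard RC4 KSA/PRGA plus an extra XOR against a hardcoded 4-byte key, using the same “ABCD” obfuscator string that turns up later in this post), not the exact C code of the sample; the RC4 key is an arbitrary example, and the snippet uses Python 3 syntax, unlike the generated Python 2 code:

```python
OBF_KEY = b"ABCD"  # hardcoded 4-byte XOR key baked into the binary

def ksa(key):
    # standard RC4 key scheduling: initialize and permute the state S
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def prga_xor(S, data):
    # RC4 keystream generation, plus the extra hardcoded XOR step
    S = S[:]
    i = j = 0
    out = bytearray()
    for n, c in enumerate(data):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        k = S[(S[i] + S[j]) % 256]
        out.append(c ^ k ^ OBF_KEY[n % len(OBF_KEY)])
    return bytes(out)

cipher = prga_xor(ksa(b"Key"), b"Plaintext")
print(cipher.hex())
```

Since both extra layers are plain XOR streams, running the same function over the ciphertext round-trips back to the plaintext, which is exactly what makes demonstrating such homebrew constructs worthwhile once they are emulated.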

I compiled the code with GCC without stripping the symbols, so I could work as if I had already done the reversing work to identify the subroutines of interest. I then loaded the binary into Binary Ninja and made ripr export the key scheduler (KSA) and the keystream generator (PRGA) functions as two Python classes that I copied into a single script. As this was in the middle of a busy day, I just slapped some instantiation code onto it to see if the thing runs without any obvious errors.

ksa=KSA() # Instantiate key scheduler
ksa.run(key,S) # initialize cipher state (S) with key
print repr(S) 
prga=PRGA() # Instantiate keystream generator
prga.run(S,plain,cipher) # Run keystream generator with the calculated state
print repr(cipher)

It did, but in order to make the code do anything useful we need to understand what was and what wasn’t generated for us by ripr.

* I didn’t verify the correctness of this implementation and even noticed some oddities (like calculation of the ciphertext size), but any mistakes would make my candidate even better for a “homebrew” algorithm.

First commit (e1569ae)

The first thing I noticed is that the generated code doesn’t handle output arguments: arguments are just written to the memory of the emulator, but ripr doesn’t know that some of these allocations will contain important data at the end of the run of the function. This can be easily fixed by reading memory from the addresses pointed to by the argAddr_N variables. In our case the key scheduler populates the S buffer, so we have to read back the memory of arg_1 of KSA.run():

-        return self.mu.reg_read(UC_X86_REG_RAX)
+        return self.mu.mem_read(argAddr_1,256)

As you can see, I chose to implement a more “pythonic” interface for this method, returning the object of interest instead of using an output variable. You can see similar changes in the later commits where I finalize the code.

When I ran this code, the KSA function executed successfully (but not necessarily correctly!), while the PRGA raised the following exception:

Traceback (most recent call last):
  File "prga.py", line 115, in <module>
    prga.run(S,plain,cipher)
  File "prga.py", line 57, in run
    self._start_unicorn(0x400733)
  File "prga.py", line 44, in _start_unicorn
    raise e
unicorn.unicorn.UcError: Invalid memory write (UC_ERR_WRITE_UNMAPPED)


Second commit (aaf375c)

It seems that the emulated program tries to access unmapped memory. Since the exception is raised by emulated code, the stack trace doesn’t provide information about what exactly went wrong. To debug this, we need to know the instruction and the context where the emulation fails. One way to do this is to hook each instruction in Unicorn Engine, but for me it was easier to extend the auto-generated exception handler code to print out context information when an unhandled exception happens:

             else:
+                print "RIP: %08X" % self.mu.reg_read(UC_X86_REG_RIP)  # 0x4007dd: mov eax, dword [rbp-0x1c]
+                print "EAX: %08X" % (self.mu.reg_read(UC_X86_REG_EAX))
                 raise e

The offending instruction can be seen as a comment above. EAX pointed slightly above 0x4000, so I simply added a new mapping in the constructor of PRGA and the exception went away:

+        self.mu.mem_map(0x1000 * 4, 0x1000) # Missed mapping

After looking at the exception handlers, I also tried to implement strlen() as a hook function that is meant to replace the original import call during emulation. Hooks for imported functions work by checking memory access exceptions against a defined list of addresses: if the saved return address points right after an imported function call, the generated code handles the exception by calling the corresponding hook function. As far as I can tell, return values should be set manually in the hook function (in this case setting EAX to the string length), but I also gave the function a return value for easier debugging (it turned out my original code had a pretty obvious bug, can you spot it?).
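
To illustrate the mechanics, here is a minimal pure-Python model of such a strlen() hook. The mem bytes object stands in for emulator memory, and the names are mine, not ripr’s; in the real generated code the length would be read via mem_read and written into EAX via reg_write:

```python
def hook_strlen(mem, rdi):
    # scan "emulator memory" from the address held in RDI until the
    # terminating NUL byte, just like libc strlen() would
    n = 0
    while mem[rdi + n] != 0:
        n += 1
    return n

# fake emulator memory with a C string starting at offset 8
mem = b"\x00" * 8 + b"secretkey\x00"
print(hook_strlen(mem, 8))  # 9
```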

Third commit (5680b36)

So the code ran fine, but the results were different from what I got from the original binary. Two things were suspicious though:

  • My static obfuscator string (“ABCD”) was nowhere to be found in the generated code. This shows that manual reverse engineering is still crucial when using ripr.
  • My strlen() implementation was never called. Since hook functions are really easy to write, I suggest always adding some debug code (even simple prints) to them to prevent bugs like this. This is also a good way to have a high-level trace of the execution of the emulator.

With enough information obtained by reversing the program, the first problem can be resolved easily. In this case I also had a suspicious piece of memory in the generated code that I couldn’t originally connect to anything:

self.data_0 = '00000000000000000000000000000000540a400000000000'.decode('hex')

It turns out that my obfuscator key is located at 0x400a54. This piece of memory held the pointer to it, but that region was not properly populated (although it was mapped, so it didn’t cause an exception). Similarly, the import stub for strlen() was located at 0x4004d0 in the original binary, but was not populated in the generated code by ripr. Adding these two lines to the PRGA constructor resolved these issues:

self.mu.mem_write(0x400a54L, "4142434400".decode('hex'))
self.mu.mem_write(0x4004d0L, "ff25410b2000".decode('hex'))

Note that the code written for the strlen() import is just a jump pointing to some memory unmapped in the emulator. This way an exception will be raised that can be handled by the code responsible for calling the hook function in Python.
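
A quick sanity check for such suspicious blobs is to try interpreting them as little-endian pointers; the last eight bytes of the data_0 value shown earlier decode exactly to the key address (Python 3 syntax below):

```python
import struct

data_0 = bytes.fromhex('00000000000000000000000000000000540a400000000000')
ptr = struct.unpack_from('<Q', data_0, 16)[0]  # uint64 at offset 16, little-endian
print(hex(ptr))  # 0x400a54, the address of the obfuscator key
```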

Fourth commit (c9c7d3c)

What I had failed to notice before this commit was that KSA also relied on strlen(). But since it was a separate class using a different emulator instance, my previous changes didn’t affect it. One could merge the classes, but for the sake of simplicity I chose to just duplicate the code. After this, the emulated and the original program gave identical results.

Conclusion

All in all, I managed to create a working emulator in about two hours, without any prior experience with ripr. Assuming a proper understanding of the targeted program, I expect experienced users to need about the same effort for real-life targets: the complexity of the task mostly depends on the number of unresolved data and code references, not on the complexity of the algorithm itself. Considering the amount of work needed to reimplement cryptographic code or to instrument large software, ripr will definitely be at the top of my list of tools when the next homebrew crypto-monster appears!


Conditional DDE

Author: b

Here’s a little trick we’d like to share in the end-of-year rush:

DDE is the new black: malware authors quickly adopted the technique, and so did pentesters and red teams in order to simulate the latest attacks. In our experience, trivial DDE payloads (like fully readable PowerShell scripts) slip through conventional detections, but process monitoring can cause some headaches: powershell.exe launched from Office is surely an obvious indicator of something phishy.

Malware sandboxes (which execute incoming files in virtualized environments to learn more about their purpose) are an example of defensive tools that implement such detection. And although they are commonly seen as all-in-one APT stoppers, these tools are in fact quite limited in terms of simulating an actual target, which provides a broad avenue for bypass. Evasion is generally performed by conditional checks that determine if the payload would run in the right domain, timezone, etc. If the condition is not met, the payload remains dormant, so the instrumentation in the sandbox won’t catch anything suspicious.

So how do we implement this with DDE? Looking at some public obfuscation techniques, it’s easy to spot the IF field code, which allows conditional parsing of other fields in the document. We can combine this with the DATE or TIME field codes to construct a document with time-based execution:

{SET P {IF {TIME \@ "m"} > 13 "C:\\Windows\\System32\\calc.exe" ""}}
{DDEAUTO {REP P} "s2"}

The above DDE construct only executes calc.exe if the minutes of the hour are past 13. Suppose you send attachments during the night that only execute code after 9:00 AM: by the time someone opens the bait, the analyzer has already marked it safe hours earlier. Or better yet, you can rely on the resource constraints of the sandbox and make it cache/whitelist your first shot before you send the rest. These methods can be further refined with the use of fields like USERNAME or even FILENAME.
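
The time gate itself reduces to a single comparison; modeled in Python purely for illustration (Word evaluates the field codes, of course, nothing like this runs, and the function name is mine):

```python
from datetime import datetime

def payload_armed(now):
    # models {IF {TIME \@ "m"} > 13 ...}: the DDEAUTO target is only
    # populated when the minutes of the current hour exceed 13
    return now.minute > 13

print(payload_armed(datetime(2017, 12, 24, 9, 5)))   # False: sandbox detonates too early
print(payload_armed(datetime(2017, 12, 24, 9, 30)))  # True: the real target opens it later
```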

By the way, is DDE Turing-complete?


Notes on McAfee Security Scan Plus RCE (CVE-2017-3897)

Author: b

At the end of last month, McAfee published a fix for a remote code execution vulnerability in its Security Scan Plus software. Beyond Security, who we worked with for vulnerability coordination, published the details of the issue and our PoC exploit on their blog. While the vulnerability itself got some attention due to its frightening simplicity, this is not the first time SSP has contained similarly dangerous problems, and it’s certainly not the last. In this post, I’d like to share some additional notes about the wider context of the issue.

(more…)


Fools of Golden Gate

Author: b

In this blog post, we once again demonstrate that excessive reliance on automated tools can hide significant risks from the eyes of defenders. Meanwhile, we discuss technical details of critical vulnerabilities in Oracle Golden Gate and show another disappointing example of the security industry’s approach to product quality.

(more…)