Adding XCOFF Support to Ghidra with Kaitai Struct

Author: b

It’s not a secret that we at Silent Signal are hopeless romantics, especially when it comes to classic Unix systems (1, 2, 3). Since some of these systems – which still run business-critical applications at our clients – are based on “exotic” architectures, we keep a nice hardware collection in our lab so we can experiment on bare metal.

We are also spending quite some time with the Ghidra reverse engineering framework that has built-in support for some of the architectures we are interested in, so the Easter holidays seemed like a good time to bring the two worlds together.

My test target was an RS/6000 system running IBM AIX. The CPU is a 32-bit, big-endian PowerPC, that is already (mostly?) supported by Ghidra, but to my disappointment, the file format was not recognized when importing one of the default utilities of AIX to the framework. The executable format used by AIX is XCOFF, and as it turned out, Ghidra only has a partial implementation for it.

At this point I had multiple choices: I could start to work on the existing XCOFF code, or could try to hack the fully functional COFF loader just enough to make it accept XCOFF too, but none of these options made my heart beat faster:

  • Java doesn’t have unsigned primitive types, which makes parsing byte streams painful
  • The existing ~1000 LoC XCOFF implementation includes a wide set of structure definitions with basic getters and setters, but it doesn’t handle the more complex semantics of the input
  • The COFF loader expects everything to be little-endian – adding big-endian support would require rewriting everything
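To illustrate the first and third points: in Python, parsing the fixed-size XCOFF32 file header is a single struct call, with big-endian unsigned fields for free. A sketch, not the full loader (field names follow IBM’s documentation):

```python
import struct

# XCOFF32 file header: every field big-endian; the magic number is 0x01DF
# (field names follow IBM's documentation; this is a sketch, not the loader)
XCOFF32_HDR = struct.Struct(">HHIIIHH")  # 20 bytes

def parse_file_header(data):
    f_magic, f_nscns, f_timdat, f_symptr, f_nsyms, f_opthdr, f_flags = \
        XCOFF32_HDR.unpack_from(data)
    if f_magic != 0x01DF:
        raise ValueError("not an XCOFF32 file")
    return {"n_sections": f_nscns, "sym_table_offset": f_symptr,
            "n_symbols": f_nsyms, "opt_header_size": f_opthdr,
            "flags": f_flags}

# Synthetic header: 3 sections, a 72-byte auxiliary header, rest zeroed
hdr = parse_file_header(struct.pack(">HHIIIHH", 0x01DF, 3, 0, 0, 0, 72, 0))
```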

Instead, I decided to start from scratch and develop code that:

  • is reusable in tools other than Ghidra
  • is easy to read, write and extend
  • has excellent debug tools

Ghidra ❤️ Kaitai

The above benefits are provided by Kaitai Struct, “a declarative binary format parsing language”. Instead of implementing a parser in a particular (procedural) language and framework, with Kaitai we can describe the binary format in a YAML-like structure (I know, YAML===bad, but believe me, this stuff works), and then let the Kaitai compiler produce parser code in different languages for us from the same declaration.
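For a taste of the syntax, here is a stripped-down sketch of what the XCOFF32 file header looks like in a .ksy declaration (field names follow IBM’s spec; the real declaration obviously covers far more than this):

```yaml
meta:
  id: xcoff32
  endian: be
seq:
  - id: f_magic
    contents: [0x01, 0xdf]
  - id: f_nscns
    type: u2
  - id: f_timdat
    type: u4
  - id: f_symptr
    type: u4
  - id: f_nsyms
    type: u4
  - id: f_opthdr
    type: u2
  - id: f_flags
    type: u2
```

From a declaration like this, the Kaitai compiler emits the Python and Java parsers for us.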

Although my Kaitai-fu (picked up mainly through these challenges at Avatao) was rusty, I managed to put together a partial, hacky, but working format declaration for XCOFF32 in a couple of hours, based on IBM’s documentation.

This approach also had some benefits from a research standpoint, as by reading the specification I could spot

  • inconsistencies between specification and implementation
  • redundant information (e.g. size specifications) in the spec

both of which can lead to interesting parsing bugs! (After this, I wasn’t surprised when, digging through Google, I found that IDA, which has built-in XCOFF support, has suffered from such bugs in the past.)

Coming back to Ghidra development, I could create two implementations from the same Kaitai structure: one in Python, one in Java. I could import the Java implementation as a class in my Ghidra Loader and debug Ghidra-specific code in Eclipse, while checking the semantic correctness of the parser and exploring the API more comfortably in a Python REPL:

$ python -i portmir
.text 0x20
.data 0x40
.bss 0x80
.loader 0x1000
>>> hex(portmir.section_headers[0].s_vaddr)

… or just browse the parsed structures in Kaitai’s awesome WebIDE.

Integrating the generated Java code with Ghidra was a piece of cake:

  • Add Kaitai’s runtime library to the project
  • Wrap the Java byte array provided by Ghidra’s ByteProvider with ByteBufferKaitaiStream, and use the appropriate constructor of the generated class

After the Ghidra-Kaitai interface was set, the only things left were setting the default big-endian PowerPC language, letting Kaitai parse the section headers of the XCOFF file, and mapping them to the Program memory. After this, I could immediately see convincing disassembly and decompilation(!) results:

First disassembly and decompilation result

(Mysterious) Symbols of Love

To give Ghidra more hints about the program structure, I proceeded by parsing symbol information. I don’t want to dive deep into the XCOFF format in this post, but in short, there is a symbol table defined by the .loader section of the binary, that holds information about imports and exports, and there is an optional symbol table potentially referenced by the main header for more detailed information. XCOFF can also contain a TOC (table of contents) that contains valuable structural information for reverse engineering if present.

Since the small utility I used for testing only contained a loader symbol table, I implemented parsing for that, and managed to find the entry function of the file, which was not identified during automatic analysis.
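The loader symbol table is simple enough to parse by hand, too. A hedged Python sketch of the layout as I read the spec – a 32-byte section header followed by 24-byte entries, with long names living in a string table (details like the string-table encoding are simplified here):

```python
import struct

LDR_HDR = struct.Struct(">8I")        # l_version, l_nsyms, l_nreloc, l_istlen,
                                      # l_nimpid, l_impoff, l_stlen, l_stoff
LDR_SYM = struct.Struct(">8sIhBBII")  # name, l_value, l_scnum, l_smtype,
                                      # l_smclas, l_ifile, l_parm

def loader_symbols(ldr):
    """Yield (name, value) pairs from a raw XCOFF32 .loader section."""
    hdr = LDR_HDR.unpack_from(ldr)
    n_syms, st_off = hdr[1], hdr[7]
    for i in range(n_syms):
        raw_name, value = LDR_SYM.unpack_from(
            ldr, LDR_HDR.size + i * LDR_SYM.size)[:2]
        if raw_name[:4] == b"\x00\x00\x00\x00":
            # Long name: the second word is an offset into the string table
            off = st_off + struct.unpack(">I", raw_name[4:])[0]
            name = ldr[off:ldr.index(b"\x00", off)]
        else:
            name = raw_name.rstrip(b"\x00")
        yield name.decode("ascii", "replace"), value

# Synthetic .loader blob: one symbol, "entry" at 0x10000400
_hdr = struct.pack(">8I", 1, 1, 0, 0, 0, 0, 0, 56)
_sym = struct.pack(">8sIhBBII", b"entry", 0x10000400, 1, 0, 0, 0, 0)
syms = list(loader_symbols(_hdr + _sym))
```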

To check my results, I also loaded the sample file into IDA, and to my surprise, this tool showed many more symbols than the loader symbol table contains! I searched for some of the missing symbols in the binary and found a single occurrence of every missing function name inside the .text section:

Length-prefixed string structure inside the .text section

After a lot of digging (and asking on Twitter) I found that this arrangement matches the Symbol Table Layout described in the specification:


So far, I haven’t fully deciphered this layout, but my working theory is that while the optional symbol table and TOC were removed by stripping, the per-function stabs remained untouched. If so, this is good news for reverse engineers interested in the XCOFF format :)

Update 2021.04.07: As /u/ohmantics pointed out, this is actually the Traceback Table of the function. Proper support for these structures is coming soon!

While the parser of this information should be placed in a proper analyzer module, for now, I put together a simple Python script that tries to parse string structures from between declared functions, and renames functions accordingly:

Pseudocode with additional symbol names
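The heart of such a script can be as small as the following Python sketch (a hypothetical helper, not the actual script: the real traceback table carries flag fields in front of the optional name, which a proper analyzer must honor):

```python
import re
import struct

def candidate_name(blob, offset):
    """Read a big-endian u2 length-prefixed string at offset, if plausible.

    Simplified sketch: we accept the bytes only if they decode to
    something that looks like a C identifier.
    """
    if offset + 2 > len(blob):
        return None
    (length,) = struct.unpack_from(">H", blob, offset)
    name = blob[offset + 2:offset + 2 + length]
    if len(name) == length and re.fullmatch(rb"[A-Za-z_.$][\w.$]*", name):
        return name.decode("ascii")
    return None
```

Running this between the end of one declared function and the start of the next yields the candidate names to apply.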


This blog post showed that Kaitai Struct can be an effective tool for adding new formats to Ghidra. Parser development is a tedious and error-prone process that should be outsourced to machines, which don’t get frustrated at the 92nd bitfield definition and can produce the same, correct implementation for every instance (provided you don’t screw up the parser declaration itself ;) ).

The post also allowed a peek inside the XCOFF format, which seems worth some security-minded study in parser applications.

We hope that our published code will attract contributors who are also interested in bringing XCOFF to Ghidra or even to other research tools:

Featured image is from Wikipedia (our boxes look much cooler)

Decrypting and analyzing HTTPS traffic without MITM

Author: dnet

Sniffing plaintext network traffic between apps and their backend APIs is an important step for pentesters to learn how they interact. In this blog post, we introduce a method that simplifies getting our hands on plaintext messages sent between the API and apps running on our attacker-controlled devices, and, in the case of HTTPS, shovels these requests and responses into Burp for further analysis, by combining existing tools and introducing a new plugin we developed. Our approach is thus less a novel attack and more an improvement on current techniques.

Of course, nowadays, most of these channels are secured using TLS, which provides encryption, integrity protection and authenticates one or both ends of the figurative tube. In many cases, the best method to overcome this limitation is man-in-the-middle (MITM), where a special program intercepts packets and acts as a server to the client and vice versa.

For well-written applications, this doesn’t work out of the box, and it depends on the circumstances how many steps must be taken to weaken the security of the testing environment for this attack to work. It started with adding MITM CA certificates to OS stores; recent operating systems require more and more obscure confirmations, and certificate pinning is gaining momentum. The latter can get to a point where there’s a big cliff: either you can defeat it with automated tools like Objection, or it becomes a daunting task where you know it’s doable but frustratingly difficult to actually do.


Patching Android apps: what could possibly go wrong

Author: dnet

Many tools are timeless: a quality screwdriver will work just as fine in ten years as it did yesterday. Reverse engineering tools, on the other hand, need constant maintenance, as the technology we try to inspect with them is a moving target. We’ll show you how a simple exercise in Android reverse engineering resulted in three patches to an already up-to-date tool.


Emulating custom cryptography with ripr

Author: b

Custom cryptography and obfuscation are recurring patterns that we encounter during our engagements and research projects. Our experience shows that despite industry best practices and a long history of failures, these constructs don’t get fixed without a clear demonstration of their flaws. Most of the time, demonstration requires instrumenting the original software or reimplementing the algorithms from scratch. This way we can create specially crafted encrypted messages, find hash collisions, etc.

Ripr is a really exciting tool “that automatically extracts and packages snippets of machine code into a functionally identical python class backed by Unicorn-Engine”. I was really curious how effectively this tool could be used, so I decided to create a new sample that models some of the algorithms we’ve seen and write up my experiences as a reference for others.

The test program

To put ripr to the test, I grabbed the first decent-looking RC4 implementation in C* and added an extra XOR step with a hardcoded 4-byte key. This small addition simulates the hardcoded keys, lookup tables and other constants that are commonly used in standard and non-standard algorithms alike. As we will see, resolving these structures is not a trivial task for a static analyzer.
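For reference, the modeled construct boils down to plain RC4 with an extra XOR layer over the keystream. A Python 3 rendering of the idea (my C test program may differ in details):

```python
def ksa(key):
    """RC4 key scheduling: derive the 256-byte state S from the key."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def prga_xor(S, data, xor_key=b"ABCD"):
    """RC4 keystream generator with the extra hardcoded XOR layer."""
    S = S[:]  # the PRGA mutates the state, so work on a copy
    i = j = 0
    out = bytearray()
    for idx, c in enumerate(data):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(c ^ S[(S[i] + S[j]) % 256] ^ xor_key[idx % len(xor_key)])
    return bytes(out)
```

Since both layers are XOR-based, running the same operation twice with the same key recovers the plaintext.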

I compiled the code with GCC without stripping the symbols, so I could work as if I had already done the reversing work to identify the subroutines of interest. I then loaded the binary into Binary Ninja and made ripr export the key scheduler (KSA) and the keystream generator (PRGA) functions as two Python classes, which I copied into a single script. As this was in the middle of a busy day, I just slapped some instantiation code onto it to see if the thing ran without any obvious errors.

ksa = KSA()   # Instantiate key scheduler, S)            # Initialize cipher state (S) with key
print repr(S)
prga = PRGA() # Instantiate keystream generator, plain, cipher) # Run keystream generator with the calculated state
print repr(cipher)

It did, but in order to make the code do anything useful we need to understand what was and what wasn’t generated for us by ripr.

* I didn’t verify the correctness of this implementation and even noticed some oddities (like calculation of the ciphertext size), but any mistakes would make my candidate even better for a “homebrew” algorithm.

First commit (e1569ae)

The first thing I noticed is that the generated code doesn’t handle output arguments: arguments are just written to the memory of the emulator, but ripr doesn’t know that some of these allocations will contain important data at the end of the run of the function. This can easily be fixed by reading memory from the addresses pointed to by the argAddr_N variables. In our case the key scheduler populates the S buffer, so we have to read back the memory behind arg_1:

-        return
+        return, 256)

As you can see, I chose to implement a more “pythonic” interface for this method, returning the object of interest instead of using an output variable. You can see similar changes in the later commits where I finalize the code.

When I executed this code, the KSA function ran successfully (though not necessarily correctly!), but the PRGA raised the following exception:

Traceback (most recent call last):
  File "", line 115, in <module>, plain, cipher)
  File "", line 57, in run
  File "", line 44, in _start_unicorn
    raise e
unicorn.unicorn.UcError: Invalid memory write (UC_ERR_WRITE_UNMAPPED)


Second commit (aaf375c)

It seems that the emulated program tried to access unmapped memory. Since the exception is caused by emulated code, the stack trace doesn’t tell us what exactly went wrong. To debug this, we need to know the instruction and the context where the emulation fails. One way to do this is to hook each instruction in Unicorn Engine, but for me it was easier to extend the auto-generated exception handler code to print out context information when an unhandled exception happens:

+                print "RIP: %08X" %, UC_X86_REG_RIP)  # 0x4007dd: mov eax, dword [rbp-0x1c]
+                print "EAX: %08X" % (, UC_X86_REG_EAX))
                 raise e

The offending instruction can be seen as a comment above. EAX pointed to memory slightly above 0x4000, so I simply added a new mapping in the constructor of PRGA, and the exception went away:

+ * 4, 0x1000) # Missed mapping

After looking at the exception handlers, I also tried to implement strlen() as a hook function meant to replace the original import call during emulation. Hooks for imported functions work by checking memory access exceptions against a defined list of addresses: if the saved return address points right after an imported function call, the generated code handles the exception by calling the corresponding hook function. As far as I can tell, return values should be set manually in the hook function (in this case setting EAX to the string length), but I also gave the function a return value for easier debugging (it turned out my original code had a pretty obvious bug – can you spot it?).

Third commit (5680b36)

So the code ran fine, but the results differed from what I got from the original binary. Two things were suspicious:

  • My static obfuscator string (“ABCD”) was nowhere to be found in the generated code. This shows that manual reverse engineering is still crucial when using ripr.
  • My strlen() implementation was never called. Since hook functions are really easy to write, I suggest always adding some debug code (even simple prints) to them to prevent bugs like this. This is also a good way to get a high-level trace of the execution of the emulator.

With enough information obtained by reversing the program, the first problem could be resolved easily. In this case I also had a suspicious piece of memory in the generated code that I originally couldn’t connect to anything:

self.data_0 = '00000000000000000000000000000000540a400000000000'.decode('hex')

It turns out that my obfuscator key is located at 0x400a54. This piece of memory held the pointer to it, but that region was not properly populated (although it was mapped, so it didn’t cause an exception). Similarly, the import stub for strlen() was located at 0x4004d0 in the original binary, but not populated in the code generated by ripr. Adding these two lines to the PRGA constructor resolved these issues:

, "4142434400".decode('hex')), "ff25410b2000".decode('hex'))

Note that the code written for the strlen() import is just a jump pointing to some memory unmapped in the emulator. This way an exception will be raised that can be handled by the code responsible for calling the hook function in Python.

Fourth commit (c9c7d3c)

What I failed to notice before this commit was that KSA also relied on strlen(). But since it was a separate class using a different emulator instance, my previous changes didn’t affect it. One could merge the classes, but for simplicity I chose to just duplicate the code. After this, the emulated and the original program gave identical results.


All in all, I managed to create a working emulator in about two hours, without any prior experience with ripr. Assuming a proper understanding of the targeted program, I expect about the same effort for experienced users on real-life targets: the complexity of the task mostly depends on the number of unresolved data and code references, not on the complexity of the algorithm itself. Considering the amount of work needed to reimplement cryptographic code or instrument large software, ripr will definitely be at the top of my list of tools when the next homebrew crypto-monster appears!

Accessing local variables in ProGuarded Android apps

Author: dnet

Debugging applications without access to the source code always has its problems, especially with debuggers that were built with developers in mind, who obviously don’t have this restriction. In one of our Android app security projects, we had to attach a debugger to the app to step through heavily obfuscated code.


Trend Micro OfficeScan – A chain of bugs

Author: b

Analyzing the security of security software is one of my favorite research areas: it is always ironic to see software originally meant to protect your systems open a gaping door for attackers. Earlier this year I stumbled upon the OfficeScan security suite by Trend Micro, a probably lesser-known host protection solution (AV) still used on some interesting networks. Since this software looked quite complex (big attack surface), I decided to take a closer look at it. After installing a trial version (10.6 SP1) I could already tell that this software would be worth the effort:

  • The server component (that provides centralized management for the clients that actually implement the host protection functionality) is mostly implemented through binary CGIs (.EXE and .DLL files)
  • The server updates itself through HTTP
  • The clients install ActiveX controls into Internet Explorer

And there are possibly many other fragile parts of the system. Now I would like to share a series of little issues that can be chained together to achieve remote code execution. The issues are logic and/or cryptographic flaws, not standard memory corruption issues. As such, they are not trivial to fix, or even to decide whether they are in fact vulnerabilities. This publication comes after months of discussion with the vendor, in accordance with the disclosure policy of the HP Zero Day Initiative.


Quick and dirty Android binary XML edits

Author: dnet

Last week I had an Android application that I wanted to test in the Android emulator (the official one included in the SDK). I had the application installed from Play Store on a physical device, and as I’ve done many times, I just grabbed it using Drozer and issued the usual ADB command to install it on the emulator. (The sizes and package names have been altered to protect the innocent.)

$ adb install hu.silentsignal.blogpost.apk
1337 KB/s (27313378 bytes in 22.233s)
        pkg: /data/local/tmp/hu.silentsignal.blogpost.apk

A quick search on the web revealed that the application most probably had the installLocation parameter set to preferExternal in the manifest file. The latter is an XML file called AndroidManifest.xml that contains important metadata about Android applications and is transformed into a binary representation upon compilation to reduce the size and processing power required on resource-constrained devices. Running android-apktool converted it back to text format and revealed that this was indeed the cause.

<?xml version="1.0" encoding="utf-8"?>
<manifest android:versionCode="133744042" android:versionName="4.13.37"
  android:installLocation="preferExternal" package="hu.silentsignal.blogpost"

Most results of the web search agreed that the emulator (although capable of emulating SD cards) is incompatible with this setting; some suggested increasing the memory of the emulated device, others said the same about the SD card, but unfortunately none of these worked for us. The majority of accepted answers solved the problem by changing the preferExternal parameter to auto, which is the default.

Changing an Android application and repackaging it is usually also a breeze: apktool supports this natively, and I just have to sign the resulting APK with a key of my own. However, this application used some features that apktool (and other tools invoked in the process, including AAPT) didn’t like. I’ve met this situation before, and there are usually two solutions.

  1. Removing such features in a way that the application can still be tested is a cumbersome series of iterations, and leads to almost certain insanity.

  2. Updating apktool and some dependencies so that they accept such features is rather painless, as it usually Just Works™.

However, in this case even beta versions couldn’t handle the task. I even wrote some wrapper scripts around the dependencies to tweak parameters like minimum and targeted API levels, but got nowhere. Then I realized that binary XML files still have to store the attribute names somewhere, so I fired up a hex editor and hoped that the Android runtime wouldn’t complain about unknown attributes and would fall back to the default value. It turned out that the format uses UTF-16 to store strings, and at offset 0x112, I changed the i to o, resulting in installLocatoon.

00f0  3a080000 82080000  0f006900 6e007300  |:.........i.n.s.|
0100  74006100 6c006c00  4c006f00 63006100  |t.a.l.l.L.o.c.a.|
0110  74006f00 6f006e00  00000b00 76006500  |t.o.o.n.....v.e.|
0120  72007300 69006f00  6e004300 6f006400  |r.s.i.o.n.C.o.d.|
0130  65000000 0b007600  65007200 73006900  |e.....v.e.r.s.i.|
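The same one-byte patch can be scripted instead of eyeballed in a hex editor. A sketch that searches for the UTF-16LE form of the attribute name instead of hardcoding the offset (0x112 was specific to this APK):

```python
def corrupt_attribute(axml, name="installLocation"):
    """Flip one character of an attribute name inside a binary XML blob.

    The string pool of a compiled AndroidManifest.xml stores names in
    UTF-16, so we patch the encoded form: installLocation -> installLocatoon.
    """
    needle = name.encode("utf-16-le")
    pos = axml.index(needle)              # ValueError if the name is absent
    broken = name[:-3] + "o" + name[-2:]  # ...tion -> ...toon
    return axml[:pos] + broken.encode("utf-16-le") + axml[pos + len(needle):]

# Synthetic blob standing in for a real binary manifest
blob = b"\x0f\x00" + "installLocation".encode("utf-16-le") + b"\x00\x00"
patched = corrupt_attribute(blob)
```

The length of the file is unchanged, so no offsets inside the binary XML need fixing.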

Then I simply updated the manifest in the APK file using a ZIP tool, since APK files are just ZIP archives with specific naming conventions, just like JAR, DOCX and ODT files. Since the digital signature of the APK was now corrupted, I re-signed it with jarsigner, but installation still failed.

$ 7z -tzip a hu.silentsignal.blogpost.apk AndroidManifest.xml

7-Zip [64] 9.20  Copyright (c) 1999-2010 Igor Pavlov  2010-11-18
p7zip Version 9.20 (locale=hu_HU.UTF-8,Utf16=on,HugeFiles=on,4 CPUs)


Updating archive hu.silentsignal.blogpost.apk

Compressing  AndroidManifest.xml      

Everything is Ok
$ jarsigner -sigalg SHA1withRSA -digestalg SHA1 \
    -keystore s2.jks hu.silentsignal.blogpost.apk s2
Enter Passphrase for keystore: 
$ adb install hu.silentsignal.blogpost.apk
1337 KB/s (27313378 bytes in 22.233s)
        pkg: /data/local/tmp/hu.silentsignal.blogpost.apk

How can there be no certificates? Let’s list the contents with a ZIP tool and look for META-INF, an old friend from the Java world (JAR files), which contains a list of files with names and (in the case of Android, SHA-1) hashes (MANIFEST.MF), a public key (*.RSA in the case of RSA), and a signature of the former with the latter (*.SF). The names of the public key and signature files are the same as their alias in the keystore, converted to uppercase; see the last parameter of jarsigner in the above commands (s2).

   Date      Time    Attr     Size   Compressed  Name
------------------- ----- -------- ------------  --------------------
2014-04-03 14:44:42 .....   362941       110461  META-INF/MANIFEST.MF
2014-04-03 14:44:42 .....   369105       113933  META-INF/S2.SF
2014-04-03 14:44:42 .....     2046         1865  META-INF/S2.RSA
2014-02-27 13:37:16 .....      928          637  META-INF/CERT.RSA
2014-02-27 13:37:16 .....   361020       113942  META-INF/CERT.SF

See? It’s there! And also another certificate (CERT.*), since without apktool (which rebuilds APK archives from scratch), everything except AndroidManifest.xml is included from the original file. Let’s delete the META-INF directory and try again.

$ 7z -tzip d hu.silentsignal.blogpost.apk META-INF

7-Zip [64] 9.20  Copyright (c) 1999-2010 Igor Pavlov  2010-11-18
p7zip Version 9.20 (locale=hu_HU.UTF-8,Utf16=on,HugeFiles=on,4 CPUs)

Updating archive hu.silentsignal.blogpost.apk

Everything is Ok
$ jarsigner -sigalg SHA1withRSA -digestalg SHA1 \
    -keystore s2.jks hu.silentsignal.blogpost.apk s2
Enter Passphrase for keystore: 
$ adb install hu.silentsignal.blogpost.apk                            
1337 KB/s (27313378 bytes in 22.233s)
        pkg: /data/local/tmp/hu.silentsignal.blogpost.apk

With this last modification it worked, and I was able to explore the application within the emulator, which makes it much easier to set a global proxy and manipulate certificates than a physical device – and I haven’t even mentioned faking sensors and GSM information, or taking snapshots. The lessons here were that

  • if a difficult format takes more than 5 minutes to recreate, it’s worth considering manual editing in case of simple modifications,
  • most XML readers (including the Android runtime) tend to ignore unknown attributes, and
  • the META-INF directory should be removed before signing an APK file, otherwise the runtime refuses it.

Thanks to Etamme for the featured image Androids, licensed under CC-BY-3.0

From Read to Domain Admin – Abusing Symantec Backup Exec with Frida

Author: b

Symantec (formerly Veritas) Backup Exec is one of my all-time favorites in pentest projects: it has a very nice list of vulnerabilities ranging from basic stack overflows through a hardcoded password to arbitrary file reads. Although most of these vulnerabilities aren’t new, some users tend to accept the risk of running unsupported versions because purchasing the new releases isn’t cheap. But this is not the best part from an attacker’s perspective.

Backup Exec is a backup software (surprise!) that by definition needs access to the most important parts of the domain (why would you back up something you don’t care about?), so once you get access to a Backup Exec instance, theoretically you also get access to the most important data on the network. In practice, all Backup Exec installations I encountered had domain administrative access granted.

But how exactly can we escalate our privileges from a single Backup Exec instance?

My most recent “date” with Backup Exec turned out a bit unusual. The software itself was the most recent version with all publicly known bugs patched, but on the same host there was another “enterprise-level” application that granted me limited file read rights through a pretty dumb vulnerability.

Since I didn’t have broad permissions and I didn’t know anything about the filesystem, I couldn’t access any interesting configuration files, password dumps or other precious loot. But I knew my old lady was listening on port 10000, so I started to enumerate the default files of Backup Exec.

This software uses MS SQL Server to store all the information required to perform backups and restores, but unfortunately the database files were inaccessible to my user. However, the database backup located at <BackupExec Dir>\data\bedb.bak was readable!

So I grabbed the file and read Symantec’s documentation about the DB recovery process. After incrementing the infamous Lamer Counter a couple of times (#ProTip: if you use cURL to download something, don’t forget to remove the HTTP response headers from the output), I realized that this .bak file is just a standard MS SQL backup that you can restore on any SQL Server instance.
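Recovering it is then the standard T-SQL routine; a sketch (paths and logical file names below are illustrative, check them against the FILELISTONLY output first):

```sql
-- Inspect the logical file names inside the backup first
RESTORE FILELISTONLY FROM DISK = N'C:\loot\bedb.bak';

-- Then restore under a throwaway name
RESTORE DATABASE bedb_loot
    FROM DISK = N'C:\loot\bedb.bak'
    WITH MOVE N'BEDB_DAT' TO N'C:\loot\bedb.mdf',
         MOVE N'BEDB_LOG' TO N'C:\loot\bedb.ldf';
```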

In the recovered database you will find a table called LoginAccounts that contains all the domain usernames and passwords configured by the administrators of the system to let BE access different hosts on the network. The trick is that the passwords are stored in some weird custom form that you can’t easily decipher.

Reversing the custom encryption

When you encounter a similar situation, you first have to figure out whether the algorithm that produced the weird ciphertext depends on some configurable key. If it does, you’re probably out of luck; if it doesn’t, your chances are good to recover some meaningful data in a finite amount of time.

I installed two separate instances of Backup Exec and configured two accounts with the same password. Then I queried the password on both instances to see if they were the same. The passwords are stored in an NVARCHAR (multibyte) field, but the actual value is plain printable ASCII, so the result of a simple SELECT is a bunch of non-printable/alien characters that are hard to handle – you’d better cast to varbinary. But beware: the encrypted passwords are several hundred bytes long, and MSSQL truncates them by default, so you have to use a query like this:

SELECT cast(AccountPassword as varbinary(1024)) FROM LoginAccounts;

Sample result after unhexlify:


The ciphertexts were the same, which meant that there was no installation-specific secret in my way. Great!

Backup Exec needs to access the plaintext data, so there has to be a decryption function somewhere. Since tons of executables and libraries are included with the software, I first ran a quick script hoping to find some helpful exports:

find . -name '*.dll' -exec strings -f {} \; | fgrep -i decrypt

The developers were kind to me: the output showed that bemsdk.dll exports a lot of interesting methods:


CBemLoginAccountX::Decrypt seems particularly interesting, let’s take a look at it in IDA:

As you can see, this method calls CEncrypt::Decrypt(wchar_t, wchar_t). It looks straightforward to LoadLibrary() this DLL in a small wrapper program and call CEncrypt::Decrypt() with the parameters dumped from the DB. But if you take a closer look, you can also see that, depending on the object state, the encrypted data may first run through a simple loop that uses a possibly dynamically constructed memory region (dirty_bastard on the pic) to transform the ciphertext before the actual decryption happens. I could reuse the Decrypt methods, but only after this region is constructed, so I turned to dynamic analysis.

My first night with Frida

I tried to attach a debugger to the management application (BkupExec.exe). The first time I failed because the process was protected by a service called bedbg.exe, but killing that service made it possible to attach. But BkupExec.exe is a .NET application that uses bemsdk.dll through a wrapper assembly (bemsdkwrapper.dll), and my debugger became useless because of all the dynamic memory magic performed by the process.

Luckily, by this time I had already taken a look at Frida.RE, and although I had never used it before, it seemed like a good fit for this job. The concept was simple: hook CEncrypt::Decrypt(), replace its first argument with the ciphertext to be decrypted, wait for the method to finish, and read the output buffer (the second argument). Here’s the final code:

Interceptor.attach(ptr("0xdeadbeef"), { // address of CEncrypt::Decrypt()
	onEnter: function(args) {
		send("Decrypt (Before): ", Memory.readByteArray(args[0], 697));
		args[0] = Memory.allocAnsiString("ciphertext"); // Your ciphertext here
		send("Decrypt (After): ", Memory.readByteArray(args[0], 697));
	},
	onLeave: function(retval) { send("Leave: " + Memory.readUtf16String(this.x)); }
});

But the road that led me here wasn’t exactly straight.

First of all, I needed a way to trigger the password decryption. I could theoretically fire a call to the decryption function myself, but I couldn’t figure out a way to get the address of the newly created LoginAccountX instances (the question is still open on StackExchange). Luckily I found a way to trigger this action from the GUI: when creating new backup jobs, the management application checks whether it can access the resource to be backed up using the default Login Account.

But my original script didn’t work.

The first problem was with character encodings (there is always a problem with character encodings): the implementation of wchar_t is platform dependent; in my case, the output buffer turned out to be readable as a UTF-16 string, which was the last thing I tried. I also had to realize that although the API defines the ciphertext parameter as a wchar_t string, it has to be provided as plain ASCII. The lesson is that when experimenting with Frida, always use Memory.readByteArray() first – the implicit conversions of the V8 engine and your API bindings (Python in my case) can mess things up badly.

Second, I used the create_script() method of the Python API to pass Frida the JavaScript as a string. This wasn’t the best idea, since my ciphertext contained backslashes, which need to be double-escaped to pass through both the Python and the JavaScript interpreters. I spent hours figuring this out. LC++;

But finally my hook script was able to extract the plaintext passwords for the Domain Administrator account (and several others).

Exploitation in Practice

Repeating the process is a bit time consuming:

  1. Grab a copy of bedb.bak
  2. Import the DB backup to an MS SQL database
  3. Copy the encrypted passwords
  4. Install Backup Exec (trial is available from Symantec)
  5. Install Frida.RE
  6. Get the address of the Decrypt() export
  7. Replace the appropriate parameters and attach to the BkupExec process with the above script 
  8. Trigger decryption by adding a new backup job

But it’s totally worth it: with read-only access to a Backup Exec server (e.g. CVE-2005-2611) you can get plaintext user accounts (probably with high privileges).

The dynamic analysis revealed that you can also simply build a wrapper program around bemsdk.dll, since the problematic section of code is not called during standard execution. I still find the Frida.RE way more convenient, though.

I have to emphasize that this is not a vulnerability in Symantec’s product, but administrators should keep in mind that their passwords for backup accounts are stored in fully reversible form (equivalent to plaintext).