Tag: reversing

Deceiving the Deceivers: A Review of Deception Pro


TL;DR: This is my personal experience and a quick review of the Deception Pro sandbox. Deception Pro is a specialized sandbox for long-duration analysis of malware and threat actor behavior. Overall, it is a promising product that fills a niche gap in the malware sandbox market.

One challenge facing malware analysts, reverse engineers, and threat intelligence analysts is understanding how malware behaves over a longer period of time. Capturing behavior in a traditional sandbox for 3, 5, or even 20 minutes is possible, and analysts can also run samples in custom virtual machines or a baremetal analysis system to watch what they do. But key challenges remain, such as:

  • It’s difficult to make the environment realistic enough to “convince” malware to continue execution, and even more difficult to capture follow-on actions such as commands issued to a bot or hands-on-keyboard events. Advanced malware and actors are looking for active directory environments or corporate networks, for example, and this can be difficult to simulate or maintain.
  • Even if an analyst can create a realistic enough environment to capture meaningful actor activity, it’s difficult to randomize the environment enough to not be fingerprinted. If an actor sees the same hostname, IP, or environment configurations over and over again, the analysis machine can easily be tracked and/or blocklisted.
  • Scalability, especially in baremetal setups, is always an issue. In my baremetal analysis workstation, I can’t detonate multiple malware samples at a time (while preventing cross-contamination), for example, and I can’t easily add snapshots for reverting after detonation.

Introducing Deception Pro

I was introduced to Deception Pro by a colleague who spoke highly of Paul Burbage’s team and the work they’ve done on other products (like Malbeacon). After reaching out to Paul, he was kind enough to offer me a demo account to help me understand the product and how it could fit into my threat research workflow. So without further ado, here’s my disclaimer:

Disclaimer: Paul and the Deception Pro team provided me with a free demo license to evaluate the product and see if it meets my needs. I’m not being paid for this review, and Paul and the team did not ask me to write one. This review is entirely my own doing.

In this post, I’ll be covering what Deception Pro is, how it can fit into a malware analysis and reverse engineering workflow, and some of its features.

Overview

Deception Pro is what I’d call a “long-term observability sandbox.” Essentially, it’s a malware sandbox designed to run malware for extended periods – several hours or even days – while also fooling the malware into thinking it’s running in a legitimate corporate environment. Long-term observation can be beneficial for a couple reasons, most notably:

  • Advanced malware often “sleeps” for long periods, waiting for a set amount of time to elapse before continuing execution or downloading additional payloads.
  • When the analyst wants to observe additional payload drops (for example, in a loader scenario) or hopes to catch hands-on-keyboard actions or follow-up objectives the attackers are trying to execute.

Pretend for a moment I’m a malware analyst (which I am, so there’s not much stretch of the imagination here). I detonated an unknown malware sample in my own virtual machines as well as several commercial sandboxes. Using publicly available commercial and free sandboxes, I determined that the malware belongs to a popular loader family. (Loaders are a class of malware that download additional payloads. They typically perform sandbox detection and other evasion techniques to ensure the target system is “clean” before executing the payload.)

I know this malware is a loader, but I want to understand what payload it ultimately drops. This behavior isn’t observable in the other sandboxes I’ve tried. I suspect that’s because the malware only communicates with its C2 and deploys its payload after a long period of time. I then submit the malware sample to Deception Pro.

When starting a new Deception Pro session, you’re greeted by an “Initiate Deception Operation” menu, which is a cool, spy-like way of saying, “start a new sandbox run.” James Bond would approve.

In this menu, we can choose from one of three randomly generated profiles, or “replicas,” for the user account in your sandbox – essentially, your “target.” This person works for one of the randomly generated company names and is even assigned a fancy title. Deception Pro then generates fake data to populate the sandbox environment, and this replica acts as a starting point or seed. I chose Mr. Markus Watts, a Supply Chain Data Scientist at the company Pixel Growth. Looks legit to me.

In the next menu, we’re prompted to upload our malware sample and choose additional details about the runtime environment. The two primary options are “Detonate Payload” and “Stage Environment Only.” Detonate Payload does what you’d expect and immediately detonates the payload once the environment spins up. Stage Environment Only allows the operator (you) to manually interact with the analysis environment. I haven’t experimented with this option.

The final menu before the sandbox starts is the Settings menu. Here, we can select the detonation runtime (days, hours, minutes), the egress VPN country, some additional settings, and most importantly, the desktop wallpaper of the user environment. I’ll choose a relaxing beach wallpaper for Mr. Watts. He probably needs a nice beach vacation after all the work he does at Pixel Growth.

As Deception Pro is designed for long-term observation, it’s best to set a longer duration for the run. Typically, I set it to 5–8 hours, depending on my goals, and I’ve had good results with this.

After clicking the Submit button, the analysis environment is set up and populated with random dummy data, such as fake files, documents, and other artifacts, as well as an entire fake domain network. This creates a realistic and believable environment for the malware to detonate in.

Deception Pro - Generating environment

Behavioral and Network Analysis

Fast-forward eight hours, and our analysis is complete. I’m excited to see what behaviors were captured. We’ll start with the Reports → Detections menu.

The Detections menu shows key events that occurred during malware detonation. There are a few interesting entries here, including suspicious usage of Invoke-WebRequest and other PowerShell activity. Clicking on these events provides additional details:

In the Network tab, we can view network connections such as HTTP and DNS traffic, along with related alerts:

In the screenshot above, you may notice several web requests as well as a network traffic alert for a “FormBook C2 Check-in.” This run was indeed a FormBook sample, and I was able to capture eight hours of FormBook traffic during this specific run.

I was also able to capture payload downloads in another run:

In this run (which was a loader), a 336 KB payload was delivered roughly five hours into execution. This highlights the fact that some loaders delay payload delivery for long periods of time.

The Artifacts menu allows analysts to download artifacts from the analysis, such as PCAPs, dropped files, and additional downloaded payloads:

Regarding PCAPs, there is currently no TLS decryption available, which is a drawback; I’ll touch on this again below.

Conclusions

It’s important to remember that Deception Pro is a specialized sandbox. I don’t believe it needs to have all the features of a traditional malware sandbox, as that could cause it to become too generalized and lose its primary strength: creating believable target users and lightweight environments while enabling long-term observation of malware and follow-on actions. Here are some of the benefits I noticed when using Deception Pro, and some potential room for improvement:

Benefits

  • Generates operating environments that simulate very realistic enterprise networks. This can expose additional malware and threat actor activities that other sandboxes may miss, like pivoting or network reconnaissance.
  • Allows users to specify long detonation runtimes (hours to days) to observe full attack chains (from initial infection to command and control, data exfiltration, and additional module and payload drops).
  • Captures key events, behaviors, and network traffic of interest for investigators and researchers
  • Allows interaction with the running sample and environment

Room for Improvement

  • PCAP decryption is currently missing (though this is reportedly coming)
  • Behavioral output is somewhat limited in its current state. This wasn’t too detrimental for my use case, as I primarily used Deception Pro as a long-term detonation environment rather than a full-fledged analysis sandbox. I rely on other tools and sandboxes for deeper analysis.
  • Currently no memory dump capabilities or configuration extraction

Also, note that the operating system environment is randomly generated, which limits customization (such as usernames, company names, etc.). This will rarely be an issue, but it could matter when attempting to detonate highly targeted malware.

Overall though, I think the team behind Deception Pro is well on its way to creating a solid specialty sandbox, and I’m excited to see where it goes. Big thanks to Paul and the team for letting me spam their servers with malware.

Go Big or Go Home (and Other Terrible Go Puns): Tips for Analyzing GoLang Malware


A few days ago, Dr. Josh Stroschein invited me on his livestream channel to talk about Golang malware. I wanted to get a quick blog post up before I forget everything we talked about. So, here is a summary of the key points we discussed in the livestream. You can also just watch the livestream here if you’re feeling lazy. Ok, let’s get on with it.

Go (or Golang) has gained some traction over the past few years, not just among developers, but increasingly among malware authors looking for the same flexibility and portability. As reverse engineers and malware analysts, that means we need to get more comfortable navigating Go binaries, understanding how they’re structured, and knowing what makes them different from traditional malware written in C, C++, or Delphi (vomit face).

What Is Go, and Why Does It Matter?

Go is a statically typed, compiled programming language developed by Google. It’s designed to be simple, fast to compile, and efficient. Some of the things Go does well:

  • Cross-compilation: Go makes it easy to build binaries for different operating systems and architectures.
  • Static linking: Most Go binaries are self-contained, meaning no external dependencies.
  • Built-in concurrency: Go’s goroutines make it easy to write efficient networked applications.

From a developer’s perspective, it’s efficient and practical. From a malware analyst’s perspective, it presents some interesting challenges.

Why Use Go for Malware?

Go offers several advantages that make it appealing to malware authors:

  • Portability: Malware authors can compile a single codebase for multiple platforms (Windows, Linux, macOS, ARM, etc.).
  • Self-contained binaries: Go binaries include everything they need to run, which results in some seriously HUGE executable file sizes, but more on that later.
  • Less tooling: Traditional reverse engineering tools aren’t as well-optimized for Go binaries, especially compared to C/C++ (but this is changing quickly).
  • Rapid development: Go is relatively easy to write and maintain, which makes it efficient for malware development.
  • Evasion by obscurity: Go binaries look different from typical malware, especially in static analysis, which may help them avoid basic detections (this is also changing rapidly).

Common Pitfalls in Analyzing Go Malware

1. Large Binary Sizes

Even a simple Go program can compile into a binary tens of megabytes in size, which you’ll see in a moment. This is due to static linking of the Go runtime and standard libraries. For analysts, this means more code to sift through, as it’s often not immediately obvious where the actual malicious code begins or ends.

2. Excess of Legitimate Code

Go’s standard library is extensive, and malware often makes use of common packages like net, os, crypto, and io. Most of the code in the binary is likely benign. The challenge is identifying the small percentage of custom or malicious logic within all the legitimate functionality. Your classic needle-in-a-pile-of-needles problem.

3. Obfuscation (Garble and Others)

Go malware is increasingly using obfuscation tools like Garble, which strip or randomize symbol names, re-order packages, and break common static analysis workflows. These techniques don’t necessarily make the malware more sophisticated, but they do add complexity to the reversing process.

Other common obfuscation techniques may include:

  • Encrypted or encoded strings
  • Control flow obfuscation
  • Packing or compression

Let’s analyze a very basic Go binary. The best way to do this is to write our own code.

Analyzing a Basic Go Program

Go code is fairly straightforward and simple to write. Here is literally the most basic Go application you can write, printing our favorite “Hello World” (in this case, “Hello Earth”) string:

When compiled (using the go build command), the binary is a fairly large executable (2MB+). Since Go ships a lot of library code into each compiled executable, even this simple Hello World binary is substantial.

Let’s open this up in IDA, my disassembler of choice for Golang. Newer versions of IDA (I think version 8+) are good at identifying Go standard library code. IDA nicely groups these libraries in “folders”, as you can see in the screenshot below:

Each of these folders represents a library. For example, “internal”, “runtime”, and “math” are all libraries being imported into this Go program. IDA is able to recognize these libraries and functions and name them appropriately. If your disassembler is not designed for Golang use, you’ll see a bunch of generic names for these functions, which makes analysis of Go binaries a lot more difficult. One tool (GoReSym) can help identify these functions, and its output can then be re-imported into some disassemblers, like Ghidra.

Most of the time, in unobfuscated Golang binaries, the main functionality of the program will reside in the function main.main (or main_main, as IDA names it), which IDA identified for us:

Tip: Whenever I’m analyzing a Go binary, I always look first for main_main or other functions that contain the name “main_*”.

Inside main_main we can see our Hello World code. You may be able to spot the “Hello Earth!” string in the code below:

Notice that this “Hello Earth!” string is surrounded by a bunch of other junk. These are also strings in the binary. One challenge when analyzing Golang code is that strings are not null-terminated like they are in C programs. Each string is actually a structure that contains a pointer to the string data and an integer representing the string’s length. I provided some terrible pseudocode for visualization of this:

struct string {
    value  = "Hello Earth!"
    length = 12
}

In this case, IDA didn’t know that “Hello Earth!” is a separate string from “152587…” and the others. This is one thing you’ll need to take into account when analyzing Golang.

Ok, Hello World apps are cool and all, but let’s take it up a notch. Many malware binaries written in Go will be obfuscated. Garble is one such obfuscator. Garble… well… garbles the metadata of the Go binary. It does this by stripping symbols, function names, module and build information, and other metadata from the binary during compile-time.

If we open the same Hello World binary in IDA, but “Garbled” during compilation, it looks a lot different:

All our nice, beautiful Golang function names have been replaced with ugly, generic IDA function names (“sub_xxxxxx”). So how do we find our main function code now? We can’t – Golang won. Time to pack up and Go home.

No, just kidding. We just have to work a bit harder. I’ve found that Golang requires several critical libraries to correctly function, and one of those is the “runtime” library, which contains a lot of Go’s runtime code. Oftentimes, the runtime library names are not obfuscated, as is the case in my Garble-compiled binary. (Note: I think Garble can strip the “runtime” module names as well, but I didn’t test this; in any case, they are often left intact.) This means we can find cross-references to runtime functions in the code and trace those back to the program’s main function! Let’s try this.

If we search the function list in IDA for “runtime”, we get the following:

One common runtime function is runtime_unlockOSThread. We can double-click on this function and press CTRL+X to see cross-references to it. Looking through all the cross-referenced functions will lead you to a block of code that looks like this:

When you spot functionality that contains a lot of “runtime” functions, you may be near the location of the program’s main code. In this case, our main code is not far away, in sub_49A9E0. You may be wondering: “Kyle, how are you so smart that you found that so fast?”. Well, intelligence aside, it was a lot of hunting around the code. No crazy tricks here.

And here we have our main code at sub_49A9E0:

Tip: Garble and other obfuscators can also obfuscate strings, not just the function names. I used the default Garble settings for this binary. The analysis methodology is the same, however.

Additional Resources

A few more resources on Golang I find extremely helpful:

  • Ivan Kwiatkowski’s YouTube videos on GoLang analysis.
  • Josh Stroschein’s PluralSight course on GoLang malware analysis. In this course, Josh covers the OT malware FrostyGoop.

Key Takeaways

Go malware is becoming more common, and it’s likely here to stay. While it presents some unique challenges, many of the same principles from other forms of reverse engineering still apply. You just need to adjust your approach and tools.

5 Tips for Reversing Go Malware

  1. Start with main.main (main_main) – This is (nearly) always where a Go program’s main logic begins and can give you a foothold into the rest of the logic.
  2. Use the right tooling – IDA, Ghidra with GoReSym (other disassemblers probably work too, but I haven’t tested them), and de-obfuscators like the appropriately named UnGarbler.
  3. Ignore the noise – Skip most of the standard library code unless it’s directly involved in malicious behavior.
  4. Look for key APIs – Even with obfuscation, patterns like “net.Dial”, “os/exec”, or “http.Get” can help narrow down suspicious areas.
  5. Combine static and dynamic analysis – Especially with obfuscated binaries, dynamic tracing or debugging can be the fastest way to understand real behavior. Ivan Kwiatkowski has some great tips on debugging Golang in this video.
Unpacking Ryuk


In an earlier post, I wrote a technical analysis of the Ryuk ransomware and its behaviors. This post is a follow-up, for anyone interested in learning one method of unpacking a Ryuk sample.

As explained in my previous post, Ryuk will typically try to inject itself into several processes running on the victim system. It does this by leveraging a common injection technique using OpenProcess, VirtualAllocEx, WriteProcessMemory, and finally, CreateRemoteThread.

Ryuk can be extracted from memory by running it in a debugger (x64dbg is my choice for this) and setting a breakpoint on CreateRemoteThread (this can be done with the command setBPX CreateRemoteThread in x64dbg).

Breakpoint hit on CreateRemoteThread.

Once the breakpoint is hit and program execution is paused, we can expect to see a handle to a process (the process into which Ryuk wishes to inject code) by inspecting the call stack window. In my case, the handle is 0x148, located in rcx:

Call stack for CreateRemoteThread.

Next, we need to cross-reference this handle with its process name in order to find out the target process. We could use x64dbg for this, but I will use ProcessHacker because I feel it is a bit easier to use in this case. To do this, simply launch ProcessHacker, right-click the running Ryuk process, select Properties, and then the Handles tab.

We can see below that the handle (0x148) is associated with the taskhost.exe process. It looks like Ryuk injected code into taskhost.exe and is now attempting to run this code. Note: This process may be different for you! Ryuk often injects into dwm.exe, virtual machine processes, and others.

Handle to taskhost.exe in Ryuk process.

Now we must inspect this taskhost.exe process and try to find the location where Ryuk injected its code. We can do this with ProcessHacker or any number of memory inspection tools. I have chosen instead to attach the process to x64dbg and inspect the memory there. This is just a matter of personal preference, however.

What we are looking for in the taskhost.exe process address space is a suspicious memory region where Ryuk’s code has likely been injected. This can be accomplished using the Memory tab in x64dbg (or the Memory tab in ProcessHacker, if you choose to do it with that tool). In x64dbg, underneath the taskhost.exe memory region, we can see that a memory region has been created with ERW (Execute-Read-Write) permissions, which is suspicious. The region also has a significant size (163 KB), which is definitely enough space to store an executable:

Injected executable in taskhost.exe.

This is likely our injected code. We can dump this memory region from x64dbg (or from ProcessHacker) by right-clicking the memory region and selecting the Dump or Save option.

Now you should be able to inspect this file in a PE viewer or a disassembler (such as IDA) and see readable strings:

Ryuk strings - before unpacking executable.
Ryuk strings - after unpacking executable.

After dumping the memory region, if we try to start analyzing this file in a disassembler (such as IDA), we will see that the file is not a valid PE file, or is otherwise corrupted. To fix this, you can use the PE_Unmapper tool. This tool will “unmap” the executable from memory and fix up our dumped executable, including rebuilding the Import Address Table. Note: Other tools may work for this as well, including Scylla or OllyDump, but I found that PE_Unmapper worked best in my case.

With PE_Unmapper, we can unmap this dumped executable using the following command syntax:

pe_unmapper.exe [mem_dump_input] [mem_address_base] [output_file]

In my case, the command would be:

pe_unmapper.exe ryuk_dump.mem 0x13F630000 ryuk_fixed.exe

You should now have imports listed in a PE viewer tool or in a disassembler. One important thing to note is that Ryuk dynamically builds a second import table using the GetProcAddress function. This means there are actually more imports than are listed in the Imports section of any PE viewer tool. Because of this, you will likely experience issues analyzing this file in a disassembler.

These functions seem to be labeled as cs:qword_<address>. To fix this issue, you will have to manually rename these functions, like so:

Ryuk imports table - before fixing.
Ryuk imports table - after fixing.

You could probably script this part as well, if you felt motivated enough. If you do, please share it with me 😉

Here is the SHA-1 hash of the sample I used in my analysis:

feafc5f8e77a3982a47158384e20dffee4753c20

This sample can be found on VirusTotal. If you don’t have access to VirusTotal, just Google the hash and you may be able to find it elsewhere online. Otherwise, any new-ish Ryuk sample will likely work for this analysis.

Well, that’s about it. As always, thanks for reading! If you enjoyed this post, follow me on Twitter (@d4rksystem).