I was recently analyzing a malware sample that abuses the Beep function as an interesting evasion tactic. The Beep function simply plays an audible tone for the user. It accepts a parameter, dwDuration, which is the number of milliseconds to play the beep sound. The calling program’s thread that executed the function will “pause” execution for the duration of the beep, which makes this an interesting anti-sandbox and anti-debugging technique. Let’s take a deeper look at the Beep function.
How Beep Works
When a program invokes the Beep function, it ultimately calls into a function called NtDelayExecution, which does exactly as it is titled: It delays execution of the calling program’s running thread. The below image illustrates how this essentially works:
The calling program (in this case, the malware), calls Beep, which further calls into NtDelayExecution. Once the beep duration has been met, control flow is passed back to the malware.
Here is a function trace from API Monitor showing the same thing. Notice how Beep invokes several lower-level functions, including DeviceIoControl (to play the audible beep sound via hardware) and, finally, NtDelayExecution:
As a side note, since the Beep function was originally intended to play an audible “beeeep!” when executed, it also accepts a parameter called dwFreq, which denotes the frequency of the beep sound in hertz. This means the calling program can choose the tone that plays when Beep executes. This particular malware doesn’t play a tone when calling Beep, but I think this would be a funny technique for malware to use: annoy the victim (or the malware analyst). You may also wonder why the malware doesn’t just call NtDelayExecution directly. That would also work, but it may appear more obvious to malware analysts and researchers. Anyway, it’s much more fun to use Beep than to call NtDelayExecution directly.
The Malware
The malware I was investigating calls the Beep function with a duration of 65,000 milliseconds (which stalls analysis for about a minute). It also calls Beep multiple times for added delay, which can cause a sandbox to stall for potentially long periods of time. If the malware is being debugged, the analyst temporarily loses control of the malware while the thread is “paused”. Here is an excerpt of this code in IDA Pro:
Sandbox and Debugger Mitigations
To mitigate this technique, the sandbox can hook the NtDelayExecution function and modify the DelayInterval parameter to artificially decrease any delay. In a debugger, the malware analyst can set a breakpoint on NtDelayExecution (or on Beep itself) and modify the delay parameter in the same way.
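The mitigation logic is simple enough to sketch in a few lines of Python. This is an illustrative stand-in for what a sandbox-side hook might do; the function name and the 500 ms cap are my own placeholder choices, not any real sandbox’s behavior:

```python
# Sketch of sandbox-side logic for a hooked NtDelayExecution:
# clamp any requested delay to a short cap so stalling tricks finish quickly.
MAX_DELAY_MS = 500  # arbitrary cap, chosen for illustration

def hooked_delay(requested_ms):
    """Return the delay the sandbox will actually honor."""
    return min(requested_ms, MAX_DELAY_MS)

# The malware asks for 65 seconds; the hook shrinks it to half a second.
print(hooked_delay(65000))
```

A real implementation would patch the DelayInterval argument in memory before passing execution on to the original NtDelayExecution.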
Other References
While researching this malware, I ran across an article from researcher Natalie Zargarov from Minerva Labs who wrote about this same technique in 2023 used in a different malware family.
I was digging into a new version of StrelaStealer the other day and I figured it may help someone if I wrote a quick blog post about it. This post is not an in-depth analysis of the packer. It’s just one method of quickly getting to the Strela payload.
A quick assessment of the executable in PEStudio reveals a few interesting things that I’ve highlighted. Note the TLS storage (callbacks). When the sample first executes, it makes two TLS callbacks as we’ll see in a bit.
Viewing the strings in PEStudio reveals several large strings with high-entropy data. These strings are part of the packed payload.
Let’s open the file in a disassembler to investigate further. I’ll be using IDA Pro for this analysis. If we inspect the “histogram” at the top of the IDA Pro window, we can see a large olive green segment which indicates data or code that IDA can’t make sense of. IDA Pro calls this data blob unk_14012A010:
As we saw in the strings earlier, this is likely the packed payload. I’ll rename this blob in IDA Pro to obfuscated_payload_blob. If we view the cross-references to this blob (ctrl+x in IDA), we can see several references:
Double-click one of these (I’ll select the 2nd one from the bottom), and you’ll see the following:
It seems our blob is being loaded into the rdx register (lea rdx, obfuscated_payload_blob), and a few instructions later there is a call to the function sub_140096BA0. Inspect the code of this function and you may notice quite a few arithmetic instructions (such as add and sub), as well as lots of mov instructions and a loop. This all indicates that it is very likely a deobfuscation routine. Let’s rename this function deobfuscate_data. We won’t be analysing the unpacking code in depth, but if you wish to do so, you should rename the functions you analyse in a similar manner to better help you make sense of the code.
If we then get the cross-references to the deobfuscate_data function, we’ll see similar output to the cross-references for the obfuscated payload blob:
Inspect these more closely and you’ll see that the obfuscated blob is almost always being loaded into a register followed by a call to the deobfuscate_data function. This malware is unpacking its payload in multiple stages.
If we walk backwards to identify the “parent” function of all this decryption code, we should eventually spot a call to a qword address (0x14008978D) followed by a return instruction. This call looks like a good place to put a breakpoint as this is likely the end of the deobfuscation routine (given that there is also a return instruction that will take us back to the main code):
Let’s test this theory by launching the malware in a debugger (I’ll be using x64dbg). When you run the malware, you’ll hit two TLS callbacks (remember I mentioned those earlier?), like the below:
Just run past these. TLS callbacks are normally worth investigating in malware, but in this case we are just trying to unpack the payload, so we won’t investigate them further. You’ll eventually get to the PE entry point:
Put a breakpoint on the call instruction at 0x14008978D (using the command bp 14008978D) and run the malware. You should break on that call instruction:
If we step into this call instruction, we’ll get to the OEP (original entry point) of the payload! Inspect the Memory Map and you’ll see a new region of memory with protection class ERW (Execute-Read-Write):
This new memory segment (highlighted in gray in the image above) contains our payload. Don’t believe me? Dump it from memory (right-click -> Dump to file) and take a look at the strings. You should see something like the following:
You’ll spot some interesting data like an IP address, a user agent string, registry keys, and so on. If you don’t see any cleartext strings, you likely dumped the payload too early (before the malware deobfuscated all the data in this memory region) or too late (after the malware cleared its memory). Go back through the steps above and try again ☺
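A quick way to sanity-check a dump like this from a script is to scan it for runs of printable ASCII, which is roughly what the strings tool does under the hood. A minimal sketch (the dump bytes here are made up for illustration):

```python
import re

def extract_ascii_strings(data, min_len=4):
    """Find runs of printable ASCII characters, similar to the strings tool."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

# Made-up dump excerpt: cleartext artifacts surrounded by binary noise
dump = b"\x00\x01Mozilla/5.0\x00\xffSOFTWARE\\Microsoft\x00\x02"
for s in extract_ascii_strings(dump):
    print(s.decode())
```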
Let’s open this dump file in IDA. After opening the file in IDA, be sure to rebase it (Edit -> Segments -> Rebase Program) to match it to the memory address in x64dbg:
After opening this dumped payload in IDA, you’ll see some inconsistencies, however:
See the problem? Some call instructions are not resolved to function names. However, in x64dbg, these functions are labeled properly:
This is because in x64dbg, these function names are being resolved to addresses in memory. In our IDA dump, they are not mapped properly.
Normally, what I would do next is try to get my IDA database as close as possible to the code in x64dbg. We could spend more time analysing the unpacking code to identify where the malware resolves its imports, which may help us get a better dump of the payload. Or we could automate this by writing a Python script to export all function names from x64dbg and import them into IDA. But why spend one hour automating something when we can spend two hours doing it manually? 🙂
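For what it’s worth, here is roughly what the automated route could look like. The export format and file contents below are my own assumptions (x64dbg doesn’t dictate a format); the parsing half is shown standalone so it runs outside IDA, and the idc.set_name call in the comment is how you would apply the labels inside IDA:

```python
import csv
import io

# Hypothetical export: one "address,name" row per function resolved in x64dbg
exported = "00000000001B1042,InternetOpenA\n00000000001B107B,InternetConnectA\n"

def parse_labels(text):
    """Parse exported rows into a {address: name} mapping."""
    return {int(addr, 16): name for addr, name in csv.reader(io.StringIO(text))}

labels = parse_labels(exported)

# Inside IDA, you would then apply each label, e.g.:
#   for ea, name in labels.items():
#       idc.set_name(ea, name, idc.SN_NOWARN)
print(len(labels))
```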
We can manually fix this up in IDA by cross-referencing each unknown function with the function name in x64dbg. For example, at address 0x1B1042 there is a call to InternetOpenA (according to our x64dbg output), and at address 0x1B107B there is a call to InternetConnectA.
And now, we have something a lot more readable in IDA:
After you spend a bit of time manually renaming the unknown functions in your IDA database file, you should have some fairly readable code. Congrats! You unpacked Strela’s payload. Spend some time analysing the payload and see what you can learn about this sample.
Happy reversing! 🙂
— d4rksystem
Creating Quick and Effective Yara Rules: Working with Strings
This is a quick post to outline a few ways to extract and identify useful strings for creating quality Yara rules. This post focuses on Windows executable files, but the techniques can be adapted to other file types. Let’s start with an overview of the types of strings we are interested in when developing Yara rules.
tl;dr
In this post, you will learn:
How to extract ASCII and Encoded strings from malware samples.
How to analyse strings from a malware sample set and choose strings for your Yara rule.
Tips and other tools to assist in Yara rule creation.
ASCII vs. Encoded Strings
Windows executables normally contain both ASCII and encoded strings. A “string” typically refers to a sequence of alphanumeric and special characters arranged in a specific order. Strings represent various types of data, including file names, paths, URLs, and other content within files.
ASCII is a character encoding standard that uses single-byte numeric codes to represent characters. ASCII is straightforward, but it has limitations when it comes to representing characters from other languages or special symbols. “Encoded” strings generally refer to text stored using a wider character encoding scheme, such as UTF-16 (16-bit Unicode Transformation Format, sometimes referred to as “wide” strings), which is standard in Windows executable files. When writing Yara rules for Windows executables, we normally want to focus on both ASCII and Unicode strings. So, how do we extract these strings from an executable file? Glad you asked.
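To see why wide strings need separate handling, compare the byte layout of the same text in ASCII versus UTF-16LE. The null bytes between characters are why a plain ASCII scan misses wide strings:

```python
text = "cmd.exe"

ascii_bytes = text.encode("ascii")    # one byte per character
wide_bytes = text.encode("utf-16le")  # two bytes per character, low byte first

print(ascii_bytes)  # b'cmd.exe'
print(wide_bytes)   # b'c\x00m\x00d\x00.\x00e\x00x\x00e\x00'
```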
Extracting Strings
The simplest way to extract strings is the strings tool in Linux/Unix (also available for Windows and macOS). To extract ASCII strings, execute the command on your malware executable target and save the output to a text file like so:
strings -n 4 malware.exe > malware-strings.txt
To extract UTF-16LE (“wide”) strings, add the -e l option:
strings -n 4 -e l malware.exe > malware-encoded-strings.txt
Once we have our strings, let’s dump them into a Yara rule, shall we? Heh… Not so fast, cowboy. We have some strings analysis work to do first.
Analyzing Strings
One of the challenges with using strings for detecting malware is that there are so.. many.. strings. A single executable file could have thousands. How do we know the good strings, from the bad strings, from the ugly strings? How can we know which to include in our Yara rule?
If you have a single malware executable, you’ll have lots of strings to dig through (depending on the size of the executable file, of course). The trick is to identify the strings that are likely related to the malware itself, while filtering out the strings we are not interested in: those not directly related to the malware, such as compiler artifacts and common strings that also reside in benign files. It takes experience to know what to look for and what to ignore.
If you have a number of files of the same malware family, this process can be a bit more efficient. What we need to do is gather our malware sample set, extract all strings from these samples, and compare these strings to identify the strings we should zero in on for our Yara rule.
This malware sample set must meet the following requirements:
The malware samples should be part of the same malware family. For example, if you are developing a Yara rule for Ryuk ransomware, all samples should be Ryuk ransomware; otherwise, unrelated samples and strings will taint your Yara rule.
The malware samples should be unpacked/deobfuscated. If the samples are packed, encrypted, obfuscated, etc., you are no longer writing a Yara rule for the malware itself, but rather for the packer/obfuscator. If this is your intention, that’s perfectly fine, as there are valid use cases for this as well!
The malware samples should be of the same file type. It’s not a good idea to mix Windows executables with MS Office documents, for example.
The more malware samples you have in your set, the more accurate your Yara rule could be.
We can extract and analyse all strings in a malware sample set with a one-liner command. First, make sure you have your malware samples together in one directory called “samples”. (I am assuming you are on a *Nix system here, but the following command can be adapted for Windows as well with a bit of work):
for file in ./samples/*; do strings -n 4 "$file" | sort | uniq; done | sort | uniq -c | sort -rn > count_malware_strings.txt
In the above command, we create a for loop that iterates over all files in our “samples” directory. Each file’s strings are extracted, sorted, and deduplicated, and finally uniq -c prepends a count to each unique string (the number of files it appeared in) before saving the output to the text file count_malware_strings.txt. Here is a screenshot of the result:
You may be able to spot some interesting strings. The number “9” next to a line denotes the number of samples that string resides in. My sample set consists of 9 samples, so each string with a 9 next to it resides in every one of my malware samples!
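The counting logic of the shell pipeline can be mirrored in a few lines of Python, which makes the key step explicit: strings are deduplicated within each sample first, so the final count is the number of samples containing each string. The sample names and strings here are invented for illustration:

```python
from collections import Counter

# Invented per-sample string sets (in practice, the deduplicated strings output)
samples = {
    "sample1.exe": {"Mozilla/5.0", "cmd.exe", "compiler_artifact"},
    "sample2.exe": {"Mozilla/5.0", "cmd.exe"},
    "sample3.exe": {"Mozilla/5.0"},
}

counts = Counter()
for strings_in_sample in samples.values():
    counts.update(strings_in_sample)  # sets are already deduplicated per file

for string, n in counts.most_common():
    print(n, string)  # strings present in every sample float to the top
```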
We should also run this same command, but for encoded strings:
for file in ./samples/*; do strings -n 4 -e l "$file" | sort | uniq; done | sort | uniq -c | sort -rn > count_malware_strings_encoded.txt
Here is the result:
See any interesting strings here? Perhaps the references to WMI (SELECT * …), the sandbox-related strings (“sandbox”), and strings such as “Running Processes.txt”?
Selecting Strings for the Yara Rule
So, now we have a much better idea of what strings to use in our Yara file. Ideally, we’ll want to select strings that are in all or most of the sample set. Selecting strings that are in only one file may result in lots of false-positives (depending on what type of rule you are creating and what your objectives are, of course). However, selecting only strings that appear in all files may result in your Yara rule being too specific. Again, this will depend on your objectives for the rule.
Consider also that even though you are dealing with malware, there will be “benign” strings (sometimes called “goodware strings”) in these files that are not part of the malware’s code or functionalities. You’ll likely want to weed these out. Optionally, you could create a goodware strings database or list that simply contains strings you wish to exclude from your Yara rules. But this is a topic for another day.
Creating our Yara Rule
Based on the strings I observed in the strings text files I created previously, I chose the following strings and created my basic Yara rule:
Notice how I added the “wide” attribute to some of the strings, which tells Yara that these are encoded (UTF-16) strings. For the condition at the bottom, I am specifically looking for samples that begin with the header bytes 0x5A4D (“MZ”, indicating a Windows PE file) and contain 15 or more of these strings. Lowering this number results in more of a “hunting” rule: you may catch additional malware with the wider net, but you’ll also get more false positives. Increasing this number creates a higher-fidelity rule, but one that may be too specific.
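The condition’s trade-off can be sketched in Python to make it concrete. This is a toy re-creation, not Yara itself; the string list and threshold are placeholders (in Yara terms, the header check is uint16(0) == 0x5A4D, i.e. the little-endian “MZ” magic):

```python
def rule_matches(data, rule_strings, threshold=15):
    """Toy re-creation of the rule's condition: MZ header plus N-of-M strings."""
    if not data.startswith(b"MZ"):  # 0x5A4D read little-endian
        return False
    hits = sum(1 for s in rule_strings if s in data)
    return hits >= threshold

# Placeholder sample: an MZ header followed by two of the three rule strings
fake_pe = b"MZ" + b"\x00" * 16 + b"Mozilla/5.0" + b"cmd.exe"
print(rule_matches(fake_pe, [b"Mozilla/5.0", b"cmd.exe", b"sandbox"], threshold=2))
```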
Other Tools and Tips
Here are a few other random tips/tricks for dealing with strings in Yara rules:
PE Studio – A great PE executable file analysis tool that also has a nice built-in “goodware” and “malware” strings database. Open an executable file in PE Studio and the tool will give you hints about which strings may be interesting.
StringSifter – A tool created by Mandiant that can “sift” through strings and rank them based on how unique or “malicious” they are. This is very useful for quickly identifying the interesting strings.
yarGen – A full-on cheat mode for Yara rules. yarGen is a tool from Florian Roth that takes an input sample set and automatically generates Yara rules based on interesting strings or code in the files. This is a great tool if you are pressed for time or have lots of rules to create. However, nothing beats a well-tuned, manually written rule (in my humble, old-school, boomer opinion). Also, if you are new to Yara and/or malware analysis, stay away from the automatic tools and just do it manually, please 🙂
Conclusion
I hope this short post helps you create better Yara rules! If you have further suggestions or ideas, send them to me and I may include them in this post or in future posts!
It has been a while since I’ve touched a malicious RTF document and I’ve been itching to refresh my knowledge in this area. The tricky part was finding a maldoc worth investigation. Well, my luck recently changed – along came a maldoc lure that targeted guests of the 2023 NATO Summit in Lithuania in July. I found a maldoc worthy of my time.
BlackBerry wrote a great post on the analysis of the entire attack chain, but glossed over the analysis of the first-stage lure, which is what prompted me to analyze it further. Note that I’ll only be covering the first-stage lure in this post. The file I am investigating is named “Overview_of_UWCs_UkraineInNATO_campaign.docx” and is available on VirusTotal:
Upon initial inspection, this MS Word document does indeed appear to be quite targeted:
To begin my analysis, I first executed the document in a Windows 10 VM while capturing network traffic in Fiddler. The screenshot below shows connections to two IP addresses:
The first connection seems to be an HTTP OPTIONS request to 104.234.239.26.
Edit 1: I was informed by a reader (@k0ck4) that the malware also makes SMB connections to the remote server. This is true – the malware attempts to connect to the remote server via SMB and, following this, makes an HTTP OPTIONS request. I was not able to get the malware to connect to the server (likely because the server is offline), but according to the strings in the RTF document objects, it attempts to download a file (more on this later!). The following screenshot from Wireshark shows the SMB connections:
The second connection is to another IP, 74.50.94.156, from which the malware appears to download a file (start.xml). For fun, I queried these IPs in Shodan to see if there was anything interesting. Fun fact: the 74.50.94.156 host is running WinRM and other services and has some interesting data exposed. (I blurred the data out, but you can check it out on Shodan if interested):
Let’s dig deeper into this document file. I switched over to a Remnux VM and used the tool zipdump to get an idea of this file’s contents.
There is definitely something in this document: an embedded RTF file (index 13) that appears to be named “afchunk.rtf”! Let’s extract it:
(Since this command is a bit hard to read, here it is in text):
Let’s switch over to the rtfdump tool to see what is inside this RTF file:
It looks like we have three potential embedded objects. The first object (index 147) has a size of 0 bytes… interesting. The second object (index 152) appears to be an “OLE2LINK” object. And the third object (index 161) has the designation “SAXXMLReader”.
While rtfdump, rtfobj, and similar tools are extremely valuable, they are reliant on malware authors behaving properly. Some RTF malware may be able to hide objects from these tools or otherwise obfuscate the data inside. For this reason, I almost always look into the raw data of the file to make sure my findings align. To start, I ran the strings tool on the afchunk.rtf file (command: strings afchunk.rtf). A few things pop out:
There appear to be two objects embedded in this RTF file, denoted by the highlighted “objdata” tags. The first objdata tag is followed by a blob of hex data. If we copied this hex and converted it to ASCII, we would see some interesting things – but we’ll extract the object in a moment. This objdata tag is preceded by the string “Word.Document.8”, which suggests that this may be an embedded Word document. However, the standard OLE magic bytes (“D0 CF 11 E0”) are missing from the hex data, so this object seems to be malformed – possibly on purpose, to mislead analysts and automated tools.
The second objdata tag contains another hex blob, but this time we see the “D0 CF 11 E0” magic bytes, which denote an embedded document file or OLE (Object Linking and Embedding) object.
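You can reproduce the magic-byte check yourself by decoding a slice of the objdata hex. The blob below is just the well-known 8-byte OLE compound-file signature, not bytes taken from this sample:

```python
# The OLE/Compound File signature as it would appear in an objdata hex blob
hex_blob = "d0cf11e0a1b11ae1"

raw = bytes.fromhex(hex_blob)
print(raw.startswith(b"\xd0\xcf\x11\xe0"))  # OLE magic present?
```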
OLE is a way for different programs to exchange data between them. Imagine you have a document in MS Word, and you want to include a chart or a spreadsheet from MS Excel. OLE enables this. The chart or spreadsheet becomes an OLE object. You can learn more about OLE here. In the case of maldocs, malware authors often link or embed malicious objects into otherwise benign RTF documents as a way to hide them and stealthily execute evil activity.
Let’s dump the objects we discovered to disk. The rtfobj tool can help with this:
This command displays the objects inside this RTF file, and dumps them to separate files so we can analyze them. As we suspected, the first object (ID 0 in this output) states that the object is “Not a well-formed OLE object”. The second object (ID 1) has a class name of “OLE2LINK”, a type of OLE object. As a fun homework assignment, Google “OLE2LINK” – the first thing you’ll see is a list of vulnerabilities affecting this object type.
So, let’s take a look at the embedded objects we just extracted.
Analysis of Embedded Object 1
Viewed in a hex editor, Object 1 contains some interesting strings, notably the IP address 104.234.239.26 and the UNC-style path “\share1\MSHTML_C7\file001.url”. When the afchunk.rtf file executes, this embedded object also executes, forcing MS Word to send a request to this remote server. We’ll discuss this more in a moment.
Edit 2: As described in Edit 1, this document makes an SMB connection as well as the HTTP request. You can tell this is SMB by the Windows-style backslashes (“\\” and “\”).
Analysis of Embedded Object 2
Similarly to Object 1, Object 2 can be viewed in a hex editor:
Viewing the second object in hex editor reveals another interesting string: “74.50.94.156”, as well as a URI path “/MSHTML_C7/start.xml”. This is the other IP we saw in our Fiddler traffic. As with the first embedded object, this second embedded OLE object also executes upon afchunk.rtf executing, and similarly tricks MS Word into contacting a remote web server. How does this work? I am glad you asked.
These embedded objects seem to be taking advantage of an older, well-known vulnerability (CVE-2017-0199). According to Microsoft, this vulnerability “exists in the way that Microsoft Office and WordPad parse specially crafted files. An attacker who successfully exploited this vulnerability could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.” Sounds quite dangerous… and very generic.
Digging deeper, I found publicly available exploit code for this vulnerability. If you compare the exploit’s payload code to the strings in the RTF document, you can see some similarities; the malware authors perhaps even reused some of this exploit code for their own maldoc. For example, the following strings from the original RTF exploit code are present in this document:
The Next Stages
At the time of my analysis, I could not contact the IPs directly, so I could not obtain the files the way they were meant to be downloaded (via exploitation of the RTF document and MS Word). However, I was able to obtain them from VirusTotal.
The file hosted on 104.234.239.26 is another MS Word file that renders an iframe in preparation for the next stage of the attack. The file hosted on 74.50.94.156 is an XML file containing a weaponized iframe that is then loaded into MS Word. This malicious iframe exploits the CVE-2022-30190 (“Follina”) vulnerability and sets the stage for the later phases of this attack.
Since the goal of this blog post was simply to show one methodology for analyzing an RTF file, I won’t go into detail on the later stages of this attack. You can read about them in the BlackBerry blog post.
For further reading, I found a good older article from the researchers at Nviso. Additionally, McAfee researchers posted a great article on malicious RTF documents and how they work.
I hope you enjoyed! If you see any inconsistencies or errors in this post, please let me know! Also, if you have additional techniques, I am always happy to learn new ways of malware analysis! 🙂
I was investigating a malware sample that uses an interesting trick to circumvent sandboxes and endpoint defenses by simply deleting its zone identifier attribute. This led me on a tangent where I began to research more about zone identifiers (which, embarrassingly enough, I had little knowledge of prior). Here are the results of my research.
The Zone.Identifier is a file metadata attribute in Windows that indicates the security zone where a file originated. It is used to indicate a level of trustworthiness for a file when it is accessed, and helps Windows determine the security restrictions that may apply to the file. For example, if a file was downloaded from the Internet, the zone identifier will indicate this, and extra security restrictions will be applied to this file in comparison to a file that originated locally on the host.
The zone identifier is stored as an alternate data stream (ADS) file, which resides in the file’s metadata. There are five possible zone identifier values that can be applied to a file, represented as numerical values of 0 to 4:
Zone identifier “0”: Indicates that the file originates on the local machine. This file will have the least security restrictions.
Zone identifier “1”: Indicates that the file originated on the local Intranet (local network). Both zone identifier 0 and 1 indicate a high level of trust.
Zone identifier “2”: Indicates that the file was downloaded from a trusted site, such as an organization’s internal website.
Zone identifier “3”: Indicates that the file was downloaded from the Internet and that the file is generally untrusted.
Zone identifier “4”: Indicates that the file came from a likely unsafe source. This zone is reserved for files that must be treated with extra caution, as they may contain malicious content or pose a security risk.
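These five values map to the Windows URL security zones. For triage scripts, the mapping fits in a small lookup table (this is just a convenience dictionary, not a Windows API):

```python
# Security zone values as they appear in the Zone.Identifier stream
ZONES = {
    0: "Local Machine",
    1: "Local Intranet",
    2: "Trusted Sites",
    3: "Internet",
    4: "Restricted Sites",
}

def describe_zone(zone_id):
    return ZONES.get(zone_id, "Unknown")

print(describe_zone(3))  # Internet
```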
You can use the following PowerShell command to check if a file has a zone identifier ADS:
Get-Item <file_path> -Stream zone*
An example of this output can be seen below. Notice the highlighted area that denotes the ADS stream (“Zone.Identifier”) and its length. Also note that if no data is returned after running this command, the file likely does not have a zone identifier stream.
To view this file’s zone identifier stream, you can use the following PowerShell one-liner:
Get-Content <file_path> -Stream Zone.Identifier
An example of this can be seen below:
A zone identifier stream will look something like this:
In this example, the Zone.Identifier indicates that the associated file originates from “zone 3”, which corresponds to the Internet zone. The ReferrerUrl denotes the domain of the webpage the file was downloaded from (or potentially the referrer domain), and the HostUrl specifies the precise location the file was downloaded from.
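Because the stream is just INI-style text, it’s easy to parse in a triage script. A minimal sketch, using a made-up stream value in the standard [ZoneTransfer] format:

```python
# Made-up Zone.Identifier contents
stream = (
    "[ZoneTransfer]\n"
    "ZoneId=3\n"
    "ReferrerUrl=https://example.com/downloads/\n"
    "HostUrl=https://example.com/downloads/payload.exe\n"
)

def parse_zone_stream(text):
    """Split 'key=value' lines into a dict, ignoring the section header."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key] = value
    return fields

info = parse_zone_stream(stream)
print(info["ZoneId"])  # 3
```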
These zones are also the basis for the Mark of the Web (MoTW). Any file that originates from zone 3 or zone 4, for example, is said to have the mark of the web.
Malware can abuse the zone identifier in a few different ways, with a couple different goals:
Defense Evasion
Malware can manipulate the zone identifier value to spoof the trust level of a file. By assigning a lower security zone to a malicious file, the malware can trick Windows and defense controls into treating the file as if it came from a trusted source.
To accomplish this, malware can simply modify its files’ zone identifiers. Here is how this can be accomplished via PowerShell:
This PowerShell one-liner modifies a file’s zone identifier to be a certain value (in this case, setting the zone ID to “1”). This may help the malware slip past certain defensive controls like anti-malware and EDR, and may make the malware look less suspicious to an end user.
Or, the zone identifier stream can simply be deleted, which may trick some defensive controls. A variant of the SmokeLoader malware family does exactly this in an attempt to bypass defenses: it calls the Windows API function DeleteFile (see code below) to delete its file’s zone identifier stream. You can investigate this for yourself in a SmokeLoader analysis report from JoeSandbox (SHA256: 86533589ed7705b7bb28f85f19e45d9519023bcc53422f33d13b6023bab7ab21).
Alternatively, malware authors can wrap their malware in a container format such as an IMG or ISO file, which does not typically carry zone identifier attributes. Red Canary has a great example in this report.
Anti-Analysis and Sandbox Evasion
Malware may inspect the zone identifier of its own file to detect analysis. Files submitted to an analysis sandbox or handled by a reverse engineer may end up with a different zone identifier than the one the malware author intended. For example, when the malware file is copied into a sandbox, the zone identifier may be erroneously set to 0 when the original value was 3. If the malware detects an anomalous zone identifier, it may refuse to execute correctly in the sandbox or lab environment.
The pseudo-code below demonstrates the logic of how malware may check its file’s zone identifier:
zone_identifier_path = current_file_path + ":Zone.Identifier"

try:
    with open(zone_identifier_path, "r") as f:
        zone_info = f.read()
except FileNotFoundError:
    # No zone identifier stream at all - suspicious for a file "from the Internet"
    TerminateProcess()

# Parse the ZoneId value (e.g., "ZoneId=3") from the stream contents
zone_id = int(zone_info.split("ZoneId=")[1].splitlines()[0])

if zone_id >= 3:
    # File is from the Internet zone (as expected), continue running
    pass
else:
    # File may be running in a sandbox or analysis lab!
    TerminateProcess()
If you are craving more information on this topic, other good resources are here and here.
— Kyle Cucci (d4rksystem)
Book Summary – “Evasive Malware: Understanding Deceptive and Self-Defending Threats”
Since my new book “Evasive Malware: Understanding Deceptive and Self-Defending Threats” pre-order just launched, I wanted to write up a quick summary of the book, including what you’ll learn, the book’s target audience, and a breakdown of each section in the book. Let’s get started!
What is this book about?
“Evasive Malware: Understanding Deceptive and Self-Defending Threats” is a book about the fascinating and terrifying world of malicious software designed to avoid detection. The book is full of practical information, real-world examples, and cutting-edge techniques for discovering, reverse-engineering, and analyzing state-of-the-art malware, specifically malware that uses evasion techniques.
Beginning with foundational knowledge about malware analysis in the context of the Windows OS, you’ll learn about the evasive maneuvers malware uses to determine whether it’s being analyzed and the tricks it employs to avoid detection. You’ll explore the ways malware circumvents security controls, such as network and endpoint defense bypasses, anti-forensics techniques, and data and code obfuscation. At the end of the book, you’ll learn methods and tools to tune your own analysis lab and make it resistant to malware’s evasive techniques.
What will you learn?
Modern malware threats and the ways they avoid detection
Anti-analysis techniques used in malware
How malware bypasses and circumvents security controls
How malware uses victim targeting and profiling techniques
How malware uses anti-forensics and file-less techniques
How to perform malware analysis and reverse engineering on evasive programs
Who is this book for?
This book primarily targets readers who already have at least a basic understanding and skill-set in analyzing malware and reverse-engineering malicious code. This book is not a beginner course in malware analysis, and some prior knowledge in this topic is assumed. But have no fear – the first three chapters of this book consist of a crash-course in malware analysis and code analysis techniques.
Here are some of the practical applications of this book:
Malware Analysts and Researchers – Learn how modern and advanced malware uses evasion techniques to circumvent your malware lab and analysis tools.
Incident Responders and Forensicators – Learn how advanced malware uses techniques like anti-forensics to hide its artifacts on a host. Understanding these techniques will help improve incident response and forensics skills.
Threat Intelligence Analysts – Learn how bespoke, targeted, and cybercrime malware uses evasion techniques to hide and blend into its target environment.
Security Engineers / Security Architects – Learn how malware evades the host and network defenses that you design, engineer, and implement.
Students and Hobbyists – Learn how modern, advanced malware operates. If you read and actually enjoy this book, then you now know that you should pursue a job in malware research 😉
This book consists of five sections (parts), each consisting of three or more chapters. Let’s take a brief look at each of these.
Part 1: The Fundamentals
Part 1 contains the foundational concepts you’ll need to know before digging into the rest of the book. The topics include the fundamentals of how the Windows operating system works, and the basics of malware analysis, covering sandbox and behavioral analysis to static and dynamic code analysis.
Chapters in Part 1:
Chapter 1: Windows Foundational Concepts
Chapter 2: A Crash Course in Malware Triage and Behavioral Analysis
Chapter 3: A Crash Course in Static and Dynamic Code Analysis
What you’ll learn:
What evasive malware is and why malware authors use evasion techniques in their malware.
The fundamentals of Windows OS internals.
A crash course in malware analysis and reverse engineering, covering the basics of malware sandbox analysis and behavioral analysis, and static and dynamic code analysis.
Part 2: Context-Awareness and Sandbox Evasion
Part 2 starts getting into the good stuff: how malware detects sandboxes, virtual machines, and hypervisors, and how it circumvents and disrupts analysis.
Chapters in Part 2:
Chapter 4: Enumerating Operating System Artifacts
Chapter 5: User Environment and Interaction Detection
Chapter 6: Enumerating Hardware and Network Configurations
Chapter 7: Runtime Environment and Virtual Processor Anomalies
Chapter 8: Evading Sandboxes and Disrupting Analysis
What you’ll learn:
How malware detects hypervisors by inspecting operating system artifacts.
How malware detects virtual machines by looking for runtime anomalies.
How malware tries to detect a real end user in order to identify if it’s running in a sandbox.
How malware actively circumvents analysis by exploiting weaknesses in sandboxes or directly interfering or tampering with the analyst’s tooling.
Part 3: Anti-Reversing
Part 3 covers the many techniques malware may use to prevent or impede reverse-engineering of its code, such as complicating code analysis, disrupting debuggers, and causing confusion and misdirection.
Chapters in Part 3:
Chapter 9: Anti-disassembly
Chapter 10: Anti-debugging
Chapter 11: Covert Code Execution and Misdirection
What you’ll learn:
How malware authors implement anti-disassembly techniques and how you can overcome them.
How anti-debugging techniques work, and how to identify these techniques while analyzing malware.
How malware utilizes covert code execution and misdirection techniques to confuse malware analysts and slow down the reversing process.
Part 4: Defense Evasion
Chapters in Part 4:
Chapter 12: Process Injection, Manipulation, and Hooking
Chapter 13: Evading Network and Endpoint Defenses
Chapter 14: An Introduction to Rootkits
Chapter 15: Fileless Malware and Anti-forensics
What you’ll learn:
How malware implements modern process injection and manipulation techniques to circumvent defenses.
How malware actively and passively circumvents and bypasses modern endpoint and network defenses like EDR/XDR.
The basics of rootkits and how they evade defenses.
How malware uses living-off-the-land techniques to remain undetected and blend into the environment.
Anti-forensics techniques and how advanced malware hides from forensics tooling and investigators.
Part 5: Other Topics
Finally, Part 5 covers additional techniques and topics that did not fit in well with the other chapters. This section covers topics like obfuscating malware and malicious behaviors via encoding and encryption, how packers work and how to unpack malware, and how to make your malware analysis lab a bit more resilient to evasive malware.
Chapters in Part 5:
Chapter 16: Encoding and Encryption
Chapter 17: Packers and Unpacking Malware
Chapter 18: Tips for Building an Anti-evasion Analysis Lab
What you’ll learn:
How malware implements obfuscation and encryption to complicate analysis and hide malicious activity, and how to analyze obfuscated code.
How malware uses packers and crypters, and how to analyze packed malware.
How to configure and tune your analysis lab to help streamline analysis of malware that may be detecting your lab environment.
Pre-Order the Book!
If you decide to legally purchase my book (instead of pirating it), it would be much appreciated. I need to buy beer, a new gaming PC, feed my family, you know, important stuff.
How to pre-order:
You can order the book directly from the No Starch Press publisher website. If you order from No Starch, you also can get access to an Early Access version of the book, as well as the finished book!
You can order on Amazon. Sometimes Amazon has deals and this may be cheaper, but you do not get access to the Early Access version. Amazon ships to many places in the world, so this is an advantage.
There are other sites you can order from as well, such as local bookstores. Just Google “Evasive Malware book”.
If you decide to pre-order the Early Access version of my book, I would love your feedback! If you spot technical errors, spelling and grammar errors, or even if you just want to tell me “It’s amazing!” or “It sucks!”, I want to hear your feedback 🙂 Feel free to contact me via Twitter or LinkedIn.
A lot of love for the infosec community went into this book, so I hope you enjoy it! 🙂
Malware Analysis in 5 Minutes: Identifying Evasion and Guardrail Techniques with CAPA
Modern malware has gotten better and better at detecting sandbox and analysis environments, and at evading these environments. Malware can circumvent defenses, sandboxes, and analysts by using various techniques such as VM detection, process injection, and guardrails.
In particular, guardrails are one or more artifacts that malware looks for on the host before executing its payload. These artifacts may be specific registry keys, files, directories, network configurations, etc. If these specific artifacts do not exist on the host, the malware may assume it is running in an analysis lab, or is otherwise not the right target for infection.
One of the most tedious processes when investigating malware that is evading your sandboxes or tooling is figuring out what techniques the malware is using for this, and where in the code this occurs. CAPA can help automate this process.
CAPA is a tool written by the FireEye/Mandiant FLARE team that can be used to quickly triage and assess capabilities of a malware sample.
For this example, I have a sample that will not run in my sandboxes or in my analysis VMs, and I am trying to figure out why. Let’s throw this sample into CAPA:
capa path/to/sample.exe
CAPA provides a nice summary of the potential ATT&CK techniques the malware is using, along with its identified capabilities. This assessment can help in many malware analysis situations, but here the focus is on evasion techniques.
Based on this initial analysis, we can see several possible techniques being used, such as:
Executing anti-VM instructions
Hashing and data encoding (could be used to hide strings)
Checking if a certain file exists (could be used for creating guardrails)
Getting the hostname (could also be used for guardrails)
Multiple process injection techniques
We can get additional information from CAPA by using the verbose mode:
capa path/to/sample.exe -vvv
Now we can focus on a few of these techniques and where they reside in code:
CAPA identified two uses of the CPUID instruction, which can be used to identify a virtual machine environment. We can now throw this sample into a disassembler and locate this code by jumping to the addresses listed in CAPA:
If we wanted to bypass this detection technique, we could NOP out (remove) the CPUID instructions, or modify their return values. More about the CPUID instruction can be seen here and here.
Additionally, CAPA identified the addresses in the binary where process injection behaviors may be occurring:
With this information, along with the offset addresses provided, we can set breakpoints on these addresses or instructions for analysis in a debugger. For more info on these process injection techniques, this write-up is old but still very relevant.
Finally, I suspect this sample is using some sort of guardrails. Guardrails are a technique used by malware to prevent sandbox analysis, hamper manual analysis, evade host defenses, and prevent unnecessary “spreading” of the malware.
As previously identified by CAPA, this sample may be using the system hostname and files/directories as guardrails. It is also likely that it has hardcoded hashes of those guardrail values in order to make it difficult for analysts to spot what the malware is specifically looking for:
CAPA identified that this sample is checking for a specific file at function offset 0x1400012C1, and the hostname at 0x140001020. Let’s inspect the hostname query in the sample in a disassembler. Once Ghidra disassembles this function, this is what is displayed:
In Ghidra, we can see that the sample is calling GetComputerNameA in order to get the hostname of the victim. It then hashes this hostname (CryptCreateHash, CryptHashData) and compares it to a hardcoded hash using memcmp (memory compare).
This instruction is comparing the DAT_target_hash (the hash of the hostname that the malware is expecting) to hashed_domain_name (the actual hostname of the victim). If these hashes do not match, the sample will terminate itself.
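This comparison logic can be sketched in Python, with hashlib standing in for the CryptoAPI calls. The hash algorithm and the target value here are illustrative assumptions; the real sample's algorithm and hardcoded hash would need to be confirmed from the disassembly:

```python
import hashlib
import socket

# Illustrative stand-in for the hardcoded DAT_target_hash in the binary
TARGET_HASH = hashlib.md5(b"TARGET-HOSTNAME").digest()

def guardrail_passes() -> bool:
    # Equivalent of GetComputerNameA: retrieve the victim's hostname
    hostname = socket.gethostname().upper().encode()
    # Equivalent of CryptCreateHash/CryptHashData: hash the hostname
    hashed_hostname = hashlib.md5(hostname).digest()
    # Equivalent of the memcmp: compare against the hardcoded target hash
    return hashed_hostname == TARGET_HASH
```

If the check fails, the sample terminates itself, exactly as the decompiled code shows.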
Since the target hash is hardcoded in the binary and will not be “un-hashed” in memory, we don’t really know what this malware sample is looking for. Our best option here is to bruteforce the hash using a rainbow table or wordlist.
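A wordlist brute-force of such a hardcoded hash is straightforward to sketch. This assumes the sample uses an unsalted MD5 of the hostname, which is an assumption for illustration, not a finding from the sample:

```python
import hashlib

def brute_force_hostname(target_hash: bytes, wordlist):
    """Hash each candidate hostname and compare it to the hardcoded target."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).digest() == target_hash:
            return candidate
    return None

# Example: recover a hostname from a small candidate list
target = hashlib.md5(b"CORP-DC01").digest()
print(brute_force_hostname(target, ["WORKSTATION-1", "CORP-DC01"]))  # CORP-DC01
```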
Or… we can simply bypass this hash checking functionality altogether. With this information from CAPA, we can now patch the binary (in a disassembler or in a debugger) in order to completely bypass these VM detection and guardrail techniques, and allow our sample to run in our VM. We can do this by NOP’ing out instructions, modifying the function return values, or skipping the code altogether by jumping over the suspect code.
I was recently investigating a memory dump from a host infected with BlackEnergy3. BlackEnergy3, a modified version of the original BlackEnergy malware family, was used in the attacks on the Ukrainian power grid in 2015. BlackEnergy3 is similar to its version 2 counterpart, but has been modified with additional modules that serve multiple purposes such as credential extraction, keystroke logging, and destructive capabilities.
This post is a sort of a step-by-step methodology for investigating BlackEnergy3 infections, and more generally, rootkit behavior in memory. I will be using Volatility as my primary tool for this investigation.
Edit: One reader asked which sample I used for this investigation. This write-up is from a memory image provided by SANS and was included with the Advanced Memory Forensics and Threat Detection course. (This course is highly recommended if you are interested in memory forensics and hunting advanced malware!). I don’t know exactly which sample was used on the infected system, but I found a possible similar sample on VirusTotal here.
Investigating Userland
I always start a memory forensics investigation by inspecting the processes that were running on the system before the memory was extracted. The Volatility “Pstree” command provides an output of processes in a nice tree-based form:
vol.py -f memdump.img --profile=Win7SP1x64 pstree
What we should be looking for here are strange process parent/child relationships, orphaned processes (processes with no parent), and processes that seem out of place, such as strange or misspelled process names. We see no clear evidence of any of this type of activity:
Let’s dig a bit deeper. One of my go-to Volatility modules for quick wins is “malfind”. “Malfind” enumerates the Virtual Address Descriptor (VAD) tables for each process running on the system and attempts to find anomalies and possible evidence of code injection.
After running “malfind”, we can see an anomaly right off the bat – possible code injection into “svchost.exe” (PID 1468) process:
We can see above that the memory permission for this region is “PAGE_EXECUTE_READWRITE”, which means that this area of memory possibly contains executable code. We can also see the “MZ” header associated with Windows PE files, so this is highly likely malicious code injection. For closer inspection, let’s dump out this region of memory into a file using “vaddump”:
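The vaddump invocation might look like the following, using the PID and base address from the malfind output (Volatility 2 syntax; the output directory is a placeholder):

```shell
vol.py -f memdump.img --profile=Win7SP1x64 vaddump -p 1468 -b 0x1a0000 -D dump/
```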
We can now inspect this area of memory by simply running the “strings” command on the dumped memory region we are interested in (“0x1a0000”):
strings -n8 svchost.exe.7e4aa060.0x00000000001a0000-0x00000000001affff.dmp | less
There are several interesting strings here. There is a reference to “aPLib”, which is a library for compressing and packing executable files. This means that the injected malicious code was likely packed, which is definitely a red flag and out of place in a process such as “svchost.exe”. Also, there are references to a user agent string, references to DLL files and a DAT file, and several references to possible API function calls.
A quick Google search shows that many of these strings are actually part of the Command & Control functionality of BlackEnergy3:
DownloadFile – Retrieves a file from the Internet.
RkLoadKernelImage – Used to load code into kernel memory address space.
RkLoadKernelObject – Used to load a new driver module into kernel memory from userland memory.
SrvAddRequestBinaryData – Used to append binary data to the C2 HTTP POST data (for C2 communication and payload download).
Srv* – These commands are used for C2 communication.
“main.dll” – The internal name of BlackEnergy’s primary DLL file.
The presence of these kernel-related functions signals that we are dealing with a rootkit.
Hunting for Rootkits
After our brief analysis of the injected code into svchost.exe, we know we are dealing with some sort of rootkit behavior. Rootkits typically will load a kernel module or driver into kernel memory space. Let’s hunt for this.
“Modscan” is able to scan kernel memory for loaded drivers and modules, and is the perfect command to use here:
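Assuming the same memory image and profile as before, the invocation is simply:

```shell
vol.py -f memdump.img --profile=Win7SP1x64 modscan
```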
There are a few potentially suspicious modules listed here, but one in particular stands out: “adp94xx.sys”. I was able to determine that this module is out of place by Googling the other known-good, benign modules. The only way to know what is not normal is to know what is normal – so it’s good to do some Googling or have a list of normal drivers handy 😉 Let’s dump this kernel driver from memory, using the base address listed above:
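In Volatility 2, the “moddump” module can dump a driver by its base address (taken from the modscan output; the output directory is a placeholder):

```shell
vol.py -f memdump.img --profile=Win7SP1x64 moddump -b 0xfffff88003fbf000 -D dump/
```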
Once again, I use the strings command to run a quick inspection of this file:
strings driver.fffff88003fbf000.sys | less
We can see several kernel function calls here. Running the same strings command but for Wide strings (16-bit little-endian) encoding, we can see a bit more:
strings -e l driver.fffff88003fbf000.sys | less
A few items stick out here. The most obvious is that this driver file appears to be published by Microsoft and is called the “AMD IDE driver”. In addition, we can see several Windows API functions and privileges. One example is “SeImpersonatePrivilege”, a Windows privilege that allows a process to impersonate another user’s access token, and which is abused by some rootkits and privilege escalation exploits. This privilege is just one clue into the functionality of this driver. Finally, we see a reference to “svchost.exe”, which is what we saw earlier in malfind!
A quick Google search for “AMD IDE driver” and “adp94xx.sys” reveals a few discrepancies. First, “AMD IDE driver” is a real driver name, but it does not relate to the file name “adp94xx.sys”. Second, “adp94xx.sys” could be a legitimate driver name, but it is related to Adaptec, not to AMD IDE drivers. This discrepancy shows that hunting for kernel rootkits is a lot about knowing what is and what is not normal, and knowing how to Google 😉
There are a few imports we should focus on here. One function of interest is “KeStackAttachProcess”. According to Microsoft, KeStackAttachProcess attaches the current thread to the address space of a target process. This functionality can be used to run code from the kernel-module rootkit in the context of a userland process, which essentially serves as a very stealthy way to run code.
As a quick tip, we can also extract the imports in a format that can be imported into IDA for later analysis:
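One way to do this is with Volatility 2’s “impscan” module, pointed at the driver’s base address (the exact options may vary with your Volatility version):

```shell
vol.py -f memdump.img --profile=Win7SP1x64 impscan -b 0xfffff88003fbf000
```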
Later, we can look at this module in IDA or another disassembler in order to better understand it. This is out of the scope of this post, but this is something that should be done during an investigation.
Wrapping Up
From the investigation above, we can make several inferences (or at least, educated guesses) from this data.
Malicious code was injected into the “svchost.exe” process. Once executed, this code likely downloads an additional module from the Internet (using the DownloadFile function).
The malware may have executed its rootkit behavior by leveraging its “RkLoadKernelObject” function, which allows code execution in the context of the kernel.
Once in kernel memory, the rootkit is able to hide on the system and inject additional malicious code into other userland processes, further embedding itself in the system in a stealthy way.
This of course is not a complete investigation of BlackEnergy3, but shows what can be done to quickly triage rootkit behaviors. You can likely see that hunting rootkits, and memory hunting in general, takes a combined approach of cross-referencing the output of multiple tools, Googling things, and understanding what is and what is not normal Windows behavior.
Bonus: For sticking with me this long, you may have noticed 2 “iexplore” processes in the “pstree” output:
These are actually the product of a special module that BlackEnergy3 is able to deploy called the “Ibank” module. This module injects itself into Internet Explorer processes and is able to steal banking credentials from its victims 🙂
As always, thanks for reading.
— @d4rksystem
Hiding Virtual Machines from Malware – Introducing VMwareCloak & VBoxCloak
Many malware families are still using fairly trivial techniques for the detection of virtual machine environments. Once malware detects that it may be running in a virtual machine, it may terminate itself, or worse, execute code that will cause a diversion and potentially lead the malware analyst down the wrong paths :O
Malware often uses the following techniques for virtual machine detection:
Registry Enumeration
Registry enumeration is one of the most common techniques that evasive malware may use to determine if it is running in a VM. Registry keys malware may look for include hardware information, system BIOS information, and any other keys and values that contain references to hypervisors such as VMware Workstation and VirtualBox. Many of these registry keys can be renamed or removed without heavily affecting the performance or usability of the VM!
File & Directory Enumeration
Malware may enumerate files and directories on the system to get an understanding of the environment it is running in. Malware may look for files and directories that reference common hypervisors, such as “VMware” or “VBox” directories under “C:\Program Files”. Malware may also enumerate the “C:\Windows” directory, typically looking for hypervisor-related drivers and system files. An interesting fact is that many of these files (even system and driver files!) can be removed or renamed without affecting the VM, since they are loaded into memory and not often accessed from disk!
Process Enumeration
Finally, malware often enumerates the running processes on the system to determine if any hypervisor-related processes are running. Typically, hypervisors such as VirtualBox and VMware have processes running that enable “helper” functionality such as drag-and-drop, clipboard sharing, and shared drives. These processes are often not required for the general functionality of the VM, so they can be safely killed in order to better hide the VM from malware.
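The process-enumeration check itself is simple to model. The sketch below compares a list of running process names against a few well-known hypervisor helper processes (the list is illustrative, not exhaustive):

```python
# Hypervisor "helper" processes malware commonly looks for (illustrative list)
HYPERVISOR_PROCESSES = {"vboxservice.exe", "vboxtray.exe", "vmtoolsd.exe", "vmwaretray.exe"}

def looks_like_vm(running_processes) -> bool:
    # Case-insensitive match against known hypervisor process names
    return any(name.lower() in HYPERVISOR_PROCESSES for name in running_processes)

print(looks_like_vm(["explorer.exe", "VBoxTray.exe"]))  # True
```

Killing or renaming those helper processes defeats this entire class of check, which is exactly what the tools below automate.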
Because these detection techniques are fairly trivial, we as malware analysts can also use trivial methods to bypass them! I wrote VMwareCloak (for VMware Workstation) and VBoxCloak (for VirtualBox) for just this reason. These tools are PowerShell scripts designed to sanitize your Windows sandbox VMs. The scripts kill processes and remove or rename registry keys, files, and directories that may lead malware to believe it is running in a virtualized environment.
To run the scripts, simply execute your chosen script as an Admin on your Windows VM:
If all goes well, your VM will be sanitized and the evasive malware may now run as if it was not in a VM! (I have tested this script with several malware families. However, these scripts will not work for all malware, especially more advanced variants that are, for example, using hardware detection or timing-based detection techniques.)
A bit more information can be found in my writeup here:
Enjoy! Feel free to yell at me when you inevitably find bugs in the script 😉
Many malware families still use simple evasion techniques for detecting virtual machine environments and malware analysis sandboxes. These simple checks enumerate artifacts on the host such as processes, certain files and directories, specific drivers and hardware configurations, and registry keys that may give away the presence of a hypervisor. If a virtual machine is detected, the malware may kill itself or perform other evasive actions.
Did you know that many of these simple checks can be completely bypassed by slightly modifying the analysis environment before running the malware? I wrote a quick PowerShell script to make these modifications quickly and automagically. Note: This script only supports VirtualBox so far, but will support VMware in the near future.
The script is very simple. Give it one of several parameters and it will get to work cleaning up your Windows VirtualBox VM and priming it for malware analysis. The changes it makes are as follows:
Renames several registry keys that malware typically uses for VirtualBox detection.
Kills VirtualBox processes (VBoxService and VBoxTray).
Deletes VirtualBox driver files.
Deletes or renames VirtualBox supporting files in System32 directory.
One popular question I get a lot is: “Won’t making these types of changes, especially to driver files and processes, break or crash my VM?”
Answer: No! The file modifications the script makes are only on the disk. VirtualBox loads these files into memory anyway, so we can freely modify file and directory names without affecting the VM too much. I say “too much” because your VM will likely slow down a bit after these changes are made (especially after terminating VBox processes) and it won’t be as user friendly. The script, for example, will break drag/drop, clipboard, and shared folder settings, but this is a side effect of making your VM more difficult to detect. If you really want to be hardcore reversing evasive malware, you wouldn’t want these features enabled anyway 😉
To run, just invoke the PowerShell script like this:
VBoxCloak.ps1 -all
This command will make all configuration changes to the virtual guest system. We can see this in the screenshot below:
I tested this script with a few evasive malware samples and it seems to work well, on many occasions. Obviously, it’s not perfect and will not evade all malware anti-analysis checks, but it is a good start when analyzing an evasive sample.
Once again, the script can be downloaded from: https://github.com/d4rksystem/VBoxCloak
Enjoy! Feel free to yell at me when you inevitably find bugs in the script 🙂