Right In The Shadows

This is a story about one of the tasks that we prepared for the CTFZone qualifying stage, which was held at the end of November. You can read about the qualification preparation process here.

You start with two files: decrypt_flag.py and ntfs_volume.raw. Let's take a look at the script. It opens a file named key.bin and, in a loop, takes a 34-byte binary string from each offset inside that file, which is then fed to the PBKDF2 function. Each derived key is used as an XOR key to decrypt an encrypted string embedded in the code. If the MD5 hash of the decrypted data matches a predefined value, the script uses that data to generate and print the flag.
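Reconstructed from this description, the script's logic looks roughly like the following sketch; the salt, iteration count, digest, ciphertext and expected MD5 hash below are placeholders, not the real values from the task.

#!/usr/bin/env python3
# Rough sketch of decrypt_flag.py's logic; all constants are placeholders.
import hashlib

ENCRYPTED = bytes.fromhex('00' * 32)        # placeholder for the embedded ciphertext
EXPECTED_MD5 = '0' * 32                     # placeholder for the expected hash

data = open('key.bin', 'rb').read()
for offset in range(len(data) - 34 + 1):
    candidate = data[offset:offset + 34]    # 34-byte window taken at each offset
    key = hashlib.pbkdf2_hmac('sha256', candidate, b'salt', 100000,
                              dklen=len(ENCRYPTED))
    decrypted = bytes(a ^ b for a, b in zip(ENCRYPTED, key))
    if hashlib.md5(decrypted).hexdigest() == EXPECTED_MD5:
        print('flag material:', decrypted)
        break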

So you need to find the key.bin file. Simply brute-forcing every offset inside the image file (ntfs_volume.raw) is not an option, since the key search would be far too slow. The rules do not forbid it, but you certainly would not finish before the end of the CTF.

The image file contains an MBR partition table with a single partition. The partition starts at sector 2048 (512-byte sectors) and holds an NTFS file system, but the key.bin file is not there:

$ fls -o 2048 -r -p ntfs_volume.raw | grep -F key.bin | wc -l
0

NTFS stores file names in UTF-16LE. Let's search the image for the name in that encoding!
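A minimal sketch of such a search (any hex editor with a UTF-16LE search would do just as well):

import mmap

# Scan the raw image for the file name encoded in UTF-16LE.
needle = 'key.bin'.encode('utf-16-le')
with open('ntfs_volume.raw', 'rb') as img, \
        mmap.mmap(img.fileno(), 0, access=mmap.ACCESS_READ) as data:
    pos = data.find(needle)
    while pos != -1:
        print(hex(pos))
        pos = data.find(needle, pos + 1)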


Search results for file records

Having studied the search results, we focus on file records that begin with the FILE signature [1]. Here is the only such entry:


Found Record

The goal is already close! We have a file record, but we need the data. In NTFS, a file's data is stored in the $DATA attribute, which can be resident or non-resident [2]. For the record we found, this attribute starts at offset 0x3AADD00 and indicates non-resident data (meaning the data is stored outside the file record).

So where exactly is the data of this file? To answer that, we need to decode the so-called mapping pairs, or data runs ("run length / run offset" pairs) [3]. The data runs of our file are as follows (note the offset 0x3AADD40): 22 53 01 A0 4E 21 05 31 C1 11 38 30 00. Or, split into individual runs (a small decoding sketch follows the list):

1.	22 53 01 A0 4E
2.	21 05 31 C1
3.	11 38 30 00
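For reference, here is a small decoding sketch (not part of the task's tooling): the low nibble of the header byte gives the size of the length field, the high nibble gives the size of the signed offset field, and each offset is a delta relative to the previous fragment's starting cluster.

def decode_data_runs(raw):
    # Returns a list of (starting cluster, length in clusters) tuples.
    runs, pos, lcn = [], 0, 0
    while pos < len(raw) and raw[pos] != 0x00:      # 0x00 terminates the list
        header = raw[pos]
        len_size, off_size = header & 0x0F, header >> 4
        pos += 1
        length = int.from_bytes(raw[pos:pos + len_size], 'little')
        pos += len_size
        delta = int.from_bytes(raw[pos:pos + off_size], 'little', signed=True)
        pos += off_size
        lcn += delta                                # offsets are relative deltas
        runs.append((lcn, length))
    return runs

print(decode_data_runs(bytes.fromhex('225301A04E210531C111383000')))
# [(20128, 339), (4049, 5), (4097, 56)]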

The file consists of three fragments; the first one is 339 clusters long and starts at cluster #20128. That fragment is already quite large for our PBKDF2-based script. As the file system header indicates, the cluster size is 4096 bytes:

$ fsstat -o 2048 ntfs_volume.raw | grep 'Cluster Size'
Cluster Size: 4096

Let's look at the data at this offset (in bytes):
2048 * 512 + 20128 * 4096 = 83492864. We extract a decent chunk of data (for example, 128 bytes) starting here, write it to a new file named key.bin, run the script ... and nothing. It fails.
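For reference, the extraction step can be done with any hex editor or with a couple of lines like these (the 128-byte size is arbitrary; it just needs to cover the script's 34-byte sliding window):

offset = 2048 * 512 + 20128 * 4096          # partition start + first fragment
with open('ntfs_volume.raw', 'rb') as img:
    img.seek(offset)
    chunk = img.read(128)
with open('key.bin', 'wb') as out:
    out.write(chunk)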

Perhaps the file was stored not in the current file system but in a previous one (before the volume was reformatted); remember that we did not find a record for a deleted file with this name either. What was the cluster size back then? Let's search for file system headers with the NTFS signature [4]. Maybe we will get lucky and find a header left over from the previous formatting.


Search results for the file system header

The first and last hits belong to the current file system, but the file system headers between them appear to come from the previous formatting. And they record a different cluster size!


The file system header from the previous version.

The sector size is stored at offset 0x4554800B: 00 02, or 512. The number of sectors per cluster is stored at offset 0x4554800D: F7, or 247.

That would give a cluster size (in bytes) of 512 * 247 = 126464. Nonsense! According to the NTFS parser [5], such a value is treated as signed and handled in a special way: the real cluster size (in sectors) is 1 << -(-9) = 512, or, in bytes, 512 * 512 = 262144. Now that sounds more believable.
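In code, this sign handling (sketched here rather than quoted from the parser referenced in [5]) looks roughly like this:

def cluster_size_bytes(bytes_per_sector, sectors_per_cluster_raw):
    if sectors_per_cluster_raw > 0x80:
        # The byte is treated as signed: cluster size = 2 ** (-value) sectors.
        signed = sectors_per_cluster_raw - 0x100    # 0xF7 -> -9
        sectors_per_cluster = 1 << -signed          # 1 << 9 = 512
    else:
        sectors_per_cluster = sectors_per_cluster_raw
    return bytes_per_sector * sectors_per_cluster

print(cluster_size_bytes(512, 0xF7))                # 262144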

Now the data should start at this offset (in bytes):
2048 * 512 + 20128 * 262144 = 5277483008. Let's repeat the same trick with the data stored there ... Another failure! What is wrong? This is a CTF, so literally anything can be "not quite right".

The task we are struggling with is called In the Shadows. Perhaps it has something to do with volume shadow copies. So, we are dealing with a file from a file system that previously existed on this volume. Unfortunately, we cannot simply mount its shadow copy, but we do know the exact offset at which the data begins! It is 5277483008, or, relative to the partition start, 5277483008 - 2048 * 512 = 5276434432.

According to the VSS format specification [6], redirected data blocks are described by a block descriptor structure that contains a 64-bit field with the original offset (inside the volume) and a 64-bit field with the target block offset (inside the volume). Let's search for 5276434432 as a 64-bit little-endian number.
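The byte pattern to search for can be produced, for example, like this:

import struct

# The within-volume offset of our data, packed as a 64-bit little-endian value.
print(struct.pack('<Q', 5276434432).hex(' '))       # 00 00 80 3a 01 00 00 00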

The search returns only two hits, and only one of them is located at an even offset.


Found block descriptor

The target block offset is 00 00 9B 03 00 00 00 00, or 60489728. The final offset is 60489728 + 2048 * 512 = 61538304. We export some data from this offset into a new file named key.bin, and ...

$ ./decrypt_flag.py
ctfzone{my_c0ngr4t5_t0_u,w311_d0n3_31337}

Done!

References


  1. https://flatcap.org/linux-ntfs/ntfs/concepts/file_record.html
  2. https://flatcap.org/linux-ntfs/ntfs/attributes/data.html
  3. https://flatcap.org/linux-ntfs/ntfs/concepts/data_runs.html
  4. https://flatcap.org/linux-ntfs/ntfs/files/boot.html
  5. https://github.com/msuhanov/dfir_ntfs/blob/94bb46d6600153071b0c3c507ef37c42ad62110d/dfir_ntfs/BootSector.py#L58
  6. https://github.com/libyal/libvshadow/blob/master/documentation/Volume%20Shadow%20Snapshot%20(VSS)%20format.asciidoc#431-block-descriptor


