ElectroMan,
Please explain how you did the NAND dump and why the first 0x08000000 bytes seem all good while the rest seems to contain raw NAND bytes plus OOB.
Whatever "filesystem" they are using seems to place 16 bytes of NAND management data at the beginning of each block after block 0 (possibly data for the previous block?). I did not look too closely at it and just discarded it, as it was corrupting the extracted firmware sections.
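For anyone following along, the stripping step described here boils down to something like this minimal sketch (layout assumed from the post: 16 bytes of metadata at the start of every 16 KiB sector after the first; not an exact copy of the actual Perl script):

```python
SECTOR = 0x4000  # 16384-byte NAND sector
META = 16        # assumed metadata bytes at the start of each sector

def strip_metadata(raw: bytes) -> bytes:
    """Drop the 16-byte header from every sector except the first."""
    out = bytearray()
    for off in range(0, len(raw), SECTOR):
        sector = raw[off:off + SECTOR]
        if off == 0:
            out += sector         # sector 0 carries no header
        else:
            out += sector[META:]  # discard the 16 metadata bytes
    return bytes(out)
```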
Do you have other specific questions about the process? I explained the methodology in the first post and provided the script I used here:
https://www.eevblog.com/forum/testgear/rs-rtb2004-snooping/msg3242084/#msg3242084
I was basically emulating a crude NAND controller driver via memory reads/writes through OpenOCD to get the NAND contents.
The image that was extracted seems to have all the software (and a bunch of different versions), but if there are errors in the later memory regions it would be difficult to tell for certain what should be there. I double-checked my binary math a few times, and as all the important data seemed to be there I assumed some of the garbage later in the file was an artifact of however they were flashing these from the factory.
Note that the apps seem to use about 25-30 MB of the 512 MB flash. A lot of the other content is older firmware versions (the scope came with a version newer than most of those on the flash). This leads me to believe the image they use to flash these devices has a lot of stuff left over from earlier testing stages.
It is, of course, also possible there's something wrong with my methodology and my binary math fell apart. A second set of eyes on that would be welcome.
Peter's txt contains the answer: there is an FTL (which is natural, since this is NAND), so in general you can't just strip and ignore the OOB data. It's lucky (and a sign of R&S choosing high-quality components) that simple stripping worked on the first part of the data without hitting a bad/remapped block. Btw, what about ECC? Is it done automatically by the Cyclone's NAND controller?
Maybe I used the wrong term. I don't think those stripped 16 bytes are really OOB with sector recovery info, etc. Maybe more like a checksum, framing, etc... 16 bytes seems too few to serve any meaningful purpose.
Could they be ECC data?
Well, as I was saying, they sure are OOB (out of band), but they don't seem to be ECC, as there are only 16 bytes for each 16384-byte sector. I think that is too few bytes for any recovery or error signaling. But, for sure, I'm no expert on this. And, most of all, I think 4 or more of the bytes are always the same... so it is even less useful data.
Checked the Cyclone docs and ElectronMan's script: the command used for reading (MAP01) handles the ECC internally, and the NAND spare area is not included in the readout at all. Those 16 bytes should be FTL metadata. PeDre's txt contains "Sector Size: 16384" and "User Sector Size: 16368" - exactly a 16-byte difference. The FTL block map dump (the big part of the txt) consists of small entries that fit into 16 bytes easily. I don't have a dump to see what those OOB bytes look like - could someone post an example?
That txt also explains why simple stripping works (for now): all "bad" counters (bad blocks, sectors, ECC errors, relocs etc) are 0.
Here I shared a very rough map of the whole NAND with a 32 kB line width (heavily resized).
One can see the patterns, although the 16 bytes are completely obscured by the resizing.
I'll see if there is an anonymized extract block that I could share.
Doing it methodically:
The first 0x08000000 bytes (128 MB) are perfect. All the bytes are in the right place and we can see all the previous FW versions fully stored there. I wonder when they start writing on top of each other, since the 128 MB is almost entirely filled up.
The problem arises after 0x08000000, as the dump starts having this format:
- a macro-block of 8 sectors of 0x4000 bytes each (with the first 16 bytes of each being OOB), except the last sector, which has no 16-byte OOB of its own.
This repeats up to the end of the file.
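Taken literally, that layout could be unpacked with something like the following sketch (a hypothetical helper using the sizes described above; not verified against a raw dump):

```python
MACRO = 0x20000   # 128 KiB macro-block = 8 sectors
SECTOR = 0x4000   # 16 KiB sector
OOB = 16          # OOB bytes at the start of each sector (assumed)

def split_macro_block(mb: bytes):
    """Separate payload and OOB from one 128 KiB macro-block, where the
    first seven sectors each start with 16 OOB bytes and the last has none."""
    assert len(mb) == MACRO
    payload, oob = bytearray(), bytearray()
    for i in range(8):
        sec = mb[i * SECTOR:(i + 1) * SECTOR]
        if i < 7:
            oob += sec[:OOB]
            payload += sec[OOB:]
        else:
            payload += sec  # last sector carries no OOB of its own
    return bytes(payload), bytes(oob)
```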
I've just reread your dumping explanation and I'm sure that your "The script strips out the 16 byte block header that is on blocks 1 - 4095." was not correctly executed on the whole NAND dump. So you did strip out the OOB data on the first 128 MB but not on the rest (at least not fully, as one of the 8 consecutive blocks has it stripped out)?
I extracted the remaining OOB stuff, concatenated its contents and did some sanitization of certain patterns that seem to be flashed previously on the NAND (before writing newer files).
The remaining information seems to be some settings files and huge log files. I think even the calibration logging is there. But, I have a feeling something may be missing...
I'm not very good at interpreting Perl, so some questions remain, as I can't fully verify your scripts:
- The last 384 MB must have a stripping/extraction error. Can you please verify?
- If you extracted the full 512 MB we should have 536,870,912 bytes. Instead we have 512 MB minus 4095 x 16 bytes.
It looks like the first 0x2000 blocks (of 0x4000 bytes) had their OOB data correctly stripped but the rest not. And if so, and you tried to also strip them in the rest of the data, then you stripped some good data (which was not the OOB portions).
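For reference, the byte counts involved are trivial to check:

```python
# Sanity-check the sizes discussed above.
TOTAL = 512 * 1024 * 1024   # full 512 MB NAND
HEADERS = 4095 * 16         # 16-byte headers stripped from blocks 1-4095

print(TOTAL)             # 536870912
print(TOTAL - HEADERS)   # 536805392 bytes expected after stripping
```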
Here is the output from a SCPI command to the flash memory. But I do not know exactly what it means. Maybe it helps a little bit.
I'll try to have a look, Peter.
I recall, from working with a mostly-complete unaltered dump earlier, that the 16-byte block headers may have ended later in the flash. I am taking a raw image now.
My scope indicates it has ~380M of free internal flash memory, so I don't think there's anything major missing. I have noticed a few things like screenshot thumbnail BMP files in the last 3/4 of the flash, alignment logs, and possibly mask files.
The NAND controller does do ECC transparently when reading in "Main area" mode, which is what I used. Main mode does not include the Spare area.
From the CycloneV HPS technical reference manual:
Main Area Transfer Mode
In main area transfer mode, when ECC is enabled, the NAND flash controller inserts ECC check bits in
the data stream on writes and strips ECC check bits on reads. Software does not need to manage the ECC
sectors when writing a page. ECC checking is performed by the flash controller, so software simply
transfers the data.
If ECC is turned off, the NAND flash controller does not read or write ECC check bits.
Looks like the 08000000+ area uses some different data/metadata layout, so the reader script strips metadata incorrectly. The dump by tv84 looks like it has some data stripped at wrong positions (the 16 "metadata" bytes sit at +0 in each 16K sector of the first 128K block, but the same-looking data is shifted up by 16 bytes in the next block, then by 32 bytes, ...).
It would be better to turn off that stripping entirely, read everything as-is, and then decide what to strip for each area. Or even use the MonDeb "cr" command to read NAND into RAM (so a built-in read function that knows the correct data format is used), then dump it from there either via JTAG (this can be done in huge portions) or with the MonDeb "du" command.
Yeah, I was considering using the MonDeb interface to do just that. I had tried doing DMA transfers from flash via script but could never get them to work.
I have a full raw JTAG dump happening now at home via the flash controller. Problem is that long runs of 0xFF seem to cause JTAG timeouts and errors, so getting a good dump is a challenge. If it isn't complete when I get home I'll try using Mondeb to transfer it into RAM so I can read it in larger chunks.
Agree with both. With a successful raw dump, we should be able to parse in no time.
Electro, remember we don't need the 1st 128 MB again. You can restart at 0x08000000.
By doing a bitmap of the whole 512 MB it's very easy to see those 16-byte "junk" zones and do a visual integrity check.
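A rough sketch of that bitmap trick, using only the stdlib and the portable PGM format (row width and downsampling step here are arbitrary illustrative choices, not the 32 kB width used for the earlier map):

```python
def dump_to_pgm(data: bytes, path: str, width: int = 512, step: int = 64) -> None:
    """Render a flash dump as a grayscale PGM so repeating metadata
    patterns and 0xFF (erased) regions stand out visually."""
    sampled = data[::step]                 # keep every step-th byte
    height = len(sampled) // width
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))  # PGM header
        f.write(sampled[:width * height])               # pixel data
```

Any image viewer that understands PGM (or a quick convert to PNG) then makes the stripe patterns visible at a glance.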
I just started pulling the entire 512M so I'd have it for disaster recovery. It doesn't matter much anyway, as the first 128M transfers pretty quickly. It is the parts of flash that have a lot of 0xFF's (empty) that cause all kinds of JTAG errors and restarts. The first 128M takes about an hour or two, and the remainder ends up taking about 18hrs or so.
Peter, show us the command that you used for this.
We can easily see here:
"Block Size: 131072",
"Sector Size: 16384",
"User Sector Size: 16368",
So, we definitely should have blocks of 128 kB = 8 sectors x 16 kB (but each sector has 16 bytes that are not user data).
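Those figures are self-consistent; a quick check:

```python
# Figures from PeDre's PFSTATUS output.
BLOCK_SIZE = 131072        # 128 kB erase block
SECTOR_SIZE = 16384        # 16 kB sector
USER_SECTOR_SIZE = 16368   # usable bytes per sector

print(BLOCK_SIZE // SECTOR_SIZE)       # sectors per block: 8
print(SECTOR_SIZE - USER_SECTOR_SIZE)  # metadata per sector: 16 bytes
print(8 * USER_SECTOR_SIZE)            # user bytes per block: 130944
```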
The command to show those FTL stats should be DIAGNOSTIC:SERVICE:PFSTATUS
(more or less) complete SCPI command list:
.FWU parsing (after decryption):
00000000 Header Size: 0400 [00000000-000003FF] FileSize OK
00000002 Section 1 Size: 0017792C [00000400-00177D2B]
00000006 Section 2 Size: 014903A4 [00177D2C-016080CF]
0000000A Section 1 CRC16: 2AE2 CRC OK
0000000C Section 2 CRC16: 50A4 CRC OK
0000000E ????: 0x10330000
0000001E Model: RTB2004
0000002E FW Version: 02.202
0000003E Release Date: 2018-11-06
0000004E ????: 6731.19395
0000005E Compilation: Build 522 built on 2018-11-06 12:37:30 by MaG? [02.202 - HCL: 03.300 - MesOS: 04.300] with GCC 5.3.0
0000015E (???) Hash Type: 2
00000198 Build: 522
000001AA Section 1 SHA256: 898ADDB2A111DBE0C45BC0EA363D4CD5 HASH OK
000001CA Section 2 SHA256: 7208D30AF3FB85125AD5082BC46230FB HASH OK
000003FE Header CRC16: 25E7 CRC OK
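The field listing above suggests a parser along these lines (hypothetical sketch: little-endian order, field widths, and the string-field boundaries are my guesses from the printed offsets, not a confirmed spec):

```python
import struct

def parse_fwu_header(hdr: bytes) -> dict:
    """Pull the fields listed above out of a decrypted .FWU header.
    Offsets from the posted dump; endianness/widths are assumptions."""
    cstr = lambda b: b.split(b"\0")[0].decode(errors="replace")
    return {
        "header_size":  struct.unpack_from("<H", hdr, 0x00)[0],
        "sec1_size":    struct.unpack_from("<I", hdr, 0x02)[0],
        "sec2_size":    struct.unpack_from("<I", hdr, 0x06)[0],
        "sec1_crc16":   struct.unpack_from("<H", hdr, 0x0A)[0],
        "sec2_crc16":   struct.unpack_from("<H", hdr, 0x0C)[0],
        "material_no":  struct.unpack_from("<I", hdr, 0x0E)[0],
        "model":        cstr(hdr[0x1E:0x2E]),
        "fw_version":   cstr(hdr[0x2E:0x3E]),
        "release_date": cstr(hdr[0x3E:0x4E]),
    }
```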
Good info!
The 0x10330000 is known as the "Material Number." The 6731.19395 appears in some strings associated with a date and time, but I'm not sure what it represents.
19395 2018-11-06 12:37:30
6731(6730:6731;2018/11/06 12:28:21)
19395(19399;2018/10/24 15:01:02)
19383(19399;2018/10/18 10:30:34)
19377(19399;2018/10/17 11:30:25)
How did you decrypt the FWU? I tried with the AES key posted in this thread, but it failed.
It's AES-256, not AES-128. You have to use all 32 bytes of the key.
Oh. I didn't realize it was 0-padded 256-bit AES. It even worked the last time round, I just didn't notice that it succeeded. Oops.
E: welp, didn't work. just didn't fail, but the data's still encrypted. hmm.
Make sure you're using AES-256CBC with an IV of 0. Openssl can be used.
If all you want from decrypted FWU is to get the main part for study - just find the ELF header signature (7F "ELF") and rip from there till end of file, you'll get a correct ELF.
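That rip-from-the-magic approach is a one-liner in practice (trivial sketch of the tip above):

```python
def rip_elf(data: bytes) -> bytes:
    """Return everything from the first ELF magic (7F 'ELF') to EOF."""
    idx = data.find(b"\x7fELF")
    if idx < 0:
        raise ValueError("no ELF header found")
    return data[idx:]
```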
Using this:
openssl aes-256-cbc -K 43C6B3E57510A3C5547AA4DF9528B783 -iv 0 -in RTB2004.FWU -out RTB2004.FWU.dec
The resulting .dec file is either compressed or still encrypted (its bytes are uniformly distributed).