Author Topic: Random high write times on SD card with STM32 and FileX  (Read 2303 times)


Offline lightshadeTopic starter

  • Newbie
  • Posts: 1
  • Country: sg
Random high write times on SD card with STM32 and FileX
« on: February 02, 2025, 02:53:52 am »
I am building a datalogger application on a custom piece of hardware using an STM32 and the FileX middleware. While datalogging, it occasionally experiences a spike in write and flush times that seems random to me. So I made a simple application just to test it.

The setup: the custom board uses an STM32H563 MCU connected directly to the SD card slot. I generated my code with STM32CubeMX using FileX in standalone mode, so I am not using LevelX or ThreadX (from Azure RTOS). The SDMMC peripheral runs at a 25 MHz clock with an SDMMC clock divide factor of 0, which gives the maximum SD default speed according to the reference manual. The SD cards are formatted FAT32 with a 4096-byte allocation unit (cluster) size.

For debugging and testing, I continuously log a 32768-byte array that lives in the program. I have given the media a 102400-byte working buffer, aligned to 32 bytes. The code is simple: with the media and file open, it repeatedly writes the 32 KB of data and immediately flushes it. I time the write and flush operations with a timer and log the results into an array that I inspect when I pause the debugger. All tests run in debug mode with an ST-Link V2 clone.
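The measurement loop looks roughly like the sketch below. The actual FileX calls (fx_file_write/fx_media_flush on the real board) and the hardware timer are replaced here by a stub, and all names are mine, so this only illustrates the timing-capture logic, not the actual driver:

```c
#include <assert.h>
#include <stdint.h>

#define LOG_CHUNK   32768u   /* bytes written per iteration, as in the test */
#define MAX_SAMPLES 256u

/* Hypothetical stand-ins: on the real board this would be fx_file_write()
 * followed by fx_media_flush(), timed with a hardware timer. Here a stub
 * advances a fake millisecond clock, simulating a card that normally takes
 * 40 ms per write+flush but spikes to 800 ms on the 100th call. */
static uint32_t fake_now_ms;

static void stub_write_and_flush(void)
{
    static uint32_t calls;
    fake_now_ms += (++calls == 100u) ? 800u : 40u;
}

static uint32_t samples[MAX_SAMPLES];
static uint32_t n_samples;

/* Record one write+flush duration; return 1 if it exceeds the spike threshold. */
static int log_one_write(uint32_t spike_threshold_ms)
{
    uint32_t t0 = fake_now_ms;
    stub_write_and_flush();
    uint32_t dt = fake_now_ms - t0;
    if (n_samples < MAX_SAMPLES)
        samples[n_samples++] = dt;
    return dt > spike_threshold_ms;
}
```

Pausing the debugger and inspecting `samples` then shows the outliers directly.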

What I found so far: the write and flush times are consistent until, seemingly at random, they spike. For example, a write plus flush normally takes 40 ms; when it spikes, it takes 800 ms instead.

I have done the test both with and without reformatting the SD card, and the spikes seem to occur after a similar number of writes, though never exactly the same.

The frequency of this happening seems to be significantly higher on an unbranded SD card. On branded SD cards it happens rarely, like once every 3000+ writes. The size of the spike also varies; it could be anywhere from 200 ms to 800 ms.

The datalogger has sufficient write speed for my application. However, when a time spike like this occurs, my buffers may overflow. I would like to understand what these spikes are, and whether there is any way to avoid them other than enlarging my buffers.

My questions are: could the spikes be caused by the attached debugger? Are they the result of some garbage collection, FAT table updates, or bad sectors? Do other applications see similar spikes? I do not have any other datalogger in my possession right now.
 

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 6469
  • Country: es
Re: Random high write times on SD card with STM32 and FileX
« Reply #1 on: February 02, 2025, 06:38:22 am »
You can easily rule out the debugger by running a release (optimized) build and sending the write times over UART.
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Offline amyk

  • Super Contributor
  • ***
  • Posts: 8559
Re: Random high write times on SD card with STM32 and FileX
« Reply #2 on: February 02, 2025, 10:58:02 am »
Those delays are caused by the FTL deciding to do some internal reorganisation that results in erasing and/or copying blocks of the NAND flash.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7477
  • Country: fi
    • My home page and email address
Re: Random high write times on SD card with STM32 and FileX
« Reply #3 on: February 02, 2025, 01:07:08 pm »
I agree with what amyk wrote above.

To minimize this latency, pick SD/microSD cards with UHS class 3 (U3) or video speed class 30 (V30), like SanDisk High Endurance 32 GB (SDHC, SDSQQNR-032G-GN6IA, EAN 0619659173067) or 64 GB (SDXC, SDSQQNR-064G-GN6IA, EAN 0619659173081) which cost about 14€ here.  I believe, but am not 100% certain, that the STM32H SDMMC peripheral supports SDXC just as well as SDHC.  All >32GB SD and microSD cards are SDXC.

The U3 and V30 classes mean the manufacturer guarantees a minimum sequential write speed of 30 MB/s; see www.sdcard.org/developers/sd-standard-overview/speed-class/ for details.  It is this minimum-speed "guarantee" that should also minimize the occasional controller latencies.  They will still happen, but they must be short enough not to cause issues (as otherwise the minimum sequential write speed could not be sustained).
 

Offline fchk

  • Frequent Contributor
  • **
  • Posts: 301
  • Country: de
Re: Random high write times on SD card with STM32 and FileX
« Reply #4 on: February 03, 2025, 09:06:11 am »
The problem is that SD is managed flash. There is a controller in each SD card (8051 or ARM7) that manages wear levelling and other things, and this controller may freely decide to take a break of up to 2 seconds if it needs to. Your application must not rely on any particular timing.
USB sticks and SSDs are also managed flash.

The other kind of storage is unmanaged flash. Raw NAND flash chips are unmanaged, for example. They have deterministic timing: writing a block always takes the same time. If you need this behavior, then NAND flash is for you. You need to do bad block mapping and wear levelling (if you need it at all) yourself.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2884
  • Country: ca
Re: Random high write times on SD card with STM32 and FileX
« Reply #5 on: February 03, 2025, 06:21:18 pm »
The problem is that SD is managed flash. There is a controller in each SD card (8051 or ARM7) that manages wear levelling and other things, and this controller may freely decide to take a break of up to 2 seconds if it needs to. Your application must not rely on any particular timing.
USB sticks and SSDs are also managed flash.

The other kind of storage is unmanaged flash. Raw NAND flash chips are unmanaged, for example. They have deterministic timing: writing a block always takes the same time. If you need this behavior, then NAND flash is for you. You need to do bad block mapping and wear levelling (if you need it at all) yourself.
The problem with raw flash is that at some point it will exceed its write endurance, and replacing it means physically de-soldering and re-soldering a new chip, which is not something you can rely on the average user to do. Replacing a memory card/USB stick, by contrast, is a trivial operation.
The best solution to this is to use something like F-RAM, which is both non-volatile and has effectively unlimited write endurance, but its downsides are that it is quite expensive and very limited in capacity.

Offline jc101

  • Frequent Contributor
  • **
  • Posts: 732
  • Country: gb
Re: Random high write times on SD card with STM32 and FileX
« Reply #6 on: February 03, 2025, 06:36:17 pm »
I've seen exactly this, though with a different processor and task.
My solution was to keep write buffers in RAM that can hold up to 5 seconds' worth of data, to give the SD card time for housekeeping as and when it decides it needs it.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4519
  • Country: gb
  • Doing electronics since the 1960s...
Re: Random high write times on SD card with STM32 and FileX
« Reply #7 on: February 03, 2025, 11:16:59 pm »
Quote
The problem with raw flash is that at some point it will exceed its write endurance, and replacing it means physically de-soldering and re-soldering a new chip, which is not something you can rely on the average user to do. Replacing a memory card/USB stick, by contrast, is a trivial operation.

Yes this is very true.

You can make flash last much longer if you organise a "walking" write process so the wear is even. But for sure nothing beats SD cards for GB/$ :) They can just be slow, as many people discovered with e.g. action cams, which often require special-grade cards.

Are you using the "open" serial interface (5 Mbps, IIRC) or the "licensed" 4-bit-wide mode (one can find the spec on Chinese websites, and there are threads on it here)?
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline amyk

  • Super Contributor
  • ***
  • Posts: 8559
Re: Random high write times on SD card with STM32 and FileX
« Reply #8 on: February 04, 2025, 03:05:09 am »
The other kind of storage is unmanaged flash. Raw NAND flash chips are unmanaged, for example. They have deterministic timing: writing a block always takes the same time. If you need this behavior, then NAND flash is for you. You need to do bad block mapping and wear levelling (if you need it at all) yourself.
Even raw NAND has a tiny, very limited microcontroller (probably more like a state machine), and write and erase times can still vary due to the adaptive programming algorithms they use, though much less, since there is no FTL processing.
 

Offline fchk

  • Frequent Contributor
  • **
  • Posts: 301
  • Country: de
Re: Random high write times on SD card with STM32 and FileX
« Reply #9 on: February 04, 2025, 10:22:05 am »
NAND Flash only has a hard-wired state machine. I've never seen parallel NAND Flash with a processor core.

Exhaustion of write/erase cycles shouldn't be a problem, since USB sticks use the same kind of memory. And data logging has a linear write pattern, not a random one, so there is less need for wear levelling. You could also mount NAND flash on small PCBs that plug onto the main board, just like Hardkernel does with its eMMC modules:
https://www.hardkernel.com/shop/256gb-emmc-module-xu4-linux/


 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 5664
  • Country: gw
Re: Random high write times on SD card with STM32 and FileX
« Reply #10 on: February 04, 2025, 12:08:08 pm »
You have to use FIFO buffering between the source of your data and the SD card.
SD cards do their housekeeping with typically 1-20 ms random outages, but sometimes much longer (AFAIK the max "write latency" is 200 ms according to an SD spec I saw in the past).
I did this in the past with FreeRTOS, pushing the data from the sensors into the FIFO at speed X and writing from the FIFO to the SD card at the card's maximum speed Y. The FIFO should be deep enough to hold, say, those 200 ms times the data volume per second you write into it. Then your data will not be lost.

https://www.eevblog.com/forum/programming/gps-tracker-with-pic-microcontroller-and-sd-card/msg3507404/#msg3507404
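The sizing rule and FIFO described above can be sketched in C as follows. The names and the power-of-two buffer size are my own choices, not from any particular RTOS; in a real FreeRTOS setup the push/pop sides would live in separate tasks:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Sizing rule from the post: the FIFO must absorb the worst-case card stall.
 * bytes_needed = stall_ms * bytes_per_second / 1000, rounded up. */
static size_t fifo_min_size(uint32_t stall_ms, uint32_t bytes_per_sec)
{
    return (size_t)(((uint64_t)stall_ms * bytes_per_sec + 999u) / 1000u);
}

#define FIFO_SIZE 65536u   /* must be a power of two for the mask trick */

typedef struct {
    uint8_t  buf[FIFO_SIZE];
    uint32_t head, tail;   /* free-running; head - tail = bytes queued */
} fifo_t;

static size_t fifo_used(const fifo_t *f) { return f->head - f->tail; }

/* Producer side (sensor task/ISR): returns bytes actually stored. */
static size_t fifo_push(fifo_t *f, const uint8_t *data, size_t n)
{
    size_t room = FIFO_SIZE - fifo_used(f);
    if (n > room) n = room;                 /* would overflow: caller must handle */
    for (size_t i = 0; i < n; i++)
        f->buf[f->head++ & (FIFO_SIZE - 1u)] = data[i];
    return n;
}

/* Consumer side (SD writer task): returns bytes actually removed. */
static size_t fifo_pop(fifo_t *f, uint8_t *out, size_t n)
{
    size_t avail = fifo_used(f);
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        out[i] = f->buf[f->tail++ & (FIFO_SIZE - 1u)];
    return n;
}
```

For example, a 32 kB/s data rate with a 200 ms worst-case stall needs at least 6554 bytes of FIFO, plus whatever margin the writer task's own latency requires.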
« Last Edit: February 04, 2025, 12:12:05 pm by iMo »
Readers discretion is advised..
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2884
  • Country: ca
Re: Random high write times on SD card with STM32 and FileX
« Reply #11 on: February 04, 2025, 03:47:28 pm »
You can make FLASH last much longer if you organise a "walking" write process so the wear is even.
It's not that easy to come up with a wear-levelling algorithm that doesn't have any "holes". Especially if you also need a "commodity" file system like FAT, which requires certain data to be in specific sectors; those sectors tend to become a single point of failure because they are overwritten so often.
Are you using the "open" serial interface (5 Mbps, IIRC) or the "licensed" 4-bit-wide mode (one can find the spec on Chinese websites, and there are threads on it here)?
I typically use SD interface because it provides for higher bandwidth.
« Last Edit: February 04, 2025, 03:51:01 pm by asmi »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7477
  • Country: fi
    • My home page and email address
Re: Random high write times on SD card with STM32 and FileX
« Reply #12 on: February 04, 2025, 06:12:38 pm »
It's not that easy to come up with a wear-levelling algorithm that doesn't have any "holes".
One useful trick is to reserve the first 4/8/16 bytes of each 512/2048/4096-byte sector for an identifier, and to use dedicated metadata sectors to describe where the data sectors are.

For logging data, for example when using an MCU with SD cards without a filesystem, the same method can be used to treat the entire flash as a circular buffer: just put a monotonically increasing identifier (a 32-bit one with 512-byte sectors suffices for two terabytes) at the beginning of each sector.  To locate the sector last written, you do a binary search, reading only the first 32 bits of each probed sector.  For a 32 GB SD card, you only need to check about 29 sectors!  For raw NAND, you'd also need the list of bad sectors, to skip them during the scan.  In Linux, reading the raw card is trivial; you only need elevated privileges to access the block device directly, and can then access the entire card as if it were an ordinary file, although sector-aligned reads and writes are faster.
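A minimal sketch of the binary search described above, with the card simulated as an in-memory array of per-sector counters (names are mine; on real hardware each probe would be a single-sector read). Once the buffer has wrapped, the counters form a rotated increasing sequence, so the classic rotated-array search finds the oldest sector in O(log N) probes:

```c
#include <assert.h>
#include <stdint.h>

#define SECTORS 64u

/* Simulated card: one 32-bit monotonic counter per sector (the first 4 bytes
 * of each 512-byte sector on the real device). */
static uint32_t counters[SECTORS];

/* Locate the oldest sector (the next one to overwrite) in a fully written
 * circular log. The counters are increasing except for one "drop" at the
 * oldest entry; binary-search for that drop. */
static uint32_t find_oldest(const uint32_t *c, uint32_t n)
{
    uint32_t lo = 0, hi = n - 1;
    while (lo < hi) {
        uint32_t mid = lo + (hi - lo) / 2;
        if (c[mid] > c[hi]) lo = mid + 1;   /* drop is to the right of mid */
        else                hi = mid;       /* drop is at mid or to the left */
    }
    return lo;   /* index of the smallest counter */
}
```

The most recently written sector is then simply the one just before the oldest, modulo the sector count.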
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4519
  • Country: gb
  • Doing electronics since the 1960s...
Re: Random high write times on SD card with STM32 and FileX
« Reply #13 on: February 04, 2025, 07:10:21 pm »
Quote
Especially if you also need to have a "commodity" file system like FAT, which requires certain data to be in specific sectors, and they tend to become a single point of failure due to being overwritten often.

There are some wear-levelling filesystems, but if you want Windows host compatibility (via USB MSC) then you have to use something like FatFs, which (below 4 MB) will produce a FAT12 volume with 512-byte sectors, and Windows needs exactly that. I used an Adesto flash chip which is just right for this. And you can't combine this with some linear storage scheme, at least not in the same part of the chip. On my product the FAT12 filesystem is intended just for config: maybe 10k+ FAT writes and then you are taking a chance.

Using an SD card with internal wear levelling is the simplest way.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2884
  • Country: ca
Re: Random high write times on SD card with STM32 and FileX
« Reply #14 on: February 04, 2025, 09:47:56 pm »
In Linux, reading the raw card is trivial; only need elevated privileges to directly access the block device, and then access the entire card as if it was just an ordinary file, although sector-aligned reads and writes are faster.
It's the same in Windows: you can get block-level access to storage devices if you need to. But here we're talking about an MCU, so typically it's some version of FAT (the advantage being that you can easily read the card off-device), or some kind of homebrew rudimentary filesystem. In my experience, as soon as a storage device appears in a project for one reason, it quickly finds other uses too, which necessitates at least some kind of FS to keep things from stepping on each other.

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7477
  • Country: fi
    • My home page and email address
Re: Random high write times on SD card with STM32 and FileX
« Reply #15 on: February 04, 2025, 11:12:26 pm »
In Linux, reading the raw card is trivial; only need elevated privileges to directly access the block device, and then access the entire card as if it was just an ordinary file, although sector-aligned reads and writes are faster.
It's the same in Windows, you can gain block-level access to storage devices if you need to. But here we're talking about an MCU, so typically it's some version of FAT (advantage is that you can easily read card off-device)
No, what I described is what I use on an MCU.  I can trivially use the microSD card socket on my Teensy 4.1, writing to raw sectors without bothering to implement any filesystem, using the entire card as a circular buffer as I described, by reserving 4 bytes of every 512-byte sector for a monotonic counter.  Using any SD card reader, I can read those no-filesystem SD cards in Linux with a simple userspace application: I only need to give it access to the block device.  (I often use a helper that temporarily obtains privileges, opens the block device as standard input, drops privileges, and then executes the unprivileged application, which uses standard input for its operation.)

The same scheme (a circular buffer with the beginning of each sector containing a counter) applies to raw NAND flash, and to the metadata sectors of any custom solution, but the locations of bad sectors have to be known first.  You could reserve, say, every 173rd sector (378 sectors at 512 bytes per sector in 32 MiB) and use those as a separate circular buffer, with a binary search among them to locate the latest bad-sector list.  Each 512-byte sector would then contain a 32-bit generation counter and identify up to 127 bad sectors.
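As an illustration only (field names are mine), the per-sector layout implied by this scheme, a 32-bit counter at the start of each 512-byte sector, can be expressed as a struct whose size is verified at compile time:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical on-card sector layout: a 32-bit monotonic write counter
 * followed by payload, together filling exactly one 512-byte sector. */
typedef struct {
    uint32_t counter;        /* monotonically increasing write identifier */
    uint8_t  payload[508];   /* 512 - 4 bytes left for logged data */
} log_sector_t;

_Static_assert(sizeof(log_sector_t) == 512, "must match the SD sector size");
```

The same shape works for the bad-sector metadata sectors: a 4-byte generation counter plus up to 127 4-byte bad-sector entries is again exactly 512 bytes.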
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4519
  • Country: gb
  • Doing electronics since the 1960s...
Re: Random high write times on SD card with STM32 and FileX
« Reply #16 on: February 05, 2025, 08:03:19 am »
Don't flash chips have bad blocks remapped already? They must, otherwise when a machine starts using one it would do an awful lot of remapping of bad blocks. Also, I do a factory test (on the 4 MB Adesto device) which checks every block.

Interesting about Linux. A long time ago I had reason to read some "copy protected" CF cards. They were 48 MB SanDisk ones which apparently had some special feature... but somebody said a program called WinHex could read them. I never succeeded (under Windows).
« Last Edit: February 05, 2025, 08:30:39 am by peter-h »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7477
  • Country: fi
    • My home page and email address
Re: Random high write times on SD card with STM32 and FileX
« Reply #17 on: February 05, 2025, 09:09:12 am »
Don't FLASH chips have bad blocks remapped already? They must have otherwise when a machine starts using it, it would do an awful lot of remapping of bad blocks. Also I do a factory test (on the 4MB Adesto device) and this checks every block.
Managed Flash does, unmanaged does not.  Serial flashes that take commands are managed, and do.

I believe the managed flash ones, like Winbond serial flashes, have a controller with EEPROM to retain the bad-blocks list in a hash table: a fixed-size array of logicalSector → physicalSector mappings, so that each lookup is typically O(1) time complexity; or perhaps in sorted order, so that a binary search on logicalSector is exactly O(log₂N).  If we assume 5% of sectors may become bad, then 32 GiB with 4096-byte pages has 8,388,608 pages, of which up to 419,430 may be remapped; that would require about 25 Mbits at 64 bits per mapping, and 25 look-ups per sector using a binary search.  For a hash table, the performance starts degrading when it is more than about half full (more checks per sector), so there are obvious tradeoffs here.

Interesting about Linux.
The flash as a circular buffer, or a set of independent circular buffers, each having their own fixed size, is a very interesting option on microcontroller projects that do continuous logging.  You completely avoid the complexity of a filesystem, replaced with a binary search that locates the head in each circular buffer at startup, and since you write contiguously, you should get good throughput, too.  If you add pre-TRIMming a few percent of the buffer suitably ahead of the head, you should also minimise the write latencies that stem from the flash controller doing bookkeeping work.  It would still happen, but it would be minimal work.  And, if you used "High Endurance" microSDHC/microSDXC cards, have enough bulk capacitance to keep the microcontroller running for a second, and check for power loss (before regulator and capacitor smoothing), you can implement an emergency shutdown of the Flash so it will not be damaged on a sudden power loss, even if in the middle of a write (which IIRC can be aborted for faster shutdown/quiescing).

It would be interesting to compare the behaviour of this on your STM32, as I use Teensy 4.1.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4519
  • Country: gb
  • Doing electronics since the 1960s...
Re: Random high write times on SD card with STM32 and FileX
« Reply #18 on: February 05, 2025, 09:43:41 am »
Yes; my project does have a circular data logging mode too. I allocated 512k of the 4MB FLASH to this. There is a 2MB FAT12 FatFS filesystem too, visible to windows via USB MSC. But bad blocks are not being remapped IIRC. Certainly not in the filesystem.

I also have a power-down data save mode
https://www.eevblog.com/forum/microcontrollers/how-fast-does-st-32f417-enter-standby-mode/
It needs a pre-erased block, because the write time is 3ms rather than 15ms.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28627
  • Country: nl
    • NCT Developments
Re: Random high write times on SD card with STM32 and FileX
« Reply #19 on: February 05, 2025, 11:06:10 am »
Don't FLASH chips have bad blocks remapped already? They must have otherwise when a machine starts using it, it would do an awful lot of remapping of bad blocks. Also I do a factory test (on the 4MB Adesto device) and this checks every block.
Managed Flash does, unmanaged does not.  Serial flashes that take commands are managed, and do.

I believe the managed flash ones, like Winbond serial flashes, have a controller with EEPROM to retain the bad-blocks list in a hash table: a fixed-size array of logicalSector → physicalSector mappings, so that each lookup is typically O(1) time complexity; or perhaps in sorted order, so that a binary search on logicalSector is exactly O(log₂N).  If we assume 5% of sectors may become bad, then 32 GiB with 4096-byte pages has 8,388,608 pages, of which up to 419,430 may be remapped; that would require about 25 Mbits at 64 bits per mapping, and 25 look-ups per sector using a binary search.  For a hash table, the performance starts degrading when it is more than about half full (more checks per sector), so there are obvious tradeoffs here.
This sounds like an odd solution to me. It adds complexity and now the solution depends on both eeprom and flash endurance.

When I need to store fixed-size chunks in flash, I round the size up to an integer (natural) number of sectors and reserve a flash area that stores several chunks. The data always includes a checksum (CRC32 or SHA1). If the verify fails, the write is retried, and if that fails, the next chunk is used. For writing sequential data, I use bits from a byte (first or last in a sector) to mark a sector as empty, used, or OK. If the write verify fails, again a retry, and if that fails the sector doesn't get marked as OK and the data is written to the next sector. These are simple round-robin systems which do wear levelling by themselves. If you need something more complex, it is time to look for a wear-levelling layer which does all the heavy lifting.
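A sketch of this scheme, under my own naming: the status marks only ever clear bits, so each state change is a plain program operation on erased-to-0xFF flash, and the payload carries a plain bitwise CRC-32:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Flash-friendly status marks: erased flash reads 0xFF and bits can only be
 * cleared by programming, so each state transition clears one more bit. */
#define SECT_EMPTY 0xFFu   /* erased, never written */
#define SECT_USED  0xFEu   /* write started (bit 0 cleared) */
#define SECT_OK    0xFCu   /* write verified (bit 1 also cleared) */

/* Plain bitwise CRC-32 (IEEE, reflected, poly 0xEDB88320) for the payload. */
static uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--) {
        crc ^= *p++;
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* Round-robin allocation: the next sector to write is the first one still
 * marked empty; wear levels itself because writes always move forward. */
static int next_free(const uint8_t *status, int n)
{
    for (int i = 0; i < n; i++)
        if (status[i] == SECT_EMPTY) return i;
    return -1;   /* area full: erase the oldest block and continue */
}
```

On readback, only sectors marked SECT_OK whose CRC matches are trusted; anything else is treated as a failed or interrupted write.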
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7477
  • Country: fi
    • My home page and email address
Re: Random high write times on SD card with STM32 and FileX
« Reply #20 on: February 05, 2025, 02:23:46 pm »
Don't FLASH chips have bad blocks remapped already? They must have otherwise when a machine starts using it, it would do an awful lot of remapping of bad blocks. Also I do a factory test (on the 4MB Adesto device) and this checks every block.
Managed Flash does, unmanaged does not.  Serial flashes that take commands are managed, and do.

I believe the managed flash ones, like Winbond serial flashes, have a controller with EEPROM to retain the bad-blocks list in a hash table: a fixed-size array of logicalSector → physicalSector mappings, so that each lookup is typically O(1) time complexity; or perhaps in sorted order, so that a binary search on logicalSector is exactly O(log₂N).  If we assume 5% of sectors may become bad, then 32 GiB with 4096-byte pages has 8,388,608 pages, of which up to 419,430 may be remapped; that would require about 25 Mbits at 64 bits per mapping, and 25 look-ups per sector using a binary search.  For a hash table, the performance starts degrading when it is more than about half full (more checks per sector), so there are obvious tradeoffs here.
This sounds like an odd solution to me. It adds complexity and now the solution depends on both eeprom and flash endurance.
Nevertheless!

As an example of non-managed flash, consider the Micron MT29F2G08ABAGAWP 256M×8 that Mouser sells, and its datasheet; specifically, the Error Management chapter on page 37.  You identify bad blocks (up to 40 out of 2048 blocks ≃ 1.95% from the factory, and not more than 2% during the device lifetime) by the first spare address on the first or second page of each block.

As an example of managed Flash, consider Winbond W25N512GVEIG also from Mouser.  It has an internal Bad Block Management Look-Up Table, BBM LUT, with up to ten relocated blocks.  See 8.2.7 Bad Block Management (A1h) command, and 8.2.8 Read BBM LUT (A5h) command, on pages 34 and 35.  As it is just a maximum of 10 relocations, it is rather unlikely they use Flash itself for this, and instead use some kind of EEPROM that is copied to RAM on power-on; this EEPROM only requires say a thousand write cycles to be effective in practice.

These are simple round-robin systems which do wear levelling by themselves. If you need something more complex, it is time to look for a wear levelling layer which does all the heavy lifting.
Wear leveling and bad block management are two completely different things, because even from factory fresh, raw NAND may contain up to 1.9% of bad blocks (Micron), with the maximum lifetime bad blocks allowed below 2% (Micron) and up, depending on the manufacturer.

Most NAND flash has spare bytes per page; for example, both of the above have 64 extra bytes per 2048-byte page.  The first extra ("spare") byte is often used to record page or block failure from the factory.  The rest are used for ECC and such.  In the Micron part, you relocate and mark entire blocks (128 KiB + 4 KiB spare) bad.  On a 2 Gb part you have 2048 blocks, and up to 40 may become bad during the device lifetime.  Since 2048 = 2¹¹, you only need 22 bits per block relocation.  As the very first block is guaranteed to last 1,000 PROGRAM/ERASE cycles, and typical ECC needs 6 bytes per 512 bytes, the spares left over from ECC in block 0, at say one relocation per page (so max. 64 relocations), could be used.  It'd be simpler to use page 0 of block 0, though, and not use block 0 for data at all.

However, consider the case where your NAND controller/MCU has 1024 bits of EEPROM with >100 guaranteed write cycles at hand.  You could store the bad-block information there (at 24 bits per relocation, you can fit 42 relocations in 1008 bits).  If you keep the same list in RAM, sorted, a binary search will find the mapping (or determine there is none) by examining at most seven entries.  Compared to the 300 µs page program on the Micron, that's nothing.
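A sketch of the sorted-table lookup described above, with my own field names and the 11+11-bit packing left as plain 16-bit fields for clarity:

```c
#include <assert.h>
#include <stdint.h>

/* One bad-block relocation: logical block -> physical spare block.
 * (11 + 11 bits would fit in 24 on a 2048-block part; plain fields here.) */
typedef struct { uint16_t logical, physical; } remap_t;

/* Table is kept sorted by .logical, so a lookup is a binary search:
 * at most ~6 probes for a 42-entry table, negligible next to a 300 us
 * page program. Blocks not in the table are used where they are. */
static uint16_t map_block(const remap_t *tbl, int n, uint16_t logical)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (tbl[mid].logical == logical) return tbl[mid].physical;
        if (tbl[mid].logical < logical)  lo = mid + 1;
        else                             hi = mid - 1;
    }
    return logical;   /* not remapped: use the block as-is */
}
```

Adding a relocation means inserting into the sorted array in RAM and appending the new entry to the EEPROM copy; the EEPROM is only ever written once per relocation.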

Why use separate memory for the bad-block mapping, then?  Because you want to read it as fast as possible during startup, and it is typically not much data at all.  Existing entries are never modified, only new ones added (a hash table grows in place; a sorted list for binary search needs reshuffling on insert).  The number of write cycles equals the maximum number of relocations!  It makes sense to make this small section more robust, at the cost of extra silicon.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 4519
  • Country: gb
  • Doing electronics since the 1960s...
Re: Random high write times on SD card with STM32 and FileX
« Reply #21 on: February 06, 2025, 08:30:40 pm »
As an aside, I would be interested in a finished and working (not some abandoned GitHub project) interface for an SD card; it's OK for it to be the "slow licence-free" SPI version, logically addressable in 512-byte blocks.
 

