Author Topic: Quick vs long format  (Read 3836 times)


Offline Rick LawTopic starter

  • Super Contributor
  • ***
  • Posts: 3479
  • Country: us
Quick vs long format
« on: January 02, 2020, 05:48:03 am »
Happy New Year!

I am doing my year-end backup, so I was doing things the same way I have been doing for years: surface-scanning and formatting new drives to store useful backups.  On suspect drives (this year I have one where all files read OK but chkdsk doesn't finish), I off-load, surface scan, reformat, and then copy back.  I suspect that was probably caused by ultra-long filenames; it is a repeatable error with XP-era machines.  But I surface test and reformat (long instead of quick format) to be sure.

Dumb, mindless work, doing backups.  But this time a thought hit me: with all that intelligence in the drive firmware, it reallocates bad sectors all by itself...  Doesn't that make a long format and surface scan a waste of time?

I'd sure like to hear your experiences/opinions.
 

Offline radar_macgyver

  • Frequent Contributor
  • **
  • Posts: 724
  • Country: us
Re: Quick vs long format
« Reply #1 on: January 02, 2020, 06:13:52 am »
The sector reallocation built into the drive firmware can only work if the drive is asked to read the data in question, so the act of reading the data is probably more beneficial than just scanning it. Modern filesystems (e.g. ZFS, Btrfs) have a 'scrub' operation, which causes every used sector to be read at least once. If the firmware has to reallocate data, it will do so silently. If it can't, then any redundancy built into the higher-level filesystem is used to re-create the data and write it back (the drive firmware will write it to a different physical sector).
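For reference, kicking off a scrub is a one-liner on either filesystem; the pool and mount-point names below are placeholders:

```shell
# ZFS: read and verify every allocated block in the pool
zpool scrub tank
zpool status tank            # shows scrub progress and any repaired errors

# Btrfs: equivalent scrub on a mounted filesystem
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data
```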
 

Online Jeroen3

  • Super Contributor
  • ***
  • Posts: 4171
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Quick vs long format
« Reply #2 on: January 02, 2020, 07:08:52 am »
When to use full format:

- You are formatting a (software) RAID volume that must be block synchronized.
- For cold storage: you are making it easier for your future self to read the data after a disk crash, since all non-relevant sectors will be zeroed instead of left random.

Quote
Doesn't that make long format and surface scan a waste of time?
Most of the time, yes. If you needed it because your drive threw an error, the disk is EOL.

Best insight into disk health: https://www.hdsentinel.com/
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17106
  • Country: us
  • DavidH
Re: Quick vs long format
« Reply #3 on: January 02, 2020, 02:10:10 pm »
Doing a surface scan of the drive by *writing* to every sector will force reallocation on write for bad sectors.  Reading the drive will leave bad sectors in place and increase the pending reallocation counter. (1)

So my preference now is to write to the entire drive to force reallocation and then check the SMART parameters.  This year I found three drives which were in the process of failing.
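As a sketch of that workflow (destructive! `/dev/sdX` is a placeholder for a drive whose contents you no longer need):

```shell
# Overwrite every sector so the firmware must remap anything pending
# reallocation; badblocks -w does four write/verify pattern passes.
badblocks -wsv /dev/sdX

# A single zero-fill pass is the quicker alternative:
dd if=/dev/zero of=/dev/sdX bs=1M status=progress

# Then check the counters that matter:
smartctl -A /dev/sdX | grep -E 'Reallocated|Pending'
```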

(1) This means that if you have a degraded RAID which keeps dropping a drive and becoming unavailable because of bad sectors, rewriting the bad drive can restore proper operation with bad sector data in only the damaged files unless metadata is affected.  I recovered a 4 x 3TB drive RAID5 last year using this method.  The lesson learned was *always* do a sector scan before swapping a RAID drive.
 

Online Jeroen3

  • Super Contributor
  • ***
  • Posts: 4171
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Quick vs long format
« Reply #4 on: January 02, 2020, 02:53:10 pm »
You can also ask the drive firmware to do a surface scan in the background: the Extended Self-Test.

I have programmed my server to do this once every few months. It takes hours though.
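One way to automate that, assuming smartmontools' smartd is running, is a schedule line in smartd.conf; the device name and timing below are examples:

```shell
# Kick off a one-shot extended self-test by hand:
smartctl -t long /dev/sda

# Or, in /etc/smartd.conf, run the long test on the 1st of every
# month at 03:00 (regex fields: month/day-of-month/day-of-week/hour):
/dev/sda -a -s L/../01/./03
```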
 

Online BravoV

  • Super Contributor
  • ***
  • Posts: 7549
  • Country: 00
  • +++ ATH1
Re: Quick vs long format
« Reply #5 on: January 02, 2020, 04:14:41 pm »
If you've decided to do a long format, forget all the other 3rd-party tools and use the one made by the HD manufacturer.

For example, Seagate has theirs, called SeaTools, and WDC has a similar one. Most include a long-format process (sometimes called a full/complete diagnostic) that performs a whole-drive diagnostic, from sector-by-sector verification of the platters up to spindle spin-up/down tests and so on.

Imo, they know their own stuff much better than other parties.

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17106
  • Country: us
  • DavidH
Re: Quick vs long format
« Reply #6 on: January 03, 2020, 01:40:12 am »
I usually find the manufacturer's utilities useless.  The long extended test does not do anything more than run the basic tests and read the entire surface.

If you use a wiping utility or a read and then write, then at least sectors which are pending reallocation will be processed.  Reading the SMART registers can be very informative as far as drive condition.
 

Online BravoV

  • Super Contributor
  • ***
  • Posts: 7549
  • Country: 00
  • +++ ATH1
Re: Quick vs long format
« Reply #7 on: January 03, 2020, 05:25:35 am »
Maybe we're using different tools. In the past I used SeaTools from Seagate, via the bootable version, as it's more powerful than the Windows version. It managed to find bad sectors and gave a thorough report that Windows missed in an NTFS full format with full write + verification enabled.

https://www.seagate.com/as/en/manuals/software/seatools-bootable/help-topic-bad-sector-found/

Quoting
"If you give permission to overwrite a bad sector SeaTools will attempt to write a pattern of zeros to that sector. Usually, this action will assist the disc drive firmware in managing the problem by retiring the problem LBA and activating a spare in its place."

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1334
  • Country: pl
Re: Quick vs long format
« Reply #8 on: January 03, 2020, 11:32:26 am »
NOTE: Everything here assumes that we are talking about spinning rust, not SSDs! While most of this post is generally valid for SSDs, some of the information may need adjustment.

But this time a thought hit me:  With all that intelligence in the drive firmware, it re-allocates bad sector all by itself...  Doesn't that make long format and surface scan a waste of time?
This is a very good thought. Indeed, yes: nowadays the firmware reallocates damaged sectors. If a software scan ever does detect bad sectors, it means the drive is so horribly damaged that the firmware has run out of spare sectors to reassign (normally each drive has a bunch of those reserved for this purpose, not accessible from the computer's side).

You can see the information about reallocated sectors in your disk's SMART data. This is one of the very few SMART values that is typically meaningful in its raw form, indicating the actual number of reallocated sectors across any vendor I've heard of. You should watch that value, because if it grows from 0 (or the normalized value falls), it is a clear sign that you should start seeking a new drive and treat this one as soon becoming unreliable.

Under Linux you can check SMART data using smartctl from smartmontools (available as a package in many distros):
Code: [Select]
$ sudo smartctl -a /dev/disk/by-id/ata-WDC_WD1002FAEX-00Y9A0_WD-WCAW34208404
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.3.11-arch1-1] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

[… snip …]

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       5
  3 Spin_Up_Time            0x0027   231   169   021    Pre-fail  Always       -       1425
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       306
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   015   015   000    Old_age   Always       -       62420
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       303
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       231
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       76
194 Temperature_Celsius     0x0022   101   095   000    Old_age   Always       -       46
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

[… snip …]
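If you only care about attribute 5, it can be pulled out of that listing with a little awk. A minimal sketch, parsing one captured line; on a live system you would pipe `sudo smartctl -A /dev/sdX` in instead:

```shell
# The attribute table puts the ID in column 1 and the raw value in the
# last column, so select the row whose ID is 5 and print its final field.
line='  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0'
realloc=$(printf '%s\n' "$line" | awk '$1 == 5 { print $NF }')
echo "reallocated sectors: $realloc"    # reallocated sectors: 0
```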

Writing and then reading the whole drive, as suggested by earlier posters, will invoke reallocation. But this is a lengthy process, not really suitable for everyday use by most people. A 4 TB drive at 200 MB/s will take 5.5 h to write (only write!). That is the upper end of what you may expect from a 7200 RPM drive; for most drives, prepare to spend one day per drive, even if they're smaller. And the information you gain from it is "this drive is unsuitable for further use" if the reallocated sector count grows above 0 during the operation. If it doesn't, you gain no information at all about its future reliability.
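The arithmetic behind that estimate, as a quick sanity check:

```shell
# 4 TB (decimal, as drives are sold) streamed at a sustained 200 MB/s.
bytes=4000000000000
rate=200000000                 # bytes per second
secs=$((bytes / rate))
echo "$secs s"                                            # 20000 s
echo "$((secs / 3600)) h $(( (secs % 3600) / 60 )) min"   # 5 h 33 min
```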

Someone has suggested full self-test and that is also a good option, but of course equally time consuming and giving the same type of information. The advantage is that there is a non-zero chance that the vendor has added some extras to the test, ones you can’t perform yourself from the software or that require insider knowledge. While you can perform those from CLI tools (like smartctl under Linux), interactive tools may be a better choice here, because they will give you indication of the test progress automatically and also handle cancelling etc. by themselves.

Unless you are really inclined to spend your time testing your drives for backup purposes, I would suggest performing only the short SMART test, to determine whether the drive's circuitry, mechanics and calibration are still OK. You will never have a 100% certain backup, so just "stop worrying and learn to love the bomb". It is rarely the case that data large enough to require an HDD is worth that much protection. There are exceptions, like data the law obligates you to keep, or data which, upon being lost, would cause significant financial or reputational damage to the company. But most likely it's things like movies downloaded from the internet, some other hoarding case, or photos you will probably never look at again. The time needed to perform the tests can be spent much better with friends or family. :)

If you do want to spend your time on that, I would rather suggest computing checksums of the files on the original medium and then, after copying them, on the backup medium, because that actually delivers some information.
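A minimal sketch of that round trip, using throwaway directories in place of the original and backup media:

```shell
# Stand-ins for the source medium and the backup medium.
src=$(mktemp -d); dst=$(mktemp -d); manifest=$(mktemp)
echo "yearly backup" > "$src/notes.txt"

# Checksum the originals, copy, then re-verify on the backup side.
(cd "$src" && find . -type f -exec sha256sum {} + > "$manifest")
cp -a "$src/." "$dst/"
verify=$(cd "$dst" && sha256sum -c "$manifest")
echo "$verify"                 # ./notes.txt: OK
```

Any file that was silently corrupted in transit (or on the source medium) shows up as FAILED instead of OK.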

The truly important data, e.g. private keys or a password manager's database, are no more than tens of megabytes. You can just buy a bunch of cheap USB sticks and store it there regularly, preferably spreading the media physically across multiple places, in case of some disaster or a theft. Also develop the habit of checking your backups for integrity, and have fuckup policies in place. That is much more important than just blindly copying data.
« Last Edit: January 03, 2020, 11:42:27 am by golden_labels »
People imagine AI as T1000. What we got so far is glorified T9.
 

Online Jeroen3

  • Super Contributor
  • ***
  • Posts: 4171
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Quick vs long format
« Reply #9 on: January 03, 2020, 01:00:33 pm »
For SSDs a full format is meaningless. There is no fixed mapping between a block you can address and a place on the chip.
You could perform a Secure Erase before formatting (complicated: it requires a disk power cycle and special commands, and it's not available in Windows).

Instead, in Windows: use a quick format and then optimize the drive (formerly "defragment").
In Linux: fstrim
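On Linux the trim step is just the following (the mount point is a placeholder; the timer ships with util-linux/systemd):

```shell
# One-off TRIM of a mounted filesystem:
fstrim -v /mnt/ssd

# Or enable the stock weekly timer:
systemctl enable --now fstrim.timer
```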
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15244
  • Country: fr
Re: Quick vs long format
« Reply #10 on: January 03, 2020, 02:36:05 pm »
For SSD's a full format is meaningless. There is no direct mapping to a block you can use and a place on the chip.

Yup. I'd even venture that it's meaningless on many recent HDDs as well, as their embedded controllers are a lot more advanced and probably handle the physical format as they see fit.
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17106
  • Country: us
  • DavidH
Re: Quick vs long format
« Reply #11 on: January 03, 2020, 02:53:19 pm »
Yup. I'd even venture that it's also meaningless on many recent HDDs as well, as their embedded controllers are a lot more advanced and probably handle physical formats as they see fit.

The physical layout, rotational latency, and seek time mean that minimizing access time requires the logical and physical layout to correspond except for remapped sectors.  It is actually easy to find which sectors are remapped by measuring access time to them.

Writing every sector of an SSD is not useless if the SSD is suffering from retention failure which can happen now in months in some cases despite their lying specifications.  The SSD has no way to know which sectors are in use or not (TRIM is for performance and cannot be relied on) so reads from bad sectors even though not in use can cause problems with some systems.

In theory a full read can force scrub on read for damaged but correctable sectors but it is not clear which drives support this.  They often lie.
« Last Edit: January 03, 2020, 02:59:05 pm by David Hess »
 

Offline mariush

  • Super Contributor
  • ***
  • Posts: 5134
  • Country: ro
  • .
Re: Quick vs long format
« Reply #12 on: January 03, 2020, 05:13:56 pm »
I found the Western Digital Data Lifeguard Diagnostic tool to be enough for detecting weak/bad sectors on mechanical drives and marking them accordingly.
LINK : https://support.wdc.com/downloads.aspx?p=3&lang=en

Use the ERASE option, which overwrites each byte on the drive; after a restart you'll see the SMART data updated with the number of sectors reallocated and all that.
The software works with any drive, not just WD drives.

A quick format just erases the metadata (the records that hold folders, file names, sizes, last-modified and last-accessed times, etc.), leaving everything else on the drive untouched.
A full format will write and then read back every sector (verifying it was written correctly).

SSDs don't store data sequentially like mechanical drives. To extend the life of the flash memory cells, which endure only a limited number of erases, the SSD controller keeps track of which pages of flash are written most often, places data in varying physical locations (to keep wear even across the drive), and remembers which flash page corresponds to which logical sector requested by the OS.

The SSD also uses a hidden portion of flash as spare area and sometimes as write cache, and when some pages become too worn, the controller starts replacing bits of the "visible" flash with portions of that hidden, less-worn memory.
Because of this, even a full format will not erase all data: one could desolder the chips and put them in a test jig or a custom-made SSD and retrieve what is stored in the flash, possibly recovering data from outside the portion normally visible to the OS.
 

Offline Rick LawTopic starter

  • Super Contributor
  • ***
  • Posts: 3479
  • Country: us
Re: Quick vs long format
« Reply #13 on: January 06, 2020, 01:51:58 am »
Hey guys, thanks!  Lots of useful information and lots of food for thought here.

I went radio silent because I shot myself in the foot and blew two drives, so I had a lot more to do than I had expected.  I actually connected the Molex in reverse.  It was plugged in not all the way, but enough for electrical contact, and it held firmly enough that I thought it was right; I was too dumb (lazy) to crawl out from under the desk to get my flashlight and check.  So poof, two backup discs went kaput.  Damn, yellow wire vs. red wire should be easy to spot even in the dark...  I didn't think a Molex could fit in reverse.

Man, I have been connecting molex connectors for 20+ years...  First time I did that.

Lesson re-learned - check and double check before flipping the ON switch.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15244
  • Country: fr
Re: Quick vs long format
« Reply #14 on: January 06, 2020, 04:31:20 pm »
Man, I have been connecting molex connectors for 20+ years...  First time I did that.

I've seen people plug an M.2 drive into a SATA power cable. It fits spot on, and everything looks fine... until the smoke comes out.

Ouch. :-DD
It unfortunately shows complete ignorance though: even someone who thinks an M.2 drive could interface with SATA (some actually are SATA, I think, as M.2 exposes either SATA or PCIe, but they obviously still require a proper M.2 connector on the motherboard) should know that a SATA power connector only conveys power and not data...
 

Online Jeroen3

  • Super Contributor
  • ***
  • Posts: 4171
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Quick vs long format
« Reply #15 on: January 07, 2020, 07:14:36 am »
If you handle a lot of bare drives, invest in a docking station or hot-swap bay.
Or just a tool-less enclosure; Orico (AliExpress) makes nice ones.
 

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5902
  • Country: au
Re: Quick vs long format
« Reply #16 on: January 07, 2020, 10:50:21 am »
As of Windows 7 (if I recall correctly) a full format zeroes out the disk. Previously it didn't do this.
 

Offline Kilrah

  • Supporter
  • ****
  • Posts: 1852
  • Country: ch
Re: Quick vs long format
« Reply #17 on: January 07, 2020, 11:40:02 am »
Welp, if those drives still had Molexes I guess it was time to replace them anyway :P

For my data storage I have a 2-drive RAID0 set in my desktop PC, and 2 external dual drive RAID-supporting boxes with the same config as backups. I sync the internal to one of the externals every couple of days while the other is stored offsite, both rotated every couple of weeks or so.

2-3 times a year I run FreeFileSync in "compare file contents" mode instead of the usual "scan for changes" mode, to force a full read of both the main and backup sets, which would catch failing sectors and potential file corruption. I also check SMART status with CrystalDiskInfo at the same time. If there's any SMART caution the drive gets replaced; from experience, when that happens it only takes a few weeks/months until they fail completely.
« Last Edit: January 07, 2020, 11:49:11 am by Kilrah »
 

Offline wraper

  • Supporter
  • ****
  • Posts: 17579
  • Country: lv
Re: Quick vs long format
« Reply #18 on: January 07, 2020, 12:16:35 pm »
But this time a thought hit me:  With all that intelligence in the drive firmware, it re-allocates bad sector all by itself...
If you are lucky. Often HDDs refuse to reallocate erratic sectors even if you forcefully try to make them do so, say with MHDD or Victoria. What I can say for sure: if more than 2-3 bad sectors appear, and especially if they keep appearing, toss that drive into the waste bin before it's too late.
 

Offline wraper

  • Supporter
  • ****
  • Posts: 17579
  • Country: lv
Re: Quick vs long format
« Reply #19 on: January 07, 2020, 12:33:15 pm »
Writing every sector of an SSD is not useless if the SSD is suffering from retention failure which can happen now in months in some cases despite their lying specifications.
Writing every sector is counterproductive. What you will achieve is performance degradation because of no free space left and everything then will need to be erased before writing. Also, I don't see how it solves any retention problem. If the drive has retention problems, only rewriting the actual data has some use. Modern SSDs refresh/move data by themselves anyway; it's part of the wear-leveling process.
Quote
The SSD has no way to know which sectors are in use or not (TRIM is for performance and cannot be relied on) so reads from bad sectors even though not in use can cause problems with some systems.
They may not know what's abandoned, TRIM helps with that. But they know for sure what's certainly not in use.
« Last Edit: January 07, 2020, 12:42:41 pm by wraper »
 
The following users thanked this post: Kilrah

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17106
  • Country: us
  • DavidH
Re: Quick vs long format
« Reply #20 on: January 07, 2020, 03:58:45 pm »
Writing every sector of an SSD is not useless if the SSD is suffering from retention failure which can happen now in months in some cases despite their lying specifications.

Writing every sector is counterproductive. What you will achieve is performance degradation because of no free space left and everything then will need to be erased before writing.

That is the normal case and further, TRIM is a crutch for poorly designed drives.

Quote
Also I don't see how it solves any retention problem. If drive has retention problems, only rewriting actual data has some use. Modern SSDs refresh/move data by themselves anyway. It's a part of wear leveling process.

Sure, rewriting the data will work, but if the drive is already written and suffering from retention problems, then reading the bad data is useless unless you are trying to recover it.  Modern SSDs *might* do idle-time scrubbing, but that is no help if the drive is left unpowered, and no help if the manufacturer lied about it.  In the latter case, the same applies to scrub-on-read.

Idle time scrubbing and scrub on read present a conundrum to the manufacturer because supporting them means the drive must also support power loss protection.  Otherwise power loss during a scrubbing operation can potentially wreck the drive.  Tests in the past have shown that very few drives, including those which say they do, correctly support power loss protection.

Quote
Quote
The SSD has no way to know which sectors are in use or not (TRIM is for performance and cannot be relied on) so reads from bad sectors even though not in use can cause problems with some systems.

They may not know what's abandoned, TRIM helps with that. But they know for sure what's certainly not in use.

Any drive which has been in use for a while will have no unused sectors beyond what TRIM indicates.  Further, I have received new SSDs with bad sectors due to retention failure (Crucial) which should only happen with allocated sectors.  That caused some interesting file system problems until I got paranoid enough to investigate.
 

Offline Rick LawTopic starter

  • Super Contributor
  • ***
  • Posts: 3479
  • Country: us
Re: Quick vs long format
« Reply #21 on: January 08, 2020, 03:16:50 am »
If you handle a lot of bare drives. Invest in a docking station or hot-swap bay.
Or just a tool-less enclosure. Orico (aliexpress) makes nice ones.

I actually have a few "hot-swap" bays at hand.  Initially, I set up my backup system as a dedicated machine with hot-swap bays.  After a few years and with the limited space under the desk, the dedicated system degenerated into just "ah...  Just once a year... I can plug the darn drive into my system as needed."  The space was gradually filled - taken up by electronic parts/boxes these days.

Struggling in and out of the space-limited area under the desk was what caused me to "lazy out" on crawling out to get a flashlight.  One improvement I will make in the near future is to install "permanent" lights under my desk, with an ON/OFF switch within reach.  So next year (I hope), instead of lying to myself that I can see in limited light, I can flip a switch and really see.

Welp, if those drives still had Molexes I guess it was time to replace them anyway :P
...
...

Naw... the two blown drives were SATA/600, but powered via a molex-to-SATA power adapter + splitter.

I do have a few molex+IDE drives with (very old) backups on them.  I have not touched them for years.  I wonder if those drives can still spin.  Perhaps I should check on those backups and move them to a newer drive...

Enough break time - I still have a lot to do because of the two dead drives (lots of juggling to make sure critical files on the server have a second copy even with two dead backup drives.  Now I have to juggle them back to their proper places.)

One "improvement" I already made - I will have at least two stand-by drives instead of just one - after I finish un-juggling the files.
 

Offline wraper

  • Supporter
  • ****
  • Posts: 17579
  • Country: lv
Re: Quick vs long format
« Reply #22 on: January 08, 2020, 05:08:22 pm »
Writing every sector of an SSD is not useless if the SSD is suffering from retention failure which can happen now in months in some cases despite their lying specifications.

Writing every sector is counterproductive. What you will achieve is performance degradation because of no free space left and everything then will need to be erased before writing.

That is the normal case and further, TRIM is a crutch for poorly designed drives.
How is it a crutch? Please explain how else the SSD should know that deleted (but not erased) data is no longer in use and that the sector can be erased for reuse. Not only that: if you try to partially rewrite such an (unused) physical sector (as a logical sector is smaller than a physical one), it will cause a read/modify/write of garbage data to another physical sector, causing write amplification and further performance reduction, besides not being able to erase unused sectors in advance. And physical sectors are quite large, usually 512 kB. IMHO you have no idea about SSDs.
Quote
Any drive which has been in use for a while will have no unused sectors beyond what TRIM indicates.
BS
Quote
Further, I have received new SSDs with bad sectors due to retention failure (Crucial) which should only happen with allocated sectors.
Dunno what your problem was, but IMHO you drew very big conclusions from a very limited amount of knowledge about what actually happened.
« Last Edit: January 08, 2020, 05:40:44 pm by wraper »
 

Offline Kilrah

  • Supporter
  • ****
  • Posts: 1852
  • Country: ch
Re: Quick vs long format
« Reply #23 on: January 08, 2020, 05:35:41 pm »
How is it a crutch? Please explain how else the SSD should know that deleted (but not erased) data is no longer in use and that the sector can be erased for reuse.
By understanding the actual filesystem, but since it's impractical/impossible to design a drive that could understand every possible filesystem out there it's not a crutch but the only viable solution.

Quote
Any drive which has been in use for a while will have no unused sectors beyond what TRIM indicates.
BS
This however is correct. At some point every flash page has been used once and only TRIM can mark some as truly empty again, and that point comes quickly since the drive will typically favor using empty pages first before starting to do read-modify-writes on partially full ones.
« Last Edit: January 08, 2020, 05:39:09 pm by Kilrah »
 

Online David Hess

  • Super Contributor
  • ***
  • Posts: 17106
  • Country: us
  • DavidH
Re: Quick vs long format
« Reply #24 on: January 09, 2020, 03:31:23 am »
That is the normal case and further, TRIM is a crutch for poorly designed drives.

How it's a crutch? Please explain how otherwise SSD should know that deleted (but not erased) data is no longer in use and sector can be erased for further use? Not only that, if you try to partially rewrite such (non used) physical sector (as logical sector is smaller than physical), it will cause read/modify/ and write of garbage data to another physical sector. Causing write amplification and further performance reduction besides not being able to erase non used sectors in advance. And physical sectors are quite large, usually 512kB. IMHO you have no idea about SSDs.

Then how did drives work before TRIM became available?  How do they work if the OS does not use TRIM?

The reserved space on the drive should provide plenty of space to absorb the difference between the virtual and physical sector mapping as old sectors are marked unused after being rewritten.  In practice the performance advantage of TRIM only becomes significant with drives which were poorly designed in the first place with insufficient reserve space.  It does make for great benchmarks and talking points though.

Quote
Further, I have received new SSDs with bad sectors due to retention failure (Crucial) which should only happen with allocated sectors.

Dunno what was your problem was but IMHO you made very big conclusions with very limited amount of knowledge about what actually happened.

Or it could be because I have tested various Flash media over the past decade and have an intimate understanding of how Flash media works at the hardware level.

So tell me, why would a new SSD which has never been used report bad sectors?
 

