I came up with an approach to make it worth something to me. I used the factory scan, which I think differs from the standard scan in that it does not deallocate blocks. The theory goes like this:
1) I see the stick reports an ECC level of 12 out of 15. I take that as an attempt to get good data out of the flash regardless of whether all the bits are actually working.
2) My guess is that sporadic failures occur with certain data patterns at certain offsets in the memory, offsets that were never in the initial bad-sector list because ECC was papering over them.
3) I argue that if I can set the ECC level to zero, that would eliminate any correction, and a disk scan with test patterns should then reveal the bad sectors (see the sketch after this list).
4) Through software, provided certain critical areas like the MBR block are OK, I can then build an exclusion map, using ext4 for example.
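Here is a minimal sketch of the pattern scan in step 3, assuming the controller really is running with ECC at zero. The device path, block size, and patterns are all my assumptions, not anything the factory tool dictates; this writes over the entire stick, so the device node must not be anything you care about.

```python
import os

DEV = "/dev/sdX"    # hypothetical device node for the stick -- destructive!
BLOCK = 4096        # scan granularity in bytes (an assumption)
PATTERNS = [b"\x00", b"\xff", b"\x55", b"\xaa"]   # classic stress patterns

def scan(dev=DEV, block=BLOCK):
    """Write each pattern across the whole device, read it back, and
    return the sorted block numbers that failed to verify."""
    bad = set()
    fd = os.open(dev, os.O_RDWR | os.O_SYNC)   # O_SYNC: push writes to flash
    try:
        size = os.lseek(fd, 0, os.SEEK_END)    # device size in bytes
        for pat in PATTERNS:
            buf = pat * block
            for off in range(0, size - block + 1, block):
                os.pwrite(fd, buf, off)
            # Drop cached pages so the read-back really hits the flash.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            for off in range(0, size - block + 1, block):
                if os.pread(fd, block, off) != buf:
                    bad.add(off // block)      # record the failing block
    finally:
        os.close(fd)
    return sorted(bad)

if __name__ == "__main__":
    for blk in scan():
        print(blk)    # one block number per line
```

The one-number-per-line output is the format mke2fs's -l option reads, so for step 4 the list could in principle be fed straight to mkfs.ext4 to exclude those blocks, as long as the filesystem block size matches the scan's BLOCK.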
I just did one test and my initial 3+ GB write diffed OK. If that holds, I would say the basic technology is reliable but was implemented in an unreliable manner. Or it could be that the technology is fundamentally unreliable. NOPE: the second try produced lots of errors.
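For reference, the write-and-diff check I am doing amounts to something like the sketch below. The paths are hypothetical (a ~3 GB source file and wherever the stick is mounted); the cache drop is Linux-only and needs root, and remounting the stick between the copy and the compare works just as well.

```python
import filecmp, os, shutil

SRC = "/tmp/testdata.bin"        # ~3 GB of source data (assumed to exist)
DST = "/mnt/stick/testdata.bin"  # copy on the stick (hypothetical mount)

shutil.copyfile(SRC, DST)
os.sync()                        # flush the copy out to the device
# Drop the page cache so the compare reads from the flash, not from RAM.
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")
# shallow=False compares the actual bytes, not just the stat() metadata.
print("verified OK" if filecmp.cmp(SRC, DST, shallow=False) else "MISMATCH")
```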
****
OK, a few hours later. Well, I noticed a few things. One is that it will accept pretty much anything I want it to do if I clear the stick and then reinstall it first. For the last setting I wanted to see what happened if I altered the r/w cycle to 66 ns, that is to say, tried to slow the process down. I left ECC at zero. What happened, and it was rather odd, was that the first 3 gigs again verified (without any bad-block declaration). So I am still testing things.
****
Mar 7 - Still have not found a reliable configuration. It works for a while and then totally messes up, so I guess that is just the way this stuff is.