EEVblog Electronics Community Forum
General => General Technical Chat => Topic started by: eti on September 05, 2020, 01:24:03 am
-
I've just discovered that Sinclair DELIBERATELY bought in FAULTY RAM stock for the ZX Spectrum, where only half the RAM capacity worked, all to save a few pence... WHAT A CHEAP SKATE!
I absolutely couldn't live with myself, knowing I'd skimped on the BOM, just to make more money! Okay, so you save a few pence, but then, in the future, people like me will still be discussing what a skimper you are - UGH! I'd have that thought gnawing away at me, I couldn't let a designed product get into people's hands like that - and I don't care if it worked perfectly or not - YUCK!
-
They only needed 32k and used 64k RAM chips in which half was faulty, arranged so that the full 32k required was present. It's not like the other half was advertised or supposed to be available; it wasn't, just unused.
They also sometimes found a machine where the 32k they needed didn't work, in which case they rebadged it as a 16k machine.
Not really any different to, for example, hard drive, SSD, flash memory where bad areas are marked off in firmware.
Reduce, reuse, recycle.
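The firmware trick mentioned above can be sketched in a few lines of Python (a hypothetical toy model, not any real drive's firmware):

```python
# Hypothetical sketch of firmware-style bad-block remapping: the device
# advertises only the good capacity, and a map hides the defective blocks.
class RemappedStorage:
    def __init__(self, total_blocks, bad_blocks):
        self.bad = set(bad_blocks)
        # logical -> physical: simply skip over every bad physical block
        self.map = [p for p in range(total_blocks) if p not in self.bad]
        self.capacity = len(self.map)  # what the user actually sees

    def physical(self, logical):
        if not 0 <= logical < self.capacity:
            raise IndexError("beyond advertised capacity")
        return self.map[logical]

dev = RemappedStorage(total_blocks=8, bad_blocks={2, 5})
print(dev.capacity)     # prints: 6 -- only good blocks are advertised
print(dev.physical(2))  # prints: 3 -- logical 2 skips over bad physical 2
```

Exactly like the Spectrum: the dead half simply never appears in the address space the user sees.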
Now if you want really dodgy: Amstrad (I think it was Amstrad) fitted completely fake RAM chips and circuitry onto their computer to sell it into, I think, Spain or Portugal, which had a minimum memory requirement. The RAM they fitted was completely bogus, not connected to the computer electrically in any meaningful way!
-
I've got to ask, were you a potential buyer of a Spectrum when they were first on the market? If you had a choice between buying a computer or not buying a computer, would you have passed up on the Spectrum because you would rather save up for more expensive options?
And be sure to understand that if you bought a computer with 16 K of RAM, you would get 16 K of working RAM. If you only got 8 K when 16 K was promised, the consumer protection people would have been down on them like a ton of bricks. The UK has always had strong consumer protection laws.
-
I've got to ask, were you a potential buyer of a Spectrum when they were first on the market? If you had a choice between buying a computer or not buying a computer, would you have passed up on the Spectrum because you would rather save up for more expensive options?
And be sure to understand that if you bought a computer with 16 K of RAM, you would get 16 K of working RAM. If you only got 8 K when 16 K was promised, the consumer protection people would have been down on them like a ton of bricks. The UK has always had strong consumer protection laws.
"The UK has always had strong consumer protection laws."
Yes, I know, I am English ;)
-
It led to interesting mods like this one that created an 80K RAM rubber key speccy!
http://blog.tynemouthsoftware.co.uk/2020/03/a-zx-spectrum-with-80k-of-ram.html
-
Shrug, I deliberately bought a 6-core desktop CPU. Because it's cheaper than the 8-core, and doesn't offer much less performance for my applications. I don't care if it's the same die and the extra bits are just deactivated (whether for sales or technical reasons).
Tim
-
I've just discovered that Sinclair DELIBERATELY bought in FAULTY RAM stock for the ZX Spectrum, where only half the RAM capacity worked, all to save a few pence... WHAT A CHEAP SKATE!
I absolutely couldn't live with myself, knowing I'd skimped on the BOM, just to make more money! Okay, so you save a few pence, but then, in the future, people like me will still be discussing what a skimper you are - UGH! I'd have that thought gnawing away at me, I couldn't let a designed product get into people's hands like that - and I don't care if it worked perfectly or not - YUCK!
Erm... OK...
So, if you owned a fab 30 or 40 years ago, what exactly would you propose to do with your partially-working, (for the time) high-density dies?!
Edit: And, alternatively, why is it Sinclair's fault for trying to get the lowest possible BOM for a mass-market, budget product?
What would you do?
Increase your sales price by $10, $15, $20? Just to get 100% functional, premium RAM chips...
Why? What would that accomplish in the end?!
You're insane. :)
Apparently you weren't actually doing computing in the 70s and 80s...
Even simple, early, slow DRAM was incredibly expensive... Don't even talk about SRAM...
-
So much righteousness... :-)
-
The OP is missing the point.
As others have said - There are two ways of achieving X amount of RAM ... buying first quality chips that provide X - or buying higher capacity chips that may have faults, but can still provide X.
If you were building a new product using new, developing technology (especially where cost reflects this), I would suggest you look at your BOM cost and see which way you would jump.
I also question how someone could be outraged at a machine being offered with X amount of RAM gets delivered with X amount of functional RAM.
So much righteousness... :-)
Indeed.
-
I have never heard about it causing any problems to the end users. So what was the problem? :-// None.
-
As has already been stated, the Spectrum comes from the early days of home computing and bears little resemblance to the current market. The computers were relatively simple but parts, such as the DRAM, were very expensive. As you needed 8 or more DRAM chips, their individual cost impacted the total BoM cost significantly. Manufacturers of home computers were actually grateful to the DRAM fabs for making grade B DRAM available to them at discounted prices. It was a very new and very different computing world back then. We, as users, were grateful for every 1K a manufacturer could provide at reasonable cost. The 48K Spectrum had a distinct sales advantage over the competition with smaller memory capacity, such as the VIC20 and Dragon 32. I was a Dragon 32 user and I modified that platform with greater DRAM etc. The 32K and 64K DRAM was very expensive at the time. I repaired Spectrum computers for a computer retailer as a favour to them and still have the schematics somewhere. I did see DRAM failures but that was not uncommon in that era. The worst faults were often in ASICs that were both expensive and hard to source, except from other faulty computers. The ZX81 had an ASIC that regularly popped its clogs for no apparent reason. At the time there was also a practice of swapping out DRAM chips one at a time until the fault disappeared!
As I said, a very different era to now; home computing was a new frontier and some unusual production practices were employed to ‘get the job done’. The consumer just wanted a home computer; how the manufacturer created it was of little interest, provided it worked. Dragon Data was no different and they were even accused of stealing the Radio Shack MC6809E based CoCo design for the Dragon....... it did look awfully similar at the schematic level ;) It was so similar that CoCo schematics, programming manuals and programs were mostly compatible ;D ‘Wild West’ frontier days :-+ Great fun for those of us who grew up during that era in computing. At school I was working on Commodore CBM PETs and Z80 Research Machines platforms. A home computer such as a Sinclair ZX Spectrum, Commodore VIC20 or Dragon 32 was amazing to us at the time. To get 32K in a Dragon seemed impressive. The 48K of the ZX Spectrum was a source of envy for many of us, so we upgraded our Dragon 32s to 64K through modifications and upgrades. IIRC the Dragon 32 used ‘half good’ DRAM, and when fully enabled it often worked fine at its full capacity. Grade B rejects sometimes failed speed tests but actually worked OK. A second bank of ‘half good’ 64K DRAM could also be added to make 64K. Oh, fun times; I miss the simplicity of 8 bit computers.
Now if you really want to poke fun at the Sinclair Spectrum...... go after that daft calculator-style rubber keyboard! Compared to the Dragon 32’s real keyboard, it was a joke and looked like a toy. Such a pity, as the computer was actually much better than it looked. I used to fit ZX Spectrum+ cases to standard Spectrums as an upgrade to the case and keyboard. Still not a real keyboard feel though.
Thanks for taking my mind back to happy days for me as a School kid, and then Student, working on 8 bit computers in my spare time :-+
Fraser
-
I've just discovered that Sinclair DELIBERATELY bought in FAULTY RAM stock for the ZX Spectrum, where only half the RAM capacity worked, all to save a few pence... WHAT A CHEAP SKATE!
I absolutely couldn't live with myself, knowing I'd skimped on the BOM, just to make more money! Okay, so you save a few pence, but then, in the future, people like me will still be discussing what a skimper you are - UGH! I'd have that thought gnawing away at me, I couldn't let a designed product get into people's hands like that - and I don't care if it worked perfectly or not - YUCK!
Sinclair a cheapskate? News at 10.
Sinclair's products were always dodgy, from his first audio kits in the 60s onwards. Have a look at an insider's view: http://diy.torrens.org/Sinclair/inside/Duncan.php
-
Sinclair was a boffin, an ideas man. He should have left design and production to those who specialised in turning ideas into reality.
To be fair to Sinclair as a company though, they were providing what consumers wanted and, as already stated, that sometimes required ‘bush engineering’ or ‘Wild West’ practices when it came to designs and components used. Very different times when electronics was often less ‘polished’ than what we have come to expect these days.
Fraser
-
As I said, a very different era to now
Small cough
Yes, but some things don't change; cutting the BOM is one of them.
In 2009 I got an assignment to purchase microcontrollers for a mass market product. To meet their target BOM, the uC had to be <$0.10.
At that time that was insane. The cheapest Chinese uC was $0.22 at million-piece quantities.
So after a while one of our product managers gave me a Chinese contact offering a million uCs at $0.05/pc.
I investigated, and the story was this: they were 4-bit uCs for toys where the RAM was partly faulty. The contact had written a small program that, on first run, tests all RAM locations and marks whichever set was smaller: the bad ones if there were fewer bad than good, or the good ones if there were fewer good than bad.
Needless to say we passed, but these things are used in cheap toys.
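For the curious, the scheme that contact described might have looked something like this (a hypothetical Python model; the real thing would have been a few bytes of 4-bit assembly):

```python
# Sketch of a first-boot RAM self-test like the one described: walk every
# cell, and record whichever set (bad or good) is smaller, since the tiny
# table has to fit in the same scarce RAM.
def ram_selftest(read, write, size):
    bad = []
    for addr in range(size):
        old = read(addr)
        write(addr, 0x5)               # 4-bit test pattern 0101
        ok = read(addr) == 0x5
        write(addr, 0xA)               # complement pattern 1010
        ok = ok and read(addr) == 0xA
        write(addr, old)               # restore original contents
        if not ok:
            bad.append(addr)
    good = [a for a in range(size) if a not in set(bad)]
    # store the shorter list, plus a flag saying which one it is
    return ("bad", bad) if len(bad) <= len(good) else ("good", good)

# fake 16-nibble RAM where addresses 3 and 7 are stuck at 0
cells = [0] * 16
stuck = {3, 7}
read = lambda a: cells[a]
def write(a, v):
    cells[a] = 0 if a in stuck else v

kind, table = ram_selftest(read, write, 16)
print(kind, table)   # prints: bad [3, 7]
```

A neat hack for toys; not something you'd want in a product that has to behave the same from unit to unit.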
-
Yeah, it's not good to know you were ripped off.
There was a prominent but dodgy clone computer seller in Melbourne selling PCs with "external cache". He simply loaded dummy chips on the motherboard with fake cache markings and hacked the BIOS to report cache that was not there. The chips actually contained no semiconductor material inside. Added to that, he sold computers advertised with fake clock speeds, e.g. 80386SX-27 when they were only clocked at 20 MHz.
-
A long time ago I heard that the Sinclair Micromatic had used cheaper out-of-spec transistors. I'm not sure whether it was true or not, and it could easily be confused with the Micro-FM radio, which used out-of-spec transistors because their gain was too high; see the link to Richard Torrens' page below. I could never get my Mk2 Micromatic to work; it might have been my soldering or a bad ME4102 transistor. Clever bit of design for a reflex TRF though. The mica compression trimmer was a classic. I used a Sinclair Scientific calculator for a while but its accuracy wasn't that good, and there was a calculator shoot-out published in ETI magazine. The Commodore SR-1800 came out as the winner and I've still got mine somewhere; I kept the Sinclair for its multiplexed LED display. http://diy.torrens.org/Sinclair/inside/Duncan.php
-
..... had used cheaper out of spec transistors
Here's the thing - Something described as "out of spec" simply has parameters that are not within the range defined by the designer - BUT - that doesn't necessarily mean those parameters will be outside the range of functionality for a particular application. It is not inconceivable to produce a widget which contains out-of-spec components and still works, Sinclair's RAM exercise being one example. This could also apply to, say, a transistor whose maximum voltage is specified at 25V but which can't handle more than 15V, yet is going to work fine in a 9V circuit. In this case, replacement with an "in spec" part should restore operation.
I daresay you could then have a component that is "out of spec" - but it is one of those out of spec parameters that enables it to function in a particular circuit. Here, replacement with an "in spec" part will not restore function.
"In spec" is not as important as "suitable for purpose".
-
..... had used cheaper out of spec transistors
Here's the thing - Something described as "out of spec" simply has parameters that are not within the range defined by the designer - BUT - that doesn't necessarily mean those parameters will be outside the range of functionality for a particular application. It is not inconceivable to produce a widget which contains out-of-spec components and still works, Sinclair's RAM exercise being one example. This could also apply to, say, a transistor whose maximum voltage is specified at 25V but which can't handle more than 15V, yet is going to work fine in a 9V circuit. In this case, replacement with an "in spec" part should restore operation.
I daresay you could then have a component that is "out of spec" - but it is one of those out of spec parameters that enables it to function in a particular circuit. Here, replacement with an "in spec" part will not restore function.
"In spec" is not as important as "suitable for purpose".
Absolutely, but the reason you buy parts which 'meet spec' is so you can design to those specs.
If you buy factory rejects like Sinclair did, then there's no guarantee that they will perform the same even within the same batch, never mind the next batch. It pushes up production cost because you have to 'select on test' for every product, and it reduces reliability drastically.
It also makes repair far more difficult, because there's no guarantee you can buy the re-marked parts, no guarantee you will get one that matches the foibles of the unit you're repairing, and no guarantee you can find out which other components to change to make it work...
It's fine to buy bags of reject parts as a hobbyist; it used to be a big market, and pretty much every electronics mag was teeming with ads for 'bargain bags' of factory-reject components, bought by weight not quantity. For production, though, it's a stupid trick that can wreck your reputation.
I still have some of those "BC108-like" transistors :)
The insider in http://diy.torrens.org/Sinclair/inside/Duncan.php notes that Sinclair's business model included large numbers of "return and get your money back" kits, and that the Sinclair staff could often rescue the kit. When they did that they offered the punter their money back or their fixed kit, and the punters were often grateful.
-
It's pretty common for companies to sell partially working silicon chips at a discount in order to avoid just tossing them in the trash.
Pretty much all the SD cards you have contain only partly working flash inside. Due to the push for density, the yield on flash memory is awful. So even the top-tier cards have some leftover capacity hidden away to cover the dead pages. The chips with too many dead areas are instead sold as half-capacity memory chips.
AMD used to sell 3-core CPUs. Why would they make a 3-core chip? Because these are actually 4-core chips with one of the cores dead, and the chip is designed to easily disable that core.
Nvidia does this commonly: the xx70 models (2070, 1070, 970...) are actually the same GPUs used in the higher xx80 models (2080, 1080, 980...), except that too many cores are dead, so they disable a certain number of cores and sell it off as the lower model.
Even 100% working chips are binned. For example, Intel sells the same CPU die under multiple part numbers depending on how well it did in the final test stage. If a chip performs better at high clock speeds it is sold as the overclocking K variant; if it fails the test at normal speed but passes at a lower clock speed it is sold as a lower-clocked model. If a chip shows particularly low power usage it is sold as the low-power S or T variant.
-
Many BJTs are/were allocated their type number, depending on their measured beta.
If you buy the base model car, it has many of the "bits" from the upmarket ones, just not used, so what's the difference?
-
If the parts were actually useless, they would've been trashed. The fact that they were sold and used is proof that they weren't useless.
Not that unscrupulous sellers don't exist either, but again, see above. When there are regulatory forces in place to prevent that, you get a well functioning market.
There's plenty of history of this, of actually trashing parts that don't meet spec. Manufacturers aren't afraid to do it. There are plenty of photos of, for example, mounds of finished vacuum tubes, to be crushed and disposed of (or hopefully recycled?). Selling dysfunctional parts would be more harmful to their reputation than the value of those parts, even if sold through appropriate channels (or indeed, the cost of setting up that "budget"/off-spec/relabel sales channel wouldn't be worth its revenues).
A better concern would be if they were reliable, at whatever capacity they had. I don't know about the chips in question, but it's generally the case that semiconductors don't change much over time (aside from exposure to ESD, which is avoidable in finished products by appropriate design). It's a good guess that, within that available capacity, they're fine, functionally indistinguishable from a chip of the same (design) size. That is, the defect happens to cause a few memory bits, or cells, or decoders or whatever, to malfunction, but the rest of the chip is and shall remain perfectly functional aside from that.
I don't know what's difficult about understanding that.
It's like I sold you a car that, by all indications, is supposed to have a 4-cylinder engine in it, but I actually put in a V8 and did a shoddy job tuning it, so it only makes the power of a 4-cylinder anyway (and yes, assuming other circumstances are comparable, like equal mileage; maybe it's just not getting enough fuel or something, so it's somehow just as efficient, but only delivers half the maximum power). And you're complaining that I've somehow defrauded you, but you're getting a car exactly as described.
Tim
-
He was sold half-working 64Kbit DRAMs at a significant discount, certainly less than the price of two 16Kbit DRAMs. If he'd been a little bit cleverer, he'd have figured out how to use 48K of them and do away with the bank of 16Kbit DRAMs that comprised the base 16K. Mostly the defects would have been quite localized, so if the chips had been tested and binned by defect page, then with two XOR gates on DRAM A6 and A7 and two jumpers to select which 16K page not to use, he could have used 3/4 of each chip. However, he'd already maxed out the 40 pins of the ULA, and I'm not sure it had enough gates left; it would also have impacted performance, as all the RAM would have been contended with the ULA video controller. The refresh design might also have been a bit hairy, as natively the Z80 only provides a seven-bit refresh address. Maybe Clive wanted to do it that way, and either the pressures of time to market or Richard Altwasser's good sense killed it off.
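The XOR-and-jumpers idea is easier to see in code. A rough Python model (logical addressing only; on a real multiplexed DRAM the gates would sit on the shared address lines):

```python
# Sketch of the proposed 3/4-chip trick: XOR the two quadrant-select
# address bits with per-chip jumpers so the bad 16Kbit quadrant always
# lands in the top logical quarter, which the machine never uses.
def steer(addr16, jumpers):
    q = (addr16 >> 14) & 0b11       # quadrant-select bits (top two here)
    return ((q ^ jumpers) << 14) | (addr16 & 0x3FFF)

def jumpers_for(bad_quadrant):
    # choose jumpers so logical quadrant 3 (unused) maps onto the bad one
    return bad_quadrant ^ 0b11

bad = 1                              # say this chip's quadrant 1 failed test
j = jumpers_for(bad)
used = {steer(a, j) >> 14 for a in range(3 * 16384)}   # lower 48K only
print(bad in used)                   # prints: False -- bad quadrant never hit
```

Two XOR gates and two jumpers per bank really is all the "controller" this needs; the cost was pins and contention, as noted above.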
-
And if you think that doesn't go on today at the fab level, you're kidding yourself.
We all know the basic structure of DRAM - one transistor and one capacitor per bit. Not including muxes and all the other supporting circuitry. So I just added up all the RAM in my daily-use x86 machines, excluding GPU VRAM, phones, tablets, and so forth. 126GiB of it. One trillion, 82 billion, 331 million, 758 thousand, 592 transistor/cap pairs, if my math isn't off.
Somehow, I suspect I've got quite a bit of unused partially-faulty DRAM around me. Just a feeling..
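That figure checks out, assuming one transistor/capacitor pair per bit:

```python
# 126 GiB of DRAM at one transistor/capacitor pair per bit
pairs = 126 * 2**30 * 8
print(f"{pairs:,}")   # prints: 1,082,331,758,592
```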
Oh, and all those cheap off-brand (or just lesser brand..) SSDs out there with no markings or custom markings on the flash? Parts which failed to meet OEM specs. Sold off, tested (maybe..) to much lower standards and sold to you.
-
Basically all modern DRAM, NAND and camera sensors have faults in them. Maybe one in a thousand chips has no defects at all. DRAM defects are hidden at the chip level by using spare rows/columns, or even half of the chip may be disabled and sold as a smaller size. NAND defects are usually managed at the controller level. Camera defects are dealt with in image processing software. CPUs/GPUs often have parts of them disabled to increase yield and are sold as lower-tier parts; some have a part disabled even in the top-tier chip. For example, the PS3's Cell CPU physically has 8 cores but one of them is always disabled to increase yield.
-
It's pretty common for companies to sell partially working silicon chips at a discount in order to avoid just tossing them in the trash.
Pretty much all the SD cards you have contain only partly working flash inside. Due to the push for density, the yield on flash memory is awful. So even the top-tier cards have some leftover capacity hidden away to cover the dead pages. The chips with too many dead areas are instead sold as half-capacity memory chips.
AMD used to sell 3-core CPUs. Why would they make a 3-core chip? Because these are actually 4-core chips with one of the cores dead, and the chip is designed to easily disable that core.
Nvidia does this commonly: the xx70 models (2070, 1070, 970...) are actually the same GPUs used in the higher xx80 models (2080, 1080, 980...), except that too many cores are dead, so they disable a certain number of cores and sell it off as the lower model.
Even 100% working chips are binned. For example, Intel sells the same CPU die under multiple part numbers depending on how well it did in the final test stage. If a chip performs better at high clock speeds it is sold as the overclocking K variant; if it fails the test at normal speed but passes at a lower clock speed it is sold as a lower-clocked model. If a chip shows particularly low power usage it is sold as the low-power S or T variant.
This is all by design though, Sinclair bought parts which would otherwise have ended up in the trash.
Well, this is done with flash too.
Some flash silicon comes off the manufacturing line so bad that the actual manufacturers like Micron, Samsung, etc. don't want to bother with it. So there are companies out there that make it a business of buying the crap flash dies from the big players on the cheap, putting them through more extensive testing to patch up the bad memory areas [Note 1], packaging them into chips, and selling them.
A lot of the really cheap off-brand SSDs have such waste-reclaimed flash in them.
EDIT:
Note 1: To clarify expressed concerns about confusion, this process of "patching up bad memory" involves recording that a block is bad by writing this information into the flash memory before it leaves the factory. It does not involve somehow repairing the incredibly tiny silicon structures on the die.
-
Some flash silicon comes off the manufacturing line so bad that the actual manufacturers like Micron, Samsung, etc. don't want to bother with it. So there are companies out there that make it a business of buying the crap flash dies from the big players on the cheap, putting them through more extensive testing to patch up the bad memory areas, packaging them into chips, and selling them.
Not true; NAND is used as-is. The controller deals with bad blocks of memory. The crappiest-tier NAND usually goes into cheap memory cards or toys. Micron has their own division called SpecTek which deals with dodgy Micron NAND and DRAM. Usually you can find an S logo placed over the original marking. They even sell chips with 3 lines on them for especially dodgy stuff.
(https://i.stack.imgur.com/ZVLRm.jpg)
(https://blog.macsales.com/wp-content/uploads/2011/03/OCZ-spectech.png)
-
Some flash silicon comes off the manufacturing line so bad that the actual manufacturers like Micron, Samsung, etc. don't want to bother with it. So there are companies out there that make it a business of buying the crap flash dies from the big players on the cheap, putting them through more extensive testing to patch up the bad memory areas, packaging them into chips, and selling them.
Not true; NAND is used as-is. The controller deals with bad blocks of memory. The crappiest-tier NAND usually goes into cheap memory cards or toys. Micron has their own division called SpecTek which deals with dodgy Micron NAND and DRAM. Usually you can find an S logo placed over the original marking. They even sell chips with 3 lines on them for especially dodgy stuff.
(https://i.stack.imgur.com/ZVLRm.jpg)
(https://blog.macsales.com/wp-content/uploads/2011/03/OCZ-spectech.png)
Yes, later on the flash manufacturers started doing it themselves, since it provides profit out of junk.
And yes, it's the controller's job to handle bad blocks, but the bad block information is stored in areas of the flash chip itself, and the manufacturer might guarantee an area of error-free blocks for holding bootloader code (this is more for embedded systems, not SSDs). For example, from the datasheet of a random Micron flash chip, the MT29F8G08FACWP:
(https://www.eevblog.com/forum/chat/clive-sinclair-what-a-cheap-skate!/?action=dlattach;attach=1060460;image)
-
[...]
Now if you want really dodgy: Amstrad (I think it was Amstrad) fitted completely fake RAM chips and circuitry onto their computer to sell it into, I think, Spain or Portugal, which had a minimum memory requirement. The RAM they fitted was completely bogus, not connected to the computer electrically in any meaningful way!
Here is some further read in the eevblog forum:
https://www.eevblog.com/forum/vintage-computing/the-dodgiest-computer-ever-!/msg3109666/#msg3109666
-
Yes, later on the flash manufacturers started doing it themselves, since it provides profit out of junk.
And yes, it's the controller's job to handle bad blocks, but the bad block information is stored in areas of the flash chip itself, and the manufacturer might guarantee an area of error-free blocks for holding bootloader code (this is more for embedded systems, not SSDs). For example, from the datasheet of a random Micron flash chip, the MT29F8G08FACWP:
So how does storing bad block information qualify as "patching up the bad memory areas"? :-// The factory may sell crap with no or little testing, and the testing is done later by a 3rd party, but the NAND chip is still as-is, with nothing bad hidden or fixed.
-
Yes, later on the flash manufacturers started doing it themselves, since it provides profit out of junk.
And yes, it's the controller's job to handle bad blocks, but the bad block information is stored in areas of the flash chip itself, and the manufacturer might guarantee an area of error-free blocks for holding bootloader code (this is more for embedded systems, not SSDs). For example, from the datasheet of a random Micron flash chip, the MT29F8G08FACWP:
So how does storing bad block information qualify as "patching up the bad memory areas"? :-// The factory may sell crap with no or little testing, and the testing is done later by a 3rd party, but the NAND chip is still as-is, with nothing bad hidden or fixed.
Because the bad block information lets the controller know not to use those blocks. If they are not used, it does not matter that they are broken, so the flash chip works as intended: anything not marked as bad works. So "patching up bad memory areas" refers to marking them not to be used, not somehow magically fixing them to work again under a microscope or something.
So it's the same thing as Clive's RAM chips that arrived with defects. It's perhaps even possible that he got the manufacturer to separately bin the chips with the high or low half failing into separate boxes. In that case the "bad block table" is written on a label on the box of chips.
Or is there a requirement for this bad block correction to be hidden from the end user? Like, for example, modern HDDs that do bad block remapping inside the HDD controller chip while showing a perfectly linear array of always-good sectors on the SATA interface.
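In code, the controller's side of that contract is tiny. A hypothetical sketch (the 0xFF-means-good marker byte is the usual convention for raw NAND factory bad-block marks):

```python
# Sketch of what a controller does with factory bad-block marks: scan the
# marker byte each block carries from the factory, then expose a clean,
# linear run of logical blocks over whatever physical blocks survived.
FACTORY_GOOD = 0xFF   # conventionally, a non-0xFF marker means "bad"

def build_map(markers):
    return [p for p, m in enumerate(markers) if m == FACTORY_GOOD]

markers = [0xFF, 0xFF, 0x00, 0xFF, 0x00, 0xFF]  # blocks 2 and 4 shipped bad
l2p = build_map(markers)
print(l2p)        # prints: [0, 1, 3, 5] -- 4 contiguous good logical blocks
print(l2p[2])     # prints: 3 -- logical block 2 silently lives at physical 3
```

Whether you call building `l2p` "patching up" or just "bookkeeping" is exactly the argument above.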
-
Because the bad block information lets the controller know not to use those blocks. If they are not used, it does not matter that they are broken, so the flash chip works as intended: anything not marked as bad works. So "patching up bad memory areas" refers to marking them not to be used, not somehow magically fixing them to work again under a microscope or something.
Then every NAND chip that comes out of the factory through normal means is "patched up" even though it is not :palm:.
In that case the "bad block table" is written on a label on the box of chips.
Simply dumb speculation. Chips come in grades, and the factory does not want to put their own label onto trash grades, but a bad block table written on the package is something new. Yeah, good luck printing thousands of bad block locations for each chip there :palm:.
Or is there a requirement for this bad block correction to be hidden from the end user?
No, it is about calling things what they are. Your writing that they are "patching up" is misinformation. People who read it will think of something very different from what actually happens.
-
Well, a lot has been said about the fab and inherent defects of parts, including modern ones.
My bit is that Engineering is all about making the best thing you can within the boundary conditions you have. With all its problems, Sinclair, Peddle, Tramiel, Wozniak, Mensch, West, Noyce, Kilby, and so many other pioneers were solving issues within the constraints they had.
When I was a teen and started to get involved with the PC world and its "imperfect" hard drives, memories, etc. I also scoffed at the manufacturers for "putting out such garbage" - of course, I quickly realized how naïve I was.
I personally liked two books about these early computing days: The Soul of a New Machine (about Data General) and On the Edge (about Commodore).
-
Also may be of interest. I was reading about the latest high density Flash tech that Samsung I think is working with. It's a stack of layers built up, a terraced hill, allowing them to achieve just a little bit more areal density. Well, the paper noted how defects can get into that stack, and they can actually tolerate it in some cases as the layers just spread out over it. Defects might be from dust particles in the fab chamber, for example. So the layers take on some "princess and the pea" form. Defects in just the wrong place might cause broken or shorted connections, leaky paths or faulty transistors, sure, but they can actually get an error rate less than one per defect, which makes for a notable improvement to overall yield!
It's like picking up a potato and maybe it's just oddly shaped, or maybe it's got a black spot when you cut into it, or maybe it's a rotten lump. Sometimes it's fine, sometimes you can cut around it, sometimes it's a total loss. Whatever works.
Tim
-
I think we also need to consider the market that Sinclair and others were supplying......not NASA with mission-critical systems. The consumer market could not afford CBM PETs for home use, but Sinclair found a way to provide something useful at an affordable cost to many. If people start becoming fussy about chip specifications then they also have to accept that a higher cost product will result. It is simple maths, and we cannot take away from Sinclair that, even though I dislike it, the ZX Spectrum was a commercial success that also helped many people of my age to enter the world of computers and programming when otherwise we would have remained excluded by cost.
My parents bought me a Dragon 32 and I loved it. I learnt all about its chipset, how to program and how to modify the hardware to suit my needs. Invaluable knowledge to me but I was fortunate that my parents could afford such a computer for me. I could not have afforded a Sharp MZ80 or other more advanced platforms.
Fraser
-
The bad block information lets the controller know not to use those blocks. If they are not used, it does not matter that they are broken, and the flash chip works as intended: anything not marked as bad works. So "patching up bad memory areas" refers to marking them as not to be used, not somehow magically fixing them to work again under a microscope or something.
Then every NAND chip that comes out of the factory through normal means is "patched up", even though it is not. :palm:
Yep, that is exactly what I was trying to say in my first post: https://www.eevblog.com/forum/chat/clive-sinclair-what-a-cheap-skate (https://www.eevblog.com/forum/chat/clive-sinclair-what-a-cheap-skate)!/msg3219324/#msg3219324
You can't buy, off the shelf, a large-capacity raw NAND flash chip that has all blocks working. They exist, but are not binned for it. And if you do get what looks like a perfect array of memory, then there is a controller in the mix hiding the bad blocks, such as in SD cards, where it's not uncommon to make 8GB cards out of bad 16GB flash that doesn't have enough working blocks to make up a full 16GB plus spares, because nobody wants to sell a 15.9GB card.
In that case the "bad block table" is written on a label on the box of chips.
Simply dumb speculation. Chips come in grades, and the factory does not want to put its own label onto trash grades, but a bad block table written on the package is something new. Yeah, good luck printing thousands of bad block locations there :palm:.
And this is why NAND chips store the bad block table inside the flash memory itself. The table is very big.
Since Sir Clive's "memory controller" could only disable a 32KB block of memory, the bad block table was only 1 bit large, which is pretty easy to fit on a label, and it was easy to tell this information to the "memory controller" via a simple jumper setting.
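To make that concrete, here is a toy model of the one-bit "bad block table" idea: a hypothetical 64K chip where one 32K half is known bad at factory test, and a single jumper bit selects which half the system uses. Class and names are made up for illustration, not the actual Spectrum circuit.

```python
# Toy model of a 64K-cell DRAM in which one 32K half is known faulty.
# A single jumper bit (the entire "bad block table") selects which
# half the system actually uses. Hypothetical sketch, not real hardware.

RAM_SIZE = 64 * 1024
HALF = RAM_SIZE // 2

class HalfGoodDram:
    def __init__(self, bad_upper_half: bool):
        # bad_upper_half is the one-bit "bad block table":
        # True  -> upper 32K is faulty, so use the lower half
        # False -> lower 32K is faulty, so use the upper half
        self.cells = bytearray(RAM_SIZE)
        self.offset = 0 if bad_upper_half else HALF

    def read(self, addr: int) -> int:
        assert 0 <= addr < HALF  # only 32K is ever exposed to the system
        return self.cells[self.offset + addr]

    def write(self, addr: int, value: int) -> None:
        assert 0 <= addr < HALF
        self.cells[self.offset + addr] = value & 0xFF

ram = HalfGoodDram(bad_upper_half=False)  # "jumper" set per the chip's marking
ram.write(0x1234, 0xAB)
print(hex(ram.read(0x1234)))  # -> 0xab
```

The faulty half still physically exists; it is simply never addressed, which is exactly why the machine works perfectly despite the "broken" parts.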
EDIT:
Just noticed the addition in your edited post, so I'll add a response.
Or is there a requirement for this bad block correction to be hidden from the end user?
No, it is about calling things what they are. Writing that they are "patching up" bad areas is misinformation. People who read it will imagine something very different from what actually happens.
Noted. Will add clarification to my post.
-
Basically all modern DRAM, NAND and camera sensors have faults in them; maybe one in a thousand chips has no defects at all. DRAM defects are hidden at the chip level by using spare rows/columns, or even half of the chip may be disabled and the part sold at a smaller size. NAND defects are usually managed at the controller level. Camera sensor defects are dealt with in image-processing software. CPUs/GPUs often have parts of them disabled to increase yield and are sold as lower-tier parts; some have a part disabled even in the top-tier chip. Say, the PS3 CPU physically has 8 cores, but one of them is always disabled to increase yield.
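The spare row/column trick above can be sketched in a few lines. This is a rough model (all names hypothetical): rows that fail factory test are remapped to spare rows, where the "fuse map" here is just a dict, while real chips do the remap in address-decode logic with blown fuses.

```python
# Toy sketch of on-chip spare-row repair: rows found defective at
# factory test are remapped to spare rows. Real chips blow fuses in
# the address decoder; here the fuse map is modeled as a dict.

NUM_ROWS = 8
SPARE_ROWS = 2

class RepairableArray:
    def __init__(self, bad_rows):
        self.storage = [[0] * 16 for _ in range(NUM_ROWS + SPARE_ROWS)]
        self.remap = {}
        for i, bad in enumerate(bad_rows):
            if i >= SPARE_ROWS:
                raise ValueError("too many defects: scrap, or bin as a smaller part")
            self.remap[bad] = NUM_ROWS + i  # "fuse" points the bad row at a spare

    def row(self, r):
        return self.storage[self.remap.get(r, r)]

arr = RepairableArray(bad_rows=[3])  # row 3 failed factory test
arr.row(3)[0] = 42                   # access transparently lands in a spare row
print(arr.row(3)[0], arr.remap)      # -> 42 {3: 8}
```

The user of the chip never sees the defect; too many defects and the part drops to a smaller capacity bin, which is essentially the Sinclair story in modern form.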
Yes we still do that these days.
I absolutely don't see what the problem is with what the OP mentioned. As long as the RAM chips were tested GOOD for the intended purpose, who cares? That's almost a detail. The design of the ZX Spectrum could not accommodate 64KBytes of RAM anyway (IIRC), so what's the problem? As mentioned, Sinclair probably got those parts cheaper than the equivalent capacity built from 16KB parts, and with fewer parts in the end.
This example is not even a good example of the cheap choices Sinclair kept making. There are tons of others, a lot more problematic for the users, as some would hinder reliability a lot. But sure, Sinclair's motto was to design the cheapest stuff possible. The idea was to give as many people as possible access to new technology that was not accessible before. Personal computers tended to be pretty expensive before Sinclair. Then of course, others followed, some making better stuff for not much more money, but that was just natural competition then. When the Spectrum got released, you needed to shell out at least twice the price to get something remotely close in specs. Of course the more expensive stuff you'd get then would usually be more reliable and better finished, but the whole point IMO was to release products with the highest specs-to-price ratio possible.
Sinclair's business model was definitely not sustainable, but it was unique at the time, and I believe it kind of started a revolution of its own. Oh, and by the way, Clive Sinclair himself did not design Sinclair products. He had a bunch of pretty talented engineers, who did what they could given the constraints they were given.
Amstrad (that kind of took over) managed to release objectively better stuff for cheap. Sure, the guy himself probably had a bit more industrial experience, but that was also a different era already. Components' prices had already dropped a lot, and the market was a bit more "mature" - meaning the average joes were starting to see the benefits of personal computers - so you could sell people stuff for more money, but with everything they needed to get started. A ZX Spectrum needed at least a TV set - and a tape recorder. Amstrad products were usable without anything extra to buy. Sinclair did not take this approach, because (I think) at the time, most people (at least in the general population) were still not convinced of the value of a personal computer, and thus would buy only if the investment was really low (to minimize the risk, so to speak), even if, in the end, the cost of ownership would usually end up much higher.
-
I've just discovered that Sinclair DELIBERATELY bought in FAULTY RAM stock for the ZX Spectrum, where only half the RAM capacity worked, all to save a few pence... WHAT A CHEAP SKATE!
I absolutely couldn't live with myself, knowing I'd skimped on the BOM, just to make more money! Okay, so you save a few pence, but then, in the future, people like me will still be discussing what a skimper you are - UGH! I'd have that thought gnawing away at me, I couldn't let a designed product get into people's hands like that - and I don't care if it worked perfectly or not - YUCK!
Wait till you find out that that's how nearly all semiconductors are made: they're tested and graded. In some cases, like CPUs and memory, it includes testing each core and then disabling the bad ones. So a single-core CPU could be a dual-core CPU with one defective core, or a 4-core is a 6-core with one or two defective cores. In memory, it's bad banks or blocks which get disabled in testing and substituted by the spare blocks that are designed into the product for this purpose.
In many semiconductors (including the above), speed grading is also done: only some will manage the highest speeds.
So your 2GHz CPU might in "reality" be a "failed" 2.5GHz CPU.
There's nothing wrong with this. Do you expect a butcher to discard the whole hog if one leg happens to be scrawny? Of course not. Do you expect a glass company to discard a whole lot of glass because of the parts with bubbles, even though it'll be cut into smaller sheets later, and they can cut around the imperfections? The world would be (even more insanely) wasteful than it is if we didn't do stuff like that.
-
Wait till you find out that that's how nearly all semiconductors are made: they're tested and graded. In some cases, like CPUs and memory, it includes testing each core and then disabling the bad ones. So a single-core CPU could be a dual-core CPU with one defective core, or a 4-core is a 6-core with one or two defective cores. In memory, it's bad banks or blocks which get disabled in testing and substituted by the spare blocks that are designed into the product for this purpose.
In many semiconductors (including the above), speed grading is also done: only some will manage the highest speeds.
So your 2GHz CPU might in "reality" be a "failed" 2.5GHz CPU.
It used to be a huge pain point / point of confusion / joke in the semi industry: "you can't test quality into a product!"
Well, when your statistical model shows that sometimes the article works just absolutely perfectly, and sometimes it fails, and when it fails it fails hard, in very specific, localized ways, then yes, all you need to do is test to weed out the failed parts, and voila: quality.
Early fabs often suffered from truly embarrassing yield rates, under 1% for example. I recall reading this about early Japanese transistor production, I think; Intel and I assume others suffered similar issues many times through history as they brought up new fab lines, everything from Intel's famous first DRAMs to newer CPUs (I want to say the Pentium was one of them? Or is that just true of any chip run through whatever the new fab process is?).
And the principle still applies to natural variation in process parameters, even as tightly controlled as they are. Doping levels usually being the biggest variance (isn't it?), affecting everything from voltage rating (hence the multiple grades of TIP31/A/B/C) to gain (hFE grades of 2SC1815O/Y/GR/BL, and everything (Vpo, gm, Rds(on)) of JFETs), to switching speed (hence clock ratings of CPUs). Why dispose of a part that runs a little slower but is otherwise perfectly serviceable?
Or I could equally well ask -- why not dispose of the parts that exceed the spec? Surely you feel just as strongly about being oversold, as undersold? ;D
But overspec can be a problem too: typical example, fast modern epitaxial 2N3055s singing in old circuits that strung them up on wiring harnesses, or should I say resonant tank circuits -- yikes!
There's nothing wrong with this. Do you expect a butcher to discard the whole hog if one leg happens to be scrawny? Of course not. Do you expect a glass company to discard a whole lot of glass because of the parts with bubbles, even though it'll be cut into smaller sheets later, and they can cut around the imperfections? The world would be (even more insanely) wasteful than it is if we didn't do stuff like that.
Yup, exactly. Actually, on the subject of meat, I wonder if standards could/should be updated to accommodate more kinds of defects. It's my understanding that (at least over here; food laws do vary quite a bit around the world) finding a neoplasm or tumor is grounds to reject the carcass. Well, that might be warranted, but also, what are the chances that the defect is benign? We remove benign tumors from humans all the time, and don't cull them. :P The even less appetizing question also follows: even if it's cancerous to the animal, can it 1. cause illness in humans, under any conditions (i.e., if eaten raw, or worse), and 2. what about when safely cooked?
A valid counter-argument is, with how messy the meat industry is over here, it's probably not a good idea to give them this much leeway (i.e. to judge whether a defect is benign). A good counter-counter-argument being, well can't we just regulate them like normal countries? But, ah, the USA can't have nice things.. :( (and so I won't go any more political here).
Tim
-
Wait till you find out that that's how nearly all semiconductors are made: they're tested and graded. In some cases, like CPUs and memory, it includes testing each core and then disabling the bad ones. So a single-core CPU could be a dual-core CPU with one defective core, or a 4-core is a 6-core with one or two defective cores. In memory, it's bad banks or blocks which get disabled in testing and substituted by the spare blocks that are designed into the product for this purpose.
In many semiconductors (including the above), speed grading is also done: only some will manage the highest speeds.
So your 2GHz CPU might in "reality" be a "failed" 2.5GHz CPU.
There's nothing wrong with this. Do you expect a butcher to discard the whole hog if one leg happens to be scrawny? Of course not. Do you expect a glass company to discard a whole lot of glass because of the parts with bubbles, even though it'll be cut into smaller sheets later, and they can cut around the imperfections? The world would be (even more insanely) wasteful than it is if we didn't do stuff like that.
It used to be a huge pain point / point of confusion / joke in the semi industry: "you can't test quality into a product!"
That used to be a widespread engineering aphorism, not just hardware/software.
Software weenies cannot believe the aphorism, because they are taught that test driven development is sufficient, and that if the "traffic light indicator" is green then the product works.
Two concepts that have never entered their consciousness... Tests cannot show the absence of faults, and if your tests are crap (which they usually are) then the green light means very little.
Yes, I exaggerate, but not too much.
-
There are plenty of stories of that in the hardware domain; a combination of insufficient (or an unanticipated need for) testing, and the application of Hyrum's law (over time, given enough users, the implementation becomes the interface).
Example that comes to mind, I think it was a Pease article? Delco was abusing their regulators, running them just on the bleeding edge of operation in their radios. Because, you know, can't be spending precious cents on heatsinks. They were thermally cycling, and blowing up. They weren't expected to operate that way; it's a protective measure, not an operating mode. But, being the big ugly customer that they are, they got the testing and process improvements, which makes for a better part in the end, but it's rather unsatisfying to see the abusers win, y'know?
Now, in the wider sphere of engineering, or materials science, or whatever -- you can test what you can measure, but if you can't test it, you obviously aren't going to get any quality out of it. That's one thing you can't "test into" a product. I suppose microcracks in metal parts would be such an example: a thick enough part can't even be x-rayed. Parts can't be stress-tested if doing so spends precious fatigue life (most things aren't nearly so critical, but rocket engine parts might be an example?). That also cropped up with early metal-can transistors, where the bond wires, and weld spatter inside the metal can, could fatigue during extensive vibration tests; there, receiving testing really was testing quality out of them. Maybe impurities in bulk chemicals? Chemical tests are lengthy and expensive (as are a lot of mechanical tests), so you're only going to test for common impurities.
So you often have situations where, not so much that you can't test some things, but it's a very real question of economy, how much time and money will be spent testing versus how much the material cost, and what revenue the finished product will generate. And implicit in that testing is, how much incoming material, or outgoing production, will you reject -- how much production time and handling labor will be wasted -- when the tests fail?
That's where you "can't test quality" into your process, you need a different method. Engaging suppliers in a more involved process, auditing, random inspections, 3rd party testing, etc. They may raise their costs in response, but when that's less than the above cost, there you go. Engage with your labor suppliers just as much, i.e., employees. Make sure they're happy, comfortable, have all the tools and procedures in place to produce quality parts, and that their managers are doing the same.
Which, heh, the converse, I can just imagine what a living hell that would be. Testing quality into a labor force? I'm not sure exactly what all that would entail, but it sounds as awful as any low-wage megacorp is.
Tim
-
The 1975 Sinclair Black Watch (https://en.wikipedia.org/wiki/Black_Watch_(wristwatch)) was an early, huge product failure. The batteries going dead, drifting oscillator, money-back guarantee... a loss of USD$3.4M (£2.6M) in 2019 dollars. OUCH.
I'm not sure if Sinclair just didn't listen to the engineers or pushed them too hard; usually it's that they don't want to poke the bear and tell him flat out it's not ready yet. Dreamers don't worry too much about things not working.
I remember W. Edwards Deming (https://deming.org/inspection-is-too-late-the-quality-good-or-bad-is-already-in-the-product/) teaching "Inspection is too late. The quality, good or bad, is already in the product. As Harold F. Dodge said, “You cannot inspect quality into a product.”
-
It is not inconceivable to produce a widget which contains out of spec components and still work.
Unless we are talking Aliexpress cheapies, in which case they probably won't continue working even when they work.
Some double-standards going on here, methinks. Surely a product that has failed some test may fail further tests in the future - it is a faulty product, after all. But it's OK for Sinclair and the likes to use these in the name of a cheap BoM, but it's not okay for a hobby guy to use similar from an Alibay vendor.
-
It is not inconceivable to produce a widget which contains out of spec components and still work.
Unless we are talking Aliexpress cheapies, in which case they probably won't continue working even when they work.
Some double-standards going on here, methinks. Surely a product that has failed some test may fail further tests in the future - it is a faulty product, after all. But it's OK for Sinclair and the likes to use these in the name of a cheap BoM, but it's not okay for a hobby guy to use similar from an Alibay vendor.
I couldn't locate where you got the quote from, but years ago, we were taught that designs which relied upon strict adherence to component specs were poor engineering.
For instance, that is why amplifiers have negative feedback loops around them.
Whilst it is no doubt possible to design an amplifier using carefully selected components which will meet all required specifications, it is just plain easier to design one that isn't "touchy" about component specs.
The same thing applies to other electronic devices.
Mechanical stuff is not quite as forgiving!
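A quick numeric illustration of that point about feedback (a toy calculation, not any specific amplifier design): with negative feedback the closed-loop gain is A / (1 + A*B), which stays pinned near 1/B even when the open-loop gain A varies wildly from part to part.

```python
# Closed-loop gain of a feedback amplifier: A / (1 + A*B).
# With enough loop gain it approaches 1/B, set by the precise, cheap
# feedback network rather than the sloppy active device.

def closed_loop_gain(A, B):
    """A = open-loop gain, B = feedback fraction."""
    return A / (1 + A * B)

B = 0.01  # feedback network targets an ideal gain of 1/B = 100
for A in (50_000, 100_000, 200_000):  # a 4:1 spread in open-loop gain
    print(A, round(closed_loop_gain(A, B), 3))
# -> gains of 99.8, 99.9, 99.95: within 0.2% of ideal despite the spread
```

That is exactly the "not touchy about component specs" property: a 4:1 spread in device gain collapses into a fraction of a percent at the output.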
-
It is not inconceivable to produce a widget which contains out of spec components and still work.
Unless we are talking Aliexpress cheapies, in which case they probably won't continue working even when they work.
Some double-standards going on here, methinks. Surely a product that has failed some test may fail further tests in the future - it is a faulty product, after all. But it's OK for Sinclair and the likes to use these in the name of a cheap BoM, but it's not okay for a hobby guy to use similar from an Alibay vendor.
I would guess that the only thing wrong with Sinclair's trash-picked chips is that one or two bits were dead (stuck at 1 or stuck at 0) due to a lithography error that didn't quite connect two transistors properly or shorted something out. The rest of the chip is fine. If there were any other serious faults with the chip, such as drawing 2x the normal supply current, then the test would also detect that and toss it out.
Yes, no test is 100% guaranteed to find all issues. There might be "zombie" chips that work fine in the test, but then, after some field wear and tear, a bond wire that was just barely holding on lets go, for example. But this is really rare. The tests are typically run at the extremes of the operating range. So the chip might be run below and above the supply voltage the datasheet specifies to make sure it works there too, the chip might be fed slow-rise-time signals to try to coax any metastability faults into showing up, the chip might be overclocked and measured to find the point where it craps out... etc. All this makes sure that the chip operates fine even beyond what the datasheet claims, so they can be pretty darn certain that it also works within the datasheet specs. Some of the fancier, more expensive chips for high-reliability applications might even be thermally cycled and tested at temperature extremes.
It's just that back in Clive's day the semiconductor industry was not designing in tricks to make chips more fault tolerant. If a single transistor was dead, the whole chip was considered trash. Back when chips had tens of thousands of transistors this was perfectly fine, but as they got into the millions of transistors it simply was not possible to keep the yield up. So it started making more financial sense to make the chip slightly larger by adding redundant parts, in order to be able to "fix" otherwise dead chips by blowing OTP fuses that disable the broken parts. Clive simply beat them to the idea by a fair few years.
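For illustration, here is a minimal walking-pattern screen of the kind that catches stuck-at bits like the ones described above. This is a sketch, not any vendor's actual test flow, and the simulated fault and all names are made up.

```python
# Minimal factory-style screen for stuck-at bits: write an alternating
# pattern, read it back, then repeat with the complement. Any bit that
# refuses to take both states is reported. Sketch only, not a real flow.

def find_stuck_bits(read, write, size):
    """Return {addr: mask} of bits that won't change state."""
    bad = {}
    for pattern in (0x55, 0xAA):          # alternating bits, then complement
        for addr in range(size):
            write(addr, pattern)
        for addr in range(size):
            diff = read(addr) ^ pattern   # any mismatched bit is suspect
            if diff:
                bad[addr] = bad.get(addr, 0) | diff
    return bad

# Simulate a tiny RAM whose cell 7 has bit 2 stuck at 1.
mem = [0] * 16
STUCK_ADDR, STUCK_MASK = 7, 0b100
write = lambda a, v: mem.__setitem__(a, (v | STUCK_MASK) if a == STUCK_ADDR else v)
read = lambda a: mem[a]

print(find_stuck_bits(read, write, 16))   # -> {7: 4}
```

A chip failing like this is dead for a 1980s design with no redundancy, but with spare rows and fuses the same die could be repaired, or, Sinclair style, just used around.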
-
The 1975 Sinclair Black Watch (https://en.wikipedia.org/wiki/Black_Watch_(wristwatch)) was an early, huge product failure. The batteries going dead, drifting oscillator, money-back guarantee... a loss of USD$3.4M (£2.6M) in 2019 dollars. OUCH.
I'm not sure if Sinclair just didn't listen to the engineers or pushed them too hard; usually it's that they don't want to poke the bear and tell him flat out it's not ready yet. Dreamers don't worry too much about things not working.
I remember W. Edwards Deming (https://deming.org/inspection-is-too-late-the-quality-good-or-bad-is-already-in-the-product/) teaching "Inspection is too late. The quality, good or bad, is already in the product. As Harold F. Dodge said, “You cannot inspect quality into a product.”
Yep, this one is a good example of bad engineering.
I've heard stories about the development of the ZX80/81/Spectrum and the QL, but not of this watch, so I can't really tell how things were at Sinclair back then. It's probably a mix of Sinclair pushing too hard and engineers unwilling or unable to put their foot down and say "this isn't gonna work". What we don't know is whether Sinclair himself was aware of the limitations BEFORE releasing the product. I frankly can't believe the engineering team WASN'T. Sure, testing alone is not going to guarantee that the product is reliable, but come on. The two main problems, oscillator drift and power consumption, were definitely easy to test, and some basic testing was definitely enough to figure out that the product was not going to meet specs. And of course, it was also easy to figure that out from the design itself, without any kind of testing. So either the engineers LIED, or (more likely) Sinclair's management decided to release the product in spite of its shortcomings (something Sinclair kept being famous for...), hoping the issues would be solved in time, with the market already "hooked", and that they could release fixes later. They kept doing this till their last product (which I believe was the QL, but I'm not sure?). This sounds bad, but hey, don't many companies these days do the same? We could go as far as saying that Sinclair was one of the first companies to come up with the "minimum viable product" concept. Now define "viable", but some recent companies (like startups) DO often release unfinished/unreliable products as "MVPs" that ultimately also make a net loss, so this isn't that different.
Engineers, even these days, know how hard it can be to stand up to a company's management. Sometimes the only way out is to resign. IME, most engineers won't, and will end up doing what they are asked, constantly whining about it. More comfortable than losing your job.
-
I remember Clive's stuff back to the 1960s.
As said by others above, he was always right on the edge of things only just about working. The little radio he did c. 1969 was crap and barely worked at the best of times. The audio modules were also crap, using the cheapest imaginable open carbon trimmers for the level controls.
Then he had the disaster with the watch, which may not have been his fault; it was a failing chip which had a near 100% failure rate, but not right away.
Then the portable TV which was packed with electronics. He got a custom CRT made for it. It didn't go far.
His last adventure was the C5 electric vehicle, assembled by a washing machine company. It was junk too.
But he wasn't bothered. He got his knighthood: Sir Clive.
And everybody thinks he was a genius. Well, he was. I went to one of his presentations, on the design of a digital voltmeter. He decided TTL chips cost too much so he built them with discrete components!
-
But he wasn't bothered. He got his knighthood: Sir Clive.
And Tommy Flowers did not, not even post mortem. Shame!
-
Early, Sinclair did many audio amplifiers (https://planetsinclair.meulie.net/sinclair/audio_advert_gallery.htm) (check out their overboard claims).
For the TR750 (germanium 1964) I read:
"The original transistors were selected Plessey devices rescued from landfill, a Sinclair specialty and consequently unobtainable through the usual suppliers." EPE Jake Rothman June 2015.
I guess he did get ahead by being a cheapskate.