| General > General Technical Chat |
| Clive Sinclair - what a cheap skate! |
| (Page 8/10) |
| SiliconWizard:
--- Quote from: wraper on September 05, 2020, 01:30:53 pm ---Basically all modern DRAM, NAND and camera sensors have faults in them. One in a thousand of chips might have no defects at all. DRAM defects are hidden on the chip level by using spare rows/columns. Or even half of the chip may be disabled and sold as smaller size. NAND defects usually are managed on controller level. Camera defects are dealt with in image processing software. CPUs/GPUs often have parts of them disabled to increase yield and sold as lower tier parts. Some have some part disabled even in the top tier chip. Say PS3 CPU has 8 cores physically but one of them is always disabled to increase yield. --- End quote ---

Yes, we still do that these days. I absolutely don't see what the problem is with what the OP mentioned. As long as the RAM chips were tested GOOD for the intended purpose, who cares? That's almost a detail. The design of the ZX Spectrum could not accommodate 64KBytes of RAM anyway (IIRC), so what's the problem? As mentioned, Sinclair probably got those parts for cheaper than the equivalent capacity in 16KB parts, and with fewer parts in the end.

This is not even a good example of the cheap choices Sinclair kept making. There are tons of others, a lot more problematic for the users, as some would hinder reliability a lot. But sure, Sinclair's motto was to design the cheapest stuff possible. The idea was to give as many people as possible access to new technology that was not accessible before. Personal computers tended to be pretty expensive before Sinclair. Then of course, others followed, some making better stuff for not much more money, but that was just natural competition then. When the Spectrum got released, you needed to shell out at least twice the price to get something remotely close in specs. 
Of course the more expensive stuff you'd get then would usually be more reliable and better finished, but the whole point IMO was to release products with the highest specs-to-price ratio possible. Sinclair's business model was definitely not sustainable, but it was unique at the time, and I believe it kind of started a revolution of its own.

Oh, and by the way, Clive Sinclair himself did not design Sinclair products. He had a bunch of pretty talented engineers, who did what they could given the constraints they were given. Amstrad (which kind of took over) managed to release objectively better stuff for cheap. Sure, the guy himself (Alan Sugar) probably had a bit more industrial experience, but that was also a different era already. Components' prices had already dropped a lot, and the market was a bit more "mature" - meaning the average Joes were starting to see the benefits of personal computers - so you could sell people stuff for more money, but with everything they needed to get started. A ZX Spectrum needed at least a TV set - and a tape recorder. Amstrad products were usable without anything extra to buy. Sinclair did not take this approach because (I think) at the time, most people (at least in the general population) were still not convinced of the usefulness of a personal computer, and thus would buy only if the investment was really low (to minimize the risk, so to speak), even if, in the end, the cost of ownership would usually end up much higher. |
| tooki:
--- Quote from: eti on September 05, 2020, 01:24:03 am ---I've just discovered that Sinclair DELIBERATELY bought in FAULTY RAM stock for the ZX Spectrum, where only half the RAM capacity worked, all to save a few pence... WHAT A CHEAP SKATE! I absolutely couldn't live with myself, knowing I'd skimped on the BOM, just to make more money! Okay, so you save a few pence, but then, in the future, people like me will still be discussing what a skimper you are - UGH! I'd have that thought gnawing away at me, I couldn't let a designed product get into people's hands like that - and I don't care if it worked perfectly or not - YUCK! --- End quote --- Wait till you find out that that's how nearly all semiconductors are made: they're tested and graded. In some cases, like CPUs and memory, it includes testing each core and then disabling the bad ones. So a single-core CPU could be a dual-core CPU with one defective core, or a 4-core is a 6-core with one or two defective cores. In memory, it's bad banks or blocks which get disabled in testing and substituted by the spare blocks that are designed into the product for this purpose. In many semiconductors (including the above), speed grading is also done: only some will manage the highest speeds. So your 2GHz CPU might in "reality" be a "failed" 2.5GHz CPU. There's nothing wrong with this. Do you expect a butcher to discard the whole hog if one leg happens to be scrawny? Of course not. Do you expect a glass company to discard a whole lot of glass because of the parts with bubbles, even though it'll be cut into smaller sheets later, and they can cut around the imperfections? The world would be (even more insanely) wasteful than it is if we didn't do stuff like that. |
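The spare-block substitution tooki describes can be sketched as a tiny remapping table consulted on every access. This is a hypothetical model of the idea, not any real controller's algorithm:

```python
# Sketch: a memory array with a few spare rows. Rows that fail test
# are remapped to spares; the part ships as "good" as long as spares
# remain, otherwise it is rejected (or sold as a smaller part).

class RedundantMemory:
    def __init__(self, rows, spare_rows, defective):
        self.remap = {}          # defective row -> spare row index
        spares = list(range(rows, rows + spare_rows))
        for bad in defective:
            if not spares:
                raise RuntimeError("not enough spares: chip rejected")
            self.remap[bad] = spares.pop(0)
        self.store = {}

    def _row(self, row):
        # Every access goes through the remap table first.
        return self.remap.get(row, row)

    def write(self, row, value):
        self.store[self._row(row)] = value

    def read(self, row):
        return self.store.get(self._row(row))

# Two defective rows out of 8, two spares available: part is salvaged.
mem = RedundantMemory(rows=8, spare_rows=2, defective=[3, 5])
mem.write(3, 0xAB)
print(hex(mem.read(3)))  # prints 0xab - data lands in a spare row, transparently
```

In real DRAM the remapping is done with fuses or antifuses blown at wafer test, so it costs nothing at run time; the software model above just shows the logic.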
| T3sl4co1l:
--- Quote from: tooki on September 06, 2020, 02:52:44 pm ---Wait till you find out that that's how nearly all semiconductors are made: they're tested and graded. In some cases, like CPUs and memory, it includes testing each core and then disabling the bad ones. So a single-core CPU could be a dual-core CPU with one defective core, or a 4-core is a 6-core with one or two defective cores. In memory, it's bad banks or blocks which get disabled in testing and substituted by the spare blocks that are designed into the product for this purpose. In many semiconductors (including the above), speed grading is also done: only some will manage the highest speeds. So your 2GHz CPU might in "reality" be a "failed" 2.5GHz CPU. --- End quote ---

It used to be a huge pain point / point of confusion / joke in the semi industry: "you can't test quality into a product!" Well, when your statistical model shows that sometimes the article works just absolutely perfectly, and sometimes it fails, and when it fails, it fails hard, in very specific, localized ways - well, yes, all you need to do is test to weed out the failed parts, and voila, quality. Early fabs often suffered from truly embarrassing yield rates - under 1% for example. I recall reading this about early Japanese transistor production, I think; Intel and I assume others suffered from similar issues many times through history, as they brought up new fab lines. Everything from Intel's famous first DRAMs, to newer CPUs (I want to say the Pentium was one of them? or is that more just true of any chips run through whatever the new fab process is?). And the principle still applies to natural variation in process parameters, even as tightly controlled as they are. 
Doping levels usually being the biggest variance (aren't they?), affecting everything from voltage rating (hence the multiple grades of TIP31/A/B/C) to gain (hFE grades of 2SC1815 O/Y/GR/BL, and everything (Vpo, gm, Rds(on)) of JFETs), to switching speed (hence clock ratings of CPUs). Why dispose of a part that runs a little slower but is otherwise perfectly serviceable? Or I could equally well ask - why not dispose of the parts that exceed the spec? Surely you feel just as strongly about being oversold as undersold? ;D But overspec can be a problem too: typical example, fast modern epitaxial 2N3055s singing in old circuits that strung them up on wiring harnesses - or should I say resonant tank circuits - yikes!

--- Quote ---There's nothing wrong with this. Do you expect a butcher to discard the whole hog if one leg happens to be scrawny? Of course not. Do you expect a glass company to discard a whole lot of glass because of the parts with bubbles, even though it'll be cut into smaller sheets later, and they can cut around the imperfections? The world would be (even more insanely) wasteful than it is if we didn't do stuff like that. --- End quote ---

Yup, exactly. Actually, on the subject of meat, I wonder if standards could/should be updated to accommodate more kinds of defects. It's my understanding that (at least over here; food laws do vary quite a bit around the world) finding a neoplasm or tumor is grounds to reject the carcass. Well, that might be warranted, but also, what are the chances that the defect is benign? We remove benign tumors from humans all the time, and don't cull them. :P The even less appetizing question also follows: even if it's cancerous to the animal, can it 1. cause illness in humans under any conditions (i.e., if eaten raw, or worse), and 2. does any hazard remain when it's safely cooked? A valid counter-argument is, with how messy the meat industry is over here, it's probably not a good idea to give them this much leeway (i.e. 
to judge whether a defect is benign). A good counter-counter-argument being, well can't we just regulate them like normal countries? But, ah, the USA can't have nice things.. :( (and so I won't go any more political here). Tim |
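Tim's point about grading parts by a measured parameter can be sketched as a binning step after test. The grade windows below are the 2SC1815's published hFE classifications (O/Y/GR/BL); the measurement values themselves are invented:

```python
# Sketch of post-test binning: each measured hFE is sorted into a
# gain grade. Windows overlap, so the highest grade whose window
# contains the measurement wins; out-of-range parts are rejected.

GRADES = [("BL", 350, 700), ("GR", 200, 400), ("Y", 120, 240), ("O", 70, 140)]

def bin_part(hfe):
    for name, lo, hi in GRADES:
        if lo <= hfe <= hi:
            return name
    return "reject"

measurements = [95, 180, 320, 510, 60]
print([(h, bin_part(h)) for h in measurements])
# [(95, 'O'), (180, 'Y'), (320, 'GR'), (510, 'BL'), (60, 'reject')]
```

The same shape applies to speed grading a CPU: measure the highest clock at which it passes, then bin downward into the fastest grade it qualifies for.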
| tggzzz:
--- Quote from: T3sl4co1l on September 06, 2020, 06:29:12 pm --- --- Quote from: tooki on September 06, 2020, 02:52:44 pm ---Wait till you find out that that's how nearly all semiconductors are made: they're tested and graded. In some cases, like CPUs and memory, it includes testing each core and then disabling the bad ones. So a single-core CPU could be a dual-core CPU with one defective core, or a 4-core is a 6-core with one or two defective cores. In memory, it's bad banks or blocks which get disabled in testing and substituted by the spare blocks that are designed into the product for this purpose. In many semiconductors (including the above), speed grading is also done: only some will manage the highest speeds. So your 2GHz CPU might in "reality" be a "failed" 2.5GHz CPU. There's nothing wrong with this. Do you expect a butcher to discard the whole hog if one leg happens to be scrawny? Of course not. Do you expect a glass company to discard a whole lot of glass because of the parts with bubbles, even though it'll be cut into smaller sheets later, and they can cut around the imperfections? The world would be (even more insanely) wasteful than it is if we didn't do stuff like that. --- End quote --- It used to be a huge pain point / point of confusion / joke in the semi industry: "you can't test quality into a product!" --- End quote --- That used to be a widespread engineering aphorism, not just hardware/software. Software weenies cannot believe the aphorism, because they are taught that test driven development is sufficient, and that if the "traffic light indicator" is green then the product works. Two concepts that have never entered their consciousness... Tests cannot show the absence of faults, and if your tests are crap (which they usually are) then the green light means very little. Yes, I exaggerate, but not too much. |
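tggzzz's two points - tests cannot show the absence of faults, and a green light means little if the tests are weak - fit in a few lines. A toy example of my own (not from the thread):

```python
# A "tested" function whose whole suite passes, yet it is still wrong:
# the tests simply never probe a failing input.

def is_leap(year):
    # Bug: forgets the century exceptions of the Gregorian calendar.
    return year % 4 == 0

tests = {2020: True, 2021: False, 2024: True}
assert all(is_leap(y) == want for y, want in tests.items())  # green light!

print(is_leap(1900))  # True - but 1900 was not a leap year
```

The suite is green, the product is broken; quality was never in it to begin with, and the tests couldn't put it there.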
| T3sl4co1l:
There are plenty of stories of that in the hardware domain; a combination of insufficient (or an unanticipated need for) testing, and the application of Hyrum's law (over time, given enough users, the implementation becomes the interface). An example that comes to mind - I think it was a Pease article? Delco was abusing their regulators, running them just on the bleeding edge of operation in their radios. Because, you know, can't be spending precious cents on heatsinks. They were thermally cycling, and blowing up. They weren't expected to operate that way; it's a protective measure, not an operating mode. But, being the big ugly customer that they are, they got the testing and process improvements, which makes for a better part in the end - but it's rather unsatisfying to see the abusers win, y'know?

Now, in the wider sphere of engineering, or materials science, or whatever - you can test what you can measure, but if you can't measure it, you obviously aren't going to get any quality out of it. That's one thing you can't "test into" a product. I suppose microcracks in metal parts would be such an example: a thick enough part can't even be x-rayed. Parts can't be stress-tested if doing so spends precious fatigue life (most things aren't nearly so critical, but rocket engine parts might be an example?). That also struck early metal-can transistors, where the bondwires, and weld spatter inside the metal can, could fatigue during extensive vibration tests - indeed, receiving testing was testing quality out of them. Maybe impurities in bulk chemicals? Chemical tests are lengthy and expensive (and a lot of mechanical tests too), so you're only going to test for common impurities. So you often have situations where it's not so much that you can't test some things, but it's a very real question of economy: how much time and money will be spent testing, versus how much the material costs, and what revenue the finished product will generate. 
And implicit in that testing is: how much incoming material, or outgoing production, will you reject - how much production time and handling labor will be wasted - when the tests fail? That's where you "can't test quality" into your process; you need a different method. Engage suppliers in a more involved process: auditing, random inspections, 3rd party testing, etc. They may raise their prices in response, but when that's less than the above cost, there you go. Engage just as much with your labor suppliers, i.e., employees. Make sure they're happy, comfortable, have all the tools and procedures in place to produce quality parts, and that their managers are doing the same. Which, heh - as for the converse, I can just imagine what a living hell that would be. Testing quality into a labor force? I'm not sure exactly what all that would entail, but it sounds as awful as any low-wage megacorp is. Tim |
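The economic tradeoff Tim describes reduces to comparing expected per-unit costs with and without a screen in place. Every figure below is invented purely for illustration:

```python
# Back-of-envelope sketch: testing is worth it when the cost of the
# test plus the scrap it rejects is less than the expected cost of
# shipping escapes that fail in the field.

def expected_cost(test_cost, defect_rate, catch_rate,
                  field_failure_cost, unit_cost):
    caught = defect_rate * catch_rate           # scrapped at the factory
    escaped = defect_rate * (1 - catch_rate)    # fails at the customer
    return test_cost + caught * unit_cost + escaped * field_failure_cost

no_test   = expected_cost(0.00, 0.02, 0.0, 50.0, 2.0)   # every defect escapes
with_test = expected_cost(0.10, 0.02, 0.95, 50.0, 2.0)  # 95% of defects caught

print(round(no_test, 3), round(with_test, 3))  # testing wins: 0.188 < 1.0
```

Raising `catch_rate` has diminishing returns against its cost, which is exactly why the "different method" (supplier engagement, process control) beats piling on more outgoing test at some point.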