Author Topic: Oscilloscope memory, type and why so small?  (Read 29346 times)


Offline capsicumTopic starter

  • Contributor
  • Posts: 34
  • Country: us
Oscilloscope memory, type and why so small?
« on: March 03, 2017, 04:29:55 pm »
I am in the market for my first o-scope and noticed the very low amount of memory on many models. I'd expect that on a 20- or 30-year-old item (RAM for my 80386 PC was $100 per MB, and that was with a friend-of-the-distributor discount), but I don't expect this on current equipment.
What is so special about the type of memory used in scopes that prevents 'plenty' being standard on all models? Why am I seeing only a few kilo-sample points on any scope (even the lowest end)? :wtf:

Non-volatile flash is under $1 per gigabyte, even with packaging and bus interface circuitry [USB, SATA, CF, SD...].
Dynamic DDR and GDDR RAM is under $10 per gibibyte (the super-fast gamer stuff or ECC might retail for $20/GiB).
Magnetic disk storage is basically $50, mostly the cost of the enclosure, for up to a terabyte with a 32-128 MiB cache/buffer.
Even a $100 bottom-end smartphone has over a GiB of RAM and 4 GiB of flash.

Setting aside the fixed costs of wafer and packaging, the variable cost of on-chip memory is at or under one US cent [$0.01] per MiB.
I don't get it ??? - this seems like a massive marketing and management fail, but it spans so many companies that there must be something special going on that I am missing in my superficial overview. (...or it's just one of those well-established old-timer industries that simply enjoys living a couple of decades behind, and everyone is so accustomed to the status quo that they get away with it.)
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilloscope memory, type and why so small?
« Reply #1 on: March 03, 2017, 04:57:00 pm »
I don't get it ??? - this seems like a massive marketing and management fail, but it spans so many companies that there must be something special going on that I am missing in my superficial overview. (...or it's just one of those well-established old-timer industries that simply enjoys living a couple of decades behind, and everyone is so accustomed to the status quo that they get away with it.)

;) The high end seems to be relatively slow-evolving, so it's handicapped in memory/processing power. The low end evolves relatively fast, and they keep it artificially crippled so as not to make the high end look silly. Chinese companies float somewhere in the middle and probably cannot fully decide on a strategy; a lot of the ideology is copied from the "big names". Currently only PicoTech seems to somewhat ignore the "game rules" with their USB scopes. Just got this one, a 2408B, sub-1000€ with 128 MB:
https://www.eevblog.com/forum/testgear/picoscope-2000/
From initial tests it seems the full 128 MB is probably available only in streaming mode; I do not fully understand this aspect yet. But 100 MB shared between channels in RTS mode is still fairly OK... On the ~6000€ model they offer 2 GB.
Also, actually doing something with this amount of memory is extremely demanding processing-wise. Pico can do it because they use dirt-cheap processing power in the PC... But I seem to remember the "big names" actually masking what is essentially a PC+ADC as a standalone unit... Perhaps some "boat anchor" owners can comment on that?

« Last Edit: March 03, 2017, 05:01:34 pm by MrW0lf »
 

Online mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13786
  • Country: gb
    • Mike's Electric Stuff
Re: Oscilloscope memory, type and why so small?
« Reply #2 on: March 03, 2017, 05:01:40 pm »

What is so special about the type of memory used in scopes that prevents 'plenty' being standard on all models? Why am I seeing only a few kilo-sample points on any scope (even the lowest end)? :wtf:

Scope memory has different requirements; in particular, it needs to be written to continuously, without the gaps or pauses that occur with PC-type memory, which is more burst-oriented. If you want to write just one byte to PC memory it's actually pretty slow, but you can write a burst of many bytes in only slightly more time, so average throughput is high.
And once you add intensity display it gets even more complicated.
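To see why burst orientation matters, here is a rough Python sketch. The latency and clock figures are my own illustrative guesses, not from any datasheet: a single-byte write pays the full row-access latency, while a long burst amortises it.

```python
# Illustrative model: every DRAM access pays a fixed latency up front,
# then streams bytes at a steady per-byte cost.

def effective_rate(bytes_per_access, latency_cycles, cycles_per_byte, clock_hz):
    """Average bytes/s when each access pays a fixed setup latency."""
    cycles = latency_cycles + bytes_per_access * cycles_per_byte
    return bytes_per_access * clock_hz / cycles

CLOCK = 400e6          # assumed 400 MHz memory clock
LATENCY = 15           # assumed ~15 cycles to open a row

single = effective_rate(1, LATENCY, 0.5, CLOCK)    # one byte per access
burst = effective_rate(64, LATENCY, 0.5, CLOCK)    # 64-byte burst

print(f"single-byte: {single/1e6:.0f} MB/s, 64-byte burst: {burst/1e6:.0f} MB/s")
```

With these made-up numbers the burst is over 20x faster on average, which is the property PCs exploit and a gap-free scope acquisition cannot.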
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16677
  • Country: us
  • DavidH
Re: Oscilloscope memory, type and why so small?
« Reply #3 on: March 03, 2017, 05:05:14 pm »
What DSOs were you considering?

1. In the common scenario where you are observing a real time signal, a record length significantly in excess of the screen resolution is superfluous.
2. Processing an extra long record to produce a real time display record takes extra computing power for little gain.
3. The real time requirements of memory used for the acquisition record make it special and not just common memory.

There are some applications which absolutely require a long record length but record length is largely advantageous only for marketing.
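As a sketch of points 1-2: one common way (illustrative only, not any vendor's actual algorithm) to collapse a long acquisition record into a display record is min/max decimation per screen column.

```python
# Min/max decimation: each screen column shows the extremes of its slice,
# so even a 1 Mpt record reduces to the screen's pixel width.
import random

def decimate_minmax(record, columns):
    """Collapse a long record to per-column (min, max) pairs for display."""
    n = len(record) // columns
    return [(min(record[i*n:(i+1)*n]), max(record[i*n:(i+1)*n]))
            for i in range(columns)]

record = [random.gauss(0, 1) for _ in range(1_000_000)]  # 1 Mpt capture
display = decimate_minmax(record, 800)                   # 800-pixel screen
print(len(display))  # 800 columns, regardless of record length
```

This is the sense in which extra record length buys nothing for a live display: the screen only ever shows 800 columns' worth.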
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilloscope memory, type and why so small?
« Reply #4 on: March 03, 2017, 05:11:18 pm »
There are some applications which absolutely require a long record length but record length is largely advantageous only for marketing.

With a large record you can simulate a multi-timebase scope, because you have full resolution over an extremely wide timebase range. Not a commonly understood concept in the low-end market.
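To put numbers on that (my own illustrative figures: a 140 Mpt capture at 1 GS/s, 800-pixel screen): one deep capture can be re-rendered at any slower timebase without re-arming the scope.

```python
# Zoom range available from a single deep capture.
record_pts = 140_000_000      # e.g. a 140 Mpt capture
sample_rate = 1e9             # at 1 GS/s

window = record_pts / sample_rate     # total time captured
fastest_view = 800 / sample_rate      # 800 px shown at full resolution

print(f"capture spans {window*1e3:.0f} ms; zoom from "
      f"{fastest_view*1e9:.0f} ns/screen up to {window*1e3:.0f} ms/screen")
```

That is roughly five decades of timebase from one trigger, all at the full sample rate.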
 

Offline ZomBiE80

  • Contributor
  • Posts: 39
  • Country: fi
Re: Oscilloscope memory, type and why so small?
« Reply #5 on: March 03, 2017, 05:36:35 pm »
Think of acquisition RAM in scopes like the cache in a PC CPU: it is SRAM, and therefore an expensive type of memory. http://www.testandmeasurementtips.com/memory-depth-and-sampling-rate-in-oscilloscopes/
« Last Edit: March 03, 2017, 05:38:48 pm by ZomBiE80 »
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14315
  • Country: de
Re: Oscilloscope memory, type and why so small?
« Reply #6 on: March 03, 2017, 05:58:07 pm »
Fast scopes need fast memory, and this type of memory is expensive. Only a relatively slow scope (e.g. 200 MSPS) could use more or less standard DRAM. And who needs 1 GB of memory on a 20 MHz scope? At those speeds it is all about lowest cost, so even 1 EUR of saved cost might be worth it.
It also only makes sense to have a lot of memory if the processing power is there to really use it - otherwise you end up with crazy things like gigabytes of memory that are useful only at low speed.

Fast scopes need super-fast memory, like the FPGA-internal kind - and this is expensive, as FPGAs are usually not made as memory in the first place.

In addition, DSOs are not redesigned every year like PCs, so you can't expect the newest memory technology. Modern external memory also needs a fast and complicated board design - not just the chips. Going to a wide bus, as with graphics cards, makes the boards rather tricky.
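A back-of-envelope sketch (my numbers, purely illustrative) of the sustained write bandwidth the acquisition memory must absorb at different scope classes:

```python
# Sustained write bandwidth = sample rate * sample size * channel count.
def write_bw(sample_rate_sps, bits_per_sample, channels):
    return sample_rate_sps * bits_per_sample / 8 * channels

slow = write_bw(200e6, 8, 2)   # 200 MS/s, 2 ch: ordinary DRAM territory
fast = write_bw(5e9, 8, 4)     # 5 GS/s, 4 ch: needs wide and fast memory

print(f"{slow/1e6:.0f} MB/s vs {fast/1e9:.0f} GB/s")
```

The slow case fits comfortably in a single commodity DRAM channel; the fast one already demands a wide bus or on-chip memory, and it must be sustained with no pauses at all.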

The times of really small memory are largely over - small was something like 2000 samples. I remember paying significantly extra for 4 MBytes of memory.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27048
  • Country: nl
    • NCT Developments
Re: Oscilloscope memory, type and why so small?
« Reply #7 on: March 03, 2017, 06:17:53 pm »
Fast scopes need fast memory, and this type of memory is expensive. Only a relatively slow scope (e.g. 200 MSPS) could use more or less standard DRAM. And who needs 1 GB of memory on a 20 MHz scope? At those speeds it is all about lowest cost, so even 1 EUR of saved cost might be worth it.
It also only makes sense to have a lot of memory if the processing power is there to really use it - otherwise you end up with crazy things like gigabytes of memory that are useful only at low speed.

Fast scopes need super-fast memory, like the FPGA-internal kind - and this is expensive, as FPGAs are usually not made as memory in the first place.
Wrong! The internal memory in a typical FPGA is way slower than DDR3 memory of the same width. With DDR3 memory on the Xilinx Zynq's memory controller you can get a sustained throughput of over 7 GByte/s. If you add your own DDR3 controller (not so difficult on a modern FPGA) then the sky is the limit - you just make the data bus wider.
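For what it's worth, the quoted figure checks out on paper. A quick sketch (my arithmetic, and my assumption of a DDR3-1866 part on a 32-bit controller - the post doesn't say which configuration was used):

```python
# Peak DRAM bandwidth = transfer rate * bus width in bytes.
transfer_rate = 1866e6   # transfers/s for an assumed DDR3-1866 part
bus_bytes = 4            # assumed 32-bit memory port

peak = transfer_rate * bus_bytes
print(f"peak {peak/1e9:.1f} GB/s")  # consistent with the >7 GB/s claim
```

Sustained throughput is always somewhat below peak (refresh, row changes), so "over 7 GByte/s sustained" implies a very efficient, mostly sequential access pattern.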

I'm using deep memory all the time: capture a long trace and then analyse. The deep memory allows you to look at both the big picture and things like timing details. Also, when looking for rare bugs, deep memory helps to capture as many segments as possible with great detail.
« Last Edit: March 03, 2017, 06:21:00 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 28535
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Oscilloscope memory, type and why so small?
« Reply #8 on: March 03, 2017, 07:38:33 pm »
I am in the market for my first o-scope and noticed the very low amount of memory on many models......

I don't get it ??? - this seems like a massive marketing and management fail, but it spans so many companies that there must be something special going on that I am missing in my superficial overview. (...or it's just one of those well-established old-timer industries that simply enjoys living a couple of decades behind, and everyone is so accustomed to the status quo that they get away with it.)
There is some of that historically, with competitors just out-speccing older models by a few kpts. 20+ years ago most DSOs had small memory depth, but since then the power of a DSO has become more highly valued and there have been significant increases in memory depth; today ~10+ Mpts/ch is the norm.
Smaller memories are still quite useful, but limited memory constrains the range of features that can be included.

Wolfie mentions the Pico, but he's probably unaware of the SDS2000X series with 140 Mpts as standard.
Avid Rabid Hobbyist.   Come visit us at EMEX Stand #1001 https://www.emex.co.nz/
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Offline tmbinc

  • Frequent Contributor
  • **
  • Posts: 250
Re: Oscilloscope memory, type and why so small?
« Reply #9 on: March 03, 2017, 09:15:25 pm »
Really, for waveform storage there is no reason not to use standard memory; you just need enough of it (i.e. enough memory width / memory channels). A GeForce GTX 1080 has a memory bandwidth of 320 GByte/s using GDDR5X. It's not dual-port, so for a fast waveform update rate (not limited by waveform memory) you would need 2x the bandwidth. With 320 GByte/s of bandwidth, that should be able to satisfy a 4-channel 40 GS/s (8-bit) scope. And this is fully consumer tech.
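The arithmetic behind that claim, as a quick sanity check:

```python
# Raw write rate of a 4-channel, 40 GS/s, 8-bit front end,
# doubled to leave equal bandwidth for reading the data back out.
channels, rate_sps, bytes_per_sample = 4, 40e9, 1

write_bw = channels * rate_sps * bytes_per_sample   # bytes/s coming in
total_bw = 2 * write_bw                             # write + read

print(f"{write_bw/1e9:.0f} GB/s in, {total_bw/1e9:.0f} GB/s total "
      f"vs 320 GB/s available")
```

So a single consumer GPU memory subsystem exactly covers the write half of such a front end, which is the point being made.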

The bigger issue is the framebuffer/histogram. Because you essentially need random access, paged memory (DRAM) doesn't work very well. This is where fast on-chip memory (either FPGA or ASIC) comes into play, or external SRAM.

As a reference, the Tektronix DPO/MSO/MDO4xxx/5xxx-Series uses standard DDR2 memory.

A GPU with embedded SRAM would actually be a great basis for a high-performance scope....
 
The following users thanked this post: capsicum

Offline ebastler

  • Super Contributor
  • ***
  • Posts: 6600
  • Country: de
Re: Oscilloscope memory, type and why so small?
« Reply #10 on: March 03, 2017, 09:25:40 pm »
[...] Wolfie mentions the Pico, but he's probably unaware of the SDS2000X series with 140 Mpts as standard.

That's just because Wolfie always mentions the Pico, no matter what the thread is about.  :P
Sorry, could not resist...  ;)
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27048
  • Country: nl
    • NCT Developments
Re: Oscilloscope memory, type and why so small?
« Reply #11 on: March 03, 2017, 09:35:01 pm »
Either way, the Pico is able to do something useful with the memory, and AFAIK you can get Pico scopes with much more memory. I wouldn't want to use a Picoscope as a generic R&D scope though. I have used one a couple of times and it is just too cumbersome for that purpose. However, if you are into analysing signals (prototyping signal processing and audio) it will probably be better at it, especially if you get one with more than 8-bit ADCs (real bits, not oversampled in a half-assed way).
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline MrW0lf

  • Frequent Contributor
  • **
  • Posts: 922
  • Country: ee
    • lab!fyi
Re: Oscilloscope memory, type and why so small?
« Reply #12 on: March 03, 2017, 10:07:42 pm »
That's just because Wolfie always mentions the Pico, no matter what the thread is about.  :P
Sorry, could not resist...  ;)

So?  :-// Is this a Rigol/Keysight-only forum or something? Many other brands get mentioned in almost any thread by their fanbase. Before getting the 2408B I analyzed Pico's stuff quite heavily to get my choice right this time, and noticed many interesting things you cannot usually find in the hobby market. Why can't I share that? Apart from this, I recently swapped my Rigol for an Analog Discovery 2. I ran a 32768 Hz test on the AD2 and said a couple of good words about it too:
https://www.eevblog.com/forum/testgear/testing-dso-auto-measurements-accuracy-across-timebases/msg1145910/#msg1145910
The AD2 directly competes with Pico's low end, btw. I also have a very positive attitude towards GWI or any other company trying their best to provide a good product for the educated consumer. As for the SDS2000, I somehow thought of it as a middle-range scope - but I just looked at the offers and indeed, at these almost-dumping prices it competes directly with the low end - so not bad; why not, if you're keen on knobbed scopes.
Personally I'm totally sold on the math-channels concept, and this is best done on a PC-based system. Will publish something soon that will up the game from the math channel ADAC  :-+
« Last Edit: March 03, 2017, 10:09:31 pm by MrW0lf »
 

Offline ebastler

  • Super Contributor
  • ***
  • Posts: 6600
  • Country: de
Re: Oscilloscope memory, type and why so small?
« Reply #13 on: March 03, 2017, 10:37:33 pm »
@MrWolf -- that's alright, of course; I was just teasing you. My apologies!

The knobbed-scope vs. math-box point you mention is probably one of the most fundamental matters of preference. Personally I like knobs on a scope, and hence tend to skip over any PC/USB-based stuff. But no doubt the PC-based devices have their place and advantages! Heck, even I am considering one when it comes to logic analyzers... So, to each their own - peace!  ;)
 

Offline capsicumTopic starter

  • Contributor
  • Posts: 34
  • Country: us
Re: Oscilloscope memory, type and why so small?
« Reply #14 on: March 03, 2017, 10:39:36 pm »
I certainly was thinking more of waveform storage, or capturing a record for review; I know the screen doesn't require many more points than its pixel width. Extra processing overhead is a non-issue in products priced in the hundreds or thousands of dollars - any semi-modern processor can handle gobs of extra data points; this isn't 1988. Also, "real time" displays are not actually true analog real time, and they don't need to be: the human viewer can only take it in so fast, as long as the proper delay is set for synchronizing input channels at the output. On the data-processing side, PC-style DDR memory (any generation) is fast enough for a billion gamers around the world who need screen resolutions and input-to-output delays far tighter than any scope operator.

However, I can see the ADC acquisition speed for higher-bandwidth scopes being a limiting factor - GHz range or higher, anyway.
Now, I'm not actually suggesting that a scope can or should directly use a PC-style memory bus (though see the last paragraph); low-cost DDR memory can, however, be packaged with or even masked right onto an ASIC.
Also, ASICs aren't stuck with one giant production run every 10 years (that would be a foolish bit of management, due to the hidden holding costs, not to mention market shifts); as such, a point revision adding more memory as memory becomes cheaper can be made for each run. But the memory doesn't actually need to be part of the acquisition ASIC: the ASIC only needs to control and buffer the sample points going to the main memory, and the memory can be a second die in the same package, remaining more flexible in revisioning while maintaining a reliable and short signal path. Then a proper low-cost, high-production CPU/GPU can do the grunt work of analyzing and displaying the data stored in memory at a more leisurely pace.

A comparison (yes, these are high-production parts, but these are also retail prices), from 8-10 years ago - the same era as designs currently filtered down to $500 scopes:
XBOX 360: $300. AMD 65nm Phenom CPUs - with a built-in memory controller, L2 and L3 caches of several megabytes, and 3 GHz of 64-bit-wide computation - were retailing at about $100, with mainboards (including all those pesky memory buses and a substantial onboard GPU) again $100 retail; PC3-packaged DDR3 at the time was around $30-40/GiB. Currently that price bracket will fetch an AMD APU (CPU and GPU on one die) with even better performance and efficiency and much lower demands on the supporting mainboard chipset.

Old PC2 (240-pin packaged DDR2, again 10 years old) routinely runs at 800 MT/s across a 64-bit width, double data rate (16 bytes per memory clock cycle, gross hardware bandwidth of 6400 MB/s), with read/write bandwidths (benchmarked net data in actual general-use computers) of 2200-3000 MByte/s per memory channel and latencies of 4-15 cycles.
GDDR5 generally runs between 3.6 and 4.5 Gbit/s per pin. Newer GDDR5X is around 10-12 Gbit/s per pin.
The PlayStation 4, at $255, has a total of 8 GB of GDDR5 at 176 GB/s (CK 1.375 GHz and WCK 2.75 GHz) as combined system and graphics RAM, for use with its AMD system-on-a-chip comprising 8 Jaguar cores, 1152 GCN shader processors, and AMD TrueAudio.
:o 8 GB at 176 GB/s in a $250 box! :o Now, I don't expect a $300 scope to have this much, as scopes are lower-production and have significant additional hardware, but it is clear that memory cost is a fairly trivial part of the design.
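To translate those memory depths into capture time, a quick sketch (my arithmetic; the 1 GS/s rate is just an example):

```python
# Seconds of signal held at full resolution = depth / sample rate.
def capture_time(depth_samples, rate_sps):
    return depth_samples / rate_sps

for depth in (2_000, 24_000_000, 8 * 1024**3):   # 2 kpt, 24 Mpt, 8 GB
    t = capture_time(depth, 1e9)                 # at 1 GS/s
    print(f"{depth:>13,} samples -> {t*1e3:.3f} ms")
```

A 2 kpt scope holds 2 µs of a 1 GS/s stream; 24 Mpts holds 24 ms; a PS4-sized 8 GB would hold over 8 seconds.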
 

Offline capsicumTopic starter

  • Contributor
  • Posts: 34
  • Country: us
Re: Oscilloscope memory, type and why so small?
« Reply #15 on: March 03, 2017, 10:44:01 pm »
Really, for waveform storage there is no reason not to use standard memory; you just need enough of it (i.e. enough memory width / memory channels). A GeForce GTX 1080 has a memory bandwidth of 320 GByte/s using GDDR5X. It's not dual-port, so for a fast waveform update rate (not limited by waveform memory) you would need 2x the bandwidth. With 320 GByte/s of bandwidth, that should be able to satisfy a 4-channel 40 GS/s (8-bit) scope. And this is fully consumer tech.

The bigger issue is the framebuffer/histogram. Because you essentially need random access, paged memory (DRAM) doesn't work very well. This is where fast on-chip memory (either FPGA or ASIC) comes into play, or external SRAM.

As a reference, the Tektronix DPO/MSO/MDO4xxx/5xxx-Series uses standard DDR2 memory.

A GPU with embedded SRAM would actually be a great basis for a high-performance scope....

Isn't paging an operating system level construct?
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27048
  • Country: nl
    • NCT Developments
Re: Oscilloscope memory, type and why so small?
« Reply #16 on: March 03, 2017, 10:49:13 pm »
Personally I'm totally sold on the math-channels concept, and this is best done on a PC-based system. Will publish something soon that will up the game from the math channel ADAC  :-+
AFAIK LeCroy scopes (many of their models are PC-based) are more geared towards signal analysis than trying to emulate old technology (an analog scope).

@capsicum: IMHO FPGAs have overtaken the abilities of ASICs when it comes to oscilloscopes. In the last decade FPGAs have been highly optimised to deal with high-speed digital interfaces. For example, the Xilinx Spartan-6 has a SERDES (serial-to-parallel / parallel-to-serial converter) with real-time timing adjustment on every I/O pin and dedicated SERDES PLLs. With this, an FPGA can deal with huge amounts of data by leveraging what an FPGA is good at: doing things in parallel with wide buses.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16677
  • Country: us
  • DavidH
Re: Oscilloscope memory, type and why so small?
« Reply #17 on: March 03, 2017, 11:20:34 pm »
There are some applications which absolutely require a long record length but record length is largely advantageous only for marketing.

With a large record you can simulate a multi-timebase scope, because you have full resolution over an extremely wide timebase range. Not a commonly understood concept in the low-end market.

It is also not a commonly understood concept in the high-end market. When I was evaluating the Tektronix MSO5000 series, I asked the application engineers how to do this, and together we could not figure it out - although we did manage to crash the oscilloscope a few times.
 

Offline RGB255_0_0

  • Frequent Contributor
  • **
  • Posts: 772
  • Country: gb
Re: Oscilloscope memory, type and why so small?
« Reply #18 on: March 03, 2017, 11:52:47 pm »
OK, so DDR isn't so great in a reasonable scope. What about HBM(2), as used on the Pascal compute cards? Would that be viable?
Your toaster just set fire to an African child over TCP.
 

Offline tmbinc

  • Frequent Contributor
  • **
  • Posts: 250
Re: Oscilloscope memory, type and why so small?
« Reply #19 on: March 04, 2017, 12:25:58 am »
Isn't paging an operating system level construct?
The other paging :).

(DRAM is accessed by loading one RAM "row" at a time, then allowing multiple "columns" to be accessed quickly. A "row" can, more generically, be described as a "page"; hence the term "Fast Page Mode DRAM", if anyone remembers...)

DRAM works great with caches, since caches (assuming write-back) have a minimum fetch size of the cache-line size, hiding most of the latency if you do clever prefetching.

However, with a "framebuffer[GetSample()][x++] += 1;"-style construct (logically speaking, of course), you end up with roughly one read-modify-write every few hundred bytes apart, which is the most terrible access pattern you can have. A cache isn't going to help you. For GPUs, memory is accessed in "tiles" to optimize for typical access patterns, but this doesn't work predictably for random samples.
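A toy simulation (mine, purely illustrative) of why that access pattern hurts: DRAM serves an access quickly only when it hits the currently open row, and histogram-style updates almost never do, while streaming writes almost always do.

```python
# Compare DRAM row-buffer hit rates for streaming vs. scattered access.
import random

def row_hit_rate(row_sequence):
    """Fraction of accesses that land in the already-open DRAM row."""
    hits, open_row = 0, None
    for row in row_sequence:
        if row == open_row:
            hits += 1
        open_row = row
    return hits / len(row_sequence)

ROWS = 1024
N = 100_000
sequential = [i // 512 for i in range(N)]               # streaming capture writes
scattered = [random.randrange(ROWS) for _ in range(N)]  # histogram bin updates

print(f"streaming: {row_hit_rate(sequential):.3f}, "
      f"histogram-style: {row_hit_rate(scattered):.3f}")
```

The streaming pattern hits the open row almost every time; the scattered pattern pays the full row-open penalty on nearly every access, which is why on-chip SRAM wins for the display histogram.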
 
The following users thanked this post: capsicum

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7432
  • Country: nl
  • Current job: ATEX product design
Re: Oscilloscope memory, type and why so small?
« Reply #20 on: March 04, 2017, 12:38:40 am »
Any memory where you need to deal with things like RAS latency is not usable for this. You need high throughput at a constant rate - like the L3 cache in a PC. You have some 20 MB of cache in a CPU; that would be 20 million samples on a scope.
But this probably explains it well:
http://www.eetimes.com/document.asp?doc_id=1279463
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4551
  • Country: au
    • send complaints here
Re: Oscilloscope memory, type and why so small?
« Reply #21 on: March 04, 2017, 12:54:41 am »
Extra processing overhead is a non-issue in products priced in the hundreds or thousands of dollars - any semi-modern processor can handle gobs of extra data points; this isn't 1988. Also, "real time" displays are not actually true analog real time, and they don't need to be: the human viewer can only take it in so fast, as long as the proper delay is set for synchronizing input channels at the output. On the data-processing side, PC-style DDR memory (any generation) is fast enough for a billion gamers around the world who need screen resolutions and input-to-output delays far tighter than any scope operator.
You've come to this without knowing any of the underlying limitations; it's been discussed over and over. You can go one of two ways:

Stream to memory and then display it to the screen as fast as you can, as in the LeCroy products using their branded X-Stream design. This uses simple FIFOs to store the data and scales well to large memory depths on high-latency memory, but can have gaps in the capture and slow screen update rates.

or

Accumulate the data as fast as possible into a histogram and display this as a "realtime" display. This is limited by memory latency and falls apart when you try to scale it up, but it can offer less dead time and more throughput to the screen (displayed wfm/s), as in the Keysight-branded MegaZoom design.

Those are the two extremes of memory management, and both those brands and the other competitors offer a wide range of products in between. The approaches are complementary, and neither is ideal for all uses. But the main problem with deep memory has been what to do with it; better tools are arriving all the time to go through long captures and extract/mark/search the "interesting" features, but it has remained a niche use case, and people get on with working using the tools they have available. Recall that scopes for a long time had thousands of points of memory and work still got done; the need for very long captures is unusual, so there is not enough market demand for it to be universal.
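A minimal sketch (my simplification, not either vendor's actual code) of the two extremes described above:

```python
# Two caricature capture architectures:
# stream-to-deep-memory vs. accumulate-into-a-display-histogram.
from collections import deque

def stream_capture(samples, depth):
    """X-Stream-style: keep the last `depth` samples, analyse afterwards."""
    buf = deque(maxlen=depth)
    buf.extend(samples)
    return list(buf)

def histogram_capture(samples, levels):
    """MegaZoom-style: fold samples straight into per-level hit counts."""
    hist = [0] * levels
    for s in samples:
        hist[s] += 1      # random-access read-modify-write per sample
    return hist

samples = [i % 4 for i in range(1000)]
print(len(stream_capture(samples, 100)), sum(histogram_capture(samples, 4)))
```

The streaming path is a pure sequential FIFO (DRAM-friendly, memory-depth limited); the histogram path touches a random location per sample (latency-limited, but nothing is deferred to post-processing).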
 

Offline capsicumTopic starter

  • Contributor
  • Posts: 34
  • Country: us
Re: Oscilloscope memory, type and why so small?
« Reply #22 on: March 06, 2017, 10:20:55 pm »
Any memory where you need to deal with things like RAS latency is not usable for this. You need high throughput at a constant rate - like the L3 cache in a PC. You have some 20 MB of cache in a CPU; that would be 20 million samples on a scope.
But this probably explains it well:
http://www.eetimes.com/document.asp?doc_id=1279463

An L3-type cache (possibly dumped to a dynamic RAM, similar to the unlimited capture of USB scopes) is primarily what I have been on about. (I am sure I was not clear enough in my posts.)
 
In my original comment I mentioned "plenty" of memory; my intent was a few megabytes, not gigabytes, in contrast to many scopes still marketed new with only a few kilobytes.

Enough to capture a few Ethernet jumbo frames with reasonable resolution (about 200-400 kpts per segment per channel; 5-10 points per wave period; 37k cycles (73k bits, 9100 bytes); 4 duplex channels for 1G Ethernet, differential signaling type).
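For what it's worth, the figures above can be re-derived like this (same assumptions as in the post: a 9100-byte frame, two bits per signal period, 5-10 points per period):

```python
# Points needed to capture one Ethernet jumbo frame per channel.
frame_bytes = 9100
bits = frame_bytes * 8          # ~72,800 bits on the wire
cycles = bits / 2               # ~36,400 periods, assuming 2 bits/period

for ppc in (5, 10):
    print(f"{ppc} pts/period -> {cycles * ppc / 1e3:.0f} kpts per channel")
```

That lands in the ~180-370 kpt range, matching the "about 200-400k per segment per channel" estimate.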

You've come to this without knowing any of the underlying limitations; it's been discussed over and over. You can go one of two ways:

Stream to memory and then display it to the screen as fast as you can, as in the LeCroy products using their branded X-Stream design. This uses simple FIFOs to store the data and scales well to large memory depths on high-latency memory, but can have gaps in the capture and slow screen update rates.

or

Accumulate the data as fast as possible into a histogram and display this as a "realtime" display. This is limited by memory latency and falls apart when you try to scale it up, but it can offer less dead time and more throughput to the screen (displayed wfm/s), as in the Keysight-branded MegaZoom design.
I actually do understand quite a few of the underlying limitations of memory; don't be so arrogant. This thread is more about how those limitations might prevent scopes from having plenty of memory.

The LeCroy method, it seems, is exactly what I was suggesting in one post - thank you for bringing it to my attention.

Recall that scopes for a long time had thousands of points of memory and work still got done, the need for very long captures is unusual so there is not enough market demand for it to be universal.
And slide rules created the early space program, farms plowed with horses, and I could access most of the internet with my 28.8k modem. Sure, they got the job done, after a fashion, because that was all there was to work with - yet they now hardly exist outside of the history-enthusiast realm.

You might also consider the observations of Gordon Moore in 1965. "Moore's law" is often grossly misunderstood as a simple doubling of computing power or halving of package size; however, his original paper was about neither.
The paper was actually an economic observation regarding the computing power per dollar of naked silicon at the economically optimal chip size and density on each observation date. The paper also observes that low-density chips, though optimal at an earlier date, become a gross false economy at a later date - triply so once the relatively fixed costs of packaging, design, installation, and shipping are figured in, not to mention future-proofing and the expanded usefulness (even if slight).
As for optimal chip power: very high-power chips use more contiguous silicon and thus make less efficient use of wafers, due to the waste created by a single wafer defect, while very low-power chips require excess handling, testing, and so forth per unit of computing power. Thus midrange chips form the bottom of the bathtub curve, yielding the best bang for the buck; and due to the ever-advancing nature of this technology, it is generally more economical to shift selection to the higher end of the range under consideration, unless you are already on the extreme right side of the curve, where it rises quite steeply.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9897
  • Country: us
Re: Oscilloscope memory, type and why so small?
« Reply #23 on: March 06, 2017, 10:56:07 pm »

In my original comment I mentioned "plenty" of memory; my intent was a few megabytes, not gigabytes, in contrast to many scopes still marketed new with only a few kilobytes.

Enough to capture a few Ethernet jumbo frames with reasonable resolution (about 200-400 kpts per segment per channel; 5-10 points per wave period; 37k cycles (73k bits, 9100 bytes); 4 duplex channels for 1G Ethernet, differential signaling type).


Sounds like a job for a logic analyzer, not a scope.  Once the scope has verified signal timing and integrity, the LA comes out to play.

The DS1054Z has 24 Mpts, or 6 Mpts per channel (whichever comes first).  Seems like a lot... especially when there are only 800 pixels on the screen (fewer when menus are subtracted) and scrolling becomes tiresome after a very short time.  Yes, I know the memory can be dumped to a PC, but how many people actually do that, versus the number who trigger on what they want to see and only need a screen shot?
 

Offline capsicumTopic starter

  • Contributor
  • Posts: 34
  • Country: us
Re: Oscilloscope memory, type and why so small?
« Reply #24 on: March 06, 2017, 11:45:31 pm »

Sounds like a job for a logic analyzer, not a scope.  Once the scope has verified signal timing and integrity, the LA comes out to play.

The DS1054Z has 24 Mpts, or 6 Mpts per channel (whichever comes first).  Seems like a lot... especially when there are only 800 pixels on the screen (fewer when menus are subtracted) and scrolling becomes tiresome after a very short time.  Yes, I know the memory can be dumped to a PC, but how many people actually do that, versus the number who trigger on what they want to see and only need a screen shot?
It was just an example of a common signal of substantial length. For sure LAs are good for this particular case, but they are not free of cost and are not generalist tools like o-scopes, so frequency of use is a factor.
As for the DS1054Z: some of the newest scopes are finally catching up, which is a good thing for sure, but to me it seems a few years late; there is still a large number that haven't caught up, and some companies are spending money on marketing to push sub-par products rather than just engineering a better product. In my mind, in 2017 memory depth shouldn't even be a purchase-selection criterion outside of some very niche use cases.

http://www.eetimes.com/document.asp?doc_id=1279463
On closer reading, this is just stating the obvious - that a scope needs to be correctly programmed (in either software or hardware logic) to handle its allotted memory - and it seems to be highly biased toward the products of the author's company. AKA stealth marketing.
 

