Using DDR memory with an FPGA is easy nowadays: just drop a piece of pre-cooked IP into your design and you're done. Xilinx lets you have multiple narrow memory interfaces to the fabric (basically creating a multi-port memory) or one really wide interface. In the Xilinx Zynq the memory is shared between the processor and the FPGA fabric through multiple memory interfaces.
Maybe you should stick to Xilinx's datasheets/documentation, because they tell an entirely different story.
Then link to the documentation and not some random website. The documentation says the memory interface can handle 32 bits at 1800 Mb/s per pin, which works out to 7.2 GB/s.
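The arithmetic behind that figure is straightforward; a quick back-of-the-envelope check (the bus width and per-pin rate are the numbers quoted above, not vendor-verified):

```python
# Peak DDR bandwidth for the quoted figures: a 32-bit interface with each
# data pin transferring 1800 Mb/s (DDR counts both clock edges).
bus_width_bits = 32
data_rate_mbps = 1800

peak_gbps = bus_width_bits * data_rate_mbps / 1000   # gigabits per second
peak_gBps = peak_gbps / 8                            # gigabytes per second

print(peak_gbps)   # 57.6 Gb/s
print(peak_gBps)   # 7.2 GB/s
```

That is the theoretical peak; refresh, row activation, and turnaround overheads eat into it in practice.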
I was presuming that the acquisition FPGA could write directly to the acquisition DRAM, and that the acquisition memory could be set up in a double-buffered ("banked"?) configuration so that the FPGA's writes wouldn't collide with reads performed by the downstream processing pipeline. That is, the FPGA's writes would go through a switch that would send the data to one DRAM bank or the other, depending on which one was in play at the time.
That is really difficult to do with modern high-performance DRAM, short of building your own memory controller with two interfaces. Older DSOs did work this way by sharing the memory bus.
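The banked scheme described above can be modeled in a few lines; this is a toy illustration of the ping-pong idea (class and method names are made up, and it says nothing about how hard the DRAM controller underneath would be):

```python
# Toy model of double-buffered ("ping-pong") capture: the acquisition side
# fills one bank while the processing side reads the other, and the roles
# swap at each bank boundary, so reads never collide with in-flight writes.
class PingPongCapture:
    def __init__(self, depth):
        self.banks = [[None] * depth, [None] * depth]
        self.write_bank = 0          # bank currently owned by the writer
        self.wr = 0                  # write index within that bank

    def write_sample(self, sample):
        bank = self.banks[self.write_bank]
        bank[self.wr] = sample
        self.wr += 1
        if self.wr == len(bank):     # bank full: swap roles
            self.write_bank ^= 1
            self.wr = 0

    def read_bank(self):
        # The pipeline always reads the bank the writer is NOT using.
        return self.banks[self.write_bank ^ 1]

cap = PingPongCapture(depth=4)
for s in range(6):                   # fills bank 0, starts filling bank 1
    cap.write_sample(s)
print(cap.read_bank())               # bank 0, now safe to process: [0, 1, 2, 3]
```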
The Zynq solves a lot of problems, but can its serial interfaces connect to ADCs and operate continuously? These things cost as much as an entire Rigol DSO. A dedicated FPGA for each channel would be cheaper.
It did lead me to wonder how many designers of DDR2/DDR3 memory controllers have gone insane.
In the recent past I looked at a modern SRAM-based design, but the parallelism of DDR2/DDR3 has a lot of advantages even though they are complicated. Just using the FPGA's or ASIC's internal memory has a lot going for it.
All this talk is very interesting, but it's hard to see the point. Clearly there are two completely different approaches. One is the glitch hunter's approach, which is fine with tiny memory but needs a very high wfm rate. The glitch hunter will say large memory is s*it and trouble, because it's hard to achieve a high wfm rate with it.
I fail to see why you can't have both. More precisely, I fail to see how a well-designed large memory system will necessarily result in a lower waveform rate.
Suppose you have a capture system which captures all of the samples continuously. Suppose too that you have a triggering system that does nothing but record the memory addresses of every triggering event.
If you're glitch hunting, then you want to maximize the waveform update rate. Assuming a triggering subsystem that can keep up with the sampling rate, the waveform update rate is going to be determined by how quickly you can scan a display window's worth of data and show it on the screen. But that data is anchored at the trigger point, so you have to have trigger data anyway. Your processing rate is what it is, and it may be slow enough that it has to skip triggering events if those events are too closely spaced in time.

But how can the waveform update rate possibly depend on the size of the entire capture when the displayed time range shows only a subset of it? More to the point, if you have a system with a small amount of capture memory, then you're forced to sample at a slower rate in order to capture a longer period of time, but that's no different from subsampling a larger buffer taken from a faster capture. And that's true even of the triggering mechanism, if need be. Sure, the triggering mechanism wouldn't see all of the possible triggering events if it's subsampling the memory, but that is no different from when the sampling rate is reduced to allow the capture to fit into a smaller amount of memory.
Sure, it's desirable to process all of the data in the capture buffer (or the subset that represents the time window covered by the display), but how is it an advantage to reduce the amount of memory available to the capture system? I see no upside to that save for cost savings. But the advantages of a larger capture memory are undeniable.
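The scheme argued for above, capture everything into one long record, log only the addresses of trigger events, and render a display window anchored at a chosen trigger, can be sketched like this (the "signal", window sizes, and function names are all made up for illustration):

```python
# Stand-in capture record: a long alternating waveform.
samples = [(-1) ** n for n in range(1000)]

# The triggering system does nothing but record the address of every
# rising-edge crossing; it never touches the sample data again.
triggers = [i for i in range(1, 1000)
            if samples[i - 1] < 0 <= samples[i]]

def display_window(trigger_index, pre=8, post=8):
    """Slice the big record around one logged trigger address."""
    t = triggers[trigger_index]
    return samples[max(0, t - pre):t + post]

# The waveform rate depends on how fast these fixed-size windows can be
# scanned and drawn, not on the total length of `samples`.
window = display_window(5)
print(len(window))   # 16 samples, regardless of the full record length
```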
Last time we discussed this, I came to the conclusion of "why not both?", which is more or less the result the Rigol DS1000Z series produces, where most of what you see, and what measurements are made on, is a short record just long enough for the display.
Rigol's approach indeed makes sense with very limited hardware, but they messed up one key point: you can decimate, but you must not destroy data! They probably did this to make use of some limited-bit integer arithmetic, which is of course fast, but it effectively nulls out the accuracy gain you can get from statistics collection. Stats on the Rigol effectively do not work because of this. Initially I was hoping to do the same trickery I do with the new scope... when needing extra accuracy (with repetitive signals), just crank the stats up to 1000x and enjoy extreme timing accuracy even with small sample sets (max 16 kpts with ETS).
Rigol obviously got ideas from Keysight's MegaZoom... What I still don't get about MegaZoom is that even though Keysight says it's only "screen pixels", you get substantially more accuracy than with the Rigol. The *SOX3000T seems to deliver roughly 20 kpts-like accuracy. Maybe MegaZoom only decimates rather than destroying data, and the "statistical gain" kicks in? 20 kpts-like accuracy seems about right for statistical gain on screen pixels.
I'm not sure there is yet enough market pressure for a cheap super-scope with both 1M+ wfm/s and gigabytes of memory, despite it being entirely possible technically (you need hardware parallelism to cover the non-trivial use cases, just like with multi-core CPUs). But that's just as well, because with a single perfect scope there would be no excuse to acquire a wide variety of gear... which would be very hard to accept for people suffering from various gear-acquisition-related illnesses.
Edit: It seems the new R&S is trying to find some middle ground with 10...20 MB of main memory and 160 MB of segmented memory. I cannot find trigger re-arm data; the wfm/s rate is 50,000 (maybe worse when offloading to segmented memory?). There has to be some (cost/performance) reason to differentiate the two memories.
My old Tektronix DSOs all use a 16-bit processing record, so when averaging is used it really works, and the 10-bit displays turn smooth as glass. In an odd way they look like an analog oscilloscope, but without the noise.
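The point of the wider processing record is easy to show: averaging 10-bit samples into a 16-bit accumulator keeps the sub-LSB information that per-sample 10-bit storage would throw away. A deterministic toy example (the numbers are made up to illustrate the mechanism, not taken from any Tek manual):

```python
# 64 ten-bit samples of a level that sits between two adjacent ADC codes;
# with a little noise dither, the codes toggle between 511 and 512.
codes = [511, 512] * 32

# Accumulate in a wide (16-bit-class) record: 64 * 1023 max = 65472 < 2**16,
# so the running sum never overflows 16 bits.
acc = sum(codes)

# Scaling back recovers resolution below one 10-bit LSB.
print(acc / 64)   # 511.5 -- the in-between level, invisible to a single sample
```

With only a 10-bit processing record, every intermediate result would be rounded back to an integer code and the averaged value would stay pinned at 511 or 512.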
What is the case again for super long record lengths when you have delayed acquisition, DPO type processing, and segmented memory?
If you want to record some sort of events and work with them (e.g. compare the event you've got today with an event you had a month ago), you want memory long enough to capture the whole event, whatever that might be.
Of course you do - but that is not an oscilloscope. It's what is called a 'transient recorder' or 'high-speed digitiser', and they are available with very large amounts of memory, e.g. http://www.keysight.com/en/pc-1128783/High-Speed-Digitizers-and-Multichannel-Data-Acquisition-Solution?pm=SC&nid=-35556.0&cc=GB&lc=eng. No display processing at all, though.
And also less tested than DDR2.
Sure, it's a great idea to use DDR3 memory because it's faster, but will it be reliable enough within a safe margin? The same goes for why they bother using Windows XP in a super-duper six-digit-priced spectrum analyzer: it's more tested.
DDR3 is probably in 80% of all computers now and doing well. I don't think you can get more thorough testing than that.
DDR2 is obsolete and harder to buy.
DDR4 requires fast expensive FPGAs.
There might also be other specifications we are unaware of that make DDR2 memory suitable for this purpose.
The problem is certainly not a technical one.
DDR2 is not that much different from DDR3: slightly higher voltage, a less sophisticated protocol. Thus DDR2 is a bit slower. The best DDR3 is about twice as fast as the best DDR2. In practical terms the difference is smaller, because 2133 Mb/s DDR3 is rare and difficult to deal with. So in practice you get 1333 Mb/s from DDR3, perhaps even less, while the most common DDR2 is 800 Mb/s. Otherwise they're much the same.
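Putting the quoted per-pin rates side by side makes the "2x on paper, less in practice" point concrete (the 16-bit bus width and the 1066 Mb/s "best DDR2" figure are assumptions for illustration):

```python
# Peak bandwidth in MB/s for a given per-pin data rate and bus width.
def peak_MBps(data_rate_mbps, bus_bits=16):
    return data_rate_mbps * bus_bits / 8

ddr2_common    = peak_MBps(800)    # most common DDR2
ddr3_practical = peak_MBps(1333)   # what you realistically get from DDR3
ddr3_best      = peak_MBps(2133)   # rare, hard-to-use top-speed DDR3

print(ddr3_best / peak_MBps(1066))   # best vs best: roughly 2x
print(ddr3_practical / ddr2_common)  # practical gap: roughly 1.67x
```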
The bandwidth utilization of the memory is very low - perhaps 20-30%, if not less. Therefore, the effective bandwidth the memory delivers is less than the bandwidth required for continuous acquisition. The bandwidth can be increased by a few means: