Show us your square wave
Rupunzell:
Could this measurement be run without the RG58 coax, connectors and related adapters? If possible, use only a single adapter between the S-4 head and the generator output, as there appears to be a reflection from the RG58 coax/connectors/adapters.
See the circled areas in this annotated image:
Bernice
--- Quote from: EV on January 29, 2015, 12:23:51 pm ---
Here are the pictures you asked for:
Generator: Rigol DG4162, sync out, 40 MHz square wave, connected straight to the scope with about 1 m of RG58 cable.
Scope: Tektronix R7103
1. pic: Timebase 7B15, Vertical amp 7A29, BW 1 GHz
2. pic: Timebase 7T11, Vertical amp 7S11 with sampling head S-2, BW ?
3. pic: Timebase 7T11, Vertical amp 7S11 with sampling head S-4, BW ?
--- End quote ---
The last picture with the S-4 sampling head does not look good, so I connected the RG58 cable through a 20 dB attenuator to the S-4 sampling head. Here are new pictures at the 5 ns and 1 ns timebase settings.
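Incidentally, for a rough feel of how much each extra transition can reflect, here is a quick Python sketch (illustrative impedance values only, assuming a 50 ohm system):
--- Code: ---
# Reflection coefficient at an impedance step: gamma = (ZL - Z0) / (ZL + Z0).
# Even a small bump at a connector or adapter reflects a visible slice of a
# fast edge, which shows up as steps/ringing on the sampled waveform.
def reflection_coeff(z_load: float, z0: float = 50.0) -> float:
    return (z_load - z0) / (z_load + z0)

for z in (45.0, 55.0, 75.0):   # assumed example mismatches, in ohms
    print(f"{z:.0f} ohm section: {reflection_coeff(z):+.1%} of the incident step reflected")
--- End code ---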
TunerSandwich:
--- Quote from: joeqsmith on January 30, 2015, 03:36:14 am ---
--- Quote from: TunerSandwich on January 30, 2015, 03:25:14 am ---
--- Quote from: joeqsmith on January 30, 2015, 02:09:01 am ---
The data I presented in the video shows a point-to-point link running 400Mb, no problem. Not sure why you're thinking 100Mb on a Gig connection.
--- End quote ---
That is not possible. There is an error in the data/math there. 1Gb/s = 125MB/s, with NO protocol overhead....Ethernet protocols are going to slash at least some margin off that....if you see 90-100MB/s full duplex you are lucky, and have a top NIC/cabling etc etc etc....
I have some fancy NAS/SAN devices here at work, running LACP (802.3ad) link aggregation, with top-tier controllers, drives, cabling etc....and we are lucky to see a sustained 130-150 MB/s rate on large blocks....forget small blocks of non-contiguous file headers....that has generally mirrored my experience in that realm for the last couple of decades....
are you confusing MB and Mb?
--- End quote ---
1000Mbit connection, or yes, 125MByte. Getting 400Mbit or 50MByte data rates. The project I am working on will run at 500Mbit sustained until I fill a TB drive and not miss a beat. No problem. In the case of the DSO, I tried a direct link, then ran it through the old crappy Cisco. No performance hit. Keep in mind that the overall average data rate for the DSO will be much, much lower. This is a burst. Then we wait for the DSO to collect its thoughts.
--- End quote ---
Ok, that is fine...but Wuer said 100MB and you retyped it as 100Mb and said you didn't understand why he said that was impossible.....he didn't say 100Mb.....you misread or misquoted that....50MB on 1Gb Ethernet is no problem....even on small blocks you should get somewhere around 47MB, so 50MB is totally in the realm....but 400MB on 1Gb, no way.....
I see a misquote on your end....that's why it's not adding up. Wuer is bang on with his statement of 100MB on 1Gb Ethernet....and he is being generous there.
When you showed your data on the LeCroy forum, did you mistakenly flip those MB vs Mb figures? If so, I can see why no one responded....not trying to be rude at all, just trying to point out that those bits of terminology are pretty damn critical in making any realistic assessment.
Another issue there is that "burst" rates typically apply to contiguous streams of data....small files, with small byte/sector allocations on the drives, are going to put those theoretical rates in the dumpster....
Big files, going between magnetic drives with 4k-byte sectors, over Ethernet MIGHT get 100-110(ish) MB/sec....SSD is no help there either, as the reduced (non-existent, really) seek times aren't very relevant.....Ethernet is still dealing with packets: send, receive, ack, etc....there is much more latency there than in the drives....
How is the LeCroy provisioning this data? I don't mean internally, I mean off through the PCI bus, into the NIC and on down the line....there is going to be far more complexity, overhead etc. over Ethernet than any potential latencies across the PCI bus or associated memory controllers etc.....
Even the most arcane SATA drive is going to be limited by the network overhead.....If I recall correctly, most OTS/modern magnetic drives have a seek time of around 3-20ms.
Maybe we are talking apples and oranges here and I need to go back and read what you posted prior, but in terms of Wuer being wrong about 1Gb Ethernet being capped @ 100MB/s, he isn't.....that is how it goes. Again, he didn't say 100Mb/s as you re-wrote it....key, key point there....
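For anyone following along, a quick Python sanity check of these figures (a rough sketch; the overhead numbers assume standard 1500-byte TCP/IPv4 frames with no options, and the small-block line assumes a 10 ms average seek):
--- Code: ---
# Mb/s vs MB/s, plus a rough TCP payload ceiling for gigabit Ethernet.
LINE_RATE_BPS = 1_000_000_000            # 1 Gb/s line rate

def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert Mb/s to MB/s (divide by 8)."""
    return mbit_per_s / 8

# Per-frame accounting, standard 1500-byte MTU, TCP over IPv4 (assumed
# header sizes; no TCP options, no VLAN tag):
MTU          = 1500
TCP_IP_HDRS  = 20 + 20                   # IPv4 header + TCP header
ETH_OVERHEAD = 14 + 4 + 8 + 12           # Ethernet hdr + FCS + preamble + gap

payload    = MTU - TCP_IP_HDRS           # 1460 bytes of user data per frame
wire_bytes = MTU + ETH_OVERHEAD          # 1538 bytes on the wire per frame
ceiling    = LINE_RATE_BPS / 8 / 1e6 * payload / wire_bytes

print(f"400 Mb/s = {mbit_to_mbyte(400):.0f} MB/s")        # 50 MB/s
print(f"TCP payload ceiling ~ {ceiling:.0f} MB/s")        # ~119 MB/s

# Small random blocks: seek time dominates (assumed 10 ms seek, 4 KB reads).
print(f"4 KB per 10 ms seek ~ {4 / 1024 / 0.010:.1f} MB/s")   # ~0.4 MB/s
--- End code ---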
joeqsmith:
--- Quote ---
are you confusing MB and Mb? As you can see below, Wuer is talking MB and above you are talking Mb....big difference there
--- Quote from: Wuerstchenhund on Today at 06:04:46 AM ---
And really, it should be pretty obvious to an EE why, on a single PCI bus with a theoretical max transfer rate of 133MB/s, a 1Gbps network adapter (which in a good environment transfers between 90-100MB/s) doesn't leave much room for anything else. And something like 33MB/s isn't much for an acquisition system which captures over 20GB of data per second, and while not everything in the sample memory goes to the CPU, everything that you want to see on the screen or that you want to process in any way does. Well, you do the math.
--- End quote ---
--- End quote ---
Right, a 1Gbps adapter in a good environment transfers between 90-100MB/s. Again, I am getting around 400Mb/s, or 50MB/s: around 50% of that. I must be missing the question.
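Wuerstchenhund's "do the math" works out roughly as below; a sketch using the figures from his post, plus an assumed 8 bits per sample for the 20GS/s number:
--- Code: ---
# Rough PCI bus budget, using the numbers from the quoted post.
PCI_BUS_MBPS  = 133        # 32-bit/33 MHz PCI, theoretical shared maximum
GIGE_NIC_MBPS = 95         # realistic 1 Gb/s NIC throughput (90-100 MB/s)

leftover = PCI_BUS_MBPS - GIGE_NIC_MBPS
print(f"Left on the bus for everything else: ~{leftover} MB/s")   # ~38 MB/s

ACQ_MBPS = 20_000          # 20 GS/s at an assumed 8 bits/sample = 20 GB/s
print(f"Acquisition outruns the whole bus by ~{ACQ_MBPS / PCI_BUS_MBPS:.0f}x")
--- End code ---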
joeqsmith:
--- Quote from: TunerSandwich on January 30, 2015, 03:42:42 am ---
Ok, that is fine...but Wuer said 100MB and you retyped it as 100Mb and said you didn't understand why he said that was impossible.....he didn't say 100Mb.....you misread or misquoted that....50MB on 1Gb Ethernet is no problem....even on small blocks you should get somewhere around 47MB, so 50MB is totally in the realm....but 400MB on 1Gb, no way.....
I see a misquote on your end....that's why it's not adding up. Wuer is bang on with his statement of 100MB on 1Gb Ethernet....and he is being generous there.
When you showed your data on the LeCroy forum, did you mistakenly flip those MB vs Mb figures? If so, I can see why no one responded....not trying to be rude at all, just trying to point out that those bits of terminology are pretty damn critical in making any realistic assessment.
--- End quote ---
:-DD :-DD missed that. Makes more sense now that I reread it. I was thinking man, 100Mb on a 1Gb connection is good? :palm: :-DD
No problem. Good catch. My original post to the LeCroy forum:
--- Quote ---
A short video showing the results of adding a 1000Mb Ethernet board to the Wavemaster. I used an Intel PRO/1000 GT Ethernet board, which supports offloading some of the processing that the OS would normally have to do.
I can leave both ports attached to the switch, each with its own IP, and select the card I want to use from LabVIEW. XStream has no problems with it.
I ran some tests with a larger MTU size, Nagle enabled, changing the DSO's software priority, and a direct connection to the PC rather than going through the Cisco switch. Gains were minimal.
One thing that I did notice that is not in the video: at 20GS/s the poor DSO has no time left to service the Ethernet. Depending on what you are doing, I have seen average data rates as poor as 10Mb-20Mb with the 100Mb port. In all cases the add-on board yielded better performance.
--- End quote ---
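On the PC side, "selecting the card" can be done by binding the client socket to the address of the NIC you want before connecting, which forces traffic out that interface. A minimal Python sketch; the IPs and port below are made-up placeholders, not the actual setup:
--- Code: ---
import socket

DSO_IP, DSO_PORT = "192.168.1.50", 1861    # hypothetical DSO address/port
LOCAL_NIC_IP     = "192.168.1.10"          # address assigned to the add-on NIC

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((LOCAL_NIC_IP, 0))               # pick the interface, any local port
# TCP_NODELAY = 0 leaves Nagle's algorithm enabled (the default); 1 disables it.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)
sock.connect((DSO_IP, DSO_PORT))
--- End code ---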
TunerSandwich:
--- Quote from: joeqsmith on January 30, 2015, 03:54:23 am ---
--- Quote from: TunerSandwich on January 30, 2015, 03:42:42 am ---
Ok, that is fine...but Wuer said 100MB and you retyped it as 100Mb and said you didn't understand why he said that was impossible.....he didn't say 100Mb.....you misread or misquoted that....50MB on 1Gb Ethernet is no problem....even on small blocks you should get somewhere around 47MB, so 50MB is totally in the realm....but 400MB on 1Gb, no way.....
I see a misquote on your end....that's why it's not adding up. Wuer is bang on with his statement of 100MB on 1Gb Ethernet....and he is being generous there.
When you showed your data on the LeCroy forum, did you mistakenly flip those MB vs Mb figures? If so, I can see why no one responded....not trying to be rude at all, just trying to point out that those bits of terminology are pretty damn critical in making any realistic assessment.
--- End quote ---
:-DD :-DD missed that. Makes more sense now that I reread it. I was thinking man, 100Mb on a 1Gb connection is good? :palm: :-DD
No problem. Good catch. My original post to the LeCroy forum:
--- Quote ---
A short video showing the results of adding a 1000Mb Ethernet board to the Wavemaster. I used an Intel PRO/1000 GT Ethernet board, which supports offloading some of the processing that the OS would normally have to do.
I can leave both ports attached to the switch, each with its own IP, and select the card I want to use from LabVIEW. XStream has no problems with it.
I ran some tests with a larger MTU size, Nagle enabled, changing the DSO's software priority, and a direct connection to the PC rather than going through the Cisco switch. Gains were minimal.
One thing that I did notice that is not in the video: at 20GS/s the poor DSO has no time left to service the Ethernet. Depending on what you are doing, I have seen average data rates as poor as 10Mb-20Mb with the 100Mb port. In all cases the add-on board yielded better performance.
--- End quote ---
--- End quote ---
If you don't use LACP to aggregate the ports on the NIC, then running each on its own IP actually increases processing overhead. You basically have an idle port trying to look for data. If you want to divide packets across the aggregate, then LACP needs to be in place throughout the chain: both source and destination, and ANY device (switch etc.) in between needs to have those links operating as a "team". If you team those ports, you don't exactly have a 2Gb/s port either, as there is overhead in LACP.
The Intel PRO NIC is top-tier kit....a lot of older Cisco switches don't support (802.3ad) dynamic links.....also, running into a switch is not a point-to-point connection.....the switch re-prioritizes packets depending on lots of things.....you can, however, tag packets with priority using QoS.
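To illustrate why a two-port team isn't simply a 2Gb/s port for a single connection, here is a simplified Python sketch of a layer-2 transmit hash (real policies vary by vendor and configuration; the MACs below are made up):
--- Code: ---
# LACP hashes each flow to ONE member link, so a single src/dst pair can
# never exceed one link's rate. Simplified layer-2 (MAC XOR) hash policy.
def lacp_member(src_mac: str, dst_mac: str, n_links: int = 2) -> int:
    last_octet = lambda mac: int(mac.split(":")[-1], 16)
    return (last_octet(src_mac) ^ last_octet(dst_mac)) % n_links

# The scope talking to one PC lands on the same member link every time:
print(lacp_member("00:1b:21:aa:bb:01", "00:05:1b:cc:dd:02"))
--- End code ---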
I know that is all a bit off topic here, but just adding a bit of insight on how that all comes together in the real world. I had a brief stint as a Cisco tech (CCNA cert) and thought some of that jargon might shed some light on some faults in methodology.
A larger MTU doesn't necessarily benefit things.....a lot of devices don't handle jumbo frames the same way. Generally an MTU of 9000 or more doesn't necessarily yield the result you might expect.....for example, an Intel PRO NIC might call a jumbo frame 9000 MTU and a Cisco switch might call it 9090...etc etc etc....it's an endless clusterfuck of manufacturers not being on the same page....and a very loose classification of what a "jumbo frame" really is.....as a general rule stay away from jumbo frames....they very often don't help anything, and very often hurt things.....unless you have really sussed out the need, and the entire chain of events that is handling that packet (and assuming that the source is even packaging it at that MTU).
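For what it's worth, the raw wire-efficiency gain from jumbo frames is modest even when everything in the chain agrees. A quick sketch with the same assumed TCP/IPv4 per-frame overhead as the earlier numbers:
--- Code: ---
# Wire efficiency vs MTU: jumbo frames buy only a few percent of payload.
def tcp_efficiency(mtu: int) -> float:
    payload = mtu - 40          # minus assumed IPv4 + TCP headers
    wire    = mtu + 38          # plus Ethernet framing, preamble, gap
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {tcp_efficiency(mtu):.1%}")   # ~94.9% vs ~99.1%
--- End code ---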