
Big Test MFG says our Ethernet Chip violates the IEEE standard - are they right?


tooki:

--- Quote from: nctnico on October 21, 2021, 01:30:43 pm ---
--- Quote from: tooki on October 21, 2021, 03:06:47 am ---
--- Quote from: nctnico on October 20, 2021, 08:51:41 pm ---
--- Quote from: tooki on October 20, 2021, 12:24:42 pm ---
- Gigabit requires Cat 5 cable, but requires 4 pairs.

--- End quote ---

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.

--- End quote ---
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

--- End quote ---

Well, I have seen CAT5 cables fail at 1Gbit where CAT5E worked. The failing cables were only a few meters long. So explain that to me...

--- End quote ---
Could have been noncompliant cable (just because a manufacturer says it's compliant doesn't mean it is; see below), or it could have been poorly terminated, or it could have been fully compliant originally but degraded due to damage.


Here’s a 2016 test of Cat 6 cables, finding that only 10% met the rated spec.

Another test from a few years before found a compliance rate of just 30% for Cat 5e cables and 15% for Cat 6 cables.

There’s no reason whatsoever to assume this has changed: most of the time the cables are short enough that it works anyway, and a few lost packets here and there go unnoticed. But when we push cables to the limits of bandwidth and/or length, the noncompliant cables suddenly become problematic.
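
If you want to see whether a marginal cable is actually losing frames, the kernel's per-interface counters make those "few lost packets" visible. A minimal sketch, assuming a Linux box; "eth0" is a placeholder interface name:

--- Code: ---
# Minimal sketch, assuming Linux sysfs; "eth0" is a placeholder interface name.
from pathlib import Path

STATS = Path("/sys/class/net/eth0/statistics")

def read_counter(name: str) -> int:
    """Return one kernel counter, or 0 if the driver doesn't expose it."""
    try:
        return int((STATS / name).read_text())
    except FileNotFoundError:
        return 0

# CRC errors and drops are the counters a marginal cable tends to move.
for counter in ("rx_packets", "rx_errors", "rx_crc_errors", "rx_dropped"):
    print(f"{counter}: {read_counter(counter)}")
--- End code ---

A short cable that "works anyway" will typically show these counters sitting at zero; a cable pushed past its limits usually starts incrementing rx_crc_errors.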

We see these problems a lot more with HDMI, whose higher bandwidth requirements (and fairly low tolerance for bit loss, thanks to encryption) spotlight the cable problems. Ethernet is just better at masking bad cables…

ejeffrey:

--- Quote from: Ranayna on October 21, 2021, 12:19:40 pm ---Wasn't there some change in the specs some years ago?
I always heard that CAT 5e was essentially "backported" into CAT 5, making CAT 5e obsolete.

--- End quote ---

No, the cat5e standard superseded cat5.  Honestly anything sold recently as cat5 is somewhat suspect just on the grounds that it is claiming adherence to an obsolete standard.


--- Quote from: nctnico ---
Well, I have seen CAT5 cables fail at 1Gbit where CAT5E worked. The failing cables were only a few meters long. So explain that to me...

--- End quote ---

New or old cable?  Honestly, any modern cable labeled as cat5 is suspect: it is a long-obsolete standard, and as far as I know reputable manufacturers haven't made it in ages.  Old cables should be fine as long as they haven't been abused.  My understanding, at least, is that the added specs for cat5e were A) already met by a lot of cat5 cable, just with no guarantee or testing, and B) only required to reach the full 100 meter range, including a wall jack and patch panel that interrupt the signal.  A 5 meter patch cable directly connecting a device to a switch should definitely work, and has worked in every situation I have tried, although I haven't used cat5 cables in a long time.
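
A quick way to sanity-check what a given patch cable actually negotiated is to read the link state back from the driver. A minimal sketch, assuming Linux sysfs; the interface name is a placeholder:

--- Code: ---
# Minimal sketch, assuming Linux sysfs; "eth0" is a placeholder interface name.
from pathlib import Path

IFACE = Path("/sys/class/net/eth0")

def read(attr: str) -> str:
    """Read one sysfs attribute; some return EINVAL while the link is down."""
    try:
        return (IFACE / attr).read_text().strip()
    except OSError:
        return "unknown"

# "speed" is in Mb/s ("1000" for gigabit), "duplex" is "full" or "half".
print("operstate:", read("operstate"))
print("speed:", read("speed"), "Mb/s")
print("duplex:", read("duplex"))
--- End code ---

If a marginal cable can't make gigabit work, autonegotiation sometimes falls back and this reports 100 Mb/s instead of 1000.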

wraper:

--- Quote from: tooki on October 21, 2021, 03:10:54 am ---
--- Quote from: ejeffrey on October 20, 2021, 10:04:59 pm ---
--- Quote from: nctnico on October 20, 2021, 08:51:41 pm ---
--- Quote from: tooki on October 20, 2021, 12:24:42 pm ---
- Gigabit requires Cat 5 cable, but requires 4 pairs.

--- End quote ---

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.

--- End quote ---

Really?  I haven't looked at that extensively, but I have never seen a cat5 cable fail at gigabit speeds.  The initial version of 1000Base-T specifically called for cat5 but with slightly modified specs -- I think only for NEXT, which was not specified in the cat5 standard but which most existing cat5 cables supported.  Cat5e was then published and cat5 withdrawn, and the ethernet spec modified to call for cat5e.

--- End quote ---
Every resource I’ve checked still says Cat 5, not 5e, as the minimum for gigabit.

--- End quote ---
Every resource I checked says that cat5 is sufficient only up to 100Base-TX, including Belden, which makes these cables. https://www.belden.com/blogs/why-category-5e-cabling-is-becoming-outdated


--- Quote ---It is a misconception: as ve7xen said, the Cat 5e standard didn’t even exist when gigabit was created!
--- End quote ---
He also said that it had additional requirements for which the Cat 5 spec was not sufficient.

mansaxel:

--- Quote from: tooki on October 21, 2021, 09:05:40 pm ---We see these problems a lot more with HDMI, whose higher bandwidth requirements (and fairly low tolerance for bit loss, thanks to encryption) spotlight the cable problems. Ethernet is just better at masking bad cables…

--- End quote ---

Also, Ethernet is almost never run anywhere near its capacity. Sure, the packets are short and fast when they arrive, but there is usually lots of space between them, so retransmissions initiated by TCP have room to happen, even if they grossly degrade the transmission speed.  (A true CSMA/CD net, that is 10Mbit half-duplex, is another story, best left among the bad memories of the 90s.)  Therefore, as long as there is capacity available, errors will be tolerable in most scenarios. If you're running a protocol that always puts frames on the wire, like SDH, which is much more like the transmission you'll see in HDMI or SDI or AES3 or S/PDIF, there is always a verifiable frame on the wire, increasing the statistical probability that stochastic errors will surface.  Try "filling the cable" by running FTP with fast end systems that have tuned operating systems, and a bad cable will sour the day immensely.
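
A minimal sketch of that "fill the cable" exercise, using a plain TCP stream as a stand-in for FTP; the port, chunk size, and duration are arbitrary choices:

--- Code: ---
# Minimal sketch: saturate a link with one TCP stream and measure throughput.
# Stand-in for the FTP test described above; port/chunk/duration are arbitrary.
# Usage: "python3 fill.py server" on one host, "python3 fill.py <host>" on the other.
import socket, sys, time

PORT = 50007          # arbitrary test port
CHUNK = 64 * 1024     # 64 KiB writes keep the pipe full
DURATION = 10         # seconds to transmit

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.monotonic()
        with conn:
            while (data := conn.recv(CHUNK)):
                total += len(data)
        secs = time.monotonic() - start
        print(f"{total * 8 / secs / 1e6:.0f} Mbit/s over {secs:.1f} s")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        end = time.monotonic() + DURATION
        while time.monotonic() < end:
            sock.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])
--- End code ---

On a clean gigabit link a single TCP stream sits somewhere around 940 Mbit/s; on a bad cable the retransmissions drag it well below that.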

Similar situations exist with longer or more spliced/patched cables: there is a probable error rate per meter of bad cable, and an error rate per patch/IDC splice too; and of course they're cumulative.
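
Toy arithmetic only, with invented placeholder rates, just to show how the contributions add along a run:

--- Code: ---
# Toy arithmetic: per-meter and per-splice error contributions accumulate.
# The rates are invented placeholders, not measured characterization data.
per_meter_rate  = 1e-9   # assumed errored-frame probability per meter of bad cable
per_splice_rate = 5e-8   # assumed errored-frame probability per patch/IDC splice

def frame_error_rate(meters: float, splices: int) -> float:
    return per_meter_rate * meters + per_splice_rate * splices

print(frame_error_rate(5, 0))    # short direct patch cable: ~5e-09
print(frame_error_rate(90, 4))   # long run through panels and jacks: ~2.9e-07
--- End code ---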

Finally, to put a metrology slant on this: there is a reported, verified difference in PDV (packet delay variation) between optical and electrical Gigabit Ethernet interfaces, and the reason is that there is a fair amount of error checking in the electrical interface that does not have to happen in the same way for the optical one.  Of course, the differences are small, but if you are trying to build the best IEEE 1588-2008 (Precision Time Protocol) infrastructure possible, this will matter, since the assumption that packet time A->B == packet time B->A will not hold as often over copper as over fiber.
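
A small sketch of why that symmetry assumption matters: the standard IEEE 1588 offset estimate from the Sync/Delay_Req timestamps is exact only when the two directions see the same delay, and half of any asymmetry lands directly in the offset. The timestamp values below are made up for illustration:

--- Code: ---
# Sketch of the IEEE 1588 two-way offset estimate; nanosecond values are invented.
def ptp_offset(t1, t2, t3, t4):
    """Slave offset estimate; exact only if the A->B and B->A delays are equal."""
    return ((t2 - t1) - (t4 - t3)) / 2

true_offset   = 0      # assume the slave clock is actually perfect
forward_delay = 520    # ns, A->B (e.g. a copper PHY adding a little extra latency)
reverse_delay = 500    # ns, B->A

t1 = 1_000_000                           # master sends Sync
t2 = t1 + forward_delay + true_offset    # slave receives Sync
t3 = t2 + 10_000                         # slave sends Delay_Req a bit later
t4 = t3 - true_offset + reverse_delay    # master receives Delay_Req

print(ptp_offset(t1, t2, t3, t4))        # 10.0 ns of error: half the 20 ns asymmetry
--- End code ---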

"All broadcast engineers are borderline time-nuts"
               (Another forum member to me, in privmsg)
