Author Topic: Big Test MFG says our Ethernet Chip violates the IEEE standard - are they right?  (Read 3690 times)


Offline metrologist (Topic starter)

  • Super Contributor
  • ***
  • Posts: 2206
  • Country: 00
Regarding cat5 vs. cat5e, I think it may be because the spec is guaranteed to 100m. Who here has tested this? I think short runs, e.g. 3m patch cables, are why I see Gbit speeds reported on cat3.

Some of the reporting here seems contradictory, but I am not a standards expert. Negotiating the highest common denominator is what I would expect. I only really care about what's happening at the port.

So I took a peek at the digital board and found https://www.ti.com/lit/ds/symlink/dp83867ir.pdf in a QFN package. I see all 16 data lines go off the backbone :-//

Edit: one thing I wanted to mention is that I had fiber installed along with a new router. My PC had a Gbit NIC, but I was only getting 10 Mbps (simple speedtest.net test). I changed the 10 m patch cable and then got near the specified 1000 Mbps. I did not keep track of that cable.
« Last Edit: October 21, 2021, 12:15:44 am by metrologist »
 

Online ve7xen

  • Super Contributor
  • ***
  • Posts: 1193
  • Country: ca
    • VE7XEN Blog
Regarding cat5 vs. cat5e, I think it may be because the spec is guaranteed to 100m. Who here has tested this? I think short runs, e.g. 3m patch cables, are why I see Gbit speeds reported on cat3.

Some of the reporting here seems contradictory, but I am not a standards expert. Negotiating the highest common denominator is what I would expect. I only really care about what's happening at the port.

Yeah, the cable quality is likely only going to matter near the edges of the spec, in an alien-crosstalk environment (i.e. a cable bundle), when EMI is a problem, etc. In practice you can use a compliant cable at considerably longer than the spec'd distance and it will work fine, and likewise an underspecced cable will often work just fine too. Like any spec, it's 'guaranteed' to work when you are in compliance; outside of that the behaviour is undefined, not guaranteed to break - and if it's going to break, it will break in the most annoying, intermittent way possible ;).

The PHY doesn't normally do any evaluation of the cable quality, so if the cable has sub-par characteristics it will generate errors and a high loss rate, but the link will almost certainly come up unless it's extremely bad or, as previously discussed, is missing pairs. Even a 'low' loss rate of around 1% will have a serious impact on throughput in most applications, because it causes many retries and messes with the rate estimation / congestion control algorithms at higher layers. It's up to the user to supply appropriate cabling or configure the equipment to match the supplied cable's limitations; the spec will not aid you here.
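Just to put a rough number on that: below is a quick back-of-the-envelope sketch (Python) using the well-known Mathis approximation for steady-state TCP throughput under random loss, rate ≈ MSS × 1.22 / (RTT × √p). The 1 ms RTT and 1460-byte MSS are illustrative assumptions, not measurements from anything in this thread.

Code:
# Rough sketch: Mathis et al. approximation for steady-state TCP throughput
# under random packet loss: rate ~= MSS * 1.22 / (RTT * sqrt(p)).
# MSS and RTT values are illustrative assumptions (typical LAN figures).
import math

MSS_BYTES = 1460          # typical Ethernet TCP segment payload
RTT_S = 0.001             # assume a 1 ms LAN round-trip time

def tcp_throughput_mbps(loss_rate):
    """Approximate achievable TCP throughput in Mbit/s for a given loss rate."""
    rate_bytes_per_s = MSS_BYTES * 1.22 / (RTT_S * math.sqrt(loss_rate))
    return rate_bytes_per_s * 8 / 1e6

for p in (1e-6, 1e-4, 0.01):
    print("loss %.6f -> ~%.0f Mbit/s (capped by the link rate)" % (p, tcp_throughput_mbps(p)))

Even with an optimistic 1 ms RTT, 1% loss already caps TCP at roughly 140 Mbit/s in this model, nowhere near line rate on a gigabit link.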

The PHY you found looks like it supports downshift (called 'speed optimization' here), but it seems to be disabled by default. AutoMDIX is also supported and enabled by default. So the most likely scenario is that the link partner also does not do speed downshift, and both sides were trying to link at 1G over a two-pair cable.
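And since the "highest common denominator" question came up earlier: here is a toy sketch (Python) of how 802.3 auto-negotiation resolves the link. The priority list is abridged from the usual Annex 28B ranking, so treat it as illustrative rather than normative. It shows why, without downshift on either end, a two-pair cable still negotiates to 1000BASE-T and then fails.

Code:
# Toy model of 802.3 auto-negotiation: each side advertises its abilities and
# the highest-priority mode common to both wins. The priority list is abridged
# and illustrative, not a copy of the standard.
PRIORITY = [              # highest priority first
    "1000BASE-T FD",
    "1000BASE-T HD",
    "100BASE-TX FD",
    "100BASE-TX HD",
    "10BASE-T FD",
    "10BASE-T HD",
]

def resolve(local, partner):
    """Return the best mode advertised by both sides, or None if none match."""
    for mode in PRIORITY:
        if mode in local and mode in partner:
            return mode
    return None

nic = {"1000BASE-T FD", "100BASE-TX FD", "10BASE-T FD"}
switch = {"1000BASE-T FD", "100BASE-TX FD", "100BASE-TX HD", "10BASE-T FD"}
print(resolve(nic, switch))  # -> 1000BASE-T FD, even over a two-pair cable

The negotiation itself runs over the same pairs 10/100 use, so it has no idea the other two pairs are missing; both sides happily agree on gigabit and the link then fails to come up unless one of them downshifts.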
« Last Edit: October 21, 2021, 12:39:28 am by ve7xen »
73 de VE7XEN
He/Him
 
The following users thanked this post: tooki

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11500
  • Country: ch
- Gigabit requires Cat 5 cable, but requires 4 pairs.

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

But 5e’s superior performance will get you a bit more length.

I suspect that cases where replacing 5 with 5e made gigabit work are actually due to either the old cable being substandard, or even more likely, the jacks being terminated poorly. I’ve seen many a jack where the pairs were untwisted far too long, which 100Mbps might tolerate, but gigabit will not.
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11500
  • Country: ch
- Gigabit requires Cat 5 cable, but requires 4 pairs.

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.

Really? I haven't looked at that extensively, but I have never seen a cat5 cable fail at gigabit speeds. The initial version of 1000Base-T specifically called for cat5 but with slightly modified specs -- I think only for NEXT, which was not specified in the cat5 standard but which most existing cat5 cables supported. Cat5e was then published, cat5 was withdrawn, and the Ethernet spec was modified to call for cat5e.
Every resource I’ve checked still says Cat 5, not 5e, as the minimum for gigabit.
 

Offline Ranayna

  • Frequent Contributor
  • **
  • Posts: 861
  • Country: de
Wasn't there some change in the specs some years ago?
I always heard that CAT 5e was essentially "backported" into CAT 5, making CAT 5e obsolete.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
- Gigabit requires Cat 5 cable, but requires 4 pairs.

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

Well, I have seen CAT5 cables fail at 1Gbit where CAT5E worked. The failing cables were only a few meters long. So explain that to me...
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online wraper

  • Supporter
  • ****
  • Posts: 16860
  • Country: lv
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.
It's not a misconception. CAT5 cable is not up to spec for gigabit Ethernet. It should work for shorter runs, and a given cable may even be as good as the 5e spec, but nothing is guaranteed.
 

Offline rsjsouza

  • Super Contributor
  • ***
  • Posts: 5986
  • Country: us
  • Eternally curious
    • Vbe - vídeo blog eletrônico
- Gigabit requires Cat 5 cable, but requires 4 pairs.

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

Well, I have seen CAT5 cables fail at 1Gbit where CAT5E worked. The failing cables were only a few meters long. So explain that to me...

That is my experience as well. The standard may say one thing, but after about a decade of doing Ethernet evaluations in various scenarios I can tell you that, in practice, GigE over anything longer than your run-of-the-mill freebie-accessory length (1 m or 1.5 m) is extremely flaky with Cat5 and even with ultra-cheap Cat5e. Reasonable-quality Cat5e creates solid GigE connections.
Vbe - vídeo blog eletrônico http://videos.vbeletronico.com

Oh, the "whys" of the datasheets... The information is there not to be an axiomatic truth, but instead each speck of data must be slowly inhaled while carefully performing a deep search inside oneself to find the true metaphysical sense...
 

Online ve7xen

  • Super Contributor
  • ***
  • Posts: 1193
  • Country: ca
    • VE7XEN Blog
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

When 1000base-T was specified, Cat5e didn't exist. So the standard references plain Cat5, but stipulates additional requirements that were not part of the Cat5 standard. These requirements were ultimately rolled into Cat5e. So while the 1000base-T spec doesn't call out Cat5e, it does require cable that is more stringently specified than Cat5, and Cat5e meets those requirements.

The new requirements concerned return loss and several forms of crosstalk. I find it somewhat hard to believe that people had actual problems with Cat5 patch cables simply because they were not Cat5e, since most would be compliant anyway, and you can't really get plain Cat5 anymore even if you wanted to; but without the specified RL/crosstalk parameters the specs don't offer any guarantees, I suppose. IME, problems with cabling are practically always due to poor termination or damaged cable.
73 de VE7XEN
He/Him
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3640
  • Country: us
The only IEEE 802.3 PMA supporting Cat3 that would be relevant is 10base-T. Clause 14.5.1 of 802.3-2018 (in the 10base-T MDI section) says:
100baseT4 also operates over Category 3 cable (using 4 pairs).
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11500
  • Country: ch
Wasn't there some change in the specs some years ago?
I always heard that CAT 5e was essentially "backported" into CAT 5, making CAT 5e obsolete.
No, the other way around: the Cat 5 standard was deprecated, meaning that the Cat 5e standard effectively replaced the Cat 5 standard.


Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.
It's not a misconception. CAT5 cable is not up to spec for gigabit Ethernet. It should work for shorter runs, and a given cable may even be as good as the 5e spec, but nothing is guaranteed.
It is a misconception: as ve7xen said, the Cat 5e standard didn’t even exist when gigabit was created!

Clearly, 5 is being pushed to its limits on gigabit, and it wouldn’t surprise me at all if low grade cables didn’t actually meet their claimed Cat 5 standard, which would explain those cables failing. (Noncompliant cables remain extremely widespread in the computer industry, and it’s only because we often are nowhere near length limits that some cables work at all.) Also, even a compliant cable could become noncompliant if mechanically abused, so this is another source of potential bad cables.


It’s all academic anyway, since practically all Ethernet cable these days is Cat 5e, 6, or 7.


Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

When 1000base-T was specified, Cat5e didn't exist. So the standard references plain Cat5, but stipulates additional requirements that were not part of the Cat5 standard. These requirements were ultimately rolled into Cat5e. So while the 1000base-T spec doesn't call out Cat5e, it does require cable that is more stringently specified than Cat5, and Cat5e meets those requirements.

The new requirements concerned return loss and several forms of crosstalk. I find it somewhat hard to believe that people had actual problems with Cat5 patch cables simply because they were not Cat5e, since most would be compliant anyway, and you can't really get plain Cat5 anymore even if you wanted to; but without the specified RL/crosstalk parameters the specs don't offer any guarantees, I suppose. IME, problems with cabling are practically always due to poor termination or damaged cable.
Thanks for the info!
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11500
  • Country: ch
- Gigabit requires Cat 5 cable, but requires 4 pairs.

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.
Nope. It’s a common misconception that gigabit requires Cat 5e. It does not.

Well, I have seen CAT5 cables fail at 1Gbit where CAT5E worked. The failing cables were only a few meters long. So explain that to me...
Could have been a noncompliant cable (just because a manufacturer says it's compliant doesn't mean it is, see below), or it could have been poorly terminated, or it could have been fully compliant originally but degraded due to damage.


Here’s a 2016 test of Cat 6 cables, finding that only 10% met the rated spec.

Another test from a few years before found a compliance rate of just 30% for Cat 5e cables and 15% for Cat 6 cables.

There’s no reason whatsoever to assume this has changed: most of the time the cables are short enough that it works anyway, and a few lost packets here and there aren’t noticeable anyway. But when we push cables to the limits of bandwidth and/or length, suddenly the noncompliant cables become problematic.

We see these problems a lot more with HDMI, whose higher bandwidth requirements (and fairly low tolerance for bit loss, thanks to encryption) spotlight the cable problems. Ethernet is just better at masking bad cables…
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3717
  • Country: us
Wasn't there some change in the specs some years ago?
I always heard that CAT 5e was essentially "backported" into CAT 5, making CAT 5e obsolete.

No, the cat5e standard superseded cat5.  Honestly anything sold recently as cat5 is somewhat suspect just on the grounds that it is claiming adherence to an obsolete standard.

Quote from: nctnico
Well, I have seen CAT5 cables fail at 1Gbit where CAT5E worked. The failing cables were only a few meters long. So explain that to me...

New or old cable? Honestly, any modern cable labeled as cat5 is suspect. It is a long-obsolete standard, and as far as I know reputable manufacturers haven't made it in ages. Old cables should be fine as long as they haven't been abused. My understanding, at least, is that the added specs for cat5e were A) already met by a lot of cat5, but with no guarantee or testing, and B) only required to reach the full 100 meter range including a wall jack and patch panel that interrupt the signal. A 5 meter patch cable directly connecting a device to a switch should definitely work, and has worked in every situation I have tried, although I haven't used cat5 cables in a long time.
 
The following users thanked this post: tooki

Online wraper

  • Supporter
  • ****
  • Posts: 16860
  • Country: lv
- Gigabit requires Cat 5 cable, but requires 4 pairs.

AFAIK Cat 5E actually. I had to redo a lot of wiring to replace plain CAT5 because it didn't work for 1Gbit.

Really? I haven't looked at that extensively, but I have never seen a cat5 cable fail at gigabit speeds. The initial version of 1000Base-T specifically called for cat5 but with slightly modified specs -- I think only for NEXT, which was not specified in the cat5 standard but which most existing cat5 cables supported. Cat5e was then published, cat5 was withdrawn, and the Ethernet spec was modified to call for cat5e.
Every resource I’ve checked still says Cat 5, not 5e, as the minimum for gigabit.
Every resource I checked says that cat5 is sufficient only up to 100Base-TX, including Belden, which makes these cables. https://www.belden.com/blogs/why-category-5e-cabling-is-becoming-outdated

Quote
It is a misconception: as ve7xen said, the Cat 5e standard didn’t even exist when gigabit was created!
He also said that it had additional requirements for which the Cat 5 spec was not sufficient.
« Last Edit: October 22, 2021, 06:59:09 am by wraper »
 

Offline mansaxel

  • Super Contributor
  • ***
  • Posts: 3554
  • Country: se
  • SA0XLR
    • My very static home page
We see these problems a lot more with HDMI, whose higher bandwidth requirements (and fairly low tolerance for bit loss, thanks to encryption) spotlight the cable problems. Ethernet is just better at masking bad cables…

Also, Ethernet is almost never run anywhere near its capacity. Sure, the packets are short and fast when they arrive, but there is usually lots of space between them, so retransmissions initiated by TCP have room to happen, even if they grossly reduce the effective transfer speed. (A true CSMA/CD net, i.e. 10 Mbit half-duplex, is another story, best left among the bad memories of the 90s.) Therefore, as long as there is spare capacity, errors will be tolerable in most scenarios. If you're running a protocol that always puts frames on the wire, like SDH, which is much more like the transmission you'll see in HDMI or SDI or AES3 or S/PDIF, there is always a verifiable frame on the wire, increasing the statistical probability that stochastic errors will surface. Try "filling the cable" by running FTP between fast end systems with tuned operating systems, and a bad cable will sour the day immensely.

Similar situations exist with longer or more spliced/patched cables: there is a probable error rate per meter of bad cable, and an error rate per patch/IDC splice too, and of course they're cumulative.
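A minimal sketch (Python) of that accumulation, with purely invented per-metre and per-splice error probabilities just to show the arithmetic:

Code:
# How independent per-segment error probabilities accumulate along a run.
# The per-metre and per-splice figures are invented for illustration only.
def frame_loss_probability(metres, splices, p_per_metre=1e-7, p_per_splice=1e-5):
    """End-to-end frame loss probability, assuming independent error sources."""
    p_ok = (1 - p_per_metre) ** metres * (1 - p_per_splice) ** splices
    return 1 - p_ok

for metres, splices in ((3, 0), (90, 2), (90, 6)):
    print("%3d m, %d splices -> loss probability %.2e"
          % (metres, splices, frame_loss_probability(metres, splices)))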

Finally, to put a metrology slant on this: there is a reported, verified difference in PDV (packet delay variation) between optical and electrical Gigabit Ethernet interfaces, and the reason is that a fair amount of error checking happens in the electrical interface that does not have to happen in the same way for the optical one. Of course, the differences are small, but if you are trying to build the best IEEE 1588-2008 (Precision Time Protocol) infrastructure possible, this will matter, since the assumption that packet time A->B == packet time B->A will not hold as often over copper as over fiber.
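To illustrate why that A->B == B->A assumption matters, here is a minimal sketch (Python) of the standard PTP offset arithmetic with invented nanosecond delays; the only point is that the estimate ends up biased by exactly half the path asymmetry.

Code:
# Minimal sketch of the IEEE 1588 offset calculation, showing that an
# asymmetric path biases the offset estimate by half the asymmetry.
# All delay figures are invented; only the arithmetic is standard.
def ptp_offset_estimate(true_offset_ns, delay_m_to_s_ns, delay_s_to_m_ns):
    """Simulate Sync / Delay_Req timestamps and return the estimated offset."""
    t1 = 1_000_000                                  # master sends Sync (master clock)
    t2 = t1 + delay_m_to_s_ns + true_offset_ns      # slave receives Sync (slave clock)
    t3 = t2 + 50_000                                # slave sends Delay_Req (slave clock)
    t4 = t3 + delay_s_to_m_ns - true_offset_ns      # master receives it (master clock)
    return ((t2 - t1) - (t4 - t3)) / 2              # standard offset formula

print(ptp_offset_estimate(120, 500, 500))   # symmetric path   -> 120.0 (exact)
print(ptp_offset_estimate(120, 520, 480))   # 40 ns asymmetry  -> 140.0 (20 ns error)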

"All broadcast engineers are borderline time-nuts"
               (Another forum member to me, in privmsg)

