Author Topic: Anyone used the Wiznet ethernet chips?  (Read 37397 times)


Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Anyone used the Wiznet ethernet chips?
« on: July 25, 2015, 09:40:54 pm »
I'm working on a new design which needs network connectivity. The chips from Wiznet, which offload the low-level stuff like TCP/IP and UDP, look appealing to me. Does anyone have experience with these chips? They seem to be available from Mouser and Digikey, so people must be buying them.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline asgard20032

  • Regular Contributor
  • *
  • Posts: 184
Re: Anyone used the Wiznet ethernet chips?
« Reply #1 on: July 25, 2015, 10:14:32 pm »
The Arduino ethernet shield uses them. But nowadays, when someone needs ethernet, it's far easier/cheaper to use an MCU with on-board ethernet than to buy an extra chip. Look at PIC32 and ARM Cortex; lots of parts come with ethernet.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #2 on: July 25, 2015, 11:15:54 pm »
I have been down that road before, but the problem is the TCP/IP stack. In my experience it takes a fair bit of ironing out bugs before the product is really stable. I'd like to avoid that for this project. Besides, the W5500 isn't that bad price-wise, considering most controllers need an external PHY and a microcontroller without ethernet (handling the rest of the application) can be extremely cheap.
« Last Edit: July 25, 2015, 11:28:33 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline lukier

  • Supporter
  • ****
  • Posts: 634
  • Country: pl
    • Homepage
Re: Anyone used the Wiznet ethernet chips?
« Reply #3 on: July 26, 2015, 12:25:19 am »
On the other hand, you could fix critical (security) bugs in the network stack as part of your in-field firmware upgrades (i.e. upgrading uIP there).

This chip probably has the same issue that TCP/IP offload engines have: the network stack is hardwired, and if it has a bug it becomes a serious vulnerability that is impossible to fix.
 

Offline JoeN

  • Frequent Contributor
  • **
  • Posts: 991
  • Country: us
  • We Buy Trannies By The Truckload
Re: Anyone used the Wiznet ethernet chips?
« Reply #4 on: July 26, 2015, 01:12:32 am »
On the other hand, you could fix critical (security) bugs in the network stack as part of your in-field firmware upgrades (i.e. upgrading uIP there).

This chip probably has the same issue that TCP/IP offload engines have: the network stack is hardwired, and if it has a bug it becomes a serious vulnerability that is impossible to fix.

Nothing is impossible to fix if your router can apply flexible filters.   O0
Have You Been Triggered Today?
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #5 on: July 26, 2015, 04:23:43 am »
I've used the W5100 and W5200, with driver code written from scratch.  I can share some experiences and opinions on those chips if it would be helpful to you.  But I'm not familiar with the W5500 specifically.
 

Online westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #6 on: July 26, 2015, 08:27:24 am »
These days, I'd worry about lack of IPv6...
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #7 on: July 26, 2015, 09:29:41 am »
I've used the W5100 and W5200, with driver code written from scratch.  I can share some experiences and opinions on those chips if it would be helpful to you.  But I'm not familiar with the W5500 specifically.
Since the W5500 seems to be an improved version of those chips, I would appreciate any comments on them! Why did you write your own driver code? I have not looked at the code provided by Wiznet yet.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #8 on: July 26, 2015, 12:09:07 pm »
The Wiznet chips are well supported on Arduino, but I have a PIC.  Searching for PIC code without any dependencies on Microchip peripheral libraries or similar, I came across this first:

http://www.ermicro.com/blog/?p=1773

Which is for AVR (not Arduino), but it was a good tutorial, with simple example code.  Looked easy enough to port, so I had a go at it.  With it and the W5100 datasheet, I had basic TCP/IP working in a few hours.  Then a couple more days experimenting, optimizing, and tying it into my personal programming environment, which ended up being a total rewrite.  Since I do this to learn more than to achieve an end result, for me it was time well spent.  None of it was particularly hard; the chip really does hide most of the complexity.

But there were some issues.  The W5100 has a linear memory space, containing both the control registers, and the circular (ring) buffers for RX/TX data.  Let's say you want to send multiple bytes of payload data over SPI to one of the circular buffers.  You must:

1) Lower the chip select (CS) line.
2) Send the "write memory" command.
3) Send the low and high bytes of the memory address, where you want to place the data byte.
4) Send the data byte.
5) You cannot send another byte, because the address doesn't autoincrement.  Sending another byte will *replace* the one you just sent, at the same memory address.  You have to send a new address, which requires raising CS and going back to #1.

Transferring 4 bytes for every byte of real data is terribly inefficient.  The W5100's speed is limited more severely by this than by its 10Mbps Ethernet.  Plus it's extra load on the host MCU, since you're recomputing the address for every byte.  And since you're having to toggle CS regularly, you can't even take full advantage of DMA.
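
For illustration, here is roughly what that framing costs from the host side. This is a minimal sketch, not Wiznet's official driver: the SPI/chip-select helpers are hypothetical, 0xF0 is the W5100's SPI write opcode, and the address byte order should be double-checked against the datasheet.

Code: [Select]
#include <stddef.h>
#include <stdint.h>

/* Hypothetical board-support helpers -- substitute your own SPI/GPIO code. */
void spi_cs_low(void);
void spi_cs_high(void);
void spi_transfer(uint8_t byte);

#define W5100_OP_WRITE 0xF0u  /* W5100 SPI write opcode (0x0F is read) */

/* Write ONE byte into the W5100 memory map (register or TX buffer area).
 * Every data byte needs the full 4-byte frame plus its own CS toggle. */
static void w5100_write_byte(uint16_t addr, uint8_t data)
{
    spi_cs_low();
    spi_transfer(W5100_OP_WRITE);
    spi_transfer((uint8_t)(addr >> 8));   /* address byte (check order vs datasheet) */
    spi_transfer((uint8_t)(addr & 0xFF)); /* other address byte */
    spi_transfer(data);
    spi_cs_high();  /* no autoincrement: re-frame for the next byte */
}

/* Copying a payload therefore costs 4 SPI bytes (and a CS toggle) per payload byte.
 * (Real code must also wrap addr inside the socket's ring buffer.) */
static void w5100_write_buf(uint16_t addr, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        w5100_write_byte(addr + (uint16_t)i, buf[i]);
}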

The W5100 also doesn't tristate MISO when CS is high.  This non-standard behavior really should be mentioned in the main datasheet, but it's not, only in a separate app note regarding SPI.  If you intend to have it share a bus with other SPI peripherals, you must add an additional buffer IC.

So I upgraded to the W5200.  It has 100Mbps Ethernet, and more memory; neither of which I actually needed.  But it autoincrements, and tristates MISO.  Presumably newer versions include these enhancements as well.  Converting my code was just a matter of putting in new register addresses and buffer ranges; other than the differences I listed, it's virtually identical.  I'm happy with this one.

There's one other thing worth mentioning.  Neither chip seems to have a power-on reset function, or if it does, it doesn't work well.  Do NOT skip manually asserting the reset signal after power-on.  Do NOT try to be clever like I did, and attempt to conserve an MCU pin by using a RC reset circuit.  Do NOT assume any pre-built module with these chips has a pull-up resistor on reset (mine didn't).  If you fail to reset it after power-on exactly as described in the datasheet, it may confound you by working many times in a row, then failing when you least expect it.  The W5200 can also fail to tristate MISO if not reset, blocking comms with other SPI devices on the same bus.
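
A hedged sketch of the kind of hard-reset sequence being described. The GPIO/delay helpers are placeholders and the hold/settle delays are guesses; take the real timing and any ready/ID check from the Wiznet datasheet.

Code: [Select]
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers -- replace with your own HAL. */
void gpio_set_wiznet_reset(bool level);   /* drives the chip's active-low RESET pin */
void delay_ms(uint32_t ms);
bool wiznet_id_reads_back_ok(void);       /* e.g. read a known version/ID register */

/* Hard-reset the chip at power-up; do not rely on its power-on reset. */
static bool wiznet_hw_reset(void)
{
    gpio_set_wiznet_reset(false);  /* assert RESET (active low) */
    delay_ms(2);                   /* placeholder hold time -- see datasheet */
    gpio_set_wiznet_reset(true);   /* release RESET */
    delay_ms(150);                 /* placeholder settle time before first SPI access */

    /* Sanity check: confirm the chip actually responds before trusting it. */
    return wiznet_id_reads_back_ok();
}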
 
The following users thanked this post: KE5FX, s8548a

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #9 on: July 26, 2015, 02:26:11 pm »
Thanks for these remarks!  :-+
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #10 on: July 26, 2015, 08:37:09 pm »
I considered Wiznet, but since the STM32F family already has ethernet-capable devices, it was more compact to choose that with lwIP.
Reflecting on the choice, I would also have bought a validated IP stack if I had to do it again.

There is also Lantronix, who build IP stacks into the RJ45 connector.
 

Online westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #11 on: July 27, 2015, 02:23:06 am »
The much remarked ESP8266 also has an internal TCP stack.  By default, it communicates with the host over serial, which might dramatically slow throughput.  Or not, depending on the complexity of the SPI protocol for the WizNet.

Lantronix I'd be very inclined to trust, but you pay for that!
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #12 on: July 27, 2015, 05:45:32 am »
I considered Wiznet, but since the STM32F family already has ethernet-capable devices, it was more compact to choose that with lwIP.
Reflecting on the choice, I would also have bought a validated IP stack if I had to do it again.
From company experience I can tell you that even commercial "validated" IP stacks from small companies need a lot of debugging for specific purposes like IPv6. Sometimes we had more SW engineers working on it than there were at the other company  :palm:
For my own hobby I am also interested in lwIP on STM32, since company code is restricted  :(
 Was your code for yourself or commercial? Are you willing/able to share? Or share the pitfalls/problems, perhaps in another topic so as not to hijack this one, sorry Nico. Still, lwIP on STM32 might be a better alternative than a standalone implementation if Jeroen, for instance, wants to share his work.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #13 on: July 27, 2015, 06:32:02 am »
The final implementation was commercial, but it was based on ChibiOS with lwIP.
lwIP proved to be a PITA because there is no proper documentation, so you're basically required to fully understand lwIP before using it. You'll have to look beyond the API, which takes a ginormous amount of time.

ChibiOS is great. I strongly recommend everyone use it at least once.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #14 on: July 27, 2015, 09:48:49 am »
The final implementation was commercial, but it was based on ChibiOS with lwIP.
lwIP proved to be a PITA because there is no proper documentation, so you're basically required to fully understand lwIP before using it. You'll have to look beyond the API, which takes a ginormous amount of time.
@Kjelt: I had a similar experience using uIP (I think lwIP is based on it). uIP was written with the idea that short code in C also produces short code when compiled; it's a complete mess  :palm: Unfortunately I had to leave my patched-up version (with a BSD socket layer) behind at my previous employer.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #15 on: July 27, 2015, 11:26:50 am »
That's a shame  :(
I can understand that you are going for a ready-set-go one-chip solution; faulty IP stacks can take a lot of time to debug/fix, and there goes the profit.
The interaction with the PHY also took a lot of time to get right.
Weird that such a common interface/protocol is still not available as good-quality, mainstream open source, especially with the huge trend towards networking small embedded devices  :-//
 

Offline Mechanical Menace

  • Super Contributor
  • ***
  • Posts: 1288
  • Country: gb
Re: Anyone used the Wiznet ethernet chips?
« Reply #16 on: July 27, 2015, 02:05:03 pm »
Weird that such a common interface/protocol is still not available as good-quality, mainstream open source, especially with the huge trend towards networking small embedded devices  :-//

If you asked most OSS devs who actually are low-level networking gurus* for a solution, their answer would almost certainly be "well, you have embedded Linux and BSD options, what more could you want for networking? SoCs and RAM are cheap.**" They may also understand every watt counting, but not every milliwatt.


*So not me unfortunately.
**That I believe is actually the approach of many a WiFi module among other things.
Second sexiest ugly bloke on the forum.
"Don't believe every quote you read on the internet, because I totally didn't say that."
~Albert Einstein
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #17 on: July 27, 2015, 03:45:59 pm »
Since I have experimented with both the W5200 and the ENC28J60, I would definitely suggest against the Wiznet chips unless your MCU is too small to handle the TCP/IP stack. The W5200 is all-in-one and can be bug-prone, and with one serious bug in the W5200 you have to recall your products.

The ENC28J60 is effectively only a MAC, so you have to roll your own IP stack. That allows simpler bug fixes. And the real balance tipper for me: the ENC28J60 already has a fully working Linux kernel driver, so if your MCU runs Linux, just compile that driver in and you can use the tested and trusted Linux TCP/IP stack. (Hey, with two ENC28J60s you may even be able to roll a router out of it.)
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #18 on: July 27, 2015, 05:28:39 pm »
Since I have experimented with both the W5200 and the ENC28J60, I would definitely suggest against the Wiznet chips unless your MCU is too small to handle the TCP/IP stack. The W5200 is all-in-one and can be bug-prone, and with one serious bug in the W5200 you have to recall your products.

The ENC28J60 is effectively only a MAC, so you have to roll your own IP stack. That allows simpler bug fixes.
If you have some way to update the firmware remotely then a software IP stack is easier to fix. Otherwise you'll have to recall anyway. Then again, a software IP stack is more likely to have bugs. To me the Wiznet W5500 chip is starting to look better and better. It is a product which has had several years of focused development (IOW: it's not the first version) and has a large user base. I looked through their forum and I don't see any signs of serious problems. Actually, I like the idea of having the IP stack running on a separate processor. Buffer overruns in the IP stack cannot affect the main application processor in any way. If the connection is lost, the main application processor can just reset (or power cycle) the Wiznet chip and try again.
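
To picture that recovery path, a small sketch of the supervision idea; every function here is a hypothetical placeholder for whatever W5500 driver ends up being used, not Wiznet's API.

Code: [Select]
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical driver hooks -- placeholders only. */
bool w5500_link_up(void);
bool w5500_socket_alive(uint8_t sock);
bool w5500_hw_reset(void);        /* hard reset via the RESET pin, as discussed above */
bool w5500_configure(void);       /* re-load MAC/IP settings and reopen sockets */

/* Called periodically by the main application: if the offloaded stack loses
 * its link or a socket wedges, reset and reconfigure the chip and carry on.
 * The application processor itself is never taken down by the network side. */
static void network_watchdog_poll(void)
{
    if (!w5500_link_up() || !w5500_socket_alive(0)) {
        (void)w5500_hw_reset();
        (void)w5500_configure();
    }
}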
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #19 on: July 27, 2015, 06:02:12 pm »
The W5200 is all-in-one and can be bug-prone, and with one serious bug in the W5200 you have to recall your products.

Anything can be bug prone.  But did you actually find a bug, or are you just speaking generally?

Originally I'd intended to use USB as the interface for my project, not Ethernet.  Spent two weeks trying to get a USB stack working on ARM, it kept crashing at random, looked in vain for the bug in the stack... only to eventually find it was a compiler bug.

Then decided my project was better done with a PIC.  So prior work on ARM stack was useless, and spent another two weeks tracking down actual bugs in Microchip USB stack.

Then discovered that too was useless.  The Windows USB drivers were so horribly inefficient, that transferring a mere 36K/sec, in frequent small packets, was consuming over half the CPU on the host PC.

Before these experiences, the ENC28J60 would have been my first choice.  But wasn't too keen on the possibility that another MCU-hosted stack might lead to another two weeks' frustration.  Not for a personal project, anyway.  For a product, maybe.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #20 on: July 27, 2015, 08:49:39 pm »
@nctnico: just took a glance at the datasheet; unsure, but it looks like it's only IPv4, not IPv6, and you don't get a MAC address, so you have to get a range for your company. Unsure, so check  ;)
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #21 on: July 28, 2015, 07:44:28 am »
The W5200 is all-in-one and can be bug-prone, and with one serious bug in the W5200 you have to recall your products.

Anything can be bug prone.  But did you actually find a bug, or are you just speaking generally?

Originally I'd intended to use USB as the interface for my project, not Ethernet.  Spent two weeks trying to get a USB stack working on ARM, it kept crashing at random, looked in vain for the bug in the stack... only to eventually find it was a compiler bug.

Then decided my project was better done with a PIC.  So prior work on ARM stack was useless, and spent another two weeks tracking down actual bugs in Microchip USB stack.

Then discovered that too was useless.  The Windows USB drivers were so horribly inefficient, that transferring a mere 36K/sec, in frequent small packets, was consuming over half the CPU on the host PC.

Before these experiences, the ENC28J60 would have been my first choice.  But wasn't too keen on the possibility that another MCU-hosted stack might lead to another two weeks' frustration.  Not for a personal project, anyway.  For a product, maybe.

As I said before, the ENC28J60 has a relatively good Linux driver. You can just go ahead and use Linux on your MCU, or, if you are willing to GPL your code, lift the Linux ENC28J60 driver and the Linux network stack code. That one is well tested and trusted.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #22 on: July 28, 2015, 10:12:44 am »
Using Linux or the Linux TCP/IP stack isn't an option on a microcontroller. There are not enough resources on a microcontroller to run that software. Besides that the Linux kernel source code is a complete mess so getting the TCP/IP stack out in one piece will be a large amount of work.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #23 on: July 28, 2015, 10:59:39 am »
Depends on your definition of microcontroller  :-\ . I totally agree that on a typical ARM Cortex-M3/M4 microcontroller (72-120MHz, 128-512kB flash and 64-128kB RAM) Linux is absolutely a no-go.
On a dual-core A7 micro with 800MHz and 512MB DDR RAM it is very feasible, as the Raspberry Pis show.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #24 on: July 28, 2015, 11:17:58 am »
A SoC or Raspberry PI would triple/quadruple the BOM cost for the circuit I have in mind and the CPU would be 99.99999% idle.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #25 on: July 29, 2015, 01:36:10 am »
Quote
A SoC or Raspberry PI would triple/quadruple the BOM cost for the circuit
Are you sure?  It looks like you can still get Model B+ Pis for under $30, and it would be "challenging" to put together a cpu+wiznet+magnetics+connectors for less than $10...
I have the same sort of aversion to designing in "module level" components, but I'm starting to feel like that's more habit than something I can really justify...

Quote
Weird that such a common interface/protocol is still not mainstream good quality open source available
For small microcontrollers, which were never a target for the protocols in question.  The "big system" TCP/IP code is so common/open that I start to worry about lack of diversity in implementations :-(
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #26 on: July 29, 2015, 07:01:18 am »
For small microcontrollers, which were never a target for the protocols in question. 
Indeed "were", past tense; the whole game has changed.
As the trend towards the (wrongly chosen name) "IoT" gains huge momentum, PoE is mature (so no mains connection is needed anymore) and guesstimates are that in 10 years' time billions of small resource-constrained devices will be networked. My worst nightmare is that those constrained IP stacks will lack (proper) security, receive no security updates, and that those $5-$10 IP-connected gadgets will become the main entry point for hackers to gain access to your home network. And I am not alone: McAfee has already written a report identifying this small-embedded-device security threat as one of the major threats for the coming years.
Also ARM has woken up and, for instance, bought a company with a constrained TLS implementation this year (PolarSSL), so they can sell their Cortex cores with a decent IP stack (unfortunately only available with the full-package expensive compiler licenses).

So if the BOM price is not a problem, for serious IP-connected devices I can only advise to:
- build upon an OS with good support and security updates,
OR keep the whole "constrained devices network" fully separated from the business/home network.
« Last Edit: July 29, 2015, 07:03:53 am by Kjelt »
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #27 on: July 29, 2015, 09:51:11 am »
Quote
A SoC or Raspberry PI would triple/quadruple the BOM cost for the circuit
Are you sure?  It looks like you can still get Model B+ Pis for under $30, and it would be "challenging" to put together a cpu+wiznet+magnetics+connectors for less than $10...
Don't forget the SD card needed for the Raspberry Pi, the extra assembly steps, freedom of connector placement, etc. I already did the calculation (the potential customer initially wanted to use the RPi) and a solution with a Wiznet + microcontroller is much cheaper. And since the device is to be mass produced, I'm not keen on relying on a single-source module. I already ran into a problem on a different project where the manufacturer of a module made an incompatible new version after one year, despite the promise to keep the module available for at least 10 years  :palm:

Security will be addressed as well. IMHO the biggest problem is not space-constrained devices but the fact that many developers and managers are totally clueless about security. Once the required functionality has been implemented, the product goes into the shops. Hacking a wired or wireless home control system is a piece of cake. I agree this will get worse, but we have seen the same with OS development and websites. In the beginning security wasn't considered and several wake-up calls were needed to get things right. The 'IoT' devices will go through a similar learning curve.
« Last Edit: July 29, 2015, 09:59:42 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #28 on: July 29, 2015, 10:12:21 am »
Quote
single source module.
Single-source module, single-source chip.  What's the difference?
(Although Wiznet, like FTDI, seems to do reasonable and useful things in their follow-on products...)
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #29 on: July 29, 2015, 01:54:04 pm »
Chips are usually produced in larger quantities, are aimed at a wider audience and cost more to design than modules so chips ought to have a longer availability to recoup the engineering costs.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #30 on: July 29, 2015, 02:09:16 pm »
Using Linux or the Linux TCP/IP stack isn't an option on a microcontroller. There are not enough resources on a microcontroller to run that software. Besides that the Linux kernel source code is a complete mess so getting the TCP/IP stack out in one piece will be a large amount of work.

In my projects, if a design outgrows a big AVR (like the ATmega2560) I go directly to either a Raspberry Pi 2 or an Allwinner A31s (both rocking quad-core Cortex-A7), so Linux is usually a must-have anyway. And a networking requirement means the project immediately outgrows the AVR, since I leave no network communication unencrypted for security reasons, and an AVR obviously cannot process proper SSL or IPSec even with Wiznet chipsets.

Unsecured IoT communication is a big no-no, since one router breach and you are screwed. And I don't think a micro has enough oomph to run any good, proper cryptography. You can also look around for other chips that support Linux, like the Samsung S3C2410 or Intel Quark (that one runs x86, by the way, and supports PCI Express so higher-grade computer-like parts can be used).
« Last Edit: July 29, 2015, 02:23:25 pm by technix »
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #31 on: July 29, 2015, 02:32:18 pm »
There are a few microcontrollers with a crypto module, with AES and DES. Such as STM32F7.

And remember that Ethernet =/= Internet. There are a few advantages of Ethernet, such as speed, solid hardware and cheap cabling.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #32 on: July 29, 2015, 02:53:26 pm »
Unsecured IoT communication is a big no-no, since one router breach and you are screwed.
Explain how exactly...
Security is more than pouring some encryption over a solution and being done with it. As a rule of thumb: a system should remain secure (= detect fraud) even when the encryption is broken. Security stands on three pillars: Authorisation, Authentication and Accounting; encryption isn't even mentioned specifically.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #33 on: July 29, 2015, 03:31:39 pm »
Unsecured IoT communication is a big no-no, since one router breach and you are screwed.
Explain how exactly...
Security is more than pouring some encryption over a solution and being done with it. As a rule of thumb: a system should remain secure (= detect fraud) even when the encryption is broken. Security stands on three pillars: Authorisation, Authentication and Accounting; encryption isn't even mentioned specifically.

For example, if Alice has an IoT-capable room heater that is not properly secured, Mallory, walking by her apartment with his smartphone, can breach her Wi-Fi security (the router breach I am talking about) and then command the heater to turn on at full power. This will at least turn Alice's room into a sauna and take a huge chunk out of her electricity bill by the time she is back, or even set her room on fire. This is a double authentication breach.

Or, for some Internet-accessible IoT gadgets like Philips Hue, Mallory can spread malware onto Alice's laptop that detects the presence of such a gadget (thus bypassing the router, which is also a firewall, effectively breaching it) and put on a freaky light show to scare the living s**t out of Alice at the least expected hour of the day.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #34 on: July 29, 2015, 03:39:29 pm »
There are a few microcontrollers with a crypto module, with AES and DES. Such as STM32F7.

And remember that Ethernet =/= Internet. There are a few advantages of Ethernet, such as speed, solid hardware and cheap cabling.

Not much difference between the Internet and Ethernet actually, since both run TCP/IP and have largely the same threat model (e.g. a laptop can get breached in the same way).
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #35 on: July 29, 2015, 03:42:09 pm »
Just found this: a US$2 chip that has the same oomph as a BBB and comes in an easy-to-solder TQFP package: the Allwinner A13. Couple that chip with some DRAM, a microSD card (or eMMC), an Ethernet PHY chip and a bunch of 1117 regulators, and you get a full-blown Linux platform that runs proper security and cryptography stacks.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #36 on: July 29, 2015, 03:54:17 pm »
Unsecured IoT communication is a big no-no, since one router breach and you are screwed.
Explain how exactly...
Security is more than pouring some encryption over a solution and being done with it. As a rule of thumb: a system should remain secure (= detect fraud) even when the encryption is broken. Security stands on three pillars: Authorisation, Authentication and Accounting; encryption isn't even mentioned specifically.

For example, if Alice has an IoT-capable room heater that is not properly secured, Mallory, walking by her apartment with his smartphone, can breach her Wi-Fi security (the router breach I am talking about) and then command the heater to turn on at full power. This will at least turn Alice's room into a sauna and take a huge chunk out of her electricity bill by the time she is back, or even set her room on fire. This is a double authentication breach.

Or, for some Internet-accessible IoT gadgets like Philips Hue, Mallory can spread malware onto Alice's laptop that detects the presence of such a gadget (thus bypassing the router, which is also a firewall, effectively breaching it) and put on a freaky light show to scare the living s**t out of Alice at the least expected hour of the day.
Now explain how this cannot happen when running Linux in the device.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #37 on: July 29, 2015, 06:25:00 pm »
Unsecured IoT communication is a big no-no, since one router breach and you are screwed.
Explain how exactly...
Security is more than pouring some encryption over a solution and being done with it. As a rule of thumb: a system should remain secure (= detect fraud) even when the encryption is broken. Security stands on three pillars: Authorisation, Authentication and Accounting; encryption isn't even mentioned specifically.

For example, if Alice has an IoT-capable room heater that is not properly secured, Mallory, walking by her apartment with his smartphone, can breach her Wi-Fi security (the router breach I am talking about) and then command the heater to turn on at full power. This will at least turn Alice's room into a sauna and take a huge chunk out of her electricity bill by the time she is back, or even set her room on fire. This is a double authentication breach.

Or, for some Internet-accessible IoT gadgets like Philips Hue, Mallory can spread malware onto Alice's laptop that detects the presence of such a gadget (thus bypassing the router, which is also a firewall, effectively breaching it) and put on a freaky light show to scare the living s**t out of Alice at the least expected hour of the day.
Now explain how this cannot happen when running Linux in the device.

I was saying that it is more difficult to set up a proper cryptography stack on a micro. Linux can also be improperly configured, but it is a lot easier to achieve a proper security stack there than on a micro, since:
  • Linux's network stack is a lot better tested than whatever you can roll for yourself or used in chips like W5200 and friends - it is used in 95% of all servers worldwide
  • Linux-based platforms have easier-to-use, well-tested security software like OpenSSL or OpenSwan
  • In some cases you don't even need to write code to secure your entry points.

The last point can be demonstrated with any of these setups:
  • Apache server + mod_ssl + mod_authnz_external + pwauth. mod_ssl gives your Apache web server SSL support - making sure the communication cannot be eavesdropped on - and mod_authnz_external + pwauth lets you authenticate the connection against an encrypted (shadowed) password database (which is also the system's main user database, accessed via PAM). Your application code needs to be web based and served through the Apache server here.
  • StrongSwan alone. This implements the IKEv2 VPN protocol, which authenticates both hosts of the connection as well as the user. It can carry any protocol safely.
  • StrongSwan or OpenSwan alone. This implements a simpler IPSec setup that authenticates both hosts of the connection. Your code needs to be aware of this, though.
  • StrongSwan or OpenSwan + xl2tpd + pppd. This combo implements the L2TP/IPSec VPN protocol, which has a similar but slightly weaker level of protection than IKEv2.
  • pptpd + pppd. This implements the PPTP VPN protocol. Not so useful now, as PPTP is considered broken.
 

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
Re: Anyone used the Wiznet ethernet chips?
« Reply #38 on: July 29, 2015, 06:29:14 pm »
Linux's network stack is a lot better tested than whatever you can roll for yourself or used in chips like W5200 and friends - it is used in 95% of all servers worldwide

95%? Really? I rather doubt it.

Quote
Linux-based platforms have easier-to-use, well-tested security software like OpenSSL or OpenSwan

You choose OpenSSL as an example of well tested, secure software? Bwaaahahhahah.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #39 on: July 29, 2015, 06:56:10 pm »
An encrypted path between an IoT device and a host can be achieved using a crypto engine in a microcontroller and its counterpart at the 'other side', with much less chance of configuring it wrong or leaving other holes open. SSL is intended to connect securely between random hosts/devices. An IoT device in general does not do that; it is usually paired with one or more 'hosts' and does not have a user (interface) to supply credentials. All in all, the security model is entirely different. The key problem with an IoT device is authentication. How can a host be sure it is talking to the right device, and how can the IoT device be sure it is talking to the right host?
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #40 on: July 29, 2015, 07:18:35 pm »
For example, if Alice has an IoT-capable room heater that is not properly secured, Mallory, walking by her apartment with his smartphone, can breach her Wi-Fi security (the router breach I am talking about) and then command the heater to turn on at full power. This will at least turn Alice's room into a sauna and take a huge chunk out of her electricity bill by the time she is back, or even set her room on fire. This is a double authentication breach.

Or, for some Internet-accessible IoT gadgets like Philips Hue, Mallory can spread malware onto Alice's laptop that detects the presence of such a gadget (thus bypassing the router, which is also a firewall, effectively breaching it) and put on a freaky light show to scare the living s**t out of Alice at the least expected hour of the day.

In both your examples, the real security failure wasn't in the IOT device at all, but in something else (the Wi-Fi router, the laptop, or the user for misconfiguring security on those devices).

I'm no security expert, but out of curiosity I searched and found examples of low-end MCUs running RSA and AES-128.  Probably not fast, but an IOT device is not going to be transferring megabytes per second.  I agree that if you roll your own, it's likely to be flawed, but that's not so bad if it's a secondary security method - because then it requires successful exploit of two different flaws, rather than just one.  (Whereas with a router and IOT device both running Linux, and going without updates for years as is common for embedded devices, the likelihood of discovery of a single flaw that works for both increases; especially with complex security methods.  I wonder how many embedded Linux devices are still susceptible to Heartbleed?)

But honestly, if someone has breached my Wi-Fi or laptop, a scary but harmless light show would be the least of my worries.  In fact it would be a welcome notification.  I say leave the lights unprotected, let them function as a honeypot.

A heater is a more serious matter.  But starting a fire?  No heater should be capable of setting the room on fire, simply by commanding it to, period.  Sane thermostatic limits should be built-in, hardwired, and functioning independently of the MCU.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #41 on: July 30, 2015, 05:54:12 am »
An encrypted path between an IoT device and a host can be achieved using a crypto engine in a microcontroller and its counterpart at the 'other side', with much less chance of configuring it wrong or leaving other holes open. SSL is intended to connect securely between random hosts/devices. An IoT device in general does not do that; it is usually paired with one or more 'hosts' and does not have a user (interface) to supply credentials. All in all, the security model is entirely different. The key problem with an IoT device is authentication. How can a host be sure it is talking to the right device, and how can the IoT device be sure it is talking to the right host?

IKEv2, L2TP/IPSec and plain IPSec all have mandatory host authentication using a PKI (all three) or a pre-shared key (L2TP/IPSec and plain IPSec), and HTTPS supports (optional) client certificate authentication, which can also be used as a form of host authentication. Those are all tested and trusted methods.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #42 on: July 30, 2015, 06:03:15 am »
For example, if Alice has an IoT-capable room heater that is not properly secured, Mallory, walking by her apartment with his smartphone, can breach her Wi-Fi security (the router breach I am talking about) and then command the heater to turn on at full power. This will at least turn Alice's room into a sauna and take a huge chunk out of her electricity bill by the time she is back, or even set her room on fire. This is a double authentication breach.

Or, for some Internet-accessible IoT gadgets like Philips Hue, Mallory can spread malware onto Alice's laptop that detects the presence of such a gadget (thus bypassing the router, which is also a firewall, effectively breaching it) and put on a freaky light show to scare the living s**t out of Alice at the least expected hour of the day.

In both your examples, the real security failure wasn't in the IOT device at all, but in something else (the Wi-Fi router, the laptop, or the user for misconfiguring security on those devices).

I'm no security expert, but out of curiosity I searched and found examples of low-end MCUs running RSA and AES-128.  Probably not fast, but an IOT device is not going to be transferring megabytes per second.  I agree that if you roll your own, it's likely to be flawed, but that's not so bad if it's a secondary security method - because then it requires successful exploit of two different flaws, rather than just one.  (Whereas with a router and IOT device both running Linux, and going without updates for years as is common for embedded devices, the likelihood of discovery of a single flaw that works for both increases; especially with complex security methods.  I wonder how many embedded Linux devices are still susceptible to Heartbleed?)

But honestly, if someone has breached my Wi-Fi or laptop, a scary but harmless light show would be the least of my worries.  In fact it would be a welcome notification.  I say leave the lights unprotected, let them function as a honeypot.

A heater is a more serious matter.  But starting a fire?  No heater should be capable of setting the room on fire, simply by commanding it to, period.  Sane thermostatic limits should be built-in, hardwired, and functioning independently of the MCU.

The big problem is that the intranet IoT device could have defended itself, but it didn't. Also, some IoT gear establishes connections (or even opens listening ports) to the Internet, so it has to defend itself.

It is also possible for a heater to start a fire without breaching the temperature control, if the heater is not well maintained or some flammable material is put too close to it.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #43 on: July 30, 2015, 06:32:00 am »
Way too often you find an alarm system with a web interface using the default password.
They'll tell you it's safe because nobody can access the network... Right... But what if an employee turns rogue?

Quote
You choose OpenSSL as an example of well tested, secure software? Bwaaahahhahah.
You say that, but because it is used widely they did find the bug. Which might not be the case with a proprietary implementation.
The bug might live for years before someone starts exploiting it, and then it's too late.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #44 on: July 30, 2015, 07:09:02 am »
Way too often you find an alarm system with a web interface using the default password.
They'll tell you it's safe because nobody can access the network... Right... But what if an employee turns rogue?

Quote
You choose OpenSSL as an example of well tested, secure software? Bwaaahahhahah.
You say that, but because it is used widely they did find the bug. Which might not be the case with a proprietary implementation.
The bug might live for years before someone starts exploiting it, and then it's too late.

I support this.

Security system auditing has to be done at the source code level, and if you use the built-in crypto engine of an MCU it is next to impossible to audit that, which will cause some security-aware clients not to select your system. OpenSSL had a vulnerability, but thanks to it being open source the bug was fixed within hours and within weeks everybody had patched their OpenSSL installation. On the other hand, Microsoft Windows has been suspected of having NSA backdoors for ages.

For security packages I would prefer using tested and trusted, well-audited open source products like OpenSSL (1.0.1g+), GnuTLS, StrongSwan, OpenSSH or GNU Privacy Guard, even though using those means I have to use a Linux-capable MCU.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #45 on: July 30, 2015, 07:35:33 am »
I second Nico: if you have a constrained device that only needs to communicate with a single or very restricted set of other constrained devices, you can implement a symmetric (PSK) key based security layer IF you know what you are doing (so stay close to the known security implementations/standards).
This means setting up sessions using good randoms on both sides and key derivation from the PSK, aka the "known partners" design pattern.
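
A rough illustration of that idea, assuming an hmac_sha256() primitive from whatever crypto library is at hand; the names and sizes are placeholders, and a production design should follow an established KDF such as HKDF or NIST SP 800-108.

Code: [Select]
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed primitive from your crypto library. */
void hmac_sha256(const uint8_t *key, size_t key_len,
                 const uint8_t *msg, size_t msg_len,
                 uint8_t out[32]);

/* Derive a per-session key from the long-term PSK and fresh random nonces
 * contributed by both sides. The PSK itself never encrypts traffic; only
 * the derived session key feeds the authenticated cipher (e.g. AES-CCM). */
static void derive_session_key(const uint8_t psk[32],
                               const uint8_t nonce_device[16],
                               const uint8_t nonce_host[16],
                               uint8_t session_key[32])
{
    uint8_t info[32];
    memcpy(info, nonce_device, 16);
    memcpy(info + 16, nonce_host, 16);
    hmac_sha256(psk, 32, info, sizeof info, session_key);
}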

Using TLS is only valid if you need (open) access to a larger audience, even unknown third parties that you need to identify based on their certificates.
That automatically involves public key cryptography, which uses lots of RAM due to the large keys (yes, even with ECC it takes up a lot of RAM and cycles).
Using OpenSSL even with a super-small ciphersuite like AES-CCM8 takes over 30kB of flash and 10kB of RAM, which can be reduced, but that takes a lot of time and again you need to know exactly what you are doing or you can introduce weaknesses.

So it all depends on the implementation/usage of the device whether OpenSSL, for instance, makes sense or is just overkill.

The claim that OpenSSL is safer because it is open source was a bit debunked IMO by the latest bug finds: they were introduced a couple of years ago and nobody checked or found them. It was introduced through the fault of one programmer, so there was no good review in place, or testing, or other checks and balances. Because it is open source and used in a lot of systems, you can be sure that hackers and government agencies are checking this code as well, and if they find something they are not going to share it.

 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #46 on: July 30, 2015, 08:09:55 am »
I should link this. It is very important that you never apply this kind of security, even though it's tempting to use in embedded systems.
https://en.wikipedia.org/wiki/Security_through_obscurity

Example: using a magic packet is not security.
http://wiki.openwrt.org/toh/netgear/telnet.console
Deriving your password key from the MAC or the name is not security, even if nobody but the engineers knows.
http://www.gnucitizen.org/blog/default-key-algorithm-in-thomson-and-bt-home-hub-routers/
 

Offline Mechanical Menace

  • Super Contributor
  • ***
  • Posts: 1288
  • Country: gb
Re: Anyone used the Wiznet ethernet chips?
« Reply #47 on: July 30, 2015, 08:18:28 am »
The claim that OpenSSL is safer because it is open source was a bit debunked IMO by the latest bug finds: they were introduced a couple of years ago and nobody checked or found them.

That's hardly an open-source-only problem. And they did find it, otherwise you wouldn't know about it ;) TBH, as bad as that could have been, it wasn't, thanks to quick reactions when the problem was found, and OpenSSL does have a good track record. As much as I prefer OSS, I'd say the track record is more important when making a decision on something like that.
Second sexiest ugly bloke on the forum.
"Don't believe every quote you read on the internet, because I totally didn't say that."
~Albert Einstein
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #48 on: July 30, 2015, 09:12:55 am »
I second Nico: if you have a constrained device that only needs to communicate with a single or very restricted set of other constrained devices, you can implement a symmetric (PSK) key based security layer IF you know what you are doing (so stay close to the known security implementations/standards).
This means setting up sessions using good randoms on both sides and key derivation from the PSK, aka the "known partners" design pattern.

Using TLS is only valid if you need (open) access to a larger audience, even unknown third parties that you need to identify based on their certificates.
That automatically involves public key cryptography, which uses lots of RAM due to the large keys (yes, even with ECC it takes up a lot of RAM and cycles).
Using OpenSSL even with a super-small ciphersuite like AES-CCM8 takes over 30kB of flash and 10kB of RAM, which can be reduced, but that takes a lot of time and again you need to know exactly what you are doing or you can introduce weaknesses.

So it all depends on the implementation/usage of the device whether OpenSSL, for instance, makes sense or is just overkill.

The claim that OpenSSL is safer because it is open source was a bit debunked IMO by the latest bug finds: they were introduced a couple of years ago and nobody checked or found them. It was introduced through the fault of one programmer, so there was no good review in place, or testing, or other checks and balances. Because it is open source and used in a lot of systems, you can be sure that hackers and government agencies are checking this code as well, and if they find something they are not going to share it.

This is a kind of Security by Obscurity which is flat out useless if your communication channel is breached. When designing a security protocol you cannot assume the underlying communication channel is safe - instead you should always assume the channel is breached and all communication going over the wire is transparent to everybody. This is why even for the most trusted channel at least some Diffie-Hellman is required. And for IoT applications the controlled device must be 100% sure who is controlling it.

Remember that any wireless solution can be listened to and analysed (a US$10 RTL-SDR dongle is enough to do this, and I have one). Wired solutions are more difficult to breach, but that does not mean nobody can sneak up to an unprotected section of your communication cable and attach a bug to it.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #49 on: July 30, 2015, 09:26:42 am »
I should link this. It is very important that you never apply this kind of security, even though it's tempting to use in embedded systems.
https://en.wikipedia.org/wiki/Security_through_obscurity
From a functional point of view: security is always obscurity. Either the encryption method is a secret (obscure) or the key is secret (obscure). Encryption systems needing to be open to the public is just a (strong) opinion. As others pointed out it doesn't help to prevent bugs. The fact is that an open encryption system takes a lot more CPU cycles than a proprietary one and either will be broken into at some point in the future due to increasing CPU power and analysis of encryption systems.

Anyway, you can use any encryption method you want in an IoT device. If someone is serious about hacking your system they'll have the flash contents read and then they'll have the encryption keys and encryption method. This would allow them to read and write messages as if they are the IoT device. However there are ways to implement a protocol in such a way that this is easy to spot.
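
nctnico doesn't say which mechanism he has in mind, but one example of a protocol feature that makes a cloned device easy to spot is an authenticated, strictly increasing message counter: a second device using the same extracted key quickly produces duplicate or backwards-jumping counters. The hmac_sha256() helper is the same assumed primitive sketched earlier in the thread; everything else is illustrative only.

Code: [Select]
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed crypto primitive. */
void hmac_sha256(const uint8_t *key, size_t key_len,
                 const uint8_t *msg, size_t msg_len, uint8_t out[32]);

struct device_state {
    uint8_t  key[32];       /* per-device key */
    uint32_t last_counter;  /* highest counter accepted so far */
};

/* Host-side check: each message carries a counter and a MAC over
 * (counter || payload). Replays and parallel use of cloned credentials
 * show up as duplicate or out-of-sequence counters. */
static bool accept_message(struct device_state *dev, uint32_t counter,
                           const uint8_t *payload, size_t len,
                           const uint8_t mac[32])
{
    uint8_t buf[4 + 256];
    uint8_t expect[32];

    if (len > 256 || counter <= dev->last_counter)
        return false;                       /* replay or cloned sender */

    buf[0] = (uint8_t)(counter >> 24);
    buf[1] = (uint8_t)(counter >> 16);
    buf[2] = (uint8_t)(counter >> 8);
    buf[3] = (uint8_t)counter;
    memcpy(buf + 4, payload, len);
    hmac_sha256(dev->key, sizeof dev->key, buf, 4 + len, expect);

    if (memcmp(expect, mac, sizeof expect) != 0)  /* use a constant-time compare in production */
        return false;

    dev->last_counter = counter;
    return true;
}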
« Last Edit: July 30, 2015, 09:49:49 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #50 on: July 30, 2015, 09:50:37 am »
I should link this. It is very important that you never apply this kind of security, even though it's tempting to use in embedded systems.

This is a kind of Security by Obscurity which is flat out useless if your communication channel is breached.

No, it definitely is not security by obscurity; you both misread or misinterpreted my post then.
If you use authenticated encryption with a secret PSK and are NIST compliant on both sides, you are doing the same thing as with, for instance, the AES-CCM ciphersuite, and you do not need any DHKE. The weakest link is the master key, as it should be.

 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #51 on: July 30, 2015, 12:32:42 pm »
I should link this. It is very important that you never apply this kind of security, even though it's tempting to use in embedded systems.

This is a kind of Security by Obscurity which is flat out useless if your communication channel is breached.

No, it definitely is not security by obscurity; you both misread or misinterpreted my post then.
If you use authenticated encryption with a secret PSK and are NIST compliant on both sides, you are doing the same thing as with, for instance, the AES-CCM ciphersuite, and you do not need any DHKE. The weakest link is the master key, as it should be.

Now that you already have AES-CCM, please tell me how much more it will cost you to step up to an MCU that runs a proper copy of Linux. The Allwinner A13 + its accompanying PMIC costs 1.5 bucks a pop; add one DDR2 SDRAM chip and one NAND (or eMMC) chip (or microSD card) and you've got a full-blown BBB equivalent that runs full-blown Linux at 800MHz+. Then you can add an Ethernet PHY chip (which can be cheaper than an ENC28J60 or W5200) interfacing with the A13's MII to use the onboard 10/100 Ethernet.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #52 on: July 30, 2015, 12:55:23 pm »
It is software and <10kB
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #53 on: July 30, 2015, 02:04:57 pm »
It is software and <10kB

You don't need special software to program for the Allwinner A13 - it is all open source. And as for the size limit: with the NAND/SD card support of the A13 your program (which runs on Linux) can expand to a few gigabytes if you want to (e.g. a big database), and you can give it up to 2GB of RAM.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #54 on: July 30, 2015, 02:27:33 pm »
Now that you already have AES-CCM, please tell me how much more it will cost you to step up to an MCU that runs a proper copy of Linux. The Allwinner A13 + its accompanying PMIC costs 1.5 bucks a pop; add one DDR2 SDRAM chip and one NAND (or eMMC) chip (or microSD card) and you've got a full-blown BBB equivalent that runs full-blown Linux at 800MHz+. Then you can add an Ethernet PHY chip (which can be cheaper than an ENC28J60 or W5200) interfacing with the A13's MII to use the onboard 10/100 Ethernet.
I read a lot of 'adds', which gets me into the ballpark of $30, AND a lot of extra software engineering because Linux board support packages need a lot of bug fixing, AND the need for a 6-layer PCB instead of 2 layers. Using Linux doesn't add up to something cheaper and (in this case) better.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Kjelt

  • Super Contributor
  • ***
  • Posts: 6460
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #55 on: July 30, 2015, 03:03:07 pm »
BTW I had my security protocol running on a low-power STM8, low power meaning 5mA; I would really like to see you do that with something running Linux  :)
But I agree that if someone has to make a 24/7 connected device that is accessible from third parties' web browsers and should run TLS, then an embedded Linux device would be very nice.
As Nico said, the budget is often a restriction, power can be a restriction, and so can software maintenance (do you want Nico to update each device each and every time a bug is found?)
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #56 on: July 30, 2015, 05:18:18 pm »
Security through "obscurity" in this context means using anything but a known and common standard.  It's easy to show many examples of this type of security gone wrong.  But that's not entirely due to poor decisions on the part of the implementers, it's also due to the prevalence of this kind of security.  Looking at the failures alone is educational, but ultimately misleading as to the effectiveness of this kind of security.  One must also consider that for every one that went wrong, many more went right.

That could even include some systems that are technically weak and easily broken. I have found security exploits on a few devices. One is quite scary in terms of what it could be used for. But this was done for personal use only, to get around some limitation of the device. I didn't use my knowledge to write a virus. Nor did I publicly document it like some security "researchers" tend to do, under a false flag of increasing security awareness. And no one else appears to have done this either. Therefore, the public can continue using these devices, without fear of script kiddies like Mallory walking around and causing mischief. Even if something can be exploited, there is no issue unless the exploit actually enters the wrong hands.

As for the Allwinner A13 being $2, that is very likely a bogus claim, originating from a particular Kickstarter for "CHIP - The World's First Nine Dollar Computer".  Many people have analyzed the BOM for this and found $9 to be completely unrealistic, concluding that it's either a scam, or a loss-leader on which they're recovering the loss based on sales of overpriced add-on boards.  In particular, Olimex concluded this product actually has a BOM of about $20.  Part of that was getting a quote from Allwinner on the A13, which was $4.80 in quantity 5,000.

Furthermore, given the complexity of the A13 versus the quality of support and documentation from Allwinner, rolling your own product around the bare IC would likely be a painful process, significantly increasing development cost. The A13 recommendation is not realistic IMO.

Finally, I'm curious.  As it was not stated in this thread, does anyone here actually know what [nctnico]'s device does?  If not, then recommendations for Linux and idealized security might prove to be a bit silly, if it turns out he's developing an Ethernet-controlled dancing Coke can. ;)  (Or something equally innocuous if hacked.)
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #57 on: July 30, 2015, 05:44:33 pm »
Security is an issue because people will definitely try to break into the device to attempt fraud (it needs to pass a certification test and security is part of it). OTOH it is also cost sensitive and it needs to be in production within a couple of months  :scared:
« Last Edit: July 30, 2015, 05:54:14 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #58 on: July 30, 2015, 07:53:04 pm »
Security is an issue because people will definitely try to break into the device to attempt fraud (it needs to pass a certification test and security is part of it). OTOH it is also cost sensitive and it needs to be in production within a couple of months  :scared:

Oh my.  Your project is serious, and it's the classic engineer's paradox.  Sorry to hear that.

Are you comfortable identifying and implementing the requirements to pass the certification, from near scratch?  Another thread dedicated to discussion of the feasibility and time requirements on your target class of hardware might get some better answers.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #59 on: July 31, 2015, 06:54:33 am »
You've made the client aware of this triangle (good, fast, cheap - pick any two)?
 

Offline MagicSmoker

  • Super Contributor
  • ***
  • Posts: 1408
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #60 on: July 31, 2015, 11:41:17 am »
Getting back on topic... we used the WIZ810MJ module to provide an ethernet/web interface in a low volume product for about 6 years and the firmware and/or tcp/ip stack inside it was, as others have mentioned, pretty buggy. Worse is that none of the bugs were fixed in those 6 years. Now we use Cortex ARMs with a built in MAC and an external PHY (Microchip [nee SMSC] LAN8720Ai) for ethernet and uIP for the tcp/ip stack.

The stuff we make uses the ethernet port for configuration/telemetry and isn't connected to the internet so we don't really care about security, per se. YMMV
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #61 on: July 31, 2015, 02:52:41 pm »
Can you elaborate on the bugs?
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline MagicSmoker

  • Super Contributor
  • ***
  • Posts: 1408
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #62 on: July 31, 2015, 03:36:42 pm »
Can you elaborate on the bugs?

We used the WIZ810MJ to bolt on ethernet connectivity to an 8b AVR so that customers could change parameters and stream real time data without requiring any special hardware (anything with an ethernet port and a web browser). The biggest and most annoying bug was that the WIZ810MJ would hang if you tried to scroll down a web page before it finished loading, but it would also hang for no reason at all. You would have to cycle power to clear the error because toggling the hardware reset line on the WIZ810MJ module did nothing (a waste of a good I/O pin on the AVR for that one). And this didn't just affect TCP/IP; UDP functionality was lost, too, so we think the problem is in the W5100 firmware, and not necessarily its (built-in/proprietary) TCP/IP stack.

Complicating the above was that data transfer was really slow even though the SPI bus was clocked at 6MHz and the web pages served by it were basically plain text forms (with one logo picture that took up all of 1.5k). That, of course, led to more customers scrolling down the web page before it was finished loading.

At any rate, it was the number one cause of customer support questions and it wasn't that cheap, either (~25USD) so it made the decision to go with an ARM that has a MAC, at least, plus an external PHY a lot easier. Getting the 50MHz RMII bus for the PHY to work in a very high noise environment took a couple of board revisions and uIP required some tweaking by our software engineer, but given that this route only cost ~8USD and isn't buggy it was worth the extra effort.

 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #63 on: July 31, 2015, 06:46:42 pm »
Interesting.  A couple of thoughts:

1) Reset must be asserted for at least 2us to work as per the datasheet, must be driven both active low and high as the WIZ810MJ lacks a pull-up on the reset line, and must be asserted after power-up else inconsistent behavior can result.  (I had all sorts of weird trouble until I did these things properly.)

2) The W5100 would not be aware of scrolling in the browser, unless there were elements in the webpage that caused the browser to open additional sockets, in an attempt to load those elements in parallel regardless of how many requests were already pending.  The W5100 can handle no more than 4 open sockets, but may be configured for as few as 1.  What you experienced suggests there may have been an issue with the webpage design, that caused it to exceed the basic limitations of the W5100.  Regardless, hanging is certainly NOT an acceptable outcome under any circumstances.  I did not test opening more sockets on my W5100 than it was configured for, but I have seen an Internet-connected W5100 serving up something much like you described.  When it was featured on a major website, it received a lot of hits; and frequently rejected connections, but did not hang.  So I suspect a solvable issue in the MCU-side software, rather than something inaccessible in the W5100 itself.

Sounds like the ARM route worked out nicely for you in the end though, and that's all that matters.
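
For completeness, the power-up/reset handling from point 1 boils down to something like the sketch below on my boards (the gpio_write()/delay_*() calls are placeholders for whatever your MCU provides; only the >=2us minimum low time comes from the W5100 datasheet, the other delays are just conservative values that worked for me):

Code: [Select]
/* Placeholder HAL wrappers - substitute your MCU's GPIO/delay functions. */
extern void gpio_write(int pin, int level);
extern void delay_us(unsigned int us);
extern void delay_ms(unsigned int ms);

#define PIN_W5100_RESET 0   /* whichever pin drives the module's reset line */

void w5100_hw_reset(void)
{
    /* Drive reset actively high first: the WIZ810MJ has no pull-up,
       so the line must never be left floating. */
    gpio_write(PIN_W5100_RESET, 1);
    delay_ms(10);                     /* let the supply settle after power-up */

    gpio_write(PIN_W5100_RESET, 0);   /* assert reset... */
    delay_us(5);                      /* ...for at least 2us per the datasheet */
    gpio_write(PIN_W5100_RESET, 1);   /* release it, again actively driven high */

    delay_ms(10);                     /* give the chip time before any SPI access */
}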
 

Offline Sal Ammoniac

  • Super Contributor
  • ***
  • Posts: 1672
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #64 on: July 31, 2015, 10:51:28 pm »
Linux's network stack is a lot better tested than whatever you can roll for yourself or used in chips like W5200 and friends - it is used in 95% of all servers worldwide

I doubt it's close to 95%. It is substantial, though, and that means there are hordes of hackers attempting to crack it on a continuous basis.
Complexity is the number-one enemy of high-quality code.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #65 on: August 01, 2015, 10:17:42 am »
Linux's network stack is a lot better tested than whatever you can roll for yourself or used in chips like W5200 and friends - it is used in 95% of all servers worldwide
I doubt it's close to 95%. It is substantial, though, and that means there are hordes of hackers attempting to crack it on a continuous basis.
The number of servers and desktops running Linux is dwarfed by the number of Android devices (Android uses the Linux kernel). So yes people are likely to want to hack the Linux network stack.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #66 on: August 01, 2015, 01:56:43 pm »
Linux's network stack is a lot better tested than whatever you can roll for yourself or used in chips like W5200 and friends - it is used in 95% of all servers worldwide
I doubt it's close to 95%. It is substantial, though, and that means there are hordes of hackers attempting to crack it on a continuous basis.
The number of servers and desktops running Linux is dwarfed by the number of Android devices (Android uses the Linux kernel). So yes people are likely to want to hack the Linux network stack.

About 92% of servers, over 70% of smartphones (Android, Bada, Ubuntu Touch and more) and over 98% of the TOP500 supercomputers run Linux, making it a very high-profile target - it runs on billions of devices big and small, probably even more than Windows desktops. Hackers will want to hack it (any part of it, including the network stack), and a Linux kernel breach would hurt a lot of people and could even leak critical national security intelligence. But so far the high-profile compromises have all been userland breaches (both OpenSSL and GNU Bash run in user mode, not kernel mode, and both have always had competitors, like GnuTLS and LibreSSL for OpenSSL, or Z shell and the Debian Almquist Shell for GNU Bash), and that fact attests to the security of the Linux kernel. So even if your project cannot use the full Linux kernel for any reason, lifting the network stack code from Linux and adapting it yourself is still a better idea, safety-wise, than using your homebrew stack or a chip like the W5200.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #67 on: August 01, 2015, 02:10:20 pm »
If the Linux kernel source wasn't such an utter mess I agree it would make sense to try and adapt the Linux network stack for microcontroller use.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Mechanical Menace

  • Super Contributor
  • ***
  • Posts: 1288
  • Country: gb
Re: Anyone used the Wiznet ethernet chips?
« Reply #68 on: August 02, 2015, 12:01:48 am »
Nor did I publicly document it like some security "researchers" tend to do, under a false flag of increasing security awareness.


As a security researcher you go to the vendor with the information first, and generally agree to wait until the issue is fixed before you publish full details. It's only when nothing is being done, when they are ignored or illegally threatened, that the security researchers you so despise tend to publish anyway, to let the public know they aren't secure and that those they trust to provide that security don't care.

Quote
And no one else appears to have done this either.  Therefore, the public can continue using these devices, without fear of script kiddies like Mallory walking around and causing mischief.

Not every black hat is a script kiddie. All the public can be sure of in your case is that no one is even trying to fix a known problem, so it is almost certainly being exploited. You could do it - what makes you think you're so special that no one else could?

Quote
Even if something can be exploited, there is no issue unless the exploit actually enters the wrong hands.

And every time an exploit is just ignored instead of reported AND fixed, that becomes more likely. If it isn't fixed, it's better for the public to know they are not secure and be able to take precautions than to falsely believe nothing is wrong and so do nothing.
« Last Edit: August 02, 2015, 12:03:57 am by Mechanical Menace »
Second sexiest ugly bloke on the forum.
"Don't believe every quote you read on the internet, because I totally didn't say that."
~Albert Einstein
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Anyone used the Wiznet ethernet chips?
« Reply #69 on: August 02, 2015, 10:24:48 am »
I keep seeing mention of Allwinner AXX products, and I also noticed it's a "Chinese-supported chip" that has brute-force flooded the market and made its way into cheap-to-medium-quality electronic products. Anyway, the point I'm trying to make is this: if you try to Google SDK resources or details for any Allwinner chip, the platform is sparse and very, very nebulous. It's also mostly used in China by "their engineers"; it's a useless platform for English-speaking people, because the majority of its SDK resources are biased towards the Chinese space, so what use is that to the rest of the world?
 

           
 

Offline MagicSmoker

  • Super Contributor
  • ***
  • Posts: 1408
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #70 on: August 02, 2015, 12:00:12 pm »
Interesting.  A couple of thoughts:

1) Reset must be asserted for at least 2us to work as per the datasheet, must be driven both active low and high as the WIZ810MJ lacks a pull-up on the reset line, and must be asserted after power-up else inconsistent behavior can result.  (I had all sorts of weird trouble until I did these things properly.)

Yep, that's generally true of any microprocessor. I directly connected the reset terminal on the WIZ810MJ to a pin on the AVR and during the boot sequence the code would bring that pin low for 10ms then high again. We also tried resetting the W5100 whenever there was a prolonged absence of data on the MISO line and LINK was active; no love - power needed to be cycled to unfreeze the little bastard.

...The W5100 can handle no more than 4 open sockets, but may be configured for as few as 1.

Yep, we knew of that limitation.

What you experienced suggests there may have been an issue with the webpage design, that caused it to exceed the basic limitations of the W5100.

Now you are invoking the age-old dispute: the hardware engineer says the software is buggy while the software engineer says the hardware is glitchy...

In the end, though, the real problem was that Wiznet failed to provide meaningful technical support so I will no longer consider using them. Better hardware and/or software engineers might succeed where we have failed, but why bother when there are so many uC out there with ethernet support built in (and even some with both the MAC and PHY)?

 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #71 on: August 02, 2015, 01:41:58 pm »
Interesting.  A couple of thoughts:

1) Reset must be asserted for at least 2us to work as per the datasheet, must be driven both active low and high as the WIZ810MJ lacks a pull-up on the reset line, and must be asserted after power-up else inconsistent behavior can result.  (I had all sorts of weird trouble until I did these things properly.)
Yep, that's generally true of any microprocessor. I directly connected the reset terminal on the WIZ810MJ to a pin on the AVR and during the boot sequence the code would bring that pin low for 10ms then high again. We also tried resetting the W5100 whenever there was a prolonged absence of data on the MISO line and LINK was active; no love - power needed to be cycled to unfreeze the little bastard.
To me this sounds like either a reset problem (I think reset must be low during power-up) and/or a power supply problem (maximum current or decoupling). The WIZ810MJ module doesn't look very well designed: I'm missing power decoupling, overvoltage protection and measures to reduce emitted EMC radiation (no, the common-mode choke built into an ethernet transformer is not going to cut it). But maybe these components are mounted on the bottom.
« Last Edit: August 02, 2015, 02:29:22 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Chris C

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #72 on: August 03, 2015, 04:50:11 am »
As a security researcher you go to the vendor with the information first, and generally agree to wait until the issue is fixed before you publish full details. It's only when nothing is being done, they are ignored or illegally threatened that security researchers you so despise tend to publish anyway, to let the public know they aren't secure and those they trust to provide that security don't care.

In general, I agree with you.  My concern is situations in which:

1) The product is no longer being manufactured, or has been replaced with a new product that lacks the issue; but the affected product is still in widespread use.
2) The issue is of a nature that it cannot be fixed or protected against, without incurring expenses that either the manufacturer or consumers would consider excessive.
3) The issue could be easily exploited by someone with minimal skills, if they are aware of the issue.
4) No evidence can be found that the issue is publicly known, or is being exploited.  This does not preclude the possibility that exploitation exists, but it is likely to be on such a limited scale that losses are infinitesimally small, compared to those that would be incurred if the issue was made publicly known.

For as long as all four conditions hold, there is NO possible positive outcome from making the issue public.  A responsible security researcher will understand this, and withhold the information.

But as you said, not every black hat is a script kiddie. Some call themselves security researchers, and would gladly boost their fame regardless of the consequences to others, if not revel in having caused those consequences.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #73 on: August 03, 2015, 08:40:55 am »
The 5th option is: the security hole is already being abused but nobody tells anyone about it.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #74 on: August 03, 2015, 05:11:01 pm »
I keep seeing mention of Allwinner AXX products, and I also noticed it's a "Chinese-supported chip" that has brute-force flooded the market and made its way into cheap-to-medium-quality electronic products. Anyway, the point I'm trying to make is this: if you try to Google SDK resources or details for any Allwinner chip, the platform is sparse and very, very nebulous. It's also mostly used in China by "their engineers"; it's a useless platform for English-speaking people, because the majority of its SDK resources are biased towards the Chinese space, so what use is that to the rest of the world?
 

         

Sorry buddy, but I took offense at your comments - look at the sidebar to find out where I am from, and why I took offense.

If you don't feel like tackling the SDK you can just grab the linux-sunxi code and forge ahead - drivers usually do not depend on chip details, thanks to Linux driver layering. Also, you could have asked Allwinner nicely for documentation.
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #75 on: August 03, 2015, 05:33:25 pm »
@technix: In my experience it very much depends on who is asking whether you get information or not. Getting information may be easy for you but impossible for people from outside China. And even then, if you can't read Chinese you're often stuck with Chinglish -no offense intended; just stating a fact- and that makes it very hard if not impossible to use a chip. I work together with Chinese developers on some projects and communication is difficult.

Anyway, it seems the project got a go, so I'll probably be making a prototype using the Wiznet W5500 in the next few weeks.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline technix

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: cn
  • From Shanghai With Love
    • My Untitled Blog
Re: Anyone used the Wiznet ethernet chips?
« Reply #76 on: August 03, 2015, 05:51:30 pm »
@technix: In my experience it very much depends on who is asking whether you get information or not. Getting information may be easy for you but impossible for people from outside China. And even then, if you can't read Chinese you're often stuck with Chinglish -no offense intended; just stating a fact- and that makes it very hard if not impossible to use a chip. I work together with Chinese developers on some projects and communication is difficult.

Anyway, it seems the project got a go, so I'll probably be making a prototype using the Wiznet W5500 in the next few weeks.
If you would like to pay me I can cooperate with you when dealing with such chips. I am Chinese but my English is almost as good as native - a bit of New Yorker and a bit of la la land maybe, in fact.

You can contract me to translate documentation, obtain parts (some company may or may not offer their products out of China, but I can always buy it for you and send it over, if you like so) and even design the software (I do major in CS instead of EE as you may expect, EE is more of a hobby and side project when CS side is uninteresting.)
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Anyone used the Wiznet ethernet chips?
« Reply #77 on: August 03, 2015, 08:10:07 pm »
I keep seeing mention of Allwinner AXX products, and I also noticed it's a "Chinese-supported chip" that has brute-force flooded the market and made its way into cheap-to-medium-quality electronic products. Anyway, the point I'm trying to make is this: if you try to Google SDK resources or details for any Allwinner chip, the platform is sparse and very, very nebulous. It's also mostly used in China by "their engineers"; it's a useless platform for English-speaking people, because the majority of its SDK resources are biased towards the Chinese space, so what use is that to the rest of the world?
 

         

Sorry buddy, but I took offense at your comments - look at the sidebar to find out where I am from, and why I took offense.

No offence intended, it's just common-sense reality.

Quote
If you don't feel like tackling the SDK you can just grab the linux-sunxi code and forge ahead - drivers usually do not depend on chip details, thanks to Linux driver layering. Also, you could have asked Allwinner nicely for documentation.

You pitched for Allwinner - can you show equal confidence in their technical documentation, SDKs and Q&A support forums? To most people this is just as important as the silicon they design and program against.
 
« Last Edit: August 03, 2015, 08:11:59 pm by diyaudio »
 

Offline prasimix

  • Supporter
  • ****
  • Posts: 2023
  • Country: hr
    • EEZ
Re: Anyone used the Wiznet ethernet chips?
« Reply #78 on: February 21, 2016, 02:27:59 pm »
Hi nctnico and others, I'm wondering if any progress has been made with the W5500 or any useful info has turned up in the meantime. I found that the guys from arduino.org decided to use it in their Ethernet 2 shield. I don't know if that is just inertia, since the previous shield was built around the W5100 from the same manufacturer, or whether they did extensive research into what is available in that price/performance range.
The W5500 looks attractive to me: the price is right and it promises a little bit more than the ENC28J60 that I'm currently using, which I would like to replace with something that has full duplex, auto-negotiation and a built-in TCP/IP stack. Does the W5500 look to you like a good candidate for that?

Online westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Anyone used the Wiznet ethernet chips?
« Reply #79 on: February 22, 2016, 01:13:06 am »
are there ANY competing ethernet chips that include a TCP stack?
It's getting pretty common in WiFi modules, but I haven't seen many ethernet products that include TCP/IP...
 

Offline megabit

  • Newbie
  • Posts: 1
  • Country: gb
Re: Anyone used the Wiznet ethernet chips?
« Reply #80 on: February 22, 2016, 01:44:24 pm »
I've used a W5500 in a project in the last six months - teamed with an Atmel 328P on a custom board and running with an Arduino-based toolchain.

I'm not doing anything particularly clever with it - simultaneous web server and MQTT client. No issues to report but it seems to be more reliable than the dodgy w5100 I was using previously. Anecdotally seems a little quicker too but I haven't done any proper tests.

 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #81 on: February 22, 2016, 02:15:27 pm »
The project I intended to use the W5500 got cancelled so I still don't have any hands-on experience.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline prasimix

  • Supporter
  • ****
  • Posts: 2023
  • Country: hr
    • EEZ
W5500 circuit
« Reply #82 on: February 29, 2016, 02:48:25 pm »
Thanks everyone for the replies. I'm currently waiting for a module bought on eBay to see the W5500 in action. In the meantime I've checked various schematics and ended up with the following one:

[attached: proposed W5500 schematic]

There are a few details that are still questionable and that I'd like to discuss with you:

A) I'm planning to mount the RJ-45 connector (with magnetics), an Amphenol LMJ2138812S0LOT6C, on a separate PCB and use a 30cm Ethernet cable for the connection. That way the termination resistors will remain on the PCB with the W5500 and the transformers will be on the other side of the cable. I don't know if that is permitted.
B) On some boards I found the PMODE inputs connected directly to +3.3V, while on others pull-up resistors are used. I'd like to skip the resistors if possible.
C) Same as detail B, but for the connection directly to GND versus using pull-down resistors. Again I'd like to skip the resistors.

I also found that some designs use C28 and C33 values of 6n8 and 10n on the receiver lines (even with an additional 22 to 33R for better noise immunity?) and that the LED resistors are in some cases extremely high (2k2!), limiting the current to a couple of mA.

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #83 on: May 30, 2017, 08:46:37 pm »
I know this is an old thread but I'd like to update it with how the story has evolved.
Over the past months I have developed a product which uses the W5500 to have a microcontroller communicate with a network. Based on the advice here I made sure I can power cycle the W5500 and use that ability to provide it with the right power-up / reset timing. Even though the design is on a 2-layer board, I managed to have a solid ground plane under the W5500 on the bottom layer and 2 copper pours for the digital & analog power supplies on the top layer. Each power supply pin has a 100nF bypass capacitor.

The software side needed some studying to get going, but the wiki pages on Wiznet's website do help. Wiznet claims to have a BSD-style socket interface, but it takes some additional (lower level) functions to see if there is data available in order not to halt execution. I have tested it by pumping lots of data through it (flooding with large ping packets and UDP packets) and the W5500 seems to hold up pretty well. So far no crashes or odd behaviour.
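
To give an idea of what that looks like in practice, my polling loop is roughly the sketch below (written against Wiznet's ioLibrary_Driver socket API as far as I remember the names, so check them against the headers; the socket number, port and buffer size are arbitrary):

Code: [Select]
#include "socket.h"         /* Wiznet ioLibrary socket API */
#include "wizchip_conf.h"

#define SOCK_N  0           /* W5500 socket number, 0..7 */
#define PORT    5000

/* Called regularly from the main loop; never blocks waiting for data. */
void net_poll(void)
{
    static uint8_t buf[512];

    switch (getSn_SR(SOCK_N)) {               /* socket status register */
    case SOCK_CLOSED:
        socket(SOCK_N, Sn_MR_TCP, PORT, 0);   /* (re)open the TCP socket */
        break;
    case SOCK_INIT:
        listen(SOCK_N);                       /* wait for an incoming connection */
        break;
    case SOCK_ESTABLISHED:
        if (getSn_RX_RSR(SOCK_N) > 0) {       /* any received data waiting? */
            int32_t len = recv(SOCK_N, buf, sizeof(buf));
            if (len > 0)
                send(SOCK_N, buf, (uint16_t)len);   /* echo it back, as an example */
        }
        break;
    case SOCK_CLOSE_WAIT:
        disconnect(SOCK_N);                   /* peer closed; finish the close */
        break;
    default:
        break;
    }
}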
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: MagicSmoker, KE5FX

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #84 on: September 07, 2022, 05:46:20 pm »
Bumping this thread for more recent insight.

Usually I have used single-board computers for Internet access but now have an application which must be cost and space optimized and uses very very little Internet bandwidth, basically some short control/status messages over MQTT or possibly just raw TCP socket. It's kinda IoT, but the IoT part is not the "core", the devices are smart and capable even without being connected.

This is also a project which needs to be delivered quickly, yet scale up to possibly quite large volumes, we'll see.

WiFi is unwanted (too much configurability / unreliability, the target audience have routers with at least one free ethernet port), nRF52 MCUs are used for custom 2.4GHz local communications, possibly standard BLE in the future. The nRF52 is very well able to perform all other MCU functions in the system so it's the sole microcontroller. This means the options are either to add some STM32/similar just to perform as TCP/IP/Ethernet layer, or use a "hard wired" chip like the W5500. Benefits of W5500 seem to be good availability, excellent price, one firmware less to develop plus integration of ETH PHY on the same chip for even better cost and area.

Obvious turn-off in $(current_year) is lack of IPv6, go figure.

Any more experience on W5500 / others, or any new chips I should be looking at?

Also, as far as I know, the application MCU (Cortex-M4 @ 64MHz with 128KB of RAM) should have enough resources to do the encryption IMHO (for, say, a few hundred bytes a second; building the connection can be accepted to take tens of seconds, no problem), and even if not, pre-shared keys are an option - but feel free to correct me if I'm wrong on this.
« Last Edit: September 07, 2022, 05:50:53 pm by Siwastaja »
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #85 on: September 07, 2022, 05:54:29 pm »
AFAIK Wiznet has an IPv6 version nowadays.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline DC1MC

  • Super Contributor
  • ***
  • Posts: 1882
  • Country: de
Re: Anyone used the Wiznet ethernet chips?
« Reply #86 on: September 07, 2022, 05:58:35 pm »
Yes, it does, and with relatively low effort; also, I have only good words for the W5500 lite.

Some examples:
 https://www.mischianti.org/2022/07/13/stm32-ethernet-w5500-with-plain-http-and-ssl-https/
 
The following users thanked this post: Siwastaja

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Anyone used the Wiznet ethernet chips?
« Reply #87 on: September 07, 2022, 06:17:28 pm »
I have a couple of W5500 dev boards that I ordered a while ago. I haven't evaluated them yet, but from the docs I've read about it and everything else I've heard about TCP/IP Ethernet on an MCU and the headaches that come with it, if the cost of the extra W5500 and its max throughput are OK with you, I'd go for this rather than spend hours debugging libraries and browsing forums. Just my 2 cents though.
 
The following users thanked this post: Siwastaja

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #88 on: September 07, 2022, 06:21:43 pm »
I have a couple of W5500 dev boards that I ordered a while ago. I haven't evaluated them yet, but from the docs I've read about it and everything else I've heard about TCP/IP Ethernet on an MCU and the headaches that come with it, if the cost of the extra W5500 and its max throughput are OK with you, I'd go for this rather than spend hours debugging libraries and browsing forums. Just my 2 cents though.
I fully agree. Last year I had to use LWIP because I couldn't get chips that allowed me to use a W5500, but LWIP is a complete disaster compared to how easy the W5500 is to use.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: Siwastaja

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
Re: Anyone used the Wiznet ethernet chips?
« Reply #89 on: September 07, 2022, 06:58:27 pm »
I use the W5500 a lot, though only for UDP.  Very easy to use.
My only complaints are that it doesn't support packet fragmentation for longer UDP payloads, and the memory allocation can only allocate the same amount to TX and RX, so for RX-only applications like mine, half the onboard RAM is wasted, though that's not been an issue in practice for me.
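
For reference, the buffer carve-up is fixed once at init time; with the ioLibrary-style driver it looks something like the snippet below (names as I recall them from ioLibrary_Driver, so double-check - the point is simply that the 16KB TX pool and 16KB RX pool are separate and you split each one across the 8 sockets):

Code: [Select]
#include "wizchip_conf.h"   /* Wiznet ioLibrary */

/* Per-socket buffer sizes in KB; each array must total no more than 16. */
static uint8_t tx_sizes[8] = { 2, 2, 2, 2, 2, 2, 2, 2 };   /* TX pool: mostly unused here */
static uint8_t rx_sizes[8] = { 16, 0, 0, 0, 0, 0, 0, 0 };  /* give socket 0 all 16KB of RX */

void net_buffers_init(void)
{
    wizchip_init(tx_sizes, rx_sizes);   /* program the buffer map into the chip */
}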

A word of warning: don't use a MEMS oscillator - I found this to cause packet loss. Always use a proper crystal or crystal oscillator.
« Last Edit: September 07, 2022, 07:19:40 pm by mikeselectricstuff »
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 
The following users thanked this post: Siwastaja

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Anyone used the Wiznet ethernet chips?
« Reply #90 on: September 07, 2022, 07:10:39 pm »
And the W6100 supports IPv6.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anyone used the Wiznet ethernet chips?
« Reply #91 on: September 07, 2022, 09:02:23 pm »
I am working on a 32F4 project with LWIP and I find it solid. It's had many years of development and bug fixing.

The main issue is the same old one with open source software: no real support. You post questions on forums, sometimes you get a reply, quite often somebody tells you that you are a stupid idiot (but he might know something useful), and the real work is in implementing it; in this case doing the code to connect it to your micro's ETH hardware. ST do an appnote on this, which is more or less right.

I am very happy with it. The board is test running 24/7, doing two concurrent (mutex-controlled, due to lack of RAM) MbedTLS processes, plus an HTTP server test-polled at 1Hz, etc. I posted here about all that stuff over the last year or so :)

My project does not support IPV6 - almost nobody seems to be using it, in LANs. LWIP can be compiled for IPV6 though if needed.
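
(For anyone who does want it, enabling it is a couple of lwipopts.h switches along these lines - though check the exact option names against your lwIP version:)

Code: [Select]
/* lwipopts.h excerpt - enable dual-stack IPv4 + IPv6 */
#define LWIP_IPV4   1
#define LWIP_IPV6   1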

OTOH, a lot of hours have gone into this project. A lot of forum posting and reading. I am not accounting for my time :)

« Last Edit: September 07, 2022, 09:28:26 pm by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online tellurium

  • Regular Contributor
  • *
  • Posts: 229
  • Country: ua
Re: Anyone used the Wiznet ethernet chips?
« Reply #92 on: September 11, 2022, 11:12:06 pm »
Usually I have used single-board computers for Internet access but now have an application which must be cost and space optimized and uses very very little Internet bandwidth, basically some short control/status messages over MQTT or possibly just raw TCP socket. It's kinda IoT, but the IoT part is not the "core", the devices are smart and capable even without being connected.

So I guess there would be only one outbound connection - which is good.
TLS or not TLS?
How much free RAM there is left on that nRF at runtime for networking?
Open source embedded network library https://mongoose.ws
TCP/IP stack + TLS1.3 + HTTP/WebSocket/MQTT in a single file
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #93 on: September 12, 2022, 04:59:19 am »
So I guess there would be only one outbound connection - which is good.
TLS or not TLS?
How much free RAM there is left on that nRF at runtime for networking?

One connection strategy would seem the best. If more than one, the rest would be unencrypted local things like MODBUS TCP.

Solid encryption is of course a necessity, still pondering between pre-shared keys vs. public keys. I'd say 60-70KB RAM for networking.
 

Offline DC1MC

  • Super Contributor
  • ***
  • Posts: 1882
  • Country: de
Re: Anyone used the Wiznet ethernet chips?
« Reply #94 on: September 12, 2022, 06:12:26 am »
Yupiee, the IPv6 mini modules:

https://shop.wiznet.eu/ipv6/wiz610io.html

(the chip itself 2.35€)

Cheers,
DC1MC

EDIT: Added datasheet.
« Last Edit: September 12, 2022, 06:18:29 am by DC1MC »
 
The following users thanked this post: SiliconWizard

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #95 on: September 12, 2022, 08:52:32 am »
Yupiee, the IPv6 mini modules:

https://shop.wiznet.eu/ipv6/wiz610io.html

(the chip itself 2.35€)

Yes, the availability of the W6100 looked worse than the W5100's at the usual distributors, but seeing that the W5100S and W6100 are pin-compatible, designing in either doesn't seem risky. Seeing that the manufacturer sells directly is always a huge plus.

EDIT: W5100 is not pin-compatible with W5100S or W6100
« Last Edit: September 12, 2022, 09:13:18 am by Siwastaja »
 

Online tellurium

  • Regular Contributor
  • *
  • Posts: 229
  • Country: ua
Re: Anyone used the Wiznet ethernet chips?
« Reply #96 on: September 12, 2022, 11:35:11 am »
Solid encryption is of course a necessity, still pondering between pre-shared keys vs. public keys. I'd say 60-70KB RAM for networking.

60-70KB should be plenty for MQTT over TLS using mbedTLS. If a connection is quiet and does not carry a lot of data, mbedTLS can be tuned down to a 2KB IO buffer per connection - so overall, including all buffering, I'd say it would be possible to tune the whole thing to use 5-7KB.
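
The knobs involved are compile-time options in the mbedTLS configuration header, roughly as below (macro names as in mbedTLS 2.x; note the peer must agree to the smaller record size, e.g. via the max fragment length extension, so treat this as a sketch to verify against your version):

Code: [Select]
/* mbedTLS config excerpt - shrink the TLS record buffers for a quiet link */
#define MBEDTLS_SSL_MAX_FRAGMENT_LENGTH      /* allow negotiating smaller records */
#define MBEDTLS_SSL_IN_CONTENT_LEN   2048    /* incoming record buffer, bytes */
#define MBEDTLS_SSL_OUT_CONTENT_LEN  2048    /* outgoing record buffer, bytes */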

Wiznet modules have their own TCP stack inside, and provide socket-like interface over SPI.

Alternatively, you can use our networking library, https://github.com/cesanta/mongoose - it implements its own stack. On the W5500 it reads/writes raw frames, thus bypassing the W5500's stack (which makes it possible to use IPv6 and to implement more than 8 sockets). It is still experimental, but we're ready to provide full support for free if you decide to use it. An example app is at https://github.com/cesanta/mongoose/tree/master/examples/arduino/w5500 - let me know if you're up to it!
Open source embedded network library https://mongoose.ws
TCP/IP stack + TLS1.3 + HTTP/WebSocket/MQTT in a single file
 
The following users thanked this post: Siwastaja

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anyone used the Wiznet ethernet chips?
« Reply #97 on: September 13, 2022, 06:43:07 am »
On my target, MbedTLS took up 150k of code, and the minimum it runs with (a typical crypto suite) is about 50k. Obviously if you control both ends then this can come down, plus you can use any crypto you like.

If you control both ends, there is no practical difference between using a pre shared key and using PK crypto; in both cases you have to keep a key stored securely somewhere.

And if you use a shared key, then you don't need the massive lump of TLS and its non-deterministic private heap memory allocation :) You just send the data, encrypting each packet with AES256. The TinyAES256 I use manages about 100 kbytes/sec, and the AES256 which comes with MbedTLS I measured at 800 kbytes/sec.
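
As an illustration of how small that path is, encrypting a packet with a pre-shared key comes down to something like the sketch below (using kokke's tiny-AES-c in CTR mode, names as I remember them and assuming it is built with AES256 and CTR enabled; in a real protocol the IV/nonce must never repeat under the same key, and you would still add a MAC for integrity):

Code: [Select]
#include <stdint.h>
#include "aes.h"   /* kokke/tiny-AES-c, built with AES256 and CTR enabled */

static const uint8_t key[32] = { 0 };   /* pre-shared 256-bit key (placeholder) */

/* Encrypt (or decrypt - CTR is symmetric) one packet in place.
 * iv must be unique per packet under this key, e.g. derived from a counter. */
void crypt_packet(uint8_t *buf, uint32_t len, const uint8_t iv[16])
{
    struct AES_ctx ctx;
    AES_init_ctx_iv(&ctx, key, iv);
    AES_CTR_xcrypt_buffer(&ctx, buf, len);
}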
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #98 on: September 14, 2022, 03:03:07 pm »
What weird crystal specifications.

The W6100 datasheet recommends a 500uW drive level. Even more weirdly, the W5500 datasheet recommends 59.12uW/MHz, which equals a whopping 1500uW at 25MHz. A Mouser parametric search lists the highest-power crystal in stock as just 1000uW. 500uW is something you can actually get, but this already limits the choice, and even then the crystal datasheets recommend drive levels in the tens of uW. Anyone have any idea what's going on here? I don't remember ever seeing a weirdly high drive level recommendation like this.

EDIT: Large crystals all seem to have too much shunt capacitance, making them fail the calculation for gm_crit as explained in https://docs.wiznet.io/img/products/w5100s/w5100s_crystal_selection_guide_v100e.pdf. Their reference design lists a bogus part number for the crystal, so that's not helping either. I'm starting to think no suitable crystal for this chip exists, so I'll just have to use something and hope for the best.

EDIT2: Related discussion: https://forum.wiznet.io/t/topic/6112 . Unluckily, the few crystals the poor guy thinks fall within the spec are rated to 100uW max.

EDIT3: Apparently no one has managed to find a crystal that is within specifications. So users must choose one of the following:
1) Pick a large crystal -> ignore gain margin criterion by choosing a crystal with too much ESR, too high Cs, or too high CL, or combination thereof
2) Pick a small crystal -> get the stability condition right, but ignore the drive level maximum rating of the crystal, likely resulting in frequency shift or distortion (and thus another source of instability) and/or premature failure of the crystal
3) Pick a small crystal but increase the suggested 0 ohm external series resistance value to lower the drive strength to within the crystal spec -> stability goes out of window again?

By careful choice of parts, it seems I can find crystals which are borderline acceptable, i.e. the gain margin criterion some 20-30% off (case 1), or a rated max power of 300uW (case 2), not too far below 500uW. So maybe it is possible to find a combination which is marginal. The fact these chips are manufactured and sold suggests it has to work with some combination, and probably the specifications are just way too tight and no one questioned them, because many designers are not very pedantic and will design outside the ratings if they so wish.
« Last Edit: September 14, 2022, 06:12:53 pm by Siwastaja »
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #99 on: September 14, 2022, 06:31:19 pm »
In my design I have used an external oscillator. Or to be more specific: a clock output from the microcontroller.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #100 on: September 14, 2022, 06:35:47 pm »
In my design I have used an external oscillator. Or to be more specific: a clock output from the microcontroller.

I considered that but the 1.2V level requirement makes an oscillator an unobtanium, too, although one could use a higher voltage oscillator and level-shift the output.

This can't be really this difficult, I contacted Wiznet, hope they come back. They don't participate on the forum where many many have asked about the crystal specs (and reported weird crystal issues).
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
Re: Anyone used the Wiznet ethernet chips?
« Reply #101 on: September 14, 2022, 06:36:44 pm »
In my design I have used an external oscillator. Or to be more specific: a clock output from the microcontroller.
Yes, use an oscillator, let someone else worry about drive etc. 25 cents at 100x on LCSC
25MHz is a bit on the high side for a simple inverter based oscillator.

As I mentioned before DO NOT use a MEMS oscillator if you want all your packets to arrive!
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #102 on: September 14, 2022, 06:48:27 pm »
In my design I have used an external oscillator. Or to be more specific: a clock output from the microcontroller.

I considered that but the 1.2V level requirement makes an oscillator an unobtanium, too, although one could use a higher voltage oscillator and level-shift the output.

This can't be really this difficult, I contacted Wiznet, hope they come back. They don't participate on the forum where many many have asked about the crystal specs (and reported weird crystal issues).
The W5500 datasheet I have says that you can use a 3.3V square wave as the clock input.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #103 on: September 14, 2022, 06:50:13 pm »
In my design I have used an external oscillator. Or to be more specific: a clock output from the microcontroller.

I considered that but the 1.2V level requirement makes an oscillator an unobtanium, too, although one could use a higher voltage oscillator and level-shift the output.

This can't be really this difficult, I contacted Wiznet, hope they come back. They don't participate on the forum where many many have asked about the crystal specs (and reported weird crystal issues).
The W5500 datasheet I have says that you can use a 3.3V square wave as the clock input.

Sorry, I'm talking about W6100, which seems to have the same requirements as W5500S.

The W5500 seems to have better availability at distributors, too, so I'm considering skipping IPv6 support just to get a product with a sane clock input.

W6100 would seem to require a 3V3 oscillator + a level shifter, in which case I need to be quite careful not to distort the duty cycle or introduce jitter.
« Last Edit: September 14, 2022, 06:52:21 pm by Siwastaja »
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #104 on: September 14, 2022, 07:17:23 pm »
Level shifter shouldn't be hard. A resistive divider should do it; the clock input should have a high impedance anyway. Another option is to AC couple the clock into the W6100 (optionally using a capacitive divider). The internal bias circuit should take care of centering the signal where the W6100 likes it best.
« Last Edit: September 14, 2022, 07:21:28 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #105 on: September 15, 2022, 05:03:55 am »
Yeah, with just a few pF of input capacitance I'm probably going with a 3V3 oscillator and a resistive divider, but I'm getting a bit paranoid from reading so many posts about issues caused by the clocking of this chip. This isn't the first Ethernet device I have designed, but to be frank, with the earlier ones I didn't care much and they just Did Work with whatever 25MHz +/-50ppm crystal I got. Now I want to avoid problems when production is possibly ramped up to tens of thousands; recalls are the last thing I need.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anyone used the Wiznet ethernet chips?
« Reply #106 on: September 15, 2022, 05:42:44 am »
One gets the same forum debates around the 32F417 + LAN8742 PHY chip and the famous 50MHz clock feeding the 32F4's ETH subsystem... Almost nobody seemed to actually know anything. All I can say is that I followed the datasheets and it is rock solid across numerous boards, but I did keep the PCB track length to about 2cm and put resistors in series, as recommended, to reduce ringing.

A resistive divider won't work at 25MHz. Look at the DC loading of such a divider, versus the required Zout to drive even a few pF. You will probably need caps in parallel, so e.g. 2 x 1k to set the DC level at 50%, and 2x 10pF to produce an "AC" divider.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #107 on: September 15, 2022, 06:41:57 am »
A resistive divider won't work at 25MHz. Look at the DC loading of such a divider, versus the required Zout to drive even a few pF. You will probably need caps in parallel, so e.g. 2 x 1k to set the DC level at 50%, and 2x 10pF to produce an "AC" divider.

Say 5mA loading on 3V3 oscillator R = 3.3V/0.005A = 660 ohm,
Assuming pin capacitance of 5pF + layout capacitance 3pF = 8pF
t_10-90% = 2.2*R*C = 2.2*660 * 8e-12 = 11.6ns
Rise/fall time for clock input of W6100 per datasheet max 8ns

You are right, it might marginally work with lower value resistors, otherwise capacitors AC bypassing the resistors would be needed.
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anyone used the Wiznet ethernet chips?
« Reply #108 on: September 15, 2022, 07:24:48 am »
Indeed; although the 5mA is probably just a waste. Depends on the input circuit it is driving. If there is a pullup or pulldown, that changes things, plus the value of such internal Rs tends to have a huge tolerance on it.

I also doubt one needs that much xtal drive, because all that matters is the amplitude on the clock input. The crystal drive relates to the amplitude at the other end of the xtal. But I don't know enough about this. The thing which might drive (no pun intended) the high xtal drive power would be if the clock input was low-Z and you needed to achieve a given amplitude at it. For example to drive a 50 ohm input straight off a xtal circuit like this

[attached: discrete crystal oscillator schematic]

with say 1.5V P-P, you would need to drive the xtal really hard.
« Last Edit: September 15, 2022, 07:30:10 am by peter-h »
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline nctnicoTopic starter

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Anyone used the Wiznet ethernet chips?
« Reply #109 on: September 15, 2022, 07:28:23 am »
There is likely some internal biasing going on, but the input should be able to deal with a sine wave as well. After all, when used as a crystal oscillator the input receives a sine wave. Due to the biasing, this sine wave likely has its zero crossing just at the right level for the input. Might be worth investigating if you really want to flesh things out. It kind of sucks that the clock input needs such a low level when fed from an external clock.
« Last Edit: September 15, 2022, 07:36:22 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #110 on: September 15, 2022, 07:33:18 am »
And finally I found a suitable crystal, I don't know how I managed to overlook this quite inexpensive part:
https://www.jauch.com/downloadfile/5d5283eb8ea55300421fd84937fcba163/jauch_datasheet_j49smh.pdf
Cshunt = 5pF (better than the usual 7pF of large crystals)
ESR = 30ohm (better than the usual 40ohm)

Gain margin calculation
> 8.43/(4*30*(2*3.14*25e6)^2*(5e-12+12e-12)^2*1000)
ans =  9.8616 > 6.9897

Drive level max 500µW. I still don't feel good about running something at absolute maximum rating and 5x over recommended, OTOH I don't know what the "recommended 500µW" in the W6100 datasheet actually means, maybe it means "pick a crystal with maximum rating of 500µW and W6100 will drive it at lesser power".

Going in circles with this crystal drive level thing, it seems some kind of tradition for manufacturers to pull ridiculous drive level numbers out of thin air, e.g. see https://community.infineon.com/t5/PSoC-5-3-1/External-crystal-drive-level-calculation/td-p/242159 and the company response apologizing the wrong numbers.

W6100 datasheet does not specify the crystal amplifier voltage swing, but assuming W6100 oscillator circuit runs from 1.2V, possible maximum drive level for a typical CL=12pF, CO=7pF, ESR=40ohm crystal would be, from first formula of above link, 2*40*(3.14*25e6*1.2*(12e-12+7e-12))^2 * 1e6 = 256uW

So maybe the 500uW advice is more like "better pick a crystal with 500uW absolute maximum rating" and not "the oscillator circuit will drive the crystal at 500uW and any attempt to reduce drive by increasing Rext results in instability due to too little amplitude".

I'm probably just going with a crystal after all, but I have been enjoying documenting this weird trip; I don't think I have ever spent this much time choosing a crystal for such a ubiquitous use case as 100M Ethernet.
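
Since I keep plugging numbers into these two formulas, here they are wrapped in a throwaway helper (my own sketch: the gm_crit expression is the one from the Wiznet selection guide, the drive-level estimate is the generic vendor formula quoted above, and what margin counts as acceptable is whatever the guide specifies):

Code: [Select]
#include <stdio.h>

#define PI 3.14159265358979

/* Oscillation margin = gm / gm_crit, with
 * gm_crit = 4 * ESR * (2*pi*f)^2 * (C0 + CL)^2   (per the Wiznet guide) */
static double gain_margin(double gm, double f, double esr, double c0, double cl)
{
    double w = 2.0 * PI * f;
    double gm_crit = 4.0 * esr * w * w * (c0 + cl) * (c0 + cl);
    return gm / gm_crit;
}

/* Rough worst-case drive level: DL = 2 * ESR * (pi * f * Vpp * (C0 + CL))^2 */
static double drive_level(double f, double vpp, double esr, double c0, double cl)
{
    double x = PI * f * vpp * (c0 + cl);
    return 2.0 * esr * x * x;
}

int main(void)
{
    /* Jauch-style crystal: 25MHz, ESR 30R, C0 5pF, CL 12pF, amplifier gm 8.43mS */
    printf("margin = %.2f\n", gain_margin(8.43e-3, 25e6, 30.0, 5e-12, 12e-12));
    /* Typical large crystal: ESR 40R, C0 7pF, CL 12pF, assumed 1.2V swing */
    printf("DL     = %.0f uW\n", drive_level(25e6, 1.2, 40.0, 7e-12, 12e-12) * 1e6);
    return 0;
}

With those inputs it reproduces the ~9.86 margin and ~256uW drive level worked out above, give or take the rounding of pi.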
« Last Edit: September 15, 2022, 07:42:19 am by Siwastaja »
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anyone used the Wiznet ethernet chips?
« Reply #111 on: September 15, 2022, 07:51:16 am »
The xtal oscillator should drive the xtal with a rail to rail swing. It won't drive it at any particular "power" - no way to do that.

But if there was a real requirement for a high power xtal, it would suggest that the clock input is low impedance of some sort.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #112 on: September 15, 2022, 08:36:51 am »
The xtal oscillator should drive the xtal with a rail to rail swing. It won't drive it at any particular "power" - no way to do that.

Not true at all: configurable crystal drive strength is often available, even in many cheap microcontrollers, and you can easily verify this by measurement - the swing is very rarely "rail to rail".  For example, see a random Google result: https://community.silabs.com/s/article/choosing-c8051-crystal-oscillator-drive-level-oscxcn-xfcn-value-x?language=en_US

How they internally exactly achieve this, I don't know, I never looked into it.

W6100 does not have configurable drive strength, but that does not mean it must be a full swing between GND and Vcc (they don't even explicitly tell which Vcc domain, but I assume it's the 1.2V domain) without any internal current limitation.

So yes, it will drive at some particular power, but of course the crystal ESR, CL and CO are all part of the equation, it doesn't measure and regulate the power.

In absence of software-configurable drive strength, one adds external series resistance.
« Last Edit: September 15, 2022, 08:38:22 am by Siwastaja »
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #113 on: September 15, 2022, 08:41:35 pm »
Rail-to-rail doesn't make sense. If a crystal has an ESR of 100 ohm and you drive it with a 3.3Vpp square wave, that is 3.3^2/100/2 = 54.5mW. Most crystals are driven at a much lower amplitude; e.g. a 100uW sine-wave drive with 100 ohm ESR is 0.242Vpp.

The ESR is the only resistive element of the crystal (i.e. it dissipates power), whereas the capacitors and inductors only set the oscillation and tuning characteristics. A Pierce oscillator circuit may have a series and/or shunt resistor into/across the inverter to set the gm of that amplifier.

I remember that PICs have fuses to set the frequency range of the oscillator. E.g. for a PIC16F1509 the datasheet says these bits change the shunt resistor between 2M and 10M. Lower gm = less gain = lower power. The external series resistor may be needed to limit power into a low-power crystal.
I don't remember seeing this tunable shunt resistor on e.g. an STM32. The series resistor may still be needed. I presume they have set the crystal driver power reasonably high to drive a wide range of crystals (and possibly lower-frequency ones need some scaling down).
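Drive level is defined as I_rms² × ESR, so the conversion between power and swing is easy to reproduce; a small sketch under a pure-sine assumption (the exact peak-to-peak constant depends on what waveform you assume, so figures in the 0.2-0.3 Vpp ballpark all mean roughly the same thing):

Code: [Select]
#include <cmath>
#include <cstdio>

// Back-of-envelope conversion for crystal drive level, assuming the drive
// power is dissipated in the ESR and the waveform is a sine. Uses the
// 100 uW / 100 ohm example from above.
int main() {
    const double drive = 100e-6;  // drive level [W]
    const double esr   = 100.0;   // crystal ESR [ohm]

    // Drive level is defined as I_rms^2 * ESR:
    const double i_rms = std::sqrt(drive / esr);        // ~1 mA rms
    // Corresponding sine-wave swing across the ESR: P = Vpp^2 / (8 * ESR)
    const double vpp   = std::sqrt(8.0 * drive * esr);  // ~0.28 Vpp

    std::printf("I_rms ~ %.2f mA, Vpp ~ %.2f V\n", i_rms * 1e3, vpp);
    return 0;
}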
 
 

Offline peter-h

  • Super Contributor
  • ***
  • Posts: 3697
  • Country: gb
  • Doing electronics since the 1960s...
Re: Anyone used the Wiznet ethernet chips?
« Reply #114 on: September 16, 2022, 05:43:17 am »
The 0.242V is across the xtal itself. This is not related to the swing of the amplifier output driving it.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #115 on: September 24, 2022, 05:38:29 pm »
I happened to work on a board bring-up today with a W5500 and an STM32H725 chip. Having worked with Ethernet PHYs and TCP/IP stacks before, I'm pretty amazed at how simple it is: just write a few IP registers and it's ready to open a socket.

The internal engine of the chip seems to max out at around 92Mbit/s while transmitting the (same) 1K of data with a TCP socket. When the program also writes the Tx buffer and then waits for packet completion, it maxes out around 18Mbit/s with an SPI speed of 34.375MHz, and 22Mbit/s with an SPI speed of 43.75MHz. So it could potentially be a little bit faster if new data is written while the old packet is being transmitted.

Unfortunately this SPI bus breaks down at 68MHz, as I had to patch it with a few thin coil wires.

This is still far higher than the figures listed on their official page: https://docs.wiznet.io/Product/iEthernet/W5500/Application/spi-performance
(Honestly 3Mbit/s sounds quite atrocious.)

For some reason, I get worse performance with UDP (max 80Mbit/s). But that is still very decent and much faster than a "regular" microcontroller TCP/IP stack can keep up with (I remember researching some TCP/IP stacks for the PIC32MX; most topped out at around 1-2.5MiB/s in TCP).

I'm posting this because the "raw" performance of this chip (note: my code is polling the status registers, etc.) was a bit hard for me to find.
« Last Edit: September 24, 2022, 06:39:59 pm by hans »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Anyone used the Wiznet ethernet chips?
« Reply #116 on: September 24, 2022, 05:43:39 pm »
Well, the W5500 supports SPI at up to 80MHz according to the datasheet. I haven't tried that yet, so I don't know what the problems could be. Not sure what you mean by "this SPI bus breaks down at 68MHz, as I had to patch it with a few thin coil wires" - could you elaborate?
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #117 on: September 24, 2022, 05:51:40 pm »
I have a stack of 2 boards (see: https://twitter.com/BitsOfHans/status/1571145207177158657/photo/1), but the board-to-board connector pinout was not correct, so I had to jumper the lines between boards with ~10cm wires (I use very thin coil wire for that; it's ideal for soldering onto small pads). I do have some series damping resistors on the PCB to damp reflections, but wires in open air don't have a proper 50 ohm impedance.

My current code either writes data into the buffer, or waits for the packet to be transmitted. The measured throughputs match pretty well if you combine the SPI bit rate and an estimated "44Mbit/s" internal limit (state machine, firmware, etc.) the way you combine parallel resistors (reciprocal of the sum of reciprocals):

1/ (1/34.375 + 1/44) = 19.3Mbit/s
1/ (1/43.75 + 1/44) = 21.9Mbit/s
Actually these calculations are a bit on the high side, so my prediction for SPI @ 80MHz is: 1/ (1/80 + 1/44) = 28.4Mbit/s.
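To play with that model, here it is as a tiny standalone sketch; the 44 Mbit/s engine figure is just my rough estimate from the measurements, not a datasheet number:

Code: [Select]
#include <cstdio>

// Simple throughput model: the SPI transfer and the chip's internal packet
// engine act like two stages in series, so the combined rate is the
// reciprocal of the sum of reciprocals (same math as parallel resistors).
// The 44 Mbit/s engine figure is an estimate, not a datasheet number.
static double combined_rate(double spi_mbit, double engine_mbit) {
    return 1.0 / (1.0 / spi_mbit + 1.0 / engine_mbit);
}

int main() {
    const double engine = 44.0;                         // estimated internal limit [Mbit/s]
    const double spi_clocks[] = {34.375, 43.75, 80.0};  // SPI clock ~ raw bit rate [Mbit/s]
    for (double spi : spi_clocks) {
        std::printf("SPI %6.3f MHz -> ~%.1f Mbit/s\n", spi, combined_rate(spi, engine));
    }
    return 0;
}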

If I want faster speeds, I would need to write into the buffer space while another packet is being sent. I'm not sure if the buffer RAM is dual-port or if it needs arbitration, but I assume that at some point either the packet engine (state machines, firmware polling, etc.) or the SPI bus (80MHz max) will be the limiting factor.

It's pretty promising, because other Ethernet MAC controllers (like the classic ENC28J60 or ENC424J600) have pretty slow SPI buses. Several tens of Mbit/s is pretty good.
« Last Edit: September 24, 2022, 06:50:18 pm by hans »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Anyone used the Wiznet ethernet chips?
« Reply #118 on: September 24, 2022, 06:07:46 pm »
Weird that such a common interface/protocol still has no mainstream, good-quality open source implementation available, esp. with the huge trend towards networking small embedded devices  :-//
Some of the very best implementations of TCP/IP are open source, but they are also full-featured, large, and part of big systems like BSD and Linux. When people are trying to squeeze into something small, they have so many incompatible notions of what "limited resources" means that generic solutions don't work well.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Anyone used the Wiznet ethernet chips?
« Reply #119 on: September 25, 2022, 10:09:36 am »
Small update..

I read the docs, which say not to write to the Tx buffer space while a TCP packet is being transmitted. However, I'm not sure why. I think they don't want you to change the Tx WR pointer while a packet is in transmission, but why should the buffer space itself be inaccessible? I suppose they would use dual-port RAM to let the Ethernet packet engine and SPI engine operate independently. So I went ahead and implemented this overlapped approach anyway.

The old code was:

Code: [Select]
        while(1) {
            // check for available space
            do {
                fsr = le2be(SpiW5500_Read<uint16_t>(s0, 0x20));
            } while (fsr < sizeof(myPacket));

            // write packet
            myPacket.data[0]++;
            SpiW5500_Write<Packet1K>(s0tx, wp, myPacket);
            wp += sizeof(myPacket);
            SpiW5500_Write<uint16_t>(s0, 0x24, le2be(wp)); // Tx WR pointer

            // transmit packet
            SpiW5500_Write<uint8_t>(s0, 1, 0x20); // Command: Send

            // wait till transmit is done
            do {
                ir = SpiW5500_Read<uint8_t>(s0, 2); // Rd IR (SendOK = 0x10)
            } while ((ir&0x10) == 0);
            SpiW5500_Write<uint8_t>(s0, 2, 0x1F); // Clr IR (SendOK = 0x10)
        }

This first checks for free space, then writes a new 1K packet and transmits it, which is sequential and slow.

The new code overlaps these steps instead:
Code: [Select]
        while (1) {
            fsr = le2be(SpiW5500_Read<uint16_t>(s0, 0x20)); // Rd Tx freespace
            ir  = SpiW5500_Read<uint8_t>(s0, 2); // Rd IR
            if (!hasWritten && fsr > sizeof(myPacket)) { // write packet when space is available and we haven't already
                myPacket.data[0]++;
                SpiW5500_Write<Packet1K>(s0tx, wp, myPacket);
                wp += sizeof(myPacket);
                hasWritten = true;
            }
            if (ir & 0x10) { // SendOK (0x10) triggered: previous send finished, so the next send is armed
                SpiW5500_Write<uint8_t>(s0, 2, 0x10); // Clr IR (SendOK = 0x10)
                canSend = true;
            }
            if (hasWritten && canSend) { // new data written & send armed => set new pointer and transmit
                SpiW5500_Write<uint16_t>(s0, 0x24, le2be(wp));
                SpiW5500_Write<uint8_t>(s0, 1, 0x20); // Command: Send
                hasWritten = false;
                canSend = false;
            }
        }

This code works fine. No TCP retransmits or weird issues to be seen in Wireshark, and the packet counter is working normally.

For tests, socket 0 has maximum buffer space (16KiB). Throughput went up from ~27.8Mbit/s (old code, SPI clock of 43.75MHz) to 37Mbit/s (new code).

When I change packet size to MTU (1472bytes), the throughput peaks at 28.1Mbit/s (old) and 38.4Mbit/s (new).
Improved SPI driver (without STM32 HAL overhead): max 28.75 / 39.3Mbit/s respectively.

All throughputs were measured in user space with a small Python socket script.
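The receive side is nothing special; roughly something like this, sketched here in C++ with plain POSIX sockets rather than the Python script I actually used (port number and buffer size are arbitrary choices):

Code: [Select]
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal receive-side throughput counter: listen on a TCP port, accept one
// client (the W5500 board), and print Mbit/s roughly once per second.
int main() {
    const uint16_t port = 5000;  // arbitrary test port

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) { std::perror("socket"); return 1; }
    int one = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(srv, 1) < 0) {
        std::perror("bind/listen");
        return 1;
    }

    int cli = accept(srv, nullptr, nullptr);
    if (cli < 0) { std::perror("accept"); return 1; }

    std::vector<char> buf(64 * 1024);
    uint64_t bytes = 0;
    auto t0 = std::chrono::steady_clock::now();

    for (;;) {
        ssize_t n = read(cli, buf.data(), buf.size());
        if (n <= 0) break;  // connection closed or error
        bytes += static_cast<uint64_t>(n);

        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - t0).count();
        if (dt >= 1.0) {  // print and reset the counter about once per second
            std::printf("%.1f Mbit/s\n", bytes * 8.0 / dt / 1e6);
            bytes = 0;
            t0 = now;
        }
    }
    close(cli);
    close(srv);
    return 0;
}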

That last test means roughly 90% of the raw SPI bit rate ends up as application throughput. That's a nice promise for designs that can get up to 70 or 80MHz SPI clocks (which hopefully I can, once I fix the board-to-board connectors).

Not sure why I see so many benchmark figures of this chip with much lower numbers. Obviously the SPI bus is a bottleneck (this STM32H7 has a FIFO, so by burning CPU cycles it doesn't need DMA to saturate the SPI bus), but if you use DMA combined with this 'trick', it should just be a matter of maximizing the SPI clock.
« Last Edit: September 25, 2022, 10:12:43 am by hans »
 
The following users thanked this post: DC1MC

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Anyone used the Wiznet ethernet chips?
« Reply #120 on: September 25, 2022, 03:22:56 pm »
High-speed SPI needs some thought: only route it on a PCB (no wires/connectors), route it as kinda-impedance-controlled over a contiguous ground plane, and series terminate the signals at the source, matching the impedance. For example, if you assume the CMOS Rds_on to be 20 ohms and add 33 ohms in series, then calculate the trace width for 53 ohms impedance, given you know your stackup (distance to the ground plane). Remember vias from the IC ground leads to the ground plane near the signal traces at both ends. And if you absolutely need to go through a connector, use a ground-signal-ground-signal scheme, and try to work out the characteristic impedance of the wire by googling or from the geometry, so you can match the PCB tracks and series termination resistors to the wire impedance.

Of course, just minimizing the bus length to a few cm is easier and then all of this does not matter.
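The termination arithmetic itself is trivial; a small sketch with the example values above (the 20 ohm Rds_on and the 100 ohm wire impedance are assumptions, measure or look up your own):

Code: [Select]
#include <cstdio>

// Source series termination, as described above: driver output resistance
// plus the external series resistor should roughly add up to the line
// impedance.
int main() {
    const double rds_on   = 20.0;  // assumed CMOS driver output resistance [ohm]
    const double r_series = 33.0;  // external series resistor [ohm]
    std::printf("design the trace for ~%.0f ohm\n", rds_on + r_series);

    // Or the other way around: for a known line impedance (e.g. a wire you
    // estimated at ~100 ohm), pick the series resistor as:
    const double z_line = 100.0;   // assumed wire/trace impedance [ohm]
    std::printf("series resistor ~ %.0f ohm\n", z_line - rds_on);
    return 0;
}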
« Last Edit: September 25, 2022, 03:28:53 pm by Siwastaja »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14476
  • Country: fr
Re: Anyone used the Wiznet ethernet chips?
« Reply #121 on: September 25, 2022, 05:48:10 pm »
I read the docs, which say not to write to the Tx buffer space while a TCP packet is being transmitted. However, I'm not sure why. I think they don't want you to change the Tx WR pointer while a packet is in transmission, but why should the buffer space itself be inaccessible? I suppose they would use dual-port RAM to let the Ethernet packet engine and SPI engine operate independently. So I went ahead and implemented this overlapped approach anyway.

[...]

Not sure why I see so many benchmark figures of this chip with much lower numbers. Obviously the SPI bus is a bottleneck (this STM32H7 has a FIFO, so by burning CPU cycles it doesn't need DMA to saturate the SPI bus), but if you use DMA combined with this 'trick', it should just be a matter of maximizing the SPI clock.

This is because indeed most of them are made with a rather low SPI clock and no double buffering at all. As you mentioned, the docs actually say not to do double buffering, so probably people just stick to that advice. Or maybe they just don't know any better. But of course, maximizing the SPI clock and filling the TX buffer while the chip is transmitting, rather than waiting for it to complete, are the only ways of maximizing throughput. No way around it.

The more concerning point IMO is not so much that simple users do not manage to maximize throughput but that the vendor itself does not either apparently. :-DD
 
The following users thanked this post: hans

