Author Topic: lwip UDP disable CRC


Offline tanffnTopic starter

  • Contributor
  • Posts: 17
lwip UDP disable CRC
« on: May 30, 2015, 11:30:59 pm »
As per your suggestions I rewrote the code to avoid fragmentation, and it works quite nicely!
Thank you! :)

I now want to add a one-byte header to the UDP packet, but I still want to avoid copying from one buffer to another AND I want to avoid packet fragmentation.

I am streaming UDP data from SPI->DMA->MEM->ETH, so I use a pbuf by reference:

Code: [Select]
    TaskUdpStreamErr = netbuf_ref(buf, dataPointer, TASK_UDP_STREAM_PACKET_SIZE);
    // dataPointer is filled by the DMA and shared with other tasks

so I tried this hack:
Code: [Select]
    TaskUdpStreamErr = netbuf_ref(buf, dataPointer, TASK_UDP_STREAM_PACKET_SIZE + 1);
    buf->p->flags = 0x80 | (i << 5) | ci;

    // ...and in low_level_output():
    if (q->flags & 0x80)
    {
        *((uint8_t*)buffer + bufferoffset) = q->flags;
        bufferoffset += 1;
        byteslefttocopy -= 1;
    }

Problem is that it will fail on the CRC...

1. Is there a way to disable the CRC? Is that a valid solution?
2. What other solution do you suggest?
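
One alternative I am considering, which would avoid touching the checksum handling at all, is chaining a separate one-byte header buffer in front of the referenced payload. Untested sketch, error checks omitted; dataPointer, conn, i and ci are the same variables as above:

Code: [Select]
struct netbuf *hdr = netbuf_new();
struct netbuf *payload = netbuf_new();

uint8_t *h = (uint8_t*)netbuf_alloc(hdr, 1);   // lwip allocates the 1-byte header
*h = 0x80 | (i << 5) | ci;

// payload stays in the DMA buffer, no copy
netbuf_ref(payload, dataPointer, TASK_UDP_STREAM_PACKET_SIZE);

netbuf_chain(hdr, payload);   // hdr now owns both pbufs; the tail netbuf struct is freed
netconn_send(conn, hdr);      // datagram = 1 header byte + payload
netbuf_delete(hdr);

Would that be a sane approach?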
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: lwip UDP disable CRC
« Reply #1 on: May 31, 2015, 12:00:56 am »
You need to increase TASK_UDP_STREAM_PACKET_SIZE; there is no way around it. You can send UDP without a checksum to reduce calculation overhead. Just make the checksum field zero. Usually the low-level protocol (Ethernet) has CRC error checking so the UDP checksum is totally redundant.
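
In Lwip this is just a build option. A sketch of the relevant lwipopts.h settings, assuming checksums are generated in software (if your MAC inserts checksums in hardware, the port's glue code normally controls this instead):

Code: [Select]
/* lwipopts.h -- sketch, software checksumming assumed */
#define CHECKSUM_GEN_UDP    0   /* transmit UDP datagrams with the checksum field set to zero */
#define CHECKSUM_CHECK_UDP  0   /* do not verify the checksum of received UDP datagrams */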
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline dfmischler

  • Frequent Contributor
  • **
  • Posts: 548
  • Country: us
Re: lwip UDP disable CRC
« Reply #2 on: May 31, 2015, 01:25:28 am »
Quote
Usually the low-level protocol (Ethernet) has CRC error checking so the UDP checksum is totally redundant.
No, the UDP checksum is not totally redundant.  Nor is the TCP checksum.  These checksums perform an important end-to-end function that can help deal with issues like routers with corrupt firmware, RAM problems, etc., that cannot be detected with link-level CRCs alone (because these are checked on incoming packets and generated on outgoing packets for each hop).  And while UDP checksums are allowed to be unchecked, this is not considered anything close to best practice.

The need for a datagram-level (e.g. UDP packet or TCP segment) end-to-end verification is real.  Many years ago, I worked with a customer on a network problem like this.  The network stack was not TCP/IP, but DECnet Phase IV.  The symptom was that the remote File Access Listener (kinda like an FTP server but for DECnet) would cooperate in a file transfer until the end of the transfer, at which point DECnet would announce that the file transfer had failed due to a CRC error.  DECnet Phase IV had no per-datagram end-to-end checksum, and the FAL only checked once for the entire file (all or nothing).  A bad DECrouter 250 was corrupting some of the packets as they went through the network and so the overall file transfer would fail even though all of the link-level CRCs were correct.  And you might have wasted an hour or more transferring the file only to be told at the end that the transfer failed.  A per-packet end-to-end checksum would have let the poor user know right away that the transfer was not working properly, or better yet retransmit the bad packets.  This is something that TCP/IP got right.
« Last Edit: May 31, 2015, 02:05:25 pm by dfmischler »
 
The following users thanked this post: VooDust

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: lwip UDP disable CRC
« Reply #3 on: May 31, 2015, 03:14:05 am »

Quote
Problem is that it will fail on the CRC...

1. Is there a way to disable the CRC? Is that a valid solution?
2. What other solution do you suggest?

Unless I am mistaken, making the same change to the data will always flip the same bits of the CRC.

Just to be sure I wasn't barking mad, I tested the idea in an online CRC calculator, using the CRC polynomial P(x) = x^8 + x^5 + x^4 + x^0
Hex string => CRC
01000000 => 0x9b
02000000 => 0x07

So changing the first byte from 0x01 to 0x02 alters the CRC by 0x9C (XOR). I tested this a couple of times:

010F0F0F => 0x9D
020F0F0F => 0x01

01ABCDEF => 0xAA
02ABCDEF => 0x36

Likewise there is most probably a 'simple' way to combine the existing CRC with the CRC of the added data byte (e.g. via a 256-entry table).

As a hint, if your initial values are all zeros

01ABCDEF has the same CRC as 0001ABCDEF which is the same as that of 000000000000000000000000000001ABCDEF

So to add 0x89 at the start of a CRC'ed data block it should be as easy as this:

* taking your data string (in this case ABCDEF)
* adding 0x00 to the front (the CRC shouldn't change if that is the only change in the packet).
* looking up the difference between the CRCs of 00000000 (which will be zeros!) and 89000000 (0x76).

As ABCDEF has a CRC of  0x31, I predict 89ABCDEF has a CRC of (0x76 xor 0x31) = 0x47... and indeed it does!
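
If you want to convince yourself in code, here is a small stand-alone sketch. My assumptions: a non-reflected CRC-8 with that polynomial, zero initial value and no final XOR; a calculator with different bit-order conventions will print different byte values, but the predicted and actual CRCs will still match each other:

Code: [Select]
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-8, polynomial x^8 + x^5 + x^4 + 1 (0x31), MSB first,
 * zero initial value, no final XOR. The zero init is what makes leading
 * 0x00 bytes "free" and the XOR trick work. */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    while (len--) {
        crc ^= *data++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x31) : (uint8_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    const uint8_t payload[]  = { 0xAB, 0xCD, 0xEF };        /* original data        */
    const uint8_t header[]   = { 0x89, 0x00, 0x00, 0x00 };  /* new byte + zero pad  */
    const uint8_t combined[] = { 0x89, 0xAB, 0xCD, 0xEF };  /* header + data        */

    /* Because the CRC is linear over XOR (with zero init), the CRC of the
     * combined message is the XOR of the two partial CRCs. */
    uint8_t predicted = (uint8_t)(crc8(header, sizeof header) ^ crc8(payload, sizeof payload));
    printf("predicted %02X, actual %02X\n", predicted, crc8(combined, sizeof combined));
    return 0;
}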

« Last Edit: May 31, 2015, 03:18:36 am by hamster_nz »
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline tanffnTopic starter

  • Contributor
  • Posts: 17
Re: lwip UDP disable CRC
« Reply #4 on: June 15, 2015, 05:01:12 am »
Got married last week! So I've been busy  :popcorn:

I read the debate, left the CRC on, and did the following implementation (as a test):

               
Code: [Select]
for (uint32_t ki = 0; ki < 1000; ki++)
{
    for (uint32_t i = 0; i < 6; i++)
    {
        for (uint32_t ci = 0; ci < 4; ci++)
        {
            for (uint32_t j = 0; j < 501; j++)
                tempDebug[j] = i * 2000 + ci * 5000 + (j % 2) * 100;

            buf = netbuf_new();
            netbuf_ref(buf, &tempDebug, 1000 + 4);
            // Hack: to avoid an extra copy (@low_level_output)
            // bit 7     - flag
            // bits 4..6 - ID
            // bits 0..3 - Chunk Index
            buf->p->flags = 0x80 | (i << 4) | ci;
            TaskUdpStreamErr = netconn_send(conn, buf);
            netbuf_delete(buf);
            vTaskDelay(2 / portTICK_PERIOD_MS);
        }
    }
}
vTaskDelay(1000 / portTICK_PERIOD_MS);

And in the low level code:

Code: [Select]
    // hack to quickly avoid fragmentation and memcopy
    if (q->flags & 0x80)
    {
        uint32_t t = q->flags;
        memcpy( (uint8_t*)((uint8_t*)buffer + bufferoffset), &t, 4 );
    }

Problem is that with a router/switch, MOST (but not all) of the messages reach the PC:
With a 1 ms delay between messages: 20232 out of 24000.
With a 2 ms delay between messages: 23947 out of 24000.

I know there is no flow control in UDP, but if I'm directly connected (with a switch), why am I losing packets?
Is there a way to improve it?
Do I have to switch to TCP?
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: lwip UDP disable CRC
« Reply #5 on: June 15, 2015, 06:12:07 am »
What program do you use on the PC side? Did you check with Wireshark whether you got all the packets?
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: lwip UDP disable CRC
« Reply #6 on: June 15, 2015, 07:57:28 am »

There are no guarantees anywhere along the chain, so any reason is possible... One broadcast packet from another device on a busy link is enough to cause somebody's packet to be dropped if a port is really busy.

Another possible issue is that unless your target device occasionally sends a packet to the source device, the source will occasionally have to time out its ARP table entry, which records which Ethernet address is servicing which IP address. It is perfectly acceptable behaviour to drop UDP packets while ARP resolution is in progress.

One easy thing to try would be to insert a small known delay between bursts of packets, allowing a bit of quiet time for other data on the link. This might reduce congestion issues.

Or you could try adding static ARP entries on the source (rough sketch below).

Or maybe 100 other potential fixes... UDP is like that - delivery isn't guaranteed, so it will never be 100%.
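
For the static ARP idea: Lwip has etharp_add_static_entry() when ETHARP_SUPPORT_STATIC_ENTRIES is enabled in lwipopts.h. A rough sketch with placeholder IP/MAC (the address type is named ip4_addr_t in newer Lwip versions):

Code: [Select]
#include "lwip/etharp.h"

/* Pin the PC's MAC address in the ARP table so streaming never stalls on
 * ARP resolution. Requires ETHARP_SUPPORT_STATIC_ENTRIES in lwipopts.h.
 * The IP and MAC below are placeholders. */
static void add_static_arp_for_pc(void)
{
    ip_addr_t pc_ip;
    struct eth_addr pc_mac = { { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 } };

    IP4_ADDR(&pc_ip, 192, 168, 1, 100);
    if (etharp_add_static_entry(&pc_ip, &pc_mac) != ERR_OK) {
        /* table full, or an entry for this address already exists */
    }
}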

BTW, as you most likely already know, TCP avoids both of these potential problems: packets flow both ways (keeping ARP entries current), congestion control is built in, and the protocol makes sure that lost packets are resent.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: lwip UDP disable CRC
« Reply #7 on: June 15, 2015, 08:09:50 am »
Quote
I know there is no flow control in UDP, but if I'm directly connected (with a switch), why am I losing packets?
Is there a way to improve it?
Do I have to switch to TCP?

1) Because UDP, by design, offers no guarantees whatsoever about when packets will be delivered, in what order they will be delivered, or even if they will be delivered.

2) No, by design. But you can build things on top of it - such as transport protocols.

3) If you want guarantees then you will need a transport protocol. TCP is a transport protocol. Rolling your own transport protocol is unwise, except as a learning exercise, or if there are known special properties of your application / signals that could benefit.
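
As a trivial illustration of "building on top of UDP": the chunk-index byte you already send could be used on the PC side to count gaps. A hypothetical sketch (it assumes a plain 8-bit wrapping sequence counter in the first byte, not your exact flag/ID/index layout):

Code: [Select]
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint8_t  expected_seq = 0;
static uint32_t lost_count   = 0;

/* Call this for every UDP datagram received on the PC side. */
void on_datagram(const uint8_t *data, size_t len)
{
    if (len == 0) return;

    uint8_t seq = data[0];                        /* sequence byte prepended by the sender */
    uint8_t gap = (uint8_t)(seq - expected_seq);  /* modulo-256 distance */

    if (gap != 0) {
        lost_count += gap;                        /* datagrams skipped (or reordered) */
        printf("gap of %u before seq %u, %lu lost so far\n",
               (unsigned)gap, (unsigned)seq, (unsigned long)lost_count);
    }
    expected_seq = (uint8_t)(seq + 1);
    /* ...hand data + 1, len - 1 to the application... */
}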
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: lwip UDP disable CRC
« Reply #8 on: June 15, 2015, 09:00:56 am »
In theory UDP packets can get lost but in reality it almost never happens if you are on the same network segment. One of my projects pumps 80Mbit/s of UDP traffic onto the network and even in a VM Windows has no problem receiving all the packets. Another thing to consider is that VOIP and any streaming protocol use UDP so UDP transport has to be reliable nowadays. See http://www.onsip.com/about-voip/sip/udp-versus-tcp-for-voip

If packets get lost at such low rates, chances are either the application used to monitor the packets can't keep up, or the packets don't get sent because Lwip is doing an ARP request instead of sending out a packet. IIRC Lwip has an ARP cache refresh timer which may cause trouble.

Switching to TCP can add significant delays due to resending packets / waiting for a timeout, and therefore can cause more problems for realtime applications than it solves.
« Last Edit: June 15, 2015, 09:05:14 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline danfo098

  • Contributor
  • Posts: 17
  • Country: se
    • diyprojects.se
Re: lwip UDP disable CRC
« Reply #9 on: June 15, 2015, 09:50:43 am »
If the Ethernet frame CRC32 is incorrect, switches (not just routers) will drop the packet too, so there is probably still a problem with the CRC.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19497
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: lwip UDP disable CRC
« Reply #10 on: June 15, 2015, 09:52:20 am »
Quote
In theory UDP packets can get lost but in reality it almost never happens if you are on the same network segment.

In practice UDP packets do get lost. Whether or not that is a problem is application dependent. The application designer has to make a decision.

Quote
Switching to tcp/ip can add significant delays due to resending packets / waiting for a timeout and therefore can cause more problems for realtime applications than it solves.

Just so. Hence my comment "if there are known special properties of your application / signals that could benefit".

TANSTAAFL
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline gmb42

  • Frequent Contributor
  • **
  • Posts: 294
  • Country: gb
Re: lwip UDP disable CRC
« Reply #11 on: June 15, 2015, 11:53:36 am »
Have you checked the switch's port statistics, if it provides them?
 

