Author Topic: [SOLVED] USB CDC "flow control"?


Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
[SOLVED] USB CDC "flow control"?
« on: April 24, 2016, 10:38:09 am »
TL;DR: Does the USB CDC protocol support flow control (specifically, the MCU says "hold up", and any writes on the connected PC block)? How does the MCU say "hold up"?

I just ran the USB CDC demo for my LPCXpresso LPC11U37H (an ARM Cortex-M0+ board, FWIW), and I tested what happens when the UART's buffers fill up by adding long sleeps in my application code to simulate a complex operation. Perhaps I was expecting too much, but I was expecting the USB interrupt handlers to notice when the buffers were getting near full and to tell the PC to hold up. Alas, the code I see is vastly simpler; in fact, there's no ring buffer to speak of at all, and bytes are dropped.

So, as written above, is what I'm trying to do here even possible with the standard USB CDC protocol? Which endpoint should the "hold up" command be sent on, and what should it look like? I'm not looking for the complete answer; just some basic pointers would be greatly appreciated!

EDIT: Here's a more concrete version of my question:

[Image: PC <-> USB CDC USB-to-UART converter <-> 1200 baud RS-232 device]

In this hypothetical situation, we have a 1200 baud device that is fast enough to handle whatever a PC throws at it, because the 1200 baud channel is naturally quite limited and the device can hypothetically handle each line/character as it comes. This means that the RS-232 device here is guaranteed never to require any form of RS-232 flow control. However, once you introduce a USB CDC-based USB-to-UART converter, there is the possibility that the PC could overrun the buffer in the USB-to-UART converter, which means that this system can now drop bytes. What I'm asking is, how is this problem handled/prevented? Or is this genuinely something that cannot be avoided?

SOLUTION: My LPC should not be ACKing USB packets that it can't accommodate in its buffer. The PC will retransmit those packets, so the data is not dropped. Actually implementing this is left as an exercise for the reader (read: I haven't got this working myself yet).
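
A rough sketch of the idea in C (the usb_ep_* and uart_* names are placeholders, not the actual NXP ROM API):

Code: [Select]
#include <stdint.h>
#include <stdbool.h>

/* Placeholder endpoint/driver hooks -- stand-ins, not the NXP ROM API. */
#define CDC_OUT_EP       0x02            /* bulk OUT endpoint number (example) */
#define CDC_PACKET_SIZE  64

extern bool     usb_ep_packet_ready(uint8_t ep);                  /* OUT data waiting?     */
extern uint32_t usb_ep_read(uint8_t ep, uint8_t *dst, uint32_t max);
extern void     usb_ep_rearm(uint8_t ep);                         /* allow next OUT packet */
extern uint32_t uart_tx_ring_free(void);
extern void     uart_tx_ring_write(const uint8_t *src, uint32_t len);

/* Call from the main loop (or the endpoint interrupt).  The key point: the
 * endpoint is only re-armed once the data has somewhere to go, so any further
 * OUT packets are NAKed by the hardware and the host retries them later --
 * nothing is dropped. */
void cdc_out_service(void)
{
    static uint8_t pkt[CDC_PACKET_SIZE];

    if (!usb_ep_packet_ready(CDC_OUT_EP))
        return;                               /* nothing received yet           */

    if (uart_tx_ring_free() < CDC_PACKET_SIZE)
        return;                               /* no room: leave the EP un-armed */

    uint32_t n = usb_ep_read(CDC_OUT_EP, pkt, sizeof pkt);
    uart_tx_ring_write(pkt, n);
    usb_ep_rearm(CDC_OUT_EP);                 /* space again: next packet is ACKed */
}
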

EDIT: Fixed broken image link.
« Last Edit: November 04, 2016, 11:27:36 am by rs20 »
 

Offline baoshi

  • Regular Contributor
  • *
  • Posts: 167
  • Country: sg
    • Digital Me
Re: USB CDC "flow control"?
« Reply #1 on: April 24, 2016, 12:05:17 pm »
The control lines are part of the CDC-ACM protocol. These are seldom implemented in hardware; you need software to toggle/read some GPIOs to implement flow control.
 

Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: USB CDC "flow control"?
« Reply #2 on: April 24, 2016, 12:14:33 pm »
The control lines are part of the CDC-ACM protocol. These are seldom implemented in hardware; you need software to toggle/read some GPIOs to implement flow control.

Thanks, but:

To be clear, I'm not concerned about UART Software/Hardware flow control, this is a purely virtual COM port device (although the answer to my question would obviously be needed for anyone implementing those things properly.)
 

Offline exmadscientist

  • Frequent Contributor
  • **
  • Posts: 342
  • Country: us
  • Technically A Professional
Re: USB CDC "flow control"?
« Reply #3 on: April 24, 2016, 08:18:13 pm »
The CDC-ACM protocol does provide for handshaking signals. The NXP LPC USB drivers, however, don't actually do anything with them.

In my experience with the LPC11U67's ROM drivers, which I believe are shared among many different LPC variants, I've found them to be barely-functional, poorly-implemented piles of junk. They correctly and usably implement only the bare minimum for claiming "Built-in USB drivers!" on their data sheet. I had to rewrite sizeable chunks of functionality to work around their deficiencies, which was not trivial since the damned drivers themselves are closed-source (though they are, I think, just a slightly modified derivative of the older nxpusblib code that's floating around).

If I remember right, the functions that you'll need to set up are the event handlers for CDC line code changes (I think I called them CDC_OnSetLineCode() or something close to that).
 
The following users thanked this post: rs20

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26874
  • Country: nl
    • NCT Developments
Re: USB CDC "flow control"?
« Reply #4 on: April 24, 2016, 08:26:46 pm »
The main question, though, is whether the flow control is handled at the USB layer or at the application layer (in which case the application must implement serial port flow control).
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: rs20

Offline radar_macgyver

  • Frequent Contributor
  • **
  • Posts: 694
  • Country: us
Re: USB CDC "flow control"?
« Reply #5 on: April 24, 2016, 08:49:26 pm »
You can send the SerialState notification (code 0x20, described in section 6.5.4 of the CDC PSTN subclass document). This allows the CDC endpoint to implement "virtual" interrupts to the host. Bits 0 and 1 are equivalent to the DCD and DSR signals from RS-232, so if you send DSR low and hardware handshake is enabled, this will prevent the CDC driver from sending additional data to your endpoint.

I haven't personally used the DSR bit, but I have used DCD and RI to send GPS pulse-per-second information to the host. Please note that the CDC-ACM driver in Linux kernels prior to 2.6.32 did not implement this properly, so your application couldn't call select() and be notified of the interrupt condition. Kernels with this bug fixed will correctly implement hardware handshake for flow control.
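
For reference, that notification is a 10-byte message sent on the CDC notification (interrupt IN) endpoint: a class-request-style header followed by the two-byte UART state bitmap from section 6.5.4. A sketch in C; the endpoint/interface numbers and usb_ep_write() are placeholders, not any particular stack's API:

Code: [Select]
#include <stdint.h>

#define CDC_INT_IN_EP   0x81    /* notification (interrupt IN) endpoint -- example value */
#define CDC_CTRL_IFACE  0       /* CDC control interface number -- example value         */

#define CDC_NOTIFY_SERIAL_STATE  0x20

/* Bits of the two-byte UART state bitmap. */
#define SERIAL_STATE_DCD    (1u << 0)   /* bRxCarrier  */
#define SERIAL_STATE_DSR    (1u << 1)   /* bTxCarrier  */
#define SERIAL_STATE_BREAK  (1u << 2)
#define SERIAL_STATE_RI     (1u << 3)   /* bRingSignal */

struct __attribute__((packed)) cdc_serial_state_notification {
    uint8_t  bmRequestType;   /* 0xA1: device-to-host, class, to interface */
    uint8_t  bNotification;   /* 0x20: SERIAL_STATE                        */
    uint16_t wValue;          /* always 0                                  */
    uint16_t wIndex;          /* CDC control interface                     */
    uint16_t wLength;         /* 2                                         */
    uint16_t wSerialState;    /* bitmap of the SERIAL_STATE_* bits above   */
};

/* usb_ep_write() is a placeholder for however your stack queues a packet on
 * the interrupt IN endpoint. */
extern int usb_ep_write(uint8_t ep, const void *data, uint32_t len);

void cdc_send_serial_state(uint16_t state_bits)
{
    struct cdc_serial_state_notification n = {
        .bmRequestType = 0xA1,
        .bNotification = CDC_NOTIFY_SERIAL_STATE,
        .wValue        = 0,
        .wIndex        = CDC_CTRL_IFACE,
        .wLength       = 2,
        .wSerialState  = state_bits,
    };
    usb_ep_write(CDC_INT_IN_EP, &n, sizeof n);
}
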
« Last Edit: April 24, 2016, 08:50:57 pm by radar_macgyver »
 
The following users thanked this post: rs20

Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: USB CDC "flow control"?
« Reply #6 on: April 25, 2016, 01:09:26 am »
I'm not sure DSR, CTS, or even XON/XOFF are the answers here. I tried using XON/XOFF, which "ought" to work just as well (and is easier to implement; I just sent '\x13', i.e. XOFF, before going into my sleep), but PuTTY didn't seem to respect the incoming XON/XOFF characters at all, and bytes sent during the sleep were simply dropped.

To put it another way, let me pose you a more concrete question, in the form of a picture:

[Image: PC <-> USB CDC USB-to-UART converter <-> 1200 baud RS-232 device]

In this hypothetical situation, we have a 1200 baud device that is fast enough to handle whatever a PC throws at it, because the 1200 baud channel is naturally quite limited and the device can hypothetically handle each line/character as it comes. This means that the RS-232 device here is guaranteed never to require any form of RS-232 flow control. However, once you introduce a USB CDC-based USB-to-UART converter, there is the possibility that the PC could overrun the buffer in the USB-to-UART converter, which means that this system can now drop bytes. What I'm asking is, how is this problem handled/prevented? Or is this genuinely something that cannot be avoided?

In my experience with the LPC11U67's ROM drivers, which I believe are shared among many different LPC variants, I've found them to be barely-functional, poorly-implemented piles of junk.

Agreed, I found that sending multiple writes in quick succession (sending data back from the MCU to the PC) would cause the code to just drop packets, because if the transmitter was busy, it'd just bail out rather than busy-looping to wait for it to become free. I added the busy loop, and now it works nicely (haven't resolved the bigger issue in the other direction above though). Why did they choose such a stupid option!? To be fair though, I've found similar problems in AVR sample code as well; it boggles my mind that sample USB code seems to universally be so utterly terrible.

EDIT: Fixed broken image link.
« Last Edit: November 04, 2016, 11:27:32 am by rs20 »
 

Offline ade

  • Supporter
  • ****
  • Posts: 231
  • Country: ca
Re: USB CDC "flow control"?
« Reply #7 on: April 25, 2016, 03:29:31 am »
Quote
What I'm asking is, how is this problem handled/prevented?

If the host sends data to the converter which would result in an overrun, the converter should refuse by sending a USB NAK handshake packet back to the host.  The host should then throttle the transmission and attempt to send the data again at a later time.

So basically flow control happens on the USB side by the device responding with ACK or NAK depending on its buffer status.  This flow control is therefore the responsibility of the USB CDC firmware.

RTS/CTS/XON/XOFF is for end-to-end flow control, implemented by (your) application.
 
The following users thanked this post: oPossum, rs20

Offline exmadscientist

  • Frequent Contributor
  • **
  • Posts: 342
  • Country: us
  • Technically A Professional
Re: USB CDC "flow control"?
« Reply #8 on: April 25, 2016, 03:47:17 am »
Where is the LPC in your diagram? I don't see it there, and it's not a monolithic USB to UART bridge like an FTDI chip or equivalent. Yes, it can be used as one, but that requires explicit handling in your USB code of the CDC-to-actual-hardware-UART interface. As ade says, that will probably involve NAK packets or something else at the USB layer.

Something to consider is that the baud rate setting on a virtual COM port doesn't actually change anything. It just tells the USB device, "hey, your baud rate is <X> now". It's up to the device's application code to do something intelligent with that information (or to ignore it, as is often appropriate). The USB link rate never changes through any of this.
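
For what it's worth, all the host actually sends for that is the seven-byte line coding block below; a pure virtual COM port can stash it (so GET_LINE_CODING can report it back) or ignore it entirely. A sketch only; the handler name is hypothetical, not any particular stack's callback:

Code: [Select]
#include <stdint.h>

/* Payload of the CDC SET_LINE_CODING / GET_LINE_CODING requests (7 bytes). */
struct __attribute__((packed)) cdc_line_coding {
    uint32_t dwDTERate;     /* requested baud rate, e.g. 1200                   */
    uint8_t  bCharFormat;   /* 0 = 1 stop bit, 1 = 1.5 stop bits, 2 = 2         */
    uint8_t  bParityType;   /* 0 = none, 1 = odd, 2 = even, 3 = mark, 4 = space */
    uint8_t  bDataBits;     /* 5, 6, 7, 8 or 16                                 */
};

static struct cdc_line_coding current_coding = { 115200, 0, 0, 8 };

/* Hypothetical hook, called by the USB stack when SET_LINE_CODING arrives.
 * On a pure virtual COM port there is no physical UART to reconfigure, so
 * simply remembering (or ignoring) the values is perfectly valid; the USB
 * link itself keeps running at full USB speed regardless. */
void cdc_on_set_line_coding(const struct cdc_line_coding *lc)
{
    current_coding = *lc;
    /* A real USB-to-UART bridge would reprogram its hardware UART here. */
}
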
 

Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: USB CDC "flow control"?
« Reply #9 on: April 25, 2016, 04:32:48 am »
Thanks, looks like USB NAKs are the answer to my problem. Coercing the crappy demo code* I have into doing that might be a difficult challenge, especially given that half the code is buried in some ROM somewhere.

* I think the demo code I have right now will be ACKing everything.

Where is the LPC in your diagram? I don't see it there, and it's not a monolithic USB to UART bridge like an FTDI chip or equivalent. Yes, it can be used as one, but that requires explicit handling in your USB code of the CDC-to-actual-hardware-UART interface. As ade says, that will probably involve NAK packets or something else at the USB layer.

Yes, USB NAKs are what I'm after. That diagram doesn't represent my use case at all; I was just trying to formulate the question in a way that people would understand (and I succeeded! :-) )
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: [SOLVED] USB CDC "flow control"?
« Reply #10 on: April 25, 2016, 08:47:20 am »
Quote
looks like USB NAKs are the answer to my problem. Coercing the crappy demo code* I have into doing that
I'm shocked that it's not already doing that.  After all, sending serial data at 960bps is a "normal case" even though the USB side of the connection will run MUCH faster than that.  If there were no flow control of any kind, USB/Serial would be essentially non-functional.

Um.  I thought this would normally be handled by USB "hardware."  You queue up buffers for the USB endpoint, and the host fills them up.  The serial code pulls away one of the buffers and requeues the data for the serial port, then gives the buffer back to the USB.  When the serial output buffer is "full", the serial code stops taking USB buffers, causing them to sit in the USB queue, full.  When all the USB buffers on the endpoint are full, the USB hardware starts sending NAKs instead of ACKs...  Or something like that.  (I haven't really looked into this in detail, just read some docs and watched some seminars and such.  But it's pretty typical of a LOT of protocols: if you can't do anything with the data, you shouldn't read it, and THAT eventually activates flow control.  If you read the data first, and THEN decide that you can't do anything with it, it's too late!)
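
In rough C, that buffer handoff might look like this (the queue and UART ring calls are invented names, purely for illustration):

Code: [Select]
#include <stdint.h>
#include <stdbool.h>

/* One of the buffers queued on the CDC OUT endpoint. */
typedef struct {
    uint8_t  data[64];
    uint32_t len;
} usb_buffer_t;

/* Placeholder queue/UART API -- invented for illustration. */
extern bool          usb_rx_queue_has_buffer(void);           /* host filled a buffer?     */
extern uint32_t      usb_rx_queue_peek_len(void);             /* size of the oldest buffer */
extern usb_buffer_t *usb_rx_queue_take(void);                 /* dequeue the oldest buffer */
extern void          usb_rx_queue_give_back(usb_buffer_t *b); /* re-queue it on the EP     */
extern uint32_t      uart_tx_ring_free(void);
extern void          uart_tx_ring_write(const uint8_t *src, uint32_t len);

/* Only take a filled USB buffer when the serial side can absorb it.  While all
 * the endpoint buffers are full and none has been given back, the USB hardware
 * NAKs further OUT packets -- and that is the flow control. */
void cdc_to_uart_service(void)
{
    while (usb_rx_queue_has_buffer() &&
           uart_tx_ring_free() >= usb_rx_queue_peek_len()) {
        usb_buffer_t *b = usb_rx_queue_take();
        uart_tx_ring_write(b->data, b->len);
        usb_rx_queue_give_back(b);
    }
}
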
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1636
  • Country: nl
Re: [SOLVED] USB CDC "flow control"?
« Reply #11 on: April 25, 2016, 11:36:33 am »
Yep, pretty much what westfw said. By not giving an endpoint buffer back to the hardware, your firmware makes the peripheral NAK the host's OUT tokens, and depending on the endpoint type (bulk/interrupt vs isochronous) the host should resend the data, and thus a certain amount of data-rate flow control can be achieved.

Essentially, USB benchmarks like this one measure how fast your device can free up those buffers while consuming most if not all of the CPU resources (obviously depending on how much processing is done on the data).

I'm not certain what the timeout of the host is if a transfer continues to fail; sometimes your USB device will enumerate and connect properly, but the connection may eventually be closed because the device does not respond to any IN tokens for prolonged periods (firmware crash, etc.).
 

Offline ade

  • Supporter
  • ****
  • Posts: 231
  • Country: ca
Re: [SOLVED] USB CDC "flow control"?
« Reply #12 on: April 25, 2016, 04:06:49 pm »
Quote
I thought this would normally be handled by USB "hardware."

It's both.  The hardware will send the NAK but it's up to the firmware/driver to implement flow control. 

E.g., to decide when/if to retry the packet, how many times to retry, whether to move the packet to the back of the send queue to allow other transmissions to proceed, or alternatively to simply drop all NAKed packets, etc.  (And for Hi-Speed, similarly supporting NYET and PING.)
 

Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: [SOLVED] USB CDC "flow control"?
« Reply #13 on: June 06, 2016, 11:49:08 am »
Just closing the loop on this -- I implemented CDC firmware for my LPC11U37H completely from scratch using the USB peripheral directly (OK, OK, I did steal the USB Descriptor structs from the NXP demo. But that's all!)  Quite a rewarding experience when it works!

I handled reading from the endpoint in the simplistic but reliable way: polling to see whether there's a packet available in the buffer when it's time to read a byte; not interrupt-driven at all. The NAK'ing works perfectly: I can short a jumper on my dev board to instruct my firmware to stop reading bytes, and I did so for 8 hours overnight. Upon removing that jumper, the file I had been transmitting over the USB line just resumed exactly where it left off the previous evening, with no dropped bytes at all (obviously this shouldn't be surprising, but given some of the experiences I've had with other people's code, I was starting to wonder...). This simulates the case, for example, of a milling machine accepting GCode over USB. It turns out the driving software can just dump the entire GCode file into the USB as a single write; there's no need for hacky application-level workarounds or DTR/DTE-style control signals. You just need USB firmware that doesn't suck; the Linux and Windows operating systems play their roles in USB flow control flawlessly (I was wondering if the OS would throw some sort of error after 2 minutes of inactivity or something...).

The problem with the existing demos is that they are interrupt-based, and the interface provides no way (that I know of) of rejecting an incoming packet. The interrupt handler seems to read the packet from memory, mark the endpoint as active/ready-to-receive, and then palm the packet off to the next layer of the library, with no way for that next layer to say "I haven't got space to store this!". Obviously it's possible to get an interrupt-based approach right (and an optimal solution does involve interrupts), but the interface for a polling approach is much cleaner and easier to get right (and only meaningfully suboptimal in contrived cases).
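
The polled read boils down to something like the sketch below; the usb_ep_* calls and jumper_is_shorted() are stand-ins for the direct register accesses, not the real firmware:

Code: [Select]
#include <stdint.h>
#include <stdbool.h>

#define CDC_OUT_EP  0x02                         /* bulk OUT endpoint (example) */

/* Stand-ins for the direct USB peripheral accesses and the test-jumper GPIO. */
extern bool     usb_ep_packet_ready(uint8_t ep);
extern uint32_t usb_ep_read(uint8_t ep, uint8_t *dst, uint32_t max);
extern void     usb_ep_rearm(uint8_t ep);
extern bool     jumper_is_shorted(void);         /* "stop consuming bytes" test input */

static uint8_t  pkt[64];
static uint32_t pkt_len, pkt_pos;

/* Polled, byte-at-a-time read.  Returns -1 when no byte is available.  While
 * the firmware isn't reading (jumper shorted, or busy elsewhere), the endpoint
 * stays un-armed and the hardware NAKs the host, so a host-side write simply
 * blocks and resumes later with no bytes lost. */
int cdc_getchar(void)
{
    if (pkt_pos == pkt_len) {
        if (jumper_is_shorted() || !usb_ep_packet_ready(CDC_OUT_EP))
            return -1;
        pkt_len = usb_ep_read(CDC_OUT_EP, pkt, sizeof pkt);
        pkt_pos = 0;
        usb_ep_rearm(CDC_OUT_EP);                /* ready to accept the next packet */
    }
    return pkt[pkt_pos++];
}
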

I do wonder if engineers need an "interrupts aren't always the right solution" lesson...
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26874
  • Country: nl
    • NCT Developments
Re: [SOLVED] USB CDC "flow control"?
« Reply #14 on: June 06, 2016, 04:43:21 pm »
The interrupt handler seems to read the packet from memory, mark the endpoint as active/ready-to-receive, and then palm the packet off to the next layer of the library, with no way for that next layer to say "I haven't got space to store this!".
I'd just fix this part and keep the whole thing interrupt driven!

edit: typo
« Last Edit: June 06, 2016, 08:17:58 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: [SOLVED] USB CDC "flow control"?
« Reply #15 on: June 06, 2016, 06:38:36 pm »
Which exact examples were failing?  I thought the NXP USB code was based off of the AVR "LUFA" code, which is widely respected, widely used, and AFAIK doesn't have this problem...
 

Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: [SOLVED] USB CDC "flow control"?
« Reply #16 on: June 06, 2016, 11:45:48 pm »
I'd just fix this part and keep the whole thing interrupt driven!

You can't fix (in fact, you can't even see) ROM code! However, I haven't closely checked whether I've missed a way of doing this. As noted below, I will report back with detailed results.

Also, would you consider the use-case first before making the interrupt vs polling choice? Or would you immediately leap for interrupts?

Which exact examples were failing?  I thought the NXP USB code was based off of the AVR "LUFA" code, which is widely respected, widely used, and AFAIK doesn't have this problem...

There's an interesting lineage from AVR LUFA to the NXP USB demo code to the actual ROM code onboard the NXP chips. Now that I know my expectations of USB flow control are valid, I'll go back and retry all of these and report back on this thread.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26874
  • Country: nl
    • NCT Developments
Re: [SOLVED] USB CDC "flow control"?
« Reply #17 on: June 06, 2016, 11:52:10 pm »
I assumed you didn't use the ROM code because AFAIK it has some other issues as well. I use standard USB CDC code from Keil or NXP (but not LUFA), and that just does things using an interrupt, much like a serial port. A thread-safe FIFO buffer transfers data between the main application and the (interrupt-driven) USB library. Since USB is likely to require service within a specific amount of time, it is safer to have it driven by interrupts than by polling if you want the code to be of generic use.
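
A minimal sketch of such a FIFO (single producer in the USB interrupt, single consumer in the main loop, power-of-two size, free-running indices), assuming a single-core Cortex-M where the ISR always runs to completion:

Code: [Select]
#include <stdint.h>

#define FIFO_SIZE 256u                     /* must be a power of two */
_Static_assert((FIFO_SIZE & (FIFO_SIZE - 1u)) == 0, "FIFO_SIZE must be a power of two");

static uint8_t           fifo_buf[FIFO_SIZE];
static volatile uint32_t fifo_head;        /* written only by the USB ISR   */
static volatile uint32_t fifo_tail;        /* written only by the main loop */

static inline uint32_t fifo_used(void) { return fifo_head - fifo_tail; }
static inline uint32_t fifo_free(void) { return FIFO_SIZE - fifo_used(); }

/* Called from the USB (OUT endpoint) interrupt.  Returns how many bytes were
 * accepted; the caller leaves the endpoint NAKing if it couldn't take them all. */
uint32_t fifo_put(const uint8_t *data, uint32_t len)
{
    uint32_t n = (len < fifo_free()) ? len : fifo_free();
    for (uint32_t i = 0; i < n; i++)
        fifo_buf[(fifo_head + i) & (FIFO_SIZE - 1u)] = data[i];
    fifo_head += n;                        /* single aligned store: atomic on Cortex-M */
    return n;
}

/* Called from the main loop.  Returns -1 when the FIFO is empty. */
int fifo_get(void)
{
    if (fifo_used() == 0)
        return -1;
    uint8_t b = fifo_buf[fifo_tail & (FIFO_SIZE - 1u)];
    fifo_tail++;
    return b;
}
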
« Last Edit: June 06, 2016, 11:55:48 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline exmadscientist

  • Frequent Contributor
  • **
  • Posts: 342
  • Country: us
  • Technically A Professional
Re: [SOLVED] USB CDC "flow control"?
« Reply #18 on: June 07, 2016, 05:02:34 am »
If you're willing and able to post your code up, I'd be interested to see it. One of my back-burner projects is to rework the LPC11U68's USB stack to not be such a monstrosity, and being able to build on someone else's work (and give back what I can, of course) would be very helpful.
 

Offline rs20Topic starter

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
Re: [SOLVED] USB CDC "flow control"?
« Reply #19 on: June 10, 2016, 01:13:45 pm »
If you're willing and able to post your code up, I'd be interested to see it. One of my back-burner projects is to rework the LPC11U68's USB stack to not be such a monstrosity, and being able to build on someone else's work (and give back what I can, of course) would be very helpful.

Attached. It could do with a lot more tidying up, but it should be vaguely readable. Also, it's just the source files; I couldn't figure out a way of zipping the whole workspace that makes it standalone without sending you my entire hard drive. It requires the LPCXpresso LPC11U37H board & chip projects as well to compile (although it should be fairly easy to tease out those dependencies; it's just some GPIO and debug UART stuff).
 

