Well, the typical "latency" on a given bus would be the min/average/max time-to-delivery of data packets. Obviously it can depend on several factors, including the transfer mode, packet size and host-related factors (OS, drivers, application...), but I have used USB HS extensively and know what to expect from it, so I'm only interested in what would significantly differ with USB SS. Since SS is more of a point-to-point protocol, the typical latency should be much lower than with HS, given HS's somewhat "shared" nature and its 125 µs microframes. But any real-life experience with it would be interesting.
Round-trip times are easier to measure than one-way latencies, especially when using non-real-time OSs.
I have read some articles claiming round-trip times (which means timing the following sequence: the host requests to send a data packet to the device, the device receives the data and immediately requests to send another data packet back to the host, and finally the host receives it) of around 30-50 µs with USB SS in the most favorable case (appropriate packet size, etc.). Of course, on non-real-time OSs, that's only an average; you can get higher latencies on occasion. On RT-patched Linux kernels, though, that kind of latency can be sustained.
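For reference, the ping-pong measurement described above could be sketched on the host side roughly like this. This is a minimal sketch, not a tested benchmark: `loopback_transfer` is a hypothetical stand-in for the real bulk OUT + bulk IN transfer pair (which you'd implement with something like pyusb against your actual device), so only the timing/statistics logic is shown for real.

```python
import statistics
import time

def loopback_transfer(payload: bytes) -> bytes:
    # HYPOTHETICAL stand-in for the real host->device->host ping-pong
    # (bulk OUT followed by bulk IN, e.g. via pyusb's dev.write()/dev.read()).
    # Here it simply echoes the payload so the sketch is self-contained.
    return payload

def measure_rtt(n_samples: int = 1000, packet_size: int = 1024):
    """Time the host->device->host round trip n_samples times.

    Returns (min, mean, max) round-trip times in microseconds. On a
    non-real-time OS, expect occasional outliers in the max.
    """
    payload = bytes(packet_size)
    samples = []
    for _ in range(n_samples):
        t0 = time.perf_counter()
        echoed = loopback_transfer(payload)
        t1 = time.perf_counter()
        assert echoed == payload  # sanity-check the echo
        samples.append((t1 - t0) * 1e6)  # seconds -> microseconds
    return min(samples), statistics.mean(samples), max(samples)

if __name__ == "__main__":
    lo, avg, hi = measure_rtt(n_samples=100, packet_size=1024)
    print(f"RTT min/avg/max: {lo:.1f}/{avg:.1f}/{hi:.1f} µs")
```

Reporting min/avg/max rather than a single number matters precisely because of the scheduling jitter mentioned above; on an RT-patched kernel you'd mostly be watching the max.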
I guess I would have to test it myself to really get an idea of what I can obtain...
Anyway, if anyone knows of possible solutions other than the FTDI or Cypress chips, that'd be interesting. Since I'm going to interface this with FPGAs only, in theory I guess I could also consider a USB 3.0 IP core, but I don't know whether such a thing even exists yet, and if it does, it probably costs a fortune and consumes a large amount of logic resources...