| Electronics > Projects, Designs, and Technical Stuff |
| I2C automatic data rate adaption to pullups |
| << < (5/6) > >> |
| ejeffrey:
--- Quote from: Siwastaja on February 26, 2019, 06:55:24 am ---OTOH, I2C implementations tend to be equally (or even more) broken; and microcontroller I2C implementations have a notorious history of being massively bloated, hard to configure and providing little benefit over bit-banging. This has recently got better: for example, the newest I2C implementations on the newest STM32 devices can do actual DMA transactions in the background (woohoo! :clap:), without you babysitting every signal level change. This was impossible just a few years ago; you needed to poll and bit-bang the control registers with correct timing to the extent that you could have bit-banged the actual SDA/SCL instead. So while higher-level, easy-to-use I2C libraries do exist (and due to the standardization of how the protocol is used, they even tend to work!), they also tend to be blocking, slow calls, to the extent that you just cannot use them in an actual product that needs to multitask something other than just communicating with I2C devices, which is almost always the case. So now you are left with trying to utilize the (usually broken or difficult to configure) HW I2C peripheral to the full extent, and writing an interrupt-based I2C implementation that works with your specific device and application. --- End quote --- Good to know. The I2C applications I have used tend to be fine with occasional blocking calls from the main thread, but I assumed that the non-blocking DMA versions mostly worked. |
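The "interrupt-based I2C implementation" Siwastaja describes usually boils down to a small state machine driven from the peripheral's interrupt. The following is a minimal sketch, not any vendor's API: all names are hypothetical, the hardware is abstracted behind three callbacks, and a real driver would also handle repeated starts, reads, arbitration loss and bus errors.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical non-blocking I2C master write, advanced from the ISR.
 * Hardware access goes through callbacks so the logic is testable. */
typedef enum { I2C_IDLE, I2C_ADDR, I2C_DATA, I2C_DONE, I2C_ERROR } i2c_state_t;

typedef struct {
    i2c_state_t state;
    uint8_t addr;                       /* 7-bit slave address */
    const uint8_t *buf;
    size_t len, pos;
    void (*hw_start)(uint8_t addr_rw);  /* issue START + address byte */
    void (*hw_write)(uint8_t byte);     /* load next data byte */
    void (*hw_stop)(void);              /* issue STOP */
} i2c_master_t;

/* Kick off a write; returns immediately, completion happens in the ISR. */
void i2c_write_async(i2c_master_t *m, uint8_t addr, const uint8_t *buf, size_t len)
{
    m->addr = addr; m->buf = buf; m->len = len; m->pos = 0;
    m->state = I2C_ADDR;
    m->hw_start((uint8_t)(addr << 1));  /* R/W bit = 0: write */
}

/* Called from the I2C interrupt; ack is true if the last byte was ACKed. */
void i2c_irq(i2c_master_t *m, bool ack)
{
    if (!ack) { m->hw_stop(); m->state = I2C_ERROR; return; }
    switch (m->state) {
    case I2C_ADDR:
    case I2C_DATA:
        if (m->pos < m->len) {
            m->hw_write(m->buf[m->pos++]);
            m->state = I2C_DATA;
        } else {
            m->hw_stop();
            m->state = I2C_DONE;
        }
        break;
    default:
        break;
    }
}
```

The main loop only polls `m->state` for `I2C_DONE`/`I2C_ERROR`, so nothing blocks while the transfer runs.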
| jmaja:
It seems to be like that. First a lot of error checking, then generate start etc. Interrupts for Ack, Nack, Stop, lost arbitration and bus error. It seems DMA can't be used with I2C on PSoC 4. DMA sounds like overkill anyway: I only need to write two bytes, write three bytes, then wait for the measurement to be ready, read 6 bytes and write two bytes. 13 bytes need to be transferred for each measurement, and for that the slave address needs to be given 4 times. This now takes 246 us to start the measurement and 370 us to read and put the sensor to low power (measured from SCL activity; actually a bit longer). Let's see if I have the time to try to make it faster. How much should it take with simple bit banging? I only need master mode and there will be just one slave per bus, since these sensors have a fixed slave address and I need two of them. Perhaps that was lucky, since I can run the two buses in parallel. |
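A back-of-envelope answer to "how much should it take": every byte on the bus costs 9 bit times (8 data bits plus ACK), each of the four transactions adds one address byte, and START/STOP add a little more. A rough estimate, assuming about one bit time each for START and STOP and no clock stretching (the helper name is made up for illustration):

```c
/* Rough pure-bus-time estimate for a sequence of I2C write/read
 * transactions: 9 bit times per byte (8 data + ACK), one address
 * byte per transaction, ~1 bit time each for START and STOP. */
double i2c_bus_time_us(double scl_khz, int transactions, int data_bytes)
{
    int bits = (data_bytes + transactions) * 9;  /* data + address bytes */
    double bit_us = 1000.0 / scl_khz;            /* one SCL period */
    return (bits + 2 * transactions) * bit_us;   /* + START/STOP overhead */
}
```

For the sequence described above (4 transactions, 13 data bytes) at a 400 kHz clock this gives roughly 400 us of pure bus time, so the measured 246 + 370 us includes a noticeable amount of per-transaction software overhead on top of the wire time.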
| bson:
--- Quote from: ejeffrey on February 26, 2019, 05:13:35 am ---I prefer I2C any time I don't care about speed. --- End quote --- Yup, IMO it doesn't belong anywhere speed matters. This is also why I don't do interrupt-driven I2C but just busy-wait (poll the device, poll a timer, or use a scheduler to sleep if available) in the foreground; anything that matters is then free to interrupt without worrying about nesting or other complications. This way the I2C code can also bang out lost clocks, reset the controller, or power cycle the bus devices when clock banging doesn't unwedge it (looking at you, MSP430 USCI), deal with bus hot plugging (not terribly challenging to implement, see a demo using an MSP430G2553 at ), or whatever special functionality is called for in a particular application. The reason for that particular demo was to be able to just plug in a small OLED display on an I2C connector and have debug and diagnostic info pop up. BTW, the I2C bus can also be pulled up with a JFET, BJT mirror, or other constant current source. While a JFET is not a particularly good CCS, it definitely beats pullup resistors and only adds a dual JFET. It makes the bus far less sensitive to capacitive loading, such as in the hot plug demo above (where the capacitance varies with what's plugged in). It does make TVS ESD protection mandatory. |
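The "bang out lost clocks" recovery bson mentions is the standard bus-clear procedure from the I2C specification: if a slave is stuck driving SDA low mid-byte, toggle SCL up to nine times until the slave finishes the byte it thinks it is sending and releases SDA, then generate a STOP. A sketch with GPIO access abstracted behind callbacks (all names hypothetical; pins are assumed open-drain, so "set high" means "release"):

```c
#include <stdbool.h>

/* Hypothetical open-drain GPIO access for one I2C bus. */
typedef struct {
    void (*scl_set)(bool level);   /* drive low / release SCL */
    void (*sda_set)(bool level);   /* drive low / release SDA */
    bool (*sda_read)(void);        /* sample SDA */
    void (*delay_us)(unsigned us); /* half-period delay */
} i2c_pins_t;

/* Returns true if the bus was freed; false means escalate
 * (reset the controller or power cycle the devices). */
bool i2c_bus_recover(const i2c_pins_t *p)
{
    p->sda_set(true);                    /* make sure we release SDA */
    for (int i = 0; i < 9; i++) {        /* at most 9 clocks: one byte + ACK */
        if (p->sda_read())
            break;                       /* slave released the bus */
        p->scl_set(false); p->delay_us(5);
        p->scl_set(true);  p->delay_us(5);
    }
    if (!p->sda_read())
        return false;                    /* still wedged */
    /* Generate a STOP: SDA low -> high while SCL is high. */
    p->sda_set(false); p->delay_us(5);
    p->sda_set(true);  p->delay_us(5);
    return true;
}
```

If SDA is still low after nine clocks the bus is genuinely wedged, and the next escalation is exactly what the post describes: reset the controller or power cycle the bus devices.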
| jmaja:
I found this: https://community.cypress.com/docs/DOC-15334 Pure bit banging without interrupts. At a 12 MHz system clock I could get only 90 kHz using it. Then I modified it to be as fast as I could and got 225 kHz. Testing just SetSCL; SetSDA; ClrSCL; in a for loop gave just 289 kHz, which is about what I expected, since the fastest pin toggle is said to take 7 clock cycles. The low period is 1.52 us and the high period 1.94 us. It seems to take even more than 7 cycles per write, since low + high should be just 2*7/12 ≈ 1.17 us. There's no way to make bit-banged I2C faster than the hardware peripheral, but maybe the hardware one could be used more efficiently without interrupts. |
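The cycle arithmetic in the post can be written out explicitly. Assuming each GPIO register write costs a fixed number of CPU cycles and ignoring everything else (the helper name is made up for illustration):

```c
/* Best-case SCL rate if each GPIO register write costs a fixed number
 * of CPU cycles and the loop itself were free. */
double bitbang_scl_khz(double cpu_mhz, int cycles_per_write, int writes_per_period)
{
    double period_us = (double)(writes_per_period * cycles_per_write) / cpu_mhz;
    return 1000.0 / period_us;
}
```

`bitbang_scl_khz(12, 7, 3)` gives about 571 kHz for the three-write SetSCL/SetSDA/ClrSCL loop, so the measured 289 kHz suggests the loop branch and flag handling roughly double the per-period cost over the pin writes alone.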
| jmaja:
I got the measurement working using the I2C hardware registers. Or actually a bit of a mix: I use the API for initialization and enabling, but then do the reading and writing through registers. I got lucky, since I made a mistake and it still worked: it seems the sensor datasheet requires extra commands that aren't actually needed, at least in my case. The I2C hardware is fast. No gaps, and I get 425 kHz throughout. Now I need only 65 us to start the measurement (246 us originally, or 100 us with one command less) and 224 us to read and put the sensor to sleep (370 us originally, no changes in protocol). Total savings: 146 us from the dropped command and 181 us from removing the API and interrupt overhead. My implementation has about zero error checking; adding it may make it slower. |