Author Topic: I2C automatic data rate adaption to pullups  (Read 4074 times)


Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
I2C automatic data rate adaption to pullups
« on: February 18, 2019, 12:31:21 pm »
I've been working with a Cypress PSoC BLE chip and found out by testing that it adapts the I2C data rate to the pullups. When I set 400 kHz I get 300 kHz with 20 k pullups, 330 kHz with 10 k, 345 kHz with 5 k, and 397 kHz with 470 Ω. That's without any slaves on the bus, so the timing is totally controlled by the PSoC.

How does it know which pullups I have used? Probably not from rise time, since I get 100 ns (30%->70%) even with 10 k. The I2C 400 kHz spec only requires 300 ns or better.

So it likely measures the current needed to hold SCL low, assumes the 400 pF capacitance that the I2C spec allows as a maximum, and sets the data rate based on that.

Is that common? I have never heard of such a thing before and can't find much in the documentation. It only mentions that the real data rate may differ depending on tR and tF.

Is there a way to get 400 kHz with 5-10 k pullups? Maybe the 5 k internal ones? Some setting to override the adaptation, since I won't have large capacitance - just one slave and a 25 mm long PCB track.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: I2C automatic data rate adaption to pullups
« Reply #1 on: February 19, 2019, 12:14:34 pm »
More likely it's waiting until a valid logic high is received, before beginning the next clock-high period.  So the output frequency is 1 / (t_cycle + t_rise).

Easy way to prove/disprove it's based on current sense: place a capacitor in parallel.  A capacitor has no effect on DC current, but proportionally increases risetime. :)

You can also emulate a lower value resistor with a constant current source, which draws the same current all the way up, whereas a resistor only delivers the most current at the lowest voltage.  You get a slanted rise, rather than a slow curve.  A 1mA CCS will be equivalent to about a 1.5k resistor, but only requires the current of a ~3.3k resistor.
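For a feel of the numbers, here's a quick back-of-the-envelope sketch (plain C, assuming the spec-limit 400 pF bus and a 3.3 V rail -- illustrative only, not measured on anything):

#include <math.h>
#include <stdio.h>

/* Compare 30%->70% rise times: resistor pullup (exponential) vs.
 * 1 mA constant-current pullup (linear ramp).  Assumes a 400 pF bus
 * (the I2C spec maximum) and a 3.3 V supply. */
int main(void)
{
    const double C = 400e-12, Vdd = 3.3;

    const double R = 1500.0;                       /* 1.5k resistor       */
    double t_res = R * C * log(0.7 / 0.3);         /* exponential rise    */

    const double I = 1e-3;                         /* 1 mA current source */
    double t_ccs = C * (0.7 - 0.3) * Vdd / I;      /* linear ramp         */

    printf("1.5k resistor: %.0f ns, peak current %.1f mA\n",
           t_res * 1e9, Vdd / R * 1e3);
    printf("1 mA CCS     : %.0f ns, constant 1.0 mA\n", t_ccs * 1e9);
    return 0;
}

Both come out around 500 ns on a full 400 pF bus, which is why the two are roughly interchangeable; with your few-pF bus the numbers scale down proportionally.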

To get higher clock speeds, it would seem the better option is to increase the internal clock rate, however you would do that (prescaler from a master clock? dedicated oscillator/timer section?).  This begins to violate the I2C spec, though, and you should consider the high speed mode if supported by all devices, or something with higher overall speed, like SPI, asynchronous serial, or parallel.

Tim
« Last Edit: February 19, 2019, 12:21:40 pm by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #2 on: February 19, 2019, 12:52:07 pm »
I haven't found anything helpful yet, but I did find that the ESP32 also suffers from the same problem/feature: https://www.esp32.com/viewtopic.php?f=14&t=1350

The ESP32 seems to measure the high time from the actual logic level crossing. That doesn't seem to be the case with the Cypress, since then I should get very close to 400 kHz with a 100 ns rise time, which is already clearly inside the I2C spec.

Or maybe it is? The period (with the internal 5 k) is 2.90 us (345 kHz). If I measure the low time very close to 0 V, it is 1.38 us, so clearly less than half the period (1.45 us) but still more than half the period at 400 kHz (1.25 us). The high time (measured from 0 V) is 1.52 us. So apparently driving low is asymmetric, and it actually needs to be, since the I2C spec requires a 1.3 us minimum low time and only a 0.6 us minimum high time. At 1.38 us before the very sharp falling edge the voltage is already above 70%.

Even measuring at 90%, the high time is 1.3 us, so more than double the 0.6 us spec minimum. So either the chip is extremely conservative with the specs or it is measuring something other than a voltage level change.
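Just to sanity check against the spec numbers (Fast-mode minimums, worst-case edges):

#include <stdio.h>

/* I2C Fast-mode limits: t_LOW >= 1.3 us, t_HIGH >= 0.6 us,
 * rise/fall <= 0.3 us each.  Their sum is already the full 2.5 us
 * period of 400 kHz, so any extra margin shows up as a lower clock. */
int main(void)
{
    const double t_low = 1.3, t_high = 0.6, t_r = 0.3, t_f = 0.3; /* us */
    double period = t_low + t_high + t_r + t_f;

    printf("minimum in-spec period: %.2f us -> %.0f kHz\n",
           period, 1000.0 / period);
    return 0;
}

So with worst-case edges there is zero slack at 400 kHz; my measured 1.52 us high time is what eats the difference.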

With 0.47 k pullup (SCL only) the period is 2.52 us, low time 1.37 us (measured at 0 V) and high time 1.15 us. So it seems to just reduce the high time.

Then I tried to increase the system and SCB clocks to get more oversampling. With 30x oversampling instead of the 20x used earlier I got 359 kHz with 5 k and 352 kHz with 0.47 k. So the effect of the pullup became smaller, but even with 0.47 k the data rate is 10% below what it is set at.

Distributing the 20 oversampling periods differently (originally 11 low and 9 high) I can adjust the asymmetry, but the total period stays the same.

Playing with the clock frequencies and the oversampling setup I can get e.g. 424 kHz actual with a 1.26 us low time (1.31 us to 30%) and a 1.1 us high time (0.93 us to 70% and 0.80 us to 90%). Would that work reliably with a slave only capable of 400 kHz? The data rate actually set is 500 kHz. With 0.47 k pullups I get 498 kHz with 1.25 us low and 0.77 us high. Then the low time would be a bit too short, since it's only 1.26 us measured to the 30% crossing.

I didn't have a suitable capacitor at hand. I could only find a 1 nF through-hole capacitor, and it stopped the I2C line permanently (needed a reboot; probably resetting the I2C block would have worked as well). So it certainly notices capacitance.

I tried the slave at 800 kHz (about 0.6 us high and low time, actual setting 1000 kHz). It still worked OK, so I guess it is safe to use at 500 kHz?
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11903
  • Country: us
    • Personal site
Re: I2C automatic data rate adaption to pullups
« Reply #3 on: February 19, 2019, 05:20:42 pm »
All I2C controllers I've seen rely on releasing the line and waiting for the logic level to go high. So bus capacitance and pull-up resistors affect the period.

How do you "set 400 kHz"? Typically you set some sort of a prescaler/baudrate register, which is not directly related to the actual I2C clock frequency. May be whoever wrote the library used different R/C values for calibration.

I find it is easier to adjust the baudrate divider until I hit the necessary frequency. And if the bus structure does not change at run time, that is good enough.
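Something like this, where the register layout and the formula are just placeholders (every vendor's part is different) -- compute a starting point, then trim against a scope:

#include <stdint.h>

/* Placeholder divider calculation -- the oversampling factor and the
 * register it ends up in depend entirely on the specific peripheral.
 * The real SCL will come out lower once rise time is added, so measure
 * on the bus and nudge the target until you hit the frequency you want. */
#define PERIPH_CLK_HZ  48000000u
#define OVERSAMPLE     16u            /* internal clocks per SCL period */

static uint32_t i2c_divider(uint32_t target_scl_hz)
{
    return PERIPH_CLK_HZ / (OVERSAMPLE * target_scl_hz);
}

In practice that means asking for something like 430-450 kHz and checking what actually comes out on SCL.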
Alex
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #4 on: February 19, 2019, 06:06:49 pm »
OK, so it's common to adjust the clock rate based on rise time. I've mostly used SPI so far, so I didn't know about this, and it doesn't seem to be easy to find anything actually written about it. I guess it is somewhat related to clock stretching, since the master needs to monitor the SCL line anyway.

It's a hardware I2C so the actual clock rate must be defined by hardware. Software just inputs register values and the byte to be sent/read.

I don't quite like the idea that data rate and thus timing depends on what the bus capacitance and internal pullup value happen to be. Then the whole timing of the software gets messed up. Sounds stupid to make the data rate slower even with perfectly OK rise time.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11903
  • Country: us
    • Personal site
Re: I2C automatic data rate adaption to pullups
« Reply #5 on: February 19, 2019, 06:18:20 pm »
OK, so it's common to adjust the clock rate based on rise time.
It is actually the only way to implement it, given the clock stretching detection that is necessary anyway. Technically clock stretching can only happen at well-defined points, but if you are implementing that logic, it makes sense to have it everywhere.

It is also expected that the rise time will be slow, so this is a natural protection against that.

It's a hardware I2C so the actual clock rate must be defined by hardware. Software just inputs register values and the byte to be sent/read.
But you still set the ideal clock rate.

Atmel/Microchip documents actually define how the real clock rate relates to the set clock rate and bus parameters.
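From memory (so double-check the datasheet), for the SAM D2x SERCOM it is roughly:

#include <stdint.h>

/* SAM D2x SERCOM I2C master, as I remember it from the datasheet:
 *   f_SCL ~= f_GCLK / (10 + 2*BAUD + f_GCLK * t_rise)
 * so the rise time term directly lowers the real clock. */
static uint32_t i2c_real_scl_hz(uint32_t f_gclk, uint32_t baud, double t_rise_s)
{
    return (uint32_t)(f_gclk / (10.0 + 2.0 * baud + (double)f_gclk * t_rise_s));
}

/* e.g. i2c_real_scl_hz(48000000, 53, 100e-9) gives about 397 kHz,
 * dropping to about 368 kHz at the 300 ns Fast-mode rise time limit. */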

I don't quite like the idea that data rate and thus timing depends on what the bus capacitance and internal pullup value happen to be. Then the whole timing of the software gets messed up. Sounds stupid to make the data rate slower even with perfectly OK rise time.
But it is not usually a result of an artificial delay. If you somehow see different results with the same rise time, it would be strange.
Alex
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #6 on: February 19, 2019, 06:52:40 pm »
It is somewhat covered here: https://www.i2c-bus.org/speed/

If the clock rate depends on rise time, it will depend on the internal pullups, which are far from accurate, and on the bus capacitance, which may change when you change the PCB layout or even the PCB manufacturer.

I just can't understand why 400 kHz won't give 400 kHz as long as the rise time is OK (within the I2C spec). Slower than what is set should be a special case, not something that will always happen unless you have zero rise time.

Good to know that Atmel/Microchip document that. I haven't found anything about it from Cypress. I have mostly used Atmel chips, but their BLE chip (ATSAMB11) was a total disaster. I wasted several months and never got it working in low power mode (ULP) without it hanging all the time.
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #7 on: February 20, 2019, 03:05:35 pm »
I'm wondering whether the rise time I measured was mainly caused by the oscilloscope probe. 100 ns (30->70%) with 5 k pullups means about 20 pF capacitance. I just measured 15 pF for my probe at 10x and 100 pF at 1x (measured with a DE-5000 LCR meter).

So will the real data rate be something else when I'm not probing? At least changing the probe from 10x to 1x made a huge change (427 -> 373 kHz). Then I tried measuring through a 10 k resistor. Obviously I don't get the waveform anymore, but I could see a 427 kHz pattern with 1x, and 10x also showed 427 kHz.

With 5 k and 20 pF the time constant is 100 ns. Without the 10x probe it should be only 5 pF and a 25 ns time constant. With the 1x probe it's 105 pF and 525 ns. Then with the 470 Ω pullup I should get 50 ns with 1x and 10 ns with 10x. I get 495 kHz with the 470 Ω pullup with both 1x and 10x.
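The RC arithmetic behind those numbers, just to have it in one place (20 pF and 105 pF are my probe-plus-bus estimates from above):

#include <math.h>
#include <stdio.h>

/* 30%->70% rise time of an RC pullup: t = R*C*ln(0.7/0.3) ~= 0.85*R*C */
static double rise_30_70_ns(double r_ohm, double c_pf)
{
    return r_ohm * c_pf * 1e-3 * log(0.7 / 0.3);  /* ohm * pF -> ns */
}

int main(void)
{
    printf("5k,   20 pF (10x probe): %.0f ns\n", rise_30_70_ns(5000, 20));
    printf("5k,  105 pF (1x probe) : %.0f ns\n", rise_30_70_ns(5000, 105));
    printf("470,  20 pF (10x probe): %.0f ns\n", rise_30_70_ns(470, 20));
    printf("470, 105 pF (1x probe) : %.0f ns\n", rise_30_70_ns(470, 105));
    return 0;
}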

A 100 ns time constant drops the rate from 495 to 427 kHz, but 50 ns has no effect?
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 4032
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #8 on: February 20, 2019, 04:57:18 pm »
I just can't understand why 400 kHz won't give 400 kHz as long as the rise time is OK (within the I2C spec). Slower than what is set should be a special case, not something that will always happen unless you have zero rise time.

I think you are overthinking this.  If your application really depends on I2C operating at exactly 400 kHz, then you are probably doing something wrong.  Chips attempt to make the communication reliable while operating near the target speed, and that is what you are seeing.
 
The following users thanked this post: Siwastaja

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #9 on: February 20, 2019, 07:44:49 pm »
I think you are overthinking this.  If your application really depends on I2C operating at exactly 400 kHz, then you are probably doing something wrong.  Chips attempt to make the communication reliable while operating near the target speed, and that is what you are seeing.

Definitely I am. I have a bad habit of wanting to understand things. There was a time frame I needed to fit the communication into, but it didn't fit even at a bit over 400 kHz, due to rather long pauses between I2C packets. These pauses were caused by the MCU, probably an interrupt. The pause depends on the CPU MHz, but increasing the MHz just increased total current consumption despite more deep-sleep time. I found a way to get more time for the I2C and it doesn't seem to waste too much current.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: I2C automatic data rate adaption to pullups
« Reply #10 on: February 21, 2019, 12:33:46 pm »
Ah yes, poorly interleaved interrupts and peripheral operations can chew up arbitrary amounts of time.  You only get to save time, down to the minimum critical path time, when everything is perfectly overlapped. Which is almost never going to happen.  But between those extremes, yes, lots of low-hanging fruit... or not.  :scared:

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: Heartbreaker

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #11 on: February 25, 2019, 07:31:22 am »
Ah yes, poorly interleaved interrupts and peripheral operations can chew up arbitrary amounts of time.  You only get to save time, down to the minimum critical path time, when everything is perfectly overlapped. Which is almost never going to happen.  But between those extremes, yes, lots of low-hanging fruit... or not.  :scared:

Tim

It's not about an interrupt happening randomly during I2C. It must be an interrupt caused by I2C events. The I2C component is generated by the Cypress IDE and it uses interrupts at the end of each byte. It seems to "waste" around 240 clock cycles between the slave address and the data after it. Isn't that a lot? E.g. at a 12 MHz system clock and a 427 kHz real-life data rate there is first a burst for the slave address, which takes about 22 us. But then SCL stays low and SDA high for 22 us before the data is sent or read. With a higher system clock this gap is smaller, so it's not clock stretching caused by the slave.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: I2C automatic data rate adaption to pullups
« Reply #12 on: February 25, 2019, 01:35:10 pm »
Ahh even worse then, it's synchronous. May be worth digging into what the library / autogenerated code is doing and simplifying it.  For example, if you know what data to send, and expect to receive, after the first byte, precalculate that before the next interrupt arrives (while the first is still transmitting), then strip the second interrupt down as thin as possible.  If they're the same interrupt, it may be hard to avoid overhead while branching to different paths -- compilers tend to store everything they need at the top of the function, regardless of whether it's used that time or not -- at which point an assembler solution may be necessary.  Unless you're heavily invested in making this project go fast, try to avoid assembler.  :)  (Do take the time to learn the instruction set, though -- inspect the compiler's output and make sure it's doing what you intend it to do.)

On AVR for example, it's easy enough to do a ~60 cycle interrupt, with GCC.  I don't know what your SoC is capable of but I'd be surprised if it needed quite that many cycles just for starting the interrupt.
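As a sketch of what I mean (generic C; the two macros stand in for whatever your peripheral actually provides -- they are not real PSoC registers), the per-byte handler can be boiled down to little more than an index and a register write:

#include <stdint.h>

/* Stripped-down "byte done" ISR: everything is precomputed into tx_buf
 * before the transfer starts, so the handler only feeds the next byte
 * and counts down.  The macros are placeholders for the real hardware. */
#define I2C_WRITE_DATA(b)   ((void)(b))   /* placeholder: write TX data register */
#define I2C_SEND_STOP()     ((void)0)     /* placeholder: issue STOP condition   */

static volatile uint8_t tx_buf[8];
static volatile uint8_t tx_len, tx_pos;

void i2c_byte_done_isr(void)
{
    if (tx_pos < tx_len)
        I2C_WRITE_DATA(tx_buf[tx_pos++]);
    else
        I2C_SEND_STOP();
}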

Which, historical note, the 8086 needed almost that much (hundreds) for a complete interrupt cycle -- besides being slow as molasses to begin with (lots of cycles per instruction -- microcoded architecture), an interrupt cycle had to perform about eight words (16 bits/word) of memory access, plus fetching the instructions themselves.  The interrupt itself took something like 120 cycles (while a FAR CALL took merely 80).  Even worse if you had an 8088, the 8-bit bus version of the 8086 (so it needed twice as many bus cycles besides, and your average system had several cycles of wait state to access DRAM).

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9963
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #13 on: February 26, 2019, 01:47:16 am »
Which, historical note, the 8086 needed almost that much (hundreds) for a complete interrupt cycle -- besides being slow as molasses to begin with (lots of cycles per instruction -- microcoded architecture), an interrupt cycle had to perform about eight words (16 bits/word) of memory access, plus fetching the instructions themselves.  The interrupt itself took something like 120 cycles (while a FAR CALL took merely 80).  Even worse if you had an 8088, the 8-bit bus version of the 8086 (so it needed twice as many bus cycles besides, and your average system had several cycles of wait state to access DRAM).

Tim

In my view, the 8086 hardware should take responsibility for vectoring to the interrupt routine and probably saving the return address on the stack.  The rest of the bloatware is on the compiler writers and the programmers.  The 8086 lacks registers so there will necessarily be a prolog and epilog generated by the C compiler but the clever assembly language programmer may be able to short-circuit a lot of saves and restores.

It's nice when the hardware provides a duplicate set of registers for system mode.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9963
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #14 on: February 26, 2019, 02:01:26 am »
Back to the main topic: I almost never use I2C. It is truly a PITA to get working. Mostly I use SPI. Sure, I need separate CS pins (or an IO expander) but I can get some real speed out of the protocol. I'm pretty sure my LPC1768 mbed is receiving SPI at 12.5 MHz on an interrupt-driven basis including queuing. Do check carefully when creating an SPI slave. Timing is everything.

 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 4032
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #15 on: February 26, 2019, 05:13:35 am »
Sure SPI is great if you need speed.  That isn't the point of I2C.  If you are monitoring bus voltages or temperature you don't need speed, but hanging a dozen devices off the same two wires is pretty convenient. 

I prefer I2C any time I don't care about speed.  Not just because of the reduced wire count, but because I find it is usually much closer to "just works" than SPI.  SPI can't even agree on the clock phase or polarity so even though 99% of devices work the same way, you have to break out your decoder ring to translate the datasheet descriptions (which always use slightly non-standard language to describe it) to make sure, and then figure out how to program your SPI master to operate the right way.  I2C uses open drain so there is less faffing about with signal levels and translators.  I2C uses standardized addressing and mostly standardized register read/write formats, so your software APIs can be higher level and still be expected to work.  Every SPI device invents their own protocol.  I2C has well defined framing, but even though SPI has an obvious message frame controlled by chip select, there are some special-snowflake SPI devices that don't use the CS line for framing.  I haven't had cause to use it, but I2C's SMBus guise supports interrupts with an automatic priority arbitration.

My biggest problem, and my only real problem, with I2C is that it can be harder to reset the bus into a known state. With SPI, de-asserting all chip-select lines is usually sufficient (barring the aforementioned special-snowflake devices). With I2C, if the master is interrupted during a transfer while the slave is asserting SDA, you can't initiate a transfer until the slave finishes. The master needs to generate clock pulses until the slave releases the data line. SMBus adds a timeout, but if you might have non-SMBus devices you still need to do it by clocking. This is IMHO a fairly minor pain point in exchange for all the other conveniences.
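The recovery itself is just bit-banged on the pins before handing them back to the peripheral; roughly like this, with placeholder GPIO helpers (names are made up, the pins are assumed open-drain with pullups):

#include <stdbool.h>

/* Bit-banged I2C bus recovery: clock SCL until the slave releases SDA,
 * then generate a STOP.  The gpio_*() and delay_us() helpers are
 * placeholders for whatever the target provides. */
extern bool gpio_read_sda(void);
extern void gpio_drive_scl_low(void);
extern void gpio_release_scl(void);
extern void gpio_drive_sda_low(void);
extern void gpio_release_sda(void);
extern void delay_us(unsigned us);

void i2c_bus_recover(void)
{
    for (int i = 0; i < 9 && !gpio_read_sda(); i++) {  /* at most 9 clocks */
        gpio_drive_scl_low();
        delay_us(5);
        gpio_release_scl();
        delay_us(5);
    }
    gpio_drive_sda_low();      /* SDA low while SCL is high ... */
    delay_us(5);
    gpio_release_sda();        /* ... then SDA high = STOP condition */
}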


SPI works better over long distances or board-to-board because it isn't open drain, but I don't award this to SPI since if that is an actual requirement I would rather use RS232 or RS485 than SPI.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9327
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #16 on: February 26, 2019, 06:55:24 am »
there are some special-snowflake SPI devices that don't use the CS line for framing.

Indeed - even the very newest incarnation of the STM32 SPI peripheral, which has claimed "hardware nSS" for over a decade (and they have gone through numerous iterations, so they have had plenty of chances to fix the damn thing), still doesn't actually support using the CS line for framing in slave mode, which is ridiculously funny as it would be trivial for them to implement. What you need to do is to program a general-purpose interrupt on the CS pin, and then go there and use the power/clock control registers to issue a full SPI peripheral reset, since it cannot even be reliably controlled using its own registers.
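On an F2/F4 it ends up looking roughly like this (a sketch only, assuming SPI1 on APB2 and NSS wired to an EXTI4-capable pin; adjust for your board):

#include "stm32f2xx.h"   /* CMSIS device header; adjust for your part */

/* EXTI handler on the NSS pin: when the master deasserts CS, pulse the
 * SPI block reset through RCC so the shift register and flags start
 * clean for the next frame, then reconfigure the peripheral. */
void EXTI4_IRQHandler(void)
{
    EXTI->PR = EXTI_PR_PR4;                    /* clear the pending flag     */
    RCC->APB2RSTR |=  RCC_APB2RSTR_SPI1RST;    /* pulse the peripheral reset */
    RCC->APB2RSTR &= ~RCC_APB2RSTR_SPI1RST;
    /* ... re-init SPI1 registers and re-arm DMA for the next frame ... */
}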

This is made especially awkward since the whole grand idea behind using an actual electrical line for framing is the best thing since sliced bread - not only for speed and synchronization, but also for robustness and code simplicity, for all parties involved. It's the main selling point for SPI. The only downside really is having to route an extra signal, but it's definitely worth it. And yet some designers (at ST, for example) are utterly stupid enough not to see this, and completely misuse the SPI, losing its #1 benefit. But apparently, there are people out there who don't know what "framing" means, why it is needed, and how difficult it is to live without - these are most likely software folks who are so accustomed to creating complex, hard-to-debug state machines that they don't know sometimes there is an easier way to do it that fundamentally avoids most of the protocol complexity they have learned to live with.

OTOH, I2C implementations tend to be equally (or even more) broken, and microcontroller I2C implementations have a notorious history of being massively bloated, hard to configure and providing little benefit over bit-banging. This has recently got better: for example, the newest I2C implementations on the newest STM32 devices can do actual DMA transactions in the background (woohoo! :clap:), without you babysitting every signal level change. This was impossible just a few years ago; you needed to poll and bit-bang the control registers with such exact timing that you could have bit-banged the actual SDA/SCL instead. So while higher-level, easy-to-use I2C libraries do exist (and due to the standardization of how the protocol is used, they even tend to work!), they also tend to be blocking, slow calls, to the extent that you just cannot use them in an actual product that needs to multitask something else than just communicating with I2C devices, which is almost always the case. So now you are left with trying to utilize the (usually broken or difficult-to-configure) HW I2C peripheral to the full extent, writing an interrupt-based I2C implementation that works with your specific device and application.

SPI, on the other hand, tends to be much easier to get working interrupt- or DMA-based, with less bloat and fewer states. For example, a specific interrupt-driven I2C implementation which just efficiently reads sensors on an STM32F205 was around 1000 lines and took a full working week to develop; a similar one for SPI was below 100 lines and developed in half a day.

I do agree that if you are expecting speed, synchronous operation and timing predictability out of I2C, you are using the wrong interface. Many sensor devices which could be used in more timing-critical products have selectable I2C vs. SPI interfaces, and you are supposed to use them in SPI mode if you need these features.

I2C's great when all you need is a bunch of easy-to-glue-on sensors, and you can afford to control them with inefficient blocking calls, every now and then, in the non-timing-critical main thread. For everything else, it starts to get really messy, really quickly. If you feel like you need to tune the I2C bus frequency to exactly 400 kHz, you are probably in this "don't use I2C" territory.
« Last Edit: February 26, 2019, 07:17:28 am by Siwastaja »
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #17 on: February 26, 2019, 12:12:14 pm »
I don't think I have actually used I2C before. I have used SPI a lot and would have preferred it, but the most suitable sensor had only I2C. I didn't realize all the limitations of I2C. It seems to need a very high CPU clock speed relative to the data rate. At 12 MHz I get only 500 kHz, which becomes 427 kHz in reality - or actually much less when time is wasted on the slave address and the gaps in between. Not so good when you try to minimize power consumption by deep sleeping as much as possible.

The API seems to use an ISR for handling everything that happens on I2C. It's 850 lines, although about half is not used for master mode and is taken out with #ifdefs.

I usually don't use any API, but since this is a Bluetooth chip and new to me, I have tried to use it. Just hate it when you can't understand what the API actually does due to several layers of calls.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 9327
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #18 on: February 26, 2019, 01:31:47 pm »
If you need performance out of I2C and the sensor choice is inevitable, you should put some thought into how to write your own I2C access layer, written purely for your specific sensor, stripping away the abstraction layers. It's basically an interrupt-driven state machine, but it gets complicated if your MCU's I2C implementation is of the sucky variant. If it is, then the sequencing goes like: please generate a start condition, and give me an interrupt when the start has been generated... Then in that high-priority interrupt, write the address, turn on the ACK interrupt, wait for the ACK interrupt, and so on, and so on.
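A skeleton of that kind of state machine, with all the peripheral-specific calls left as placeholders (the names and addresses are made up), looks something like this:

#include <stdint.h>

/* Sensor-specific, interrupt-driven I2C master read.  A kick-off routine
 * elsewhere issues the START and sets state = SENT_START; after that the
 * ISR does one tiny step per interrupt.  issue_*(), read_data_reg() and
 * the address defines are placeholders for the real peripheral. */
#define SENSOR_ADDR_W   0x80u   /* hypothetical 8-bit write address */
#define SENSOR_ADDR_R   0x81u
#define SENSOR_DATA_REG 0x00u

extern void issue_address(uint8_t addr);
extern void issue_write(uint8_t byte);
extern void issue_restart_read(uint8_t addr, uint8_t count);
extern void issue_stop(void);
extern uint8_t read_data_reg(void);

enum i2c_state { IDLE, SENT_START, SENT_ADDR, SENT_REG, READING, DONE };

static enum i2c_state state = IDLE;
static uint8_t rx_buf[6];
static uint8_t rx_pos;

void i2c_event_isr(void)   /* fires on START done / ACK / byte done */
{
    switch (state) {
    case SENT_START: issue_address(SENSOR_ADDR_W); state = SENT_ADDR; break;
    case SENT_ADDR:  issue_write(SENSOR_DATA_REG); state = SENT_REG;  break;
    case SENT_REG:   issue_restart_read(SENSOR_ADDR_R, sizeof rx_buf);
                     rx_pos = 0;                   state = READING;   break;
    case READING:    rx_buf[rx_pos++] = read_data_reg();
                     if (rx_pos == sizeof rx_buf) { issue_stop(); state = DONE; }
                     break;
    default: break;
    }
}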

BTDT - good luck  :).

It's so much easier on a microcontroller with a sane, modern I2C peripheral, which can handle the full transaction using DMA with one initial configuration per transfer. But beware: the fact that the I2C peripheral supports DMA on paper doesn't mean it's actually usable. It may be that you need to handle something like 5 timing-critical interrupts before you get to the point of utilizing the DMA to do a whopping 2-byte transfer.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9963
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #19 on: February 26, 2019, 03:16:13 pm »
My last use of SPI was fast, as I mentioned above, but more important, the amount of data transferred was substantial.

Just for fun...

Back in the old days, circa '70, the IBM 1130 computer had a CalComp 1627 drum plotter. 

http://ibm1130.org/hw/io/

It took one transfer for every 0.01" step it made so if there was a lot of ink going down onto the paper, there were a LOT of steps.  Actually, even the white space took steps.  Hundreds of thousands of steps, maybe.  Well, I have an FPGA implementation of the 1130 and I wanted to use my LaserJet as a plotter.  So, I just used the LPC1768 to grab the plotter stream via SPI, convert the step-by-step commands to sentences and pass the stuff to the printer via the LAN.

See attached...  I have no idea how many steps there are but it takes about 268 lines of code to get it done.  This problem is from my grandson's high school freshman algebra homework.  Except for the DA/DX bit which should be in lower case except that the plotter library (or the 1130) didn't support lower case.

And that's the way plots were made back in the early days.  Every little detail was left to the programmer and FORTRAN ruled!  Still does...
« Last Edit: February 26, 2019, 03:22:06 pm by rstofer »
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 4032
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #20 on: February 26, 2019, 04:40:33 pm »
OTOH, I2C implementations tend to be equally (or even more) broken, and microcontroller I2C implementations have a notorious history of being massively bloated, hard to configure and providing little benefit over bit-banging. This has recently got better: for example, the newest I2C implementations on the newest STM32 devices can do actual DMA transactions in the background (woohoo! :clap:), without you babysitting every signal level change. This was impossible just a few years ago; you needed to poll and bit-bang the control registers with such exact timing that you could have bit-banged the actual SDA/SCL instead. So while higher-level, easy-to-use I2C libraries do exist (and due to the standardization of how the protocol is used, they even tend to work!), they also tend to be blocking, slow calls, to the extent that you just cannot use them in an actual product that needs to multitask something else than just communicating with I2C devices, which is almost always the case. So now you are left with trying to utilize the (usually broken or difficult-to-configure) HW I2C peripheral to the full extent, writing an interrupt-based I2C implementation that works with your specific device and application.

Good to know.  The I2C applications I have used tend to be fine with occasional blocking calls from the main thread, but I assumed that the non-blocking DMA versions mostly worked.
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #21 on: February 26, 2019, 05:52:12 pm »
It seems to be like that. First a lot of error checking, then generate start etc. Interrupts for ACK, NACK, stop, lost arbitration and bus error. It seems DMA can't be used with I2C on the PSoC 4. DMA sounds like overkill anyway. I only need to write two bytes, write three bytes, then wait for the measurement to be ready, read 6 bytes and write two bytes. 13 bytes need to be transferred for each measurement, and for that the slave address needs to be sent 4 times. This now takes 246 us to start the measurement and 370 us to read and put the sensor to low power (measured from SCL activity, so actually a bit longer).

Let's see if I have the time to try to make it faster. How much should it take with simple bit banging? I only need master mode and there will be just one slave per bus, since these sensors have a fixed slave address and I need two of them. Perhaps that was lucky, since I can run the two buses in parallel.
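A quick lower-bound check at the measured 427 kHz clock (every byte on the wire, data or address, costs 9 SCL clocks; start/stop edges ignored):

#include <stdio.h>

/* Rough lower bound for my transaction: two writes to start the
 * measurement, then a read plus a write to fetch and put to sleep.
 * Each phase also carries its slave address bytes. */
int main(void)
{
    const double f_scl = 427e3;
    const int start_bytes = 2 + (2 + 3);   /* 2 address bytes + 2+3 data bytes      */
    const int read_bytes  = 2 + (6 + 2);   /* 2 address bytes + 6 read + 2 written  */

    printf("start measurement: %.0f us minimum\n", start_bytes * 9 / f_scl * 1e6);
    printf("read + sleep     : %.0f us minimum\n", read_bytes  * 9 / f_scl * 1e6);
    return 0;
}

That gives roughly 148 us and 211 us, so the 246 us and 370 us I see now mean about 100 us and 160 us lost in the inter-byte gaps. Bit banging would have to beat that overhead while still keeping the clock up.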
 

Offline bson

  • Supporter
  • ****
  • Posts: 2497
  • Country: us
Re: I2C automatic data rate adaption to pullups
« Reply #22 on: February 26, 2019, 11:57:38 pm »
I prefer I2C any time I don't care about speed.
Yup, IMO it doesn't belong anywhere where speed matters.  This is also why I don't do interrupt-driven I2C but just busy wait (poll device, poll timer, or use a scheduler to sleep if available) in the foreground; anything that matters is then free to interrupt without worrying about nesting or other complications.  This way the I2C code can also bang out lost clocks, reset the controller or power cycle the bus devices when clock banging doesn't unwedge it (looking at you, MSP430 USCI), deal with bus hot plugging (not terribly challenging to implement, see a demo using a MSP430G2553 at ), or whatever special functionality is called for in a particular application.  The reason for that particular demo was to be able to just plug in a small OLED display on an I2C connector and have debug and diagnostic info pop up.

BTW, the I2C bus can also be pulled up with a JFET, BJT mirror, or other constant current source.  While a JFET is not a particularly good CCS it definitely beats pullup resistors and only adds a dual JFET.  Makes the bus far less sensitive to capacitive loading, such as in the hot plug demo above (where the capacitance varies with what's plugged in).  It does make TVS ESD protection mandatory.
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #23 on: February 27, 2019, 01:28:53 pm »
I found this: https://community.cypress.com/docs/DOC-15334

Pure bit banging without interrupts. At a 12 MHz system clock I could get only 90 kHz using it. Then I modified it to be as fast as I could and got 225 kHz. Testing just

SetSCL;   /* release SCL (pullup takes it high) */
SetSDA;   /* release SDA */
ClrSCL;   /* drive SCL low */

in a for loop gave just 289 kHz. Which is actually about what's expected, since the fastest pin toggle is said to take 7 clock cycles. The low period is 1.52 us and the high period 1.94 us. It seems to take even more than 7 cycles, since low + high should be just 2*7/12 = 1.2 us.

There's no way of making bit-banged I2C faster than the hardware one. But maybe the hardware one could be used more efficiently, without interrupts.
 

Offline jmajaTopic starter

  • Frequent Contributor
  • **
  • Posts: 296
  • Country: fi
Re: I2C automatic data rate adaption to pullups
« Reply #24 on: February 28, 2019, 08:44:34 am »
I made the measuring work using the I2C hardware registers. Or actually a bit of a mix: I use the API for initialization and enabling, but then do the reading and writing with registers. I got lucky, since I made a mistake and it still worked. It seems the sensor datasheet asks for extra commands that aren't actually needed, at least in my case. The I2C hardware is fast: no gaps, and I get 425 kHz throughout.

Now I need only 65 us to start the measurement (246 us originally, but with one command less 100 us) and 224 us to read and put the sensor to sleep (370 us originally, no changes in protocol). Total saving: 146 us due to the dropped command and 181 us due to taking away the API and interrupt overhead.

My implementation has about zero error checking; maybe it will become slower once that's added.
 

