I2C automatic data rate adaption to pullups
ejeffrey:
Sure, SPI is great if you need speed, but that isn't the point of I2C.  If you are monitoring bus voltages or temperatures you don't need speed, and hanging a dozen devices off the same two wires is pretty convenient.

I prefer I2C any time I don't care about speed.  Not just because of the reduced wire count, but because I find it is usually much closer to "just works" than SPI.  SPI can't even agree on clock phase or polarity: even though 99% of devices work the same way, you have to break out your decoder ring to translate the datasheet's description (which always uses slightly non-standard language) to make sure, and then figure out how to program your SPI master to match.  I2C uses open-drain signaling, so there is less faffing about with signal levels and translators.  I2C uses standardized addressing and mostly standardized register read/write formats, so your software APIs can be higher level and still be expected to work, whereas every SPI device invents its own protocol.  I2C has well-defined framing; SPI has an obvious message frame controlled by chip select, yet there are some special-snowflake SPI devices that don't use the CS line for framing.  I haven't had cause to use it, but I2C in its SMBus guise even supports interrupts with automatic priority arbitration.

My biggest (and only real) problem with I2C is that it can be harder to reset the bus into a known state.  With SPI, de-asserting all chip-select lines is usually sufficient (barring the aforementioned special-snowflake devices).  With I2C, if the master is interrupted mid-transfer while the slave is driving SDA, you can't initiate a new transfer until the slave finishes; the master needs to generate clock pulses until the slave releases the data line.  SMBus adds a timeout, but if you might have non-SMBus devices on the bus you still need to recover by clocking.  This is IMHO a fairly minor pain point in exchange for all the other conveniences.


SPI works better over long distances or board-to-board because it isn't open drain, but I don't count that in SPI's favor: if that is an actual requirement, I would rather use RS-232 or RS-485 than SPI.
Siwastaja:

--- Quote from: ejeffrey on February 26, 2019, 05:13:35 am ---there are some special-snowflake SPI devices that don't use the CS line for framing.

--- End quote ---

Indeed.  Even the very newest incarnation of the STM32 SPI peripheral, which has claimed "hardware nSS" support for over a decade across numerous iterations (so they have had plenty of chances to fix the damn thing), still doesn't actually use the CS line for framing in slave mode, which is ridiculously funny as it would be trivial for them to implement.  What you need to do instead is program a general-purpose interrupt on the CS pin, and in that handler use the power/clock control registers to issue a full SPI peripheral reset, since the peripheral cannot even be reliably controlled through its own registers.

This is made especially awkward because the whole grand idea of using an actual electrical line for framing is the best thing since sliced bread: not only for speed and synchronization, but also for robustness and code simplicity, for all parties involved.  It's the main selling point of SPI.  The only real downside is having to route an extra signal, but it's definitely worth it.  And yet some designers (at ST, for example) are utterly stupid enough not to see this, and completely misuse SPI, losing its #1 benefit.  Apparently there are people out there who don't know what "framing" means, why it is needed, and how difficult it is to live without.  These are most likely software folks so accustomed to building complex, hard-to-debug state machines that they don't realize there is sometimes an easier way that fundamentally avoids most of the protocol complexity they have learned to live with.

OTOH, I2C implementations tend to be equally (or even more) broken, and microcontroller I2C peripherals have a notorious history of being massively bloated, hard to configure, and providing little benefit over bit-banging.  This has recently gotten better: for example, the newest I2C peripherals on recent STM32 devices can run complete DMA transactions in the background (woohoo! :clap:) without you babysitting every signal-level change.  Just a few years ago this was impossible; you had to poll and bit-bang the control registers with such precise timing that you might as well have bit-banged the actual SDA/SCL instead.  Higher-level, easy-to-use I2C libraries do exist (and thanks to the standardized way the protocol is used, they even tend to work!), but they also tend to be slow, blocking calls, to the point that you just cannot use them in a real product that needs to multitask anything besides talking to I2C devices, which is almost always the case.  So you are left trying to use the (usually broken or hard-to-configure) hardware I2C peripheral to its full extent, writing an interrupt-based I2C implementation that works with your specific device and application.

SPI, on the other hand, tends to be much easier to get working interrupt- or DMA-based, with less bloat and fewer states.  For example, one interrupt-driven I2C implementation that just efficiently reads sensors on an STM32F205 came to around 1000 lines and took a full working week to develop; a similar SPI one was under 100 lines and done in half a day.

I do agree that if you expect speed, synchronous operation, and timing predictability out of I2C, you are using the wrong interface.  Many sensors intended for more timing-critical products offer selectable I2C or SPI interfaces, and you are supposed to use SPI mode if you need those features.

I2C is great when all you need is a bunch of easy-to-glue-on sensors and you can afford to control them with inefficient blocking calls, every now and then, in a non-timing-critical main thread.  For everything else, it gets really messy, really quickly.  If you feel like you need to tune the I2C bus frequency to exactly 400 kHz, you are probably in this "don't use I2C" territory.
jmaja:
I don't think I have actually used I2C before.  I have used SPI a lot and would have preferred it, but the most suitable sensor had only I2C.  I didn't realize all the limitations of I2C.  It seems to need a very high CPU clock relative to the data rate: at 12 MHz I get only a 500 kHz SCL, which becomes 427 kHz in reality, and the effective rate is much lower still once time is spent on the slave address and the gaps between transfers.  Not so good when you are trying to minimize power consumption by deep-sleeping as much as possible.

The API seems to use an ISR for handling everything that happens on I2C.  It's 850 lines, although about half of that isn't used in master mode and is compiled out with #ifdef.

I usually don't use any API, but since this is a Bluetooth chip that is new to me, I have tried to use it.  I just hate it when you can't understand what the API actually does because of the several layers of calls.
Siwastaja:
If you need performance out of I2C and the sensor choice is fixed, you should put some thought into writing your own I2C access layer, written entirely for your specific sensor, stripping away the abstraction layers.  It's basically an interrupt-driven state machine, but it gets complicated if your MCU's I2C peripheral is of the sucky variant.  If it is, the sequencing goes like: please generate a start condition, and give me an interrupt when the start has been generated... then, in that high-priority interrupt, write the address, enable the ACK interrupt, wait for the ACK interrupt, and so on, and so on.

BTDT - good luck  :).

It's so much easier on a microcontroller with a sane, modern I2C peripheral that can handle the full transaction using DMA from one initial configuration per transfer.  But beware: the fact that an I2C peripheral supports DMA on paper doesn't mean it's actually usable.  You may still need to handle something like five timing-critical interrupts before you get to the point of using the DMA for a whopping 2-byte transfer.
rstofer:
My last use of SPI was fast, as I mentioned above, but more importantly, the amount of data transferred was substantial.

Just for fun...

Back in the old days, circa '70, the IBM 1130 computer had a CalComp 1627 drum plotter. 

http://ibm1130.org/hw/io/

It took one transfer for every 0.01" step it made, so if there was a lot of ink going down onto the paper, there were a LOT of steps.  Actually, even the white space took steps; hundreds of thousands of steps, maybe.  Well, I have an FPGA implementation of the 1130 and I wanted to use my LaserJet as a plotter.  So I used an LPC1768 to grab the plotter stream via SPI, convert the step-by-step commands to printer sentences, and pass them to the printer over the LAN.

See attached...  I have no idea how many steps there are, but it takes about 268 lines of code to get it done.  The problem is from my grandson's high-school freshman algebra homework.  The DA/DX bit should be in lower case, but the plotter library (or the 1130) didn't support lower case.

And that's the way plots were made back in the early days.  Every little detail was left to the programmer and FORTRAN ruled!  Still does...