I2C automatic data rate adaption to pullups
T3sl4co1l:
Ah yes, poorly interleaved interrupts and peripheral operations can chew up arbitrary amounts of time.  You only get to save time, down to the minimum critical path time, when everything is perfectly overlapped. Which is almost never going to happen.  But between those extremes, yes, lots of low-hanging fruit... or not.  :scared:

Tim
jmaja:

--- Quote from: T3sl4co1l on February 21, 2019, 12:33:46 pm ---Ah yes, poorly interleaved interrupts and peripheral operations can chew up arbitrary amounts of time.  You only get to save time, down to the minimum critical path time, when everything is perfectly overlapped. Which is almost never going to happen.  But between those extremes, yes, lots of low-hanging fruit... or not.  :scared:

Tim

--- End quote ---

It's not about an interrupt happening randomly during I2C; the interrupts are caused by the I2C events themselves. The I2C component is generated by the Cypress IDE, and it uses an interrupt at the end of each byte. It seems to "waste" around 240 clock cycles between the slave address and the data that follows. Isn't that a lot? E.g. at a 12 MHz system clock and a 427 kHz real-life data rate, there is first a burst for the slave address, which takes about 22 us. But then SCL stays low and SDA high for another 22 us before the data is sent or read. With a higher system clock this gap is smaller, so it's not clock stretching caused by the slave.
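For reference, those figures are self-consistent: 9 SCL periods (8 address/R-W bits plus ACK) at 427 kHz is about 21 us, and 240 CPU cycles at 12 MHz is 20 us. A quick sanity check in plain C, treating the numbers above as rough assumptions rather than datasheet values:

--- Code: ---
/* Sanity check of the timing figures above (rough numbers from the
   post, not from any datasheet). */
#include <stdio.h>

int main(void)
{
    double f_sys      = 12e6;   /* system clock, Hz          */
    double f_scl      = 427e3;  /* observed SCL rate, Hz     */
    double isr_cycles = 240.0;  /* assumed per-byte ISR cost */

    /* Address phase: 8 address/R-W bits + 1 ACK bit = 9 SCL periods. */
    printf("address burst: %.1f us\n", 9.0 / f_scl * 1e6);

    /* Gap while the ISR runs between address and first data byte. */
    printf("ISR gap:       %.1f us\n", isr_cycles / f_sys * 1e6);

    return 0;
}
--- End code ---

Both land near the measured ~22 us, consistent with a fixed CPU cost per byte rather than clock stretching.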
T3sl4co1l:
Ah, even worse then -- it's synchronous. It may be worth digging into what the library / autogenerated code is doing and simplifying it. For example, if you know what data to send, and expect to receive, after the first byte, precalculate that before the next interrupt arrives (while the first byte is still transmitting), then strip the second interrupt down as thin as possible. If they're the same interrupt, it may be hard to avoid overhead while branching to different paths -- compilers tend to store everything they need at the top of the function, regardless of whether it's used that time or not -- at which point an assembler solution may be necessary. Unless you're heavily invested in making this project go fast, try to avoid assembler. :) (Do take the time to learn the instruction set, though -- inspect the compiler's output and make sure it's doing what you intend it to do.)
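A minimal sketch of that idea in C -- the register name, address, and handler hook here are hypothetical placeholders, since the Cypress-generated code has its own:

--- Code: ---
/* Sketch of "precompute in the main loop, keep the ISR thin".
   I2C_TX_DATA_REG and I2C_ISR are made-up stand-ins. */
#include <stdint.h>

#define I2C_TX_DATA_REG (*(volatile uint8_t *)0x40000000u) /* made-up address */

static volatile uint8_t next_byte;   /* prepared ahead of time      */
static volatile uint8_t need_byte;   /* ISR asks main loop for more */

/* Main loop: do the expensive work while the previous byte is
   still clocking out on the wire. */
void i2c_prepare(uint8_t value)
{
    next_byte = value;
    need_byte = 0;
}

/* Byte-complete ISR: one register write and one flag -- no
   branching, no buffer bookkeeping in the hot path. */
void I2C_ISR(void)
{
    I2C_TX_DATA_REG = next_byte;
    need_byte = 1;
}
--- End code ---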

On AVR, for example, it's easy enough to do a ~60-cycle interrupt with GCC. I don't know what your SoC is capable of, but I'd be surprised if it needed quite that many cycles just to enter the interrupt.
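For a sense of scale, with avr-gcc a body this small forces only a handful of registers to be saved (USART_RX_vect is just an example vector; the TWI/I2C case would be analogous):

--- Code: ---
/* Lean avr-gcc interrupt handler: entry, one load, one store, exit.
   The whole round trip is on the order of a few dozen cycles. */
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t last_rx;

ISR(USART_RX_vect)          /* example vector (ATmega328P) */
{
    last_rx = UDR0;         /* reading UDR0 also clears the flag */
}
--- End code ---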

Which, as a historical note, the 8086 needed several times that (well over a hundred cycles) for a complete interrupt cycle -- besides being slow as molasses to begin with (lots of cycles per instruction -- microcoded architecture), an interrupt cycle had to perform about eight words (16 bits/word) of memory access, plus fetching the instructions themselves. The interrupt itself took something like 120 cycles (while a FAR CALL took merely 80). Even worse if you had an 8088, the 8-bit-bus version of the 8086: it needed twice as many bus cycles besides, and your average system had several cycles of wait state to access DRAM.

Tim
rstofer:

--- Quote from: T3sl4co1l on February 25, 2019, 01:35:10 pm ---Which, as a historical note, the 8086 needed several times that (well over a hundred cycles) for a complete interrupt cycle -- besides being slow as molasses to begin with (lots of cycles per instruction -- microcoded architecture), an interrupt cycle had to perform about eight words (16 bits/word) of memory access, plus fetching the instructions themselves. The interrupt itself took something like 120 cycles (while a FAR CALL took merely 80). Even worse if you had an 8088, the 8-bit-bus version of the 8086: it needed twice as many bus cycles besides, and your average system had several cycles of wait state to access DRAM.

Tim

--- End quote ---

In my view, the 8086 hardware should only be responsible for vectoring to the interrupt routine and saving the return address on the stack -- which it does (it pushes FLAGS, CS and IP). The rest of the bloatware is on the compiler writers and the programmers. The 8086 is short on registers, so the C compiler will necessarily generate a prolog and epilog, but a clever assembly language programmer may be able to short-circuit a lot of the saves and restores.

It's nice when the hardware provides a duplicate set of registers for system mode.
rstofer:
Back to the main topic: I almost never use I2C. It is truly a PITA to get working. Mostly I use SPI. Sure, I need separate CS pins (or an IO expander), but I can get some real speed out of the protocol. I'm pretty sure my LPC1768 mbed is receiving SPI at 12.5 MHz on an interrupt-driven basis, including queuing. Do check carefully when creating an SPI slave -- timing is everything.
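Roughly what that looks like, as a sketch using LPC17xx CMSIS-style names (LPC_SSP0, SSP0_IRQHandler); clock, pin, and interrupt-enable setup is omitted, so treat the register details as assumptions to verify against the user manual:

--- Code: ---
/* Interrupt-driven SPI slave receive with a ring buffer, in the
   style of the LPC17xx CMSIS headers. Peripheral setup omitted. */
#include "LPC17xx.h"
#include <stdint.h>

#define RX_SIZE 256u                       /* power of two: cheap wrap */
static volatile uint8_t  rx_buf[RX_SIZE];
static volatile uint16_t rx_head, rx_tail;

void SSP0_IRQHandler(void)
{
    /* Drain the RX FIFO: SR bit 2 (RNE) = receive FIFO not empty. */
    while (LPC_SSP0->SR & (1u << 2)) {
        rx_buf[rx_head & (RX_SIZE - 1u)] = (uint8_t)LPC_SSP0->DR;
        rx_head++;
    }
}

/* Main-loop side: returns -1 when the queue is empty. */
int spi_read(void)
{
    if (rx_head == rx_tail)
        return -1;
    return rx_buf[rx_tail++ & (RX_SIZE - 1u)];
}
--- End code ---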
