Author Topic: 134 bit package by SPI using DMA  (Read 2159 times)


Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
134 bit package by SPI using DMA
« on: November 20, 2019, 06:12:29 am »
 
Hi.
I am having trouble receiving 134-bit packets over SPI using DMA. The last 6 bits never reach the DMA buffer.

I suspect this is because the DMA can only work with bytes, not bits: the 134 bits amount to 16 bytes + 6 bits, so 2 more bits would be needed for those last 6 bits to form the final byte that the DMA stores in its buffer.
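To make the arithmetic explicit, a throwaway check (illustration only, not project code; the helper names are made up):

```c
#include <assert.h>
#include <stdint.h>

/* How many complete bytes the DMA will see for a packet of N bits,
 * and how many bits are left stranded in the SPI shift register. */
static inline int spi_full_bytes(int packet_bits)    { return packet_bits / 8; }
static inline int spi_leftover_bits(int packet_bits) { return packet_bits % 8; }
```

For 134 bits this gives 16 full bytes and 6 leftover bits, i.e. 2 more clocks would be needed to complete a 17th byte.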

Is there no way to receive packets that are not whole bytes, using SPI with DMA?

Regards.

 
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11277
  • Country: us
    • Personal site
Re: 134 bit package by SPI using DMA
« Reply #1 on: November 20, 2019, 06:53:57 am »
You don't mention the MCU type. This has nothing to do with DMA. The SPI peripheral is likely hard-coded to receive 8- or 9-bit frames, so it is the SPI that never tells the DMA that there is a new byte.

Alex
 
The following users thanked this post: luiHS

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21720
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: 134 bit package by SPI using DMA
« Reply #2 on: November 20, 2019, 07:13:27 am »
In general, it would have to be something like:

0. Who frikken cares?  Trample the 2+ extra bits, and mask off the garbage later.

If the SPI doesn't reset between nCS assertions, but retains cycle count, well, sucks to be you.  No seriously, you're going to miss a clock some day and literally all your data will be trash after that.  There are a surprising number of ICs that misbehave in this way.  A surprising number of which have no way to reset the comm state, you must send a global reset or power-cycle them when they fuck up.  Brain dead design, but it's totally real. :(

As a corollary, you probably can't use such a device on an SPI bus, either because it always receives clocks, or sometimes mistakenly receives them; or MISO isn't a tristate pin; or...
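Option 0 in code, assuming an MSB-first slave that is simply allowed to clock in 136 bits (17 bytes); the helper name is made up for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PACKET_BITS  134
#define PACKET_BYTES ((PACKET_BITS + 7) / 8)   /* 17: round up to whole bytes */

/* After clocking in 136 bits MSB-first, the final byte holds 6 valid bits
 * in its top positions and 2 junk bits at the bottom.  Zero the junk so
 * later comparisons or checksums see deterministic data. */
static void mask_trailing_garbage(uint8_t buf[PACKET_BYTES])
{
    int junk_bits = PACKET_BYTES * 8 - PACKET_BITS;        /* 2 */
    buf[PACKET_BYTES - 1] &= (uint8_t)(0xFF << junk_bits); /* mask = 0xFC */
}
```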


1. Sequence a DMA transaction with the SPI control register, to set a 6 or 7-bit mode for the last byte or two.

Probably, the DMA isn't complex enough to do this?


2. Trigger an interrupt on the penultimate byte, change the config register, go one (or two) more bytes, then do the usual processing.

More manual, requires CPU intervention -- but doesn't require interaction from the main() thread.  Basically #1 but patching in the functionality with an interrupt.  Pretty typical case I would guess.


3. Trigger an interrupt on the penultimate byte, and bit-bang out the remaining bits.

Time consuming.  Perhaps more time than you can afford in an interrupt.  May have to be pushed into a subroutine elsewhere, perhaps a lower priority interrupt, or triggering an event which is polled for in main().

If the bitrate is intentionally on the low side, a timer could be used to take up the space between clock edges, so the last few bits would be received by a timer interrupt.  This would save the CPU cycles that a 100%-software bit-bang would need.
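A pure-logic sketch of the bit-bang fallback (option 3); pin access is abstracted behind callbacks so nothing here is tied to the MK66, and on real hardware they would be fast GPIO reads:

```c
#include <assert.h>
#include <stdint.h>

typedef int (*pin_read_fn)(void *ctx);

/* Shift in `nbits` bits MSB-first, sampling the data line on each
 * rising edge of the clock line. */
static uint8_t shift_in_bits(int nbits, pin_read_fn read_clk,
                             pin_read_fn read_data, void *ctx)
{
    uint8_t value = 0;
    int prev = read_clk(ctx);
    int seen = 0;
    while (seen < nbits) {
        int now = read_clk(ctx);
        if (!prev && now) {                    /* rising edge */
            value = (uint8_t)((value << 1) | (read_data(ctx) & 1));
            seen++;
        }
        prev = now;
    }
    return value;
}

/* Off-target simulation helpers: replay a recorded waveform.  Each
 * read_clk() call advances to the next sample; read_data() returns the
 * data level at the sample just read. */
struct sim_waveform { const int *clk; const int *dat; int i; };
static int sim_read_clk(void *ctx)
{ struct sim_waveform *s = ctx; return s->clk[s->i++]; }
static int sim_read_data(void *ctx)
{ struct sim_waveform *s = ctx; return s->dat[s->i - 1]; }
```

In an ISR you would poll the real clock pin instead, which is exactly why this can be more time than you can afford there.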

I suppose you could also make a hybrid, where you start one last byte (or word) transfer, but start a timer simultaneously, and stop the SPI module right in the middle of the transaction.  Tricky to resolve timing, and may not be reliable (ooh, random bit errors!).  May not be supported on a given platform (e.g. if the shift register contents are considered invalid (and unreadable) until a full byte/word is complete).


If you need coherent clocking (it's sensitive to timing as well as the number of bits), consider a hardware solution.  Discrete logic could do it, but an FPGA would be sensible.  Upside, you can make any kind of bridge you like: SPI, I2C, parallel, async serial; to name a few.  A full [double-]buffered solution would incur a fair amount of delay (i.e., at least a full frame), but a solution more like bridging clock domains could be used for minimum latency.

Tim
« Last Edit: November 20, 2019, 07:18:45 am by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: luiHS, I wanted a rude username

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #3 on: November 20, 2019, 07:55:49 am »
Quote from: ataradov

You don't mention the MCU type. This has nothing to do with DMA. The SPI peripheral is likely hard-coded to receive 8- or 9-bit frames, so it is the SPI that never tells the DMA that there is a new byte.

The MCU is a Kinetis MK66; the C source code is for a Teensy, using the Arduino IDE.

I do not know whether, or how, the SPI/DMA can be configured to receive data packets that are not whole bytes; in my case, 134-bit packets, which come to 16 bytes + 6 bits. I only receive the 16 bytes and lose the last 6 bits.

The SPI frame size can possibly be defined with this parameter, but I still don't see how to receive the 134-bit data packet using SPI and DMA:

SPI0_CTAR0_SLAVE = SPI_CTAR_FMSZ(15);

134 is only divisible by 2. Should I, if that is even possible, create 2-bit frames? I don't see how to do that and still load the received data into the DMA buffer.


« Last Edit: November 20, 2019, 08:10:27 am by luiHS »
 

Offline ataradov

  • Super Contributor
  • ***
  • Posts: 11277
  • Country: us
    • Personal site
Re: 134 bit package by SPI using DMA
« Reply #4 on: November 20, 2019, 08:08:35 am »
This MCU supports frames from 4 to 16 bits. Unfortunately 134 is not divisible by any number in this range, so in general you are out of luck.
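The claim is easy to verify (134 = 2 × 67, and 67 is prime, so no frame size from 4 to 16 divides it):

```c
#include <assert.h>

/* Return the first frame size in [min_fsz, max_fsz] that divides the
 * packet evenly, or 0 if none does. */
static int divisible_frame_size(int packet_bits, int min_fsz, int max_fsz)
{
    for (int n = min_fsz; n <= max_fsz; n++)
        if (packet_bits % n == 0)
            return n;
    return 0;
}
```

Padding the packet to 136 bits (17 bytes) is what makes it representable with standard frame sizes.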

Any software solution will involve compromising robustness.

How fast is your clock?
Alex
 
The following users thanked this post: luiHS

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #5 on: November 20, 2019, 08:35:38 am »
 
This is the logic analyzer data.
The clock frequency is 625 kHz.

The DOT_CLOCK signal carries packets of 134 pulses that clock in the data on DOT_DATA, over SPI using DMA.

The DMA is enabled by an interrupt triggered on the rising edge of the COLUMN_LATCH signal; this way the DMA should read 134 bits, but it loses the last 6.

I modified the software so that, after the rising-edge interrupt on COLUMN_LATCH, I wait until COLUMN_LATCH goes low and then apply a delay of 10 microseconds before enabling the DMA. This skips the first 6 bits, leaving only 128 bits to read. It works, but it affects the rest of the software.

« Last Edit: November 20, 2019, 08:46:22 am by luiHS »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: 134 bit package by SPI using DMA
« Reply #6 on: November 20, 2019, 10:02:08 am »
Are the last 6 bits readable from the SPI receive shift register? Or do they sit in a hidden register that's not actually visible to the CPU core at all?

If they are, can you not trigger an interrupt on the rising edge of COLUMN_LATCH, which reads the shift register and stores its contents into the byte immediately after the end of the DMA buffer? The DMA controller transfers the 16 complete bytes, and your ISR handles the 17th.

The same interrupt might need to reset the SPI device, because it'll still be waiting to receive the last two bits.
 
The following users thanked this post: luiHS

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #7 on: November 20, 2019, 02:33:59 pm »
Quote from: AndyC_772

Are the last 6 bits readable from the SPI receive shift register? Or do they sit in a hidden register that's not actually visible to the CPU core at all?

I don't know that. What I do is copy the DMA buffer to an array as soon as the rising-edge interrupt on COLUMN_LATCH triggers, and the last 6 bits of the 134-bit packet are not in the buffer:

memcpy(wpc_planes[plane][0], plane_buffer, ROW_LENGTH * sizeof(uint16_t));

If I apply a delay of 10 microseconds before enabling the DMA, the first 6 (unused) bits are discarded and I can read the next 128 bits, but that changes the way the program works:
   
delayMicroseconds(10);
dmaSPI0rx->destinationBuffer(plane_buffer, ROW_LENGTH * sizeof(uint16_t) * 2);
dmaSPI0rx->enable();

SPI0_MCR &= ~SPI_MCR_HALT;
digitalWriteFast(DMD_CS, LOW);

This is probably the solution, but I need to review the rest of the program so it works well with this modification, unless there is some way to read all 134 bits over SPI/DMA.
« Last Edit: November 20, 2019, 02:51:05 pm by luiHS »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: 134 bit package by SPI using DMA
« Reply #8 on: November 20, 2019, 03:24:44 pm »
The last 6 bits aren't in the buffer, which makes total sense because the SPI peripheral hasn't yet received a full 8 bits, so it doesn't trigger the DMA controller to fetch the contents of the SPI receive shift register and copy it to RAM.

You just need to check whether those last 6 bits are in fact visible to the CPU at all. Sometimes the shift register is visible to the CPU in the memory map, so you can read it in software. Sometimes it's completely hidden, and only transferred into a memory mapped register once a complete byte has been received - and in that case you're out of luck. Have a good read of the register interface section in the MCU's reference manual - there should be a block diagram of the SPI interface which will help clarify this.

What's the thinking behind the delay before enabling DMA? Is the 10us critical? Are you just trying to mask out the first 6 bits, so they're not received at all, then capture the remaining 128? If so then the exact timing between the SPI data and the time at which the SPI interface is enabled will be critical, and that's not something I'd ever recommend.

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14510
  • Country: fr
Re: 134 bit package by SPI using DMA
« Reply #9 on: November 20, 2019, 03:36:26 pm »
Of course, as said above: just transmit one more byte and discard the unused bits. There's no way you can store fewer than 8 bits without using a full byte in memory anyway, so nothing is wasted.

Now if your problem is that you absolutely want the SPI master (MCU) to generate only 134 clock pulses, and any pulse beyond that would screw up your peripheral (whatever it is), then said peripheral has a definite problem. Any well-behaved SPI slave should tolerate excess clock pulses IMO (and just ignore them), then get reset by the CS signal. If that is not your case, then you have a problem. I don't think there is any way of generating exactly 134 clock pulses with standard SPI on your MCU: 134 = 2 × 67, and 67 is prime.

OTOH, if 136 clock pulses are OK with your system, just use packets of 17 bytes = 136 bits.

If the master SPI in your MCU can handle 6-bit transfers, you can do something a bit ugly as a workaround: set up SPI with 8-bit transfers, issue a 16-byte transfer with DMA, then when it's done, switch SPI to 6-bit and send the last 6 bits. Ultra clunky, but that's the only way I can think of if you ABSOLUTELY need to generate 134 clock pulses and not one more.
« Last Edit: November 20, 2019, 03:42:42 pm by SiliconWizard »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: 134 bit package by SPI using DMA
« Reply #10 on: November 20, 2019, 04:27:53 pm »
I was under the impression that we're trying to receive data from some (deeply unhelpful!) device that generates a strange number of bits externally, and that the MCU is a slave.

Is that not the case?

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14510
  • Country: fr
Re: 134 bit package by SPI using DMA
« Reply #11 on: November 20, 2019, 04:50:50 pm »
Quote from: AndyC_772

I was under the impression that we're trying to receive data from some (deeply unhelpful!) device that generates a strange number of bits externally, and that the MCU is a slave.

Is that not the case?

Only the OP can tell. I assumed the MCU acted as an SPI master; they just said "receiving". Obviously you can "receive" data as an SPI master, it's bidirectional.

If the MCU is the SPI slave, then there is a serious problem here. The OP should have mentioned that clearly, because a few of us here seem to have assumed the MCU was the master.

I'm not sure you can "peek" at the internal RX shift register; in most MCUs I know, it is transferred to a readable RX register only when the transfer is complete.
 

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #12 on: November 20, 2019, 06:39:45 pm »
 
Problem solved as I said.

I use an interrupt on the falling edge of the COLUMN_LATCH trigger signal to start the DMA; before starting the DMA I apply a delay of 10 microseconds to skip the first 6 bits, and then start the DMA to capture the next 128 bits. Now everything works perfectly.

Of the 134 bits, only the last 128 are useful; the problem was avoiding the capture of the first 6 bits. After applying the delay I can capture the next 128 bits and work with whole bytes (16 bytes of data).

PS: The MCU is the slave; it receives data to display graphic animations on an LED display. I cannot modify the master.
« Last Edit: November 20, 2019, 06:51:54 pm by luiHS »
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: 134 bit package by SPI using DMA
« Reply #13 on: November 20, 2019, 08:04:55 pm »
Well, OK, it's your project and ultimately your decision, but for the benefit of anyone else who might be reading and who has a similar problem: I wouldn't do it this way.

By delaying a fixed amount of time, you're requiring a timing relationship (6 bits = 10 us) that doesn't otherwise have to exist. You run the risk of your solution breaking as soon as anything changes, such as the SPI clock speed or the interrupt latency in your CPU. If you know nothing will ever change then fair enough, but you may be storing up a nasty surprise for the future where a change in something else, apparently unrelated, causes SPI to break.

Can your MCU clock one of its internal timers from the same pin as the SPI clock? Perhaps you could use one of those to count external clock pulses, and enable the SPI receiver as quickly as possible once the count reaches 6. It's not pretty, but it avoids using fixed delays and so shouldn't break just because of a change in SPI clock speed.

I don't suppose you have an FPGA or CPLD anywhere in your system? A little programmable logic could give a much cleaner, more robust solution.
 
The following users thanked this post: luiHS, I wanted a rude username

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #14 on: November 20, 2019, 10:29:25 pm »
Quote from: AndyC_772

By delaying a fixed amount of time, you're requiring a timing relationship (6 bits = 10 us) that doesn't otherwise have to exist. You run the risk of your solution breaking as soon as anything changes, such as the SPI clock speed or the interrupt latency in your CPU. If you know nothing will ever change then fair enough, but you may be storing up a nasty surprise for the future where a change in something else, apparently unrelated, causes SPI to break.

Nothing will ever change. The machine is from the '90s; the manufacturer no longer exists and these machines are no longer made, only the ones in the hands of collectors who love these products remain.

My hardware is also not going to change in the short term, and if it did, nothing would break, because the external signals will remain the same and a 10 µs delay can be achieved with the same precision on any ARM microcontroller.


Quote from: AndyC_772

Can your MCU clock one of its internal timers from the same pin as the SPI clock? Perhaps you could use one of those to count external clock pulses, and enable the SPI receiver as quickly as possible once the count reaches 6. It's not pretty, but it avoids using fixed delays and so shouldn't break just because of a change in SPI clock speed.

I don't suppose you have an FPGA or CPLD anywhere in your system? A little programmable logic could give a much cleaner, more robust solution.

I cannot unnecessarily complicate something that already works, whether with hardware or software. Besides, the current program is already quite complex: it not only receives external data from a machine, it also has to process that data and generate the images to display on an LED display, with the corresponding timers and DMA channels.

The solution turned out to be simpler than I thought: I just had to change the trigger from the rising to the falling edge and add a small delay to skip the first 6 bits, which were preventing me from capturing the 128 bits of useful data.
« Last Edit: November 20, 2019, 10:50:25 pm by luiHS »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14510
  • Country: fr
Re: 134 bit package by SPI using DMA
« Reply #15 on: November 21, 2019, 04:21:36 pm »
Quote from: luiHS

I use an interrupt on the falling edge of the COLUMN_LATCH trigger signal to start the DMA; before starting the DMA I apply a delay of 10 microseconds to skip the first 6 bits, and then start the DMA to capture the next 128 bits. Now everything works perfectly.

Of the 134 bits, only the last 128 are useful; the problem was avoiding the capture of the first 6 bits. After applying the delay I can capture the next 128 bits and work with whole bytes (16 bytes of data).

PS: The MCU is the slave; it receives data to display graphic animations on an LED display. I cannot modify the master.

OK, thanks for the follow-up. It just happens that the above description (now that you have "solved" it) is quite clear and quite compact. The fact that the first 6 bits could be ignored is actually a major piece of information here. If they couldn't, I'm still not sure how this would be solvable, except maybe with some really bad trick.

I kind of agree with AndyC though; I don't like the fixed delay much. A similar approach to yours, but IMO a bit more robust, would be to set your interrupt to trigger on the 6th rising (or falling, depending on clock polarity) edge instead of the first. Then you wouldn't need any fixed delay. I'm not sure about the MCU you're using, but quite a few MCUs let you set a specific IO to generate an interrupt on every n-th edge. Just a thought.


 
The following users thanked this post: luiHS

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #16 on: November 21, 2019, 05:45:02 pm »
Quote from: SiliconWizard

OK, thanks for the follow-up. It just happens that the above description (now that you have "solved" it) is quite clear and quite compact. The fact that the first 6 bits could be ignored is actually a major piece of information here. If they couldn't, I'm still not sure how this would be solvable, except maybe with some really bad trick.

If I needed to capture all 134 bits, I think I could connect a new GPIO, configured as an output, to the SPI clock input pin. Then, when the interrupt for the next block of data triggers, before stopping the DMA we would generate two more pulses in software from the GPIO onto the SPI clock pin, to complete the 8 bits; that way the DMA would capture the last byte.

Quote from: SiliconWizard

I kind of agree with AndyC though; I don't like the fixed delay much. A similar approach to yours, but IMO a bit more robust, would be to set your interrupt to trigger on the 6th rising (or falling, depending on clock polarity) edge instead of the first. Then you wouldn't need any fixed delay. I'm not sure about the MCU you're using, but quite a few MCUs let you set a specific IO to generate an interrupt on every n-th edge. Just a thought.

I don't know how that could be done. Do you mean configuring the DMA so that it starts capturing SPI data from the sixth clock pulse onwards? If that is possible, it would be perfect.

The interrupt is not on the SPI clock signal, but on another GPIO that receives a signal at the start of each data block. As far as I know, there is no way to program the interrupt directly on the SPI clock signal.

The MCU is a Kinetis MK66.
« Last Edit: November 21, 2019, 05:51:00 pm by luiHS »
 

Offline jhpadjustable

  • Frequent Contributor
  • **
  • Posts: 295
  • Country: us
  • Salt 'n' pepper beard
Re: 134 bit package by SPI using DMA
« Reply #17 on: November 21, 2019, 05:57:19 pm »
Almost every MCU in history can count in hardware and request an interrupt when the terminal count is reached, and the vast majority of those can be gated by an external enable input (such as slave-select). You will probably need to take the SPI peripheral off the pin until the sixth clock pulse is received, then enable SPI, and you might or might not need to disable/enable DMA too. Perhaps you can get away with just resetting the SPI peripheral upon the sixth clock.
"There are more things in heaven and earth, Arduino, than are dreamt of in your philosophy."
 
The following users thanked this post: luiHS

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14510
  • Country: fr
Re: 134 bit package by SPI using DMA
« Reply #18 on: November 21, 2019, 06:49:33 pm »
Quote from: SiliconWizard

OK, thanks for the follow-up. It just happens that the above description (now that you have "solved" it) is quite clear and quite compact. The fact that the first 6 bits could be ignored is actually a major piece of information here. If they couldn't, I'm still not sure how this would be solvable, except maybe with some really bad trick.

Quote from: luiHS

If I needed to capture all 134 bits, I think I could connect a new GPIO, configured as an output, to the SPI clock input pin. Then, when the interrupt for the next block of data triggers, before stopping the DMA we would generate two more pulses in software from the GPIO onto the SPI clock pin, to complete the 8 bits; that way the DMA would capture the last byte.

I think it would qualify a bit as the kind of "bad trick" I was talking about above, but yes, if you had no choice...
You would still need some additional external component to switch between the external SPI clock and your own clock (unless the external one goes Hi-Z by itself once it's done...), and you'd have to make sure there is no possibility of glitches while switching.
Quote from: SiliconWizard

(...) set your interrupt to trigger on the 6th rising (or falling, depending on clock polarity) edge instead of the first. Then you wouldn't need any fixed delay. I'm not sure about the MCU you're using, but quite a few MCUs let you set a specific IO to generate an interrupt on every n-th edge.

Quote from: luiHS

I don't know how that could be done. Do you mean configuring the DMA so that it starts capturing SPI data from the sixth clock pulse onwards? If that is possible, it would be perfect.

The interrupt is not on the SPI clock signal, but on another GPIO that receives a signal at the start of each data block. As far as I know, there is no way to program the interrupt directly on the SPI clock signal.

Again, I don't know the specifics of your MCU, so you would have to check whether this is possible. I'm just describing what is possible with some other MCUs.

The basic idea would be to:

- Route the external SPI clock to both the SPI clock input of your MCU, and to another GPIO;
- Configure this other GPIO as an input, set to trigger an interrupt on the 6th rising edge;
- Configure the DMA and the SPI slave peripheral, but disable the latter temporarily;
- Wait for the interrupt to trigger;
- In the ISR: enable the SPI peripheral (should require just one clock cycle, it's a register write) and disable the above GPIO interrupt. Provided that the interrupt latency + time to enable the SPI peripheral is lower than one SPI clock pulse, it should work fine;
- Once the full DMA transfer is done: disable the SPI peripheral, reset and re-enable the interrupt for the above GPIO;
- If there is an additional external "framing" signal, you can still use it to reset the state of the above mechanism for added robustness.
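The counting part of this scheme is simple to get right; here is a pure-logic sketch of the gating state machine (the actual SPI enable, e.g. clearing SPI_MCR_HALT, and the GPIO interrupt disable are left as comments, since they are MCU-specific):

```c
#include <assert.h>
#include <stdbool.h>

/* Per-frame state for the "enable SPI on the 6th clock edge" scheme. */
struct edge_gate {
    int count;      /* rising edges seen so far in this frame */
    int threshold;  /* enable on this edge: 6 for a 134-bit frame */
};

/* Call from the GPIO edge ISR.  Returns true exactly once per frame, on
 * the threshold-th edge; at that moment the ISR would enable the SPI
 * slave (clear the HALT bit) and disable this GPIO interrupt. */
static bool edge_gate_tick(struct edge_gate *g)
{
    return ++g->count == g->threshold;
}

/* Call on the framing signal (COLUMN_LATCH) to re-arm for the next frame. */
static void edge_gate_reset(struct edge_gate *g)
{
    g->count = 0;
}
```

The robustness win over a fixed delay is that the enable point tracks the actual clock edges, not an assumed bit rate.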

 
The following users thanked this post: luiHS

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26964
  • Country: nl
    • NCT Developments
Re: 134 bit package by SPI using DMA
« Reply #19 on: November 24, 2019, 10:36:02 am »
I see this is for reading a display. How about reading an entire frame instead of one line? That way you would probably end up in a much better situation, where you miss a few pixels at most.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline luiHSTopic starter

  • Frequent Contributor
  • **
  • Posts: 592
  • Country: es
Re: 134 bit package by SPI using DMA
« Reply #20 on: November 24, 2019, 08:27:29 pm »
Quote from: nctnico

I see this is for reading a display. How about reading an entire frame instead of one line? That way you would probably end up in a much better situation, where you miss a few pixels at most.

 
Then I would have to modify the entire program, and it is quite complex. In addition, signals are received from several machine models, each with a different signal format.

I would also have a RAM problem: these displays have 32 or 64 rows. Receiving complete frames would mean enlarging the DMA buffer and the working array into which the DMA buffer is copied, each 32 or 64 times bigger than now, with the corresponding increase in RAM usage.

I don't see any advantage in working with full frames instead of processing the rows as they are received.

 
 

