Electronics > Microcontrollers

How to get very short delays for timing (32F4xx)


peter-h:
I came across this snippet in the code generated by Cube IDE



The ... delay_us is in this case 3, for 3 us. The counter-- will take 1 cycle per decrement and 1 cycle for the test for zero, unless they get pipelined, in which case it is just 1 cycle per loop.

Obviously this will give you a minimum delay, and interrupts etc will extend it.

Getting short delays has generally been dodgy because a compiler could optimise the loop away entirely.
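A minimal sketch of this kind of busy-wait loop, with `volatile` so the compiler can't delete it. The names, the clock value, and the 3-cycles-per-iteration figure are assumptions for illustration, not from the Cube-generated source:

```c
#include <stdint.h>

/* Assumed core clock; adjust for your board (168 MHz is typical for an STM32F407). */
#define CPU_HZ 168000000u

/* Assumed cost of one decrement-and-branch pass; measure on your part. */
#define CYCLES_PER_LOOP 3u

static uint32_t loops_for_us(uint32_t us)
{
    return us * (CPU_HZ / 1000000u) / CYCLES_PER_LOOP;
}

static void delay_us(uint32_t us)
{
    volatile uint32_t counter = loops_for_us(us);
    while (counter--)
        ;   /* volatile forces a real load/decrement/store each pass */
}
```

As noted above, this only sets a minimum delay: interrupts, flash wait states and caches can all stretch it.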

ataradov:
Here is the code I use that is not subject to compiler optimizations or flash wait states and cache optimizations. Still subject to interrupts, of course.

--- Code: ---__attribute__((noinline, section(".ramfunc")))
static void delay_ms(int ms)
{
  uint32_t cycles = ms * F_CPU / 3 / 1000;

  asm volatile (
    "1: subs %[cycles], %[cycles], #1 \n"
    "   bne 1b \n"
    : [cycles] "+r"(cycles)
  );
}

__attribute__((noinline, section(".ramfunc")))
void delay_cycles(uint32_t cycles)
{
  cycles /= 4;

  asm volatile (
    "1: subs %[cycles], %[cycles], #1 \n"
    "   nop \n"
    "   bne 1b \n"
    : [cycles] "+l"(cycles)
  );
}

--- End code ---

Replace "subs" with "sub" for Cortex-M0+.

You can /3 in the last example and remove the "nop", of course. Division by 4 is more efficient, especially on CM0+. See what works better in a particular case.
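As a worked check of the arithmetic (the clock value is an assumption here, since the post doesn't fix one): with F_CPU = 168 MHz, delay_ms(1) spins 56000 times through the 3-cycle subs/bne loop.

```c
#include <stdint.h>

#define F_CPU 168000000u  /* assumed example clock (168 MHz STM32F4) */

/* Same arithmetic as delay_ms above: iterations of the 3-cycle subs/bne loop. */
static uint32_t cycles_for_ms(uint32_t ms)
{
    return ms * F_CPU / 3u / 1000u;
}
```

Note that `ms * F_CPU` is evaluated first, so in 32-bit arithmetic the expression overflows for delays beyond roughly 25 ms at this clock; keep the argument small or widen the type.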

Doctorandus_P:
Adding the "volatile" keyword to the counter variable or an "asm("nop")" will prevent optimising code away and force the compiler to keep your code, though it may still shove things around a bit.

If you want accurate timing, then have a look at using a timer in one-shot mode.

I'm not a fan of software delays, but sometimes I do use them. One example was multiplexing some relatively big 7-segment displays in an ISR routine. These needed a few us between turning a digit off and turning the next digit on to prevent ghosting. Timing for this is not critical.

If your needs are beyond something simple like this, then spend some time on the high level design and consider whether you really want to use software delays.

rhodges:
This is what I use. The variable cpu_speed is set in my board setup.

--- Code: ---/*
 * Loop for N microseconds.
 * Avoid delays that are a significant fraction of one millisecond.
 */
void delay_cycles(int cycles);  /* defined below */

void delay_usecs(int usecs)
{
    delay_cycles(usecs * (cpu_speed / 1000000));
}
/*
 *  Loop for N SysTick (==CPU) cycles
 */
void delay_cycles(int cycles)
{
    int start, diff;

    start = SysTick->VAL;
    for (;;) {
        diff = start - SysTick->VAL;
        if (diff < 0)
            diff += SysTick->LOAD;
        if (diff > cycles)
            break;
    }
}

--- End code ---
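The diff < 0 correction works because SysTick is a 24-bit down-counter that reloads from LOAD after reaching zero, so a wrap shows up as a negative difference. A host-side sketch of the same arithmetic (the concrete values are made up for illustration):

```c
#include <stdint.h>

/* Elapsed cycles between two samples of a down-counter that reloads from 'load'. */
static int elapsed_cycles(int start, int now, int load)
{
    int diff = start - now;
    if (diff < 0)       /* the counter wrapped past 0 and reloaded */
        diff += load;
    return diff;
}
```

For example, with load = 1000, going from 100 down to 40 is 60 cycles, and going from 40 through the reload to 980 is also 60 cycles. Delays longer than one reload period can't be measured this way, hence the warning above about keeping delays well under a millisecond.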

peter-h:
Yes, I would think loading a hardware timer and waiting on it until it reaches zero (or overflows, whichever way the timer counts; on many chips they can only increment) is the best way, but it is obviously not "thread safe".

Can systick be used for microsecond delays?
