Author Topic: ARM Noob Question  (Read 1171 times)

0 Members and 1 Guest are viewing this topic.

Offline kfnightTopic starter

  • Regular Contributor
  • *
  • Posts: 71
ARM Noob Question
« on: November 11, 2017, 12:49:44 am »
Suppose there is a memory-mapped peripheral that I want to read from or write to using LDR/STR. The peripheral is an APB slave, if that matters. Do the LDR/STR instructions typically block program execution until the read/write operation completes, or do they return immediately, leaving me to check a status bit to see when the read/write completes?

Thanks
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11261
  • Country: us
    • Personal site
Re: ARM Noob Question
« Reply #1 on: November 11, 2017, 03:06:09 am »
That depends a lot on the actual core and implementation. Most implementations have at least write buffers. Reads will block execution, but may or may not block the bus entirely.

And no, there are no status bits; it will just work as you expect, except that writes may complete much later than the instruction that issued them. There are synchronization barrier instructions (DSB/DMB) that help deal with this.
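The "write completes later than the instruction" point can be sketched in C. This is a hedged, host-runnable sketch: `demo_reg` is a plain variable standing in for a real memory-mapped register (which would be a fixed address cast in actual firmware), and `BUS_SYNC` is a hypothetical wrapper that expands to a DSB on ARM and to a plain compiler barrier elsewhere.

```c
#include <stdint.h>

/* On real hardware this would be a fixed peripheral address, e.g.
 *   #define DEMO_REG (*(volatile uint32_t *)0x40001000u)
 * Here a plain variable stands in so the sketch runs anywhere. */
static volatile uint32_t demo_reg;

/* Hypothetical barrier macro: a real DSB on ARM targets, a
 * compiler-only barrier on other hosts (GCC/Clang syntax). */
#if defined(__arm__) || defined(__aarch64__)
#  define BUS_SYNC() __asm volatile ("dsb" ::: "memory")
#else
#  define BUS_SYNC() __asm volatile ("" ::: "memory")
#endif

void start_peripheral(void)
{
    demo_reg = 1u;  /* STR: may sit in a write buffer for a while */
    BUS_SYNC();     /* barrier: ensure the write has actually completed */
    /* Only after this point is it safe to, say, enter sleep or gate
     * the peripheral clock, knowing the write really landed. */
}
```

Without the barrier, the core can race ahead of the buffered write, which is exactly the surprise described above.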

And some devices (like the Atmel/Microchip SAM Dxx) have specially designed peripherals that themselves need synchronization. The actual bus transfer happens as usual, but the peripheral takes time to propagate the write or prepare the result, and you have to actively poll a sync flag for it.
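That polling idiom looks roughly like this. Everything here is a stand-in: `demo_periph_t`, `DEMO`, and `DEMO_SYNCBUSY_ENABLE` are hypothetical names mirroring the SAM D vendor-header style, and the registers are plain variables so the sketch runs on a host (on real silicon, hardware sets and clears SYNCBUSY).

```c
#include <stdint.h>

/* Simulated register pair standing in for a SAM D-style peripheral.
 * On real silicon these live at fixed addresses, and SYNCBUSY bits
 * are set by hardware and cleared once the write has propagated to
 * the peripheral's own clock domain. */
typedef struct {
    volatile uint32_t CTRLA;
    volatile uint32_t SYNCBUSY;
} demo_periph_t;

static demo_periph_t DEMO;  /* zero-initialized: not busy */

/* Hypothetical bit name, mirroring the vendor header style. */
#define DEMO_SYNCBUSY_ENABLE (1u << 1)

void demo_enable(demo_periph_t *p)
{
    p->CTRLA |= 1u;  /* the bus write itself completes normally */

    /* The actual idiom: busy-wait until the peripheral reports that
     * the write has been synchronized into its clock domain. */
    while (p->SYNCBUSY & DEMO_SYNCBUSY_ENABLE) {
        /* spin; each pass re-reads the volatile register */
    }
}
```

The key point matches the reply above: the LDR/STR themselves behave normally on the bus; the waiting is done in software against a peripheral status bit.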
« Last Edit: November 11, 2017, 03:10:02 am by ataradov »
Alex
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: ARM Noob Question
« Reply #2 on: November 11, 2017, 04:39:11 pm »
The STR will probably be buffered with interlocks such that the next instruction can read the same port and get the new value.  This isn't a big issue because the data that went to the port is still in a register somewhere.  Just copy it and move on.

The LDR has to wait for the data to traverse the bus.  As a programmer you would expect to be able to read the peripheral and operate on the result with the next instruction.  As a CPU designer, you know this won't work without a pipeline stall, so you may have to stall the CPU, begin executing instructions out of order, or insert NOPs.

As a compiler writer, you know there will be pipeline stalls so you emit code in a sequence that optimizes throughput.

The application programmer may know about these issues but just sits back and lets the compiler writers work with the CPU designers.  Unless the errata says otherwise, there should be no glitches caused by any sequence of instructions.
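One thing the application programmer does still have to get right is marking the register access volatile, so the compiler actually emits an LDR on every read instead of caching the value in a register. A minimal sketch; `status_reg` is a hypothetical stand-in variable where real code would use the device header's fixed address:

```c
#include <stdint.h>

/* Hypothetical status register; real firmware would use something
 * like (*(volatile uint32_t *)0x40002000u) from the device header. */
static volatile uint32_t status_reg;

/* Because status_reg is volatile, the compiler must emit a fresh
 * load for every loop iteration rather than hoisting the read out
 * of the loop and spinning forever on a stale value. */
void wait_for_ready(void)
{
    while ((status_reg & 0x1u) == 0) {
        /* each pass re-reads the register */
    }
}
```

Drop the `volatile` and an optimizing compiler is free to turn this into an infinite loop, which is the classic MMIO bug.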
 
