Author Topic: About that Wishbone B4's "SEL_I" signal  (Read 681 times)


Offline josuahTopic starter

  • Regular Contributor
  • *
  • Posts: 119
  • Country: fr
    • josuah.net
About that Wishbone B4's "SEL_I" signal
« on: May 16, 2022, 03:43:55 pm »
Let's be honest, the "SEL_I" signal's purpose is a bit vague to me.

It comes from the Wishbone B4 standard: http://cdn.opencores.org/downloads/wbspec_b4.pdf

During a Wishbone bus transaction, the data can be 8, 16, 32, or 64 bits wide, and who knows what else.

The SEL_I signal received by a Wishbone peripheral selects whether the upper bits or the lower bits of the data bus are targeted by the write, using a bitmask pattern:

For instance: (*(uint8_t *)0x20001025) = 5; would end up in a bus write with a SEL_I like 4'b0001, assuming a 32-bit word size with 8-bit bytes.

Or it would be something like 32'b00000000000000000000000011111111 with 1-bit granularity.

Now what if I give you 3-bit bytes, 21-bit words, and this SEL_I value: 7'b1000101?

Or a bit less crazy: 8-bit bytes, 32-bit words, and 4'b1000 (instead of the expected 4'b0001), that is, a uint8_t write three addresses too low, but corrected by the SEL_I signal.

It would also permit having 8-bit bytes, but addresses that increment in chunks of 32 bits, with SEL_I used to select the "sub-address": 4'b0001 for the lowest byte, 4'b0010 for the second byte, 4'b0100 for the third byte, 4'b1000 for the fourth byte.

So my question is: why choose an encoding scheme that encourages such sick setups? Why not have SEL_I simply encode the access size? Like 2'b00 for 8-bit, 2'b01 for 16-bit, 2'b10 for 32-bit, and 2'b11 for 64-bit?
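For what it's worth, the size encoding I am proposing and the bitmask are trivial to convert between; here is a minimal C sketch (hypothetical helper name, assuming 8-bit bytes on a 32-bit bus):

```c
#include <stdint.h>

/* Decode the 2-bit size encoding proposed above into a SEL-style
 * lane mask for the lowest lanes:
 *   2'b00 (8-bit)  -> 4'b0001
 *   2'b01 (16-bit) -> 4'b0011
 *   2'b10 (32-bit) -> 4'b1111
 * (2'b11, 64-bit, would need a wider bus.) */
static uint8_t size_to_sel(unsigned size_code)
{
    unsigned bytes = 1u << size_code;      /* 1, 2, or 4 bytes */
    return (uint8_t)((1u << bytes) - 1u);  /* contiguous low lanes */
}
```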

P.S.: I hope you had fun seeing the nightmare fuel above

[EDIT]: the post was sent too soon!
« Last Edit: May 16, 2022, 04:07:54 pm by josuah »
 

Offline josuahTopic starter

  • Regular Contributor
  • *
  • Posts: 119
  • Country: fr
    • josuah.net
Re: About that Wishbone B4's "SEL_I" signal
« Reply #1 on: May 16, 2022, 04:41:06 pm »
The only place I could find this discomfort with SEL_I expressed is in the spec of this Wishbone UART core:

https://www.isy.liu.se/en/edu/kurs/TSEA44/OpenRISC/UART_spec.pdf

Quote
The 32-bit mode is fully WISHBONE compatible and it uses the WISHBONE [SEL_I] signal to properly receive and return 8-bit data on 32-bit data bus. The 8-bit version might have problems in various WISHBONE implementations because a 32-bit master reading from 8-bit bus can expect data on different bytes of the 4-byte word, depending on the register address.

I'd better not use SEL_I in any funky way and stick to the only 3 sane values (for 32-bit): 4'b0001, 4'b0011 and 4'b1111.

That way, peripherals can expect only one of these three values, and need not support anything like what you could see in the previous post.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4525
  • Country: au
    • send complaints here
Re: About that Wishbone B4's "SEL_I" signal
« Reply #2 on: May 16, 2022, 11:20:35 pm »
Wishbone defines the smallest addressable unit as a BYTE of 8 bits, so addresses count bytes.

Quote
For instance: (*(uint8_t *)0x20001025) = 5; would end up in a bus write with a SEL_I like 4'b0001

It will not align to the lowest byte, as the address does not have all LSBs = 0; the document you link to specifies how bytes must be aligned within the larger bus (here the byte offset is 1, so SEL_I would be 4'b0010).
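The alignment rule can be spelled out in code. A minimal C sketch (hypothetical helper name; 8-bit bytes on a 32-bit bus), where the low address bits select the byte lanes:

```c
#include <assert.h>
#include <stdint.h>

/* SEL mask for a naturally aligned access on a 32-bit bus with
 * 8-bit granularity. size is the access width in bytes (1, 2, or 4). */
static uint8_t wb_sel_mask(uint32_t addr, unsigned size)
{
    assert(addr % size == 0);                     /* natural alignment required */
    uint8_t lanes = (uint8_t)((1u << size) - 1u); /* mask for the low lanes */
    return (uint8_t)(lanes << (addr & 0x3u));     /* shift to the addressed lanes */
}
```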

Yes, the documentation isn't great (AXI is a comparable example which is better documented), but it's pretty clear.
 

Offline josuahTopic starter

  • Regular Contributor
  • *
  • Posts: 119
  • Country: fr
    • josuah.net
Re: About that Wishbone B4's "SEL_I" signal
« Reply #3 on: May 17, 2022, 09:14:17 am »
Thank you Someone, you have given me the missing step in my staircase.

Quote
document you link to specifies how bytes must be aligned within a larger bus

I somehow glossed over that section, 3.5 Data Organization, entirely! It contains everything I was wondering about, as you pointed out.

The figure named Data Organization for 32-bit Ports could not be more explicit: it shows exactly how the SEL_I/O signals should be set and which bits they map to.

Illustration 3-15 likewise shows the alignment restrictions.

So we do have a sane memory model, enforced by the standard.

This suggests that, while the encoding of SEL_I/O can express invalid combinations (forbidden by the standard), it was likely chosen for convenience.
For instance, in this code:

Code: [Select]
            // write only the granules whose sel_i bit is set
            if (we_i & sel_i[i]) begin
                mem[adr_i_valid][WORD_SIZE*i +: WORD_SIZE] <= dat_i[WORD_SIZE*i +: WORD_SIZE];
            end

Note the sel_i indexing, which simplifies read/write access for less-than-word sizes.
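The same masked write can be modeled in C (a sketch with hypothetical names, not Wishbone code; 32-bit bus, 8-bit bytes):

```c
#include <stdint.h>

/* Model of a SEL-masked 32-bit write: only the byte lanes whose
 * sel bit is set are updated; the other lanes keep their old value. */
static uint32_t wb_masked_write(uint32_t old, uint32_t dat, uint8_t sel)
{
    uint32_t mask = 0;
    for (int i = 0; i < 4; i++)
        if (sel & (1u << i))
            mask |= 0xFFu << (8 * i);   /* expand sel bit to a byte lane */
    return (old & ~mask) | (dat & mask);
}
```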

Quote
Yes, the documentation isn't great (AXI is a comparable example which is better documented), but it's pretty clear.

It does not look so bad to me: there are illustrations, and even a tutorial (Chapter 8) that gives a good introduction.
I knew there was something off with how I understood it, but could not spot where.
It is not like the author wrote "Go look at that Data Organization section!" everywhere (he did)... *Ahem*

All good now, thanks!
« Last Edit: May 17, 2022, 09:28:25 am by josuah »
 

