Let's be honest, the "SEL_I" signal's purpose is a bit vague to me.
It comes from the Wishbone B4 standard:
http://cdn.opencores.org/downloads/wbspec_b4.pdf
During a Wishbone bus transaction, the data size might be 8, 16, 32, or 64 bits, and who knows what else.
The SEL_I signal received by a Wishbone peripheral selects which part of the data word (upper or lower byte lanes) is targeted by the write, using a bitmask pattern:
For instance:
(*(uint8_t *)0x20001024) = 5; would end up in a bus write with a SEL_I of
4'b0001, assuming a 32-bit word size with 8-bit bytes.
Or it would be something like 32'b00000000000000000000000011111111 with 1-bit bytes.
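To make that first case concrete, here is a small C sketch of how a master could derive the SEL mask from the byte address and the transfer size. It's purely my own illustration (the helper name wb_sel_mask and the little-endian lane numbering are my assumptions, not something the spec dictates), for the 8-bit-byte / 32-bit-word case:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: build the 4-bit SEL mask for a 32-bit data bus
   with 8-bit granularity, little-endian lane numbering.
   size_bytes is the transfer size: 1, 2 or 4. */
static uint8_t wb_sel_mask(uint32_t byte_addr, unsigned size_bytes)
{
    unsigned lane = byte_addr & 0x3;                      /* byte offset inside the 32-bit word */
    return (uint8_t)(((1u << size_bytes) - 1u) << lane);  /* contiguous lanes starting at 'lane' */
}

int main(void)
{
    printf("0x%x\n", wb_sel_mask(0x20001024, 1));  /* 0x1 -> 4'b0001, the example above */
    printf("0x%x\n", wb_sel_mask(0x20001026, 2));  /* 0xC -> 4'b1100, a 16-bit write to the upper half-word */
    return 0;
}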
Now what if I give you 3-bit bytes, 21-bit words and this SEL_I value
7'b1000101?
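Just to decode that monster: 21-bit words with 3-bit bytes give 21/3 = 7 lanes, and 7'b1000101 would select lanes 0, 2 and 6. Here's a generic little decoder, purely for illustration (decode_sel and the lane-numbering convention are my own assumptions):

#include <stdio.h>

/* Print which lanes a SEL mask selects, for arbitrary byte and word widths
   (word_bits is assumed to be a multiple of byte_bits). */
static void decode_sel(unsigned sel, unsigned byte_bits, unsigned word_bits)
{
    unsigned lanes = word_bits / byte_bits;
    for (unsigned lane = 0; lane < lanes; lane++)
        if (sel & (1u << lane))
            printf("lane %u -> data bits [%u:%u]\n",
                   lane, (lane + 1) * byte_bits - 1, lane * byte_bits);
}

int main(void)
{
    decode_sel(0x45, 3, 21);   /* 7'b1000101: lanes 0, 2 and 6 */
    return 0;
}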
Or a bit less crazy: 8-bit bytes, 32-bit words, and
4'b1000 (instead of the expected 4'b0001), that is, a
uint8_t write three addresses too low, but corrected by the SEL_I signal.
It also permits having 8-bit bytes, but addresses that increment by chunks of 4 bytes (one 32-bit word at a time), with SEL_I used to select the "sub-address": 4'b0001 for the lowest byte, 4'b0010 for the second byte, 4'b0100 for the third byte, 4'b1000 for the fourth byte.
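On the slave side, that word-granular scheme would boil down to something like this C model of a write. Again a sketch of the idea only: slave_write, regs and N_WORDS are made-up names, not anything from the spec.

#include <stdint.h>

#define N_WORDS 64
static uint32_t regs[N_WORDS];   /* one 32-bit word per slave address */

/* Model of a slave write: word_addr picks the 32-bit word, and SEL enables
   individual byte lanes within it (4'b0001 = lowest byte, 4'b1000 = highest). */
static void slave_write(uint32_t word_addr, uint8_t sel, uint32_t wdata)
{
    uint32_t mask = 0;
    for (unsigned lane = 0; lane < 4; lane++)
        if (sel & (1u << lane))
            mask |= 0xFFu << (8 * lane);
    regs[word_addr] = (regs[word_addr] & ~mask) | (wdata & mask);
}

int main(void)
{
    slave_write(3, 0x2 /* 4'b0010 */, 0x0000A500);  /* writes 0xA5 into the second byte of word 3 */
    return 0;
}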
So my question is: why choose an encoding scheme that encourages such sick setups? Why not have SEL_I simply encode the transfer size? Like 2'b00 for 8-bit, 2'b01 for 16-bit, 2'b10 for 32-bit, and 2'b11 for 64-bit?
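In code terms, the alternative I have in mind would look roughly like this (completely hypothetical, not something the spec defines; the names are mine):

#include <stdint.h>

/* Hypothetical alternative: SEL_I as a 2-bit transfer-size code
   instead of a per-lane bitmask. */
enum wb_size_code { WB_SIZE_8 = 0, WB_SIZE_16 = 1, WB_SIZE_32 = 2, WB_SIZE_64 = 3 };

static unsigned transfer_bytes(enum wb_size_code code)
{
    return 1u << code;   /* 1, 2, 4 or 8 bytes */
}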
P.S.: I hope you had fun reading the nightmare fuel above.
[EDIT]: the post was sent too soon!