You can also design the struct for the platform (unportable code? No, of course not!...). So, say, use a uint32_t for the size field, then a uint32_t for flags that happens to absorb the two byte-sized things you had, plus either "reserved" (unused) bits, or even part of the next variable, if applicable. This way, instead of checking byte variables (which get handled as 32 bits anyway) for individual values and flags, you check the same 32-bit variable for more flags. You're handling the same 32 bits as exactly that, 32 bits, and the compiler output follows much more closely from your input -- which helps reduce misunderstandings when inspecting the disassembled output, and misunderstandings between compiler and platform limitations when you do run into weird or unexpected behavior.
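
A minimal sketch of what I mean (the field names and flag values here are made up purely for illustration):

    #include <stdint.h>
    #include <stdbool.h>

    /* Platform-tuned layout: one 32-bit size field, one 32-bit flags word
     * that absorbs what used to be two separate byte-sized fields plus
     * some reserved bits. */
    typedef struct {
        uint32_t size;   /* payload size in bytes */
        uint32_t flags;  /* bit 0: ready, bit 1: dirty, bits 8-15: channel, rest reserved */
    } record_t;

    #define FLAG_READY  UINT32_C(0x00000001)
    #define FLAG_DIRTY  UINT32_C(0x00000002)

    /* One 32-bit test instead of poking at several byte-sized members. */
    static bool ready_and_dirty(const record_t *r)
    {
        return (r->flags & (FLAG_READY | FLAG_DIRTY)) == (FLAG_READY | FLAG_DIRTY);
    }
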
Or if it's not random flags, but the bytes are actually numeric, you can byte-slice them out at the point of use (either into a variable, or as an inline expression like "(int8_t)((var & 0x0000ff00) >> 8)" or whatever), and let the compiler decide how it's going to approach that (32-bit processors often have byte and nibble swapping or selecting instructions). On very advanced processors, you'll even have the luxury of packed (SIMD) instructions, which, if you're doing the same actions to all the bytes, can save even more execution time.
The resulting code should still be portable, but will be suboptimal on other platforms. Applying full-width bitmasking operations on uint32_t variables on an 8-bit processor would be simply dreadful. So if you're well and truly designing the code for multi-platform use, maybe you'd add #ifdefs for different cores or WORD sizes or whatever. Which would be a little easier to maintain than separate implementations altogether, but not by much.
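
Something along these lines, say (purely illustrative; the macro name and fields aren't from any real project):

    #include <stdint.h>

    /* Pick a layout per target: bytes stay bytes on an 8-bit core,
     * but get folded into one 32-bit flags word on a 32-bit core. */
    #if defined(TARGET_8BIT)
    typedef struct {
        uint16_t size;
        uint8_t  mode;
        uint8_t  state;
    } packet_t;
    #else
    typedef struct {
        uint32_t size;
        uint32_t flags;  /* mode in bits 0-7, state in bits 8-15 */
    } packet_t;
    #endif
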
Tim