Hello, I'm currently trying to store 2 values of 2 bytes each in the EEPROM of my PIC16F1827. The EEPROM stores single bytes. How would I divide my value into 2 bytes to write them? And how do I then read them back into one single value again?
uint16_t original_data;
uint8_t b1,b2;
b1 = (uint8_t) (original_data & 0xff);
b2 = (uint8_t) ((original_data >> 8) & 0xff);
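To put the value back together, just reverse the shift; the casts guard against surprises from integer promotion on small targets. The eeprom_write()/eeprom_read() calls below are only placeholders for whatever EEPROM routines your compiler provides (XC8 ships helpers along these lines, but check your own toolchain's docs):
original_data = (uint16_t)b1 | ((uint16_t)b2 << 8);   // b1 = low byte, b2 = high byte

// Sketch of the EEPROM round trip (addresses 0 and 1 picked arbitrarily):
// eeprom_write(0, b1);
// eeprom_write(1, b2);
// ...
// b1 = eeprom_read(0);
// b2 = eeprom_read(1);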
I don't get some of those answers. Why mess with pointers and structures? Uh, and memcpy? Seriously?
I (very strongly) agree about the memcpy comment, but a pointer to an array has its advantages.
is simple, quick and easy.
could you elaborate on the vast inferiority (apart from "safer" and more readable) and focus on the exact use case?
... and incorrect. Type-punning solutions are vastly inferior to arithmetic solutions in more ways than one. But if you are dead set on using memory reinterpretation, at least remember not to use `char`. It is either `unsigned char` or `uint8_t`.
...yes, uintX_t is absolutely superior, but having your own typedef header file, checked and adjusted for each project, is probably the only universal way to ensure portability
I can tell you from personal experience that that is not an option in many automotive projects.
Uuuuh, no! That's actually a pretty good way to shoot yourself in the foot when an algorithm assumes one size and you have helpfully typedeffed it to something else in some obscure header.
stdint.h is available pretty much everywhere with a reasonably recent C compiler (reasonably recent - released in the last 15 years or so). It is part of the now old C99 standard, not even the more recent C11 one.
could you elaborate on the vast inferiority (apart from "safer" and more readable) and focus on the exact use case?
I can tell you from personal experience that that is not an option in many automotive projects.
And in many more applications the standard library is unwelcome.
His biggest problem might be running out of resources (which I have noticed to be an issue in some hobby projects), due to a lot of unoptimized code reused from online examples or to unnecessary importing of libraries, and here casting helps a little bit.
...
uint16_t original_data;
uint8_t b1,b2;
b1 = (uint8_t) (original_data & 0xff);
b2 = (uint8_t) ((original_data >> 8) & 0xff);
In a nutshell: "Always implement your own stdint.h because in some automotive environment unchanged from 1990's, stdint.h may not be available".
What terrible advice.
Regarding types, we can play the target/compiler/C standard game all day, and I could say he might not have uint8_t available... there is not much data shared by the OP to discuss it. Yes, uintX_t is absolutely superior, but having your own typedef header file, checked and adjusted for each project, is probably the only universal way to ensure portability.
...such basic beginner questions make people pop up left and right, showing off some very interesting bubbles they live in
That's the beauty of this forum
uint16_t original;
uint8_t left, right;

// to read:
// *read the 2 bytes back from the EEPROM into left (high byte) and right (low byte)*
original = (uint16_t)right | ((uint16_t)left << 8);   // casts avoid overflowing a 16-bit signed int during promotion

// to write:
right = (uint8_t)(original & 0xff);   // low byte
left  = (uint8_t)(original >> 8);     // high byte
// *write the 2 bytes to the EEPROM*
uint16_t original_data;
uint8_t b1,b2;
b1 = (uint8_t) (original_data & 0xff);
b2 = (uint8_t) ((original_data >> 8) & 0xff);
Is the mask with FF necessary?
Isn't the typecast to uint8_t all that is needed?
In terms of making the compiler happy and getting the semantics you want - probably not necessary.
In terms of clearly indicating your intentions to the next programmer to read the code - priceless.
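For what it's worth, here is a sketch of the one case where the mask (rather than the cast) actually does the work: when the destination is wider than 8 bits.
uint16_t original_data = 0xABCD;
uint8_t  b1      = (uint8_t)(original_data & 0xff);  // 0xCD with or without the mask
uint32_t lo      = original_data & 0xffu;            // 0x00CD - here the mask does the truncating
uint32_t no_mask = original_data;                    // 0xABCD - nothing truncates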
The alternative solution, based on arithmetic, may well be valid, but may just as well contain a bug.
Regarding alignment: most proper CPUs raise an alignment-error exception (for example, a bus fault), making it easy to see what happened, why, and where, and to fix it. A mistyped bit shift somewhere, by contrast, is completely hidden and only causes some particular parameter or field to act strangely. Almost impossible to debug; you may not even know you have a problem.
The problem with the arithmetic version (whether it is based on / % * or on << >> | & doesn't matter) is that it is error-prone manual work.
Agree with you two here. With that reasoning, a simple incrementation is also error-prone and should be avoided. Indeed, "n += 1;"... but you could mess it up and write "n += 2;" or "n -= 1;" instead.
Why not write N = N + 1?
The alternative solution, based on arithmetic, may well be valid, but may just as well contain a bug.
This is a very good point. Personally, I worked with hand-maintained bit shift fests until one day I hunted for a bug for hours; a bug which was caused by mistyping < instead of <<. Then I started thinking, there has to be a better way.
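One simple form of "a better way" is to funnel every conversion through a single pair of helpers, so the shifts are written (and reviewed) exactly once; a sketch only, the names are made up:
#include <stdint.h>

// All byte-order decisions live here and nowhere else.
static inline void u16_to_bytes(uint16_t v, uint8_t *lo, uint8_t *hi)
{
    *lo = (uint8_t)(v & 0xff);
    *hi = (uint8_t)(v >> 8);
}

static inline uint16_t bytes_to_u16(uint8_t lo, uint8_t hi)
{
    return (uint16_t)((uint16_t)lo | ((uint16_t)hi << 8));
}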
It's much better to write things in the way that lets you have the smallest amount of hand-written code. For example, if you're tempted to manually copy&paste&edit 20 copies of something -- totally evil! -- because you can't make that into a function or it's beyond the capabilities of macro processing (or your language doesn't have macros) *don't* *do* *that*
It's much better to write things in the way that lets you have the smallest amount of hand-written code. For example, if you're tempted to manually copy&paste&edit 20 copies of something -- totally evil! -- because you can't make that into a function or it's beyond the capabilities of macro processing (or your language doesn't have macros) *don't* *do* *that*
This is EXACTLY what I was saying above. Apparently others than brucehoult and golden_labels did not get it.
No, I strongly disagree with the others. If you have a nicely designed struct with 57 variables holding your state and you are constructing each of the 57 variables by writing 57 LoC each with various combinations of casting, bit shifting and masking, you are really doing it wrong. Yes, programming requires careful work, but repeated manual copy-paste-like work (it doesn't matter if you avoid the actual copy-pasting and retype it manually) should still be avoided. The human brain is good at creative work, but makes a lot of mistakes (say, several percent) when repeating mechanical "simple" work. This is why automation exists.
#include <stdint.h>
#include <stddef.h>
// Extract a SIZE-bit field (max size 32 bits) starting at bit OFFSET of the buffer at BASE into FIELD;
// bits are counted from each byte's LSB, bytes in increasing address order.
#define EXTRACT_FIELD(FIELD, BASE, OFFSET, SIZE) \
do { \
size_t byteOff = (OFFSET)/8, bitOff = (OFFSET)%8; \
char *CHAR_BASE = (char*)(BASE); \
uint32_t val = CHAR_BASE[byteOff] >> bitOff; \
if (bitOff+SIZE > 8) val |= (uint32_t)(CHAR_BASE[byteOff+1]) << ( 8-bitOff); \
if (bitOff+SIZE > 16) val |= (uint32_t)(CHAR_BASE[byteOff+2]) << (16-bitOff); \
if (bitOff+SIZE > 24) val |= (uint32_t)(CHAR_BASE[byteOff+3]) << (24-bitOff); \
if (bitOff+SIZE > 32) val |= (uint32_t)(CHAR_BASE[byteOff+4]) << (32-bitOff); \
FIELD = val & ((1u<<(SIZE))-1); \
} while (0)
int foo(char *p){
int r;
EXTRACT_FIELD(r, p, 40, 8);
return r;
}
int bar(char *p){
int r;
EXTRACT_FIELD(r, p, 10, 8);
return r;
}
00000000 <foo>:
0: 00554503 lbu a0,5(a0)
4: 8082 ret
00000006 <bar>:
6: 00154783 lbu a5,1(a0)
a: 8789 srai a5,a5,0x2
c: 00254503 lbu a0,2(a0)
10: 051a slli a0,a0,0x6
12: 8d5d or a0,a0,a5
14: 0ff57513 andi a0,a0,255
18: 8082 ret
0000000000000000 ltmp0:
0: 00 14 40 39 ldrb w0, [x0, #5]
4: c0 03 5f d6 ret
0000000000000008 _bar:
8: 08 04 c0 39 ldrsb w8, [x0, #1]
c: 09 08 40 39 ldrb w9, [x0, #2]
10: 29 65 1a 53 lsl w9, w9, #6
14: 28 09 48 2a orr w8, w9, w8, lsr #2
18: 00 1d 00 12 and w0, w8, #0xff
1c: c0 03 5f d6 ret
What I don't like in this recurring discussion is the intellectual dishonesty of double standards.
If you have a nicely designed struct with 57 variables holding your state and you are constructing each of the 57 variables by writing 57 LoC each
Well, actually, the problem we really get is when edge cases are presumed to be the deciding factor. AFAIA, no-one was suggesting using shifts or rotates or whatever to achieve the 3249 lines you suggest would happen.
...
Don't forget - the OP asked about a single instance of two-byte data in external storage,
In this situation I am less concerned about mistyping code, and more about having to take care of data types and their ranges. Were it not for the fact that most architectures use two's complement and compilers use the simplest approach to implementing arithmetic, accidentally masking arithmetic errors, a huge portion of C code would explode. :)
but I was completely clear in my earlier post, where I started by stating that the ad-hoc arithmetical approach is best for the OP's "one 16-bit variable" case, and also that it only breaks down when it comes to 100 LoC constructed the same way, showing that the approach is not generic or scalable;
No comments on the wonders (or evils) of modern C compiler optimisation?
I don't get some of those answers. Why mess with pointers and structures? Uh, and memcpy? Seriously?
Assuming this is C and your data are really 16 bit, this is all you need to do:
uint16_t original_data;
uint8_t b[2];
memcpy( b, &original_data, sizeof( b ) );
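And the same thing in reverse. Note that with memcpy the byte order in b[] is whatever the target's native endianness happens to be, whereas the shift version pins it down explicitly; a sketch, assuming <stdint.h> and <string.h> are included:
// Copy the two bytes back into the uint16_t's object representation.
memcpy( &original_data, b, sizeof( original_data ) );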
No comments on the wonders (or evils) of modern C compiler optimisation?
Is it relevant?
Let's say we take a look and don't like what the compiler spits out. Are we going to rewrite the compiler to our preference?
Perhaps the output is pretty damn good. Is our C code going to produce the same output with a different compiler?
Will we refuse to use compiler A because of the not awfully good optimisation?
Or, horror of horrors, would we change our C source to persuade the compiler to do a certain thing?
If we did that, why the hell are we writing in C and not assembler?
Sometimes optimisation is important, but until then it doesn't really matter what comes out so long as it works. And didn't a certain D Knuth note that premature optimisation is the root of all evil...
I don't get some of those answers. Why mess with pointers and structures? Uh, and memcpy? Seriously?
Assuming this is C and your data are really 16 bit, this is all you need to do:
Yes. Seriously.
1. Because memcpy is simpler than your shift suggestion:
uint16_t original_data;
uint8_t b[2];
memcpy( b, &original_data, sizeof( b ) );
It is like building cars with wheels that fall off. You know "It doesn't go far, but at least you always know what the problem is!"
Is it relevant?
Yes.
sizeof(char) returns 2 on DSP43.
never trust that "char" is 1 byte
That absolutely violates the C standard -- sizeof(char) is *defined* to be 1.
Not 1 byte. Just 1. Whatever size a char is, is the unit you measure other things in.
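If a piece of code really does depend on 8-bit bytes, that assumption can be pinned down at compile time instead of trusted; a small sketch using C11's _Static_assert:
#include <limits.h>

_Static_assert(sizeof(char) == 1, "true by definition on any conforming compiler");
_Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");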
sizeof(char) returns 2 on DSP43.
never trust that "char" is 1 byte
That absolutely violates the C standard -- sizeof(char) is *defined* to be 1.
Not 1 byte. Just 1. Whatever size a char is, is the unit you measure other things in.
...
Returns the size, in bytes, of the object representation...
...
...
1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
...
Note: this allows the extreme case in which bytes are sized 64 bits, all types (including char) are 64 bits wide, and sizeof returns 1 for every type.
...
byte
addressable unit of data storage large enough to hold any member of the basic character set of the execution environment
NOTE 1 It is possible to express the address of each individual byte of an object uniquely.
NOTE 2 A byte is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
2. In some situations, when strict aliasing rules are in effect, memcpy is the only safe way to transfer a multi-byte POD structure and preserve its layout.
Not only memmove (or the old memcpy). Any access through a char* or its (un)signed variants is always valid, as those are an exception to the aliasing rules.(1) And I doubt that exception is likely to disappear, as the language's internal consistency depends on it. Otherwise all the guarantees WRT byte representation would become meaningless.
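A minimal illustration of that char-access exception (which byte you get depends, of course, on the target's endianness):
#include <stdint.h>

uint8_t first_byte_of(const uint16_t *v)
{
    // Reading any object through unsigned char* is exempt from the strict-aliasing rules.
    const unsigned char *p = (const unsigned char *)v;
    return p[0];   // 0x34 for *v == 0x1234 on a little-endian target, 0x12 on a big-endian one
}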
sizeof(char) returns 2 on DSP43.
I believe everyone here assumes the compiler works properly, as not making that assumption adds a whole new level of complexity that can’t even be dealt with without specifying the exact situation and compiler.
never trust that "char" is 1 byte
I believe everyone here assumes the compiler works properly, as not making that assumption adds a whole new level of complexity that can’t even be dealt with without specifying the exact situation and compiler.
C can be frustrating; this is a typical example.
specifying the exact situation and compiler
# b
firmware-5.128(ppc64.le/f2)
0) clean
1) configure
2) compile
3) analyze
4) ICE@192.168.1.21
5) misc
name: firmware-5.128-64bit-f2
image: elf
note: engineering version
qualified_host: passed
qualified_toolchain: passed
then I found that sizeof(char) returns 2
And that is a bug. sizeof is defined to return the number of char elements that hold its argument. Therefore sizeof(char) can by definition be 1 and 1 only. If a compiler, claiming to be a C compiler, fails to properly understand valid C code, it’s a bug. Period.
C can be frustrating; this is a typical example.
That is *not* C. It might look very like C, but it's not C.
No compiler is completely perfect and bug-free, but they can still be called "C compilers".
sizeof(char) returns 2 on DSP43.
... and will thoroughly read the manual.