Author Topic: C struct bitfields - size versus speed  (Read 9649 times)


Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: C struct bitfields - size versus speed
« Reply #25 on: May 02, 2018, 09:48:12 am »
Sharing structs between platforms/compilers (for example to communicate) isn't a good thing to do anyway because of alignment issues and packing strategies. I always tell people not to map structs and plain C types onto byte buffers because it will go wrong at some point.
Look at file formats (ELF, ZIP, etc.), or communication protocols (such as TCP/IP). They all work with records which are essentially mapped C structures. They work quite well across platforms without causing issues. Of course, when these structures were created they were aligned by hand - there's no need for padding (and where it is needed, the padding is added explicitly). Byte/bit order is also fixed by design. The designers were clever about this. But so can you be.
And still there are plenty of ways this can go wrong, giving very subtle errors which you won't catch doing unit testing on a different platform. Being clever isn't always being smart.
I've seen this problem pop up a couple of times in a large software project I (and some others) inherited. This took a couple of days to hunt down so there goes your productivity.
Besides that, if you pack a struct the compiler may start to shuffle bytes around anyway, because it is likely you provide a byte (void) pointer to the data and the compiler can no longer know how the data is aligned in memory. Remember that many platforms (for example ARM) cannot do unaligned 16- or 32-bit reads/writes, and this may not even lead to an exception. And then there is big-endian versus little-endian conversion. All in all, it is better to create a program with defined behaviour using byte shifts to read/write data into a byte array. It won't be slower anyway, because a lot of protocols are big-endian and most processors used nowadays are little-endian.
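A minimal sketch of the shift-based approach described above (the function names are illustrative, not from any particular codebase). Because only byte accesses touch the buffer, neither host endianness nor alignment can affect the wire format:

```c
#include <stdint.h>

/* Write a 32-bit value into a byte buffer in big-endian (network) order.
   No struct overlay, so alignment and host endianness cannot matter. */
static void put_u32_be(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)(v);
}

/* Read it back the same way; the result is identical on any platform. */
static uint32_t get_u32_be(const uint8_t *buf)
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
         | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}
```

Any halfway-decent compiler turns these shifts into a handful of instructions (or a single byte-swapped load/store where the ISA has one).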
« Last Edit: May 02, 2018, 09:54:05 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: C struct bitfields - size versus speed
« Reply #26 on: May 02, 2018, 02:28:46 pm »
And still there are plenty of ways this can go wrong, giving very subtle errors which you won't catch doing unit testing on a different platform. Being clever isn't always being smart.

This is typical of any programming. Not all bugs are easy to catch. Therefore, you need to think - think when designing software and think when designing your tests. And if you find a bug, start by blaming yourself, so that next time you make fewer bugs and find them faster. If you start blaming these things on alignment, endianness, or other similar factors, you not only miss learning from your mistakes, but you will cast lots of useful tools out of your toolbox because they were "unsafe" in your past experience.

... a lot of protocols are big endian and most processors used nowadays are little endian.

If you get data with different endianness, you have to deal with it anyway. But writing piles of unnecessary code is much more prone to bugs than simply calling stub functions a la htons().

Alignment is never a problem if your structures don't have any internal misalignments - you align the whole structure and everything gets aligned automatically.
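For illustration, a structure laid out largest-field-first (the message layout here is hypothetical) has no internal padding on typical ABIs, so aligning the whole structure aligns every member:

```c
#include <stdint.h>
#include <stddef.h>

/* Fields ordered largest-first, so no internal padding is needed:
   on common ABIs the offsets are 0, 4, 6, 7 and sizeof is exactly 8. */
struct msg {
    uint32_t id;     /* offset 0 */
    uint16_t len;    /* offset 4 */
    uint8_t  type;   /* offset 6 */
    uint8_t  flags;  /* offset 7 */
};
```

Reordering the members (say, putting the uint8_t first) would force the compiler to insert padding, and that padding is exactly where cross-platform surprises hide.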
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: C struct bitfields - size versus speed
« Reply #27 on: May 02, 2018, 02:42:01 pm »
And still there are plenty of ways this can go wrong, giving very subtle errors which you won't catch doing unit testing on a different platform. Being clever isn't always being smart.
This is typical of any programming. Not all bugs are easy to catch. Therefore, you need to think - think when designing software and think when designing your tests. And if you find a bug, start by blaming yourself, so that next time you make fewer bugs and find them faster. If you start blaming these things on alignment, endianness, or other similar factors, you not only miss learning from your mistakes, but you will cast lots of useful tools out of your toolbox because they were "unsafe" in your past experience.
Actually my past experiences have taught me not to do esoteric stuff in C like relying on how structs are mapped in memory. I write my code in a way even a complete idiot can maintain successfully when I have moved on to a more interesting project. There are enough ways left in C to shoot yourself in the foot, so don't make things more complicated than they have to be. BTW you seem to have missed that I inherited the project with the alignment bug, which wasn't caught by unit testing on a PC.
Quote
... a lot of protocols are big endian and most processors used nowadays are little endian.
If you get data with different endianness, you have to deal with it anyway. But writing piles of unnecessary code is much more prone to bugs than simply calling stub functions a la htons().

Alignment is never a problem if your structures don't have any internal misalignments - you align the whole structure and everything gets aligned automatically.
That is a very big IF. What if your input buffer gets misaligned because someone changes a pointer somewhere or inserts an extra field? Besides that, you can create a simple wrapper like htons/htonl (which is platform independent as a bonus) yourself to fill a buffer. Good, platform-independent protocol implementations work that way, because then correctness doesn't depend on the capabilities of the person maintaining that code and/or compiler-dependent settings/pragmas/attributes.
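A sketch of filling a buffer with the standard htons()/htonl() helpers (available on POSIX systems; the function and field layout here are invented for illustration). memcpy through a byte buffer keeps both alignment and host endianness out of the wire format:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htons()/htonl() on POSIX systems */

/* Fill a 6-byte wire header: a 16-bit type followed by a 32-bit length,
   both big-endian. memcpy is safe regardless of buf's alignment. */
static void write_header(uint8_t *buf, uint16_t type, uint32_t length)
{
    uint16_t t = htons(type);
    uint32_t l = htonl(length);
    memcpy(buf,     &t, sizeof t);
    memcpy(buf + 2, &l, sizeof l);
}
```

The memcpy calls compile down to plain stores on targets that allow unaligned access, and to safe byte stores elsewhere; either way the behaviour is defined.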
« Last Edit: May 02, 2018, 03:35:14 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6258
  • Country: fi
    • My home page and email address
Re: C struct bitfields - size versus speed
« Reply #28 on: May 02, 2018, 03:21:58 pm »
For what it is worth, I've found data accessor functions to be much more robust and maintainable than struct bitfields, in the long term.

The main reason is that the order in which bitfields are packed is completely up to the compiler. Compilers can even have different conventions on different architectures - and there is nothing barring them from changing it from one version to the next.  Plus, people change both targets and compilers quite often, too.

It is also easier to thoroughly test the accessor functions.  For example, you can test bitstream operations by comparing to a slow, known-good version, that extracts the values one bit at a time; or by using a test vector and comparing to known results.  For structs with bitfields, you basically need a comprehensive set of structs (saved in binary), then verify the struct correctly maps to the binary data by comparing the test structs to known values.  Very few programmers bother.
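A sketch of that testing idea, assuming LSB-first bit numbering (the function names are illustrative): a fast accessor plus a slow, obviously-correct reference that extracts one bit at a time, which the fast version can be verified against:

```c
#include <stdint.h>

/* Fast accessor: extract 'count' bits starting at bit 'pos' (LSB-first).
   Requires count < 32. */
static uint32_t get_bits(uint32_t word, unsigned pos, unsigned count)
{
    return (word >> pos) & ((1u << count) - 1u);
}

/* Slow, obviously-correct reference: assemble the field one bit at a time.
   Only used in tests, so speed doesn't matter. */
static uint32_t get_bits_ref(uint32_t word, unsigned pos, unsigned count)
{
    uint32_t v = 0;
    for (unsigned i = 0; i < count; i++)
        v |= ((word >> (pos + i)) & 1u) << i;
    return v;
}
```

Exhaustively comparing the two over a range of positions and widths gives far better coverage than eyeballing a handful of struct dumps.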

As to speed, you're going to have a hard time measuring the difference in timing between structs with bitfields and accessor functions (unless you write horribly stupid accessor functions that e.g. use a loop to extract individual bits from the same source word/byte).  This is because memory access, even on embedded systems, is slower than the few bit shift and mask operations needed to extract/pack a field; we're talking about less than a dozen clock cycles, which pipeline extremely well on superscalar architectures.  If you find a case where you can measure the difference, I bet there is an even faster approach (typically by avoiding accessing the packed fields that often, and using an unpacked, fast structure for the repeated accesses instead). In marginal cases, like on a microcontroller, you can optimize the accessor functions just for that hardware.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: C struct bitfields - size versus speed
« Reply #29 on: May 02, 2018, 03:59:41 pm »
... you seem to have missed that I inherited the project with the alignment bug.

A bug is always a consequence of a programmer's mistake. Someone years ago hired an incompetent programmer and he made a bug. You wouldn't say that the bad hiring decision is somehow related to the alignment, would you?

That is a very big IF. What if your input buffer gets misaligned because someone changes a pointer somewhere or inserts an extra field?

This is a very good question. If you use a structure in communications then your structure may change in the future. Therefore, you design it in such a way that the communicating parties have the means to figure out which version of the structure they get. At the very least you include the length field at the end, and you make sure your structure is easy to align by placing padding or reserved fields at the end. Such measures ensure that any number of programmers can use your communication protocol and it works well across versions and platforms.

If one of the thousands of programmers who use your protocol won't take time to understand the mechanism and misaligns something or otherwise fails to follow the protocol, this is clearly a bug. You don't want to make 999 good programmers write extra parsing code just because one programmer is incompetent, right?

I think this thread has deviated from the original question the OP posted. The OP didn't ask about communications. He asked whether it's a good idea to use bitfields (combining multiple individual variables into a single integer) or whether it's better to use separate variables.

I personally do not use bitfields; I prefer bitwise logic and masks. Such an approach appears more flexible to me, but, under the hood, it's the same as bitfields.

If variables are 1-bit long (TRUE or FALSE) then it's a good idea to unite them into a single number. Many CPUs have some sort of instructions to access single bits. More importantly, you can access several variables together, such as testing whether any one of flag_a, flag_d or flag_e is set in a single operation. If these were different variables, it wouldn't be so easy. Similarly, you can set multiple flags by OR'ing, clear them by AND'ing, etc. Also, you can pass the whole set of flags to a function as a single parameter.
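The flag-word idea can be sketched like this (the flag names echo the post; the helper function is invented for illustration). One AND tests several booleans at once, where separate variables would need one test each:

```c
#include <stdint.h>

/* Each boolean gets one bit of a single word. */
enum {
    FLAG_A = 1u << 0,
    FLAG_B = 1u << 1,
    FLAG_D = 1u << 3,
    FLAG_E = 1u << 4
};

/* Test whether any flag in 'mask' is set - a single AND plus a compare,
   no matter how many flags the mask covers. */
static int any_of(uint8_t flags, uint8_t mask)
{
    return (flags & mask) != 0;
}
```

Setting several flags is one OR (`flags |= FLAG_B | FLAG_D`), clearing them is one AND with the complement, and the whole set travels as one function parameter.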

For 2-3-bit-long variables, access to a bitfield becomes inefficient. To set the bitfield, you need to read the variable, "and" it with the mask, "xor" it with the value, then write back. Some CPUs can combine the "and" and "xor" in a single instruction (and even do a shift), but there's still a need to read, modify, and write. A single variable requires only a write, which definitely beats the bitfield.
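The read-modify-write sequence described above can be sketched as follows (the register and field names are made up for illustration). This is essentially what the compiler emits for a bitfield assignment:

```c
#include <stdint.h>

#define MODE_POS  2
#define MODE_MASK (0x7u << MODE_POS)   /* 3-bit field occupying bits 2..4 */

/* Read-modify-write: AND clears the old field, OR merges the new value.
   Compare with a plain variable, which would need only the final store. */
static uint8_t set_mode(uint8_t reg, uint8_t mode)
{
    return (uint8_t)((reg & ~MODE_MASK) | (((unsigned)mode << MODE_POS) & MODE_MASK));
}
```

Three operations (mask, merge, store) versus one store for a standalone variable - that is the cost being weighed against the memory saved.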

Of course, an 8-bit bitfield is as good as a separate variable.

« Last Edit: May 02, 2018, 06:14:09 pm by NorthGuy »
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3642
  • Country: us
Re: C struct bitfields - size versus speed
« Reply #30 on: May 02, 2018, 06:00:52 pm »
For 2-3-bit-long variables, access to a bitfield becomes inefficient. To set the bitfield, you need to read the variable, "and" it with the mask, "xor" it with the value, then write back. Some CPUs can combine the "and" and "xor" in a single instruction (and even do a shift), but there's still a need to read, modify, and write. A single variable requires only a write, which definitely beats the bitfield.

Of course, an 8-bit bitfield is as good as a separate variable.
When you consider the whole hardware architecture, single-byte writes also require some form of read-modify-write on anything beyond very simple 8-bit micros. Writing a solitary byte that misses in the cache requires the entire cache block to be loaded, then modified, and finally written back. In many cases uncachable writes smaller than a full word are not allowed.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: C struct bitfields - size versus speed
« Reply #31 on: May 02, 2018, 06:22:58 pm »
Of course, an 8-bit bitfield is as good as a separate variable.
When you consider the whole hardware architecture, single-byte writes also require some form of read-modify-write on anything beyond very simple 8-bit micros. Writing a solitary byte that misses in the cache requires the entire cache block to be loaded, then modified, and finally written back. In many cases uncachable writes smaller than a full word are not allowed.

I don't understand how this makes an aligned 8-bit bitfield different from a standalone char variable.

Besides, there are CPUs without cache (such as PIC16), or CPUs with cache lines much larger than 32-bit (such as modern Intel).
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3642
  • Country: us
Re: C struct bitfields - size versus speed
« Reply #32 on: May 02, 2018, 06:26:20 pm »
The point is more that a single 8-bit datum (a char member) may not have any advantage over a 7 or 9 bit field in a struct, as RMW would be required in both cases. The only difference is that the char may be written with a simple instruction, but the hardware has to take care of it in both cases.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: C struct bitfields - size versus speed
« Reply #33 on: May 02, 2018, 07:25:49 pm »
The point is more that a single 8-bit datum (a char member) may not have any advantage over a 7 or 9 bit field in a struct, as RMW would be required in both cases. The only difference is that the char may be written with a simple instruction, but the hardware has to take care of it in both cases.

Yes. Big CPUs such as Intel's do their own optimizations as they execute instructions, which works better than what a C compiler can do. It's very hard to predict how fast the code may run, or even to measure the execution time accurately.
 

Offline abyrvalg

  • Frequent Contributor
  • **
  • Posts: 824
  • Country: es
Re: C struct bitfields - size versus speed
« Reply #34 on: May 02, 2018, 11:09:29 pm »
Big CPUs such as Intel's do their own optimizations as they execute instructions, which works better than a C compiler.
Compilers have a huge potential advantage over CPUs for optimizations: they have more time and more information about the code. It's just the x86 world's compatibility tradeoff limiting them: you never know at compile time which CPU microarchitecture will be used at runtime, so it's the CPU's job to do the final optimizations at runtime (repeating the same things on every run, having less time, seeing smaller pieces of code - what a pity).
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: C struct bitfields - size versus speed
« Reply #35 on: May 06, 2018, 05:40:12 am »
There are a lot of potential optimizations, bottlenecks, syntactic advantages and portability pitfalls associated with bitfields.
Everything CAN be done using shift and mask operations, but if that's what your compiler does ALL the time, it's not a very good compiler!

8-bit and other "small" architectures tend to have specialized instructions for dealing with single bits.  MANY processors have "some" capabilities WRT *certain* single bits (ie "branch if minus", carry bits, etc.)  So right away you're faced with "produce different code depending on size and position of bitfield" possibilities.  Some ARM Cortex chips have bit-banding for accessing single bits, and "bit field" instructions for larger bit fields (CM3 and up.)  Freescale seems to have added a "bit manipulation engine" to some of their CM0 chips, that extends the bit-banding idea to multiple bit fields and additional operations, but only for the peripheral address space.

Most 32bit chips have barrel shifters, but most 8bit chips don't (which means that they might do a 6bit shift in a loop that takes 6+ instruction times.)  OTOH, they might have the option of fetching only the necessary byte of a longer-than-byte bitfield.  Or have a nybble-swap that's equivalent to a 4-bit rotate.

Setting or comparing bitfields with constants may be easier than variables, because you can shift the constant instead of the variable.  And especially if the constant is all zero bits or all one bits.
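The shifted-constant comparison can be sketched like this (the field names are invented for illustration). The constant is shifted at compile time, so the data only needs a mask at runtime:

```c
#include <stdint.h>

#define BAUD_POS  4
#define BAUD_MASK (0xFu << BAUD_POS)   /* 4-bit field at bits 4..7 */
#define BAUD_9600 3u                   /* hypothetical encoding */

/* Compare the field against a constant by shifting the constant, not the
   data: (BAUD_9600 << BAUD_POS) folds at compile time, so the variable
   side needs only an AND and a compare. */
static int is_9600(uint32_t reg)
{
    return (reg & BAUD_MASK) == (BAUD_9600 << BAUD_POS);
}
```

Testing against an all-zeros or all-ones field value is cheaper still, since the mask alone (or its complement) decides the result.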

 

Online gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: C struct bitfields - size versus speed
« Reply #36 on: May 06, 2018, 08:26:23 am »
Quote
Writing a solitary byte that misses in the cache requires the entire cache block to be loaded, then modified, and finally written back. In many cases uncachable writes smaller than a full word are not allowed.

Btw, "cache" is a good keyword. Memory operand access times with an L1 cache hit vs. an L3 cache miss differ by about two orders of magnitude. So it can make a significant performance difference whether the whole working set fits into L1 or L2 cache, or whether we are frequently facing cache misses and fetches from DRAM. Reducing the size of data structures may help to achieve this goal (I'm thinking e.g. of a huge array of structures), even if it then takes a couple of extra instructions to access the bitfield data. For instance, if an x86 does not need to stall on a DRAM fetch for, say, 100 ns due to an L3 cache miss, it can execute many instructions in that time. It all depends on the memory access patterns of the individual application and on the hardware architecture, of course. For some microcontrollers these considerations may not apply at all. Performance analysis/tuning needs to be done for each use case individually.
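An illustrative pair of structures (the field names are hypothetical; exact sizes depend on the ABI) shows the trade being described: bitfields shrink each array element, so more elements fit in each cache line, at the cost of a few extra instructions per field access:

```c
#include <stdint.h>

/* One record per entry, natural types: 16 bytes on typical ABIs. */
struct wide {
    int32_t x;
    int32_t y;
    int32_t state;     /* only ever needs 3 bits */
    int32_t visited;   /* only ever needs 1 bit  */
};

/* Same data with bitfields for the small members: typically 12 bytes,
   i.e. a 25% smaller array footprint and proportionally fewer misses. */
struct narrow {
    int32_t  x;
    int32_t  y;
    uint32_t state   : 3;
    uint32_t visited : 1;
};
```

For a huge array iterated front to back, the narrow form can win outright despite the mask-and-shift overhead; for random single-element access the difference shrinks. Only a benchmark on real data settles it.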

 

Online gf

  • Super Contributor
  • ***
  • Posts: 1170
  • Country: de
Re: C struct bitfields - size versus speed
« Reply #37 on: May 06, 2018, 09:04:21 am »
Quote
Big CPUs such as Intel's do their own optimizations as they execute instructions

Most of these CPU-internal "optimizations" are, however, related to instruction pipelining, i.e. the CPU tries to avoid pipeline stalls by doing out-of-order execution, branch prediction, speculative execution, ...

[ Well, we would not need this if Moore's law also applied to CPU clock frequency and DRAM latency. However, we already had 2 GHz CPUs about ten years ago, yet we don't have 500 GHz CPUs today - so CPU makers had to find other tricks to keep increasing performance. ]
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: C struct bitfields - size versus speed
« Reply #38 on: May 06, 2018, 01:41:48 pm »
Quote
Big CPUs such as Intel's do their own optimizations as they execute instructions
Most of these CPU-internal "optimizations" are, however, related to instruction pipelining, i.e. the CPU tries to avoid pipeline stalls by doing out-of-order execution, branch prediction, speculative execution, ...

That's exactly what you need here. It can pre-fetch the data, and it can postpone the data write until after the instruction, so what you have left is only the operation itself. Furthermore, Intel has a number of parallel execution units, so multiple instructions can execute in parallel. Because of all these mechanisms, the execution of a series of instructions may take only one cycle. Moreover, it is possible that inserting an extra instruction in just the right place makes the execution faster compared to the same code without that instruction.

Of course, this makes the execution timing totally unpredictable - if things go wrong, the CPU may stall waiting for memory for an enormous amount of time. Not what you want in an embedded system.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6258
  • Country: fi
    • My home page and email address
Re: C struct bitfields - size versus speed
« Reply #39 on: May 06, 2018, 06:57:19 pm »
Btw, "cache" is a good keyword. Memory operand access times with an L1 cache hit vs. an L3 cache miss differ by about two orders of magnitude. So it can make a significant performance difference whether the whole working set fits into L1 or L2 cache, or whether we are frequently facing cache misses and fetches from DRAM. Reducing the size of data structures may help to achieve this goal
Yup; another is split/recombine data that may or may not be logically related, but are typically accessed at the same time. The keyword is then "cache locality".

Performance analysis/tuning needs to be done for each use case individually
with actual data.

It is often not hard to microbenchmark specific operations, and for things like microcontrollers this may be enough. On pipelined superscalar machines (with complex caching schemes shared at some level between cores), one must benchmark the overall effect of an approach, because of the overall complexity of the situation. Something that yields fantastic figures in microbenchmarks but overall slows down the entire algorithm when used in practice is not unheard of: bad patterns of cache usage (or just touching a lot of cache lines, changing the caching patterns and how the CPU predicts future accesses) easily do that.  Which is why I personally draw a sharp distinction between microbenchmarking (an operation or an algorithm) and actually benchmarking an approach (implementation tested with actual data). The former is indicative; the latter is a finding.

As an example, I've seen developers get very surprised when they find that optimizing some code for size makes it run faster than with otherwise aggressive optimizations enabled. On architectures (like Intel/AMD x86 and x86-64) where the hardware does a lot of speculative execution, some types of conditional expressions are cheap while others are expensive, and the difference may depend entirely on exactly how your compiler behaves.

(None of the compilers I've used do vectorization particularly well for C, either.  ICC is probably the best in this regard, but one definitely cannot rely on it, especially because of how it treats non-Intel processors at runtime -- unless you know you'll only run the code on Intel processors, of course.  This is a bit off topic for this thread, because most code that uses vectorization uses it for floating-point components; for binary operations, vectorization is only useful if you perform the same operation with optionally different operands to many consecutive 8, 16, 32, or 64-bit sized aligned units at the same time.)
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: C struct bitfields - size versus speed
« Reply #40 on: May 06, 2018, 11:59:36 pm »
BTW, the biggest "oops" with bitfields that I've seen recently wasn't related to bit ordering or padding, but to the "atomic" functions provided by the hardware.  You might think that a line like:
Code: [Select]
SERCOM5->USART.INTENCLR.bit.ERROR = 1;
would clear the ERROR interrupt enable for SERCOM5, right?  But NO!  The INTENCLR register reads ones in bit positions where the interrupts are currently enabled, so when the compiled code reads the register, ORs in the ERROR bit, and writes it back out (unoptimized bitfield code, and a CM0 that doesn't really have any optimization possibilities), it ends up disabling ALL the interrupts that were enabled!
This statement works correctly, and probably generates less code as well...
Code: [Select]
SERCOM5->USART.INTENCLR.reg = SERCOM_USART_INTENCLR_ERROR;
https://community.atmel.com/forum/problem-clearingsetting-bit-interrupt-flag-register


The debate over bit-field non-portability is ... amusing ... in light of ARM's CMSIS essentially standardizing on the use of "bitfields overlaid on hardware registers" (not even a nod to packing or padding).   I guess theoretically these are definitions closely associated with a particular hardware implementation, provided with a particular compiler, so it's not so important.  OTOH, ARMv7-M (CM3 and higher) theoretically has user-selectable endianness (but only at RESET time), and I've never seen a CMSIS file that provides big-endian definitions.  (OTTH, I don't think I've seen an ARMv7 that implements the endianness selection, nor one that's big-endian.)

 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11253
  • Country: us
    • Personal site
Re: C struct bitfields - size versus speed
« Reply #41 on: May 07, 2018, 12:17:38 am »
At this point in time, I would say that non-portability of bit fields is a hardware problem. There is a lot of code that nobody wants to rewrite, so making hardware that does not naturally support those assumptions is a sure way to get DOA hardware. It is like designing a new CPU to be big-endian only. Good luck with that.

Same goes for compilers. There are plenty of very good choices, so if you make a new compiler and it breaks those assumptions, you will have a hard time marketing it.

I use packed structs with bitfields in all my software and so far I have never run into a problem. And I get the most efficient code for the situation, since I clearly communicate to the compiler what I want to do.
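A sketch of that kind of packed bitfield struct (the protocol header here is hypothetical; the byte layout assumes GCC/Clang-style LSB-first bitfield allocation on a little-endian target, which is exactly the de-facto standard being described):

```c
#include <stdint.h>

/* Hypothetical wire header: two 4-bit fields packed into one byte,
   then a 16-bit length and an 8-bit flags byte, with no padding. */
typedef struct __attribute__((packed)) {
    uint8_t  version : 4;   /* low nibble of byte 0 (LSB-first allocation) */
    uint8_t  type    : 4;   /* high nibble of byte 0 */
    uint16_t length;        /* bytes 1-2, unaligned but packed */
    uint8_t  flags;         /* byte 3 */
} hdr_t;
```

With this, the compiler emits exactly the mask/shift code the layout requires, and `sizeof(hdr_t)` matches the wire size - as long as compiler and target keep honoring those de-facto conventions.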
Alex
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4034
  • Country: nz
Re: C struct bitfields - size versus speed
« Reply #42 on: May 07, 2018, 02:07:04 am »
BTW, the biggest "oops" with bitfields that I've seen recently wasn't related to bit ordering or padding, but to the "atomic" functions provided by the hardware.  You might think that a line like:
Code: [Select]
SERCOM5->USART.INTENCLR.bit.ERROR = 1;
would clear the ERROR interrupt enable for SERCOM5, right?  But NO!  The INTENCLR register reads ones in bit positions where the interrupts are currently enabled, so when the compiled code reads the register, ORs in the ERROR bit, and writes it back out (unoptimized bitfield code, and a CM0 that doesn't really have any optimization possibilities), it ends up disabling ALL the interrupts that were enabled!
This statement works correctly, and probably generates less code as well...
Code: [Select]
SERCOM5->USART.INTENCLR.reg = SERCOM_USART_INTENCLR_ERROR;
https://community.atmel.com/forum/problem-clearingsetting-bit-interrupt-flag-register

That's really nothing to do with C compilers being stupid or bitfield code not being compiled properly. It's purely down to the programmer treating something like RAM that doesn't behave like RAM. Or the hardware designer providing a badly designed interface.

Can this register only disable interrupts, with another register for enabling them? If that's the case then it's nothing like memory and shouldn't be treated as if it were. But the compiler can't know that.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: C struct bitfields - size versus speed
« Reply #43 on: May 07, 2018, 04:20:03 am »
Or the hardware designer providing a badly designed interface.

If marketing says you must build your MCUs with an ARM CPU which doesn't have suitable instructions for the task, what can the poor designer do?
 

Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Re: C struct bitfields - size versus speed
« Reply #44 on: May 07, 2018, 04:36:56 am »
The debate over bit-field non-portability is ... amusing ...
If your code is always compiled with the same compiler and always for ARM Cortex (e.g., memory mapped registers), then bitfields are fair game from a portability perspective. But if hardware targets and toolchains vary (e.g., for network formats), then staying with standard types and bit masks might be a better choice.

ARM's SVDConv.exe can generate mask and shift values, but most of the .svd files I've seen use bitfields.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: C struct bitfields - size versus speed
« Reply #45 on: May 07, 2018, 05:18:40 am »
Quote
[experience with INTENCLR register bitfields]
That's really nothing to do with C compilers being stupid or bitfield code not being compiled properly.
Agreed; I didn't say it did.  Bitfield endianness issues are not compiler bugs or bad code, either.  They're both cases where using bitfields has unexpectedly "bitten" people, though, where more traditional manipulation would have been more obvious (perhaps.)



Quote
I use packed structs with bitfields for all my software and so far I have never ran into a problem.
Ditto.  Well, except for fragmentOffset in IP, but I may have been blinded by the 16-bit CPUs of the day (68k vs. 80186).
Not so much a "hardware problem" as a SOLVED problem.  If your compiler can't make it come out right, it's time for a different compiler.
(ps: Intel added bi-endian support to their x86 compiler some time ago (because fetch and bswap is only a tiny bit slower than just a fetch.)  It's pretty cool, and I'm surprised I haven't seen it spring up elsewhere (keil, llvm, gcc.))


Quote
ARM's SVDConv.exe can generate mask and shift values, but most of the .svd files I've seen use bitfields.
Ah.  That explains why some (many?) of the .h files have both...  I had forgotten that they're program-generated.

Code: [Select]
typedef union {
  struct {
    uint32_t SWRST:1;          /*!< bit:      0  Software Reset */
    uint32_t ENABLE:1;         /*!< bit:      1  Enable */

 :

  } bit;                       /*!< Structure used for bit  access */
  uint32_t reg;                /*!< Type      used for register access */
} SERCOM_I2CS_CTRLA_Type;

 :

#define SERCOM_I2CS_CTRLA_SWRST_Pos 0            /**< \brief (SERCOM_I2CS_CTRLA) Software Reset */
#define SERCOM_I2CS_CTRLA_SWRST     (0x1ul << SERCOM_I2CS_CTRLA_SWRST_Pos)
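For illustration, a trimmed, self-contained copy of such a union (reduced to the two fields shown above, with hypothetical shortened names) demonstrates that the `.bit` view and the mask/#define view address the same storage - on an LSB-first, little-endian compiler at least, which is what these generated headers assume:

```c
#include <stdint.h>

/* Trimmed stand-in for the generated SERCOM_I2CS_CTRLA_Type union,
   so both access styles can be exercised without real hardware. */
typedef union {
    struct {
        uint32_t SWRST  : 1;   /* bit 0, assuming LSB-first allocation */
        uint32_t ENABLE : 1;   /* bit 1 */
        uint32_t        : 30;  /* remaining bits unnamed */
    } bit;
    uint32_t reg;
} CTRLA_Type;

#define CTRLA_SWRST_Pos 0
#define CTRLA_SWRST     (0x1ul << CTRLA_SWRST_Pos)
```

Writing `x.bit.SWRST = 1;` and `x.reg |= CTRLA_SWRST;` set the same bit - but, as the INTENCLR example earlier in the thread shows, the `.bit` form always compiles to a read-modify-write, which matters on registers with side effects.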


 

