Author Topic: How dead are 4bit MCUs?  (Read 9811 times)


Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14313
  • Country: fr
Re: How dead are 4bit MCUs?
« Reply #75 on: June 21, 2022, 05:57:08 pm »
I haven't thought through every aspect of complying with the standard regarding exact-width types, but I'm still "almost" sure that not being able to access memory as 8-bit chunks while using the (u)int8_t types would break something. And I'm of course not talking about (u)int_least8_t. I haven't taken the time to find a good illustrative example, though.

Now, even if that were possible without breaking any aspect of the standard, it would imply emitting convoluted assembly in a lot of cases, which IMO makes it a bit impractical.

One of the questions that pops up (among a bunch) is regarding the sizeof operator.
sizeof(char) is 1 by definition, if I'm not mistaken. 'char' is thus the smallest unit of memory for sizeof. In cases for which (u)int8_t is smaller than char (let's assume a 9-bit char), what is sizeof((u)int8_t) ?
What is sizeof(int8_t[10])? Is it respectively 1 and 10? Or is it 1 in the first case (because it's the minimum size) and 9 in the second case, considering the array is "packed"? In the latter case, the usual sizeof(a)/sizeof(a[0]) to get the number of items would not work.

I admit I am not even sure how arrays would be implemented in this case, aren't the items supposed to be packed? If so, if int8_t[10] were actually an array of 9-bit words (keeping my example), then would it not break something somehow? It looks confusing. There may be a clear answer in the std that I haven't seen yet. But if, still in this example, int8_t is actually implemented as a masked 9-bit word under the hood (which I suppose is what you have in mind and what may be implied by the std), then this size matter kinda bugs me.

That may end up being one of those aspects of the C std that will eventually get cleared up. Time will tell.
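Incidentally, the sizeof(a)/sizeof(a[0]) idiom mentioned above only works because sizeof counts in units of char, whatever CHAR_BIT happens to be. A minimal sketch (the 10-element array is hypothetical, purely for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* sizeof always counts in units of char, and sizeof(char) == 1 by
   definition (C17 6.5.3.4). So on any platform that provides int8_t
   at all, an int8_t array must give sizeof(a)/sizeof(a[0]) equal to
   its element count, otherwise the usual idiom below breaks. */
size_t int8_array_count(void) {
    int8_t a[10];                  /* hypothetical 10-element array */
    return sizeof a / sizeof a[0]; /* the usual element-count idiom */
}
```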
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8606
  • Country: gb
Re: How dead are 4bit MCUs?
« Reply #76 on: June 21, 2022, 07:09:57 pm »
Is it a requirement that the addressable unit in C be an 8 bit byte?
I'm not sure what the standards say, but there are numerous compilers where the addressable unit is 16 bits or more, because the machine is incapable of addressing smaller units. DSPs are the typical example of this.
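Right, and the standard only pins down a lower bound: the C "byte" is whatever char is, and CHAR_BIT must be at least 8 (C17 5.2.4.2.1). A word-addressed DSP can conform with CHAR_BIT of 16 or 32, in which case char, short and int may all have sizeof 1. A tiny illustration:

```c
#include <limits.h>

/* The C "byte" is CHAR_BIT bits wide, and CHAR_BIT >= 8 is required.
   On a typical desktop this returns 8; a conforming compiler for a
   16-bit word-addressed DSP could legitimately return 16 here. */
int byte_width(void) {
    return CHAR_BIT;
}
```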
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: nz
Re: How dead are 4bit MCUs?
« Reply #77 on: June 21, 2022, 09:05:02 pm »
One of the questions that pops up (among a bunch) is regarding the sizeof operator.
sizeof(char) is 1 by definition, if I'm not mistaken. 'char' is thus the smallest unit of memory for sizeof. In cases for which (u)int8_t is smaller than char (let's assume a 9-bit char), what is sizeof((u)int8_t) ?

1, clearly.  Or 2 if you're using a 6-bit char on your 36-bit machine (which used to be common, but the current C standard requires CHAR_BIT >= 8, so I don't think that's allowed anymore)

Quote
What is sizeof(int8_t[10])? Is it respectively 1 and 10? Or is it 1 in the first case (because it's the minimum size) and 9 in the second case, considering the array is "packed"?

I don't see how it can be 9 -- that's not big enough. It could be 12, right? Three 36 bit words. At least if it's in a struct? Maybe not a bare array.

Quote
I admit I am not even sure how arrays would be implemented in this case, aren't the items supposed to be packed? If so, if int8_t[10] were actually an array of 9-bit words (keeping my example), then would it not break something somehow? It looks confusing. There may be a clear answer in the std that I haven't seen yet. But if, still in this example, int8_t is actually implemented as a masked 9-bit word under the hood (which I suppose is what you have in mind and what may be implied by the std), then this size matter kinda bugs me.

It has to be masked, right? Either in 36 bits or (least waste) in 9.  Sizes have to be a whole number of "bytes", and calculations have to behave "as if" the value were actually stored in 8 bits.

So an array would be 4 masked 9-bit items in each word, not 4 packed 8-bit items with 4 bits unused at the end.
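To make that "as if stored in 8 bits" behaviour concrete: on a machine whose registers are wider than 8 bits, the compiler effectively has to insert a mask after arithmetic on an exact-width unsigned type. A sketch of what the generated code amounts to:

```c
/* Unsigned 8-bit arithmetic must wrap mod 256 no matter how wide the
   underlying registers are, so a compiler for a 9-bit or 36-bit
   machine ends up emitting an explicit mask like this after each
   operation on a uint8_t value. */
unsigned masked_add_u8(unsigned a, unsigned b) {
    return (a + b) & 0xFFu;    /* wrap at 8 bits */
}
```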
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4395
  • Country: dk
Re: How dead are 4bit MCUs?
« Reply #78 on: June 22, 2022, 12:57:41 pm »

One of the questions that pops up (among a bunch) is regarding the sizeof operator.
sizeof(char) is 1 by definition, if I'm not mistaken. 'char' is thus the smallest unit of memory for sizeof. In cases for which (u)int8_t is smaller than char (let's assume a 9-bit char), what is sizeof((u)int8_t) ?
What is sizeof(int8_t[10])? Is it respectively 1 and 10? Or is it 1 in the first case (because it's the minimum size) and 9 in the second case, considering the array is "packed"? In the latter case, the usual sizeof(a)/sizeof(a[0]) to get the number of items would not work.

afaik the (u)intX_t types need only be defined on platforms that directly support them with no padding, so a platform with 9-bit chars won't have int8_t
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14313
  • Country: fr
Re: How dead are 4bit MCUs?
« Reply #79 on: June 22, 2022, 05:29:59 pm »
One of the questions that pops up (among a bunch) is regarding the sizeof operator.
sizeof(char) is 1 by definition, if I'm not mistaken. 'char' is thus the smallest unit of memory for sizeof. In cases for which (u)int8_t is smaller than char (let's assume a 9-bit char), what is sizeof((u)int8_t) ?

1, clearly.  Or 2 if you're using a 6-bit char on your 36-bit machine (which used to be common, but the current C standard requires CHAR_BIT >= 8, so I don't think that's allowed anymore)

Quote
What is sizeof(int8_t[10])? Is it respectively 1 and 10? Or is it 1 in the first case (because it's the minimum size) and 9 in the second case, considering the array is "packed"?

I don't see how it can be 9 -- that's not big enough. It could be 12, right? Three 36 bit words. At least if it's in a struct? Maybe not a bare array.

Uh? Yes it is. I was assuming a target on which the smallest addressable unit is 9 bits.

If the compiler "packs" the array, 9 nine-bit words are enough to store 10 eight-bit values.
I don't really see anything in the standard that would prevent a compiler from implementing arrays like this, although this is again one of the things that seem a bit confusing there. (Of course it wouldn't be very efficient, but that's another matter.)

 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14313
  • Country: fr
Re: How dead are 4bit MCUs?
« Reply #80 on: June 22, 2022, 05:31:39 pm »

One of the questions that pops up (among a bunch) is regarding the sizeof operator.
sizeof(char) is 1 by definition, if I'm not mistaken. 'char' is thus the smallest unit of memory for sizeof. In cases for which (u)int8_t is smaller than char (let's assume a 9-bit char), what is sizeof((u)int8_t) ?
What is sizeof(int8_t[10])? Is it respectively 1 and 10? Or is it 1 in the first case (because it's the minimum size) and 9 in the second case, considering the array is "packed"? In the latter case, the usual sizeof(a)/sizeof(a[0]) to get the number of items would not work.

afaik the (u)intX_t types need only be defined on platforms that directly support them with no padding, so a platform with 9-bit chars won't have int8_t

Good, we're moving forward. =)
Are you sure the (u)int8_t types are not mandatory in the std, though, meaning an implementation couldn't avoid providing them? Which, if such a platform existed, would just mean that any C compiler for it couldn't be compliant past C89.
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4395
  • Country: dk
Re: How dead are 4bit MCUs?
« Reply #81 on: June 22, 2022, 05:57:33 pm »

One of the questions that pops up (among a bunch) is regarding the sizeof operator.
sizeof(char) is 1 by definition, if I'm not mistaken. 'char' is thus the smallest unit of memory for sizeof. In cases for which (u)int8_t is smaller than char (let's assume a 9-bit char), what is sizeof((u)int8_t) ?
What is sizeof(int8_t[10])? Is it respectively 1 and 10? Or is it 1 in the first case (because it's the minimum size) and 9 in the second case, considering the array is "packed"? In the latter case, the usual sizeof(a)/sizeof(a[0]) to get the number of items would not work.

afaik the (u)intX_t types need only be defined on platforms that directly support them with no padding, so a platform with 9-bit chars won't have int8_t

Good, we're moving forward. =)
Are you sure the (u)int8_t types are not mandatory in the std, though, meaning an implementation couldn't avoid providing them? Which, if such a platform existed, would just mean that any C compiler for it couldn't be compliant past C89.

from the last free C17 draft:

7.20.1.1 Exact-width integer types

1. The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.

2. The typedef name uintN_t designates an unsigned integer type with width N and no padding bits. Thus, uint24_t denotes such an unsigned integer type with a width of exactly 24 bits.

3. These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
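A practical consequence of these being optional: 7.20.2.1 requires the matching limit macros (INT8_MAX and friends) to be defined exactly when the corresponding type is, so portable code can probe for the exact-width types with the preprocessor:

```c
#include <stdint.h>

/* The exact-width types are optional, but their limit macros must be
   defined if and only if the type exists, so #ifdef INT8_MAX is a
   portable feature test. A 9-bit-char platform would take the #else
   branch here. */
int has_int8(void) {
#ifdef INT8_MAX
    return 1;
#else
    return 0;
#endif
}
```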
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14313
  • Country: fr
Re: How dead are 4bit MCUs?
« Reply #82 on: June 22, 2022, 06:08:41 pm »
OK, thanks. I had to re-open C99 to see what it said too. And indeed, from what I can tell, all of the integer types from stdint.h seem to be optional.
For some reason, I remembered (falsely) that at least the 8, 16 and 32-bit types were mandatory.

Interestingly, though, I didn't find any mention of this no-padding rule in C99.

So, I was onto something, but the conclusion is (slightly) different: from what I can deduce from the wording in the std (at least in C17), on any platform that cannot support 8-bit words without padding, the stdint exact-width types that are multiples of 8 bits would simply not be available. Which would effectively make providing them... non-compliant.
 

Offline gnuarmTopic starter

  • Super Contributor
  • ***
  • Posts: 2218
  • Country: pr
Re: How dead are 4bit MCUs?
« Reply #83 on: June 23, 2022, 04:36:20 am »
wants a big stack, large register banks. try any 8-bit micro that does not have that. anything that is accumulator based, for example, where you have no register bank. or processors that have switchable banks (like the 8051), or processors with hardware stacks (pic).
the PDP-11 had uniform memory where i/o was mapped into memory. basically von Neumann.  try a harvard machine...
i've seen compilers that, to perform a print operation, have to move the data from rom to ram first and then pass off the ram copy... they waste lots of time and ram just copying and shuffling data.

How many programs on an 8051 use print statements?  Isn't the library that supports that very large, as in too large for such small memory devices?
i do it all the time. i have written programs that drive VT100 terminals, including the ansi escape codes to emulate field entry. fits in 8 kilobytes, using my own print routines. Sending text to the uart is like 10 bytes of code to send a null-terminated string. Sending numbers is barely 20 bytes. Written in PL/M.

Code:
putchar : procedure (char) public;
   declare char byte;
   do while not TI;  /* wait for Transmit flag to set. TI is the name of a bit in the SCON register */
   end;
   TI=0; /*arm the transmitter*/
   SBUF=char; /*throw character in transmit register. this will automatically set TI to one when transmit is complete*/
end putchar;

printstring : procedure (stringpointer) public;
   declare stringpointer address;                           /* we will be receiving a pointer*/
   declare char based stringpointer byte constant;  /* pointer points to a byte that resides in rom (constant)*/
   do while char <>0; /* look for null terminator*/
   call putchar(char);
   stringpointer=stringpointer+1;
   end;
end printstring;

call printstring(.('hello world',0));

yeah, it's not a full-blown printf, but it can do strings, and strings are stored in rom anyway. why would you want to copy those to ram? as for numbers, another 30 bytes or so create the routines to send out bytes, words and quads (32-bit). larger numbers use BCD arithmetic and are stored as packed bytes (2 digits per byte). the runtime library is like 300 bytes to do almost anything you want.

the pl/m manual gives an example of a simple calculator program that works over the uart. it does unsigned 16-bit arithmetic with + - / *. it's roughly 700 bytes of rom (400 bytes are strings for the user... there's barely 300 bytes of real code), uses 5 bytes of ram and 4 bytes of stack.
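For readers who don't speak PL/M, the "barely 20 bytes" number routine described above amounts to something like this in C. The UART write is left out; the digits go into a caller-supplied buffer so the sketch stays hardware-neutral:

```c
#include <stdint.h>
#include <stddef.h>

/* C sketch of the PL/M number-output routine described above:
   convert an unsigned 16-bit value to decimal ASCII. On the real
   target each character would be thrown at SBUF instead of stored. */
size_t u16_to_dec(uint16_t n, char dst[6]) {
    char tmp[5];               /* 65535 has at most 5 digits */
    size_t i = 0, len = 0;
    do {                       /* peel digits off, least significant first */
        tmp[i++] = (char)('0' + n % 10);
        n /= 10;
    } while (n);
    while (i)                  /* reverse into the output buffer */
        dst[len++] = tmp[--i];
    dst[len] = '\0';
    return len;
}
```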

I wasn't asking how many programs print.  I was asking how many programs use the C library printf code.  That was the context of the reference to copying data from ROM to RAM.  Then your example clearly prints directly from the ROM. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline tepalia02

  • Regular Contributor
  • *
  • Posts: 100
  • Country: bd
Re: How dead are 4bit MCUs?
« Reply #84 on: June 23, 2022, 12:41:58 pm »
Personally, I've never used one. Can anyone point me to a datasheet?
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8515
  • Country: us
    • SiliconValleyGarage
Re: How dead are 4bit MCUs?
« Reply #85 on: June 24, 2022, 05:45:11 pm »
I wasn't asking how many programs print.  I was asking how many programs use the C library printf code.  That was the context of the reference to copying data from ROM to RAM.  Then your example clearly prints directly from the ROM.
on an 8-bitter with constrained resources like an 8051 and the like? nobody in his right mind would use the printf library. it is too large, and it most likely cannot handle the duality of printing from rom and ram.
i analysed the C compilers from Mikroe once. they have a "helper" function that copies the string from rom to a ram buffer, and then passes it to printf. the problem is the variable argument list of printf.

printf("blah"); <- clearly a hardcoded string
const char blah_string[] = "Blabla";
printf(blah_string); <- still hardcoded, as it is a constant
char blabla[12] = "more blabla"; <- this is where it starts getting tricky... you need to analyse all the code to see if blabla ever gets modified. if not, you can plonk it in rom... otherwise ram

printf("blabla %i", some_number); <- well... blabla can be stuffed in rom, while the integer needs a conversion routine to an ascii string, and that has to be stored somewhere with memory allocated for it... so one portion comes from rom, another from ram...

that is getting tricky for the printf library. on a von Neumann machine it doesn't matter, it's all flat memory and there is only one machine language operation for access.. in a Harvard machine the opcodes are different...  it becomes spaghetti very quickly.

So the compiler builders resort to trickery where they allocate ram and build the complete string , constants and all in a ram buffer first.... then send it to a uniform handler that writes it to the output buffer. the problem is ... you only have 128 bytes of ram (or less) on those machines. so you end up eating half of your rom for the printf and half of the ram to send out a string of 80 characters ...

Many programming languages for these machines do not have a printf. you roll your own, and you constrain it enough that you have one routine to print from rom and one from ram. you do the splitting yourself and optimise for the least amount of ram usage.
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: nz
Re: How dead are 4bit MCUs?
« Reply #86 on: June 25, 2022, 01:26:50 am »
that is getting tricky for the printf library. on a von Neumann machine it doesn't matter, it's all flat memory and there is only one machine language operation for access.. in a Harvard machine the opcodes are different...  it becomes spaghetti very quickly.

So the compiler builders resort to trickery where they allocate ram and build the complete string , constants and all in a ram buffer first.... then send it to a uniform handler that writes it to the output buffer. the problem is ... you only have 128 bytes of ram (or less) on those machines. so you end up eating half of your rom for the printf and half of the ram to send out a string of 80 characters ...

Many programming languages for these machines do not have a printf. you roll your own, and you constrain it enough that you have one routine to print from rom and one from ram. you do the splitting yourself and optimise for the least amount of ram usage.

This is where C++'s cout << "foo" << someInt << "bar" << endl is better. Properly implemented, this doesn't need to assemble the whole line anywhere, and each part can come from ROM, RAM, or be converted in a minimal buffer as appropriate.

The same goes for the C++ overloaded single-argument print() functions in Arduino, designed for use on microcontrollers with barely any RAM.

People only like actual printf() because it's more convenient to type a format string with %s in it, and it's less code at the caller (only one function call).

When the printf format string is constant, a compiler can split it up into print_literal_string_from_ROM(); print_integer(); print_literal_string_from_ROM();  And then you don't drag in the long long or floating point conversions that you're not using.
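The compile-time split described above can be sketched in C. Output goes to a caller-supplied buffer so the sketch is self-contained; on a real target each helper would feed the UART directly, and unused conversions never get linked in:

```c
#include <stddef.h>

/* Hand-split equivalent of printf("x=%u!", n): each literal piece and
   each conversion becomes a separate call. The literal strings can
   live in ROM, and the long long / floating point conversions you
   don't call never get dragged into the image. */
size_t emit_str(char *out, size_t pos, const char *s) {
    while (*s)                  /* copy the literal piece */
        out[pos++] = *s++;
    return pos;
}

size_t emit_uint(char *out, size_t pos, unsigned n) {
    char tmp[10];               /* 4294967295 has at most 10 digits */
    size_t i = 0;
    do { tmp[i++] = (char)('0' + n % 10); n /= 10; } while (n);
    while (i)                   /* emit digits most significant first */
        out[pos++] = tmp[--i];
    return pos;
}
```

So printf("x=%u!", 42) becomes emit_str, then emit_uint, then emit_str, each advancing the position.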
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14313
  • Country: fr
Re: How dead are 4bit MCUs?
« Reply #87 on: June 25, 2022, 02:03:13 am »
There are several ways of handling this problem of data access.

The first obvious (but not efficient as far as RAM use is concerned, which could be a problem on small targets) way is to put all "constants" in data memory (RAM) so that the CPU never has to access anything in, say, Flash memory during normal execution of the code.

This can be "trivially" done. A C compiler isn't required to put constants in Flash memory (or generally speaking, in some kind of read-only memory.)
All of that can be done by writing an appropriate linker script. The "startup code" then has to copy all constants into RAM, just as it does when initializing non-zero global variables, using the specific instructions needed to do so. Most Harvard architectures used these days are modified Harvard, and there is always some kind of bridge between the different memory areas, even when it's not fully transparent.
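That startup copy works the same way as ordinary .data initialization: before main() runs, a loop moves the load image of the section out of read-only memory to its run address. The boundary addresses normally come from linker-script symbols whose names vary per toolchain; plain arrays stand in for them here so the sketch is self-contained:

```c
#include <stddef.h>

/* Sketch of the startup copy: move the load image of an initialized
   section out of read-only memory into RAM before main() runs. Real
   startup code takes the addresses from linker-script symbols (names
   vary per toolchain); these arrays are stand-ins for illustration. */
static const unsigned char flash_image[4] = {1, 2, 3, 4}; /* load image in ROM */
static unsigned char ram_copy[4];                         /* run address in RAM */

void copy_const_section(void) {
    for (size_t i = 0; i < sizeof ram_copy; i++)
        ram_copy[i] = flash_image[i];
}
```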

The second way, which is what was required on older 8-bit PICs, for instance, is that you need to use specific instructions to access Flash memory - and the compiler needs to handle two types of pointers with specific qualifiers. Not very nice, but the plus side is that you don't waste RAM as in the above solution, and you have full control over memory access. Yes, as inconvenient as it can be, I consider this also a benefit for security reasons.

And that said, using printf() on small targets when all you need are simple formats for displaying integers and maybe floating point numbers, is rarely a good idea. Writing your own conversion functions is not difficult and will be much more efficient. And, once you have written them, you can reuse them as often as you want.
« Last Edit: June 25, 2022, 02:05:13 am by SiliconWizard »
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8515
  • Country: us
    • SiliconValleyGarage
Re: How dead are 4bit MCUs?
« Reply #88 on: June 25, 2022, 06:01:07 am »
The first obvious (but not efficient as far as RAM use is concerned, which could be a problem on small targets)
well that is the problem on these small machines.

printf("Hello %i world %s : %i", 128, q, b);
passing a constant integer, a string (could be ram, could be rom...) and a ram integer.
very difficult to unroll, so they simply build the whole thing in a ram buffer, then pass that off to the output handler. the output handler becomes simple and short because there is only one source and one target, but you eat memory... of which you have very little
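In standard C terms, that scheme is essentially vsnprintf into one small buffer, then a single handler for the finished string. A sketch (the handler just records the result here so it can be checked; on target it would be the UART loop):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* The "build it in a ram buffer" scheme in standard C: format
   everything into one small buffer with vsnprintf, then hand the
   finished string to a single output handler. The handler below just
   records the string; on target it would feed the UART byte by byte.
   The buffer itself is the whole cost of the scheme on a 128-byte part. */
char uart_shadow[64];

static void output_handler(const char *s) {
    strncpy(uart_shadow, s, sizeof uart_shadow - 1);
    uart_shadow[sizeof uart_shadow - 1] = '\0';
}

void buffered_printf(const char *fmt, ...) {
    char buf[64];              /* the RAM buffer everything is built in */
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);
    output_handler(buf);
}
```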


Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline harerod

  • Frequent Contributor
  • **
  • Posts: 449
  • Country: de
  • ee - digital & analog
    • My services:
Re: How dead are 4bit MCUs?
« Reply #89 on: June 25, 2022, 08:03:57 am »
A bit late to the show, but closer to the original topic: Has anybody ever worked with the Atmel MARC4? I had my eyes on that one for several projects, but its use was never justified by the expected savings. Atmel discontinued this family in 2015.
https://media.digikey.com/pdf/Data%20Sheets/Atmel%20PDFs/T48C510.pdf
Digikey lists several devices as 0 stock / discontinued. Some MCU core with RF interface.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14313
  • Country: fr
Re: How dead are 4bit MCUs?
« Reply #90 on: June 25, 2022, 05:56:10 pm »
A bit late to the show, but closer to the original topic: Has anybody ever worked with the Atmel MARC4?

Never heard of them. I'll have a look at the DS out of curiosity.

Has anyone ever written assembly for one of the HP Saturn CPUs I mentioned earlier? Those were, at least in my memory, pretty interesting, and the assembly was rather nice. I wrote routines in assembly on the HP28S and HP48 back in the day.
 

Offline harerod

  • Frequent Contributor
  • **
  • Posts: 449
  • Country: de
  • ee - digital & analog
    • My services:
Re: How dead are 4bit MCUs?
« Reply #91 on: June 25, 2022, 08:19:16 pm »
SiliconWizard, after three decades my HP48SX is still in use, albeit rarely. I was only ever a user of other people's low-level code (Sokoban...). I used the high-level language to write small programs, e.g. transmission line calculations and such.
There are a bunch of emulators available, in case anybody wants to dabble with that architecture.
 

Offline gnuarmTopic starter

  • Super Contributor
  • ***
  • Posts: 2218
  • Country: pr
Re: How dead are 4bit MCUs?
« Reply #92 on: June 27, 2022, 04:10:22 am »
A bit late to the show, but closer to the original topic: Has anybody ever worked with the Atmel MARC4? I had my eyes on that one for several projects, but its use was never justified by the expected savings. Atmel discontinued this family in 2015.
https://media.digikey.com/pdf/Data%20Sheets/Atmel%20PDFs/T48C510.pdf
Digikey lists several devices as 0 stock / discontinued. Some MCU core with RF interface.

I can't say I worked with it.  I did take a hard look at trying to use it, but that was around 10 years ago and even then they were not doing much to support it.  I don't recall ever finding the tools for it.
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline harerod

  • Frequent Contributor
  • **
  • Posts: 449
  • Country: de
  • ee - digital & analog
    • My services:
Re: How dead are 4bit MCUs?
« Reply #93 on: June 27, 2022, 06:49:24 am »
gnuarm, thanks to your comment, I figured out that my interest in MARC4 was over 20 years ago. At that time I was still an inmate at a large American medical devices manufacturer. Paying real money to the manufacturer for the development tools (hard- and software) was much more common then.
 

