EEVblog Electronics Community Forum

Products => Computers => Programming => Topic started by: Kittu20 on September 29, 2023, 09:15:18 am

Title: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 29, 2023, 09:15:18 am
Hello everyone,

I'm trying to wrap my head around the concept of memory allocation at compile time versus run time. My understanding is that memory for variables isn't allocated until the program runs on a PC. I believe that memory allocation happens at runtime. However, I'm confused about how memory allocation can occur at compile time when the program doesn't actually run on hardware (PC).

Consider this code:

Code: [Select]
#include <stdio.h>
#include <stdlib.h>

// Global variable
int globalVar = 10;

// Static global variable
static int staticGlobalVar = 20;

int main() {
    // Local variable
    int localVar = 5;

    // Static local variable
    static int staticLocalVar = 15;

    // Dynamic memory allocation using pointers
    int *dynamicVar = (int *)malloc(sizeof(int));

    if (dynamicVar == NULL) {
        printf("Memory allocation failed.\n");
        return 1;
    }

    *dynamicVar = 25;

    // Extern variable (defined in another source file)
    extern int externVar;

    // Pointer variable
    int *pointerVar;
    pointerVar = &globalVar;

    printf("Global variable: %d\n", globalVar);
    printf("Static global variable: %d\n", staticGlobalVar);
    printf("Local variable: %d\n", localVar);
    printf("Static local variable: %d\n", staticLocalVar);
    printf("Dynamic variable: %d\n", *dynamicVar);
    printf("Extern variable: %d\n", externVar);
    printf("Pointer variable (points to globalVar): %d\n", *pointerVar);

    // Free dynamically allocated memory
    free(dynamicVar);

    return 0;
}

Could you please clarify this concept of memory allocation in C programs?
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 29, 2023, 09:30:56 am
I'm trying to wrap my head around the concept of memory allocation at compile time versus run time. My understanding is that memory for variables isn't allocated until the program runs on a PC. I believe that memory allocation happens at runtime. However, I'm confused about how memory allocation can occur at compile time when the program doesn't actually run on hardware (PC).
...
Could you please clarify this concept of memory allocation in C programs?

Yet another poor question :( See https://entertaininghacks.wordpress.com/library-2/good-questions-pique-our-interest-and-dont-waste-our-time-2/

Your code example is useless, because you have just copied it from somewhere. You need to explain what you think will happen; only after that can anybody here help you correct your misunderstanding.

Yes, we can help: this is the same for every computer language, and is well described in textbooks. Textbooks will explain this subject better than quickly scribbled responses on a forum.

Textbooks about C are widely available in India, in many of the languages used in India.

Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: IanB on September 29, 2023, 09:56:36 am
Maybe you are familiar with the game of Monopoly? The rules of the game allocate a certain amount of money to each of the players to start with. How can the rules allocate money to the players before the game has even begun?
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 29, 2023, 10:08:55 am
I'm trying to wrap my head around the concept of memory allocation at compile time versus run time. My understanding is that memory for variables isn't allocated until the program runs on a PC. I believe that memory allocation happens at runtime. However, I'm confused about how memory allocation can occur at compile time when the program doesn't actually run on hardware (PC).
...
Could you please clarify this concept of memory allocation in C programs?

Yet another poor question :( See https://entertaininghacks.wordpress.com/library-2/good-questions-pique-our-interest-and-dont-waste-our-time-2/

Your code example is useless, because you have just copied it from somewhere. You need to explain what you think will happen; only after that can anybody here help you correct your misunderstanding.

Yes, we can help: this is the same for every computer language, and is well described in textbooks. Textbooks will explain this subject better than quickly scribbled responses on a forum.

Textbooks about C are widely available in India, in many of the languages used in India.

Thank you for taking the time to respond to my question, and I appreciate your feedback. In my studies of memory allocation in C programs, I've come across the concept of memory allocation happening both at compile time and run time. My confusion arises from the fact that during compilation, the program isn't actually executed on hardware. So, I'm trying to understand how memory allocation can occur at compile time when there's no physical execution of the program.

The code I shared was meant to serve as an initial reference point to frame my question regarding memory allocation. Specifically, I intended to gain a better understanding of how memory allocation works at compile time vs. run time.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Jeroen3 on September 29, 2023, 10:23:25 am
Quote
However, I'm confused about how memory allocation can occur at compile time when the program doesn't actually run on hardware (PC).
https://devconnected.com/understanding-processes-on-linux/ (https://devconnected.com/understanding-processes-on-linux/)

On targets without an OS the process is statically linked, which means that the program contains explicit references to the target hardware.
e.g. the program knows its .TEXT (ROM) will be at 0x0000 and its .DATA (RAM) will be at 0x8000.

Whatever happens next is up to your runtime and its implementation.
https://www.gnu.org/software/libc/manual/html_node/Memory-Allocation-and-C.html (https://www.gnu.org/software/libc/manual/html_node/Memory-Allocation-and-C.html)

Quote
The C language supports two kinds of memory allocation through the variables in C programs:

- Static allocation is what happens when you declare a static or global variable. Each static or global variable defines one block of space, of a fixed size. The space is allocated once, when your program is started (part of the exec operation), and is never freed.
- Automatic allocation happens when you declare an automatic variable, such as a function argument or a local variable. The space for an automatic variable is allocated when the compound statement containing the declaration is entered, and is freed when that compound statement is exited.
In GNU C, the size of the automatic storage can be an expression that varies. In other C implementations, it must be a constant.

A third important kind of memory allocation, dynamic allocation, is not supported by C variables but is available via GNU C Library functions.

This should help you find relevant keywords to find literature.
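If it helps, here is a minimal sketch of my own (not from the libc manual) showing all three kinds side by side; the variable-length array illustrates the "automatic storage whose size can vary" case mentioned in the quote.

Code: [Select]
#include <stdio.h>
#include <stdlib.h>

int static_var = 42;            /* static allocation: exists for the whole program run */

void demo(int n) {
    int automatic_var = n;      /* automatic allocation: created on entry, freed on exit */
    int vla[n];                 /* automatic, but with a run-time size (C99 VLA / GNU C) */
    int *dynamic_var = malloc(sizeof *dynamic_var);   /* dynamic allocation from the heap */

    if (dynamic_var == NULL)
        return;

    vla[0] = automatic_var;
    *dynamic_var = vla[0] + static_var;
    printf("%d\n", *dynamic_var);

    free(dynamic_var);          /* dynamic memory must be released explicitly */
}

int main(void) {
    demo(3);
    return 0;
}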
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 29, 2023, 10:36:03 am
I'm trying to wrap my head around the concept of memory allocation at compile time versus run time. My understanding is that memory for variables isn't allocated until the program runs on a PC. I believe that memory allocation happens at runtime. However, I'm confused about how memory allocation can occur at compile time when the program doesn't actually run on hardware (PC).
...
Could you please clarify this concept of memory allocation in C programs?

Yet another poor question :( See https://entertaininghacks.wordpress.com/library-2/good-questions-pique-our-interest-and-dont-waste-our-time-2/

Your code example is useless, because you have just copied it from somewhere. You need to explain what you think will happen; only after that can anybody here help you correct your misunderstanding.

Yes, we can help: this is the same for every computer language, and is well described in textbooks. Textbooks will explain this subject better than quickly scribbled responses on a forum.

Textbooks about C are widely available in India, in many of the languages used in India.

Thank you for taking the time to respond to my question, and I appreciate your feedback. In my studies of memory allocation in C programs, I've come across the concept of memory allocation happening both at compile time and run time. My confusion arises from the fact that during compilation, the program isn't actually executed on hardware. So, I'm trying to understand how memory allocation can occur at compile time when there's no physical execution of the program.

The code I shared was meant to serve as an initial reference point to frame my question regarding memory allocation. Specifically, I intended to gain a better understanding of how memory allocation works at compile time vs. run time.

Simple and correct answer: you have told the compiler to allocate memory for a variable, and the compiler has chosen where to allocate it.

You need to understand what a compiler (any compiler) is and what a compiler does. Concentrate on the correspondence between simple language statements and machine instructions.

Go to https://godbolt.org/noscript/c and type in a very short program, i.e. of similar length to the example they supply.

Use compiler directive -O0 and choose whatever machine code you are familiar with. If you aren't familiar with any, then you will struggle to understand any answer.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 29, 2023, 10:47:54 am
memory layout categories:

Text Segment:

Read-only.
Contains CPU-executable machine code instructions.

Data Segment:

Initialized Data: Holds initialized global/static variables.
Uninitialized Data: Stores uninitialized global/static variables.

Stack:

Stores local variables

Heap:

Utilized for dynamic memory allocation.
Below is a simple C code that demonstrates each of the memory layout categories 

Code: [Select]
 #include <stdio.h>
#include <stdlib.h>

/* Global initialized variable in the Data Segment */
int global_initialized_var = 42;

/* Global uninitialized variable in the Data Segment (BSS) */
int global_uninitialized_var;

/* Static variable in the Data Segment */
static int static_var = 30;

void stack_example() {
    /* Local variable in the Stack */
    int local_var = 10;
    printf("Local Variable in Stack: %d\n", local_var);
}

int main() {
    /* Text Segment: CPU-executable code */
    printf("Hello from the Text Segment!\n");

    /* Accessing Initialized Data in Data Segment */
    printf("Initialized Global Variable: %d\n", global_initialized_var);

    /* Accessing Uninitialized Data in Data Segment (BSS) */
    printf("Uninitialized Global Variable: %d\n", global_uninitialized_var);

    /* Accessing Static Data in Data Segment */
    printf("Static Global Variable: %d\n", static_var);

    /* Calling a function to demonstrate Stack usage */
    stack_example();

    /* Dynamic memory allocation in the Heap */
    int *dynamic_var = (int *)malloc(sizeof(int));
    *dynamic_var = 20;
    printf("Dynamic Variable in Heap: %d\n", *dynamic_var);

    free(dynamic_var);

    return 0;
}
 
 

I have a basic understanding that memory allocation occurs both when a program is compiled and when it runs on a PC. However, I'm trying to understand the concept of "compile-time allocation" and whether memory allocation actually happens when a program is compiled but not run.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 29, 2023, 10:54:03 am
I have a basic understanding that memory allocation occurs both when a program is compiled and when it runs on a PC. However, I'm trying to understand the concept of "compile-time allocation" and whether memory allocation actually happens when a program is compiled but not run.

You need to understand what a compiler does, in particular how the output corresponds to its input. Use godbolt.org (and you can even choose the language).

Short answer...

1) The compiler takes a sequence of characters and outputs a sequence of numbers.
2) During execution the processor interprets the numbers as being instructions, and executes them with the side effects defined in the processor's programming manual.

Until 2 happens, the question is meaningless.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: T3sl4co1l on September 29, 2023, 11:58:49 am
Note that malloc() and friends are nothing more than functions.  They may have some direct compiler support, whether for purposes of safer reasoning around freed/dangling/null pointers, or assumptions about what their return values are being used for, I don't know; but in general, they are just functions, which operate on resources already in the program at compile time.  You can write your own, if you like, overriding malloc() and free() and such (but, generally speaking, please don't; memory allocation is hard enough as it is).

The basic way a dynamic memory allocator works is to statically declare a pool of fixed memory, and return a pointer somewhere into that.  It is entirely and explicitly up to your program to avoid stomping on anything else in that pool, which is just an ordinary array, with full ownership/usage rights by any function in the program (or any API calls made thereby).  No CPU protection is going to stop that from happening, not automatically by the compiler's normal effort at least; and give or take warnings, the compiler itself won't stop you from doing the same, either.

So, on a basic level, they're all static allocation.  You're just using one such object as a dynamic pool, and praying that malloc() and free() do what they're supposed to do in the average case, and, apparently ignoring the failure case when memory cannot be allocated or freed, not that it matters for a one-off do-nothing like this example, but, in general that's not good practice.
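To make that concrete, here is a deliberately naive sketch of the idea (my own illustration, with made-up names, not how any real malloc() is implemented): a fixed, statically declared pool with a bump pointer handing out pieces of it.

Code: [Select]
#include <stddef.h>
#include <stdint.h>

/* A fixed pool, statically allocated like any other global array. */
static uint8_t pool[4096];
static size_t  pool_used = 0;

/* Hand out the next chunk of the pool; return NULL when it is exhausted.
   A real allocator also handles alignment, freeing and reuse of blocks. */
static void *pool_alloc(size_t size)
{
    if (size > sizeof pool - pool_used)
        return NULL;
    void *p = &pool[pool_used];
    pool_used += size;
    return p;
}

This one can only ever hand memory out; everything else (keeping callers from stomping on each other's chunks, returning memory for reuse) is exactly the hard part real allocators have to solve.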

Now, on a hosted system, the OS can do a lot with a program, give or take how it's linked (what components are visible to the OS: modules, functions, memory objects, etc.).  It may be that memory locations are remapped when the program is loaded, so that for example no objects ever share common memory spaces, they could be on entirely different pages for example -- therefore any overrun working with one object causes a protection fault rather than stomping another otherwise-adjacent object.  Or even more fine-grained protection methods these days.  The OS can also be asked to allocate memory, in which case rather than writing all that memory into the application file, it's simply tagged as "oh by the way I need this much at label so-and-so", and the OS fills in both blanks (it allocates some of its available memory, at the address it patches in to the program in the required slots that reference that object).  And maybe this allocation happens at run-time as well as load-time, and then you have truly dynamic memory as far as the application is concerned (being able to read/write to memory locations that didn't exist at compile time -- addresses that the compiler could otherwise reasonably assume are invalid!).

Related anecdote: a long time ago, I wrote an 8086 assembly program on MS-DOS.  The resulting executable was something like 201kB, despite only ~9kB of actual code.  The ~3x 64kB graphics buffers I had declared, as uninitialized or don't-care memory, were linked into the *.EXE as all-zeroes, because I didn't pass the correct flag to the linker to tell it to let the OS allocate uninitialized memory, and frankly I didn't care at the time lol, so it dutifully wrote them into the EXE as default initialized memory, to be copied directly from hard disk, all 192k of zeroes. :)  (I could've also done dynamic allocation, where, when MS-DOS loads an application, it assumes it's going to use all available memory whether it means to or not, and allocates everything available; to use the OS for allocation, you first de-allocate the excess -- up to only as much as your program actually uses statically -- and then ask for new bits and pieces as needed.  Mind, you can go ahead and stomp on whatever memory you want, in MS-DOS -- the 8086 had no memory protection, nor when running on any later CPU either (DOS runs in real mode); it was entirely a matter of custom, and trust, that programs behaved with each other back in those days.  You could even stomp over as much of DOS itself as you wanted, if you didn't mind losing out on basic OS or even IO functionality, if you needed those last few kB of the massive 640kB RAM that was installed.  Most likely such a program ended in a JMP F000:FFF0h, i.e. to the CPU reset vector -- the easiest way for the BIOS and OS to clean up all the munged memory is to completely start over.)

A common thread through this is, how much the compiler handles, what a linker is and what it handles, and what it means to load an application and begin executing it.  And which cases are applicable where, bare-metal vs. hosted.  I'm not going into further detail on this (and I'll likely get even more things wrong in the process), but this is excellent further reading to pick up on.

Tim
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 29, 2023, 12:53:56 pm

Simple and correct answer: you have told the compiler to allocate memory for a variable, and the compiler has chosen where to allocate it.

You need to understand what a compiler (any compiler) is and what a compiler does. Concentrate on the correspondence between simple language statements and machine instructions.


Correct me if I am wrong in my understanding. When we compile a C program, no actual memory is allocated for variables or functions. Instead, what happens is that the compiler generates instructions for the program regarding how much memory should be allocated and where it should be allocated when the program is run.

This means that the allocation of memory for variables and functions occurs during the runtime of the program, not during the compilation phase. The role of the compiler during compilation is to provide the instructions and information about how memory should be allocated for the program's variables when it is executed.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: T3sl4co1l on September 29, 2023, 01:01:41 pm

Simple and correct answer: you have told the compiler to allocate memory for a variable, and the compiler has chosen where to allocate it.

You need to understand what a compiler (any compiler) is and what a compiler does. Concentrate on the correspondence between simple language statements and machine instructions.


Correct me if I am wrong in my understanding. When we compile a C program, no actual memory is allocated for variables or functions. Instead, what happens is that the compiler generates instructions for the program regarding how much memory should be allocated and where it should be allocated when the program is run.

This means that the allocation of memory for variables and functions occurs during the runtime of the program, not during the compilation phase. The role of the compiler during compilation is to provide the instructions and information about how memory should be allocated for the program's variables when it is executed.

What variables?

Statics and globals are allocated, and you will indeed find them in the EXE or whatever via hexdump or binutils.  (Uninitialized/zeroed may not, as most OSs and linkers, or platform support libraries, handle them more efficiently.)

Anything on the stack, doesn't exist anywhere in the program at all, until such time as the function performs the allocation on entry.  Whether a given system has enough stack space available for such an allocation, is up to you (give or take if the compiler is aware of it, and does something about it; offhand, I think avr-gcc for example does not, you can happily allocate more on the stack than extant RAM and it doesn't care).  Likewise, it ceases to exist (semantically) on exiting the function so must not be referred to outside the function (e.g. a function can erroneously return a pointer to a local variable; and sometimes you'll actually get away with it..!)
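A small illustration of that last point (the names are mine; a compiler with warnings enabled will usually flag the first function):

Code: [Select]
#include <stdio.h>

/* Wrong: local_var ceases to exist when the function returns,
   so the returned pointer dangles. */
int *bad_address(void)
{
    int local_var = 5;
    return &local_var;        /* undefined behaviour if dereferenced later */
}

/* OK: a static local lives for the whole program run,
   so its address stays valid after the function returns. */
int *good_address(void)
{
    static int static_var = 5;
    return &static_var;
}

int main(void)
{
    printf("%d\n", *good_address());
    return 0;
}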

Tim
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 29, 2023, 02:25:51 pm

Simple and correct answer: you have told the compiler to allocate memory for a variable, and the compiler has chosen where to allocate it.

You need to understand what a compiler (any compiler) is and what a compiler does. Concentrate on the correspondence between simple language statements and machine instructions.


Correct me if I am wrong in my understanding. When we compile a C program, no actual memory is allocated for variables or functions. Instead, what happens is that the compiler generates instructions for the program regarding how much memory should be allocated and where it should be allocated when the program is run.

This means that the allocation of memory for variables and functions occurs during the runtime of the program, not during the compilation phase. The role of the compiler during compilation is to provide the instructions and information about how memory should be allocated for the program's variables when it is executed.

Why don't you have a look at the output of a compiler at godbolt.org? There you will see what the compiler has done.

The compiler "knows" how an instruction will manipulate memory (and registers) when the instruction is executed. During compilation, the compiler internally keeps track of what manipulations need to be done, and outputs the series of instructions necessary - those are merely a sequence of numbers.

The exact manipulations can vary considerably, depending on the language, the compiler, the processor, and especially the compiler optimisation level. Sometimes they will involve memory, sometimes - particularly with higher optimisation levels - they will be purely register operations.

At runtime the processor interprets the series of instructions.

Example source C code:
Code: [Select]
extern int put( int );

int aStatic = 12;

int foo( int x ) {
    int aLocal = 34;
    return x + aLocal - aStatic;
}

void baz() {
    int result = foo( 56 );
    put(result & 0xff);
}

using x86-64 clang (trunk) -O0
Code: [Select]
# Compilation provided by Compiler Explorer at https://godbolt.org/
foo:                                    # @foo
        push    rbp
        mov     rbp, rsp
        mov     dword ptr [rbp - 4], edi
        mov     dword ptr [rbp - 8], 34
        mov     eax, dword ptr [rbp - 4]
        add     eax, dword ptr [rbp - 8]
        sub     eax, dword ptr [rip + aStatic]
        pop     rbp
        ret
baz:                                    # @baz
        push    rbp
        mov     rbp, rsp
        sub     rsp, 16
        mov     edi, 56
        call    foo
        mov     dword ptr [rbp - 4], eax
        mov     edi, dword ptr [rbp - 4]
        and     edi, 255
        call    put@PLT
        add     rsp, 16
        pop     rbp
        ret
aStatic:
        .long   12                              # 0xc
where you can see statically allocated memory, push/pop operations, and subroutine calls/returns.

But when optimised using -O3
Code: [Select]
# Compilation provided by Compiler Explorer at https://godbolt.org/
foo:                                    # @foo
        sub     edi, dword ptr [rip + aStatic]
        lea     eax, [rdi + 34]
        ret
baz:                                    # @baz
        mov     eax, 90
        sub     eax, dword ptr [rip + aStatic]
        movzx   edi, al
        jmp     put@PLT                         # TAILCALL
aStatic:
        .long   12                              # 0xc
the compiler has done all the addition and subtraction arithmetic operations at compile time, kept values in registers, and avoided subroutine calls.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: MikeK on September 29, 2023, 05:03:41 pm
Why don't you have a look at the output of a compiler at godbolt.org? There you will see what the compiler has done.

This is cool.  Thanks.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 29, 2023, 06:01:57 pm
Why don't you have a look at the output of a compiler at godbolt.org? There you will see what the compiler has done.

This is cool.  Thanks.

At least one person is paying attention :)

I wonder if the OP will?
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: IanB on September 29, 2023, 09:06:07 pm
I'm trying to wrap my head around the concept of memory allocation at compile time versus run time.

OK

Quote
My understanding is that memory for variables isn't allocated until the program runs on a PC.

This understanding is wrong. You are mistaken.

Quote
I believe that memory allocation happens at runtime.

It can happen at runtime, but it doesn't have to be this way. For example, in Fortran 77 allocation never happens at runtime--the possibility is not even available in the language.

Quote
However, I'm confused about how memory allocation can occur at compile time when the program doesn't actually run on hardware (PC).

You are confused because you are starting from a position of incorrect understanding.

It might be helpful to know what your situation is. Are you a student? If so, are you a student of computer science or computer engineering, or of some other discipline? In any event, you will want to look at basic textbooks describing how computers work. You need that understanding before you think about compilers (which is an advanced topic). If you try to think about compilers before you think about computing machines, you will naturally get confused.

So what you need to first understand is how a computer works at the machine level. What is the basic design of all modern computer systems? Once you understand how memory is laid out, and how the CPU can read and write memory locations by memory address, then you can understand how the compiler can allocate certain memory addresses in advance to store data (variables). The compiler can even pre-load those addresses with initial data.

(This is glossing over the roles of a linker and loader, and relocating programs in memory, but relocation does not need to happen on the most primitive machines.)
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: SiliconWizard on September 29, 2023, 09:46:56 pm
Static vs. dynamic allocation would make more sense than "compile time" vs. "run time" allocation IMHO.

Formally speaking, until some executable runs, nothing is happening, so is there any allocation to speak of? Also formally speaking, static "allocation" happens (or maybe more correctly, is "prepared" or "laid out") at link time rather than at compile time, if that matters. But also note that even static allocation has a run-time effect, as - at least on hosted environments - it is usually prepared before the main() function is called. (For instance, initialized data is copied - typically the .data segment(s) - and non-initialized data is zeroed out - typically the .bss segment.)

Regarding dynamic allocation, it can take various forms. The most common one people have in mind is allocation on the "heap". But allocation on the stack is also dynamic. So, uh, yeah.
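On bare metal, that pre-main() preparation of static storage is typically just a short startup routine. A minimal sketch, assuming the usual linker-script symbol names (_sidata, _sdata, _edata, _sbss, _ebss are conventions used by many Cortex-M style projects, not fixed names, and they vary between toolchains):

Code: [Select]
#include <stdint.h>

/* Symbols defined by the linker script (assumed names). */
extern uint32_t _sidata;   /* start of .data initial values in flash */
extern uint32_t _sdata;    /* start of .data in RAM */
extern uint32_t _edata;    /* end of .data in RAM */
extern uint32_t _sbss;     /* start of .bss in RAM */
extern uint32_t _ebss;     /* end of .bss in RAM */

extern int main(void);

void Reset_Handler(void)
{
    /* Copy initialised data from flash to RAM. */
    uint32_t *src = &_sidata;
    for (uint32_t *dst = &_sdata; dst < &_edata; )
        *dst++ = *src++;

    /* Zero the uninitialised (.bss) data. */
    for (uint32_t *dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;

    main();
    for (;;) ;             /* main() should not return on bare metal */
}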
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 30, 2023, 01:41:57 am

I wonder if the OP will?

I tried the link, but it produced assembly code. I'm not familiar with assembly language, so I didn't comment on it. I know it's good to learn assembly language, but at this time, I am focusing on learning C language only
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 30, 2023, 02:08:00 am
static vs. dynamic allocation would make more sense than "compile time" vs. "run time" allocation IMHO.


This link says https://www.geeksforgeeks.org/difference-between-static-and-dynamic-memory-allocation-in-c/ (https://www.geeksforgeeks.org/difference-between-static-and-dynamic-memory-allocation-in-c/) that static memory is allocated at compile time.

Suppose we've written a program that includes a global variable. When we compile this code on a Windows PC, does the compiler allocate memory for the global variable?

Or instead, does it write instructions into the program's executable file that allocate memory for the global variable when it runs on the PC?
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: IanB on September 30, 2023, 02:52:21 am
I tried the link, but it produced assembly code. I'm not familiar with assembly language, so I didn't comment on it. I know it's good to learn assembly language, but at this time, I am focusing on learning C language only

In order to understand the answer to the question you are asking, you must learn something about computing machines and how they work, and that includes understanding something about how memory is organized within a computer. You cannot reasonably expect to learn the C language without doing this, or you will become completely confused when you try to understand arrays and pointer arithmetic. If you are unwilling to do this, then you should pick another language to learn that does not have pointers, such as C#, Java or Python.

This link says https://www.geeksforgeeks.org/difference-between-static-and-dynamic-memory-allocation-in-c/ (https://www.geeksforgeeks.org/difference-between-static-and-dynamic-memory-allocation-in-c/) that static memory is allocated at compile time.

Suppose we've written a program that includes a global variable. When we compile this code on a Windows PC, does the compiler allocate memory for the global variable?

Yes.

Quote
Or instead, does it write instructions into the program's executable file that allocate memory for the global variable when it runs on the PC?

No. What you are missing is that a program's executable can contain both instructions and data. The data part is where variables can be stored.

The definition of static memory is that it is allocated by the compiler into a data area before the program starts running, and remains accessible until the program ends. But you will not understand this until you learn about machines, machine code, and assembly language. Without that, you must simply accept it as given.

By the way, when I was growing up, 14 year old students learning computer science would learn about the basic structure of a computer (and assembler language) before ever learning about a high level language. This is how fundamental it was considered.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 30, 2023, 03:07:08 am

This link says https://www.geeksforgeeks.org/difference-between-static-and-dynamic-memory-allocation-in-c/ (https://www.geeksforgeeks.org/difference-between-static-and-dynamic-memory-allocation-in-c/) that static memory is allocated at compile time.

Suppose we've written a program that includes a global variable. When we compile this code on a Windows PC, does the compiler allocate memory for the global variable?

Yes.


So global and static variables are allocated memory during compilation, and the compiler assigns their memory addresses in the program's data segment. For example, let's say globalVar is located at memory address 0x1000 and staticVar is located at memory address 0x2000.
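One way to see this for yourself is to print the addresses the toolchain has actually chosen (the numbers will not be the tidy 0x1000/0x2000 above, and on a modern OS they may even change between runs because of address-space layout randomisation):

Code: [Select]
#include <stdio.h>

int globalVar = 10;            /* typically ends up in .data */
static int staticVar = 20;     /* also .data, but with internal linkage */

int main(void)
{
    int localVar = 5;          /* lives on the stack, address picked at run time */

    printf("&globalVar = %p\n", (void *)&globalVar);
    printf("&staticVar = %p\n", (void *)&staticVar);
    printf("&localVar  = %p\n", (void *)&localVar);
    return 0;
}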

Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: magic on September 30, 2023, 05:13:02 am
Quote
My understanding is that memory for variables isn't allocated until the program runs on a PC.
This understanding is wrong. You are mistaken.
It really depends on what you mean by those words.

I'm with SiliconWizard - memory is allocated when you run a program and deallocated when you stop it to run a different one.
Technically, what the compiler (or rather, the linker) allocates statically are addresses.


The only case of truly static allocation of memory is when you design a computer which always runs one particular software :D
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: IanB on September 30, 2023, 05:31:49 am
This is semantics, and it depends on the meaning of the word allocate. To allocate can mean to select in advance, or to earmark, how something will be used. Hence my post in Reply #2 above. The rules of Monopoly allocate, or earmark, a certain amount of money to each of the players at the start of the game. This happens in the abstract, before any game has actually started.

In the same way, the compiler allocates, or earmarks, a certain amount of memory to store static variables. This happens before the program has even been loaded.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: magic on September 30, 2023, 06:09:08 am
My point is, you can't really say that it's wrong to state that memory is allocated at runtime.
In a way, almost all memory is allocated dynamically in a multitasking OS.
Hell knows what the OP really meant.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: metebalci on September 30, 2023, 07:15:17 am
This is semantics, and it depends on the meaning of the word allocate. To allocate can mean to select in advance, or to earmark, how something will be used. Hence my post in Reply #2 above. The rules of Monopoly allocate, or earmark, a certain amount of money to each of the players at the start of the game. This happens in the abstract, before any game has actually started.

In the same way, the compiler allocates, or earmarks, a certain amount of memory to store static variables. This happens before the program has even been loaded.

I think I agree with this point, and I think it is because of the use or definition of "allocate" (warning: English is not my first language). For allocation, Oxford (dict) says "an amount of money, space, etc. that is given to somebody for a particular purpose". In that sense, I don't like using allocation with compile time. Merriam-Webster, in addition to the above meaning, says "to set apart or earmark : DESIGNATE", and in this sense, it makes sense. Naturally it should be obvious what is meant in the context of compilation and runtime, but it may still cause some ambiguity.

By the way, when a program was compiled on IBM mainframes in the 60s using punch cards etc., maybe something different was going on, and maybe this usage is at least partially a historical artifact.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 30, 2023, 07:37:10 am

I wonder if the OP will?

I tried the link, but it produced assembly code. I'm not familiar with assembly language, so I didn't comment on it. I know it's good to learn assembly language, but at this time, I am focusing on learning C language only

You will fail to understand C or C++ unless you understand (1) how a processor operates and (2) how it interacts with its memory.

Without that understanding any programs you write will be cargo cult programming (https://en.wikipedia.org/wiki/Cargo_cult_programming), which might work sometimes but which will "randomly" fail occasionally. That is especially true with multicore processors and cache memory hierarchies.

I suggest you stick with a language that is designed to hide the hardware from you, e.g. Python.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 30, 2023, 08:02:00 am

You will fail to understand C or C++ unless you understand (1) how a processor operates and (2) how it interacts with its memory.

My understanding is that computer hardware reads binary instructions from program memory to perform tasks. Program memory stores the instructions that make up a program, and these instructions guide the hardware in executing specific tasks. Data memory, on the other hand, is where the values of variables are stored.

When we declare a global or static variable in code, the compiler assigns a memory address to that variable, which serves as the location in data memory where the variable's (global or static) value will be stored during program execution.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: DiTBho on September 30, 2023, 10:01:15 am
Technically, what the complier linker allocates statically are addresses.
The only case of truly static allocation of memory is when you design a computer which always runs one particular software :D

Yup, which however can be absolute or relative, depending on the ISA of the CPU and the machine layer of the compiler, in turn conditioned by user flags or directives passed from the source.

e.g. when you want "relocatable code", addresses are all relative, except memory mapped devices and shared memory, which need absolute addressing.

Variables inside a function can be handled with two approaches:
1) assigned to registers (RISC approach)
2) pushed on the stack (stack-machines(1) and CISC approach)

Putting something on the stack is equivalent to making it relative addressing, based on the stack pointer.



(1) an educational example is the ijvm (integer java(-like) virtual machine) created by prof Andrew S. Tanenbaum in the early 2000s. In the book that describes it almost all the examples are in ijvm assembly, very easy to follow.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 30, 2023, 10:31:37 am

You will fail to understand C or C++ unless you understand (1) how a processor operates and (2) how it interacts with its memory.

My understanding is that computer hardware reads binary instructions from program memory to perform tasks. Program memory stores the instructions that make up a program, and these instructions guide the hardware in executing specific tasks. Data memory, on the other hand, is where the values of variables are stored.

When we declare a global or static variable in code, the compiler assigns a memory address to that variable, which serves as the location in data memory where the variable's (global or static) value will be stored during program execution.

That is simplistic, especially with higher compiler optimisation levels. My example illustrated that.

Once you understand how a processor and memory works, then it will be much easier for you to understand why C has keywords "const", "volatile", "restrict", and why aliasing is an issue - especially in multicore processors. Understanding why the keywords are there is, unfortunately, only the start: you also need to understand what they don't mean/guarantee.

Caution: many experienced programmers think they understand what those qualifiers do, but actually they don't. In general C is full of "undefined behaviour" (UB) traps, which can easily cause surprises up to and including entire programs being reduced to a couple of instructions - or demons flying out of your nose (https://en.wikipedia.org/wiki/Undefined_behavior) (a.k.a. nasal demons).
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: DavidAlfa on September 30, 2023, 11:10:44 am
Your question mainly asks the difference between static, stack and dynamic memory allocation.
I understand some concepts are hard to grasp, but others like this one only take some reading!
There're thousands of sites explaining this. Have you even tried to google it?

https://www.geeksforgeeks.org/static-and-dynamic-memory-allocation-in-c/ (https://www.geeksforgeeks.org/static-and-dynamic-memory-allocation-in-c/)
https://craftofcoding.wordpress.com/2015/12/07/memory-in-c-the-stack-the-heap-and-static/ (https://craftofcoding.wordpress.com/2015/12/07/memory-in-c-the-stack-the-heap-and-static/)

https://www.youtube.com/watch?v=jKcg3ze10Hk (https://www.youtube.com/watch?v=jKcg3ze10Hk)
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 30, 2023, 11:29:14 am

Once you understand how a processor and memory works, then it will be much easier for you to understand

The CPU is responsible for executing program instructions stored in program memory. It fetches, decodes, and executes these instructions (reference: the 8051 architecture), using registers and other components to manipulate data and perform operations. Program memory holds the program instructions.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 30, 2023, 01:06:06 pm

Once you understand how a processor and memory works, then it will be much easier for you to understand

The CPU is responsible for executing program instructions stored in program memory. It fetches, decodes, and executes these instructions (reference: the 8051 architecture), using registers and other components to manipulate data and perform operations. Program memory holds the program instructions.

That is true for all computers.
It is dangerously simplistic for many, e.g. ARM, x86. Remember, you have previously started threads on large/complex systems, not 8051-class systems https://www.eevblog.com/forum/microcontrollers/larger-and-more-complex-embedded-systems/msg5072092/#msg5072092 (https://www.eevblog.com/forum/microcontrollers/larger-and-more-complex-embedded-systems/msg5072092/#msg5072092)
Although it is 25 years too late(!), C has tried to keep up with processors+memory by finally defining a memory model.

The 8051 has, I believe, four different types of memory - and programmers will have to specify which type to use using mechanisms that are outside the C standard. Some compilers add another three types of memory, again not standard C. https://en.wikipedia.org/wiki/Intel_8051#Memory_architecture (https://en.wikipedia.org/wiki/Intel_8051#Memory_architecture)

If you want to program an 8051 you must understand the hardware architecture, and that can't be done without understanding assembly code.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 30, 2023, 01:11:12 pm
Your question mainly asks the difference between static, stack and dynamic memory allocation.
I understand some concepts are hard to grasp, but others like this one only take some reading!
There're thousands of sites explaining this. Have you even tried to google it?

Yes, unlikely, and it has already been suggested :(

The OP has admitted to using ChatGPT to write his questions.

I've asked  questions because I'm genuinely curious and eager to understand a particular concept. English is not my first language, so I used AI to  format my question. However, I know that ChatGPT can't replace the real-time work experience and expertise that individuals in the industry have earned through their careers.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on September 30, 2023, 02:52:53 pm
Your question mainly asks the difference between static, stack and dynamic memory allocation.
I understand some concepts are hard to grasp, but others like this one only take some reading!
There're thousands of sites explaining this. Have you even tried to google it?

Yes, unlikely, and it has already been suggested :(

The OP has admitted to using ChatGPT to write his questions.


I'd like to clarify that DavidAlfa mentioned what I needed to ask. Also, now I don't use AI to draft questions; I simply ask questions in my own version of English.

There're thousands of sites explaining this. Have you even tried to google it?
I've done quite a bit of research on this topic, and I've also included a reference link in my previous post (#17) FYI
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on September 30, 2023, 03:14:22 pm
Your question mainly asks the difference between static, stack and dynamic memory allocation.
I understand some concepts are hard to grasp, but others like this one only take some reading!
There're thousands of sites explaining this. Have you even tried to google it?

Yes, unlikely, and it has already been suggested :(

The OP has admitted to using ChatGPT to write his questions.


I'd like to clarify that DavidAlfa mentioned what I needed to ask. Also, now I don't use AI to draft questions; I simply ask questions in my own version of English.

There're thousands of sites explaining this. Have you even tried to google it?
I've done quite a bit of research on this topic, and I've also included a reference link in my previous post (#17) FYI

Get a decent textbook in your native language. Too many sites are poor recapitulations of what is on other sites by people that aren't experts. Soon ChatGPT generated sites will make that even worse.

I see no problem in formulating your question in your native language and using google translate to mutate it into English.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: DavidAlfa on September 30, 2023, 09:29:09 pm
I also understand the struggle when you don't know the term you're searching for!

For example in mechanical devices, I need some sort of metal thing holding two pieces together, preventing them from slipping when rotating.
It's unlikely that searching that in Google leads to a woodruff key or DIN 6885 parallel keys, or at least not fast.

Same for computing/sorting algorithms, chemical reactions, hand tools, etc, etc, you're f*** up unless you know the name  :-DD.

Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: ejeffrey on October 02, 2023, 01:55:59 am

I wonder if the OP will?

I tried the link, but it produced assembly code. I'm not familiar with assembly language, so I didn't comment on it. I know it's good to learn assembly language, but at this time, I am focusing on learning C language only

I'm going to go somewhat against what people have said, that you "need" to understand how processors work at a machine level to understand C.  However, you are asking a question that goes beyond the scope of just the language.  All C requires is that the implementation give global and static variables a lifetime spanning the entire duration of the program.  How and where that happens is fundamentally a machine question, and if you won't take "it just does" for an answer you are going to need to learn about how actual machines operate.

Basically what happens when you launch a program on almost anything that constitutes an operating system is that a certain amount of memory is set aside for the program's execution.  Then some or all of that memory is initialized based on the contents of the program executable image.  Then the operating system jumps to a defined address within that space.  Exactly how this process works differs from system to system - DOS COM files work differently than Linux dynamically linked executables - but those are still the steps.

That memory generally includes both code and data although the OS may or may not make a distinction between them.

So when you declare a global variable in C, the compiler (including the linker) puts information into the executable image to make sure space for the variable is included when the program is loaded, and initialized as needed.

In the simplest case where the OS just loads a blob of data to a fixed address, that would just be including the variable's initial value in the blob and making sure references to it use the right address.  That's a common way for this to work but not the only way.  The only thing that is required is that the variables are allocated before main() starts, and exist until after main() exits.

Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Nominal Animal on October 02, 2023, 09:49:06 am
One of the key concepts to understand before one can understand memory allocation is address spaces.

As an example, let's take the 8051.  It is a good example, because it has several address spaces: program memory, internal data RAM, external data memory, and the special function register space.  The key is that knowing an address, say the 5th byte or address 4, may refer to any address space.  (Even AMD64/x86-64 supports at least three concurrent separate address spaces: the default one, and two others specified via FS and GS segment register selectors.)

Each address space can be split into many different logical parts.  For example, most current 8051 implementations have 256 bytes of internal RAM, where the upper 128 bytes is typically used for stack.  If we use the ELF object file format terms, we call human-decided logical address space parts sections.

Typical microcontroller code and applications running under real OSes have one or two sections in read-only program memory: .text, and possibly .rodata; and up to four sections in the data memory: initialized data in .data, uninitialized or zero-initialized data in .bss, an unnamed section for stack, and optionally an unnamed section for heap.  On some microcontrollers like AVRs, and under older operating systems like DOS, the heap and stack occupied the same section (address range), with heap addresses growing up or in increasing addresses, and stack growing down with decreasing addresses.
Under fully featured operating systems, there are additional syscalls (brk/sbrk, mmap) to request additional accessible address ranges, if needed at run time.

Thus, if we look at typical 8051 or other microcontroller code, static and global variables' addresses are fixed at link time, and so are the address ranges used for dynamic allocation.  If the dynamic allocations always occur with specific sizes in a specific order, the dynamically obtained addresses will always be the same.  (Under desktop and server OSes, the address spaces can actually be randomized, but that is an intentional security measure.)



As a real-world analog, think of each address space available on your microcontroller as a bookshelf with movable dividers with labels.
At compile time, the compiler generates machine code, but instead of putting code and static variables and global variables directly in the bookshelf, it puts them in boxes (sections).
At link time, these boxes are arranged on the bookshelf according to the 'linker script', or link-time directions.  (In the past, linkers were separate tools one would execute in one's build scripts, but nowadays we tend to let the compiler call the linker directly.)
As I mentioned earlier, on a microcontroller, an additional box may be set up for stack, and another for heap, for run-time allocations.

In a very real sense, the heap allocator and deallocator functions –– in C, typically malloc() and free() –– put and remove additional sub-boxes into the heap box at run time; essentially just reserving small sub-chunks of address space within the pre-reserved region.

Thus, the concept of memory allocation 'at compile time or at run time' is itself wrong: the address ranges are set at link time –– which can be at build time when linking statically, or at run time when linking dynamically, and often both; but for microcontrollers with fixed address spaces is at the end of the build ––, but additional ranges can be requested from the operating system if running under one; and if a 'heap' range is set up or requestable from the operating system, it can be managed at run time like books from a library.  Some specific addresses can also be set at compile time, for example peripheral device I/O addresses and such.  It is a hierarchy that starts in the code at compile time, and extends all through to run time; not either-or.

Therefore, the correct answer is that some special addresses can be fixed in the code and thus set at compile time; function addresses and static and global variable addresses are set at link time; sections/regions like stack are set at link time; heap region is initialized at link time but more or additional heap regions can be requested from the OS if running under one at run time; and one or more contiguous parts within the heap region or regions can be allocated and deallocated at run time.  Hierarchy of things, not either-or.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on October 02, 2023, 09:59:42 am

I wonder if the OP will?

I tried the link, but it produced assembly code. I'm not familiar with assembly language, so I didn't comment on it. I know it's good to learn assembly language, but at this time, I am focusing on learning C language only

I'm going to go somewhat against what people have said, that you "need" to understand how processors work at a machine level to understand C.  However, you are asking a question that goes beyond the scope of just the language.  All C requires is that the implementation give global and static variables a lifetime spanning the entire duration of the program.  How and where that happens is fundamentally a machine question, and if you won't take "it just does" for an answer you are going to need to learn about how actual machines operate.

Some aspects of C really cannot be understood without understanding how processors+memory work. Even "experts" have had problems with that concept, as demonstrated by Hans Boehm (of the conservative GC fame) needing to write "Threads cannot be implemented as a library" in exhaustive detail http://hboehm.info/misc_slides/pldi05_threads.pdf (http://hboehm.info/misc_slides/pldi05_threads.pdf)

The OP is targeting the 8051. Given how hostile that is to plain-vanilla C without extensions, good luck getting C code to work on that without understanding the 8051's architectural features.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: DiTBho on October 02, 2023, 11:46:20 am
The Intel 8051 is one of those architectures that you'd best program in assembly.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on October 02, 2023, 11:55:44 am
The Intel 8051 is one of those architectures that you'd best program in assembly.

That's my impression, but some of the PICs look worse.

In either case I would want to verify that a compiler had translated my code into sane machine code.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: Kittu20 on October 02, 2023, 03:25:36 pm
The AT89S52 microcontroller executes programs from its flash memory by fetching, decoding, and executing the instructions stored there. It uses a program counter to keep track of the next instruction to execute, allowing it to run the program sequentially.

The opcode is the part of an instruction that defines the operation to be performed (e.g., 'MOV' for move), while the operand specifies the data or location involved (e.g., '#5' for a constant). Machine code is the binary representation of these instructions stored in the microcontroller's flash memory, which the CPU directly executes.

SFR (Special Function Register) addresses, as detailed in Datasheet Table 1 on Page 5, are memory-mapped locations that provide access to specific functions and configurations. Microcontroller programmers utilize these SFR addresses to configure I/O pins, set up timer/counters, control interrupts, and interact with other hardware modules.

I can find SFR address information in the header file.

https://www.keil.com/dd/docs/c51/atmel/regx51.h (https://www.keil.com/dd/docs/c51/atmel/regx51.h)

I think the compiler assigns these addresses at compile time, so when it generates the executable file they should be hard-coded. They remain the same when the program is running on the microcontroller.
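For example, the Keil header linked above declares the SFRs roughly like this (sfr and sbit are Keil C51 extensions rather than standard C; SDCC spells the same thing differently, and the led_on() function is just my illustration):

Code: [Select]
/* Keil C51 style (as in regx51.h): the address is fixed in the source,
   so the compiler hard-codes it into every instruction that touches P1. */
sfr  P1   = 0x90;       /* Port 1, SFR at address 0x90 */
sbit P1_0 = P1^0;       /* bit 0 of Port 1 */

/* Roughly equivalent SDCC syntax: */
/* __sfr __at (0x90) P1; */

void led_on(void)
{
    P1_0 = 1;           /* typically compiles to a single SETB P1.0 instruction */
}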

Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: ejeffrey on October 02, 2023, 06:14:30 pm
Some aspects of C really cannot be understood without understanding how processors+memory work. Even "experts" have had problems with that concept, as demonstrated by Hans Boehm (of the conservative GC fame) needing to write "Threads cannot be implemented as a library" in exhaustive detail http://hboehm.info/misc_slides/pldi05_threads.pdf (http://hboehm.info/misc_slides/pldi05_threads.pdf)

I look at that in exactly the opposite way.  Understanding the machine doesn't really help you here.  Actually it can and does lead you astray, since the guarantees that the hardware makes don't matter if the compiler doesn't expose them to the programmer in a usable fashion.  For instance, even recently I have argued to people still using volatile for multi-thread synchronization that they shouldn't do that, because it is incorrect.  They usually claim something like "I only care about x86 and this code is correct under x86 memory order guarantees."  That would be fine if they were using assembly, but it's not correct when writing C (or C++) because the C memory model is not the same as the architectural memory model.

C and C++ needed a memory model to enable multi-threaded programming, and knowledge of the underlying architecture and its memory model *was not* sufficient to produce correct multi-threaded code without it.

Quote
The OP is targeting the 8051. Given how hostile that is to plain-vanilla C without extensions, good luck getting C code to work on that without understanding the 8051's architectural features.

That's a fair point, although at that point you have to debate whether you are programming C or "a C-like language"
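For anyone wondering what the correct alternative looks like, here is a minimal sketch using the C11 <stdatomic.h> facilities (names and values are just for illustration):

Code: [Select]
#include <stdatomic.h>
#include <stdbool.h>

/* Shared data published by one thread and consumed by another. */
static int payload;
static atomic_bool ready = false;

/* Producer: write the data, then publish it. The release store
   guarantees 'payload' is visible before 'ready' becomes true. */
void produce(void)
{
    payload = 42;
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Consumer: wait for the flag, then read the data. The acquire
   load pairs with the release store above. */
int consume(void)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;  /* spin */
    return payload;
}

/* A 'volatile bool ready' version of this may appear to work on x86,
   but the C standard gives it no ordering guarantee, so the compiler
   (or a weaker architecture) is free to break it. */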
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: SiliconWizard on October 02, 2023, 07:29:02 pm
Some aspects of C really cannot be understood without understanding how processors+memory work. Even "experts" have had problems with that concept, as demonstrated by Hans Boehm (of the conservative GC fame) needing to write "Threads cannot be implemented as a library" in exhaustive detail http://hboehm.info/misc_slides/pldi05_threads.pdf (http://hboehm.info/misc_slides/pldi05_threads.pdf)

I look at that in exactly the opposite way.  Understanding the machine doesn't really help you here.  Actually it can and does lead you astray,

I actually agree with both of you, on different levels. The two are not mutually exclusive, contrary to how it may look at first thought.

- I agree with the fact that understanding the low-level aspects of the hardware is a definite requirement for really understanding C.
- But I do agree that with this knowledge must come the appropriate level of abstraction when you design your code, and this level entirely depends on what exactly you are designing.

C sits precisely in this spot - one that has often been considered a sweet spot, at least until now - between low-level and high-level, and as such it requires a good command of both.

If you want much lower level than C, use assembly. Conversely, if the high-level aspects of C are not high-level enough for you or for implementing a given piece of software - at least according to your standards - use another language as well.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: SiliconWizard on October 02, 2023, 07:30:43 pm
The Intel 8051 is one of those architectures that you'd best program in assembly.

It's not pretty but I've used C for 8051-based MCUs (like the Cypress FX stuff) using SDCC and it was perfectly usable once you got familiar with the limitations.
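Most of those limitations surface as explicit, non-standard memory-space qualifiers. A rough sketch of what SDCC-flavoured 8051 code looks like (qualifier spellings from memory, so check the SDCC manual):

Code: [Select]
/* SDCC 8051 extensions: the programmer chooses the memory space explicitly. */
__sfr __at(0x90) P1;            /* SFR bound to its datasheet address       */
__data unsigned char fast_var;  /* internal RAM, directly addressable       */
__idata unsigned char more_ram; /* internal RAM, indirectly addressable     */
__xdata unsigned char buf[256]; /* external RAM, accessed via MOVX          */
__code const unsigned char lut[16] = {0}; /* constants placed in flash      */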
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: DiTBho on October 03, 2023, 10:43:54 am
The Intel 8051 is one of those architectures that you'd best program in assembly.

It's not pretty but I've used C for 8051-based MCUs (like the Cypress FX stuff) using SDCC and it was perfectly usable once you got familiar with the limitations.

Perfectly usable ... sure. A colleague of mine also spoke well of it, but then I noticed that he wasn't even aware of the various problems that the 8032 and 8051 have; he had taken a ready-made template project found on GitHub, modified it, and concluded "oh, look, it can be programmed, perfectly usable".

A bloody Genius  :o :o :o
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on October 03, 2023, 07:04:29 pm
Some aspects of C really cannot be understood without understanding how processors+memory work. Even "experts" have had problems with that concept, as demonstrated by Hans Boehm (of the conservative GC fame) needing to write "Threads cannot be implemented as a library" in exhaustive detail http://hboehm.info/misc_slides/pldi05_threads.pdf (http://hboehm.info/misc_slides/pldi05_threads.pdf)

I look at that in exactly the opposite way.  Understanding the machine doesn't really help you here.  Actually it can and does lead you astray, since the guarantees that the hardware makes don't matter if the compiler doesn't expose them to the programmer in a usable fashion.  For instance, even recently I have argued to people still using volatile for multi-thread synchronization that they shouldn't do that, because it is incorrect.  They usually claim something like "I only care about x86 and this code is correct under x86 memory order guarantees."  That would be fine if they were using assembly, but it's not correct when writing C (or C++) because the C memory model is not the same as the architectural memory model.

We agree, except that my attitude is that not understanding the hardware allowed people to be blind to C's manifest deficiencies w.r.t. memory consistency.

I say "manifest" because K&R C was explicit that the language did not address threading, and that that was the responsibility of libraries. Subsequent generations of programmers didn't realise that, and blithely ignored the issue. Hence the need for Boehm's paper.

Instead the C committee spent years amusing themselves over whether it should be possible or impossible to "throw away constness"; there are solid arguments for and against each.

That lack of decision made me decide that C was becoming part of the problem rather than part of the solution.

Others finally seem to agree, and soon Rust will supplant C in the same way that COBOL has been supplanted by Java and other languages.

Quote
C and C++ needed a memory model to enable multi-threaded programming, and knowledge of the underlying architecture and its memory model *was not* sufficient to produce correct multi-threaded code without it.

Scandalously, C took a quarter of a century to gain a memory model, even though it was obviously mandatory - as per Java!

Whether C's memory model and compilers' implementations of it are sufficient and correct is an open question in my mind. It is a very complex, subtle topic; even Java got it subtly wrong and had to correct it after a decade of experience.

Quote
Quote
The OP is targeting the 8051. Given how hostile that is to plain-vanilla C without extensions, good luck getting C code to work on that without understanding the 8051's architectural features.

That's a fair point, although at that point you have to debate whether you are programming C or "a C-like language"

Yup :)
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: tggzzz on October 03, 2023, 07:11:21 pm
If you want much lower level than C, use assembly. Conversely, if the high-level aspects of C are not high-level enough to you or for implementing a given piece of software - at least according to your standards - use another language as well.

In the 1990s C reached a point where it needed to decide whether to be a general-purpose high-level programming language, or a special-purpose close-to-the-silicon language. Either is good and acceptable.

It failed to decide, tried to be both, and failed at both.

People have moved on to Java for HLL purposes, and are moving on to Rust for close to silicon purposes. Good! About time too :)
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: SL4P on October 03, 2023, 10:34:04 pm
You can be allocated a parking space, but never use it.
You can be assigned a parking space, but never use it.

There is a subtle difference.
Allocation means a space (as yet unspecified) has been set aside for you; it will only be assigned to you when you need it.

Until it's been 'assigned', other vehicles can use it, and you'll never know about it.
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: c64 on October 03, 2023, 11:25:23 pm
Short answer to the original question.

Global variables

If your target has an OS, it is the OS that allocates the memory, before your main() is started.

If you are on bare metal, there is no allocation needed. You already own all the memory available.

Locals and dynamic

These are allocated by your running application, usually on the stack/heap.
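A minimal sketch of the three cases (names are just examples):

Code: [Select]
#include <stdlib.h>

int counter;                 /* global: space reserved before main() runs
                                (by the loader, or simply by the memory map
                                on bare metal)                              */

void work(void)
{
    int tmp = 0;             /* local: lives on the stack only while
                                work() is executing                         */

    int *buf = malloc(100 * sizeof *buf);   /* dynamic: taken from the heap
                                               at run time                  */
    if (buf != NULL) {
        buf[0] = tmp;
        free(buf);           /* and must be given back by the program       */
    }
}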
Title: Re: Memory Allocation in C: Compile Time vs. Run Time
Post by: c64 on October 03, 2023, 11:37:49 pm
The compiler and linker do not allocate anything themselves. They calculate how much memory your application will require for all the global variables, and that is where they will be (variable x at address 0, variable y at address 8, etc.) when the application is running on your target device.
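On a hosted toolchain you can watch that happen; a sketch assuming GCC and binutils (file and symbol names are just examples):

Code: [Select]
/* linker_demo.c - two globals whose addresses are fixed at link time */
int x;         /* zero-initialised: ends up in .bss  */
int y = 123;   /* initialised:      ends up in .data */

int main(void) { return x + y; }

/* After "gcc linker_demo.c -o demo", running "nm demo | grep ' [BbDd] '"
   lists the addresses the linker picked for x and y. They are baked into
   the binary and stay the same every run (ignoring ASLR of the load base).
   On a microcontroller, the equivalent information is in the linker map
   file produced by the toolchain. */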