Poll

Do you ever code to free() NULL pointers?

Sure, why not!
14 (53.8%)
Hell no, only free allocated memory
12 (46.2%)
I only use a memory safe language
0 (0%)

Total Members Voted: 26

Voting closed: May 17, 2020, 10:52:59 pm

Author Topic: Poll: Freeing NULL Pointers  (Read 827 times)


Online hamster_nz

  • Super Contributor
  • ***
  • Posts: 2285
  • Country: nz
Poll: Freeing NULL Pointers
« on: May 13, 2020, 10:52:59 pm »
I've been made aware of this in the man page for free(3):

Quote
The free() function frees the memory space pointed to by ptr, which must have been returned by a previous call to malloc(), calloc() or realloc(). Otherwise, or if free(ptr) has already been called before, undefined behavior occurs. If ptr is NULL, no operation is performed.

Currently I deliberately avoid freeing NULL pointers. So if memory is free()d in a different function from where it was malloc()ed, I usually use this pattern:

Code: [Select]
   if(mystruct->ptr) {
      free(mystruct->ptr);
      mystruct->ptr = NULL;
   }

Now I see I could just:

Code: [Select]
   free(mystruct->ptr);
   mystruct->ptr = NULL;

I think I might move to the dark side...
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1149
  • Country: fi
Re: Poll: Freeing NULL Pointers
« Reply #1 on: May 13, 2020, 10:58:33 pm »
If it's normal that nullptrs are freed, there's no point in checking. If they should not be nullptrs, it's a good place for a runtime assert. But a check that does nothing if the pointer is NULL serves no purpose.
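A minimal sketch of the assert flavour of that idea (the struct and function names here are my own, purely for illustration):

```c
#include <assert.h>
#include <stdlib.h>

struct widget {
    char *buffer;   /* owned allocation, or NULL if not yet allocated */
};

/* If reaching this point with a NULL buffer would indicate a logic
   error elsewhere, assert on it instead of silently skipping free(). */
void widget_release_buffer(struct widget *w)
{
    assert(w->buffer != NULL);   /* traps in debug builds only */
    free(w->buffer);
    w->buffer = NULL;
}
```

In release builds (with NDEBUG defined) the assert compiles away and this degenerates to the plain free-and-clear pattern.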

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #2 on: May 13, 2020, 11:00:30 pm »
I don't. Mostly because in many cases, freeing an allocated "object" may require further operations to properly free it aside from freeing the object itself, so I make it a habit to test for NULL pointers before proceeding. But if the freeing is associated with no other steps, then it's useless to test for NULL indeed. It may still be an opportunity to check that all your free'ing operations have a matching allocation. YMMV.

But you're right, provided that the implementation of free() on your particular target is compliant, it's perfectly valid.

« Last Edit: May 13, 2020, 11:02:33 pm by SiliconWizard »
 

Online dunkemhigh

  • Super Contributor
  • ***
  • Posts: 1915
Re: Poll: Freeing NULL Pointers
« Reply #3 on: May 14, 2020, 02:38:10 am »
Quote
a check that does nothing

Strictly speaking, the check prevents a spurious write.

If you're using strncpy or snprintf or any of those, do you automatically shove a nul at the end of the destination as a matter of course, or do you check the returned value and only write the nul if necessary? I generally write it anyway just to be sure, and I generally check for NULL on the same basis. Doing that at the point of freeing, as opposed to after the free call, isn't that much of a biggie. Forgetting to do it when something extra needs to be done can be pretty huge.
 

Offline Rick Law

  • Super Contributor
  • ***
  • Posts: 2827
  • Country: us
Re: Poll: Freeing NULL Pointers
« Reply #4 on: May 14, 2020, 03:49:53 am »
Do you really mean "freeing NULL pointers" or do you actually mean "setting the pointer to NULL when an object is no longer used?"

Freeing NULL pointers makes no sense.  What is it freeing when the pointer is not pointing to anything?

On the other hand, if you mean "setting the pointer to NULL when the object is no longer in use":  I don't recall where I read it, but I do remember reading that for Java it is good practice to set pointers to unused objects to NULL - that makes it clear to the garbage collector that the memory for the object can be freed.
 

Online hamster_nz

  • Super Contributor
  • ***
  • Posts: 2285
  • Country: nz
Re: Poll: Freeing NULL Pointers
« Reply #5 on: May 14, 2020, 04:04:41 am »
Do you really mean "freeing NULL pointers" or do you actually mean "setting the pointer to NULL when an object is no longer used?"

Freeing NULL pointers makes no sense.  What is it freeing when the pointer is not pointing to anything?

Yes, exactly the second. Some feel it makes sense because it saves a few lines of code, and believe it avoids bugs.



Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1149
  • Country: fi
Re: Poll: Freeing NULL Pointers
« Reply #6 on: May 14, 2020, 07:35:55 am »
Quote
a check that does nothing
Strictly speaking, the check prevents a spurious write.
I meant that there is no else case. If freeing a nullptr is an unexpected error (otherwise, why bother checking?) it should be handled in some way.

Setting the value of a free'd pointer works best if you can set it to a trapping value. Unfortunately, on most microcontrollers it's either not possible to generate access traps, or the region around address zero contains registers or valid memory. Both double-frees and use-after-free are serious, memory-corrupting bugs, and frequently the root cause of security vulnerabilities, so it's certainly worth protecting yourself against them. But silently ignoring a potential error case isn't right either.
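One way to sketch the trapping-value idea (the poison constant and function name are my own; on a real target you would pick an address that is guaranteed to fault):

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative poison value: on a target where this address faults,
   any use-after-free through the cleared pointer traps immediately
   instead of silently reading freed (or reused) memory. */
#define FREED_POISON ((void *)(uintptr_t)0xDEADBEEFu)

void release(void **pp)
{
    free(*pp);
    *pp = FREED_POISON;
}
```

On hosted platforms NULL itself already traps on dereference, which is why the plain ptr = NULL variant is usually enough there.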

Offline golden_labels

  • Regular Contributor
  • *
  • Posts: 185
  • Country: pl
Re: Poll: Freeing NULL Pointers
« Reply #7 on: May 14, 2020, 09:16:17 am »
No, it needlessly duplicates an operation that is an inherent part of free itself. It makes as much sense as:
Code: [Select]
if (NULL != ptr) {
    if (NULL != ptr) {
        doSomething(ptr);
    }
}
Skipping the NULL check may seem weird if you first missed the fact that free has well-defined and expected behaviour for NULL. Just to provide more general source, C11 §7.22.3.3:
Quote
The free function causes the space pointed to by ptr to be deallocated, that is, made
available for further allocation. If ptr is a null pointer, no action occurs.
Another function that can be used that way is realloc (§7.22.3.5):
Quote
If ptr is a null pointer, the realloc function behaves like the malloc function for the
specified size.
This also shows why this situation is natural and not merely some “weird”, unexpected operation of the kind I usually oppose. realloc is literally designed to be used with NULL, and free fits with realloc nicely.

The question should rather be about assigning NULL. Some suggest it prevents dangling pointers, and in some contexts it does. If you are freeing something in a structure that will live beyond the current function, or a pointer that may later be accessed in the current function, it is a good idea that will save you a ton of pain. If you want to make a habit of doing so indiscriminately, that is still fine: good habits are good unless proven harmful, even if in some cases they do nothing. However, if you are consciously assigning NULL to something that is never to be used again, do not waste your time. It will not do what you think it does, because to the compiler it is a no-op and will most likely be removed from the output binary.

Strictly speaking, the check prevents a spurious write.
Strictly speaking, it introduces a spurious test and a branch, and prevents either nothing or a call in a situation where a call is expected. Good thing most architectures reserve some kind of zero register, because that test alone would otherwise introduce a spurious write itself.
« Last Edit: May 14, 2020, 09:21:18 am by golden_labels »
Worth watching: Calling Bullshit — protect your friends and yourself from bullshit!
 

Online hamster_nz

  • Super Contributor
  • ***
  • Posts: 2285
  • Country: nz
Re: Poll: Freeing NULL Pointers
« Reply #8 on: May 14, 2020, 10:13:39 am »
So I went hunting... it seems that older C libraries (from when I learned C in the 80s) did indeed crash if you tried to free a NULL pointer. For example, the original K&R C book has an implementation of free() that crashed on NULL.

As mentioned, since ANSI C a standards-compliant free(NULL) does nothing.

And my second result for the search "example C free" (https://www.guru99.com/free-in-c-example.html) showed exactly the sort of thing that made me flinch - it calls free() even on the path where ptr is NULL:

Code: [Select]
#include <stdio.h>
#include <stdlib.h>  /* malloc() and free() need this */
int main() {
  int* ptr = malloc(10 * sizeof(*ptr));
  if (ptr != NULL){
    *(ptr + 2) = 50;
    printf("Value of the 2nd integer is %d",*(ptr + 2));
  }
  free(ptr);
}

However, armed with my newfound knowledge this is fine. Perhaps a little odd, but causes no harm.

(The first example I found - https://www.tutorialspoint.com/c_standard_library/c_function_free.htm - made me flinch in so many ways... don't look!)
« Last Edit: May 14, 2020, 10:24:48 am by hamster_nz »
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: edavid, I wanted a rude username

Offline chriva

  • Regular Contributor
  • *
  • Posts: 102
  • Country: se
Re: Poll: Freeing NULL Pointers
« Reply #9 on: May 14, 2020, 11:36:12 am »
If anything, it's important to make sure pointers are set to 0 before they're first allocated and after they've been freed, if you have any intention of testing for a 0 pointer before freeing.

Reason:
In larger apps where you don't have full control over the program flow, it's entirely possible for GUI elements or other external triggers to enter that part of the code before the memory has even been allocated (example: you want to make sure another object is freed before allocating a new one).

Basically, I've run over myself enough times to find it worthwhile to do that as a precaution. Especially if I have to check for 0 pointers :)
« Last Edit: May 14, 2020, 11:47:13 am by chriva »
 

Offline golden_labels

  • Regular Contributor
  • *
  • Posts: 185
  • Country: pl
Re: Poll: Freeing NULL Pointers
« Reply #10 on: May 14, 2020, 12:14:36 pm »
hamster_nz:
Some comments on that last code, as it contains some traps.

While malloc may return NULL, on some platforms it may return a non-NULL value that is still invalid. That’s the case on Linux systems with overcommit (which is the default). This is a trap for new programmers in many ways. It is also why you may sometimes see Linux-specific code which doesn’t care about NULL returned directly from malloc, or just calls abort: the test is nearly meaningless, and it is more likely that a write to that memory will fail than that you will see a NULL there, so it is fine to just let it crash that way.

That brings another question: what if you need to allocate memory in a manner that, if it crashes, does so in a predictable place? The answer is: write the memory immediately after allocating it. And, as it happens, C already has a function to do that: calloc. It will allocate objects and fill their bytes with 0, so if a crash is to happen, it will happen at that call. And if you receive a non-NULL value, it reliably indicates that the memory is valid. There are some situations in which you want to allocate memory but not obtain it from the system at that particular moment, and then this method is not fine. But those are rare and, if you ever need them, you will likely already know about that peculiarity or even use mmap directly.

calloc has one more advantage: it handles multiplication overflow properly. 10 * sizeof(int) usually can’t overflow, but that is not true for an arbitrary expression, in particular if the value is based on user input.

The disadvantage is that tools like Valgrind will not be able to detect invalid reads from calloc-ed memory. There is also no standard realloc counterpart to calloc.
Worth watching: Calling Bullshit — protect your friends and yourself from bullshit!
 

Online dunkemhigh

  • Super Contributor
  • ***
  • Posts: 1915
Re: Poll: Freeing NULL Pointers
« Reply #11 on: May 14, 2020, 01:43:41 pm »
Quote
I meant that there is no else case.

There isn't in that example, but it's easy to put one in if you have the framework :) I think the implied 'else' would be problem-specific and not relevant to an example.

Of course, there may not be an else, but just like it's good practice to still have the {} in an if which has only one statement, it might be good practice to have this test anyway for future hooking into. And as demonstration (to a maintainer) that you knew about the possibility of a NULL there.
 

Online dunkemhigh

  • Super Contributor
  • ***
  • Posts: 1915
Re: Poll: Freeing NULL Pointers
« Reply #12 on: May 14, 2020, 01:46:18 pm »
Quote
Just to provide more general source, C11 §7.22.3.3:

Ouch! Maybe you're a C11 buff and get tasked to maintain an older codebase, and don't realise that its free() doesn't handle NULL well... That alone is a good reason to shove the check in unless there's a better reason not to.
 

Online dunkemhigh

  • Super Contributor
  • ***
  • Posts: 1915
Re: Poll: Freeing NULL Pointers
« Reply #13 on: May 14, 2020, 01:54:27 pm »
Quote
While malloc may return NULL, on some platforms it may return a non-NULL value that will still be invalid.

I would expect to manually set the pointer to NULL in that case. malloc() should have some mechanism for returning status - it isn't acceptable to not say a thing when it's failed and, instead, rely on the code crashing. In fact, I can't believe such a malloc() doesn't have some other indicator of success or failure.

The convention is that NULL is invalid. Doesn't matter how that got there - if malloc() does it then fine, use the return value. If not, it's up to you as the programmer to sort it out. That's why we are programmers and not script kiddies! The exception is if your system has a different convention. That's fine. Whatever it is, you need to be consistent.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #14 on: May 14, 2020, 01:58:44 pm »
So I went hunting... it seems that older C libraries (from when I learned C in the 80s) did indeed crash if you tried to free a NULL pointer. For example, the original K&R C book has an implementation of free() that crashed on NULL.
As mentioned, since ANSI C a standards-compliant free(NULL) does nothing.

Yes, hence why I said a compliant implementation. I would even expect a number of implementations with not so old libraries to possibly crash as well, or otherwise fuck something up.

Code: [Select]
#include <stdio.h>
#include <stdlib.h>  /* malloc() and free() need this */
int main() {
  int* ptr = malloc(10 * sizeof(*ptr));
  if (ptr != NULL){
    *(ptr + 2) = 50;
    printf("Value of the 2nd integer is %d",*(ptr + 2));
  }
  free(ptr);
}

That is one example of ptr being checked against NULL anyway, to proceed with other steps before freeing it. So in this example, I don't really see a reason not to put the 'free(ptr)' call inside the if block.

But maybe some people would consider a matching free() to any malloc(), whether it succeeds or not, good practice/good style.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1724
  • Country: fi
    • My home page and email address
Re: Poll: Freeing NULL Pointers
« Reply #15 on: May 14, 2020, 02:03:54 pm »
I have never used an ISO C library that handled free(NULL); incorrectly.  It is safe to do.  (That is, anything that claims to implement C89 or later.  The older buggy C libraries have multitudes of other quirks you need to cater for anyway.)

When using POSIX C, the following pattern is often used for reading input line-by-line:
Code: [Select]
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    char  *line = NULL;
    size_t  size = 0;
    ssize_t  len;

    while (1) {
        len = getline(&line, &size, stdin);
        if (len == -1)
            break;

        /* Do something with line.
             line has len characters, including the newline (if any). */
    }

    /* Discard unneeded line buffer */
    free(line);
    line = NULL;
    size = 0;

    /* Do something else, possibly read another file the same way */

    return EXIT_SUCCESS;
}
At the Do something with line point, it is completely safe and acceptable to steal the buffer, or even free it, if one also sets line = NULL, size = 0.  Just like for the initial line, getline() will then dynamically allocate a buffer large enough for the next line.

I regularly apply the topic at hand, in the point with comment Discard unneeded line buffer.  If there was no data to read from standard input, then at that point line could well be NULL.  However, it is completely safe to just free it, and ensure the pointer and the allocated size get zeroed.

But why ensure line = NULL, size = 0?  Because this is a pattern that avoids a common error case: use-after-free.  It costs basically nothing, and it helps with debugging when something goes b0rkb0rkb0rk.

I have seen many new C programmers, and even tutorials, that suggest using malloc() to allocate an initial buffer.  That is a complete waste: extra code (with bug risks) and zero benefits.

The above pattern has proven its worth to me, in that it yields code that tends to have fewer bugs and is easier to debug.  So, nowadays I apply this pattern extensively.

As an example, consider a highly dynamic (entries and their number change often) hash table implementation based on
Code: [Select]
struct hash_entry {
    struct hash_entry *next;  /* Next entry in the same table slot */
    size_t  hash;  /* Actual hash value for this entry */
    /* Payload */
};

struct hash_table {
    size_t  size;
    size_t  entries;
    struct hash_entry **slot;
};
#define  HASH_TABLE_INITIALIZER  { 0, 0, NULL }

static inline void hash_table_init(struct hash_table *ht)
{
    ht->size = 0;
    ht->entries = 0;
    ht->slot = NULL;
}
The idea is that the user creates a hash table using either
    struct hash_table  my_table = HASH_TABLE_INITIALIZER;
or
    struct hash_table  my_table;
    hash_table_init(&my_table);

The actual slot pointer array is allocated or reallocated whenever the size of the hash table is changed; and typically that happens when the first entry is added, last entry is removed, or the ratio entries/size exceeds (so the table is grown) or drops below (so the table is shrunk) some heuristic limit.

In this scheme, each hash table slot is a pointer, and an unused slot is just a NULL pointer.  (Obviously, an item with hash h is stored in the chain hanging off .slot[h % .size].)  Because each hash table entry has the original hash in it, the hash table can be resized (and entries moved to their corresponding new slots) when needed.  The one shown is strictly a single-threaded structure; multithreaded access requires a careful locking scheme anyway.

If you implement the operations for this (type of) hash table, you'll see how useful it is to explicitly keep unused pointers marked NULL.  Here, the fact that even the slot array itself will not exist when .size == 0 does mean an "extra" check (an "extra" conditional) in the function implementations, but it also means they become much simpler.  For example, you can provide a function that allows the caller to specify exactly how many slots they want, or how many new hashes they intend to add, without adding complexity to the other functions.
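For illustration, a lookup under that convention might look like this (my own sketch, restating the structures above with the payload omitted; not code from the post):

```c
#include <stddef.h>

struct hash_entry {
    struct hash_entry *next;  /* next entry in the same table slot */
    size_t hash;              /* actual hash value for this entry */
};

struct hash_table {
    size_t size;
    size_t entries;
    struct hash_entry **slot;
};

/* Returns the first entry with the given hash, or NULL.  Because an
   empty table is just { 0, 0, NULL }, the single .size check covers
   both "no slot array exists yet" and the modulo's precondition. */
struct hash_entry *hash_table_find(const struct hash_table *ht, size_t hash)
{
    if (ht->size == 0)
        return NULL;

    for (struct hash_entry *e = ht->slot[hash % ht->size]; e != NULL; e = e->next)
        if (e->hash == hash)
            return e;

    return NULL;
}
```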

(This is not the most efficient hash table implementation, obviously, but it has proven to be quite acceptable and robust in real life.)

So yeah, it is definitely a useful pattern.  While it is not very common, the cost (NULLifying the pointer after free()) is utterly negligible in real life, and the pattern itself helps write more robust and easier-to-maintain code, which to me means it is worth it.

Of course, even I avoid doing that just before returning from main or exit()ing.  I fully trust the OS to clean up after the process anyway, so freeing any dynamic memory allocations (except for deleting any shared memory segments!) just before the process exits isn't part of this pattern.
« Last Edit: May 14, 2020, 02:05:38 pm by Nominal Animal »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #16 on: May 14, 2020, 02:05:49 pm »
Quote
While malloc may return NULL, on some platforms it may return a non-NULL value that will still be invalid.

I would expect to manually set the pointer to NULL in that case. malloc() should have some mechanism for returning status - it isn't acceptable to not say a thing when it's failed and, instead, rely on the code crashing. In fact, I can't believe such a malloc() doesn't have some other indicator of success or failure.

I have personally never run into cases of malloc() that would NOT return NULL in case of failure to allocate the requested memory.

I've read about this "overcommit" issue, but have never run into it personally. Yes, that would certainly suck to have to deal with.

According to this: https://pubs.opengroup.org/onlinepubs/009695399/functions/malloc.html
which refers to ISO C (I guess C90):
Quote
Upon successful completion with size not equal to 0, malloc() shall return a pointer to the allocated space. If size is 0, either a null pointer or a unique pointer that can be successfully passed to free() shall be returned. Otherwise, it shall return a null pointer [CX] [Option Start]  and set errno to indicate the error.

So not returning a null pointer if not successful doesn't seem to be compliant. I'd have to check C99 and C11, but I doubt this has really changed?

Now of course - that may subtly depend on what a given implementation calls "successful completion". Little buggers. ::)
 

Offline golden_labels

  • Regular Contributor
  • *
  • Posts: 185
  • Country: pl
Re: Poll: Freeing NULL Pointers
« Reply #17 on: May 14, 2020, 02:26:35 pm »
Ouch! maybe you're a C11 buff and get tasked to maintain an older codebase and don't realise this free() doesn't handle NULL well... That alone is a good reason to shove it in unless there's a better reason not to.
It’s the same in 9899:1999, §7.20.3.2. The draft for 9899:1990, §7.10.3.2, also states the same. Sure, one may be maintaining some non-conforming or really old software. But if someone works with tools that fail to support a 30-year-old version of the language, I am sure that situation is very different from writing new software or even working with relatively recent code.

I would expect to manually set the pointer to NULL in that case. malloc() should have some mechanism for returning status - it isn't acceptable to not say a thing when it's failed and, instead, rely on the code crashing. In fact, I can't believe such a malloc() doesn't have some other indicator of success or failure.
There is no way to detect that condition at the point of the malloc call. As far as I know there is no way to detect it at all in a manner that doesn’t kill the process. Sorry, sometimes reality doesn’t match the language’s abstraction.

I have personally never run into cases of malloc() that would NOT return NULL in case of failure to allocate the requested memory.
Make sure you have swap disabled (unless you want 20 minutes of thrashing ;)), ensure you have important data saved (the OOM killer may eat any of your processes) and run this, setting n to more GiB than you have memory:
Code: [Select]
// WARNING: may kill other apps, may cause swap thrashing
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    enum {n = 16}; // more than you have GiB of memory
    void* gigmems[n] = {0};
   
    for (int i = 0; i < n; ++i) {
        puts("Allocating 1GiB.");
        fflush(stdout);
        gigmems[i] = malloc(UINT32_C(1073741824));
        printf("Result (total: %dGiB): %p\n", i + 1, gigmems[i]);
        fflush(stdout);
    }
   
    printf("Successfully allocated %d x 1GiB\n", n);
    fflush(stdout);
   
    for (int i = 0; i < n; ++i) {
        printf("Writing 1GiB to %p\n", gigmems[i]);
        fflush(stdout);
        memset(gigmems[i], 0x5A, UINT32_C(1073741824));
    }
   
    // Not even caring about freeing, as the program will not reach this
   
    return EXIT_SUCCESS;
}

hamster_nz:
Extending a bit on what I wrote earlier about ptr = NULL not necessarily doing what you think it does, see the examples below. Note that I am not claiming that assigning NULL is wrong! I am merely showing that there are situations in which the final code will not do exactly what you expected, if the compiler notices that the assignment can have no further effect. That code:
Code: [Select]
#include <stdlib.h>

int fooize(int* ptr) {
    int const value = *ptr;
   
    free(ptr);
    ptr = NULL;
   
    return value;
}
… produces:
Code: [Select]
=== gcc 9.3.0, x86_64/Linux ====================================================
0:    41 54                    push   %r12             | prologue
2:    44 8b 27                 mov    (%rdi),%r12d     | value = *ptr
5:    e8 00 00 00 00           callq  a <fooize+0xa>   | free(ptr) [1]
a:    44 89 e0                 mov    %r12d,%eax       | return via eax
d:    41 5c                    pop    %r12             | \_ epilogue
f:    c3                       retq                    | /

[1] The argument is already in the register, since it was passed to `fooize`


=== clang 10.0.0, x86_64/Linux =================================================
0:    53                       push   %rbx             | prologue
1:    8b 1f                    mov    (%rdi),%ebx      | value = *ptr
3:    e8 00 00 00 00           callq  8 <fooize+0x8>   | free(ptr) [1]
8:    89 d8                    mov    %ebx,%eax        | return via eax
a:    5b                       pop    %rbx             | \_ epilogue
b:    c3                       retq                    | /

[1] The argument is already in the register, since it was passed to `fooize`


=== gcc 10.1.0, Atmega16 =======================================================
 0:    cf 93           push    r28                     | \_ prologue
 2:    df 93           push    r29                     | /
 4:    fc 01           movw    r30, r24                | \_ value = *ptr
 6:    c0 81           ld      r28, Z                  | |
 8:    d1 81           ldd     r29, Z+1                | /
 a:    0e 94 00 00     call    0    ; 0x0 <fooize>     | free(ptr) [1]
 e:    ce 01           movw    r24, r28                | return via r24:r25
10:    df 91           pop     r29                     | \_ epilogue
12:    cf 91           pop     r28                     | |
14:    08 95           ret                             | /

[1] The argument is already in the register, since it was passed to `fooize`
As you may see, the assignment is removed. If you expect to see a NULL during debugging, you may be surprised.
« Last Edit: May 14, 2020, 02:38:28 pm by golden_labels »
Worth watching: Calling Bullshit — protect your friends and yourself from bullshit!
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #18 on: May 14, 2020, 02:37:37 pm »
I have personally never run into cases of malloc() that would NOT return NULL in case of failure to allocate the requested memory.
Make sure you have swap disabled (unless you want 20 minutes of thrashing ;)), ensure you have important data saved (OOM killer may eat any of your processes) and run that, setting n to more thatn you have memory GiB:

Alright. Now how are you supposed to properly deal with out-of-memory conditions on such a system then? And how do you detect them?
Finally, do you think this makes malloc() really compliant with the standard?

From C99: talking about malloc, calloc and realloc:
Quote
If the space cannot be allocated, a null pointer is returned.

Yeah.

hamster_nz:
Extending a bit on what I wrote earlier about ptr = NULL not necessarily doing what you think it does, see the examples below. Note that I am not claiming that assigning NULL is wrong! I am merely showing that there are situations in which the final code will not do exactly what you expected, if the compiler notices that the assignment can have no further effect. That code:
Code: [Select]
#include <stdlib.h>

int fooize(int* ptr) {
    int const value = *ptr;
   
    free(ptr);
    ptr = NULL;
   
    return value;
}

As you may see, the assignment is removed. If you expect to see a NULL during debugging, you may be surprised.

Well, of course. This OTOH doesn't really have anything to do with assigning NULL to a pointer specifically, but with assigning values to a variable that are never used.
In your above example, obviously the 'ptr = NULL' statement has no effect anyway. But if you were further using 'ptr' in the rest of the function before returning, it could have an effect, and then the assignment wouldn't get pruned.
I'm sure hamster_nz knows this, and from what I saw, the example he gave was, for instance, for pointers that were members of structures that would potentially live AFTER the freeing operation.
Different story. ;)
« Last Edit: May 14, 2020, 02:41:58 pm by SiliconWizard »
 

Online dunkemhigh

  • Super Contributor
  • ***
  • Posts: 1915
Re: Poll: Freeing NULL Pointers
« Reply #19 on: May 14, 2020, 02:44:57 pm »
This wouldn't be a question at all if free() set the ptr to NULL (or whatever passes for invalid) when you call it. Of course, you'd have to go free(&ptr) but that's hardly difficult. There is no doubt some hangover compatibility thing with K&R, but it's not like there's half a dozen ways of allocating the stuff yet only one way to get rid of it.
 

Offline golden_labels

  • Regular Contributor
  • *
  • Posts: 185
  • Country: pl
Re: Poll: Freeing NULL Pointers
« Reply #20 on: May 14, 2020, 02:51:13 pm »
Alright. Now how are you supposed to properly deal with out-of-memory conditions on such a system then? And how do you detect them?
AFAIK you can’t.

(doesn’t work) I guess you could manually mmap an anonymous page and then mlock it, but there are some disadvantages. The memory region will remain in RAM, which is not what one wants in a normal application. You need to run as root or with the CAP_IPC_LOCK capability, which is not desirable for a normal process, and, effectively, you may accidentally kill some other apps. mlock may also be expensive.

Finally, do you think this makes malloc() really compliant with the standard?
This situation is outside of C's scope, just like a microcontroller losing power or RAM getting corrupted. From the point of view of malloc and the program, the memory is allocated. It’s the operating system that fails to deliver it and kills the whole process, completely outside of the C abstraction.

Well, of course. This OTOH doesn't really have anything to do with assigning NULL to a pointer, but just to assigning values to a variable that are never used. (…)
Yes, but it affects assigning NULL. I have explicitly said that I did not give this example to discourage setting NULL in that manner; my goal was to show that it may produce output different from what one expects when seen from the world outside C, for example while using a debugger.
« Last Edit: May 14, 2020, 03:37:26 pm by golden_labels »
Worth watching: Calling Bullshit — protect your friends and yourself from bullshit!
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #21 on: May 14, 2020, 03:10:14 pm »
Alright. Now how are you supposed to properly deal with out-of-memory conditions on such a system then? And how do you detect them?
AFAIK you can’t. I guess you could manually mmap an anonymous page and then mlock it, but there are some disadvantages. The memory region will remain in RAM, which is not what one wants in a normal application. You need to run as root or with the CAP_IPC_LOCK capability, which is not desirable for a normal process, and, effectively, you may accidentally kill some other apps. mlock may also be expensive.

That bites. I suppose you can disable overcommitting if you're not happy with this?

Finally, do you think this makes malloc() really compliant with the standard?
This situation is outside of C’s scope, just like a microcontroller losing power or RAM getting corrupted. From the point of view of malloc and the program, the memory is allocated. It’s the operating system that fails to deliver it and kills the whole process, completely outside of the C abstraction.

Yeah, I see the point, but as I said earlier, I think this is a twisted approach from an implementation POV. Allocated memory that can't be used is not allocated memory, IMHO. I understand the rationale of the implementation, but I don't entirely agree with it, and I don't completely agree that this is outside the scope of C either: per the standard, any memory allocated with the above functions is supposed to be usable if the allocation succeeds. However you see it, I do think there's a real problem here.

Well, of course. This OTOH doesn't really have anything to do with assigning NULL to a pointer, but just to assigning values to a variable that are never used. (…)
Yes, but it affects assigning NULL. I have explicitly said that I do not offer this example to deprecate setting NULL in that manner; my goal was to show that it may produce output different from what one expects when seen from the world outside C, for example while using a debugger.

Sure - but as I said, this is nothing specific to this topic whatsoever. It's just a general thought about statements that get pruned at compile time because the compiler considers they have no effect. Normally, any moderately experienced developer should be aware.

You can really take a much simpler example of this even:
Quote
int func(int n)
{
    int a = 1;
    return n;
}

Likewise, there will usually be absolutely no trace of a in the compiled code. Something that should only surprise beginners IMHO. ;)
« Last Edit: May 14, 2020, 03:17:09 pm by SiliconWizard »
 

Offline andersm

  • Super Contributor
  • ***
  • Posts: 1149
  • Country: fi
Re: Poll: Freeing NULL Pointers
« Reply #22 on: May 14, 2020, 03:42:25 pm »
Extending a bit what I have written earlier about ptr = NULL not necessarily doing what you think it does, see the examples below. Note that I am not claiming that assigning NULL is wrong! I am merely showing that there are situations in which the final code will not do exactly what you expected, if the compiler notices that the assignment can’t have further effects.
The issue is a bit more insidious than your example shows. This type of code has been used to clear out sensitive data (passwords, crypto keys, etc.) before returning the memory:
Code: [Select]
void safe_free(unsigned char *buf, size_t len)
{
    memset(buf, 0, len);
    free(buf);
}
Looking at the disassembly, the memset has been optimized away:
Code: [Select]
0000000000000000 <safe_free>:                           
   0:   48 83 ec 08             sub    $0x8,%rsp       
   4:   e8 00 00 00 00          callq  9 <safe_free+0x9>
   9:   48 83 c4 08             add    $0x8,%rsp       
   d:   c3                      retq                   

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #23 on: May 14, 2020, 04:03:11 pm »
Extending a bit what I have written earlier about ptr = NULL not necessarily doing what you think it does, see the examples below. Note that I am not claiming that assigning NULL is wrong! I am merely showing that there are situations in which the final code will not do exactly what you expected, if the compiler notices that the assignment can’t have further effects.
The issue is a bit more insidious than your example shows. This type of code has been used to clear out sensitive data (passwords, crypto keys, etc.) before returning the memory:
Code: [Select]
void safe_free(unsigned char *buf, size_t len)
{
    memset(buf, 0, len);
    free(buf);
}
Looking at the disassembly, the memset has been optimized away:
Code: [Select]
0000000000000000 <safe_free>:                           
   0:   48 83 ec 08             sub    $0x8,%rsp       
   4:   e8 00 00 00 00          callq  9 <safe_free+0x9>
   9:   48 83 c4 08             add    $0x8,%rsp       
   d:   c3                      retq                   

Now this is indeed a more interesting, and less obvious case of pruning. But the idea is still the same - to the compiler, anything assigned without being used afterwards is considered having no effect.

Even more interesting - you'd think using a volatile qualifier could work around this. Unfortunately, it doesn't here: since memset()'s first parameter is not volatile-qualified, the compiler is still free to conclude that the memset() call itself has no observable effect in this context, whatever you do.

The only simple workaround I can think of right now is to implement the clearing yourself (of course I didn't bother to optimize the loop for pointer alignment, so this won't be as efficient as memset):

Code: [Select]
void safe_free2(unsigned char *buf, size_t len)
{
    volatile unsigned char *buf2 = buf;
    size_t i;

    for (i = 0; i < len; i++)
        buf2[i] = 0;

    free(buf);
}
 

Offline golden_labels

  • Regular Contributor
  • *
  • Posts: 185
  • Country: pl
Re: Poll: Freeing NULL Pointers
« Reply #24 on: May 14, 2020, 04:05:25 pm »
That bites. I suppose you can disable overcommitting if you're not happy with this?
I’ve checked the mmap+mlock method and it doesn’t work. So the answer is: I have absolutely no idea.

Yes, you can disable overcommitting, or put limits on how much memory can be allocated (googling reveals a lot of results relevant to particular situations and configurations). But then you lose the benefits of overcommit. And the problem of your application being killed due to memory issues is still not completely solved, because during memory exhaustion the OOM killer may still decide to eliminate your process. Having a huge swap may seem like a relief, but that comes at the cost of performance.

In other words: sometimes reality trumps abstraction. Otherwise software development would be ten times easier. ;)

andersm:
Though it is a bit off-topic, this is a good point. And, unfortunately, there is still no portable solution. What is employed now is a bunch of platform-specific solutions (like FreeBSD’s explicit_bzero, hoping that volatile pointers will in fact cause the overwrite, etc.). Even those are still not fully effective.
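For illustration, here is one of those platform-folklore techniques in sketch form: calling memset() through a volatile function pointer, so the compiler cannot prove the call target and thus cannot elide the store. The names secure_zero, secure_free and memset_v are made up for this example, and the guarantee is practical, not something the C standard promises:

```c
#include <stdlib.h>
#include <string.h>

/* Folk workaround: a volatile function pointer to memset.  The compiler
   cannot assume at compile time which function it points to, so it cannot
   prove the call has no effect and elide it.  Widely used, but NOT
   guaranteed by the C standard. */
static void *(* const volatile memset_v)(void *, int, size_t) = memset;

void secure_zero(void *buf, size_t len)
{
    memset_v(buf, 0, len);
}

void secure_free(unsigned char *buf, size_t len)
{
    secure_zero(buf, len);  /* survives dead-store elimination in practice */
    free(buf);
}
```

C11's optional Annex K memset_s() was meant to solve this portably, but as noted above, nothing here is fully effective everywhere.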
« Last Edit: May 14, 2020, 04:06:58 pm by golden_labels »
Worth watching: Calling Bullshit — protect your friends and yourself from bullshit!
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #25 on: May 14, 2020, 04:16:07 pm »
andersm:
Though it is a bit off-topic, this is a good point. And, unfortunately, there is still no portable solution. What is employed now is a bunch of platform-specific solutions (like FreeBSD’s explicit_bzero, hoping that volatile pointers will in fact cause the overwrite, etc.). Even those are still not fully effective.

Though, per the standard itself, there is indeed no absolute guarantee, I've never run into any platform on which the piece of code I posted above would get pruned and thus not work.
The volatile qualifier ensures the compiler can't assume the assignment has no effect, so in practice it won't prune it, because pruning it would violate the "side-effect" rule.

What makes the complete end-result implementation-defined, though, is the following statement:
Quote
What constitutes an access to an object that has volatile-qualified type is implementation-defined.

So, yeah. IME it's safe to assume it will do what you want, but there is no strict guarantee it will be portable.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1724
  • Country: fi
    • My home page and email address
Re: Poll: Freeing NULL Pointers
« Reply #26 on: May 14, 2020, 05:11:02 pm »
I have personally never run into cases of malloc() that would NOT return NULL in case of failure to allocate the requested memory.
Dealing occasionally with huge data sets, I've encountered this.

Technically, the C library has allocated the requested memory, and the OS kernel has provided the virtual memory (address space) for it.  (Note that I'll assume POSIXy system for the following.  Non-POSIXy systems can be affected by this as well, but probably don't provide the POSIX C functions I'll mention.)

The problem only occurs when at the moment of use, the kernel finds that it cannot provide a RAM backing to the virtual addresses it has already provided.  (It will try swap and reusing clean pages first; when this happens, you really do have too many applications using too much RAM already.)

So, the overcommit memory issue has nothing to do with C or memory allocation, and can happen to any process (that has not "locked" its pages to memory via mlock() or mlockall()).  It is a failure of the kernel to provide actual random access memory to back up the virtual memory/address space it has already provided to the process.

Accessing the memory immediately only makes sure the RAM backing exists at that point, with the belief that the kernel cannot rescind that allocation later on.
If allowed (by the resource limits set), a process can lock pages in memory via mlock()/mlockall(), ensuring that accessing those pages will never cause I/O (paging).
In Linux, it is possible to catch SIGBUS and SIGSEGV, and either emulate the failed access (requires an instruction decoder, unfortunately, and preferably one that does not need dynamic memory, and can operate from read-only memory), or do a longjmp()-like jump cancelling the entire access attempt.  The latter is easier, but it is rather arcane, and definitely hardware-specific.

One workaround is to skip malloc() et al., and instead memory-map an already allocated file without swap reservation.  (By allocated file, i mean that you either write each block of the file, or use posix_fallocate() to ensure each block of the file has been allocated on storage, without holes.)  If there is sufficient RAM, the mapping will stay in page cache.  The upside is that this file can also be used for checkpointing a long-running simulation; allowing it to continue even if it gets terminated early.
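A minimal sketch of that file-backed approach, assuming a POSIXy system (the function name and error handling are illustrative, not from any library): posix_fallocate() reserves every block of the file up front, or fails immediately, so the overcommit surprise is sidestepped before any pointer is handed out.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Map `size` bytes backed by a fully pre-allocated file instead of
   anonymous (overcommittable) memory.  posix_fallocate() ensures every
   block exists on storage, without holes, so later page faults cannot
   fail for lack of backing. */
void *file_backed_alloc(const char *path, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd == -1)
        return NULL;

    if (posix_fallocate(fd, 0, (off_t)size) != 0) {
        close(fd);
        return NULL;
    }

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping keeps the file referenced */
    return (p == MAP_FAILED) ? NULL : p;
}
```

As the post notes, the same file doubles as a checkpoint: after a crash, re-mapping it recovers the data that had already been written back.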

One can use mincore() to examine if the memory is present, and attempt to entice the kernel to provide the RAM backing via madvise().  Unfortunately, madvise() is advisory, and doesn't yet have a 'make sure backing exists, or fail now' option.

The proper management functions are mlock()/mlock2()/mlockall().  Calling these, one can ensure the required page(s) are in RAM.
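As a sketch (the alloc_locked name is invented for this example): pairing malloc() with mlock() turns the kernel's promise into a hard one, at the cost of consuming the RLIMIT_MEMLOCK budget, so the call can legitimately fail where plain malloc() would "succeed".

```c
#include <stdlib.h>
#include <sys/mman.h>

/* Allocate and pin: if mlock() succeeds, the kernel has committed real,
   non-swappable RAM to the range, so later accesses cannot be rescinded.
   mlock() fails (EPERM/ENOMEM) if the caller's RLIMIT_MEMLOCK soft limit
   would be exceeded. */
void *alloc_locked(size_t size)
{
    void *p = malloc(size);
    if (p != NULL && mlock(p, size) != 0) {
        free(p);
        return NULL;
    }
    return p;
}
```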

Under 4.9 and later Linux kernels, the RLIMIT_MEMLOCK soft resource limit dictates how much memory a process can lock in RAM.  In 2.6.8 and earlier, either superuser privileges or the CAP_IPC_LOCK capability was needed; now, only the resource limit matters.

It is also important to note that the GNU C library doesn't return typical allocations back to the OS, but keeps them in the process heap for future allocations – unless the allocation exceeds a certain limit (130 MiB, IIRC).

If you have lots of RAM on your machine, feel free to turn overcommit off.  The setting is exposed by the kernel as the pseudofile /proc/sys/vm/overcommit_memory: 0 = heuristic guess, 1 = always allow, 2 = never allow.  This is system-wide.  Different Linux distros usually already have something in their init system where you can tune this knob.  The /proc/sys/vm/ pseudofiles described in man 5 proc can be informative.
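For reference, the knob looks like this in practice (a sketch; writing the pseudofiles needs root, and in mode 2 the commit limit also depends on /proc/sys/vm/overcommit_ratio):

```shell
# Inspect the current overcommit policy (0 = heuristic, 1 = always, 2 = never).
cat /proc/sys/vm/overcommit_memory

# Disable overcommit for this boot (as root).
echo 2 > /proc/sys/vm/overcommit_memory

# In mode 2, the commit limit is swap + overcommit_ratio% of RAM.
cat /proc/sys/vm/overcommit_ratio

# Persist across reboots via sysctl.d (path per most current distros).
echo 'vm.overcommit_memory = 2' > /etc/sysctl.d/90-overcommit.conf
```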

It is important to note that this isn't just some zealot knob that Linuxies decided they wanted.  It turns out that a lot of userspace processes allocate lots of memory, but use only a fraction of it.  Per-thread stacks (8 MiB by default) are an excellent example; typical worker threads use maybe a couple of dozen KiB, but have megabytes allocated.  By allowing overcommit, those allocated but unused pages are nearly free (cost is page table setup etc), so the user can get full use of their RAM.

We didn't use to have gigabytes of RAM, and overcommit made a big difference in performance.  There are quite a few tunable knobs, too, if you had a workload that benefits from such tuning.  On a desktop system, turning off overcommit and setting reasonable per-process limits especially for memory-hungry applications (like browsers! And Java!) probably works better if you have say 16 GiB of RAM...  It's just that the Linux distributions span such a huge range of hardware, that the defaults won't be optimal for anyone.
« Last Edit: May 14, 2020, 05:13:26 pm by Nominal Animal »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #27 on: May 14, 2020, 05:16:26 pm »
We didn't use to have gigabytes of RAM, and overcommit made a big difference in performance.

Yes, yes, I understand the rationale and the benefits.
I've just been lucky (probably) never to be bitten by it.
And, I still think this is problematic as far as application robustness is concerned. That was certainly a trade-off between robustness and performance.

But for the developer, as we said, this is a bit nasty.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1724
  • Country: fi
    • My home page and email address
Re: Poll: Freeing NULL Pointers
« Reply #28 on: May 14, 2020, 05:31:37 pm »
(I'd love to have a Ryzen Threadripper, with say 64 GiB of RAM, and a couple of 512 GiB Samsung M.2 SSDs in RAID-0, for HPC.  Even Dalton would run fast on such a machine, not to mention my own simulator I've been working on...
 But, I'm just a poor burned out husk of a man; can't work enough to afford one.  :-//)

Yes, yes, I understand the rationale and the benefits.
I've just been lucky (probably) never to be bitten by it.
And, I still think this is problematic as far as application robustness is concerned. That was certainly a trade-off between robustness and performance.
I fully agree.  Sorry, I didn't intend to sound repetitive or anything; I just wanted to describe the problem to those others who might be reading the thread, as it is one I am familiar with.

Yes, in general, it Just Works.  When a desktop application runs amok (starts gobbling memory like crazy; it happens), it tends to crash before the OOM killer is invoked.  (And while the OOM killer is a bit of a shotgun approach, it is becoming better and better.  Right now, we can even freely assign the "OOM scores", so that the order in which the OOM killer targets processes is deterministic; this is excellent for locked-down and embedded systems.)

Desktop applications should not care at all.  It's up to the user to manage how they use their tools, really, since the defaults do a pretty good job for most use cases.

Most service daemons shouldn't care either.  A multiprocess server with a privileged core and unprivileged/lesser-privileged worker processes can use resource limits (set in global configuration; AFAICR both Apache and Nginx already support this) and perhaps even tweak OOM scores, so that the worker processes will be killed before the core or any other services, so even an OOM will be perfectly survivable without outages.

The painful bit is with security-sensitive applications, like those maintaining keyrings.  I REALLY don't like having a keyring in the browser process!   It would be best to store the keys either in the kernel (both Linux and Mac kernels have keyring facilities), or in locked memory inside a paranoid keyservice per-user/per-session daemon.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #29 on: May 14, 2020, 05:49:28 pm »
Yes, certainly, in *most* cases, it's adequate. But in some cases it can be a hazard.
And, if your particular application requires a lot of memory to operate, it can be a real problem. I guess there probably are convoluted (and/or non-portable) ways of programmatically figuring out how much memory you can reasonably allocate and go from there, but it's unconvenient to say the least.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1724
  • Country: fi
    • My home page and email address
Re: Poll: Freeing NULL Pointers
« Reply #30 on: May 14, 2020, 06:43:34 pm »
I guess there probably are convoluted (and/or non-portable) ways of programmatically figuring out how much memory you can reasonably allocate and go from there, but it's unconvenient to say the least.
unc.. inc.. noncon.. Definitely not convenient, agreed.

My point is, I guess, that the application should not make that kind of determinations, and instead only allocate what it actually needs.  I similarly dislike apps that try to sniff how many CPU cores/threads I have, because depending on my own workflow, the logical priority of each application varies a lot.  (I do use ionice and nice in real life, often, when running long-running but low-priority stuff, while I do something more interesting on the desktop.)

Apps' own run-time tunables also help.  For example, when dealing with massive I/O, the I/O block size starts to matter.  No, fstat() et al. don't really tell you the optimum I/O block size, because the optimum depends on the workload.  So, I like to use a compile-time default with an easy user override, say via environment variables if you don't want to do config files, or in the UI for GUI apps.  A single knob, sliding from "background idle" to "max performance", usually suffices, with the exact scale of each step configured at compile time.
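Such a tunable can be sketched in a few lines (the APP_IO_BLOCK variable name and io_block_size() helper are invented for illustration): the compile-time default applies unless the environment overrides it, with a sanity floor on the accepted values.

```c
#include <stdlib.h>

/* Hypothetical run-time tunable: I/O block size, overridable through the
   APP_IO_BLOCK environment variable, with a compile-time default.
   Malformed values and values below a 512-byte floor are ignored. */
#define DEFAULT_IO_BLOCK (64 * 1024)

size_t io_block_size(void)
{
    const char *s = getenv("APP_IO_BLOCK");
    if (s != NULL) {
        char *end;
        unsigned long v = strtoul(s, &end, 10);
        if (*end == '\0' && v >= 512)
            return (size_t)v;
    }
    return DEFAULT_IO_BLOCK;
}
```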

It is interesting to note that old Mac OS, pre-X, had per-application memory limits set in the executable preferences (easily edited by the user).  If one wanted to edit a huge image, or one with many layers in Photoshop, kick that to 75% of installed RAM but don't start any other applications.  Then, when doing e.g. publishing-type stuff, editing images destined to be laid out in Pagemaker or Illustrator or Freehand, drop the limits so one can keep an instance of each open at the same time...
That was over two decades ago.

This is not a new problem.  I kinda like the old Mac OS approach;  I did a lot of work with a PowerMac 7200 with 32 MiB of RAM, and the different workloads did need adjusting the limits to squeeze the max out of the hardware.
« Last Edit: May 14, 2020, 06:47:28 pm by Nominal Animal »
 

Online dunkemhigh

  • Super Contributor
  • ***
  • Posts: 1915
Re: Poll: Freeing NULL Pointers
« Reply #31 on: May 14, 2020, 07:10:25 pm »
Quote
The proper management function is mlock()/mlock2()/mlockall().

That's kind of different, then, isn't it? Being aware of that and not using those before accessing the memory is pretty stupid, ISTM. On that basis, I think that disqualifies this example from the discussion of whether NULL is meaningful or not in a pointer context.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #32 on: May 14, 2020, 08:06:46 pm »
I guess there probably are convoluted (and/or non-portable) ways of programmatically figuring out how much memory you can reasonably allocate and go from there, but it's unconvenient to say the least.
unc.. inc.. noncon.. Definitely not convenient, agreed.

Inconvenient, I think. Sorry for the slip-up.

My point is, I guess, that the application should not make that kind of determinations, and instead only allocate what it actually needs.

That's a very general statement, but doesn't fit all needs.
In some applications, the amount of memory needed may vary tremendously, and the app may decide how much to allocate at some point depending on how much is available. Whether you find that bad practice is debatable, but there certainly are cases in which it can be useful.
Sure there are other means of dealing with it, such as making that a preference for the application, and letting the user set it instead. Why not.

Whatever the approach, I still think returning pointers to allocated memory that could suddenly vanish for any reason is not sane. There are apparently ways to circumvent this (mlock() et al.?), but it's kind of quirky. I personally am against the approach of letting processes just crash instead of giving the developer an opportunity to handle things gracefully. YMMV. If anything, even if the developer doesn't know what to do, they can always at least give the user a message about the app not having enough memory available to run. That's a lot more user-friendly than having to debug core dumps. Anyway. As you mentioned, in some safety- or security-critical apps, this could be completely unacceptable.

As to the C standard, I've re-read the section about memory management in C99, and from my interpretation, the discussed behavior, to my humble eyes, looks non-conforming. Read it carefully, and tell me what you think.

Just my 2 cents.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 1724
  • Country: fi
    • My home page and email address
Re: Poll: Freeing NULL Pointers
« Reply #33 on: May 15, 2020, 12:38:29 pm »
Inconvenient, I think.
Me fail English often, so I wouldn't know  :-[.

My point is, I guess, that the application should not make that kind of determinations, and instead only allocate what it actually needs.
That's a very general statement, but doesn't fit all needs.
I meant that for userspace applications (category GUI-based tools, as opposed to services) exclusively.  For services and applications dealing with massive datasets, I mentioned other options in a prior message.

I personally am against the approach that considers letting processes just crash instead of giving the developer an opportunity to handle things gracefully. YMMV. If anything, even if the developer doesn''t know what to do, they can always at least give the user a message about the app not having enough memory available to run. A lot more user-friendly than them having to debug core dumps. Anyway. As you mentioned, in some safety- or security-critical apps, that could be completely unacceptable.
I myself am ambivalent about this.  I know why it is done, and that it works for many users, but I have been bitten by it in practice.  I do not know if banning overcommit makes sense.  I do support users and admins who disable it on their own systems, and believe that doing so on machines with plenty of RAM is a good idea for several different reasons; but as a global default or non-user-tunable, I am not sure.

And by "not sure", I mean it literally: I am on the fence on this.  I never use "not sure" as a weaselly "I disagree". 

As to the C standard, I've re-read the section about memory management in C99, and from my interpretation, the discussed behavior, to my humble eyes, looks non-conforming. Read it carefully, and tell me what you think.
It is really the OS kernel (other OSes besides Linux do this; I recall seeing this first on an Unix box) that fails to uphold its promise.
In other words, I believe you can say that these kernels make it impossible for any implementation on them to fulfill the C standard.
I do not blame the C compiler or standard library though, because it is the kernel that is failing in its promise of service, not anything in C; all processes, even those written in raw assembly, are affected by this.

Quote
The proper management function is mlock()/mlock2()/mlockall().
That's kind of different, then, isn't it? Being aware of that and not using those before accessing the memory is pretty stupid, ISTM. On that basis, I think that disqualifies this example from the discussion of whether NULL is meaningful or not in a pointer context.
Me fail English, so let me re-state my position.  Apologies for the wall of text, but I am trying to write in a manner that lets even new C programmers follow the discussion.

Typical userspace applications, both GUI and non-GUI should not care about overcommit or work around it.  If an application can benefit from preallocating memory, or switch between using very little memory but being slow and using a lot of memory to speed up things, I think it should have an easy user tunable to determine that – combining it into a single sliding scale value, from "minimal memory" to "maximum performance", with a bit of tuning how that maps to different details available at compile/build time for different HW arches.  The only applications that should ever need to use mlock() are security-sensitive services, and perhaps some very important low-level services (that'd use mlockall() if the admin so desires, based on some config setting) like remote access services, or critical network infrastructure services.

The problem is that from the userspace perspective, including what the standard C library sees, the interfaces used to request new memory from the OS kernel, and thus passed to the malloc() caller, have fulfilled the promise.  It is just that these kernels have learned to LIE because some human users WANT/NEED THAT, and simply kill the target process if they catch the kernel in that lie (having promised memory, but not being able to fulfill that promise).

The file-backed mmap() approach can be used to sidestep the entire issue.  This is useful for simulators and other programs dealing with large and/or persistent datasets, and it can provide restart/continue support too in case the process keels over for any reason.

The mlock()/mlockall() approach can be used to enforce the kernel promise.  On current Linux, Android, Mac, iOS, and various *BSD systems, the RLIMIT_MEMLOCK resource limit (managed by the system administrator and/or user running the process) defines this limit, per-process.

It is possible for a Linux process to detect and catch this via SIGBUS/SIGSEGV signal handlers, in a per-thread manner, but as it involves manipulating the processor and thread state (particularly including the instruction pointer when the un-fulfillable memory access occurs), it is too complicated to do for standard programs.  I would only consider this approach if the process can safely undo/ignore the failure, and throw away (return back to the kernel) large-ish swathes of memory if that ever occurs; an "opportunistic memory-hog that nevertheless behaves nicely".  The issue then is that if the kernel runs out of memory after providing the RAM backing to such a nicely behaved memory-hog, it can detect the situation when trying to fulfill the promise it made to some other process.  There is no way – and I do not think C even provides any possible way! (POSIX signals could, I guess) – for the kernel to ask back memory, or even to inform processes about memory pressure.  So, even this SIGBUS/SIGSEGV handler way is really no solution at all.

It should be noted that this – memory provided by the kernel not really being there – ONLY ever happens when the system is overburdened already.



Okay, so we know there are lots of use cases, where we'd really like to know how much memory we should speculatively allocate, and still be reasonably sure (or completely sure) that the allocations will be fulfilled by the kernel.  And also things like how many threads we should use.

My opinion is that this is where we'd need a helper library.  Not to ensure the memory we allocate is there, but to give the resource use hints to the process.  Perhaps the library could also provide an interface to ensure memory allocations will be there if both the application requests it, and the library allows.
This library would be a policy manager.  It would be the extension of the human users'/admins' will, telling the application how to behave.  It should also have trivial tuning ability, something like "Application properties", and it could include CPU and I/O priority tuning.

Unfortunately, that is unlikely to happen at all.  It might be incorporated in some mutated form into systemd, so you can do a D-Bus query for these things, which basically defeats the entire idea, but that is unfortunately the direction current Linux distributions are "progressing" in.

(If anyone is interested, the entire Init system debacle and all related problems, including parallel and asynchronous service startup, would be trivial to solve, if system service developers would just add calls to a library that is responsible for providing the service status updates, and provides an interface to manage inter-service dependencies.  We've known that for decades, but that hasn't happened, because coordinating something like that among software developers is like herding cats: it won't work, and just leads to hissing (or bikeshedding, really).  It seems the best we can do, is cobble together a super-privileged superdaemon that forcibly inserts its tentacles into all over the userspace, rewriting many of those service daemons, with honestly speaking quite poor quality code, and dropping it in from above, using Linux distribution human politics.)

Similarly, it would be trivial for Linux kernel developers to provide a new syscall, say mpopulate(), that would take as a parameter a memory range, for the userspace to request the kernel to populate those pages, or return a failure code.  A flag parameter would tell the kernel how important the userspace process believes this memory to be, so the kernel can decide whether to try evicting other pages and do other make-more-ram-available stuff or not.

Problem is, most users do not care.  The users who care, can simply tune or turn off overcommit.  Many users never encounter this thing, and even if they do, they think the crash occurred for some other reason ("shit this is badly coded" is one).  Developers claiming they want this feature is not taken seriously, because their users are not demanding these features.  (And the users don't listen to the developers, because humans.  I'm a dev too, and I hate being between a stone and a hard place like this, so I fully understand the frustration others have expressed about overcommit issues.)

A number of years ago, I discussed with various compiler and C library developers whether we could provide a function, say memfill() or memrepeat(), that would repeat an initial part of the specified buffer into the entire buffer.  (It is essentially the third part of the memcpy() - memmove() - memrepeat() triangle; all three functions are implemented in a very similar fashion at the hardware level, and thus actually belong to compiler support rather than the C library; i.e. to be available in freestanding environments also, not just in hosted environments.)  That failed, for the same reason, even though it alone changes the result of certain Fortran - C comparison benchmarks.  The main use of memrepeat() is to initialize arrays of structures or floating-point values, faster than copying those structures or floating-point values in a loop; because it deals with the storage representation alone, none of the value requirements (signaling NaNs and such) apply.  It is particularly advantageous for odd-sized elements in a continuous array.
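A plain-C sketch of the described semantics (this is my reading of the proposal, not any committee text): doubling the already-filled region keeps the number of memcpy() calls logarithmic in the buffer size; a compiler-supported version could of course do much better.

```c
#include <stddef.h>
#include <string.h>

/* memrepeat() sketch: repeat the first `part` bytes of `buf` across the
   whole `total`-byte buffer, operating on storage representation only.
   Each memcpy() copies from the filled prefix into untouched bytes, so
   the regions never overlap. */
void memrepeat(void *buf, size_t part, size_t total)
{
    unsigned char *p = buf;
    size_t have = part;

    if (part == 0 || total <= part)
        return;

    /* Double the initialized region while a full copy still fits. */
    while (have * 2 <= total) {
        memcpy(p + have, p, have);
        have *= 2;
    }
    /* Copy whatever remainder is left (possibly a partial pattern). */
    memcpy(p + have, p, total - have);
}
```

For example, memrepeat(buf, sizeof(struct item), n * sizeof(struct item)) initializes an array of n structures from its first element without a per-element loop.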

In summary, whether one can trust malloc() to return NULL when at some point later on, the system cannot fulfill that promise anymore, is not really a C question, but a question about OS kernel services, and how to implement policy (providing human operator intent and wishes, and human administrator limits and overrides, to applications).
« Last Edit: May 15, 2020, 12:45:10 pm by Nominal Animal »
 
The following users thanked this post: mrflibble

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 5344
  • Country: fr
Re: Poll: Freeing NULL Pointers
« Reply #34 on: May 15, 2020, 01:31:09 pm »
As to the C standard, I've re-read the section about memory management in C99, and from my interpretation, the discussed behavior, to my humble eyes, looks non-conforming. Read it carefully, and tell me what you think.
It is really the OS kernel (other OSes besides Linux do this; I recall seeing this first on an Unix box) that fails to uphold its promise.
In other words, I believe you can say that these kernels make it impossible for any implementation on them to fulfill the C standard.
I do not blame the C compiler or standard library though, because it is the kernel that is failing in its promise of service, not anything in C; all processes, even those written in raw assembly, are affected by this.

Oh, I agree! Although, I guess you could say that a fully conforming C std lib would implement malloc (and siblings) locking the allocated memory. But then it would defeat the whole purpose of overcommitting and would likely cause performance issues... My personal view on this is that it would have been more consistent to provide what it takes at the OS level so that malloc can be implemented with locked memory, thus making it default on the language level, and then provide additional memory management functions to allow allocating memory you can afford "losing" at some point. Apparently, the decision was to do the exact opposite. I'm sure we'd find many opinions on this anyway.

I'm more generally for functions that return a status the developer can rely on, so this one certainly tickles me.
« Last Edit: May 15, 2020, 01:34:08 pm by SiliconWizard »
 
The following users thanked this post: mrflibble, Nominal Animal

