C: read() and write().

Who uses the read(), write(), open() & close() functions in C?

Do you:

a) check for EINTR and retry?

b) check for things like short writes, and call write() again?

I do both of these, but some people don't believe it is needed, and some wave it away saying they use and trust SA_RESTART...

Never done any of those checks. I just fail on any error. Never had a single problem with that.

Nominal Animal:

--- Quote from: hamster_nz on March 16, 2021, 11:07:51 pm ---a) check for EINTR and retry?

--- End quote ---
Depends on whether I'm using a timeout (a signal delivered to a userspace handler installed without SA_RESTART).

--- Quote from: hamster_nz on March 16, 2021, 11:07:51 pm ---b) check for things like short writes, and call write() again?

--- End quote ---
Always.  (Well, there are a couple of exceptions, but those involve specific types of file descriptors with Linux-specific guarantees/semantics.)

For me, this is just the bare minimum, because otherwise you could be losing or garbling data and not know it.  "Never had a problem" is just an indication you don't care, and it never bit you bad enough for you to notice.

The things I've had to argue with people about are checking the return value of close() for a delayed write error (or some other kernel-internal filesystem errors); and whether it makes sense to check for malloc()/calloc()/mmap() failures, or just let the process die from a segmentation violation when it dereferences a NULL pointer.  (The latter especially on systems with memory overcommit.)

The purpose for me is that the data my code deals with is important to the user.  If the kernel reports there was something iffy, suspicious, or just abnormal, I believe my ethical responsibility as a software developer is to let the user know about it.  Silently garbling data is evil.

I openly admit this is rather "paranoid" (non-trusting) approach, but this is the way I do my job.  More than once have my programs been the first indication of malfunctioning hardware (although that usually leads to claims my "code is buggy, because everything else works fine" – except that on more careful checking, some data has already been garbled).  Others do it differently.

Smart people do it the way their employer pays them to do it.

Well, to check for errors you have to check for a short write anyway. An error that occurs after writing at least one byte will manifest as a short write.

Whether you have to worry about EINTR or short writes caused by interrupted system calls depends on your application.  If you are writing a library that might be used in a larger program, or you don't have control over signal usage, or you know you are using signal handlers without SA_RESTART, then you need to be prepared for it.

In Linux at least, ordinary file writes are not interruptible, so if you know you are accessing an ordinary file then a short write means an error and you don't have to worry about signals. Also, writes to pipes of less than the pipe buffer size are guaranteed to be atomic: you will never get a short write, only an error with no bytes written, or success.

Just to clarify my point. I don't just ignore status codes or short writes. I put asserts on that stuff and let the program fail with a meaningful message. Once it does, I have a good reason to investigate and add the handling into the code.

There are so many ancient error codes and behaviors that no longer happen on modern OSes. You will go crazy for no reason trying to handle all that.

Same with malloc(). I never "handle" NULL result. I just assert.

