I just can't seem to remember what all these almost randomly chosen letters do. And after using a reference chart for the third time today, I wondered, am I alone? Or are there others like me? Maybe even someone who has designed an alternative, with user-friendliness in mind?
I know it would be non-standard, but I could still use it for some things. Googling turned up some possibilities for C++, Haskell, and so on. But not plain C.
Seriously? Nothing could be simpler. More modern stuff uses regular expressions. Try that then go back to the simplicity of printf()
Nothing could be more simple! Give us an example of how you would do it better?!
Conciseness = user friendliness when it comes to coding. You could write one where you go
printFbyChris("I have %DECIMAL(figures=5,show_leading_zeros=true) bananas", ...)
But I think even you would soon go back to
printf("I have %05d bananas", ...)
digits=5 surely?
and so the confusion builds!
I do like the old-fashioned printf type specifiers. In my engineering education I first learned C and later C++. Then it was suggested that I use cout for text output, with all that type-safety propaganda. I couldn't see the benefit of that and simply stuck with printf and its comrades in stdio.h. For me it was easy to verify the correctness of something that fits on a single line.
I even use goto sometimes in C. No, I don't write spaghetti code. For me it is easier to follow a few gotos than a complex while condition.
I've used C for over 30 years now. I am quite used to the % format specifiers and I know the common ones like the back of my hand; the more esoteric format specifiers I still have to look up. But once you know them they are quick and concise to write.
However, I am still fond of the COBOL PICTURE clauses, even though I didn't have a COBOL career
For those of us who remember Mark Williams COHERENT, there is a picture() function there, for C. In 2015, they open-sourced COHERENT and that particular picture() call is available in the libmisc source.
picture(3101.1, "*$ZZZ,ZZZ.99", output_string) yields "$***3,101.10" in output_string.
picture(5.1, "ZZ99", output_string) yields "  05" in output_string. (two spaces and 2 digits)
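To show the idea, here is a toy sketch of just the Z/9 behaviour, where '9' always prints a digit and 'Z' suppresses leading zeros to spaces. This is NOT the real COHERENT libmisc picture() (which also handles '*', '$', commas, and decimals); pic_format is a made-up name for illustration only.

```c
#include <stdio.h>
#include <string.h>

/* Toy picture-clause formatter: '9' always emits a digit,
 * 'Z' emits a digit unless it falls in the leading-zero region,
 * in which case it emits a space. Integers only. */
static void pic_format(long value, const char *pic, char *out)
{
    size_t n = strlen(pic);
    out[n] = '\0';
    for (size_t i = 0; i < n; i++) {
        size_t k = n - 1 - i;           /* fill right to left */
        int digit = (int)(value % 10);
        value /= 10;
        if (pic[k] == '9' || digit != 0 || value != 0)
            out[k] = (char)('0' + digit);
        else
            out[k] = ' ';               /* zero-suppressed 'Z' position */
    }
}
```

So pic_format(5, "ZZ99", buf) gives "  05", matching the example above.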
http://www.nesssoftware.com/home/mwc/manpage.php?page=libmisc (scroll down to Pictures)
The main page is here: http://www.nesssoftware.com/home/mwc/
COHERENT source is here: http://www.nesssoftware.com/home/mwc/source.php
I like sprintf() for flexibility and the ability to arbitrarily concatenate formatted strings (without using strcat())
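A sketch of that concatenation trick (my own example, not from the thread): sprintf returns the number of characters written, so each call can pick up where the previous one left off.

```c
#include <stdio.h>

/* sprintf() returns the character count it wrote, so formatted
 * pieces can be appended without strcat(). (snprintf would be the
 * safer choice whenever the buffer size matters.) */
static int build_line(char *buf)
{
    int len = 0;
    len += sprintf(buf + len, "I have %05d bananas", 42);
    len += sprintf(buf + len, " and %.2f apples", 3.5);
    return len;    /* total length of the concatenated string */
}
```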
If Chris is using Arduino - problem solved for simplicity, but not flexibility / performance.
OMG - I almost forgot the COBOL picture statement... clumsy, but certainly readable.
Heh. Maybe it is just me.
C is NOT my primary language. So I see "d", and the first two thoughts that come to my mind are "double precision float", or "decimal", which in the contexts I'm more used to doesn't mean base 10 - it means a number stored much like a string, with no limit on the number of digits or precision. I know I can use "i" instead, and that makes more sense to me as an integer. But it seems like everyone else uses "d", so I feel obliged to use it as well. And then I don't mess with printf for a while, go back to old code and see "d", and get confused all over again. I try to figure it out without the reference chart, and it goes something like this:
Hmm, "d" is an integer, right? But wait, "u" is an unsigned integer, so that would make a signed integer "i", and that makes "d"...double? Arg, gotta check that chart again...
Seriously, WHY did they assign two letters to the same darn thing? I don't care what anyone says, this is NOT intuitive.
I'm also trying to code for portability to other hardware. Currently in the middle of the second port, this time from 16 to 32 bit architecture. Had the foresight to write it from the start using types like int16_t where appropriate, which I typedef'ed to I16 for conciseness. So most of it's been super easy. But I'm now getting some stack issues using printf. In the process of looking into it, I find something that says in this scenario, I really should have been writing:
printf("I have %05" PRIi16 " bananas", ...)
Which is no longer concise, like [rs20]'s original example. But at least it's totally unambiguous. And different enough that I no longer feel obliged to use "d", when I really want to use "i". Yet, if I want those bananas in hex, I have to do this:
printf("I have %05" PRIx16 " bananas", ...)
But what I'll end up doing is this:
printf("I have %05" PRIi16x " bananas", ...)
Remember I typedef'ed int16_t to I16? That means I'm going to instinctively want to put that "i" in there, right before the "16", every time. Which makes perfect sense, really, because IT IS STILL AN I16. That I'm asking for output in hex, has NOTHING to do with the input data type, and should NOT replace or require changes to that convention.
So I can fix this with more defines. Give myself "PRIi16x", or more likely "priI16X" which fits in better with my existing I16 convention, and whatever else I want to do that is comfortable and intuitive. In fact, by adding a preprocessing wrapper around printf, I could make it accept any of these:
printf("I have %05" priI16 priX " bananas", ...) // I16, uppercase hex
printf("I have %05" priX priI16 " bananas", ...) // I16, uppercase hex
printf("I have %05" priI16 prix " bananas", ...) // I16, lowercase hex
printf("I have %05" prix priI16 " bananas", ...) // I16, lowercase hex
Which is slightly less concise, but even more intuitive. Because then I don't even have to remember where in a naming convention the "x" is supposed to go. I will not be confused on whether the "x" replaces "I" or not, even if I have recently seen someone else using "PRIx16".
And should I ever actually need a hex floating-point number (though I can't imagine why), I don't have to remember that it somehow becomes "A". What is that "A" supposed to stand for anyway? Arcane? Who thought that was even useful? A signed hex integer is less weird, yet apparently impossible... Seriously, the more I look at all this, the more arbitrary it seems.
Ok, rant mode off.
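For anyone following along, the standard-but-verbose inttypes.h route from the rant, in full (my own minimal example):

```c
#include <inttypes.h>
#include <stdio.h>

/* The PRI* macros from <inttypes.h> expand to the correct
 * length-modified conversion for each fixed-width type on the current
 * platform (PRIi16 might be "hd"); adjacent string literals then
 * concatenate into a single format string. */
static void format_bananas(char *buf, size_t n)
{
    int16_t bananas = 255;
    snprintf(buf, n, "dec=%05" PRIi16 " hex=%04" PRIx16,
             bananas, bananas);
}
```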
My intent in posting this question was that before doing my own defines and whatnot, I just wanted to see if anyone else had come up with something similar, but possibly more elegant.
For those of us who remember Mark Williams COHERENT, there is a picture() function there, for C. In 2015, they open-sourced COHERENT and that particular picture() call is available in the libmisc source.
That looks similar to formatting functions I've used in other languages. I'll have to check that out, thanks.
COBOL. My grandma coded in that, no kidding.
Seriously? Nothing could be simpler. More modern stuff uses regular expressions. Try that then go back to the simplicity of printf()
Uhh, regular expressions are used for string parsing - not string formatting...
I share Chris' sentiments. My day job makes me program in C# that uses a different set of placeholders. Result is the same, I also have to look these up when it gets more complex. So I guess, no matter what you think up - when you cannot remember the logic behind the placeholders it will always feel as a PitA.
The big problem with C interfaces like printf() isn't that they are hard to remember, it's that they are completely untyped.
Hmm, "d" is an integer, right? But wait, "u" is an unsigned integer, so that would make a signed integer "i", and that makes "d"...double? Arg, gotta check that chart again...
Seriously, WHY did they assign two letters to the same darn thing? I don't care what anyone says, this is NOT intuitive.
I know this one
, when reading text, %d is always base 10, while %i accepts any of the C standard number formats (like 0xff for 255, or 0177 for octal 127). The difference doesn't matter for printf because %i output is base 10, so it looks the same as %d; for scanf, however, there is a big difference. It would be weird for a specifier to work in scanf but not printf, so printf accepts both even though they behave the same, letting you reuse the same format strings to read and write the same data.
And there is nothing wrong at all with using %i in printf for int. I use it exclusively, most modern programmers do I think for the reasons you state.
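A quick sketch of the scanf difference described above, using sscanf on the same input:

```c
#include <stdio.h>

/* In the scanf family, %d is strictly base 10, while %i honours
 * C literal prefixes: "0x.." is hex, a leading "0" is octal. */
static void scan_both(int *via_d, int *via_i)
{
    sscanf("0x1f", "%d", via_d);  /* %d stops at the 'x': stores 0     */
    sscanf("0x1f", "%i", via_i);  /* %i reads the whole hex literal: 31 */
}
```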
picture(3101.1, "*$ZZZ,ZZZ.99", output_string) yields "$***3,101.10" in output_string.
...and, on IBM mainframes at least, the conversion is done with a single machine instruction.
The big problem with C interfaces like printf() isn't that they are hard to remember, it's that they are completely untyped.
You sit in front of a computer when writing C code. It takes 4 seconds to type c lib printf into google, but you can print a C reference guide (good ones are 2-4 pages) and stick it on the wall if you are a programmer.
I think it is the most convenient way of outputting a string. I find myself using it even if I'm programming in C++ or on Arduino, for example. sprintf it to a buffer, serial.print the buffer, leave me alone with the made up bullshit. C string formatting is simple, elegant and it works regardless of the architecture's number of bits or the platform.
The big problem with C interfaces like printf() isn't that they are hard to remember, it's that they are completely untyped.
You sit in front of a computer when writing C code. It takes 4 seconds to type c lib printf into google, but you can print a C reference guide (good ones are 2-4 pages) and stick it on the wall if you are a programmer.
I think it is the most convenient way of outputting a string. I find myself using it even if I'm programming in C++ or on Arduino, for example. sprintf it to a buffer, serial.print the buffer, leave me alone with the made up bullshit. C string formatting is simple, elegant and it works regardless of the architecture's number of bits or the platform.
This doesn't seem to refute the claim that printf is completely untyped. Also, the OP has outlined the not-so-elegant hacks you need for your printf to work "regardless of the architecture's number of bits or the platform".
Remember I typedef'ed int16_t to I16? That means I'm going to instinctively want to put that "i" in there, right before the "16", every time. Which makes perfect sense, really, because IT IS STILL AN I16. That I'm asking for output in hex, has NOTHING to do with the input data type, and should NOT replace or require changes to that convention.
The problem with this specific strategy is that printf() commands are not type specifiers, they are format conversions. And if you understand the way that C passes arguments, you should know that "I16" is not the type of any actual argument to printf(). All arguments to a variadic function undergo the usual type conversions, which means that what printf() receives is simply an "int". It's seductive to think that you can fix C's unspecified-integer-width problems with type definitions and make everything a known number of bits, but this is not generally the case.
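The promotion can be observed directly. Here is a minimal sketch (take_one is a made-up helper, not anything from printf):

```c
#include <stdarg.h>

/* In a variadic call, every integer type narrower than int undergoes
 * the default argument promotions, so an int16_t argument arrives as
 * a plain int and must be read back with va_arg(ap, int). */
static int take_one(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    int v = va_arg(ap, int);   /* va_arg(ap, short) would be undefined */
    va_end(ap);
    return v;
}
```

Passing a 16-bit value through take_one hands the callee an int; there is no "I16" left at the receiving end.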
It takes 4 seconds to type c lib printf into google
More programmers who are only as smart as google
sprintf it to a buffer, serial.print the buffer, leave me alone with the made up bullshit.
Excellent, I'll make sure to give your products special attention when it comes to stack overflow exploits.
it works regardless the architecture number of bits or platform.
Yes, as long as you don't care whether overflow is detected or how many bits are actually given.
This document explains the interface of the printf family.
http://pubs.opengroup.org/onlinepubs/009695399/functions/fprintf.html
Notice that this is not the description of an implementation, but the standard itself. So results may vary depending on the laziness of the lib people.
There is a lot of implicit type conversion going on, you should be able to make it more strict using the length modifiers. Still, no errors will be shown, it's C, not java.
If you really don't like it, you can get angry for a week and create your own; it's not impossible. But remember that the format strings occupy memory, so using %d is more efficient than using %int_16.t
http://chibios.sourceforge.net/html/group__chprintf.html
The big problem with C interfaces like printf() isn't that they are hard to remember, it's that they are completely untyped.
Not completely: compilers do verify the printf arguments if you use a literal format string, and complain if they don't match. IIRC GCC has an attribute that lets you write printf-like functions of your own and have the compiler verify them.
EDIT: from GCC manual:
format (archetype, string-index, first-to-check)
The format attribute specifies that a function takes printf, scanf, strftime or strfmon style arguments which should be type-checked against a format string. For example, the declaration:
extern int
my_printf (void *my_object, const char *my_format, ...)
__attribute__ ((format (printf, 2, 3)));
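Put together, a wrapper using that attribute might look like the sketch below (my_log and PRINTF_LIKE are made-up names; the attribute itself is the real GCC/Clang one). With -Wformat enabled, callers of my_log then get the same mismatch warnings as callers of printf.

```c
#include <stdarg.h>
#include <stdio.h>

#if defined(__GNUC__)
#define PRINTF_LIKE(fmt_idx, arg_idx) \
    __attribute__((format(printf, fmt_idx, arg_idx)))
#else
#define PRINTF_LIKE(fmt_idx, arg_idx)  /* no-op on other compilers */
#endif

/* The format string is parameter 3; its arguments start at parameter 4. */
static int my_log(char *buf, size_t n, const char *fmt, ...) PRINTF_LIKE(3, 4);

static int my_log(char *buf, size_t n, const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    int r = vsnprintf(buf, n, fmt, ap);
    va_end(ap);
    return r;   /* my_log(buf, n, "%d", "oops") now warns at compile time */
}
```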
The big problem with C interfaces like printf() isn't that they are hard to remember, it's that they are completely untyped.
Smart compilers can recognize the printf call and check the parameters.
The big problem with C interfaces like printf() isn't that they are hard to remember, it's that they are completely untyped.
exactly, for that reason I prefer to create my own collection of functions, one function for every data type
uint32_t ---> put_uint32
sint32_t ---> put_sint32
char_t ---> put_char (which can be 8 or 16bit)
string_t ---> put_string
the code is also smaller than printf, and I do not need all the support for the stack
Ada works this way, and so does my embedded C
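A sketch of what one such per-type function might look like (put_uint32 follows the poster's naming; the character-sink callback and demo buffer are my own plumbing, not the poster's code):

```c
#include <stdint.h>

/* One function per type, no format string to parse: put_uint32 emits
 * the decimal digits of v through a caller-supplied character sink,
 * so it works the same over a UART, a buffer, or anything else. */
static void put_uint32(void (*put)(char), uint32_t v)
{
    char tmp[10];                        /* 4294967295 has 10 digits */
    int n = 0;
    do {
        tmp[n++] = (char)('0' + (char)(v % 10));
        v /= 10;
    } while (v != 0);
    while (n-- > 0)
        put(tmp[n]);                     /* most significant digit first */
}

/* Demo sink: appends to a small buffer (illustration only). */
static char sink_buf[16];
static int  sink_len;
static void sink(char c) { sink_buf[sink_len++] = c; sink_buf[sink_len] = '\0'; }
```

The type check then comes for free: passing anything but a uint32_t to put_uint32 triggers an ordinary conversion warning, no format attribute needed.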
Translations become more complicated though.
With printf you can just swap the first arguments, with custom stuff and operator<< this becomes more work.
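For completeness, POSIX (not ISO C) numbered conversions handle exactly that reordering, so only the translated format string changes, never the call site. A sketch (supported by glibc and other POSIX libcs; the "translation" here is invented for illustration):

```c
#include <stdio.h>

/* POSIX %n$ conversions bind each conversion to a numbered argument,
 * so a translated format string can reorder words freely. */
static void translate_demo(char *en, char *other, size_t n)
{
    snprintf(en,    n, "%1$d green %2$s", 3, "bottles");  /* count first */
    snprintf(other, n, "%2$s: %1$d",      3, "bottles");  /* noun first  */
}
```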
I agree. The elegance of printf is that you can mix fixed text and numbers. But nobody says you can't create your own printf. It isn't difficult to do that; there are many examples on how to create a function with a variable number of arguments. However.. some compilers (GCC for example) check the format specifier of printf against the type of the variable and emit a warning if there is a mismatch.
some compilers (GCC for example) check the format specifier of printf against the type of the variable and emit a warning if there is a mismatch.
SierraC does not, which was the first reason I decided to create a dedicated function, one function for every data_type.
Ada works exactly this way, so from my point of view I am simplifying my job (translating from Ada to C)