After getting past the first chapters in a C book, I found myself bored with only getting "hello world" output or performing some math calculations.
This is a universal problem for anyone interested in using programming as a tool to achieve a purpose, instead of learning it for the fun of it.
Getting nice visual and sonic feedback from the code you write is an important way to keep your motivation high. (With dopamine kicks!)
My recommended solution is not to try and incorporate everything into the C program, simply because I do not do that in real life either. Instead, use C for the interesting bits, and pipe its inputs from, and outputs to, useful sonification or visualization tools.
The main ones I recommend are the following (a minimal sketch of each follows the list):
- Gnuplot. Program output consists of points, which will be visualized into plots – curves or surfaces – by a simple gnuplot command. Typically used with math stuff, like approximating functions, comparing your own fixed-point math implementation against the native floating-point implementation, and so on.
- Graphviz. Program output is DOT language text, describing graphs or directed graphs, including trees and forests.
This is particularly useful for visualizing nested function execution, and many different types of data structures, from heaps, trees, and linked lists to hash tables, disjoint-set data structures, and so on.
- Netpbm tools. Program output is an image in one of the very easy-to-use formats (PBM/P4 for bitmaps, PGM/P5 for grayscale images, and PPM/P6 for full-color images). If directed to a file, the output can be opened in Photoshop, GIMP, and many other image editors. Netpbm is a collection of utilities to convert images to and from these formats. For example, if your program modifies an image, you use the Netpbm tools to convert the input image, losslessly, to the corresponding PBM/PGM/PPM format; your program then does the pixel manipulation and outputs the modified image in PBM/PGM/PPM format, which can then be converted to whatever format you like. To load and save these images correctly, you only need a simple header file implementing a couple of functions; not even a library. You can also easily write your own functions to do this with the tools you normally use, although they will typically be much less portable.
- SVG 1.1. This is the file format browsers support for describing vector graphics. For example, the graphics on my home page are simply an SVG image embedded in the HTML file, all in plain text you can see if you view the page source. If you save or redirect the output to a file, you can open the generated image in any browser. I use this to generate vector graphics representations and diagrams, especially using 1:2 isometric perspective for 3D objects. (The math for that is trivial, and you've seen similar in pixel art games already.)
- aplay -f S32_LE -c1 -r48000 or paplay --format=s32le --channels=1 --rate=48000 to play mono sound at 48000 samples per second, each sample being an int32_t. This is the easiest way to generate sounds. If you intend to target low-quality audio reproduction on a microcontroller, you can use -f U8/--format=u8 instead, in which case each sample is an unsigned char, between 0 and 255, corresponding roughly to various PWM and PDM methods of generating a unipolar (0 to Vcc) audio signal.
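For Gnuplot, the workflow can be as small as this sketch (the file name, the function, and the plot command are just ones I picked):

    /* plot.c: print (x, sin(x)) pairs, one per line.
       Compile: gcc -O2 plot.c -o plot -lm
       Use:     ./plot > data.txt
                gnuplot -p -e "plot 'data.txt' with lines"     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        for (double x = 0.0; x <= 6.2832; x += 0.01)
            printf("%.6f %.6f\n", x, sin(x));
        return 0;
    }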
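For Graphviz, a similar sketch: the program prints DOT text for a tiny binary tree, and dot renders it (the node values are arbitrary):

    /* tree.c: emit a small binary tree in the DOT language.
       Use: ./tree > tree.dot
            dot -Tpng tree.dot -o tree.png                     */
    #include <stdio.h>

    int main(void)
    {
        printf("digraph tree {\n");
        printf("    node [shape=circle];\n");
        printf("    4 -> 2;  4 -> 6;\n");
        printf("    2 -> 1;  2 -> 3;\n");
        printf("    6 -> 5;  6 -> 7;\n");
        printf("}\n");
        return 0;
    }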
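For Netpbm, here is a minimal sketch that writes a 256x256 full-color PPM (P6) gradient to standard output; pnmtopng (or GIMP) can then convert or open it:

    /* gradient.c: write a 256x256 PPM (P6) image to stdout.
       Use: ./gradient > gradient.ppm
            pnmtopng gradient.ppm > gradient.png               */
    #include <stdio.h>

    int main(void)
    {
        const int w = 256, h = 256;
        printf("P6\n%d %d\n255\n", w, h);   /* text header, then binary RGB */
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                unsigned char rgb[3];
                rgb[0] = (unsigned char)x;  /* red increases left to right   */
                rgb[1] = (unsigned char)y;  /* green increases top to bottom */
                rgb[2] = 128;               /* constant blue                 */
                fwrite(rgb, 1, 3, stdout);
            }
        return 0;
    }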
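For SVG, the output is just text, too; this sketch only draws a rectangle and a circle (shapes and colors picked at random), but the pattern is the same for isometric diagrams:

    /* shapes.c: emit a tiny SVG 1.1 image to stdout.
       Use: ./shapes > shapes.svg, then open shapes.svg in any browser. */
    #include <stdio.h>

    int main(void)
    {
        printf("<?xml version=\"1.0\"?>\n");
        printf("<svg xmlns=\"http://www.w3.org/2000/svg\""
               " width=\"200\" height=\"120\">\n");
        printf("  <rect x=\"10\" y=\"10\" width=\"120\" height=\"80\""
               " fill=\"#88c\"/>\n");
        printf("  <circle cx=\"150\" cy=\"60\" r=\"40\" fill=\"#c88\"/>\n");
        printf("</svg>\n");
        return 0;
    }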
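And for the aplay route, this sketch writes one second of a 440 Hz sine wave as raw signed 32-bit samples; it assumes a little-endian machine, so that writing int32_t directly matches S32_LE:

    /* tone.c: one second of a 440 Hz sine as raw S32_LE samples.
       Compile: gcc -O2 tone.c -o tone -lm
       Use:     ./tone | aplay -f S32_LE -c1 -r48000            */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        const double pi = 3.14159265358979323846;
        const double rate = 48000.0, freq = 440.0, amplitude = 0.25;
        for (long i = 0; i < 48000; i++) {
            int32_t sample = (int32_t)(amplitude * 2147483647.0 *
                                       sin(2.0 * pi * freq * i / rate));
            fwrite(&sample, sizeof sample, 1, stdout);
        }
        return 0;
    }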
For interactive stuff, I wholeheartedly recommend starting with Curses (especially ncursesw) for terminal programs. ncursesw supports Unicode glyphs, so when you use a good-coverage font in your terminal, you have much more available than the old-style DOS cp437 ANSI graphics characters. You can also use ANSI escape sequences to colorize your terminal output, although for unbuffered input you need to do more work. Yes, this is the "DOS-like" stuff, but it is much simpler than full GUI approaches.
One of my favourite examples is to combine a maze generator and a single-player maze game in a Curses application, as a minimal start towards old Rogue-like games. (Does NetHack ring a bell?)
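To show how little code that needs to get going, here is a minimal curses sketch (no maze yet, just an '@' you can walk around; the key bindings are my own choice):

    /* walk.c: move an '@' with the arrow keys; 'q' quits.
       Compile: gcc -O2 walk.c -o walk -lncursesw              */
    #include <locale.h>
    #include <curses.h>

    int main(void)
    {
        setlocale(LC_ALL, "");        /* needed for ncursesw Unicode output */
        initscr();
        cbreak();
        noecho();
        keypad(stdscr, TRUE);         /* arrow keys arrive as KEY_* codes   */
        curs_set(0);

        int y = LINES / 2, x = COLS / 2, ch = 0;
        while (ch != 'q') {
            erase();
            mvaddch(y, x, '@');
            refresh();
            ch = getch();
            if (ch == KEY_UP    && y > 0)         y--;
            if (ch == KEY_DOWN  && y < LINES - 1) y++;
            if (ch == KEY_LEFT  && x > 0)         x--;
            if (ch == KEY_RIGHT && x < COLS - 1)  x++;
        }
        endwin();
        return 0;
    }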
When using Curses "DOS-like", you still have full control of the program flow. When you move to Gtk or other GUI widget toolkits, you must relinquish that full control and switch to an event-driven model. Essentially, the GUI toolkit is given full rein over the control flow, and it will simply call some of your functions when necessary. You cannot do "delay for 50 milliseconds" anymore; instead you must use an interface to say "after 50 milliseconds or more have passed, please call this function, toolkit". While Gtk is pretty easy to use if you have experience in C, combining learning C and the event-driven model into a single learning step seems a bit risky/harsh/difficult to me.
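To make that concrete, here is a minimal sketch of the "please call this function after 50 ms" pattern using plain GLib (the same mechanism Gtk timeouts use); the ten-tick limit is only there so the example terminates:

    /* tick.c: event-driven "call me every 50 ms" using the GLib main loop.
       Compile: gcc -O2 tick.c -o tick $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>

    static gboolean tick(gpointer data)
    {
        static int count = 0;
        g_print("tick %d\n", ++count);
        if (count >= 10) {
            g_main_loop_quit((GMainLoop *)data);
            return G_SOURCE_REMOVE;   /* stop calling this function          */
        }
        return G_SOURCE_CONTINUE;     /* keep calling it every 50 ms         */
    }

    int main(void)
    {
        GMainLoop *loop = g_main_loop_new(NULL, FALSE);
        g_timeout_add(50, tick, loop);  /* "after 50 ms, please call tick()" */
        g_main_loop_run(loop);          /* hand control flow to the loop     */
        g_main_loop_unref(loop);
        return 0;
    }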
Unfortunately, all of this is much easier in Linux than in Windows, because in Linux you basically have everything you need already installed, or can get it installed with just a few clicks (or a single command starting with sudo apt install or sudo dnf install), as everything is provided in the standard software repositories of essentially all Linux distributions. It helps that everything, even the Curses documentation, is available as man pages, so a simple man function or man -k topic will usually provide you with details on what you seek. I also often have a web browser open at the Linux man pages project and whatever library pages or other documentation I use, since memorizing details is wasted effort (and only risks bugs when one remembers the details wrong).
I have not had an opportunity to work with anyone learning game programming via SDL2 (which is C and highly portable, and used as the basis of many, many 2D games), but as I mentioned earlier, it is another possibility. It is somewhat event-driven, too, but with a focus on near-realtime behaviour and similar, as typical games need.
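For reference, a minimal SDL2 skeleton looks roughly like this (just a window and its event loop, nothing drawn):

    /* window.c: open a window, run the event loop, quit on window close.
       Compile: gcc -O2 window.c -o window $(sdl2-config --cflags --libs) */
    #include <SDL.h>

    int main(void)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }
        SDL_Window *win = SDL_CreateWindow("hello", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        int running = 1;
        while (running) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT)
                    running = 0;
            SDL_Delay(16);            /* roughly one 60 Hz frame            */
        }
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }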
I used to do some volunteer tutoring/bugfixing for STEM students with Linux laptops, and it was fun to see how fast they progressed from using the default tools to adjusting everything for maximum personal comfort/productivity, and concentrating on getting the task at hand done (learning programming, and using various computational tools from computer algebra to numeric calculation to running distributed simulations on the local HPC clusters). Many of them were not interested in Linux at all, and only used it as a tool to get things done. I think that is a very good, very useful attitude. Linux has no value to me unless I can use it to achieve something interesting or useful to me.
So, my suggestion would be to install some Linux distribution, perhaps Linux Mint, in a virtual machine under Windows, and do this stuff there.
Or, if you happen to have an old laptop not grunty enough to run Windows anymore, install Linux on that. (I use a 6+ year old EliteBook 840 G4 with a Core i5 7200U, and it is plenty grunty enough.)
Otherwise, you'll need advice from others who used Windows to learn C programming. My own experience with such persons is from a decade ago (I haven't used Windows in anger for longer than that, even though I used to use and even administer a network of Windows machines over two decades ago), but it was consistently... not positive. They hadn't learned C per se, they had learned the Microsoft way, and had big trouble learning portable stuff (especially anything POSIX.1, which is absolutely useful), basically because they needed to unlearn Microsoft-specific details and truths they had picked up without noticing, having learned to assume them to be axiomatically true everywhere; and now they had to replace that with new learning. Unlearning is harder than learning, so it ended up being 2× or more difficult for them.
Those who had more than one OS available, and were used to the idea that a fact in one is not necessarily a fact in the other, even though they are very similar – just with differences in approach and interface details – had it much, much easier, because they didn't need to work on the "unlearning" part.
Old C64 and DOS plus Windows experience already gives you a much better starting point, although mixing in some macOS and perhaps Linux would not hurt.
For the best advice, I recommend you listen to someone who uses Windows to write C code both for hosted stuff (Windows or other OS) and for programming microcontrollers. Like I said, my observations are dated and negatively colored; someone who has done that successfully will be of more use to you.
When you get the basics of C down, you'll find it rather straightforward to progress to Gtk+ (or the other widget toolkits, or to SDL) to do GUI stuff, if you want to. The main step there is relinquishing control of the program flow to the toolkit/library, and the associated change in thinking/code-structuring/design approach.
I often don't go that way, because I prefer doing the user interface in Python – it requires no compilation, and lets end users easily tweak UI details without knowing much about programming – with any heavy computation or trickery or proprietary stuff implemented in a native library, C code compiled to a shared library, that is accessed via the Python ctypes module. This is quite portable, too, as long as the shared library is compiled for the target architecture and OS.
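As a sketch of how small the C half of that split can be (the names here are made up; on the Python side you load the .so with ctypes.CDLL and set argtypes/restype before calling the function):

    /* fastpart.c: example native-library half, one exported function.
       Compile: gcc -O2 -fPIC -shared fastpart.c -o libfastpart.so     */
    #include <stddef.h>

    double sum_squares(const double *data, size_t count)
    {
        /* The kind of tight loop worth keeping out of Python. */
        double sum = 0.0;
        for (size_t i = 0; i < count; i++)
            sum += data[i] * data[i];
        return sum;
    }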
That said, I have written Gtk+ GUI apps in C for specific useful cases. There should be a skeleton of a VU meter here somewhere, for example.
For connecting to microcontrollers, I currently use USB Serial, and in Linux, macOS, and the BSDs, the termios interface; no separate library. (I'm particularly unhappy with libserial, libusb, etc.; they seem to work for many, but having taken a careful look at their code, I don't trust them at all.) This applies to both C code and Python GUI code. Unfortunately, it is not portable to Windows.
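A minimal sketch of the termios setup I mean is below; the device path and baud rate are placeholders, and error handling is kept to the bare minimum:

    /* serial.c: open a USB serial device in raw mode using termios. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>

    static int open_serial(const char *path, speed_t speed)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd == -1)
            return -1;

        struct termios tio;
        if (tcgetattr(fd, &tio) == -1) { close(fd); return -1; }
        cfmakeraw(&tio);              /* 8N1, no echo, no line editing */
        cfsetispeed(&tio, speed);
        cfsetospeed(&tio, speed);
        if (tcsetattr(fd, TCSANOW, &tio) == -1) { close(fd); return -1; }
        return fd;
    }

    int main(void)
    {
        int fd = open_serial("/dev/ttyACM0", B115200);
        if (fd == -1) { perror("/dev/ttyACM0"); return 1; }
        /* read() and write() on fd as usual ... */
        close(fd);
        return 0;
    }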
I *often* play with USB HID devices, like keyboards, mice, joysticks, gamepads, and so on. The Linux interface is truly simple – a single one covers them all – and if you want, you can easily synthesize such a device using a simple (privileged) application. This is why I much prefer microcontroller development boards with native USB interfaces. My preferred setup is USB Serial + USB HID endpoints on the same device. (There are several 8-bitters from the CH552G to the ATmega32U4 capable of this. A Teensy 4.x can provide data over USB Serial at 25+ Mbytes/s, or 200 Mbit/s, as it supports USB 2.0 High Speed (480 Mbit/s); most only support Full Speed (12 Mbit/s).)
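On Linux, the event device interface (evdev, /dev/input/event*) is the unified one for keyboards, mice, joysticks, and gamepads; reading it is just a loop over fixed-size records, roughly like this sketch (the device path is a placeholder, and you need read access to the node, which is where the udev rule below comes in):

    /* events.c: print key/button events from a Linux evdev device node. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <linux/input.h>

    int main(int argc, char *argv[])
    {
        const char *path = (argc > 1) ? argv[1] : "/dev/input/event0";
        int fd = open(path, O_RDONLY);
        if (fd == -1) { perror(path); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == sizeof ev)
            if (ev.type == EV_KEY)    /* key or button press/release */
                printf("code %u value %d\n", (unsigned)ev.code, (int)ev.value);

        close(fd);
        return 0;
    }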
For bulk and isochronous transfers, I like to use the Linux-specific "usbfs". It is not a filesystem, just a device interface for the "raw USB device", without any kernel or userspace driver at all. It is what libusb typically uses under the hood, but in a way I find much less than optimal.
The one key "trick" in Linux is to add a udev rule that grants your user account, or a group it belongs to, full access to the device. That way there are no privilege issues. People commonly forget this, or skip it as "I'll do it later, I just want to test this for now", and then complain about all the access- and privilege-related problems they encounter, and how difficult everything seems to be.
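As an illustration of what such a rule looks like (the file name, vendor/product IDs, and group below are placeholders; use the IDs lsusb reports for your device), a single line in /etc/udev/rules.d/ along these lines does it:

    # /etc/udev/rules.d/90-myboard.rules -- placeholders, adjust to your device
    SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", MODE="0660", GROUP="plugdev"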
I even use my own wrapper around avrdude, so that when programming my ATmega32U4 boards (Pro Micro clones off eBay, using the Arduino Leonardo bootloader), the wrapper script resets the target microcontroller, waits for it to re-appear, and reprograms whichever device appears, regardless of what port/device was selected in the Arduino UI. That way I avoid the occasional case where the OS changes the microcontroller's "USB address" or device node when the microcontroller is reset/rebooted.
Teensies have their own GUI-based utility which works solidly for me, but there is also a command-line one available at Paul Stoffregen's GitHub.
A few years ago, I mentioned in another thread here that I was thinking of creating a course on how to interface microcontrollers (via USB Serial and USB HID) to a GUI program written in Python for Linux, BSDs, and macOS, but it fell through. I don't have anything written down, but as it is something I often do, I have lots of ideas on how to do it, starting from manipulating LEDs and relays on the microcontroller, to transferring large amounts of ADC'd data and processing it using FFT or similar (using e.g. fftw3).