Hmmm. I think my idea was either not expressed usefully or it failed to be noticed (which probably comes to the same).
The latter is always possible with me, because my English often fails me; I miss subtext and cues.
I am aware that there is a big difference in getting compiled and interpreted stuff going, but surely that isn't what makes a language. [...] So, are we really saying that the language's lexical features aren't actually very important, and what everyone likes is the ability to type into a live system and watch it run or crash as they do so?
No, we (I believe!) are saying that a small detail in one context can be a big factor in another context. That is, there isn't one generally applicable set of weights one could use to calculate a single scalar score describing the usefulness of a programming language.
Or, put another way, the uses of programming languages vary so wildly that there simply cannot be a single programming language that fits them all.
A better question would be: when does the interpretability/scriptability of a language override its lexical/library features, or vice versa?
(I have my own answer, already outlined in my posts where I describe how, when, and why I use Python, but it definitely isn't "universal". It would be interesting to me to hear when others find themselves crossing that line. I expect use cases where domain-specific languages were traditionally used would be quite informative.)
As to the C standard library, I find it quite deficient, and instead prefer POSIX C (which basically all operating systems' standard C libraries implement, except for Windows). I've also started a couple of threads here about replacements for the standard C library, as it is not an intrinsic part of the language: the standard specifies a freestanding environment, in which the standard library is not available. Indeed, freestanding C/C++ is, I believe, the most common choice for software development on microcontrollers.
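As a small, purely illustrative example (not from any of those threads): getline() below is a POSIX.1-2008 facility with no counterpart in the ISO C standard library, which is the kind of thing that makes me prefer POSIX C on hosted systems.

```c
/* Illustrative only: getline() is POSIX.1-2008, not ISO C, so this
   builds on Linux/BSD/macOS but not against a strictly standard C library. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>  /* ssize_t */

int main(void)
{
    char    *line = NULL;
    size_t   size = 0;
    ssize_t  len;

    /* getline() allocates and grows the buffer as needed;
       ISO C has no equivalent facility. */
    while ((len = getline(&line, &size, stdin)) != -1)
        printf("%ld bytes: %s", (long)len, line);

    free(line);
    return EXIT_SUCCESS;
}
```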
I find it useful to have a C interpreter that can quickly execute a script when what matters (again) is the speed of the programmer.
Okay, that would be a valid use case. I stand corrected.
The reason I didn't think of that is that the way I develop my own projects means I don't actually ever need to wait for the compiler. Key algorithms and verification suites (like the Xorshift64* with Marsaglia polar method PRNG I wrote yesterday elsewhere, to generate normally distributed pseudorandom floats on a Cortex-M7) I first implement in separate programs to test and verify their operation, before including them in the larger project. For larger projects (and even smaller ones), I use Make extensively, so only the recently modified sources are recompiled before the project is linked, and even on this five-year-old HP EliteBook 840 G4 laptop with an Intel Core i5-7200U processor, I never need to wait. I would not save any time even if the compiler were faster.
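For reference, here is a rough sketch of that combination (not the exact code I posted, but the same idea): Xorshift64* as the uniform source, with the Marsaglia polar method turning pairs of uniforms into normally distributed floats.

```c
/* Sketch of a Xorshift64* generator feeding the Marsaglia polar method.
   Compile with e.g.  gcc -O2 -o normal normal.c -lm  */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

static uint64_t prng_state = UINT64_C(1);  /* seed with any nonzero value */

/* Xorshift64*: xorshift of the 64-bit state, then a multiplicative scramble. */
static inline uint64_t xorshift64s(void)
{
    uint64_t  x = prng_state;
    x ^= x >> 12;
    x ^= x << 25;
    x ^= x >> 27;
    prng_state = x;
    return x * UINT64_C(2685821657736338717);
}

/* Uniform float: maps the 24 high bits of the generator to [-1, 1). */
static inline float uniform_pm1(void)
{
    return ((int32_t)(xorshift64s() >> 40) - 8388608) / 8388608.0f;
}

/* Marsaglia polar method: each accepted (u, v) pair yields two normal
   deviates, so the second one is cached for the next call. */
float normal_random(void)
{
    static float  cached;
    static int    have_cached = 0;
    float         u, v, s, factor;

    if (have_cached) {
        have_cached = 0;
        return cached;
    }

    do {
        u = uniform_pm1();
        v = uniform_pm1();
        s = u*u + v*v;
    } while (s >= 1.0f || s == 0.0f);

    factor = sqrtf(-2.0f * logf(s) / s);
    cached = v * factor;
    have_cached = 1;
    return u * factor;
}

int main(void)
{
    for (int i = 0; i < 8; i++)
        printf("%+.6f\n", normal_random());
    return 0;
}
```

Verifying something like this (mean, variance, tail behaviour) in a standalone test program is exactly the kind of work I do before the code ever goes near the actual firmware.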
For larger projects, like the Linux kernel, I run the build in the background (nice'd and ionice'd) while I check the changelog, documentation, et cetera. Even there, the compilation itself isn't necessarily the slowest part of the build; there is a lot of dependency tracking and such, which is why you'll see a massive improvement in build speed if you have more RAM (because Linux uses otherwise unused RAM as a cache, which speeds up random access). Of course, now with fast SSDs the difference has diminished quite a bit, but not completely, because Linux is one of those projects with LOTS of small files accessed during the build, so it's more about I/O requests per second than about raw data bandwidth. Caching still helps.
I only intend to highlight the advantages of interpreted code over compiled code.
I've described how I do that too. I'm not trying to counter anyone here; I too am just trying to keep the discussion focused on its useful and practical aspects, and not let it devolve into an opinion shootout.