Given that Unicode support is patchy across platforms: how is one supposed to know what glyphs are widely supported?
Easy: don't assume; use only what you really need. Just because something is technically possible and novel doesn't mean you have to use it. Don't expect new features, especially unnecessary gimmicks, to work everywhere.
I hope that the important Greek letters and symbols used in electronics, such as μ, Ω, π, Δ, °, etc., are widely supported,
Indeed, let's hope so. I think it's a fairly safe assumption that they work 99.99% of the time, because everybody has been using computers to produce those symbols since the 1990s; they are not some unnecessary few-years-old gimmick like the flags.
The fact that emoji spread from a Japan-only SMS novelty to something fundamentally supported on all platforms and used on a daily basis would hint at them not being “unnecessary gimmicks”. Being dismissive of it is just gonna get you in trouble, if you’re a developer of any kind. FWIW, Linux was the last major holdout platform; Mac and Windows have had emoji support for a decade at this point, it’s nothing new.
What irks me a lot more is the small number of things that still don’t support Unicode at all. It’s shrinking, but… why?!? Developers should have moved to Unicode around the turn of the millennium.
I'm perplexed at why any OS would go to the trouble of implementing useless parts of the Unicode standard, like turd emojis, whilst ignoring more useful things such as country flags.
It’s clearly not a technical limitation, since it’s not as though they’d need a separate code path. Emoji are nothing more than color bitmap or vector glyphs. Our OSes have supported color text rendering for ages (including pixel-level color, thanks to code paths for both grayscale and subpixel antialiasing), they’ve supported bitmap fonts for even longer, and so it likely wasn’t a big deal to allow color fonts.
Note that extra code is only needed for color emoji support; one could, in theory, install a black-and-white emoji font on any OS whose Unicode implementation has Supplementary Multilingual Plane support. (Which any self-respecting implementation does, since all sorts of useful character blocks are among those.)
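To make the plane distinction concrete, here’s a small Python sketch (the specific sample characters are my own choice): the Greek/technical symbols all sit in the Basic Multilingual Plane, at or below U+FFFF, while emoji live in the supplementary planes above it, which is exactly what an implementation needs full code-point support to handle.

```python
# Sketch: Greek/technical symbols are in the Basic Multilingual Plane
# (code points <= U+FFFF); emoji sit above it in the supplementary
# planes, so a Unicode implementation must handle code points > U+FFFF.
bmp_symbols = "\u03bc\u03a9\u03c0\u0394\u00b0"   # μ Ω π Δ °
emoji = "\U0001f4a9\U0001f600"                   # 💩 😀

for ch in bmp_symbols + emoji:
    plane = ord(ch) >> 16                        # which Unicode plane?
    print(f"U+{ord(ch):04X} is in plane {plane}")

assert all(ord(c) <= 0xFFFF for c in bmp_symbols)  # all BMP
assert all(ord(c) > 0xFFFF for c in emoji)         # all supplementary
```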
Nobody knows for sure, but the leading theory is that Microsoft is attempting to avoid the issue of flags of disputed areas like Taiwan. This has been at times troublesome for other vendors.
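Part of why flags are easy to sidestep: a flag “emoji” is not a single code point but a pair of Regional Indicator Symbols, so a platform that ships no flag glyphs can simply render the two letter-like indicators instead. A Python sketch (the `flag` helper is my own, not a standard API):

```python
# Sketch: flag emoji are pairs of Regional Indicator Symbols
# (U+1F1E6..U+1F1FF), one per letter of the two-letter country code.
# A platform without flag glyphs just shows the two indicator letters.
REGIONAL_A = 0x1F1E6  # REGIONAL INDICATOR SYMBOL LETTER A

def flag(country_code: str) -> str:
    """Build the flag sequence for a two-letter code (hypothetical helper)."""
    return "".join(chr(REGIONAL_A + ord(c) - ord("A"))
                   for c in country_code.upper())

print(flag("JP"))  # renders as a flag, or as letter-like indicators
print([hex(ord(c)) for c in flag("JP")])  # ['0x1f1ef', '0x1f1f5']
```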
Given that Unicode support is patchy across platforms: how is one supposed to know what glyphs are widely supported? I hope that the important Greek letters and symbols used in electronics, such as μ, Ω, π, Δ, °, etc., are widely supported; otherwise we might as well stick with plain old ASCII.
“Patchy” is quite an overstatement. Every major platform has Unicode support now, and has had it for years. The issue of glyphs is, for the most part, not one of Unicode support but of fonts. But frankly, that problem was also solved ages ago. The symbols you list were fully supported in the very earliest Unicode fonts. (In fact, they were also supported in many 8-bit character sets. As someone who’s been a Mac user since the early 90s, I have never owned a computer whose default character set didn’t include all 5 glyphs you listed, since the old Mac Roman character set included them all.)
The only places where Unicode still doesn’t seem to be supported properly (or at least not by default) are some web-server backends (e.g. a forum’s database mistakenly configured with a legacy code page instead of Unicode), the Windows DOS prompt, and things like basic ANSI C. In contrast, every major platform (and most minor ones) uses Unicode in its text-handling APIs, so a developer using those APIs gets Unicode support for free.
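That forum-database failure mode is the classic mojibake round-trip: UTF-8 bytes stored or displayed as if they were a legacy single-byte code page. A small Python sketch (cp1252 is my stand-in choice for the misconfigured code page):

```python
# Sketch: what a misconfigured backend does to Unicode text.
# UTF-8 bytes reinterpreted as a legacy single-byte code page (cp1252
# here) turn the text into mojibake; reversing both steps recovers it.
original = "\u0394V = 5 \u00b5V"                   # ΔV = 5 µV
garbled = original.encode("utf-8").decode("cp1252")
print(garbled)                                     # mojibake, e.g. Î”V = 5 ÂµV
restored = garbled.encode("cp1252").decode("utf-8")
assert restored == original                        # lossless if caught in time
```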
What annoys me is Luddite admins and devs who, when encountering a Unicode issue, instead of fixing it just take the lazy way out and say “use ASCII instead”, dragging out the transition to Unicode.