
Language/compiler and the most appropriate CPU to run the application.


From a software onlooker...

Is it reasonable to expect that the development platform/language/libraries used in writing an application will result in a preference for a CPU in terms of number of cores/threads etc?

I hope I am showing my ignorance at a sufficiently low level so that an appropriate education will follow on.

Different languages and compilers (or even interpreters) definitely differ in the performance they can provide. C is a good example: the compiler can be the difference between unusable and heavily optimized. Languages and compilers also differ in their threading support, which affects how well a multi-threaded program can take advantage of additional cores.

This isn't usually something you have to worry about when you're just starting to program, unless you are working incredibly close to the metal. Modern compilers are incredibly efficient, and threading libraries are easy to use and widely available.


--- Quote from: IconicPCB on June 19, 2019, 02:52:20 am ---Is it reasonable to expect that the development platform/language/ libraries used in writing an application will result in a preference for a  CPU in terms of number of cores/ threads etc?

--- End quote ---


I like a language and processor that allow hard[1] realtime applications:

* guaranteed worst case timings, without executing the code and hoping you caught a bad case
* input-to-processing latencies of around 10ns
* FPGA-like i/o structures, e.g. SERDES, strobed, clocked etc
* guaranteeing when the next output will occur
* recording when the last input occurred
* easily expandable processing power by adding more chips and without changing code
* hardware support for waiting for multiple events, e.g. i/o completion, or timeouts, or messages from another core
* in other words, put a simple RTOS in silicon!
* language support for all the above
There's only one candidate that satisfies those: xC on the xCORE processors. Buy them at Digikey and elsewhere.

[1] as I'm sure you are aware, hard realtime is predictable, not necessarily fast

Nominal Animal:
Is it reasonable to expect...?  I dunno.  Depends.

I write a lot of code using C99 and POSIX.1-2008.  If one uses the appropriate types (like size_t instead of int for nonnegative lengths of in-memory objects), this is surprisingly portable even between 32- and 64-bit architectures, across a number of processor types (Intel/AMD and ARM in particular).  However, if I use vectorization (more properly, single-instruction-multiple-data extensions), that code usually has to be rewritten for each processor family; definitely when switching processor types.

For atomics, I still use the built-ins that compilers (Intel CC, gcc, etc.) provide, as I still see <stdatomic.h> as being a bit too "new" to rely on, right or wrong.  Personal quirk.

My favourite language for writing user interfaces is currently Python 3.  It is surprisingly portable, and I like the idea of the user interface side being easily modified, even by end users.  However, the current Python interpreters only run Python code in one thread at a time: you can have multiple real threads, but only one of them can execute Python code at any given moment (the global interpreter lock).  This means that you cannot use threads to parallelize Python computation; but you can e.g. have multiple threads blocked in I/O operations, waiting for incoming data to arrive or outgoing data to be sent.  This in turn leads to a specific style or approach to code that works best (for example, how to create user interfaces that stay responsive even while the application is doing some computation at the same time).

(Plus, it is easy to interface C code to Python, so I can do all the heavy work in C anyway.)

Even C/C++ code I write for microcontrollers (C on bare metal, C++ in Arduino environment) tends to be somewhat portable.  Obviously, parts that fiddle directly with the hardware change from one microcontroller to another, but to port the code, I usually only have to rewrite those hardware-facing parts.  I do not like the idea of relying on any single vendor; I fundamentally prefer the opportunity for competition instead.

So, in my case, the answer is no, my code does not directly indicate what CPU is the most appropriate to run the application, on any of the programming languages I use.  There are parts that I optimize or port to each CPU or CPU family, but that's about it.  Even the OS is pretty irrelevant, as long as it supports POSIX.1-2008 in a practical manner; most of my code runs equally well on Linux, Mac, and BSD variants. Oh, and aside from libraries or code used by a Python user interface, I do not use Windows, or even care if my code runs in Windows or not.  It is just not a relevant OS for me at all, being special and unlike anything else.

Do note, however, that a lot of C and even POSIX C code I encounter is tied to a particular word size (typically expecting int and pointers to be the same size, which is not true on all architectures).  It does take a bit of care (avoiding certain assumptions) to write easily portable code like I do.  That care is common in current GNU tools and utilities (because they work on both ILP32 and LP64 architectures), but varies a lot in other projects.  (I too learned this "the hard way", by having written code I wanted to work on both 32-bit and 64-bit architectures without kludges.)

I do know of special hardware that is particularly suited for a specific programming language, but I don't have any of those.


--- Quote from: IconicPCB on June 19, 2019, 02:52:20 am ---From a software onlooker...

Is it reasonable to expect that the development platform/language/ libraries used in writing an application will result in a preference for a  CPU in terms of number of cores/ threads etc?

I hope I am showing my ignorance at a sufficiently low level so that an appropriate education will follow on.

--- End quote ---

To what end?

* A development platform has a given CPU, configuration, and usually toolchain associated with it.
* A language is not associated with any particular CPU, but will be most popular (most commonly used, best supported) on a subset of them.
* Libraries that are compiled binaries are limited to a CPU family at least, if not a specific CPU/MCU part.  Library sources can be compiled for any target, subject to the same caveats as above.
* If you're looking at multicore CPUs and threads, you're automatically looking at a "supercomputer"*.  A high speed (GHz) CPU core, coupled to multiple levels of caches, repeated a few times (multicore, SMP more specifically), plus at least a few hundred megs of RAM and storage (preferably gigs), and preferably high speed network, peripheral or other connections.

*I like to think of things in ~80s terms: back then, a PC was little more than an embedded CPU.[1]  A workstation[2] was powerful enough to do things we consider normal PC activity today, while mainframe sized supercomputers[3] had the pure cranking power we consider normal today, but for very special-purpose applications due to their cost.

[1] 8 or 16 bit, low MHz, and 64k to 1MB address space (mostly RAM, as PCs need it), with enough peripherals bolted on, and software available, to be reasonably useful.  Nowadays, we might compare to an AVR8, PIC, STM8, or MSP430, or, heh, well, MC68k is still available to this day but not so mainstream, and I forget what else is common in this space right now.  These are all MCUs, so have a few peripherals in common (interrupt controllers, timers, serial and parallel IO), but have much less related to storage, or display (no FDC/HDC, no graphics), and have much less RAM, but much more ROM typically (a PC might've had 32k or so of EPROMs between its BIOS and expansion cards; this much Flash is common among MCUs).

[2] Workstations were usually 32 bit, sometimes 64 (give or take just when things were introduced), operating in the mid 10s of MHz (say 20-50MHz), had megs of RAM, high speed networking and storage (SCSI, Ethernet, etc.), high resolution graphics (low-color for CAD, or high-color for photo/video).  Today, this sort of space is filled by the more upscale ARM MCUs, which often include fractional-meg onboard RAM (but support many megs of external RAM), a meg or so of Flash, and support USB2 or 3, Ethernet, LCD panel graphics and more.  They're also often operated with an OS of some sort; note that Cortex-M parts have at most an MPU (memory protection, not virtual memory), so full Linux wants a Cortex-A with an MMU, though MMU-less uClinux builds do run on the larger M-class parts.  Performance is also comparable to PCs of the 90s -- an ARM may be simpler than a Pentium, but at 240MHz versus 90MHz, it's comparable or better!

[3] Take the Cray-1 for example.  Liquid-cooled beast of a machine, 64 bit, 80MHz, 8 MB RAM, floating point, vector instructions (meaning linear algebraic operations, like dot and cross products or matrix arithmetic, are particularly easy), up to 160MFLOPS (with later versions through the 80s pushing over 1GFLOPS).  This compares with late-90s/early-00s PCs with graphics accelerators, but those were very special purpose chipsets still, until general purpose vectorization and GPGPUs were introduced in the mid-00s.  Or with modern CPUs in PCs or cellphones.

Hmm, that's not really a good example of what I wanted to make a point about.  I should really be using Cray X-MP (multiprocessor) which was dual core.  Massively multicore systems didn't take over until the 90s or 00s I think, but have always been around -- LINKS-1 for example, or the Connection Machine.  (A CM-5 topped the list of supercomputers in 1993, but I don't see anything comparing the CM-1 or other earlier parallel machines?)

Anyways, the interesting thing is not just that computers get bigger and more complex over time, but that the simplest computers have never disappeared.  There will always be an application for the dumb 8-bit CPU (or 4 bit even!).  It is interesting that 32-bit machines are now so cheap and common that they're displacing 8- and 16-bit machines (e.g., ARM M0 instead of PIC or AVR), but they haven't fully replaced them, at least yet.  (But you may well opt for, say, an STM32F0 over an ATMEGA328, for most of your low-power embedded applications, and consider alternatives when and if the product moves into such quantity that a potentially cheaper chip can be used.)

So, if you want to do high level development, on a comfortable, powerful platform, just pick up any old PC -- a PC as such, or a rasPi, or a tablet, or even a cellphone (well, maybe not the easiest interface to dev with..).  OS probably Linux, and any languages that do what you want -- C/C++ for boring stuff, say Python for general tasks, Octave for vector math and scripting, etc.

If you want to do development for anything else, you'll probably still be based on a PC, but cross-compiling for that target.  A programming dongle is usually needed, but bootloaders can offer this function through traditional ports (e.g. USB or serial).

As you can see, a truly general answer is not easy to put together, and probably not all that useful in the end.  The field of computing is massive and complex, and most practitioners keep to their own little corner -- see the above replies for example.  (Personally, I've mostly worked with 8086 and AVR8.)

If you can't make up your mind, you should consider what ends you want to reach, and try the languages that best support that end.  You will inevitably get locked into that pattern, as we all do; that's a bit of a downside, but the alternative is becoming paralyzed and not doing anything at all -- clearly a worse outcome!  So just accept it, and specialize in what is most useful to you.


