Author Topic: The Imperium programming language - IPL  (Read 65041 times)


Offline Sherlock HolmesTopic starter

  • Frequent Contributor
  • **
  • !
  • Posts: 570
  • Country: us
Re: A new, hardware "oriented" programming language
« Reply #200 on: November 26, 2022, 10:31:22 pm »
I used a naked "8-pin Arduino" in my Vetinari clock, because that was the quick way to get an undemanding result.

But when making something that required very low power consumption, I simply wrote C to peek/poke the atmega328 registers. That avoided me having to work out how Arduino libraries might be frustrating me.
That's exactly why ATmega32u4 is one of my favourite 8-bitters.  It has native full-speed USB (12 Mbit/s, can do 1 Mbyte/sec even over USB Serial), but still is completely programmable without any HAL or similar, on bare metal, using nothing but avr-gcc (and avr-libc, if you really want), in (freestanding) C or C++.
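For anyone wondering what that looks like in practice, here is a minimal bare-metal sketch of the kind avr-gcc compiles directly, no HAL involved (the pin choice and the crude busy-wait are illustrative only, not from any particular project):

Code: [Select]
#include <stdint.h>
#include <avr/io.h>

int main(void)
{
    DDRB |= _BV(DDB0);                  /* PB0 as output */
    for (;;) {
        PINB = _BV(PINB0);              /* writing 1 to PINx toggles the pin on modern AVRs */
        for (volatile uint32_t i = 0; i < 100000; i++) {}  /* crude busy-wait delay */
    }
}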

I note that one of the desires there was "give easy access to the flags". I pointed out that many ISAs don't even *have* flags.
Yep.  In particular, you helped me realize I did not actually desire access to flags, just multiple function result values, to fulfill my needs.

An extremely common pattern for functions is to return a value or an error.  Obviously, with scalar return types, when you return a single scalar, some values must be reserved for error codes.  (That can get messy, like it once did in the Linux kernel, where certain filesystem drivers could not handle reads over 2³¹ bytes (due to widespread use of int), and returned negative values.  Now, the Linux kernel limits all single read() and write() syscalls to 2³¹ bytes less one page automagically.)  Having separate value and status returns is sometimes more robust, even if the status return is just a single "OK"/"Error" flag.

However, SysV ABI on x86-64 (LP64, so ints are 32-bit, long and pointers are 64-bit) as used on Linux, macOS, FreeBSD, and others, already supports this (by using rax and rdx registers for 128-bit non-FP return values), so code like
    typedef struct { long x, y; } lpair;
    lpair somefunc(long x, long y) { return (lpair){ .x = x, .y = y }; }
compiles to
    somefunc:
        mov  rax, rdi
        mov  rdx, rsi
        ret
because rdi and rsi are the first two (non-FP) parameters, and the pair of return values are returned in rax and rdx.
This suffices for my needs, although it is slightly odd-looking at first, wrapping return values into structs.  It would be nicer to have syntactic sugar for this and not require struct use, but this works too.
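To illustrate the value-plus-status idea using the same struct-return mechanism (a minimal sketch; lresult and checked_div are hypothetical names, not from any real API):

Code: [Select]
#include <stdbool.h>

typedef struct { long value; bool ok; } lresult;

/* Failure is reported out-of-band in .ok, so no value in the
   result range needs to be reserved as an error sentinel. */
static lresult checked_div(long num, long den)
{
    if (den == 0)
        return (lresult){ .value = 0, .ok = false };
    return (lresult){ .value = num / den, .ok = true };
}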

Because this is architecture ABI, any compiler for this architecture can use this.  Other ABIs, however, need to do annoying "emulation" by e.g. passing a pointer to the additional return values, similar to how larger (struct-type) result values are passed.  A compiler can obviously use its own function call ABI internally, but it does have to have some kind of foreign function interface* for bindings to binary libraries (if used on hosted environments/full OSes, and not just bare metal embedded).  It might be useful to use a rarely used register for the status/error return, so that it is not so often clobbered by ordinary code.

Simply put, I wasn't hankering so much for syntactic sugar, but for more sensible ABIs like SysV on x86-64.  Brucehoult put me straight on this, albeit indirectly.  :-+



* Part of the reason I like using Python for graphical user interfaces is that it has a very nice interface to calling native code, without needing any native code support; using pure Python code for the interface.  Certain libraries, like glib's gobject introspection for C, provide introspection information (.gir files, compiled to .typelib), which means full interfaces can be constructed without any bespoke code (machine or Python), using the introspection information only.  It also means that when the library and corresponding introspection file are updated, the new features are immediately available in Python also.  Very useful.

Because native machine code is dynamically loaded for access from Python, this also provides a perfect mechanism for license separation.  You can keep the user interface open source, modifiable by end users, but keep all Sekret Sauce proprietary code in your binary-only dynamic library.  In my experience, this provides the optimal balance between user access and proprietary requirements when going full open source isn't viable, e.g. for commercial reasons.

The C# language has options "ref" and "out". With these we can designate a parameter as "out", meaning it must (and will) be assigned a value within the callee, and that value will be visible to the caller.

One of the patterns we see often in C# code is the "try" pattern:

Code: [Select]

if (TryGetRemoteSystemStatus(system_id, out var status))
   {
   // do stuff.
   }


The language only lets "status" be accessible inside the { } block of the if statement. Its scope (when used as shown) is restricted to that block.

In the function TryGetRemoteSystemStatus we return a bool, false if we can't get what we want. Assuming the function has been written sensibly, it's then close to impossible for calling code to access "status" when it should not. This is a good pattern because it avoids the "if (status != null)..." pattern, which relies on "null" as a sentinel.


So with "out" a write must take place to the variable within the callee before returning to the caller and with "ref" a write must take place to the variable within the caller before calling the callee.
« Last Edit: November 26, 2022, 10:40:22 pm by Sherlock Holmes »
“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” ~ Arthur Conan Doyle, The Case-Book of Sherlock Holmes
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14230
  • Country: fr
Re: A new, hardware "oriented" programming language
« Reply #201 on: November 26, 2022, 10:35:49 pm »
You can pass and return structures in C, which makes your life much easier. Yes, the downside is that you need 1/ to define the structure first and 2/ to access its members. Sure, a bit more syntax overhead. Interesting to note that very few developers ever use that feature.

I have mixed "feelings" about returning multiple values though. A few reasons for this, but one of them being the "implicit" order. Note that this is the exact same issue with function parameters. Ada fixes it by using parameter designators, although those are optional. But at least function parameters have an identifier. Many languages handle returning multiple values using some kind of tuples - so no identifier per se associated with each value returned inside the tuple, only some guaranteed order.

Most of that is mainly syntax sugar coating anyway.

And, tuples could be used in other contexts - not necessarily just return values of functions. To make it consistent, the list of function parameters would itself be a tuple. And tuples could be used as expressions.

The usual cute example is: 'x, y = y, x ' for swapping x and y.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6131
  • Country: fi
    • My home page and email address
Re: A new, hardware "oriented" programming language
« Reply #202 on: November 26, 2022, 10:39:21 pm »
So, what specifically did you want to say about language grammar, extensibility, features that could be helpful to MCU developers?
There is very little (if anything) about language grammar that could be useful to MCU developers.
How did you establish that? What evidence supports this claim? Here's a post by an engineer that proves you wrong, I quote:
Ehh, what? :o

What does that post by that engineer have to do with language grammar?  They're talking about conditional builds, which is solved by any toolchain that generates linkable object files by selecting which object files to compile and include in the final image.

ELF object files support per-symbol sections, regardless of the language you use, which allows the linker to select which symbols (usually functions) to include based on which symbols may be referenced.  It means that only features you actually use are included in the linked target, and this is heavily used in current embedded toolchains, which basically all use ELF object files.  You do not even have to deselect the features you do not need; all you need to do is make sure all you might need is available (via whatever include or import mechanism your language uses), and use the ones you want.  Only the features you use, and the features they refer to, will get included in the final linked object.
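For reference, with GCC-style toolchains that per-symbol granularity and the matching linker garbage collection are typically enabled like this (standard gcc/binutils options):

Code: [Select]
gcc -Os -ffunction-sections -fdata-sections -c module.c -o module.o
gcc -Os -ffunction-sections -fdata-sections -c main.c -o main.o
gcc -Wl,--gc-sections main.o module.o -o firmware.elf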
_ _ _

The way we established "language grammar isn't important to embedded developers" is by being, and working with, embedded developers, who use whatever tools fit the task at hand best.  Grammar is just flavour.  I've never heard of any successful developer choosing their programming language or environment based on the grammar.  I have seen many who did choose their programming language based on grammar or because it was the only language they could use, fail.  I've talked with, and continuously talk to, and occasionally help, many developers, both in embedded and application development.  Grammar has never been more than a flavour on top, as in "I find it ugly" or "it looks neat"; never at the core of any discussion related to solving a problem.

I'm not sure if I have talked shop in such details with the commonly suggested 1000 persons sufficient for a random opinion poll, and the selection definitely isn't random, but it is a significant number, well over a hundred at minimum.  I also do not shy away from antagonistic discussions, so I often disagree with others –– often to find out the underlying reasons for their opinions, because opinions themselves aren't useful, but the reasons underlying the opinions are.  (For one, opinions cannot be compared in any rational manner, but the reasons for those opinions can usually be rationally examined and compared.)

I note that one of the desires there was "give easy access to the flags". I pointed out that many ISAs don't even *have* flags.
Yep.  In particular, you helped me realize I did not actually desire access to flags, just multiple function result values, to fulfill my needs.
Yes. Multiple return values are very desirable. Returning structures can substitute, but it's notationally annoying.

Most modern ABIs allow returning two word-size values or one double-size value. With register-based ABIs, usually in the same registers as used for the first two arguments. x86_64 is a weird exception, with the first argument passed in RDI but the result in RAX.

But why can't you pass the same number of results as arguments? Function return is semantically no different to function call -- it is "calling the continuation". It would clearly be very very easy to enable this in modern ABIs that pass up to 6 or 8 arguments in registers -- it's simply relaxing a restriction. Figuring out where to put return values that overflow into RAM is harder. If the caller knows how many results are to be returned then it can allocate stack space for max(arguments, results). Supporting an unbounded number of return values would be a lot harder. But it's also hard to see a reason to support it. VARARGS seems like mostly a hack to support printf and friends.
Exactly.  No, I don't need unbounded parameters or return values myself; even for printf and friends I prefer a string-building approach (if you recall my old replacement for standard C thread).

If we consider things like Foreign Function Interfaces and such, we could augment object file symbol versioning (or similar) to explicitly specify the exact function ABI variant, as in how many inputs and outputs, in which registers or stack –– noting that many architectures like x86-64 and ARMv8 already have separate vector registers for FP parameters.  It does not even need to know the logical types, ie. whether something is an address or a number, just which registers or stack words are used for inputs, outputs, and as scratch registers.  One would still need the exact declarations in each programming language, and compilers would need to check the declaration matches the object file ABI variant.

This is not dependent on any specific programming language, though.

It would be very interesting to consider if binary-only libraries should contain introspection information for their symbols, including the above information.  Not necessarily introspection in the same file, but in an associated file, in some kind of common format.  We already know from the widely used GObject Introspection (for FFI bindings of GObject-based libraries, to a number of programming languages) that it can be an extremely useful and powerful mechanism for run-time linkage across programming languages.  (Most users of it aren't even aware of using it, it works that well.)
 

Offline Sherlock HolmesTopic starter

  • Frequent Contributor
  • **
  • !
  • Posts: 570
  • Country: us
Re: A new, hardware "oriented" programming language
« Reply #203 on: November 26, 2022, 10:49:15 pm »
So, what specifically did you want to say about language grammar, extensibility, features that could be helpful to MCU developers?
There is very little (if anything) about language grammar that could be useful to MCU developers.
How did you establish that? What evidence supports this claim? Here's a post by an engineer that proves you wrong, I quote:
Ehh, what? :o

What does that post by that engineer have to do with language grammar?  They're talking about conditional builds, which is solved by any toolchain that generates linkable object files by selecting which object files to compile and include in the final image.

Yes, one language grammar lets him express it much better than the other.

ELF object files support per-symbol sections, regardless of the language you use, which allows the linker to select which symbols (usually functions) to include based on which symbols may be referenced.  It means that only features you actually use are included in the linked target, and this is heavily used in current embedded toolchains, which basically all use ELF object files.  You do not even have to deselect the features you do not need; all you need to do is make sure all you might need is available (via whatever include or import mechanism your language uses), and use the ones you want.  Only the features you use, and the features they refer to, will get included in the final linked object.

Yes you can jump through hoops or you can simplify things by empowering the programming language.

The way we established "language grammar isn't important to embedded developers" is by being, and working with, embedded developers, who use whatever tools fit the task at hand best.  Grammar is just flavour.  I've never heard of any successful developer choosing their programming language or environment based on the grammar.

You're conflating two things: reasons for designing a language grammar and reasons for choosing a language. I never said these were the same. Your reasons for - say - choosing a gun will likely not be the same as the reasons the manufacturer has for designing that gun the way they did.

I have seen many who did choose their programming language based on grammar or because it was the only language they could use, fail.  I've talked with, and continuously talk to, and occasionally help, many developers, both in embedded and application development.  Grammar has never been more than a flavour on top, as in "I find it ugly" or "it looks neat"; never at the core of any discussion related to solving a problem.

And? What does that prove? Your perception is your perception. And what does "ugly" mean? First you used "pretty" and now "ugly"; what do you mean by these terms? What is undesirable about "ugly" code and what is appealing about "pretty" code? Is one preferable to the other? If so, in what way? Why?

I'm not sure if I have talked shop in such details with the commonly suggested 1000 persons sufficient for a random opinion poll, and the selection definitely isn't random, but it is a significant number, well over a hundred at minimum.  I also do not shy away from antagonistic discussions, so I often disagree with others –– often to find out the underlying reasons for their opinions, because opinions themselves aren't useful, but the reasons underlying the opinions are.  (For one, opinions cannot be compared in any rational manner, but the reasons for those opinions can usually be rationally examined and compared.)

So please don't present personal anecdotal perceptions as being anything more than that, this is why you'll sometimes see "IMHO" inside some of my posts.



« Last Edit: November 26, 2022, 10:56:59 pm by Sherlock Holmes »
“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” ~ Arthur Conan Doyle, The Case-Book of Sherlock Holmes
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6131
  • Country: fi
    • My home page and email address
Re: A new, hardware "oriented" programming language
« Reply #204 on: November 26, 2022, 11:09:15 pm »
You can pass and return structures in C, which makes your life much easier. Yes, the downside is that you need 1/ to define the structure first and 2/ to access its members. Sure, a bit more syntax overhead. Interesting to note that very few developers ever use that feature.
Things get very interesting when you start using e.g. in C
    typedef struct {
        type   *data;
        size_t  size;
    } region;
because then you can use
    region  allocate(size_t minimum, size_t maximum);
    region  reallocate(region, size_t minimum, size_t maximum);

If you use a triplet,
    typedef struct {
        type   *data;
        size_t  size;
        size_t  used;
    } buffer;
then your allocation and reallocation routines can examine the amount of data in use (used), and keep only that, discarding the rest, even when increasing the size of the allocation.  Letting the library choose the exact size within a range means the library can use much more time-efficient allocators without wasting memory, because when they have a larger slot/slab/slub to fulfill the allocation, they can examine the maximum, and provide either the entire region, or a suitably size-aligned one.  This is an extremely common pattern when reading data from a socket or a pipe (e.g. network comms), when the current message does not fit in the existing buffer and the exact message length is not known beforehand (e.g. most HTTP transfers, WebSockets, etc.).
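A minimal sketch of what such an allocate() might look like on top of plain malloc() (a real allocator would know its own slot sizes; this sketch just backs off toward the minimum, and uses void * where the snippets above wrote the generic placeholder type):

Code: [Select]
#include <stdlib.h>

typedef struct {
    void   *data;
    size_t  size;
} region;

region allocate(size_t minimum, size_t maximum)
{
    size_t size = maximum;
    for (;;) {
        void *data = malloc(size);
        if (data)
            return (region){ .data = data, .size = size };
        if (size <= minimum)
            return (region){ .data = NULL, .size = 0 };  /* cannot satisfy even the minimum */
        size = minimum + (size - minimum) / 2;           /* back off toward the minimum */
    }
}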

I have mixed "feelings" about returning multiple values though. A few reasons for this, but one of them being the "implicit" order. Note that this is the exact same issue with function parameters. Ada fixes it by using parameter designators, although those are optional. But at least function parameters have an identifier. Many languages handle returning multiple values using some kind of tuples - so no identifier per se associated with each value returned inside the tuple, only some guaranteed order.
I do not know what kind of syntax would work best, either.

Currently, my opinion is shaded by my ubiquitous use of Python tuples in some of my 2D/3D code.  Basically, I subclass tuples for quite efficient 2D/3D vectors, with vector-scalar and vector-vector operators implemented by the subclass.  I also use four-tuples for versors/bivectors, with just intra-class operators, and an accessor that yields a 3D rotation matrix, or a 4×4 transformation matrix (using the convention of (x,y,z,1) referring to 3D position vectors, and (x,y,z,0) referring to 3D direction vectors) with a caller-specified translation pre- or post-rotation.

I do use structs –– actually, structs containing unions with structs and vectorized types, because that happens to be compatible on all architectures I have that have SIMD –– in a similar fashion in C.  On those architectures, they're passed and returned in the vector registers, even though the C API looks normal, i.e.
    vec3d  vec3d_cross(const vec3d, const vec3d);
    double  vec3d_dot(const vec3d, const vec3d);
    vec3d  rot3d_apply(const rot3d, const vec3d);
    vec3d  trans3d_position(const trans3d, const vec3d);
    vec3d  trans3d_direction(const trans3d, const vec3d);
    float  vec2f_dot(const vec2f, const vec2f);
    float  vec2f_cross(const vec2f, const vec2f);
    vec2f  rot2f_apply(const rot2f, const vec2f);
    vec2f  trans2f_position(const trans2f, const vec2f);
    vec2f  trans2f_direction(const trans2f, const vec2f);
and so on, showing typical operations on 3D double-precision floating point vectors, 2D single-precision floating point vectors, 3D 3×3 rotation and 4×4 (3×4) transformation matrices, and 2D 2×2 rotation and 3×3 (2×3) transformation matrices.  Where _position applies the translation, _direction does not.
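The underlying type might be declared along these lines (a sketch using the GCC/Clang vector extension and a C11 anonymous struct; my actual per-architecture unions differ):

Code: [Select]
typedef double v4d __attribute__((vector_size(4 * sizeof (double))));

typedef union {
    v4d     v;                      /* SIMD-register-sized vector */
    struct { double x, y, z, w; };  /* named components */
} vec3d;

static inline vec3d vec3d_cross(const vec3d a, const vec3d b)
{
    return (vec3d){ .x = a.y*b.z - a.z*b.y,
                    .y = a.z*b.x - a.x*b.z,
                    .z = a.x*b.y - a.y*b.x,
                    .w = 0.0 };
}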

So, I am definitely biased towards something like (a, b) syntax describing a tuple.  Don't care what the exact grammar would be, though; I'll adapt.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6131
  • Country: fi
    • My home page and email address
Re: A new, hardware "oriented" programming language
« Reply #205 on: November 26, 2022, 11:11:01 pm »
So please don't present personal anecdotal perceptions as being anything more than that, this is why you'll sometimes see "IMHO" inside some of my posts.

Ah, I see.  When you post someone's personal anecdotal perceptions, they are 'proof'.
When I describe my own experiences discussing these technical details with well over a hundred other embedded developers, they're just 'perceptions', and not indicative of anything.

You, sir, are full of shit.

I'm out.  I'm not ready to try to help another electrodacus see the reality from their own preconceptions, sorry.
 
The following users thanked this post: Siwastaja, MK14, brucehoult, Jacon, DiTBho

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19194
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: A new, hardware "oriented" programming language
« Reply #206 on: November 26, 2022, 11:33:51 pm »
concentrate on my questions.

We are not here at your beck and call, nor to spoon-feed you.

You have been pointed towards some answers to your questions. Now you need to read (not skim) the points, and do the work to understand the literature.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: MK14, brucehoult, Jacon

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: A new, hardware "oriented" programming language
« Reply #207 on: November 26, 2022, 11:36:55 pm »
When I last looked at the problem, c. 1986, it had to be portable across machines; the language was much simpler, and any given compiler targeted, at best, a small subset of machines.

My comment was based on erroneously ignoring the changes since then. Mea culpa.

I had implemented a 32-bit floating point package on a 6800 a decade earlier, in a medium-level language. Without checking, I would have used the carry flag, which doesn't exist in C.

Comparing the language called "C" and compilers for it in the 1970s or 1980s to the language called "C" in 2022 is pretty much pointless.

Early compilers produced a very literal translation that gave code often many times slower than hand-written assembly language. Most compilers could target only one ISA. If they were "portable" then they made even worse code. If they were free then they were probably even worse again -- and even paying $10k+ got you something pretty awful.

Modern gcc and llvm have very sophisticated machine-independent optimisation pipelines that can recognise patterns in portable C code that map to low level machine instructions -- for example to map unsigned "(a+b) < a" to using the carry flag on ISAs that have one. As demonstrated a few messages up. And to use SETcc instructions on machines that have those. And MOVcc instructions on machines that have those. And use an actual conditional branch if/then/else as a last resort if the machine doesn't have any of the above.

You won't get that from a 1970s compiler.
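For the record, the idiom in portable C looks like this (a minimal sketch; the function name is mine):

Code: [Select]
#include <stdint.h>

/* Portable add-with-carry-out: modern gcc/llvm compile the (sum < a)
   test down to the carry flag on ISAs that have one. */
static uint32_t add_carry(uint32_t a, uint32_t b, uint32_t *carry_out)
{
    uint32_t sum = a + b;
    *carry_out = (sum < a);   /* unsigned wraparound implies a carry out */
    return sum;
}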
 
The following users thanked this post: MK14, SiliconWizard, DiTBho

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19194
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: A new, hardware "oriented" programming language
« Reply #208 on: November 26, 2022, 11:38:17 pm »
So please don't present personal anecdotal perceptions as being anything more than that, this is why you'll sometimes see "IMHO" inside some of my posts.

Ah, I see.  When you post someone's personal anecdotal perceptions, they are 'proof'.
When I describe my own experiences discussing these technical details with well over a hundred other embedded developers, they're just 'perceptions', and not indicative of anything.

You, sir, are full of shit.

I'm out.  I'm not ready to try to help another electrodacus see the reality from their own preconceptions, sorry.

FWIW, those were my thoughts too.

Control-d
« Last Edit: November 26, 2022, 11:40:34 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: A new, hardware "oriented" programming language
« Reply #209 on: November 26, 2022, 11:54:53 pm »
It would have been convenient in that case, but different processors have very different semantics for their carry flag.

Exactly correct.

Quote
For all I know there may be processors without a carry flag; I no longer bother to keep up with processor ISAs.

MIPS
DEC Alpha
Intel IA64
RISC-V

Probably others I can't think of right now.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: A new, hardware "oriented" programming language
« Reply #210 on: November 27, 2022, 12:15:47 am »
Your position is pessimistic, discouraging, not open-minded; this has nothing to do with your undoubted credentials either, it is you, your hostile tone. This is just a discussion and you're frothing and fuming.

The only frothing is coming at me, not from me.

I see others I respect have now reached and expressed the same conclusion as I did. To their credit, they perhaps gave the benefit of the doubt for longer, though not forever.
 
The following users thanked this post: MK14

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19194
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: A new, hardware "oriented" programming language
« Reply #211 on: November 27, 2022, 01:22:10 am »
For all I know there may be processors without a carry flag; I no longer bother to keep up with processor ISAs.

MIPS
DEC Alpha
Intel IA64
RISC-V

Probably others I can't think of right now.

I thought I remembered MIPS not having them, and I presume that would carry over [sic] into RISC-V.

If Alpha had crossed my mind, I would have presumed it didn't, since that went to all extremes to excise everything they could. (And, IIRC, excised byte addressing - oops).

I'm not sure what the other currently interesting architecture (the Mill) does. That plus CHERI do raise hopes for the future, and are worth watching. Interestingly they both start from the position that even though it is a right pain in the backside, pragmatically it is necessary to be able to run C efficiently.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3772
  • Country: gb
Re: A new, hardware "oriented" programming language
« Reply #212 on: November 27, 2022, 01:54:00 am »
MIPS
DEC Alpha
Intel IA64
RISC-V

Probably others I can't think of right now.

also, subtract with borrow (68k) vs subtract with carry (ppc)
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: A new, hardware "oriented" programming language
« Reply #213 on: November 27, 2022, 01:59:26 am »
If Alpha had crossed my mind, I would have presumed it didn't, since that went to all extremes to excise everything they could. (And, IIRC, excised byte addressing - oops).

Alpha always had byte addressing. What it didn't have, for the first half dozen models (until 1996's "EV56" 21164A), was byte (and 16-bit) load and store instructions. It wasn't a raw speed thing -- Alpha was often the fastest CPU in the world during that time, including on software doing a lot of byte operations.

The problem was multiprocessor systems and software that assumes byte accesses are atomic.

Quote
I'm not sure what the other currently interesting architecture (the Mill) does.

No global flags, of course :-)  Each value on the belt has its own metadata.

In http://millcomputing.com/topic/metadata/#post-419 Ivan wrote:

"It would certainly be possible to have a carry metabit in each byte, and we considered it. After all, there is already a NaR bit in each byte. However, we could not find enough usage for such a design to justify the cost."

That was 2014. Things could have changed. I don't know.


I make the following bold claim: No clean sheet ISA designed for high performance since 1990 has had a carry flag, or global flags register.

Note 1) IBM RS/6000->PowerPC->POWER is exactly 1990. It has 8 flags registers.

Note 2) ARM Aarch64 is not a clean sheet design. While it makes quite a lot of changes from Aarch32, it needs to share an execution pipeline with 32 bit code, with both running optimally, at least for the first 10 years or so.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: A new, hardware "oriented" programming language
« Reply #214 on: November 27, 2022, 02:10:46 am »
MIPS
DEC Alpha
Intel IA64
RISC-V

Probably others I can't think of right now.

also, subtract with borrow (68k) vs subtract with carry (ppc)

Correct.

At least the following use subtract with carry: IBM System/360, 6502, PIC, MSP430, ARM, PowerPC

At least the following use subtract with borrow: 8080/z80, 8051, x86, M680x, M68k, AVR
 

Offline pcprogrammer

  • Super Contributor
  • ***
  • Posts: 3574
  • Country: nl
Re: A new, hardware "oriented" programming language
« Reply #215 on: November 27, 2022, 05:41:39 am »
At least the following use subtract with carry: IBM System/360, 6502, PIC, MSP430, ARM, PowerPC

Don't know about the others, but although the 6502 instruction is called SBC (subtract with carry), my R6500 programming manual states it is with borrow.

You have to set the carry before the first subtract to indicate no borrow.

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 3971
  • Country: nz
Re: A new, hardware "oriented" programming language
« Reply #216 on: November 27, 2022, 07:40:29 am »
At least the following use subtract with carry: IBM System/360, 6502, PIC, MSP430, ARM, PowerPC

Don't know about the others, but although the 6502 instruction is called SBC (subtract with carry), my R6500 programming manual states it is with borrow.

You have to set the carry before the first subtract to indicate no borrow.

Then the manual is naming it incorrectly, at least according to the modern lexicon. Setting carry to indicate no borrow means that borrow is the inverse of the carry.

On the Intel, Motorola etc machines in the other list, you need to clear carry before doing the first word of a subtract -- carry for add and borrow for subtract are in the same sense.

On most of the machines there is a plain SUB instruction as well as SBC, so you never need to be aware of what you would set carry to before the first SBC in a chain. The convention shows up only in the result afterwards: carry set if there was no borrow in the machines in the first list, carry set if there was a borrow in the machines in the second list.

To put it more mathematically:

On all machines, X + Y is implemented as X + Y + C, giving an N+1 bit sum with the MSB going to C

On 6502 etc X - Y is done as X + ~Y + C, giving an N+1 bit sum with the MSB going to C

On 8080 etc X - Y is done as X + ~Y + ~C, giving an N+1 bit sum with ~MSB going to C


On 6502, ARM, PowerPC, MSP430 etc, the only difference between add and subtract is inverting Y.

On the other type, the C flag is inverted both before and after a subtraction, complicating the design, adding two inverter delays to the critical path etc.
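The two conventions can be modelled directly in C, which makes the inversion visible (a sketch, using 8-bit words for concreteness):

Code: [Select]
#include <stdint.h>

/* 6502/ARM/PowerPC style: X - Y as X + ~Y + C; carry out of 1 means "no borrow". */
static uint8_t sbc_carry(uint8_t x, uint8_t y, unsigned *c)
{
    unsigned sum = (unsigned)x + (uint8_t)~y + (*c & 1);
    *c = sum >> 8;                /* 1 = no borrow occurred */
    return (uint8_t)sum;
}

/* 8080/x86/M68k style: X - Y as X + ~Y + ~C; carry inverted in and out, 1 means "borrow". */
static uint8_t sbb_borrow(uint8_t x, uint8_t y, unsigned *c)
{
    unsigned sum = (unsigned)x + (uint8_t)~y + !(*c & 1);
    *c = !(sum >> 8);             /* 1 = a borrow occurred */
    return (uint8_t)sum;
}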
« Last Edit: November 27, 2022, 07:53:01 am by brucehoult »
 
The following users thanked this post: pcprogrammer

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19194
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: A new, hardware "oriented" programming language
« Reply #217 on: November 27, 2022, 08:58:46 am »
If Alpha had crossed my mind, I would have presumed it didn't, since that went to all extremes to excise everything they could. (And, IIRC, excised byte addressing - oops).

Alpha always had byte addressing. What it didn't have, for the first half dozen models (until 1996's "EV56" 21164A), was byte (and 16-bit) load and store instructions. It wasn't a raw speed thing -- Alpha was often the fastest CPU in the world during that time, including on software doing a lot of byte operations.

The problem was multiprocessor systems and software that assumes byte accesses are atomic.

Yes, that's it.

I always admired Alpha's purism, but was skeptical about the lack of byte load/store. The Mill is careful to avoid that excessive purism.

Around the same time I was skeptical about the approach of the HP PA-RISC successor, the Itanic (Itanium). At that time any tiny change to the implementation required someone to re-hand-optimise inner loops; the compiler was supposed to solve that issue, but never quite managed it. In addition, just as CPU power consumption was becoming the limiting factor, the Itanic strategy was to waste power doing lots of speculative execution.

Quote
Quote
I'm not sure what the other currently interesting architecture (the Mill) does.

No global flags, of course :-)  Each value on the belt has its own metadata.

In http://millcomputing.com/topic/metadata/#post-419 Ivan wrote:

"It would certainly be possible to have a carry metabit in each byte, and we considered it. After all, there is already a NaR bit in each byte. However, we could not find enough usage for such a design to justify the cost."

That was 2014. Things could have changed. I don't know.

That sounds right.

One thing I like about the Mill is that its creators have a detailed knowledge of successful radically different hardware and software architectures, how they interact and how each feature meshes with the other features. They then have the good taste to choose wise goals and select/reject features that support the goals.

Basically, since they evolved their understanding in the right era, they are "renaissance men" of computer system architecture. I doubt we will see people with such a wide knowledge again.

The last time I saw such a good summary of a new product based on selecting/rejecting features based on historical experience was Gosling's Java Whitepaper in 1996.

Quote
I make the following bold claim: No clean sheet ISA designed for high performance since 1990 has had a carry flag, or global flags register.

Note 1) IBM RS/6000->PowerPC->POWER is exactly 1990. It has 8 flags registers.

Note 2) ARM Aarch64 is not a clean sheet design. While it makes quite a lot of changes from Aarch32, it needs to share an execution pipeline with 32 bit code, with both running optimally, at least for the first 10 years or so.

It does seem that way. The flags can only be computed after the operation has been computed. That wasn't a problem when logic/memory speeds were at a certain ratio, but it ceased to be the case in the mid-late 80s.

The other strategy exhibiting that phenomenon was the TI9900's registers being in memory. Great for fast context changes, but somewhat limiting otherwise.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8090
  • Country: fi
Re: A new, hardware "oriented" programming language
« Reply #218 on: November 27, 2022, 12:05:11 pm »
I think the unimportance of details in grammar can be demonstrated with a simple example:

Consider the achievements of humankind. We have built pyramids and airplanes, gone to the Moon, and whatnot. All of this is possible thanks to humans being able to communicate with each other in a natural language.

Yet such great achievements have been made in countless different cultures, with different languages, with very different grammars and syntax.

It's the process itself, behind the grammar and syntax, which dictates the result. A surgeon can discuss with their colleagues in English, Chinese or Finnish just fine; as long as they know the language, they absolutely do not care. It simply does not matter to them.

The same can be said about programming languages: for example, C and Ada have very different-looking grammar and syntax, but if you learn both, it's confusing only in the very beginning. You learn the grammar and syntax quickly.

That's why it's more important to start from what the language can do, and how; the grammar will then follow.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8574
  • Country: gb
Re: A new, hardware "oriented" programming language
« Reply #219 on: November 27, 2022, 02:11:44 pm »
Around the same time I was skeptical about the approach of the HP PA-RISC successor, the Itanic (Itanium). At that time any tiny change to the implementation required someone to re-hand-optimise inner loops; the compiler was supposed to solve that issue, but never quite managed it. In addition, just as CPU power consumption was becoming the limiting factor, the Itanic strategy was to waste power doing lots of speculative execution.
It was obvious from the very start of Itanium development that its strategy would make for extremely high power consumption. So, it seemed to be a strategy to split the market into mobile and non-mobile threads. That was always a very weird strategy for a company like Intel, when notebooks were a booming business.
The other strategy exhibiting that phenomenon was the TI9900's registers being in memory. Great for fast context changes, but somewhat limiting otherwise.
No architecture is general purpose. They all have a time and a place. The 9900, and the 990 mini-computers based on the same architecture, worked really well in their day. The 990 was one of the biggest-selling mini-computer lines, although most people have barely heard of it. It crunched a lot of numbers in things like geophysical survey and defence applications. Memory and circuit performance changed. Even small machines started to get cache. All the underpinnings of why the 990 made sense changed, and it went away.
 

Offline Sherlock HolmesTopic starter

  • Frequent Contributor
  • **
  • !
  • Posts: 570
  • Country: us
Re: A new, hardware "oriented" programming language
« Reply #220 on: November 27, 2022, 03:27:05 pm »
concentrate on my questions.

We are not here at your beck and call...

Fascinating to contrast that remark with:

Now you need to read (not skim) the points, and do the work...

I suggest you parse your own remarks, sir; there's more going on than meets the eye.

If you really can't discuss this subject without continually reverting to discussing me and my perceived needs, then please simply refrain.

“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” ~ Arthur Conan Doyle, The Case-Book of Sherlock Holmes
 

Offline Sherlock HolmesTopic starter

  • Frequent Contributor
  • **
  • !
  • Posts: 570
  • Country: us
Re: A new, hardware "oriented" programming language
« Reply #221 on: November 27, 2022, 03:31:46 pm »
Your position is pessimistic, discouraging, not open-minded; this has nothing to do with your undoubted credentials either, it is you, your hostile tone. This is just a discussion and you're frothing and fuming.

The only frothing is coming at me, not from me.

I see others I respect have now reached and expressed the same conclusion as I did. To their credit, they perhaps gave the benefit of the doubt for longer, though not forever.

Please stop discussing me; that seems to be all you want to do, discuss me all the time, and it's a hallmark of poor reasoning skills. If you were a prosecution lawyer you'd likely lose every case, because you can't win a trial by simply attacking the defense lawyer over and over and over.

One must be dispassionate and leverage facts, logic and sound reasoning to reach sound conclusions. All you seem to want to do is berate me. If you disagree with a statement I've made, then quote it and explain - calmly, politely and rationally - why you disagree. It really is that simple, or could be.


« Last Edit: November 27, 2022, 03:41:26 pm by Sherlock Holmes »
“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” ~ Arthur Conan Doyle, The Case-Book of Sherlock Holmes
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8090
  • Country: fi
Re: A new, hardware "oriented" programming language
« Reply #222 on: November 27, 2022, 03:32:06 pm »
That's exactly why ATmega32u4 is one of my favourite 8-bitters.  It has native full-speed USB (12 Mbit/s, can do 1 Mbyte/sec even over USB Serial), but still is completely programmable without any HAL or similar, on bare metal, using nothing but avr-gcc (and avr-libc, if you really want), in (freestanding) C or C++.

Now that we are sidetracked anyway (as always in these threads!), I have to add the following; I think you know this pretty well, but others may not: in my experience almost all microcontrollers can be described as being "completely programmable without any HAL or similar, on bare metal, using nothing but avr-gcc (and avr-libc, if you really want), in (freestanding) C or C++".

All the same old principles and strategies apply. The only thing to do, really, is to ignore advice which claims otherwise and keep doing it exactly like you put it,
just replace "avr-<gcc|binutils|libc>" with "arm-none-eabi-<gcc|binutils|libc>" or whatever architecture you are using.

I mean, I work that way 100%, and it makes jumping between different microcontrollers a breeze, especially important in these times of poor component availability.

My new favorite microcontroller family for bare metal programming is nRF52. The peripheral system is designed from the ground up with ease of bare-metal programmability in mind. No "turn the peripheral on from multiple places" nonsense, like in most ARM controllers. Dedicating a complete 32-bit register to one thing reduces performance a bit, but not much; it's much easier to write PERIPHERAL->TASKS_START = 1; than to work with multi-bit control registers. Or to poll PERIPHERAL->EVENTS_RXDONE instead of a bitmask of a status register; you don't need to think about those #defines at all.

Extremely simple configurable task-event routing system: write &PERIPH1->EVENT_XYZ into one register, and &PERIPH2->TASK_ABC into another, and now they are mapped: EVENT_XYZ triggers TASK_ABC without CPU interaction. You can also see they use addresses of registers instead of separate mapping channel numbers. Probably cost them a few dozen logic gates; meaningless when the CPU alone is tens of thousands anyway.
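A hedged sketch of that routing, with the register names as they appear in Nordic's nRF52 headers (channel 0 and the two peripherals are picked arbitrarily here):

Code: [Select]
/* Route a TIMER0 compare event to a GPIOTE pin-toggle task, no CPU involved. */
NRF_PPI->CH[0].EEP = (uint32_t)&NRF_TIMER0->EVENTS_COMPARE[0];  /* event end point */
NRF_PPI->CH[0].TEP = (uint32_t)&NRF_GPIOTE->TASKS_OUT[0];       /* task end point  */
NRF_PPI->CHENSET   = 1u << 0;                                   /* enable channel 0 */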

Not much to configure; for example, DMA does not need to be configured from a "DMA controller" at all, as there is no separate "DMA controller", at least not externally exposed, and no channels to map. DMA is nothing more than a pointer register plus a data count register in the peripheral itself! Here is how you use UART with DMA bare metal on the nRF52:

Code: [Select]
uint8_t buffer[] = {0xac, 0xdc, 0xab, 0xba, 0xcd};
NRF_UARTE0->TXD.PTR = (uint32_t)buffer;
NRF_UARTE0->TXD.MAXCNT = sizeof buffer;
NRF_UARTE0->TASKS_STARTTX = 1;

while(!NRF_UARTE0->EVENTS_ENDTX)
    ;

They do offer classical non-DMA UART, but call it deprecated, because the DMA version is so obvious there is no reason not to use it.

These nRF guys really succeeded in making register-level programming look like a high-level API! And it's the total polar opposite to the Raspberry Pi Pico folks, who deliberately made the peripherals as difficult as possible, and decided to force people into using their library code by documenting the library use directly in the datasheet. Some may like it, but if you compare it to the nRF52, I think it's a total failure. Documenting a library in the datasheet would only make sense if there were some fundamental reason why these peripherals must be difficult, necessitating complicated code; then it's good to provide a high-quality, well documented library, and use it throughout all examples. But this is not the case with microcontrollers, as Nordic Semi hardware engineers have well demonstrated; most of the peripherals do fundamentally very simple things.
« Last Edit: November 27, 2022, 03:36:53 pm by Siwastaja »
 
The following users thanked this post: spostma, DiTBho

Offline Sherlock HolmesTopic starter

  • Frequent Contributor
  • **
  • !
  • Posts: 570
  • Country: us
Re: A new, hardware "oriented" programming language
« Reply #223 on: November 27, 2022, 03:39:45 pm »
I think the unimportance of details in grammar can be demonstrated with a simple example:

Consider the achievements of humankind. We have built pyramids and airplanes, gone to the Moon, and whatnot. All of this is possible thanks to humans being able to communicate with each other in a natural language.

Yet such great achievements have been made in countless different cultures, with different languages, with very different grammars and syntax.

It's the process itself, behind the grammar and syntax, which dictates the result. A surgeon can discuss with their colleagues in English, Chinese or Finnish just fine; as long as they know the language, they absolutely do not care. It simply does not matter to them.

The same can be said about programming languages: for example, C and Ada have very different-looking grammar and syntax, but if you learn both, it's confusing only in the very beginning. You learn the grammar and syntax quickly.

That's why it's more important to start from what the language can do, and how; the grammar will then follow.

Two things.

1. I largely agree with you, but that does not demonstrate that all programming language grammars are logically identical. Grammars have properties, and they can be compared by tabulating those properties; different languages have different properties - surely you must agree with me on this point?

2. There is no algorithm for designing languages, no formula. One can consider a grammar for certain interesting properties, then jump to looking at semantics and desirable language features, then jump back and alter or enhance the grammar, back and forth. Be creative; creativity is not algorithmic, there are no rules for creativity. Each of us has a mind, and our minds work in different ways at different times. What matters is not the journey but the destination, and there is more than one route.

So of course we can discuss what the language can do - go ahead, I've been asking for some nine pages now! For example, I recently said that I think the language should support runtime access to metadata like string capacities/lengths, array dimensions and bounds, and so on. The C language is - IMHO - rather poor in that area, so let's address that.

« Last Edit: November 27, 2022, 03:43:59 pm by Sherlock Holmes »
“When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.” ~ Arthur Conan Doyle, The Case-Book of Sherlock Holmes
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8090
  • Country: fi
Re: A new, hardware "oriented" programming language
« Reply #224 on: November 27, 2022, 03:59:54 pm »
So of course we can discuss what the language can do - go ahead, I've been asking for some nine pages now! For example, I recently said that I think the language should support runtime access to metadata like string capacities/lengths, array dimensions and bounds, and so on. The C language is - IMHO - rather poor in that area, so let's address that.

Accessing implementation internals like string capacity/length: isn't that exactly where C is the best tool you have? Higher-level languages hide all such implementation details in their string classes (that's the whole point). In C, such a string type does not exist; you have to create your own, which also means you have access to everything you want, exactly as you want it. More work, but this is why embedded developers with limited resources prefer C.
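For example, the core of a bespoke counted-string type that exposes exactly that metadata is only a few lines (a sketch, not from any particular library):

Code: [Select]
#include <stddef.h>

typedef struct {
    size_t  capacity;   /* bytes allocated for data */
    size_t  length;     /* bytes currently in use */
    char   *data;       /* not necessarily NUL-terminated */
} strbuf;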

Arrays are indeed a weak point in C, being too primitive and requiring you to babysit implementation details, but if you did actually read any of this thread, it has been discussed in numerous replies above, mostly by Mr. N. Animal. No need to complain about this not being discussed.

On the other hand, people often write crappy code like this, as taught in classes:
Code: [Select]
#define ARRAY_LEN 4
unsigned char array[ARRAY_LEN] = {1, 2, 3, 4};

for(int i=0; i<ARRAY_LEN; i++)
   ...


When C is much more capable:
Code: [Select]
uint8_t array[] = {1, 2, 3, 4}; // automagic array length!

// Despite being automagic, the length is still known at compile time!
// Googling the NUM_ELEM helper macro is left as an exercise for the reader.
// Add it to utils.h or something.
for(int i=0; i<NUM_ELEM(array); i++)
    ...
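For completeness, the usual definition of that helper macro is a one-liner (one common spelling of it):

Code: [Select]
/* Element count of a true array (not a pointer!). */
#define NUM_ELEM(a) (sizeof (a) / sizeof (a)[0])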

Of course still quite primitive, but... We manage! And those who complain often do not even know what we have available. As I said earlier, use C to its fullest, and it's not as crappy as some lazy examples from the 1980s make it seem.
« Last Edit: November 27, 2022, 04:02:20 pm by Siwastaja »
 

