Author Topic: Comparing "Modern" embedded chip programming languages?  (Read 18379 times)


Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #75 on: July 10, 2016, 09:17:53 pm »
Many years ago, I spent a lot of time with the run time interpreter for the UCSD Pascal system.  Pascal, on that system, used P-Code but it was still pretty fast considering the hardware of the time.  A 6 MHz Z80 was pretty much top of the line.

You may not have heard of this:
Pascal MicroEngine
https://en.wikipedia.org/wiki/Pascal_MicroEngine
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #76 on: July 10, 2016, 10:22:46 pm »
Ada has many of the features of Pascal and for that reason, I am willing to spend some time looking at it.  Oberon is another language worth considering and it is available for ARMs.

http://www.astrobe.com/default.htm

I'm not clear on whether it has any support for the STM32F boards.
You might want to look at the vishap oberon compiler, a free, portable Oberon-2 compiler (Oberon-2 in, C out, compiled with GCC):
http://oberon.vishap.am/
https://github.com/vishaps/voc

And once you can compile Oberon-2, parts of the Oberon operating system could be a handy addition:
https://en.wikipedia.org/wiki/Oberon_(operating_system)
You would then have the source for all parts.

I know nothing about the vishap oberon compiler beyond what I have read.
You can try the Oberon operating system, with full source, in a VM.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #77 on: July 10, 2016, 10:30:10 pm »
I had the Oberon operating system running a while back.  It seemed kind of minimalist as I recall.  My only interest in Oberon would be for embedded programming and that most certainly wouldn't require much of an OS.  A little RTOS perhaps but that's about all.

In looking at Astrobe (Oberon), I noted that there doesn't seem to be library code to use TCP/IP with the LPC1768 (mbed) and that's a huge problem.  I most certainly do not want to rewrite the TCP/IP stack.
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: Comparing "Modern" embedded chip programming languages?
« Reply #78 on: July 10, 2016, 10:45:56 pm »
I have no objection if someone says a JIT is faster than an interpreter, but a JIT or bytecode or whatever faster than native machine code? That's unjustifiable.

You will no doubt be surprised to learn that it isn't as simple as that; see http://www.hpl.hp.com/techreports/1999/HPL-1999-78.html. In some cases interpreters increase the execution speed.

The key point is that a static compiler has to guess the code/data execution patterns at compile time, and - especially with C/C++ - has to make pessimising assumptions which are reflected in the generated machine code. In particular the data access patterns greatly influence the overall speed where the system has L1/2/3 caches and NUMA memory. OTOH an interpreter/runtime can inspect what's actually happening, and optimise that.

Hence in the Dynamo project, they used a C compiler to generate code for a machine X, and measured its speed. They then took an emulator of machine X running in machine X, observed what was happening, and changed the machine code on the fly. They found that when running in the emulator of machine X, the optimised code was sometimes faster than the compiler's output running directly on the bare metal machine X.

Of course it is unlikely that an embedded system will have NUMA memory, but many do have caches.
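To make the "pessimising assumptions" point concrete: without aliasing information, a C compiler must assume two pointer parameters may overlap, so it reloads memory on every loop iteration, while a runtime optimiser can simply observe that they never do. A minimal sketch (illustrative only, not from the paper):

Code:
/* Without "restrict" the compiler must assume a and b may overlap,
   so *b has to be reloaded after every store to a[i]. */
void scale(float *a, const float *b, int n) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] * (*b);          /* *b reloaded each iteration */
}

/* With "restrict" (a promise of no aliasing) the load can be hoisted. */
void scale_restrict(float *restrict a, const float *restrict b, int n) {
    float k = *b;                    /* safe to hoist out of the loop */
    for (int i = 0; i < n; i++)
        a[i] = a[i] * k;
}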

I'd really be glad to see which MCU has 80 to 512 MB of RAM to work with (the amount the PA-8000 workstation used in that paper had). Even if the memory footprint is quite low in workstation terms, 280-300ish KB, it is not in MCU terms: the JIT alone (and not even a modern one) is huge. To put it into perspective, it would occupy more than half the memory of the biggest ST Cortex-M7, so it could technically work, but that's a lot of practically wasted RAM.

In general terms, JITs (or VMs, or runtimes, or whatever other name you have for them) are not a magic cure-all panacea. As always in engineering, they can give you speed and safety, but you pay for them in RAM, latency and predictability. A JIT doesn't just compile and optimize the whole application at startup: it starts as a pure (dog slow) interpreter and then, by profiling the code, compiles and gradually optimizes the most-used code paths (that's right, they usually aren't going from -O0 to -O3 directly). You have to take into account that while the JIT is compiling/optimizing, your code slows down significantly. Then there is the issue that if, while running optimized code, you branch into a previously unrun part of the bytecode, the JIT falls back to "dog slow" mode (i.e. the interpreter), and if that code path is not hit frequently enough it won't ever be optimized, so you will have essentially random slowdowns and speedups.

In summary, this technology is not that well suited to embedded use, where predictable timing and execution (knowing that startup will always take 5 ms, not 5 ms most of the time but 50 ms when the Ethernet cable is attached because the network code has to be JITted) is usually more important than a 10% boost in mean execution speed.
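That tiered warm-up behaviour can be sketched in a few lines of C: a counter per bytecode block, with compilation triggered once a threshold is crossed (a toy model; the threshold value and the compile_block/run_native/interpret helpers are hypothetical):

Code:
#include <stdbool.h>
#include <stdint.h>

#define HOT_THRESHOLD 1000u  /* arbitrary; real JITs tune this heuristically */

typedef struct {
    uint32_t exec_count;     /* profiling counter bumped on every entry */
    bool     compiled;       /* promoted to native code yet? */
} block_info;

void compile_block(block_info *b);   /* hypothetical back end */
void run_native(const block_info *b);
void interpret(const block_info *b);

/* Called on every entry to a bytecode block. Cold blocks stay on the
   slow interpreter path; a block that crosses the threshold is compiled,
   and the compile itself stalls execution: the visible "JIT pause". */
void enter_block(block_info *b) {
    if (!b->compiled && ++b->exec_count >= HOT_THRESHOLD) {
        compile_block(b);
        b->compiled = true;
    }
    if (b->compiled)
        run_native(b);
    else
        interpret(b);        /* the "dog slow" fallback */
}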
 
The following users thanked this post: george.b

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Comparing "Modern" embedded chip programming languages?
« Reply #79 on: July 10, 2016, 11:13:09 pm »
I have no objection if someone says a JIT is faster than an interpreter, but a JIT or bytecode or whatever faster than native machine code? That's unjustifiable.

You will no doubt be surprised to learn that it isn't as simple as that; see http://www.hpl.hp.com/techreports/1999/HPL-1999-78.html. In some cases interpreters increase the execution speed.

The key point is that a static compiler has to guess the code/data execution patterns at compile time, and - especially with C/C++ - has to make pessimising assumptions which are reflected in the generated machine code. In particular the data access patterns greatly influence the overall speed where the system has L1/2/3 caches and NUMA memory. OTOH an interpreter/runtime can inspect what's actually happening, and optimise that.

Hence in the Dynamo project, they used a C compiler to generate code for a machine X, and measured its speed. They then took an emulator of machine X running in machine X, observed what was happening, and changed the machine code on the fly. They found that when running in the emulator of machine X, the optimised code was sometimes faster than the compiler's output running directly on the bare metal machine X.

Of course it is unlikely that an embedded system will have NUMA memory, but many do have caches.

I'd really be glad to see which MCU has 80 to 512 MB of RAM to work with (the amount the PA-8000 workstation used in that paper had). Even if the memory footprint is quite low in workstation terms, 280-300ish KB, it is not in MCU terms: the JIT alone (and not even a modern one) is huge. To put it into perspective, it would occupy more than half the memory of the biggest ST Cortex-M7, so it could technically work, but that's a lot of practically wasted RAM.

In general terms, JITs (or VMs, or runtimes, or whatever other name you have for them) are not a magic cure-all panacea. As always in engineering, they can give you speed and safety, but you pay for them in RAM, latency and predictability. A JIT doesn't just compile and optimize the whole application at startup: it starts as a pure (dog slow) interpreter and then, by profiling the code, compiles and gradually optimizes the most-used code paths (that's right, they usually aren't going from -O0 to -O3 directly). You have to take into account that while the JIT is compiling/optimizing, your code slows down significantly. Then there is the issue that if, while running optimized code, you branch into a previously unrun part of the bytecode, the JIT falls back to "dog slow" mode (i.e. the interpreter), and if that code path is not hit frequently enough it won't ever be optimized, so you will have essentially random slowdowns and speedups.

In summary, this technology is not that well suited to embedded use, where predictable timing and execution (knowing that startup will always take 5 ms, not 5 ms most of the time but 50 ms when the Ethernet cable is attached because the network code has to be JITted) is usually more important than a 10% boost in mean execution speed.

Typical matchbox-size Zynq systems consist of an FPGA, a dual-core ARM processor, cache, and 512 MB of RAM, and are used in scopes and many highly specialised embedded systems. That's the way the world is going.

If you look elsewhere you will see that I frequently state timing predictability is more important than speed. That also means any interpreter overhead is not critical.

But overall my point is to dispel the mistaken belief that JITs and hotspot techniques are slow. In many important cases, they aren't.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #80 on: July 10, 2016, 11:39:50 pm »
I had the Oberon operating system running a while back.  It seemed kind of minimalist as I recall.  My only interest in Oberon would be for embedded programming and that most certainly wouldn't require much of an OS.  A little RTOS perhaps but that's about all.

In looking at Astrobe (Oberon), I noted that there doesn't seem to be library code to use TCP/IP with the LPC1768 (mbed) and that's a huge problem.  I most certainly do not want to rewrite the TCP/IP stack.

TCP/IP and other nice things are in the Oberon operating system.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Comparing "Modern" embedded chip programming languages?
« Reply #81 on: July 11, 2016, 01:57:15 am »
But overall my point is to dispel the mistaken belief that JITs and hotspot techniques are slow. In many important cases, they aren't.

One thing which slips past me, and I am failing to grok...

JITs and so on are useful when your target architecture changes (e.g. Java on ARM vs Intel), or you don't know what code you will be running in advance (e.g. dynamically linking large class libraries, SQL queries, stored procedures, packet filters, graphics shaders...).

If you know the target platform (after all, this is in the context of embedded systems), and you know pretty much exactly what code you will be running and how it behaves, then why not just compile the machine-independent byte-code to real op-codes before you push it into the device, i.e. just what the backend of most compilers does? Do it once and do it well: take your time and optimize it.

Unless the JIT has information that is not available at build time (or can be gained during profiling within the development cycle) then how can it do a better job?

The only reasons I can think of are that it might improve code density, or that it may provide a sandbox.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11648
  • Country: my
  • reassessing directives...
Re: Comparing "Modern" embedded chip programming languages?
« Reply #82 on: July 11, 2016, 02:28:43 am »
Unless the JIT has information that is not available at build time (or can be gained during profiling within the development cycle) then how can it do a better job?
The only reasons I can think of are that it might improve code density, or that it may provide a sandbox.
Well, you heard it: runtime monitoring, probably in a sandboxing way, and then optimizing for code behaviour, probably in parallel at the same time, hence it's not slow. How? That's anyone's guess but the runtime environment designer's. (PS: optimizing for code behaviour works on a machine-to-machine basis; even two exactly identical machines may produce different code signatures if one user is a pothead and the other one is not.)
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #83 on: July 11, 2016, 02:37:14 am »
I had the Oberon operating system running a while back.  It seemed kind of minimalist as I recall.  My only interest in Oberon would be for embedded programming and that most certainly wouldn't require much of an OS.  A little RTOS perhaps but that's about all.

In looking at Astrobe (Oberon), I noted that there doesn't seem to be library code to use TCP/IP with the LPC1768 (mbed) and that's a huge problem.  I most certainly do not want to rewrite the TCP/IP stack.

TCP/IP and other nice things are in the Oberon operating system.

True, but the Oberon OS doesn't target embedded processors, and I don't know if it will port easily. I wandered all over the Astrobe site; there is mention of trying to get Ethernet working, but I couldn't find anything about TCP/IP. I would want a minimalist stack like lwIP.
 

Offline westfwTopic starter

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #84 on: July 11, 2016, 05:33:18 am »
Quote
proper thread title should be... "advise me on best/fastest compiler/emulator/VM" regardless of how the "language" is formed.
No!
First, some apologies.  I entered the original question just before going on vacation, hoping it would go in useful directions.
But it's not...

I don't particularly care about the footprint or performance hit caused by going to this "new" language.  I'm more interested in "why do python and lua both exist; aren't they very similar?"  Good answers would be "lua has a smaller footprint because most of its high-level functionality is aimed at strings, while python has more features for dealing with float math."  (Note: entirely made-up answer.)

I'll settle for "Java uses a VM that is tightly specced to avoid security issues" (oops), while "python bytecodes are designed for easily addable user functions", or similar as well.

The conversation so far HAS raised some questions:
  • What's the difference between a byte-code-interpreted language and one that uses a VM?
  • Which languages fall into which categories?  Java and C# use VMs, while python and lua use BCIs, right?
  • JIT compilation is a pretty deep heuristic, still characterized by very large VM footprints?  Yes or no?

Quote
Can anybody suggest anything that has actually changed this decade that makes the technique any more or less suited than running native code on a smallish embedded system?
Sure.  At the same time that security and code quality have become big issues, the price of a chip that will run some kind of VM or BCI, while still leaving me "enough" performance and user code space, has come down to the price of a smaller micro that couldn't. That makes me think that such a thing doing HL memory management, runtime range and type checking and so on might not be an unreasonable idea.  I am assuming that the HLL will run fast enough and small enough for some tasks; how do they compare OTHER than that?
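As a concrete picture of what that runtime checking buys and costs: a managed HLL emits the equivalent of this compare-and-branch on every array access automatically. A sketch in plain C (illustrative, not any particular runtime's code):

Code:
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int32_t *data;
    size_t   len;
} checked_array;

/* Every access pays a compare-and-branch that raw C omits. On a
   Cortex-M that is a couple of cycles per access: cheap in absolute
   terms, which is the argument for accepting it once the chip is
   fast enough. */
int32_t checked_get(const checked_array *a, size_t i) {
    if (i >= a->len)
        abort();            /* a managed runtime would raise an exception */
    return a->data[i];
}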

 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: Comparing "Modern" embedded chip programming languages?
« Reply #85 on: July 11, 2016, 06:24:38 am »
I have no objection if someone says a JIT is faster than an interpreter, but a JIT or bytecode or whatever faster than native machine code? That's unjustifiable.

You will no doubt be surprised to learn that it isn't as simple as that; see http://www.hpl.hp.com/techreports/1999/HPL-1999-78.html. In some cases interpreters increase the execution speed.

The key point is that a static compiler has to guess the code/data execution patterns at compile time, and - especially with C/C++ - has to make pessimising assumptions which are reflected in the generated machine code. In particular the data access patterns greatly influence the overall speed where the system has L1/2/3 caches and NUMA memory. OTOH an interpreter/runtime can inspect what's actually happening, and optimise that.

Hence in the Dynamo project, they used a C compiler to generate code for a machine X, and measured its speed. They then took an emulator of machine X running in machine X, observed what was happening, and changed the machine code on the fly. They found that when running in the emulator of machine X, the optimised code was sometimes faster than the compiler's output running directly on the bare metal machine X.

Of course it is unlikely that an embedded system will have NUMA memory, but many do have caches.

I'd really be glad to see which MCU has 80 to 512 MB of RAM to work with (the amount the PA-8000 workstation used in that paper had). Even if the memory footprint is quite low in workstation terms, 280-300ish KB, it is not in MCU terms: the JIT alone (and not even a modern one) is huge. To put it into perspective, it would occupy more than half the memory of the biggest ST Cortex-M7, so it could technically work, but that's a lot of practically wasted RAM.

In general terms, JITs (or VMs, or runtimes, or whatever other name you have for them) are not a magic cure-all panacea. As always in engineering, they can give you speed and safety, but you pay for them in RAM, latency and predictability. A JIT doesn't just compile and optimize the whole application at startup: it starts as a pure (dog slow) interpreter and then, by profiling the code, compiles and gradually optimizes the most-used code paths (that's right, they usually aren't going from -O0 to -O3 directly). You have to take into account that while the JIT is compiling/optimizing, your code slows down significantly. Then there is the issue that if, while running optimized code, you branch into a previously unrun part of the bytecode, the JIT falls back to "dog slow" mode (i.e. the interpreter), and if that code path is not hit frequently enough it won't ever be optimized, so you will have essentially random slowdowns and speedups.

In summary, this technology is not that well suited to embedded use, where predictable timing and execution (knowing that startup will always take 5 ms, not 5 ms most of the time but 50 ms when the Ethernet cable is attached because the network code has to be JITted) is usually more important than a 10% boost in mean execution speed.

Typical matchbox-size Zynq systems consist of an FPGA, a dual-core ARM processor, cache, and 512 MB of RAM, and are used in scopes and many highly specialised embedded systems. That's the way the world is going.

If you look elsewhere you will see that I frequently state timing predictability is more important than speed. That also means any interpreter overhead is not critical.

But overall my point is to dispel the mistaken belief that JITs and hotspot techniques are slow. In many important cases, they aren't.

I didn't think the Zynq was a subject for this thread. Sure, it can be used in embedded devices, but a 600 MHz to 1 GHz dual-core processor with external RAM and ROM is by definition a general-purpose processor, running a general-purpose OS (I used a Zynq to do image filtering, believe it or not, so I'm very familiar with it, and I was running a stock Linux kernel on it). So there is no need for embedded Lua or MicroPython here; there are so many resources that you can use full-fledged Lua and CPython. The really time-critical part of the system would be put in the FPGA anyway (or else why are you using a Zynq?).
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: Comparing "Modern" embedded chip programming languages?
« Reply #86 on: July 11, 2016, 07:05:44 am »
Quote
proper thread title should be... "advise me on best/fastest compiler/emulator/VM" regardless of how the "language" is formed.
No!
First, some apologies.  I entered the original question just before going on vacation, hoping it would go in useful directions.
But it's not...

I don't particularly care about the footprint or performance hit caused by going to this "new" language.  I'm more interested in "why do python and lua both exist; aren't they very similar?"  Good answers would be "lua has a smaller footprint because most of its high-level functionality is aimed at strings, while python has more features for dealing with float math."  (Note: entirely made-up answer.)

I'll settle for "Java uses a VM that is tightly specced to avoid security issues" (oops), while "python bytecodes are designed for easily addable user functions", or similar as well.

The conversation so far HAS raised some questions:
  • What's the difference between a byte-code-interpreted language and one that uses a VM?
  • Which languages fall into which categories?  Java and C# use VMs, while python and lua use BCIs, right?
  • JIT compilation is a pretty deep heuristic, still characterized by very large VM footprints?  Yes or no?

Quote
Can anybody suggest anything that has actually changed this decade that makes the technique any more or less suited than running native code on a smallish embedded system?
Sure.  At the same time that security and code quality have become big issues, the price of a chip that will run some kind of VM or BCI, while still leaving me "enough" performance and user code space, has come down to the price of a smaller micro that couldn't. That makes me think that such a thing doing HL memory management, runtime range and type checking and so on might not be an unreasonable idea.  I am assuming that the HLL will run fast enough and small enough for some tasks; how do they compare OTHER than that?
1) Bytecode is by definition an intermediate representation of your program, in between source code and machine code. As such you can have VMs with bytecode (Java's JVM, for example), interpreters with bytecode (CPython), JIT compilers that use bytecode as input (the CLR and .NET), and AOT compilers that use bytecode as an input (LLVM). See the sketch after this list.
2) It is not that clear cut. Java uses a VM but is also JIT-compiled on the fly; the same goes for C#. Python, depending on what you use, can be purely interpreted (CPython) or JIT-compiled (PyPy, Pyston). I don't know Lua well enough to speak about it.
3) Yes, of course it is. After all, you are asking the JIT to do work equivalent to a normal AOT compiler's, just in much less time. The JIT isn't run on the whole program at the start (if that were the case, what would be the point of JITs? You could AOT-compile everything); just the hot paths are compiled and optimized, so there is a huge amount of heuristics in the choice of which code paths to optimize and by how much.
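The sketch promised under 1): a toy stack machine in C, showing what "bytecode plus interpreter" means mechanically. Every instruction costs a fetch, a decode and an indirect branch on top of the real work, which is exactly the overhead JITs exist to remove (the three-opcode instruction set is invented for illustration):

Code:
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_HALT };   /* toy instruction set */

/* The classic fetch-decode-execute loop. */
int32_t run(const uint8_t *code) {
    int32_t stack[32];
    int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = (int8_t)code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

int main(void) {
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    printf("%d\n", run(prog));   /* prints 5 */
    return 0;
}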

Here the WebKit developers describe in detail their JavaScript execution stack, composed of a baseline interpreter and 3 different JITs tuned differently for performance or runtime footprint, and you can see that to get dynamic interpreted languages (such as JavaScript or Python) to execute at only 2x the execution time of an equivalent C program, you have to go pretty far and spend a lot of resources on jitting and optimizing.

 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Comparing "Modern" embedded chip programming languages?
« Reply #87 on: July 11, 2016, 07:12:42 am »
I have no objection if someone says a JIT is faster than an interpreter, but a JIT or bytecode or whatever faster than native machine code? That's unjustifiable.

You will no doubt be surprised to learn that it isn't as simple as that; see http://www.hpl.hp.com/techreports/1999/HPL-1999-78.html. In some cases interpreters increase the execution speed.

The key point is that a static compiler has to guess the code/data execution patterns at compile time, and - especially with C/C++ - has to make pessimising assumptions which are reflected in the generated machine code. In particular the data access patterns greatly influence the overall speed where the system has L1/2/3 caches and NUMA memory. OTOH an interpreter/runtime can inspect what's actually happening, and optimise that.

Hence in the Dynamo project, they used a C compiler to generate code for a machine X, and measured its speed. They then took an emulator of machine X running in machine X, observed what was happening, and changed the machine code on the fly. They found that when running in the emulator of machine X, the optimised code was sometimes faster than the compiler's output running directly on the bare metal machine X.

Of course it is unlikely that an embedded system will have NUMA memory, but many do have caches.

I'd really be glad to see which MCU has 80 to 512 MB of RAM to work with (the amount the PA-8000 workstation used in that paper had). Even if the memory footprint is quite low in workstation terms, 280-300ish KB, it is not in MCU terms: the JIT alone (and not even a modern one) is huge. To put it into perspective, it would occupy more than half the memory of the biggest ST Cortex-M7, so it could technically work, but that's a lot of practically wasted RAM.

In general terms, JITs (or VMs, or runtimes, or whatever other name you have for them) are not a magic cure-all panacea. As always in engineering, they can give you speed and safety, but you pay for them in RAM, latency and predictability. A JIT doesn't just compile and optimize the whole application at startup: it starts as a pure (dog slow) interpreter and then, by profiling the code, compiles and gradually optimizes the most-used code paths (that's right, they usually aren't going from -O0 to -O3 directly). You have to take into account that while the JIT is compiling/optimizing, your code slows down significantly. Then there is the issue that if, while running optimized code, you branch into a previously unrun part of the bytecode, the JIT falls back to "dog slow" mode (i.e. the interpreter), and if that code path is not hit frequently enough it won't ever be optimized, so you will have essentially random slowdowns and speedups.

In summary, this technology is not that well suited to embedded use, where predictable timing and execution (knowing that startup will always take 5 ms, not 5 ms most of the time but 50 ms when the Ethernet cable is attached because the network code has to be JITted) is usually more important than a 10% boost in mean execution speed.

Typical matchbox-size Zynq systems consist of an FPGA, a dual-core ARM processor, cache, and 512 MB of RAM, and are used in scopes and many highly specialised embedded systems. That's the way the world is going.

If you look elsewhere you will see that I frequently state timing predictability is more important than speed. That also means any interpreter overhead is not critical.

But overall my point is to dispel the mistaken belief that JITs and hotspot techniques are slow. In many important cases, they aren't.

I didn't think the Zynq was a subject for this thread. Sure, it can be used in embedded devices, but a 600 MHz to 1 GHz dual-core processor with external RAM and ROM is by definition a general-purpose processor, running a general-purpose OS (I used a Zynq to do image filtering, believe it or not, so I'm very familiar with it, and I was running a stock Linux kernel on it). So there is no need for embedded Lua or MicroPython here; there are so many resources that you can use full-fledged Lua and CPython. The really time-critical part of the system would be put in the FPGA anyway (or else why are you using a Zynq?).

Your thinking is far too limited. The only reason to use a Zynq is for an embedded system where you need a tightly-coupled FPGA. If you don't need the FPGA then you would simply use a much cheaper ARM processor!

There's no requirement to run a general purpose OS (although it is possible). To ensure real-time operation people often run bare metal code on one processor and a simple RTOS on the other.

Just look at the pictures, and you will see none of these are "general purpose computers":
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: Comparing "Modern" embedded chip programming languages?
« Reply #88 on: July 11, 2016, 08:07:20 am »
Your thinking is far too limited. The only reason to use a Zynq is for an embedded system where you need a tightly-coupled FPGA. If you don't need the FPGA then you would simply use a much cheaper ARM processor!

There's no requirement to run a general purpose OS (although it is possible). To ensure real-time operation people often run bare metal code on one processor and a simple RTOS on the other.

Just look at the pictures, and you will see none of these are "general purpose computers":

I think you misunderstood me. I know full well that a Zynq (or Cyclone V SoC, or other FPGA+ARM combos) is useful where you need tight integration between processor and FPGA, and I also know that you can run bare metal on a Zynq. That doesn't change the fact that:
1) If you have an FPGA, you might as well use it to implement the time-critical part of the work (control loops, precise sampling, etc.); the ARM cores can then handle comms, configuration, HMI and other low-priority stuff. And there is no doubt that an FPGA (if the designer knows how to use it) is much more deterministic in terms of execution time, and more often than not (though not always) much faster, thanks to the high parallelism and pipelining that can be done.

2) Even if you need the tight coupling between core and fabric, that doesn't change the fact that the processors on a Zynq are beasts, not by any means limited to embedded use. If you want to run Python or Lua on one, you can use full Python and Lua; there is no need for MicroPython and eLua. In particular, you can run an OS (namely Linux) on core 1 to get your Lua/Python/Ada/[insert language here], and on core 2 you can run bare-metal C or C++ to get full-blown determinism (if PREEMPT_RT is not enough).

3) Are they not? Here your thinking is far too limited. I see you have carefully chosen only the cheapest boards, where the cost of the Zynq is a significant part of the final product's price, so they could not fit an HDMI/DVI connector; but the SoC itself will merrily support HDMI video out (it might not play Crysis, but that's not the point). So you could argue they are not GP computers since they haven't got graphics (mind you, by that logic a headless server is not a GP computer either, since it hasn't got graphics). Otherwise, they all have at least USB 2.0, most of the boards have some kind of networking capability (WiFi or Ethernet based), and they can run a general-purpose OS.
So I ask you (and I'm sincerely asking; it's not a rhetorical question): what is the difference between these and a GP computer (or at least a Raspberry Pi, which is undoubtedly a GP computer)?
 

Offline Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11648
  • Country: my
  • reassessing directives...
Re: Comparing "Modern" embedded chip programming languages?
« Reply #89 on: July 11, 2016, 08:51:39 am »
Quote
proper thread title should be... "advise me on best/fastest compiler/emulator/VM" regardless of how the "language" is formed.
No!
First, some apologies.  I entered the original question just before going on vacation, hoping it would go in useful directions.
But it's not...
Don't take it personally, and apologies from my part as well. It's 'not uncommon' that a 'this or that language' thread like this will end up in a 'bells and whistles' pissing contest. I personally will favour what a language is capable of doing, or what it can give you power over at the language/syntax level, such as the ability to type cast, do non-aligned byte or pointer shifts, and even reach address spaces beyond boundaries if the OS permits :P while others prefer to argue about how intelligent, complete and feature-full its 'bundled' library/compiler/RE/JIT are. The common example is the automatic, in-the-background array boundary check. They may be well aware of how that works behind the scenes, but it is still probable, albeit less likely, that they are not. Why this eternal pissing contest, you ask? My conclusion is simple: we come in two flavours, or schools of practice. One is the professional type of coder: they want to accomplish tasks on a tight time budget and delegate as many problems to others as possible, hence IDE developers who took responsibility for managed libraries won their hearts. The other school of practice is the code-monkey-lord type: the hobbyists/hackers and such, a community who want as much power and control from a particular language as possible. No managed library? No problem! We build that in house! This language can do this, others can't... that type of school. So which school are you? If you can answer that, then you can make the selection from the plethora of choices more easily. I can understand the rationale of their school of practice, but I'm not sure about theirs towards us. The best thing is to accept both and respect the choice each one made to suit their particular needs. FWIW, 2 cents.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Comparing "Modern" embedded chip programming languages?
« Reply #90 on: July 11, 2016, 10:55:00 am »
Your thinking is far too limited. The only reason to use a Zynq is for an embedded system where you need a tightly-coupled FPGA. If you don't need the FPGA then you would simply use a much cheaper ARM processor!

There's no requirement to run a general purpose OS (although it is possible). To ensure real-time operation people often run bare metal code on one processor and a simple RTOS on the other.

Just look at the pictures, and you will see none of these are "general purpose computers":

I think you misunderstood me. I know full well that a Zynq (or Cyclone V SoC, or other FPGA+ARM combos) is useful where you need tight integration between processor and FPGA, and I also know that you can run bare metal on a Zynq. That doesn't change the fact that:
1) If you have an FPGA, you might as well use it to implement the time-critical part of the work (control loops, precise sampling, etc.); the ARM cores can then handle comms, configuration, HMI and other low-priority stuff. And there is no doubt that an FPGA (if the designer knows how to use it) is much more deterministic in terms of execution time, and more often than not (though not always) much faster, thanks to the high parallelism and pipelining that can be done.

2) Even if you need the tight coupling between core and fabric, that doesn't change the fact that the processors on a Zynq are beasts, not by any means limited to embedded use. If you want to run Python or Lua on one, you can use full Python and Lua; there is no need for MicroPython and eLua. In particular, you can run an OS (namely Linux) on core 1 to get your Lua/Python/Ada/[insert language here], and on core 2 you can run bare-metal C or C++ to get full-blown determinism (if PREEMPT_RT is not enough).

3) Are they not? Here your thinking is far too limited. I see you have carefully chosen only the cheapest boards, where the cost of the Zynq is a significant part of the final product's price, so they could not fit an HDMI/DVI connector; but the SoC itself will merrily support HDMI video out (it might not play Crysis, but that's not the point). So you could argue they are not GP computers since they haven't got graphics (mind you, by that logic a headless server is not a GP computer either, since it hasn't got graphics). Otherwise, they all have at least USB 2.0, most of the boards have some kind of networking capability (WiFi or Ethernet based), and they can run a general-purpose OS.
So I ask you (and I'm sincerely asking; it's not a rhetorical question): what is the difference between these and a GP computer (or at least a Raspberry Pi, which is undoubtedly a GP computer)?

I may well have misunderstood what you wrote, but the primary use-case for a Zynq is as an embedded system, therefore it is (to use your phrase) "an object for this thread".

As for "carefully choosing the cheapest boards", they weren't carefully chosen and they aren't the cheapest.

If you want a more expensive embedded board or subsystem with a Zynq, look at
EUR1500 https://shop.trenz-electronic.de/en/TE0782-02-045-2I-High-Performance-Xilinx-Zynq-Z-7045-Modul-ind.-temp.range-8-5-x-8-5-cm
or search for "software defined radio", e.g. https://epiqsolutions.com/quadratiq/
or search for "software defined networking", e.g. http://www.xilinx.com/products/boards-and-kits/1-411yn3.html
or many many many special-purpose boards
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: Comparing "Modern" embedded chip programming languages?
« Reply #91 on: July 11, 2016, 11:35:55 am »
If you want a more expensive embedded board or subsystem with a Zynq, look at
EUR1500 https://shop.trenz-electronic.de/en/TE0782-02-045-2I-High-Performance-Xilinx-Zynq-Z-7045-Modul-ind.-temp.range-8-5-x-8-5-cm
or search for "software defined radio", e.g. https://epiqsolutions.com/quadratiq/
or search for "software defined networking", e.g. http://www.xilinx.com/products/boards-and-kits/1-411yn3.html
or many many many special-purpose boards

Again, I fail to see why these boards are not general computers. They are general computers programmed to do a very specific task, so they act as special-purpose boards, but there is nothing stopping you (OK, crypto, if secure boot is used, but that is beside the point) from flashing the memory and using the module at least as a server (which qualifies as GP to me). Is it a good idea? No, of course not; it is a terrible idea to have a $1500 SDR board do what a $35 Raspberry Pi 3 can do. So I'm not suggesting that it should be done, only that it can be done.

Then again, it's funny that you point me to 3 systems, 1 of which is just a glorified breakout board (the Trenz one), so what is its special purpose? The other 2, just as I said, use a normal (general-purpose) Linux kernel on the processor part, which handles, for example, the web GUI (web servers being a very special-purpose embedded technology, I know), while the real real-time embedded stuff (all the DSP and signal processing) is done in the programmable logic, not on an embedded Lua or MicroPython interpreter running on bare metal; that is pathetic just to think about.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Comparing "Modern" embedded chip programming languages?
« Reply #92 on: July 11, 2016, 12:36:48 pm »
If you want a more expensive embedded board or subsystem with a Zynq, look at
EUR1500 https://shop.trenz-electronic.de/en/TE0782-02-045-2I-High-Performance-Xilinx-Zynq-Z-7045-Modul-ind.-temp.range-8-5-x-8-5-cm
or search for "software defined radio", e.g. https://epiqsolutions.com/quadratiq/
or search for "software defined networking", e.g. http://www.xilinx.com/products/boards-and-kits/1-411yn3.html
or many many many special-purpose boards

Again, I fail to see why these boards are not general computers. They are general computers programmed to do a very specific task, so they act as special-purpose boards, but there is nothing stopping you (OK, crypto, if secure boot is used, but that is beside the point) from flashing the memory and using the module at least as a server (which qualifies as GP to me). Is it a good idea? No, of course not; it is a terrible idea to have a $1500 SDR board do what a $35 Raspberry Pi 3 can do. So I'm not suggesting that it should be done, only that it can be done.

Then again, it's funny that you point me to 3 systems, 1 of which is just a glorified breakout board (the Trenz one), so what is its special purpose? The other 2, just as I said, use a normal (general-purpose) Linux kernel on the processor part, which handles, for example, the web GUI (web servers being a very special-purpose embedded technology, I know), while the real real-time embedded stuff (all the DSP and signal processing) is done in the programmable logic, not on an embedded Lua or MicroPython interpreter running on bare metal; that is pathetic just to think about.

Sigh. Playing around with different definitions of "embedded" and "general purpose computer" is a waste of everybody's time. Neither term is well-defined.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline captbill

  • Contributor
  • Posts: 37
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #93 on: July 17, 2016, 05:40:34 pm »
I had the Oberon operating system running a while back.  It seemed kind of minimalist as I recall.  My only interest in Oberon would be for embedded programming and that most certainly wouldn't require much of an OS.  A little RTOS perhaps but that's about all.

In looking at Astrobe (Oberon), I noted that there doesn't seem to be library code to use TCP/IP with the LPC1768 (mbed) and that's a huge problem.  I most certainly do not want to rewrite the TCP/IP stack.

TCP/IP and other nice things are in the Oberon operating system.

True, but the Oberon OS doesn't target embedded processors, and I don't know if it will port easily. I wandered all over the Astrobe site; there is mention of trying to get Ethernet working, but I couldn't find anything about TCP/IP. I would want a minimalist stack like lwIP.

Congrats on your decision to use Astrobe/Oberon. You won't regret it. There is indeed a learning curve, or rather a "breaking of old habits" to be learned. Oberon is all about modularization and minimalism, and getting the feel of the great freedom afforded by its virtues takes some getting used to.

Writing a "TCP stack library" for Oberon would probably DWARF the size of the full Oberon OS
(which is <2mb). Oberon's power is that it avoids overly complex "protocols" like TCP, which
are anything but minimalistic. Oberon has it's own, sleek little network based around the Nrf24l01
RF chip.

The way to approach TCP, if you just must have it, is to establish a serial-to-TCP bridge. The ESP8266 is a good route. The ESP8266 has an SoC to handle the "network stack", so the heavy lifting is done by dedicated hardware rather than your own software. With this arrangement, you talk to the ESP8266 with a standard UART/RS232 library, and any TCP functionality is performed by the ESP8266, which is purpose-built for networking.

This is a neat all-around solution:
https://github.com/jeelabs/esp-link
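For a sense of what "talking to the ESP8266 over UART" looks like from the host side, here is a sketch using Espressif's stock AT firmware (esp-link itself uses its own protocol; the uart_puts/uart_write helpers and the peer address are placeholders):

Code:
#include <stdint.h>
#include <stdio.h>

void uart_puts(const char *s);            /* board-specific, assumed given */
void uart_write(const uint8_t *b, int n);

/* Open a TCP connection and push one payload through it. Real code must
   wait for the "OK" and ">" responses between steps; that handshaking
   is omitted here for brevity. */
void esp_send(const uint8_t *payload, int len)
{
    char cmd[24];
    uart_puts("AT+CIPSTART=\"TCP\",\"192.168.1.10\",9100\r\n"); /* placeholder peer */
    snprintf(cmd, sizeof cmd, "AT+CIPSEND=%d\r\n", len);
    uart_puts(cmd);                       /* module answers with a ">" prompt */
    uart_write(payload, len);             /* raw bytes go out over TCP */
}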
« Last Edit: July 17, 2016, 06:02:00 pm by captbill »
 

Offline Kalvin

  • Super Contributor
  • ***
  • Posts: 2145
  • Country: fi
  • Embedded SW/HW.
Re: Comparing "Modern" embedded chip programming languages?
« Reply #94 on: July 17, 2016, 06:20:16 pm »
Writing a "TCP stack library" for Oberon would probably DWARF the size of the full Oberon OS
(which is <2mb). Oberon's power is that it avoids overly complex "protocols" like TCP, which
are anything but minimalistic. Oberon has it's own, sleek little network based around the Nrf24l01
RF chip.

If you take a look at the code and RAM size requirements of some of the TCP/IP stack implementations for ARM processors written in C, the requirements typically seem to vary from 60 KB to 100+ KB and upwards, depending on which services are needed.
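For scale, the application code that sits on top of such a stack is tiny; the RAM goes into the stack's buffers and control blocks underneath. A minimal TCP echo listener against lwIP's raw API looks roughly like this (a sketch; error handling and pbuf chaining omitted):

Code:
#include "lwip/tcp.h"

/* Called by lwIP whenever data arrives on an accepted connection. */
static err_t on_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
{
    if (p == NULL) {              /* remote side closed the connection */
        tcp_close(pcb);
        return ERR_OK;
    }
    tcp_write(pcb, p->payload, p->len, TCP_WRITE_FLAG_COPY); /* echo back */
    tcp_recved(pcb, p->tot_len);  /* re-open the receive window */
    pbuf_free(p);
    return ERR_OK;
}

static err_t on_accept(void *arg, struct tcp_pcb *newpcb, err_t err)
{
    tcp_recv(newpcb, on_recv);
    return ERR_OK;
}

void echo_server_init(void)
{
    struct tcp_pcb *pcb = tcp_new();
    tcp_bind(pcb, IP_ADDR_ANY, 7);   /* conventional TCP echo port */
    pcb = tcp_listen(pcb);
    tcp_accept(pcb, on_accept);
}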
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #95 on: July 17, 2016, 07:20:55 pm »
I had the Oberon operating system running a while back.  It seemed kind of minimalist as I recall.  My only interest in Oberon would be for embedded programming and that most certainly wouldn't require much of an OS.  A little RTOS perhaps but that's about all.

In looking at Astrobe (Oberon), I noted that there doesn't seem to be library code to use TCP/IP with the LPC1768 (mbed) and that's a huge problem.  I most certainly do not want to rewrite the TCP/IP stack.

TCP/IP and other nice things are in the Oberon operating system.

True, but the Oberon OS doesn't target embedded processors, and I don't know if it will port easily. I wandered all over the Astrobe site; there is mention of trying to get Ethernet working, but I couldn't find anything about TCP/IP. I would want a minimalist stack like lwIP.

Congrats on your decision to use Astrobe/Oberon. You won't regret it. There is indeed a learning curve, or rather a "breaking of old habits" to be learned. Oberon is all about modularization and minimalism, and getting the feel of the great freedom afforded by its virtues takes some getting used to.

Writing a "TCP stack library" for Oberon would probably DWARF the size of the full Oberon OS
(which is <2mb). Oberon's power is that it avoids overly complex "protocols" like TCP, which
are anything but minimalistic. Oberon has it's own, sleek little network based around the Nrf24l01
RF chip.

The way to approach TCP, if you just must have it, is to establish a serial-to-TCP bridge. The ESP8266 is a good route. The ESP8266 has an SoC to handle the "network stack", so the heavy lifting is done by dedicated hardware rather than your own software. With this arrangement, you talk to the ESP8266 with a standard UART/RS232 library, and any TCP functionality is performed by the ESP8266, which is purpose-built for networking.

This is a neat all-around solution:
https://github.com/jeelabs/esp-link

The thing is, I need a bidirectional server port to which I can make a TCP connection for a console keyboard/typewriter (this 'device' would only be connected when required), a unidirectional server port to which I will almost always connect a card reader, and a TCP client port to connect to a printer/plotter.  These are all virtual devices for an IBM 1130 I built in an FPGA.  The entire reason for considering any particular language, in terms of this project only, is the availability of a TCP/IP stack that will support multiple streams at a reasonable data rate.  The printer/plotter should be wire speed, but the others can be the equivalent of 19.2 kBaud, although faster is always better.

The way I see the networking going is that the 2 servers are waiting on external requests.  They may, or may not, get them, depending on the job.  The TCP client will make a connection to an existing LaserJet as required.  A complicating factor is that the same LaserJet is used for both plotter and printer output, so I need some kind of spooler to queue up the jobs, probably on an SD card.  It would be better if the printer stream went to a terminal session as well, because there is little point in printing a job listing if it didn't compile.  So the idea is to spool both streams and have some kind of graphic display and a joystick to select output files and either print them and/or delete them.

I had planned to use one of the higher-speed ARMs that include the MAC/PHY.  I am currently using an LPC1768 (mbed) to handle the plotter stream, but I would like to step it up a notch and get rid of my UART ports.
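The connection layout described above, written down as a table (a sketch; the port numbers for the two servers are invented placeholders, while 9100 is the conventional raw-print port a networked LaserJet listens on):

Code:
#include <stdint.h>

typedef enum { ROLE_SERVER, ROLE_CLIENT } sock_role;

/* One entry per virtual IBM 1130 peripheral. The two servers sit in
   LISTEN waiting for the console or card-reader tools to connect;
   the printer entry actively connects out to the LaserJet. */
typedef struct {
    const char *device;
    sock_role   role;
    uint16_t    port;
} endpoint;

static const endpoint endpoints[] = {
    { "console keyboard/typewriter", ROLE_SERVER, 2301 }, /* placeholder */
    { "card reader",                 ROLE_SERVER, 2302 }, /* placeholder */
    { "printer/plotter to LaserJet", ROLE_CLIENT, 9100 }, /* raw print */
};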
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Comparing "Modern" embedded chip programming languages?
« Reply #96 on: July 17, 2016, 09:43:49 pm »
There are also the Wiznet network chips (for wired networks), which use SPI. Usually SPI allows the use of FIFOs and DMA.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13748
  • Country: gb
    • Mike's Electric Stuff
Re: Comparing "Modern" embedded chip programming languages?
« Reply #97 on: July 17, 2016, 11:39:16 pm »
There are also the Wiznet network chips (for wired networks), which use SPI. Usually SPI allows the use of FIFOs and DMA.
I've just been playing with these, starting off from knowing nearly zero about Ethernet/TCP etc. I got it running pretty quickly; the main issue was that the datasheet isn't very clear in some places.
The W5500 is pretty cheap, includes the PHY and supports SPI up to something like 80 MHz. Because it handles all the low-level stuff, you don't need particularly low latency; you just have to empty the buffer fast enough.
I was using it with a PIC32MZ, which actually has Ethernet on board but needs an external PHY; the Wiznet chip just made things really easy, so it was a good solution for what I was doing.
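The host-side burden really is small: every W5500 access is a 16-bit offset plus one control byte over SPI. A sanity-check read of the chip version register, following the frame format in the W5500 datasheet (spi_select/spi_xfer are hypothetical board-specific helpers):

Code:
#include <stdint.h>

/* Board-specific SPI primitives, assumed to exist elsewhere. */
void    spi_select(int on);
uint8_t spi_xfer(uint8_t out);

/* W5500 SPI frame: 16-bit offset, then a control byte with
   bits 7-3 = block select (00000 = common registers),
   bit 2 = read(0)/write(1), bits 1-0 = operation mode (00 = variable). */
static uint8_t w5500_read_common(uint16_t off)
{
    uint8_t v;
    spi_select(1);
    spi_xfer(off >> 8);
    spi_xfer(off & 0xFF);
    spi_xfer(0x00);          /* common block, read, variable-length mode */
    v = spi_xfer(0x00);      /* clock out one data byte */
    spi_select(0);
    return v;
}

int w5500_present(void)
{
    return w5500_read_common(0x0039) == 0x04;  /* VERSIONR reads 0x04 */
}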
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline richardman

  • Frequent Contributor
  • **
  • Posts: 427
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #98 on: July 18, 2016, 02:13:42 am »
I don't particularly care about the footprint or performance hit caused by going to this "new" language.  I'm more interested in "why do python and lua both exist; aren't they very similar?"  Good answers would be "lua has a smaller footprint because most of its high-level functionality is aimed at strings, while python has more features for dealing with float math."  (Note: entirely made-up answer.)

I'll settle for "Java uses a VM that is tightly specced to avoid security issues" (oops), while "python bytecodes are designed for easily addable user functions", or similar as well.

I knew the reason you asked (since we had a few discussions off the forum on similar subjects), and hence I had not contributed to this thread so far. Anyway, I think the answer really is that language designers invent a new language for their own purpose (e.g. Ritchie wanted something to rewrite Unix in), and then the language's popularity grows organically because users like it, often beyond the scope of the original designers' parameters.

For example, I was pretty surprised to learn about MicroPython. Certainly "embedded" and Python don't sound like they should have much in common, but there it is. The guy ran a successful Kickstarter and seems to continue to have followers and fans. If one asked the MicroPython users why they chose the board, I wonder what the answers would be.
// richard http://imagecraft.com/
JumpStart C++ for Cortex (compiler/IDE/debugger): the fastest easiest way to get productive on Cortex-M.
Smart.IO: phone App for embedded systems with no app or wireless coding
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Comparing "Modern" embedded chip programming languages?
« Reply #99 on: July 18, 2016, 03:23:03 am »
There are also the Wiznet network chips (for wired networks), which use SPI. Usually SPI allows the use of FIFOs and DMA.
I've just been playing with these, starting off from knowing nearly zero about Ethernet/TCP etc. I got it running pretty quickly; the main issue was that the datasheet isn't very clear in some places.
The W5500 is pretty cheap, includes the PHY and supports SPI up to something like 80 MHz. Because it handles all the low-level stuff, you don't need particularly low latency; you just have to empty the buffer fast enough.
I was using it with a PIC32MZ, which actually has Ethernet on board but needs an external PHY; the Wiznet chip just made things really easy, so it was a good solution for what I was doing.

I ordered a board with the W5500.  The board has only the W5500 and a bunch of pins in a header.  This will allow me to connect it to any of the uCs I have lying around.  The Wiznet development board was pretty underwhelming in terms of on-chip RAM.

I have a few STM32F boards lying around, so I should be able to come up with something.

BTW, the W5500 can handle 8 sockets.  More than enough for my needs.
 

