Author Topic: High Level code on micro-controllers  (Read 4464 times)


Offline andyturk

  • Frequent Contributor
  • **
  • Posts: 895
  • Country: us
Re: High Level code on micro-controllers
« Reply #25 on: March 29, 2018, 02:03:56 pm »
Harel statecharts are a good starting point, a.k.a. UML state diagrams.
Or maybe any kind of diagram, because pictures speak to a different part of your brain.

I was messing around with graphviz once and fed it some grep output from our codebase that found all the events, and who was sending which message to whom. This was an event-driven system that never had a top-down design, it just grew over the years. Graphviz/dot will automatically "optimize" a diagram to minimize arcs that cross each other, and the result put a particular subsystem right in the middle with arrows coming and going like a star burst.

None of us had seen this kind of information before, but we all realized that the module in the middle had been more tricky to work with than other parts of the system. The picture confirmed what some of us felt about the code and provided a great starting point to a conversation about architecture.
 
The following users thanked this post: Frank

Online nctnico

  • Super Contributor
  • ***
  • Posts: 26968
  • Country: nl
    • NCT Developments
Re: High Level code on micro-controllers
« Reply #26 on: March 29, 2018, 02:13:12 pm »
I think there are certain types of MCU devices that can be considered suitable for moving the level of abstraction up a bit. But it all starts with a good compiler. I would not go with interpreted languages such as Lua (which sucks imo).

So yes, it is definitely time to move to a higher level, but not necessarily time for a new language and certainly not an interpreted language (unless you aim to run user/custom scripts on the device itself).
FYI: Lua isn't interpreted per se. You can also distribute/run it as bytecode. AFAIK the eLua project is set up this way so you don't need to have the Lua compiler in the flash. The VM which executes the bytecode is enough.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19590
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: High Level code on micro-controllers
« Reply #27 on: March 29, 2018, 02:17:11 pm »
Harel statecharts are a good starting point, a.k.a. UML state diagrams.
Or maybe any kind of diagram, because pictures speak to a different part of your brain.

I was messing around with graphviz once and fed it some grep output from our codebase that found all the events, and who was sending which message to whom. This was an event-driven system that never had a top-down design, it just grew over the years. Graphviz/dot will automatically "optimize" a diagram to minimize arcs that cross each other, and the result put a particular subsystem right in the middle with arrows coming and going like a star burst.

None of us had seen this kind of information before, but we all realized that the module in the middle had been more tricky to work with than other parts of the system. The picture confirmed what some of us felt about the code and provided a great starting point to a conversation about architecture.

All sensible, and very valid.

I'm not a UML/Harel zealot, but it does have the advantages that
  • it is well documented and there are many examples around. That means less writing on our part :)
  • it encourages a sound way of thinking about, structuring, and decomposing systems before rushing into implementation
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14513
  • Country: fr
Re: High Level code on micro-controllers
« Reply #28 on: March 29, 2018, 02:19:07 pm »
Indeed.

Never assume that any specific tools or languages, no matter how hype they look, will make up for a bad architectural design (or even lack thereof).
If anything, they will probably make things worse if there is no solid underlying architecture.

And all you need for this is some brain, really.

 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3148
  • Country: ca
Re: High Level code on micro-controllers
« Reply #29 on: March 29, 2018, 03:07:46 pm »
Don't you rather need a quad core ARM written in Java to blink the LED?  >:D

You're living in the past. The quad core ARM doesn't cut it any more. Today, if you want to blink a LED, you need an FPGA:

http://www.instructables.com/id/CmodA7-7-segment-Stopwatch/
 

Offline C

  • Super Contributor
  • ***
  • Posts: 1346
  • Country: us
Re: High Level code on micro-controllers
« Reply #30 on: March 29, 2018, 03:27:51 pm »
I think a lot of people are looking at the problem the wrong way.

Take the statement about the runtime library.
It is often pushed off to the side as a separate thing to be used by the compiler or language.

Some do not like parts of P-code, while some do.
The same goes for the runtime.

One problem here is that you are thinking of only ONE.
Internal to the compiler or interpreter, it is not hard to have a bunch of switches if you plan ahead.

A good example is the cross compiler. A bad one only does one instruction set. A slightly better one lets you choose among a few instruction sets.
Think of what you really want.

As technix stated, one build targets a more powerful CPU, another a very limited CPU.

Build is a very good word and IDEA.
Stop compiling and start building. A computer using your knowledge can build for many targets. If it builds them all at the same time, it can easily find where the source code breaks for one of the many.
A good part of making something work on many targets is having the hooks to adapt to the many.
Think of what you really want. If the CPU has a multiply instruction, you want to use it when it is better; if not, you need a multiply function. You just need a table that specifies what the target CPU provides. In the process you have added or removed a part of the runtime. What you need to change is all in the black boxes: you specify either CPU instructions or language instructions.

P-code
For a comparison, think of the old Z80. With interpreted byte code, each instruction is one byte. You gain some speed by changing it to a CALL at a cost of 3 bytes.
Then there is the question of how much each opcode is doing. Is it checking for errors or not?
If you are thinking my way here, you again want choice. You want new source code thoroughly checked for errors at the byte-code level, and once it is good, you back off to the CALL-threaded or machine-code versions.
This is not something new. It is much easier for the compiler writer if he can get you to think the only good thing is machine code.
That old Z80 has a good foundation of instructions. It can work with 64-bit or larger math; it just takes time. Leaving that out saves the compiler writer a little time, but think of how much time it costs you. The old P-code that the first Pascals used could be a foundation for larger math.
By not giving the programmer the option of larger math, the compiler writer removed all the complaints about it being slow.
I think the better choice is for you, the programmer, to have a choice without having to jump through the hoops of an added library. With it built in, you get better error checking than with an added library.

So break it down some.
You do not care how large the source is unless you are building on something small.
The only part that matters is the final code size that goes to machine.
If you have studied it, or just think of what you are using to read this, there are ways to swap program code in and out of RAM.

In the 1980s, I was programming 68000s using Pascal. I often turned on the HP system-programming switch for a small area of the program and could load new code into the running program. It was not that hard to do, and only a few of Pascal's error checks needed to change.
Notice I said change. This Pascal already had the capability of treating a procedure or function call as a pointer, with all the error checking in place. It was a minor change to allow dynamic modules.

If you open your eyes, you might find that the full Oberon operating system is very small, yet has a character-based GUI.
One of the things you have to learn as a programmer is how to input your source. Oberon actually has fewer rules than Pascal and does better checking of code.

You need to wrap your mind around the internals of a language some.

In C you have a function. The body of the function is the same as the two possible bodies of an IF statement. The difference is what comes before the body starts and what happens when the body ends.
Your C program is actually a function.
The Pascal that I used back then had many options.
You had the standard three for Pascal of Program, Function & Procedure, and more.
One addition was MODULE.
You could build modules. The SYSTEM module was actually the interface to all built-in types.

Here the compiler writers pulled a trick, or were smart.
The normal Pascal compiler started by compiling the Program function. This compiler started with the System module and then the Program function.
Think of this: the normal code generator that you used actually built all the internals needed for the compiler to function.

Change the System Module from ___ to Z80 and you had a Z80 compiler. Modify the System Module some and you had a special version for a specific Z80 system.

Again, not radical or new. I think you will find that the .NET languages do the same.

So with the right foundation, which is actually simpler, you get choice and more power.

Procedures and functions could work with the many different interfaces that others used. This Pascal could use functions written in C.
When you needed to get to assembly, it could generate the procedure or function interface to assembly.
Think of the compiler writer: this would make quick changes or additions to the compiler much faster & also help the programmer.

So by putting together what has been done properly, the compiler can build or extend itself. It can use new base types while sometimes needing help with the code generation.

So I think the question is a cart-before-the-horse type of question.

You should be asking what editor actually understands the language you are using and can generate code. Not as a step in the editor, but when you exit a block of code.
You write a function, and when you move away from it you shortly get all the error messages and code from a background process.
This editor should let you have many information screens. When working with a module or class, what is visible to other modules or classes is different from what is internal.
I say screens because all the pop-up windows are not good for fast programming.

When you get to this level then it is you, compiler writer, language creator all working together. You now have a choice.

You also need to keep in mind that the programming syntax is not the only syntax you have to work with.

When you add a debug statement trying to display a variable, the build system should know to use the binary_to_Human function, and if this does not exist, ask for it.
If you are working with XML, you need binary to/from XML functions.

Think of the parts the controller is interfacing with. You should be able to quickly add that table of bit positions you find in the data sheet. There is less chance of error if you are just matching tables.
One bit could have many names: the data-sheet name and your program's name, for example.
A chip becomes one of the many views that you have to work from/with.

So with a good foundation you can build up and build down.
At any point you could right-click on a variable and change or add to some of its information. A name change would only affect this one variable and not the many that have the same name. Remember that when you have many classes, modules and others, the same short name could match many places.

It looks like the current trend is hiding more and more things. Hiding is good at preventing errors, but more choice gives you power. Think of all the different ways to point to something.
Having a choice can be as simple as entering your choice.
The build system can be smart.
By using an exposed NEW to create storage for a variable, you gain easy control of where it is located. Here you are just changing a value in the AST for the code generator to use. It's that simple at the low level.

Think of the steps:
  • Source code to AST, and you get source-code format errors.
  • Error checking of the AST, and you get language errors.
  • Code generation from the AST, and you get "not possible in code" errors.

Not having a good AST actually increases code size and errors.
Having access to the AST from source code lets your source get fancier and safer. A size-of-function query is just one example.

Sorry for the book, but you need to keep in mind that each little broken thing is the base of something huge later.

 

