Author Topic: Pico vs 8bit Compiler  (Read 4034 times)


Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Pico vs 8bit Compiler
« Reply #25 on: October 01, 2021, 06:33:33 pm »
MicroPython supports threads, which I suppose implies dynamic memory allocation, and since Python itself relies heavily on garbage collection I would imagine MicroPython does too.  Very complicated.  Then again, C++ brings a lot of complication to memory management.  C certainly has malloc(), but it's possible to avoid the heap entirely by not using the library string functions.  If sbrk() is never referenced, the heap is never used.  It is often the programmer's job to implement sbrk() in syscalls.c, along with a lot of other system stubs.
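As a rough sketch, for a newlib-style syscalls.c the stub often looks something like the following -- note the symbol names (end, __StackLimit) are just assumptions here; use whatever your particular linker script actually defines:

Code:
#include <errno.h>
#include <stddef.h>

/* Symbols assumed to come from the linker script:
   `end` is the first free byte after .bss,
   `__StackLimit` is a (hypothetical) lower bound of the stack region. */
extern char end;
extern char __StackLimit;

static char *heap_ptr = &end;   /* current top of the heap */

void *_sbrk(ptrdiff_t incr)
{
    char *prev = heap_ptr;

    /* Refuse to grow the heap into the stack region. */
    if (heap_ptr + incr > &__StackLimit) {
        errno = ENOMEM;
        return (void *)-1;
    }
    heap_ptr += incr;
    return prev;
}

If nothing in the program ends up calling malloc(), that stub never gets linked at all and the heap costs you nothing; a glance at the map file will confirm whether _sbrk made it into the image.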

It's complicated:

https://www.embecosm.com/appnotes/ean9/html/ch05s03s15.html

 

Offline blacksheeplogic

  • Frequent Contributor
  • **
  • Posts: 532
  • Country: nz
Re: Pico vs 8bit Compiler
« Reply #26 on: October 01, 2021, 10:41:12 pm »
Interpreted languages may not be as fast as compiled languages but they still have a place.  I don't write much production code.  Mostly, I write a program to get an answer to something, run it a couple of times and it's scrap.  I may spend hours writing and debugging the code and, here, Python might be a little faster than compiling C++ for every iteration.  In any event, repetitive execution is more dependent on my thinking and typing speed than on the language.

If there were a consensus on the Internet, it would be that Python is on the order of 30 times slower than C++.  In many cases this simply doesn't matter, because it doesn't account for the edit->compile->execute loop.  Editing consumes the vast majority of the time.

When I was a trainee programmer, system time for compiling was extremely limited. Early on, code had to be written out and sent to data entry. A bug fix could involve a two-hour drive each way to a system where we could use the compiler, or waiting until the business closed and time became available on the client's systems to compile; in some cases that also meant shutting down the runtime environment.

I see a lot of programmers changing a few lines of code and then compiling out the syntax errors, or going through multiple iterations debugging the logic. The compiler seems to have replaced design and thought-out problem solving; instead it's a hash of code and megabytes of executable dragging in libraries to do the simplest of tasks.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
Re: Pico vs 8bit Compiler
« Reply #27 on: October 01, 2021, 11:46:37 pm »
Interpreted languages may not be as fast as compiled languages but they still have a place.  I don't write much production code.  Mostly, I write a program to get an answer to something, run it a couple of times and it's scrap.  I may spend hours writing and debugging the code and, here, Python might be a little faster than compiling C++ for every iteration.  In any event, repetitive execution is more dependent on my thinking and typing speed than on the language.

If there were a consensus on the Internet, it would be that Python is on the order of 30 times slower than C++.  In many cases this simply doesn't matter, because it doesn't account for the edit->compile->execute loop.  Editing consumes the vast majority of the time.

Any C++ program you can write in a couple of hours will compile in 0.2 seconds on a modern PC/Mac, so that's not really a factor. Or 2 seconds on a PC from 2000, a Raspberry Pi, or a HiFive Unmatched -- still not a big deal compared to thinking and editing time.

Code:
Mac-mini:programs bruce$ cat fib.c
#include <stdio.h>

int fib(int n) {
  return n<2 ? n : fib(n-1) + fib(n-2);
}

int main() {
  printf("fib(40) = %d\n", fib(40));
  return 0;
}
Mac-mini:programs bruce$ cat fib.py
def fib(n):
    return n if n<2 else fib(n-1) + fib(n-2)

print "fib(40) =", fib(40)
Mac-mini:programs bruce$ gcc -O fib.c -o fib && time ./fib
fib(40) = 102334155

real 0m0.792s
user 0m0.430s
sys 0m0.005s
Mac-mini:programs bruce$ time python fib.py
fib(40) = 102334155

real 0m18.957s
user 0m18.752s
sys 0m0.147s
Mac-mini:programs bruce$

So, at least on that test, there's a factor of about 44 in speed (comparing user times: 18.752 s vs 0.430 s).

Programs that use a lot of built-in functions on more complex data (strings, hash tables) will show a smaller speed difference, because those built-in functions are written in C :-)  For something like reading in a 1 MB text file, replacing all FOO with BAR, and writing it out again, there may be no speed difference at all.
 

