Author Topic: Fast unsigned integer multiply by x100 on 8bit AVR?  (Read 11040 times)


Offline Leiothrix

  • Regular Contributor
  • *
  • Posts: 104
  • Country: au
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #50 on: January 16, 2019, 10:23:40 pm »
The longest time you can cover from a 16 bit free running timer incrementing a 1ms interval is 32.7 seconds.

No, 65.5 seconds.  You'd use an unsigned int to hold the counter, not signed. 
 
The following users thanked this post: Kilrah

Offline beduinoTopic starter

  • Regular Contributor
  • *
  • Posts: 137
  • Country: 00
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #51 on: January 18, 2019, 11:01:16 pm »
For short time differences below 65 ms, with interrupts disabled but the hardware timer TCNT0 incrementing every 1 µs, this code
Code: [Select]
inline uint16_t avr_time_us_fast_inline(uint16_t *time_us_prev, uint8_t *t0_prev) {
    uint8_t t0 = TCNT0;

    // static uint8_t t0_prev = 0;
    // static uint16_t time_us_prev = 0;

    if (t0 < *t0_prev) {            // timer wrapped (CTC at 99): account for 100 us
        *time_us_prev += 100;
    }

    uint16_t time_us = *time_us_prev + t0;

    *time_us_prev = time_us;
    *t0_prev = t0;

    return time_us;
}

for fast timing retrieval looks very good when optimized by avr-gcc, since there are no memory accesses even though the function takes pointer arguments  8)
Below is an example where this function was used. In the generated assembler listing, the only memory accesses (LDS/STS) are to the volatile variables - there are none for the pointer arguments, since the optimized code keeps them entirely in registers.
Code: [Select]
// forever init
volatile uint8_t time_fast_is = 0;
volatile uint8_t time_fast_is_copy = 1;
volatile uint8_t time_fast_is_copy2 = 2;

uint8_t t0_prev = 0;
uint16_t time_us_prev = 0;

uint16_t time_us_fast_prev = avr_time_us_fast_inline(&time_us_prev, &t0_prev);

// forever
for (;;) {
    uint16_t time_us_fast = avr_time_us_fast_inline(&time_us_prev, &t0_prev);

    uint16_t dtime_us_fast = time_us_fast - time_us_fast_prev;

    if (dtime_us_fast > 0) {
        time_fast_is = 1;
    } else {
        time_fast_is = 0;
    }

    time_us_fast_prev = time_us_fast;

    time_fast_is_copy = time_fast_is;
    time_fast_is_copy2 = time_fast_is_copy;
} // forever


Assembler listing of the section above, with the inlined avr_time_us_fast_inline function optimized:
Code: [Select]
214 0116 1B82      std Y+3,__zero_reg__
 215 0118 81E0      ldi r24,lo8(1)
 216 011a 8A83      std Y+2,r24
 217 011c 82E0      ldi r24,lo8(2)
 218 011e 8983      std Y+1,r24
 219 0120 42B7      in r20,0x32
 220 0122 242F      mov r18,r20
 221 0124 30E0      ldi r19,0
 222 0126 61E0      ldi r22,lo8(1)
 223                .L14:
 224 0128 52B7      in r21,0x32
 225 012a C901      movw r24,r18
 226 012c 5417      cp r21,r20
 227 012e 00F4      brsh .L11
 228 0130 8C59      subi r24,-100
 229 0132 9F4F      sbci r25,-1
 230                .L11:
 231 0134 850F      add r24,r21
 232 0136 911D      adc r25,__zero_reg__
 233 0138 2817      cp r18,r24
 234 013a 3907      cpc r19,r25
 235 013c 01F0      breq .L12
 236 013e 6B83      std Y+3,r22
 237                .L13:
 238 0140 2B81      ldd r18,Y+3
 239 0142 2A83      std Y+2,r18
 240 0144 2A81      ldd r18,Y+2
 241 0146 2983      std Y+1,r18
 242 0148 452F      mov r20,r21
 243 014a 9C01      movw r18,r24
 244 014c 00C0      rjmp .L14

I haven't had time to test this on a real MCU, but the assembler code looks very fast. With interrupts disabled I'm not sure we can resolve 1 µs in the "brute force" forever test loop, but it should be very fast - for example for measuring the time difference to a pin change ...

Of course the uint16_t time clock used here will overflow after 65 ms, but that can be enough time for e.g. reading a packet, etc...
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3138
  • Country: ca
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #52 on: January 18, 2019, 11:42:48 pm »
For short time differences below 65 ms, with interrupts disabled but the hardware timer TCNT0 incrementing every 1 µs, this code ...

appears to be too complex. Say, with timer overflowing at 256 you can do:

Code: [Select]
typedef union {
  struct {
    uint8_t time_low;
    uint8_t time_high;
  };
  uint16_t time;
} time_t;

uint16_t get_time() {
  static time_t cur_time;
  uint8_t t;
 
  if (cur_time.time_low > (t = TCNT0)) {
    cur_time.time_high ++;
  }
  cur_time.time_low = t;
 
  return cur_time.time;
}

which does exactly the same as yours, although I don't think I would do this in real life. The necessity of calling it very often is too restrictive.

 

Offline beduinoTopic starter

  • Regular Contributor
  • *
  • Posts: 137
  • Country: 00
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #53 on: January 20, 2019, 06:23:24 pm »
Say, with timer overflowing at 256 you can do:
...
which does exactly the same as yours, although I don't think I would do this in real life. The necessity of calling it very often is too restrictive.
My timer in CTC mode starts from 0 after reaching 99, so it is not the same. However, when you look at the assembler, incrementing by one (1) looks similar to adding 100 instead. I also do not have to call this function too often: when I know that e.g. the pin-change interrupts while decoding some input bit stream arrive less than 100 µs apart, it is sufficient to read the time in the pin-change interrupt and store it to calculate differences, so it can be useful sometimes.

It is always worth looking at the generated assembler to make sure we are not losing too much time. These experiments showed that playing with different C static/inline hints sometimes leads to interesting low-level code being generated. E.g., in your case this variable
Code: [Select]
  static time_t cur_time;
will not be optimized into registers - you will probably get LDS/STS instructions to read/store it in the generated code - and it is also difficult to reinitialize inside the "get_time" function. That is why I pass those variables as pointers, which in the "inline" version are, it seems, optimized into registers without LDS/STS instructions ;)
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3138
  • Country: ca
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #54 on: January 20, 2019, 08:47:22 pm »
My timer in CTC mode start from 0 after reaching 99

Make it roll after 255. Are you trying to make your life more difficult on purpose?

in your case for this variable
Code: [Select]
  static time_t cur_time;
will not be optimized by using only registers

Of course not. It is a long-term variable holding the time.

There's no reason to speculate about the assembler. Just compile and post.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #55 on: January 20, 2019, 10:35:05 pm »
I've not followed or read this thread, but skimming over it I thought this might prove useful.


From "Hacker's Delight" by Henry S. Warren, Jr.

Edit:

100x = (32x - 8x + x) * 4

3 shifts and 2 adds
« Last Edit: January 21, 2019, 03:38:22 am by rhb »
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #56 on: January 21, 2019, 06:07:14 am »
Quote
100x = (32x - 8x + x) * 4
3 shifts and 2 adds
The original post had 3 shifts and 4 adds - theoretically only slightly worse.
The problem is that a 32-bit shift is not particularly "inexpensive" on an AVR (shifting a 32-bit value by one position takes at least four instructions), and the original effort didn't factor the shifts to notice that 32x = 4*8x, or equivalent.

Since then, much of the discussion has been about how to re-think the overall program so that you never need to multiply by 100 in the first place.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3481
  • Country: us
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #57 on: January 21, 2019, 11:35:37 am »
All I'd meant to do was post the section from "Hacker's Delight", which is full of obscure tricks.

Unfortunately I read enough while scanning it  that the edit popped into my head after I got in bed.  I knew it was not going  to leave me alone unless I added the edit.
 

Online splin

  • Frequent Contributor
  • **
  • Posts: 999
  • Country: gb
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #58 on: January 22, 2019, 06:11:18 am »

Anyway, I've decided to... do not use do-while loop, but instead added correction for time - something like back in time  :popcorn:



Quote
but probably useless since ISR clear this flag during its execution
You cannot make use of that flag if using the isr.

But, I've used this flag as you can see in assembler code above  >:D

Warning: Do not try this code at home - it is not tested yet, but "Patent pending :D:o

I wouldn't bother with that patent as it doesn't work  :--

It seems that what you are trying to do is:

Quote
When added at the begining of avr_time_us_get() wait for OCF0A cleared in TIFR by ISR during its execution, since OCF0A is set when TCNT0 is reset to 0 in CTC

Ok, so what you need is to wait (for up to 100 µs) until you detect that OCF0A has changed from set to cleared, at which point you can be sure that the ISR has just executed and you can safely read avr_time_counter knowing that the ISR won't run while you're doing so. But as you have already surmised, OCF0A will only be active for a very short time (unless you have interrupts disabled - which I don't believe you do) - specifically from the moment the timer rolls over until the ISR starts.

The exact timings might be published in some application note but it doesn't matter - your non-interrupt code probably won't ever see it, and then only if it happens to read the TIFR register within that very short time window. Depending on how the MPU is designed, there is a possibility that your code could *never* see the OCF0A flag set, because it is only set, within each 1 µs processor clock cycle, at a point *after* the 'in Rx, 0x38' instruction actually reads the flag; at the end of that instruction the ISR will execute, resetting the flag.

If you really want to wait until just after the ISR has executed then have a do while() loop waiting for TCNT0 to change. But I'm pretty sure that isn't what you want, as it would waste far too much time - an average of 50 µs for each call to avr_time_counter(). In your code you check the OCF0A flag and if it's clear you go on to read the 4 bytes of avr_time_counter - but the ISR can occur at any point after your check of OCF0A, including part way through reading avr_time_counter.

cv007 had the solution - re-read avr_time_counter if TCNT0 has rolled over (but it doesn't need to be in a loop unless you have an ISR that can take 100us or more).
 

Offline Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3341
  • Country: nl
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #59 on: January 26, 2019, 10:07:07 am »
If you need some performance, then use a microcontroller that has MUL instructions, or at least a barrel shifter.
If you don't need the performance then why bother?

It all looks like some silly academic exercise to me.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #60 on: January 26, 2019, 01:28:04 pm »
If you need some performance, then use a microcontroller that has MUL instructions, or at least a barrel shifter.
If you don't need the performance then why bother?
For the same reason we don't live on the Savannah anymore, clubbing animals on the head, and making single-use tools by knapping flint.

"Just throw money at it!" is not the best option, it is just the easiest one, and one that any monkey with fistfuls of cash and no real skills can do.

In cases like this, where the microcontroller has the necessary performance, but the designer is having difficulty utilizing that, this "academic exercise" has two purposes: One is to save money and resources by using the cheaper hardware, the second is to become a better designer/developer for personal and business reasons by learning how to utilize the microcontroller to its full potential.  To me, that makes perfect business sense, and worth the bother.
 
The following users thanked this post: Siwastaja

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14431
  • Country: fr
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #61 on: January 27, 2019, 08:03:06 pm »
Knowing how to use fixed-point or fully integer calculations instead of resorting to floating-point is not just useful for performance reasons on low-end parts.
It's an essential skill every time you need to control the precision of calculations at every stage, something that is much harder (sometimes impossible) to guarantee with floating-point.
It's also an essential skill to actually understand how to properly use floating-point!
Oh, and it's also an essential skill if you do digital design. It's so useful that's it's very far from being an academic exercise only.

Lastly, only if you have that skill can you actually judge whether/or when it's appropriate to use it or not.

I know they say ignorance is bliss, but it's certainly not an engineer's best friend.
 

Offline Doctorandus_P

  • Super Contributor
  • ***
  • Posts: 3341
  • Country: nl
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #62 on: January 28, 2019, 05:05:06 pm »
It seems that you are also simply assuming that the undisclosed "8bit AVR" that OP is using is cheaper than some other 32bit ARM (or other) processor.

There is hardly any correlation between processing speed and price in small microcontrollers nowadays.

I haven't even seen any evidence that price is a concern in this thread.
These "academic exercises" can be (are) useful to jog the brain and improve programming skills. That is exactly what academic exercises are for.

But I see my fault now.
I should not have used the word "silly", that was a fart of the moment and I apologise for that.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #63 on: January 28, 2019, 08:19:07 pm »
There is hardly any correlation between processing speed and price in small microcontrollers nowadays.
True, but I'd still claim it is a good idea to make sure you can utilize each microcontrollers' subsystems to their fullest extent.

I *am* assuming ATtiny85 here. They happen to be quite cheap, but interestingly powerful microcontrollers. I personally like the original DigiSpark approach, where the less-than-square-inch PCB itself acts as a full-sized USB connector.  There are only a couple of I/O pins, but it opens up a large number of options.  I do normally use ATmega32u4 for USB 1.1 stuff, and various ARM Cortex microcontrollers (esp. Cortex-M4F) for anything that needs any computational oomph.

If it isn't the exact one OP is using, it seems to be darned close; close enough for the discussion to make sense.



In this particular case, we can throw away the stated question (fast integer multiply by 100), and look at the underlying problem OP is working on.

The microcontroller has an 8-bit timer/counter, that is used for a regular timer tick. (There are two other timers, one of them 16-bit if I recall correctly offhand, but in many cases you want to use them for something more important.)

The idea OP has, is to use the actual timer (at or around instruction clock frequency) value as a timer, with the overflow counter just updating the extra bits.  The problem is that to use a decimal rate (some power of ten cycles per second), the 8-bit timer must wrap around at 100 or 200.  Not all use cases need the exact timer, and for best bang for buck and largest number of options, one would prefer to have both a fine (down to timer/counter step) and coarse (just number of overflows) timer values, especially if coarse timer is cheaper to read.

This is not a complicated problem, and definitely does not warrant using a 16- or 32-bit microcontroller.

My suggestion above is the absolute overengineered one, which shows how to do it with zero issues even when the reader is interrupted by the timer overflow itself.  It involves using generation counters, which are the most basic form of spinlocks (although mine is for a single writer and multiple readers only); it is easily formally proven to take at most two iterations if the timer overflows occur at intervals greater than a few dozen cycles, and it allows "atomic" snapshots of both the timer counter and the overflow counter as a single unsigned integer value.  Essentially, you can make it a cycle counter on ATtiny85 if you want, if you write the ISR in assembly.  The fine counter is incremented by the counter limit, so that the full cycle counter value is just the sum of the timer/counter and the overflow count; the coarse counter value is incremented by one.  You can implement either a minimum-average-cycle version (by only incrementing the least significant bytes when they do not overflow), or a fixed-duration version (whose latency effects are trivial to note and measure at run time, and, being absolutely regular, easy to take into account).

If this was a single one-off product being implemented for a paying customer, I'd agree with Doctorandus_P: then, switching to a more powerful microcontroller gives you much more leeway, and simply Makes Sense.

However, I don't think people like to discuss any single one-off products they're working on here.  So, I am working on the assumption that this is a prototype, or for-learning experimentation.  For that, just discovering the generation counters and using a single ISR to provide multiple clocks running at different rates, makes this thread worthwhile.

Not seeing the value in doing this, and instead recommending using a more powerful microcontroller, is alarming to me.
(This also means my argument may look too aggressive.  Just remember I am arguing against your argument, and not you as a person.)



I also do not think "academic exercise" is anything anyone should use in any derisive manner, ever.

Our technology is advancing at a tremendous pace, and software engineering and programming languages are not keeping up.  This leads to a prevalent belief that we already know everything there is to know about software engineering, and rather than waste time with "academic exercises", the proper cost-effective solution is to throw more hardware at it.

This is simply not true.  I have seen this personally in the HPC world.  Simply put, aside from support of new, much more performant hardware, no real advances have been made in the HPC software engineering side.  Almost all simulations still use process distribution, distribute data only when not doing computation, and avoid threading models, simply because they're too hard -- or "not cost-effective to teach the developers to do", as I've been told.  Data mining, expert systems, and "AI" are nothing new; even self-organizing maps were pretty well known by 1980s.  It's just that now we have the hardware to collect and process the vast amounts of data at timescales that allow even relatively crappy implementations (compared to biological ones!) produce "miraculous" results.

The exact same happened in the automotive world in the United States in the last fifty years or so, when the fuel consumption of a typical car grew, not shrunk, because it was not thought of as important.  It is even funnier to think that the typical "grocery bag" car in the early 1900s in New York was an electric car.  If you've ever read Donald Duck comics, the car Grandma Duck uses is a Detroit Electric from 1916.  Yet, somehow, electric cars are somehow thought of as a new innovation.  (The battery technologies and some of the materials tech is, but not the electric car concept, not in the least.)  We would have had cheap home 3D printers in the late 1980s, early 1990s at the latest, if it were not for certain patents that were mainly used to protect existing plastics manufacturing methods from competition.

I cannot stress enough how important it is to not let engineers and designers to rely on the hardware improvements to keep their work relevant, and stop learning.  It just isn't good for anyone in the long term.  It is a seductive option, because it makes your work easier in the short term;  but in the long term, the side effects make it a poor choice.

To everyone with degrees, I'd remind them that their studies were to prepare them for the real work, the real learning.  The only way you can belittle "academic exercises" is that if you do your work right, develop your skills to the fullest, you'll do much harder work and learn more every day than you ever did in academia.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19450
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #64 on: January 28, 2019, 10:13:31 pm »
Those are very sensible points about "academic exercises", and HPC, and the necessity of academic studies.

The point about academic exercises is that they should enable the key point to be considered, without it getting lost in a morass of boring irrelevant stuff. The lessons learned should then be applicable to far more than merely a single example problem.

Much tech knowledge has a half-life of a few years, e.g. which button to press to get the frobnitz to kazump when the moon is in the third quarter. The understanding gained from academic exercises lasts a lifetime.

I recently returned to embedded software and electronics after a couple of decades doing other things. I was both delighted and horrified at how little had changed since the 1980s - the experience I gained 30 years ago was still directly relevant, so I slotted back in within a couple of weeks!

The major changes are the speed/resolution of ADCs and DACs, nanopower electronics, the ease of making PCBs, and that things are smaller, faster and cheaper. But all the fundamentals and pinch points were horrifyingly unchanged.

(Well, there are a few glimmers of hope, e.g. the capabilities of XMOS xCORE processors with xC, but even they are familiar from the 80s and 70s, respectively)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #65 on: January 29, 2019, 07:48:34 am »
Quote
Simply put, aside from support of new, much more performant hardware, no real advances have been made in the HPC software engineering side.
I'm not quite sure what "HPC" is supposed to mean in this context.  But I think it's a major mistake to omit
"massive performance increases" from the "significant advances" column.

Quote
We would have had cheap home 3D printers in the late 1980s, early 1990s at the latest
An interesting example.  I wonder at your definition of "cheap", given that "low cost" dot matrix printers from that era were $500 to $1000, and the Mac IIci I bought in that timeframe was about $7k (~1MB RAM, 100MB disk, built-in 640*480 graphics, color monitor).   While the technology of the day would have supported the "several stepper motors and a heater" sort of 3D printer, I think I'll claim that the CAD software needed to effectively drive such a printer would have been essentially impossible in that timeframe on any "reasonable" (but not "low cost"!) home computer.  I mean: no "windows" yet; VGA graphics was "new"; a typical PC had a 16 to 20MHz CPU with 1 to 4MB of RAM.
(Although I did find an ad in a 1988 Byte Magazine for "DesignCad 3D" that claimed to work "even on EGA graphics": https://www.americanradiohistory.com/Archive-Byte/80s/Byte-1988-04.pdf)

 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #66 on: January 29, 2019, 08:09:56 am »
Heh.  A whole article on 3D CAD from a 1988 PC Magazine!  "If you're going to be spending $3k for software, you should certainly have at least 640k of RAM, and you might want to spring the extra $3k for one of those new 1024*768 color monitors!"
https://books.google.com/books?id=ObYblXvjuhUC&lpg=PA121&ots=atgG2szPEd&dq=designcad%203d%201988&pg=PA115#v=onepage&q&f=false
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3138
  • Country: ca
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #67 on: January 29, 2019, 02:08:16 pm »
Heh.  A whole article on 3D CAD from a 1988 PC Magazine!  "If you're going to be spending $3k for software, you should certainly have at least 640k of RAM, and you might want to spring the extra $3k for one of those new 1024*768 color monitors!"
https://books.google.com/books?id=ObYblXvjuhUC&lpg=PA121&ots=atgG2szPEd&dq=designcad%203d%201988&pg=PA115#v=onepage&q&f=false

Interesting. The resolution of a 1080p monitor (which I'm looking at right now) is not that different - only 2.6 times more - but the CPU frequency is 500 times higher, and the memory is 25000 times bigger. If they could make 3D CAD back then, it should simply fly now.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21651
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #68 on: January 29, 2019, 03:57:09 pm »
And just to bring things a little back to topic here -- an AVR isn't much different in terms of raw computing power, versus a PC-compatible of the day (if certainly not one of the better workstations that you'd want to be running CAD on!).

The main difference is, programming it that way is a pain, and you have to add a ton of peripherals to support that kind of functionality.

Namely: with little SRAM, you need to treat it as a cache against external SRAM and Flash.  Probably the same goes for program memory as well.  Flash can be rewritten live, but it is a wear item, so that wouldn't be such a great idea; more likely, you'd implement a rich operating system, and run programs from external memory as an interpreted virtual machine.

And now that I've speculated about a thoroughly unpleasant system to develop for and use, let's just grab an STM32F4, stick an LCD on it, USB hub, external DRAM and Flash, and run Linux instead. ;-DD

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #69 on: January 29, 2019, 04:52:20 pm »
Quote
Simply put, aside from support of new, much more performant hardware, no real advances have been made in the HPC software engineering side.
I'm not quite sure what "HPC" is supposed to mean in this context.  But I think it's a major mistake to omit
"massive performance increases" from the "significant advances" column.
The massive performance increases stem from new hardware: the software side has not changed.  The software engineering side utilizing that hardware has not kept up; we are simply relying on very old (>25 year old) techniques with very little change since then, except for new hardware. There is basically nothing new in coding/software engineering; everyone is just coasting on the hardware.

Based on my experience in MD simulations, the software side has really stagnated.  The hardware is not utilized to its full potential; the software folks have not kept up.  Even multithreading is still avoided, using multiprocessing instead.  If you look at GPGPU computation, it treats the GPUs as isolated compute units, very much like separate computers in a cluster environment.  Nothing new, and definitely not using the hardware to its full potential.

I wonder at your definition of "cheap"
I was obviously looking only at the hardware cost. Something as simple as a 6502 is definitely beefy enough to run HPGL or basic G-code.

given that "low cost" dot matrix printers from that era were $500 to $1000
We got a Star LC-10 in 1988, I think, but I do believe it was much cheaper than that.  In 1988, it cost under £200 in the UK (according to adverts).

I'll claim that the CAD software needed to effectively drive such a printer would have been essentially impossible in that timeframe (on any "reasonable" (but not "low cost"!) home computer.
GUI CAD? Absolutely agreed.

But direct path generation via a simple programming language, something between HPGL, Turtle Graphics, and Gcode? I claim that possible.  Didn't you ever write PostScript by hand to run on the printer itself?  (I definitely did, for the first HP LaserJet I got access to, in the early nineties.)

PostScript is a simple but hugely powerful language. Because of rasterization, it did need surprisingly large amounts of memory (as in often more than on the associated computer, in the early times).  The fact that we use proper CAD and slicing for 3D printing now does not mean it cannot be done much, much more simply.

Anyway, you have good enough points for me to want to amend my claim, to something like (without the patents,) "we might have had", with the point being that the stopping factor was not so much lack of existing technology, or the high cost of most of that technology (GUI CAD design notwithstanding, definitely), but patents obtained for anticompetitive purposes: for plastics manufacturers to use them to reduce competition in their field, reducing the need for further product development.

(While there has been a lot of research and development on the plastic materials themselves, even PLA is a hundred years old invention.  I would not be too surprised to find out that Lego's product development efforts have been a very big driving factor in the precision plastics industry.  Those little toys are surprisingly high-precision bulk-manufactured things.  The tolerances are, and were already in the eighties, absolutely ridiculous for the blocks to attach and detach hundreds of times with very consistent friction fit.  That should tell a lot about what kind of engineering/development actually pushes the world forwards.)



To clarify, my point was to show that in software engineering, for decades we have not done what is possible, only what is easy or makes short-term business sense.  The development in hardware has masked the software stagnation, but the stagnation is nevertheless obvious in my opinion.  Using other engineering areas like electric cars in the automotive world for comparison, this stagnation seems very costly, although calculating its exact price is very difficult: it is hard to say how much you lose by only using a fraction of the available tools.

While I do complain about the difficulty in getting funding for overcoming that stagnation by example, I understand the reluctance.  I do not agree, but I understand.  Funds are limited, and the risk/benefit ratio hard to estimate.  Hardware is easy and safe.

What I fail to understand is the unfounded assertion that there is no need to overcome that; that it is somehow unprofessional or wasteful for an engineer to try to do that; that the core of what an engineer or scientist does is something other than learning and sharing that knowledge, even in product form; that the proper engineering approach is to throw more hardware at it and keep going like we always have on the software side.  I see no evidence supporting that approach. It makes no sense in the medium to long term; it only makes sense in the short term, for one-off commercial products and services.

I suspect that many have accepted that approach axiomatically, without examining it, because it feels good to think that what you know and can do now will tide you over for the rest of your life.  That makes it emotionally very attractive as an axiomatic approach to your profession: it says you are complete now, with no need to struggle to keep up anymore.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6227
  • Country: fi
    • My home page and email address
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #70 on: January 29, 2019, 05:01:39 pm »
If they could make 3D CAD back then, it should simply fly now.
If we look at the differences in approach, our current tools lean heavily towards a what-you-see-is-what-you-get visual representation.  That was not always the case, not even for word processors.

This is why I think the CAD software used would have been different; more abstract.

Mathematical solid geometry modelling like OpenSCAD might have been possible.  I did not suggest it above, because I think the amount of data generated would have been costly to store (and I'm too lazy to work out whether cassette tape drive data rates would suffice, and whether the entire tape approach would work); but slicing the models would definitely have been too slow to do in real time.  Dedicated helper processors, maybe?  A Simon's Basic-equivalent cartridge on the C64, but for 3D printing?  Not likely, but I don't think it impossible, either; it hits me straight in the Uncanny Valley whenever I think about it.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3138
  • Country: ca
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #71 on: January 29, 2019, 05:29:36 pm »
The development in hardware has masked the software stagnation, but the stagnation is nevertheless obvious in my opinion.

The rapid growth in hardware was the cause of the "software stagnation". Hardware growth has now slowed down, but another, much worse factor is starting to influence the software industry: lots of software has gone free and open source. There's no money in it, hence no progress. I expect it will only get worse with time.

 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #72 on: January 29, 2019, 06:08:34 pm »
The development in hardware has masked the software stagnation, but the stagnation is nevertheless obvious in my opinion.

The rapid growth in hardware was the cause of the "software stagnation". Hardware growth has now slowed down, but another, much worse factor is starting to influence the software industry: lots of software has gone free and open source. There's no money in it, hence no progress. I expect it will only get worse with time.

That's complete nonsense, both on the "no money" and the "open source" parts.

If there was no money in it, then why do software guys command such high salaries, and why are they in such high demand? One would think that nobody would want to do such work, and that companies would be going out of business or pivoting away from software left and right. I kinda don't see it - just look at any job website or ask any recruiter.

And regarding open source - open source certainly didn't cause any quality "stagnation"; more like the opposite, because more people can (and do) participate, and any crap code tends to be quickly pointed out and fixed, at least in the popular, actually used projects. It also pushes vendors of competing commercial products to fix their messes, or their clients will jump ship - which they didn't have to do before.

Look at projects like LLVM which actually enabled building a ton of tooling for programming languages that simply wasn't feasible before because the barrier of entry in terms of complexity was so high. Or Linux. Or FreeBSD (Apple owes the BSD folks quite a bit there). Or OpenCascade. Or GCC ...

Also, I don't see companies like Autodesk or even Microsoft fearing going out of business any time soon, despite there being open source alternatives to their products.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3138
  • Country: ca
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #73 on: January 29, 2019, 07:38:13 pm »
That's complete nonsense, both on the "no money" and the "open source" parts.

If there was no money in it, then why do software guys command such high salaries, and why are they in such high demand? One would think that nobody would want to do such work, and that companies would be going out of business or pivoting away from software left and right. I kinda don't see it - just look at any job website or ask any recruiter.

And regarding open source - open source certainly didn't cause any quality "stagnation"; more like the opposite, because more people can (and do) participate, and any crap code tends to be quickly pointed out and fixed, at least in the popular, actually used projects. It also pushes vendors of competing commercial products to fix their messes, or their clients will jump ship - which they didn't have to do before.

Look at projects like LLVM which actually enabled building a ton of tooling for programming languages that simply wasn't feasible before because the barrier of entry in terms of complexity was so high. Or Linux. Or FreeBSD (Apple owes the BSD folks quite a bit there). Or OpenCascade. Or GCC ...

Also, I don't see companies like Autodesk or even Microsoft fearing going out of business any time soon, despite there being open source alternatives to their products.

The process is just starting, and you're speaking as if it were already complete.

LLVM is a huge ecosystem and huge effort, and people use it, but did it really make any difference in software development? Is today's LLVMed software any less buggy or less bloated than the software before LLVM? I don't think so.

Linux is developing sideways. It is certainly getting better in some places, but it hasn't won a lot of new users in the past 10 years. This certainly helps Microsoft, but mostly Microsoft twists the arms of computer manufacturers to pre-install Windows on every computer. That probably cannot last forever. However, open source Android has already pushed Microsoft out of the mobile space.

There are places where the market reach of free software is huge. GCC, for example. Microsoft no longer sells their VC++ compiler; it's forced to be free. Is VC++ any worse than GCC? I don't think so.

Or FreeRTOS. 10 years ago there were lots of vendors, such as uOS. FreeRTOS pushed them all out - not because FreeRTOS is any better, but because it's free.

So, there's no doubt that, little by little, free software will take over everywhere; just give it enough time - 20-30 years, I'd guess.

 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: Fast unsigned integer multiply by x100 on 8bit AVR?
« Reply #74 on: January 29, 2019, 09:59:46 pm »
The process is just starting, and you're speaking as if it were already complete.

That process has been "starting" for the thirty-something years that free/open source software has existed. That's an eternity in IT.

LLVM is a huge ecosystem and huge effort, and people use it, but did it really make any difference in software development? Is today's LLVMed software any less buggy or less bloated than the software before LLVM? I don't think so.

Compared to what?

If I compare LLVM (or even GCC) to any of the proprietary (and expensive) compilers I had to deal with in the past, jeeze, give me LLVM any day! Most of that proprietary stuff was utter crap compared to LLVM or GCC. In fact, the recommendation to install/compile GCC and the GNU tools was usually the first thing anyone who had to deal with a commercial Unix saw, because the vendor-supplied compilers were buggy and supported only obsolete C/C++ versions.

Linux is developing sideways. It is certainly getting better in some places, but it hasn't won a lot of new users in the past 10 years. This certainly helps Microsoft, but mostly Microsoft twists the arms of computer manufacturers to pre-install Windows on every computer. That probably cannot last forever. However, open source Android has already pushed Microsoft out of the mobile space.

And what do you think Android is based on? Linux, surprise. Desktop Linux is irrelevant, but pretty much everything mobile runs either iOS or a Linux kernel today - and iOS seems to be doing quite well there. Microsoft had a stake in mobile, but they have only themselves to blame because of their clueless and ham-fisted OEM and developer support. The cost of the system had little to do with it. The same goes for Nokia's S60 Symbian - it didn't disappear because Linux or iOS were free (the latter certainly isn't), but because S60 was hopelessly outdated, and when the first iPhone appeared it was literally like comparing a bullet train with a steam engine ...

There are places where the market reach of free software is huge. GCC, for example. Microsoft no longer sells their VC++ compiler; it's forced to be free. Is VC++ any worse than GCC? I don't think so.

Free software turned some things into commodities. But that doesn't mean the paid-for tools ceased to exist. E.g. Microsoft still sells their compiler and tools; only the Community edition of Visual Studio is free, and it has severe licensing restrictions. If you have more than 5 users or make more than $100k annually, you have to buy the commercial version.

The same holds for e.g. Unity 3D or Unreal engine - they are "free" in the sense that the development tools are free for personal use. The moment you start developing commercially or selling something, you owe them money. Etc.

Or FreeRTOS. 10 years ago there were lots of vendors, such as uOS. FreeRTOS pushed them all out - not because FreeRTOS is any better, but because it's free.

I do wonder where VxWorks, EUROS, Neutrino, Nucleus, QNX ... went, then. Also, FreeRTOS has a commercial license available as well, the same as ChibiOS.

So, there's no doubt that, little by little, free software will take over everywhere; just give it enough time - 20-30 years, I'd guess.

Riiight ...  GCC alone is more than 30 years old, and we still have proprietary compilers (e.g. IAR), and some vendors even repackage and sell GCC-based toolchains (Microchip). Heck, some people prefer the expensive IAR compilers even where free GCC-based tools exist. Could it be that GCC simply doesn't (and cannot) cover all of the market's needs?

Free works for some things, but we are not going to see a competitive high-end CAD system (too complex, requires specialized knowledge, and the customers don't care about free - they need support, they need import and export of various proprietary data formats, etc.). Free office software exists, but it is pretty much irrelevant because Microsoft's formats are the standard. Tools like the Adobe Creative Suite have pretty much no free or paid competition (and certainly aren't going to have any time soon, given how much work it would require - GIMP really isn't in the same league). Etc.

And that's generic, commodity software - most software is made-to-measure, custom development. Even if you use free components, you will still need engineers to write all the glue that holds the application together. And they don't work for a smile and a beer.

I am certainly not worried about a lack of work; if anything, there will be more of it in the future, because everything is moving from hardware to software due to the lower cost of changes and faster time to market.

 

