There is hardly any correlation between processing speed and price in small microcontrollers nowadays.
True, but I'd still claim it is a good idea to make sure you can utilize each microcontroller's subsystems to their fullest extent.
I *am* assuming ATtiny85 here. They happen to be quite cheap, but interestingly powerful microcontrollers. I personally like the original DigiSpark approach, where the less-than-square-inch PCB itself acts as a full-sized USB connector. There are only a couple of I/O pins, but it opens up a large number of options. I do normally use ATmega32u4 for USB 1.1 stuff, and various ARM Cortex microcontrollers (esp. Cortex-M4F) for anything that needs any computational oomph.
If it isn't the exact one OP is using, it seems to be darned close; close enough for the discussion to make sense.
In this particular case, we can throw away the stated question (fast integer multiply by 100), and look at the underlying problem OP is working on.
The microcontroller has an 8-bit timer/counter that is used for a regular timer tick. (There are two other timers, one of them 16-bit if I recall correctly offhand, but in many cases you want to use them for something more important.)
OP's idea is to use the actual timer/counter value (running at or around the instruction clock frequency) as the fine part of a clock, with a software overflow counter supplying the extra high bits. The problem is that to get a decimal rate (some power of ten ticks per second), the 8-bit timer must wrap around at 100 or 200 rather than 256. Not all use cases need the exact time, and for the best bang for the buck and the largest number of options, one would prefer to have both a fine value (down to a single timer/counter step) and a coarse value (just the number of overflows), especially if the coarse value is cheaper to read.
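To make that concrete, here is a minimal avr-gcc sketch of the wrap-at-100 setup, assuming the ATtiny85 and its Timer/Counter0 (the register names are from the datasheet; the variable names and the choice of the compare-match interrupt are just my illustration, not necessarily what OP has):

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t timer_overflows;     /* coarse value: number of 100-cycle wraps */

ISR(TIMER0_COMPA_vect)
{
    timer_overflows++;                /* one compare match == 100 timer steps */
}

static void timer0_init(void)
{
    TCCR0A = _BV(WGM01);              /* CTC mode: TOP is OCR0A, not 0xFF      */
    TCCR0B = _BV(CS00);               /* no prescaler: count at F_CPU          */
    OCR0A  = 100 - 1;                 /* TCNT0 runs 0..99, then wraps          */
    TIMSK |= _BV(OCIE0A);             /* enable the compare-match A interrupt  */
    sei();                            /* global interrupt enable               */
}
```

Note that the interrupt rate is F_CPU/100, so the ISR must stay very short; that is exactly why the snapshot trick discussed below matters.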
This is not a complicated problem, and definitely does not warrant using a 16- or 32-bit microcontroller.
My suggestion above is the deliberately overengineered one: it shows how to do this with no issues even when the reader is interrupted by the timer overflow itself. It uses generation counters, which are the most basic form of spinlock (although mine works for a single writer and multiple readers only). It is easily formally proven to take at most two iterations as long as timer overflows occur at intervals greater than a few dozen cycles, and it allows "atomic" snapshots of both the timer counter and the overflow counter as a single unsigned integer value.

Essentially, you can turn it into a cycle counter on the ATtiny85 if you want, provided you write the ISR in assembly. The fine counter is incremented by the counter limit, so that the full cycle count is just the sum of the timer/counter value and the overflow count; the coarse counter is incremented by one. You can implement either a minimum-average-cycle version (by only incrementing the least significant bytes when they do not overflow) or a fixed-duration version (whose latency effects are trivial to note and measure at run time and, being absolutely regular, are easy to take into account).
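For the generation counter itself, here is a hedged sketch, again assuming the ATtiny85/Timer0 setup above (the ISR here replaces the simple one from the earlier sketch; the names and the 32-bit width are my assumptions, not OP's code). The ISR is the single writer and the main code is the reader; because the reader can never preempt the ISR on this single-core part, bumping the generation once per update is enough, whereas a full seqlock would bump it twice.

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define TIMER_TOP  100u               /* CTC TOP + 1: the timer wraps at 100 */

volatile uint8_t  clk_gen;            /* generation counter, bumped by the ISR */
volatile uint32_t clk_fine;           /* cycle count in steps of TIMER_TOP     */
volatile uint16_t clk_coarse;         /* plain overflow count (coarse clock)   */

ISR(TIMER0_COMPA_vect)
{
    clk_fine   += TIMER_TOP;          /* fine value grows by the counter limit */
    clk_coarse += 1;                  /* coarse value grows by one             */
    clk_gen++;                        /* publish a new generation              */
}

/* Snapshot of the full cycle count: clk_fine plus the live hardware counter.
 * Call with interrupts enabled.  If the ISR fires mid-read, the generation
 * changes and we retry; with overflows more than a few dozen cycles apart,
 * this loop runs at most twice. */
static uint32_t cycles_now(void)
{
    uint8_t  gen;
    uint8_t  tcnt;
    uint32_t fine;

    do {
        gen  = clk_gen;
        tcnt = TCNT0;                 /* fine part: current sub-wrap count */
        fine = clk_fine;              /* multi-byte read, guarded by gen   */
    } while (gen != clk_gen);

    return fine + tcnt;
}
```

The coarse clock is just clk_coarse, read with the same generation check (or in a short interrupts-disabled section); that is the "cheaper to read" value when you only care about whole overflows.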
If this was a single one-off product being implemented for a paying customer, I'd agree with Doctorandus_P: then, switching to a more powerful microcontroller gives you much more leeway, and simply Makes Sense.
However, I don't think people here tend to discuss single one-off products they're working on. So I am working on the assumption that this is a prototype, or experimentation for learning. For that, just discovering generation counters and using a single ISR to provide multiple clocks running at different rates makes this thread worthwhile.
Not seeing the value in doing this, and instead recommending using a more powerful microcontroller, is alarming to me.
(This also means my argument may look too aggressive. Just remember I am arguing against your argument, and not you as a person.)
I also do not think "academic exercise" is something anyone should use in a derisive manner. Ever.
Our technology is advancing at a tremendous pace, and software engineering and programming languages are
not keeping up. This leads to a prevalent belief that we already know everything there is to know about software engineering, and rather than waste time with "academic exercises", the proper cost-effective solution is to throw more hardware at it.
This is simply not true. I have seen this personally in the HPC world. Simply put, aside from support of new, much more performant hardware,
no real advances have been made on the HPC software engineering side. Almost all simulations still use process distribution, distribute data only when not doing computation, and avoid threading models, simply because they're too hard -- or "not cost-effective to teach the developers to do", as I've been told. Data mining, expert systems, and "AI" are nothing new; even self-organizing maps were pretty well known by the 1980s. It's just that now we have the hardware to collect and process vast amounts of data at timescales that allow even relatively crappy implementations (compared to biological ones!) to produce "miraculous" results.
The exact same thing happened in the automotive world in the United States over the last fifty years or so, when the fuel consumption of a typical car grew rather than shrank, because it was not thought of as important. It is even funnier to consider that the typical "grocery bag" car in New York in the early 1900s was an electric car. If you've ever read Donald Duck comics, the car Grandma Duck uses is a 1916 Detroit Electric. Yet electric cars are somehow thought of as a new innovation. (The battery technologies and some of the materials tech are new, but the electric car concept is not, not in the least.) We would have had cheap home 3D printers in the late 1980s, or the early 1990s at the latest, were it not for certain patents that were mainly used to protect existing plastics manufacturing methods from competition.
I cannot stress enough how important it is not to let engineers and designers rely on hardware improvements to keep their work relevant, and stop learning. It just isn't good for anyone in the long term. It is a seductive option, because it makes your work easier in the short term; but in the long term, the side effects make it a poor choice.
I'd remind everyone with degrees that their studies were there to prepare them for the real work, the real learning. The only way you can belittle "academic exercises" is if you do your work right and develop your skills to the fullest, so that you do much harder work and learn more every day than you ever did in academia.