Using 16F1xxx enhanced midrange. I have an older piece of absolute code with a long subroutine that wraps across a page boundary. It contains so many local sub-subroutines that I can't easily extract it. In this particular code I have to put "pagesel $" as the first line of the ISR, or I get a bad jump. I assume it's due to the presence of this weird subroutine, but I can't understand why. Is it possible that the hardware saves and restores the address of the initial call, and doesn't update itself when the subroutine crosses a page boundary? But then I don't get how putting pagesel $ at the beginning of the ISR would fix anything... yet it does. I treat the subroutine in question as a long call (I can actually call it locally from the first page, but I put pagesel $ after the call), and it works fine, except that I have to put that pagesel in the ISR.
pagesel Something
call Something
^Ahh, ok. Good to know I'm maybe not crazy for putting pagesel $ in my ISR.
I thought it was due to this subroutine, but in hindsight it was perhaps just a side effect of the first time in my life I crossed over to page 1. And yeah, I've put that pagesel there ever after.
So the ISR vector is special, and the PC gets there without any paging involved? I wonder if the RESET vector is the same? (I'm putting "pagesel START, goto START" even when START is on page 0, but I'm not sure that's really necessary.)
These addresses are 0x0000 and 0x0004... and the same offsets exist on page 1 (0x0800 and 0x0804) and every other page... I take it the reset and ISR vectors are reached by pure hardware?
goto 0x0220
PCH = (3 bits from the instruction code) | (PCLATH & 0xf8);
I went my round with TGZZZZ several months ago. It's a matter of perspective, which I accept: as an assembly programmer of a simple device, I have complete control over the actual hardware limitations. So I see North Guy's perspective as valid. But if you are using an RTOS, you are using a more complex toolchain which you can't completely rewrite or circumvent with inline assembly, unless you are in a very specialized business. Most people are paid to get a job done efficiently, not to fine-tune tools to make them specific to a given end product.
PICs don't use extra cycles to store registers. They're stored automatically and restored automatically, all done in hardware at the very moment the interrupt is taken. Not a single cycle is added.
PIC16F1* stores 8 different registers, which saves 16 instructions at the beginning of the ISR and gives you 2-cycle latency (250 ns, or 166 ns on PIC16F14* which can use a 48 MHz clock). If you did the saving manually, you would get 18-cycle latency, 2.25 us (plus 16 cycles when you leave the interrupt). 2.25 us is a way bigger latency than 250 ns, so the auto-save is very important: it lets you achieve much higher performance.
@northguy: I finally have a couple of 26K42s with the latest components order. What I'm interested in is to test:
- IF the IVT can help me reduce the jitter in generating square waves with the timer-countdown method... not that there is a need to; I have microseconds of jitter on a constantly changing 1-10 kHz signal (airflow sensor), though I have to adjust correction factors to get a linear response over the entire range. It's more to amuse myself AND be ready for when I will actually need it.
- IF with the DMA and ADC-Squared I can simplify the analog input routine and further slow down the processor.
The last one is more related to the topic: what if the inputs are changed by DMA in the middle of the routine? Simple answer: I make a temp variable (hoping that a hardware (DMA) / compiler (compiled stack) bug doesn't make the chip modify it).
dspic: was the latency that high?
Let them try to define or impose their own reality on ours; it's quite amusing watching from here. To us, their theories sound plausible yet confusing on many levels, but above all unproven, or in a simpler word... nonsense... From simple non-blocking single-core switch code to multicore, parallelism, resource sharing, priority inversion, real time, ecosystems and whatever is going to be raised next... AFAIK most of those terms have been well defined and known since before our birth, probably since the invention of the Babbage machine, yet here they come up with their own, probably delusional, meanings in this ever-growing thread, just like the fate of the other threads. The first rule of thumb: whenever there's the slightest correlation between a noob thread and programming, you'll see long-winded people with long-winded theories promoting this and that and how unholy C/C++ is... Don't ever try to correct the path for the laymen, otherwise you'll be dragged away with all these confusing theories and be doomed to look just like a fool... Well... the weekend is over... to hell with the theories; we only care about practical, workable code. We harness machines to do our jobs, not the other way around...
Personally I'm more interested in, for lack of a better term, the philosophy rather than a specific implementation of the philosophy.
Hence my being "satisfied" by xCORE+xC.
How many times can they do that without restoring?
Being able to save and restore the entire register set is nice, but what I usually do is rely on the ISR to save *only* the registers it is going to use, to save time; that is neatly undone by an RTOS that lacks knowledge of which registers those are.
Maybe if that matters, you should be using a faster processor.
In particular I agree with the concept that for very simple applications you can use many techniques. However, such techniques aren't necessarily scalable to larger and/or more complex applications. I believe it is important that people understand that.
Maybe if that matters, you should be using a faster processor.
What faster processor would you suggest which would give you 125 ns interrupt latency?
In particular I agree with the concept that for very simple applications you can use many techniques. However, such techniques aren't necessarily scalable to larger and/or more complex applications. I believe it is important that people understand that.
Any project has some level of complexity. You cannot make your project less complex than it is. You can, however, make it as simple as possible (but not simpler, as Einstein said). The simplest possible solution has a better chance of being developed faster and cheaper, and also a better chance of being scalable and maintainable.
I'm happy for people to suggest simple techniques and to note that they are limited to simple small applications. That's reasonable and valid.
I'm not happy for people to suggest such techniques and omit to mention that they aren't scalable to interesting applications (interesting = non-trivial or complex or large). Any such omissions suggest a lack of experience/knowledge, and will probably mislead the inexperienced.
Maybe if that matters, you should be using a faster processor.
What faster processor would you suggest which would give you 125 ns interrupt latency?
I'm happy for people to suggest simple techniques and to note that they are limited to simple small applications. That's reasonable and valid.
I'm not happy for people to suggest such techniques and omit to mention that they aren't scalable to interesting applications (interesting = non-trivial or complex or large). Any such omissions suggest a lack of experience/knowledge, and will probably mislead the inexperienced.
You really need to clarify what techniques you're talking about and what interesting applications you have in mind. Otherwise, it just doesn't make any sense.
So far we've seen a 15 MHz frequency counter on a $30 MCU, which you built in just two days. This doesn't sound very interesting. Anyone can build a 50 MHz frequency counter with a 60-cent PIC16F1501, and I cannot see how that could possibly take more than two hours.
I'm not happy for people to suggest such techniques and omit to mention that they aren't scalable to interesting applications (interesting = non-trivial or complex or large). Any such omissions suggest a lack of experience/knowledge, and will probably mislead the inexperienced.
You really need to clarify what techniques you're talking about and what interesting applications you have in mind. Otherwise, it just doesn't make any sense.
I've mentioned various techniques in other posts; I suggest you re-read them. I'm not going to waste my life repeating them to people with a short attention span.
^ This is what I was thinking. Who cares if it automatically saves registers? Newer 8-bit PICs have automatic context saving. Older ones don't. The only difference to me is 6 lines of assembly: 3 at the beginning of the ISR and 3 at the end. This pretty much amounts to saying "the new model has 6 extra words of instructions and 3 extra bytes of memory compared to the older model." You can also consider those resources automatically reserved for the ISR, which deprives the user of them in case they weren't needed for the ISR. This is nice for the programmer but not an actual improvement in specs.
If it did this in parallel with the core, reducing latency, then that would be different.
PICs don't use extra cycles to store registers. They're stored automatically and restored automatically, all done in hardware at the very moment the interrupt is taken. Not a single cycle is added.
PIC16F1* stores 8 different registers, which saves 16 instructions at the beginning of the ISR and gives you 2-cycle latency (250 ns, or 166 ns on PIC16F14* which can use a 48 MHz clock). If you did the saving manually, you would get 18-cycle latency, 2.25 us (plus 16 cycles when you leave the interrupt). 2.25 us is a way bigger latency than 250 ns, so the auto-save is very important: it lets you achieve much higher performance.
I'm happy for people to suggest simple techniques and to note that they are limited to simple small applications. That's reasonable and valid.
I'm not happy for people to suggest such techniques and omit to mention that they aren't scalable to interesting applications (interesting = non-trivial or complex or large). Any such omissions suggest a lack of experience/knowledge, and will probably mislead the inexperienced.
You really need to clarify what techniques you're talking about and what interesting applications you have in mind. Otherwise, it just doesn't make any sense.
So far we've seen a 15 MHz frequency counter on a $30 MCU, which you built in just two days. This doesn't sound very interesting. Anyone can build a 50 MHz frequency counter with a 60-cent PIC16F1501, and I cannot see how that could possibly take more than two hours.
Advancing strawman arguments doesn't make you look good. The frequency counter was a trivial kick-the-tyres exercise to see if the tools lived up to their claims (they did).
I've mentioned various techniques in other posts; I suggest you re-read them. I'm not going to waste my life repeating them to people with a short attention span.
I'm not happy for people to suggest such techniques and omit to mention that they aren't scalable to interesting applications (interesting = non-trivial or complex or large). Any such omissions suggest a lack of experience/knowledge, and will probably mislead the inexperienced.
You really need to clarify what techniques you're talking about and what interesting applications you have in mind. Otherwise, it just doesn't make any sense.
I've mentioned various techniques in other posts; I suggest you re-read them. I'm not going to waste my life repeating them to people with a short attention span.
I don't really understand: are all of the techniques you mentioned in your 5226 posts not scalable, or only some of them? And if some of the techniques mentioned in your posts are not scalable and you omitted to disclose this, does that "suggest a lack of experience/knowledge, and will probably mislead the inexperienced"?
But - correct me if I'm wrong - it makes having nested interrupts much harder, because you'll need to save the context of the previous interrupt. The ARM7TDMI core also has banked registers for fast context switching, but doing nested interrupts takes a lot of extra assembly. The ARM Cortex-M, OTOH, does need extra cycles at the beginning of the interrupt, but it can deal with several interrupts in a row without restoring the context in between, and it can do nested interrupts as well. All in all, that makes it a whole lot more flexible than having a set of shadow registers. Sure, the latency is longer, but who cares if you have peripherals with FIFOs and/or DMA capability (which in the end give better real-time performance than you could ever achieve in software, especially with multiple high-priority tasks).
I omitted to mention "interesting" applications because I was wondering how to get the concepts across to someone who, presumably, has yet to encounter them.
"Interesting" applications are legion; there's no way to begin listing them. Instead consider some characteristics (from systems/applications I've developed) which frequently hint an application is non-trivial:And if I thought for a little while longer, I'm sure I could add to that list.
- a need for formal validation and verification processes
- a bug costs a year's salary, and/or the re-spin latency is measured in agricultural seasons
- someone gets hurt if it fails; worse, someone dies when it works as designed
- remote operation in unattended buildings that you don't have key access to
- high availability, where parts of the system will fail in normal operation, and the system must continue to function 24/7
- development teams split across continents
- development teams split across companies
- customers that throw chairs at FSEs when the product doesn't work as they were expecting
- there is a probability that during commissioning different companies will attempt to shift blame onto other companies, and their lawyers are ready and waiting
- enhancements will be made over many years, and the development team has departed for pastures new
PIC16 doesn't have nested interrupts. Theoretically, you can do it in software, but it is so inefficient that it certainly isn't worth it.