Author Topic: interrupt latency, and how can it be minimized?  (Read 5110 times)

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
interrupt latency, and how can it be minimized?
« on: September 11, 2023, 05:34:20 pm »
What exactly is interrupt latency, and how can it be minimized? Specifically, I'm interested in understanding the role of the CPU in handling interrupt requests and executing interrupt service routines, and how these processes impact interrupt latency.

Can you elaborate on interrupt latency using the example of a timer interrupt that fires every 5 ms? Specifically, who generates the interrupt request, and how does the CPU handle it?

 

Offline bingo600

  • Super Contributor
  • ***
  • Posts: 1989
  • Country: dk
Re: interrupt latency, and how can it be minimized?
« Reply #1 on: September 11, 2023, 05:48:40 pm »
IMHO you should do your homework yourself.

/Bingo
 
The following users thanked this post: MikeK, nctnico, Siwastaja, tooki, newbrain, Doctorandus_P, JustMeHere, abeyer, jeremy0, AndyBeez

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 5912
  • Country: es
Re: interrupt latency, and how can it be minimized?
« Reply #2 on: September 11, 2023, 05:50:47 pm »
Heavily depends on the mcu.
But a quick explanation:
- A peripheral triggers an irq.
- The bus might need some cycles before the irq propagates.
- The CPU accepts the irq and jumps to the ISR vector, or might load the address from a table and issue a jump (more CPU cycles).
- The real ISR code starts. It needs to save the CPU registers so they can be restored later and the program resumes correctly (more cycles).
- Your ISR code executes. Takes what it takes.
- Your code is finished. The CPU restores the context (CPU registers).
- Return to whatever the CPU was doing before.

Some parts can be optimized, like using hardware context switching, storing the ISR vectors in RAM, avoiding unnecessary code or variable initialization at the start of the interrupt code, using specific compiler flags to optimize the function (like GCC's naked attribute)...
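
For instance, a minimal avr-gcc sketch of that last point (the timer vectors are just examples; ISR_NAKED drops the compiler's prologue/epilogue, so the body must not clobber any register or flag):

Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t tick;

/* Normal ISR: the compiler saves/restores SREG and every register
   the body clobbers, costing cycles on entry and exit. */
ISR(TIMER0_COMPA_vect)
{
    tick = 1;
}

/* Naked ISR: no prologue/epilogue is generated. Safe here only
   because a constant bit-set in the low I/O space compiles to a
   single sbi instruction, which does not touch SREG (check the
   disassembly on your part). */
ISR(TIMER0_COMPB_vect, ISR_NAKED)
{
    PORTB |= (1 << PB0);
    reti();   /* we must return from the interrupt ourselves */
}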

This is an extensive topic!
« Last Edit: September 11, 2023, 05:55:12 pm by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 
The following users thanked this post: Mtech1

Offline AndyBeez

  • Frequent Contributor
  • **
  • Posts: 856
  • Country: nu
Re: interrupt latency, and how can it be minimized?
« Reply #3 on: September 11, 2023, 05:56:27 pm »
One assumes CheatGPT drew a blank on answering this essay/job application question?

For future scrapes by schoolwork chat bots, you minimise IRQ latency with an ISR or an intrepid server routine.
« Last Edit: September 11, 2023, 08:27:27 pm by AndyBeez »
 
The following users thanked this post: tooki, JPortici

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: interrupt latency, and how can it be minimized?
« Reply #4 on: September 11, 2023, 06:06:48 pm »
Interrupt latency is the time between the moment an event that should cause an interrupt occurs, and the moment the code that services the interrupt actually starts running.
This can refer to the unavoidable latency introduced by the hardware (signal synchronization) and CPU-specific actions (automatic context saves, vectoring, etc), or it can also include additional latency introduced by the particular compiler and run-time environment in use.  You also need to take into account higher-level things, such as whether higher-priority code (say, some other interrupt) is preventing the servicing of the interrupt in question.
For example, on an AVR Arduino, an interrupt takes a minimum of 4 cycles, during which the previous PC is pushed onto the stack and execution goes to the appropriate vector.  The vector is normally a JMP to the ISR, so that's another 3 cycles.  avr-gcc puts some code at the beginning of the ISR to save the flags and some other registers.  If the ISR calls another function, it has to save more registers.  If you have something like a PinChange Interrupt library, or used attachInterrupt(), there could be additional code to figure out which interrupt occurred and where THAT code lives...
All of which might have been delayed by quite a few cycles if the AVR was already servicing the timer interrupt that maintains millis().


It's rather different on an ARM.


"Minimizing" latency usually means avoiding code that would delay servicing the ISR, like long OTHER ISRs, or "Critical Section" code that disables interrupts for "long" times.  "Long" here means 50+ CPU cycles, I guess.  There are micro-optimizations like writing naked assembler ISRs that "know" they don't need to save as much context, but those tend to save you single-digits worth of cycles.
 
The following users thanked this post: Mtech1

Online peter-h

  • Super Contributor
  • ***
  • Posts: 3698
  • Country: gb
  • Doing electronics since the 1960s...
Re: interrupt latency, and how can it be minimized?
« Reply #5 on: September 12, 2023, 07:40:37 am »
On a 5ms (200Hz) interrupt, the typical arm32 interrupt overhead will be a very tiny fraction of 1%.
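
As a rough worked example (my numbers, assuming a 168 MHz Cortex-M4 with its architectural 12-cycle exception entry and exit): 5 ms is 840 000 cycles, so even an ISR whose body takes 200 cycles costs about (12 + 200 + 12) / 840 000 ≈ 0.027% of the CPU.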
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8173
  • Country: fi
Re: interrupt latency, and how can it be minimized?
« Reply #6 on: September 12, 2023, 09:11:39 am »
Don't feed these AI bots. Or at least feed them with blatantly wrong information. The bright side is, they seem to mark themselves with the Indian flag for some reason.
 

Offline Shonky

  • Frequent Contributor
  • **
  • Posts: 290
  • Country: au
Re: interrupt latency, and how can it be minimized?
« Reply #7 on: September 12, 2023, 09:15:03 am »
Seems more like a homework / assignment question to me and location is likely correct.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #8 on: September 12, 2023, 09:33:48 am »
Don't feed these AI bots. Or at least feed them with blatantly wrong information.

I suspect a homework question from somebody who is aiming to become a manager.

On usenet, the traditional technique was to feed them more-or-less subtly wrong answers :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: interrupt latency, and how can it be minimized?
« Reply #9 on: September 12, 2023, 10:31:50 am »
If you have to care about interrupt latency, you are doing something wrong  >:D
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #10 on: September 12, 2023, 11:32:20 am »
If you have to care about interrupt latency, you are doing something wrong 

Almost >:D

If you have to measure it, you are doing something wrong :0

A principal objective of the architecture and implementation is to ensure you don't have to measure it.

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1641
  • Country: nl
Re: interrupt latency, and how can it be minimized?
« Reply #11 on: September 12, 2023, 02:09:39 pm »
Reducing interrupt latency is sometimes not possible. The latency is the embodiment of overhead in the hardware (e.g. receiving the IRQ signal, loading the corresponding vector address, and jumping into it), plus any function prologue/epilogue that the interrupt routine may have.

Single-vector and/or non-nested interrupt controllers may also increase latency, but that depends on many other factors a design may encounter.
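
On a nested, prioritized controller such as the Cortex-M NVIC, the usual mitigation is giving the latency-critical source the highest urgency so it can preempt everything else; a sketch using the standard CMSIS calls (the STM32-style IRQ names are assumptions):

Code: [Select]
#include "stm32f4xx.h"  /* any CMSIS device header provides these calls */

void irq_setup(void)
{
    /* Lower number = higher urgency: TIM2 may preempt the UART. */
    NVIC_SetPriority(TIM2_IRQn, 0);
    NVIC_SetPriority(USART1_IRQn, 3);
    NVIC_EnableIRQ(TIM2_IRQn);
    NVIC_EnableIRQ(USART1_IRQn);
}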

Some CPU architectures do facilitate fast interrupt vectors, for example MIPS, which may provide a shadow register file. This avoids the need to push registers to the stack upon entry. The ARM7TDMI used to have a fast interrupt (FIQ) mode as well, but the Cortex-M series does not anymore; however, its interrupt latency is generally still quite good. In addition, modern chips feature many autonomous peripherals such as FIFO buffers, DMA, interconnects between peripherals and whatnot, which can all help reduce interrupt frequency and response time.

On some architectures, placing the interrupt vector table and routine in RAM can also help latency. But honestly it's best to RTFM, or to measure it if it's of real concern. Those kinds of chips can quickly become too complex to reason about empirically, IMO.
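
For the RAM vector table idea, a hedged Cortex-M sketch using the CMSIS SCB->VTOR register (the vector count and the table alignment are device-specific assumptions; check the reference manual):

Code: [Select]
#include "stm32f4xx.h"

#define VECTOR_COUNT 98u   /* assumption: depends on the device */

/* VTOR alignment must cover the table size; 512 bytes here. */
static uint32_t ram_vectors[VECTOR_COUNT] __attribute__((aligned(512)));

void vectors_to_ram(void)
{
    const uint32_t *flash_vectors = (const uint32_t *)SCB->VTOR;

    for (uint32_t i = 0; i < VECTOR_COUNT; i++)
        ram_vectors[i] = flash_vectors[i];

    __disable_irq();
    SCB->VTOR = (uint32_t)ram_vectors;
    __DSB();               /* ensure the new table takes effect */
    __enable_irq();
}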

Now, I've listed some factors that influence interrupt latency; the process of minimization, however, cannot be described without knowing the platform and design at hand. Since that seems to be part of your homework question(?), good luck.
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: interrupt latency, and how can it be minimized?
« Reply #12 on: September 12, 2023, 02:28:22 pm »
When an interrupt occurs, a signal is sent to the processor to notify it that an interrupt has been generated, and the processor must execute the code written inside the Interrupt Service Routine (ISR) function. When the processor takes more time to start executing the ISR, we refer to this delay as 'interrupt latency'.

This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.  I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"
 

Offline SpacedCowboy

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: gb
  • Aging physicist
Re: interrupt latency, and how can it be minimized?
« Reply #13 on: September 12, 2023, 02:32:25 pm »
If you have to care about interrupt latency, you are doing something wrong  >:D

Maybe during development, when it’s too late to do anything about it. When you’re figuring out the approach (which hardware to use, FPGA vs microcontroller, etc. etc.)  understanding your constraints is pretty much spot-on IMHO.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #14 on: September 12, 2023, 02:52:02 pm »
When an interrupt occurs, a signal is sent to the processor to notify it that an interrupt has been generated, and the processor must execute the code written inside the Interrupt Service Routine (ISR) function. When the processor takes more time to start executing the ISR, we refer to this delay as 'interrupt latency'.

This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.

If you have multiple simultaneous interrupts, one will be delayed. Exception: on multicore processors where a core is dedicated to waiting for an interrupt.

Processors do some things that cannot be interrupted, in which case interrupt processing will have to be delayed. Examples: long-duration instructions, cache/TLB operations, superscalar operation with multiple instructions in the pipeline.

Interrupts can be disabled to ensure atomic operation of some sequences of instructions, then re-enabled.
 

Quote
I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"

Too much processing in an interrupt routine is a code smell. ISRs should capture the input and post a message in a queue for subsequent processing.
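
For example, a minimal sketch of that pattern in C (the names are mine): a single-producer/single-consumer ring buffer with the ISR as producer and the main loop as consumer.

Code: [Select]
#include <stdint.h>

#define QLEN 64u                        /* power of two */
static volatile uint8_t q[QLEN];
static volatile uint8_t q_head, q_tail;

/* ISR side: capture the byte and return immediately. */
void uart_rx_isr(uint8_t byte)
{
    uint8_t next = (q_head + 1u) & (QLEN - 1u);
    if (next != q_tail) {               /* drop the byte on overflow */
        q[q_head] = byte;
        q_head = next;
    }
}

/* Main-loop side: all the real processing happens here. */
int queue_pop(uint8_t *out)
{
    if (q_tail == q_head)
        return 0;                       /* empty */
    *out = q[q_tail];
    q_tail = (q_tail + 1u) & (QLEN - 1u);
    return 1;
}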

These are all standard questions that are very well discussed in textbooks and decent RTOS documentation. Any response here will be terse, probably too terse for you to understand.

I recommend googling for information. If a reference contains a discussion of "priority inversion", it is likely to be reasonably comprehensive.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #15 on: September 12, 2023, 02:57:58 pm »
If you have to care about interrupt latency, you are doing something wrong  >:D

Maybe during development, when it’s too late to do anything about it. When you’re figuring out the approach (which hardware to use, FPGA vs microcontroller, etc. etc.)  understanding your constraints is pretty much spot-on IMHO.

Yes. But how very old skool :(

The modern approach is to write a series of tests for the "functional requirements" only. When those tests are passed it is, by definition, working. Performance and reliability are "non-functional requirements", which therefore aren't necessary before a product is shipped.

Good luck writing a test that proves timing constraints are (always) met. Ditto ACID properties. Ditto fault recovery behaviour. Ditto liveness and absence of deadlock.

Me a cynic? Shurely shome mishtake.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14481
  • Country: fr
Re: interrupt latency, and how can it be minimized?
« Reply #16 on: September 13, 2023, 12:57:25 am »
Don't feed these AI bots. Or at least feed them with blatantly wrong information.

I suspect a homework question from somebody who is aiming to become a manager.

Could be either.

Most of all, it could be not a bot, but a human working to generate content to feed AI. Many of them out there.
In other words: ask very generic questions all over the place in various forums, wait for the replies to come, copy-paste all that to feed AI models, rinse and repeat, and profit (this probably gets paid a bit, which may not be too bad for people from low-income countries.)
 :popcorn:
 

Offline Shonky

  • Frequent Contributor
  • **
  • Posts: 290
  • Country: au
Re: interrupt latency, and how can it be minimized?
« Reply #17 on: September 13, 2023, 01:03:07 am »
This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.  I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"
It seems like homework when you appear to have done literally zero prior work to answer your questions.

I hate to be this guy but:
https://letmegooglethat.com/?q=why+is+desirable+to+keep+ISRs+short
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #18 on: September 13, 2023, 07:39:02 am »
Most of all, it could be not a bot, but a human working to generate content to feed AI. Many of them out there.

Oh dog, I hadn't thought of that :(

Wonder what the forum's policy on that is or should be.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: interrupt latency, and how can it be minimized?
« Reply #19 on: September 13, 2023, 08:10:24 am »
This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.  I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"
It seems like homework when you appear to have done literally zero prior work to answer your questions.

I hate to be this guy but:
https://letmegooglethat.com/?q=why+is+desirable+to+keep+ISRs+short
What you are proposing here is the wrong approach. A myth that makes many less experienced embedded programmers turn their projects into a convoluted mess because they end up chasing rainbows.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline Shonky

  • Frequent Contributor
  • **
  • Posts: 290
  • Country: au
Re: interrupt latency, and how can it be minimized?
« Reply #20 on: September 13, 2023, 08:22:04 am »
This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.  I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"
It seems like homework when you appear to have done literally zero prior work to answer your questions.

I hate to be this guy but:
https://letmegooglethat.com/?q=why+is+desirable+to+keep+ISRs+short
What you are proposing here is the wrong approach. A myth that makes many less experienced embedded programmers turn their projects into a convoluted mess because they end up chasing rainbows.
I don't follow, but when someone comes along looking for an answer served up on a plate, without having done even the most basic research for themselves, it raises alarms for me.

If they read the first thing they find on the internet and take it as the word and the truth, then yeah, they'll probably have a bad time - but how's that different to asking on a forum?
 

Online dietert1

  • Super Contributor
  • ***
  • Posts: 2073
  • Country: br
    • CADT Homepage
Re: interrupt latency, and how can it be minimized?
« Reply #21 on: September 13, 2023, 08:49:39 am »
Like this.
 

Offline Mtech1Topic starter

  • Contributor
  • Posts: 28
  • Country: in
Re: interrupt latency, and how can it be minimized?
« Reply #22 on: September 13, 2023, 03:36:30 pm »
Thank you all for your feedback and concerns. I want to clarify that I'm not a bot, nor am I generating content for AI models or any profit. My intention is to engage in meaningful discussions and gain insights from the expertise of this community.

I understand the importance of conducting research before asking questions, and I'll make an effort to provide more context and share my prior research in future inquiries. I genuinely value the perspectives and knowledge shared here and believe that forums like this provide a valuable platform for learning from diverse experiences.
 

Offline SpacedCowboy

  • Frequent Contributor
  • **
  • Posts: 292
  • Country: gb
  • Aging physicist
Re: interrupt latency, and how can it be minimized?
« Reply #23 on: September 13, 2023, 04:15:16 pm »
If you have to care about interrupt latency, you are doing something wrong  >:D

Maybe during development, when it’s too late to do anything about it. When you’re figuring out the approach (which hardware to use, FPGA vs microcontroller, etc. etc.)  understanding your constraints is pretty much spot-on IMHO.

Yes. But how very old skool :(

That would make sense. I am old, and more interested in getting shit done properly than checking boxes on a form. My management actually appreciates that so we get along :)
 
The following users thanked this post: Nominal Animal

Offline MikeK

  • Super Contributor
  • ***
  • Posts: 1314
  • Country: us
Re: interrupt latency, and how can it be minimized?
« Reply #24 on: September 13, 2023, 05:33:35 pm »
To fully understand interrupt latency, one must study the turbo encabulator and all of its polymorphic trapulisms.
 
The following users thanked this post: nctnico

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: interrupt latency, and how can it be minimized?
« Reply #25 on: September 13, 2023, 06:21:24 pm »
This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.  I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"
Could you first explain why your question ends with a double quote character?  To us, it looks like you copied it from somewhere, which indicates you are trying to minimize the effort you spend to obtain an answer to a question that was asked of you.



To understand these concepts better, buy a cheap, well-supported microcontroller development board with native USB (as that both opens up new options and experiments via USB HID, and reduces the amount of hardware you need – typically only a USB cable with an end suitable for the development board), and start experimenting with the tutorials and examples.  Currently, the Arduino world is the easiest to handle.

I personally prefer Teensy 4.0 with the Teensyduino add-on to Arduino, for a number of reasons; one of them being that Teensy 4.0 is extremely powerful for a microcontroller.  For learning purposes, something like a DigiSpark clone – ATtiny85 processor that uses the VUSB library to implement USB 1.0 using software only (1.5 Mbit/s), well suited for experiments with just a couple of I/O pins – may be more useful, because of their simpler and better documented hardware and software configuration and operation.  You can buy these from eBay for under $2 apiece including shipping from China, and I bet there are many stores in India selling such also.  My favourite 8-bit microcontroller is ATmega32u4, and it has a native USB interface and plenty of I/O, and is used in Arduino Leonardo and Arduino Pro Micro development boards, and it would be a very good starting point, but currently the development boards based on it are surprisingly expensive ($8 and up), because of the increased chip prices: even at JLCPCB assembly, a VQFN-44 ATmega32u4-MU costs $4.32.  (You also need at least a crystal oscillator and related capacitors, some bypass capacitors, and some resistors on the USB D+ and D- lines, but they don't cost much.)

The most used Arduino microcontrollers are based on ATmega328 variants, which do not have native USB (but VUSB for very low-speed USB like USB HID is possible), and you also need a USB-UART adapter or a dedicated JTAG/ISP programmer to program those development boards.

Experimenting with these, for example using a single LED and a single tactile button to make a game where the point is to press the button one second after the LED was lit/turned off, with the LED then blinking in some pattern to tell how close you were, will give you the better intuitive understanding you need to ask the relevant questions.  Even assuming you are perfectly honest about your intentions, you're struggling with the basic concepts because you have no personal experience or understanding to connect or slot the new information into.  Playing with e.g. Arduino, and looking up the documentation to understand how the example code works, will give you exactly that connection or slotting possibility.

The same goes for any university-level learning.  Some people learn by rote, i.e. memorize what their lecturer and book say, and consider that knowledge.  It is not; it is only data, or potential knowledge; it is not real.  It only becomes real knowledge when you can apply it to solve a problem.  To do so, you must understand the problem, and how the various pieces can come together (and they almost always can, in many different ways).  Getting any kind of practical experience or intuitive grasp of the problems, before learning their theoretical background, can make a massive difference.  For me, if I can do that, I've found I can integrate the basics of just about anything!  That means everything I learn this way, I can immediately use to solve problems.  It is amazing how powerful this kind of learning is, so I do warmly recommend you learn to do it this way, too.

(However, not all humans learn the same way.  I, for example, also need to use my hands, for example as a "temporary memory" for detailing things, because I otherwise will forget and not notice I forgot.  Doing things with my hands seems to stimulate my mind, too.  Others might need to talk about the thing – in programming, the Rubber Duck method works really well for some –, so if what works for me does not work for you, simply try the other ways, to find the ones that work for you best.  It may even vary depending on the subject.  The key is, find out how you yourself learn best: not memorize, but integrate information into your understanding in such a way that you can use it to solve problems.  Knowing what formula or approach to use to solve a problem is usually the key, but very often we end up choosing wrong, and asking questions like how to use that approach to solve a problem can be met with ridicule, because those with experience see it as an obviously wrong approach.  Thus, another key is to step backwards, and see what assumptions you are making, and to try and describe the problem and your approach, listing the reasons for your choices, and the context in which you are trying to solve or understand the problem.  This leads to the third key: the opinion of even a Nobel prize winner is basically worthless until you understand the reasons behind that opinion.  What an authority states as a fact is only a claim by an authority, until you can evaluate the basis: the reasons and reasoning behind that claim.  This is why we haven't found anything that works better for science than critical peer review.)

Apologies for the wall of text.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: interrupt latency, and how can it be minimized?
« Reply #26 on: September 13, 2023, 07:34:51 pm »
When an interrupt occurs, a signal is sent to the processor to notify it that an interrupt has been generated, and the processor must execute the code written inside the Interrupt Service Routine (ISR) function. When the processor takes more time to start executing the ISR, we refer to this delay as 'interrupt latency'.

This is not a homework assignment; I am genuinely trying to grasp these concepts. I am interested in understanding why processors get delayed when executing ISR (Interrupt Service Routine) code.  I've heard that it's advisable to keep ISRs short and efficient. Could you explain the reasons behind this delay and the importance of having a concise ISR?"
You need to take a step back and work from some definitions. See the big picture first before fixating on something that isn't important.

Any piece of software consists of at least 1 process. So learn about processes: how they are executed, separated, and how they work together. Code which is executed by a hardware trigger (IOW: an interrupt) is a process. Nothing more, nothing less. See the interrupt controller as a hardware-based operating system that divides processor time between processes. A timer tick interrupt is a perfect example of a background process that has no relation with anything on the outside, but often is essential to do anything that is time-driven in software.

On every processor you have a constraint that limits the amount of code you can run in a given time frame. Often a process does some initialisation and then loops through its task(s). So grab a piece of paper and draw the time each process takes for initialisation and the work loop as a Gantt chart. If you measure/determine the amount of time each process needs (min, max, average), you can determine whether your processor has enough processing time available to perform all tasks in time. In this Gantt chart you also set the priorities of each process.

From there you can start designing process priorities, determine which processes can be combined and which processes can be allowed to interrupt other processes. Maybe you'll need an OS, maybe a round-robin scheduler is enough. Maybe you'll need buffering to process a burst of incoming data. Maybe buffering causes too much overhead and it is better to deal with the data as it comes in. It all becomes clear from making a Gantt chart.

'Keep interrupts short' is just a dogma. It is not always valid or even a useful thing to do. In the end you'll need enough processing power to handle all processes within the given time frame. A clever design that is based on carefully designed processes will be the most efficient one.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 3698
  • Country: gb
  • Doing electronics since the 1960s...
Re: interrupt latency, and how can it be minimized?
« Reply #27 on: September 13, 2023, 08:58:17 pm »
Quote
Thank you all for your feedback and concerns. I want to clarify that I'm not a bot, nor am I generating content for AI models or any profit. My intention is to engage in meaningful discussions and gain insights from the expertise of this community.
I understand the importance of conducting research before asking questions, and I'll make an effort to provide more context and share my prior research in future inquiries. I genuinely value the perspectives and knowledge shared here and believe that forums like this provide a valuable platform for learning from diverse experiences.

That IS chatGPT!

It is the sickly-sweet writing style which gives it away.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 
The following users thanked this post: nctnico, tggzzz, Siwastaja

Offline hans

  • Super Contributor
  • ***
  • Posts: 1641
  • Country: nl
Re: interrupt latency, and how can it be minimized?
« Reply #28 on: September 14, 2023, 07:07:06 am »
Well.. I know a software dev who was job hunting and wrote his motivation letter with ChatGPT. He got the job. :-DD One reason for that was that the company was very impressed with his blog on his consultancy website. That was also written mostly by ChatGPT. >:D
Now I'm not that disgusted; I know that guy very well and he just doesn't want to spend his time writing all day. He'd rather write code. And no, he doesn't use ChatGPT for that, well at least not yet.

So why not use these tools to full advantage?
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #29 on: September 14, 2023, 07:45:18 am »
Well.. I know a software dev who was job hunting and wrote his motivation letter with ChatGPT. He got the job. :-DD One reason for that was that the company was very impressed with his blog on his consultancy website. That was also written mostly by ChatGPT. >:D
Now I'm not that disgusted; I know that guy very well and he just doesn't want to spend his time writing all day. He'd rather write code. And no, he doesn't use ChatGPT for that, well at least not yet.

So why not use these tools to full advantage?

What advantage?
Whose advantage? His or theirs? OP's or us?

Essay subject: compare and contrast the ethics and morality of claiming you created something when it was actually created by (a) someone else (b) an LLM.
« Last Edit: September 14, 2023, 07:47:57 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: SiliconWizard

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14481
  • Country: fr
Re: interrupt latency, and how can it be minimized?
« Reply #30 on: September 14, 2023, 09:45:08 pm »
Well.. I know a software dev who was job hunting and wrote his motivation letter with ChatGPT. He got the job. :-DD One reason for that was that the company was very impressed with his blog on his consultancy website. That was also written mostly by ChatGPT. >:D
Now I'm not that disgusted; I know that guy very well and he just doesn't want to spend his time writing all day. He'd rather write code. And no, he doesn't use ChatGPT for that, well at least not yet.

So why not use these tools to full advantage?

What the above shows - and I can confirm that it is currently happening, and working, so definitely not a rare case - is that many people in charge of recruiting are clueless joes who will fall for stuff that looks shiny without much discernment.
Whether you want to take advantage of this fact is up to your ethics.
What it does promote though is stupidity on one side and lies on the other. So even if you don't care about ethics, is it really the world you want to live in? Sure it's already the case: superficial and clueless leaders, deception all over the place, but I don't think automating this state of things is going to do us much good. But hey, I'm not gonna prevent anyone from rushing towards an Idiocracy.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: interrupt latency, and how can it be minimized?
« Reply #31 on: September 14, 2023, 10:13:15 pm »
Quote
I know a software dev that was job hunting and wrote its motivation letter with ChatGPT.
Well, you know, HR departments have been using software FAR LESS INTELLIGENT than ChatGPT to screen resumes/cover-letters for like 30 years now...
(Heh.  Now I'm picturing a sort of AI war that causes the "writers" and "readers" of such things to optimize themselves for/against each other, eventually resulting in things that are next to useless for humans...)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #32 on: September 14, 2023, 10:55:37 pm »
Quote
I know a software dev that was job hunting and wrote its motivation letter with ChatGPT.
Well, you know, HR departments have been using software FAR LESS INTELLIGENT than ChatGPT to screen resumes/cover-letters for like 30 years now...
(Heh.  Now I'm picturing a sort of AI war that causes the "writers" and "readers" of such things to optimize themselves for/against each other, eventually resulting in things that are next to useless for humans...)

That kind of thing is already entering the zeitgeist, e.g. in the Alex cartoon, a 35-year-old, extremely sharp and cynical observation of the people in the financial markets.
The latest two cartoons are
https://www.alexcartoon.com/index.cfm?cartoon_num=8402
https://www.alexcartoon.com/index.cfm?cartoon_num=8403
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1641
  • Country: nl
Re: interrupt latency, and how can it be minimized?
« Reply #33 on: September 15, 2023, 09:41:25 am »
Well.. I know a software dev who was job hunting and wrote his motivation letter with ChatGPT. He got the job. :-DD One reason for that was that the company was very impressed with his blog on his consultancy website. That was also written mostly by ChatGPT. >:D
Now I'm not that disgusted; I know that guy very well and he just doesn't want to spend his time writing all day. He'd rather write code. And no, he doesn't use ChatGPT for that, well at least not yet.

So why not use these tools to full advantage?

What advantage?
Whose advantage? His or theirs? OP's or us?

Essay subject: compare and contrast the ethics and morality of claiming you created something when it was actually created by (a) someone else (b) an LLM.

Advantage: writing semi-coherent stories or English in the first place. Expression of gross facts based on human input. Saving time.

This is primarily to the OP's advantage, in case his/her English is poor. But it could be to our advantage if that aids communication.

I agree with parts of SiliconWizard's point though. It's a huge ethics issue. But the world has been built on liars and deceivers anyhow, so how much is going to change there?
I think one of the powers of tools like ChatGPT is that education in humans is immensely inefficient. Every human born has an empty ROM and needs 25-30 years of training to get to a masters/doctoral level. That training is what we need to make sense of the world, and also of tools like ChatGPT, so IMO that's where the ethics concern comes in wrt proving that something is your own work; but other than that I see huge strides to be had in efficiency improvements.

The problem is that ChatGPT seems to have quickly developed a name as a tool used in (college) cheating, or for being wildly inaccurate while still being confident about those inaccuracies. Those are completely true and valid points, but let's also not forget that the paperwork an HR department asks for can be considered (by us technical people) as write-only documents that are just checkboxes to get an interview.

Anyhow, this is a bit offtopic, so I will leave it at that for now.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: interrupt latency, and how can it be minimized?
« Reply #34 on: September 15, 2023, 04:14:04 pm »
I hate to be this guy but:
https://letmegooglethat.com/?q=why+is+desirable+to+keep+ISRs+short

Since ads appeared on the Internet, people have been hunting clicks, so the Internet is getting dominated by click hunters who create so-called content and put their creations on the web hoping to get more clicks. The content is assembled from other information found on the Internet, so content creators repeat and redistribute the information presented by other content creators. The information on their pages often has nothing to do with reality, but it is still repeated over and over again.

ChatGPT makes this process much easier and faster. Also, ChatGPT can optimize content for search engines. Therefore, a year or two from now it will be impossible to find any real information. It is already difficult compared to what it was 20 years ago. With the advent of ChatGPT, the Internet will drift far away from reality in no time. Not to mention that this flow of fake information may be manipulated when needed by some "clever" people to direct the world to where they want it to go.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #35 on: September 15, 2023, 05:05:05 pm »
Well.. I know a software dev who was job hunting and wrote his motivation letter with ChatGPT. He got the job. :-DD One reason for that was that the company was very impressed with his blog on his consultancy website. That was also written mostly by ChatGPT. >:D
Now I'm not that disgusted; I know that guy very well and he just doesn't want to spend his time writing all day. He'd rather write code. And no, he doesn't use ChatGPT for that, well at least not yet.

So why not use these tools to full advantage?

What advantage?
Whose advantage? His or theirs? OP's or us?

Essay subject: compare and contrast the ethics and morality of claiming you created something when it was actually created by (a) someone else (b) an LLM.

Advantage: writing semi-coherent stories or English in the first place. Expression of gross facts based on human input. Saving time.

This is primarily to the OP's advantage, in case his/her English is poor. But it could be to our advantage if that aids communication.

The probable disadvantages overwhelm those possible advantages.

If the OP's English is sufficiently poor[1] that ChatGPT/LLM could help, then it is likely to be too poor for him to understand the replies. In that case he might use ChatGPT/LLM to summarise the responses and translate them. (See those Alex cartoons!)

"Out of sight, out of mind" and "the spirit is willing but the flesh is weak" when translated to Russian and back traditionally reappear as "invisible idiot" and "good vodka but bad meat".

[1]That's unlikely since the OP is displaying the Indian flag, and English is the lingua franca within India.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 3698
  • Country: gb
  • Doing electronics since the 1960s...
Re: interrupt latency, and how can it be minimized?
« Reply #36 on: September 15, 2023, 08:59:21 pm »
It is currently trivial for an intelligent human to detect chatgpt, and thus to detect any cheating.

My GF used to be a Masters/PhD supervisor at a top level UK univ and cheating was endemic in all essay based postgrad stuff - basically all humanities Masters. I saw a lot of the "work". PhD is harder, still, at top level univs...

The issue is not in detection of it, but in the univ being unable to chuck out cheaters who are paying 10k-30k a year for being there :) It is really tough. The univ would lose perhaps 50% of its income immediately. Basically the majority of students from certain parts of the world (like India).

What I don't get is why the mods here allow this. This forum is only a bit bigger than one on which I am an admin (not electronics) and on ours we just kick out these cases immediately.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14481
  • Country: fr
Re: interrupt latency, and how can it be minimized?
« Reply #37 on: September 15, 2023, 09:59:24 pm »
It is currently trivial for an intelligent human to detect chatgpt, and thus to detect any cheating.

Yes. Although when it comes to HR in general... ;D

My GF used to be a Masters/PhD supervisor at a top level UK univ and cheating was endemic in all essay based postgrad stuff - basically all humanities Masters. I saw a lot of the "work". PhD is harder, still, at top level univs...

The issue is not in detection of it, but in the univ being unable to chuck out cheaters who are paying 10k-30k a year for being there :) It is really tough. The univ would lose perhaps 50% of its income immediately. Basically the majority of students from certain parts of the world (like India).

Yes, this has become an issue even for PhDs. Plagiarism is not a new problem, but ChatGPT stuff is making it even easier - students don't even need to actually read books/articles to copy parts of them.

What I don't get is why the mods here allow this. This forum is only a bit bigger than one on which I am an admin (not electronics) and on ours we just kick out these cases immediately.

I would be in favor of kicking them out, but it's not a simple matter. First, being overexposed to ChatGPT-generated content, we may start detecting it everywhere, even when it's not actually that. Second, EEVBlog is a business, and it's currently hard for any business to decide to ban "AI" altogether, especially if it has a wide "customer" base rather than a niche, in part for fear of being seen as retrograde and thus potentially losing users/customers. It's a very real problem, and it sucks.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1641
  • Country: nl
Re: interrupt latency, and how can it be minimized?
« Reply #38 on: September 16, 2023, 12:52:41 pm »
Well it appears the original content of this thread is dead anyway.  :-//

It is currently trivial for an intelligent human to detect chatgpt, and thus to detect any cheating.

My GF used to be a Masters/PhD supervisor at a top level UK univ and cheating was endemic in all essay based postgrad stuff - basically all humanities Masters. I saw a lot of the "work". PhD is harder, still, at top level univs...

The issue is not in detection of it, but in the univ being unable to chuck out cheaters who are paying 10k-30k a year for being there :) It is really tough. The univ would lose perhaps 50% of its income immediately. Basically the majority of students from certain parts of the world (like India).

What I don't get is why the mods here allow this. This forum is only a bit bigger than one on which I am an admin (not electronics) and on ours we just kick out these cases immediately.

I often hear stories about how ignorant students can be.

My local university has automated checking tools for bachelor courses like programming. People copy-paste other people's code complete with "Made by: [other person]". Not all submissions are manually reviewed, but when a student has half a dozen failed attempts and then a 100% test pass rate on the final attempt, that raises some suspicions :palm:
Or another group writes a report with bad grammar, but with some parts written very well. Is that due to the language-skill difference between the two students writing their respective parts? Or should I google those good sentences to uncover a PhD thesis that was not referenced/quoted? If it was referenced properly, it was not fraud, but it probably also wouldn't get any marks for original content :palm:

Not to mention the many teachers performing statistics on their written exams, checking for a multimodal distribution and for covariance in which answers were pass/fail. A classic way to uncover whether a majority of the class is cheating |O .

To be honest, reusing others' work doesn't have to be the end of the world. But crediting what is yours and what is not is where most fail. The one is fraud, the other is zero marks, as students are required to deliver original work. So having your homework answered by people on a forum is not terrific either way: the student can choose between committing fraud (no reference) or zero marks (his homework done by us).

I'm a bit split on what to think about farming technical content from forums. From this academic-training perspective it doesn't sound great. But looking at the development of ChatGPT as a new tool, I can see some potential upsides in the future.. so I'm interested in where it is heading.
« Last Edit: September 16, 2023, 12:55:31 pm by hans »
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 3698
  • Country: gb
  • Doing electronics since the 1960s...
Re: interrupt latency, and how can it be minimized?
« Reply #39 on: September 16, 2023, 06:02:54 pm »
ChatGPT is no "AI" - it is purely a word-group analyser and classifier. If you ask it how many legs a cat has, it will reply correctly without having any idea of what a cat is, or what a leg is.

On plagiarism, the hard bit is students with money purchasing custom essays. This is undetectable if they do it all the time. The essays are often written by retired lecturers ;)
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: interrupt latency, and how can it be minimized?
« Reply #40 on: September 16, 2023, 06:46:15 pm »
I started writing a rant about University stuff, but decided it wasn't worth reading.  Let's just say that I'm deeply, truly disappointed in them.

I read the other thread by the OP.  Remember, I'm the wonky kind of person who cannot really tell if some text was written by ChatGPT or not, because I seem to lack some kind of social-subtext detector for written English (a subtext which ChatGPT currently gets wrong, and which "obviously" marks its output as GPT-originating for those who can detect it).

It seems to me they are indeed trying to find somebody to do their thinking for them, and based on the change in the written English used, are either using ChatGPT or similar to reframe their questions –– I suspect because they've gotten pushback in other forums (and instead of understanding it is because trying to get others to do their work for them is not okay, they assume it is because of their imperfect language skills or similar) ––, or have taken a very intensive English writing course for the last half year or so.  I think the latter is quite unlikely considering the whole picture, and the former the most likely scenario.

This is at the very root of what afflicts our higher education worldwide right now.

Administration, researchers, and even lecturers are focusing on appearances, with actual actions being secondary.
Can you blame students for adapting to that, and using the tools available to further themselves within the existing rule framework?

(Here in Finland, this means that using bad words in a private SMS is considered worse than barbecuing your neighbor's cat and spray painting "We killed your cat" to their door.  I may have lost a friend recently, because they stated they refuse to use a store whose owner has posted some loonie magazines and articles, but is perfectly okay with a store whose CEO has beaten up a defenseless man in handcuffs for ideological reasons and got away with it.  Actions do not matter, only public speech and appearances does.  I cannot accept that worldview at all.)

I for one am willing to help absolutely anyone trying to learn, to integrate new knowledge so they can apply it in solving problems, to discover new useful knowledge using the scientific method, and to create new and useful tools using traditional (sound!) engineering principles.
But not those playing the appearance game, trying to exploit others for their own profit.  I'm just done with that.  Even if it means I have to become a hermit potato farmer in the backwoods of nowhere.

This does mean that sometimes I may answer someone whom others obviously see as playing that game; but I do so only when I consider it possible or even likely that others who are trying to learn, and not play such games, will encounter the text later, via a web search or similar.  I have not participated in stackoverflow.com or math.stackexchange.com for almost five years now, but as I used the same strategy there to "answer" a question –– I typically tried to answer the underlying, larger question, rather than the detail the asker posted about ––, I do still get an email every now and then asking about some specific detail or other, obviously showing a deeper interest in the problem than just completing their homework/essay/thesis.  To me, this indicates the approach is worthwhile.

It would be interesting to find a better approach, especially for real-life face-to-face contacts with coworkers and colleagues: a way to help them learn, without doing any of their work for them.
 
The following users thanked this post: peter-h

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: interrupt latency, and how can it be minimized?
« Reply #41 on: September 16, 2023, 08:01:34 pm »
ChatGPT is no "AI"

It's a neural network, isn't it? A neural network is AI.
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 3698
  • Country: gb
  • Doing electronics since the 1960s...
Re: interrupt latency, and how can it be minimized?
« Reply #42 on: September 16, 2023, 08:37:54 pm »
OK, but there is no "world model" or whatever it is now called. You can evidently get a long way this way, but then you hit the buffers totally.
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #43 on: September 16, 2023, 08:38:16 pm »
ChatGPT is no "AI" - it is purely a word-group analyser and classifier. If you ask it how many legs a cat has, it will reply correctly without having any idea of what a cat is, or what a leg is.

The same can be said for us :)

Attempting to define AI is as useful as attempting to define pornography.

How many legs do cats have? Is this a cat?

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #44 on: September 16, 2023, 08:41:37 pm »
OK, but there is no "world model" or whatever it is now called. You can evidently get a long way this way but then you hit the buffers totally.

Do we have one? Is yours the same as an infant's, or Donald Trump's?

Can submarines swim?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: interrupt latency, and how can it be minimized?
« Reply #45 on: September 16, 2023, 09:34:11 pm »
ChatGPT is no "AI"
It's a neural network, isn't it? A neural network is AI.
ChatGPT is a neural network that is constructed using huge amounts of existing written text.  It is a language model.

Many image recognizers, like the optical character recognizers that turn images into editable text, also rely heavily on neural networks.  Are they AI too?

No, not all neural networks are AI.  The only thing we know is that all intelligent beings we have thus far officially recognized have biological neural networks.  Not all biological neural networks are intelligent.  Cetaceans have larger neural networks than humans, and even have a similar neocortex and spindle neurons, yet they seem to be less intelligent than humans.  Corvids and some parrots are demonstrably more intelligent than many mammals, yet have a fraction of the neural network size and complexity.  Thus, there is no guarantee even a large neural network is intelligent: something more (in the structure, operation, and learning of the network) is required.  Just stuffing a huge amount of data into a neural network model is not sufficient for intelligence to arise.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: interrupt latency, and how can it be minimized?
« Reply #46 on: September 16, 2023, 10:36:33 pm »
Many image recognizers, like the optical character recognizers that turn images into editable text, also rely heavily on neural networks.  Are they AI too?

Without any doubt. They can make decisions on their own, without relying on rules established by programmers.

Certainly not as intelligent as humans. Humans are above everything else because they have the ability to think logically. No other known creature can. This gives humans control over the less intelligent beings inhabiting the planet.

ChatGPT cannot think logically yet, so it is not as intelligent as humans. But it is far more intelligent than worms or image-recognizing devices. And it is improving and learning, so it will keep getting better, getting closer to humans.

Thus, there is no guarantee even a large neural network is intelligent: something more (in the structure, operation, and learning of the network) is required.  Just stuffing a huge amount of data into a neural network model is not sufficient for intelligence to arise.

Certainly no guarantee. But it is quite possible that a large neural network gets to a level of intelligence above what humans have.
« Last Edit: September 16, 2023, 10:38:38 pm by NorthGuy »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: interrupt latency, and how can it be minimized?
« Reply #47 on: September 16, 2023, 11:09:26 pm »
Many image recognizers, like the optical character recognizers that turn images into editable text, also rely heavily on neural networks.  Are they AI too?
Without any doubt.
No.  The psychometric definition of generalized intelligence is the ability to perform cognitive tasks.  The generally accepted definition of intelligence is basically the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving, and can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context. See Wikipedia psychometric g factor and intelligence articles.

The only non-human-specific definition of intelligence that makes any sense (i.e. is both meaningful and measurable) is the ability to solve problems that the individual has not encountered or been instructed about before, at least in the same form.  Everything else in the accepted definition is context-dependent, and we don't have the ability to define contexts for other species.  (Hell, some argue we don't do so correctly for all human cultures yet, either.)

It is meaningful, because solving new problems allows an individual to exceed the boundaries their predecessors had, without changing biologically.

Humans are above everything else because they have the ability to think logically.
Demonstrably incorrect.  Even corvids have shown the ability for logical deduction and reasoning.

Humans are the apex species because of their ability to manipulate their environment.  We are hands, not minds.
« Last Edit: September 16, 2023, 11:12:27 pm by Nominal Animal »
 
The following users thanked this post: hans, Silenos

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: interrupt latency, and how can it be minimized?
« Reply #48 on: September 17, 2023, 01:06:45 am »
... and can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

That is exactly what an image recognition neural network does. It perceives images, processes and retains the information by re-configuring neuron weights, which it can later apply to the classification of further images. More learning produces better recognition. If the images change over time, it will exhibit adaptive behaviour.
 

Online dietert1

  • Super Contributor
  • ***
  • Posts: 2073
  • Country: br
    • CADT Homepage
Re: interrupt latency, and how can it be minimized?
« Reply #49 on: September 17, 2023, 06:13:44 am »
Imagine you had an AI-type supervisor in an embedded system that adjusts process priorities to minimize interrupt latency, I mean dynamically, observing execution paths and patterns in real time.

Regards, Dieter
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #50 on: September 17, 2023, 11:20:55 am »
Imagine you had an AI-type supervisor in an embedded system that adjusts process priorities to minimize interrupt latency, I mean dynamically, observing execution paths and patterns in real time.

You don't need to imagine!

The Java HotSpot Virtual Machine has had that since 1999 - i.e. a quarter of a century.

From wackypedia... "as it runs Java bytecode, as with the Self VM, HotSpot continually analyzes the program's performance for hot spots which are executed often or repeatedly. These are then targeted for optimizing, leading to high-performance execution with a minimum of overhead for less performance-critical code"
https://en.wikipedia.org/wiki/HotSpot_(virtual_machine)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: interrupt latency, and how can it be minimized?
« Reply #51 on: September 17, 2023, 01:09:47 pm »
Imagine you had an AI-type supervisor in an embedded system that adjusts process priorities to minimize interrupt latency, I mean dynamically, observing execution paths and patterns in real time.

It would take less silicon to switch to an FPGA and thereby get rid of interrupts and priorities altogether.
« Last Edit: September 17, 2023, 01:11:45 pm by NorthGuy »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19511
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: interrupt latency, and how can it be minimized?
« Reply #52 on: September 17, 2023, 01:34:52 pm »
Imagine you had an AI-type supervisor in an embedded system that adjusts process priorities to minimize interrupt latency, I mean dynamically, observing execution paths and patterns in real time.

It would take less silicon to switch to an FPGA and thereby get rid of interrupts and priorities altogether.

And less brainstrain to switch to XMOS xCORE processors (q.v.), of course! A hard-realtime hardware/software ecosystem without FPGAs :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14481
  • Country: fr
Re: interrupt latency, and how can it be minimized?
« Reply #53 on: September 17, 2023, 08:21:37 pm »
Imagine you had an AI-type supervisor in an embedded system that adjusts process priorities to minimize interrupt latency, I mean dynamically, observing execution paths and patterns in real time.

You don't need to imagine!

The Java HotSpot Virtual Machine has had that since 1999 - i.e. a quarter of a century.

From wackypedia... "as it runs Java bytecode, as with the Self VM, HotSpot continually analyzes the program's performance for hot spots which are executed often or repeatedly. These are then targeted for optimizing, leading to high-performance execution with a minimum of overhead for less performance-critical code"
https://en.wikipedia.org/wiki/HotSpot_(virtual_machine)

Adaptive control systems are decades old. If it sounds trendy to call them "AI" these days, let's have at it.
 

