Why don't we just create the whole system on interrupts and make the main loop sleep?

Thread Starter

microcontroller60

Joined Oct 1, 2019
62
Most embedded systems use interrupts, which are typically triggered by some peripheral device. A lot of interrupt tutorials/examples always say that ISR code must be kept short.
Why do short interrupt service routines matter?
Why don't we just create the whole system on interrupts and make the main loop sleep?
 

atferrari

Joined Jan 6, 2004
4,764
Most embedded systems use interrupts, which are typically triggered by some peripheral device. A lot of interrupt tutorials/examples always say that ISR code must be kept short.
Why do short interrupt service routines matter?
Why don't we just create the whole system on interrupts and make the main loop sleep?
I am not even sure your idea makes sense. Think of your own work: at any given point it is a loop of work that you interrupt only when needed.

What triggers interrupts varies enormously depending on the application. It is not only peripherals.
 

geekoftheweek

Joined Oct 6, 2013
1,201
The reason they say to keep interrupt routines short is the nature of interrupts themselves. Ideally you don't want to be doing a bunch of extra work in the interrupt routine that prevents another interrupt from getting the attention it needs. Some parts have multiple interrupt priorities and can interrupt in the middle of an already-running interrupt routine, but with too much of that you'll eventually end up with mangled memory somewhere along the line, since each routine has to save the important registers at its start and restore them at its end.

Interrupts should really be saved for time critical processing. The rest of the program should be in the main loop.
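For instance, a minimal generic-C sketch of that split (not tied to any particular part; the vector/attribute syntax is omitted) might look like this: the ISR only records the event, and the main loop does the longer processing.
Code:
#include <stdint.h>
#include <stdbool.h>

static volatile bool tick_flag;         /* set by the ISR, cleared by main */

void timer_isr(void)                    /* the time-critical part: a few instructions */
{
    tick_flag = true;                   /* just note that the event happened */
}

int main(void)
{
    for (;;) {
        if (tick_flag) {                /* the slow work stays out of the ISR */
            tick_flag = false;
            /* update displays, run control calculations, service queues, ... */
        }
        /* could sleep here until the next interrupt wakes the CPU */
    }
}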
 

nsaspook

Joined Aug 27, 2009
13,079
Most embedded systems use interrupts, which are typically triggered by some peripheral device. A lot of interrupt tutorials/examples always say that ISR code must be kept short.
Why do short interrupt service routines matter?
Why don't we just create the whole system on interrupts and make the main loop sleep?
Short interrupt service routines matter most in systems without vectored/priority interrupts and with simple loop processing. What's important is to limit the time the interrupt system is disabled when processing time-critical interrupts. Even on a lowly MCU with only high/low interrupt levels you can do the processing in the low-priority interrupt, using a state machine to maintain a checkpoint/restart point across high-level interrupts. This requires software design that accounts for possible non-atomic updates and race conditions in shared data. Keeping the ISR short (little processing, I/O only) minimizes the chance of subtle errors that cause maddening system failures.

We can create the whole system on interrupts and make the main loop sleep.
https://forum.allaboutcircuits.com/threads/how-to-use-interrupt.169318/post-1507098
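As a rough illustration of that low/high split (generic C, nothing vendor-specific; UART_DATA and the disable/enable macros are placeholders for the real register and intrinsics), the high-level ISR only hands off a byte, and the low-level handler advances a small state machine, reading the shared data inside a short critical section so a preempting interrupt can't see a half-updated value:
Code:
#include <stdint.h>

#define UART_DATA (*(volatile uint8_t *)0x4000u)   /* hypothetical receive register */
#define disable_interrupts()   /* replace with the part's global-disable intrinsic */
#define enable_interrupts()    /* replace with the part's global-enable intrinsic */

enum rx_state { WAIT_HEADER, WAIT_LENGTH, WAIT_PAYLOAD };

static enum rx_state    state = WAIT_HEADER;
static uint8_t          length, received;
static volatile uint8_t shared_byte;    /* written only by the high-level ISR */
static volatile uint8_t byte_pending;

void fast_isr(void)                     /* high level: may preempt slow_isr() */
{
    shared_byte  = UART_DATA;           /* grab the data, nothing more */
    byte_pending = 1;                   /* (a real design would queue, not overwrite) */
}

void slow_isr(void)                     /* low level: one state-machine step per byte */
{
    uint8_t b;

    disable_interrupts();               /* short critical section: atomic handoff */
    if (!byte_pending) { enable_interrupts(); return; }
    b = shared_byte;
    byte_pending = 0;
    enable_interrupts();

    switch (state) {                    /* checkpoint/restart: state survives preemption */
    case WAIT_HEADER:  if (b == 0xAA) state = WAIT_LENGTH;             break;
    case WAIT_LENGTH:  length = b; received = 0; state = WAIT_PAYLOAD; break;
    case WAIT_PAYLOAD: if (++received >= length) state = WAIT_HEADER;  break;
    }
}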
 
Last edited:

Papabravo

Joined Feb 24, 2006
21,157
The systems that have been implemented in the manner you suggest have serious performance flaws including lockup and dropped I/O transactions. I cannot for the life of me imagine why anybody would think this was a reasonable idea.
 

nsaspook

Joined Aug 27, 2009
13,079
The systems that have been implemented in the manner you suggest have serious performance flaws including lockup and dropped I/O transactions. I cannot for the life of me imagine why anybody would think this was a reasonable idea.
Improperly designed and implemented systems on inadequate hardware will have these flaws; properly designed systems using sound systems-programming techniques, like the Linux kernel on machines with proper hardware, do not.

https://www.cs.utah.edu/~regehr/papers/interrupt_chapter.pdf
Interrupts have some inherent drawbacks from a software engineering point of view. First, they are relatively non-portable across compilers and hardware platforms. Second, they lend themselves to a variety of severe software errors that are difficult to track down since they manifest only rarely. These problems give interrupts a bad reputation for leading to flaky software: a significant problem where the software is part of a highly-available or safety-critical system. The purpose of this chapter is to provide a technical introduction to interrupts and the problems that their use can introduce into an embedded system, and also to provide a set of design rules for developers of interrupt-driven software. This chapter does not address interrupts on shared-memory multiprocessors, nor does it delve deeply into concurrency correctness: the avoidance of race conditions and deadlocks. Concurrency correctness is the subject of a large body of literature. Although the vast majority of this literature addresses problems in creating correct thread-based systems, essentially all of it also applies to interrupt-driven systems.
 

Deleted member 115935

Joined Dec 31, 1969
0
Bottom line:
when your code goes into the interrupt routine, what happens if other interrupts come in?

It depends on the interrupt controller.

You could have a hierarchy of interrupts, such that IR1 has higher priority than IR4. But then if you're in IR4 and IR1 happens, do you abort IR4 and jump to IR1, or finish IR4 and then go to IR1?

Then what happens if you get two IR1s while you're in IR4?

Then you're into the world of schedulers.

Interrupts are a statistical system problem: there is always a chance that you can miss an interrupt, and your system has to be resilient enough to cope with that.

The longer you're in an interrupt, the more chance you have of missing another interrupt.

Just to add to the load, on some systems you have to disable interrupts while you're in the interrupt routine.

So the golden rule of all interrupts is: keep them short.

An example: a data receiver receives characters and raises an interrupt on each character received. If you dealt with the character in the ISR, say added it to a string and modified it, you would have a higher chance of missing a character than if your ISR just noted the number of characters in a buffer and the main routine emptied the receive buffer of the right number of characters. All the ISR has to do is increment a count on each new character received, and the main loop notes whether there have been more characters since it last looked. That way only the ISR changes the character count; the main loop only reads the count, taking account of the integer counter wrapping.
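A rough sketch of that receive scheme in C (generic; UART_DATA is a stand-in for the real receive register): the ISR is the only writer of the count and buffer, the main loop only reads the count, and unsigned arithmetic takes care of the wrap.
Code:
#include <stdint.h>

#define BUF_SIZE  64u                               /* power of two, so indexes wrap cleanly */
#define UART_DATA (*(volatile uint8_t *)0x4000u)    /* hypothetical receive register */

static volatile uint8_t rx_buf[BUF_SIZE];
static volatile uint8_t rx_count;                   /* total bytes received; wraps at 256 */
static uint8_t          rd_count;                   /* the main loop's private tally */

void uart_rx_isr(void)                              /* kept short: store the byte, bump the count */
{
    rx_buf[rx_count % BUF_SIZE] = UART_DATA;
    rx_count++;                                     /* unsigned wrap-around is well defined */
}

static void handle_byte(uint8_t b) { (void)b; /* parsing happens at leisure here */ }

void main_loop_poll(void)                           /* called from the main loop */
{
    while ((uint8_t)(rx_count - rd_count) != 0u) {  /* correct even across the wrap */
        handle_byte(rx_buf[rd_count % BUF_SIZE]);
        rd_count++;
    }
}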
 

nsaspook

Joined Aug 27, 2009
13,079
My point is that all interrupts are not created equal, and "short" is relative to the type ("fast" with interrupts disabled/blocked versus "slow" with possible preemption/unblocked), the needed response time, and the processing domain of the interrupt. Linux resolves the problem by splitting handlers into top/bottom halves. The top half handles the 'fast' part of an interrupt (new character/packet received) while the bottom half runs as a 'slow' deferred process (tasklets). For a small embedded processor you don't need the full OS scheduling model: 'fast' interrupts that are short are handled by the I/O interrupt system at high priority, while the 'slow' bottom-half work runs in a low-priority interrupt context that provides 'cooked' data abstraction to the main process.
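On Linux that split looks roughly like this (a sketch against the classic three-argument tasklet API of older kernels; recent kernels have been moving toward threaded IRQs and workqueues): the top half, registered with request_irq(), does the minimum and schedules a tasklet for the protocol-layer work.
Code:
#include <linux/interrupt.h>

static void rx_bottom_half(unsigned long data)
{
    /* "slow" half: protocol-layer processing, runs later with interrupts enabled */
}

static DECLARE_TASKLET(rx_tasklet, rx_bottom_half, 0);

static irqreturn_t rx_top_half(int irq, void *dev_id)
{
    /* "fast" half: acknowledge the device and capture the raw data, nothing more */
    tasklet_schedule(&rx_tasklet);      /* defer the heavy lifting */
    return IRQ_HANDLED;
}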
 

nsaspook

Joined Aug 27, 2009
13,079
The simple answer is not all tasks are interrupt driven.

If you want to perform an FFT calculation you would not do this in an ISR.
You would not do that in a 'fast' ISR, but there is no requirement for that computation to live in main-loop code. A low-priority 'slow' interrupt from a timer could easily run the computation at specified times, while 'fast' interrupts handle the ADC conversions and sampling periods.
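Something like this, as a generic-C sketch (ADC_RESULT and the FFT routine are placeholders, and a real design would double-buffer the sample block): the 'fast' ADC ISR only stores samples, and the 'slow' low-priority handler runs the block computation when a block is ready.
Code:
#include <stdint.h>

#define BLOCK      128u
#define ADC_RESULT (*(volatile uint16_t *)0x4010u)  /* hypothetical result register */

static volatile uint16_t samples[BLOCK];
static volatile uint16_t idx;
static volatile uint8_t  block_ready;

void adc_isr(void)                       /* "fast": one sample per conversion, nothing else */
{
    samples[idx++] = ADC_RESULT;
    if (idx >= BLOCK) { idx = 0; block_ready = 1; }
}

static void run_fft(volatile const uint16_t *buf) { (void)buf; /* the long computation */ }

void slow_timer_isr(void)                /* "slow": low priority, preemptible by adc_isr() */
{
    if (block_ready) {
        block_ready = 0;
        run_fft(samples);                /* heavy math, kept out of the fast path */
    }
}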
 

cmartinez

Joined Jan 17, 2007
8,218
Why don't we just create the whole system on interrupts and make the main loop sleep?
Because not everything a system does has to do with input states. And even if it did, it is quite clear (as some have already mentioned) that long interrupt routines run the risk of colliding with other routines in the program. I have always favored a polling approach to programming instead, so as to have a (for me) more clearly structured program. But many times interrupts make better sense, both logically and power-wise.
 
Last edited:

John P

Joined Oct 14, 2008
2,025
I did this as a beginner--wrote a program to control a piece of equipment in 8051 assembler. It seemed to me that a reasonable way to accomplish it was to do nothing in the main loop, but have an interrupt that occurred 1000 times a second and did everything. My one concern was that the interrupt absolutely must finish in less than 1msec! I had a test pin that would indicate busy versus idle, and I checked that with a scope to make sure it wasn't taking too long. I now realize this isn't the right way to do it, but it actually worked OK.

My main loop had the form
Code:
KAFKA:
  {Do a few minimal things}
  AJMP KAFKA
I liked the label KAFKA. The idea was "Wait forever". Engineers can be literate!
 

nsaspook

Joined Aug 27, 2009
13,079
I did this as a beginner--wrote a program to control a piece of equipment in 8051 assembler. It seemed to me that a reasonable way to accomplish it was to do nothing in the main loop, but have an interrupt that occurred 1000 times a second and did everything. My one concern was that the interrupt absolutely must finish in less than 1msec! I had a test pin that would indicate busy versus idle, and I checked that with a scope to make sure it wasn't taking too long. I now realize this isn't the right way to do it, but it actually worked OK.

My main loop had the form
Code:
KAFKA:
  {Do a few minimal things}
  AJMP KAFKA
I liked the label KAFKA. The idea was "Wait forever". Engineers can be literate!
If it worked as expected without errors, IMO it's hard to see what's 'wrong' with it.
 

Deleted member 115935

Joined Dec 31, 1969
0
There are different parts to programming.
Writing and testing the code is only the first part;
documenting is the second.

And the bit that everyone forgets: code lives "forever".
I've been given jobs to modify code that's 30 years old.

A bit of code that needs a scope to prove it's OK is, as you have learnt, not maintainable.
Well done you; it's amazing how many coders don't recognise that.
 

nsaspook

Joined Aug 27, 2009
13,079
There are different parts to programming.
Writing and testing the code is only the first part;
documenting is the second.

And the bit that everyone forgets: code lives "forever".
I've been given jobs to modify code that's 30 years old.

A bit of code that needs a scope to prove it's OK is, as you have learnt, not maintainable.
Well done you; it's amazing how many coders don't recognise that.
If you're writing low-level embedded software that directly manipulates hardware, you'd better have a scope to prove it's correct. Relying on software tools for hardware correctness is a minefield of bad software and hardware errata.

One of endless examples:
https://www.intel.com/content/www/u...nios-ii/errata/ips-niosii-51-er-hardware.html

IMO what's as important as absolute correctness is an engineering margin of safety in software designs. Software that's 100% correct but fragile in the face of hardware faults can at times be more destructive than systems that anticipate bugs and handle errors in a safe manner. Small cost-sensitive embedded systems are notorious for small compute margins, where programming efficiencies are a much higher priority than program structure and maintainability of source code.
https://blog.regehr.org/archives/50
Margin of safety is a fundamental engineering concept where a system is built to tolerate loads exceeding the maximum expected load by some factor. For example, structural elements of buildings typically have a margin of safety of 100%: they can withstand twice the expected maximum load. Pressure vessels have more margin, in the range 250%-300%, whereas the margin for airplane landing gear may be only 25%. (All these examples are from the Wikipedia article.)

We can say that a software system has a margin of safety S with respect to some external load or threat L only when the expected maximum load Lmax can be quantified and the system can be shown to function properly when subjected to a load of (1+S)Lmax. Software systems are notoriously low on margin: a single flaw will often compromise the entire system. For example, a buffer overflow vulnerability in a networked application can permit an attacker to run arbitrary code at the same privilege level as the application, subverting its function and providing a vector of attack to the rest of the system.
 
Last edited:

Thread Starter

microcontroller60

Joined Oct 1, 2019
62
I had thought that interrupts are used for all tasks in a real-time operating system. Interrupts are used to set priorities and do time-sharing, so the main part of the program is the interrupts; that's why I thought the main loop should just sleep.
 

nsaspook

Joined Aug 27, 2009
13,079
It depends on what you mean by a real-time operating system, and on what type of hardware. The answer is not a simplistic 'keep it short' in all cases, or 'run all code in an interrupt while main sleeps'. There are entire bookcases of software-engineering books about embedded system design and the specific subject of handling interrupts. Take an operating systems class or two for an introduction to this complex subject.

IMG_20200820_213852.jpg

https://www.thriftbooks.com/w/operating-system-concepts-addison-wesley-series-in-computer-science_james-lyle-peterson/646705/item/3039904/?mkwid=|dc&pcrid=448964098780&pkw=&pmt=&slid=&plc=&pgrid=105775167313&ptaid=pla-924743128016&gclid=Cj0KCQjwvvj5BRDkARIsAGD9vlLW5-7If9gOMrV4-SPGDFmcCy25zhPXePXzecqOsJ1WT1qQRDQFHukaAnquEALw_wcB#isbn=0201061988&idiq=3039904
 

Deleted member 115935

Joined Dec 31, 1969
0
Just to add to the confusion, and maybe a little off topic:
I've had to debug a system that was fantastic at recovering from faults;
the bit that got missed off was logging the faults and making them accessible.

In hindsight it was easy to see and fix, but the result was that the systems all fell over spectacularly after having been so stable for years; it turned out the fault was the last straw. So if you make a self-check/repair/recover system, remember to keep an easily visible log of faults.

As a sideline,
remember Alarm 1202?

https://www.discovermagazine.com/the-sciences/apollo-11s-1202-alarm-explained
 

nsaspook

Joined Aug 27, 2009
13,079
https://forum.allaboutcircuits.com/threads/apollo-11s-50th-anniversary-the-1202-error.161613/

https://forum.allaboutcircuits.com/...-work-on-so-many-machines.170826/post-1525640

https://www.americanscientist.org/article/moonshot-computing
The cause of this behavior was not a total mystery. It had been seen in test runs of the flight hardware. Two out-of-sync power supplies were driving a radar to emit a torrent of spurious pulses, which the AGC dutifully counted. Each pulse consumed one computer memory cycle, lasting about 12 microseconds. The radar could spew out 12,800 pulses per second, enough to eat up 15 percent of the computer’s capacity. The designers had allowed a 10 percent timing margin.

Much has been written about the causes of this anomaly, with differing opinions on who was to blame and how it could have been avoided. I am more interested in how the computer reacted to it. In many computer systems, exhausting a critical resource is a fatal error. The screen goes blank, the keyboard is dead, and the only thing still working is the power button. The AGC reacted differently. It did its best to cope with the situation and keep running. After each alarm, the BAILOUT routine purged all the jobs running under the Executive, then restarted the most critical ones. The process was much like rebooting a computer, but it took only milliseconds.
Recalling the episode of the 1202 alarms, I asked if the key might be to seek resilience rather than perfection. If they could not prevent all mistakes, they might at least mitigate their harm. This suggestion was rejected outright. Their aim was always to produce a flawless product.

I asked Hamilton similar questions via email, and she too mentioned a “never-ending focus on making everything as perfect as possible.” She also cited the system of interrupts and priority-based multitasking, which I had been seeing as a potential trouble spot, as ensuring “the flexibility to detect anything unexpected and recover from it in real time.”
 

402DF855

Joined Feb 9, 2013
271
What exactly do you mean by sleep? In an embedded system, "sleep" usually implies putting the processor into a low power mode with reduced or non-existent activity.

In FreeRTOS, there is an "idle" task which runs when no other task needs processing time. Various low-priority activities take place there, for instance bookkeeping-type activities.
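A minimal sketch of that hook (assuming configUSE_IDLE_HOOK is set to 1 in FreeRTOSConfig.h): the idle hook runs whenever no other task is ready, and it must never block.
Code:
#include "FreeRTOS.h"
#include "task.h"

/* Called by the scheduler from the idle task whenever nothing else is ready.
 * Must not call any blocking API; keep it brief. */
void vApplicationIdleHook(void)
{
    /* low-priority housekeeping: kick a watchdog, gather stats,
     * or drop the core into a low-power wait-for-interrupt state */
}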

As has been mentioned, I've seen multi-ton high-speed control systems designed around a nominal 1 kHz control loop driven by a timer ISR. It was desirable to have all functions complete within 1 ms, but occasional overruns were usually tolerable as long as they weren't too egregious.
 