Timer interrupt for 1 day delay

Thread Starter

ckilerci

Joined Jun 20, 2014
6
Hi everyone,

Can someone help me understand how to set up a timer interrupt that occurs every hour? As far as I understand, you can achieve delays of some number of ms with timer interrupts. I am quite puzzled about what happens when the timer overflows because I have exceeded the maximum count. Also, if I repeat the timer interrupt many times to build up the required delay while I am doing other processing, do I lose timer precision because the interrupt keeps occurring, let's say every 1 s?

Thanks
 

Art

Joined Sep 10, 2007
806
You don't lose timer precision on the interrupt-driven timer, only on the programmatic stuff you're writing, such as serial comms, which will keep getting interrupted.

You can count overflows in another variable; in the interrupt, increment the count and reload the timer, adding back the time it took to save & restore your program state.
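
Something like this rough sketch (C, XC8-style syntax; the register/bit names vary between PIC parts and compilers, and RELOAD_VALUE and ISR_OVERHEAD_TICKS are just placeholder constants):

Code:
#include <xc.h>                   /* device header; assumes the XC8 compiler */

#define RELOAD_VALUE        6     /* pick for your desired tick period       */
#define ISR_OVERHEAD_TICKS  2     /* counts lost entering/servicing the ISR  */

volatile unsigned int overflow_count = 0;

void __interrupt() isr(void)
{
    if (T0IF) {                                    /* Timer0 overflowed      */
        T0IF = 0;                                  /* clear the flag         */
        TMR0 = RELOAD_VALUE + ISR_OVERHEAD_TICKS;  /* compensated reload     */
        overflow_count++;                          /* tally this overflow    */
    }
}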
 

Thread Starter

ckilerci

Joined Jun 20, 2014
6
You don't lose timer precision on the interrupt-driven timer, only on the programmatic stuff you're writing, such as serial comms, which will keep getting interrupted.

You can count overflows in another variable; in the interrupt, increment the count and reload the timer, adding back the time it took to save & restore your program state.
So I can keep resetting the timer within the interrupt routine itself, while keeping a record of the overflows in a variable, until it all adds up to a delay of an hour, a day, or months.

Did I understand correctly Art?
 

ErnieM

Joined Apr 24, 2011
8,377
I actually have a product in production that is little more than a PIC-based timer. It uses a method similar to what Art suggests.

I used an on-chip timer to give me interrupts every 0.25 ms, and on each interrupt I increment my time variable. When that value reaches (or exceeds, "just to be sure") the target count, I trip the output.

These timers have outputs that vary from 100 ms to some 22 hours, and the delay accuracy is the same as the crystal (actually a ceramic resonator in that product) itself.

So for long intervals you would basically set up a timer interrupt for a period that is both long and divides evenly into your final time, and you can hit it fairly exactly every time.
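
In outline it's something like the sketch below (just a sketch: TICKS_PER_HOUR and trip_output() are made-up names, and a 0.25 ms tick gives 4000 ticks per second):

Code:
#define TICKS_PER_HOUR  (4000UL * 3600UL)   /* 4000 ticks/s * 3600 s          */

void trip_output(void);                     /* placeholder: drives the output */

volatile unsigned long tick_count = 0;

void timer_tick_isr(void)        /* called from the 0.25 ms timer interrupt   */
{
    tick_count++;
    if (tick_count >= TICKS_PER_HOUR) {     /* >= rather than ==, just to be sure */
        tick_count = 0;
        trip_output();
    }
}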
 

Thread Starter

ckilerci

Joined Jun 20, 2014
6
I thought about it, but I have limited board space to add another IC; I get the idea, though. Can you use several timers at the same time with the same method? What happens if two timer interrupts appear at the same time? Is there a priority between timer0 and timer1?
 

Papabravo

Joined Feb 24, 2006
21,159
I thought about it, but I have limited board space to add another IC; I get the idea, though. Can you use several timers at the same time with the same method? What happens if two timer interrupts appear at the same time? Is there a priority between timer0 and timer1?
The issue of simultaneous interrupts depends on the flavor of the microprocessor. In the mid-range 16F PIC family all interrupts vector to the same location. The code at that location checks each of the enabled interrupt sources and processes whichever one it deems the "highest" priority. The remaining interrupts stay pending until they can be serviced. This allows great flexibility in designing and implementing interrupt processing algorithms.

Other processors will vector directly to the highest-priority enabled interrupt while the others remain pending. Algorithmically it makes no difference whether the prioritization is done in hardware or firmware.
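
On a mid-range PIC the single interrupt routine ends up looking something like this sketch (XC8-style syntax; the flag names follow the usual convention but differ between parts, and the handlers are placeholders):

Code:
#include <xc.h>                  /* device header; assumes the XC8 compiler   */

void handle_timer1(void);        /* placeholder handler                       */
void handle_timer0(void);        /* placeholder handler                       */

void __interrupt() isr(void)
{
    /* Firmware prioritization: poll each enabled source's flag, highest
     * priority first. A flag not serviced on this pass stays set, so that
     * source remains pending and is picked up on a later interrupt. */
    if (TMR1IF) {                /* treat Timer1 as the "highest" priority    */
        TMR1IF = 0;
        handle_timer1();
    } else if (TMR0IF) {         /* Timer0 waits if Timer1 fired too          */
        TMR0IF = 0;
        handle_timer0();
    }
}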
 

Thread Starter

ckilerci

Joined Jun 20, 2014
6
Also, does it make any difference whether the interrupt service routine is written in assembly or in C, as far as processing speed is concerned?
 

shteii01

Joined Feb 19, 2010
4,644
Also, does it make any difference whether the interrupt service routine is written in assembly or in C, as far as processing speed is concerned?
Generally, assembly code is "tighter": it has less stuff, and because it has less stuff it executes faster. However, you can write the code in C, compile it, and assembly code will be generated for you; then you can look at that assembly and see if there is anything you can prune away.

Example from my uC class:
We were given an assignment to write code in assembly to convert hex to an ASCII character. In assembly it is not complicated, it is just long; it took a page or a page and a half of code. In C the same assignment takes just a couple of for loops.
Conclusion: assembly takes longer to write because it is detail oriented, but the result is code that does exactly what you want, the way you want it, with no extras. C is easier to write, but the ease comes at a price: you surrender detailed control of your code, and the compiler now interprets your code, not you.
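
For what it's worth, the C version of that nibble-to-ASCII conversion is only a few lines (this is just the obvious sketch, not the exact class solution):

Code:
/* Convert one hex nibble (0..15) to its ASCII character. */
char hex_to_ascii(unsigned char nibble)
{
    nibble &= 0x0F;                             /* keep only the low 4 bits */
    return (nibble < 10) ? ('0' + nibble)       /* '0'..'9'                 */
                         : ('A' + nibble - 10); /* 'A'..'F'                 */
}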
 
Last edited:

ErnieM

Joined Apr 24, 2011
8,377
Also, does it make any difference whether the interrupt service routine is written in assembly or in C, as far as processing speed is concerned?
Are you asking if the time interval varies for an interval of hours if you use C?

Ummm.... not in any noticeable manner.
 

Art

Joined Sep 10, 2007
806
I don't know how you'd go about adding instruction time to counters with a high-level language, where you don't actually know the assembly instruction times.

Say you had another interrupt source other than the timer (such as an IO pin),
or you wanted to increment a seconds, minutes, and hours counter for a display.
You need to know how long it takes to figure that out in your interrupt loop so
you can add that time to the counter after it has overflowed.

Are you asking if the time interval varies for an interval of hours if you use C?

Ummm.... not in any noticeable manner.
What does noticeable mean? Wouldn't noticeable depend on the application?
 

graham2550

Joined Jun 24, 2014
14
You could use a real-time clock (RTC) such as the DS1307; it's only an 8-pin DIL.
You can then poll it continuously, set a trigger for the time you want, and do whatever you need to do. It uses I2C, only a couple of wires.
Bonus: battery backup if the power fails.
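
A rough idea of the read side (the DS1307 sits at I2C address 0x68 and keeps its time registers in BCD starting at 0x00; i2c_write_byte() and i2c_read_bytes() are placeholders standing in for whatever I2C routines your platform provides):

Code:
#define DS1307_ADDR  0x68

/* Placeholder driver calls, not a real library API. */
void i2c_write_byte(unsigned char addr, unsigned char data);
void i2c_read_bytes(unsigned char addr, unsigned char *buf, unsigned char n);

static unsigned char bcd_to_bin(unsigned char b)
{
    return (unsigned char)((b >> 4) * 10 + (b & 0x0F));
}

void ds1307_read_time(unsigned char *h, unsigned char *m, unsigned char *s)
{
    unsigned char regs[3];

    i2c_write_byte(DS1307_ADDR, 0x00);     /* point at register 0x00         */
    i2c_read_bytes(DS1307_ADDR, regs, 3);  /* seconds, minutes, hours (BCD)  */

    *s = bcd_to_bin(regs[0] & 0x7F);       /* bit 7 is the clock-halt bit    */
    *m = bcd_to_bin(regs[1]);
    *h = bcd_to_bin(regs[2] & 0x3F);       /* assuming 24-hour mode          */
}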
 

ErnieM

Joined Apr 24, 2011
8,377
I don't know how you'd go about adding instruction time to counters with a high-level language, where you don't actually know the assembly instruction times.

Say you had another interrupt source other than the timer (such as an IO pin),
or you wanted to increment a seconds, minutes, and hours counter for a display.
You need to know how long it takes to figure that out in your interrupt loop so
you can add that time to the counter after it has overflowed.



What does noticeable mean? Wouldn't noticeable depend on the application?

OK, let's make up some numbers based upon the application being discussed: a timer for hour-long intervals. Let us assume:

8 MHz PIC clock source
1 hour timeout
1000 extra instructions to fire interrupt, recognize timeout, and turn off the output.

(Where "extra" is defined as code bloat of a routing badly written in C vs hand crafted assembler.)

The instruction rate is 1/4th of the clock, so the "extra" time overhead for using our high-level language is:

1000 instructions * 4 / 8 MHz = 0.0005 seconds

An interval of 1 hour lasts 3,600 seconds.

Thus the "error" due to code bloat is a whopping:

0.0005 seconds / 3,600 seconds * 100% ≈ 0.000014%


Such an error is way, way below any tolerance of the timing element (the crystal) itself and is therefore irrelevant to the overall error budget. Thus it is not noticeable in this application.

In other applications such as clocks there is some heartbeat interval (such as 1 second) that initiates an interrupt. These are typically hardware sources; I do not know of any that are software based.

As long as the ISR can handle the simple increment of time in under 1 second, the display can be updated in a regular way, again unnoticeable by any person watching the clock.

If you add external instrumentation you may be able to notice some bobble in the seconds update, but there are a myriad of simple ways to code against that too, such as making the update happen twice a second: the first one does the computation, the next updates the display. As the update is a fixed-length task, it would occur at regular intervals.
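
The split update could look something like this sketch (half_tick and the two helpers are made-up names):

Code:
void advance_clock(void);        /* placeholder: bump seconds/minutes/hours  */
void refresh_display(void);      /* placeholder: push the new time out       */

volatile unsigned char half_tick = 0;

void half_second_isr(void)       /* fires twice per second                    */
{
    /* Alternate between computing the new time and refreshing the display,
     * so the display write always lands at the same point in the cycle. */
    if (half_tick == 0)
        advance_clock();
    else
        refresh_display();

    half_tick ^= 1;
}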
 