Problem with multiple software timers

Thread Starter

johndeaton

Joined Sep 23, 2015
63
Hi All-

I am programming with a PIC16F1809. I have written code that uses one hardware timer to implement multiple software timers. It works great when I only define one software timer. It is dead-on accurate, within just a few milliseconds. However, when I define multiple timers, the timing gets thrown way off (20% or more). Is there anything I can do to keep the timers accurate?

Thanks
 


dannyf

Joined Sep 13, 2015
2,197
it probably makes more sense for you to tell people how it works, and show an example where the timer is off.

I think I might have posted a solution here a short while ago on this sort of thing. Essentially, a hardware timer produces a tick, and in the isr the software timers are checked against the tick to update their flags. The counters for the software timers are updated when an overflow takes place.

Such a mechanism is accurate up to the tick.
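A minimal sketch of the tick/flag scheme described above, in C. All names here (check_timers, soft_timer_t, N_TIMERS) are invented for illustration; this is a guess at the general mechanism, not anyone's actual code. Comparing against the tick with unsigned subtraction keeps the check correct across tick rollover:

```c
#include <stdint.h>

#define N_TIMERS 2                      /* illustrative fixed pool */

volatile uint16_t tick;                 /* advanced by the hardware-timer ISR */

typedef struct {
    uint16_t period;                    /* ticks per software-timer period */
    uint16_t last;                      /* tick value at the last expiry */
    uint8_t  flag;                      /* set when the period elapses */
} soft_timer_t;

soft_timer_t timers[N_TIMERS];

/* Called on each hardware tick: compare each software timer against
 * the current tick and set its flag on overflow. The unsigned
 * subtraction (tick - last) wraps correctly when tick rolls over. */
void check_timers(void) {
    for (int i = 0; i < N_TIMERS; i++) {
        if ((uint16_t)(tick - timers[i].last) >= timers[i].period) {
            timers[i].last += timers[i].period;
            timers[i].flag = 1;
        }
    }
}
```

As dannyf says, this is accurate to within one tick: a timer can fire at most one tick late, and the error does not accumulate because `last` advances by exact multiples of `period`.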
 

Thread Starter

johndeaton

Joined Sep 23, 2015
63
Hi Guys,

This is how it works...

- I call timer_task() from the main routine using an interrupt set from a timer overflow.
- When I want to start a timer, I call the start_timer() routine.
- Then, I periodically call the is_tmr_expired() routine to check if the timer is expired.

By the way... I figured out my problem. The timer_task() function was taking too long to execute, so it wasn't being called every 100us as it should. I slowed down the timer to only overflow every 1 ms and changed the function accordingly. It is working great now.
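For readers following along, an API like the one described (timer_task(), start_timer(), is_tmr_expired()) might be sketched as below. The function names come from the post; the internals, the fixed pool size, and the countdown scheme are assumptions for illustration only:

```c
#include <stdint.h>

#define MAX_TIMERS 4                    /* assumed fixed pool size */

static volatile uint16_t remaining[MAX_TIMERS]; /* ms left per timer */

/* Called once per 1 ms hardware-timer overflow. Kept short so it
 * always finishes well within one tick -- the original bug was this
 * routine overrunning its 100us period. */
void timer_task(void) {
    for (uint8_t i = 0; i < MAX_TIMERS; i++)
        if (remaining[i]) remaining[i]--;
}

/* Arm timer 'id' for the given number of milliseconds. */
void start_timer(uint8_t id, uint16_t ms) {
    remaining[id] = ms;
}

/* Nonzero once the timer has counted down to zero. */
uint8_t is_tmr_expired(uint8_t id) {
    return remaining[id] == 0;
}
```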
 

dannyf

Joined Sep 13, 2015
2,197
One limitation of your code is the fixed number of software timers.

Generally, you want to take one of the two approaches:

1) on resource-limited mcus, declare individual software timers and process them individually; this has the advantage of being simple but puts more burden on the users;
2) on resource-rich mcus, use a linked list to manage timers. this allows fully automatic and dynamic software timers. It has the disadvantage of being longer to execute.

On 8-bit mcus, I tend to use the 1st approach.
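The second (linked-list) approach could look something like this. All names here are made up for illustration; the point is that timers can be added and removed at runtime, at the cost of walking the list on every tick:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct lt {
    uint16_t remaining;         /* ticks until expiry */
    void (*on_expire)(void);    /* optional callback when the timer fires */
    struct lt *next;
} list_timer_t;

static list_timer_t *head;      /* singly linked list of active timers */

/* Add a timer to the active list (caller owns the storage). */
void lt_add(list_timer_t *t, uint16_t ticks, void (*cb)(void)) {
    t->remaining = ticks;
    t->on_expire = cb;
    t->next = head;
    head = t;
}

/* Called once per hardware tick: walk the list, fire and unlink any
 * timer that reaches zero. Cost grows with the number of active
 * timers, which is the "longer to execute" drawback noted above. */
void lt_tick(void) {
    list_timer_t **pp = &head;
    while (*pp) {
        list_timer_t *t = *pp;
        if (--t->remaining == 0) {
            *pp = t->next;          /* unlink before the callback */
            if (t->on_expire) t->on_expire();
        } else {
            pp = &t->next;
        }
    }
}
```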
 

dannyf

Joined Sep 13, 2015
2,197
here is an example of my implementation:

Code:
typedef uint8_t TICK_Type;                    //8/16/32-bit timer/counter
typedef struct {
    TICK_Type R;                            //timer counter
    TICK_Type PR;                            //period register
} TMR_Type;

volatile TICK_Type sTick;                    //timer tick
TMR_Type sTMR0;                    //period for timer0
TMR_Type sTMR1;                    //period for timer1
TMR_Type sTMR2;                    //period for timer2
sTick is the global tick: it has no "time base" so it rolls over on its own, depending on the data type used for TICK_Type.

sTMR0/1/2 are three software timers declared by the user. Each contains its own counter (R) and its own period register (PR). PR is set by the user, and R is advanced synchronously and tested to produce an overflow signal:

Code:
//advance timer and test for overflow
uint8_t sTimer_ovf(TMR_Type *tmr) {
    tmr->R += 1;                            //increment timer
    if (tmr->R == tmr->PR) {                //overflow
        tmr->R = 0;                            //reset timer counter
        return 1;                            //overflow has taken place
    }
    return 0;                                //no overflow yet
}
the isr or polling loop looks like this:

Code:
        sTick_update();                        //update timer
        if (sTimer_ovf(&sTMR0)) IO_FLP(OUT_PORT, OUT0);
        if (sTimer_ovf(&sTMR1)) IO_FLP(OUT_PORT, OUT1);
        if (sTimer_ovf(&sTMR2)) IO_FLP(OUT_PORT, OUT2);
once an overflow is detected, the code flips a particular pin.

the execution is very fast, especially if 8-bit types are used. Adding timers is also quite easy.
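sTick_update() isn't shown above; a minimal guess at the missing piece, reusing the thread's own type names, would just advance the global tick and let it roll over naturally. The sTimer_set() helper below is invented here for completeness, not part of dannyf's posted code:

```c
#include <stdint.h>

typedef uint8_t TICK_Type;              /* 8/16/32-bit timer/counter */

typedef struct {
    TICK_Type R;                        /* timer counter */
    TICK_Type PR;                       /* period register */
} TMR_Type;

volatile TICK_Type sTick;               /* global timer tick */

/* advance the global tick; with an 8-bit TICK_Type it rolls
 * over on its own every 256 calls, as described above */
void sTick_update(void) {
    sTick += 1;
}

/* hypothetical helper: set a timer's period and reset its counter */
void sTimer_set(TMR_Type *tmr, TICK_Type pr) {
    tmr->PR = pr;
    tmr->R = 0;
}
```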
 

JohnInTX

Joined Jun 26, 2012
4,787
Not bad compared to 6 clocks the assembler code would take :)
Agreed. Using structs and pointers can make things bigger and slower - pointers especially on PICs, depending on the compiler.

FWIW, in C or assembler I usually use one byte per timer, sometimes on a chain of prescalers to keep the times in one byte. They get loaded with the number of ticks, run to 0, then stop. Timeout is detected by the timer == 0. Since reading/writing a single byte is atomic, there is no need to disable interrupts when setting or testing. It's not a suitable approach for all things but handles most system timings, flashes, delays, etc. well. For precise event timing, a more detailed approach might be appropriate.

Code:
 ; Service a timer - usually part of a periodic timer interrupt routine
movf  Timer1,F      ; check for 0, dec if not 0
btfss STATUS,Z
decf  Timer1,F
; service next timer
;---------
; how to test the timer
movf  Timer1,F      ; set Z flag if timeout
btfsc STATUS,Z
bra   Timer_is_0
; .. continue, timer not 0
C:
unsigned char Timer1;

//---------------  Service Timer  ----------
// done on Interrupt TIK
  if(Timer1)Timer1--;  // decrement to 0 then stop

//-------------- Test Timer  --------------
  if(Timer1 == 0) Timer_is_0();
Even C generates just a few bytes of code for each section in most cases. In assembler, testing for zero on the 18F can usually be done using TSTFSZ to save some code.
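The "chain of prescalers" mentioned above might be sketched in C like this. The names and the 1 ms / 100 ms split are illustrative assumptions; the idea is that a software prescaler divides the fast tick down so long timeouts still fit in one byte:

```c
#include <stdint.h>

static uint8_t Timer1;          /* fast timer, counts 1 ms ticks */
static uint8_t SlowTimer1;      /* slow timer, counts 100 ms ticks */
static uint8_t prescale100;     /* divides 1 ms ticks down to 100 ms */

/* Called from the 1 ms timer interrupt. Each timer runs to 0 and
 * stops; a timer is tested simply by comparing it against 0. */
void service_timers(void) {
    if (Timer1) Timer1--;               /* 1 ms resolution, max 255 ms */
    if (++prescale100 >= 100) {         /* every 100th tick... */
        prescale100 = 0;
        if (SlowTimer1) SlowTimer1--;   /* 100 ms resolution, max 25.5 s */
    }
}
```

Since each timer is a single byte, setting or testing it from mainline code needs no interrupt locking, as noted above.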
 