How precise are delay instructions in C?

Discussion in 'Programmer's Corner' started by ke5nnt, Apr 3, 2013.

  1. ke5nnt

    Thread Starter Active Member

    Mar 1, 2009
    384
    15
    Referencing Microchip PICs...
    Books about time delays always tend to mention counting instruction cycles for accuracy. For example, a 4MHz oscillator gives a 1MHz instruction clock (Fosc/4), so an instruction executes in 1 microsecond, unless it is a branch instruction, in which case it takes 2 microseconds.

    When using a delay in C, as with the Microchip XC8 compiler's "__delay_ms(20)" for a 20 millisecond delay, will the compiler ensure that the delay is precisely 20 milliseconds, or is it one of those things where the compiler says "close enough," and if you want true accuracy you should write the delay yourself in ASM?
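
    For reference, here's roughly how I'm calling it: a minimal sketch assuming a 4MHz oscillator and a hypothetical LED pin on RA0 (XC8 needs _XTAL_FREQ defined so the macro can convert milliseconds into cycles).

    Code:
    #include <xc.h>

    #define _XTAL_FREQ 4000000UL    // must match the real oscillator (4MHz here)

    void main(void)
    {
        TRISAbits.TRISA0 = 0;       // hypothetical LED pin; names vary by device
        while (1) {
            LATAbits.LATA0 ^= 1;    // toggle the pin
            __delay_ms(20);         // the compiler-generated busy-wait in question
        }
    }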
     
  2. JohnInTX

    Moderator

    Jun 26, 2012
    2,616
    1,187
    You'd have to look at the generated code. The compiler / library macro will calculate the loop and get pretty close, exact in some cases. Just how close depends on how it does it and whether it can build a precise number of Tcycs. For most applications, it's close enough. For others, e.g. timing the return of a sonar ranger, it probably isn't.

    An often-overlooked issue with compiler-generated dumb delays is the overhead of what happens before and after the delay. You have to get into and out of the delay, which takes Tcycs too, and all of that contributes to the total delay. On very short delays, the overhead becomes noticeable. Whether it matters depends on your application.

    You can hand-craft delays in ASM if you like counting things. Do a few and you'll appreciate the effects of the overhead getting into and out of the delay. Don't forget that if you have any IRQs running, any hope of precise, repeatable delays goes out the window unless you turn off the IRQs during the delay. That may or may not be a show-stopper right there. It is for me.

    For your 20ms example, in most cases, you'll likely find that the contributions made by errors and overhead are not significant from a practical point of view.
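
    To make the overhead concrete, here's a quick sketch (pin names are hypothetical and device-dependent):

    Code:
    LATAbits.LATA0 = 1;   // a Tcy or two to set the pin
    __delay_us(10);       // the compiler's busy loop
    LATAbits.LATA0 = 0;   // a Tcy or two to clear it

    The measured high time is 10us plus every Tcy spent getting in and out. That's negligible at 20ms and dominant at 1us.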
     
    Last edited: Apr 3, 2013
  3. ke5nnt

    Thread Starter Active Member

    Mar 1, 2009
    384
    15
    Thank you John for the swift response as always. I wonder if you could indulge me a little more with 2 additional questions that your response has brought to mind.

    First, could you define IRQ? It's a term I'm not yet familiar with.

    Second, if you were writing a program in C and needed an absolutely precise delay, could you explain how you would accomplish it?

    Thanks again,
    Ryan
     
  4. JohnInTX

    Moderator

    Jun 26, 2012
    2,616
    1,187
    IRQ is my shorthand for an interrupt service routine. If you have something that can interrupt the processor, it will break off what it is doing and service the device that caused the interrupt. When it is done, it will resume processing where it left off.

    The impact on a dumb delay can be considerable. The delay works by counting off a calculated number of Tcycs (determined by the compiler, or by you if you hand-code it). When the interrupt runs, it of course consumes Tcycs too, but when it returns to the dumb delay it interrupted, the count resumes as if nothing had happened, making the delay longer. The only choices are to accept some timing jitter or to turn the interrupts off, which kind of defeats the reason for having them in the first place, i.e. fast response to an event.

    Define precise. If it needed to be Tcyc-level precise, I would use one of the internal timers with a high-priority interrupt. When it ran out and interrupted the processor, I would handle it in the interrupt service routine. The initial timer setting would include an offset (determined by examination of the code) to compensate for the interrupt service overhead.
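
    Something like this, as a rough sketch. I'm assuming a PIC18 and Timer0 here; register and bit names vary by device, and the overhead constant has to come from inspecting your own generated code:

    Code:
    #include <xc.h>
    #include <stdint.h>

    #define DELAY_TCY     5000u   // desired delay in instruction cycles (example)
    #define ISR_OVERHEAD  20u     // measured entry/exit cycles (your number here)

    volatile uint8_t delay_done;

    void start_delay(void)
    {
        uint16_t preload = (uint16_t)(65536UL - (DELAY_TCY - ISR_OVERHEAD));
        T0CON = 0b00001000;               // 16-bit mode, Fosc/4 clock, no prescaler
        TMR0H = (uint8_t)(preload >> 8);  // high byte must be written first
        TMR0L = (uint8_t)preload;         // low-byte write latches both bytes
        delay_done = 0;
        INTCONbits.TMR0IF = 0;
        INTCONbits.TMR0IE = 1;
        T0CONbits.TMR0ON = 1;             // the timer starts counting here
    }

    void __interrupt(high_priority) tmr_isr(void)
    {
        if (INTCONbits.TMR0IE && INTCONbits.TMR0IF) {
            T0CONbits.TMR0ON = 0;
            INTCONbits.TMR0IF = 0;
            delay_done = 1;               // the main loop polls this flag
        }
    }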

    Regardless of the coding approach, the more precise you have to be, the higher the priority of whatever code you use will have to be. For a dumb delay, you turn off the interrupts, making that little segment of code the highest priority it can be. For an interrupt-driven process (see why I abbreviate to IRQ?), the interrupt has to have a high enough priority to preempt all other processes (including other interrupts) to get where it's going in a known, short time. All of this is application dependent: what resources versus what requirements, and so on. Note that midrange PICs only have a single interrupt level, making some of this problematic.

    For hard-core things like generating a precise pulse width, I use one of the capture/compare/PWM (CCP) peripherals. Set it up to generate the exact width you need, then interrupt the processor when it's done so that you can turn it off before the next cycle.
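
    Roughly like this. It's a sketch, not drop-in code; the CCP registers and modes vary across the PIC line, and PULSE_TCY is a hypothetical constant for the width you want:

    Code:
    CCPR1 = PULSE_TCY;        // compare match value = pulse width in Tcy
    TMR1 = 0;                 // Timer1 is the compare time base
    CCP1CON = 0b00001001;     // compare mode: pin driven high now, then
                              // cleared by hardware on the TMR1 match
    T1CONbits.TMR1ON = 1;     // pulse is under way; hardware ends it
    // the CCP1IF interrupt then tells you the pulse is finished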

    It's not trivial when you have to coordinate many timed things. My approach is to engineer the fast/precise stuff first, using the peripherals whenever possible, then fit the lesser stuff in around it.
     
    ke5nnt likes this.
  5. MrChips

    Moderator

    Oct 2, 2009
    13,381
    3,749
    For stable and accurate timing, use the on-chip timer hardware.
    Make sure you use the external XTAL oscillator.
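
    In XC8 that usually comes down to a pair of configuration bits, roughly like this (the bit names vary by device):

    Code:
    #pragma config FOSC = HS    // high-speed external crystal oscillator
    #pragma config WDTE = OFF   // watchdog off so it can't perturb timing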
     
  6. ErnieM

    AAC Fanatic!

    Apr 24, 2011
    7,819
    1,736
    If you need "IRQ" defined then you don't have one running. <grin>

    IRQ means "Interrupt ReQuest": a hardware-triggered call to specific code, such as a timer periodically forcing a routine to run.

    There is no such thing as an "absolutely precise delay." The best one can do is delay an "absolutely precise" number of instruction cycles, but then the accuracy of the device's instruction clock is still a factor.

    You can construct a delay with an "absolutely precise number of instruction cycles" out of imprecise elements such as the good ole traditional blocking delay call: make one just short of your desired delay, then pad it out with time-wasting instructions such as the "no operation" NOP() macro.

    Using the simulator in MPLAB, you can get an exact count of the instruction cycles some code takes, then by guess and by gosh add or remove code until the exact number of cycles is burned off.
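
    As a sketch of the idea, assuming a 4MHz clock and a hypothetical 100us target (the exact NOP count comes from the simulator stopwatch, not from me):

    Code:
    #include <xc.h>

    #define _XTAL_FREQ 4000000UL

    void delay_100us_exact(void)
    {
        __delay_us(96);   // deliberately a hair short of the target
        NOP();            // each NOP() burns exactly one Tcy (1us at 4MHz)
        NOP();            // keep padding until the simulator stopwatch,
        NOP();            // call and return overhead included,
        NOP();            // reads dead on
    }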
     
  7. ke5nnt

    Thread Starter Active Member

    Mar 1, 2009
    384
    15
    Okay, John, thank you for the more in-depth response. I have always seen interrupt service routines abbreviated as ISR, hence the confusion over IRQ. I am familiar with interrupts, as I've used them many times. I am writing up some notes aimed at a friend who has no experience programming PICs, and I'm finding that when you write for a true beginner, you have to explain things you've just taken for granted as "I accept that it is this way."

    Mr. Chips, thank you. I am familiar with the fact that the oscillator plays a large role in the accuracy of timing operations, but others who stumble upon this thread may not be, so I appreciate that you included it.

    Ernie, yes, in this example I was not using an interrupt; as I said a moment ago, I'm familiar with them, I just had never seen them abbreviated as IRQ. Your comments, in addition to John's, are helpful. My takeaway from this discussion is that no timing operation can really be 100% dead accurate, but that we can help our design along with code and hardware (i.e. the chosen oscillator) to make it close enough for anything we want to do where timing accuracy is critical. In my opinion, an "accurate timing operation" means that our program functions the way it is intended. In John's initial example of the sonar ping, I would say that if the program is written so that it reads the return and functions accurately, as in a sonar speed gun, then the timing operation is precise.

    Thank you for the input Gents, it is very much appreciated.
     
  8. atferrari

    AAC Fanatic!

    Jan 6, 2004
    2,741
    844
    Understanding "accurate" in this context, as a predefined event repeating after the same number of clock cycles, in a 18F PIC, you can get 100% dead accurate timing if the ticking is materialized inside of a high priority interrupt ISR. Necessarily the rest of the interruptions (if any) MUST be low priority.
     
    Last edited: Apr 3, 2013
    ke5nnt likes this.
  9. ErnieM

    AAC Fanatic!

    Apr 24, 2011
    7,819
    1,736
    I have several similar timing devices in production. By specification they need to be 10% accurate, and to achieve that they have a reasonably accurate (<1% @ 25°C) resonator as a time base. The hardware triggers the code 4 times each millisecond to count a "tick." These devices are programmed with delays between 0.1 and 500 seconds.

    Each and every one counts exactly the number of ticks required for the interval. Every time the same count.

    The only variance in delay is caused by how fast the resonator resonates.
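
    The counting end of it is trivial. Here's a sketch of the idea, assuming Timer2 generates the 250us period (not our actual production code):

    Code:
    #include <xc.h>
    #include <stdint.h>

    volatile uint32_t ticks;            // 4 ticks = 1ms

    void __interrupt() tick_isr(void)
    {
        if (PIR1bits.TMR2IF) {          // period match fires every 250us
            PIR1bits.TMR2IF = 0;
            ticks++;                    // counting is the only work done here
        }
    }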
     