C++ Delays

Thread Starter

mpuvdd

Joined Feb 11, 2007
50
Hello,
I was reading on a web site that by using the following code:

int x, y;
for(x = 0; x < 2000; x++)
{
    for(y = 0; y < 2000000; y++)
    {
    }
}

as a delay, the actual delay time will be different on different PICs.
Why is this?
Also, what's the reason for two variables, why not just use one variable?
Thanks a lot,
mpuvdd
 

beenthere

Joined Apr 20, 2004
15,819
The delay time may differ because different PICs run at different clock frequencies.

The two-variable loop may just come from the programmer's preference for nesting loops rather than running a single one.
 

Salgat

Joined Dec 23, 2006
218
An operation such as "x++" takes the processor a certain number of cycles, and how fast it gets through those cycles depends on the processor's clock frequency.
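As a rough sketch of how the numbers work out (the cycles-per-pass figure below is a placeholder; the real value has to come from the compiler's output listing for your particular PIC):

#include <stdio.h>

/* Back-of-the-envelope estimate, not a measurement. A classic 8-bit PIC
   executes most instructions in one instruction cycle, which is 4
   oscillator cycles, so the instruction rate is Fosc/4. If the compiled
   inner loop takes, say, 10 instruction cycles per pass (placeholder),
   the total delay scales directly with the oscillator frequency. */
int main(void)
{
    double fosc = 4000000.0;               /* assumed 4 MHz oscillator   */
    double fcy  = fosc / 4.0;              /* instruction cycles/second  */
    double cycles_per_pass = 10.0;         /* placeholder from listing   */
    double passes = 2000.0 * 2000000.0;    /* the nested loop counts     */

    printf("approximate delay: %.0f s\n", passes * cycles_per_pass / fcy);
    return 0;   /* a 20 MHz part would finish the same loop 5x faster */
}

Double the clock and the same loop finishes in half the time, which is exactly why the delay differs from PIC to PIC.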
 

Papabravo

Joined Feb 24, 2006
21,159
mpuvdd said:
Hello,
I was reading on a web site that by using the following code:

int x, y;
for(x = 0; x < 2000; x++)
{
    for(y = 0; y < 2000000; y++)
    {
    }
}

as a delay, the actual delay time will be different on different PICs.
Why is this?
Also, what's the reason for two variables, why not just use one variable?
Thanks a lot,
mpuvdd
IMHO this is extremely poor practice for the reason so obviously stated by the OP: there is no easily verifiable way to determine how long the delay is without looking at the compiler's output listing.

Using a hardware timer still depends on the basic clock frequency and the mechanics of the timer, but it is far easier to verify than loops which sit and spin. At the very least the processor could be doing something useful in the meantime, like checksumming the non-volatile memory.
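For illustration, a minimal polled-timer delay might look like the sketch below. It assumes a PIC16-style Timer0, a 4 MHz oscillator, a 1:4 prescaler already set up in OPTION_REG, and Microchip-style register names (TMR0, TMR0IF) under an XC8-like compiler; all of those are assumptions that change with the device and toolchain.

#include <xc.h>   /* assumption: an XC8-style Microchip toolchain */

/* Busy-wait roughly one millisecond using Timer0.
   Assumes a 4 MHz oscillator (1 MHz instruction clock) and a 1:4
   prescaler, so 250 timer ticks ~= 1 ms. Timer0/prescaler setup in
   OPTION_REG is assumed to have been done during initialization. */
static void delay_1ms(void)
{
    TMR0 = 256 - 250;              /* preload: overflow after 250 ticks */
    INTCONbits.TMR0IF = 0;         /* clear the overflow flag           */
    while (!INTCONbits.TMR0IF)     /* let the hardware do the counting  */
        ;
}

void delay_ms(unsigned int ms)
{
    while (ms--)
        delay_1ms();
}

The point is that the timing comes from a counter you can reason about from the datasheet, not from whatever code the compiler happened to emit for an empty loop.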

Anybody working for me who wrote this in a piece of production code would be fired on the spot. That's just me, and I understand that this view may not be widely held.
 

niftydog

Joined Jun 13, 2007
95
The reason for two variables is to extend the delay: the total number of iterations is the product of the two loop counts. You could use one variable, but to get the same delay it would have to count to an enormous number. The code compiled for a PIC is very different from that compiled for a PC because a PIC's ability to crunch large numbers is limited - a large count like that might cause an overflow.
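To illustrate the overflow point (the loop limits below are arbitrary examples): on many 8-bit PIC compilers an int is 16 bits, so a constant like 2000000 cannot even be held in the loop variable as written. A single-variable delay needs a wider type, while the nested version keeps each counter small:

/* One wide counter: needs a 32-bit type on an 8-bit PIC.
   "volatile" keeps an optimizing compiler from deleting the empty loop. */
void delay_single(void)
{
    volatile unsigned long i;
    for (i = 0; i < 4000000UL; i++)
        ;
}

/* Two small counters whose counts multiply together:
   2000 * 2000 = 4,000,000 iterations, each variable fits in 16 bits. */
void delay_nested(void)
{
    volatile unsigned int x, y;
    for (x = 0; x < 2000; x++)
        for (y = 0; y < 2000; y++)
            ;
}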
 
There are other ways of delaying that are far more efficient than making the processor execute wasteful code.

For example, every millisecond Windows passes messages on to every hwnd that is running. Do you think it would actually use an unreliable nested loop to do that? I think it relies on interrupts and various other mechanisms (a sketch of the idea follows).
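On a PIC, the interrupt-driven equivalent of that idea is a timer that raises a periodic interrupt and a tick counter the rest of the program can read. A minimal sketch, again assuming an XC8-style compiler and PIC16-style Timer0 register names (both assumptions; the interrupt syntax in particular differs between toolchains):

#include <xc.h>   /* assumption: XC8-style toolchain */

/* A free-running millisecond tick driven by a Timer0 interrupt instead
   of a spin loop. Timer0, its 1:4 prescaler (4 MHz oscillator assumed)
   and the interrupt enables are assumed to be configured elsewhere. */
volatile unsigned long ms_ticks = 0;

void __interrupt() isr(void)
{
    if (INTCONbits.TMR0IF) {    /* Timer0 overflow: ~1 ms has elapsed  */
        INTCONbits.TMR0IF = 0;
        TMR0 = 256 - 250;       /* reload for the next ~1 ms period    */
        ms_ticks++;             /* main code compares this to a target */
    }
}

The main loop can then check ms_ticks against a deadline and do useful work in between, rather than spinning.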

Good luck
 