Time complexity of an algorithm

Thread Starter

AndrieGnd

Joined Jun 25, 2019
52
Hi guys,
when I write an algorithm, we generally don't account for how much time "one operation" of the PC takes while it runs. Would anyone please explain why we don't take that into consideration when calculating the time complexity of the algorithm?
It seems significant: if we have two PCs with two different processors, the time taken by the same algorithm is different because the two processors are different. So if the time is different, shouldn't we take that into account while writing an algorithm?

How should I think as "an algorithmician" who's writing an algorithm in general?

Thanks a lot
 

Thread Starter

AndrieGnd

Joined Jun 25, 2019
52
Hi guys, there's something about algorithms that confuses me a lot, and I'm hoping you guys can help me get past that confusion.
Let's assume I have a while loop inside a while loop, like this:
Code:
 while(x<n/2)
{ int i=8;
int j=9;
while(j<((x^2)/2))
{j+=1;}
x++;
}
What confuses me: let's assume I want to calculate the time complexity of that algorithm. Why don't we count the line with the second (inner) while loop as a constant C, like the other lines in the body of the function (like int j=9, which takes a constant time C)? I mean, when we actually calculate the time complexity of the algorithm we treat the inner loop differently from the other lines .. why? We know that for a specific n the inner loop isn't infinite; for that specific n we get a specific number of operations, which could be considered a C, so we could treat the while loop like the other lines (int i=8 etc.) which take constant time!

That's my claim, and I strongly suspect it's wrong. Can anyone correct me so I can get the concept of how we analyze the time complexity of an algorithm? I guess we don't analyze it off the cuff; there must be some logic or concepts to follow.

thanks
 

djsfantasi

Joined Apr 11, 2010
9,163
Let’s assume you are asking about functional code that you have.

You don’t have any functional code.

With the information you provided, that loop comes out to some constant time, equal to the sum of the following:
  • A division
  • A comparison
That’s about it...

To calculate time complexity, you need to know timings for all common operations. Assembly timings are readily available. If you're using a high-level language, obtaining these metrics is more difficult but not impossible.

You sum all operations within a loop and create a function that gives the time based on the loop's parameters and tests.

Do this for all inner loops and feed the results into calculations for outer loops, until you are at the main loop/level. It’s an iterative, possibly recursive, model.
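
As a rough sketch of what that bookkeeping looks like for the loop you posted, using made-up per-operation timings (real numbers would come from the processor's documentation or a profiler), and assuming your x^2 is meant as x squared and that x starts at 0:

Code:
#include <stdio.h>

/* Made-up per-operation timings in nanoseconds, for illustration only. */
#define T_CMP  1.0   /* comparison          */
#define T_DIV  4.0   /* division            */
#define T_MUL  2.0   /* multiplication      */
#define T_ADD  1.0   /* add / increment     */
#define T_ASGN 1.0   /* assignment          */

int main(void)
{
    int n = 1000;
    double total = 0.0;

    for (int x = 0; x < n / 2; x++) {
        total += T_CMP + T_DIV;                      /* outer test: x < n/2     */
        total += 2 * T_ASGN;                         /* int i = 8; int j = 9;   */

        double inner = 0.0;                          /* inner loop summed here  */
        for (int j = 9; j < (x * x) / 2; j++)
            inner += T_CMP + T_MUL + T_DIV + T_ADD;  /* test + j += 1           */
        inner += T_CMP + T_MUL + T_DIV;              /* final failing test      */

        total += inner;                              /* feed inner into outer   */
        total += T_ADD;                              /* x++                     */
    }
    total += T_CMP + T_DIV;                          /* final failing outer test */

    printf("estimated time for n = %d: %.0f ns\n", n, total);
    return 0;
}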

Basically, you've got to do the work.
 

WBahn

Joined Mar 31, 2012
30,052
Hi guys,
when I write an algorithm, we generally don't account for how much time "one operation" of the PC takes while it runs. Would anyone please explain why we don't take that into consideration when calculating the time complexity of the algorithm?
It seems significant: if we have two PCs with two different processors, the time taken by the same algorithm is different because the two processors are different. So if the time is different, shouldn't we take that into account while writing an algorithm?

How should I think as "an algorithmician" who's writing an algorithm in general?

Thanks a lot
Think of it as being big scale and small scale.

When developing an algorithm, we generally don't know what processor it's going to be implemented for, what language it's going to be programmed in, what resources it's going to have, or how big the problem is that it will be tasked with solving. For small enough problems, none of this really matters because even a poor algorithm poorly implemented will likely be fast enough. But as the problem size grows (wanting to sort a database with a billion entries compared to one with a few thousand) we need an algorithm that fundamentally scales well. Beyond a certain size of problem, a good algorithm poorly implemented will outperform a poor algorithm perfectly implemented.
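
To put rough numbers on that last point, here's a small sketch with made-up cost constants: an O(n log n) algorithm saddled with a 100x larger constant factor (the good algorithm, poorly implemented) against an O(n^2) algorithm with a constant of 1 (the poor algorithm, perfectly implemented). With these numbers the poor algorithm wins up through a few hundred items, but from about a thousand items on the good algorithm wins, and the gap only grows.

Code:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double c_good = 100.0;  /* made-up cost factor: good O(n log n) algorithm, poorly implemented */
    double c_bad  = 1.0;    /* made-up cost factor: poor O(n^2) algorithm, perfectly implemented  */

    for (long n = 10; n <= 10000000; n *= 10) {
        double t_good = c_good * (double)n * log2((double)n);
        double t_bad  = c_bad  * (double)n * (double)n;
        printf("n = %8ld   n log n cost = %12.0f   n^2 cost = %16.0f   -> %s\n",
               n, t_good, t_bad,
               t_good < t_bad ? "good algorithm wins" : "poor algorithm wins");
    }
    return 0;
}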

Once we have a good algorithm, then we can look at optimizing the implementation in all kinds of ways, including ways that are specific to particular hardware and software architectures.
 

WBahn

Joined Mar 31, 2012
30,052
Hi guys, there's something about algorithms that confuses me a lot, and I'm hoping you guys can help me get past that confusion.
Let's assume I have a while loop inside a while loop, like this:
Let's first format the code so that it is more readable

Code:
while ( x < n/2 )
{
   int i=8;
   int j=9;
   while ( j < ((x^2)/2) )
   {
      j += 1;
   }
   x++;
}
First off, what is the size of the problem? Is it 'n'?

Why is 'i' being initialized to 8? I don't see 'i' being used anywhere. Is it supposed to be?

Since 'i' and 'j' are being initialized to non-intuitive values, we can't assume that 'x' is being initialized to something obvious. What is 'x' being initialized to?

What is the '^' operator? In most languages it is the bitwise-XOR operator. Is that what you mean here? Or do you mean this to be x-squared? I'll assume the latter.

The inner while loop is not of constant time complexity because it doesn't take the same amount of time to execute during each pass of the outer loop.

Let's say that n = 2500 and let's consider the pass when x = 10 and when x = 1000.

When x = 10, x^2/2 is 50. Since j starts at 9 and goes up until it equals 50, the inner loop will be executed 41 times.

When x = 1000, x^2/2 is 500,000, so the inner loop will be executed 499,991 times.
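
Carrying that through for every pass of the outer loop: the inner loop runs roughly x^2/2 times for each value of x, and summing x^2/2 for x from 0 up to n/2 comes out to roughly n^3/48, so the total work grows as n cubed rather than as n or n^2. A quick brute-force count (again assuming x starts at 0 and that '^' is meant as squaring) bears that out:

Code:
#include <stdio.h>

int main(void)
{
    /* Count the total number of inner-loop iterations for several n and
       compare with n^3/48, the rough value of the sum of x*x/2 for
       x = 0 .. n/2 - 1. */
    for (long n = 100; n <= 3200; n *= 2) {
        long long count = 0;
        for (long x = 0; x < n / 2; x++)
            for (long long j = 9; j < (long long)x * x / 2; j++)
                count++;
        printf("n = %5ld   inner iterations = %12lld   n^3/48 = %12lld\n",
               n, count, (long long)n * n * n / 48);
    }
    return 0;
}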
 

Picbuster

Joined Dec 2, 2013
1,047
Hi guys,
when I write an algorithm, we generally don't account for how much time "one operation" of the PC takes while it runs. Would anyone please explain why we don't take that into consideration when calculating the time complexity of the algorithm?
It seems significant: if we have two PCs with two different processors, the time taken by the same algorithm is different because the two processors are different. So if the time is different, shouldn't we take that into account while writing an algorithm?

How should I think as "an algorithmician" who's writing an algorithm in general?

Thanks a lot
We define the speed needed for that function or functions.
E.g. measure and control: close a switch at the zero point of a sine wave, while the air-controlled switch takes 0.8 ms to close.
We have one period minus (0.8 ms + safety) as the time to detect and take action.
Now that we have the time available, the next step is to look at the program's number of instructions and find an MPU + clock able to handle that within the calculated time.
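As a worked example of that budget, with made-up numbers: assuming 50 Hz mains, one period is 20 ms, so with the 0.8 ms switch closing time and, say, a 0.2 ms safety margin, the program has 20 - (0.8 + 0.2) = 19 ms to detect the zero point and act. On a hypothetical 1 MIPS MPU (1 µs per instruction) that is an upper bound of roughly 19,000 instructions for the whole detect-and-act path.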

Allow extra time when more tasks are running on the same MPU.
Also add an external watchdog that forces a safe output condition when the system fails.

Picbuster
 

402DF855

Joined Feb 9, 2013
271
IIRC "time complexity" would likely refer to Big O notation (or is it little O?). And I'm pretty rusty on deriving the result analytically but probably involves adding a summation for each for loop. But I'd swag the code fragment given is O(n*n*n). IME big O notation doesn't come into play often in real world software development, more perhaps in academia. Exceptions to this might be sorting, for example, or FFTs, which really are magical considering they can be achieved O(n log n), a fact that had huge implications in practice.

A related but different topic is evaluating specific code for actual performance in terms of execution time and memory usage. The given code fragment doesn't fit well for such analysis, as I'm fairly sure it would be optimized away (as is) and/or could be replaced with an expression assignment with no loops at all (assuming a multiply instruction on the given hardware): x = f(n).
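
For example, nothing outside the fragment ever reads i or j, and x just counts up until it reaches n/2, so (assuming integer arithmetic and that x is never reused for anything else) the whole nest reduces to something like:

Code:
/* The observable effect of the fragment, assuming integer arithmetic and
   that i and j are never read afterwards: x ends up at n/2, or is left
   unchanged if it already started at n/2 or above. */
int collapsed(int x, int n)
{
    return (x < n / 2) ? n / 2 : x;
}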
 

mckenney

Joined Nov 10, 2018
125
why we don't take that into consideration when calculating the time complexity of the algorithm?
You're mixing up the terms:
1) "algorithm" with "program"
2) "time complexity" with "time cost"

Instantiating an algorithm as a program is what gives us "c". "c" produces the time cost from the time complexity.

Time complexity (big-O) is a normalization mechanism to allow us to compare algorithms. It's not very useful if N is small, since "c" dominates.
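
For example (made-up numbers): suppose the same O(N^2) sort is compiled for two machines, where the inner compare-and-swap costs c = 10 ns on one and c = 2 ns on the other. For N = 100,000 the time costs are roughly 10 ns * 10^10 = 100 s versus 2 ns * 10^10 = 20 s: very different time costs, identical time complexity. And for N = 10 either machine finishes in about a microsecond, which is why the big-O comparison tells you little when N is small.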
 

BobaMosfet

Joined Jul 1, 2009
2,113
@AndrieGnd-

Here is how it is actually done if you want accuracy (I've done this a lot)--

In order to correctly evaluate how much time any piece of code takes, you must first know exactly how the compiler assembles the code into assembly language. That is one of the reasons good compilers have disassemblers. Then, once you know specifically which instructions are being used, you can look up exactly how long they take to execute in the architectural pipeline, and voila, you can evaluate them.

It does not matter if you are comparing one chip to another using the same C program. Whatever timing applies to each processor's assembly language instruction set is what gets used, and whatever time it takes is exactly what it takes. However, because processors now have prefetch caching and pipelining, you can run operations in parallel. If you have multi-core systems, that's even more parallelism, which all cuts the time down.

Without getting in over your head, the best you can do is (a) rely on the part of your compiler that is designed to time your code (this is called a profiler, and better compilers have one; it does all the hard thinking), or (b) disassemble the code and calculate the timing by hand, which is a worst case because you won't account for any parallelism.

Lastly: don't waste your time trying to get every erg of energy out of your C code unless you have an objective that requires it. And if you do, use inline assembly. I've written 6DoF games (think Quake/DOOM) and their world editors, and nuclear robotics control simulators, and been involved in many projects that run code from one architecture seamlessly on another, even the same code running on different platforms with timing requirements that had to be maintained across each. I've also executed binary objects from one platform inside other architectures so that critical, no-fail systems didn't know they were now operating on much more capable hardware.

In fact, and you usually only see this in game studios, you don't run the code balls to the wall all the time. You actually try to run the code at a specific pace, independent of the architecture you're running on. This is usually handled by the predictive part of the game engine (I've written those as well; they are super cool). If you need to run the same code on two platforms at the same speed, you determine what speed it needs to run at, and you run it at that speed on both systems by using your own 'clock' based on the system clock.
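
A minimal sketch of that pacing idea in plain C, using the standard clock() and a made-up 10 ms tick; a real engine would use a higher-resolution wall-clock timer, but the structure is the same:

Code:
#include <stdio.h>
#include <time.h>

#define TICK_MS 10.0   /* made-up pace: 100 updates per second */

/* Placeholder for the real per-tick work (physics step, control update, ...). */
static void update(int tick)
{
    if (tick % 100 == 0)
        printf("tick %d\n", tick);
}

int main(void)
{
    clock_t start = clock();

    for (int tick = 0; tick < 1000; tick++) {
        update(tick);

        /* Busy-wait until the next tick boundary, measured against our own
           clock, so the pace is the same on a fast or a slow machine
           (as long as update() finishes within one tick). */
        while ((double)(clock() - start) * 1000.0 / CLOCKS_PER_SEC
                   < (tick + 1) * TICK_MS)
            ;
    }
    return 0;
}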
 

402DF855

Joined Feb 9, 2013
271
I found this note in the ARMv7 processor documentation very interesting:

"The timing effects of including a NOP instruction in a program are not guaranteed. It can increase execution time,
leave it unchanged, or even reduce it. Therefore, NOP instructions are not suitable for timing loops."
 

sisoj

Joined Nov 10, 2019
6
When you are comparing two algorithms, you keep the constants the same, and the PC is one of the constants that people generally assume. There are no hard rules, however; you could make a case for taking the PC and its varying performance into consideration if it is a factor in your particular analysis. But the purpose of a time complexity calculation is not to be accurate; it is to have a performance metric for a particular way of solving a problem that you can compare against other solutions (and that requires singling out only the factors that are not constants in your lab, so to speak).
 