I am wondering whether it is more efficient to #define some common constants at the start of my code, or whether the compiler is effectively doing this for me. Let's say that throughout my code I have a lot of arithmetic like this:
Code:
outputA = (float)2/3*inputA;
outputB = (1/sqrt(3))*inputB;

My understanding was that the compiler would realise that 2/3 and 1/sqrt(3) always evaluate to the same values, and that they would therefore be replaced with their constant values at compile time. However, I have noticed that a few TI header files make a point of doing this:

Code:
#define TWO_THIRD 0.6666666666666
#define ONE_OVER_SQRT_THREE 0.57735026919
...
outputA = TWO_THIRD*inputA;
outputB = ONE_OVER_SQRT_THREE*inputB;

I can't seem to find anything on this in the compiler reference, other than "For optimal evaluation, the compiler simplifies expressions into equivalent forms", which isn't quite the same thing. I wouldn't like to think that my processor was gobbling up computation cycles doing a Taylor series expansion of sqrt(3) or anything like that!
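To make the question concrete, here is the kind of minimal test I have in mind. The file name, function names, and the gcc -O2 -S flags are just my own placeholders, not anything from the TI tools; I'd use whatever option keeps the assembly listing on the actual compiler I'm targeting.

Code:
/* foldtest.c -- a small test to see whether these expressions get folded
 * into constants, by inspecting the generated assembly. */
#include <math.h>

float scaleA(float inputA)
{
    /* (float)2/3 casts 2 to float before the division, so this is
     * 0.6666..., not the integer result 0; the subexpression is a
     * compile-time constant either way. */
    return (float)2/3*inputA;
}

float scaleB(float inputB)
{
    /* If the compiler folds this, the listing should show a single
     * multiply by the literal 0.57735... with no call to sqrt(). */
    return (1/sqrt(3))*inputB;
}

If compiling something like this with optimisation on (e.g. gcc -O2 -S foldtest.c, or the TI compiler's assembly listing) still leaves a call to sqrt in the output, then presumably the #define form really is the safer bet.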