# Is there merit in #define a constant?

#### TechWise

Joined Aug 24, 2018
151
I am wondering if it is more efficient to #define some common constants at the start of my code, or if the compiler is effectively doing this for me. Let's say throughout my code I have a lot of arithmetic like this:
Code:
outputA = (float)2/3*inputA;
outputB = (1/sqrt(3))*inputB;
My understanding was that the compiler would realise that 2/3 and 1/sqrt(3) would always evaluate to the same thing, and therefore they would be replaced with their constant values at compile time. However, I have noticed in a few TI header files that they make a point of doing this:
Code:
#define TWO_THIRD 0.6666666666666
#define ONE_OVER_SQRT_THREE 0.57735026919

...

outputA = TWO_THIRD*inputA;
outputB = ONE_OVER_SQRT_THREE*inputB;
I can't seem to find anything on this in the compiler reference, other than "For optimal evaluation, the compiler simplifies expressions into equivalent forms", which isn't quite the same thing. I wouldn't like to think that my processor was gobbling up computation cycles doing a Taylor series expansion of sqrt(3) or anything like that!
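For what it's worth, one way to sidestep the question entirely, whatever the optimizer does, is to write the ratios out as constants yourself (a sketch; the names are mine, not TI's):

```c
/* Precomputed to float precision, so no runtime division or sqrt is
   needed even with optimization disabled. */
static const float TWO_THIRDS          = 0.6666667f;
static const float ONE_OVER_SQRT_THREE = 0.5773503f;

float scale_a(float inputA) { return TWO_THIRDS * inputA; }
float scale_b(float inputB) { return ONE_OVER_SQRT_THREE * inputB; }
```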

#### ericgibbs

Joined Jan 29, 2010
13,615
hi,
When possible I always define a fixed value as a constant; it reduces computation and running time.
E

#### TechWise

Joined Aug 24, 2018
151
hi,
When possible I always define a fixed value as a constant; it reduces computation and running time.
E
Yes, my objective is to reduce the computational effort at runtime. Clearly, there is no need for the processor to calculate 2/3 over and over again when its value never actually changes. What I was getting at was that I thought the compiler might realise this too and do the optimisation at compile time.

#### ericgibbs

Joined Jan 29, 2010
13,615
hi Tech,
A quick check would be to write two very short programs using the two alternative methods, and then visually compare the two compiled code listings.
E

#### TechWise

Joined Aug 24, 2018
151
hi Tech,
A quick check would be to write two very short programs using the two alternative methods, and then visually compare the two compiled code listings.
E
Good idea. Looking at some of the assembly listings output by Code Composer Studio and its compiler have been on my "list of interesting things to do" for a long time. This may be the thing that finally makes me do it.

#### ericgibbs

Joined Jan 29, 2010
13,615
hi Tech,
It would be informative if you could post the outcome of your tests.
E

#### atferrari

Joined Jan 6, 2004
4,425
Probably the real advantage is that you define a value of something ONCE, and eventually change it ONCE.

When revising code you identify the defined thing more easily.

#### TechWise

Joined Aug 24, 2018
151
I have run two tests and looked at the assembler output. Here is the first test program, where the #define is not used:
Code:
#include "F28x_Project.h"
#define TWO_OVER_THREE 0.66666666666667

void main(void)
{
volatile float a = 0;
float b = 6;

a = (float)2/3 * b;
}
Here is the second program, where the #define is used:
Code:
#include "F28x_Project.h"
#define TWO_OVER_THREE 0.66666666666667

void main(void)
{
volatile float a = 0;
float b = 6;

a = TWO_OVER_THREE * b;
}
As far as I can see, the assembly language output produced is exactly the same in both cases. I have attached the two assembly listings side by side as a screengrab as they are very long. The instruction set for the C2000 is very long and complicated so I haven't bothered trying to figure out what it is actually doing.

#### Attachments

• 344.8 KB

#### Papabravo

Joined Feb 24, 2006
16,783
You should understand that #define is not really defining anything except a "text substitution" macro. In the "pre-processor" phase of compilation, wherever the macro name occurs, the replacement text is substituted. Pass 1 of the compiler never "sees" the macro (#define) name. So the result is entirely expected.
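To make that concrete, the substitution can be inspected by running the preprocessor alone (e.g. `gcc -E`; TI's toolchain has an equivalent preprocess-only option). A sketch of what pass 1 receives:

```c
#define TWO_THIRD 0.6666666666666

/* What the programmer writes: */
float scale(float inputA) { return TWO_THIRD * inputA; }

/* What pass 1 of the compiler actually sees, after the preprocessor has
   substituted the text:

       float scale(float inputA) { return 0.6666666666666 * inputA; }
*/
```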

#### WBahn

Joined Mar 31, 2012
26,398
You should understand that #define is not really defining anything except a "text substitution" macro. In the "pre-processor" phase of compilation, wherever the macro name occurs, the replacement text is substituted. Pass 1 of the compiler never "sees" the macro (#define) name. So the result is entirely expected.
The result is entirely expected only if you expect the compiler to replace "(float)2/3" with "0.66666666666667".

I wouldn't expect every compiler to do that.

I certainly wouldn't expect every compiler to replace "1/sqrt(3)" with "0.57735026919".

#### bignobody

Joined Jan 21, 2020
97
Perhaps you'd find Godbolt's Compiler Explorer useful - it's an online compiler explorer that lets you see how your programs compile under various compilers. Pick the one you're using and see what the compiler does with it. Make changes, see what happens.

https://godbolt.org/

#### Papabravo

Joined Feb 24, 2006
16,783
The result is entirely expected only if you expect the compiler to replace "(float)2/3" with "0.66666666666667".

I wouldn't expect every compiler to do that.

I certainly wouldn't expect every compiler to replace "1/sqrt(3)" with "0.57735026919".
You could only do it if you knew the compiler's answer to the evaluation. So using the former method, having the compiler evaluate the expression, is arguably the safer thing to do. Who hasn't dropped a significant figure in transcription once or twice?

#### TechWise

Joined Aug 24, 2018
151
You should understand that #define is not really defining anything except a "text substitution" macro. In the "pre-processor" phase of compilation, wherever the macro name occurs, the replacement text is substituted. Pass 1 of the compiler never "sees" the macro (#define) name. So the result is entirely expected.
I understand the #define part alright. It is basically the preprocessor doing a "find and replace" so if I #define TWO_OVER_THREE 0.66666666666667 then it goes through the file and replaces TWO_OVER_THREE with 0.66666666666667 before the compiler is even invoked.

My question was more about whether the compiler was clever enough to realise that 2/3 is always 0.6666667 and therefore evaluate it at compile time and store it as a constant, rather than asking the microprocessor to compute it.

It states in the compiler reference (TMS320C28x Optimizing C/C++ Compiler v18.12.0.LTS) that:
"For optimal evaluation, the compiler simplifies expressions into equivalent forms, requiring fewer instructions or registers. Operations between constants are folded into single constants. For example, a = (b + 4) - (c + 1) becomes a = b - c + 3."
However, it doesn't say what happens if b and c happen to be constants in this example.

#### TechWise

Joined Aug 24, 2018
151
The result is entirely expected only if you expect the compiler to replace "(float)2/3" with "0.66666666666667".

I wouldn't expect every compiler to do that.

I certainly wouldn't expect every compiler to replace "1/sqrt(3)" with "0.57735026919".
Going by my testing and looking at the assembly language output, it would seem that this particular compiler does indeed treat "(float)2/3" the same as "0.66666666666667" because both examples produce the exact same code, per post #8.

Now that you've drawn attention to it, the "1/sqrt(3)" is bound to be handled differently, as it contains a function call. I suppose it would have to be a very clever compiler to inline the function, notice that it always produced the same result, and replace the call with the resulting constant.
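If a given compiler won't fold a call like 1/sqrt(3) on its own, the work can still be lifted out of the hot path by hand, paying for the sqrt once at startup. A sketch (the init function is my own naming, not a TI convention):

```c
#include <math.h>

static float one_over_sqrt3;

/* Call once at startup; the sqrt and division happen a single time. */
void constants_init(void)
{
    one_over_sqrt3 = 1.0f / sqrtf(3.0f);
}

/* Hot path: one multiply, no function call. */
float scale_b(float inputB)
{
    return one_over_sqrt3 * inputB;
}
```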

#### MrChips

Joined Oct 2, 2009
24,178
If the objective is to reduce execution time then you are going about it the wrong way.
You can improve code efficiency by factors of 20-50 times by not using float.

If precision is your major concern, then multiplying by 2/3 becomes
a multiplication by 43691 followed by a shift right of 16 bits.

Multiplying by 1/√3 becomes a multiplication by 37837 followed by a shift right of 16 bits.

Therefore, if one were to assume (as an example) that inputA and inputB are 16-bit integer readings from an ADC, your uncertainty is already greater than 1 LSB. Use uint64_t data type and perform integer arithmetic.

As an exercise, do it both ways, float vs integer and compare execution times and precision.
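The multiply-and-shift scheme described above can be sketched in C. Here 43691 ≈ (2/3)·2^16 and 37837 ≈ 2^16/√3, so each scaling costs one integer multiply and one shift:

```c
#include <stdint.h>

/* x * 2/3 in Q16 fixed point: (x * 43691) >> 16 */
static inline uint16_t mul_two_thirds(uint16_t x)
{
    return (uint16_t)(((uint32_t)x * 43691u) >> 16);
}

/* x * 1/sqrt(3) in Q16 fixed point: (x * 37837) >> 16 */
static inline uint16_t mul_one_over_sqrt3(uint16_t x)
{
    return (uint16_t)(((uint32_t)x * 37837u) >> 16);
}
```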

#### TechWise

Joined Aug 24, 2018
151
If the objective is to reduce execution time then you are going about it the wrong way.
You can improve code efficiency by factors of 20-50 times by not using float.
I assume you are referring to the performance cost of floating point versus fixed point. If you are referring to another issue, then the following is probably irrelevant.

Using floating point arithmetic throughout my control algorithm has made the whole thing a lot simpler for me and reduced my development time a great deal. I'm using a powerful C2000 series processor with built in hardware support for floating point operations so I am not yet close to running out of resources.

The question was posed because my curiosity was piqued when I saw Texas Instruments #define -ing a lot of constants in their header files. Also, even though using floating point is resource intensive, it seems like a very simple improvement that I could make to my code with minimal effort even if there are bigger fish to fry. A lot of these constants are used multiple times in my code.

#### MrChips

Joined Oct 2, 2009
24,178
Since #define is simply text substitution, there is little performance advantage on a high-speed processor.

The purpose of #define and header (.h) files is to keep all user definitions in one place. This assists in code and project reusability and maintenance.

#### MrChips

Joined Oct 2, 2009
24,178
btw, this is my perpetual rant about today's coders and software and hardware.

SW users and CEOs demand more features in apps. Execution time takes a big hit. Coders and project managers demand more memory space and processor speed. They take the easy route. Your computer hw and sw is obsolete within a couple of years. You need to upgrade your HW, OS and app SW.

Why does any computer take more than 60 seconds to start up, and many times more than 60 seconds to shut down?
Imagine if you had to wait 60 seconds for your car's engine computer to start up, and 60 seconds before you could open the door and exit your vehicle!

#### andrewmm

Joined Feb 25, 2011
1,751
To highlight what has been said above:

#define makes no difference to the C compiler.
The # says to run the preprocessor, which does the text substitution for you, so the C compiler never sees the #define anyway.

The advantage, and it is a great advantage, is putting all the user-defined bits in one place
and giving them useful names.

So if I was defining something that had timing information in it, times A and B, which could be the same or different,

I could put
#define base_time_A 120
#define base_time_B 120

Then wherever in the code I referred to time unit A or B, I could either type in 120 and 120 (or 60 if I wanted half the time), or I could type base_time_A and base_time_A / 2.

Now think about the user in a year's time who looks at your code.

In one case they see base_time_A, base_time_A / 2, and such like in the code;

in the other they see 120 and 60 in the code,
but the 120 could be referring to some other unit,
and which 120 is for which time? It's totally ambiguous.

If you use #define, the code is "obvious",
and what's more, it's easy to reliably change the time.

Do not use "magic" numbers in your code; use constants / #defines.
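A minimal sketch of that pattern (the names and the value 120 are the hypothetical ones from the post):

```c
#define base_time_A 120
#define base_time_B 120

/* The intent is readable, and changing a period means editing one line. */
int window_ticks(void)      { return base_time_A; }
int half_window_ticks(void) { return base_time_A / 2; }
```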

#### WBahn

Joined Mar 31, 2012
26,398
I understand the #define part alright. It is basically the preprocessor doing a "find and replace" so if I #define TWO_OVER_THREE 0.66666666666667 then it goes through the file and replaces TWO_OVER_THREE with 0.66666666666667 before the compiler is even invoked.

My question was more about whether the compiler was clever enough to realise that 2/3 is always 0.6666667 and therefore evaluate it at compile time and store it as a constant, rather than asking the microprocessor to compute it.

It states in the compiler reference (TMS320C28x Optimizing C/C++ Compiler v18.12.0.LTS) that:
"For optimal evaluation, the compiler simplifies expressions into equivalent forms, requiring fewer instructions or registers. Operations between constants are folded into single constants. For example, a = (b + 4) - (c + 1) becomes a = b - c + 3."
However, it doesn't say what happens if b and c happen to be constants in this example.
If b and c are both constants, then many (though not all) compilers would replace it with a = constant. Some compilers would also see whether they can eliminate the assignment altogether and replace occurrences of 'a' elsewhere with the constant's value, provided they can determine that doing so is safe. Compiler optimizations are largely a black art, and they are a large part of the bread and butter that separates one compiler from another. So the optimizations can run from non-existent (a cheap compiler for a niche part) to exceptionally arcane, employing techniques that are barely on this side of magic.
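A sketch of the folding described above, assuming b and c are compile-time constants; most optimizing compilers reduce the whole function body to a single constant load:

```c
int folded(void)
{
    const int b = 10;
    const int c = 2;
    /* (b + 4) - (c + 1) simplifies to b - c + 3 = 11; an optimizing
       compiler typically emits just "return 11" here. */
    return (b + 4) - (c + 1);
}
```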