Should a variable used all the time be declared as static?

Thread Starter

TechWise

Joined Aug 24, 2018
151
Let's say I have an algorithm triggered periodically by a timer interrupt on a microcontroller. Let's also assume that I have loads of memory to play with and speed of execution is my only concern. If there are local variables inside that algorithm, is there a performance benefit from declaring them as static versus having them allocated on the stack every time the function is called? What would be the pros and cons?
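
For concreteness, here is a sketch of the two alternatives I have in mind (the names are just illustrative):

```c
#include <stdint.h>

/* Alternative 1: an automatic (stack) local, created on every call. */
void process_samples(void)
{
    int32_t acc = 0;   /* allocated on the stack for this call only */
    /* ... algorithm body ... */
    (void)acc;
}

/* Alternative 2: the same variable declared static, allocated once
   in the data section and reused across calls. */
void process_samples_static(void)
{
    static int32_t acc;   /* persists between calls */
    acc = 0;              /* must be reset by hand if a fresh value is needed */
    /* ... algorithm body ... */
}
```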
 

nsaspook

Joined Aug 27, 2009
13,087
With a non-cached controller the overhead for local variables is effectively zero, as stack allocation is already done to store task return data for the interrupt; extra variables just change the stack pointer value. Use static variables inside functions for what they were mainly designed for: persistent storage between executions. Static variables might even be slower if the data access requires extended addressing inside the function.
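
A minimal sketch of that intended use, persistent state between calls (names are illustrative):

```c
#include <stdint.h>

/* Persistent state across calls: the case static was designed for.
   Here the call count must survive between invocations. */
uint32_t count_calls(void)
{
    static uint32_t count = 0;   /* initialized once, kept across calls */
    return ++count;
}
```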

https://wiki.c2.com/?PrematureOptimization
Premature optimization is the root of all evil -- DonaldKnuth
 
Last edited:

402DF855

Joined Feb 9, 2013
271
Generally I would not expect much performance impact from allocating variables on the stack versus globally. Allocating stack space is often just subtracting from the stack pointer, which is likely a register. A global variable may require extra instructions to load an address.

In embedded systems it is probably more important to be aware of stack allocations to avoid overrun and memory corruption. Simplifying (reducing) stack usage can be very helpful in proving that enough space has been set aside; stack corruption is usually a very nasty condition to track down. Statically allocated space simplifies device behavior, which can help quality at the expense of flexibility. For example, a function argument is often passed on the stack. An alternative is to put the argument in global memory, but this impacts reentrancy and recursion.
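
A hedged sketch of that trade-off (hypothetical names): passing through a global saves stack traffic but breaks reentrancy, because a second caller, e.g. an ISR, can overwrite the value mid-use:

```c
#include <stdint.h>

int32_t g_arg;   /* global "argument" slot (hypothetical) */

/* Stack-passed argument: reentrant, safe even if an ISR calls it too. */
int32_t square_stack(int32_t x)
{
    return x * x;
}

/* Global-passed argument: saves stack traffic, but if an ISR calls
   this while main() is between writing g_arg and reading the result,
   main()'s value is silently overwritten. */
int32_t square_global(void)
{
    return g_arg * g_arg;
}
```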

Ultimately, optimizing the performance of any code involves trying various alternatives and explicitly measuring the timing impacts. In the old days we could count cycles in the emitted assembly language, but with the complex chips of today one should observe the behavior directly. For example, call your "algorithm" 1000 times and blink an LED or toggle a discrete IO, then measure with your scope.
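
Something like this (the pin macros are placeholders for whatever your part provides):

```c
#include <stdint.h>

/* Placeholders: map these onto your part's GPIO set/clear registers. */
#define TEST_PIN_HIGH()   /* e.g. write the port set register */
#define TEST_PIN_LOW()    /* e.g. write the port clear register */

extern void algorithm(void);   /* the routine under test */

void benchmark(void)
{
    TEST_PIN_HIGH();                        /* scope sees the edge: start */
    for (uint32_t i = 0; i < 1000u; i++) {  /* amortize the toggle overhead */
        algorithm();
    }
    TEST_PIN_LOW();                         /* scope sees the edge: stop */
}
```

The pulse width on the scope divided by 1000 gives the per-call time.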
 
Last edited:

AlbertHall

Joined Jun 4, 2014
12,345
For a PIC, if you can run the code in the simulator, there is a stopwatch which will time between breakpoints in both cycles and time.
 

Deleted member 115935

Let's say I have an algorithm triggered periodically by a timer interrupt on a microcontroller. Let's also assume that I have loads of memory to play with and speed of execution is my only concern. If there are local variables inside that algorithm, is there a performance benefit from declaring them as static versus having them allocated on the stack every time the function is called? What would be the pros and cons?
As an aside, I would generally suggest doing the minimum "work" inside the interrupt. Most processors mask other interrupts, or only have a one-deep stack, while they are servicing an interrupt, so it's easy to miss an interrupt.

I have seen some "great" interrupt handlers that do an FFT inside the interrupt, and the authors wonder why interrupts were being missed. Even a simple multiply can be too much time; I have seen that on set-top boxes with a line interrupt.
 

Thread Starter

TechWise

Joined Aug 24, 2018
151
As an aside, I would generally suggest doing the minimum "work" inside the interrupt. Most processors mask other interrupts, or only have a one-deep stack, while they are servicing an interrupt, so it's easy to miss an interrupt.

I have seen some "great" interrupt handlers that do an FFT inside the interrupt, and the authors wonder why interrupts were being missed. Even a simple multiply can be too much time; I have seen that on set-top boxes with a line interrupt.
I didn't make it clear in the original post, but all the ISR is doing is setting a "samplesReady" flag which is polled in main(). At present there is only the one interrupt source, which is periodic and comes from a timer, so everything is nice and predictable. In the future there may be "receive" interrupts from a UART peripheral which could upset things, which is why I kept the timer ISR short.
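
Roughly this pattern (simplified, with illustrative names; the flag is volatile so the polling loop actually re-reads it):

```c
#include <stdbool.h>

static volatile bool samplesReady = false;  /* volatile: written by the ISR */

/* Timer ISR: bare minimum, just raise the flag.
   (The interrupt attribute/vector setup is compiler-specific.) */
void timer_isr(void)
{
    samplesReady = true;
}

int main(void)
{
    for (;;) {
        if (samplesReady) {
            samplesReady = false;
            /* run the algorithm here, outside interrupt context */
        }
    }
}
```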
 

Thread Starter

TechWise

Joined Aug 24, 2018
151
With a non-cached controller the overhead for local variables is effectively zero, as stack allocation is already done to store task return data for the interrupt; extra variables just change the stack pointer value. Use static variables inside functions for what they were mainly designed for: persistent storage between executions. Static variables might even be slower if the data access requires extended addressing inside the function.

https://wiki.c2.com/?PrematureOptimization
Premature optimization is the root of all evil -- DonaldKnuth
I've added that to my bookmarks to read later. A lot of interesting points in there, based on a quick skim.
 

Thread Starter

TechWise

Joined Aug 24, 2018
151
For a PIC, if you can run the code in the simulator, there is a stopwatch which will time between breakpoints in both cycles and time.
I'm working on a TI C2000 platform and I've found that setting a GPIO at the start of the routine and clearing it at the end is a reasonable way of figuring out the burden of certain sections. It's obviously not very precise, and it's difficult to discern small changes.
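
For reference, the toggle itself is just a couple of register writes on the C2000 (assuming TI's device-support headers; GPIO0 is an arbitrary pin choice):

```c
/* Assumes TI's C2000Ware device-support headers, e.g. for an F2837x. */
#include "F28x_Project.h"

void timed_routine(void)
{
    GpioDataRegs.GPASET.bit.GPIO0 = 1;    /* pin high: section start */
    /* ... section being measured ... */
    GpioDataRegs.GPACLEAR.bit.GPIO0 = 1;  /* pin low: section end */
}
```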
 