Microcontroller: is there some way to measure code efficiency?

Thread Starter

andriasm

Joined Aug 2, 2022
25
Is there some way to measure code efficiency?
For example... can I know when there is enough code in my microcontroller?

If I set a GPIO to toggle its state in the main loop and measure it with an oscilloscope, can I get a useful result?
 

Papabravo

Joined Feb 24, 2006
21,225
Is there some way to measure code efficiency?
For example... can I know when there is enough code in my microcontroller?

If I set a GPIO to toggle its state in the main loop and measure it with an oscilloscope, can I get a useful result?
Yes to all three questions
 

nsaspook

Joined Aug 27, 2009
13,267
Is there some way to measure code efficiency?
For example... can I know when there is enough code in my microcontroller?

If I set a GPIO to toggle its state in the main loop and measure it with an oscilloscope, can I get a useful result?
Sure. You can instrument the controller (using simple GPIO signals and internal counters) to provide utilization information for resources like CPU cycles, memory cycles, and interrupt cycles, which change depending on the method (software or hardware) used to handle a problem.
https://forum.allaboutcircuits.com/threads/pic32mk-mc-qei-example.150351/post-1617150
https://forum.allaboutcircuits.com/threads/fifo-logic-block-meta-stability.136284/post-1143241
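A minimal sketch of that GPIO-toggle idea (generic C, not taken from the linked posts; the pin helpers below are stand-ins for whatever your vendor's GPIO calls actually are): drive a spare pin high while the main loop is doing work and low while it is idle, and the duty cycle on the oscilloscope approximates CPU utilization.

#include <stdbool.h>

static void busy_pin_high(void) { /* e.g. a LATxSET write on a PIC32, or your HAL's pin-set call */ }
static void busy_pin_low(void)  { /* matching pin-clear call */ }
static bool work_pending(void)  { return true; /* placeholder: poll your flags/queues here */ }
static void do_work(void)       { /* the application's real processing */ }

int main(void)
{
    for (;;) {
        if (work_pending()) {
            busy_pin_high();   /* scope reads high while the CPU is busy */
            do_work();
            busy_pin_low();    /* scope reads low while the CPU is idle  */
        }
    }
}

The same trick at the entry and exit of an ISR shows how much of each period the interrupt handler consumes.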
 

Ya’akov

Joined Jan 27, 2019
9,150
As a practical matter, “efficiency” is a multidimensional entity that includes the time you spend writing, debugging, and testing the code. If you spend time trying to work out some theoretical efficiency metric, which will have no particular effect in the real world, your code will be less efficient in real terms.

The proper way to answer your question, hinted at in other responses, is to answer the question “what are the Key Performance Indicators (KPIs) for my project?”

The KPIs could be things like cost, energy use, speed, reliability, time-to-completion, and a number of other things that you have to work out based on the project’s specifications.

The goal becomes optimizing the satisfaction of these things. If you manage to bum three instructions from an assembler loop in an 8-hour workday, it may make you feel good, but in light of the analysis (e.g.: cost (your time), time-to-market (day lost), speed (does it change the benchmarks?)) you have made your code less “efficient” in the only way that really matters.

If you have a KPI for execution time, and you meet it, there is nothing useful about knowing some theoretical efficiency number. Of course, wisdom is required when creating the KPIs. For example, if you have an execution-time KPI, does it include the idea of future feature adds? Do you have room to do that, or have you used up the present and future execution-time budget on the first version?

There was a rule of thumb for early big-iron OLTP (Online Transaction Processing) systems: the user can tolerate a 3-second delay before believing there is something wrong or feeling the system is “slow”.

There is an anecdote, based on real practice but probably itself apocryphal, that a junior programmer was working on such a system and spent a long time making his part very fast. He achieved subsecond response times and was very proud of himself. He demoed the code for the project manager who was suitably impressed with the work but told him to put in a delay of 2.95 seconds before the system sent the response.

The junior was confused: he had worked so hard to make the system fast, and here the PM wanted it to do nothing for an eternity compared to his fast, efficient code. Exasperated by the instruction, he asked the PM why on earth he would want to slow the code down!

The PM calmly explained to this neophyte that it was all well and good that his subsystem returned so quickly, and that it was good work and very helpful, but they had a 3-second budget for response time, and just like departmental budgets, if you don't use it you lose it.

The delay was a placeholder to accommodate feature creep, unexpected load, and other vagaries of the real world so that when they were required to add to the system, it, at least for a while, would be as fast as ever and acceptable to the users. It was a way of managing expectations so the system didn’t “get slower” as demanded features were added.

So, in this case, the most “efficient” code spent 2.95 of its 3 allotted seconds doing nothing at all.
 

BobaMosfet

Joined Jul 1, 2009
2,113
Is there some way to measure code efficiency?
For example... can I know when there is enough code in my microcontroller?

If I set a GPIO to toggle its state in the main loop and measure it with an oscilloscope, can I get a useful result?
'Code efficiency' is based on how good a programmer you are, not on your MCU. Learn flow-charting, and then learn your programming language inside and out: not just the syntax, but _how_ to really use the language effectively. One thing I recommend is learning assembly language. When I began, decades ago, I went from BASIC to assembly because assembly let me do the coolest things and was *so fast* compared to anything else. I later learned Pascal, then C, and over 40 other programming languages, many of which I still use today.
 

BobTPH

Joined Jun 5, 2013
8,957
What a fun thread this is. Everyone gets to use their own concept of code efficiency, and no one has bothered to give a metric for it. The question was how to measure it. One cannot measure something that does not have a metric.

Efficiency is usually applied to a process that does some kind of transformation with an inherent loss. Thus the efficiency is a measure of how close we get to a lossless conversion, with 100% being the upper limit.

How does this apply to a program? What characterizes a 100% efficient program?

Is it the program with the fewest bytes of code? Well, show me a 100% efficient program, and I can show you another one that is faster.

Computational complexity theory has proved that there is no optimal program by any measure of complexity (the Blum speedup theorem). You can always come up with another program which is better for all but a finite set of inputs. But this was a full-semester course in grad school, so I cannot give you a proof in a forum post.
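For reference, here is the usual statement of the result (statement only, paraphrased by the editor, with Phi_i(x) a Blum complexity measure such as the step count of program i on input x): for every total computable function r there is a total computable predicate f such that for every program i computing f there is another program j computing f with

\[
  r\bigl(x, \Phi_j(x)\bigr) \;\le\; \Phi_i(x) \qquad \text{for almost all } x .
\]

Taking, say, r(x, y) = 2^y shows that for this particular f every program admits an exponentially faster one on all but finitely many inputs, so no program for f is optimal.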
 

nsaspook

Joined Aug 27, 2009
13,267
My standard project metric for Code Efficiency is usually the ability to add more features to existing systems. That usually means letting hardware (modules that work independently of the processor) do more work instead of relying on clever pure-software algorithms.
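A rough sketch of that idea (the helper names here are hypothetical stand-ins, not any particular vendor's API): the software version ties up the CPU generating a waveform forever, while the hardware version configures a PWM module once and leaves the CPU free for additional features.

static void pin_high(void)  { /* vendor GPIO set call goes here   */ }
static void pin_low(void)   { /* vendor GPIO clear call goes here */ }
static void delay_us(unsigned us) { (void)us; /* busy-wait or timer delay */ }
static void pwm_init(unsigned period_us, unsigned duty_us) { (void)period_us; (void)duty_us; /* configure the PWM module */ }
static void pwm_start(void) { /* enable the PWM module */ }

/* Pure software: the CPU is tied up generating the waveform forever. */
void waveform_software(void)
{
    for (;;) {
        pin_high();
        delay_us(750);
        pin_low();
        delay_us(250);
    }
}

/* Hardware offload: configure the peripheral once, then return. */
void waveform_hardware(void)
{
    pwm_init(1000, 750);   /* 1 ms period, 75% duty */
    pwm_start();
}

int main(void)
{
    waveform_hardware();   /* peripheral keeps running on its own */
    for (;;) { /* CPU free here for the next feature */ }
}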
 

Gorbag

Joined Aug 29, 2020
13
Computational complexity theory has proved that there is no optimal program by any measure of complexity (the Blum speedup theorem). You can always come up with another program which is better for all but a finite set of inputs. But this was a full-semester course in grad school, so I cannot give you a proof in a forum post.
I think Blum only applies to sufficiently complex algorithms (one might even call them malgorithms), but it's a good reminder that any measure of efficiency should be a) defined and b) not considered in isolation (at least in non-academic projects).

OP: Most of the time a better algorithm is more useful (time/space-wise) than just doing local optimization on code (unrolling loops, say). That means knowing, for the size of the problem you are concerned with, what the best representation for the data is, what the best ways to access that data are, etc.

For example, what would be the most time efficient way to calculate the number of '1' bits in a 32 bit word? What would be the most space efficient?
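As an illustration of that trade-off, a sketch in generic C (the editor's example, not a definitive answer): Kernighan's loop is about as small as the code can get but its run time depends on the data, while the parallel ("SWAR") version uses more code and runs in constant time. A 256-entry lookup table, or a compiler builtin such as GCC/Clang's __builtin_popcount, trades still more space (or a hardware instruction) for speed.

#include <stdint.h>
#include <stdio.h>

/* Space-efficient: Kernighan's loop -- a few instructions, one iteration per set bit. */
static unsigned popcount_small(uint32_t x)
{
    unsigned n = 0;
    while (x) {
        x &= x - 1;   /* clears the lowest set bit */
        n++;
    }
    return n;
}

/* Time-efficient: parallel ("SWAR") bit count -- constant time, no table, more code. */
static unsigned popcount_fast(uint32_t x)
{
    x = x - ((x >> 1) & 0x55555555u);                 /* 2-bit sums */
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* 4-bit sums */
    x = (x + (x >> 4)) & 0x0F0F0F0Fu;                 /* 8-bit sums */
    return (x * 0x01010101u) >> 24;                   /* add the four bytes */
}

int main(void)
{
    uint32_t v = 0xF00F0001u;   /* 9 set bits */
    printf("%u %u\n", popcount_small(v), popcount_fast(v));
    return 0;
}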
 