Using fixed-point notation on dsPIC

Discussion in 'Embedded Systems and Microcontrollers' started by darkfeffy, Jul 17, 2012.

Hi
I want to use a dsPIC33F (16-bit) for a closed-loop dc-dc converter application. I have some difficulty understanding certain details about fixed-point notation and conversions. From sample code, I see instructions like:

currentReferenceDynamic = SineTable512[((sineAngle) >> 5)];

I need to understand what the operators "<<" and ">>" do in these instructions. Could someone explain please? Any documents/links that explain these kinds of operations?

Thanks.
ed.

2. MrChips Moderator

<< means shift left.

Hence << 5 is shift left 5 bits which is a quick way to multiply by 32.

Similarly, >> 5 means shift right 5 bits, which is an integer divide by 32.

3. darkfeffy
Thanks MrChips

4. WBahn Moderator

These are "shift" operations.

The << is the 'left shift' operator, and the expression evaluates to the value of the left operand (the ADCBUF3) shifted left by the right operand (the '5'). So, in this case, the value stored in pvOutputVoltage is what you would get if you wrote the value in ADCBUF3 in binary and then moved all of the bits five places to the left (you lose the high-order 5 bits since they 'fall off the end') and backfill the right-hand side with five zeros.

The >> is the 'right shift' operator and is basically the same except the value in sineAngle is shifted to the right five places. The five lowest-order bits fall off the end and are lost. There is a subtlety, however, that can bite you. Depending on the language and the data type of the argument, the backfill that occurs on the left could either be all zeros (making it a logical right shift) or the most significant bit (the sign bit) could be copied into all of the vacated positions (making it an arithmetic right shift). In C, right shifts on unsigned types are logical; for signed types holding a negative value the result is implementation-defined, though most compilers perform an arithmetic shift.

Shifting left by one place is the equivalent of multiplying by 2. So shifting left by 5 places is the same as multiplying by 2 five times, or by 2^5 = 32. Right shifting by one place is roughly equivalent to dividing by 2, so right shifting by five places is the same as dividing by 32. However, right shifting and division are not exactly the same in the case of negative integers. In C, the standard does not specify what a compiler does when a negative value is right shifted; it only requires that the compiler document what it happens to do (the behavior is implementation-defined).

In the case of floating-point variables, a shift is an illegal operation: the C standard requires both operands of a shift to have integer type, so the compiler is required to reject it.

6. MrChips Moderator

That depends on the compiler. An optimizing compiler might recognize that *32 can be implemented as << 5, which is usually faster.

Or if the MCU architecture has hardware multiply or divide, doing the integer multiply or divide could be just as fast. Hence a lot depends on the compiler as well as the processor hardware.

Performing 5 shift operations in assembler is always faster than integer multiply or divide in software.

7. WBahn Moderator

Depends on how good at optimizing the compiler is and what the capabilities of the processor are. Most higher-end (and even many pretty low-end) processors have barrel shifters/rotators that can perform arbitrary shifts and rotates in either direction in one instruction cycle. Hardware multipliers can make the execution of arbitrary multiplication very efficient, but the set of processors that support them is a lot smaller (though it goes further down the food chain in DSP-oriented chips than in general-purpose MCUs). Arbitrary integer division is still pretty expensive, even on high-end processors.

A good optimizing compiler can frequently identify when a multiplication or division can be replaced by a shift, but not always. It might also optimize some lines and not others; a lot of that seems to have to do with how macros evaluate, where in the overall process the optimization occurs, and whether one of the patterns it is able to recognize exists at that point. This is a pain for anything resembling real-time code because the performance of your code could change significantly from one compile to the next for no apparent reason.

For embedded code, I would generally recommend that you do as much of that optimization yourself as you can, but be forewarned that sometimes you can slit your own throat and box the compiler into accepting how you wrote it instead of using an even better optimization it would have seen had you written it more generically.