Choosing the right data size

Thread Starter

danielb33

Joined Aug 20, 2012
105
Jakob Engblom claims that when working with an 8-bit processor, you should try to always use 8-bit variables, because using 32-bit variables will bog the processor down. When working with a 32-bit processor, you should try to always use 32-bit variables, because the processor uses 32-bit registers anyway; smaller variables just force the compiler to insert shift, mask, and sign-extend operations into the code (depending on how the smaller types are represented). He also states that this does not apply to structures, arrays, and similar aggregates, which obviously take up much more space in memory.
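
Here's a minimal C sketch of the kind of code I think he means (the function names are mine; the actual code generation depends entirely on the compiler and target):

#include <stdint.h>

/* On a 32-bit machine, the 8-bit counter can force the compiler to
   mask the value back to 8 bits after every increment so that it
   wraps correctly, even though it lives in a 32-bit register. */
uint32_t sum_narrow(const uint8_t *buf, uint8_t n)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}

/* With a native-width counter, no masking is needed. */
uint32_t sum_wide(const uint8_t *buf, uint32_t n)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}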

My question is: is he only talking about local variables? I am not sure whether a 32-bit processor only uses its 32-bit registers for storing locals, or whether it uses them for every kind of memory access. I know I am missing the main point he is getting at, through a lack of microarchitecture or programming understanding. Any ideas?
 

MrChips

Joined Oct 2, 2009
30,711
I don't know who he is, never heard of him, and it appears that he doesn't know anything about microprocessor architecture.
 

WBahn

Joined Mar 31, 2012
29,978
Well, I'm not going to wade through the 39 points to find the specific one you are talking about. Perhaps you could say which number it is?

In general, a processor is optimized for a certain datapath width, and if your data conforms to that size you will get the "best" performance (notice that "best" is in quotes, which means that I couldn't think of a better term to use but recognize that this is not strictly correct). This is why the size of the basic data types in some languages, such as C, is not specified by the language but rather by the implementation. So, in C, the int data type used to be almost universally a 16-bit integer but is now almost universally 32-bit. Who knows what it will be in ten or twenty years.
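
C99 acknowledges this implementation dependence explicitly with the <stdint.h> types: int32_t is exactly 32 bits, while int_fast8_t is whatever width the implementation considers fastest for at-least-8-bit math. A quick way to see what your own toolchain picked:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* sizes reported in bits, assuming the usual 8-bit char */
    printf("int          : %zu bits\n", sizeof(int) * 8);
    printf("int_fast8_t  : %zu bits\n", sizeof(int_fast8_t) * 8);
    printf("int_fast16_t : %zu bits\n", sizeof(int_fast16_t) * 8);
    return 0;
}

The answers vary from platform to platform, which is exactly the point.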

But modern processors tend to have intrinsic capabilities designed to work efficiently with (mostly) smaller data sizes. This is particularly the case with superscalar processors, in which you can leverage SIMD (Single Instruction, Multiple Data) capabilities to perform multiple operations on smaller data types in parallel within the larger datapath. It is less the case for embedded and lower-end processors.
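
As a concrete sketch of that SIMD idea, assuming an x86 target with SSE2 (the function name is mine; the intrinsics are the standard Intel ones from emmintrin.h):

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Add two 16-byte arrays element-wise: sixteen independent 8-bit
   additions (with wraparound) performed by a single instruction
   inside one 128-bit register. */
void add16_bytes(uint8_t *dst, const uint8_t *a, const uint8_t *b)
{
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)dst, _mm_add_epi8(va, vb));
}

Here the smaller data type is what makes the parallelism possible, so "always use the native width" is not a universal rule.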
 

Thread Starter

danielb33

Joined Aug 20, 2012
105
MrChips, I highly doubt he knows nothing. He has a PhD in computer engineering and developed compilers at IAR for many years.
WBahn, that makes more sense. I think I am getting in a little over my head lol.
 

MrChips

Joined Oct 2, 2009
30,711
Thanks for the update. It is possible that his statements were taken out of context. The choice of data size versus efficiency will depend on the specific processor and compiler. For maximum efficiency one would choose assembly, since one does not have full control over what the compiler does. Complete knowledge of the processor architecture and instruction set would be required to make a definitive statement.
 

MrChips

Joined Oct 2, 2009
30,711
Let me give you an example. I do a lot of work with digital signal processors (DSP).
The one I use is a 16-bit machine, but the arithmetic-logic unit (ALU) has 32-bit registers. In every application there is a lot of byte/character processing going on. Most processors can handle 8-bit operations efficiently, so using 16-bit data for that work would be inefficient.

In my interrupt routines, all the code is written in ASM for efficiency. Sometimes I choose 16-bit, 32-bit, or 36-bit arithmetic depending on what needs to be done. By choosing ASM I can tailor the code to suit the math and get the best performance from the processor.
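
A C analogue of that kind of width choice (a hypothetical dot product; a real DSP would do this in its MAC instruction, and odd accumulator widths like 36 bits are typically 32 bits plus guard bits against overflow):

#include <stdint.h>

/* 16-bit samples, 32-bit accumulator: each product of two 16-bit
   values needs up to 32 bits, so the running sum is kept in a
   register twice the width of the data. With many terms even this
   can overflow, which is what a DSP's guard bits are for. */
int32_t dot_product(const int16_t *a, const int16_t *b, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}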
 