[SOLVED] short keyword

Thread Starter

Djsarakar

Joined Jul 26, 2020
489
Hi
I have PIC18F45K80. I am using MPLABX 5.40 and XC8 2.30.

int and short are two different keywords in the C language. I understand the int data type very well; whenever I have to store an integer value I use int in my code, but I don't know where the short keyword should be used.


unsigned int a = 60000; // statement 1
unsigned short int b = 60000; // statement 2

Similarities between the two statements:
same size (16 bits)
same range (0 to 65535)

I don't see any difference other than the keyword name.
 

MrChips

Joined Oct 2, 2009
30,711
As everyone else is saying, the size of int, long, short, etc. depends on how it is implemented by the compiler.

int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t

etc. from <stdint.h> will always have exactly the stated width on any compiler that provides them.
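A minimal sketch of how these look in use (the variable names here are just illustrative):

#include <stdint.h>

uint8_t  duty      = 128;     /* exactly 8 bits, 0 to 255     */
uint16_t adc_value = 60000;   /* exactly 16 bits, 0 to 65535  */
int32_t  error_sum = -1234;   /* exactly 32 bits, signed      */

On the XC8/PIC18 setup in the question, uint16_t works out to the same 16 bits as unsigned int; the difference is that the name states the width explicitly, so the code still means the same thing if it is ever moved to a compiler where int is wider.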
 

Thread Starter

Djsarakar

Joined Jul 26, 2020
489
Sorry, but I still do not understand. Let's say we have the two type names unsigned int and unsigned short int. If I want to store a value from 0 to 65535,
I would use unsigned int digit;

I am asking when and where the short keyword should be used.

Has anybody here ever used the short keyword in a program?
 

WBahn

Joined Mar 31, 2012
29,978
Hi
I have PIC18F45K80. I am using MPLABX 5.40 and XC8 2.30.

int and short are two different keywords in the C language. I understand the int data type very well; whenever I have to store an integer value I use int in my code, but I don't know where the short keyword should be used.


unsigned int a = 60000; // statement 1
unsigned short int b = 60000; // statement 2

Similarities between the two statements:
same size (16 bits)
same range (0 to 65535)

I don't see any difference other than the keyword name.
The C language specification does not mandate specific, fixed widths for the original integer data types, only minimums. The idea was that the 'int' data type would be the natural data type for whatever processor the compiler was targeting, and whoever wrote the compiler would define the other sizes in a way that was reasonable for that processor (as long as they were at least as long as the standard required).

The first compiler I used had a 16-bit 'int' and a 16-bit 'short', so they were the same. I wrote code that relied on an 'int' being 16 bits (hey, I was brand new to C programming) because I thought that an 'int' was 16 bits. When I later recompiled that code on a new compiler that was targeting a 32-bit machine, the code broke in a way that was very hard to track down. So I developed my own custom data types that were fixed width and that were keyed to an include file that mapped them to the appropriate native types for that compiler. Later, the C language standard adopted a similar approach with the stdint.h header file. I would strongly recommend using those data types, especially for embedded programming where it can really matter.
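A minimal sketch of the kind of per-compiler header described above (the names are made up, and the right-hand types would have to be checked against each compiler's documentation):

/* mytypes.h -- one copy per compiler/target, edited to match that compiler's sizes */
typedef signed char      s8;    /* 8-bit signed                      */
typedef unsigned char    u8;    /* 8-bit unsigned                    */
typedef signed short     s16;   /* 16-bit signed on this compiler    */
typedef unsigned short   u16;   /* 16-bit unsigned on this compiler  */
typedef signed long      s32;   /* 32-bit signed on this compiler    */
typedef unsigned long    u32;   /* 32-bit unsigned on this compiler  */

stdint.h does exactly this for you now: int16_t, uint32_t and the rest are typedefs that each compiler maps onto whichever of its native types has the required width.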
 

MrChips

Joined Oct 2, 2009
30,711
int16_t is more difficult to type than int.

If you are programming for fun, int is 3 keystrokes and short is 5 keystrokes.
If you are going to become a professional programmer, get into the habit of typing int16_t.
 

John P

Joined Oct 14, 2008
2,025
The C language specification does not mandate specific, fixed widths for the original integer data types, only minimums. The idea was that the 'int' data type would be the natural data type for whatever processor the compiler was targeting, and whoever wrote the compiler would define the other sizes in a way that was reasonable for that processor (as long as they were at least as long as the standard required)...
When I started using PIC processors, I had the CCS compiler, and they followed that convention exactly. It was an 8-bit processor, and so its native word was a single byte, and "int" in that compiler was a byte, the same as "char". If you wanted 16 bits, you had to specify "long". I don't know if CCS still does this; it may be correct, but for most people it's confusing.

I just tried this line on a PC:
printf ("%d %d %d %d %d", sizeof(short unsigned), sizeof(short int), sizeof(int), sizeof(long), sizeof(long long));

and got 2 2 4 4 8.

I put this line at the start of every program I write, PC or microcontroller:
typedef unsigned char byte;

To make things easier to type, typedef 'em.
 

WBahn

Joined Mar 31, 2012
29,978
When I started using PIC processors, I had the CCS compiler, and they followed that convention exactly. It was an 8-bit processor, and so its native word was a single byte, and "int" in that compiler was a byte, the same as "char". If you wanted 16 bits, you had to specify "long". I don't know if CCS still does this; it may be correct, but for most people it's confusing.

I just tried this line on a PC:
printf ("%d %d %d %d %d", sizeof(short unsigned), sizeof(short int), sizeof(int), sizeof(long), sizeof(long long));

and got 2 2 4 4 8.

I put this line at the start of every program I write, PC or microcontroller:
typedef unsigned char byte;

To make things easier to type, typedef 'em.
Actually, if that compiler used an 8-bit 'int', then it was not conforming to the standard. AFAIK, even the earliest standard required an 'int' to be a minimum of 16 bits.

It doesn't surprise me, however, that a C compiler for embedded processors would be non-conforming -- many of them were/are -- and defining an 'int' to be the natural size of the processor would be in keeping with the spirit of the standard, if not the letter.
 

Thread Starter

Djsarakar

Joined Jul 26, 2020
489
The C language specification does not mandate specific, fixed widths for the original integer data types, only minimums. The idea was that the 'int' data type would be the natural data type for whatever processor the compiler was targeting, and whoever wrote the compiler would define the other sizes in a way that was reasonable for that processor (as long as they were at least as long as the standard required).
I still can't convince myself that int and short are the same. I strongly believe they are different, but I don't understand how they are different.

How is the short keyword different from the int keyword in standard C?
 

WBahn

Joined Mar 31, 2012
29,978
I still can't convince myself that int and short are the same. I strongly believe they are different, but I don't understand how they are different.

How is the short keyword different from the int keyword in standard C?
In general they aren't the same. The C standard requires that each of them be AT LEAST 16 bits wide. It does NOT require that they both be exactly 16 bits NOR that they both be the same length.

On machines that had 16-bit data paths, it was common for both 'short' and 'int' to be the same.

But on machines that had 32-bit data paths, an 'int' is commonly 32 bits while a short is often 16 bits.

On machines that have 64-bit data paths, an 'int' is usually still 32 bits (though some compilers have made it 64), while a 'short' is usually 16 bits. It varies.

The same thing applies to the 'long' data type -- it has a minimum width, but the actual width varies depending on the compiler. In some instances it is the same width as an 'int' and in others it is wider.
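If a piece of code really depends on a particular width, one way to protect it is to state the assumption and let the preprocessor complain when it no longer holds. A minimal sketch, using only <limits.h> so it works even with older compilers:

#include <limits.h>

/* This code assumes a 16-bit int; stop the build if that is not true here. */
#if INT_MAX != 32767
    #error "int is not 16 bits on this compiler; review the integer code below"
#endif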
 

Thread Starter

Djsarakar

Joined Jul 26, 2020
489
In general they aren't the same.
Hi all
I agree with you completely @WBahn
But after this, a new question has come to my mind.

Let's take these example declarations in standard C:

int digit1;
short digit2;
int short digit3;

I understand the first and second statements, but I am confused by the last one.

What is the meaning of the last statement in standard C programming?

What would be the size of the variable digit3 for an 8-bit micro and the XC8 compiler?
 

MrChips

Joined Oct 2, 2009
30,711
Hi all
I agree with you completely @WBahn
But after this, a new question has come to my mind.

Let's take these example declarations in standard C:

int digit1;
short digit2;
int short digit3;

I understand the first and second statements, but I am confused by the last one.

What is the meaning of the last statement in standard C programming?

What would be the size of the variable digit3 for an 8-bit micro and the XC8 compiler?
Why don't you try it on your compiler and see what happens?
 

WBahn

Joined Mar 31, 2012
29,978
Hi all
I agree with you completely @WBahn
But after this, a new question has come to my mind.

Let's take these example declarations in standard C:

int digit1;
short digit2;
int short digit3;

I understand the first and second statements, but I am confused by the last one.

What is the meaning of the last statement in standard C programming?

What would be the size of the variable digit3 for an 8-bit micro and the XC8 compiler?
The "int" is a family of types -- integers -- that have different sizes and different encodings (signed or unsigned).

The full type descriptor for a "short" would be "signed short int", but if we just say "short" the "signed" and "int" are assumed. The specifiers can also appear in any order, so "int short" means exactly the same thing as "short int".
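Written out as declarations (the variable names are just placeholders), all of the following spell the same underlying types:

short              a;   /* plain 'short'                                    */
short int          b;   /* the same type as a                               */
signed short int   c;   /* the same type again, with everything spelled out */
int short          d;   /* specifier order does not matter                  */

unsigned short     e;   /* the unsigned variant...                          */
unsigned short int f;   /* ...which is likewise one and the same type       */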
 

WBahn

Joined Mar 31, 2012
29,978
Why don't you try it on your compiler and see what happens?
Since we are talking specifically about things that the language standard allows to differ from compiler to compiler, it is a very bad idea to just try it out on a compiler in order to determine what the language standard means.

The far better way is to start reading the language standard and learn how to understand what it means.
 

nsaspook

Joined Aug 27, 2009
13,082
These are the short and long qualifier rules.

long long is not smaller than long, which is not smaller than int, which is not smaller than short. Very clear, right?
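Stated in compilable form (a sketch; it needs a C11 compiler for static_assert, and these always pass on a conforming compiler, they just write the rule down):

#include <assert.h>

static_assert(sizeof(short) <= sizeof(int),       "short must not be wider than int");
static_assert(sizeof(int)   <= sizeof(long),      "int must not be wider than long");
static_assert(sizeof(long)  <= sizeof(long long), "long must not be wider than long long");

On top of that ordering, the standard also sets minimum widths: at least 16 bits for short and int, at least 32 for long, and at least 64 for long long.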
 

WBahn

Joined Mar 31, 2012
29,978
I'm confused because different information has been given in each place.

My way of thinking is as follows.

Type of variable: First, I select what I want to store: an integer, a letter, or a float value.

Range of variable: Then I decide what the range of the variable should be, signed or unsigned.

There are other points such as scope and lifetime, but for now we are discussing the short keyword.

Why do people use the short int type declaration? What happens when we declare a variable by combining the short and the int keywords?

I am really sorry; it may be easy for other people to understand, but for me it seems very difficult.
Again, "short int" is the same as "short" -- the 'int' is assumed. Neither of these are necessarily the same as 'int'.

Perhaps this will make things clearer.

Let's say I develop a new language and I say that I will have two basic data types -- integer and real.

I now say that I will have three kinds of reals, 32-bit, 64-bit, and 128-bit, which I will call short real, normal real, and long real.

I now say that I will have three kinds of integers, 16-bit, 32-bit, 64-bit, which I will call short integer, normal integer, and long integer. Further, I will have both signed and unsigned versions of each.

So if you want a signed 32 bit integer named bob you would have to declare it as

signed normal integer bob;

whereas if you want an unsigned 16-bit integer named sue you would use

unsigned short integer sue;

On the other hand, if you want a 32-bit floating point variable named fred you would use

short real fred;

Since people will be using "signed normal integer" and "normal real" a lot, it would be nice to let them use a shorter version of the declaration and say, for instance,

integer n;
real y;

and specify that these are the same as

signed normal integer n;
normal real y;

I could also say that if you don't specify that it is unsigned that it will be assumed to be signed, so "short integer" is the same as "signed short integer".

But what if I just say "short"? Is that a "signed short integer" or is it a "short real"? I can specify how it will be interpreted. I can say that if the "integer" verses "real" is left off that it will be interpreted as an integer.

Hence "short" is a shorthand description that means "signed short integer" while "integer" is a shorthand that means "signed normal integer".

That's all that is going on with C.

The "short" is merely a shorthand for "signed short int" while "int" is a shorthand for "signed int".

So what, you might ask, is "char" a shorthand for? Strictly speaking, plain "char" is its own distinct type, but it behaves like either "signed char" or "unsigned char" -- the implementation (the compiler) is free to choose which, but it must document which it has chosen.
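A quick way to see which choice a given compiler made, as a small PC-side sketch using only <limits.h> and <stdio.h>:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain 'char' is unsigned, and negative when it is signed. */
    if (CHAR_MIN < 0)
        printf("plain char is signed here: %d to %d\n", CHAR_MIN, CHAR_MAX);
    else
        printf("plain char is unsigned here: %d to %d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}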
 