How to know whether char is signed or unsigned on your system?

Thread Starter

asilvester635

Joined Jan 26, 2017
73
Put signed or unsigned in front when you declare it.

Or set a char to 0xFF and test to see whether it is less than zero.
So we are basically setting the char variable x to 255. Then, if x is less than 0, char is unsigned; signed otherwise. Can you explain what is happening when char x = 255? Or is it as simple as storing 255 inside x?

Code:
    char x = 0xFF;

    if (x < 0) {
        printf("char is unsigned\n");
    } else if (x >= 0){
        printf("char is signed\n");
    }
 

djsfantasi

Joined Apr 11, 2010
9,160
Neither. A char variable is not signed or unsigned. It contains a single character, represented by the integer values {0...255}.

While one might want to think of it as either signed or unsigned, according to the definition that is irrelevant.

If one were to persist in this thought, I would propose that char is implemented as a one-byte, unsigned integer in all systems. I believe this because, in the set of values an unsigned integer can take, a) there are no negative numbers and b) the maximum value in the set requires the use of all 8 bits of the byte.
 

MrChips

Joined Oct 2, 2009
30,794
I think you have that reversed.

255 is 11111111 in binary.

If it is unsigned, the value is 255.
If it is signed, the value is -1, i.e. less than 0.
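For example, forcing both interpretations of the same bit pattern makes this easy to see (a quick sketch, assuming a typical two's-complement machine where converting 0xFF to a signed char wraps to -1):

Code:
    #include <stdio.h>

    int main(void)
    {
        signed char   s = 0xFF;   /* out-of-range conversion; typically wraps to -1 */
        unsigned char u = 0xFF;   /* always 255 */

        printf("as signed char:   %d\n", s);
        printf("as unsigned char: %u\n", (unsigned)u);
        return 0;
    }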
 

Thread Starter

asilvester635

Joined Jan 26, 2017
73
I think you have that reversed.

255 is 11111111 in binary.

If it is unsigned, the value is 255.
If it is signed, the value is -1, i.e. less than 0.
Sorry, here is the correct code.

Code:
    char x = 0xFF;

    if (x < 0) {
        printf("char is signed\n");
    } else if (x >= 0) {
        printf("char is unsigned\n");
    }
 

WBahn

Joined Mar 31, 2012
30,045
Neither. A char variable is not signed or unsigned. It contains a single character, represented by the integer values {0...255}.

While one might want to think of it as either signed or unsigned, according to the definition that is irrelevant.

If one were to persist in this thought, I would propose that char is implemented as a one-byte, unsigned integer in all systems. I believe this because, in the set of values an unsigned integer can take, a) there are no negative numbers and b) the maximum value in the set requires the use of all 8 bits of the byte.
It doesn't work this way. While it is true that whether a particular 8-bit pattern is signed or unsigned is determined by how it is interpreted, the simple fact is that whether it is declared as signed or unsigned determines how the code generated by the compiler interprets the bit pattern.

In C (and I think in C++, but not sure), all of the integer data types EXCEPT char are implicitly declared as signed and you must declare them explicitly as unsigned. Whether char is signed or unsigned is implementation defined. In <limits.h> there are several symbolic constants that will tell you what you want to know.

CHAR_MIN = minimum value that a char variable (without modifier) can be. If this value is less than zero, then char is signed. If it is equal to zero, then char is unsigned.
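For example, something like this (a minimal sketch; since CHAR_MIN is a constant expression, the same test can also be done at compile time with a preprocessor #if):

Code:
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_MIN is 0 if plain char is unsigned, negative (usually -128) if signed */
        if (CHAR_MIN < 0)
            printf("char is signed (CHAR_MIN = %d)\n", CHAR_MIN);
        else
            printf("char is unsigned (CHAR_MIN = %d)\n", CHAR_MIN);

        return 0;
    }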
 

djsfantasi

Joined Apr 11, 2010
9,160
I defer to your wisdom. I finally found a reference that I trust. (Not that I DON'T trust you.) I mistakenly identified the default type of plain char. There is no such thing. The standard recognizes three distinct types of char: plain char, unsigned char, and signed char. Thanks for motivating me to look further.
 

nsaspook

Joined Aug 27, 2009
13,260
I try not to use char or int types in normal embedded programming. It's usually more portable and precise to use the types and limits from the <cstdint> (stdint.h) headers.
http://www.cplusplus.com/reference/cstdint/
http://pubs.opengroup.org/onlinepubs/9699919799/

As a consequence of adding int8_t, the following are true:
A byte is exactly 8 bits.
{CHAR_BIT} has the value 8, {SCHAR_MAX} has the value 127, {SCHAR_MIN} has the value -128, and {UCHAR_MAX} has the value 255.
(The POSIX standard explicitly requires 8-bit char and two's-complement arithmetic.)
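For example, a minimal sketch using the fixed-width types so the width and signedness are never in question:

Code:
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint8_t u = 0xFF;   /* exactly 8 bits, always unsigned: 255 */
        int8_t  s = -1;     /* exactly 8 bits, always signed (two's complement) */

        printf("u = %" PRIu8 ", s = %" PRId8 "\n", u, s);
        printf("UINT8_MAX = %u, INT8_MIN = %d\n", (unsigned)UINT8_MAX, INT8_MIN);
        return 0;
    }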
 

MrSoftware

Joined Oct 29, 2013
2,196
My personal rule is to be explicit. If you want a signed or unsigned number 8 bits in size, then typedef yourself an explicit type (if it's not already available in your environment), i.e. create a type uint8 or sint8.

If your code needs to be super portable, remember that a byte (and a char) is not guaranteed to be 8 bits.
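Something along these lines is what I mean (just a sketch; the names uint8/sint8 and the CHAR_BIT guard are only illustrative):

Code:
    #include <limits.h>

    /* Guard against targets where a byte is not 8 bits (e.g. some DSPs) */
    #if CHAR_BIT != 8
    #error "CHAR_BIT is not 8; adjust these typedefs for this target"
    #endif

    typedef unsigned char uint8;   /* explicit unsigned 8-bit type */
    typedef signed char   sint8;   /* explicit signed 8-bit type */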
 