# What's the difference between unsigned char and signed char?

#### mukesh1

Joined Mar 22, 2020
68
In an 8-bit micro, an unsigned type can only represent positive values, whereas a signed type can represent both positive and negative values. In the case of an 8-bit char, this means that an unsigned char variable can hold a value in the range 0 to 255, while a signed char has the range -128 to 127.

Does anyone know what a negative value means when storing a character in C?

#### ZCochran98

Joined Jul 24, 2018
217
As far as I can tell, the "negative" characters are exactly the same as the extended ASCII characters. When the computer displays a character, it looks at the raw byte as-is; it is only for arithmetic and comparisons that the top bit is treated as a sign (using two's complement). So when the computer tries to display character "-128," it actually displays character +128. Character -127 corresponds to +129, and so on, all the way to character -1 corresponding to character 255, even though, if you print the integer value, the negative number will display properly.

The following code kind of gives an idea of what I'm talking about:
Code:
#include <stdio.h>

int main(void)
{
    /* Loop with an int counter: a signed char counter could never
       reach 127 and exit the loop (i++ past 127 overflows), and the
       original i < 127 condition also skipped 127 entirely. */
    for (int i = -128; i <= 127; i++)
    {
        signed char   s = (signed char)i;
        unsigned char u = (unsigned char)i;
        printf("signed char %d = %c\tunsigned char %d = %c\n", s, s, u, u);
    }

    return 0;
}
Hope this helps, and doesn't muddy the waters further!

#### djsfantasi

Joined Apr 11, 2010
8,579
In an 8-bit micro, an unsigned type can only represent positive values, whereas a signed type can represent both positive and negative values. In the case of an 8-bit char, this means that an unsigned char variable can hold a value in the range 0 to 255, while a signed char has the range -128 to 127.

Does anyone know what a negative value means when storing a character in C?
Negative values use one bit (the MSb) to indicate whether the value in the remaining seven bits is positive or negative. 127 is the largest value which can be represented in seven bits. The eighth bit indicates positive (0) or negative (1). This basic convention* is understood by the system and by programming languages.

An unsigned value doesn't need to use that eighth bit to represent the sign, so the value can use all eight bits. Thus, it can represent any value from 0 to 255.

Understand?

*The actual representation is called two's complement. The link describes the method in more detail.

Last edited:

#### djsfantasi

Joined Apr 11, 2010
8,579
Does anyone know what a negative value means when storing a character in C?
Using negative values to store a char (ASCII) character means you are using the wrong variable type. If you are processing ASCII characters, normal AND extended, you should use an unsigned char.

Using negative values MAY be possible but MAY not be portable to other micros.

If you need an eight-bit value other than a character, one used as an index or offset that will always be less than 127 or 255, you can also use a char-typed variable, either signed or unsigned. In this scenario, a negative value may be necessary. A char in this instance doesn't mean "character" and should probably be of a type like byte instead.

#### Papabravo

Joined Feb 24, 2006
19,329
The real difference shows up when you do comparisons like ≤ or ≥. This is where you want to be very careful that you don't mix types when you use comparison operators. This kind of bug can be extremely difficult to spot.


#### MrChips

Joined Oct 2, 2009
27,168
In an 8-bit micro, an unsigned type can only represent positive values, whereas a signed type can represent both positive and negative values. In the case of an 8-bit char, this means that an unsigned char variable can hold a value in the range 0 to 255, while a signed char has the range -128 to 127.

Does anyone know what a negative value means when storing a character in C?
There are no negative ASCII characters.
You can still use the char data type to represent 8-bit signed values from -128 to +127.

#### WBahn

Joined Mar 31, 2012
27,478
In an 8-bit micro, an unsigned type can only represent positive values, whereas a signed type can represent both positive and negative values. In the case of an 8-bit char, this means that an unsigned char variable can hold a value in the range 0 to 255, while a signed char has the range -128 to 127.

Does anyone know what a negative value means when storing a character in C?
A "data type" is just a protocol (set of rules) for how to interpret patterns of bits.

If you give me the bit pattern 10101010 and tell me it is an unsigned 8-bit integer, then I will interpret those bits as representing the value 170 (decimal; 0252 in octal), but if you had told me that it was a signed integer (using two's complement representation), I would have interpreted them as representing the value -86. While two's complement is the most common way of representing negative values, it is not the only way. If you had told me it was sign/magnitude I would have interpreted it as -42, while if you had said it was one's complement I would have come up with -85, and in excess-127 I would have come up with +43. All of these from the exact same bit pattern, so when we do anything with bits that represent things, we have to be very clear on what rules are being used to interpret those bits.

So now the question becomes: how does a device that is being tasked with interpreting a bit pattern as a character interpret the bit pattern 10101010? In almost all instances, it will be interpreted according to whatever character mapping is being used. Since standard ASCII is a 7-bit code, this would require that the character map contain an "extended ASCII" mapping (which is not universally agreed upon), or it might interpret it as Unicode or one of the other possible encodings. As with signed and unsigned, you need to understand the protocol that is going to be used.

EDIT: Corrected post to address error pointed out by @AlbertHall. Thanks!

Last edited:

#### AlbertHall

Joined Jun 4, 2014
12,158
If you give me the bit pattern 10101010 and tell be it is an unsigned 8-bit integer, then I will interpret those bits as representing the value 252 (decimal)
I make it 170. 128 + 32 + 8 + 2

#### WBahn

Joined Mar 31, 2012
27,478
I make it 170. 128 + 32 + 8 + 2
You're right -- but it IS 252 in octal!

That's what I get for blindly using a calculator and not asking if the answer makes sense.


#### mukesh1

Joined Mar 22, 2020
68
Thanks, I appreciate all of your help and advice. The character data type stores an ASCII value that represents a character: http://www.asciitable.com/

#### WBahn

Joined Mar 31, 2012
27,478
Thanks, I appreciate all of your help and advice. The character data type stores an ASCII value that represents a character: http://www.asciitable.com/
That is a very common use of the char data type -- hence the name -- but it is simply a data type that can be used for anything it is suited to, and since it is generally the only data type that can easily access sequential bytes of memory as distinct values, it is very often used for that purpose.

#### djsfantasi

Joined Apr 11, 2010
8,579
A "data type" is just a protocol (set of rules) for how to interpret patterns of bits.

If you give me the bit pattern 10101010 and tell me it is an unsigned 8-bit integer, then I will interpret those bits as representing the value 170 (decimal; 0252 in octal), but if you had told me that it was a signed integer (using two's complement representation), I would have interpreted them as representing the value -86. While two's complement is the most common way of representing negative values, it is not the only way. If you had told me it was sign/magnitude I would have interpreted it as -42, while if you had said it was one's complement I would have come up with -85, and in excess-127 I would have come up with +43. All of these from the exact same bit pattern, so when we do anything with bits that represent things, we have to be very clear on what rules are being used to interpret those bits.

So now the question becomes: how does a device that is being tasked with interpreting a bit pattern as a character interpret the bit pattern 10101010? In almost all instances, it will be interpreted according to whatever character mapping is being used. Since standard ASCII is a 7-bit code, this would require that the character map contain an "extended ASCII" mapping (which is not universally agreed upon), or it might interpret it as Unicode or one of the other possible encodings. As with signed and unsigned, you need to understand the protocol that is going to be used.

EDIT: Corrected post to address error pointed out by @AlbertHall. Thanks!
Not to belabor a point, but under any of these interpretations, the physical electronic charges in memory are and remain the representation of the bit pattern
10101010
which can be written as 170 (decimal), 0252 (octal), or 0xAA (hexadecimal).


#### WBahn

Joined Mar 31, 2012
27,478
Not to belabor a point, but under any of these interpretations, the physical electronic charges in memory are and remain the representation of the bit pattern
10101010
which can be written as 170 (decimal), 0252 (octal), or 0xAA (hexadecimal).
Or that every other light is on or any one of a ton of different interpretations. That's the beauty and power of computers -- we can build a device that does a limited number of operations on patterns of ones and zeros and, by externally applying meaning to those patterns, can process a practically unlimited breadth and depth of information.