So let's say I have a 16-bit ADC with an input range of 0 V to 20 V,
and I want to know what my LSB resolution is.
The answer is obviously (voltage span)/2^(number of bits), or 20/2^16.
OK, fine...
But now let's say the input range is -10 V to +10 V.
I don't understand why the conversion is the same. In the first example, all 16 bits can be used for data because there is no need for a sign bit.
However, now I'm trying to represent negative numbers as well.
So shouldn't the conversion be (voltage span)/2^(number of bits - 1),
i.e. 20/2^15? We need the MSB as the sign bit, so isn't my resolution reduced?
I don't understand how we can represent negative numbers with the same degree of resolution.
Can someone clarify this for me?
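To make the numbers concrete, here is the arithmetic behind both interpretations as a quick Python sketch (which divisor is actually correct is exactly what I'm asking; note that both input ranges span the same 20 V total):

```python
# LSB-resolution candidates for a 16-bit ADC.
span_unipolar = 20.0 - 0.0      # 0 V -> 20 V input range
span_bipolar = 10.0 - (-10.0)   # -10 V -> +10 V input range: also 20 V total
bits = 16

# My first formula: full span over 2^16
lsb_full = span_unipolar / 2**bits        # 20/65536 ≈ 305 µV

# My "sign bit" guess: span over 2^(16 - 1)
lsb_half = span_bipolar / 2**(bits - 1)   # 20/32768 ≈ 610 µV

print(f"span/2^16 = {lsb_full * 1e6:.2f} uV")
print(f"span/2^15 = {lsb_half * 1e6:.2f} uV")
```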