How is 11/12 bit precision useful on the DS18B20 temperature sensor?
I'm trying to understand the accuracy and resolution specifications for the DS18B20 temperature sensor. The DS18B20 datasheet says that it:
"is accurate to ±0.5°C over the range of -10°C to +85°C.....The resolution of the temperature sensor is user-configurable to 9, 10, 11, or 12 bits, corresponding to increments of 0.5°C, 0.25°C, 0.125°C, and 0.0625°C, respectively."
But if the DS18B20 is only accurate to ±0.5°C, why would any application ever use 11- or 12-bit resolution? For example, if the DS18B20 returned a reading of 20 degrees, the actual temperature could be anywhere from 19.5 to 20.5. Why would any application care whether 11- or 12-bit resolution returns a reading of 20.125 or 20.0625, when that difference is much smaller than the accuracy of the reading?
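To make the step sizes concrete, here is a minimal sketch of how I understand the conversion (plain C, not tied to any particular 1-Wire library): the scratchpad holds a signed 16-bit value where 1 LSB = 1/16°C in 12-bit mode, and at lower resolutions the bottom 1-3 bits are undefined and get masked off. The raw value 0x0141 is just an illustrative example.

```c
#include <stdio.h>
#include <stdint.h>

/* Convert a raw DS18B20 scratchpad value to degrees Celsius.
 * In 12-bit mode 1 LSB = 0.0625 degC; at 9/10/11 bits the lowest
 * 3/2/1 bits are undefined and are masked here. */
static float ds18b20_to_celsius(int16_t raw, int bits)
{
    int undefined_low_bits = 12 - bits;              /* 9..12 bits -> 3..0 bits */
    raw &= (int16_t)~((1 << undefined_low_bits) - 1);
    return raw / 16.0f;                              /* 1 LSB = 1/16 degC */
}

int main(void)
{
    int16_t raw = 0x0141;  /* hypothetical raw reading: 321/16 = 20.0625 degC */
    for (int bits = 9; bits <= 12; bits++)
        printf("%2d-bit: %.4f degC (step %.4f degC)\n",
               bits, ds18b20_to_celsius(raw, bits),
               1.0f / (1 << (bits - 8)));
    return 0;
}
```

So the extra bits only shrink the step size between readings; they don't change the ±0.5°C accuracy band around each one.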
Thanks for any response or explanation.