ADC Calibration

Reloadron

Joined Jan 15, 2015
7,501
Do you want to calibrate just the A/D or a sensor along with it?
That's the big question.

For any calibration process you need a standard for comparison. The article you linked to pretty much sums things up. Just for example, calibrating an A/D requires applying a known voltage. There will always be some error, so the question becomes: how much accuracy or uncertainty can you live with? How well can your A/D resolve a quantity?

Ron
 

nsaspook

Joined Aug 27, 2009
13,081
As you say, you need a calibration standard. That's why I love my old Omega.
https://forum.allaboutcircuits.com/threads/cl511-battery-replacement.164695/post-1449053
 

upand_at_them

Joined May 15, 2010
940
I didn't read the Adafruit article, but, as pointed out, it appears to be about calibrating a sensor, not the ADC. In addition to the calibration, you'll need an accurate voltage reference for the ADC to base its measurement on. Otherwise the calibration is meaningless.
 

nsaspook

Joined Aug 27, 2009
13,081
One of the most important things that article talks about is the ADC reference. If you care about accuracy and long-term stability of measurements, never depend on the controller's internal reference. I can promise (knowing the details of how they are created on the die for a typical controller) that the internal band-gap reference will be a major part of any initial error found and will account for not-so-great long-term stability.
https://www.ti.com/lit/an/slyt339/slyt339.pdf
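To put a number on it, here is a quick C sketch (made-up 12-bit converter and drift figures, nothing from the app note) of how a reference error lands directly in the reading: the firmware scales by the nominal Vref, so any drift in the real reference shifts every measurement by the same percentage.

#include <stdio.h>

/* Hypothetical 12-bit ADC with a nominal 4.096 V reference. */
#define ADC_COUNTS   4096.0
#define VREF_NOMINAL 4.096   /* value the firmware assumes, in volts */

/* Convert a raw ADC code to volts using the nominal reference. */
static double code_to_volts(unsigned code)
{
    return (double)code * VREF_NOMINAL / ADC_COUNTS;
}

int main(void)
{
    double vin = 2.500;          /* true input voltage */
    double vref_actual = 4.050;  /* reference has drifted about 1.1% low */

    /* The converter digitizes ratiometrically against the ACTUAL reference. */
    unsigned code = (unsigned)(vin / vref_actual * ADC_COUNTS);

    printf("true input : %.4f V\n", vin);
    printf("reported   : %.4f V\n", code_to_volts(code));
    /* The ~1.1% reference error shows up as ~1.1% measurement error. */
    return 0;
}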
 

Reloadron

Joined Jan 15, 2015
7,501
Hello, thank you for responding to my post.

I'm interested in the A-D.

Thank you
OK, with that in mind, consider a few things. Your A/D converter will only be as accurate as the reference it uses, be it internal or external. Next, your A/D converter only resolves in discrete steps. Just for example, a 10-bit A/D conversion with a stable 5.00 volt reference will look like this: 5.00 V / 1024 = 4.88 mV, so the best resolution per step change will be 4.88 mV. If you want better resolution you choose a better A/D. Today 12 and 16 bit are pretty popular.
As to calibration? You apply a known voltage and look first at the bit count out. I assume you are writing code to read your A/D converter, so what it comes down to is applying a known voltage and comparing it against what the A/D reports. The trick is making sure your known voltage is just that, a known, accurate voltage.
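As a rough illustration of that arithmetic (just a sketch, assuming the 10-bit converter and 5.00 V reference from the example above), the conversion from raw counts back to millivolts looks like this in C:

#include <stdio.h>

#define ADC_BITS  10
#define ADC_STEPS (1UL << ADC_BITS)   /* 1024 steps for a 10-bit converter */
#define VREF_MV   5000UL              /* 5.00 V reference, in millivolts */

/* Convert a raw ADC count to millivolts. One count = 5000/1024, about 4.88 mV. */
static unsigned long counts_to_mv(unsigned long counts)
{
    return counts * VREF_MV / ADC_STEPS;
}

int main(void)
{
    /* Apply a known, accurate voltage (say 2.500 V from a calibrator) and
       compare what the A/D reports against it. */
    unsigned long raw = 512;   /* example reading: mid-scale */
    printf("raw = %lu  ->  %lu mV\n", raw, counts_to_mv(raw));
    return 0;
}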

I see nsaspook has already stressed the importance of the A/D Reference. It needs to be stable and accurate. :)

Ron
 

MisterBill2

Joined Jan 23, 2018
18,167
Aside from the accuracy concern, there is also the resolution issue, mentioned already but not really discussed adequately. Accuracy better than the resolution is always an interesting claim to see. At this point it gets complicated: how can you have +/- 5 millivolt accuracy on a digital meter that only reads volts and tenths of volts?
 

MisterBill2

Joined Jan 23, 2018
18,167
I have calibrated load cell systems, torque cell systems, and pressure transducer systems; the process is similar. First, with nothing applied, set the zero adjustment for a zero reading, or, with some minimum value applied, set the zero adjust to display that value. Then apply an amount of the variable equal to about 80% of full scale and adjust the span (gain) control until the reading equals what is applied. Then go back to the minimum input and re-adjust the zero until that reading is correct. Usually it takes just a few passes back and forth to obtain correct readings over the whole range.
If just calibrating an A/D converter, use a voltage for the variable input, and the digital display as the output.
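Just to illustrate why a few passes back and forth converge, here is a toy C model (my own simplification; it assumes the zero trim sits ahead of the gain stage, which is exactly why the two adjustments interact and need iteration):

#include <stdio.h>

/*
 * Toy model of an analog chain where the zero trim acts BEFORE the gain
 * stage, so the two adjustments interact:
 *     display = span * (input + zero)
 * Real instruments differ in where each trim sits; this only shows why
 * alternating zero and span adjustments settle in a few passes.
 */
static double display(double input, double zero, double span)
{
    return span * (input + zero);
}

int main(void)
{
    const double lo = 1.0, hi = 80.0;   /* minimum point and ~80% of full scale */
    double zero = 0.30, span = 1.05;    /* arbitrary starting errors */

    for (int pass = 1; pass <= 5; pass++) {
        /* Adjust zero until the low point reads correctly (with current span). */
        zero = lo / span - lo;
        /* Adjust span until the high point reads correctly (with new zero). */
        span = hi / (hi + zero);
        printf("pass %d: low reads %.4f, high reads %.4f\n",
               pass, display(lo, zero, span), display(hi, zero, span));
    }
    return 0;
}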
 

Reloadron

Joined Jan 15, 2015
7,501
This film is circa 1966. Why Calibrate is, or was, a US Navy training film for Metrology (Measurement Technology) students. Keep in mind this was 1966, before we had the standards equipment we have today. Figured I would toss it out there for amusement. :) It takes a minute or so for the audio to start.

Ron
 

Reloadron

Joined Jan 15, 2015
7,501
On another note, if we knew exactly which A/D converter you have, I am sure we could spell out a better procedure and method to calibrate it. :)

Ron
 

ErnieM

Joined Apr 24, 2011
8,377
I find it interesting that no one has described or linked to any sort of calibration procedure. Here is how I have done it.

First we need to know the relationship from input to output. Typically this is a linear relationship, meaning a straight line as we learned in high school:

Y = m * X + B    (eq. 1)
where: X is what we input
Y is what we read out of the A/D
B is the zero offset
m is the slope of the line

To calibrate such a system we need to take two readings, one close to the bottom (X1, Y1) and one close to the top (X2, Y2). By "close" I mean where the output is changing uniformly with the input, so we are past any offset clipping and the like.

With our two measurements we can compute the slope:
m = delta Y / delta X = (Y2 - Y1) / (X2 - X1)

Now plug either pair into eq. 1 to compute B.

If you are on a "powerful enough" controller, doing this with floating-point numbers is fine, but on a resource-limited micro it is usually best to pick the units such that m and B are integers. Once the reading is computed, the decimal point (or binary point) may be shifted to get a more useful result.
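Something like this in C (numbers made up; note I have flipped the fit around to solve for the applied value from the counts, since that is usually what the firmware needs, and the slope is carried as a scaled integer):

#include <stdio.h>
#include <stdint.h>

/* Fixed-point scale: slope is stored as (m * 256) so a small micro can
   apply the calibration with integer math only. */
#define M_SCALE 256L

/* Calibration constants computed from the two reference points. */
static int32_t m_fixed;   /* slope * M_SCALE */
static int32_t b_offset;  /* zero offset, in output units */

/* Two-point calibration: (x1,y1) near the bottom, (x2,y2) near the top.
   x = applied value (e.g. millivolts), y = raw A/D counts. */
static void calibrate(int32_t x1, int32_t y1, int32_t x2, int32_t y2)
{
    m_fixed  = (x2 - x1) * M_SCALE / (y2 - y1);
    b_offset = x1 - m_fixed * y1 / M_SCALE;
}

/* Convert a raw A/D reading to calibrated units. */
static int32_t correct(int32_t counts)
{
    return m_fixed * counts / M_SCALE + b_offset;
}

int main(void)
{
    /* Example: 500 mV applied read 98 counts, 4500 mV applied read 921 counts. */
    calibrate(500, 98, 4500, 921);
    printf("raw 512 -> %ld mV\n", (long)correct(512));
    return 0;
}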
 

nsaspook

Joined Aug 27, 2009
13,081
Maybe it's because we all assumed that everyone knows that. ;)

https://github.com/nsaspook/mbmc_k42/blob/new_mcc/mbmc_k42.X/daq.c

I cheated a bit and used trim pots for the initial (internal standard) calibration.
https://forum.allaboutcircuits.com/...c-controlled-battery-array.32879/post-1448225
The PIC18F57K42 processor board (designed for this communications project, SECSII, but with added circuitry for precision analog measurements) drives the SPI display (using background DMA transfers) and an I/O board that carries the voltage regulators supplying 5 V and 3.3 V device power, 12 VDC bus power, and the precision 4.096 V (REF3440-EP) ADC and 5.00 V (REF02) calibration reference voltages from the 24 VDC battery input. The main ADC reference is the REF3440, so adjustable voltage dividers (4-turn 200 ohm trim pots for each analog channel) allow exact calibration of the 5.2 V and 33 V max signal monitor lines. There are two Hall current sensors (100 A PV input, 200 A battery current), each with its own REF02 precision voltage source for the sensor current signal, regulated from the 12 VDC general power bus.
 