# efficiency of adc after calibration

Discussion in 'General Electronics Chat' started by yagyasen, Jun 24, 2013.

1. ### yagyasen Thread Starter New Member

We calibrated an ADC using a sine wave from a tester as the input, so we are confident the calibrated ADC will work properly for inputs it has been tested with.
What about general inputs (e.g. when it is used in a real-time application), i.e. inputs it has not been calibrated against? How can I be sure of its performance for such arbitrary inputs?

Please suggest some solutions.

2. ### kubeek Expert

Solution to what, exactly? Calibration means that you know which digital value corresponds to which analog input voltage.
As long as the input voltage is inside the range the ADC allows, and the signal is band-limited to 1/2 of the sample frequency, the readings will truly represent what is coming into the ADC. So what exactly are you concerned about?
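The point above can be sketched in a few lines. This is a hypothetical example (the resolution, reference voltage, and function name are illustrative, not from the thread): once an ADC behaves ideally, one formula maps any in-range code to a voltage, with no per-value table needed.

```python
# Hypothetical ideal ADC: 10-bit resolution, 5.0 V reference.
# These values are illustrative assumptions, not from the thread.
ADC_BITS = 10
V_REF = 5.0

def code_to_volts(code: int) -> float:
    """Map a raw ADC code (0 .. 2**ADC_BITS - 1) to the input voltage."""
    full_scale = (1 << ADC_BITS) - 1
    return code * V_REF / full_scale

# Any in-range code maps to a voltage via the same formula.
print(code_to_volts(0))     # 0.0
print(code_to_volts(1023))  # 5.0
```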

3. ### GopherT AAC Fanatic!

The whole point of an ADC is that it converts unknown voltages to a readable digital value.

As long as the...
- frequency of the unknown voltage is not too high,
- voltage is within the input range (or can be brought within the input range with voltage dividers or amplifiers),
- output impedance of the unknown source is not so high that the signal droops when you connect it to the ADC,
- sample time is long enough to capture a clean value

Read the ADC datasheet for the important criteria listed above. There may be others.

Finally, what are you trying to measure? Then we can let you know whether you should have concerns of any type.

4. ### MrChips Moderator

Generally, if you wish to calibrate an ADC you would use a stable, calibrated constant voltage, not a sine wave.
A sine wave input is used to determine the spectral purity of the ADC.

5. ### ErnieM AAC Fanatic!

What does the A2D see? Does the input wave drive it directly, or through some filter or peak detector?

6. ### yagyasen Thread Starter New Member

In a practical situation, the ADC output for a given input will be slightly different from the ideal digital value for that input. (E.g. for 5 V I should ideally get 10011, but I am getting 10000, so I add some mechanism to the circuitry that adjusts 10000 to 10011, and I store that mechanism so that whenever a 5 V input arrives it recalls the adjustment and outputs 10011.) Let us say I did this type of adjustment for 10 input values generated by the tester. Now, in a real-time application, if the ADC gets an input voltage other than those 10 inputs, how can I be sure about the accuracy of the corresponding output?

8. ### kubeek Expert

Then you are doing the calibration wrong. The point of calibration is to build a function that applies this correction for ALL input values.
A simple example with two points on an 8-bit ADC: 0 V gives you 0000 0010 (2) and 5 V gives you 1111 1010 (250). Now take these points and put them into the equation y = a*x + b, where a and b are real coefficients chosen so that for x = 250, y = 5 V and for x = 2, y = 0 V. (You might want to make a graph of voltage vs. value to see it better.)
If you measure more calibration points, you would connect them with more line segments, or use some other better-fitting function. But I think calibrating the low and high points, and then checking the deviation from the correct value in the middle, should usually be enough.
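The two-point fit described above can be sketched as follows, using the example numbers from the post (code 2 at 0 V, code 250 at 5 V on an 8-bit ADC). The function name is illustrative; the point is that the correction y = a*x + b then applies to every raw code, not just the ones used during calibration.

```python
# Two calibration points from the post: (raw code, true voltage).
x1, y1 = 2, 0.0      # low point:  0 V reads back as code 2
x2, y2 = 250, 5.0    # high point: 5 V reads back as code 250

# Solve y = a*x + b through both points.
a = (y2 - y1) / (x2 - x1)   # gain (slope) correction
b = y1 - a * x1             # offset correction

def corrected_volts(code: int) -> float:
    """Apply the calibration line to ANY raw code, calibrated or not."""
    return a * code + b

print(corrected_volts(2))    # ~0.0 V (low calibration point)
print(corrected_volts(250))  # ~5.0 V (high calibration point)
print(corrected_volts(126))  # ~2.5 V, a code never seen during calibration
```

Intermediate codes land on the same line, which is why two (or a few) calibration points cover the whole input range.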

yagyasen likes this.