How would you calculate total measurement error/uncertainty?

Thread Starter

ballsystemlord

Joined Nov 19, 2018
167
Hello,
I tried googling this, but all the articles I can find assume you know both the observed and the actual values, and I don't.

So if you are measuring watts, your error is "max error of DMM voltage range" times "max error of DMM amperage range". Very simple. But if you're doing measurements that are a bit more complex, like watts and lumens, or watts and temperature, or anything even more arcane, how would you calculate the total measurement error/uncertainty?

Thanks!
 

nsaspook

Joined Aug 27, 2009
13,426
We use 0.1% Wattmeters for precision RF level calibrations of processing tools and must have calibration data with error and uncertainty on the cert.
https://www.newport.com/n/measurement-uncertainty
Understanding Total Measurement Uncertainty in Power Meters and Detectors

https://blog.softexpert.com/en/what-is-uncertainty-in-measurement-and-calibration/

Uncertainty is not error
Uncertainty and error are two concepts that are frequently confused, but they are not the same. Error is the difference between the measurement taken by our instrument and the one taken by a standard reference instrument, while uncertainty relates to the quality of the calibration or measurement and involves repeatability and predictability.
 

wayneh

Joined Sep 9, 2010
17,498
Uncertainty is not error
Uncertainty and error are two concepts that are frequently confused, but they are not the same. Error is the difference between the measurement taken by our instrument and the one taken by a standard reference instrument, while uncertainty relates to the quality of the calibration or measurement and involves repeatability and predictability.
Statisticians use the terms interchangeably. The preference is "uncertainty" since "error" implies doing something wrong, but you'll certainly see "error" used for brevity. We talk about residual error, standard error, and so on.

You may be thinking of systematic uncertainty, in other words bias error. That is indeed something different from random errors, which over a large sample should sum to zero.
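To make the distinction concrete, here is a quick simulation (all numbers made up) showing that the random part shrinks away in the average while the bias stays put:

```python
import random

random.seed(1)
true_value = 10.0
bias = 0.3        # systematic (bias) error: never averages away
sigma = 0.5       # standard deviation of the random error

# 10,000 simulated readings, each with the same bias plus fresh random noise
readings = [true_value + bias + random.gauss(0, sigma) for _ in range(10_000)]
mean = sum(readings) / len(readings)

print(f"mean = {mean:.3f} (true value {true_value}, leftover offset ~= {bias})")
```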
 

nsaspook

Joined Aug 27, 2009
13,426
Statisticians use the terms interchangeably. The preference is "uncertainty" since "error" implies doing something wrong, but you'll certainly see "error" used for brevity. We talk about residual error, standard error, and so on.

You may be thinking of systematic uncertainty, in other words bias error. That is indeed something different from random errors, which over a large sample should sum to zero.
Statisticians :rolleyes:

We don't use them interchangeably in the equipment metrology universe of physical measurements. "Uncertainty" is an intrinsic property of the universe per Heisenberg.

https://link.springer.com/chapter/10.1007/978-3-642-80199-0_8
Abstract
‘Error’ and ‘uncertainty’ are two complementary, but distinct, aspects of the characterization of measurements. ‘Error’ is the difference between a measurement result and the value of the measurand while ‘uncertainty’ describes the reliability of the assertion that the stated measurement result represents the value of the measurand. The analysis of error considers the variability of the results when the measurement process is repeated. The evaluation of uncertainty considers the observed data to be given quantities from which the estimates of certain parameters (the measurement results) are to be deduced. The failure to distinguish between these two concepts has led to inconsistency, and a lack of uniformity in the way uncertainties have been expressed. The 1993 ISO (International Organization for Standardization) Guide to the Expression of Uncertainty in Measurements is the first international attempt to establish this uniformity and makes no distinction in the treatment of contributions to the total uncertainty in a measurement result between those arising from “random errors” and those arising from “systematic errors.”
 
Last edited:

BobTPH

Joined Jun 5, 2013
9,149
Yes, I am. So give us an example of a relationship like that and two measurements where you cannot determine how to combine the errors.

Added: Let me clarify. Give us a quantity that you need to compute from two or more measurements where you do not know how to estimate the error of the result based on the errors of the individual measurements.
 
Last edited:

WBahn

Joined Mar 31, 2012
30,243
Hello,
I tried googling this, but all the articles I can find assume you know both the observed and the actual values, and I don't.

So if you are measuring watts, your error is "max error of DMM voltage range" times "max error of DMM amperage range". Very simple. But if you're doing measurements that are a bit more complex, like watts and lumens, or watts and temperature, or anything even more arcane, how would you calculate the total measurement error/uncertainty?

Thanks!
What you want to look into is referred to (at least back when I learned about it around forty years ago) as "propagation of errors".

There are several ways to go about it, depending on what you do and don't know about the quality and nature of the measurements you have made -- and also on how detailed you want to be, since determining the "best" answer is seldom worth the additional effort compared to determining an answer that is more than "good enough".

One factor that is often overlooked is just what you mean by "error" or "uncertainty". If you say that the length of a rod is 2.57 m ±0.03 m, does that mean that there is zero possibility that the rod is less than 2.54 m or more than 2.60 m long, or does it mean that 0.03 m would be the standard deviation of the measurements if they were repeated many, many times? Or does it mean something else?

Also at play is the source of the variation. Is it because the length of the rod itself varies (i.e., if you were to make ten rods, the actual lengths would vary by that amount), or is it because of the quality of the measurement (i.e., if you were to measure the same rod ten times, the measurements would vary by that amount)?

Also, what is the distribution of the values, irrespective of the source of the variation? For instance, let's say that I want a hundred rods. If I cut rods with equipment that only got me within 0.1 m of the desired length and then used a go/no-go gauge to select rods that were within 0.03 m of the desired length, I might have to cut several hundred rods to get a hundred acceptable ones, and I would expect the distribution of lengths of those acceptable rods to be pretty uniform, meaning that the likelihood that a particular rod was right at the limit would be about the same as the likelihood that it was right at the ideal length. But if I had better-controlled equipment, so that I could make, say, 110 rods in order to get 100 acceptable ones, then I would expect the likelihood that a particular rod was close to the nominal length to be much higher than the likelihood that it was barely long enough or barely short enough to be accepted.
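If it helps, here is a minimal Python sketch of propagation of errors for the watts example from the original post. The 2 % and 1.5 % accuracy specs are made-up placeholders, not any real DMM's numbers:

```python
import math

# Hypothetical DMM readings and accuracy specs (made-up numbers):
V, I = 12.00, 0.500            # measured volts and amps
dV, dI = 0.02 * V, 0.015 * I   # absolute uncertainties from the meter spec

P = V * I

# Worst case: for a product the *relative* uncertainties add,
#   dP/P ~= dV/V + dI/I   (the tiny dV*dI cross term is ignored)
dP_worst = P * (dV / V + dI / I)

# Statistical (root-sum-square), if the errors are independent and random:
dP_rss = P * math.sqrt((dV / V) ** 2 + (dI / I) ** 2)

print(f"P = {P:.2f} W, worst case +/-{dP_worst:.3f} W, RSS +/-{dP_rss:.3f} W")
```

Which of the two you quote depends on exactly the questions above: the worst case bounds the error; the root-sum-square is the more realistic figure when the individual errors are independent.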
 

Janis59

Joined Aug 21, 2017
1,855
The uncertainty follows from the Gaussian dispersion law and depends on the number of measurements and on the dispersion of the results — 10th-grade physics material. Suppose the table of measurement results looks like this: 12 V, 15 V, 9 V, 0 V, 110 V. The average is sum/count = 146/5 = 29.2 V. Next compute each deviation from this average: -17.2, -14.2, -20.2, -29.2, +80.8. Square each and sum: 295.84 + 201.64 + 408.04 + 852.64 + 6528.64 = 8286.8.

Now there is a fork. Dividing by n-1 = 4 and taking the square root gives the sample standard deviation, sqrt(8286.8/4) ≈ 45.5 V. Dividing by n(n-1) = 5×4 = 20 instead gives the standard deviation of the mean, sqrt(8286.8/20) ≈ 20.4 V — that is the figure the confidence interval is built on. (Legend has it Stalin locked the best Soviet mathematicians in a hall and threatened to shoot one each day until an alternative to the American method was invented, to avoid paying the patent — in those times it was allowed to patent formulas.) Finally everything is scaled by the Student coefficient; for n = 5 (four degrees of freedom) at 95 % confidence it is about 2.78. Thus the uncertainty is 2.78 × 20.4 ≈ 57 V, and the result is 29 ± 57 V (P = 95 %). Clearly, with such a grandiose uncertainty the result must be rounded to whole volts. Conversely, if the results had been, say, 12; 12.1; 11.9; 12; 12, everything would be precise and exact. In the first case many more measurements are needed; in the second the job is done, and well done.

When a result is the product of two sub-results, the relative uncertainties add. Example: P = F·v, with F = 3 N ± 10 % and v = 10 m/s ± 5 %. Then P = 3 × 10 = 30 W ± 15 %.

Division is nastier: take the best and worst cases. Example: I = V/R, with V = 12 ± 2 V and R = 24 ± 4 Ω. Then I_min = (12 - 2)/(24 + 4) ≈ 0.36 A and I_max = (12 + 2)/(24 - 4) = 0.70 A.
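For what it's worth, a few lines of Python reproduce the arithmetic above; the Student's t value is hard-coded for four degrees of freedom at 95 % so no extra libraries are needed:

```python
import statistics

readings = [12, 15, 9, 0, 110]        # the five voltage readings above

n = len(readings)
mean = statistics.fmean(readings)     # 29.2 V
s = statistics.stdev(readings)        # sample std dev (divisor n-1): ~45.5 V
sem = s / n ** 0.5                    # std deviation of the mean:   ~20.4 V

t95 = 2.776                           # Student's t, 4 deg. of freedom, 95 %
print(f"{mean:.1f} +/- {t95 * sem:.1f} V at 95 % confidence")  # 29.2 +/- 56.5
```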
 

MrAl

Joined Jun 17, 2014
11,566
Hello,
I tried googling this, but all the articles I can find assume you know both the observed and the actual values, and I don't.

So if you are measuring watts, your error is "max error of DMM voltage range" times "max error of DMM amperage range". Very simple. But if you're doing measurements that are a bit more complex, like watts and lumens, or watts and temperature, or anything even more arcane, how would you calculate the total measurement error/uncertainty?

Thanks!
Hi,

I am not entirely sure what you are asking, but you can also look into the total differential. That lets you estimate the error in a derived quantity even when each contributing error occurs along a different dimension (a different input variable).
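As a sketch of how the total-differential approach can be applied, here it is worked with sympy for a lumens-per-watt efficacy figure; the 800 lm / 10 W lamp numbers and their tolerances are invented purely for illustration:

```python
import sympy as sp

# Derived quantity: luminous efficacy eta = lumens / watts
lm, W, d_lm, d_W = sp.symbols('lm W d_lm d_W', positive=True)
eta = lm / W

# Worst-case total differential:
#   |d_eta| <= |d(eta)/d(lm)| * d_lm + |d(eta)/d(W)| * d_W
d_eta = sp.Abs(sp.diff(eta, lm)) * d_lm + sp.Abs(sp.diff(eta, W)) * d_W

# Invented lamp numbers: 800 lm +/- 40 lm, driven at 10 W +/- 0.2 W
vals = {lm: 800, W: 10, d_lm: 40, d_W: 0.2}
print(sp.simplify(d_eta))         # symbolic form of the combined error
print(float(d_eta.subs(vals)))    # ~5.6 lm/W on a nominal 80 lm/W
```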
 

MisterBill2

Joined Jan 23, 2018
18,986
Once again, as I explained in another thread and as others have stated, uncertainty and error are entirely different things. In any system lacking infinite resolution, the resolution uncertainty is half the size of the smallest increment the instrument can resolve.
Any error is the result of the limits of that increment deviating from their nominal values.

In calculated results, the worst-case uncertainty is the sum of the uncertainties of the measurements used.

In all fields except "rocket science" and financial dealings, the solution is always to have resolution adequate to assure that the resulting uncertainty is within the allowable margin of error. "Perfectly accurate" measurements are unlikely in the real world; the exception is the examples found in textbooks.
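A tiny numeric illustration of the half-an-increment idea, using a hypothetical 3½-digit DMM as the example:

```python
def resolution_uncertainty(step):
    """Quantized readings carry +/- half the smallest increment."""
    return step / 2

# A 3.5-digit DMM (1999 counts) resolves 0.01 V on its 20 V range:
u_reading = resolution_uncertainty(0.01)      # +/-0.005 V

# Worst case for a calculated result: the input uncertainties add,
# e.g. the difference of two readings taken on that same range:
u_difference = u_reading + u_reading          # +/-0.01 V
print(u_reading, u_difference)
```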
 