# Integral Error

Thread Starter

#### RdAdr

Joined May 19, 2013
214
Consider two functions f, g that take on values at t=0, t=1, t=2.

Then the total error between them is:

total error = mod(f(0)-g(0)) + mod(f(1)-g(1)) + mod(f(2)-g(2))

where mod denotes the modulus (absolute value).

This seems reasonable enough.

Now, consider the two functions to be continuous on [0,2].
What is the total error now?

My guess is that it is the integral of the absolute value of their difference divided by the length of the interval:
total error = (1/2) * integral from 0 to 2 of mod(f(x) - g(x)) dx

Is this right?

Or is the error evaluation done in a different way?
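For what it's worth, both versions of the definition can be sketched numerically. The particular f and g below are arbitrary examples (not from the thread), and the integral is approximated with a simple Riemann sum:

```python
import math

# Example functions (arbitrary choices for illustration)
def f(t):
    return math.sin(t)

def g(t):
    return t - t**3 / 6  # Taylor approximation of sin(t)

# Discrete total error: sum of absolute differences at t = 0, 1, 2
discrete_error = sum(abs(f(t) - g(t)) for t in (0, 1, 2))

# Continuous version: (1/2) * integral over [0, 2] of |f - g|,
# approximated with a left Riemann sum
n = 100_000
dt = 2.0 / n
integral = sum(abs(f(k * dt) - g(k * dt)) * dt for k in range(n))
continuous_error = integral / 2.0  # divide by the interval length
```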

#### wayneh

Joined Sep 9, 2010
17,152
Thinking about it, measuring the error between the curves using the area between them is good enough.
Good enough for what? The problem to be solved dictates the tools to be used. In normal curve fitting, it's customary to sum the squared errors. This gives a different result than fitting to minimize the area between the data and the model. Of course the data is not continuous in this case.

Thread Starter

#### RdAdr

Joined May 19, 2013
214
Consider an ideal ADC and the input signal i(t).
Through the sampling process, an error appears: analog information on the time axis is lost.
So how can we evaluate this error? We could define a continuous-time signal f(t) that takes the value of the samples at the sampling instants and is zero everywhere else.
So:
f(t) = i(t) for t = kT, and f(t) = 0 for t ≠ kT

Now, the error between i(t) and f(t) can be given by the absolute value of the area between them. The larger the area, the bigger the error. Why not.

When the output of the ADC is fed to an ideal DAC, then a continuous-time signal is obtained back through the zero-order hold process. So, now f(t) is this continuous-time signal. But when analyzing only the ideal ADC, maybe I want to define some f(t) of my own and use it to evaluate the error, without thinking about the DAC.

Of course, it is more natural to take the DAC into consideration as well. When the DAC is included, the initial error will decrease, because holding the last sample value (zero-order hold) approximates the input better than setting the value to 0 between samples.

So good enough for this: the error between the input of ideal ADC and the output of ideal DAC.
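A rough numerical sketch of this comparison. The input signal, sampling period, and duration are arbitrary example values, not ones from the thread:

```python
import math

T = 0.1          # sampling period (arbitrary choice)
duration = 2.0   # signal duration in seconds
dt = 0.0001      # fine grid for approximating the error integrals

def i_sig(t):
    return math.sin(2 * math.pi * t)   # example input signal

err_zero = 0.0   # error when f(t) = 0 between samples
err_zoh = 0.0    # error when f(t) holds the last sample (zero-order hold)
for k in range(int(duration / dt)):
    t = k * dt
    last_sample = i_sig(math.floor(t / T) * T)
    err_zero += abs(i_sig(t) - 0.0) * dt  # f is 0 except at isolated instants
    err_zoh += abs(i_sig(t) - last_sample) * dt
```

As expected, the zero-order-hold reconstruction gives the smaller area between the curves.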

Last edited:

Thread Starter

#### RdAdr

Joined May 19, 2013
214
Very interesting.

#### WBahn

Joined Mar 31, 2012
26,398
Consider two functions f, g that take on values at t=0, t=1, t=2.

Then the total error between them is:

total error = mod(f(0)-g(0)) + mod(f(1)-g(1)) + mod(f(2)-g(2))

where mod denotes the modulus (absolute value).

This seems reasonable enough.
Are you claiming that this IS the definition of the "total error between them", or are you simply choosing to use that as YOUR definition of the "total error between them" for the particular purpose at hand?

Now, consider the two functions to be continuous on [0,2].
What is the total error now?
That would depend on what your purpose is.

"error" is usually defined in such a way that it scales well and limits well. For instance, if your original function were defined at many more points, then your definition of total error would grow even if the error at each point were made less. That defies most criteria for reasonableness. So most definitions of total error would normalize this by the number of points involved.

This is reasonable because if f(t) = g(t) + A, there is a constant "error" between f(t) and g(t), yet your definition would have the total error growing without bound if the number of points grows.
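The scaling problem can be seen with a tiny sketch: with f(t) = g(t) + A, the plain sum of point errors grows with the number of points, while the mean stays fixed at A (the value of A here is an arbitrary example):

```python
# Constant offset between f and g: |f(t) - g(t)| = A at every point
A = 0.5

for n_points in (3, 30, 300):
    errors = [A] * n_points     # same per-point error each time
    total = sum(errors)         # grows without bound as n_points grows
    mean = total / n_points     # stays at A regardless of n_points
    print(n_points, total, mean)
```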

My guess is that it is the integral of the absolute value of their difference divided by the length of the interval:
total error =1/2 * integral from 0 to 2 of mod(f(x)-g(x)) dx

Is this right?

Or is the error evaluation done in a different way?
Depends on what makes sense for the particular problem at hand.

The usual metric for "error" is RMS error (the root of the mean of the square). The primary advantage this has over using the absolute value function is that the squared error is smooth and differentiable, which makes it much easier to perform various derivations and analyses. It also exaggerates large errors, which is often preferred, but that may or may not be a good thing for a particular problem.
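A minimal sketch of mean-absolute versus RMS error on some made-up data:

```python
import math

# Example data: observations f versus model g (arbitrary values)
f = [1.0, 2.1, 2.9, 4.2]
g = [1.0, 2.0, 3.0, 4.0]

diffs = [a - b for a, b in zip(f, g)]
mae = sum(abs(d) for d in diffs) / len(diffs)             # mean absolute error
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS error
```

Because of the squaring, the single larger deviation (0.2) pulls the RMS error above the mean absolute error.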

Thread Starter

#### RdAdr

Joined May 19, 2013
214
Are you claiming that this IS the definition of the "total error between them", or are you simply choosing to use that as YOUR definition of the "total error between them" for the particular purpose at hand?

That would depend on what your purpose is.
My definition. I saw that the quantization noise error between the sampled signal and the quantized signal is between -1/2 VLSB and +1/2 VLSB at every sample, where VLSB = Vref/2^N.
And then I thought that, ok, the total quantization noise error between the sampled signal and the quantized signal could be the sum of all these individual errors.
And then I thought: what if the sampling process were infinitely fine? We would then obtain the same continuous-time signal after sampling. This signal is then quantized, and we would obtain a continuous-time signal with finite discontinuities (stair steps). The error at any instant in time would still be between -1/2 VLSB and +1/2 VLSB.
And the total quantization noise error, I thought, by extension from the discrete case, could be the integral of the individual error. But the integral is an area and is measured in V*s, whereas in the discrete case the total was measured in V. So I divided the integral by the duration of the signal to obtain volts.
That's how I ended up with that definition.
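A small sketch of the ±1/2 VLSB bound, assuming an idealized rounding quantizer over [0, Vref]. The Vref and N values are arbitrary examples:

```python
# Idealized N-bit quantizer over [0, Vref] that rounds to the nearest level
Vref = 1.0
N = 4
VLSB = Vref / 2 ** N

def quantize(v):
    # round to the nearest quantization level (clipping is ignored here)
    return round(v / VLSB) * VLSB

# check that the error stays within +/- VLSB/2 across many input values
max_err = 0.0
steps = 10_000
for k in range(steps):
    v = (k / steps) * Vref * 0.99   # stay inside the input range
    max_err = max(max_err, abs(v - quantize(v)))

print(max_err, VLSB / 2)  # max_err never exceeds VLSB/2
```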

"error" is usually defined in such a way that it scales well and limits well. For instance, if your original function were defined at many more points, then your definition of total error would grow even if the error at each point were made less. That defies most criteria for reasonableness. So most definitions of total error would normalize this by the number of points involved.

This is reasonable because if f(t) = g(t) + A, there is a constant "error" between f(t) and g(t), yet your definition would have the total error growing without bound if the number of points grows.

Depends on what makes sense for the particular problem at hand.

The usual metric for "error" is RMS error (the root of the mean of the square). The primary advantage this has over using the absolute value function is that the squared error is smooth and differentiable, which makes it much easier to perform various derivations and analyses. It also exaggerates large errors, which is often preferred, but that may or may not be a good thing for a particular problem.
Ok, I see. Maybe the total error in the discrete case could be better given by xrms and in the continuous case by frms from here:
https://en.wikipedia.org/wiki/Root_mean_square
Thanks.

#### MrAl

Joined Jun 17, 2014
8,262
Hello,

I think what you might be interested in looking at is called the "correlation coefficient" and related. It's a wide area really, but basically what you are measuring is how good the correlation is between the two sets.
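A minimal sketch of the Pearson correlation coefficient on made-up data:

```python
import math

# Pearson correlation coefficient between two sample sets (example data)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.1]

n = len(x)
mx = sum(x) / n
my = sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance sum
sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # spread of x
sy = math.sqrt(sum((b - my) ** 2 for b in y))          # spread of y
r = cov / (sx * sy)

print(r)  # close to 1: the two sets track each other well
```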

Also, most of the absolute error methods don't work well in practice because that's not usually how we do things. For example, if you bought 100 crates of apples and there was one missing from each crate except the first one the sales guy showed you, when you got home you'd find you were missing 99 apples! That's a lot of apples.
But wait, how many apples are in each crate? If there were 2 apples in each crate you'd be missing almost 50 percent of the apples you thought you were purchasing, but if there were 20 apples per crate you'd only be missing 5 percent, and of course if there were 100 per crate you'd only be missing 1 percent.
So this gives some idea why PERCENT ERROR is usually more important. Resistors, capacitor values, etc. If we had a 1 ohm resistor that was off by 1 ohm we'd be very unhappy, but if it was a 1 kohm resistor we'd be very happy to find it was only off by 0.1 percent.
An error method that takes this into account is usually better because it reflects how we normally deal with quantities that vary over a wide range.
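The resistor comparison in numbers (a sketch using the 1 ohm and 1 kohm values from the example above, each assumed to read 1 ohm high):

```python
# Same absolute error, very different percent error
nominal = [1.0, 1000.0]   # 1 ohm and 1 kohm resistors
actual  = [2.0, 1001.0]   # each measures 1 ohm high

for nom, act in zip(nominal, actual):
    abs_err = abs(act - nom)            # identical for both: 1 ohm
    pct_err = 100.0 * abs_err / nom     # 100% versus 0.1%
    print(nom, abs_err, pct_err)
```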

Also, two performance criteria are:
ISE, the integral of the squared error, and
IAE, the integral of the absolute error.
There are others as well, such as ITAE, the integral of time times the absolute value of the error, which is used to emphasize errors that occur later in time.
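These criteria can be sketched numerically. The error signal below is an arbitrary example (a decaying oscillation), and the integrals are approximated with Riemann sums:

```python
import math

# Example error signal: e(t) = exp(-t) * sin(5 t), an arbitrary choice
def e(t):
    return math.exp(-t) * math.sin(5 * t)

dt = 0.0001
T_end = 5.0
ise = iae = itae = 0.0
for k in range(int(T_end / dt)):
    t = k * dt
    ise += e(t) ** 2 * dt        # ISE: integral of squared error
    iae += abs(e(t)) * dt        # IAE: integral of absolute error
    itae += t * abs(e(t)) * dt   # ITAE: weights late errors more heavily

print(ise, iae, itae)
```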

Last edited: