Instrument Errors

Discussion in 'General Electronics Chat' started by thanujarc, Jan 2, 2010.

  1. thanujarc

    Thread Starter New Member

    Oct 29, 2008
    Hi All,

    These days I'm investigating the errors that an instrument can have. I'll be glad if you all can help me out with it. I need to find some methods to prove that digital multimeters contain measurement errors in AC voltage and resistance. How can I show this?

    Thanks a lot
  2. Paulo540


    Nov 23, 2009
    Generally, the product's specifications list its average error percentage. To fully investigate, you would need at least one piece of highly accurate, calibrated measuring equipment.

    In my limited opinion, linearity is far more important than base accuracy. Most of the work I've done involves comparing two or more values, and that comparison is the foundation of a lot of formulas. Good ol' delta and such.

    Building (or buying) 4-wire leads takes the lead resistances out of the equation, so that's a big help.

    Sorry for the jumble of ideas, that's just how I roll.

  3. russ_hensel

    Distinguished Member

    Jan 11, 2009
    Get two meters.
    Measure some things with both.
    Unless all the measurements agree, you have some errors -- proven.
  4. someonesdad

    Senior Member

    Jul 7, 2009
    AC voltage: hook up a function generator known (via an oscilloscope) to have a flat output with frequency. Set the frequency to, e.g., 60 Hz and input a 1 volt RMS sine wave into the DMM. Increase the function generator's frequency and watch the DMM fail to indicate the correct voltage (verify by simultaneously watching the output voltage on the scope). You can repeat this exercise by decreasing the frequency as well.

    You can show the same thing indirectly by measuring the voltage of a square wave at 60 Hz. Unless you have a DMM capable of measuring the RMS value of the input, you won't read the correct voltage.
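
    To see the numbers behind the square-wave test, here's a short Python sketch (my illustration, not part of the original post). It compares true RMS with what a typical average-responding DMM would indicate; such meters rectify, average, and scale by the sine form factor pi/(2*sqrt(2)), so they read correctly on sines but about 11% high on square waves:

```python
import math

def true_rms(samples):
    """True RMS of a sampled waveform."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def avg_responding_reading(samples):
    """What an average-responding DMM indicates: rectified mean
    scaled by the sine form factor pi/(2*sqrt(2)) ~ 1.111."""
    rectified_mean = sum(abs(v) for v in samples) / len(samples)
    return (math.pi / (2 * math.sqrt(2))) * rectified_mean

# One cycle each of a 1 V peak square wave and a 1 V peak sine.
n = 10000
square = [1.0 if i < n // 2 else -1.0 for i in range(n)]
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]

print(true_rms(square))                # 1.0 V true RMS
print(avg_responding_reading(square))  # ~1.111 V, about 11% high
print(true_rms(sine))                  # ~0.707 V
print(avg_responding_reading(sine))    # ~0.707 V (correct, by design)
```

    A true-RMS meter would read both waveforms correctly (within its bandwidth); the discrepancy on the square wave is the error being demonstrated.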

    Resistance: cut off a 10 cm long chunk of 12 gauge copper wire. Measure the resistance with the digital multimeter. Calculate what the resistance should be from a copper wire table.
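
    The expected value is easy to compute from R = rho * L / A. A quick Python sketch (using textbook values for copper resistivity and the 12 AWG diameter) shows why this is a tough test; the true resistance is around half a milliohm, far below a typical handheld DMM's resolution and its lead resistance:

```python
import math

# Textbook constants: resistivity of annealed copper at 20 C
# and the nominal 12 AWG conductor diameter.
RHO_COPPER = 1.72e-8   # ohm * metre
D_12AWG = 2.053e-3     # metres

def wire_resistance(length_m, diameter_m, rho=RHO_COPPER):
    """R = rho * L / A for a round conductor."""
    area = math.pi * diameter_m ** 2 / 4.0
    return rho * length_m / area

r = wire_resistance(0.10, D_12AWG)   # 10 cm of 12 AWG
print(f"{r * 1000:.3f} milliohms")   # roughly 0.5 milliohms
```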

    Ultimately, as implied by these methods, you need to compare the DMM's measurement to something that performs the measurement more accurately.
  5. SgtWookie


    Jul 17, 2007
    This inquiry is valid, but the answer is really tough.

    If you want absolute accuracy, you have to go back to the calibration standards. The further you get from the official calibration standards, the less accurate your readings are.

    Sure, you can get close; and most of the time, close is "good enough".

    But if you demand absolute accuracy, it's going to be very expensive.

    Just for grins, I tested a cheap $4 DMM that I bought from Harbor Freight against a Fluke calibrator that had just returned from calibration. I was genuinely surprised to find the readings well within reason; from near the minimum to near the maximum, it read within 1% tolerance.

    It did not do well at the extreme ends of its ratings, but that was to be expected.

    As far as RMS readings go, you'll need a really good meter to get accurate results.

    You probably want the reasons behind all of this. You will need to do a lot of research.

    If you are a student, join as a student. It's only $30/year for students. It's well worth the price.
  7. someonesdad

    Senior Member

    Jul 7, 2009
    I'll assume we've answered the OP's question satisfactorily.

    For others who might be interested, metrology is an interesting subject and making accurate physical measurements can be a demanding and challenging task.

    One of the references I recommend to the interested person is the NBS special publication 300 (I think I got the number right; my copy is buried somewhere in the house and I'm too lazy to go look for it). I especially like the article(s) by Harry Ku on the propagation of error formulas; it discusses Cramer's Theorem and can be a good jumping-off point should you wish to delve into a mathematical statistics book for more details. Here's a starting point for some more references to examine.
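
    As a small illustration of the propagation-of-error idea (my own hypothetical example, not from Ku's article), here's the standard first-order formula sigma_f^2 ~ sum (df/dx_i)^2 sigma_i^2 for uncorrelated inputs, applied to P = V^2 / R:

```python
import math

def propagate_power(v, sigma_v, r, sigma_r):
    """First-order (delta-method) uncertainty of P = V^2 / R
    for uncorrelated V and R:
        sigma_P^2 ~ (dP/dV)^2 * sigma_V^2 + (dP/dR)^2 * sigma_R^2
    """
    p = v * v / r
    dp_dv = 2 * v / r               # partial derivative w.r.t. V
    dp_dr = -v * v / (r * r)        # partial derivative w.r.t. R
    sigma_p = math.sqrt((dp_dv * sigma_v) ** 2 + (dp_dr * sigma_r) ** 2)
    return p, sigma_p

# Hypothetical measurement: 10.0 +/- 0.1 V across 100 +/- 1 ohm.
p, sp = propagate_power(10.0, 0.1, 100.0, 1.0)
print(f"P = {p:.3f} +/- {sp:.3f} W")   # ~ 1.000 +/- 0.022 W
```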

    I got interested in this stuff in the 1980's when helping bring up a thin film disk manufacturing facility. An interesting topic is the growth of type 1 and type 2 errors in a process that uses measurement inspection to pass parts on to the next process. Things become interesting when you're using state-of-the-art measurements where the precision and uncertainties are significant -- and you know the probability distributions are not normal. The only way I ever found to estimate the numbers was through Monte Carlo simulation and that was an interesting task in its own right.
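
    The basic Monte Carlo idea is simple to sketch, even though real cases (non-normal distributions, correlated gauges) get much harder. In this toy Python version (all numbers hypothetical), parts have a true value, the gauge adds noise, and the pass/fail decision is made on the noisy reading, which generates both false rejects (type 1) and false accepts (type 2):

```python
import random

def inspection_mc(n=100_000, spec_lo=9.5, spec_hi=10.5,
                  part_mu=10.0, part_sigma=0.3, gauge_sigma=0.1, seed=1):
    """Monte Carlo estimate of type 1 (false reject) and type 2
    (false accept) rates when accept/reject decisions are based on
    a noisy measurement instead of the part's true value."""
    rng = random.Random(seed)
    false_reject = false_accept = 0
    for _ in range(n):
        true_val = rng.gauss(part_mu, part_sigma)        # the part itself
        measured = true_val + rng.gauss(0, gauge_sigma)  # what the gauge sees
        in_spec = spec_lo <= true_val <= spec_hi
        accepted = spec_lo <= measured <= spec_hi
        if in_spec and not accepted:
            false_reject += 1
        if accepted and not in_spec:
            false_accept += 1
    return false_reject / n, false_accept / n

fr, fa = inspection_mc()
print(f"type 1 (false reject): {fr:.3%}")
print(f"type 2 (false accept): {fa:.3%}")
```

    Most of the errors come from parts whose true value sits near a spec limit, where the gauge noise can push the reading across the line in either direction.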

    One thing I've concluded over the decades, both from watching other workers and observing my own bumbling, is that we often do a poor job of assessing and stating the uncertainties associated with our physical measurements (result = often underestimating the uncertainties). Two contributors are statistical sloppiness and not spending enough time working on identifying the true uncertainties of measurement. Of course, sometimes the powers that be won't allocate the resources necessary to such work and that's understandable. A pet peeve of mine has been the poor and/or unclear reporting of uncertainties in the literature over the last century or two -- there never was a standard and each worker was free to use his own definitions and assumptions (often not even stated). Hopefully, things have gotten a bit better with such things as the NIST technical note, but it will probably take a generation to become firmly entrenched (and that's probably being optimistic :)).

    The real lesson is that we make measurements to help us make decisions and we often don't examine the sensitivity of the decision making process to the uncertainties and biases in the measurements.
  8. studiot

    AAC Fanatic!

    Nov 9, 2007
    If you really want to do this practically without all that national calibration stuff you can do several things.

    Get two very accurate resistors, connect them in series across any voltage source, and use them as a potential divider. This will give you one calibration point. More resistors will give you more.

    Measure any test resistor, then add one of your accurate resistors (and then the other) in series and in parallel, and remeasure each combination.
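
    The arithmetic behind these checks is straightforward; here's a Python sketch with hypothetical values. The divider tap voltage is fixed by the resistor ratio alone, and the series/parallel readings of an unknown resistor must satisfy simple identities, so any disagreement beyond the meter's rated accuracy demonstrates an error:

```python
# Hypothetical values for the resistor cross-checks described above.
RA, RB = 10_000.0, 10_000.0   # two accurately known resistors (ohms)
V_SOURCE = 9.0                # any stable voltage source (volts)

# 1) Potential divider: the tap voltage follows from the ratio alone,
#    giving one calibration point for the DMM's voltage range.
v_tap = V_SOURCE * RB / (RA + RB)
print(f"expected tap voltage: {v_tap:.4f} V")

# 2) Series/parallel consistency check on an unknown resistor R.
R = 4_700.0                   # what a perfect meter would read for R alone
r_series = R + RA
r_parallel = R * RA / (R + RA)
print(f"series:   {r_series:.1f} ohms expected")
print(f"parallel: {r_parallel:.1f} ohms expected")
# The meter's three readings must satisfy
#   r_series - RA == R   and   1/r_parallel - 1/RA == 1/R
# within its rated accuracy; if they don't, you've shown an error.
```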

    Another way, if your college has one, is a decade ratio transformer. You can compare your DVM against it to many decades of accuracy. Mine has 10 decades.