I am trying to calculate the energy delivered by a short electrical pulse. I capture the waveform on an oscilloscope and export it to a CSV file. The scope sampled at a rate of one point every 8 microseconds, so each time value could in principle be off by up to ±4 microseconds. The scope's manual makes no mention of uncertainties whatsoever, but it states that the fastest capture rate available is one point every 1 microsecond. My question is whether the timing uncertainty should be taken as half of the time increment I actually used (±4 microseconds) or half of the smallest time increment the scope can capture (±0.5 microseconds).
Here is the real issue: to calculate energy, I square the voltage to get power, then add up the power values multiplied by how long each one lasted, i.e. energy = timestep × SUM(powers). The oscilloscope's time resolution therefore enters the result directly. To get the final uncertainty, I add the relative uncertainty of the timestep to the relative uncertainty of the sum of powers. But if I take the timestep error to be ±4 microseconds, I necessarily end up with a relative error of at least 4 μs / 8 μs = 0.5, which is tremendous. That does not seem right, so which method of estimating the uncertainty is correct here?
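For concreteness, the calculation I am doing looks roughly like this (a minimal sketch: the voltage samples are made up, and the load resistance R = 50 Ω is a placeholder for my actual setup, since the real data comes from the CSV export):

```python
dt = 8e-6   # sample interval: one point every 8 microseconds
R = 50.0    # placeholder load resistance in ohms

# placeholder voltage samples standing in for the CSV export (volts)
v = [0.0, 1.0, 2.0, 1.5, 0.5, 0.0]

# power at each sample, then energy = timestep * SUM(powers)
powers = [x**2 / R for x in v]
energy = dt * sum(powers)

# the relative timestep uncertainty that worries me, if I use +/- 4 us
rel_dt = 4e-6 / dt   # = 0.5

print(f"energy = {energy:.3e} J, relative timestep uncertainty = {rel_dt}")
```

With these made-up numbers the energy comes out around a microjoule, but the point is the last line: taking ±4 μs as the timestep error makes rel_dt equal to 0.5 regardless of the data.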
Thank you all