# finding capacitor value using RC time constant thingy, you know what i mean

Discussion in 'Homework Help' started by ninjaman, Dec 10, 2014.

1. ### ninjaman Thread Starter Member

May 18, 2013
306
1
hello

I've got a DSO Nano, an el-cheapo pocket scope off eBay.
I have a homework assignment to find the values of a capacitor and an inductor. I looked up the RC method online and tried it, with funny results. I measured the voltage across the capacitor; the final voltage was 10 V. I want one time constant, which is the time to reach 63% of that, so I did 10 V / 100 × 63 and got 6.3 V.
I put the cursor on this value (6.3 V) to find the time at that point and divided this time by the resistance. I got a time of 4.56 ms; this over 2237 ohms (measured value) gave me 0.2 uF.
I measured the capacitor and got 1 uF, so I don't know what I'm doing wrong here. I wanted to show the spreadsheet, but I have Excel Starter and I can't import the file; I have a nice little picture though!
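In code, the intended arithmetic looks like this (a sketch with illustrative values, not the actual readings from the trace):

```python
import math

# RC step response: v(t) = V0 * (1 - exp(-t / (R*C))).
# At t = one time constant (tau = R*C) the voltage reaches ~63.2% of V0.

V0 = 10.0        # step amplitude, volts
R = 2200.0       # known resistor, ohms

threshold = V0 * (1 - math.exp(-1))   # ~6.32 V, the one-time-constant level
tau = 2.2e-3     # time read off the scope at the threshold (hypothetical)
C = tau / R      # tau = R*C  =>  C = tau / R

print(f"threshold = {threshold:.2f} V, C = {C * 1e6:.2f} uF")  # 6.32 V, 1.00 uF
```

With a 2.2 ms reading and 2200 Ω this gives 1.0 µF; any error in the time reading shows up directly in C.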

The voltages at the bottom don't mean much (I think?).
It looks like it should, but I can't find the correct value.

any help on what I may be doing wrong would be great
thanks

simon

2. ### MikeML AAC Fanatic!

Oct 2, 2009
5,451
1,066
What is the input impedance of your cheapie dso?

3. ### WBahn Moderator

Mar 31, 2012
18,092
4,918
How are you getting 4.56 ms?

Just looking at your trace I'm estimating something closer to 2.4 ms. With a resistance of 2337 Ω that would put the capacitance at about 1.0 uF.

4. ### ninjaman Thread Starter Member

May 18, 2013
306
1

Hello WBahn, how did you estimate the 2.4 ms? The resistance is 2237 ohms. I'm guessing you took your time from the yellow line; this is the trigger, and I'm not too sure how to set it. I thought it was the very start of the voltage rise, but it doesn't look that way. I suppose I can set the trigger sensitivity?
I'm not too sure how that stuff works. I will have to read up on it.

thanks for the help WBahn

5. ### MrChips Moderator

Oct 2, 2009
12,649
3,460
Use the Time cursors (white). Set one cursor at the start of the rising edge. Set the second cursor where the trace reaches 63% of the final voltage (6.3 V if it tops out at 10 V). Then read off ΔT.

6. ### WBahn Moderator

Mar 31, 2012
18,092
4,918

(1.2 div)*(2ms/div) = 2.4ms

A better way to make rise-time measurements is to use 10% to 90%. Older scopes actually had these horizontal lines as a permanent part of the graticule.

7. ### crutschow Expert

Mar 14, 2008
13,509
3,385
Not when you want to determine the RC time-constant.

8. ### WBahn Moderator

Mar 31, 2012
18,092
4,918
Why not?

You can use the time between any two thresholds you want. You want to use a time that is as long as possible. You don't want to use 0% because, just as in this screen capture, there is often some noise at the transition. You could use less than 10% but 10% was chosen as the convention and it works nicely. You can't use 100% because that is technically undefined. At 90% you still have enough slope to be able to tell the crossing pretty nicely. It takes a first-order response ln(9) time constants to go from 10% to 90%.
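The ln(9) figure follows directly from the exponential; a quick check (notation mine):

```python
import math

# For a first-order rise v(t)/V0 = 1 - exp(-t/tau), the time to reach
# fraction p of the final value is t(p) = -tau * ln(1 - p).
def t_over_tau(p):
    return -math.log(1.0 - p)

rise_10_90 = t_over_tau(0.90) - t_over_tau(0.10)  # in units of tau
print(rise_10_90, math.log(9))  # both ~2.197
```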

9. ### crutschow Expert

Mar 14, 2008
13,509
3,385
Not really. The accuracy near the end of the rise is low because the voltage change is small with respect to time. That's why using the value at one time-constant, where the voltage versus time slope is high, is a good point to measure, and it also avoids any calculations involving the logarithm.

10. ### WBahn Moderator

Mar 31, 2012
18,092
4,918
At 90% the slope is still high enough to get a good measurement. It's also easier to construct reference lines at 10% and 90% than it is to do so at 63.2...%. Then there is the issue of determining where the 0% point is given the transition disturbance that often exists. And what's so hard about dividing by 2.2?

11. ### MrAl Distinguished Member

Jun 17, 2014
2,567
521
Hi WBahn,

I have to agree with Carl on this because the single time constant is so much easier to measure and calculate.
1. It is easy to find where 63 percent of the voltage is.
2. It is easy to measure the time for the wave to reach that voltage (usually) and that is already the time constant, with no other calculation necessary.
3. The estimate of 63 percent is straightforward and consistent with time constant theory.

With the 10 and 90 percent method you have to measure the time between two places, 10 percent and 90 percent, then multiply that time by the factor:
1/ln(9)
which is about equal to 1/2.2, so it's not that hard to do, but it still seems a little more removed from the theory of what a time constant really means (the 0 to 63 percent time).
Also, if you don't know where 0 percent is then you don't know where 10 percent is either.
I do have to agree that it is another method, though, which would allow for error checking of the first method.

Some other interesting places to measure are:
20 percent and 70 percent, which results in a factor very close to 1 (actually 1/0.98), so it is for most purposes the same as the time constant itself (no factor needed, really), and
20 percent and 50 percent, which results in a factor so close to 1/0.47 that it can be called 1/0.47 in almost every case (actually 1/0.470004), or just divide by 0.47, or for rough estimates divide by 0.5 or just multiply by 2 (ha ha).
There are probably other interesting measurement points too.
For example, 20 percent and 70.71 percent results in a factor even closer to 1, which is the time constant again, and 70.71 percent is just 100/sqrt(2), which can also be estimated at 71 percent.
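These factors all come from one relation: for lower and upper thresholds a and b (as fractions of the final value), Δt = τ·ln((1−a)/(1−b)). A quick tabulation (function name mine):

```python
import math

def divisor(a, b):
    # Time between fractional thresholds a and b is tau * ln((1-a)/(1-b)),
    # so dividing the measured time by this value gives tau.
    return math.log((1.0 - a) / (1.0 - b))

for a, b in [(0.10, 0.90), (0.20, 0.70), (0.20, 0.50), (0.20, 1 / math.sqrt(2))]:
    print(f"{a:.0%} to {b:.2%}: divide the measured time by {divisor(a, b):.4f}")
```

This reproduces ln(9) ≈ 2.197 for 10–90, ≈ 0.98 for 20–70, ≈ 0.470 for 20–50, and ≈ 1.005 for 20–70.71.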

Last edited: Dec 13, 2014
12. ### crutschow Expert

Mar 14, 2008
13,509
3,385
The start of the rise is easy to determine accurately if you also display the step input used to power the RC circuit for the measurement.
But if you want to use the less accurate 10%–90% points for the measurement, you are welcome to do so.

13. ### WBahn Moderator

Mar 31, 2012
18,092
4,918
I guess I don't understand what is so hard about adjusting the scope gain until the waveform just spans the 0% and 100% reference lines, choosing a time scale such that the 90% line is crossed in the right half of the display, and then using the horizontal position control to move the point where it crosses the 10% line to a major division (such as the left edge or the first major division). Then you count how many divisions to where it crosses the 90% line, which, on most scopes without cursors, was one of the major graticule lines marked with subdivisions specifically for this reason. Yes, you then need to divide by 2.2, but I've never found that to be much of a burden.

The start of the rise is NOT always that easy to determine because there are often switching transients that distort the signal. These are usually resolved before the signal gets to 10% (but not always, in which case you just use a higher threshold and adjust accordingly). The relationship is trivial: you simply divide by ln((1 − low)/(1 − high)), where 'high' is the fraction at the end of the measurement and 'low' is the fraction at the start of the measurement.

I don't find that finding 63% is anywhere near as easy as accurately finding 10% and 90% since those are hard drawn lines on the graticule that run continuously from left to right whereas 63% has to be interpolated on a pair of vertical lines and then run across visually to where it intersects the signal trace.

If you want to claim that the 10-90 method is less accurate than the 0-63 method, then please back that up with a suitable error analysis of both.

14. ### crutschow Expert

Mar 14, 2008
13,509
3,385
And I don't see what's difficult about determining the 63% point, but perhaps that's because I'm using a Digital scope with adjustable readout cursors and you are using an old analog scope without cursors.

If there's a problem starting from the zero point then you can start at the 10% point but still end at or near the 63% point and adjust the calculations accordingly.

Going from 1 time-constant (63%) to 1.1 time-constants (66.7%) gives a 5.5% change in amplitude (for a 10% time-constant change in time). Going from 2.3 time-constants (90%) to 2.53 time-constants (92%) gives a 2.3% change in amplitude (for the same 10% time-constant change in time). Thus it should be apparent that, for a given voltage/time readout accuracy from the oscilloscope, the time to a particular point on the time-constant curve can be determined with at least twice the accuracy at the 1 time-constant point as compared to the 2.3 time-constant point.
Any further error analysis of this, if needed, I leave as an exercise to the reader.
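The percentages quoted above can be reproduced directly from the normalized rise (a sketch, variable names mine):

```python
import math

def v(t_over_tau):
    # normalized first-order rise
    return 1.0 - math.exp(-t_over_tau)

# A change of 0.1 time constants in time, near one time constant
# vs. near 2.3 time constants:
d1 = (v(1.1) - v(1.0)) / v(1.0)    # relative amplitude change near 63%
d2 = (v(2.53) - v(2.3)) / v(2.3)   # relative amplitude change near 90%
print(f"{d1:.1%} vs {d2:.1%}")     # ~5.5% vs ~2.3%
```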

15. ### WBahn Moderator

Mar 31, 2012
18,092
4,918
It's not that I'm using an old analog scope (though I do have one that I seldom use, and many people have one and use them regularly), but when I mentioned the 10%–90% method I specifically referenced that older scopes had the necessary marks as a permanent part of the scope face.

Your error analysis makes little sense. What needs to be focused on is the error in the determination of each end point and how it propagates to the error in the final answer. One huge factor that you are failing to take into account is that the wider the span the more accurate the measurement, all other things being equal.

For instance, let's say that you wanted to know how fast a shaft is turning by timing how long it takes to complete one revolution. Let's use a shaft that is turning at 60 rpm, or one revolution per second. Now let's say that your ability to start and stop your clock allows you to capture the desired event within 200 ms. You thus have a 200 ms error (which we will assume is the standard deviation of a normally distributed error) in each of the two endpoint measurements of an event that lasts one second. Propagation of errors shows that the std dev of the error in the overall measurement will be 282 ms. Since the measured event is 1 s, the speed will only be known to ±28%. But that error amount is the same regardless of whether you take the measurement for 1 rotation or for a thousand rotations. If you do the latter, your measurement will be 1000 ± 0.282 s, yielding a speed measurement that is good to better than ±0.03%.

Now, I know your claim is that the error in the time measurement of when a first-order response crosses a later threshold grows -- and I agree that it does. So let's say that because we tend to lose count of how many times the shaft has rotated when we count a lot of rotations, we might have only counted 940 revolutions, or we might have counted 1060 revolutions, when we thought we had counted 1000 -- in other words, a 6% error in the revolution count. That means that our first measurement would still have an expected error of 282 ms, but our second measurement would have an expected error of a minute -- or 60 times the amount of time it takes the shaft to turn once. Yet the end result is that our speed will be known to within 6%, much better than the nearly 30% obtained by using just one revolution.

Now, if our ability to keep track of the number of revolutions was so bad that our error in the final measurement was off by more than 30%, then we would be on the wrong side of the curve and would be better off just measuring one revolution.
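The shaft numbers work out as follows (assuming independent, normally distributed endpoint errors, per the post above):

```python
import math

clock_err = 0.200   # s, std dev of each start/stop reading
period = 1.0        # s per revolution (60 rpm)

# One revolution: the two endpoint errors add in quadrature.
err_one_rev = math.sqrt(2) * clock_err       # ~0.283 s over a 1 s span
print(f"1 revolution: +/-{err_one_rev / period:.0%}")

# 1000 revolutions with a 6% miscount: the 60-revolution count error
# dominates, at 60 s over a 1000 s span.
count_err = 0.06 * 1000 * period
total_err = math.sqrt(err_one_rev**2 + count_err**2)
print(f"1000 revolutions: +/-{total_err / 1000:.1%}")
```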

The same is true with measuring the time constant. The real question is whether the error in our ability to measure the endpoint accurately degrades faster than the gain of using a longer measurement baseline.

I have some stuff I have to do now, but I'll come back to this later (perhaps even today).

16. ### crutschow Expert

Mar 14, 2008
13,509
3,385
I'm sorry my analysis makes little sense to you. I believe I clearly showed that the measurement accuracy at one time-constant is more accurate than that taken at two, given a fixed measurement accuracy of the RC voltage.

Your analogy is faulty. A rotating shaft measurement is a linear function with time whereas the time-constant measurement is a non-linear function.
Since you couldn't see that obvious difference and think my analysis "makes little sense" to you I'm beginning to feel that you are ignoring or not understanding any arguments contrary to your faulty premise and thus no amount of further discussion will convince you otherwise, so this is my last comment on the matter.

17. ### WBahn Moderator

Mar 31, 2012
18,092
4,918
According to your reasoning, then the measurement accuracy would be even better at half a time constant or a tenth of a time constant. What is so magical about one time constant?

My analogy is not faulty, and it was given in two parts. The first part dealt with the beneficial effect of using a longer measurement span -- something you have chosen to completely ignore. The second part dealt with the adverse effect that arises when the uncertainty in the measurement grows with the measurement span.

I see no reason why you've chosen to resort to a personal attack (perhaps it was an affront to you that I had other things I had to do today and chose to forgo a detailed analysis until later), but that's your choice. I find your remarks quite insulting, especially given the number of times I have been more than willing not only to be corrected by others on this forum but to point out my own errors. Also, frankly, I really had had a much higher opinion of you (I guess live and learn). Nonetheless, I will choose to ignore that and, instead, will continue to present an analysis of the propagation of errors.

The basic problem is that we want to measure the time constant based on the time difference between two thresholds. We could do this without loss of generality by considering a first order decaying exponential, but to ward off claims that now I am analyzing something else instead of a rising waveform, we will use a rising waveform that nominally obeys

$
v(t) \; = \; V_0 \left( 1 \; - \; e^{-\frac{t}{\tau}} \right)
$

What we are trying to do is find the time constant by measuring the time at two different thresholds, t_α & t_β, where α and β are both fractions of the step voltage.

$
t_{\frac{V_{th}}{V_0}} \; = \; - \tau \cdot \ln \left( 1 \; - \; \frac{V_{th}}{V_0} \right)
$

So

$
t_{\alpha} \; = \; -\tau \cdot \ln \left( 1 - \alpha \right) \\
t_{\beta} \; = \; -\tau \cdot \ln \left( 1 - \beta \right)
$

The time constant is a function of the difference

$
t_{\beta} \; - \; t_{\alpha} \; = \; -\tau \left[ \ln \left( 1 - \beta \right) \; - \; \ln \left( 1 - \alpha \right) \right] \; = \; \tau \cdot \ln \left( \frac{1 - \alpha}{1 - \beta} \right) \\
\tau \; = \; \frac{t_{\beta} \; - \; t_{\alpha}}{\ln \left( \frac{1 - \alpha}{1 - \beta} \right)}
$

But we are interested in the uncertainty in the time constant, Δτ, which, assuming that the errors in the measurements of the two times are independent, is

$
\Delta \tau \; = \; \sqrt{\left( \frac{\partial \tau}{\partial t_{\alpha}} \Delta t_{\alpha} \right)^2 \; + \; \left( \frac{\partial \tau}{\partial t_{\beta}} \Delta t_{\beta} \right)^2}
$

Since

$
\frac{\partial \tau}{\partial t_{\alpha}} \; = \; - \frac{1}{\ln \left( \frac{1 - \alpha}{1 - \beta} \right)} \\
\frac{\partial \tau}{\partial t_{\beta}} \; = \; \frac{1}{\ln \left( \frac{1 - \alpha}{1 - \beta} \right)}
$

we have

$
\Delta \tau \; = \; \frac{\sqrt{\left( \Delta t_{\alpha} \right)^2 \; + \; \left( \Delta t_{\beta} \right)^2}}{\ln \left( \frac{1 - \alpha}{1 - \beta} \right)}
$

If we normalize this by dividing by the time constant, we get

$
\frac{\Delta \tau}{\tau} \; = \; \frac{\sqrt{\left( \Delta t_{\alpha} \right)^2 \; + \; \left( \Delta t_{\beta} \right)^2}}{t_{\beta} \; - \; t_{\alpha}}
$

Next we take into account the nonlinearity by assuming that the uncertainty in the time measurement is proportional to the voltage range that it corresponds to.

$
\Delta v \; = \; \frac{V_0 \, e^{-\frac{t}{\tau}}}{\tau} \Delta t
$

$
\Delta t \; = \; \left( \frac{\Delta v}{V_0} \right) \frac{\tau}{e^{-\frac{t}{\tau}}}
$

Thus

$
\Delta t_{\alpha} \; = \; \frac{\tau \frac{\Delta v}{V_0}}{1 - \alpha} \\
\Delta t_{\beta} \; = \; \frac{\tau \frac{\Delta v}{V_0}}{1 - \beta}
$

Substituting this into the prior expression for the uncertainty in the time constant:

$
\frac{\Delta \tau}{\tau} \; = \; \left( \frac{\tau}{t_{\beta} \; - \; t_{\alpha}} \right) \sqrt{\left( \frac{1}{1 - \alpha} \right)^2 \; + \; \left( \frac{1}{1 - \beta} \right)^2} \; \frac{\Delta v}{V_0}
$

which reduces to

$
\frac{\Delta \tau}{\tau} \; = \; \frac{1}{\ln \left( \frac{1 - \alpha}{1 - \beta} \right)} \sqrt{\left( \frac{1}{1 - \alpha} \right)^2 \; + \; \left( \frac{1}{1 - \beta} \right)^2} \; \frac{\Delta v}{V_0}
$

Now, your claim is that this is minimized when α=0 and β=1-1/e (=63.2%). Under these conditions the uncertainty is 2.90 times the uncertainty in the voltage level.

Now, we could take the partial derivatives of this and set them equal to zero and find the optimal thresholds. I would like to get to bed soon so I choose not to do this analytically. It is a pretty safe bet that the optimal lower threshold is 0% (provided we have a sufficiently clean transition) and, under those conditions the optimal upper threshold is actually just a bit over 0.67006 where the uncertainty is 2.878 times the uncertainty in the voltage level. This is admittedly fairly close to 1-1/e, which is 0.63212, and with a quite negligible improvement in the uncertainty.

For a lower threshold of 10%, the optimal upper threshold is right around 0.70305, with an uncertainty of 3.198 times the voltage uncertainty. If we use 63.2% as the upper threshold, then the error multiplier is 3.282. Using a 90% upper threshold, the uncertainty factor is 4.579, which is a substantial degradation.

If we plot this as a function of the upper threshold for a given lower threshold, we get a curve like the following (which is for a lower threshold of 0%).

As can be seen, there is actually quite a range that has nearly the same uncertainty multiplier.
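For anyone who wants to verify these figures, the final expression in the derivation can be evaluated in a few lines (function name mine):

```python
import math

def mult(a, b):
    # Uncertainty multiplier (Delta-tau/tau) per unit (Delta-v/V0)
    # for lower/upper thresholds a < b, as fractions of the final value.
    return math.sqrt(1 / (1 - a)**2 + 1 / (1 - b)**2) / math.log((1 - a) / (1 - b))

print(f"{mult(0.0, 1 - 1/math.e):.3f}")  # 0% to 63.2%: ~2.896
print(f"{mult(0.0, 0.67006):.3f}")       # optimum from 0%: ~2.878
print(f"{mult(0.1, 0.70305):.3f}")       # optimum from 10%: ~3.198
print(f"{mult(0.1, 0.9):.3f}")           # 10% to 90%: ~4.579
```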

The bottom line is that I was wrong about 10-90 giving a better result. The crossover point occurs earlier at the ~70% level. It does NOT occur at one time constant, but it is rather close.

Now, although I have no problem at all going through in detail and revealing (in fact, explicitly proving) myself wrong (in point of fact, I'm happy to do so because I learn something in the process), I don't see any reason to take time away from my vacation in order to be insulted by a member of this forum whom I once held in high regard. Whether I was wrong or right is immaterial to that. So I bid you all farewell.

18. ### crutschow Expert

Mar 14, 2008
13,509
3,385
I apologize if I said something that you felt was an insult. It was not my intention to make a personal attack. My frustration was with my feeling that you weren't making a real effort to understand my explanation and by dismissing it as making little sense.
And I did mention that your analogy about a rotating shaft to show that a longer measurement is more accurate does not apply to a non-linear measurement (although I certainly understand it does apply to the rotating shaft).
Thanks for showing the detailed mathematical analysis of the problem. It would have taken me days to do that. Perhaps that's why I usually avoid doing them.
But I again apologize for saying something that you thought was insulting or was a personal attack. I feel bad about that. Sometimes I live up to my curmudgeon label when I shouldn't.

19. ### ninjaman Thread Starter Member

May 18, 2013
306
1
hello

I have since tried this method of measuring and calculating capacitance.

Put an unknown capacitor and a known resistor in series.
Apply a sine wave at some frequency.
Measure the voltage across the known resistor.
voltage / resistance = current
Measure the voltage across the unknown capacitor.
voltage / current = capacitive reactance

Rearrange the formula for capacitance:

Xc = 1 / (2 * pi * Hz * C)

I rearrange to get this:

Xc * 2 * pi * Hz
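(For reference, solving Xc = 1/(2·pi·f·C) for C gives C = 1/(2·pi·f·Xc), i.e. you divide by Xc rather than multiply. A numeric sketch of the whole procedure, with hypothetical measured values:)

```python
import math

# Series R-C driven by a sine wave; capacitance from measured reactance.
f = 1000.0    # drive frequency, Hz (hypothetical)
R = 2200.0    # known resistor, ohms (hypothetical)
v_r = 0.5     # voltage across the resistor (hypothetical)
v_c = 0.36    # voltage across the capacitor (hypothetical)

i = v_r / R                        # series current
x_c = v_c / i                      # capacitive reactance
C = 1.0 / (2 * math.pi * f * x_c)  # from Xc = 1/(2*pi*f*C)

print(f"Xc = {x_c:.0f} ohms, C = {C * 1e9:.0f} nF")  # Xc = 1584 ohms, C = 100 nF
```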

I get the wrong answer. I'm using a home-made function generator that puts out an accurate frequency and 1 volt peak-to-peak. I measured 0.541 volts peak.
I have tried this method in college with their equipment, so it must be something I'm doing wrong.
Should I use AC going into the capacitor? What type of capacitor should I use? What is the best way to measure the voltage?

I get stuck at some weird crossroads where I'm doing something wrong but can't figure out what.

Any help would be great. Thanks so far -- most of what is written above is way above my head.

CHRISTMAS SOON!!!!!!

simon

20. ### MrAl Distinguished Member

Jun 17, 2014
2,567
521
Hi,

Actually we don't need an exact result anyway, but my only question is: if you want to use 10 percent and 90 percent and end up dividing by 2.2 (actually ln(9)), then why not use 20 percent and 70 percent, which results in a divisor of nearly 1.0? That way we don't have to remember the 2.2 along with the other stuff, although we do have to remember 20 percent and 70 percent.

I believe you are more or less engaging in reductio ad absurdum
when you state that we can reduce the time further below the 63 percent point and thus get very bad results. We only need consider 63 percent and nothing less. On the other hand, we also only need consider the 90 percent point and not, say, 99.9 percent.

The point Carl is making is very simple: the slope gets less steep the farther we go out from zero, and this makes the amplitude measurement less accurate. This can easily be seen by looking at the slope at the two points 63 percent and 90 percent, and, getting a little sarcastic, we can look farther out and find that it gets very difficult to figure out where, say, the 99 percent point is, because the exponential settles so slowly at larger times.
Slopes for different time values:
00 percent: 1.00
10 percent: 0.90
20 percent: 0.80
63 percent: 0.37
70 percent: 0.30
90 percent: 0.10

We quickly see that the slope is 1 − p, where p is the fractional percentage.
All other things being equal, this shows that the measurement accuracy for a given fixed measurement error decreases with increasing percentage. Since there are two slopes involved in any one measurement, if we take the average of the two slopes involved we get:
00 and 63 percent: 0.685
10 and 90 percent: 0.500
20 and 70 percent: 0.550

So the best average is 00 and 63 percent, although this is only looking at one dimension. We'd need to look at this as a two-dimensional problem to really figure out the best accuracy, because scopes also have a limit on the time-axis accuracy too. If that were perfect then 00 and 01 percent looks good (average almost 1.00), but obviously that isn't a good idea either.
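The slope table and the averages above can be tabulated in a couple of lines (using the 1 − p rule):

```python
# Normalized slope of the rise v/V0 = 1 - exp(-t/tau): d(v/V0)/d(t/tau) = 1 - p
# at the point where the waveform has reached fraction p of its final value.
def slope(p):
    return 1.0 - p

for lo, hi in [(0.00, 0.63), (0.10, 0.90), (0.20, 0.70)]:
    avg = (slope(lo) + slope(hi)) / 2
    print(f"{lo:.0%} and {hi:.0%}: average slope {avg:.3f}")
```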

What I have to agree with, though, is that if we use BOTH methods we get a nice way to double-check our results. The only question I have left then is why not use 20 percent and 70 percent, which results in a factor that is only about 2 percent different from 1.000 (about 1.02). So both 0-to-63 and 20-to-70 result in a factor of nearly 1.