Rms current and average current

Thread Starter

vead

Joined Nov 24, 2011
629
Hello
I know that if the current signal is given, then we can find the RMS current, peak current, and average current.
Please see the attached image (_20160822_133533.JPG).
I have solved for the RMS current and the average current. Is it correct?
 

DGElder

Joined Apr 3, 2016
351
You don't define the period of time over which to calculate the average, so I'll assume it's over infinite time...

The first one is correct, but the average value of a sine wave is zero.
The formula you used, and carelessly populated, is for a fully rectified sine wave.
 
Last edited:

Thread Starter

vead

Joined Nov 24, 2011
629
You don't define the period of time over which to calculate the average, so I'll assume it's over infinite time...

The first two are correct, but the average value of a sine wave is zero.
The formula you used, and carelessly populated, is for a fully rectified sine wave.
But in my textbook the answer is different. I don't understand why my answer is wrong. The book's answer is I_rms = 0.3535 A and I_avg = 3.185 A. Can someone tell me where I went wrong?
 

WBahn

Joined Mar 31, 2012
30,076
Apply the definitions of rms current and average current (instead of just plugging numbers into whatever formula happens to be nibbling at your toes at the moment).

So what is the defining concept behind rms (look for "effective" voltage and current)?
 

Thread Starter

vead

Joined Nov 24, 2011
629
Apply the definitions of rms current and average current (instead of just plugging numbers into whatever formula happens to be nibbling at your toes at the moment).

So what is the defining concept behind rms (look for "effective" voltage and current)?
According to definition, effective current is equal to square root of mean square.
 

DGElder

Joined Apr 3, 2016
351
But in my textbook the answer is different. I don't understand why my answer is wrong. The book's answer is I_rms = 0.3535 A and I_avg = 3.185 A. Can someone tell me where I went wrong?

Either the book is wrong, you have transcribed their answers incorrectly, or you have not given us the problem statement as written in the book.
 

MrAl

Joined Jun 17, 2014
11,494
But in my textbook the answer is different. I don't understand why my answer is wrong. The book's answer is I_rms = 0.3535 A and I_avg = 3.185 A. Can someone tell me where I went wrong?
Hello,

If the peak is 5, then the RMS value is not 0.3535. It looks like you or the book shifted the decimal point over one too many places, as it should be 3.535.

The average current for a sine wave in power applications is the same as that of a full-wave rectifier. That is BECAUSE the sine wave has a certain ability to produce power, so stating that the average is zero makes less sense. However, in some signal applications the arithmetic average may be calculated as zero. If you are in a power electronics course it will most likely be 2/pi times the peak, and this is very, very common. Why people keep insisting that it's always zero just baffles me :)
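The values being debated above can be checked numerically. This is just a sketch in Python; the 5 A peak comes from the problem as quoted in this thread:

```python
# Compare the RMS, arithmetic-average, and rectified-average values
# of a 5 A peak sine wave against the closed-form results discussed above.
import numpy as np

peak = 5.0                                    # peak current in amperes
t = np.linspace(0.0, 2.0 * np.pi, 1_000_000)  # one full period
i = peak * np.sin(t)

i_rms = np.sqrt(np.mean(i ** 2))              # root of the mean of the square
i_avg_signed = np.mean(i)                     # arithmetic average over a period
i_avg_rectified = np.mean(np.abs(i))          # full-wave rectified average

print(i_rms)             # ~3.5355 = peak / sqrt(2)
print(i_avg_signed)      # ~0
print(i_avg_rectified)   # ~3.1831 = 2 * peak / pi
```

This is consistent with the 3.535 A RMS value above, and it suggests the book's 3.185 A average came from the full-wave rectified formula (likely computed with pi rounded to 3.14).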
 

MrAl

Joined Jun 17, 2014
11,494
According to definition, effective current is equal to square root of mean square.
Hi,

And that is what it is for power applications. See the post just before this one.

In power applications the voltage that produces the power produces the same power for the negative half cycles as for the positive half cycles so the average is taken to be non zero.

The RMS value is just the peak divided by the square root of 2.
See if you can get the right values now.
 

DGElder

Joined Apr 3, 2016
351
What is it an example of? Some context would give you an idea of why they expect you to supply the average of the absolute value of the current instead of the average as written. Without some overriding context I stand by my answer: the book is wrong; the average is obviously zero. Whether the average or the average of the absolute value of the current is relevant depends entirely on why and for what purpose you need the number.

Your rms answer is correct.

In the other problem you used rms instead of peak voltage in the formula for the average of a fully rectified sine wave.
 
Last edited:

WBahn

Joined Mar 31, 2012
30,076
According to definition, effective current is equal to square root of mean square.
No, that is not the definition of effective current.

The definition is that the effective value of a current waveform is the DC current that would dissipate the same average power in a purely resistive load as the actual current waveform does over the time period of interest.

Now turn that definition into a matching mathematical relationship. On one side write the equation that evaluates to the average power the actual waveform would deliver to an arbitrary resistance, R. On the other side do the same for the average power a DC current of magnitude Ieff would deliver to that same resistance.
 

MrAl

Joined Jun 17, 2014
11,494
vead said:
According to definition, effective current is equal to square root of mean square.

No, that is not the definition of effective current.

The definition is that the effective value of a current waveform is the DC current that would dissipate the same average power in a purely resistive load as the actual current waveform does over the time period of interest.

Now turn that definition into a matching mathematical relationship. On one side write the equation that evaluates to the average power the actual waveform would deliver to an arbitrary resistance, R. On the other side do the same for the average power a DC current of magnitude Ieff would deliver to that same resistance.
Hi there,

Are you sure about what you are saying, or am I misinterpreting you here?
The 'effective' value is the square root of the mean of the square, which is usually abbreviated RMS, for "Root-Mean-Square".

RMS is the value that produces the same power in a resistor as a DC value would.
So if VacRMS=3 then that produces the same power in a resistor as Vdc=3 would.

Agree?
 

WBahn

Joined Mar 31, 2012
30,076
vead said:
According to definition, effective current is equal to square root of mean square.



Hi there,

Are you sure about what you are saying, or am I misinterpreting you here?
The 'effective' value is the square root of the mean of the square, which is usually abbreviated RMS, for "Root-Mean-Square".

RMS is the value that produces the same power in a resistor as a DC value would.
So if VacRMS=3 then that produces the same power in a resistor as Vdc=3 would.

Agree?
The "effective" value is what counts. It is defined in terms of what it means, not in terms of whatever math operations end up having to be used to get at the corresponding value.

An arbitrary waveform applied to a resistive load has the same "effect", in terms of heating (power dissipation), as some DC waveform would have applied to that same resistor. So whatever the magnitude of that comparable DC waveform turns out to be is the "effective" value of the arbitrary waveform.

So if we have a voltage waveform applied across a resistive load between time T0 and T1, the average power dissipated in that resistor over that time frame is found by integrating the instantaneous power over the time period and dividing by the length of the time period (in other words, how we find the average of just about anything).

The instantaneous power is

\(
p(t) \; = \; \frac{v^2(t)}{R}
\)

So the average power is

\(
P_{avg} \; = \; \frac{1}{T_1 - T_0}\int_{T_0}^{T_1} p(t) \, dt
\)
\(
P_{avg} \; = \; \frac{1}{T_1 - T_0}\int_{T_0}^{T_1} \frac{v^2(t)}{R} \, dt
\)
\(
P_{avg} \; = \; \frac{1}{R \left(T_1 - T_0\right)}\int_{T_0}^{T_1} v^2(t) \, dt
\)

The effective voltage is a DC voltage that has the same average power:

\(
P_{avg} \; = \; \frac{1}{R \left(T_1 - T_0\right)}\int_{T_0}^{T_1} V_{eff}^2 \, dt
\)
\(
P_{avg} \; = \; \frac{V_{eff}^2}{R \left(T_1 - T_0\right)}\int_{T_0}^{T_1} dt
\)
\(
P_{avg} \; = \; \frac{V_{eff}^2}{R \left(T_1 - T_0\right)} \left(T_1 - T_0\right)
\)
\(
P_{avg} \; = \; \frac{V_{eff}^2}{R}
\)

This, of course, should come as no big surprise. Notice that up to this point we haven't seen hide nor hair of any "root", let alone the "root of the mean of the square" -- that is not part of the definition nor the meaning of effective voltage. It arises out of the math that is a consequence of the definition and meaning of effective voltage when we equate these two different expressions for the average power in the resistor.

\(
\frac{V_{eff}^2}{R} \; = \; \frac{1}{R \left(T_1 - T_0\right)}\int_{T_0}^{T_1} v^2(t) \, dt
\)
\(
V_{eff}^2 \; = \; \frac{1}{T_1 - T_0}\int_{T_0}^{T_1} v^2(t) \, dt
\)
\(
V_{eff} \; = \; \sqrt{\frac{1}{T_1 - T_0}\int_{T_0}^{T_1} v^2(t) \, dt}
\)

And NOW we see that the effective voltage happens to be calculated by taking the root of the mean of the square of the voltage. Because of this coincidental set of mathematical equations, we commonly refer to the effective voltage by the name "root-mean-square" or RMS voltage.

The takeaway is that the name "RMS" and the tie to taking a root of the mean of the square of something is NOT part of the definition. The definition is a concept involving the goal of associating a single number with an arbitrary waveform that tells us something about the waveform's ability to dump power into a resistor. The number we chose to associate with the waveform just happened to be the voltage of a DC source that would dump the same amount of power. While this is almost certainly the most reasonable choice, it's not the only one that could have been made. We could have chosen as our reference the amplitude of a sinusoidal waveform, or any of several others. They would have worked just fine and we would have become comfortable thinking in terms of them. But because of the choice that was actually made, which had absolutely nothing to do with "roots" and "means" and "squares", the math just happens to work out so that the effective voltage computation involves that sequence of operations.
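The chain of equations above can also be sanity-checked numerically. The waveform and resistance below are arbitrary illustrative choices, not anything from the original problem:

```python
# Verify that the DC voltage delivering the same average power to a
# resistor as an arbitrary waveform equals sqrt(mean(v^2)), the RMS value.
import numpy as np

R = 10.0                                        # arbitrary resistance, ohms
t = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
v = 3.0 * np.sin(2.0 * np.pi * 5.0 * t) + 1.0   # arbitrary waveform with a DC offset

p_avg_waveform = np.mean(v ** 2 / R)            # average power v(t) delivers to R
v_eff = np.sqrt(R * p_avg_waveform)             # DC voltage with the same P_avg

print(v_eff)    # matches np.sqrt(np.mean(v**2)) exactly
```

Note that the root appears only at the last step, exactly as in the derivation: it falls out of equating the two power expressions, not out of the definition.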
 

MrAl

Joined Jun 17, 2014
11,494
Hi there,

Ok, I think what you are doing here is that you want to go back to the physical definition of 'effective' voltage, then come up with the expression for RMS. That's understandable, of course. What seems strange, though, is to imply that RMS is somehow completely wrong when so many references refer to the RMS value as being the effective value, which can be confusing.
I guess I have to agree that a better description would be:
"The effective value is *often called* the RMS value".
And that is of course because we can use the short form to save time.

The way I understand it, the physical definition comes from an experiment as follows...
We have two identical resistors in identical environments.
We measure the temperature of the two resistors with two identical thermometers.
We apply the unknown voltage to one resistor, and we apply a variable DC voltage to the other resistor.
We then adjust the DC voltage until both resistors measure the same temperature. Once the two temperatures are finally the same, the applied DC voltage has the same value as the effective voltage of the unknown waveform.

So in this experiment we actually use the heat as part of the measurement, and we don't even measure the voltage of the unknown waveform; we just measure the effect of the heat produced. Also, there is no math involved.

Since that is much harder to do, we come up with a shorter form, which of course is to take the root of the mean of the square, and we 'often call' that RMS.

So while I agree that the base definition involves things other than taking a root, in the end we do have to take a root to get the result, and stating that RMS is completely wrong is not the best idea, because then people might think that Vpk/sqrt(2) is wrong too.

Many references indicate that the effective value is the RMS value, because the two are equal. We do that a lot as humans...we take shortcuts to make things easier.

Of course in the end it's up to you how you want to explain it. I do agree though that the base definition is based on heat production.
 

WBahn

Joined Mar 31, 2012
30,076
Hi there,

Ok, I think what you are doing here is that you want to go back to the physical definition of 'effective' voltage, then come up with the expression for RMS.
Which, I thought, was pretty clearly implied when I asked, "So what is the defining concept behind rms (look for "effective" voltage and current)?"

What seems strange, though, is to imply that RMS is somehow completely wrong when so many references refer to the RMS value as being the effective value, which can be confusing.
I never said that calling the effective voltage (or current) the RMS voltage was wrong. I asked the TS what the defining concept behind it was (and specifically pointed him to the term "effective voltage"). He responded that the definition of effective current (note that he did not even claim it was the definition of RMS current) was the square root of the mean square. This is wrong. Worse, it demonstrates that his understanding is based on nothing more than memorized and regurgitated formulas with little to no understanding of the concepts upon which they are based, which is why he is having so much difficulty with problems like this.

The way I understand it, the physical definition comes from an experiment as follows...
We have two identical resistors in identical environments.
We measure the temperature of the two resistors with two identical thermometers.
We apply the unknown voltage to one resistor, and we apply a variable DC voltage to the other resistor.
We then adjust the DC voltage until both resistors measure the same temperature. Once the two temperatures are finally the same, the applied DC voltage has the same value as the effective voltage of the unknown waveform.

So in this experiment we actually use the heat as part of the measurement, and we don't even measure the voltage of the unknown waveform; we just measure the effect of the heat produced. Also, there is no math involved.
This is exactly how old-time true-RMS meters worked -- they measured the temperature of a resistor across which the voltage was applied.

So while I agree that the base definition involves things other than taking a root, in the end we do have to take a root to get the result, and stating that RMS is completely wrong is not the best idea, because then people might think that Vpk/sqrt(2) is wrong too.
Well, that would be a lot better than the more common situation of people thinking that Vrms = Vpk/sqrt(2) is right!

Lots of people think that this is always the case; that it is a definition. But this is merely a relation that is ONLY true for a pure sinusoid and a small handful, relatively speaking, of other waveforms. Give someone a triangle wave, or a square wave, or a sinusoid with a DC offset, or just a junk waveform, and many people will just look for the peak value, divide it by the square root of two, and be satisfied that they have found the RMS value. Why? Because they believe that Vrms = Vpk/sqrt(2) is right. They don't understand its origins, the underlying concepts, or its very limited range of applicability. In short, they don't have a clue what it means -- it's merely a formula that they are determined to throw numbers at like a good little monkey.
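The point is easy to demonstrate numerically; the waveforms below are my own illustrative choices, not from the thread:

```python
# RMS versus peak/sqrt(2) for several unit-peak waveforms: the two
# agree only for the pure sinusoid.
import numpy as np

t = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)  # one period of each

waveforms = {
    "sine":     np.sin(2.0 * np.pi * t),
    "square":   np.sign(np.sin(2.0 * np.pi * t)),
    "triangle": 2.0 * np.abs(2.0 * t - 1.0) - 1.0,
}

rms_values = {}
for name, v in waveforms.items():
    rms_values[name] = np.sqrt(np.mean(v ** 2))
    print(name, rms_values[name], np.max(np.abs(v)) / np.sqrt(2.0))

# sine:     rms ~0.7071, matches pk/sqrt(2)
# square:   rms ~1.0000, while pk/sqrt(2) would wrongly give ~0.7071
# triangle: rms ~0.5774 (= 1/sqrt(3)), again not pk/sqrt(2)
```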

Many references indicate that the effective value is the RMS value, because the two are equal. We do that a lot as humans...we take shortcuts to make things easier.
Which is why I said, "Because of this coincidental set of mathematical equations, we commonly refer to the effective voltage by the name "root-mean-square" or RMS voltage."
 

MrAl

Joined Jun 17, 2014
11,494
Hi again,

I read it as him saying, "The book defines RMS as the effective value," which means the book itself already took that leap in theory, so it's hard to refute unless we go back to the author. I guess the best thing to do is just inform the reader that it's actually defined in terms of physics involving power, and that we can deduce a formula from that. I've seen a lot of references on the web and in books that define RMS as the effective value, so if we tell someone that is wrong, they may think they can't calculate it with the root of the mean of the square; in other words, they think they need a different formula.

We could go on and on about this, I think, so maybe I'll just conclude with the following statement...

"The actual definition of effective voltage comes from heat equivalence in resistances. We can calculate it, though, without going into the deep theory of heat and how it is produced; this formula results in the RMS value, which is the root of the mean of the square of the voltage, and for sine waves we can calculate it with Vrms = Vpk/sqrt(2)."

Side note: I don't think there is any universal agreement as to how heat actually conducts through solids, whether it travels as a wave or not. I haven't read enough on this yet, though.
 

WBahn

Joined Mar 31, 2012
30,076
@The Electrician : It could certainly be argued that it is more rigorous to have an explicit integrand equal to 1 (and I had actually included it originally, but I recalled a couple of times when its "unnecessary presence" confused someone who couldn't understand how it magically appeared, so I took it out). But I don't think its absence makes the integral undefined, since 1 is the identity element for multiplication and so the differential element dt is equal to 1·dt, thus implying that the integrand is equal to 1.

Some people might make a notational argument claiming that the dt doesn't multiply the integrand, but rather is just a delimiter for the integral (and some have then claimed that it isn't even needed as long as the integral isn't followed by anything that isn't a part of it). I reject the notion that the differential doesn't multiply the integrand. First, its very appearance is the result of a limiting process in which a small difference (a delta) that becomes a differential in the limit most certainly DOES multiply an expression that becomes the integrand in the same limiting process. Furthermore, the differential carries units that only make sense if the differential multiplies the integrand. Thus, in my opinion, leaving the integrand as an inferred 1 is sufficiently rigorous.
 

MrAl

Joined Jun 17, 2014
11,494
@The Electrician : It could certainly be argued that it is more rigorous to have an explicit integrand equal to 1 (and I had actually included it originally, but I recalled a couple of times when its "unnecessary presence" confused someone who couldn't understand how it magically appeared, so I took it out). But I don't think its absence makes the integral undefined, since 1 is the identity element for multiplication and so the differential element dt is equal to 1·dt, thus implying that the integrand is equal to 1.

Some people might make a notational argument claiming that the dt doesn't multiply the integrand, but rather is just a delimiter for the integral (and some have then claimed that it isn't even needed as long as the integral isn't followed by anything that isn't a part of it). I reject the notion that the differential doesn't multiply the integrand. First, its very appearance is the result of a limiting process in which a small difference (a delta) that becomes a differential in the limit most certainly DOES multiply an expression that becomes the integrand in the same limiting process. Furthermore, the differential carries units that only make sense if the differential multiplies the integrand. Thus, in my opinion, leaving the integrand as an inferred 1 is sufficiently rigorous.
Hi,

Just curious what post you are replying to.

But yeah, there are different ways of showing what you are talking about with the 'dt' thing. I've always taken it to be more or less multiplying too, and I noticed some authors show the '1' and some don't, so I never worried about it. I think when they want to be super clear they show the '1' just to show that whatever was taken out left the identity element, but if I just see 'dt' I assume that anyway. Not only that, but I have seen authors show that the integral of dt is just 1 rather than 1+K, which is probably even more correct, then later solve for K, which may or may not end up being zero.
Another argument for multiplication is 'u' substitution, where the differential is actually part of the workup for the final integral.

Also we usually write:
integral x dt

where 'integral' is the curly integral symbol, and so it would mean "the integral of x with respect to t", but I have also seen at least one physicist use:
integral dt x

which is still "the integral of x with respect to t".
That's a little unusual, but it makes sense because we like to know what the variable of integration is, and what could be better than seeing it right away rather than later?

The two forms shown together:
integral a+b+c*x^2+d*x^3 dx
integral dx a+b+c*x^2+d*x^3

With his second form we don't have to scan through the whole expression before we know what the variable of integration is. Kinda nice.
 

WBahn

Joined Mar 31, 2012
30,076
Hi,

Just curious what post you are replying to.
It appears that the post I was replying to was deleted by the author after I had started my response. I didn't use Reply because the post had formatting issues that made it a bit unreadable, so I figured it would be better to just tag him. I had to step away from my computer for a couple of hours, and when I got back I saw that my response hadn't posted, so I had to hit post again.

Not only that, but I have seen authors show that the integral of dt is just 1 rather than 1+K, which is probably even more correct, then later solve for K, which may or may not end up being zero.
If they say that the indefinite integral is just 1 then they are wrong. It is not. Period. The arbitrary constant MUST be there in order for it to be correct. If it is a definite integral, then the K does NOT belong there because it cancels itself out. I'm not aware of any exception to this (but would love to either be reminded of one I've forgotten or to learn about one I never knew).

Also we usually write:
integral x dt

where 'integral' is the curly looking integral symbol and so would mean "the integral of x with respect to t", but i have also seen at least one physicist use:
integral dt x

which is still "the integral of x with respect to t".
That's a little unusual, but it makes sense because we like to know what the variable of integration is and what could be better than seeing it right away rather than later.

The two forms shown together:
integral a+b+c*x^2+d*x^3 dx
integral dx a+b+c*x^2+d*x^3

with his second form there we dont have to scan through the whole thing there before we know what the variable of integration is. Kinda nice.
The problem is that the second form is wrong due to order of operations. In fact, neither is correct for that reason, but we can take a much better guess at the first one.

Remember that multiplication has higher precedence than addition. So that the first one is actually:

(integral a) + (b) + (c*(x^2)) + ((d*(x^3)) dx)

which is nonsensical. The second one is actually valid, but not what was intended:

(integral (dx a)) + (b) + (c*(x^2)) + (d*(x^3))

Here I am trying to be consistent with the intended interpretation of integral·dx·f(x).

The big problem with that second one is that there is no way to determine which terms are and are not part of the integrand. We have to guess. That's bad -- math shouldn't be about guessing.

In the case of the first one, we can use the integral sign and the dx as delimiters and then assume that everything between them is a function with an implicit pair of parentheses around it. But I find that to be sloppy and inviting disaster. So the first one should be written as

integral (a+b+c*x^2+d*x^3) dx

Similarly, the second form should be:

integral dx (a+b+c*x^2+d*x^3)
 

MrAl

Joined Jun 17, 2014
11,494
Hello again,

Well, the way I understood it was that if it doesn't make any sense, then it can't be that way. So that leaves the only interpretation of:
integral a+b*x dx

as:
integral (a+b*x) dx

I thought that was standard practice: the 'integral' symbol and the 'dx' enclose the integrand just like parens would.

As for the second form, we invoke lex parsimoniae, so the only interpretation of:
integral dx a+b*x

is:
integral (a+b*x) dx

or else we would have used parens like:
integral (dx a) +b*x

or:
(integral dx a) +b*x

I realize some people won't like this, and it immediately invokes students' questions about the form, but the 'inventor' of this form was none other than the same Stanford physicist who proved Stephen Hawking wrong about one of the properties of a black hole :)
 
Last edited: