Integrator Time Invariant?

Thread Starter

jegues

Joined Sep 13, 2010
733
NOTE: For both a) and b) assume x(t) has finite length.

a) Is the following system time invariant?

\(y(t) = \int_{-a}^{\infty}x(\tau)d\tau\)

b) Is the following system time invariant?

\(y(t) = \int_{-\infty}^{\infty}x(\tau)d\tau\)

My attempt at the solution,

a)

Shifting the output gives,

\(y(t-T) = \int_{-a}^{\infty}x(\tau-T)d\tau\)

If the input is delayed,

\(x_{d}(t) = x(t-T) \rightarrow y_{d}(t) = \int_{-a}^{\infty}x_{d}(\tau)d\tau =\int_{-a}^{\infty}x(\tau-T)d\tau = y(t-T) \)

Therefore it is time invariant.

b) I can't see how anything changes from case a) to case b). Does anything change? Why?

Thanks again!
 

WBahn

Joined Mar 31, 2012
30,072
Think about that first case again. What if x(t) is some signal that is nonzero for some amount of time after t=-a? Now what if x(t) is shifted in time so that it is nonzero only for some amount of time before t=-a? Do you expect y(t) to be the same, except for a time shift, in both cases?
 

Thread Starter

jegues

Joined Sep 13, 2010
733
Think about that first case again. What if x(t) is some signal that is nonzero for some amount of time after t=-a?
Then y(t) will be the nonzero area under the curve x(t) for that given amount of time after t=-a.

Now what if x(t) is shifted in time so that it is nonzero only for some amount of time before t=-a?
Then y(t) will be zero.

Do you expect y(t) to be the same, except for a time shift, in both cases?
According to the conclusions I drew above,

No for case a), and yes for case b) because I can't ever shift x(t) back far enough such that y(t) is zero.

Is this correct?
 

WBahn

Joined Mar 31, 2012
30,072
Yep, you are correct and your reasoning is correct.

Hence, (a) is not time-invariant while (b) is. Now you just need to carefully apply the test to show that mathematically.

One thing I think you were a bit sloppy on in the first post which, if you are not careful, can lead you to a wrong conclusion:

You have:

y(t) = f(x(t))

i.e., the signal y(t) is the result of some system, f(), operating on some signal x(t).

The question is whether f() is time-invariant. If it is, then:

y(t-T) = f(x(t-T))

WHEN

y(t) = f(x(t))

So y(t) and y(t-T) are not generic labels -- they have a definite relation to each other. Hence, in general, y(t-T) is NOT whatever f(x(t-T)) happens to turn out to be. Instead, y(t-T) is defined to be a time-shifted copy of f(x(t)).

So it is better to say:

y(t) = f(x(t))

w(t) = f(x(t-T))

The system f() is time-invariant if and only if

w(t) = y(t-T)
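Here's a quick numerical version of that test on two toy discrete-time systems. The systems, the pulse, and the helper names are my own illustrations, not part of the problem statement:

```python
import numpy as np

def delay(s, T):
    """Delay a discrete signal by T >= 1 samples, zero-filling from the left."""
    out = np.zeros_like(s)
    out[T:] = s[:-T]
    return out

def passes_ti_test(system, x, T):
    """Compare w(t) = y(t-T) against z(t) = f(x(t-T))."""
    w = delay(system(x), T)   # shift the output of the original input
    z = system(delay(x, T))   # feed the shifted input through the system
    return bool(np.allclose(w, z))

n = np.arange(64)
x = np.where((n >= 10) & (n < 20), 1.0, 0.0)  # a finite-length pulse

amplifier = lambda s: 2.0 * s   # y[n] = 2 x[n] -- time-invariant
modulator = lambda s: n * s     # y[n] = n x[n] -- time-variant

print(passes_ti_test(amplifier, x, 5))  # True
print(passes_ti_test(modulator, x, 5))  # False
```

The amplifier passes because scaling commutes with a shift; the modulator fails because w[n] = (n-T) x[n-T] while z[n] = n x[n-T].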
 

Thread Starter

jegues

Joined Sep 13, 2010
733
Yep, you are correct and your reasoning is correct.

Hence, (a) is not time-invariant while (b) is. Now you just need to carefully apply the test to show that mathematically.

One thing I think you were a bit sloppy on in the first post which, if you are not careful, can lead you to a wrong conclusion:

You have:

y(t) = f(x(t))

i.e., the signal y(t) is the result of some system, f(), operating on some signal x(t).

The question is whether f() is time-invariant. If it is, then:

y(t-T) = f(x(t-T))

WHEN

y(t) = f(x(t))

So y(t) and y(t-T) are not generic labels -- they have a definite relation to each other. Hence, in general, y(t-T) is NOT whatever f(x(t-T)) happens to turn out to be. Instead, y(t-T) is defined to be a time-shifted copy of f(x(t)).

So it is better to say:

y(t) = f(x(t))

w(t) = f(x(t-T))

The system f() is time-invariant if and only if

w(t) = y(t-T)
So in my mathematical proof above where did I make my mistake? How would you show the mathematical proof for part a) to show that it is indeed not time invariant?
 

WBahn

Joined Mar 31, 2012
30,072
So in my mathematical proof above where did I make my mistake? How would you show the mathematical proof for part a) to show that it is indeed not time invariant?
Because you started out saying that shifting the output gives the function applied to the shifted input. That's only true if the system is time-invariant, which you can't just claim when that is what you are trying to determine. What that equation is saying is that you are defining y(t-T) to be whatever the output of the system is when the input is delayed by T. Well, then you will always get that to end up being y(t-T) because that is what you just defined y(t-T) to be!

You had part of the right idea on the next line, but weren't careful enough when making the change of variables (the variable of integration). If you are careful, you will see that the limits of integration depend on T.
 

Thread Starter

jegues

Joined Sep 13, 2010
733
Because you started out saying that shifting the output gives the function applied to the shifted input. That's only true if the system is time-invariant, which you can't just claim when that is what you are trying to determine. What that equation is saying is that you are defining y(t-T) to be whatever the output of the system is when the input is delayed by T. Well, then you will always get that to end up being y(t-T) because that is what you just defined y(t-T) to be!
I don't really get what you are trying to say here.



You are basically telling me this line is incorrect, right? What would be the correct way to write it?

You had part of the right idea on the next line, but weren't careful enough when making the change of variables (the variable of integration). If you are careful, you will see that the limits of integration depend on T.
Can you show me what you mean?
 

WBahn

Joined Mar 31, 2012
30,072
I don't really get what you are trying to say here.



You are basically telling me this line is incorrect, right?
Correct, because it has no intrinsic relation to the y(t) in the problem statement.

What would be the correct way to write it?
With a different symbol. As I said in my earlier post, call it w(t). Call it anything but a delayed version of the previously defined y(t) because you don't know whether it is or isn't.


Can you show me what you mean?
You can't integrate x(τ-T)dτ. You need to perform a change of variable to get something of the form x(κ)dκ. But when you change the variable of integration, you also must change the limits accordingly.
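Spelled out, with the substitution \(\kappa = \tau - T\) (so \(d\kappa = d\tau\), and \(\tau = -a\) maps to \(\kappa = -a - T\)):

\(
\int_{-a}^{\infty} x(\tau - T)\, d\tau = \int_{-a-T}^{\infty} x(\kappa)\, d\kappa
\)

Note how the lower limit now carries the T.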
 

Thread Starter

jegues

Joined Sep 13, 2010
733
Correct, because it has no intrinsic relation to the y(t) in the problem statement.



With a different symbol. As I said in my earlier post, call it w(t). Call it anything but a delayed version of the previously defined y(t) because you don't know whether it is or isn't.




You can't integrate x(τ-T)dτ. You need to perform a change of variable to get something of the form x(κ)dκ. But when you change the variable of integration, you also must change the limits accordingly.
I'm still not seeing it...

Shifting the output, call it w(t)

\(w(t) = \int_{-a-T}^{\infty}x(k)dk\)

Shifting the input,

\(x_{d}(t) = x(t-T) \rightarrow y_{d}(t) = \int_{-a-T}^{\infty}x(k)dk\)

Can you write it in LaTeX? I really just want to see this and get it over with so it's clear in my head...
 

WBahn

Joined Mar 31, 2012
30,072
Given

\(
y(t) = \int_{-a}^\infty x(\tau) d \tau
\)

that's what y(t) is. It isn't equal to anything else unless we show that it is equal to it.

If we shift the output, we have

\(
w(t) = y(t-T)
\)

where y(t) is defined above.

If we shift the input, we have

\(
z(t) = \int_{-a}^\infty x(\tau-T) d \tau
\)

If the system is time invariant, then

\(
w(t) = z(t)
\)

If these aren't equal (for all values of T), then the system is time variant.
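A numerical sketch of this comparison. The grid, the pulse x, and the values a = 1, T = -2 are arbitrary choices of mine, not from the problem:

```python
import numpy as np

a, T = 1.0, -2.0
tau = np.linspace(-10.0, 10.0, 20001)
dt = tau[1] - tau[0]

def x(t):
    """A finite-length unit pulse on (-0.5, 0.5)."""
    return np.where(np.abs(t) < 0.5, 1.0, 0.0)

def integral_from(lo, f):
    """Riemann-sum approximation of the integral of f from lo to +infinity."""
    mask = tau >= lo
    return f(tau[mask]).sum() * dt

# Case (a): lower limit is -a.
w_a = integral_from(-a, x)                   # w(t) = y(t-T): the same constant as y(t)
z_a = integral_from(-a, lambda t: x(t - T))  # z(t): system applied to the shifted input

# Case (b): lower limit is -infinity (the left edge of the grid here).
w_b = integral_from(tau[0], x)
z_b = integral_from(tau[0], lambda t: x(t - T))

print(w_a, z_a)  # differ: with T = -2 the shifted pulse lies entirely below -a
print(w_b, z_b)  # agree: the full pulse area is captured either way
```

With T = -2 the shifted pulse sits on (-2.5, -1.5), below -a = -1, so z(t) = 0 while w(t) is the full pulse area: case (a) fails the test. In case (b) no shift can move the pulse outside the integration range, so w(t) = z(t).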
 