Integrator Time Invariant?

Discussion in 'Homework Help' started by jegues, Oct 11, 2012.

  1. jegues

    Thread Starter Well-Known Member

    Sep 13, 2010
    735
    43
    NOTE: For both a) and b) assume x(t) has finite length.

    a) Is the following system time invariant?

    y(t) = \int_{-a}^{\infty}x(\tau)d\tau

    b) Is the following system time invariant?

    y(t) = \int_{-\infty}^{\infty}x(\tau)d\tau

    My attempt at the solution,

    a)

    Shifting the output gives,

    y(t-T) = \int_{-a}^{\infty}x(\tau-T)d\tau

    If the input is delayed,

    x_{d}(t) = x(t-T) \rightarrow y_{d}(t) = \int_{-a}^{\infty}x_{d}(\tau)d\tau =\int_{-a}^{\infty}x(\tau-T)d\tau = y(t-T)

    Therefore, it is time invariant.

    b) I can't see how anything changes from case a) to case b). Does anything change? Why?

    Thanks again!
     
  2. WBahn

    Moderator

    Mar 31, 2012
    17,715
    4,788
    Think about that first case again. What if x(t) is some signal that is nonzero for some amount of time after t = -a? Now what if x(t) is shifted in time so that it is nonzero only for some amount of time before t = -a? Do you expect y(t) to be the same, except for a time shift, in both cases?
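
    A quick numerical sketch of this intuition (not from the thread; the value of a, the pulse, and the shift of 5 are all made up for illustration): approximate y = \int_{-a}^{\infty}x(\tau)d\tau for a pulse sitting after t = -a, then for the same pulse shifted to before t = -a.

    ```python
    import numpy as np

    a = 1.0                                   # hypothetical lower limit for the system in (a)
    tau = np.linspace(-10.0, 10.0, 20001)     # integration grid
    dtau = tau[1] - tau[0]

    def pulse(t):
        """Unit-height pulse on [0, 2] -- a finite-length x(t) lying after t = -a."""
        return ((t >= 0.0) & (t <= 2.0)).astype(float)

    def system_a(x_vals):
        """Riemann-sum approximation of the integral of x(tau) from -a to infinity."""
        return np.sum(x_vals[tau >= -a]) * dtau

    y_pulse   = system_a(pulse(tau))        # pulse lies inside [-a, inf): area is about 2
    y_shifted = system_a(pulse(tau + 5.0))  # pulse moved to [-5, -3], before -a: area is 0

    print(y_pulse, y_shifted)
    ```

    Advancing the input by 5 units changes the output from roughly 2 to exactly 0, which is not just a time-shifted copy of the original output.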
     
  3. jegues

    Thread Starter Well-Known Member

    Then y(t) will be the nonzero area under the curve x(t) for that given amount of time after t = -a.

    Then y(t) will be zero.

    According to the conclusions I drew above,

    No for case a), and yes for case b) because I can't ever shift x(t) back far enough such that y(t) is zero.

    Is this correct?
     
  4. WBahn

    Moderator

    Yep, you are correct and your reasoning is correct.

    Hence, (a) is not time-invariant while (b) is. Now you just need to carefully apply the test to show that mathematically.

    There is one thing I think you were a bit sloppy about in the first post which, if you are not careful, can lead you to a wrong conclusion.

    You have:

    y(t) = f(x(t))

    i.e., the signal y(t) is the result of some system, f(), operating on some signal x(t).

    The question is whether f() is time-invariant. If it is, then:

    y(t-T) = f(x(t-T))

    WHEN

    y(t) = f(x(t))

    So y(t) and y(t-T) are not generic labels -- they have a definite relation to each other. Hence, in general, y(t-T) is NOT whatever f(x(t-T)) happens to turn out to be. Instead, y(t-T) is defined to be a time-shifted copy of f(x(t)).

    So it is better to say:

    y(t) = f(x(t))

    w(t) = f(x(t-T))

    The system f() is time-invariant if and only if

    w(t) = y(t-T)
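
    A concrete (made-up) numerical check of this test on the system from (a): compute y(t-T), which here equals y(t) for every T since the output contains no t at all, and w(t) = f(x(t-T)), then compare. The Gaussian input and the values of a and T are assumptions chosen only for illustration.

    ```python
    import numpy as np

    a, T = 1.0, 5.0                        # hypothetical system constant and delay
    tau = np.linspace(-20.0, 20.0, 40001)
    dtau = tau[1] - tau[0]

    def x(t):
        return np.exp(-t**2)               # a made-up finite-energy input

    def f(x_func):
        """System (a): integrate x from -a to infinity; note there is no t in the result."""
        return np.sum(x_func(tau[tau >= -a])) * dtau

    y = f(x)                   # y(t) is constant, so y(t - T) = y for every T
    w = f(lambda t: x(t - T))  # w(t) = f(x(t - T)): the response to the delayed input

    print(y, w)                # the two differ, so the system is not time invariant
    ```

    Delaying the input slides the Gaussian entirely to the right of -a, so w picks up essentially the whole area under x while y loses the tail below -a; w(t) != y(t-T), failing the test.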
     
  5. jegues

    Thread Starter Well-Known Member

    So in my mathematical proof above where did I make my mistake? How would you show the mathematical proof for part a) to show that it is indeed not time invariant?
     
  6. WBahn

    Moderator

    Because you started out saying that shifting the output gives the function applied to the shifted input. That's only true if the system is time-invariant, which you can't just assume when that is exactly what you are trying to determine. What that equation really says is that you are defining y(t-T) to be whatever the output of the system is when the input is delayed by T. Then of course the result ends up being y(t-T), because that is what you just defined y(t-T) to be!

    You had part of the right idea on the next line, but weren't careful enough when making the change of variables (the variable of integration). If you are careful, you will see that the limits of integration depend on T.
     
  7. jegues

    Thread Starter Well-Known Member

    I don't really get what you are trying to say here.

    [Attached image: the line y(t-T) = \int_{-a}^{\infty}x(\tau-T)d\tau from the first post]

    You are basically telling me this line is incorrect, right? What would be the correct way to write it?

    Can you show me what you mean?
     
  8. WBahn

    Moderator

    Correct, because it has no intrinsic relation to the y(t) in the problem statement.

    With a different symbol. As I said in my earlier post, call it w(t). Call it anything but a delayed version of the previously defined y(t) because you don't know whether it is or isn't.


    You can't integrate x(τ-T)dτ. You need to perform a change of variable to get something of the form x(κ)dκ. But when you change the variable of integration, you also must change the limits accordingly.
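
    Spelling out that substitution for the system in (a) (a worked step, not from the thread): let \kappa = \tau - T, so d\kappa = d\tau, and the lower limit \tau = -a becomes \kappa = -a - T:

    \int_{-a}^{\infty}x(\tau-T)d\tau = \int_{-a-T}^{\infty}x(\kappa)d\kappa

    The lower limit now depends on T, whereas y(t-T) = \int_{-a}^{\infty}x(\tau)d\tau does not, so the two need not be equal.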
     
  9. jegues

    Thread Starter Well-Known Member

    I'm still not seeing it...

    Shifting the output, call it w(t)

    w(t) = \int_{-a-T}^{\infty}x(k)dk

    Shifting the input,

    x_{d}(t) = x(t-T) \rightarrow y_{d}(t) = \int_{-a-T}^{\infty}x(k)dk

    Can you write in latex? I really just want to see this and get it over with so it's clear in my head...
     
  10. WBahn

    Moderator

    Given

    y(t) = \int_{-a}^\infty x(\tau) d\tau

    that's what y(t) is. It isn't equal to anything else unless we show that it is equal to it.

    If we shift the output, we have

    w(t) = y(t-T)

    where y(t) is defined above.

    If we shift the input, we have

    z(t) = \int_{-a}^\infty x(\tau-T) d\tau

    If the system is time invariant, then

    w(t) = z(t)

    If these aren't equal (for all values of T), then the system is time variant.
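
    For completeness, the same numerical test applied to the system in (b) (again a made-up sketch: the wide finite grid stands in for the infinite limits, and the input and shifts are arbitrary): here the integral runs over the whole real line, so delaying the input only slides the area around without changing it.

    ```python
    import numpy as np

    tau = np.linspace(-50.0, 50.0, 100001)   # wide grid standing in for (-inf, inf)
    dtau = tau[1] - tau[0]

    def x(t):
        return np.exp(-(t - 2.0)**2)         # a made-up finite-energy input

    def f_b(x_func):
        """System (b): integrate x(tau) over the whole (truncated) real line."""
        return np.sum(x_func(tau)) * dtau

    # the output for several delays T; all three agree, as expected for (b)
    outputs = [f_b(lambda t, T=T: x(t - T)) for T in (0.0, 3.0, -7.0)]
    print(outputs)
    ```

    The output is the same constant for every T, so w(t) = z(t) holds and the system in (b) passes the time-invariance test, matching the conclusion above.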
     