# Laplace Transforms

#### Vikram50517

Joined Jan 4, 2020
33
Hey all! I have a simple question: why do we analyze a system's ROC in the s-domain, and how does it impact our time signal?

#### Papabravo

Joined Feb 24, 2006
15,765
Hey all! I have a simple question: why do we analyze a system's ROC in the s-domain, and how does it impact our time signal?
I don't know what ROC stands for. The general answer to your question about Laplace transforms is that they transform an ordinary differential equation in the time domain into an algebraic equation in the frequency domain. It is established that there is a 1:1 correspondence between time-domain solutions and frequency-domain solutions. In practice this means that you can find all possible solutions by either method, and they are unique; that is, there are no other solutions to be found.

Now the \$64,000 question: would you rather solve an algebraic equation or an ordinary differential equation? I have a pretty strong opinion on this one.
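As a small illustration of that trade (a sketch in Python with SymPy; the particular ODE is my own example, not from the thread), the same first-order problem can be solved either by inverting an algebraic expression or by calling an ODE solver directly:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# First-order ODE: y'(t) + 3*y(t) = 0 with y(0) = 1.
# Transforming term by term gives the algebraic equation
#   s*Y(s) - y(0) + 3*Y(s) = 0   =>   Y(s) = 1/(s + 3).
Y = 1 / (s + 3)

# Invert the transform to recover the time-domain solution.
y_from_laplace = sp.inverse_laplace_transform(Y, s, t)

# Solve the same ODE directly for comparison.
y = sp.Function('y')
y_direct = sp.dsolve(sp.Eq(y(t).diff(t) + 3 * y(t), 0), y(t), ics={y(0): 1}).rhs

print(y_from_laplace)  # exp(-3*t), possibly times a Heaviside step depending on version
print(y_direct)        # exp(-3*t)
```

Both routes land on the same unique solution, which is the 1:1 correspondence in action.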

#### Vikram50517

Joined Jan 4, 2020
33
Oh, my bad! ROC refers to Region of Convergence. Why do we analyze it, and what effect does it have on the time-domain signal? Say we take a signal f(t) = e^(at), whose Laplace transform is 1/(s - a); its ROC must be Re(s) > a. What does that imply, and how does it impact the time-domain signal?

#### Papabravo

Joined Feb 24, 2006
15,765
That signal diverges as t → ∞ (for a > 0). In the frequency domain there is a pole at s = a, where the Laplace transform diverges to infinity. The Laplace transform converges to a finite value at every point in the complex plane except s = a. As s approaches the point at ∞ on the complex plane, this particular Laplace transform approaches zero.
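The convergence boundary can be checked numerically: truncate the defining integral at a finite upper limit T and watch it either settle (inside the ROC) or blow up (outside). A quick sketch assuming NumPy; the values a = 1 and T = 50 are arbitrary choices of mine:

```python
import numpy as np

# Truncated Laplace integral of f(t) = e^{a t}:
#   F(s) ~ integral_0^T e^{a t} e^{-s t} dt.
# For Re(s) > a the tail dies off and the integral settles at 1/(s - a);
# for Re(s) <= a the integrand grows (or stays constant) and the
# truncated integral keeps growing as T increases.
def truncated_laplace(a, s, T=50.0, n=200_000):
    t = np.linspace(0.0, T, n)
    integrand = np.exp((a - s) * t)
    # trapezoidal rule over the sample points
    return np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t))

a = 1.0
print(truncated_laplace(a, s=2.0))   # ~ 1/(2 - 1) = 1   (inside the ROC)
print(truncated_laplace(a, s=0.5))   # astronomically large (outside the ROC)
```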

#### Vikram50517

Joined Jan 4, 2020
33
Yes, you got my point. Pole and zero analysis is enough to obtain stability information, so why do we use a concept called the Region of Convergence?

#### Papabravo

Joined Feb 24, 2006
15,765
Yes, you got my point. Pole and zero analysis is enough to obtain stability information, so why do we use a concept called the Region of Convergence?
In complex analysis there is the concept of an "analytic function". The following explanation is from Wikipedia:

In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions, categories that are similar in some ways, but different in others. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not hold generally for real analytic functions. A function is analytic if and only if its Taylor series about x₀ converges to the function in some neighborhood for every x₀ in its domain.

What this says is that analytic functions, represented by a convergent power series, can have isolated singularities (poles), because the power series converges to the value of the rational function in some neighborhood around, but excluding, the pole. The union of all such neighborhoods is the region of convergence. It assures us that we know how the function behaves almost everywhere.
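To make that "neighborhood of convergence" concrete, here is a small numeric check (my own example, plain Python): the power series for 1/(s - a) about a point s₀ converges exactly inside the disc whose radius is the distance from s₀ to the pole.

```python
# Power series for F(s) = 1/(s - a) expanded about s0:
#   1/(s - a) = sum_{n>=0} (-1)^n (s - s0)^n / (s0 - a)^(n+1),
# valid for |s - s0| < |s0 - a|: the disc of convergence reaches
# exactly to the pole at s = a.
def partial_sum(a, s0, s, terms):
    return sum((-1) ** n * (s - s0) ** n / (s0 - a) ** (n + 1)
               for n in range(terms))

a, s0 = 1.0, 3.0      # pole at s = 1, expansion point s0 = 3, radius 2
s = 3.5               # |s - s0| = 0.5 < 2, so the series converges
print(partial_sum(a, s0, s, 60), 1 / (s - a))   # both ~ 0.4
```

Moving s outside the disc (say s = 6, so |s - s0| = 3 > 2) makes the partial sums diverge instead.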

#### Papabravo

Joined Feb 24, 2006
15,765
I was confused about what ROC referred to. I thought it referred to the result of the Laplace transform, or the transfer function. It actually refers to the integrand: in order for the integral to exist, the integrand e^(-st)·f(t) must decay. Watch the following video with a 3D picture of the Laplace transform and the region of convergence. The ROC is tied to the choice of the real part of the Laplace variable s.

For all the transfer functions with bounded outputs that you will ever see (besides all the ones that you won't), the jω-axis lies inside the region of convergence.
Sorry for the confusion.
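A compact way to state that connection in code (a sketch assuming NumPy; `is_bibo_stable` is my own helper name): for a causal rational H(s) the ROC is the half-plane to the right of the rightmost pole, so it contains the jω-axis exactly when every pole has a negative real part.

```python
import numpy as np

# BIBO-stability check for a rational transfer function H(s) = num/den.
# A causal LTI system is BIBO stable when every pole has a negative real
# part, which is the same as saying the ROC (the half-plane right of the
# rightmost pole) contains the jw-axis.
def is_bibo_stable(den_coeffs):
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0))

print(is_bibo_stable([1, 3]))      # H(s) = 1/(s + 3)    -> True
print(is_bibo_stable([1, 0, 4]))   # H(s) = 1/(s^2 + 4)  -> False (poles on the jw-axis)
print(is_bibo_stable([1, -2]))     # H(s) = 1/(s - 2)    -> False
```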

Last edited:

#### MrAl

Joined Jun 17, 2014
8,147
Hey all! I have a simple question: why do we analyze a system's ROC in the s-domain, and how does it impact our time signal?
Hello there,

The ROC is a general statement that refers to the range of values over which a function gives meaningful results. If you go outside that range, the function gives unrealistic results, unless perhaps at least part of the response is inside that range, and I think it has to be the dominant part.
Poles and zeros tell you something about the response too, but that is a somewhat different view because it is more specific: it tells you something about one particular system.
For example, in the discrete-time (z-domain) case, a causal system's ROC is the exterior of a circle, and the system is stable when that ROC includes the unit circle. We are able to state that even without referring to any specific system.

You should really look around the web for some examples.
I also recommend looking at the root-locus procedure if you are interested in stability.
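The discrete-time version of this can be checked directly (my own toy numbers): h[n] = aⁿu[n] has ROC |z| > |a|, and absolute summability, hence BIBO stability, holds exactly when that ROC contains the unit circle, i.e. |a| < 1.

```python
# h[n] = a^n u[n] has z-transform 1/(1 - a/z) with ROC |z| > |a|.
# BIBO stability needs sum |h[n]| to be finite, i.e. |a| < 1, which is
# the same as the ROC containing the unit circle |z| = 1.
def abs_sum(a, N=200):
    return sum(abs(a) ** n for n in range(N))

print(abs_sum(0.9))   # ~ 1/(1 - 0.9) = 10: stable
print(abs_sum(1.1))   # huge and still growing with N: unstable
```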

#### Vikram50517

Joined Jan 4, 2020
33
Hello there,

The ROC is a general statement that refers to the range of values over which a function gives meaningful results. If you go outside that range, the function gives unrealistic results, unless perhaps at least part of the response is inside that range, and I think it has to be the dominant part.
Poles and zeros tell you something about the response too, but that is a somewhat different view because it is more specific: it tells you something about one particular system.
For example, in the discrete-time (z-domain) case, a causal system's ROC is the exterior of a circle, and the system is stable when that ROC includes the unit circle. We are able to state that even without referring to any specific system.

You should really look around the web for some examples.
I also recommend looking at the root-locus procedure if you are interested in stability.
Hi, I want you to elucidate more on this particular point of yours inside the quote: "Poles and zeros tell you something about the response too, but that is a somewhat different view because it is more specific: it tells you something about one particular system." From that point, what I conclude is that poles and zeros are specific to a system; then what does the ROC do?

#### Vikram50517

Joined Jan 4, 2020
33
Or, for convenience, take a system whose transfer function is H(s) = 1/(s + 3); its ROC is Re(s) > -3. OK, fine: now how does that information reflect into the time domain?

#### Vikram50517

Joined Jan 4, 2020
33
What does it say about the system, and about its inputs and outputs?

#### Papabravo

Joined Feb 24, 2006
15,765
What does it say about the system, and about its inputs and outputs?
It tells you that you have a first-order linear system with an exponential decay. As you move along the jω-axis from very small frequencies to very large frequencies, the system response as a function of frequency is monotonically decreasing and approaches 0 at the point 0 + j∞.

The magnitude of the transfer function at 0 + j0 is 1/3.
As the value of ω increases, the magnitude decreases.
So it is a low-pass filter with a bounded output for all bounded inputs.
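That magnitude behavior is easy to confirm by evaluating H along the jω-axis (a quick NumPy sketch; the frequency points are arbitrary):

```python
import numpy as np

# Evaluate H(s) = 1/(s + 3) along the jw-axis: |H(j0)| = 1/3 and the
# magnitude falls monotonically toward 0 as w grows, i.e. a low-pass shape.
def H(w):
    return 1.0 / (1j * w + 3.0)

w = np.array([0.0, 1.0, 3.0, 10.0, 100.0])
mags = np.abs(H(w))
print(mags)                          # starts at 1/3, then strictly decreasing
print(bool(np.all(np.diff(mags) < 0)))   # True
```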

Last edited:

#### MrAl

Joined Jun 17, 2014
8,147
Or, for convenience, take a system whose transfer function is H(s) = 1/(s + 3); its ROC is Re(s) > -3. OK, fine: now how does that information reflect into the time domain?
Here is a graphic of the time-domain responses based on the pole positions in the complex plane.
First note that the thin horizontal lines are the zero-amplitude lines.
Then note that the response at the big red dot is the response of your example, if that red dot is at position -3 on the real axis.

Note that anything on the real axis decays left of (0,0) and grows right of (0,0), and anything above the real axis has the same properties except that it is also oscillatory, having a sinusoidal component. Anything above the real axis also has a mirror point below the real axis, because poles come in complex-conjugate pairs. As you go higher above the real axis, the sinusoidal frequency increases.
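That map from pole position to time response can be sampled numerically (a sketch with NumPy; the pole values other than -3 are my own picks):

```python
import numpy as np

# Each pole p contributes a term e^{p t} to the impulse response
# (together with its conjugate partner when p is complex), so
# Re(p) < 0 means decay, Re(p) > 0 means growth, and Im(p) != 0
# adds oscillation at frequency |Im(p)|.
t = np.linspace(0.0, 5.0, 1000)

decaying     = np.exp(-3.0 * t)                    # pole at s = -3 (the red dot)
growing      = np.exp(+1.0 * t)                    # pole at s = +1
damped_oscil = np.exp(-1.0 * t) * np.cos(5.0 * t)  # conjugate pair at -1 +/- 5j

print(decaying[-1] < decaying[0])   # True: decays
print(growing[-1] > growing[0])     # True: grows
```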

#### DarthVolta

Joined Jan 27, 2015
521
The more I learn EE, the more I realize I like to learn the math first, in general, because otherwise I'm banging my head wondering where stuff comes from, and it feels too close to cheating. It's like standing on a bridge that I'm afraid is about to collapse.

I've been messing around with LTs again, trying to figure out some of the equations. Calculus-wise it's no problem in R (for common functions), so I'm another few steps closer to doing something useful with frequency-domain functions in EE. So far I'm mainly using phasors and single frequencies still, though. I can find basic poles/zeros, but only for the most basic circuits.

My current ODE textbook must have all this, though, so I just have to keep plowing through it.

Last edited:

#### MrAl

Joined Jun 17, 2014
8,147
The more I learn EE, the more I realize I like to learn the math first, in general, because otherwise I'm banging my head wondering where stuff comes from, and it feels too close to cheating. It's like standing on a bridge that I'm afraid is about to collapse.

I've been messing around with LTs again, trying to figure out some of the equations. Calculus-wise it's no problem in R (for common functions), so I'm another few steps closer to doing something useful with frequency-domain functions in EE. So far I'm mainly using phasors and single frequencies still, though. I can find basic poles/zeros, but only for the most basic circuits.

My current ODE textbook must have all this, though, so I just have to keep plowing through it.
Oh yes, math is the root of all engineering and of science in general. The math just goes on, and the more you know, the better an understanding of electrical engineering you will gain.

If you are doing frequency-domain analysis you can often replace the Laplace variable s with jω, where j is the imaginary unit and ω is the angular frequency. This allows you to use complex arithmetic to calculate the response of a wide range of circuits that are driven with an AC source. It is really amazing how much this simplifies things.
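For instance, for a simple RC low-pass filter the substitution s → jω turns the whole AC analysis into complex arithmetic (a sketch; the component values are arbitrary choices of mine):

```python
import numpy as np

# RC low-pass filter: H(s) = 1/(1 + s*R*C).  Substituting s = j*w gives
# the steady-state AC response directly via complex arithmetic.
R, C = 1_000.0, 1e-6        # 1 kOhm, 1 uF  ->  cutoff at 1/(R*C) = 1000 rad/s
def H(w):
    return 1.0 / (1.0 + 1j * w * R * C)

w_c = 1.0 / (R * C)
print(abs(H(w_c)))                    # 1/sqrt(2) ~ 0.707 at the cutoff
print(np.degrees(np.angle(H(w_c))))   # -45 degrees of phase lag
```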

ODEs are very handy. Wait until you get into PDEs.

#### bogosort

Joined Sep 24, 2011
625
The more I learn EE, the more I realize I like to learn the math first, in general, because otherwise I'm banging my head wondering where stuff comes from, and it feels too close to cheating. It's like standing on a bridge that I'm afraid is about to collapse.
I was the same way. Some students prefer to learn the math along with the engineering concepts, because it grounds the math in something tangible, or at least relevant to them. But to me the equations were pure sorcery until I understood the math behind them. The Laplace transform, in particular, was a bewildering beast until I took the effort to learn where it came from and what it actually does. Then it all made perfect sense.

#### visionofast

Joined Oct 17, 2018
82
To completely understand the Laplace transform, you should first grasp the meaning of correlation between two signals in statistics.
The Laplace transform formula comes from the correlation formula in statistics; with it, you can find how similar one signal or function is to another, first in terms of variance and then in terms of frequency as well.
The main criterion signal for determining harmonics (which make up the frequency components) is the exponential signal, the basic building block of sinusoidal signals.
And sinusoidal signals are the basic criterion functions for single-harmonic, or single-frequency, signals.

#### bogosort

Joined Sep 24, 2011
625
The Laplace transform formula comes from the correlation formula in statistics...
I don't believe this is historically accurate. Statistical correlation wasn't developed until the late 19th century, many decades after Laplace's death.

Mathematically, the Laplace transform and correlation functions are related in that both are inner products on linear spaces, but they are more like cousins than siblings. Most importantly, the Laplace transform uses a very special kernel, the one-parameter family of complex exponentials. The magic of the transform is revealed when we consider that complex exponentials are eigenfunctions of the differential operator. This makes the Laplace transform a change of basis for differential equations, and in this new basis derivatives become multiplications, so differential equations become algebraic.
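The eigenfunction step can be written out explicitly; applying integration by parts to the transform of a derivative (for f of exponential order, so the boundary term at infinity vanishes):

```latex
\frac{d}{dt}\,e^{st} = s\,e^{st}
\qquad\text{and}\qquad
\mathcal{L}\{f'(t)\}
  = \int_0^{\infty} f'(t)\,e^{-st}\,dt
  = \bigl[f(t)\,e^{-st}\bigr]_0^{\infty} + s\int_0^{\infty} f(t)\,e^{-st}\,dt
  = s\,F(s) - f(0)
```

Differentiation in time becomes multiplication by s in the transform domain, which is exactly why the ODE collapses to an algebraic equation.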

Historically, the Laplace transform was developed in the 18th century as part of the theory of differential equations. In this context, the Laplace transform is the generalization of a discrete power series to the continuous case. Later, Fourier theory would also be developed as a general power series expansion. In both cases, however, the spectral connection with the dual spaces of time and frequency came later.