How to: Calculate frequency response from swept time-series data?

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
Hi there.

I understand that frequency response is nothing but the relationship between the input and the output of the system, in terms of magnitude and phase.

Thus it is very easy to deduce the frequency response of the system if we have the time-series data of the input and the output; the input being a frequency sweep of constant magnitude such that the change in frequency happens only after the completion of one cycle. That is, each frequency is maintained for at least one complete cycle.

However, this seldom happens in practical scenarios. Almost all practical tools that generate a frequency sweep do not follow this condition; they generate a continuously varying sweep (on a linear/logarithmic scale) such that the frequency changes several times within a single cycle. This makes it challenging to deduce the amplitude and phase information corresponding to a specific frequency.

I would really appreciate it if someone could guide me through this mathematical challenge.
 

MrChips

Joined Oct 2, 2009
30,706
You do not need a complete cycle at each frequency. Which frequencies would you even step through: 100 Hz and 101 Hz, or 10000 Hz and 10001 Hz? What about 100.24 Hz?

In fact, you can determine the frequency response from an input signal that is a single pulse 1 microsecond wide.
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
You do not need a complete cycle at each frequency. Which frequencies would you even step through: 100 Hz and 101 Hz, or 10000 Hz and 10001 Hz? What about 100.24 Hz?

In fact, you can determine the frequency response from an input signal that is a single pulse 1 microsecond wide.
Thank you for responding, MrChips.

It would be really great if you could elaborate on how to do it.
 

MrChips

Joined Oct 2, 2009
30,706
Firstly, here is a quick lesson on system response.
Consider the system response as a black box indicated in the diagram below as h(t) and H(f).

[block diagram: input x(t) / X(f) → system h(t) / H(f) → output y(t)]

What does this mean?
A system can be observed in two different ways, (1) time space and (2) frequency space.

An input time series x(t) interacts with the time response of the system h(t) to produce an output y(t).
The mathematical operator is called convolution.
y(t) = h(t) * x(t)

Since you are interested in the frequency response of the system, we look at the system in frequency space.
A frequency spectrum X(f) interacts with the frequency response of the system H(f) to produce an output Y(f).
The mathematical operator is multiplication.
Y(f) = H(f) x X(f)

What you are trying to determine is H(f).
What you have given the system is x(t) and you are measuring y(t).
If you can convert x(t) ⇒ X(f) and y(t) ⇒ Y(f) then you can determine H(f).

To make life simple, if we can make X(f) = 1
then Y(f) = H(f)

All we need to do is to convert our measured output y(t) ⇒ Y(f) and we have our frequency response H(f).
The conversion is called the Fourier Transform.

It turns out that the frequency spectrum (i.e., the Fourier Transform) of a single spike (called a delta function) contains all frequencies. In other words, the Fourier transform of a delta function is unity. Thus, we input a delta function into the system and then take the Fourier Transform of the output y(t).

If you already have the time series y(t) of a frequency sweep x(t), taking the Fourier Transform will also give you a close approximation of the frequency response of the system. Or simply just measure the signal amplitude of y(t) at each frequency and you can plot the frequency response.
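
For instance, here is a minimal sketch (Python with NumPy/SciPy; the first-order low-pass below is just a made-up stand-in for your black box, not anything from your setup) of estimating H(f) from a recorded sweep x(t) and the measured y(t):

```python
# Minimal sketch: estimate H(f) = Y(f) / X(f) from sampled input/output records.
# Assumes NumPy/SciPy; the "DUT" is a pretend first-order low-pass so the example
# is self-contained.
import numpy as np
from scipy import signal

fs = 10_000                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)                 # 2 seconds of data

# Swept-sine input x(t): 10 Hz .. 2 kHz logarithmic chirp of constant amplitude
x = signal.chirp(t, f0=10, t1=t[-1], f1=2000, method='logarithmic')

# Stand-in black box: first-order low-pass with a 200 Hz corner
b, a = signal.butter(1, 200, fs=fs)
y = signal.lfilter(b, a, x)

# Transform both records and divide, element by element
X = np.fft.rfft(x)
Y = np.fft.rfft(y)
f = np.fft.rfftfreq(len(x), d=1 / fs)         # frequency (Hz) of each FFT bin
H = Y / X                                     # noisy outside the sweep band, where X(f) ~ 0

k = np.argmin(np.abs(f - 100))                # look at the bin nearest 100 Hz
print(f[k], 20 * np.log10(np.abs(H[k])), np.degrees(np.angle(H[k])))
```

In practice you would average several sweeps, or window the records, to tame the bins where X(f) is small.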
 

bogosort

Joined Sep 24, 2011
696
I understand that frequency response is nothing but the relationship between the input and the output of the system, in terms of magnitude and phase.
I think you're asking about determining the frequency content of the input signal, not the frequency response of the measurement system (which we can presume to be flat, i.e., passes all frequencies within some bandwidth at unity gain).

Specifically, I think you are asking how we can analytically determine the frequency content of a frequency-varying signal from its sampled data when the rate of frequency change exceeds the periodicity of the individual frequency components.

The key is that the signal is bandlimited, i.e., there is some finite frequency Ω such that f(ω) = 0 for all frequencies ω > Ω. The sampling theorem tells us that if the signal f is bandlimited to Ω and the signal is sampled at a rate at least 2Ω, then f is perfectly and uniquely characterized by its samples. (This is called the Nyquist criterion.) In plain English, if you sample the sweep fast enough, you'll have all the information you need to reconstruct the signal.

One way to build intuition about this is to think of the sample values as dots on a graph, where the x-axis represents time and the y-axis amplitude. If -- and only if -- the Nyquist criterion is satisfied, then precisely one waveform can be smoothly drawn through the dots.

Let's use a simple example to see how we don't need a full period. Suppose we have a sine wave of 0.3 Hz with some arbitrary phase shift:

[plot: a 0.3 Hz sine wave with an arbitrary phase shift]

Imagine we sample this signal for one second, at a sampling rate of 5 Hz. The Nyquist criterion is satisfied because our sampling rate exceeds twice the signal's frequency (0.3 * 2 = 0.6 Hz < 5 Hz). After sampling for one second, we'd have exactly five samples, which we'll graph as dots:

[plot: the five sample points taken at 5 Hz over one second]

Notice that our sample amplitudes are all positive, so we know we definitely have not captured a full period. But we also know that the Nyquist criterion has been met, therefore there is only one function with a frequency less than 2.5 Hz (half the sampling rate) that will fit those points exactly:

[plots: the unique sub-Nyquist waveform fitted through the sample points]

There are many ways to find the actual function that fits these points, the most famous of which is the family of Fourier transforms. In this case, a Fourier series would have told us that the time-domain function is \[ f(t) = \sin \left((0.3)2\pi t + \frac{\pi}{3} \right) \] From this, we could see that the frequency component is 0.3 Hz, even though we never captured a full period of the signal. Hopefully, you can intuit that no matter how complex the signal -- including varying in frequency as well as time -- there is only one signal that will fit the dots perfectly, provided that the Nyquist criterion has been met.

Note that all bets are off if we can't guarantee the Nyquist criterion. If we allow all frequencies, then there are an infinite number of signals that will pass through any finite set of dots. We call these false signals aliases, which we avoid by ensuring that the input to the ADC is bandlimited.
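
If you want to see that numerically, here is a minimal sketch (Python/NumPy, purely for illustration) that takes those five samples and searches for the one sub-Nyquist sinusoid that fits them:

```python
# Minimal sketch: five samples of a 0.3 Hz sine taken at 5 Hz, then a brute-force
# search for the sub-Nyquist sinusoid that fits them. Assumes NumPy.
import numpy as np

fs = 5.0                                         # sampling rate, Hz
t = np.arange(5) / fs                            # the five sample instants
samples = np.sin(2 * np.pi * 0.3 * t + np.pi / 3)

best_f, best_err, best_coef = None, np.inf, None
for f in np.arange(0.05, fs / 2, 0.01):          # trial frequencies below Nyquist
    # least-squares fit of a*cos(2*pi*f*t) + b*sin(2*pi*f*t) to the samples
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    err = np.sum((A @ coef - samples) ** 2)
    if err < best_err:
        best_f, best_err, best_coef = f, err, coef

a, b = best_coef                                 # samples ~ a*cos(...) + b*sin(...)
print(best_f)                                    # ~0.30 Hz
print(np.hypot(a, b))                            # amplitude ~1.0
print(np.arctan2(a, b))                          # phase ~pi/3, since sin(x+p) = cos(p)sin(x) + sin(p)cos(x)
```

Only the true frequency drives the residual to (essentially) zero, which is the sampling theorem doing its job.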
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
Thanks a lot, MrChips, for the explanation. The quick lesson really helped.

Or simply just measure the signal amplitude of y(t) at each frequency and you can plot the frequency response.
Yes, that is the most intuitive and easiest way to figure out the system response, but also the most tiring and time-consuming one.

If you already have the time series y(t) of a frequency sweep x(t), taking the Fourier Transform will also give you a close approximation of the frequency response of the system.
Yes, taking the Fourier Transform and then dividing each element of Y(f) by the corresponding element of X(f) would definitely give the magnitude response. However, to complete the frequency response, we also need the phase information. How can we deduce that?
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
First of all, thank you, bogosort, for responding to this thread.

I think you're asking about determining the frequency content of the input signal, not the frequency response of the measurement system (which we can presume to be flat, i.e., passes all frequencies within some bandwidth at unity gain).
Well, actually I am trying to determine the frequency response of a, say, DUT (a black-box, as mentioned by MrChips) by determining the frequency content of the input and the output signals.

Specifically, I think you are asking how we can analytically determine the frequency content of a frequency-varying signal from its sampled data when the rate of frequency change exceeds the periodicity of the individual frequency components.
Precisely put!

The key is that the signal is bandlimited, i.e., there is some finite frequency Ω such that f(ω) = 0 for all frequencies ω > Ω. The sampling theorem tells us that if the signal f is bandlimited to Ω and the signal is sampled at a rate at least 2Ω, then f is perfectly and uniquely characterized by its samples. (This is called the Nyquist criterion.) In plain English, if you sample the sweep fast enough, you'll have all the information you need to reconstruct the signal.
Yes, I understand the Nyquist Theorem!

One way to build intuition about this is to think of the sample values as dots on a graph, where the x-axis represents time and the y-axis amplitude. If -- and only if -- the Nyquist criterion is satisfied, then precisely one waveform can be smoothly drawn through the dots.

Let's use a simple example to see how we don't need a full period. Suppose we have a sine wave of 0.3 Hz with some arbitrary phase shift:

[plot: a 0.3 Hz sine wave with an arbitrary phase shift]


Imagine we sample this signal for one second, at a sampling rate of 5 Hz. The Nyquist criterion is satisfied because our sampling rate exceeds twice the signal's frequency (0.3 * 2 = 0.6 Hz < 5 Hz). After sampling for one second, we'd have exactly five samples, which we'll graph as dots:

[plot: the five sample points taken at 5 Hz over one second]


Notice that our sample amplitudes are all positive, so we know we definitely have not captured a full period. But we also know that the Nyquist criterion has been met, therefore there is only one function with a frequency less than 2.5 Hz (half the sampling rate) that will fit those points exactly:

[plots: the unique sub-Nyquist waveform fitted through the sample points]
But that's a fabulous way of putting Nyquist and Fourier theorems together! Love it!

There are many ways to find the actual function that fits these points, the most famous of which is the family of Fourier transforms. In this case, a Fourier series would have told us that the time-domain function is \[ f(t) = \sin \left((0.3)2\pi t + \frac{\pi}{3} \right) \] From this, we could see that the frequency component is 0.3 Hz, even though we never captured a full period of the signal. Hopefully, you can intuit that no matter how complex the signal -- including varying in frequency as well as time -- there is only one signal that will fit the dots perfectly, provided that the Nyquist criterion has been met.
So, how to do it? I mean, how to proceed with the sampled data, let's say your 5 dots? In theoretical mathematics, we are taught like "Fourier Transform of this function is that", "Z-Transform of this function is that" and so on! But in practical scenarios, what we have in hand are not functions but sampled data! How are these transforms computed in real scenarios with sampled data (and not predetermined functions)?
 

MrChips

Joined Oct 2, 2009
30,706

Yes, taking the Fourier Transform and then dividing each element of Y(f) by the corresponding element of X(f) would definitely give the magnitude response. However, to complete the frequency response, we also need the phase information. How can we deduce that?
The phase information is contained in the Fourier Transform.
Why do you need the phase information? In many situations we only need to examine the frequency response, i.e. the power spectrum.
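
For example, here is a minimal Python/NumPy sketch (nothing to do with any particular instrument) showing magnitude and phase both coming out of the same FFT bin:

```python
# Minimal sketch: an FFT bin is a complex number, so it carries magnitude AND phase.
# Assumes NumPy.
import numpy as np

fs = 1000                                     # Hz
t = np.arange(0, 1, 1 / fs)                   # one second: exactly 50 cycles of 50 Hz
y = 2.0 * np.sin(2 * np.pi * 50 * t + np.pi / 4)

Y = np.fft.rfft(y)
f = np.fft.rfftfreq(len(y), d=1 / fs)
k = np.argmax(np.abs(Y))                      # the bin with the most energy, 50 Hz here

print(f[k])                                   # 50.0
print(2 * np.abs(Y[k]) / len(y))              # amplitude ~2.0
print(np.angle(Y[k]))                         # ~ -pi/4: the phase, referenced to a cosine,
                                              # since 2*sin(wt + pi/4) = 2*cos(wt - pi/4)
```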
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
Why do you need the phase information? In many situations we only need to examine the frequency response, i.e. the power spectrum.
Yeah, you're right. My bad!
It's just that I have this Keysight 2014A scope, which gives only real values when exporting an FFT into a CSV. So, I almost forgot that the FFT returns a complex number.

Thank you once again.
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
@bogosort, I would be grateful if you could help me understand how the computational algorithms for FFT or Z-T handle the sampled data, the maths behind it.
In other words, how should one proceed if (s)he does not want to use the computational algorithms in Matlab/Scilab or any other such software and wants to compute the FFT on the sampled data and figure out the input wave on her/his own, just as a learning exercise?
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
Since you are interested in the frequency response of the system, we look at the system in frequency space.
A frequency spectrum X(f) interacts with the frequency response of the system H(f) to produce an output Y(f).
The mathematical operator is multiplication.
Y(f) = H(f) x X(f)

What you are trying to determine is H(f).
What you have given the system is x(t) and you are measuring y(t).
If you can convert x(t) ⇒ X(f) and y(t) ⇒ Y(f) then you can determine H(f).

To make life simple, if we can make X(f) = 1
then Y(f) = H(f)
So, to figure out H(f), we need to divide Y(f) by X(f). What kind of division should this be? Let's say we have structured X(f) and Y(f) as one-dimensional arrays (they would certainly have the same number of elements): should the division be a matrix division or an element-by-element division, and why?

Also, when using mathematical tools like Matlab/Scilab for system estimation from the frequency response H(f), they also require the input frequencies at which the response was obtained. How do I find those frequencies, since computing the FFT with these tools returns only the complex frequency response and not the frequencies themselves? (The complex frequency response returned by these functions also has the same number of elements as the input data.)
 

bogosort

Joined Sep 24, 2011
696
Well, actually I am trying to determine the frequency response of a, say, DUT (a black-box, as mentioned by MrChips) by determining the frequency content of the input and the output signals.
In that case, the usual method is to use an impulse as the input signal. The math for why this is so is straightforward, but here's the intuition. Recall the reciprocal relationship between frequency and time, which are Fourier duals. In the time domain, a perfect impulse has zero width (it is perfectly localized); therefore, in the frequency domain it has infinite width (perfectly globalized). In other words, a zero-time impulse contains every frequency at equal magnitude. Since the impulse has constant energy at every frequency, the DUT is "exercised" at every frequency equally, and so its output is its frequency response.

Of course, perfect impulses are physically impossible, but so are infinite bandwidth DUTs.
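
As a minimal sketch of the idea (Python/NumPy/SciPy here, and the "DUT" is just a pretend 2nd-order low-pass): drive the system with a unit impulse, and the transform of the output is the frequency response directly.

```python
# Minimal sketch: impulse in, FFT of the output out. Since X(f) = 1 for a unit
# impulse, Y(f) = H(f). Assumes NumPy/SciPy; the DUT is a pretend 2nd-order low-pass.
import numpy as np
from scipy import signal

fs = 10_000
N = 4096
x = np.zeros(N)
x[0] = 1.0                                    # discrete-time unit impulse

b, a = signal.butter(2, 500, fs=fs)           # stand-in black box, 500 Hz corner
h = signal.lfilter(b, a, x)                   # measured output = impulse response

H = np.fft.rfft(h)                            # frequency response
f = np.fft.rfftfreq(N, d=1 / fs)

k = np.argmin(np.abs(f - 500))                # bin nearest the corner frequency
print(f[k], 20 * np.log10(np.abs(H[k])))      # ~ -3 dB near 500 Hz, as expected
```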

So, how to do it? I mean, how to proceed with the sampled data, let's say your 5 dots? In theoretical mathematics, we are taught like "Fourier Transform of this function is that", "Z-Transform of this function is that" and so on! But in practical scenarios, what we have in hand are not functions but sampled data! How are these transforms computed in real scenarios with sampled data (and not predetermined functions)?
Ah, but what do you think a function is? There is a common misunderstanding that a function is an expression of the form \[ f(x) = x^2 \] Whatever that expression is supposed to mean, it is not a function! A function is a set. Specifically, a function is a set of paired elements with the constraint that the first element in each pair is unique.

Suppose that \(A\) and \(B\) are sets. Then a function \( f:A \to B \) maps elements from \(A\) to elements in \(B\). For example, let \[A = \{0, 1, 2, 3, 4\} \qquad \text{ and } \qquad B = \{0, 1, 4, 9, 16\}\] A counting argument can show that there are \(5^5 = 3125\) possible functions on these sets. Here are two of them: \[ \begin{align} f &= \{ (0,0), (1, 0), (2, 0), (3, 0), (4, 0) \} \\ g &= \{ (0, 0), (1, 1), (2, 4), (3, 9), (4, 16) \} \end{align}\] We can write these functions in a way that makes their mapping behavior explicit: \[ \begin{array}{ll}
f & g \\
0 \to 0 & 0 \to 0 \\
1 \to 0 & 1 \to 1 \\
2 \to 0 & 2 \to 4 \\
3 \to 0 & 3 \to 9 \\
4 \to 0 & 4 \to 16 \end{array} \]
Clearly, f maps everything to zero \( x \mapsto 0 \), while g maps elements to their squares \( x \mapsto x^2 \). Note that \(g:A \to B\) is a different function than, say, \(h:\mathbb{R} \to \mathbb{R}\) defined by \( x \mapsto x^2 \). Why? Because if we list out g and h as sets, we'll see that they're entirely different sets, e.g., \[ (10, 100) \in h \; \text{ but } \; (10, 100) \notin g \] Hopefully you now appreciate that a sequence of five sample values \[ x = (x_0, x_1, x_2, x_3, x_4)\] is a function, namely, a function \[ f:\{0,1,2,3,4\} \to \{x_0, x_1, x_2, x_3, x_4\} \] where the first set (the domain) represents the sample indices, and the second set (the codomain) is the set of sample values.

Note that we can equivalently think of the five sample values as a vector in a 5-dimensional space. Either way -- as a function or as a vector -- we can apply linear transformations to it, for instance converting it to a function of frequency values (equivalently, projecting the vector onto its dual frequency space). The canonical way to do this is with the DFT, the discrete Fourier transform. As evaluating the DFT directly is computationally expensive, in practice we use the FFT algorithm, which computes the same transform efficiently (and is what tools like MATLAB provide).
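
To make that concrete, here is a minimal sketch (Python/NumPy, used only for illustration) of the DFT as a matrix acting on the vector of samples, with the FFT computing the very same thing:

```python
# Minimal sketch: the DFT is a linear transformation (an N x N matrix applied to the
# length-N sample vector), and the FFT is just a fast way to compute it. Assumes NumPy.
import numpy as np

# the five samples from the 0.3 Hz example
x = np.sin(2 * np.pi * 0.3 * np.arange(5) / 5 + np.pi / 3)
N = len(x)

n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix: W[k, n] = e^(-j*2*pi*k*n/N)

X_matrix = W @ x                               # DFT by explicit matrix multiplication
X_fft = np.fft.fft(x)                          # the same numbers via the FFT algorithm

print(np.allclose(X_matrix, X_fft))            # True
```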
 

bogosort

Joined Sep 24, 2011
696
@bogosort, I would be grateful if you could help me understand how the computational algorithms for FFT or Z-T handle the sampled data, the maths behind it.
In other words, how should one proceed if (s)he does not want to use the computational algorithms in Matlab/Scilab or any other such software and wants to compute the FFT on the sampled data and figure out the input wave on her/his own, just as a learning exercise?
Personally, I find the computational aspect quite boring, and I am more than happy to let a tool like MATLAB handle the rote calculations. For me, the interesting part is the mathematics. I do agree, however, that it's highly beneficial to work through a few simple transformations by hand to get a feel for how they do what they do. In this regard, I recommend that you ignore the FFT, which was designed for machine use -- it's an algorithm that optimizes for efficiency -- not for understanding how the DFT works. There are many good tutorials on the internet that work through calculating Fourier series and DFTs by hand.
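
If it helps, here is a minimal sketch (plain Python, no MATLAB/Scilab or NumPy) of the DFT computed straight from its definition, which is essentially what you would do by hand:

```python
# Minimal sketch: the DFT straight from its definition,
#   X[k] = sum over n of x[n] * exp(-j*2*pi*k*n/N),
# using nothing but plain Python and cmath.
import cmath

def dft(x):
    N = len(x)
    X = []
    for k in range(N):                        # one output bin per input sample
        s = 0j
        for n, xn in enumerate(x):
            s += xn * cmath.exp(-2j * cmath.pi * k * n / N)
        X.append(s)
    return X

# approximately the five samples of the 0.3 Hz example above
samples = [0.866, 0.989, 0.974, 0.821, 0.554]
for k, Xk in enumerate(dft(samples)):
    print(k, abs(Xk), cmath.phase(Xk))        # magnitude and phase of each bin
```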
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
Ah, but what do you think a function is? There is a common misunderstanding that a function is an expression of the form \[ f(x) = x^2 \] Whatever that expression is supposed to mean, it is not a function! A function is a set. Specifically, a function is a set of paired elements with the constraint that the first element in each pair is unique.

Suppose that \(A\) and \(B\) are sets. Then a function \( f:A \to B \) maps elements from \(A\) to elements in \(B\). For example, let \[A = \{0, 1, 2, 3, 4\} \qquad \text{ and } \qquad B = \{0, 1, 4, 9, 16\}\] A counting argument can show that there are \(5^5 = 3125\) possible functions on these sets. Here are two of them: \[ \begin{align} f &= \{ (0,0), (1, 0), (2, 0), (3, 0), (4, 0) \} \\ g &= \{ (0, 0), (1, 1), (2, 4), (3, 9), (4, 16) \} \end{align}\] We can write these functions in a way that makes their mapping behavior explicit: \[ \begin{array}{ll}
f & g \\
0 \to 0 & 0 \to 0 \\
1 \to 0 & 1 \to 1 \\
2 \to 0 & 2 \to 4 \\
3 \to 0 & 3 \to 9 \\
4 \to 0 & 4 \to 16 \end{array} \]
Clearly, f maps everything to zero \( x \mapsto 0 \), while g maps elements to their squares \( x \mapsto x^2 \). Note that \(g:A \to B\) is a different function than, say, \(h:\mathbb{R} \to \mathbb{R}\) defined by \( x \mapsto x^2 \). Why? Because if we list out g and h as sets, we'll see that they're entirely different sets, e.g., \[ (10, 100) \in h \; \text{ but } \; (10, 100) \notin g \] Hopefully you now appreciate that a sequence of five sample values \[ x = (x_0, x_1, x_2, x_3, x_4)\] is a function, namely, a function \[ f:\{0,1,2,3,4\} \to \{x_0, x_1, x_2, x_3, x_4\} \] where the first set (the domain) represents the sample indices, and the second set (the codomain) is the set of sample values.
Oops! How could I forget such fundamental principles!
Sorry, my bad!

Thanks a ton for the reminder.
 

MrAl

Joined Jun 17, 2014
11,389
Hi,

It sounds like you are thinking about the difference between sweeping through discrete frequency steps like 1Hz, 2Hz, 3Hz,...1000Hz, 1001Hz, etc., versus a continuous sweep approximated by 1.000001Hz, 1.000002Hz, etc.

Simple systems, however, can be characterized by taking just a few measurements, at as few as three different frequencies such as 10 Hz, 1 kHz, and 20 kHz. That is because simple systems are usually just 2nd order, and a 2nd-order response is very limited in how it can change. Thus you could do one curve fit for the amplitude and a second curve fit for the phase.
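
As a rough sketch of that idea (Python with SciPy here; the three "measurements" are made-up numbers, not from any real DUT), a 2nd-order low-pass magnitude has only three free parameters, so three points pin down the whole curve:

```python
# Minimal sketch: fit the three parameters (gain K, corner f0, quality Q) of a
# 2nd-order low-pass magnitude response to three measured points.
# Assumes SciPy/NumPy; the "measurements" are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def mag2(f, K, f0, Q):
    # |H(f)| of a standard 2nd-order low-pass
    r = f / f0
    return K / np.sqrt((1 - r**2)**2 + (r / Q)**2)

f_meas = np.array([10.0, 1000.0, 20000.0])            # the three test frequencies, Hz
g_meas = mag2(f_meas, 2.0, 1500.0, 0.707)             # pretend these came from the DUT

popt, _ = curve_fit(mag2, f_meas, g_meas, p0=[1.0, 1e3, 1.0])
K, f0, Q = popt
print(K, f0, Q)    # recovers ~2.0, ~1500 Hz, ~0.707 (only Q**2 appears in the model,
                   # so its sign is arbitrary; SciPy will also warn that it cannot
                   # estimate parameter uncertainty from only three points)
```

A second fit of the same kind against the measured phase works the same way.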

As systems get more complicated, however, prediction gets harder, because the way a complicated system can change simply cannot be pinned down from any finite number of measurements you could make. This is where theory starts to beat practice: if you know the elements of the system, you can do calculations that help.

So the first thing you have to do is characterize your system and classify it by its complexity. For example, if it is just 2nd-order linear then you can use many methods like those described here and elsewhere, and really, if it is plain perfectly linear, you can use a lot of those methods.

So my question to you is, do you know anything about the system in mathematical terms exactly or even approximately?
 

Thread Starter

Abbas_BrainAlive

Joined Feb 21, 2018
113
Hello MrAl.

Thanks for your suggestions.

The DUT I have in hand at the moment is an 8th-order low-pass MFB Butterworth filter with an integrated notch at the power-line frequency. However, I am trying to build a generalized system to automate the frequency-response analysis of (almost) any device, with minimal limitations.
 