# autocorrelation

#### vineethbs

Joined Nov 14, 2004
56
Can anyone tell me what exactly an autocorrelation function is? Also, what is actually meant by power spectral density?

Thanks!

Oh no! I meant this to be in the homework forum, I'm really sorry.

#### Brandon

Joined Dec 14, 2004
306
Originally posted by vineethbs@Mar 8 2005, 10:52 PM
Can anyone tell me what exactly an autocorrelation function is? Also, what is actually meant by power spectral density?

Thanks!

Oh no! I meant this to be in the homework forum, I'm really sorry.

Correlation is an operation in which you convolve a signal with a time-reversed copy. It is heavily used in radar, since the return ping can be compared against the transmitted pulse in this reversed fashion.

The main concept is to see how much the original signal has in common with its shifted copy. For autocorrelation, you're seeing how consistent the signal is with itself. I.e., if you have a sinusoidal input, your autocorrelation will be sinusoidal with a huge peak at t=0. You will find the frequency and period of the signal are greatly emphasized. Whereas the original signal was a constant sinusoid, the autocorrelation will have pronounced 'beats' within its structure showing the period of oscillation.

This comes in very handy when you want to see IF a signal is periodic and how periodic it is. A common application for autocorrelation is in speech synthesis. You take a sample of speech and chop it up into small snippets to get the phonemes (the sounds we make when speaking). These phonemes are either voiced (like a vowel) or unvoiced (sh, ch, ss, etc.), and your program needs to determine which. The voiced parts have sinusoidal structure whereas the unvoiced do not. You autocorrelate each of the sound samples to magnify their periodicity or lack thereof, then look for the maximum, which should be at the center of the autocorrelation. The next peak after that sits one period of oscillation away from the center, and bam, you have the frequency of that segment of sound.
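The pitch-detection recipe above can be sketched in a few lines of NumPy (a toy version; the sample rate, snippet length, tone, and pitch range are assumed example values, not anything from the thread):

```python
import numpy as np

# Hypothetical voiced snippet: a 200 Hz tone standing in for a vowel sound.
fs = 8000                       # sample rate in Hz (assumed example value)
f0 = 200
t = np.arange(0, 0.03, 1 / fs)  # a 30 ms snippet
x = np.sin(2 * np.pi * f0 * t)

# Full autocorrelation; the middle sample is the big peak at lag 0.
r = np.correlate(x, x, mode="full")
lags = r[len(r) // 2 + 1:]      # positive lags only; lags[i] is lag i + 1

# Look for the next peak: its lag is one pitch period.  Restrict the search
# to lags longer than the highest pitch we expect (400 Hz here).
min_lag = fs // 400
period = int(np.argmax(lags[min_lag - 1:])) + min_lag
est_f0 = fs / period
print(est_f0)                   # 200.0
```

Restricting the search range matters: near lag 0 the autocorrelation is still large, so a naive global argmax would land on a too-short lag.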

Spectral density has to do with the Fourier transform, in the end, over a windowed signal. It's usually displayed in a spectrogram with varying degrees of color to represent the concentration of frequencies over the time of a sample. This is done a lot in the music industry when they want to make sure the mastered recording is nice and 'flat' (i.e., the density is almost constant across all frequencies). Whereas the Fourier transform is the frequency response of the ENTIRE signal, the spectral density, in this view, is the frequency response for small slices of the signal (i.e., you'll have done TONS of FFTs by the time the spectral density is done).
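That many-FFTs-over-short-slices idea can be sketched as a toy spectrogram (NumPy; the tone, window length, and hop size here are arbitrary example choices):

```python
import numpy as np

# A 1 s, 440 Hz tone (arbitrary example values).
fs = 8000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)

# Bare-bones spectrogram: FFT magnitudes of successive windowed slices.
win = 256
frames = [x[i:i + win] * np.hanning(win)
          for i in range(0, len(x) - win + 1, win)]
spec = np.array([np.abs(np.fft.rfft(f)) for f in frames])

# Each row is one time slice; each column is one frequency bin (fs/win Hz wide).
peak_hz = spec[0].argmax() * fs / win
print(peak_hz)                  # the bin nearest 440 Hz
```

A real spectrogram would overlap the windows and plot `spec` on a dB color scale, but the structure (window, FFT, repeat) is the same.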

#### vineethbs

Joined Nov 14, 2004
56
Thanks Brandon. Now if I have a signal v(t), what is the power spectral density of that? Do you have to take the autocorrelation function and then its spectrum, or square the frequency spectrum of the original signal? I think the first one would be correct, because there is an increase in the frequency: say v(t) is a sine, then its power would have twice its frequency, so the spectrum should spread out.

So I think that the power spectrum should be the transform of the autocorrelation function. If so, what is the output if I'm passing this signal through a filter? I have seen equations like |H(f)|^2, where H(f) is the response of a filter, so why do we have |H(f)|^2 in this case? If the filter is an LPF at 100 Hz, say, and the input sine wave is at 70 Hz, then its autocorrelation function would have a component at 140 Hz, which wouldn't be passed by this filter, isn't it?

But the autocorrelation function has a DC component too, so do you think the whole energy of the input signal is in that?

#### Brandon

Joined Dec 14, 2004
306
Originally posted by vineethbs@Mar 11 2005, 08:01 PM
Thanks Brandon. Now if I have a signal v(t), what is the power spectral density of that? Do you have to take the autocorrelation function and then its spectrum, or square the frequency spectrum of the original signal? I think the first one would be correct, because there is an increase in the frequency: say v(t) is a sine, then its power would have twice its frequency, so the spectrum should spread out.
Yeah, the first one is correct. If you have a signal v(t), its power spectral density is Pyy = FT(autocorr(v(t))), the Fourier transform of its autocorrelation (this is the Wiener-Khinchin theorem).

Since the autocorrelation emphasizes the frequency of the signal, it's not that the spectral density will spread out; you will have an FFT just like any other signal's. It may look spread out, but when you scale the frequency axis back to 0-2pi, that impression will disappear.

Forget what I was mentioning about the windowing before. I was in a different mindset at the time. Windows don't apply here at all.
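The autocorrelation route and the squared-spectrum route really do agree, and it's easy to check numerically: the DFT of the circular autocorrelation of a sequence equals the squared magnitude of its DFT. A minimal sketch, assuming NumPy (the signal and its length are arbitrary):

```python
import numpy as np

# Random test signal; any finite sequence works.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
N = len(x)

# Route 1: square the magnitude of the spectrum directly.
X = np.fft.fft(x)
Pyy_direct = np.abs(X) ** 2

# Route 2: circular autocorrelation first, then its DFT (Wiener-Khinchin).
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)])
Pyy_wk = np.fft.fft(r)

# Both routes give the same power spectral density
# (the imaginary parts of Pyy_wk are numerically zero).
print(np.allclose(Pyy_direct, Pyy_wk.real))   # True
```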

So I think that the power spectrum should be the transform of the autocorrelation function. If so, what is the output if I'm passing this signal through a filter? I have seen equations like |H(f)|^2, where H(f) is the response of a filter, so why do we have |H(f)|^2 in this case? If the filter is an LPF at 100 Hz, say, and the input sine wave is at 70 Hz, then its autocorrelation function would have a component at 140 Hz, which wouldn't be passed by this filter, isn't it?

But the autocorrelation function has a DC component too, so do you think the whole energy of the input signal is in that?

You will have a similar output to the non-filtered signal, except that you will notice a drop in magnitude at the frequencies the filter attenuates. Frequency does not change in the autocorrelation. Remember, we are dealing with linear systems: 70 Hz in = 70 Hz out. No form of frequency shifting occurs.

You would just take the DFT of v(t), multiply by H(f), then invert back to the filtered sinusoid. The 70 Hz signal would still be there, since it's an LPF at 100 Hz. In this instance, your filtered Pyy would equal the non-filtered Pyy.

Let's say the LPF is at 50 Hz instead, and at 70 Hz we attenuate our signal by -5 dB. When you do the autocorrelation, you will have a signal that looks almost exactly the same as before. The only difference will be that the center peak will be lower, and the additional peaks representing the 70 Hz signal will be lower as well.

In the end, I believe you can just take the Pyy of the original signal and multiply it by |H(f)|^2 to get the new Pyy.
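That |H(f)|^2 relation can be checked directly in the frequency domain (a sketch with NumPy; the random input and the ideal low-pass response here are arbitrary examples, not anything from the thread):

```python
import numpy as np

# Random input signal.
rng = np.random.default_rng(1)
N = 256
x = rng.standard_normal(N)

# An ideal low-pass frequency response: keep the lowest quarter of the bins
# (and their negative-frequency mirrors), zero the rest.
H = np.zeros(N)
H[: N // 4] = 1.0
H[-(N // 4) + 1:] = 1.0

X = np.fft.fft(x)
Y = H * X                       # filter in the frequency domain

Pxx = np.abs(X) ** 2
Pyy = np.abs(Y) ** 2

# The output PSD is |H(f)|^2 times the input PSD, bin by bin.
print(np.allclose(Pyy, np.abs(H) ** 2 * Pxx))   # True
```

Since Y = H X, we have |Y|^2 = |H|^2 |X|^2 at every frequency, which is exactly the Pyy = |H(f)|^2 Pxx rule. Note there is no component at twice the input frequency anywhere in this picture.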

#### vineethbs

Joined Nov 14, 2004
56
Originally posted by Brandon@Mar 12 2005, 09:27 PM
It may look spread out, but when you scale the frequency axis back to 0-2pi, it will disappear.

What if v(t) is a continuous-time signal?

Originally posted by Brandon@Mar 12 2005, 09:27 PM
In the end, I believe you can just take the Pyy of the original signal and multiply it by |H(f)|^2 to get the new Pyy.

OK, but doesn't the Pyy (I'm taking an FT of the original signal) have a component at 140 Hz?

#### vineethbs

Joined Nov 14, 2004
56
Thanks Brandon!
I think I will try to clarify my question once more. It's like this:

I have cos(2pi*70t) as the signal;
its Fourier transform is therefore delta(f-70).

Now its autocorrelation function is cos(wt)cos(w(t+u)) = k(cos(2wt+wu) + cos(wu)),
isn't it?

This will have a Fourier transform k1*delta(f) + k2*delta(f-2f0), where f0, w, etc. are all related.

Now if I have an ideal LPF at 100 Hz
and am passing cos(2pi*70t) through it, I will get the same signal back.

If I'm using the equation Ryy*|H(f)|^2, won't I lose the 2w term and get the output autocorrelation function as delta(0)?

This is what my doubt is. Also, what is the time-averaged autocorrelation function?

Let's say the LPF is at 50 Hz instead, and at 70 Hz we attenuate our signal by -5 dB. When you do the autocorrelation, you will have a signal that looks almost exactly the same as before. The only difference will be that the center peak will be lower, and the additional peaks representing the 70 Hz signal will be lower as well.
What happens when the filter is ideal?

#### Brandon

Joined Dec 14, 2004
306
First, in practice you can't take the correlation of a continuous signal directly, since you need to convolve the signal with its time-reversed version. In order to do that, you need a finite record or a sampled window of the continuous signal.

Originally posted by vineethbs@Mar 12 2005, 08:29 PM
I have cos(2pi*70t) as the signal;
its Fourier transform is therefore delta(f-70).

Now its autocorrelation function is cos(wt)cos(w(t+u)) = k(cos(2wt+wu) + cos(wu)),
isn't it?
Its FT would be delta(f-70) + delta(f+70), since cosine is a real (non-complex) waveform. Any naturally occurring waveform must have both a negative and a positive component in its frequency transform. You only get a single component for a complex exponential, which has to be man-made, since a complex waveform is more of a conceptual tool than anything.
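The two-sided spectrum of a real cosine is easy to confirm numerically (a NumPy sketch using the thread's 70 Hz example; the sample rate is an assumed value):

```python
import numpy as np

# One second of a 70 Hz cosine sampled at 1 kHz.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 70 * t)

X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), 1 / fs)

# A real cosine has exactly two spectral lines, at +70 Hz and -70 Hz.
lines = sorted(float(f) for f in freqs[np.abs(X) > len(x) / 4])
print(lines)                    # [-70.0, 70.0]
```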

I wish I could do the math for you, because I'm to the point where I just look at the stuff and see what it's supposed to look like. When you do the autocorrelation, you're not going to get a multiplication of the frequency. There is no way that can happen. What does happen is that, due to the convolution, you get double your samples, and you get 'peaks' wherever the signal is in phase with itself, which only happens at multiples of the period N.

If you want a simple exercise to see this, correlate one period of cosine with itself (i.e., convolve it with its time-reversed copy) over 8 samples: [1, sqrt(2)/2, 0, -sqrt(2)/2, -1, -sqrt(2)/2, 0, sqrt(2)/2].

We end up with this sequence:

[0.70711, 0.5, -0.70711, -2, -2.1213, -0.5, 2.1213, 4, 2.1213, -0.5, -2.1213, -2, -0.70711, 0.5, 0.70711]

We see the central peak of 4 at sample 8 (lag 0). We see smaller peaks at samples 1 and 15. The distance from each of those to the center is 7 samples, one short of the full 8-sample period only because the record itself is just one period long, so the frequency is maintained through the process. All we have is an amplitude change in the signal.
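The exercise above can be reproduced in a couple of lines of NumPy (note that "convolve with the time-reversed copy" is exactly what np.correlate does):

```python
import numpy as np

# One period of cosine over 8 samples, as in the exercise above.
x = np.cos(2 * np.pi * np.arange(8) / 8)

# Correlate the signal with itself, all lags.
r = np.correlate(x, x, mode="full")
print(np.round(r, 4))   # the central value, at lag 0, is 4 = sum of squares
```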

Double-check your calculations for the autocorrelation of cosine. It should be cos(2pi*70u)*K, where K is some attenuation factor, centered around lag 0 (the middle of the correlation array).

This will have a Fourier transform k1*delta(f) + k2*delta(f-2f0), where f0, w, etc. are all related.

Now if I have an ideal LPF at 100 Hz
and am passing cos(2pi*70t) through it, I will get the same signal back.

If I'm using the equation Ryy*|H(f)|^2, won't I lose the 2w term and get the output autocorrelation function as delta(0)?

This is what my doubt is. Also, what is the time-averaged autocorrelation function?
What happens when the filter is ideal?

Never messed around with time averaging, but I'm fairly sure it's the same method as the normal autocorrelation, except you integrate the product over time and divide by the length of the time interval. But check on that.
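For what it's worth, the time-averaged autocorrelation of cos(2*pi*f0*t) works out to 0.5*cos(2*pi*f0*u): averaging over t keeps the cos(wu) term and kills the cos(2wt+wu) term, which is what resolves the 140 Hz question. A discrete sketch (sample rate and record length are assumed example values):

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1 / fs)         # a 2 s record standing in for "all time"
x = np.cos(2 * np.pi * 70 * t)

def time_avg_autocorr(x, lag):
    """Average of x[n] * x[n + lag] over the overlapping samples."""
    n = len(x) - lag
    return float(np.dot(x[:n], x[lag:]) / n)

# For cos(2*pi*f0*t) this approaches 0.5 * cos(2*pi*f0*u); at lag 0
# it is just the average power of the cosine, 0.5.
print(round(time_avg_autocorr(x, 0), 3))    # 0.5
```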

Ideal filters are heavenly. Nothing special happens when you use them, except that they make your math very easy. Filters have no special or unique effect when applied to a signal before autocorrelation, other than that specific filter's function. Say we use a high-pass filter tuned to 500 Hz, i.e., we lose our 70 Hz signal. All we would have left is noise. After we autocorrelate the noise, we get a waveform that looks like white noise with a single impulse at the center of the autocorrelation. If that filter happened to be ideal, no difference. It could be a 2nd-order Butterworth or a 10th-order FIR; it doesn't matter.

Don't kill yourself over the autocorrelation. It's like a special pipe wrench: you're learning all the math and theory behind it, but its importance is in its application and use.

Thank you!