I think I understand the basic concept of the sampling theorem, that is, that the sampling frequency must be at least twice the highest frequency of the function being sampled. I have been watching DSP lectures on YouTube, and in one video the lecturer presents a sampling theorem example that has confused me. The question is: 3 cosine waves of different frequencies are to be sampled: cos(6πn), cos(14πn) and cos(26πn). All 3 functions are to be sampled at a frequency of 10 Hz. This makes T = 0.1 s. Multiplying omega (ω = 2πf) by the sampling interval, the three functions become cos(0.6πn), cos(1.4πn) and cos(2.6πn). Only the function with the 3 Hz original frequency is sampled at a sufficient rate; the second and third functions are undersampled, therefore aliasing will occur. However, the lecturer stated that "1.4πn is 2πn - 1.4πn which equals 0.6πn, and 2.6πn is 2πn + 0.6πn which also equals 0.6πn", so all the cosine waves will appear to have the same frequency. I don't understand how he is doing the addition and subtraction to obtain the aliased frequencies (the part in quotation marks). If anyone can give me a hint or a clue I will be very grateful. The example starts at 37:10. http://www.youtube.com/watch?v=JpHXMcDxNiA&feature=relmfu Thanks.
I need some clarification. Normally when one uses 'n' instead of 't', it means that you are already working in the discrete-time domain and 'n' is the sample number. That appears not to be the case here, so what is 'n'? Since you are saying the first one is 3 Hz, I'll assume that what was meant was cos(6πt), cos(14πt) and cos(26πt), and we'll see where that leads. So the three sinusoids are at 3 Hz, 7 Hz, and 13 Hz. If the sampling rate is 10 Sa/s, then the highest non-aliased frequency would be 5 Hz, and 10 Hz would be aliased down to DC. Between 5 Hz and 10 Hz, the alias frequency decreases linearly from 5 Hz to 0 Hz, so 6 Hz would end up at 4 Hz and 7 Hz would end up at 3 Hz. Between 10 Hz and 15 Hz, the aliased frequency increases linearly from 0 Hz to 5 Hz, meaning that 13 Hz will alias to 3 Hz. Thus, all three sinusoids will appear identical when sampled at 10 Sa/s.
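If it helps, the folding described above can be checked numerically. This is a quick sketch (my own variable names, not from the video): sample all three sinusoids at 10 Sa/s and confirm the sample sequences coincide.

```python
import numpy as np

fs = 10.0                 # sampling rate, Sa/s
n = np.arange(8)          # a few sample indices
t = n / fs                # sample instants t = nT

# The three sinusoids: 3 Hz, 7 Hz and 13 Hz
x3 = np.cos(2 * np.pi * 3 * t)
x7 = np.cos(2 * np.pi * 7 * t)
x13 = np.cos(2 * np.pi * 13 * t)

# All three sample sequences are identical
print(np.allclose(x3, x7), np.allclose(x3, x13))  # True True

# Between fs/2 and fs the alias folds down: f_alias = fs - f, so 7 Hz -> 3 Hz.
# Between fs and 3fs/2 it rises again: f_alias = f - fs, so 13 Hz -> 3 Hz.
print(fs - 7, 13 - fs)    # 3.0 3.0
```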
(I see you replied just as I was getting ready to post this. It may be useful to you or, at least, to someone else.) As for where the numbers he is using come from: the sampling points, n, occur at t = nT = 0.1n. Thus cos(6πt) = cos(6π*0.1n) = cos(0.6πn). The other two, then, become cos(1.4πn) and cos(2.6πn). In the discrete-time domain, the highest non-aliased frequency is ±π. From 1π to 2π (which is 5 Hz to 10 Hz), the aliased frequency comes down from π to 0. Since 1.4π is 0.4π into this range, the aliased frequency is 1π - 0.4π, which is 0.6π. A similar approach can be used for the other one, but that one is also readily apparent since the spectrum repeats every 2π, so 2.6π is 2π + 0.6π and the 2π goes away.
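The two identities behind the lecturer's quoted arithmetic are that cosine is even and 2π-periodic, so for integer n, cos(1.4πn) = cos(2πn - 1.4πn) = cos(0.6πn) and cos(2.6πn) = cos(2πn + 0.6πn) = cos(0.6πn). A small numerical sketch confirming this:

```python
import numpy as np

n = np.arange(10)  # integer sample indices

# cos is even and 2π-periodic, and 2πn is a whole number of periods,
# so cos(1.4πn) and cos(2.6πn) both collapse to cos(0.6πn).
a = np.cos(1.4 * np.pi * n)
b = np.cos(2.6 * np.pi * n)
c = np.cos(0.6 * np.pi * n)

print(np.allclose(a, c), np.allclose(b, c))  # True True
```

Note this only works because n is an integer; at non-sample times the three continuous-time waveforms are of course different.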
I think you are speaking mathematically. I think Bill is correct that in many practical contexts, depending on exact methods of implementation, there are benefits to oversampling. http://en.wikipedia.org/wiki/Oversampling
Thanks for the reference to the article on Oversampling. Yes, I agree with the article's three main reasons for oversampling: antialiasing, resolution, and noise. One would oversample to obtain a better estimate of the waveform, particularly if there is noise in the signal. But the mere fact that there is noise implies that the Nyquist limit is higher, and therefore one has to increase the sampling frequency in order to satisfy the sampling theorem.
I'm not sure I agree with the logic here. Noise extends up to very high bandwidth, and we normally limit the bandwidth to exclude the high-frequency noise, which there is no need to let in. Normally we use an antialiasing filter, which only lets in the noise within the bandwidth constraint anyway. Then one might choose to oversample this signal, which is band-limited well below half the oversampling rate. I expect there are normally tradeoffs in choosing the oversampled rate and the cutoff frequency and filter type for the antialiasing filter.
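To illustrate why the antialiasing filter matters (a hypothetical numerical sketch, not tied to the thread's example): any noise component above fs/2 that survives to the sampler folds back into the band of interest and becomes indistinguishable from an in-band signal.

```python
import numpy as np

fs = 10.0               # sampling rate, Sa/s
n = np.arange(20)
t = n / fs              # sample instants

# A 9 Hz "noise" tone lies above fs/2 = 5 Hz; once sampled it is
# indistinguishable from a 1 Hz tone (f_alias = fs - f = 1 Hz).
noise = np.cos(2 * np.pi * 9 * t)
alias = np.cos(2 * np.pi * 1 * t)
print(np.allclose(noise, alias))  # True

# An antialiasing filter applied *before* sampling would remove the
# 9 Hz component, so nothing would fold into the 0-5 Hz band.
```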