I need your help in understanding how to simulate a timing recovery algorithm. I have already read a lot of literature, but I still have doubts about how it works.
Let me briefly explain my simulation:
- Create a test signal:
- BPSK, 1000 symbols, 16 samples per symbol
- Raised cosine filter
- Compute the amplitude and phase of each sample (via CORDIC)
- Split the samples into N (= 1000 / (32*16)) blocks of size 32 x 16
- For each "sample" vector (32 x 1), find the sample with the maximum amplitude and record its index. Result: two 32 x N arrays, one containing the max-amplitude samples (the so-called on-time samples) and one containing their indices.
- Select the sample at the time index one greater than that of the on-time sample as the late sample, and the sample at the time index one less as the early sample.
- Compute an error (= late sample - early sample)
- Depending on the sign of the error, I shift the sampling point one sample earlier or later
- And after that I don’t know what my next step should be.
- What should I do with the computed error?
- If I already pick the sample with the maximum amplitude, do I still need a timing recovery algorithm at the receiver?
- How can I compensate for a phase offset in this scheme?
- Guys, does anyone have an example simulation in MATLAB or in C?