Dear All,
I have been looking in great detail at some HD protocols lately and have a question about the HDMI specification's allowable inter-channel skew requirements.
Effectively, HDMI allows (assuming a 165 MHz pixel clock, i.e. 1920x1080p) up to 3 ns of skew (actually more, as I'm ignoring source skew and only considering cable + sink [i.e. receiver]). A 165 MHz pixel clock implies a per-channel data rate of 165 MHz * 10 (the TMDS character length), i.e. 1.65 Gbit/s. TMDS uses the pixel clock to help 'synchronize' the clock-recovery circuit, which operates on all 3 data transmission channels.
3 ns * 1.65 Gbit/s ≈ 5 bits.
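The arithmetic above can be checked with a quick back-of-envelope script (the 165 MHz clock and 3 ns budget are just the numbers from my question, not normative values):

```python
# Back-of-envelope skew budget: how many TMDS bit periods fit into 3 ns
# at a 165 MHz pixel clock, given 10 bits per pixel per lane.

PIXEL_CLOCK_HZ = 165e6   # assumed pixel clock (single-link maximum)
BITS_PER_CHAR = 10       # TMDS encodes 8 data bits into a 10-bit character
SKEW_S = 3e-9            # assumed allowed cable + sink skew

bit_rate = PIXEL_CLOCK_HZ * BITS_PER_CHAR   # 1.65 Gbit/s per lane
bit_period_s = 1.0 / bit_rate               # ~606 ps per bit
skew_in_bits = SKEW_S * bit_rate            # ~4.95, i.e. about 5 bits

print(f"bit rate     : {bit_rate / 1e9:.2f} Gbit/s")
print(f"bit period   : {bit_period_s * 1e12:.0f} ps")
print(f"skew in bits : {skew_in_bits:.2f}")
```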
So the implication is that between Data CH1, CH2 and CH3 there can be up to a 5-bit misalignment due to skew. That doesn't seem unreasonable, but I can't find anywhere in the spec how this is to be managed by, say, someone implementing a receiver module in an FPGA. Perhaps it's just trivial, and the limit is specified to guarantee no more than a 5-bit (out of 10 bits) misalignment, so that the character boundary always falls within the same half of the 10-bit stream on all channels, allowing for easy synchronization.
Anywho, just curious if anybody knows the specifics of why this is allowed in such protocols (i.e. HDMI/DVI).
Cheers,
Skeeb