how fast does a gigabit NIC sample input?

Discussion in 'General Electronics Chat' started by strantor, Apr 15, 2014.

  1. strantor

    Thread Starter AAC Fanatic!

    Oct 3, 2010
    4,302
    1,988
    I've looked at datasheets for various Ethernet cards and such, as well as googling phrases similar to the title of this thread, and found no answers. Found plenty of bits/sec specs, but I want to know how fast the hardware actually scans the input. I figure 10 Gb/s maxed out equates to a 5 GHz square wave (101010...), so the card would (wild guess) need to sample at least 10x faster than that, at 50 GHz. But that sounds very high, so I question myself. Am I incorrect in assuming that a NIC samples the signal like an oscope would? How does it work?
     
  2. AnalogKid

    Distinguished Member

    Aug 1, 2013
    4,515
    1,246
    Basic flaw in your thinking - the card doesn't "sample" the input. It isn't an A/D converter, changing an analog input waveform into digital data like a sound card does with a microphone input. The input is serial digital data, and is equalized, demodulated, deserialized (yes, that's the term), thumped on with error correction and decompression, and fed to the system.

    10Gb Ethernet uses all 8 wires as 4 signal pairs, two pairs out and two pairs back for full duplex communication. CAT-6 cable is tested to 500 MHz. Even with 2 independent paths each way, that's 10% of the net bandwidth required if you use simple serial square waves, so they don't. The signal is not a simple square wave; it is 16-level PAM (pulse amplitude modulation), with two different kinds of error prevention coding.
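    To make the "16-level PAM" idea concrete, here's a minimal sketch of packing bits four at a time into 16 amplitude levels. This is an illustration only, not the actual 10GBASE-T line code, which layers DSQ128 constellation mapping, LDPC coding, and scrambling on top of the PAM levels:

```python
# Sketch only: 4 bits per symbol mapped to 16 evenly spaced levels.
# Real 10GBASE-T adds DSQ128 mapping, LDPC coding, and scrambling on top.

def pam16_encode(bits):
    """Pack bits 4 at a time into one of 16 symmetric levels -15..+15."""
    assert len(bits) % 4 == 0
    symbols = []
    for i in range(0, len(bits), 4):
        value = int("".join(str(b) for b in bits[i:i + 4]), 2)  # 0..15
        symbols.append(2 * value - 15)  # -15, -13, ..., +13, +15
    return symbols

def pam16_decode(symbols):
    """Invert the mapping: one level back to 4 bits."""
    bits = []
    for s in symbols:
        value = (s + 15) // 2
        bits.extend(int(b) for b in format(value, "04b"))
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
print(pam16_encode(data))  # -> [7, -11]
assert pam16_decode(pam16_encode(data)) == data
```

    Four bits per symbol means the symbol rate on each pair is a quarter of that pair's bit rate, which is how the line frequency is kept within what the cable can actually carry.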

    ak
     
  3. #12

    Expert

    Nov 30, 2010
    16,247
    6,744
    Those modulation schemes are a college level course all by themselves! 16 levels of amplitude? Then there is phase shifting! Makes an old analog guy dizzy. :D
     
  4. strantor

    Thread Starter AAC Fanatic!

    Oct 3, 2010
    4,302
    1,988

    WHAT?

    Pardon me while I focus my curiosity elsewhere...


    (thank you)
     
  5. AnalogKid

    Distinguished Member

    Aug 1, 2013
    4,515
    1,246
    Oh Count Rare...

    We see it as those digital kids having to come back home to the folks because the real world turned out to be a bit more complex than they thought.

    The old analog guys *invented* simultaneous amplitude and phase modulation. It's called the color subcarrier.

    ak
     
  6. #12

    Expert

    Nov 30, 2010
    16,247
    6,744
    Somehow, the color sub-carrier makes sense to me because it's analog, but 256 QAM makes me wonder how anybody would go about deciphering each bit of information. It must be insane to even consider an analog solution. Probably there's a dedicated chip to do the decoding.
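    The "dedicated chip" guess is right in spirit: a digital demodulator mostly just picks the nearest of the 256 constellation points. A hedged sketch of that decision step, assuming an idealized 16x16 grid of levels at -15, -13, ..., +15 on each axis (a simplification; real receivers add equalization and coding first):

```python
# Sketch: hard-decision slicing for 256-QAM (16 I levels x 16 Q levels).
# Each noisy (I, Q) sample is rounded to the nearest ideal grid point.

def qam256_decide(i_sample, q_sample):
    """Round each axis to the nearest of 16 levels -15, -13, ..., +15,
    then recombine the two 4-bit indices into one 8-bit symbol."""
    def nearest_level_index(x):
        idx = round((x + 15) / 2)   # ideal levels sit at 2*idx - 15
        return min(max(idx, 0), 15)  # clamp to the 0..15 index range
    return nearest_level_index(i_sample) * 16 + nearest_level_index(q_sample)

# A noisy sample near I = +5 (index 10), Q = -13 (index 1):
print(qam256_decide(5.4, -12.7))  # -> 161
```

    The insane-sounding part is done by brute arithmetic: the chip doesn't "decipher" anything cleverly, it just measures two voltages and rounds.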
     
  7. geko

    New Member

    Sep 18, 2008
    9
    4
    Both 1Gb and 10Gb Ethernet use all 4 pairs for TX and RX simultaneously - it's not split as separate TX and RX pairs like 10/100Mb is.
     
  8. AnalogKid

    Distinguished Member

    Aug 1, 2013
    4,515
    1,246
    In-phase and quadrature modulation axes and double-sideband suppressed carrier don't make sense to many, but if you go through the steps the original designers did, you can see how they got there and why, and that helps. Same for the digital goop. Having watched each step of the evolution develop, I have the luxury of not having to learn it all in one blast. Plus there's that whole amplitude thing.

    Trivia question - when was the world's first digital phone call?

    ak
     
  9. strantor

    Thread Starter AAC Fanatic!

    Oct 3, 2010
    4,302
    1,988
    See, I was thinking there was probably a way to encode serial data with PWM. I was thinking, if the receiver of the serial data sampled fast enough (could take enough samples during one period of the clock pulse), it could register each ON time of the signal line and compare it to the ON time of the clock pulse. If, say, the ON time of the signal was 86.71875% of that of the clock, then it would correspond to 222/256, which corresponds to a byte of 11011110. So a byte could be transmitted in the period that only a bit is transmitted presently.

    But if the existing receivers are only fast enough to sample once per clock cycle to catch a simple "high" or "low", then to encode a byte with PWM we would have to slow down the transmission speed by a factor of 256, making it 8 times SLOWER, not 8 times FASTER. So to find out the sampling rate of current data transmission, I decided to start at the top: gigabit ethernet.

    But apparently it doesn't even sample, which I cannot comprehend. As you can see from the above text, my mind is 100% entrenched in this idea of sampling. I won't even ask for an explanation, because I don't care enough to do the research myself. It was just a fleeting idea which I am abandoning to focus on things I care more about.
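    For what it's worth, the arithmetic in that idea checks out. A small sketch of the proposed scheme (hypothetical encoding, not anything any standard uses): map a byte to an ON-time fraction of the clock period, and decode by measuring that fraction.

```python
# Sketch of the PWM-byte idea above: one byte per clock period, encoded
# as the duty cycle. Hypothetical scheme, just verifying the numbers.

def pwm_encode(byte):
    """Map a byte value 0..255 to an ON-time fraction of the clock period."""
    return byte / 256

def pwm_decode(duty_fraction):
    """Recover the byte from a measured duty-cycle fraction."""
    return round(duty_fraction * 256)

duty = pwm_encode(0b11011110)  # byte 222
print(duty)                    # -> 0.8671875, the 86.71875% from the post
assert pwm_decode(duty) == 222
```

    It also makes the catch visible: telling apart 256 widths means resolving 1/256 of a period, so the receiver needs roughly 256x the timing resolution of a plain high/low detector.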
     
  10. AnalogKid

    Distinguished Member

    Aug 1, 2013
    4,515
    1,246
    Your idea of encoding multiple bits into a single pulse width is merely a different version of encoding multiple bits into a single pulse height. 16-level PAM encodes four bits into 16 levels, very much like your idea of encoding 8 bits into 256 different widths. The reason that is not a preferred method is that 256 widths are too difficult to differentiate in the presence of noise.
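    The noise argument is easy to put numbers on. Assuming evenly spaced symbol values over the same total swing (a simplification), the margin before noise pushes one symbol into its neighbor shrinks as the symbol count grows:

```python
# Decision margin per symbol for uniformly spaced values over a fixed swing.
# More values in the same swing = less room between neighbors.

def decision_margin(num_values, peak_to_peak=1.0):
    """Half the spacing between adjacent symbol values."""
    return (peak_to_peak / (num_values - 1)) / 2

print(decision_margin(16))   # PAM-16: ~0.033 of the swing per decision
print(decision_margin(256))  # 256 widths: ~0.002, far easier to corrupt
```

    Going from 16 values to 256 costs roughly 17x in margin, which is why the extra bits per symbol aren't free.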

    ak
     
    Last edited: Apr 16, 2014
  11. #12

    Expert

    Nov 30, 2010
    16,247
    6,744
    There is a simple phase shift keying that can go pretty fast.
    Stays the same = 0
    reverses phase = 1
    This can be done in analog. I know because I've worked on these circuits.
    I can't remember the circuit (from 1978), but you set up something to expect a sine wave; a phase reversal throws an error, and that error signal is a one.
    This is 00100100
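    That stay-the-same = 0 / reverse-phase = 1 scheme is differential BPSK. A rough sketch in code, assuming a sampled cosine carrier and a simple correlate-against-the-previous-period detector (the digital analog of the error-throwing analog circuit described above):

```python
# Differential BPSK sketch: carrier phase holds for a 0, flips 180 deg for a 1.
import math

def dbpsk_modulate(bits, samples_per_bit=8):
    """Return carrier samples; a leading reference period is prepended
    so the receiver has something to compare the first bit against."""
    phase, out = 0.0, []
    for bit in [0] + list(bits):
        if bit:
            phase += math.pi  # 180-degree phase reversal
        out.extend(math.cos(2 * math.pi * n / samples_per_bit + phase)
                   for n in range(samples_per_bit))
    return out

def dbpsk_demodulate(samples, samples_per_bit=8):
    """Correlate each bit period with the previous one:
    negative correlation means the phase reversed, i.e. a 1."""
    bits, prev = [], None
    for i in range(0, len(samples), samples_per_bit):
        chunk = samples[i:i + samples_per_bit]
        if prev is not None:
            corr = sum(a * b for a, b in zip(prev, chunk))
            bits.append(1 if corr < 0 else 0)
        prev = chunk
    return bits

data = [0, 0, 1, 0, 0, 1, 0, 0]  # the 00100100 pattern from the post
assert dbpsk_demodulate(dbpsk_modulate(data)) == data
```

    Because only the *change* between periods is detected, the receiver never needs an absolute phase reference, which is what makes the trick workable in analog too.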
     
    • Attached: PSK.png