SerDes Mechanism for Gigabit Ethernet Media Converter

Thread Starter

Mezzer26

Hello All,

I am trying to figure out how the media converter for a copper-to-fiber Ethernet link works. To be more specific, how do the four differential pairs on the copper side get reduced to the two lanes on the fiber side? I'm guessing it's through the following process, but am not sure.

You start with a PC connected to a router containing the MAC. The MAC addresses the data, sends it through its media independent interface (MII) to a physical layer chip (PHY), which then transmits it across the media dependent interface (MDI) RJ-45 port to a target machine. Once through the cable it hits the MDI of the other PHY, which converts it back to MII, and the target machine receives the signal as usable data.
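To make that MAC-to-PHY relationship concrete, here's a tiny sketch (my own illustration, not from any particular datasheet) of decoding the clause-22 Basic Mode Status Register that a MAC polls from the PHY over the MDIO management bus; the register value is a made-up example:

```python
# Illustrative decode of the IEEE 802.3 clause-22 Basic Mode Status
# Register (BMSR, register 1). On real hardware the value comes from
# an MDIO read; 0x796D below is just a plausible made-up example.

BMSR_LINK_UP     = 1 << 2   # bit 2: link status
BMSR_AN_ABLE     = 1 << 3   # bit 3: PHY can auto-negotiate
BMSR_AN_COMPLETE = 1 << 5   # bit 5: auto-negotiation finished

def decode_bmsr(value: int) -> dict:
    """Pick out the status bits a MAC cares about."""
    return {
        "link_up":     bool(value & BMSR_LINK_UP),
        "an_capable":  bool(value & BMSR_AN_ABLE),
        "an_complete": bool(value & BMSR_AN_COMPLETE),
    }

print(decode_bmsr(0x796D))  # all three True for this example value
```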

In order to use optical fiber, however, those four lanes need to be compressed down to two (Tx, Rx). To do this, a PHY uses RGMII or SGMII instead of the basic MII used by the slower IEEE standards, in an effort to multiplex the lanes together.
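Here's roughly how the SerDes arithmetic works out for SGMII/1000BASE-X, as far as I understand it. The real 8b/10b encoder is a disparity-tracking lookup table; the stub below is not the real code, it only keeps the bit widths and rates honest:

```python
# Rough sketch of the SerDes arithmetic behind SGMII / 1000BASE-X.
# The real 8b/10b encoder (disparity tracking, K-codes) is a full
# lookup-table affair; here it is stubbed out so the parallel-to-
# serial step and the rate math stand on their own.

GMII_CLOCK_HZ = 125_000_000            # one byte per clock on GMII
PAYLOAD_BPS   = GMII_CLOCK_HZ * 8      # 1 Gbit/s of user data
LINE_BAUD     = PAYLOAD_BPS * 10 // 8  # 8b/10b overhead -> 1.25 Gbaud

def encode_8b10b_stub(byte: int) -> int:
    """Placeholder for the real 8b/10b table: pad to 10 bits."""
    return byte << 2  # NOT the real code, just the right width

def serialize(data: bytes):
    """Parallel-to-serial: shift each 10-bit symbol out MSB first."""
    for b in data:
        sym = encode_8b10b_stub(b)
        for i in reversed(range(10)):
            yield (sym >> i) & 1

bits = list(serialize(b"\x55\xD5"))
print(f"{PAYLOAD_BPS/1e9} Gb/s data -> {LINE_BAUD/1e9} Gbaud on the wire")
print(bits[:10])  # first 10-bit symbol on the serial lane
```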

So in a media converter for GigE, you'd have some sort of controller to read status flags and generate codes, power circuitry, and a PHY capable of RGMII or SGMII. A link would then have two of these media converters: you transmit/receive the optical information through RGMII/SGMII, then shunt it back to copper on the MDI side of the converter's PHY, which sends it to the MDI side of the machine's PHY. That then outputs GMII for the machine to use.

The optical drivers are housed in the SFP modules.

If anyone could confirm or correct the process I've outlined, that would be wonderful and I'd be thrilled! If I've made any confusing statements that need clarification, let me know and I'll do my best to correct them.

- Mezzer26
 
Not really sure what you are asking here, but I think you are a little confused.

Any data channel/stream has a maximum usable bandwidth, beyond which the encoding becomes too small/fast/degraded to reliably measure.

The physical media, the environment, and probably 50 other things will affect bandwidth; the most obvious issue is signal rise time, which generally gets worse as cable length increases.
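For a sense of scale, a common rule of thumb (my addition here, not something from a spec) ties usable bandwidth to the 10-90% rise time:

```python
# Rule of thumb (an assumption, not from the thread or a standard):
# usable analog bandwidth ~ 0.35 / rise time (10-90%).
def bandwidth_hz(rise_time_s: float) -> float:
    return 0.35 / rise_time_s

print(f"{bandwidth_hz(3e-9) / 1e6:.0f} MHz")  # 3 ns rise -> ~117 MHz
```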

10/100 Ethernet uses, or at least requires, a cable with a minimum performance spec with respect to bandwidth.
Of the four pairs usually in that cable, it uses two to carry two differential signals, TX and RX; the other two are unused by 10/100.

GB Ethernet is a whole other ball of wax... firstly it uses all four pairs, then it uses each pair in both directions at the same time, and after that it gets creative by going all old-school analogue on you and encoding 2 bits in every pulse using multiple voltage levels (PAM-5: four data levels, plus a fifth used for error correction).
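The bit budget works out like this (heavily simplified: the real standard scrambles and trellis-codes across all four pairs as 4D-PAM5, so the toy dibit map below is just for illustration):

```python
# Back-of-envelope view of 1000BASE-T's trick, heavily simplified.

SYMBOL_RATE = 125_000_000   # symbols/s per pair (same clock as 100BASE-TX)
BITS_PER_SYMBOL = 2         # four of the five PAM-5 levels carry data
PAIRS = 4                   # all four pairs, both directions at once

throughput = SYMBOL_RATE * BITS_PER_SYMBOL * PAIRS
print(f"{throughput / 1e9:.0f} Gbit/s")   # -> 1 Gbit/s

# Toy dibit -> voltage-level map (the fifth level, used by the real
# code for error correction, is left out of this illustration):
PAM5 = {0b00: -2, 0b01: -1, 0b10: +1, 0b11: +2}

def pair_symbols(data: bytes):
    """Split each byte into four dibits, one symbol per wire pair."""
    for byte in data:
        yield [PAM5[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0)]

print(next(pair_symbols(b"\xB4")))  # 0xB4 = 10 11 01 00 -> [1, 2, -1, -2]
```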

Good explanation: http://www.hardwaresecrets.com/how-gigabit-ethernet-works/

Fibre still uses two unidirectional signals, albeit high-bandwidth ones using high-frequency pulsed light.

And the upshot...
Some media converters just take a pulse from one physical medium and generate a pulse on a different medium or at a different level.
Some/most receive packets in one protocol and retransmit them in another.
Obviously fibre to GB Ethernet is a fundamental change, and although I don't know for sure, I suspect the converter will be receiving whole Ethernet frames (whatever they happen to carry: TCP, UDP, ICMP, ...) and then retransmitting them.
The reason I think this is that the two interfaces will be using vastly different timing and protocols, so an on-the-fly conversion will not work.
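If you want to see what "receive a whole frame, then retransmit it" means, here's a Linux-only software sketch of one direction of such a bridge (the interface names are assumptions, and a real converter does this, or a pure layer-1 version of it, in hardware, not in software like this):

```python
# Store-and-forward sketch: whole Ethernet frames in on one port,
# out the other, regardless of what protocol they carry. Linux-only
# (AF_PACKET), needs root; interface names are placeholders. The
# reverse direction would be a mirror image of bridge().
import socket

ETH_P_ALL = 0x0003  # capture every Ethernet frame, any EtherType

def open_raw(ifname: str) -> socket.socket:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                      socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    return s

def bridge(copper_if: str = "eth0", fiber_if: str = "eth1") -> None:
    """Forward complete frames from the copper port to the fiber port."""
    copper, fiber = open_raw(copper_if), open_raw(fiber_if)
    while True:
        frame = copper.recv(2048)  # one complete Ethernet frame
        fiber.send(frame)          # retransmit on the other medium

if __name__ == "__main__":
    bridge()
```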

I am sure there are many more complexities you could go and google, but those are the bones of it.
Hope it helps,
Al
 