Baud error question

Discussion in 'General Electronics Chat' started by blah2222, May 24, 2013.

  1. blah2222

    Thread Starter Well-Known Member

    May 3, 2010
    554
    33
    Hi,

    As an example for this question, I am using an oscillator with a frequency of 8 MHz and I desire a baud of 9600 to communicate with my laptop's RealTerm using 8-bit asynchronous on a PIC18F4550.

    The equation to calculate the SPBRG term for my specific config settings to set the baud rate is:

    SPBRG = \frac{F_{osc}}{64*Baud} - 1 = \frac{8000000}{64*9600} - 1 = 12.02 \approx 12


    Baud_{actual} = \frac{F_{osc}}{64*(SPBRG + 1)} = \frac{8000000}{64*(12 + 1)} = 9615.38 \approx 9615


    Baud_{error} = \frac{Baud_{actual} - Baud}{Baud}*100% = \frac{9615 - 9600}{9600}*100% = 0.16%
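In code, the calculation works out like this (a quick sketch, assuming the divide-by-64 low-speed asynchronous formula from the datasheet, as in the equations above):

```python
# SPBRG arithmetic for low-speed asynchronous mode (divide-by-64).
F_OSC = 8_000_000   # oscillator frequency in Hz
BAUD_DESIRED = 9600

spbrg = round(F_OSC / (64 * BAUD_DESIRED) - 1)          # 12.02 -> 12
baud_actual = F_OSC / (64 * (spbrg + 1))                # 9615.38...
error_pct = (baud_actual - BAUD_DESIRED) / BAUD_DESIRED * 100

print(spbrg, round(baud_actual), round(error_pct, 2))   # 12 9615 0.16
```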

    My question is, what does this error actually refer to while my PIC is communicating with my laptop? The characters show up correctly every time with no errors, so is it only over a really long operation time that one character might show up incorrectly as the error % adds up?

    On the other hand, if my actual baud rate is sending 15 more symbols every second, are those all lost on the host end?

    Thanks
     
  2. kubeek

    AAC Fanatic!

    Sep 20, 2005
    4,670
    804
    The error rate refers to how far off the edges of the signal are with respect to the proper baud rate. A receiver usually samples the data at a 3x or 4x faster rate in order to recover them properly.
    The main thing that would propagate the timing error into a data error is the length of the character, and since it is just 10 bits there should be no problem up to around a 5% mismatch, I think. Between characters there is the stop bit and then the start bit, which usually allows the receiver to resynchronise, so the error accumulates from zero again.
     
  3. blah2222

    Thread Starter Well-Known Member

    May 3, 2010
    554
    33
    Thanks for the reply. Maybe I am just unclear of what is happening on the receiving host side of things. Here is my take:

    The receiver and transmitter agree upon a baud rate and data length, and the protocol also implies the use of start and stop bits.

    In my case:

    Baud rate = 9600 symbols/second
    Data length = 8-bits (no parity)
    Start bit = True
    Stop bit = True

    Based on this it knows that each symbol will be 10 bits in length which gives a bit rate of 96000 bits/second.

    How is the receiver able to determine the correct clock to acquire each data bit and what is the mechanism that it uses to synchronize with the start and stop bits using only the above information?
     
  4. kubeek

    AAC Fanatic!

    Sep 20, 2005
    4,670
    804
    http://en.wikipedia.org/wiki/File:Rs232_oscilloscope_trace.svg
    The serial link has two levels, mark and space; which one is high depends on the actual protocol, and it is different between RS-232 and a UART.
    When the transmitter is idle, its output is in the idle state, which is also the level of the stop bit. When the transmission starts, the first symbol sent is the start bit, which has the space level. Then the 8 data bits are transmitted, and the 10th bit is the stop bit. There are different kinds of data formats; you could for example have 9 data bits, a parity bit, and 1.5 or 2 stop bits.

    The receiver has a timeout after which it ends up in the idle state and waits for the start bit. When it sees the edge of the start bit, it starts sampling the incoming data according to the baud rate and stuffs the bits into the data register. At the tenth bit it checks that the stop bit is there, and if it isn't, it tells you that there was a framing error and starts waiting for the start bit edge again. Otherwise it puts the received byte on the output.

    The transmission usually doesn't have the bytes stuffed right next to each other, so it is easier to resynchronise after an error.
     
  5. nigelwright7557

    Senior Member

    May 10, 2008
    487
    71
    It's 9600 bits per second, not symbols.
    It will be 960 symbols per second.
     
  6. Papabravo

    Expert

    Feb 24, 2006
    10,148
    1,791
    This is not quite right. The correct terminology is "Mark" for a 1, and "Space" for a 0.

    When the serial data line is "Idle" it is in the "Mark" state. When a character is transmitted, the START bit makes a "Mark" to "Space" transition. The receiver will have a clock that is running at 16 times the baud rate and it waits 8 clock times to establish the "middle" of the Start bit. Now it waits 16 clock times to get to the middle of the least significant data bit. It samples the value and shifts it into a serial in parallel out shift register. This process is repeated seven more times for an 8-bit character. The Stop bit is sent and received as a "Mark". If the most significant bit of the character was a 0 there will be a "Space" to "Mark" transition. If it was a 1 then there will be no transition.
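The 16x-oversampled receive described above can be sketched in code (a minimal simulation, not actual UART hardware; `make_frame` and `receive` are made-up helper names): find the mark-to-space edge, wait 8 sample ticks to the middle of the start bit, then sample every 16 ticks. Line levels here: 1 = mark (idle), 0 = space.

```python
# Simulate a 16x-oversampled asynchronous receiver on a sampled line.
def make_frame(byte, samples_per_bit=16):
    # start bit (space), 8 data bits LSB-first, stop bit (mark), with idle padding
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    return [1] * 8 + [b for bit in bits for b in [bit] * samples_per_bit] + [1] * 8

def receive(samples, samples_per_bit=16):
    # find the mark->space edge that marks the start bit
    edge = next(i for i in range(1, len(samples))
                if samples[i - 1] == 1 and samples[i] == 0)
    mid_start = edge + samples_per_bit // 2          # middle of the start bit
    byte = 0
    for n in range(8):                               # sample each data bit mid-cell
        t = mid_start + (n + 1) * samples_per_bit
        byte |= samples[t] << n
    stop_ok = samples[mid_start + 9 * samples_per_bit] == 1  # stop bit must be mark
    return byte, stop_ok

print(receive(make_frame(0x41)))   # (65, True)
```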

    Sometimes you want a bit of extra space between characters so you configure the transmitter to put out two Stop bits. Receivers should always assume there will be only one Stop bit.

    Finally, if the receiver clock is running fast, then the sample point moves closer and closer to the beginning of the bit cell. If the receiver clock is running slow, then the sample point moves toward the end of the bit cell. An ERROR occurs when the sample point of a given bit moves into the previous bit or the subsequent bit. Each character is a new ball game, so small errors are not cumulative. At 9600 baud you should be able to tolerate a 2% error. At 115,200 baud your baud configuration registers may not have the granularity required to get within 2% of the correct baud rate. In this case it is not uncommon to choose a crystal frequency, such as 7.3728 MHz, that will let you get the baud rate dead nuts accurate. You give up a bit of system speed for reliable serial data transmission.
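The crystal-choice point is easy to check numerically (a hedged sketch; it assumes a divide-by-16 high-speed BRG formula, which is one of the PIC's modes, and compares a generic 8 MHz crystal against 7.3728 MHz at 115200 baud):

```python
# Compare achievable baud error for two crystals at 115200 baud,
# assuming a divide-by-16 baud rate generator.
def baud_error(f_osc, baud, divisor=16):
    spbrg = round(f_osc / (divisor * baud) - 1)
    actual = f_osc / (divisor * (spbrg + 1))
    return (actual - baud) / baud * 100

print(round(baud_error(8_000_000, 115200), 2))     # 8.51 -> far too much
print(round(baud_error(7_372_800, 115200), 2))     # 0.0  -> dead accurate
```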
     
    Last edited: May 24, 2013
    blah2222 likes this.
  7. blah2222

    Thread Starter Well-Known Member

    May 3, 2010
    554
    33
    Hm, but the baud rate is set to be 9600 and its units are symbols/second, which is what the COM port is agreeing to receive...
     
  8. kubeek

    AAC Fanatic!

    Sep 20, 2005
    4,670
    804
    Correct, symbols are the individual bits (data bits, start and stop bits, etc.), and the whole 10 symbols together can be called a character or something similar.
     
    blah2222 likes this.
  9. blah2222

    Thread Starter Well-Known Member

    May 3, 2010
    554
    33
    Thanks for your post!

    Okay but what clock frequency does the receiver run at that equates to a baud of 16*9600 = 153600 symbols/second?

    Is the frequency just 153600(symbols/s)*10(bits/symbol) = 1.536 Mbits/s = 1.536 MHz??
     
  10. kubeek

    AAC Fanatic!

    Sep 20, 2005
    4,670
    804
    That's the internal frequency of the receiver's decoder; the baud rate is still the same, as the decoder is designed to decode that many symbols per second.
    16*9600 = 153600 samples per second, which after some filtering etc. becomes 9600 symbols per second, which with 8N1 coding becomes 960 bytes per second.
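As arithmetic (a trivial sketch, under the 8N1 assumption of 10 symbols per character):

```python
# Rate chain inside a 16x-oversampling receiver at 9600 baud with 8N1 framing.
baud = 9600
sample_rate = 16 * baud          # 153600 samples/s inside the receiver
chars_per_sec = baud / 10        # 960 characters (bytes) per second with 8N1
print(sample_rate, chars_per_sec)   # 153600 960.0
```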

    Also, some decoders use different hardware, so the sampling rate multiplier could be different.
     
    blah2222 likes this.
  11. blah2222

    Thread Starter Well-Known Member

    May 3, 2010
    554
    33
    Ah, so I found an old textbook that cleared this up with somewhat new terminology. Two things that I did not understand but are clear now:

    1) Symbol != Frame
    2) Binary Modulation vs. N-ary Modulation (N > 2)


    With those understood, I now see that a symbol just refers to how many bits are carried within one signalling interval. In the case of a PIC that would be 1 bit per symbol (binary modulation), whereas a modem could pack several bits into each symbol (N-ary modulation). I thought it was referring to 10 bits per symbol, but it is actually 10 bits per frame, and the frame contains 10 symbols.

    Using the corrected terminology for my example:

    Clock = 8 MHz
    Actual Baud Rate = 9615 symbols/second
    Desired Baud Rate = 9600 symbols/second
    Actual Bit Rate = 9615 bits/second
    Desired Bit Rate = 9600 bits/second
    Baud/Bit error = [(9600-9615)/9600]*100% = 0.16%

    The receiver uses a multiple of the desired bit rate to sample the incoming frame (usually 16x). The receiver synchronizes itself by locating the start bit transition (mark to space). After locating the start bit, it waits 8 sampling clock cycles until it reaches the middle of the start bit, capturing its value. Data bits and stop bit(s) are then captured every 16 sampling clock cycles.

    Receiver Sampling Frequency = 16*9600 bits/second = 153600 samples/second

    Capturing the bits in the middle allows the accumulated timing error at the last sampled bit to reach up to half a bit period (50%). Spread over the 10 bits of a frame, that gives a 50%/10 = 5% error margin per bit. Taking worst-case rise/fall times and phase mismatches into account, it really works out to around a 3% margin.

    Since the accumulated Baud/Bit Error over a frame is (0.16%/bit)*(10 bits/frame) = 1.6%, well inside that margin, this desired baud rate will work well.

    Posting this in the event that it helps someone else, but really to make sure I know what's going on.

    Thanks again all!
     
    Last edited: May 25, 2013
  12. THE_RB

    AAC Fanatic!

    Feb 11, 2008
    5,435
    1,305
    Another good tip is to do testing.

    You may find the laptop baudrates are not perfectly matching the standard 9600 either.

    To test, you adjust your BRG value up and down to find the limits where transmission starts to fail. Then finally settle for the BRG value right in the middle of the high and low fail points. :)
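That sweep could be sketched like this (a hypothetical test harness, not a real API: `loopback_ok(brg)` stands in for your own routine that programs the BRG register, sends a known string, and reports whether it arrived intact):

```python
# Walk the BRG value down and up from nominal until transmission fails,
# then settle on the midpoint between the two fail limits.
def find_best_brg(nominal, loopback_ok):
    lo = nominal
    while lo > 0 and loopback_ok(lo - 1):
        lo -= 1                      # just above the low fail point
    hi = nominal
    while loopback_ok(hi + 1):
        hi += 1                      # just below the high fail point
    return (lo + hi) // 2            # settle in the middle
```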
     
    blah2222 likes this.