Something basic about RS232

Discussion in 'General Electronics Chat' started by bumclouds, Jul 8, 2011.

  1. bumclouds

    Thread Starter Active Member

    May 18, 2008
    Hey folks,

    I've tried teaching myself about RS232 and serial communications recently, but there's just one last thing I can't understand. I was hoping you guys could help me.


    With reference to the above image, I think I understand the role of:
    > Idle time
    > Start bit
    > Stop bit(s)
    > and the parity bit(s) (which don't seem to be in that image).

    If the receiver starts listening during the "idle time" at the very beginning of the above message, then the data it receives from that frame will be 0b10101010 which is decimal 170.

    I guess that's all well and good, but what if the receiver starts (accidentally) listening at the LSB instead of during idle time, and then believes that the second bit of data is the LSB, and so forth.. Then it will think that the stop bit is actually "MSB" and the data packet will be interpreted as 0b10101011 which is decimal 171.

    And then after that, every single frame will be misinterpreted and the whole transmission confuzzled. Am I right or wrong?

    I've tinkered with some serial devices before, connected to Arduino boards, and I know that they do not need to be connected before the first frame is transmitted. In my experience they can be plugged in whenever you feel like it, and they will work.

    Thanks for your help explaining this. I am really confused!
  2. debjit625

    Well-Known Member

    Apr 17, 2010
That will cause an error in your transmission, and for that we have a very good error detection mechanism known as the parity bit. But remember, not every error can be eliminated; sometimes there will be errors that can't be caught even with a parity bit.

    Normally over short distances (I don't get any problems up to 4 to 6 ft) you won't see any issues, but at longer distances there tend to be many problems, mainly caused by electrical noise interfering with the signal (the transmitted data). If your transmission is over a longer distance, then add some kind of cyclic redundancy check to the data before it is sent.

    Good luck
  3. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    Actually, a single parity bit is a lousy error-detection mechanism. It can only detect (but not correct) single-bit errors. Also, if there is an even number of bit errors, the parity check will not detect the error at all!
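
    To see why, here's a minimal C sketch (mine, purely illustrative) that computes the parity of a byte and shows that flipping two bits leaves the parity unchanged, so the error sails right through:

    ```c
    #include <stdio.h>

    /* Returns 1 if the byte has an odd number of 1 bits; with even
       parity, this is the parity bit appended to the data byte. */
    static int parity_bit(unsigned char b)
    {
        int ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (b >> i) & 1;
        return ones & 1;
    }

    int main(void)
    {
        unsigned char sent = 0xAA;           /* 0b10101010, 4 ones */
        int p = parity_bit(sent);            /* transmitted parity */

        unsigned char one_err = sent ^ 0x01; /* flip one bit  */
        unsigned char two_err = sent ^ 0x03; /* flip two bits */

        printf("single-bit error detected: %s\n",
               parity_bit(one_err) != p ? "yes" : "no"); /* yes */
        printf("double-bit error detected: %s\n",
               parity_bit(two_err) != p ? "yes" : "no"); /* no  */
        return 0;
    }
    ```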

    IMHO, any good design would, at a minimum, packetize the data and tack additional error-detection info onto the end of each packet (as simple as a checksum, or more complex like a CRC). Error correction can also be implemented in such a scheme. There should also be a mechanism to identify packets, and to request retransmission if a particular packet was not received at all, or was corrupted during receipt. Many times this is not required, if the corrupt packet can be tossed into the bit bucket on the floor without harm.

    To answer the OP's question: The transmitter/receiver should automatically resynchronize with each other during an idle period between transmissions. So, it is a bad idea to have continuous transmission with no periodic breaks. Usually, data transmissions over RS-232 tend to be sporadic, so this is not normally a problem.
    bumclouds likes this.
  4. bumclouds

    Thread Starter Active Member

    May 18, 2008


    What you described as "periodic breaks" - is that a long stretch of idle time? How many bit lengths would be adequate? 2, 10, 50?
  5. eceblr2011

    New Member

    Jul 8, 2011
    If you can increase the number of wires in the interconnection, then you can use RTS and DSR as handshake signals. If not, then implementing a checksum in the data communication will help to a great extent.
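
    A checksum can be as simple as summing the data bytes into a single byte. Here's a minimal sketch (mine, not from this post; a CRC catches far more error patterns than this, but the idea is the same):

    ```c
    #include <stdio.h>

    /* 8-bit additive checksum: sum of all data bytes, truncated to one
       byte. The sender appends it to the packet; the receiver recomputes
       it over the received data and compares against the received value. */
    static unsigned char checksum8(const unsigned char *data, int len)
    {
        unsigned char sum = 0;
        for (int i = 0; i < len; i++)
            sum += data[i];
        return sum;
    }

    int main(void)
    {
        unsigned char msg[] = { 0x01, 0x02, 0x03 };
        printf("checksum = 0x%02X\n", checksum8(msg, 3)); /* 0x06 */
        return 0;
    }
    ```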
  6. bumclouds

    Thread Starter Active Member

    May 18, 2008
    I guess my question is more like this -- Once errors are detected (by checksum, parity checking, handshake signals, whatever method), does the receiver usually do something when a continuous stream of corrupted packets arrives, or does it just continue to collect the error-filled frames?
  7. THE_RB

    AAC Fanatic!

    Feb 11, 2008
    10 contiguous bits at HI is enough to prove there is no start bit, and is therefore proof that it is a pause between bytes.

    Most microcontrollers (like the one in your Arduino) have USART modules that do error checking and will detect things like framing errors, so they will do a lot of the error checking for you. The micro's datasheet should discuss the mechanisms the USART uses for synchronisation, error handling, etc.
    bumclouds likes this.
  8. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    First, handshaking has nothing to do with your original question. Generally, handshaking (either hardware or software) is used when there is a mismatch between the capabilities of the transmitter and receiver. For instance, if the transmitter can send data faster than the receiver can reliably receive it, you would use handshaking to slow the transmitter down. Software handshaking typically uses XON/XOFF characters to tell the transmitter to start/stop sending data. Hardware handshaking uses the CTS/RTS lines similarly. Don't be fooled by the term "hardware" handshaking... there generally is no "hardware" that physically starts/stops the transmission. This is usually done in code.

    Second, understand that RS-232 is a byte-oriented communication protocol, and the hardware (and the associated device driver hiding beneath your application) has no knowledge of things like packets or error detection or correction. The hardware will signal you if it sees something it doesn't like, such as a parity, framing, or overrun error. It will still receive the byte, corrupt or not, and your code will have to decide whether that byte is valid, and what to do with it if it's not.

    Since you seem interested, I will demonstrate what I do on the transmitter side. I let you figure out the receiver side:

    Code (C):
    void sendpacket(int num, byte size, char *packetdata)
    {
        int x;
        startpacket(num, size);      // clear the checksum/CRC, and send packet header
        for (x = 0; x < size; x++)
        {
            sendbyte(packetdata[x]); // update checksum, send one byte
        }
        endpacket();                 // send checksum or CRC and packet end byte
    }
    In this simple example, a call to sendpacket() initializes a checksum variable (which must be within the scope of the sendbyte() and endpacket() routines) and then sends a packet header. The packet header usually includes a packet start byte (e.g. a SLIP interface starts and ends every packet with 0xC0), a packet identifier (usually a byte or int uniquely identifying a particular packet), and a value indicating the size of the packet (the number of data bytes in the packet).
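
    As a concrete picture of what goes over the wire, here is one possible layout for that packet as C structs. The field names and sizes are my own illustration, not from the post:

    ```c
    #include <stdint.h>

    /* Illustrative on-the-wire layout for the packet described above.
       Field names and widths are assumptions, not a real protocol. */
    #define PACKET_START 0xC0   /* SLIP-style frame delimiter */
    #define PACKET_END   0xC0

    struct packet_header {
        uint8_t start;   /* always PACKET_START */
        uint8_t num;     /* packet identifier, used for ACK/resend */
        uint8_t size;    /* number of data bytes that follow */
    };
    /* ... `size` data bytes go here ... then: */
    struct packet_trailer {
        uint8_t checksum; /* or a wider CRC */
        uint8_t end;      /* always PACKET_END */
    };
    ```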

    Each call to sendbyte() simply* sends one byte of the actual data at a time, and updates the checksum/CRC with the byte value.

    Finally, endpacket() closes the packet by sending the checksum/CRC followed by a packet end byte (again, e.g. SLIP uses 0xC0).

    The receiver captures each byte individually, and extracts the packet data from the stream, checking for errors, and possibly correcting them (if error correction info is included). At that point, the receiver would send a packet acknowledge signal indicating successful receipt of the packet based on the packet identifier. If the packet is corrupted, it could also send a request to resend the packet.

    Over a period of time, if the transmitter does not receive an acknowledgment that a particular packet was successfully received, it can take it upon itself to retransmit the packet with the same identifier. This is pretty much how the Kermit protocol works.

    If you do something like this, you will find that you can pretty much ignore the hardware errors like parity, framing, and overrun, and still have 100% reliability in your transmissions.

    *Careful: if you do things right, every time you send the packet start character (e.g. 0xC0), your receiver should reset the packet reception mechanism. So, in this case, you actually cannot send the code 0xC0 in your data! You must encode the 0xC0 (and likewise, decode it at the receiver) as two bytes. I usually use an escape sequence like 0x1E 0xFA for 0xC0, and then I must also encode an actual 0x1E (because the receiver will think it's receiving an escape sequence). I'll send a 0x1E as the sequence 0x1E 0xFB.
    This encoding would take place automatically in the sendbyte() routine (not in your application).

    Any handshaking must be included in the lower level RS-232 (byte oriented) device driver, *not* in your application code!

    Hope this helps!
  9. bumclouds

    Thread Starter Active Member

    May 18, 2008
    Your post is perhaps a little bit too technical for me (I'm a beginner), but I do understand a bit better now :).

    What I understand is -- sendbyte() takes care of sending individual bytes, and the number of iterations of sendbyte() depends on the value you give for 'size'.

    endpacket() sends a checksum to the receiver so it knows if the packet just sent by sendpacket() is OK or not.

    The receiver should probably chuck away everything generated by startpacket() and endpacket(), am I right?
  10. joeyd999

    AAC Fanatic!

    Jun 6, 2011

    You can make things simpler:

    If corrupt packets can be thrown away, you can eliminate the packet identifier NUM, and then you don't have to deal with acknowledgements or requests to resend.

    Additionally, if you make the packet size constant, you can eliminate SIZE, and just send the packet start byte, followed by the proper number of data bytes, followed by the checksum and packet end byte.

    If you layer your code right, this is all pretty easy. For example, consider these as your layers, from top to bottom:

    application -> packet driver -> USART driver -> hardware

    Similar layers should exist on the receiver side.
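
    For the receiver side of this simplified scheme (fixed size, no NUM, corrupt packets tossed), a small state machine in the packet-driver layer is enough. This is my own sketch under those assumptions; the byte-escaping from the earlier post is omitted for brevity, and receive_byte() would be fed one byte at a time by the USART driver:

    ```c
    #include <stdint.h>

    /* Hypothetical receiver for a fixed-size packet:
       start byte, PAYLOAD_LEN data bytes, 8-bit checksum, end byte.
       Returns 1 exactly when a complete, checksum-valid packet has
       arrived in payload[]; corrupt packets are silently dropped. */
    #define START_BYTE  0xC0
    #define END_BYTE    0xC0
    #define PAYLOAD_LEN 4

    static uint8_t payload[PAYLOAD_LEN];

    static int receive_byte(uint8_t b)
    {
        static enum { WAIT_START, IN_DATA, WAIT_CHECK, WAIT_END }
            state = WAIT_START;
        static uint8_t idx, sum, rx_sum;

        switch (state) {
        case WAIT_START:                 /* hunt for the frame delimiter */
            if (b == START_BYTE) { idx = 0; sum = 0; state = IN_DATA; }
            break;
        case IN_DATA:                    /* accumulate data and checksum */
            payload[idx++] = b;
            sum += b;
            if (idx == PAYLOAD_LEN) state = WAIT_CHECK;
            break;
        case WAIT_CHECK:                 /* capture the sender's checksum */
            rx_sum = b;
            state = WAIT_END;
            break;
        case WAIT_END:                   /* resync either way */
            state = WAIT_START;
            if (b == END_BYTE && rx_sum == sum)
                return 1;                /* good packet in payload[] */
            break;                       /* corrupt: into the bit bucket */
        }
        return 0;
    }
    ```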
  11. russ_hensel

    Distinguished Member

    Jan 11, 2009
    All of the above aside, minimal RS232 often works quite well: just 3 wires and a protocol with no error detection or correction. A lot depends on how robust a connection is needed. Good enough is in the eye of the beholder.
  12. joeyd999

    AAC Fanatic!

    Jun 6, 2011
    I develop products that actually have to work in the field under a broad range of (unpredictable) conditions. If you are putting something together simply as a hobby, or to experiment with RS232 communications, I agree with your comment. OTOH, I would not ship anything not capable of robust communications. "Quite well" is not good enough.