serial communication

Discussion in 'Homework Help' started by jan jaap, Sep 4, 2012.

  1. jan jaap

    Thread Starter New Member

    Sep 4, 2012
    2
    0
    Hello

    I have a question about serial communication on an 8051 microcontroller. I know you can send 8-bit values using the serial port, but apparently you can also send 16-bit values. Can someone tell me how this works? I can't find any information about it.

    Yours faithfully

    Jan
     
  2. BMorse

    Senior Member

    Sep 26, 2009
    2,675
    234
    It usually gets sent 8 bits at a time... you'll have to split the 16 bits into two 8-bit variables, send them out, and put them "back together" on the receiving side......
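    A minimal C sketch of this split-and-reassemble, assuming a hypothetical send_byte() callback in place of the 8051's actual SBUF write (the callback and byte order are illustrative choices, not from the post):

```c
#include <stdint.h>

/* Split a 16-bit value into two bytes for transmission. send_byte() is a
   stand-in for the real output routine (on an 8051, writing SBUF and
   waiting for the TI flag). */
void send_u16(uint16_t value, void (*send_byte)(uint8_t))
{
    send_byte((uint8_t)(value >> 8));   /* high byte first */
    send_byte((uint8_t)(value & 0xFF)); /* then low byte */
}

/* Reassemble on the receiving side from the two bytes, in the same order. */
uint16_t recv_u16(uint8_t high, uint8_t low)
{
    return ((uint16_t)high << 8) | low;
}
```

    The splitting and reassembly logic is the same whatever the actual transmit routine looks like; only the agreed byte order matters.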
     
    absf likes this.
  3. Papabravo

    Expert

    Feb 24, 2006
    10,140
    1,790
    By itself, an 8051 is limited to 8- or 9-bit characters. If you wanted a UART that sends and receives wide (i.e., 16-bit) characters, you would need to build one in an FPGA. You could throw in an 8051 core as a bonus.
     
    absf likes this.
  4. MrChips

    Moderator

    Oct 2, 2009
    12,440
    3,361
    Any serial communications channel can send thousands of bits. You only have to know how to package the data.
     
    absf likes this.
  5. Papabravo

    Expert

    Feb 24, 2006
    10,140
    1,790
    Asynchronous communication requires the Tx and Rx clocks to be resynchronized about every 8-10 bit times. As the bit rate increases, this requirement becomes more severe; as the number of bits per character increases, it becomes more severe still. This is partly why there are no 16-bit UARTs.

    On the other hand a CAN peripheral can be viewed as a 64-bit Half-Duplex SRT (Synchronous Receiver Transmitter). There the clocks can be adjusted on a bit-by-bit basis. Would that float your boat?
     
    Last edited: Sep 5, 2012
    jan jaap and absf like this.
  6. WBahn

    Moderator

    Mar 31, 2012
    17,743
    4,792
    I don't remember where, but I've seen an async UART (custom) that was 32 bits per packet.

    I think the 12-bit rule (as I've sometimes seen it referred to) comes from RS-232 (and its kin/predecessors), when RC oscillators were frequently used, at least on the modem side of the link, and you might settle for hitting the target frequency within 1% or a bit better.

    The general rule of thumb is that the oscillator mismatch between the sender and receiver can't exceed roughly one part in ten times the packet length. With 100 ppm oscillators, you could conceivably hope to get 500-bit packets to work. I don't know of anyone who has ever tried it in an actual system, but I'm sure it's been explored experimentally.
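    As a quick check of the arithmetic behind this rule of thumb (the function name and the factor-of-two worst case for two free-running oscillators are my own framing, not from the post):

```c
#include <stdint.h>

/* Rule of thumb: the total clock mismatch must stay under one part in
   (10 x frame length in bits). With free-running oscillators at both
   ends, the worst-case mismatch is the sum of the two tolerances. */
double max_frame_bits(double ppm_per_side)
{
    double worst_mismatch = 2.0 * ppm_per_side * 1e-6; /* both ends drift */
    return 1.0 / (10.0 * worst_mismatch);
}
```

    With 100 ppm per side this comes out to roughly 500 bits, matching the 500-bit estimate above.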

    As an aside, in our work we routinely used not 500-bit, but 500,000 (and even 2 million bit) packets across a wireless link that was completely asynchronous with 100ppm oscillators at each end. This was possible as a side-effect of the coding (combined source/channel) algorithms we developed as part of our research into crafting a jam-resistance signal that didn't require any shared secrets between sender and receiver. Someday this may result in braindead distributed networks being able to transmit data in the blind using cheap components while still enabling high network throughput.
     
  7. Papabravo

    Expert

    Feb 24, 2006
    10,140
    1,790
    There are other factors that are not evident from your post. Coding plays a big part in the ability to send and receive data, both for controlling run length and for embedding clock and data together. The RS-232 specification per se does not deal with data rates, only voltage levels. Single-ended NRZ data has inherent limitations with respect to asynchronous communications; other methods, less so.
     
  8. WBahn

    Moderator

    Mar 31, 2012
    17,743
    4,792
    Is it still asynchronous if you have an embedded clock? I've never considered that to be the case, but I really don't know.

    The one-part-in-10x rule of thumb is generic and applies to asynchronous streams in which neither side (but principally the receiver) makes any attempt to servo its sampling points to match the other side (i.e., recover a clock). It only looks at how much mismatch you can tolerate in the worst case for an N-bit frame before you get a framing error. Framing errors are really, really bad and difficult to recover from. The only reason our system was able to ignore (well, mercilessly abuse is probably a better description) the rule of thumb is that our coding scheme results in a signal that is highly tolerant of framing errors. I'm more than happy to go into detail if anyone is interested, but that needs to be in another thread.
     
  9. Papabravo

    Expert

    Feb 24, 2006
    10,140
    1,790
    All methods have aspects of both synchronous and asynchronous behavior. The distinctions are in where and how the receiver is able to establish or re-establish sync with the transmitter. Using a UART it is at the start of every character. In CAN it is the start of every frame. Using codes with embedded clocks it is every bit. So I find the distinction blurry to say the least.
     
    Last edited: Sep 5, 2012
  10. jan jaap

    Thread Starter New Member

    Sep 4, 2012
    2
    0
    Thank you for your reply, but I don't think that's the solution. Can you maybe use the I/O ports in a way that would make this possible?
     
  11. Papabravo

    Expert

    Feb 24, 2006
    10,140
    1,790
    You can certainly program the I/O ports on any microcontroller to bit-bang any kind of protocol desired, at the expense of having to do nearly everything in firmware, with very little left over for other tasks.

    Exactly why is CAN not a viable solution for wide-character (16-bit) transfers? Every frame could carry 16 data bits. So what's the problem?
     
  12. BMorse

    Senior Member

    Sep 26, 2009
    2,675
    234
    You can do this yourself by "bit banging" the serial transfer out of any I/O pin on the 8051..... but as was already mentioned, you will sacrifice the CPU time available for other tasks..... take a look at this to see if it may solve your issue >> http://www.dnatechindia.com/Tutorial/8051-Tutorial/BIT-BANGING.html
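    To make the bit-banging concrete, here is a sketch (mine, not from the linked tutorial) of just the frame a software UART must shift out for 8N1; it fills an array with line levels instead of toggling a real port pin:

```c
#include <stdint.h>

/* Bit sequence a bit-banged UART drives on a pin for one 8N1 frame:
   the line idles high, then one low start bit, 8 data bits LSB first,
   and one high stop bit. A real 8051 version would write each level
   to a port pin and busy-wait one bit period between writes. */
int uart_frame_bits(uint8_t data, uint8_t out[10])
{
    int i;
    out[0] = 0;                        /* start bit (line pulled low) */
    for (i = 0; i < 8; i++)
        out[1 + i] = (data >> i) & 1;  /* data bits, LSB first */
    out[9] = 1;                        /* stop bit (line back high) */
    return 10;                         /* bits per 8N1 frame */
}
```

    The timing loop is the hard part in practice; the per-bit delay has to match the agreed baud rate to within the clock-mismatch tolerance discussed earlier in the thread.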
     
  13. t06afre

    AAC Fanatic!

    May 11, 2009
    5,939
    1,222
    I think your problem is well answered here. But perhaps your question is more about how to access the individual bytes of a 16-bit or wider data type?
     
  14. BMorse

    Senior Member

    Sep 26, 2009
    2,675
    234
    I do believe that is what the OP is really after..... I ran into a similar issue trying to get a 16-bit value across one UART to another, and I basically had to split the 16-bit variable into two bytes, send them across, then put them back together on the receiving side....... (Although it is a bit more complex than it sounds, depending on the baud rate and the amount of data being transferred.) In my case it was just a handful of bytes every second, so it was easy enough to implement....
     
  15. MrChips

    Moderator

    Oct 2, 2009
    12,440
    3,361
    Obviously you can send 8 bits across the UART channel.

    But sending 16 bits as a pair of bytes is not good enough. What happens if you lose one byte? You will have a synchronization problem.

    To send 16 bits you have to send a packet of three or more bytes with some sort of encoding scheme so that you can recover the data in the correct byte order.
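    A minimal C illustration of such a packet, using an arbitrary sync byte (0xAA is my choice for illustration, not a standard value):

```c
#include <stdint.h>

/* Minimal 3-byte packet: a fixed sync byte followed by the 16-bit value,
   high byte first. A robust protocol would also escape SYNC when it
   appears inside the data, or add a checksum so false syncs are rejected. */
#define SYNC 0xAA

void pack_u16(uint16_t value, uint8_t pkt[3])
{
    pkt[0] = SYNC;
    pkt[1] = (uint8_t)(value >> 8);
    pkt[2] = (uint8_t)(value & 0xFF);
}

/* Returns 1 and writes *value when pkt starts with the sync byte;
   returns 0 when the receiver is misaligned and should slide forward
   one byte to resynchronize. */
int unpack_u16(const uint8_t pkt[3], uint16_t *value)
{
    if (pkt[0] != SYNC)
        return 0;
    *value = ((uint16_t)pkt[1] << 8) | pkt[2];
    return 1;
}
```

    The point of the sync byte is exactly the recovery case above: if one byte is lost, the receiver drops bytes until it sees SYNC again instead of pairing up the wrong halves forever.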
     
  16. BMorse

    Senior Member

    Sep 26, 2009
    2,675
    234
    Yeah, I actually send several bytes for one 16-bit value, including a checksum byte appended to the packet so that the sum of the data plus the checksum is zero. On reception, the packet bytes are added up along with the checksum; if the sum is nonzero, an error has occurred. As long as the sum is zero, it is highly unlikely (but not impossible) that any data was corrupted during transmission.

    and this data "packet" is sent a couple times for verification purposes....

    However, including error-detection overhead in a transmission lowers channel efficiency and results in a noticeable drop in throughput.

    But since I am only sending several packets a second, the speed is negligible for my project.
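    A C sketch of the zero-sum checksum described here (two's-complement style: the appended byte is chosen so the 8-bit total of the whole packet wraps to zero):

```c
#include <stdint.h>
#include <stddef.h>

/* Compute the byte to append so that the 8-bit sum of all payload
   bytes plus the checksum is zero. */
uint8_t checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    size_t i;
    for (i = 0; i < len; i++)
        sum += data[i];
    return (uint8_t)(0u - sum);       /* value that zeroes the total */
}

/* Receiver: sum every byte including the trailing checksum; a zero
   result means the packet is (very probably) intact. */
int packet_ok(const uint8_t *pkt, size_t len_with_checksum)
{
    uint8_t sum = 0;
    size_t i;
    for (i = 0; i < len_with_checksum; i++)
        sum += pkt[i];
    return sum == 0;
}
```

    As noted above, this detects errors rather than correcting them; certain multi-byte corruptions that cancel out can still slip through, which is why the packet gets sent more than once.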
     