16-Bit Binary to HH:MM:SS

Thread Starter

jpanhalt

Last year, I developed code to convert a 17-bit binary number to BCD based on a 16-bit method from PICList (See: http://www.piclist.com/techref/microchip/math/radix/b2bu-17b5d.htm). My need was for a full 6-digit decimal display. My current project is a roast thermometer that requires converting a binary count of seconds to HH:MM:SS in BCD. Divide-by-60 routines followed by BIN2BCD conversions are available, but I wanted to see whether the polynomial approach I used last year had any advantages for that conversion.

First, I limited the problem to 16 bits (65535 seconds = 18:12:15), since it is hard for me to contemplate a roast that takes more than 18 hours to cook. Extending it to 17 bits (36:24:31) should not be difficult. Second, I did not bother to split the "18" hours into two unpacked BCD digits. Nine hours, 59 minutes, 59 seconds is fine for my purposes.

Equations:
If one names the nibbles of a 16-bit binary a0..a3, the binary number can be written as:

(1) a3(4096) + a2(256) + a1(16) + a0(1)

Similarly, if the unpacked bytes of BCD time are labeled b0..b4, the equivalent representation of a 5-digit BCD clock can be written as:

(2) b4(3600) + b3(600) + b2(60) + b1(10) + b0(1)

Note that rather than using decades as is done for BCD, the digit values for our time expression are used. Equation (1) must equal equation (2) for a conversion. One can then write a series of equations for the bj terms as functions of the ai terms. That process can be simplified by rewriting each ai weight with a small offset. There are many ways to do that; for example, subtracting 4 from each binary term gives a3(4100 - 4), etc. I decided to subtract 44 from each term to get:

(3) a3(4140-44) + a2(300-44) + a1(60-44) + a0

I was attracted to that choice because 300 and 60 divide evenly by 6 and 10. Also, 4140 - 3600 = 540 = 9(60), which allowed me to skip writing an equation for the 600's (i.e., b3).

Gathering like terms gave the following:

b0 = a0 - 4(a3 + a2 + a1)

b1 = -4(a3 + a2 + a1)

b2 = 9(a3) + 5(a2) + a1

b3 = 0 (place holder)

b4 = a3

Since one unit of 60 (b2) can also be considered as 6 units of 10 (b1), the a1 term can be moved from b2 to b1, and b1 and b2 can be written as:

b1 = 2(a1 - 2a2 - 2a3)

b2 = 9(a3) + 5(a2)

I chose the former set of bj equations as being easier to code and consistent with the concept of "60" being a unit per se.
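To make the math concrete, here is a minimal C model of the chosen equations. The real code is in assembly and attached; this sketch exists only to show the algebra, and the loop in main() checks that the weighted sum of the raw digits reproduces every 16-bit input.

Code:
#include <stdio.h>

/* C model of the chosen equations: split a 16-bit count of seconds
   into nibbles a0..a3 and evaluate the bj polynomials (raw digits,
   before normalization). */
void seconds_to_raw_digits(unsigned int n, int b[5])
{
    int a0 = n & 0xF;
    int a1 = (n >> 4) & 0xF;
    int a2 = (n >> 8) & 0xF;
    int a3 = (n >> 12) & 0xF;

    b[0] = a0 - 4 * (a3 + a2 + a1);  /* 1's of seconds  */
    b[1] = -4 * (a3 + a2 + a1);      /* 10's of seconds */
    b[2] = 9 * a3 + 5 * a2 + a1;     /* 1's of minutes  */
    b[3] = 0;                        /* 10's of minutes (place holder) */
    b[4] = a3;                       /* hours           */
}

int main(void)
{
    int b[5];
    /* Equation (1) must equal equation (2): check the identity for
       every 16-bit input. */
    for (unsigned int n = 0; n <= 0xFFFF; n++) {
        seconds_to_raw_digits(n, b);
        if (3600 * b[4] + 600 * b[3] + 60 * b[2] + 10 * b[1] + b[0] != (int)n) {
            printf("mismatch at %u\n", n);
            return 1;
        }
    }
    printf("identity holds for all 16-bit inputs\n");
    return 0;
}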

Final Steps:
Simply writing that code ignores the fact that one might end up with, say, 8 for the tens-of-seconds term. The math is OK, but we are not used to seeing 41:20 (minutes:seconds) expressed as 40:80. The original authors on PICList used the term "normalization" for converting the latter to the former. For ordinary BIN2BCD, one needs only "10" to extract the normalized terms. For time, I used 10 for the one's terms and 6 for the ten's terms to force rollover from seconds to minutes and from minutes to hours.
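As an illustration of that rollover (a C model of the idea, not the attached assembly), digits that are already non-negative can be normalized with radix 10 in the one's positions and 6 in the ten's positions; the 40:80 example comes out as 41:20.

Code:
#include <stdio.h>

/* Rollover for digits that are already non-negative: radix 10 for the
   1's positions, 6 for the 10's positions; excess carries into the
   next digit up, with b4 (hours) taking the final carry. */
void normalize_positive(int b[5])
{
    const int radix[4] = { 10, 6, 10, 6 };   /* b0..b3 */
    for (int j = 0; j < 4; j++)
        while (b[j] >= radix[j]) { b[j] -= radix[j]; b[j + 1]++; }
}

int main(void)
{
    int b[5] = { 0, 8, 0, 4, 0 };   /* b0..b4, i.e., 40:80 */
    normalize_positive(b);
    printf("%d:%d%d:%d%d\n", b[4], b[3], b[2], b[1], b[0]);  /* 0:41:20 */
    return 0;
}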

The original PICList BIN2BCD code converted each term to a negative by subtracting a constant based on the maximum value of each bj. For example, the largest possible value for b0 is 15, so subtracting 16 would ensure a negative. However, since equality must be maintained, if you subtract 16 from one equation, you must add the equivalent to another, i.e., -16(1's) = 1.6(10's). Rounding up to -20 therefore allows one to deal only with integers when repaying the offset. That process is repeated for subsequent adjustments, each of which has to account for the previous "carry," on up to the final equation (b4). The adjustment for b4 is positive to ensure equality of the two sets of equations.

The advantage of adding to a negative versus subtracting from a positive in the normalization process resides in the instruction set for mid-range PICs and the way the carry bit is handled. For subtraction, one needs to check for zero too, which adds a few steps.

In brief, I wrote two schemes and compared them. One was adjusted to have all negative terms, except the last, and used only addition. The other was not adjusted and used addition or subtraction based on the sign of the result. In the end, the all-negative approach worked well. When tested to large counts by 1's (See: Test Program), the average time for conversion (n = 16200, 23533, and 35982 seconds) was about 290 Tcy. Smaller numbers gave faster conversions, e.g., 3540 seconds (59 minutes) took 232 Tcy.
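For anyone who wants to see how the offsets hang together, here is a C sketch of the all-negative scheme using one self-consistent set of constants derived as described above (bias each bj negative, repay each bias in a higher digit). My posted table (image below) was adjusted partly by empirical testing, so the constants in the attached code may differ from these.

Code:
#include <stdio.h>

/* All-negative scheme, addition-only normalization.  Each bias is
   repaid in a higher digit so the weighted sum is unchanged:
     b0 -= 20   repaid by b1 += 2    (20 x 1   = 2 x 10)
     b1 -= 6    repaid by b2 += 1    (6 x 10   = 1 x 60)
     b2 -= 230  repaid by b3 += 23   (230 x 60 = 23 x 600)
     b3 -= 24   repaid by b4 += 4    (24 x 600 = 4 x 3600)  */
void to_hms_all_negative(unsigned int n, int b[5])
{
    int a0 = n & 0xF, a1 = (n >> 4) & 0xF;
    int a2 = (n >> 8) & 0xF, a3 = (n >> 12) & 0xF;
    int s = a3 + a2 + a1;

    b[0] = a0 - 4 * s - 20;
    b[1] = -4 * s + 2 - 6;
    b[2] = 9 * a3 + 5 * a2 + a1 + 1 - 230;
    b[3] = 23 - 24;
    b[4] = a3 + 4;

    /* b0..b3 are now all negative, so only addition is needed; each
       addition of a radix borrows 1 from the digit above. */
    const int radix[4] = { 10, 6, 10, 6 };
    for (int j = 0; j < 4; j++)
        while (b[j] < 0) { b[j] += radix[j]; b[j + 1]--; }
}

int main(void)
{
    int b[5];
    to_hms_all_negative(65535, b);
    printf("%d:%d%d:%d%d\n", b[4], b[3], b[2], b[1], b[0]);  /* 18:12:15 */
    return 0;
}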

Doing a direct normalization without biasing all bj's might be quicker, as the larger the negative offset, the more time normalization takes. Unfortunately, my code for doing that fails at about 0x3FF0 (4:37:04) when seconds < 20. Note that 20 seconds = 0x14 and results in b0 = 0, so I believe the problem resides mostly in the handling of b0. Timings, as far as it works, were similar to the all-negative version. After a day or so, I put that approach on hold until I get fresh eyes.

I am not particularly happy with the all-negative approach. Specifically, I think the constants added to or subtracted from the bj polynomials can be improved to shorten the conversion time. Nevertheless, I decided to post it here in case anyone else is interested.

Selection of Negative Offsets:
[Table of negative offsets posted as an image: upload_2018-12-21_13-8-26.png]
*Final adjustments involved some empirical testing.

Code:
The attached code is written in assembly for the enhanced mid-range instruction set. The additional instructions available on the enhanced mid-range devices were not needed; however, some manipulations were done directly on WREG. "BRA" can be replaced with "GOTO."

I have attached code for the conversion and the test programs in separate files.
 


Thread Starter

jpanhalt

WRAPPING UP:

As mentioned above, I was not particularly happy with my results using large offsets to force each polynomial equation negative. If one watches the code run in MPLAB SIM in stepping mode, the time for each equation (i.e., each bj) increases with the size of the offset. This past week, I looked at some alternative schemes.

The only equation that is not always positive/zero or negative/zero is b0. The first new version simply forced b0 negative, but rather than carrying that adjustment over to b1 as in the original, the adjustment was carried to b2, which is always positive. The adjustment to b0 must therefore be mod 60 (i.e., -60), and the adjustment to b2 is +1. That routine is named "PosNeg" and is attached.

Equations that are positive must be normalized using subtraction. Subtraction routines are longer in that "0" must be detected: with the mid-range instruction set, a result of zero shows no "borrow" (i.e., STATUS,0 is set). PosNeg normalization required 29 instructions versus 21 for the all-negative version, but solving the polynomials was slightly shorter (30 versus 34 instructions). The net difference was that the entire PosNeg routine was only 4 instructions longer (i.e., 59 versus 55 instructions). Run time for PosNeg, however, was shorter (See: Normalization Comparison).
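In C terms, PosNeg looks roughly like the sketch below (a model of the scheme, not the attached assembly): b0 is forced negative with -60 and the 60 is repaid as +1 in b2; b0 and b1 then normalize by addition, while b2 and b3 normalize by subtraction.

Code:
#include <stdio.h>

/* "PosNeg" scheme: only b0 can be either sign, so it alone is forced
   negative with -60, repaid as +1 in b2 (which is always >= 0). */
void to_hms_posneg(unsigned int n, int b[5])
{
    int a0 = n & 0xF, a1 = (n >> 4) & 0xF;
    int a2 = (n >> 8) & 0xF, a3 = (n >> 12) & 0xF;
    int s = a3 + a2 + a1;

    b[0] = a0 - 4 * s - 60;           /* forced negative */
    b[1] = -4 * s;                    /* always <= 0 */
    b[2] = 9 * a3 + 5 * a2 + a1 + 1;  /* always >= 0; +1 repays the -60 */
    b[3] = 0;
    b[4] = a3;

    const int radix[4] = { 10, 6, 10, 6 };
    for (int j = 0; j < 2; j++)       /* b0, b1: addition only */
        while (b[j] < 0) { b[j] += radix[j]; b[j + 1]--; }
    for (int j = 2; j < 4; j++)       /* b2, b3: subtraction only */
        while (b[j] >= radix[j]) { b[j] -= radix[j]; b[j + 1]++; }
}

int main(void)
{
    int b[5];
    to_hms_posneg(65535, b);
    printf("%d:%d%d:%d%d\n", b[4], b[3], b[2], b[1], b[0]);  /* 18:12:15 */
    return 0;
}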

A second version avoids all offsets by testing whether b0 is positive or negative and using subtraction or addition for normalization as required. Unfortunately, that requires a few additional steps and the inclusion of both the subtraction and addition routines for b0. In total, normalization increased to 46 instructions, but run time decreased further (See: Normalization Comparison).
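Modeled the same way (again a C sketch under the equations above, not the attached assembly), the no-offset version tests the sign of each digit and applies addition or subtraction as needed; the two loops per digit are where the extra instructions come from.

Code:
#include <stdio.h>

/* No-offset version: each digit is normalized by addition when
   negative or subtraction when out of range high, so both routines
   are needed. */
void to_hms_sign_check(unsigned int n, int b[5])
{
    int a0 = n & 0xF, a1 = (n >> 4) & 0xF;
    int a2 = (n >> 8) & 0xF, a3 = (n >> 12) & 0xF;
    int s = a3 + a2 + a1;

    b[0] = a0 - 4 * s;               /* either sign */
    b[1] = -4 * s;
    b[2] = 9 * a3 + 5 * a2 + a1;
    b[3] = 0;
    b[4] = a3;

    const int radix[4] = { 10, 6, 10, 6 };
    for (int j = 0; j < 4; j++) {
        while (b[j] < 0)         { b[j] += radix[j]; b[j + 1]--; }
        while (b[j] >= radix[j]) { b[j] -= radix[j]; b[j + 1]++; }
    }
}

int main(void)
{
    int b[5];
    to_hms_sign_check(3599, b);      /* expect 0:59:59 */
    printf("%d:%d%d:%d%d\n", b[4], b[3], b[2], b[1], b[0]);
    return 0;
}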

Merry Christmas,

John
 


Sensacell

If I were contemplating this, I would make the M and S counters count modulo-60 in BCD.
Then everything is in its correct format already; no complex conversions required.
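In C terms, that approach might look like the following sketch (my illustration of the idea): the clock is kept in unpacked BCD digits from the start and ticked once per second, so no conversion is ever needed.

Code:
#include <stdio.h>

/* Keep the clock in unpacked BCD from the start and tick it once per
   second; the 10's digits roll over modulo 6, the 1's digits modulo
   10, so the display format is always correct. */
typedef struct { int h, m10, m1, s10, s1; } bcd_clock;

void tick(bcd_clock *t)
{
    if (++t->s1 < 10) return;   /* 1's of seconds */
    t->s1 = 0;
    if (++t->s10 < 6) return;   /* 10's of seconds */
    t->s10 = 0;
    if (++t->m1 < 10) return;   /* 1's of minutes */
    t->m1 = 0;
    if (++t->m10 < 6) return;   /* 10's of minutes */
    t->m10 = 0;
    t->h++;                     /* hours */
}

int main(void)
{
    bcd_clock t = { 0, 0, 0, 0, 0 };
    for (int i = 0; i < 3661; i++) tick(&t);  /* 1 h, 1 min, 1 s */
    printf("%d:%d%d:%d%d\n", t.h, t.m10, t.m1, t.s10, t.s1);  /* 1:01:01 */
    return 0;
}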
 

Thread Starter

jpanhalt

This code was not developed for a typical counter. Previously, I used such counters to display time increments and decrements in conjunction with an inexpensive RTC.

In the present case, I will be measuring temperature change at various periods during roasting. As an example, consider an 8 lb beef roast cooking at 30 minutes per pound. Initial temperature (when I cook) is about 40°F to 50°F, and final temperature is about 130°F. Call that an 85° rise in 4 hours, or about 0.354°/min. My TC chip (AS5048) reads easily to <0.1° and can be set to average up to 16 readings for added precision. If I read every 6 minutes, there should be a rise of 2.125° if that rate is constant over the entire period. Of course, in reality the rate changes during the cooking period, and smaller roasts cook faster. My plan is to determine an actual rate (probably to 2 decimal places, maybe fewer) and, based on the final desired temperature, provide a more accurate estimate of when the roast will be done.

I have been cooking roasts that way for more than 20 years. My TC gauge is a Fluke 52, and the calculations have usually been done in my head or with a TI calculator. I have noticed that two readings can pin down the finish time fairly closely. The first reading is taken after the temperature has increased about 10°. I work in time, i.e., seconds per degree. That first estimate is usually long, so at about the halfway point I take a second reading. The rate has typically increased, and my estimate is corrected accordingly. I am trying to avoid using an exponential equation but have not ruled that out.

Considering those calculations, I think it is easier to work in seconds and degrees, calculate the time to completion in seconds, and display that in HMS. Doing multiplication and division in BCD HMS to one or two decimals seems to me to be more complicated. My first version will not have a clock per se.
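As a rough sketch of that arithmetic (hypothetical names and numbers, not my project code): measure the rate in seconds per degree, multiply by the degrees remaining, and hand the resulting second count to a binary-to-HMS conversion like the one above.

Code:
#include <stdio.h>

/* Hypothetical numbers throughout: rate in seconds per degree times
   degrees remaining (scaled by 10 to avoid floating point) gives the
   seconds to completion. */
int main(void)
{
    unsigned int sec_per_deg    = 170;   /* e.g., measured between two readings */
    unsigned int current_tenths = 950;   /* 95.0 F, in tenths of a degree */
    unsigned int target_tenths  = 1300;  /* 130.0 F desired final temperature */

    unsigned int remaining = sec_per_deg * (target_tenths - current_tenths) / 10;

    /* A PIC version would display this via the binary-to-HMS routine
       above rather than with '/' and '%'. */
    printf("about %u s to go -> %u:%02u:%02u\n", remaining,
           remaining / 3600, (remaining / 60) % 60, remaining % 60);
    return 0;
}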

Edit: Seconds/degree can be captured with TMR1 and used without doing division. That TC chip linearizes degrees internally.
 