Perform binary division using an IDT7216 16x16 multiplier?

Thread Starter

ComputerDesigner

Joined Dec 29, 2020
4
I am midway through the design of a 16-bit computer using only the 74HC logic series and have hit a wall with the ALU design. Because a 16-bit multiplier is fairly complicated, I decided to use the IDT7216 parallel multiplier - mostly because it is one of only a few CMOS 16-bit multipliers that can be purchased for less than a collector's price and still has its datasheet available online. However, I can't find any mention of a division function in this IC's documentation. Is division possible with it? If not, is there another IC out there that would do it? I know this is possible (and easier!) with a microcontroller such as an Arduino, but I don't want to bring a programmable IC into this project, since the point is to design the processing elements myself. Even using an IC of this complexity is a bit of a stretch for my ancient little computer. Anyhow, help with this would be greatly appreciated.
Also, some information that you'll probably need to figure this out:
⋅ The IDT7216's datasheet
⋅ If this chip won't work but another would, note that my chief reference for what chips are out there is eBay. Search for something like '16x16 binary multiplier'; keep in mind this is for regular use, so a $300 collectible won't do at all.
Thanks!
 

BobTPH

Joined Jun 5, 2013
8,813
Divide is not a function you can reasonably do with combinational logic. Many low-end processors do not have a hardware divide, and the ones that do usually implement a “divide step” instruction that does one bit at a time and is repeated 16 times for a 16-bit divide. Others simply do it with a software algorithm.

Bob
 

Thread Starter

ComputerDesigner

Joined Dec 29, 2020
4
Hmm, okay. Interesting that, given the complexity of the multipliers available, there isn't a divider out there as well, but I like the idea of doing it in software. I found two other links that were useful for figuring out a method. First, at http://www.8052mcu.com/div16 there is a 16-bit division routine written in the 8052's assembly language, which is easy to convert to my own instruction set. Second, All About Circuits also carries a much more in-depth explanation of binary division at https://www.allaboutcircuits.com/te...ary-division-the-algorithm-and-the-vhdl-code/ . My problem has pretty much been solved by these articles - I just post these links in case someone else comes looking for an answer someday. Thank you all for your help!
 

jpanhalt

Joined Jan 18, 2008
11,087
Shifting the divisor left until it is slightly larger than the dividend is a well-established method. (It can be arbitrarily larger, but that wastes a lot of steps.) Actually, to minimize steps, it only needs to be larger than (dividend/2 + 1). That condition is probably not processor dependent.
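A minimal sketch of that shift-to-align idea, in Python rather than PIC assembly (my own illustration, not jpanhalt's exact routine; it assumes unsigned operands and a nonzero divisor):

```python
def shift_align_divide(dividend, divisor):
    # Shift the divisor left until one more shift would overshoot the dividend.
    # Each shift taken corresponds to one quotient bit to be produced.
    shifts = 0
    while divisor <= dividend:
        divisor <<= 1
        shifts += 1
    quotient = 0
    for _ in range(shifts):
        divisor >>= 1            # walk the divisor back down...
        quotient <<= 1
        if dividend >= divisor:  # ...doing one trial subtraction per bit
            dividend -= divisor
            quotient |= 1
    return quotient, dividend    # dividend now holds the remainder
```

Note the step count adapts to the operands - a small dividend wastes no iterations, which is jpanhalt's point about not shifting arbitrarily far.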

As for trial subtractions, they can lead to a lot of register swapping and are probably avoidable. This latter condition may depend to an extent on what instructions your processor has. With a mid-range or enhanced mid-range PIC, you can avoid the trial subtractions.
 

MrChips

Joined Oct 2, 2009
30,720
Early computers did multiplication and division in software.
Multiplication is not difficult to do. It takes 16 iterations to do 16 x 16 multiplication. Division is a similar process.
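Those 16 iterations can be sketched like this (Python used purely for illustration; in hardware each step is just a conditional add and a shift):

```python
def mul16x16(a, b):
    # 16-iteration shift-and-add: examine one multiplier bit per step
    product = 0
    for i in range(16):
        if (b >> i) & 1:
            product += a << i    # add the multiplicand, shifted to this bit position
    return product               # full 32-bit result for 16-bit unsigned inputs
```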
 

BobTPH

Joined Jun 5, 2013
8,813
The divide loop shifts the dividend into a register, compares it to the divisor, and if it is greater or equal, outputs a 1 bit and subtracts the divisor. Repeat 16 times for a 32-by-16 divide. What is left in the dividend register is then the remainder.
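That loop, sketched in Python (my illustration of the description above, not Bob's actual code; it assumes unsigned values and that the quotient fits in 16 bits):

```python
def div32by16(dividend, divisor):
    rem = dividend >> 16        # remainder register starts with the high word
    low = dividend & 0xFFFF     # low word still to be shifted in
    quot = 0
    for _ in range(16):         # 16 repeats for a 32-by-16 divide
        rem = (rem << 1) | (low >> 15)   # shift the next dividend bit in
        low = (low << 1) & 0xFFFF
        quot <<= 1
        if rem >= divisor:      # compare; on success output a 1 bit...
            rem -= divisor      # ...and subtract the divisor
            quot |= 1
    return quot, rem            # what is left in the register is the remainder
```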

PIC24, which I am working on now, uses 3 of the working registers to do this with a divide step instruction. It is repeated 17 times to complete the divide; I guess the first step is some kind of setup to get things started.

But, if you have a hardware multiplier, divide can be done faster by basically using long division, just like you learned in school, but in base 65536 instead of base 10. The algorithm is: make a guess (which needs to be somewhere close), multiply the guess by the divisor, and subtract from the dividend. If the guess is good enough, it will be off by no more than one, which you adjust for by adding back the divisor if the subtract produced a negative value. Two iterations do a 32-by-32 divide. I wrote the floating-point divide for the PIC24 compiler I am working on using that algorithm, with the 17-cycle divide I mentioned above as the guess.
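A sketch of the multiply-back-and-correct idea in Python (my own illustration, not Bob's PIC24 code; the reciprocal guess is simply computed here, where real hardware would get it from a table or a short divide-step sequence, and the dividend is assumed to fit in 32 bits):

```python
def divide_via_multiply(dividend, divisor):
    recip = (1 << 32) // divisor     # ~2^32/divisor: the one-time "guess" machinery
    q = (dividend * recip) >> 32     # multiply-high gives a close underestimate
    r = dividend - q * divisor       # multiply the guess back and subtract
    while r >= divisor:              # guess is off by at most a step or two:
        q += 1                       # fix up by stepping the quotient
        r -= divisor
    return q, r
```

Because the guess here always underestimates, only the fix-up in one direction is needed; with a rounded guess you would also add the divisor back when the subtraction went negative, as described above.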

Bob
 

Deleted member 115935

Joined Dec 31, 1969
0
@BobTPH
The guess-and-iterate method is, I think, called the Newton-Raphson method.

If I remember right, the "problem" with using it in general is that the number of iterations needed depends on how far off the first guess is (and it's a big problem if there is more than one "minimum", which in this case there is not).
 

BobTPH

Joined Jun 5, 2013
8,813
Actually no: Newton-Raphson is indeed a guess-and-iterate method, but it is not the same as long division. Long division is basically shift and subtract, the same as single-bit division, except that it shifts and subtracts 16 bits at a time, requiring a 16-bit multiply for the subtract part.

Bob
 

Chris65536

Joined Nov 11, 2019
270
I am midway through the design of a 16-bit computer using only the 74HC logic series and have hit a wall with the ALU design.
This is an ambitious and interesting-sounding project! Is it just in a simulator, or will you actually build it? I built a 16-digit calculator from (mostly) 7400 series chips. My first version had a 4-bit ALU consisting of just a single 74283, 7486, and 7485. A division took around 170,000 clock cycles, but it worked! Here's a video.
 

BobTPH

Joined Jun 5, 2013
8,813
Newton-Raphson is a numerical analysis method for finding roots.
Which can be used to iterate from a guess on the function y = 1/x. That gives you the reciprocal, one multiply away from division.
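For anyone following along, the update rule falls out of applying Newton's method to f(g) = 1/g - x, whose root is g = 1/x:

```latex
g_{n+1} = g_n - \frac{f(g_n)}{f'(g_n)}
        = g_n - \frac{1/g_n - x}{-1/g_n^2}
        = g_n \, (2 - x \, g_n)
```

so each step needs only multiplies and a subtract - no division anywhere.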

The Numerical Analysis section of my PhD qualifier exam was as follows:

Given a single precision reciprocal, show the algorithm to get a double precision result. Show the max error and number of iterations required using:
1. A Taylor series.
2. Newton-Raphson.

The answer: it takes only 1 iteration by either method, using the single precision result as the guess. (I don't think I could still produce the formulae in any reasonable time, though.) :(

Bob
 

djsfantasi

Joined Apr 11, 2010
9,156
Which can be used to iterate from a guess on the function y = 1/x. That gives you the reciprocal, one multiply away from division.

The Numerical Analysis section of my PhD qualifier exam was as follows:

Given a single precision reciprocal, show the algorithm to get a double precision result. Show the max error and number of iterations required using:
1. A Taylor series.
2. Newton-Raphson.

The answer: it takes only 1 iteration by either method, using the single precision result as the guess. (I don't think I could still produce the formulae in any reasonable time, though.) :(

Bob
You beat me. I only have a BSAM. I used Newton-Raphson and truncation to find the square root of a negative number to create a PRNG with a uniform distribution. Unfortunately, I’ve lost the algorithm and analysis.
 

BobTPH

Joined Jun 5, 2013
8,813
I didn't complete the PhD; I got tired of being poor. But I did pass the qualifier, and I aced that question. I just looked it up and wrote a program to do Newton-Raphson reciprocal:
Code:
with io; use io;
program newtonrecip is

    -- do Newton iteration reciprocal
    -- x must be between 0.5 and 1.0
    procedure recip(x : double) is
        guess : double;
        next : double;
    begin
        put("reciprocal of "); put(x); putline;
        guess := 48.0/17.0 - 32.0/17.0 * x;  -- minimax linear estimate of 1/x on [0.5, 1.0]
        loop
            put(guess); putline;
            next := guess * (2.0 - guess * x);
            exit when next = guess;
            guess := next;
        end loop;
    end recip;

begin
      recip(2.0 / 3.0);
end newtonrecip;
It converges to IEEE double (15 digits) in 4 iterations.
 

Deleted member 115935

Joined Dec 31, 1969
0
You bring a warm glow to the heart of an old timer with these.
Thank you .
 

Thread Starter

ComputerDesigner

Joined Dec 29, 2020
4
I am indeed building this with real chips. It's a fun project... I've always wanted to understand computers better, and this is a hands-on way to learn a lot about computing. If/when I finish I'll try to post a diagram on the internet somewhere, possibly linked on All About Circuits. :)
 