Subtraction In Review

MrChips

Joined Oct 2, 2009
30,720
In computer arithmetic, the algorithm does not care how many digits are being used, unless you are using BCD.
IEEE floating point representation uses binary and every computer I am aware of uses binary.

If you wish I can show you the algorithm for binary multiply and divide.
 

BobTPH

Joined Jun 5, 2013
8,813
One fairly simple and fast way to do many-digit multiplies and divides follows pretty much the same algorithms humans use, but with the numbers represented in a base of 2^n, where n is the word length of the computer. It uses the computer's n-by-n-giving-2n-bit multiply and 2n-by-n-bit divide instructions. Each n-bit piece of the number is treated just like a single digit in our human base-10 algorithm. On a 32-bit computer you get roughly 9 decimal digits with each iteration of these algorithms.
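As a rough C sketch of the multiply half of that scheme (the limb width, function names, and layout here are my own choices, not any particular library's): each 32-bit word is one "digit" in base 2^32, and the 32x32 -> 64 bit product plays the role of the one-digit multiply in the human algorithm.

```c
#include <stdint.h>
#include <stdio.h>

/* Schoolbook multiply of two multi-precision numbers stored as
   little-endian arrays of 32-bit "digits" (limbs), base 2^32.
   r must have room for na+nb limbs and is assumed zeroed. */
void mp_mul(uint32_t *r, const uint32_t *a, int na,
            const uint32_t *b, int nb)
{
    for (int i = 0; i < na; i++) {
        uint64_t carry = 0;
        for (int j = 0; j < nb; j++) {
            /* 32x32 -> 64 bit multiply, plus what is already in
               r and the carry from the previous column */
            uint64_t t = (uint64_t)a[i] * b[j] + r[i + j] + carry;
            r[i + j] = (uint32_t)t;   /* low 32 bits stay here   */
            carry    = t >> 32;       /* high 32 bits carry over */
        }
        r[i + nb] = (uint32_t)carry;
    }
}

int main(void)
{
    uint32_t a[2] = { 1, 1 };   /* 2^32 + 1 */
    uint32_t b[1] = { 2 };
    uint32_t r[3] = { 0 };
    mp_mul(r, a, 2, b, 1);
    printf("%u %u %u\n", r[0], r[1], r[2]);  /* prints 2 2 0, i.e. 2^33 + 2 */
    return 0;
}
```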

Bob
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,396
In computer arithmetic, the algorithm does not care how many digits are being used, unless you are using BCD.
IEEE floating point representation uses binary and every computer I am aware of uses binary.

If you wish I can show you the algorithm for binary multiply and divide.
At the heart of storage and number crunching, most computers use binary at the lowest level, but as you probably know, you don't have to work in binary: you can create your own 'base system', or simply a 'grouping', and go from there. This allows using a higher-level language for everything.
Storing numbers as a sequence (or, in C, an array), we could do this:
{3,1,4,1,5,9,2,6,5} as the mantissa,
{8} as the exponent,
{+} as sign of mantissa,
{-} as sign of exponent.
So we have four groups for each number, although that is only one way to do it.
Another way is to group by some number of digits per group, such as:
{{3141}{5926}{5000}} for the mantissa.
This allows multiplying 4 digits at once; a sketch of this layout follows.
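As a sketch, the grouped scheme might be laid out in C like this (the struct, field names, and padding convention are purely illustrative, not a standard format):

```c
#include <stdint.h>

/* One possible layout for the grouped scheme above: the mantissa of
   3.14159265 x 10^-8 held as base-10000 "super digits", four decimal
   digits per group, plus a separate exponent and two signs. */
#define GROUPS 3

typedef struct {
    int16_t mant[GROUPS];  /* {3141, 5926, 5000}, most significant first */
    int16_t exp;           /* 8 */
    int8_t  mant_sign;     /* +1 or -1 */
    int8_t  exp_sign;      /* +1 or -1 */
} bignum10k;

/* +3.14159265 x 10^-8 under this layout */
static const bignum10k pi_em8 = { { 3141, 5926, 5000 }, 8, +1, -1 };
```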
However, I am not that concerned with speed right now, just an algorithm. I'd like to see other algorithms that can be written out as a formula, not as a worked example of division. The example I gave was something like:
x*(2-N*x)
as the next guess, where x is the current guess and N is the number whose reciprocal we want.
This can be done in binary or decimal or octal or anything else we can come up with, so the base system is not of concern either.
I also know of an algorithm for finding the square root of a number in binary, but again I don't want to work in binary, and the kind of formula I seek is not base-system specific, as you can see in the example above.
In math terms I believe we could call it an iterative formula or method.
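Here is a minimal double-precision sketch of that reciprocal iteration, just to show its shape; in a real arbitrary-precision setting x would be one of the digit-array numbers above rather than a double:

```c
#include <stdio.h>
#include <math.h>

/* Newton-Raphson reciprocal: x <- x*(2 - N*x) converges to 1/N
   provided the starting guess satisfies 0 < x0 < 2/N. */
double reciprocal(double N, double x0)
{
    double x = x0;
    for (int i = 0; i < 60; i++) {
        double next = x * (2.0 - N * x);
        if (fabs(next - x) < 1e-15)   /* exit criterion */
            return next;
        x = next;
    }
    return x;
}

int main(void)
{
    printf("%.15f\n", reciprocal(7.0, 0.1));  /* ~0.142857142857143 */
    return 0;
}
```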
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,396
One fairly simple and fast way to do many-digit multiplies and divides follows pretty much the same algorithms humans use, but with the numbers represented in a base of 2^n, where n is the word length of the computer. It uses the computer's n-by-n-giving-2n-bit multiply and 2n-by-n-bit divide instructions. Each n-bit piece of the number is treated just like a single digit in our human base-10 algorithm. On a 32-bit computer you get roughly 9 decimal digits with each iteration of these algorithms.

Bob
Yes, I can do division in binary, but this class of formulae does not depend on the base system; it is simply an iterative formula that we keep applying over and over again.
I also don't want to do long division the way humans do, and the binary version is long division too, although it is much simpler to decide what to do at each step. Also note that long division in any base system can only produce ONE new digit per pass, while the formulas I am talking about can sometimes DOUBLE the NUMBER of correct digits in one pass. So if one pass gave us 1.4142, the next pass would produce 1.414213562, possibly followed by a few more incorrect digits that get corrected on the next pass, which might in turn produce another 10 correct digits.
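A short sketch of that digit doubling, using the Newton square-root step y <- (y + N/y)/2 on N = 2 (plain doubles here, just for illustration):

```c
#include <stdio.h>

/* Newton's square-root step y <- (y + N/y)/2: the number of correct
   digits roughly doubles on every pass (quadratic convergence),
   unlike long division, which yields one digit per pass. */
int main(void)
{
    double N = 2.0, y = 1.0;
    for (int pass = 1; pass <= 6; pass++) {
        y = 0.5 * (y + N / y);
        printf("pass %d: %.15f\n", pass, y);
    }
    /* sqrt(2) = 1.414213562373095...; the correct digit counts go
       roughly 1, 3, 6, 12, ... over the first few passes. */
    return 0;
}
```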
 

BobTPH

Joined Jun 5, 2013
8,813
Using a computer with 64 bit arithmetic you can produce 19 decimal digits per iteration. But you are right, for division, an iterative algorithm will be faster for large enough numbers.

For multiplication, I don’t think there is any iterative algorithm, and it is basically n squared time for n digits.

Bob
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,396
Using a computer with 64 bit arithmetic you can produce 19 decimal digits per iteration. But you are right, for division, an iterative algorithm will be faster for large enough numbers.

For multiplication, I don’t think there is any iterative algorithm, and it is basically n squared time for n digits.

Bob
Yes, I have to agree that multiplication isn't really iterative; I would call it more like sequential, at least overall, although the inner parts of it can be done in parallel.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,396
I would call multiplication an iterative algorithm.
It is shift, multiply, and accumulate n times.
But that seems different from an ordinary iterative solution like the kind I am talking about.
You are talking about a computer algorithm, while I am talking about a mathematical function that is complete in itself. It is like the difference between a nested series and a recursive algorithm: one is a mathematical statement, while the other is just a recursive call of several functions in a program.
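For reference, here is the shift-multiply-accumulate loop written out in C (a plain binary shift-and-add multiply; the function name is my own):

```c
#include <stdint.h>
#include <stdio.h>

/* Binary shift-and-add multiply: for each of the n bits of b,
   conditionally accumulate a shifted copy of a. The loop count is
   fixed in advance, so the exit is natural, as discussed below. */
uint64_t shift_add_mul(uint32_t a, uint32_t b)
{
    uint64_t acc = 0, sh = a;
    for (int i = 0; i < 32; i++) {  /* exactly n = 32 passes */
        if (b & 1u)
            acc += sh;              /* accumulate */
        sh <<= 1;                   /* shift */
        b  >>= 1;
    }
    return acc;
}

int main(void)
{
    /* 12345 * 6789 = 83810205 */
    printf("%llu\n", (unsigned long long)shift_add_mul(12345, 6789));
    return 0;
}
```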
But maybe you can come up with an expression that puts what you said into a concise mathematical statement, like this one calculation for 'e':
e = Λ(1 + 1/k) for k = 1 to infinity,
where 'Λ' (the upside-down 'V') indicates a nested series, i.e. e = 1 + (1/1)(1 + (1/2)(1 + (1/3)(1 + ...))).
These functions also require an exit criterion, such as:
|y2 - y1| < 1e-6,
which says that the computed value y2 minus the previous computed value y1 must be smaller in magnitude than 1e-6, meaning the two are almost the same, so the iteration can exit. With a multiply function as you describe, the exit is natural once the required number of loops is complete.
Now maybe you can call that a sub-iteration too, but it is still different from what I am talking about. Also, if we can call that an iteration, then we can call any function that does anything more than once an iteration. That is the most general meaning of iteration, yet in pure mathematics I don't think it means the same thing.
But see if you can put that multiply function in a concise math form; maybe you can.
The difference I am talking about is like computing the square root in binary one digit at a time vs. using a function that is repeatedly called and produces the next value:
y[n+1] = f(y[n]),
so here the previous value is used to calculate the next COMPLETE value of 'y', and we can stop any time we want once we get an approximation that is accurate enough for our intended use. A multiply algorithm always produces an exact result and has a definite, unchangeable outcome.
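The general shape of that, as a loop with the exit criterion from above (f is a stand-in here; cos(x) is used only because it has a well-known fixed point near 0.739085):

```c
#include <stdio.h>
#include <math.h>

/* Generic fixed-point iteration y[n+1] = f(y[n]), stopping once
   two successive COMPLETE values agree to within the tolerance. */
static double f(double y) { return cos(y); }

int main(void)
{
    double y1 = 1.0, y2 = y1;
    for (int n = 0; n < 100; n++) {
        y2 = f(y1);
        if (fabs(y2 - y1) < 1e-6)   /* exit criterion */
            break;
        y1 = y2;
    }
    printf("%.6f\n", y2);           /* ~0.739085 */
    return 0;
}
```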

I guess it is partly a matter of semantics too but you can look at the difference between recursion and iteration and maybe some other comparisons. We draw distinctions not to produce confusion but rather to separate cases from one another in order to be more descriptive about each one without going into detail.
 