# Square Roots Trick (simple)

#### MrAl

Joined Jun 17, 2014
11,575
Hello there,

I thought I would share this square roots trick. It allows calculating the square root of fairly large perfect squares, like 9801.
In fact, I'll show the solution using that number to start out, 9801.

First we make a two column table of squares from 1 to 9:
1, 1
2, 4
3, 9
4, 16
5, 25
6, 36
7, 49
8, 64
9, 81

Next we add a third column which is simply the last digit of all the squares (so 16 becomes 6):
1, 1, 1
2, 4, 4
3, 9, 9
4, 16, 6
5, 25, 5
6, 36, 6
7, 49, 9
8, 64, 4
9, 81, 1

Ok, now we are ready to begin finding the square root of 9801, but the same table works for any number.

First we look at the last digit, which is a 1 in this case.
Now we look at the third column above and find that a '1' appears as the last digit of the squares of both 1 and 9. Remember those two candidates, 1 and 9.
Next, we look at the first two digits, which are 98. We find in the table the largest square that is less than or equal to 98. This has to be 81, which is the square of 9. This means the first digit of the square root is 9.
Next, we multiply that 9 by the next integer, which is 10, and get 90. Now 98 (the first two digits again) is greater than 90, so looking at that 1 and 9 from above again, we choose the 9 because it is the greater candidate, and thus the second digit is 9.
The total solution then is 99, and that is the square root of 9801.

Now we will find the square root of 1369.
First we look at the last digit: the '9' appears in the third column of the table for the squares of both 3 and 7, so keep those two in mind.
Next, looking at the first two digits, the largest square at or below 13 is 9, which is the square of 3, so 3 is the first digit of the solution.
Next, 3*4=12, and 13 is greater than 12, so out of the two candidates we got above (the 3 and the 7) we choose the 7 because it is greater than 3.
So the final result is 37, and that's the square root of 1369.

This gets a bit more tricky with numbers that are not perfect squares. For, say, 1370 we would have to make a choice as to which candidate to use for the last digit, so this is mostly for perfect squares.

I believe this works because square roots are related to grouping the digits of a number, whole or fractional, in pairs, and because the units digits of squares combine into only a limited number of possibilities.
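The table trick above can be sketched in a few lines of Python for perfect squares up to 4 digits (the function name `root_by_digits` is my own, and I compare with `>=` rather than strictly greater-than, which handles the boundary case where the leading digits exactly equal the product, e.g. 1296):

```python
def root_by_digits(n):
    # Square root of a perfect square up to 4 digits, via the table trick.
    last = n % 10
    # units digits whose squares end in the same digit as n (the third column)
    candidates = [d for d in range(10) if (d * d) % 10 == last]
    head = n // 100  # the leading one or two digits
    # largest single digit whose square fits at or below the leading digits
    tens = max(t for t in range(1, 10) if t * t <= head)
    # compare the leading digits against tens*(tens+1) to pick a candidate;
    # >= rather than > covers the boundary case (e.g. 1296 -> 36)
    units = max(candidates) if head >= tens * (tens + 1) else min(candidates)
    return tens * 10 + units
```

For example, `root_by_digits(9801)` follows exactly the steps worked above: candidates {1, 9}, leading square 81, and 98 at least 9*10, giving 99.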


#### Ian0

Joined Aug 7, 2020
10,048
I was looking for an easy way of calculating a square root on a microcontroller.
First I tried a quadratic approximation over the whole range, and that works badly, but I noticed that it is possible to get a very good fit of a quadratic between x = 1/4 and x = 1.
Therefore, to calculate the square root:
1) Shift the number left until it is greater than 2^30, and count the number of shifts (this is easy if your micro has a Count Leading Zeros instruction); shifting two places at a time keeps the count even, so half of it is a whole number. 2^30 is a quarter of full scale on a 32-bit micro.
2) Then do the quadratic approximation, which requires 3 multiplies and 2 adds. (Even more efficient if your micro has a Multiply and Accumulate instruction.)
3) Then shift right by half the number of shifts that were counted in step 1.
It takes 14 instructions in ARM code, and executes in under 1 µs with a 16 MHz clock.

#### Jon Chandler

Joined Jun 12, 2008
1,072
A trick from the 4-banger calculator days to find square roots:

Divide the number by a guess at the root.

For 9801, 100 is a good starting point.

9801/100 = 98.01

Add this to the original guess, and divide by 2.

(100 + 98.01) / 2 = 198.01 / 2 = 99.005.

Repeat, using this new value.

9801/99.005 = 98.995

Add and divide by 2.

(99.005 + 98.995) / 2 = 99.000

With a few iterations, you quickly close in on the root.

Ok, I haven't done this in 45 years, but I still remember the process!
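This divide-and-average loop (often called the Babylonian method) is easy to sketch; the function name is mine:

```python
def divide_and_average(n, guess, passes=4):
    # repeatedly replace the guess with the average of guess and n/guess
    g = float(guess)
    for _ in range(passes):
        g = (n / g + g) / 2.0
    return g
```

Starting from 100, the first pass gives (9801/100 + 100)/2 = 99.005, and a couple more passes settle on 99.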

#### Ian0

Joined Aug 7, 2020
10,048
A trick from the 4-banger calculator days to find square roots:

Divide the number by a guess at the root.

For 9801, 100 is a good starting point.

9801/100 = 98.01

Add this to the original guess, and divide by 2.

(100 + 98.01) / 2 = 198.01 / 2 = 99.005.

Repeat, using this new value.

9801/99.005 = 98.995

Add and divide by 2.

(99.005 + 98.995) / 2 = 99.000

With a few iterations, you quickly close in on the root.

Ok, I haven't done this in 45 years, but I still remember the process!
Isn't that Newton-Raphson?

#### BobTPH

Joined Jun 5, 2013
9,169
Yes.

Bob

#### MrChips

Joined Oct 2, 2009
30,980
There ought to be a blog for tricks like these.
I have one for doing logarithms.

#### MrAl

Joined Jun 17, 2014
11,575
A trick from the 4-banger calculator days to find square roots:

Divide the number by a guess at the root.

For 9801, 100 is a good starting point.

9801/100 = 98.01

Add this to the original guess, and divide by 2.

(100 + 98.01) / 2 = 198.01 / 2 = 99.005.

Repeat, using this new value.

9801/99.005 = 98.995

Add and divide by 2.

(99.005 + 98.995) / 2 = 99.000

With a few iterations, you quickly close in on the root.

Ok, I haven't done this in 45 years, but I still remember the process!
Hi,

That's the way I've done it too. It is derived from the technique of approximation by differentials.

BTW, (9801/g+g)/2 with g=100 yields 99.005 so it is converging fast for this problem because the guess is so close already. If our guess was 200 we'd get 124.5025 on the first iteration.

The approximation by differentials goes like this...

y = f(x) = sqrt(x)
dy = f'(x)*dx = dx/(2*sqrt(x))
with x = g^2 (the square of the current guess) and dx = v - g^2, where v is the value of the number to find the square root of:
y+dy = f(x) + f'(x)*(v - x)
y+dy = sqrt(x) + (v - x)/(2*sqrt(x))
now with sqrt(x) = g we get:
y+dy = g + (v - g^2)/(2*g)
and factored:
y+dy = (v + g^2)/(2*g)
and since y+dy is the new guess, we get:
gnew = (v + g^2)/(2*g)
or factored differently:
gnew = ((v/g) + g)/2
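A quick numeric check of gnew = ((v/g) + g)/2 shows why a close guess converges so fast: the error roughly squares on each pass (quadratic convergence), so the number of correct digits roughly doubles. A sketch:

```python
def new_guess(v, g):
    # the update derived above: gnew = ((v/g) + g)/2
    return (v / g + g) / 2.0

# watch the error shrink: it roughly squares on every pass
v, g = 9801.0, 100.0
errors = []
for _ in range(3):
    g = new_guess(v, g)
    errors.append(abs(g - 99.0))
```

Starting from an error of 1, the errors after each pass are about 5e-3, 1e-7, and then below double-precision resolution.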

#### MrAl

Joined Jun 17, 2014
11,575
Isn't that Newton-Raphson?
Hi,

It is derived from the technique of approximation by differentials (see post #7 and on the web).
You could try to derive it from Newton's method; I didn't try that.

#### MrAl

Joined Jun 17, 2014
11,575
There ought to be a blog for tricks like these.
I have one for doing logarithm.
That's a really, really good idea. I have forgotten one of the best methods of calculating trig values, where the number of correct digits doubles with each iteration. Can't find it anywhere either.
I derived it from a series but don't remember which one.

#### xox

Joined Sep 8, 2017
838
That's a really, really good idea. I have forgotten one of the best methods of calculating trig values, where the number of correct digits doubles with each iteration. Can't find it anywhere either.
I derived it from a series but don't remember which one.
You are probably thinking of the Taylor series expansion of the trigonometric functions.

cos(x) = (x^0)/0! - (x^2)/2! + (x^4)/4! - (x^6)/6! + ...
sin(x) = (x^1)/1! - (x^3)/3! + (x^5)/5! - (x^7)/7! + ...
exp(x) = (x^0)/0! + (x^1)/1! + (x^2)/2! + (x^3)/3! + ...

#### MrAl

Joined Jun 17, 2014
11,575
You are probably thinking of the Taylor series expansion of the trigonometric functions.

cos(x) = (x^0)/0! - (x^2)/2! + (x^4)/4! - (x^6)/6! + ...
sin(x) = (x^1)/1! - (x^3)/3! + (x^5)/5! - (x^7)/7! + ...
exp(x) = (x^0)/0! + (x^1)/1! + (x^2)/2! + (x^3)/3! + ...
Well thank you very much for considering this, but that's definitely not it. I have known about that type of series and others like it since I was minus 8 months old (ha ha). Really, though, about 45 years or so.

This is completely different: it's not a series, it is a completely iterative solution, which we might call 'nested'. It repeats the very same operation over and over again, always coming up with a more precise solution.
I have been questioned about this before, but I assure you I used it for years before floating point came into fashion in math coprocessors. I had single-precision math for trig but no double precision, and I needed that for many things, so I needed algorithms for sine and cosine that could give me a double-precision result.
I just wish I could remember it; I am at the point where I might offer a reward for finding that algorithm again. It must be known by now. Back in the day I found it while fooling around with Taylor series, but I think it got into the public, so it should be available somewhere. I've searched the web for hours and can't find it though. It may be that once coprocessors came out it wasn't needed anymore, but it could be used for double-double precision, or double-double-double precision, etc., to whatever precision you need.

If you can find it or figure it out I'll send you some money, for real. I've tried several times to remember it but can't, and I even gave it to another member here long ago, but I can't remember who it was.

BTW, it did start out with a Taylor series, but it was not just a Taylor series.
The key point is that a Taylor series gets more accurate not only with the more terms you use, but also the closer you get to the point the series is expanded about. This means that if you can find a better solution with one pass, the Taylor series itself gets better, so the next iteration gets better still.
It did start with a 2-term Taylor series, but there was also logic built in that forced a better solution with each pass, and because of that the 2-term Taylor series gives a better and better result.
But keep in mind it was not a term-generator algorithm; it used the same 2-term Taylor series for every iteration, and that was all that was needed because the rest forced a better solution each time.
The whole algorithm was condensed into a simple formula which, if you saw it, you might not believe works.
A regular Taylor series just keeps adding terms to get better, but it converges much more slowly.

If I or someone else can find it again I will post it here in this forum for sure, and have it tattooed on my forehead.


#### xox

Joined Sep 8, 2017
838
Well thank you very much for considering this, but that's definitely not it. I have known about that type of series and others like it since I was minus 8 months old (ha ha). Really, though, about 45 years or so.

This is completely different: it's not a series, it is a completely iterative solution, which we might call 'nested'. It repeats the very same operation over and over again, always coming up with a more precise solution.
I have been questioned about this before, but I assure you I used it for years before floating point came into fashion in math coprocessors. I had single-precision math for trig but no double precision, and I needed that for many things, so I needed algorithms for sine and cosine that could give me a double-precision result.
I just wish I could remember it; I am at the point where I might offer a reward for finding that algorithm again. It must be known by now. Back in the day I found it while fooling around with Taylor series, but I think it got into the public, so it should be available somewhere. I've searched the web for hours and can't find it though. It may be that once coprocessors came out it wasn't needed anymore, but it could be used for double-double precision, or double-double-double precision, etc., to whatever precision you need.

If you can find it or figure it out I'll send you some money, for real. I've tried several times to remember it but can't, and I even gave it to another member here long ago, but I can't remember who it was.

BTW, it did start out with a Taylor series, but it was not just a Taylor series.
The key point is that a Taylor series gets more accurate not only with the more terms you use, but also the closer you get to the point the series is expanded about. This means that if you can find a better solution with one pass, the Taylor series itself gets better, so the next iteration gets better still.
It did start with a 2-term Taylor series, but there was also logic built in that forced a better solution with each pass, and because of that the 2-term Taylor series gives a better and better result.
But keep in mind it was not a term-generator algorithm; it used the same 2-term Taylor series for every iteration, and that was all that was needed because the rest forced a better solution each time.
The whole algorithm was condensed into a simple formula which, if you saw it, you might not believe works.
A regular Taylor series just keeps adding terms to get better, but it converges much more slowly.

If I or someone else can find it again I will post it here in this forum for sure, and have it tattooed on my forehead.
Actually, the Taylor series converges VERY quickly. Just consider how fast the denominator of each successive term grows. It's a factorial, so after just a few terms the contribution becomes nearly zero.

#### MrSalts

Joined Apr 2, 2020
2,767
Actually, the Taylor series converges VERY quickly. Just consider how fast the denominator of each successive term grows. It's a factorial, so after just a few terms the contribution becomes nearly zero.
I think you get about 2 digits of accuracy for each term in the Taylor series after the x^0 term.