Cannot Find Sine Algorithm, Help Needed

Thread Starter

MrAl

Joined Jun 17, 2014
11,389
Hello there,

I am looking for an algorithm I once had long ago. I haven't used it in maybe 10 years or longer, although I did post it here somewhere at least once, but it could have been in a PM. I hope it wasn't, though.

This algorithm calculates the sine of an angle, it's that simple, but it does so very efficiently because it doubles the digits of precision with every pass. So if the exact answer were 0.12345678 and we started with 0.12, the next pass would produce 0.1234 and the next pass would produce 0.12345678 (doubling the correct digits with each pass).
By comparison, Taylor and other series converge rather slowly, too slowly really, so I wanted to find the better algorithm again. The function simply calculated the sine of an angle, sin(x), and it was fairly simple, using maybe add, subtract, multiply, divide, maybe squaring, and maybe a square root, but it was quite short, not long and complex.
Note it is also not an identity-type formula like sin(x) = 3*sin(x/3) - 4*sin(x/3)^3, which also works but is slow to converge.
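
To show what I mean, that identity approach can be sketched as below (a minimal illustration only; the 1e-4 cutoff is an arbitrary choice):

Code:
#include <cmath>

// Identity-based sine: shrink the angle until sin(t) ~= t holds,
// then rebuild with sin(3t) = 3*sin(t) - 4*sin(t)^3 at each step.
double sin_by_triple_angle(double x)
{
    int n = 0;
    while (std::fabs(x) > 1e-4) { x /= 3.0; ++n; }  // reduce the angle
    double s = x;                                   // sin(t) ~= t for tiny t
    for (int i = 0; i < n; ++i)
        s = 3.0*s - 4.0*s*s*s;                      // triple the angle back
    return s;
}
Each extra reduction step gains roughly one more correct digit, which is exactly the slow linear convergence being ruled out here.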

Any help finding this, either here or somewhere else, would be greatly appreciated.
Thanks much.
 

Papabravo

Joined Feb 24, 2006
21,159
The Taylor Series expansion about x=0 (the Maclaurin Series) is not a likely candidate because it converges very slowly.
The CORDIC technique works well with limited arithmetic capabilities, and the Chebyshev polynomial has the advantage of a bounded error over the entire range of the argument at each possible truncation point.
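
As a concrete taste of that bounded-error idea, here is a classic fit in the same spirit (coefficients as commonly quoted from Abramowitz & Stegun 4.3.96; treat the exact error bound as approximate):

Code:
// Bounded-error polynomial fit on [0, pi/2]:
// sin(x) ~= x*(1 - 0.16605*x^2 + 0.00761*x^4), |error| <= ~2e-4.
// Assumes the argument has already been range-reduced to [0, pi/2].
double sin_fit(double x)
{
    double x2 = x*x;
    return x*(1.0 - 0.16605*x2 + 0.00761*x2*x2);
}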
 

BobTPH

Joined Jun 5, 2013
8,808
I have never heard of an iterative algorithm for sine. The algorithms typically used in math libraries do a range reduction followed by a Taylor series for sin(x) - sin(x0).

Bob
 

BobTPH

Joined Jun 5, 2013
8,808
No, but perhaps my memory is bad; this was a long time ago. In any case, it was a range reduction followed by a polynomial.

Bob
 

Papabravo

Joined Feb 24, 2006
21,159
No, but perhaps my memory is bad; this was a long time ago. In any case, it was a range reduction followed by a polynomial.

Bob
Yes, precisely. You map the argument from [0, 2kπ] for any positive k onto the interval [-1, +1], and then evaluate the polynomial using synthetic division (Horner's rule) or a factored form, where you can use a multiply-accumulate process.
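
A minimal sketch of that multiply-accumulate evaluation (Horner's rule, i.e. synthetic division), with a hypothetical coefficient array c:

Code:
// Evaluate p(x) = c[0] + c[1]*x + ... + c[n]*x^n with one
// multiply-accumulate per coefficient (Horner's rule).
double horner(const double c[], int n, double x)
{
    double acc = c[n];
    for (int i = n - 1; i >= 0; --i)
        acc = acc*x + c[i];
    return acc;
}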
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,389
Yes, it is not a series of any kind. It is a formula that, when you apply the exact same formula over and over, keeps giving a better and better estimate of the sine of the angle, and there is a similar one for cosine.
So there is no "sum of" notation anywhere, because it is not a sum; it's entirely different. I think you would call it a recurrence or recursive formula, or something like that.
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,389
Was it CORDIC or using a Chebyshev polynomial?
EDIT: It could be one of the CORDIC routines, but I can't find the sine one.

Thanks, but no, not those either; Chebyshev is just another series, and this is not a series of any kind. When you call this function, you call it over and over again, and the approximation gets better and better with each pass, really fast, unlike any series I have ever seen. That's what makes it so good.
 
Last edited:

Thread Starter

MrAl

Joined Jun 17, 2014
11,389
I have never heard of an iterative algorithm for sine. The algorithms typically used in math libraries do a range reduction followed by a Taylor series for sin(x) - sin(x0).

Bob
Yes, it is interesting that this doesn't come up in most posts I see around the web. What always comes up is Taylor or Chebyshev, which are series. This one is like the one for square root, where you just keep calling the same function over and over and the result gets better and better, and there is no factorial involved either.
There is a chance it is one of the CORDIC routines, but I can't seem to find the one for sine.
 

Tesla23

Joined May 10, 2009
542
I suspect you are thinking of the algorithm for sqrt(x) that goes like:

guess a ≈ sqrt(x),
then new estimate = (a + x/a)/2
loop...

this converges quadratically (doubling the number of significant digits each iteration).
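
As a quick demonstration of that doubling behaviour, a minimal sketch (starting guess and pass count chosen arbitrarily):

Code:
#include <cmath>
#include <cstdio>

// Heron/Newton iteration for sqrt(2): each pass roughly doubles
// the number of correct digits (quadratic convergence).
int main()
{
    double x = 2.0;
    double a = 1.5;                 // rough first guess
    for (int i = 1; i <= 5; ++i) {
        a = 0.5*(a + x/a);          // new estimate = (a + x/a)/2
        std::printf("pass %d: %.16f\n", i, a);
    }
    std::printf("sqrt(2): %.16f\n", std::sqrt(2.0));
    return 0;
}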

I'm pretty sure I have never seen anything similar for the trig functions.

The interesting thing from the stackoverflow link was the algorithm:

Code:
#include <cmath>   // std::floor, std::abs, M_PI

// Fast cosine: fold x into one period (working in "turns"), then
// evaluate a piecewise parabola; no divisions at run time.
template<typename T>
inline T cos(T x) noexcept
{
    constexpr T tp = 1./(2.*M_PI);        // turns per radian
    x *= tp;                              // radians -> turns
    x -= T(.25) + std::floor(x + T(.25)); // fold into [-0.5, 0.5), quarter-turn shift
    x *= T(16.) * (std::abs(x) - T(.5));  // parabola matching cos at its zeros and peak
    #if EXTRA_PRECISION
    x += T(.225) * x * (std::abs(x) - T(1.)); // optional correction pass
    #endif
    return x;
}
that calculates cos(x) to within 0.001 without divisions. Apparently this can give a useful performance advantage over the FPU hardware algorithm where limited accuracy is sufficient.
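
A quick, hypothetical way to exercise it against the standard library (the explicit template argument selects the template above rather than std::cos):

Code:
#include <cmath>
#include <cstdio>

// Assumes the cos<T> template above is in scope.
int main()
{
    for (double x = -3.0; x <= 3.0; x += 0.5)
        std::printf("x=%5.2f  approx=%9.6f  std=%9.6f\n",
                    x, cos<double>(x), std::cos(x));
    return 0;
}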
 

Thread Starter

MrAl

Joined Jun 17, 2014
11,389
I suspect you are thinking of the algorithm for sqrt(x) that goes like:

guess a ≈ sqrt(x),
then new estimate = (a + x/a)/2
loop...

this converges quadratically (doubling the number of significant digits each iteration).

I'm pretty sure I have never seen anything similar for the trig functions.

The interesting thing from the stackoverflow link was the algorithm:

Code:
#include <cmath>   // std::floor, std::abs, M_PI

// Fast cosine: fold x into one period (working in "turns"), then
// evaluate a piecewise parabola; no divisions at run time.
template<typename T>
inline T cos(T x) noexcept
{
    constexpr T tp = 1./(2.*M_PI);        // turns per radian
    x *= tp;                              // radians -> turns
    x -= T(.25) + std::floor(x + T(.25)); // fold into [-0.5, 0.5), quarter-turn shift
    x *= T(16.) * (std::abs(x) - T(.5));  // parabola matching cos at its zeros and peak
    #if EXTRA_PRECISION
    x += T(.225) * x * (std::abs(x) - T(1.)); // optional correction pass
    #endif
    return x;
}
that calculates cos(x) to within 0.001 without divisions. Apparently this can give a useful performance advantage over the FPU hardware algorithm where limited accuracy is sufficient.
Hello,

Would you be able to show that function as a math function, like we do with the square root function?
That would help me figure out if that is the one I am looking for.

And yes, the algorithm is similar to that square root function you posted.
And also yes, this kind of algorithm exists; I have used it many, many times in the past, when one of my first computers did not have a double-precision sin(x) function and I had to produce it myself in the language I was using at the time, which was just BASIC. So I started with a single-precision float and ended up with a double-precision float, and it did not take many passes, because it effectively doubles the correct digits, or something really fast like that, with each pass.

I also checked the CORDIC routines for sin and cos and was very surprised to find that they converge very slowly. I have a feeling they use the half-angle reduction technique, which converges way too slowly, even slower than the Taylor series, I think. I think the Taylor series produces about two more correct digits with each new term, which is slow compared to the one I am talking about. It is very surprising that the routine I am speaking of is not plastered all over the web, since it has been around since the 1980s and maybe even before that.

Any other ideas would be appreciated too, thanks.
 

neonstrobe

Joined May 15, 2009
190
Wendy's formula can be written as an iterative approach. Here's my Fortran version:
program sine

implicit none

real(8) :: a, d, n, x   ! a = current term, d = denominator factor, n = power of x
real(8) :: s            ! running sum
integer(4) :: i

print *, 'input x:'
read (*,*) x

a = x          ! first term of the series is x itself
n = 1
s = a
d = n
print *, 's=', s

do i = 1, 10
   d = (n+1)*(n+2)      ! next pair of factorial factors
   a = -a*x**2/d        ! each term from the previous one
   s = s + a
   n = n + 2
   print *, 's=', s, ' n=', n, ' d=', d
end do

print *, 'sin(x)=', sin(x)   ! compare against the intrinsic
end program sine
 

ZCochran98

Joined Jul 24, 2018
303
Wendy's formula can be written as an iterative approach. Here's my Fortran version:
program sine

implicit none

real(8) :: a, d, n, x   ! a = current term, d = denominator factor, n = power of x
real(8) :: s            ! running sum
integer(4) :: i

print *, 'input x:'
read (*,*) x

a = x          ! first term of the series is x itself
n = 1
s = a
d = n
print *, 's=', s

do i = 1, 10
   d = (n+1)*(n+2)      ! next pair of factorial factors
   a = -a*x**2/d        ! each term from the previous one
   s = s + a
   n = n + 2
   print *, 's=', s, ' n=', n, ' d=', d
end do

print *, 'sin(x)=', sin(x)   ! compare against the intrinsic
end program sine
Problem is, it's still the Taylor (Maclaurin) series, which converges very slowly.
I'm genuinely surprised that an algorithm such as the one being sought isn't all over the internet, if it converges that rapidly. I know there's one for pi that converges at a stupid rate (something like ~12 digits every iteration?), but the only approximation anyone ever wants to discuss is the Taylor series or a non-iterated quadratic fit.
 

Tesla23

Joined May 10, 2009
542
Would you be able to show that function as a math function, like we do with the square root function?
That would help me figure out if that is the one I am looking for.

And yes, the algorithm is similar to that square root function you posted.
And also yes, this kind of algorithm exists; I have used it many, many times in the past, when one of my first computers did not have a double-precision sin(x) function and I had to produce it myself in the language I was using at the time, which was just BASIC. So I started with a single-precision float and ended up with a double-precision float, and it did not take many passes, because it effectively doubles the correct digits, or something really fast like that, with each pass.
It's a C++ template; just replace 'T' with 'double' and it should be readable. If you have trouble, look at the simulation here: https://www.desmos.com/calculator/cbuhbme355. I'm pretty sure it's not what you are talking about; it's just a (very) quick and dirty approximation.

I also checked the CORDIC routines for sin and cos and was very surprised to find that they converge very slowly. I have a feeling they use the half-angle reduction technique, which converges way too slowly, even slower than the Taylor series, I think. I think the Taylor series produces about two more correct digits with each new term, which is slow compared to the one I am talking about.
You're missing the point about the CORDIC algorithm: the cleverness of it is that you can do complex vector rotations by arbitrary angles using ONLY bit shifts and additions, plus one multiplication if you want to preserve scale. Depending on the hardware, this may be the fastest option.
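
To make the shifts-and-adds point concrete, here is a minimal rotation-mode CORDIC sketch. It uses doubles for readability; a real fixed-point version replaces each multiply by 2^-i with a right shift and reads atan(2^-i) from a small table:

Code:
#include <cmath>

// Rotation-mode CORDIC: rotate (1, 0) toward angle theta using only
// factors of 2^-i and adds, then one final scaling by the gain k.
// Valid for |theta| up to about 1.74 rad (the sum of the atan table).
void cordic_sincos(double theta, int iters, double &s, double &c)
{
    double x = 1.0, y = 0.0, z = theta, k = 1.0;
    for (int i = 0; i < iters; ++i) {
        double p = std::ldexp(1.0, -i);      // 2^-i: a right shift in fixed point
        double d = (z >= 0.0) ? 1.0 : -1.0;  // always rotate toward z = 0
        double xn = x - d*y*p;
        double yn = y + d*x*p;
        z -= d*std::atan(p);                 // a table lookup in hardware
        x = xn; y = yn;
        k *= 1.0/std::sqrt(1.0 + p*p);       // accumulate the gain correction
    }
    c = x*k;   // cos(theta)
    s = y*k;   // sin(theta)
}
Each pass adds roughly one bit of accuracy, which matches the slow convergence noted above; the win is that no general multiplies are needed until the final scaling.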

It is very surprising that the routine I am speaking of is not plastered all over the web, since it has been around since the 1980s and maybe even before that.
I agree....
 

neonstrobe

Joined May 15, 2009
190
The Taylor series converges slowly for large values.
You can pre-process values larger than, say, pi/2 by simple use of mod,
with suitable adjustments afterwards to give the right sign, etc.
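
A minimal sketch of that mod-based pre-processing, folding the argument into [0, pi/2] before a short polynomial (the polynomial here is just a few Taylor terms, for illustration):

Code:
#include <cmath>

// Range-reduce x, then evaluate a short polynomial on [0, pi/2].
double sin_reduced(double x)
{
    double r = std::fmod(x, 2.0*M_PI);            // into (-2pi, 2pi)
    if (r < 0.0) r += 2.0*M_PI;                   // into [0, 2pi)
    double sign = 1.0;
    if (r > M_PI)     { r -= M_PI; sign = -1.0; } // sin(x + pi) = -sin(x)
    if (r > M_PI/2.0) { r = M_PI - r; }           // sin(pi - x) =  sin(x)
    double r2 = r*r;                              // r is now in [0, pi/2]
    return sign * r*(1.0 - r2/6.0*(1.0 - r2/20.0));
}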
But you have reminded me ... some of the fastest maths algorithms used for numerical computing aren't well publicised.
Maybe worth searching under computational trigonometric algorithms rather than general maths?
 
Last edited:

Thread Starter

MrAl

Joined Jun 17, 2014
11,389
It's a C++ template; just replace 'T' with 'double' and it should be readable. If you have trouble, look at the simulation here: https://www.desmos.com/calculator/cbuhbme355. I'm pretty sure it's not what you are talking about; it's just a (very) quick and dirty approximation.



You're missing the point about the CORDIC algorithm: the cleverness of it is that you can do complex vector rotations by arbitrary angles using ONLY bit shifts and additions, plus one multiplication if you want to preserve scale. Depending on the hardware, this may be the fastest option.



I agree....
OK, that may be right about the CORDIC routines, but unfortunately they are not of any help here, because this is a very specific algorithm with a very specific application. I could just kick myself for losing it and not remembering how to derive it again.

If I can somehow find it again, I will post it here so you can see how amazing this algorithm really is. I suspect it would be good in binary too, because it is very short and sweet but highly accurate.

I have found one that uses angle/2 reductions, and it's not too bad, much better than a pure Taylor series, but still nothing like the other algorithm I am talking about here.
I also tried a Padé approximant, and it is better than Taylor's, but not enough better.
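
For anyone curious, the [3/2] Padé approximant is short enough to quote; it matches the Taylor series through the x^5 term and is meant for small |x| (a sketch, not a library routine):

Code:
// [3/2] Pade approximant of sin(x) about x = 0:
// sin(x) ~= x*(1 - 7*x^2/60) / (1 + x^2/20)
double sin_pade(double x)
{
    double x2 = x*x;
    return x*(1.0 - 7.0*x2/60.0)/(1.0 + x2/20.0);
}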

Any other ideas would be appreciated too, thanks a lot.
 