derivative of a polynomial function in C and ...

vpoko

Joined Jan 5, 2012
267
If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?
 

Papabravo

Joined Feb 24, 2006
21,225
Funny. I've never heard of this, but it makes sense. Are there classes of equations where this is a better solution?

Please point me to an authoritative reference -- Google seems not to know of this either.
It is frequently used in the pricing of financial derivatives and in the numerical solution of differential equations.

http://mathfaculty.fullerton.edu/mathews/n2003/differentiation/numericaldiffproof.pdf

http://www.amazon.com/Pricing-Finan...id=1444314824&sr=8-1&keywords=Tavella+Randall
 
Last edited:

Papabravo

Joined Feb 24, 2006
21,225
If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?
Yes. The best example is the absolute value function at x=0, because there is a corner. The Wiener process is a function that is everywhere continuous, but nowhere differentiable. It is made up entirely of corners.
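In numerical terms the corner shows up right away: the two one-sided difference quotients of |x| at x=0 come out as +1 and -1 no matter how small h gets, so they never agree. A minimal C sketch of that check (the loop and print format are just for illustration):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 0.0;
    double h = 0.1;
    for (int i = 0; i < 6; ++i, h /= 10) {
        double right = (fabs(x + h) - fabs(x)) / h;   /* right-sided quotient -> +1 */
        double left  = (fabs(x) - fabs(x - h)) / h;   /* left-sided quotient  -> -1 */
        printf("h = %.0e   right = %+f   left = %+f\n", h, right, left);
    }
    return 0;
}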
 

MrAl

Joined Jun 17, 2014
11,486
Funny. I've never heard of this, but it makes sense. Are there classes of equations where this is a better solution?

Please point me to an authoritative reference -- Google seems not to know of this either.
Hello again,

Papa:
Usually the 'order' of an approximation refers to the highest-order equation it can approximate very well, not the order of the approximating formula itself. Thus a formula like y=A*x might be considered second order if it could consistently solve a second-order equation, even though the formula itself is first order.

joey:
Well actually, it works best in almost any situation where the original equation is of order higher than 1. The higher-order solutions are especially preferred for partial differential equations. BTW the central means derivative is also called the central difference derivative, and it goes by other names as well. If you look on Wikipedia for "Numerical derivative" or "Finite difference methods" you should find a lot on this.
Here are examples of first-order, second-order, and fourth-order (five-point) solutions (the first two sets show just the first- and second-order formulas; the last set adds the five-point formula).
Each set is organized as the test function, the evaluation point, the value of h, the solution formulas, and then the numerical results from each of those formulas.

f(x)=3*x
x=4
h=0.01
dydx0=3
dydx1=(f(x+h)-f(x))/h
dydx2=(f(x)-f(x-h))/h
dydx3=(f(x+h)-f(x-h))/(h+h)
dydx0=3
dydx1=2.999999999999
dydx2=2.999999999999
dydx3=2.999999999999


f(x)=x^2
x=4
h=0.01
dydx0=2*x
dydx1=(f(x+h)-f(x))/h
dydx2=(f(x)-f(x-h))/h
dydx3=(f(x+h)-f(x-h))/(h+h)
dydx0=8
dydx1=8.009999999999
dydx2=7.989999999999
dydx3=7.999999999999


f(x)=x^3
x=4
h=0.01
dydx0=3*x^2
dydx1=(f(x+h)-f(x))/h
dydx2=(f(x)-f(x-h))/h
dydx3=(f(x+h)-f(x-h))/(h+h)
dydx4=(-f(x+2*h)+8*f(x+h)-8*f(x-h)+f(x-2*h))/(12*h)
dydx0=48
dydx1=48.12009999999
dydx2=47.88009999999
dydx3=48.00009999999
dydx4=47.99999999999

One caution here though: the solutions generally get better with decreasing h at first, but below some point round-off error takes over and they get worse again, and the higher-order methods reach that point at larger values of h than the first-order ones. There are formulas available for estimating the error of each method.
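In case anyone wants to reproduce those numbers, here is a small self-contained C sketch of the same four formulas applied to the last set above (f(x)=x^3, x=4, h=0.01); the function name f and the print labels are just my own choices:

#include <stdio.h>

static double f(double x) { return x * x * x; }   /* test function from the last set */

int main(void)
{
    double x = 4.0, h = 0.01;

    double dydx1 = (f(x + h) - f(x)) / h;                 /* forward,  first order  */
    double dydx2 = (f(x) - f(x - h)) / h;                 /* backward, first order  */
    double dydx3 = (f(x + h) - f(x - h)) / (h + h);       /* central,  second order */
    double dydx4 = (-f(x + 2*h) + 8*f(x + h)
                    - 8*f(x - h) + f(x - 2*h)) / (12*h);  /* five-point formula     */

    printf("dydx0 (exact) = %.11f\n", 3 * x * x);
    printf("dydx1         = %.11f\n", dydx1);
    printf("dydx2         = %.11f\n", dydx2);
    printf("dydx3         = %.11f\n", dydx3);
    printf("dydx4         = %.11f\n", dydx4);
    return 0;
}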
 
Last edited:

WBahn

Joined Mar 31, 2012
30,060
If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?
In the limit they yield the same answer (if the function is differentiable at that point), but at a given value of Δh the one-sided derivatives have systematic errors. For instance, where the function is concave upwards, the right-sided derivative will always produce a value that is higher than the actual value at the point. The central derivative will produce, in most cases, both smaller errors and errors that are less systematic and more random.
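To put a number on that, here is a quick C check using exp(x), which is concave up everywhere (the example function is my own choice, not from the thread): the forward difference lands above the true derivative, while the central difference lands much closer.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0, h = 0.01;
    double exact   = exp(x);                              /* d/dx e^x = e^x      */
    double forward = (exp(x + h) - exp(x)) / h;           /* biased high here    */
    double central = (exp(x + h) - exp(x - h)) / (2 * h); /* much smaller error  */

    printf("exact   : %.12f\n", exact);
    printf("forward : %.12f  (error %+.2e)\n", forward, forward - exact);
    printf("central : %.12f  (error %+.2e)\n", central, central - exact);
    return 0;
}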
 

MrAl

Joined Jun 17, 2014
11,486
If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?
Hi,

As WBahn pointed out, the right-sided and left-sided results are just numerical derivatives, so they are only approximations. This means we can't read too much into their exact values.

Just to add a little more info...

When we find the mean of two numbers a and b we just add them and divide by two:
mean=(a+b)/2

The central difference derivative is also called the central means derivative because it is just the mean of the right-sided and left-sided derivatives:
f1=(f(x+h)-f(x))/h
f2=(f(x)-f(x-h))/h
and taking the average of these two:
fmean=(f1+f2)/2
so:
fmean=[(f(x+h)-f(x))/h+(f(x)-f(x-h))/h]/2=(f(x+h)-f(x-h))/(2*h)
and changing 2*h to h+h may not be as significant today as it was in the old days, when multiplications took much longer than additions, but depending on the system it might still be better to do that.
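A two-line check of that identity in C, reusing f(x)=x^2, x=4, h=0.01 from the earlier post (variable names are just mine); the mean of the two one-sided quotients and the central difference agree up to rounding:

#include <stdio.h>

static double f(double x) { return x * x; }   /* test function from the earlier post */

int main(void)
{
    double x = 4.0, h = 0.01;
    double f1 = (f(x + h) - f(x)) / h;            /* right-sided */
    double f2 = (f(x) - f(x - h)) / h;            /* left-sided  */
    double fmean   = (f1 + f2) / 2;               /* their mean  */
    double central = (f(x + h) - f(x - h)) / (h + h);

    printf("mean    = %.15f\n", fmean);
    printf("central = %.15f\n", central);
    return 0;
}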
 
Last edited:

WBahn

Joined Mar 31, 2012
30,060
and changing 2*h to h+h may not be as significant today as it was in the old days, when multiplications took much longer than additions, but depending on the system it might still be better to do that.
And on many systems 2*h is much faster than h+h because 2*h is simply a left-shift of h by one bit. Many C compilers optimize multiplications and divisions by integer powers of two into bit-shift operations. Whether that turns out to be faster than addition depends on the processor architecture.
 

MrAl

Joined Jun 17, 2014
11,486
And on many systems 2*h is much faster than h+h because 2*h is simply a left-shift of h by one bit. Many C compilers optimize multiplications and divisions by integer powers of two into bit-shift operations. Whether that turns out to be faster than addition depends on the processor architecture.
Hi,

Yeah, that is true for integers, but I wonder if they have a built-in way to 'shift' double floats too?
That's what we end up using most of the time because we have h=0.1, h=0.01, etc. I think the exponent is base 2 also, so it might be possible to shift that instead. It's been a long time since I had to do ASM math routines, though.
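For what it's worth, standard C does provide a portable way to do that kind of exponent adjustment on doubles: ldexp() (and scalbn()) in <math.h> scale a value by a power of two. Whether that beats a plain multiply depends on the compiler and library, so this is just a sketch of the idea:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double h = 0.01;
    double two_h = ldexp(h, 1);   /* ldexp(x, n) returns x * 2^n by adjusting the exponent */
    printf("%.17g\n", two_h);     /* prints 0.02 */
    return 0;
}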

Way back, maybe in the 80s, an algorithm like that was characterized for speed based solely on the number of (float) multiplications, because they took so long compared to additions. That changed with the introduction of the floating-point processor, which was eventually integrated with the integer core of the CPU onto one chip. Then multi-core CPUs came about, which could do several float operations at the same time because each core had its own floating-point unit. I checked a long time ago with an AMD quad core and found that multiplications and additions took almost the same amount of time because they were both done by the floating-point unit, so four cores meant nearly four times as many float operations as with just one core. But then that changed again when AMD decided to cheat on the established convention of one float unit per integer core: they made one 'core' out of two integer units sharing a single float unit, which results in slower float processing when all cores are in use. Their 8-core processors (all of them so far, and apparently all the new ones to come) have this downgraded architecture, so if we use all cores at the same time we get slower float processing than if each core had its own float unit, as had been standard. So an 8-core is really 8 integer cores plus 4 float units instead of 8+8 like it should be. Intel has an 8-core out now too, but I bet it's a true 8-core. It's also very expensive, though :)
 

joeyd999

Joined Jun 6, 2011
5,283
I think the exponent is base 2 also, so it might be possible to shift that instead. It's been a long time since I had to do ASM math routines, though.
Actually, the exponent of a float is a single byte, and is base 2. Simply incrementing or decrementing it multiplies the value by 2 or 1/2. This is even faster than a multibyte shift.

Edit: The exponent byte is shifted one bit to the right in the IEEE format. The MSB is sign (of the floating point number). Therefore, adding/subtracting 1 to/from the exponent is just slightly more difficult than a single byte increment/decrement.
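Just to make that concrete, here is a minimal C sketch of the same trick on a standard 64-bit IEEE-754 double, where the exponent field is 11 bits starting at bit 52: bumping the exponent field by one doubles the value. This is illustration only; it assumes a normal, nonzero value and ignores overflow, subnormals, infinities, and NaNs.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Double x by incrementing the exponent field of its IEEE-754 bit pattern. */
static double times_two(double x)
{
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);   /* reinterpret the bits of the double */
    bits += (uint64_t)1 << 52;        /* exponent field begins at bit 52    */
    memcpy(&x, &bits, sizeof bits);
    return x;
}

int main(void)
{
    double h = 0.01;
    printf("%.17g\n", times_two(h));  /* prints 0.02 */
    return 0;
}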
 
Last edited:

MrAl

Joined Jun 17, 2014
11,486
Actually, the exponent of a float is a single byte, and is base 2. Simply incrementing or decrementing it multiplies the value by 2 or 1/2. This is even faster than a multibyte shift.

Edit: The exponent byte is shifted one bit to the right in the IEEE format. The MSB is sign (of the floating point number). Therefore, adding/subtracting 1 to/from the exponent is just slightly more difficult than a single byte increment/decrement.
Hi,

Yes, that's right, thanks for pointing that out. It's been a while for me. Back in the 80s I had to write ASM routines for math, and I used that format too because it allowed for faster operations. I can't remember too well now, but I made my exponents quite large to allow very high-value numbers like 1e1000 and beyond. I probably still have the routines around, but the floating-point hardware that came after that period is so much faster now.
 

joeyd999

Joined Jun 6, 2011
5,283
Hi,

Yes, that's right, thanks for pointing that out. It's been a while for me. Back in the 80s I had to write ASM routines for math, and I used that format too because it allowed for faster operations. I can't remember too well now, but I made my exponents quite large to allow very high-value numbers like 1e1000 and beyond. I probably still have the routines around, but the floating-point hardware that came after that period is so much faster now.
Hey, I've written an awesome .asm stack-based (RPN) floating-point library that leaves C floats in the dust. And it supports 32- and 64-bit floats -- try to find that in a canned MCU library! I use it often -- it makes floating point a cinch on my .asm stuff.
 

WBahn

Joined Mar 31, 2012
30,060
Hi,

Yeah, that is true for integers, but I wonder if they have a built-in way to 'shift' double floats too?
That's what we end up using most of the time because we have h=0.1, h=0.01, etc. I think the exponent is base 2 also, so it might be possible to shift that instead. It's been a long time since I had to do ASM math routines, though.

Way back, maybe in the 80s, an algorithm like that was characterized for speed based solely on the number of (float) multiplications, because they took so long compared to additions. That changed with the introduction of the floating-point processor, which was eventually integrated with the integer core of the CPU onto one chip. Then multi-core CPUs came about, which could do several float operations at the same time because each core had its own floating-point unit. I checked a long time ago with an AMD quad core and found that multiplications and additions took almost the same amount of time because they were both done by the floating-point unit, so four cores meant nearly four times as many float operations as with just one core. But then that changed again when AMD decided to cheat on the established convention of one float unit per integer core: they made one 'core' out of two integer units sharing a single float unit, which results in slower float processing when all cores are in use. Their 8-core processors (all of them so far, and apparently all the new ones to come) have this downgraded architecture, so if we use all cores at the same time we get slower float processing than if each core had its own float unit, as had been standard. So an 8-core is really 8 integer cores plus 4 float units instead of 8+8 like it should be. Intel has an 8-core out now too, but I bet it's a true 8-core. It's also very expensive, though :)
Multiplying/dividing floats by integer powers of two is trivial -- you simply increase or decrease the exponent. However, I don't know if processor architectures take full advantage of this. I suspect they might, as I would think it is common enough to warrant the marginal increase in complexity.
 

MrAl

Joined Jun 17, 2014
11,486
Hello again,

Yes, it is hard to say without knowing the actual CPU. Statistically it probably would not help much, because multiplying by 2 (or another power of 2) is the only case that works so simply. For example, multiplying by 1.234, 2.345, or 837.453 would not work; only powers of 2 qualify. More likely this would be handled at the compile stage, because something has to ask and answer the question, "Is one argument a power of 2 or not?"
 