> Funny. I've never heard of this, but it makes sense. Are there classes of equations where this is a better solution? Please point me to an authoritative reference -- Google seems not to know of this either.

In the pricing of financial derivatives and the numerical solution of differential equations it is frequently used.
> If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?

Yes. The best example is the absolute value function at x=0, because there is a corner. The Wiener process is a function that is everywhere continuous, but nowhere differentiable. It is made up entirely of corners.
Hello again,

Funny. I've never heard of this, but it makes sense. Are there classes of equations where this is a better solution?

Please point me to an authoritative reference -- Google seems not to know of this either.
> If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?

In the limit they yield the same answer (if the function is differentiable at that point), but at a given value of Δh the one-sided derivatives have systematic errors. For instance, where the function is concave upward, the right-sided derivative will always produce a value that is higher than the actual derivative at the point. The central derivative will, in most cases, produce both smaller errors and errors that are less systematic and more random.
Hi,

If the two one-sided derivatives don't give you the same answer, isn't that equivalent to the function not being differentiable at x?
> and it may not be as significant today to change 2*h to h+h as it was in the old days when multiplications took much longer, but depending on the system it might still be better to do that.

And on many systems 2*h is much faster than h+h because 2*h is simply a left-shift of h by one bit. Many C compilers optimize multiplications and divisions by integer powers of two into bit-shift operations. Whether that turns out to be faster than addition depends on the processor architecture.
Hi,

> And on many systems 2*h is much faster than h+h because 2*h is simply a left-shift of h by one bit. Many C compilers optimize multiplications and divisions by integer powers of two into bit-shift operations. Whether that turns out to be faster than addition depends on the processor architecture.
> I think the exponent is base 2 also so that might be possible to shift instead. Been a long time since i had to do ASM math routines though.

Actually, the exponent of a float is a single byte, and is base 2. Simply incrementing or decrementing it multiplies the value by 2 or 1/2. This is even faster than a multibyte shift.
> which i think is second order whereas the first two are just first order.

It's the average of the first two.
Hi,

> Actually, the exponent of a float is a single byte, and is base 2. Simply incrementing or decrementing it multiplies the value by 2 or 1/2. This is even faster than a multibyte shift.
Edit: The exponent byte is shifted one bit to the right in the IEEE format. The MSB is sign (of the floating point number). Therefore, adding/subtracting 1 to/from the exponent is just slightly more difficult than a single byte increment/decrement.
Hi Dan,

> It's the average of the first two.
Hey, I've written an awesome .asm stack-based (RPN) floating point library that leaves C floats in the dust. And it supports 32 and 64 bit floats -- try to find that in a canned MCU library! I use it often -- makes floating point a cinch on my .asm stuff.
Yes that's right, thanks for pointing that out. It's been a while for me. Back in the '80s I had to write ASM routines for math, and I used that format too because it allowed for faster operations. I can't remember too well now, but I made my exponents quite large to allow very high-value numbers like 1e1000 and beyond. I probably still have the routines around, but the floating point hardware that came after that period is so much faster now.
Multiplying/dividing floats by integer powers of two is trivial -- you simply increase or decrease the exponent. However, I don't know if processor architectures take full advantage of this. I suspect they might, though, as I would think the operation is common enough to warrant the marginal increase in complexity.
Yeah, that is true for integers, but I wonder if they have a built-in way to 'shift' double floats too?
That's what we end up using most of the time because we have h=0.1, h=0.01, etc. I think the exponent is base 2 also, so it might be possible to shift instead. Been a long time since I had to do ASM math routines though.
Way back, maybe in the '80s, an algorithm like that was characterized for speed based solely on the number of (float) multiplications, because they took so long compared to additions. That changed with the introduction of the floating point processor, which was eventually integrated with the integer core of the CPU on one chip.

Then multi-core CPUs came about, which could do several float operations at the same time because each core had its own floating point unit. I checked a long time ago with an AMD quad core and found that multiplications and additions took almost the same amount of time, because both were done in the floating point unit. So four cores meant nearly four times as many float operations as with just one core.

But that changed again when AMD moved away from the established convention of one float unit per integer core. They made one 'core' out of two integer units sharing only ONE float unit, which results in slower float processing with multiple cores. Their 8-core processors (all of them so far, and apparently all the new ones to come) have this downgraded architecture, so if we use all cores at the same time we get slower float processing than if each core had its own float unit, which was the standard until they changed it. So an 8-core is really 8 integer cores plus 4 float units instead of 8+8 like it should be. Intel has an 8-core out now too, but I bet it's a true 8-core. It's also very expensive, though.