I open a new thread for simple math questions. The first question is:

Why is it that 2*ln2 = ln4?

Quote:
Why is it that 2*ln2 = ln4?

Easy to answer. Two times the logarithm of the square root of a number is equal to the logarithm of the number: 2 ln(√n) = ln(n). With n = 4, that gives 2 ln 2 = ln 4.

Quote:
Why is it that 2*ln2 = ln4?

Another way to look at this is ln2 + ln2 = ln(2*2).
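For anyone who wants to see the identity numerically, here is a quick Python sketch (purely illustrative):

```python
import math

# Power rule for logs: k*ln(x) = ln(x**k), so 2*ln(2) = ln(2**2) = ln(4).
lhs = 2 * math.log(2)   # math.log is the natural log, ln
rhs = math.log(4)
print(lhs)                     # ≈ 1.3862943611198906
print(rhs)                     # ≈ 1.3862943611198906
print(math.isclose(lhs, rhs))  # True
```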
Better question: why is the binary logarithm of a number almost equal to the sum of its common logarithm and its natural logarithm? E.g., log2(128) = 7.00 is approximately equal to log10(128) + ln(128) = 2.11 + 4.85 = 6.96.

Quote:
Why is the binary logarithm of a number almost equal to the sum of its common logarithm and its natural logarithm?

The two are "almost equal" in the same sense that pi is "almost equal to" 3.1.
Quote:
The two are "almost equal" in the same sense that pi is "almost equal to" 3.1.

Since log2(x) = ln(x)/ln(2) ≈ 1.442695·ln(x) and log10(x) + ln(x) = (1/ln(10) + 1)·ln(x) ≈ 1.4342945·ln(x), the relative error is (1.442695 - 1.4342945)/1.442695 = 0.582%, which is less than 1%. So the two constants are the same in the first decimal place, which is OK for estimates and rough calculations.
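The two constants come straight from the change-of-base formula; a short Python sketch makes the comparison concrete:

```python
import math

# log2(x) = ln(x)/ln(2), and log10(x) + ln(x) = (1/ln(10) + 1)*ln(x),
# so the approximation amounts to comparing two constants.
c_exact  = 1 / math.log(2)        # ≈ 1.442695
c_approx = 1 / math.log(10) + 1   # ≈ 1.4342945
print((c_exact - c_approx) / c_exact)  # ≈ 0.00582, i.e. about 0.582%

x = 128
print(math.log2(x))                 # 7.0
print(math.log10(x) + math.log(x))  # ≈ 6.9592
```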
Quote:
Right, but it's not that much less. Like I said earlier in the same post, this error is on par with approximating pi by 3.1.

Well, not quite. (Pi - 3.1)/Pi = 1.32, over twice the error percentage of the binary logarithm approximation.
Quote:
On a more mundane plane, I would be pretty ticked if my bank's calculations were off by 1% or even .5%. I would never be able to get my checkbook balanced.

Not if the error was in your favor.
Quote:
I don't have any problem with saying that log2(x) ≈ log(x) + ln(x), but it's only an approximation, and the error (not the relative error) increases as x gets larger.

It was presented as an approximation. Error values usually become greater as the quantity becomes greater.
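The distinction between the absolute and the relative error is easy to see numerically; a small Python sketch (the x values are chosen arbitrarily):

```python
import math

# The relative error of log2(x) ≈ log10(x) + ln(x) is a constant (~0.58%),
# but the absolute error grows in proportion to ln(x).
for x in (2, 128, 10**6, 10**12):
    exact  = math.log2(x)
    approx = math.log10(x) + math.log(x)
    print(x, exact - approx, (exact - approx) / exact)
# The middle column (absolute error) grows with x;
# the last column (relative error) stays near 0.0058.
```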
Quote:
Originally Posted by Ratch
Mark44,
Well, not quite. (Pi - 3.1)/Pi = 1.32, over twice the error percentage of the binary logarithm approximation.

Ratch, you're off by a factor of 100 in your calculation. (And yes, I realize that you neglected the % sign...) Do you not understand the phrase "on par with"? When I wrote that particular phrase, I first put "in the same magnitude" but changed it to what you saw, thinking that most of the people in this forum would concede that .6% and 1.3% were in the same ballpark. I should have known that there would be one who wouldn't recognize this.
Quote:
Not if the error was in your favor.

Not so. I would move my money to a bank where they were not so fast and loose with their calculations. The fact that their approximation favored me in one instance wouldn't give me much confidence in any of their other calculations.
Quote:
Originally Posted by Mark44
Another way to look at this is ln2 + ln2 = ln(2*2).
The natural log function transforms a product of numbers to a sum.
The natural exponential function, \[e^x\], does just the opposite: it transforms a sum to a product: \[e^{2 + 2} = e^2 e^2\].
The reason that the ln and other log functions work the way they do is entirely due to the fact that they are inverses of exponential functions.

interesting stuff
Quote:
Originally Posted by Mark44
Another way to look at this is ln2 + ln2 = ln(2*2).
The natural log function transforms a product of numbers to a sum.
The natural exponential function, \[e^x\], does just the opposite: it transforms a sum to a product: \[e^{2 + 2} = e^2 e^2\].
The reason that the ln and other log functions work the way they do is entirely due to the fact that they are inverses of exponential functions.

Would anyone here like to borrow my old slide rule, which works on this principle?
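The slide-rule connection is exactly the product-to-sum property: the rule multiplies by adding lengths proportional to logs. A toy Python sketch of the principle (the function name is mine, just for illustration):

```python
import math

# A slide rule multiplies by adding log-scaled lengths:
# ln(a) + ln(b) = ln(a*b), so a*b = exp(ln(a) + ln(b)).
def slide_rule_multiply(a, b):
    return math.exp(math.log(a) + math.log(b))

print(slide_rule_multiply(2, 2))  # ≈ 4.0
print(slide_rule_multiply(3, 7))  # ≈ 21.0
```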
Quote:
maybe if you have a slide rule, ln(x) + log10(x) is the best choice

I don't understand.