Discussion in 'Math' started by boks, Jan 28, 2009.
I'm opening a new thread for simple math questions. The first question is:
Why is it that 2*ln2 = ln4?
Easy to answer. Twice the logarithm of the square root of a number equals the logarithm of the number itself, and 2 is the square root of 4, so 2*ln2 = ln4.
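A quick numeric sanity check of that identity (a minimal Python sketch using the standard math module):

```python
import math

# Identity: ln(a^b) = b*ln(a); with a = 2, b = 2 this gives 2*ln(2) = ln(4).
two_ln2 = 2 * math.log(2)
ln4 = math.log(4)
print(two_ln2, ln4)  # both ~1.386294
```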
Better question. Why is the binary logarithm of a number almost equal to the sum of its natural logarithm and its common logarithm? E.g. log2(128) = 7.00 is approximately equal to log10(128) + ln(128) = 2.11 + 4.85 = 6.96.
logN(M) = logX(M)/logX(N), where X is any base (other than 1, of course).
Let X = e. Then:
log2(M) = ln(M)/ln(2) = ln(M) * 1.44269504089
log10(M) + ln(M) = ln(M)/ln(10) + ln(M) = ln(M) * (1/ln(10) + 1) = ln(M) * 1.4342944819
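The two multipliers above can be checked numerically; a minimal Python sketch:

```python
import math

# Exact factor that turns ln(M) into log2(M):
c_exact = 1 / math.log(2)        # ~1.44269504089
# Factor implied by the approximation log10(M) + ln(M):
c_approx = 1 / math.log(10) + 1  # ~1.4342944819
print(c_exact, c_approx)
```

The two constants agree to about two significant figures, which is the whole reason the approximation works.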
Another way to look at this is ln2 + ln2 = ln(2*2).
The natural log function transforms a product of numbers to a sum.
The natural exponential function does just the opposite: it transforms a sum into a product.
The reason that the ln and other log functions work the way they do is entirely due to the fact that they are inverses of exponential functions.
While your identities appear to be correct, I am having a hard time seeing how they prove the equation I proposed. Here is the way I would do it.
log2(x) = log2(e)*ln(x)
log10(x) = log10(e)*ln(x)
log2(x)-log10(x) = [log2(e) - log10(e)]*ln(x)
log2(x) = log10(x) + [1.442695-0.4342945]*ln(x)
log2(x) = log10(x) + [1.008400]*ln(x) ≈ log10(x) + ln(x)
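That derivation is an exact identity until the 1.008400 factor is dropped, which a short Python sketch confirms:

```python
import math

x = 128.0
lhs = math.log2(x)
# Exact identity: log2(x) = log10(x) + [log2(e) - log10(e)] * ln(x)
rhs = math.log10(x) + (math.log2(math.e) - math.log10(math.e)) * math.log(x)
# Approximation: drop the ~1.008400 factor entirely
approx = math.log10(x) + math.log(x)
print(lhs, rhs, approx)
```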
The two are "almost equal" in the same sense that pi is "almost equal to" 3.1.
Let's compare log2(x) and log(x) + ln(x).
log2(x) = log(x)/log(2) = log(x) * 1/log(2) ≈ log(x) * 3.321928
log(x) + ln(x) = log(x) + log(x)/log(e) = log(x)(1 + 1/log(e)) ≈ log(x) * 3.302585
So the two constants are the same in the first decimal place, which is OK for estimates and rough calculations.
The error is (1.442695 - 1.4342945)/1.442695 = 0.582%, which is less than 1%.
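The base-10 version of the comparison, and the resulting relative error, can be reproduced in a few lines of Python:

```python
import math

c_exact = 1 / math.log10(2)            # log2(x) = log10(x) * c_exact, ~3.321928
c_approx = 1 + 1 / math.log10(math.e)  # log10(x) + ln(x) = log10(x) * c_approx, ~3.302585
rel_err = (c_exact - c_approx) / c_exact
print(c_exact, c_approx, rel_err * 100)  # relative error ~0.582 %
```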
log2(128) = 7
ln(128) = 4.852030
log10(128) = 2.107209
4.852030 + 2.107209 = 6.959240
Error = (7.0 - 6.959240)/7 = 0.582%, which is less than 1%.
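The worked example for x = 128 in Python:

```python
import math

x = 128
exact = math.log2(x)                  # 7.0 exactly, since 128 = 2^7
approx = math.log10(x) + math.log(x)  # ~6.959240
err = (exact - approx) / exact
print(exact, approx, err * 100)       # relative error ~0.582 %
```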
Right, but it's not that much less. Like I said earlier in the same post, this error is on a par with approximating pi by 3.1.
On a more mundane plane, I would be pretty ticked if my bank's calculations were off by 1% or even .5%. I would never be able to get my checkbook balanced.
I don't have any problem with saying that log2(x) ≈ log(x) + ln(x), but it's only an approximation, and the error (not the relative error) increases as x gets larger.
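The growth of the absolute error with x is easy to see numerically (the relative error stays fixed at ~0.582%, so the absolute error scales with ln(x)):

```python
import math

# Relative error of the approximation is a constant ~0.582%,
# so the absolute error grows in proportion to ln(x).
for x in (2.0, 128.0, 1e6, 1e12):
    abs_err = math.log2(x) - (math.log10(x) + math.log(x))
    print(x, abs_err)
```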
Well, not quite. (pi - 3.1)/pi = 1.32, over twice the error percentage of the binary logarithm approximation.
Not if the error was in your favor.
It was presented as an approximation. Error values usually become greater as the quantity becomes greater.
Ratch, you're off by a factor of 100 in your calculation. (And yes, I realize that you neglected the % sign...) Do you not understand the phrase "on par with?" When I wrote that particular phrase, I first put "in the same magnitude" but changed it to what you saw, thinking that most of the people in this forum would concede that .6% and 1.3% were in the same ballpark. I should have known that there would be one who wouldn't recognize this.
In any case, two significant digits being correct is a long, long way from precision, which was my point. Back in the early 90s, Intel shipped a Pentium processor with an FPU that produced an error in some division operations. That error was in the 6th or 7th decimal place, so the relative error was four or five orders of magnitude smaller than the one we're talking about here. BTW, this recall cost Intel roughly half a billion dollars.
Not so. I would move my money to a bank where they were not so fast and loose with their calculations. The fact that their approximation favored me in one instance wouldn't give me much confidence in any of their other calculations.
You are right, I did forget the % sign. As you are trying to say, the approximation is a mathematical curiosity rather than a practical method of determining the binary logarithm of a number. It is easier to use log2(x) = ln(x)/ln(2) or log2(x) = log10(x)/log10(2). Either division will give the correct answer, and just about all hand calculators have those log functions.
Intel deserved to be taken to the wood shed and spanked. They thought they could pull a fast one by not disclosing their mistake, and charging the same for their defective chips as their corrected ones. It was another party that discovered the defect and notified the world about it.
Would anyone here like to borrow my old slide rule, which works on this principle?
I consider log2(x) ≈ ln(x) + log10(x) just a curiosity, relatively easy to demonstrate.
I think log2(x) = ln(x)/ln(2) = log10(x)/log10(2) is a more practical and exact formula if your calculator does not have log2. Maybe if you have a slide rule, ln(x) + log10(x) is the best choice.
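The exact change-of-base formulas mentioned above, sketched in Python (the function names here are just illustrative):

```python
import math

def log2_via_ln(x):
    # Exact change of base: log2(x) = ln(x) / ln(2)
    return math.log(x) / math.log(2)

def log2_via_log10(x):
    # Same result via common logs: log2(x) = log10(x) / log10(2)
    return math.log10(x) / math.log10(2)

print(log2_via_ln(128), log2_via_log10(128))  # both 7.0 up to rounding
```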
I don't understand?
It's because of a basic property of logarithms, namely this one:
log(a^b) = b*log(a), so you can see how 2*ln2 = ln(2^2) = ln4.
I think studiot is not questioning the verity of ln(x) + log10(x) as being equal to log2(x), but rather how would you use a slide rule to get it. Assuming your slide rule had ln and log scales (of my four slide rules, three have only log10 scales, and one of those has log log scales--none has an ln scale), you would have to add the length on, say, the log10 scale to the length on the ln scale. After doing that, what do you get? It's sort of apples and oranges.
If you had a slide rule why would you do these approximations at all?