Hi, I have this function:

z = 2.6*ln(4/z)

and I was trying to solve for z by iteration. I start at z = 3, but the iteration does not converge:

z(0) = 0.748
z(1) = 4.359
z(2) = -0.224

But if I use the almost identical function z = 0.26*ln(4/z), the same iteration finds the solution z = 0.527. So my question is: what starting value or method should I use to be able to solve the first equation?
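To see the two behaviors numerically, here is a quick Python sketch (my own illustration, not from the thread) of the naive fixed-point iteration for both equations:

```python
import math

def iterate(c, z, steps):
    """Naive fixed-point iteration z <- c*ln(4/z).

    Returns None if an iterate goes non-positive, where the real ln(4/z)
    is undefined and the iteration breaks down.
    """
    for _ in range(steps):
        if z <= 0:
            return None
        z = c * math.log(4 / z)
    return z

print(iterate(2.6, 3.0, 10))   # breaks down: iterates leave (0, 4) and go negative
print(iterate(0.26, 3.0, 50))  # settles near 0.527
```

The same starting point and the same naive scheme succeed for c = 0.26 and fail for c = 2.6, which is exactly the puzzle posed above.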
Try letting the estimate of z for each new iteration equal the average of the previous estimate and the calculated value. And BTW, I also get z(0) = 0.748.
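That averaging suggestion can be sketched in Python like this (my own code, assuming the average is taken at every step):

```python
import math

def averaged_iteration(z, steps):
    """Damped fixed-point iteration for z = 2.6*ln(4/z):
    average the previous estimate with the newly calculated value."""
    for _ in range(steps):
        z = (z + 2.6 * math.log(4 / z)) / 2
    return z

print(averaged_iteration(3.0, 25))  # about 1.91504
```

Averaging halves each correction, which tames the overshoot that sends the plain iteration past z = 4.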
I don't see how you are getting your values. If z = 0, then you're trying to calculate 2.6*ln(4/0), which is infinite. You shouldn't get negative values unless z > 4. If z = 1, you're calculating 2.6*ln(4/1), which is 3.604. If z = 2, you're calculating 2.6*ln(4/2), which is 1.802. So deal with that first.

After that, look at f(z) = z / ln(4/z) and see whether it is even possible to get a value of 2.6. As you approach z = 4 from the left, ln(4/z) goes toward +0, so f(z) goes toward +∞. As you approach z = 4 from the right, ln(4/z) goes toward -0, so f(z) goes toward -∞. So if your iterative approach crosses z = 4 at any point, things are likely to start flailing.

FWIW, I get z = 1.91505
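Those claims about f(z) = z / ln(4/z) are easy to check numerically (a quick sketch of my own):

```python
import math

def f(z):
    """f(z) = z / ln(4/z); undefined at z = 4, where ln(4/z) = 0."""
    return z / math.log(4 / z)

print(f(1.91505))  # close to 2.6, consistent with the quoted root
print(f(3.9999))   # huge and positive just left of z = 4
print(f(4.0001))   # huge and negative just right of z = 4
```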
Okay, so is z(n) the value of the RHS after iteration number n? If so, that explains the problem because if z(1) > 4, it has crossed the discontinuity in the function.
See:

z(0) = 2.6*ln(4/3) = 0.748
z(1) = 2.6*ln(4/0.748) = 4.359 (error: past the discontinuity)

So what value of z should I use in the next iteration? And for z = 0.26*ln(4/z), "my method works". Yes, I use this "method" too.
You can't expect any method to work well if the function is pathological. In general, there is no algorithm that, given f(x) and y, will find x such that f(x) = y. For any algorithm you come up with, someone can come up with an f(x) and a y such that the algorithm fails. Since you know that z > 0 (you are only interested in real-valued ln()), if you end up with a negative value for z, try starting over with a new initial guess. That's just one way to make your algorithm more resilient.
As alluded to earlier, using complex math you can continue iterating. My home-brew complex calculator converges at 0.038465+4.059435i, but as it hasn't been rigorously tested I can't guarantee it to be accurate.
How about just plugging that value into the equation and seeing if it works? You don't have to prove that your method is valid if the validity of the result can be proven from the result itself (except, of course, on a school exam). We know there's a real-valued solution, approximately 1.91505. This is trivial to verify:

4 / 1.91505 ≈ 2.0887
ln(2.0887) ≈ 0.73654
2.6 * 0.73654 ≈ 1.91500

Note that I intentionally let the round-off errors accumulate in each step. If I let the calculator track everything, it comes out to 1.91503.
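The same check as a one-liner in Python (my own sketch): substitute the candidate root and look at the residual.

```python
import math

# Residual of z = 2.6*ln(4/z) at the claimed real root
z = 1.91505
residual = z - 2.6 * math.log(4 / z)
print(residual)  # tiny: the value satisfies the equation to about 5 digits
```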
My solution was for the initial value of z = 3. Starting with a different value is a different problem, right? Of course I did plug my complex value into the calculator and check the result; the point is, it's the same tool I used to derive the solution. If you have a complex calculator handy you could verify it; no doubt a simple Google search would yield one. If I have time I'll do that...
No, starting with a different value is most definitely NOT a different problem. The solution and the initial value chosen for a particular method of finding a solution are completely independent things. Which solution, if any, you obtain from a given initial value is related to the algorithm used to solve it, not to the problem itself. If it were related to the problem, then every algorithm that attempts to solve the problem from that initial value would have to either find THAT solution or fail completely.

In general, a problem has zero, one, or many solutions over a given domain, and here we don't know what the domain of the problem actually is. If it is the real numbers (which is implied by the fact that a negative value caused the TS's method to fail, indicating that he is probably working in the real domain), then it has one solution. On the other hand, he used 'z' as the variable name and not 'x', which often implies that the problem is in the complex domain.

Complicating matters is that you can't really claim that you are using the same solution method as the TS since, as near as we can tell, the TS isn't allowing for complex logarithms. And even if they are, the complex log has infinitely many values as soon as there's an imaginary component, so which branch are you using, and how can you be sure that the TS is using the same one?
Ahh, I misunderstood the problem entirely. He was actually looking for the specific value of z that satisfies z = 2.6*ln(4/z). And yes, I get 1.915042161941528. I'm not sure how the "iteration" technique helps to solve the equation. If someone can offer an explanation, it might be a learning experience for me and others. Thanks.
How did YOU get that value for z? I'm not aware of any way to do it in closed form. If you used an online solver or something like Goal Seek in a spreadsheet, then it did it using some form of iterative approach (one of a much larger class of algorithms known collectively as "numerical methods"). Basically, you guess a value, evaluate some expression and, based on that, update your guess and try again. You keep doing this until your solution is close enough to the actual answer that you declare it "good enough". Ideally, the estimate gets closer and closer to the actual solution (or at least to AN actual solution), so the more iterations you make, the better your estimate for that particular solution.

For well-behaved functions, a common approach is Newton-Raphson, in which you use the value of f(x) and the derivative at that point, f'(x), to calculate where the tangent line intersects the axis (the method presupposes that you are trying to find a solution to f(x) = 0). The classic problem with this (and many) approaches is that if you are in a region where the function isn't changing very fast (derivative close to zero), it can shoot you way off into oblivion. From there, it might take a long time to get back, you might gravitate to a different solution (assuming you care about which solution you find, which often you do), or it might put you into a region where you won't ever find a solution no matter how many iterations you make.
Yeah, I used Newton's method. My code uses an approximation of the derivative, so the analytic form would probably work better. I wonder if the TS's technique simply works for certain classes of equations where, given an unknown a such that a = f(a), we have |f(f(b)) - a| < |f(b) - a| for b not equal to a, at least on some interval close to a; then simply reevaluating the function gets you closer to the solution. Sorry, but I'm out of my league on this one.
Hi, I asked this question because that same day I was solving a very simple circuit without the help of a computer, using only a calculator for the ln function:

Ic2 = Vt/R2*ln(Ic1/Ic2)

https://forum.allaboutcircuits.com/threads/trying-to-understand-current-flow.156067/#post-1346824

And I noticed that the iteration does not always converge when I used this "method one":

x(0) = 2.6*ln(4/3) = 0.748
x(1) = 2.6*ln(4/0.748) = 4.359

so I was forced to use "method two":

x(0) = 2.6*ln(4/3) = 0.748
(3 + 0.748)/2 = 1.874
x(1) = 2.6*ln(4/1.874) = 1.971
(1.874 + 1.971)/2 = 1.922
x(2) = 2.6*ln(4/1.922) = 1.905
(1.922 + 1.905)/2 = 1.913
x(3) = 2.6*ln(4/1.913) = 1.917
(1.913 + 1.917)/2 = 1.915
x(4) = 2.6*ln(4/1.915) = 1.915

But what was interesting to me is that sometimes "method one" does work for the same circuit; the only difference is in the resistor values. I was interested in why that is.
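There is a standard explanation for this (my addition, not from the thread): fixed-point iteration z ← g(z) converges near a root z* exactly when |g'(z*)| < 1. Here g(z) = c*ln(4/z), so g'(z) = -c/z, and at the root |g'(z*)| = c/z*. A quick numerical check:

```python
# Convergence of z <- c*ln(4/z) is governed by |g'(z*)| = c/z* at the root.
print(2.6 / 1.91505)   # about 1.36 > 1: plain iteration diverges ("method one" fails)
print(0.26 / 0.527)    # about 0.49 < 1: plain iteration converges

# "Method two" iterates z <- (z + g(z))/2, whose derivative at the root is
# (1 + g'(z*))/2 = (1 - 1.36)/2, about -0.18, so the averaged map converges.
print((1 - 2.6 / 1.91505) / 2)
```

This is why changing the resistor values can flip the behavior: they change the constant in front of the ln, and with it the ratio c/z* that decides convergence.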
Convergence algorithms are a whole science of their own. There is a wide range of solution techniques, with different advantages and applications. With fast computers these days, most users can afford to be very sloppy and just use brute force: who cares if it takes 200 iterations, if those get done in under a second? If you're doing it by hand, or you're writing an app, it's worth taking a moment at the outset to consider the efficiency of your algorithm. "Give me six hours to chop down a tree and I will spend the first four sharpening the axe." – Abraham Lincoln

It helps a lot if you can estimate the slope, but that's not always easy. From a developer's perspective, the crucial thing is to prevent your algorithm from blowing up. You have to trap any errors and be prepared to take a different tack.
Hello there,

This might have been said already, but you cannot expect an arbitrary function to converge to any one value, or to converge at all; there is no theory that says it must. What you can do is apply a known solution method like Newton's, and then you start to see results. Newton's is simply:

g2 = g1 - f(g1)/f'(g1)

where f'(x) is the first derivative of the function f(x). Do this four times and you get 1.915, I think. Of course you have to form a function f first. You can then also form an algebraic expression for the update, which represents the numerical iteration in closed form. A variation for very wild functions is:

g2 = g1 - a*f(g1)/f'(g1)

where 'a' is a damping factor between 0 and 1, not inclusive. The convergence is slower but also smoother for tough functions. The above can also be morphed into a matrix method for more than one variable, but that's a bit more complicated.

So for your original equation we get this iteration function (with no damping):

z = (13*(ln(4/z)+1)*z)/(5*z+13)
There is a class of functions that won't converge with iteration. Chaos theory takes advantage of this property (I think; it's been over 49 years since I got my degree in applied math). In fact, non-convergence can be used to solve many problems: cryptography, cellular automata, etc. Back in the day, before you could select from a variety of PRNG algorithms (and commercial DB systems), I had to code our own PRNG. I used an iterative method to find the square root of -1. It passed my boss's rigorous statistical testing and was in use for three decades.
Hi,

Hey, that sounds interesting; I might have to try that as soon as I get a chance. In the past I used the old XOR-gate idea in hardware with proper feedback paths, and since it is a hardware solution it is also easy to write a program that does the same thing, only slower, simply by using XOR statements. With a regular 16 MHz microcontroller you can even make a white-noise generator.
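That XOR-with-feedback idea is essentially a linear feedback shift register (LFSR). A minimal software sketch of my own, using the well-known 16-bit Galois LFSR with tap mask 0xB400 (a maximal-length configuration):

```python
def lfsr16_step(state):
    """One step of a 16-bit Galois LFSR, taps 0xB400
    (polynomial x^16 + x^14 + x^13 + x^11 + 1)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

# A maximal-length 16-bit LFSR visits all 2^16 - 1 nonzero states before repeating.
state = start = 0xACE1  # any nonzero seed works
period = 0
while True:
    state = lfsr16_step(state)
    period += 1
    if state == start:
        break
print(period)  # 65535
```

The software version is exactly the hardware shift-and-XOR, just serialized, which is why it runs slower but produces the identical bit stream.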