# How can CPUs be precise?

Discussion in 'General Electronics Chat' started by MegaMan, Jul 15, 2010.

1. ### MegaMan Thread Starter Member

Sep 25, 2009
53
0
I can get it when a CPU reaches a very high speed,
but the thing that I can't understand is:

why don't CPUs make mistakes? How??

How come a calculator's error rate is 0%?
Or is the only error in the speed of calculation?

2. ### tom66 Senior Member

May 9, 2009
2,613
214
CPUs do make mistakes, but it is very rare. When they make a mistake, the result is often a system crash of one sort or another.

A CPU is essentially a very complicated logic circuit, but it is still subject to nature. Cosmic rays, for example, can flip bits in memory and cause latch-up. Thanks to the error-checking features of modern memory and processors, both of these usually lead either to a complete crash or to a complete recovery. A computer is, almost always, either working or not. It is undesirable for processors to be allowed to quietly make mistakes.
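(To illustrate the idea behind that error checking, here is a toy sketch in Python of a single even-parity bit. This is far simpler than the ECC used in real memory, and the names are made up for the example, but it shows how a flipped bit can be detected.)

```python
def parity_bit(word):
    """Even parity: 1 if the word has an odd number of 1 bits."""
    return bin(word).count("1") % 2

# Store an 8-bit word together with its parity bit.
data = 0b10110010
stored_parity = parity_bit(data)

# A cosmic ray flips bit 3.
corrupted = data ^ (1 << 3)

# On readback, the recomputed parity no longer matches.
print(parity_bit(data) == stored_parity)       # True: intact word checks out
print(parity_bit(corrupted) == stored_parity)  # False: single-bit error detected
```

Real ECC memory uses Hamming-style codes that can also correct single-bit errors, but the detect-and-react principle is the same.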

3. ### Ghar Active Member

Mar 8, 2010
655
73
I guess you could think of it like a chain of dominoes or something like that.
If you set them up, they must fall in sequence unless something awful happens (like a pet or a gust of wind). What happens is very deterministic, and it's essentially impossible for it to happen differently.

The pets and gusts of wind in a processor/computer are avoided by strong design guidelines which address the issues: race conditions, crosstalk, ground bounce, interference, and so on.
They're either eliminated entirely or calculated to be harmless or extremely rare.

4. ### sceadwian

Jun 1, 2009
499
37
It's also important to note that no CPU-derived calculation is without error. The precision limits the error to a known value if the system is operating properly, often so far below consequence that it can be ignored. But no computer anywhere can be precise to an infinite number of bits, and no computer anywhere can deal with transcendental numbers directly; it only handles a portion of them, accurate enough to be useful.

5. ### tom66 Senior Member

May 9, 2009
2,613
214
sceadwian, that is not quite true. A computer can be precise. For example, dividing 100 by 2 will give you the integer 50 if you use integer inputs. That is precise. I think you are talking about floating point numbers, because if you divided 100 by 2 there you might well get 49.99999999.

A CPU has to be exact, and so does the rest of the system. For example, I tested some memory once. One bit was faulty at around the 100 megabyte mark, and this was enough to cause Ubuntu to crash once it reached the desktop (kernel panic).

6. ### GetDeviceInfo Senior Member

Jun 7, 2009
1,571
230
CPUs are state machines; they don't travel off on their own, but rather follow predefined sequential states.

Manufacturers strive to fulfill functionality, and over the years have refined the fundamental requirements of number crunching.

If you look back, you'll see many limitations placed on developing cores by the manufacturers as they discovered shortfalls in their devices. Even today, long errata sheets accompany many devices.

Commercially successful devices are a result of their dependability.

7. ### Norfindel Active Member

Mar 6, 2008
235
9
The same way that flip-flops and comparators can be precise. In the end, it's just a bunch of voltage comparators, storage of bits, shift registers, etc. For example, you can divide a number by 2 by shifting all the bits to the right by 1 position. That's all. It won't often fail.
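(A quick illustration of that shift trick in Python. Note it is floor division for non-negative integers; negative numbers and remainders behave differently.)

```python
# Dividing by 2 with a right shift: each bit moves down one place value.
x = 100
print(x >> 1)    # 50, same as 100 // 2

# It generalizes to any power of two: shifting by n divides by 2**n.
print(200 >> 3)  # 25, same as 200 // 8

# The result is floored, so odd numbers lose the remainder.
print(101 >> 1)  # 50, not 50.5
```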

8. ### Wendy Moderator

Mar 24, 2008
20,874
2,654
If it fails, it is bad.

The thing about CPUs is that they are just devices. The software is as reliable as the coder (writer, author, whatever). Two CPUs will run the same software equally reliably, or you have a bad device. Software can have lots of bugs, but that is almost never the CPU's fault, no matter what the programmer will say.

There are many devices that are borderline. These should be scrapped at the manufacturer, but some might slip through from time to time.

9. ### sceadwian

Jun 1, 2009
499
37
I'm sorry tom66, but that is not what I was saying.
100/2 results in an answer which computes perfectly.
100/2, even in floating point, will not result in an answer of 49.99~ unless there is an error in the calculation.
1/3 will not calculate so neatly in the same system; it inherently cannot. The system can be as precise as it is designed to be, and still fail to reach the precision required by an application it wasn't designed for.
1/3 == 0.333~ (the ~ means an infinite repetition of the digit 3)

Not that I know any calculus, but this is the heart of it: the limit, the choosing of the boundaries of mathematical accuracy, which can, by choosing those boundaries, actually produce a useful answer using numbers which have no humanly describable discrete value.
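(You can see that gap in Python, for example: the fractions module keeps the exact rational 1/3, while the float is the nearest binary approximation to it.)

```python
from fractions import Fraction

exact = Fraction(1, 3)   # the true rational number 1/3
approx = 1.0 / 3.0       # nearest IEEE 754 double to 1/3

# The float is as close as a double can get, yet still not equal.
print(float(exact) == approx)         # True: it IS the nearest double...
print(Fraction(approx) == exact)      # False: ...but that double is not 1/3
print(abs(Fraction(approx) - exact))  # the tiny but nonzero error, as an exact fraction
```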

10. ### tom66 Senior Member

May 9, 2009
2,613
214
You can get very precise, but never entirely accurate, when dealing with real numbers. I guess I should have checked my answer first.

An example of floating point numbers falling over is the sum of 0.1 ten times:

```python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
0.99999999999999989
```
You will notice this is not the same as 0.1 times ten, even though they should mathematically be equal; there is a tiny error.

```python
>>> 0.1 * 10
1.0
```
Here we note the error is in the sixteenth decimal place.

```python
>>> (0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1) - 1.0
-1.1102230246251565e-16
```
And it is minuscule. Yet there is an error.

You do not have to use division or repeating decimals to get errors with floating point numbers. I only used addition and subtraction.

This is the point I am making. Computers can be precise or "good enough" depending on how far you are willing to go.

You can use bignums or special representations. For example, my Casio calculator will display recurring decimals and surds appropriately. Those are both exact representations. But sometimes you cannot convert something exactly. For example, pi is 3.14159265358979323846 to most computers; any more digits would exceed even the 80-bit long double.
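(Python's standard library has one such special representation: the decimal module stores base-10 digits exactly, so the 0.1 example from earlier comes out clean. A quick sketch:)

```python
from decimal import Decimal

# Decimal keeps base-10 digits exactly, so 0.1 has an exact representation.
tenth = Decimal("0.1")
total = sum([tenth] * 10)

print(total)       # 1.0
print(total == 1)  # True: no rounding error this time

# The binary float version still misses:
print(sum([0.1] * 10) == 1)  # False
```

Of course, Decimal only moves the problem: 1/3 still cannot be written exactly in base 10 either.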

Bringing up pi brings me to a curious mathematical oddity: e^pi minus pi is so close to 20 you'd swear the computer had made an error. And yet it is just a coincidence.

```python
>>> import math
>>> (math.e ** math.pi) - math.pi
19.99909997918947
```
So at what point can you no longer tell an error from a correct result?

11. ### sceadwian

Jun 1, 2009
499
37
Knowing the equations... there is no error in the math of the C equations you specified; the error is in the algorithm used to determine the approximate results of the math. Pure mathematics doesn't generally even attempt to specify the exact numbers involved, which is why they invented calculus =)
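(A rough illustration of that idea of a limit, using the partial sums 0.3, 0.33, 0.333, ... as stand-ins for 1/3: each extra digit shrinks the remaining error by a factor of ten, and you stop once the error is below what your application needs.)

```python
# Approximate 1/3 by adding one decimal digit at a time.
target = 1.0 / 3.0
partial = 0.0
for digits in range(1, 8):
    partial += 3 / 10.0 ** digits
    # The error drops by roughly 10x per added digit.
    print(digits, abs(target - partial))
```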

12. ### tom66 Senior Member

May 9, 2009
2,613
214
It's Python, by the way... much easier than C! Though I must admit it's slower than C because it's interpreted, it's plenty fast on most computers for most applications. After coding in C for my latest PIC project, I can say I welcome Python any day.

I am only making the point that a processor can be exact and inexact, and both are to be expected. I thought that you were saying floating point numbers are exact except for recurring decimals, which I showed was incorrect.

13. ### Norfindel Active Member

Mar 6, 2008
235
9
You shouldn't have gotten that result. Adding .1 ten times results in 1 when done in OpenOffice Calc, which makes sense, as the floating-point processor works with base numbers and exponents, so that .1 is actually $1 \cdot 10^{-1}$

Are you sure Python isn't using some really low-accuracy format? Floating point shouldn't have any problems at all with .1 added ten times.

14. ### tom66 Senior Member

May 9, 2009
2,613
214
It's a known flaw with IEEE754 floating point numbers!

Python uses the double, 64-bit IEEE754 floating point format, which is about as precise as you can get. You can get 80-bit long double formats on x86 and some other processors, but these are not portable.

The problem stems from the fact that 0.1 cannot be represented exactly in binary. The result, when converted into binary, is:

```
1001100110011001100110011001100110011001100110011010
```
Note it is recurring... it is impossible to represent exactly using a finite binary sum.

OpenOffice is probably rounding the displayed result to the nearest few digits, which is fine, but it hides the roundoff error rather than eliminating it.
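(You can see that recurring pattern directly by dumping the raw bytes of the double; here is a sketch using Python's struct module. The run of 9s in the hex comes from the repeating 1001 pattern in binary, with the last digit rounded up.)

```python
import struct

def double_bits(x):
    """Return the 8 bytes of an IEEE 754 double as big-endian hex."""
    return struct.pack(">d", x).hex()

print(double_bits(0.1))  # 3fb999999999999a  <- repeating 9s, rounded to ...a
print(double_bits(0.5))  # 3fe0000000000000  <- 1/2 is exact: all-zero mantissa
```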

15. ### kubeek AAC Fanatic!

Sep 20, 2005
4,789
836
tom66 is right. The point is that 0.1 isn't represented as $1 \cdot 10^{-1}$; floating point doesn't use a base-10 exponent at all, so 0.1 has to be stored as a binary fraction.

16. ### zxsa Member

Jun 11, 2010
31
2
I think the answer to the original poster's question, is that processors are not necessarily precise (the level to which a processor is precise depends on its design which again depends on its intended application), but that they are deterministic.

Deterministic means that you can add up 0.1 + 0.1 + 0.1 + .... + 0.1 and get the exact same answer each and every time - even though it is not absolutely correct - regardless of when you do the calculation, what the current conditions are, how many times you do the calculation, etc.

The reason why this can be is that processors work with digital electronics. A bit is either on or off; there is no in-between state. The design of the components inside a processor is such that the voltage levels can drift by some margin before a bit that must be in one state ends up being detected as the other state.

A previous poster already mentioned a few reasons why a bit's state can change. These are typically cosmic rays (aka single-event upsets), extreme heat causing breakdown in the silicon (usually not recoverable!), brown-out (due to a bad external design not supplying sufficient power to the processor), a bad memory bit (usually in an external memory device), or an unstable clock (again, an external device usually generates the clock signal for the processor, typically using a crystal circuit, which tends to be fairly reliable).

There is actually specific research into high-reliability processors. These are processors for applications where you typically find the above problems. For example, for space/satellite application you need radiation hardened processors, for automotive (and military) use you need processors that will continue to work in a far wider temperature range.
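(That determinism is easy to demonstrate. Assuming ordinary hardware and CPython, the same floating-point sum produces bit-for-bit the same, slightly wrong, answer on every repetition.)

```python
# The same calculation, repeated: inexact, but identical every single time.
results = [sum([0.1] * 10) for _ in range(1000)]

print(results[0])                            # 0.9999999999999999 (not exactly 1)
print(all(r == results[0] for r in results))  # True: perfectly repeatable
```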

17. ### Norfindel Active Member

Mar 6, 2008
235
9
You're right; OpenOffice must be rounding the numbers after some number of decimal places. Floating point numbers use base 2, so it's mantissa $\times 2^{exponent}$. That means you would need an integer that, divided by $2^{something}$, equals 0.1, and that doesn't exist, right?
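(Right. You can even ask Python for that integer ratio directly: `float.as_integer_ratio` returns the exact fraction a double actually stores, and for 0.1 it is a power-of-two-denominator fraction close to, but not exactly, 1/10.)

```python
# The exact rational value stored for the double closest to 0.1:
num, den = (0.1).as_integer_ratio()
print(num)             # 3602879701896397
print(den)             # 36028797018963968
print(den == 2 ** 55)  # True: a power-of-two denominator, as expected

# 0.5, by contrast, fits a power-of-two denominator exactly.
print((0.5).as_integer_ratio())  # (1, 2)
```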

18. ### tom66 Senior Member

May 9, 2009
2,613
214
It's true for almost every multiple of one tenth:

```python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
>>> 0.1
0.10000000000000001
>>> 0.2
0.20000000000000001
>>> 0.3
0.29999999999999999
>>> 0.4
0.40000000000000002
>>> 0.5
0.5
>>> 0.6
0.59999999999999998
>>> 0.7
0.69999999999999996
>>> 0.8
0.80000000000000004
>>> 0.9
0.90000000000000002
```
0.5 is the only one which comes out exact, because 1/2 has a power-of-two denominator.

CPUs (and computers) are entirely deterministic machines. Amazingly complicated, but precise. Like a watch, they work perfectly unless broken. Computers cannot be random, because they are logic circuits, executing every instruction precisely and on time. A computer which is random, and thus not deterministic, is what is known in the computer industry as broken.

Last edited: Jul 24, 2010
19. ### MegaMan Thread Starter Member

Sep 25, 2009
53
0
Thank you, everyone.