How can CPUs be precise?

Thread Starter

MegaMan

Joined Sep 25, 2009
53
I can accept that a CPU reaches a very high speed, but the thing I can't understand is:

why don't CPUs make mistakes? How??

How come a calculator's error rate is 0%? Or is the error only in the speed of calculation?
 

tom66

Joined May 9, 2009
2,595
CPUs do make mistakes, but it is very rare. When they do, the result is often a system crash of one sort or another.

A CPU is essentially a very complicated logic circuit, but it is subject to nature. Cosmic rays, for example, can flip bits in memory or cause latch-up. Thanks to the error-checking features of modern memory and processors, both of these usually lead either to a complete crash or to a complete recovery. A computer is, almost always, either working or not. It is undesirable for processors to be allowed to make mistakes.
 

Ghar

Joined Mar 8, 2010
655
I guess you could think of it like a chain of dominoes or something like that.
If you set them up, they must fall in sequence unless something awful happens (like a pet or a gust of wind). What happens is very deterministic, and it's essentially impossible for it to happen differently.

The pets and gusts of wind in a processor/computer are avoided by strong design guidelines that deal with the failure modes: race conditions, crosstalk, ground bounce, interference, and so on.
They're either eliminated entirely or engineered to be harmless or extremely rare.
 

sceadwian

Joined Jun 1, 2009
499
It's also important to note that no CPU-derived calculation is without error. If the system is operating properly, the precision limits the error to a known value, often so far below consequence that it can be ignored. But no computer anywhere can be precise to an infinite number of bits, and no computer anywhere can deal with transcendental numbers directly; it only holds an approximation of them that is precise enough to be useful.
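For instance, here is what pi looks like to a Python interpreter (a quick sketch of the point; the stored 64-bit double carries only about 16 significant digits):

Rich (BB code):
>>> import math
>>> math.pi   # a finite approximation standing in for pi
3.1415926535897931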
 

tom66

Joined May 9, 2009
2,595
sceadwian, that is not quite true. A computer can be precise. For example, dividing 100 by 2 will give you the integer 50 if you use integer inputs. That is precise. I think you are talking about floating-point numbers, because if you divided 100 by 2 in floating point you might well get 49.99999999.

A CPU has to be exact, and so does the rest of the system. For example, I tested some memory once. One bit was faulty at around the 100 megabyte mark, and this was enough to cause Ubuntu to crash (kernel panic) once it reached the desktop.
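As a quick check in a Python 2 interpreter (the same language used later in this thread), the integer case really is exact:

Rich (BB code):
>>> 100 / 2   # integer inputs give an exact integer result in Python 2
50
>>> 100 % 2   # and no remainder is lost
0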
 

GetDeviceInfo

Joined Jun 7, 2009
2,192
CPUs are state machines; they don't wander off on their own, but rather follow predefined sequential states.

Manufacturers strive to fulfill functionality, and over the years they have refined the fundamental requirements of number crunching.

If you look back, you'll see many limitations placed on developing cores by the manufacturers as they discovered shortfalls in their devices. Even today, long errata sheets accompany many devices.

Commercially successful devices are a result of their dependability.
 

Norfindel

Joined Mar 6, 2008
326
The same way that flip-flops and comparators can be precise. In the end, it's just a bunch of voltage comparators, storage of bits, shift registers, etc. For example, you can divide a number by 2 by shifting all the bits to the right by 1 position. That's all. It won't fail often.
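For example, in Python (a quick sketch of the shift trick; bin() just shows the bit patterns):

Rich (BB code):
>>> 100 >> 1            # shift right one position = divide by 2
50
>>> bin(100), bin(50)   # the bits simply move over
('0b1100100', '0b110010')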
 

Wendy

Joined Mar 24, 2008
23,415
If it fails, it is bad.

The thing about CPUs is that they are devices. The software is only as reliable as the coder (writer, author, whatever). Two CPUs will run the software equally reliably, or you have a bad device. Software can have lots of bugs, but this is almost never the CPU's fault, no matter what the programmer says.

There are many devices that are borderline. These should be scrapped at the manufacturer, but some might slip through from time to time.
 

sceadwian

Joined Jun 1, 2009
499
I'm sorry tom66, but that is not what I was saying.
100/2 results in an answer which computes perfectly.
100/2, even in floating point, will not result in an answer of 49.99~ unless there is an error in the calculation.
1/3 will not calculate so neatly in the same system; it inherently cannot. The system can be as precise as it was designed to be, and still fail to reach the precision required for an application it wasn't designed for.
1/3 == 0.333~ (the ~ means an infinite repetition of the digit 3)

Not that I know any calculus, but this is the heart of it: the limit, the choosing of the boundaries of mathematical accuracy, which can, by choosing those boundaries well, actually produce a useful answer using numbers which have no humanly describable discrete value.
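As a quick sketch of what a 64-bit double does with 1/3 (it simply runs out of bits after about 16 significant digits):

Rich (BB code):
>>> 1.0 / 3.0
0.33333333333333331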
 

tom66

Joined May 9, 2009
2,595
You can get very precise, but never entirely accurate, when dealing with real numbers. I guess I should have checked my answer first.

An example of floating point numbers falling over is the sum of 0.1 ten times:

Rich (BB code):
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
0.99999999999999989
You will notice this is not the same as 0.1 times ten, even though mathematically they should be equal; there is a tiny error.

Rich (BB code):
>>> 0.1 * 10
1.0
Here we note the error is in the sixteenth decimal place.

Rich (BB code):
>>> (0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1) - 1.0
-1.1102230246251565e-16
And it is minuscule. Yet, there is an error.

You do not have to use division or repeating decimals to get errors with floating point numbers. I only used addition and subtraction.

This is the point I am making. Computers can be precise or "good enough" depending on how far you are willing to go.

You can use bignums or special representations. For example, my Casio calculator will display recurring decimals and surds appropriately. Those are both exact representations. But sometimes you cannot convert something exactly. For example, pi is 3.14159265358979323846 to most computers, as any higher resolution would exceed the 80-bit long double.
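As one concrete example of such a representation, Python's decimal module stores base-10 digits exactly (a quick sketch; this is not what the earlier examples used):

Rich (BB code):
>>> from decimal import Decimal
>>> sum([Decimal('0.1')] * 10)   # exact decimal arithmetic, no binary roundoff
Decimal('1.0')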

Bringing up pi brings me to a curious mathematical oddity: e^pi minus pi is so close to 20 that you'd swear the computer had made an error. And yet, it is just a coincidence.

Rich (BB code):
>>> import math
>>> (math.e ** math.pi) - math.pi
19.99909997918947
So what is the point at which you can no longer tell an error from a correct result?
 

sceadwian

Joined Jun 1, 2009
499
Knowing the equations... There is no error in the math of the C equations you specified; the error is in the algorithm used to compute the approximate results of that math. Pure mathematics doesn't generally even attempt to specify the exact numbers involved; this is why they invented calculus =)
 

tom66

Joined May 9, 2009
2,595
It's Python, by the way... much easier than C! Though I must admit it's slower than C because it's interpreted, it's plenty fast on most computers for most applications. After coding in C for my latest PIC project, I can say I welcome Python any day.

I am only making the point that a processor can be both exact and inexact, and both are to be expected. I thought you were saying floating-point numbers are exact except for recurring decimals, which I showed was incorrect.
 

Norfindel

Joined Mar 6, 2008
326
You shouldn't have got that result. An addition of .1 ten times results in 1 when done in OpenOffice Calc, which makes sense, as the floating-point processor works with base numbers and exponents, so that .1 is actually \(1 \cdot 10^{-1}\).

Are you sure Python isn't using some really low-accuracy format? Floating point shouldn't have any problems at all with .1 added ten times.
 

tom66

Joined May 9, 2009
2,595
It's a known flaw with IEEE754 floating point numbers!

Python uses the double, 64-bit IEEE754 floating point format, which is about as precise as you can get. You can get 80-bit long double formats on x86 and some other processors, but these are not portable.

The problem stems from the fact that 0.1 cannot be represented exactly in binary. The mantissa that actually gets stored, once 0.1 is converted and rounded, is:

Rich (BB code):
1001100110011001100110011001100110011001100110011010
Note the repeating 1001 pattern... the true value recurs forever, so it is impossible to represent exactly with a finite number of bits (the final 1010 is where the rounding cut it off).

OpenOffice is probably rounding the result to a few decimal places for display, which is fine, but it hides the small roundoff error rather than eliminating it.
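You can inspect the rounded value directly (a quick sketch; float.hex() exists from Python 2.6 onwards):

Rich (BB code):
>>> (0.1).hex()      # the rounded binary mantissa, ending in ...a
'0x1.999999999999ap-4'
>>> '%.20f' % 0.1    # the value the double actually stores
'0.10000000000000000555'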
 

zxsa

Joined Jun 11, 2010
31
I think the answer to the original poster's question is that processors are not necessarily precise (how precise a processor is depends on its design, which in turn depends on its intended application), but that they are deterministic.

Deterministic means that you can add up 0.1 + 0.1 + 0.1 + .... + 0.1 and get the exact same answer each and every time - even though it is not absolutely correct - regardless of when you do the calculation, what the current conditions are, how many times you do the calculation, etc.
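For instance (a quick sketch of that repeatability; the answer is slightly off, but identically off on every run):

Rich (BB code):
>>> sum([0.1] * 10)
0.99999999999999989
>>> sum([0.1] * 10) == sum([0.1] * 10)   # bit-for-bit identical each time
True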

The reason why this can be is that processors work with digital electronics. A bit is either on or off. There is no in-between state. The design of the components inside a processor is such that the voltage levels can drift by some margin before a bit that should be in one state ends up being detected as the other state.

A previous poster already mentioned a few reasons why a bit's state can change. These are typically cosmic rays (aka single-event upsets), extreme heat causing breakdown in the silicon (usually not recoverable!), brown-out (due to a bad external design not supplying sufficient power to the processor), a bad memory bit (usually in an external memory device), or an unstable clock (again, an external device usually generates the clock signal for the processor, typically using a crystal circuit, which tends to be fairly reliable).

There is actually specific research into high-reliability processors, aimed at applications where you typically find the above problems. For example, for space/satellite applications you need radiation-hardened processors; for automotive (and military) use you need processors that will keep working over a far wider temperature range.
 

Norfindel

Joined Mar 6, 2008
326
tom66 said:
It's a known flaw with IEEE754 floating point numbers! [...]
You're right; OpenOffice should be rounding the numbers after some number of decimal places. Floating-point numbers use base 2, so it's mantissa\(\times 2^{exponent}\). That means you need an integer that, divided by \(2^{something}\), equals 0.1, which doesn't exist, right?
 

tom66

Joined May 9, 2009
2,595
It's true for almost every multiple of 0.1:

Rich (BB code):
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.1
0.10000000000000001
>>> 0.2
0.20000000000000001
>>> 0.3
0.29999999999999999
>>> 0.4
0.40000000000000002
>>> 0.5
0.5
>>> 0.6
0.59999999999999998
>>> 0.7
0.69999999999999996
>>> 0.8
0.80000000000000004
>>> 0.9
0.90000000000000002
0.5 is the only one that works, because its denominator, 2, is a power of two, so 1/2 fits evenly in binary.
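By the same token, any fraction whose denominator is a power of two behaves perfectly (a quick check):

Rich (BB code):
>>> 0.125 + 0.125 + 0.125 + 0.125 + 0.125 + 0.125 + 0.125 + 0.125   # 1/8 = 2^-3 is exact
1.0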

CPUs (and computers) are entirely deterministic machines. Amazingly complicated, but precise. Like a watch, they work perfectly unless broken. Computers cannot be random, because they are logic circuits, executing every instruction precisely and on time. A computer which is random, and thus not deterministic, is what is known in the computer industry as broken.
 