# probability of failure in a cct

Discussion in 'General Electronics Chat' started by toffee_pie, Jan 5, 2015.

1. ### toffee_pie Thread Starter Active Member

Guys

Happy 2015 to everyone.

Quick question: the probability of failure of a product increases with component count. Does anyone have the formula to calculate this? I learnt it somewhere but have forgotten how it's worked out.

On a side note, how does this probability relate to modern consumer goods? They are increasingly complex, and in turn have more parts. Does that mean their chances of failing are increasing too?

cheers

2. ### kubeek AAC Fanatic!

IIRC there is some military standard widely used for calculating MTBF, hopefully that might get you in the right direction.

3. ### toffee_pie Thread Starter Active Member

I had some (1 − x) formula but can't remember it. Without going into any mad calculations, I'm sure I had a relatively simple formula from years back for the probability of failure of a component.

4. ### Alec_t AAC Fanatic!

I would say definitely yes. If modern gadgets survive the warranty period you're in luck.

5. ### Lundwall_Paul Active Member

MTBF is much more than a simple calculation. Working at a defense company, we took our first-build product and subjected it to very harsh tests to beat the life out of it. Testing included vibration tables, temperature cycling from +75 °C to -60 °C, explosive atmosphere, altitude, acoustic noise, lightning, ESD…
After each failure was analyzed, design improvements were made to increase the MTBF of the product. Testing was then repeated to verify the fix. MIL-STD-781C may help; it is a public document that can be downloaded.

6. ### Papabravo Expert

The derivation starts with an assumption about the failure characteristics of the components. For example, the failures could be normally distributed with mean μ and standard deviation σ, or they could be exponentially distributed with parameter λ. Then you compute the distribution of the first failure among the collection of components. It is not trivial stuff.
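For the exponential case this works out neatly: the minimum of independent exponential lifetimes is itself exponential, with rate equal to the sum of the individual rates. A minimal Python sketch (the three component rates are made-up illustrative values, not data from any real part) checks this against a simulation:

```python
import random

# Per-component failure rates (failures per hour) - illustrative values only
rates = [1e-4, 2e-4, 5e-5]

# Analytically: the min of independent exponentials is exponential
# with rate equal to the sum of the individual rates.
system_rate = sum(rates)
analytic_mtbf = 1.0 / system_rate  # mean time to first failure

# Monte Carlo check: draw one lifetime per component, keep the earliest
random.seed(1)
trials = 200_000
total = 0.0
for _ in range(trials):
    total += min(random.expovariate(lam) for lam in rates)
simulated_mtbf = total / trials

print(f"analytic MTBF:  {analytic_mtbf:.0f} h")
print(f"simulated MTBF: {simulated_mtbf:.0f} h")
```

The normally distributed case has no such closed form for the first failure, which is where the "not trivial" part comes in.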

7. ### #12 Expert

And if the machine is large enough, it is guaranteed to fail regularly. Example: teams of workers were needed to replace vacuum tubes in the ENIAC computer.

8. ### toffee_pie Thread Starter Active Member

What I was looking for was this formula, but it brings some surprising results.

F_system = 1 − (1 − p)^N ≈ N × p (for small p)

So if each building block has a failure rate of 100 ppm and there are 20 blocks or modules in a system, the system failure rate is roughly 20 × 100 ppm = 2000 ppm, or about 1 in 500.

Obviously, the larger the number of individual blocks in a system, the more failures.

Doesn't bode well for modern devices with hundreds of building blocks.
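The exact series-system formula and the N × p approximation can be sketched in a few lines of Python, using the 100 ppm / 20-module numbers from above:

```python
def system_failure_prob(p: float, n: int) -> float:
    """Probability that a series system of n blocks fails,
    assuming each block fails independently with probability p."""
    return 1.0 - (1.0 - p) ** n

p = 100e-6   # 100 ppm per module
n = 20       # modules in the system

exact = system_failure_prob(p, n)
approx = n * p  # small-p approximation

print(f"exact:  {exact:.6f}  (~1 in {1 / exact:.0f})")
print(f"approx: {approx:.6f}  (~1 in {1 / approx:.0f})")
```

At these small per-module probabilities the two agree to within a fraction of a percent; the approximation only breaks down as N × p approaches 1.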

9. ### Lestraveled Well-Known Member

If you want to read the bible of electronic reliability, google and download "MIL-HDBK-217F". I used to work for an engineer who grew up at NASA. Before I could be considered anything more than an unwashed cretin, I had to be fluent in this guide.

10. ### ronv AAC Fanatic!

Not necessarily. I can remember the first disk drives: about 0.2 failures per machine per month, 100 megabytes, at a cost of about \$5000. With 5 machines, that's 1 failure a month. The last ones I worked on were measured at about 0.1% per year: 2400 drives to see 1 failure in a month, 1 terabyte each, at about \$50.
Technology is improving as complexity increases.
For example, one micro can replace many discrete ICs.

11. ### #12 Expert

When I worked in a military shop, they didn't repair mistakes. They hid them in a back room to maintain their "failures per thousand hours" rating.

I was eventually fired for finding things like a 6.2 volt zener installed in a 9 volt position.

12. ### ronv AAC Fanatic!

Yep, find a design problem and you potentially fix every machine. Find a bad IC and you found a bad IC.

13. ### wayneh Expert

Agreed. There is nice software these days for running Monte Carlo simulations, which is what you need for this, and we have fast hardware to run it on. But it comes back to the same old problem in modeling and simulation: shinola in, shinola out. Any useful model would have to include statistical information about every component or module, and about all the input conditions: ambient temperature, humidity, supply voltage, and on and on.

I'm pretty sure accelerated testing, instead of modeling, would give a more reliable lifespan estimate in less time with less work.
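As a toy illustration of why the inputs matter so much, here is a minimal Monte Carlo sketch in Python. Every number in it is invented for illustration: the base failure rates, the ambient-temperature distribution, and the "rate doubles per 10 °C" rule of thumb are all assumptions, not measured data.

```python
import random

# Toy Monte Carlo lifetime model. All numbers are made up for
# illustration; real use needs measured data for every component.

random.seed(42)

def component_rate(base_rate: float, temp_c: float) -> float:
    """Crude acceleration model: failure rate roughly doubles for
    every 10 degC above 25 degC (a rule of thumb, not a measurement)."""
    return base_rate * 2.0 ** ((temp_c - 25.0) / 10.0)

base_rates = [2e-5, 5e-5, 1e-5]  # assumed per-hour rates at 25 degC

def simulate_lifetime() -> float:
    # The ambient temperature itself is uncertain: model it as normal
    ambient = random.gauss(35.0, 8.0)
    rates = [component_rate(r, ambient) for r in base_rates]
    # System fails at the first component failure (series system)
    return min(random.expovariate(lam) for lam in rates)

trials = 50_000
mean_life = sum(simulate_lifetime() for _ in range(trials)) / trials
print(f"estimated mean time to first failure: {mean_life:.0f} h")
```

Widen or narrow the assumed temperature spread and the estimate moves substantially, which is exactly the shinola-in, shinola-out problem: the simulation is only as good as the component and environment statistics fed into it.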

14. ### #12 Expert

It's also good for knocking out "infant mortality" problems. 3% when I was working commercial grade.

15. ### wayneh Expert

I was once in a bakery where I saw an office worker with a broken leg hobbling around, obviously on new crutches. When I asked about it, I was informed he was an operator who had broken his leg on the production floor, but had been brought back from the hospital and given a task in the office so that the injury would not count as a "lost-time accident". A banner in the bakery proudly proclaimed "850 days without a lost-time accident!" Now I know what that means. They'll use you as a doorstop after you're dead, if they have to.