Function coverage of NAND gates.

Thread Starter


Joined Mar 31, 2012
A recent thread got me pondering a more general question (well, a few of them, actually).

I'm posting here because this strikes me as fundamentally a math problem rather than a digital design problem.

Given N 2-input NAND gates, what fraction of the possible functions of k inputs can be realized?

For instance, given four inputs (A, B, C, and D), there are sixteen possible input combinations, each of which can produce one of two possible outputs. Thus there are 65,536 distinct Boolean functions of four variables.

With zero NAND gates, we can use wires to implement four functions: Out = A, Out = B, Out = C, and Out = D.

Thus we can implement 4/65,536 = 1/16,384, or about 0.0061%, of the possible functions with zero gates.

With one gate, we can implement fourteen of them (if I'm doing things right), or 0.021% of them.

Note that there is some fine print here -- I'm not allowing explicit HI/LO signals to be used (i.e., we can't tie an output statically HI or LO). Similarly, inputs of gates have to be tied to either one of the input signals, or to an output of one of the gates. They can't be tied to HI or LO (or left unconnected).
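The small cases are easy to check by brute force. Here is a quick sketch (my own code, not from the thread) that represents each signal as a 16-bit truth table over the four inputs and verifies the zero-gate and one-gate counts under the rules above, where tying both NAND inputs to the same signal is allowed and gives an inverter:

```python
from itertools import combinations_with_replacement

MASK = 0xFFFF  # a function of 4 variables is a 16-bit truth table

# Truth tables of the bare input wires A, B, C, D:
# bit i of each table is that wire's value on input combination i.
WIRES = [sum(((i >> b) & 1) << i for i in range(16)) for b in (3, 2, 1, 0)]

def nand(x, y):
    return ~(x & y) & MASK

# Zero gates: only the four wire functions are available.
zero_gate = set(WIRES)

# One gate: both inputs tied to wires (same wire twice gives NOT);
# HI/LO constants and floating inputs are excluded per the rules above.
one_gate = set(zero_gate)
for x, y in combinations_with_replacement(WIRES, 2):
    one_gate.add(nand(x, y))

print(len(zero_gate), len(one_gate))  # 4 14
```

Each of the 65,536 functions corresponds to one 16-bit integer, so counting distinct functions is just counting distinct integers: 4 wires, plus 4 inverters, plus the 6 pairwise NANDs, confirming the count of fourteen.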

With two gates, things start getting more interesting because we can't just count how many ways we can connect things, since now we have redundancies (distinct ways of connecting gates that produce the same logic function).

However, we can certainly construct an upper bound by assuming that each possible way of connecting the gates produces a unique function. It would then be interesting to see how quickly and how strongly the actual number of functions realizable with N gates falls away from that bound.

A different, but not completely unrelated, question is how big N needs to be in order to implement all 65,536 functions. Once again, we can come up with an upper bound pretty easily, but how much can we chip away at that upper bound?
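As a sketch of how that comparison might be carried out (the names `coverage` and `upper_bound` are mine, not from the thread): gate i can tie each of its two inputs to any of the 4 + (i - 1) earlier signals, so it has m(m + 1)/2 unordered input pairs with m = i + 3, and the product over the first N gates bounds the number of distinct N-gate circuits. Exhaustive search then gives the actual coverage for small N:

```python
from itertools import combinations_with_replacement
from math import prod

MASK = 0xFFFF  # truth tables over 4 inputs fit in 16 bits

# Truth tables of the bare input wires A, B, C, D:
# bit i of each table is that signal's value on input combination i.
WIRES = [sum(((i >> b) & 1) << i for i in range(16)) for b in (3, 2, 1, 0)]

def nand(x, y):
    return ~(x & y) & MASK

def coverage(n_gates):
    """Set of truth tables realizable with at most n_gates 2-input NAND gates."""
    seen = set()

    def grow(signals, remaining):
        seen.update(signals)           # every signal so far is a realizable output
        if remaining == 0:
            return
        # Each new gate ties its two inputs to any existing signals
        # (an unordered pair, repetition allowed; no HI/LO constants).
        for x, y in combinations_with_replacement(sorted(signals), 2):
            grow(signals | {nand(x, y)}, remaining - 1)

    grow(frozenset(WIRES), n_gates)
    return seen

def upper_bound(n_gates):
    # Number of distinct ways to wire exactly n_gates gates,
    # treating the last gate as the output: prod of m(m+1)/2 input-pair choices.
    return prod(m * (m + 1) // 2 for m in range(4, 4 + n_gates))

for n in range(4):
    # Cumulative bound: the 4 wire functions plus one function per circuit
    # with exactly 1..n gates, assuming every circuit were distinct.
    bound = 4 + sum(upper_bound(i) for i in range(1, n + 1))
    print(n, len(coverage(n)), bound)
```

The search is only practical for small N, since the number of circuits grows roughly factorially, but it should be enough to see how fast the real coverage curve falls away from the bound (at N = 1 they coincide at 14; redundancy only kicks in from the second gate onward).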

Yet another interesting avenue would be to ask what the impact of using a different 2-input gate would be. If we use NOR gates, we also know that we can eventually have enough gates to implement all possible functions. But are the two coverage curves identical? I believe they are, and that logic duality proves it, but I haven't given it any rigorous thought to make sure I'm not overlooking something.

We also know that using other gates, such as AND, OR, or XOR, will never provide full coverage. So what is the maximum coverage they can provide, and what is the minimum number of gates required to achieve it? There are sixteen possible 2-input gates (though some of them are degenerate). What would a graph showing the coverage for all sixteen options look like?
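The same brute-force machinery extends to any 2-input gate, which gives a way to probe the duality conjecture empirically (again my own sketch, under the same no-constants rules). Note that AND/OR circuits without constant inputs can only compute monotone functions, and XOR circuits only parities of subsets of the inputs (plus constant 0, via XOR of a signal with itself), which is why those families can never reach all 65,536 functions:

```python
from itertools import combinations_with_replacement

MASK = 0xFFFF
WIRES = [sum(((i >> b) & 1) << i for i in range(16)) for b in (3, 2, 1, 0)]

# The interesting non-degenerate 2-input gates, as truth-table operations.
GATES = {
    "NAND": lambda x, y: ~(x & y) & MASK,
    "NOR":  lambda x, y: ~(x | y) & MASK,
    "AND":  lambda x, y: x & y,
    "OR":   lambda x, y: x | y,
    "XOR":  lambda x, y: x ^ y,
}

def coverage(gate, n_gates):
    """Set of truth tables realizable with at most n_gates copies of one gate type."""
    seen = set()

    def grow(signals, remaining):
        seen.update(signals)
        if remaining == 0:
            return
        for x, y in combinations_with_replacement(sorted(signals), 2):
            grow(signals | {gate(x, y)}, remaining - 1)

    grow(frozenset(WIRES), n_gates)
    return seen

for name, gate in GATES.items():
    print(name, [len(coverage(gate, n)) for n in range(3)])
```

On the duality question: by De Morgan, replacing every NAND with a NOR in the same topology computes the dual function NOT f(NOT A, NOT B, NOT C, NOT D), and dualization is a bijection on the 65,536 functions, so the NAND and NOR coverage curves should indeed be identical at every N.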

I'm sure I'm not the first person to ponder these questions, so I wonder if someone has actually published anything on this and also whether this has any value. Almost certainly no practical value of any significance, but it might have some theoretical value, perhaps as being a building block in some other, more useful, theorem.


Joined Jul 10, 2017
Interesting, but I agree, there probably is no way to use this as a tool.
Incidentally, it brings to mind a situation I was presented with in the mid-'60s:
I was working as a Production Engineer in a canning factory, and I had a bit of a reputation for coming up with novel solutions to production problems. The Chief Engineer asked me, informally, if I would check out a new design for a palletizer built from Philips Norbit logic modules. That was my first introduction to the practical use of digital logic. He told me that it had been designed using Boolean algebra.
The system had about half a dozen labelled inputs from sensors and about the same number of labelled outputs to various hydraulic valves. There was no flow diagram or any description of the machine's sequence, so I had to study it for some time to figure out exactly what it did. The designer had partitioned the logic so that each output had its own separate function. His logic design was quite correct; the device would have worked.

My devious mind was not satisfied with that approach, so when I studied it a little more, I realized that the inputs to several of the functions were similar, and that by combining those similarities I could reduce the number of logic modules required by 20%. That was significant because, even though this was a one-off machine, the logic modules were quite expensive in those early days. The Chief Engineer was impressed.
The moral of that story is that I earned some significant brownie points by carefully analyzing the whole problem, before I jumped into a mathematical solution.


Joined Mar 14, 2008
carefully analyzing the whole problem, before I jumped into a mathematical solution.
I appreciate that approach to design.
My goal is always to simplify the design as much as possible while it still performs the required function(s) reliably.
Even getting rid of one resistor gives me satisfaction.

I like to follow the dictum attributed to Einstein, paraphrased as "Make it as simple as possible, but no simpler."

There is an extreme version of that approach attributed to "Madman" Muntz, who made and sold cheap black-and-white tube TVs in the early days of television in LA.
He would supposedly clip parts out of an operating TV, and if it kept working normally, he would say, "Leave that out, it's not needed."