I'm trying to understand the derivation of the equation for the minimum combinational delay a sequential circuit needs in order to work properly and avoid hold-time violations.
In our class, it is given as
min time \( = T_h - T_{clktoq} \)
where:
min time = (hold time of the destination flip-flop) − (the source flip-flop's clock-to-Q delay, i.e. the time after the rising clock edge before its output value actually appears at the output)
If \( T_h < T_{clktoq} \), the minimum time is zero, but why is that the case? Shouldn't the circuit have to wait at least \( T_{clktoq} \) or \( T_h \) to work correctly? I'm not sure how to interpret this equation. Also, in the case where there is no source flip-flop, would the minimum time just be the hold time?
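To make my reading of the formula concrete, here is a small sketch of how I understand it, with made-up timing numbers (not from any datasheet or from class):

```python
def min_comb_delay(t_hold, t_clk_to_q):
    """Minimum combinational delay between two flip-flops, per the class
    formula: min time = T_h - T_clktoq, clamped at zero when the
    clock-to-Q delay already exceeds the hold requirement."""
    return max(0.0, t_hold - t_clk_to_q)

# Hypothetical timing values in nanoseconds.
print(min_comb_delay(0.5, 0.3))  # hold > clk-to-Q: some extra delay is needed
print(min_comb_delay(0.2, 0.3))  # hold < clk-to-Q: clamps to 0.0, the case I'm asking about
```

Is the clamp to zero in the second case the right interpretation, or am I misreading the equation?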