How does grid-tie source-fail detection work?

Discussion in 'General Electronics Chat' started by DMahalko, Jul 10, 2012.

  1. DMahalko

    Thread Starter AAC Fanatic!

    Oct 5, 2008
    What methods are used to detect loss of supply for grid-tie power devices?

    In general I know grid-tie devices are supposed to shut down if the grid power is lost to prevent backfeeding into a dead line. What are the specific detection methods for loss of grid power?

    It's a catch-22: you can't both inject a perfect match to the grid waveform into the line AND detect loss of grid at the same time, since your detector will basically end up measuring the power you're trying to inject.

    A naive detector in such a flawed device would keep injecting power indefinitely after initially detecting a good grid waveform, even after being fully disconnected from the grid, because it would just be reading its own output as if it were grid power.

    I am assuming that there must be some sort of repeated cycle-sense process that alternates between supplying output and measuring the line state. For example: every n cycles, stop outputting power for one cycle and read the line state; if the waveform is still there, resume output. If the check runs every 240 cycles on 60 Hz power, the inverter would shut down within 4.0 seconds of losing the grid.
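
    The cycle-skip idea above could be sketched roughly like this. Everything here is hypothetical (the function names, the 240-cycle interval, and the 100 V peak threshold are assumptions for illustration), not a description of how any real inverter firmware works:

```python
# Sketch of the "skip a cycle to sense" idea: every SENSE_INTERVAL cycles
# the inverter stops injecting for one cycle and checks whether the grid
# waveform is still present. All names and thresholds are hypothetical.

SENSE_INTERVAL = 240      # check every 240 cycles (4.0 s at 60 Hz)
MIN_GRID_PEAK = 100.0     # volts; below this, assume the grid is gone

def run_cycle_sense(measure_peak_voltage, inject_one_cycle):
    """Alternate between injecting power and sensing the line.

    measure_peak_voltage(): returns peak line voltage over one idle cycle
    inject_one_cycle():     outputs one synchronized cycle of power
    Returns the number of cycles injected before shutdown.
    """
    injected = 0
    while True:
        for _ in range(SENSE_INTERVAL - 1):
            inject_one_cycle()
            injected += 1
        # Sense cycle: output nothing, just read the line.
        if measure_peak_voltage() < MIN_GRID_PEAK:
            return injected  # grid lost -> shut down
```

    The obvious cost of this scheme is the dead cycle itself: the inverter drops one cycle of output every sense interval, and the worst-case shutdown delay equals the full interval (4.0 s in this example).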

    Or does a grid-tie system intentionally inject power slightly out of phase with the grid? If the grid-tie output always leads or lags just slightly, it may be possible to read the grid's waveform state continuously by subtracting out the known phase offset of the inverter's own output.
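
    The deliberate-phase-lead idea could look something like the sketch below. The key observation is that while the grid holds the line, the measured lead between the inverter reference and the line stays near the intentional value; if the grid drops out, the line simply mirrors the inverter's own output and the measured lead collapses toward zero. The lead angle and tolerance here are made-up illustrative numbers:

```python
# Hypothetical sketch of the "deliberate phase lead" idea. The inverter
# reference leads the line by a small known angle; a collapse of that
# measured lead suggests the line is just echoing the inverter itself.

LEAD_DEG = 2.0        # intentional inverter lead (assumed value)
TOLERANCE_DEG = 0.5   # allowed deviation before tripping (assumed)

def grid_present(inverter_phase_deg, line_phase_deg):
    """Return True if the measured lead matches the intentional lead."""
    # Wrap the difference into (-180, 180] degrees.
    lead = (inverter_phase_deg - line_phase_deg + 180.0) % 360.0 - 180.0
    return abs(lead - LEAD_DEG) <= TOLERANCE_DEG
```

    In practice a phase comparison alone would be noisy over a single cycle, so a real design would presumably average it over several cycles before tripping.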

    (I am asking for informational purposes only. I have no interest in building anything like this. Though perhaps it might be useful on Wikipedia if cited properly.)
  2. JDT

    Well-Known Member

    Feb 12, 2009
    The inverter has to be slightly leading in order to inject power into the supply, basically because of the impedance of the connecting wires. The inverter also has to remain exactly synchronized with the supply.

    I don't know, I've not designed something like this, but I think I would have a microcontroller monitor the instantaneous voltage of the supply and then control the instantaneous current output by the inverter proportionally (which would therefore also be sinusoidal). It would expect both to approximately follow a sine wave at the known supply frequency. Any deviation from this lasting more than a few milliseconds would cause the inverter to shut down. To restart, it would first need to see a few cycles of the supply voltage to synchronize to.
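
    A rough sketch of that control loop, with all thresholds and names invented for illustration (170 V is the nominal peak of a 120 V RMS line; the tolerance, gain, and sample counts are assumptions, not from any real design):

```python
import math

# Hypothetical per-tick control step: sample the line voltage, command a
# proportional current, and trip if the voltage stops tracking a sine at
# the expected frequency for more than a few milliseconds.

FREQ_HZ = 60.0
PEAK_V = 170.0          # nominal peak of a 120 V RMS line
V_TOLERANCE = 20.0      # allowed deviation from the expected sine (assumed)
MAX_BAD_SAMPLES = 50    # ~5 ms of sustained deviation at a 10 kHz tick rate
CURRENT_GAIN = 0.05     # amps commanded per volt measured (assumed)

def control_step(v_measured, t, bad_count):
    """One control tick: returns (current_command, bad_count, shutdown)."""
    v_expected = PEAK_V * math.sin(2.0 * math.pi * FREQ_HZ * t)
    if abs(v_measured - v_expected) > V_TOLERANCE:
        bad_count += 1          # deviation persists
    else:
        bad_count = 0           # back on the expected sine
    shutdown = bad_count > MAX_BAD_SAMPLES
    # Current tracks voltage proportionally, so it is also sinusoidal.
    return CURRENT_GAIN * v_measured, bad_count, shutdown
```

    Note the weakness the original question points out: if the inverter's own output can hold the line on a clean sine after the grid disconnects, this deviation check alone never trips, which is why it would likely be combined with something like the phase or frequency methods discussed above.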

    Sound right?