What methods are used to detect loss of supply for grid-tie power devices?
In general I know grid-tie devices are supposed to shut down if the grid power is lost to prevent backfeeding into a dead line. What are the specific detection methods for loss of grid power?
It's a catch-22: you can't inject a perfect match to the grid waveform into the line AND detect loss of grid at the same time, since your detector will basically end up detecting the power you're trying to inject.
A naive detector in such a flawed device would keep injecting power indefinitely after initially locking onto a good grid waveform, even if then fully disconnected from the grid, because it would read its own output as if it were grid power.
I am assuming there must be some sort of repeated cycle-sense process that alternates between supplying output and measuring the line state. For example: every n cycles, suppress the output for a moment, read the line state, and resume output only if the waveform is still present. If the check runs every 240 cycles on 60 Hz power, the device would shut down within 4.0 seconds of loss of grid power.
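A minimal sketch of that hypothetical cycle-skip loop, just to make the timing arithmetic concrete. All names and numbers here are illustrative guesses, not any real inverter's firmware; real grid-tie inverters use standardized anti-islanding methods that may differ from this scheme:

```python
GRID_HZ = 60
CHECK_INTERVAL = 240  # sense the line once every 240 output cycles (illustrative)

def worst_case_detection_time(check_interval_cycles, grid_hz):
    # Worst case: the grid fails just after a check, so the loss is not
    # noticed until the next check, a full interval later.
    return check_interval_cycles / grid_hz

def run_inverter(grid_present, check_interval=CHECK_INTERVAL, max_cycles=10_000):
    """Cycle-by-cycle simulation. `grid_present(cycle)` returns True while
    the grid is energized. Returns the cycle at which shutdown occurs, or
    None if the grid never fails within max_cycles."""
    for cycle in range(max_cycles):
        if cycle % check_interval == 0:
            # Sense cycle: output suppressed, raw line state measured.
            if not grid_present(cycle):
                return cycle  # grid waveform gone -> shut down
        # Otherwise: inject power in phase with the last-seen waveform.
    return None

print(worst_case_detection_time(CHECK_INTERVAL, GRID_HZ))  # 4.0 seconds
```

With the grid failing at, say, cycle 100, shutdown happens at the next check (cycle 240), matching the 240/60 = 4.0 s worst case above.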
Or does a grid-tie system intentionally inject power slightly out of phase with the grid? If the grid-tie output always leads or lags just slightly, it may be possible to continuously read the grid's waveform state by subtracting the known, phase-shifted grid-tie output from the measured line signal.
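A toy numerical illustration of that subtraction idea. It treats the measured line signal as a simple sum of the grid waveform and the inverter's slightly leading output, which is a big simplification (a real inverter acts more like a current source into a voltage-stiff grid), but it shows the arithmetic: subtract your own known, phase-shifted output and check whether any grid component remains. The frequency, amplitudes, and 2° lead are all made-up numbers:

```python
import math

F = 60.0                  # grid frequency, Hz
LEAD = math.radians(2.0)  # assumed small phase lead of the inverter output

def residual_grid_peak(grid_amp, inverter_amp, samples=1000):
    """Peak of (measured line - known inverter output) over one cycle.
    A near-zero result means no grid waveform is hiding behind our own
    output, i.e. the grid is gone and the inverter should trip."""
    peak = 0.0
    for i in range(samples):
        t = i / (samples * F)  # sweep one grid cycle
        own = inverter_amp * math.sin(2 * math.pi * F * t + LEAD)
        line = grid_amp * math.sin(2 * math.pi * F * t) + own
        peak = max(peak, abs(line - own))
    return peak
```

Under this (simplified) model, `residual_grid_peak(170.0, 50.0)` recovers the full grid peak while `residual_grid_peak(0.0, 50.0)` collapses to zero, which is the islanded case the detector needs to catch.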
(I am asking for informational purposes only. I have no interest in building anything like this. Though perhaps it might be useful on Wikipedia if cited properly.)