The Truth about Oscilloscope Waveform Update Rates (and why you shouldn't fall for the hype)

When shopping for a new scope, a lot of things need to be considered: What bandwidth do I need? What sample rate and sample memory are required? Do I need additional functionality like serial decoding, or an MSO? Do I need active probing? And so on.

However, there is one recommendation that pretty much every scope shopper has heard at least once: make sure the new scope has a very high Waveform Update Rate. In fact, some oscilloscope manufacturers like Keysight have made the update rate a central marketing argument.

The Waveform Update Rate (WUR), also called Trigger Rate, is the number of acquisition sequences a scope can complete per second. Why is that important?

When you use your DSO in RUN mode, it typically goes through various phases in a sequence to perform an acquisition:

[--1--][2][-----------------3-----------------][------------------------------------------------------------------4------------------------------------------------------------------]

1. Waiting for Trigger
2. Trigger arrives
3. Signal is acquired until available memory is full
4. Sample data is processed for display and trigger circuit is re-armed
5. Return to 1.

The thing is that the processing phase, #4, takes a long time, and during that phase the scope is, quite literally, blind. To put that into perspective: even on fast scopes this blind phase (the "Blind Time") can amount to some 97% of the overall time of one sequence. If a trigger event occurs during the Blind Time, the scope will simply miss it.
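To illustrate with a quick back-of-the-envelope sketch (the numbers here are assumptions for illustration, not measurements of any particular scope): if each acquisition captures, say, a 1 µs window of signal and the scope manages 30,000 acquisitions per second, it actually "sees" the signal for only 30 ms out of every second.

# Minimal sketch: blind time from the waveform update rate (WUR) and the
# length of signal captured per acquisition. The numbers are assumptions
# for illustration, not measured values for any particular scope.

def blind_time_fraction(wur_per_s, capture_window_s):
    """Fraction of wall-clock time the scope is NOT acquiring."""
    active = wur_per_s * capture_window_s   # seconds of signal seen per second
    return 1.0 - active

# Example: 30,000 wfms/s with a 1 us capture window per acquisition
print(f"Blind time: {blind_time_fraction(30_000, 1e-6):.1%}")   # -> 97.0%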

Most T&M vendors promote the high update rates of their oscilloscopes. Especially Keysight (former Agilent which was the former HP T&M Division) has made the waveform update rate a central point of its marketing [1] for its general purpose oscilloscopes ever since HP introduced the first MegaZoom ASIC in 1995. Thanks to this new ASIC, HP's new 54645A/D digital oscilloscopes offered update rates which hadn't been possible on digital oscilloscopes before (which usually had slow update rates of just a few updates per second). Today, scopes like the Keysight DSO-X3000T can reach a WUR of >1M waveforms per second.

Proponents of a high WUR argue that it is critical for capturing very rare events that a slower scope would otherwise have missed. The method used to capture these glitches is usually the good old Persistence Mode, a mode which emulates the phosphor effect of an analog scope by retaining previous traces (usually in a dimmed state), i.e. every screen pixel through which a waveform has passed at some point remains illuminated. This is a common method which originated on analog scopes and which is still in wide use to find runts or glitches in repetitive signals. Obviously, the higher the WUR, i.e. the smaller the Blind Time, the more likely it is that the scope will see a rare event.
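For illustration, here's a minimal toy sketch of the persistence idea in Python (real scopes do this in dedicated display hardware and the details differ, but the principle is the same): each new trace is drawn at full intensity, while older traces are only dimmed rather than erased, so a one-off outlier stays faintly visible.

# Toy sketch of a persistence display: every pixel a trace passes through
# gets "lit", and instead of being erased on the next acquisition it is
# only dimmed, so rare outliers remain visible for a while.

WIDTH, HEIGHT = 64, 16
persistence = [[0.0] * WIDTH for _ in range(HEIGHT)]

def draw_waveform(samples, decay=0.9):
    """Dim the old image, then light the pixels hit by the new trace."""
    for row in persistence:
        for x in range(WIDTH):
            row[x] *= decay                    # old traces fade out slowly
    for x, y in enumerate(samples[:WIDTH]):
        persistence[y][x] = 1.0                # new trace at full intensity

# Feed it one normal trace, one with a runt, then another normal trace:
normal = [8] * WIDTH
glitch = [8] * WIDTH
glitch[30] = 2                                 # a runt on one sample
draw_waveform(normal)
draw_waveform(glitch)
draw_waveform(normal)
print(persistence[2][30])                      # -> 0.9 (faded, but still lit)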


So clearly, if I want to ensure my signal is free of anomalies then a scope with a very high WUR is required, right?

To answer this question, let's look at some real oscilloscopes. To save us from redoing all the measurements and calculations, for now we'll just trust the data from Keysight's own white paper [1]. In it, Keysight argues that a high waveform update rate is important especially for capturing rare events, and offers update rates (based on its own measurements, which can't always be trusted) for a few of its own and competitors' oscilloscopes. Unfortunately, the white paper only contains percentage dead-time (%DT, i.e. the percentage of time the scope is blind) calculations for two oscilloscopes, the Keysight DSO-X3000T and the Tektronix MDO3000.

The Keysight InfiniiVision DSO-X3000T is a typical oscilloscope designed for high update rates: it achieves over 1M updates/second thanks to its 4th generation MegaZoom ASIC. The UI feels zippy as well. A good example of a really fast oscilloscope.

The Tektronix MDO3000, on the other hand, comes with a slow architecture which requires a special mode (FastAcq) to achieve higher waveform rates, while in normal mode the update rate is very low (and FastAcq comes with a number of its own drawbacks). The unintuitive UI reacts slowly, and like other Tektronix DSOs, if the MDO3000 is busy with anything slightly demanding the UI locks up. As a former owner of an MDO3054 I can attest that it's a perfectly good example of a truly slow oscilloscope.

So let's look at the numbers:

Keysight DSO-X3000T: up to 1'030'000 waveforms/s, 89.70% DT

Tektronix MDO3000: 2'200 waveforms/s (normal mode) / 280'000 waveforms/s (FastAcq mode), 99.98% DT (normal mode) / 97.20% DT (FastAcq mode)

Keysight naturally argues that its own product is better as it's more likely to 'see' a rare event. And 'to see' has to be taken literally here, as Keysight's proposed method of finding glitches is the same persistence mode described above. However, even the exceptionally fast DSO-X3000T is blind for almost 90% of the time. This means even this very fast scope has only roughly a 1 in 10 chance of seeing any given rare event.
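Put differently (a quick sketch using the %DT figures quoted above): the chance that a scope happens to be acquiring at the random moment a rare event occurs is simply one minus its dead-time fraction.

# Probability of catching a single, randomly timed event, derived
# directly from the dead-time percentages quoted above.

dead_time = {
    "DSO-X3000T":        0.8970,
    "MDO3000 (normal)":  0.9998,
    "MDO3000 (FastAcq)": 0.9720,
}

for scope, dt in dead_time.items():
    print(f"{scope:18s} catch probability per event: {1 - dt:6.2%}")

# DSO-X3000T         catch probability per event: 10.30%
# MDO3000 (normal)   catch probability per event:  0.02%
# MDO3000 (FastAcq)  catch probability per event:  2.80%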


What does this mean for my signal?

It means that, should the scope show an anomaly, you know that there is a problem. However, due to the large dead time, the absence of evidence of an anomaly (i.e. the signal on the screen looks fine) is not evidence of the absence of anomalies. In short, just because the scope shows a clean signal in persistence mode doesn't mean that there are no anomalies. With an almost 9 in 10 chance of missing any given event, it's simply impossible to say whether there are any anomalies in the signal.

So while the higher WUR has improved the chance that the DSO-X3000T will see a rare event, it's still almost 9 times more likely that it will just miss it. Which means that even extremely high update rates don't really solve the problem of a scope's Blind Time. It also means that using Persistence Mode to find anomalies, a method inherited from analog scopes, isn't a great way to reliably find glitches.
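To make that more concrete (again just a sketch, and assuming the glitch occurrences aren't correlated with the scope's acquisition cycle): the probability that a persistence display has caught a repeating glitch at least once after n occurrences is 1 - DT^n, which tells you how often the glitch has to happen before you can reasonably trust what you see on screen.

# How many independent occurrences of a glitch are needed before a
# persistence display has probably shown it at least once?
# P(missed every time) = DT**n, so P(seen at least once) = 1 - DT**n.
import math

def occurrences_needed(dead_time, confidence=0.99):
    """Occurrences needed until P(seen at least once) >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(dead_time))

print(occurrences_needed(0.8970))   # DSO-X3000T: 43 occurrences
print(occurrences_needed(0.9998))   # MDO3000 (normal mode): about 23,000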

Let's remember that the argument for buying a scope with a very high WUR was specifically to find very rare events. Which, clearly, a high waveform rate scope fails to do to roughly the same extent as a low waveform rate scope.

And it's not as if a high WUR scope always runs at its headline rate; in fact, these scopes only achieve that rate at very specific settings, and the WUR will be lower, often much lower, in other configurations.

It should now be clear that the argument of a high waveform update rate is mostly a marketing gimmick.


But every DSO has a blind time, so there's nothing we can do to be sure our signals are fine, is there?

Actually, there is a way to be sure that rare events aren't missed, and it doesn't even require a scope with a high waveform update rate. The solution is called "Triggers". Unlike their analog ancestors, most newer DSOs have a wide range of trigger capabilities, from the very basic Edge Trigger through sophisticated logic and serial decode triggers to violation and exclusion triggers, often with various Holdoff options.

The key to finding anomalies is to set up the triggers so that they capture any deviation from the wanted signal. This can be runts, glitches, or any deviation in voltage or frequency. Since the trigger circuit starts the actual acquisition phase, there is no Blind Time while the trigger is armed, sitting there and waiting for that rare glitch to appear. And it does so with absolute reliability.

Only through hardware triggers* can an engineer determine that there really were no anomalies during the time of the investigation. If there are any, the engineer will know. Which is why, if we look at high-end scopes (i.e. scopes running on a Windows platform with prices of >$20k), we can see that most of them offer a very low WUR, often just a few thousand waveforms per second. Why? Because it's all about the triggers, Baby ;)

More advanced scopes can also provide a history table, i.e. a record of the time and number of occurrences, and they can be set up to raise an alarm, perform a measurement or take a screenshot when a problem is found. This data can then be used to narrow down the root cause. So you can set up the scope and then let it run while going out for lunch or grabbing a coffee. When you come back, the scope will have done all the work, and can reliably tell you if there were any anomalies during your absence.
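As a rough sketch of what such an unattended glitch hunt can look like when the scope is remote controlled: the snippet below uses pyvisa to arm a pulse-width ("glitch") trigger and simply counts how often it fires while you're away. The SCPI commands follow typical Keysight InfiniiVision syntax but are assumptions here and should be checked against the programming guide of your actual scope; the VISA address and the 20 ns pulse-width limit are placeholders.

# Sketch only: arm a pulse-width ("glitch") trigger over SCPI and count
# how often it fires while you're away. Command names follow typical
# Keysight InfiniiVision syntax and must be checked against the
# programming guide of your actual scope; the address is a placeholder.
import time
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.10::INSTR")   # placeholder

# Trigger on any positive pulse on CH1 narrower than 20 ns (assumed limit)
scope.write(":TRIGger:MODE GLITch")
scope.write(":TRIGger:GLITch:SOURce CHANnel1")
scope.write(":TRIGger:GLITch:POLarity POSitive")
scope.write(":TRIGger:GLITch:LESSthan 20E-9")

glitches = 0
t_end = time.time() + 3600                 # watch the signal for an hour
while time.time() < t_end:
    scope.write(":SINGle")                 # arm a single acquisition
    while time.time() < t_end:
        if int(scope.query(":TER?")):      # trigger event register set?
            glitches += 1
            break
        time.sleep(0.1)

print(f"Glitch triggers captured in one hour: {glitches}")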

Now compare this with the old method of persistence mode, where even with a high WUR the odds that a glitch has been missed are still about 9 to 1.


So that means that for my new scope I should look for a good set of triggers rather than a high WUR?

Exactly. Even more so as the technology in some high WUR scopes (like Keysight's MegaZoom-equipped InfiniiVision scopes) comes with notable drawbacks, e.g. a very small sample memory which shrinks even further in many situations, or the lack of manual memory management. Because at the end of the day, triggers are the key to getting certainty about the presence or absence of a problem. And while advanced triggers were once the domain of high-end oscilloscopes, today even the most basic scopes like a Rigol DS1054z or a Siglent SDS1000X-E already come with many advanced triggers that make glitch hunting easier and more reliable.


* Some advanced scopes also have software triggers, which work post-acquisition and are therefore affected by Blind Time.

[1] Keysight White Paper 'Can Your Oscilloscope Capture Elusive Events? Why Waveform Update Rate Matters', http://literature.cdn.keysight.com/litweb/pdf/5989-7885EN.pdf
