Hello,
Now I'm sure this info is out there someplace (please no Wikipedia links, it's not a good source as Wikipedia itself admits), but I've not yet found it.
When resistors were first made, those carbon resistors came in three tolerance classes: 20%, 10%, and 5%. Likewise, the original oscilloscopes were limited to a few kHz AFAIK.
So how did they manage to improve things? I mean, without a more precise reference to compare against, how do EEs know whether a part is more or less precise?
If the only oscilloscopes you have can handle just a few kHz, how do you judge that your new oscilloscope design can accurately measure a higher bandwidth than that?
Thanks!