New for January 2007!
Read an opposing viewpoint at the bottom
of this page!
Before we talk about smoothing, let's remember that there are two completely legitimate ways to reduce noise in network analyzer data: averaging and reducing IF bandwidth. With both techniques you trade increased data acquisition time for higher accuracy. We typically use 16 averages. The default IF bandwidth on Agilent's PNA series network analyzer is 35 kHz; by dropping it to 500 Hz your measurements will be more accurate and your plots will be more pleasing to look at. Now back to the topic at hand.
This page discusses the smoothing feature that is sometimes used on measurement equipment such as network analyzers.
What is smoothing?
Why do some measurements appear "choppy"?
An example of using smoothing to cheat
A situation where smoothing is acceptable
An opposing viewpoint (new for January 2007!)
What is smoothing?
Smoothing of data in the frequency domain is an option on most network analyzers. Smoothing makes "noisy" measurements seem more likable; this explains why Marketing tells Engineering to crank up the smoothing when gathering frequency response data for brochures. They have permission from the legal department, so long as they put the words "typical data" in the header! But you should expect a little variation in real data; smoothing is literally cheating.
The attraction of smoothing is that it can be used to turn a "noisy" measurement into a smoother, perhaps more believable (or marketable) measurement.
The narrow VSWR bumps that are measured over frequency in complex
or electrically long circuitry, especially circuits containing filter
structures, are quite real if the measurement is performed accurately.
The accuracy of modern network analyzers is such that if the calibration
is done properly and sufficient IF averaging is applied, then all
individual data points that are taken are "good data".
No one at Agilent or anywhere else suggests that averaging data
over frequency improves its accuracy. Averaging the bumps in good
data to smooth its appearance is not an acceptable means of helping
hardware pass a specification; the accuracy of the data is reduced
by smoothing, not improved.
What do we mean by percentage smoothing? 5% smoothing on a 401-point sweep means that roughly 20 of the 401 points (21, after rounding to an odd number) are averaged to produce each displayed point. For example, if the data ran from 1 to 401 Hz in 1 Hz steps, data from points 1 to 21 would be averaged to get point 11, data from points 2 to 22 would be averaged to get point 12, and so on. Fewer points are averaged below point 11 and above point 391, because a full 21-point aperture isn't available at the edges of the sweep.
On Agilent (and other manufacturers') network analyzers, a smoothing function enables the user to transform measured data by averaging it versus frequency. Percentage smoothing is calculated by dividing the portion of the span that is averaged (the "aperture") by the total swept bandwidth. Data is grouped in odd numbers of points (3, 5, 7, etc.) so that it is averaged symmetrically about each frequency point. In equation form, the percentage smoothing for an N-point aperture on a sweep of M points is approximately (N − 1)/(M − 1) × 100%. Thus some possible smoothing settings for a sweep of 401 points are 2.5% (11 points), 5% (21 points) and 10% (41 points). Note that typical network analyzers allow the user to adjust the smoothing up to 20% (81 points out of 401 points averaged).
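As a sketch of how this might be implemented (assuming the aperture rule described above: the aperture is the stated percentage of the sweep's intervals, rounded to an odd number of points, with the window shrinking at the sweep edges):

```python
import numpy as np

def percent_smooth(trace, percent):
    """Network-analyzer-style percentage smoothing: a moving average
    whose aperture is the given percentage of the sweep, rounded to an
    odd number of points so each output point is centered on its input
    point. Near the sweep edges the window shrinks, because a full
    aperture of neighbors is not available there."""
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    aperture = int(round(percent / 100.0 * (n - 1)))
    if aperture % 2 == 0:
        aperture += 1              # force an odd, symmetric aperture
    half = aperture // 2
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - half)      # window shrinks at the edges
        hi = min(n, i + half + 1)
        out[i] = trace[lo:hi].mean()
    return out

# 5% smoothing on a 401-point sweep uses a 21-point aperture:
# a single-point spike gets spread over 21 neighboring points.
spike = np.zeros(401)
spike[200] = 1.0
smoothed = percent_smooth(spike, 5.0)
```

Note that this is exactly why smoothing hides narrow features: any real single-point bump is diluted by the full aperture.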
Why do some measurements appear "choppy"?
Why do some "real" measurements appear choppy? For a variety of reasons. Probably because of problems in fixturing that yielded non-ideal, but repeatable, data. Perhaps a more important reason is that the data really does have all of those annoying peaks and valleys, particularly in the case of electrically long circuits or circuits containing filter structures.
Don't confuse smoothing with averaging, which is a good thing and improves measurement accuracy. Averaging means taking many measurements of the same thing over time, then literally averaging all of the data at a single frequency point and reporting that as the "final" data. If the source of choppiness is random noise in the measurement, then averaging is a perfectly acceptable means to increase the data accuracy.
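A toy numerical illustration of the distinction (all values here are made up for the demonstration): repeatedly "measure" one frequency point whose true value is -20 dB, corrupted by random noise, and compare a single sweep against the mean of 16 sweeps, as in the 16 averages mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = -20.0   # hypothetical true reading at one frequency, in dB
n_trials = 2000      # repeat the experiment many times to estimate spread

# One noisy sweep vs. the average of 16 noisy sweeps (0.5 dB RMS noise).
single   = true_value + rng.normal(0.0, 0.5, size=n_trials)
averaged = true_value + rng.normal(0.0, 0.5, size=(n_trials, 16)).mean(axis=1)

# Averaging is unbiased, and the spread of the averaged estimate shrinks
# by roughly 1/sqrt(16) = 1/4 relative to a single sweep.
print(round(single.std(), 3), round(averaged.std(), 3))
```

Unlike smoothing, nothing here mixes data from neighboring frequencies; the accuracy gain comes purely from repeated measurements of the same point.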
An example of using smoothing to cheat
Let's apply the smoothing function to some measured filter data. Below is the frequency response of an edge-coupled bandpass filter (real data!). No smoothing is on. The poles of the filter create the well-known dips in the return loss in the passband. The point here is that you are looking at "real" data; everyone knows that filters have dips in return loss throughout the passband.
Let's let the hypothetical specification for this part be a maximum VSWR of 1.9:1 from 3000 to 4000 MHz. The engineer that designed it wisely extended the passband to allow for frequency shifts due to process tolerances... what a brilliant and talented individual he must be... except that the filter fails to meet the 1.9:1 VSWR specification.
Now let's look at just the VSWR of the part, and add some traces to show the effects of smoothing. The raw data clearly flunks the specification. When we apply 1.25% or 2.5% smoothing it still flunks. At 5% it just barely hits the spec (and maybe could be shipped). At 10% it fully meets the 1.9:1 maximum VSWR.
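Since the earlier plot shows return loss while the spec is written in VSWR, here is a small sketch of the standard conversion between the two (the 10.16 dB figure below is just the return loss that works out to a 1.9:1 VSWR):

```python
def return_loss_to_vswr(rl_db):
    """Convert return loss (positive dB) to VSWR, using the standard
    relations |Gamma| = 10**(-RL/20) and VSWR = (1+|Gamma|)/(1-|Gamma|)."""
    gamma = 10 ** (-rl_db / 20.0)
    return (1 + gamma) / (1 - gamma)

# A 1.9:1 VSWR limit corresponds to roughly 10.2 dB return loss, so any
# passband return-loss bump shallower than that flunks the hypothetical spec.
print(round(return_loss_to_vswr(10.16), 2))
```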
Smoothing the data over the full band to allow it to pass the VSWR specification is an extreme example of deceptively altering the true data. Perhaps it is less clear that smoothing the data over any frequency band is also a deceptive and unacceptable practice.
A situation where smoothing is acceptable
When you are measuring group delay on a network analyzer, the measurements can be extremely noisy, especially if your circuit is lossy. The data gets even worse when your frequency points are close together.
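The reason finer frequency spacing makes it worse can be sketched numerically (the 1 ns delay and 0.5 degree phase noise below are made-up assumptions, not instrument specs): group delay is estimated as tau = -d(phi)/d(omega), so a fixed phase error gets divided by the frequency step, and shrinking the step inflates the delay noise proportionally.

```python
import numpy as np

rng = np.random.default_rng(1)
true_delay = 1e-9        # assume a flat 1 ns group delay
phase_noise_deg = 0.5    # assumed RMS phase error of the analyzer

def group_delay(freq_hz, phase_rad):
    # finite-difference estimate of tau = -d(phi)/d(omega)
    return -np.diff(phase_rad) / (2 * np.pi * np.diff(freq_hz))

delay_noise = {}
for step in (10e6, 1e6):     # coarse vs. fine frequency spacing
    f = np.arange(1e9, 2e9, step)
    phase = -2 * np.pi * f * true_delay            # ideal linear phase
    phase += np.deg2rad(rng.normal(0.0, phase_noise_deg, size=f.size))
    delay_noise[step] = group_delay(f, phase).std()

# The 10x finer sweep shows roughly 10x more delay noise.
print(delay_noise[10e6], delay_noise[1e6])
```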
Group delay should not be a choppy measurement. The "noise" on the data is due to the problem of limited phase accuracy. So feel free to crank up the smoothing in this case, until the noise on the data is reduced so that it is small compared to the group delay value. But before you do that, try adding averaging (we use 16 averages) and reducing the IF bandwidth to 500 Hz, to improve your measurement accuracy. Click here to go to a separate page on smoothing group delay measurements.
Our free S-Parameter Utilities spreadsheet allows you to smooth the group delay of previously measured S-parameters without having to go back to the network analyzer to remeasure with smoothing on!
An opposing viewpoint
This was sent from a Microwaves101 reader who tends to disagree with our statement about cheating... William's point about measuring an airline that is well matched to 50 ohms and using smoothing to make the S21 data look more like the way it should look is valid, but we stand by our point that smoothing is more often used to cheat. We'd rather see all of the bumps in the data and decide for ourselves whether they are noise or an unwanted resonance. In any case, whenever you are asked to approve acceptance test data from a supplier, ask whether they used smoothing, and then decide for yourself whether that is acceptable! And tell them to read this page so we are all on the same page...
The following paragraph is not meant to dispute "Smoothing is Cheating!"; however, I feel that the referenced algorithm has a very useful function and is very necessary in vector network analysis. VNA smoothing is one of the most misunderstood systematic error correction algorithms within vector network analysis. Granted, the end user can use smoothing to distort already-corrected data to his or her advantage (a smoother S21 plot), but smoothing was never intended for that purpose. Unlike smoothing, reducing the IF bandwidth and averaging are intended to reduce the measurement error due to random white noise. Smoothing, on the other hand, is intended to statistically improve measurement data affected by the residual VNA systematic errors that 12-term error correction cannot compensate out. For example, if we were to perform a full two-port calibration on your typical VNA and verified a 35 dB corrected directivity, the resultant corrected load and source match would typically be 0 to 3 dB worse than the corrected directivity. If we were to measure a bead-less airline or a two-port device with no internal connection to create voltage standing waves, we know that the actual |S21| of the device would have a natural, smooth (ripple-free) negative slope. However, because the calibration is imperfect, very small error vectors will be present on the actual S21 parameter. As the VNA sweeps from low to high frequency, the displayed |S21| signal is really a combination of the actual S21 vector and the residual directivity, source match and load match error vectors. The actual S21 vector plus the residual error vectors all rotate in the negative direction, where some errors rotate faster than others, displaying a non-periodic ripple; the less peak-to-peak ripple, the more accurate the measurement. If the frequency resolution of the measurement is practical (fine enough to catch resonances), then adding smoothing will improve the measurement uncertainty. This is because we know the actual |S21| parameter is somewhere within the ripple.
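The reader's airline scenario can be sketched numerically (the error magnitudes and delay values below are made-up assumptions, not measured residuals): a smooth sloped |S21| plus a few small error vectors rotating at different rates produces a non-periodic ripple, and a moving average pulls the trace back toward the true value that sits inside the ripple.

```python
import numpy as np

f = np.linspace(2e9, 6e9, 401)                     # 401-point sweep
s21_true = 10 ** (-(0.2 + 0.05 * f / 1e9) / 20.0)  # smooth negative slope (linear mag)

# Hypothetical residual error vectors: small magnitudes, rotating at
# different (non-commensurate) rates, so their sum is a non-periodic ripple.
err = (0.010 * np.exp(-2j * np.pi * f * 3.0e-9) +
       0.006 * np.exp(-2j * np.pi * f * 5.1e-9) +
       0.004 * np.exp(-2j * np.pi * f * 7.7e-9))

measured = np.abs(s21_true + err)

# 5%-style smoothing: a 21-point symmetric moving average.
k = 21
smoothed = np.convolve(measured, np.ones(k) / k, mode="same")

# Compare RMS deviation from the true |S21|, ignoring the window edges.
mid = slice(k, -k)
rms_raw = np.sqrt(np.mean((measured[mid] - s21_true[mid]) ** 2))
rms_sm  = np.sqrt(np.mean((smoothed[mid] - s21_true[mid]) ** 2))
print(rms_sm < rms_raw)   # the smoothed trace lands closer to the truth
```

This only works because the underlying |S21| is known to be smooth; on a circuit with real narrow features (like the filter example above), the same operation would erase genuine data.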