Smoothing is Cheating!

Click here to go to our main network analyzer page

Click here to go to a separate page on smoothing group delay measurements

Click here to go to an averaging example (new for May 2020)

Update December 2019.  All VNAs, like all other real-world electronic instruments, components and devices, are "naturally noisy"!  Hence all measured data is corrupted by noise to some extent. It behooves the user to learn how best to mitigate the bad effects of noise in his/her situation, and to use good judgment in choosing the methods for (and the extent of) doing so.  The following comments from a reader discuss this issue in the sole context of making VNA measurements, and address the viability of smoothing as a noise-reduction strategy.  We had not even considered the use of smoothing on phase data; pay attention to avoid phase-wrapping issues.

After a career in EE, spent largely in the RF and microwave field, I retired and went about making RF work my continuing avocation.  No longer having any access to test equipment through work, I saved my pennies (slightly over 1 million of them) and bought a decent USB VNA.  I'm very happy with it and am suffering no buyer's remorse.

But I confess to being a little paranoid, and worry about potential degradation of the unit's performance over time, or from accidental abuse, etc.  So from time to time I run some basic checks with corrections turned off, just to be sure.

One of these checks is the noise floor level, or you could call it the receivers' NF if you prefer.  I set up the VNA according to carefully thought-out parameters and measure the level of the noise floor on the screen.   But this "floor" is in itself very noisy (and of course totally variable in detail from sweep to sweep), making determination of the average level very imprecise.   Here is where I find a good use for smoothing, which I believe to be the sole completely legitimate use of smoothing on a VNA.   Averaging (of the coherent type that VNAs do) just lowers the floor, but does nothing to tame the grassy appearance of the displayed noise floor.  But smoothing does wonders to convert the grassy landscape into a smooth paved surface with little variation across the trace.  For me, this constitutes the primary use of smoothing, and I hope you'll agree with me that this use does not constitute cheating.  Stated more simply, use of smoothing on VNA data is fully justified only in the case where you want to measure the level of the instrument's pure noise floor. 

On the other extreme, use of any smoothing of VNA CAL data is a distinct no-no.  Doing so will only end up fooling yourself (and anybody else who may be placing faith in your measurement data).

In between, where you'd like to clean up slightly noisy-looking trace data, a bit of smoothing may be justifiable, but if and only if you use a smoothing aperture small enough so as not to hide any genuine details of interest.  And even then, not generally on a phase plot containing wraps, as any smoothing will soften the steps and may lead to genuinely misleading interpretations of what's really going on.  If you want to smooth over noise on a phase plot, please unwrap the phase first!  You can always re-wrap it following the smoothing operation (if you really need to).  Most VNAs include provisions for displaying phase information in unwrapped form; however the user may need to write a simple computer program to re-wrap smoothed phase plots.
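To make the reader's recipe concrete, here is a minimal sketch of the unwrap-smooth-re-wrap sequence in Python with NumPy. The function name and the choice of a simple moving average are our own illustration, not any particular VNA's algorithm; it assumes phase data in degrees in a NumPy array.

import numpy as np

def smooth_wrapped_phase(phase_deg, aperture_points=11):
    """Unwrap measured phase, apply a symmetric moving average, re-wrap.

    phase_deg: measured phase in degrees, possibly containing +/-180 wraps.
    aperture_points: odd number of points averaged around each point.
    """
    # 1. Unwrap first, so the moving average never straddles a 360-degree step
    unwrapped = np.degrees(np.unwrap(np.radians(phase_deg)))

    # 2. Symmetric moving average; dividing by a convolved ones-vector keeps
    #    the shrinking window at the band edges unbiased
    kernel = np.ones(aperture_points)
    smoothed = (np.convolve(unwrapped, kernel, mode="same")
                / np.convolve(np.ones(len(unwrapped)), kernel, mode="same"))

    # 3. Re-wrap to the conventional -180..+180 degree display range
    return (smoothed + 180.0) % 360.0 - 180.0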

If you see too much "noise" in your VNA data, first look to see if you can simply increase the source power setting of your VNA, taking due care to avoid overdriving the DUT or the VNA's receiver inputs.  In severe cases such overdrive could result in damage to (or the destruction of) your DUT, your VNA, or both. 

If increasing power does not pan out, then try reducing the IF bandwidth either by explicitly setting a narrower BW or by averaging a whole bunch of traces.  

However, in doing so you may discover that some fixed-pattern noise remains, whose details stay stable from sweep to sweep no matter how much averaging (or IF BW reduction) you use.  This is often a sign of a problem with the VNA calibration step.  If the fixed-pattern disturbance looks truly random, you should recalibrate the VNA, taking steps to reduce the noise contamination (which effectively gets "frozen into" the calibration) by any viable combination of higher source power, narrower IF BW, or heavy trace averaging. I routinely "over-average" when doing VNA calibrations, "just in case".  Then when measuring the DUT I'll turn off averaging and/or use a fairly wide BW in order to work with a fast sweep rate, taking noise reduction steps only as needed.  This is especially true if I'm using the VNA data for tuning up the DUT, as it can save a lot of time during the early stages of making the adjustments.  One can then take noise reduction steps while closely approaching the final adjustment state, in order to get the best accuracy.

One other thing that can happen is the discovery of a different sort of fixed-pattern "noise", which upon close inspection turns out to be periodic or otherwise not random. This is usually a case of some fault in the setup changing between the CAL and the DUT measurement phases.  The cause can range from a slightly loose connector to a flaky test cable, or even just bending a test cable in between the CAL and the DUT measurement.  The delay through a cable does change with flexing, and this problem is usually worst with very cheap cables like RG-58.  Conformable cables are moderately better, but still far from perfect.  The only way to get really good phase stability in the face of flexing is to lay out heavy money on measurement-grade "phase stable" cables, where a pair will cost hundreds of dollars (or more).  Note that this sort of thing is always worse at higher frequencies, other things being equal.

Perhaps a point could be made that VNAs should also have an incoherent form of averaging, applied after power detection, to address this problem.   Yet I fear that many users would be confused by the two types of averaging and go around reporting invalid test results arising from use of the wrong kind of averaging.
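For readers unfamiliar with the distinction, the sketch below (our own illustration, using synthetic complex receiver noise in NumPy) shows why coherent averaging lowers the displayed noise floor while incoherent (post-detection) averaging leaves the floor at its true level and merely flattens the grass:

import numpy as np

rng = np.random.default_rng(seed=0)
n_sweeps, n_points = 64, 401

# Synthetic receiver noise: unit-power, zero-mean complex Gaussian samples
noise = (rng.standard_normal((n_sweeps, n_points))
         + 1j * rng.standard_normal((n_sweeps, n_points))) / np.sqrt(2)

# Coherent (vector) averaging: average the complex samples, THEN detect.
# Zero-mean noise averages toward zero, so the floor drops ~10*log10(N).
coherent_db = 10 * np.log10(np.mean(np.abs(noise.mean(axis=0)) ** 2))

# Incoherent (power) averaging: detect each sweep, THEN average the powers.
# The floor stays at its true level, but the sweep-to-sweep grass flattens.
incoherent_db = 10 * np.log10(np.mean(np.abs(noise) ** 2))

print(f"coherently averaged floor:   {coherent_db:6.1f} dB")    # about -18 dB
print(f"incoherently averaged floor: {incoherent_db:6.1f} dB")  # about 0 dB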

Read another viewpoint on smoothing at the bottom of this page!

Before we talk about smoothing, let's remember there are two completely legitimate ways to reduce noise in network analyzer data: averaging and reducing the IF bandwidth. With both techniques you trade increased data acquisition time for higher accuracy. We typically use 16 averages. The default IF bandwidth on Keysight's PNA-series network analyzers is 35 kHz; by dropping it to 500 Hz, your measurements will be more accurate and your plots will be more pleasing to look at.

Now back to the topic at hand. This page discusses the smoothing feature that is sometimes used on measurement equipment such as network analyzers.

What is smoothing?

Percentage smoothing

Why do some measurements appear "choppy"?

Smoothing versus averaging

Example of using smoothing to cheat

One situation where smoothing is acceptable

An opposing viewpoint

What is smoothing?

Smoothing of data in the frequency domain is an option on most network analyzers. Smoothing makes "noisy" measurements seem more likable; this explains why Marketing tells Engineering to crank up the smoothing when gathering frequency response data for brochures. They have permission from the legal department, so long as they put the words "typical data" in the header! But you should expect a little variation in real data; smoothing is literally cheating.

The attraction of smoothing is that it can be used to turn a "noisy" measurement into a smoother, perhaps more believable (or marketable) measurement. The narrow VSWR bumps that are measured over frequency in complex or electrically long circuitry, especially circuits containing filter structures, are quite real if the measurement is performed accurately. The accuracy of modern network analyzers is such that if the calibration is done properly and sufficient IF averaging is applied, then all individual data points that are taken are "good data". No one at Keysight or anywhere else suggests that averaging data over frequency improves its accuracy. Averaging the bumps in good data to smooth its appearance is not an acceptable means of helping hardware pass a specification; the accuracy of the data is reduced by smoothing, not improved. If your hardware misses a spec because a single data point exceeded the specification, you could recalibrate the equipment, remeasure it, or simply ask for a waiver and let the customer decide.

Percentage smoothing

What do we mean by percentage smoothing? The percentage is the fraction of the sweep that is averaged to produce each displayed point. For a 401-point sweep, 5% smoothing means a 21-point aperture: points 1 through 21 are averaged to get point 11, points 2 through 22 are averaged to get point 12, and so on. Fewer points are averaged for the first ten and last ten points of the sweep, because a full 21-point aperture is not available near the band edges.

On Keysight (and other manufacturers') network analyzers, the smoothing function lets the user transform measured data by averaging it versus frequency. Percentage smoothing is calculated by dividing the span that is averaged (the "aperture") by the total swept bandwidth. Data is grouped in odd numbers of points (3, 5, 7, etc.) so that the average is symmetric about each frequency point. In equation form, the percentage smoothing is:

percentage smoothing = (points averaged − 1) / (total points − 1) × 100%

Thus some possible smoothing settings for a sweep of 401 points are:

Points averaged   Percent smoothing     Points averaged   Percent smoothing     Points averaged   Percent smoothing
      1                 0%                    15                3.5%                  29                7%
      3                 0.5%                  17                4.0%                  31                7.5%
      5                 1.0%                  19                4.5%                  33                8%
      7                 1.5%                  21                5.0%                  35                8.5%
      9                 2.0%                  23                5.5%                  37                9%
     11                 2.5%                  25                6.0%                  39                9.5%
     13                 3.0%                  27                6.5%                  41                10%

Note that typical network analyzers allow the user to adjust the smoothing up to 20% (81 points out of 401 points averaged).
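In code, percentage smoothing reduces to a symmetric moving average whose aperture is derived from the relation above. Here is a hypothetical sketch in Python/NumPy of the convention described on this page, not Keysight's actual implementation (which may operate on formatted rather than raw data):

import numpy as np

def percent_smooth(trace, percent):
    """Smooth a swept trace with an aperture given as a percentage of the
    sweep, inverting the relation tabulated above:
        percent = (points averaged - 1) / (total points - 1) * 100
    """
    n = len(trace)
    points = int(round(percent / 100.0 * (n - 1))) + 1
    if points % 2 == 0:
        points += 1        # force an odd aperture, symmetric about each point
    kernel = np.ones(points)
    # Normalized convolution: the window shrinks gracefully at the band edges
    return (np.convolve(trace, kernel, mode="same")
            / np.convolve(np.ones(n), kernel, mode="same"))

For a 401-point sweep, percent_smooth(trace, 5.0) averages 21 points, matching the table.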

Why do some measurements appear "choppy"?

Why do some "real" measurements appear choppy? For a variety of reasons. Probably because problems in fix tu ring that yielded non-ideal, but repeatable data. Perhaps a more important reason is that the data really does have all of those annoying peaks and valleys, particularly in the case of filters.

Smoothing versus averaging

Don't confuse smoothing with averaging, which is a good thing and improves measurement accuracy. Averaging means taking many measurements of the same thing over time, then literally averaging all of the data at a single frequency point and reporting that as the "final" data. If the source of choppiness is because of random noise in the measurement, then averaging is a perfectly acceptable means to increase the data accuracy.
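As a quick illustration (our own synthetic example, assuming a hypothetical 1 ns through line), averaging repeated sweeps attacks the noise at each frequency point without blurring anything along the frequency axis:

import numpy as np

rng = np.random.default_rng(seed=1)
freq = np.linspace(3e9, 4e9, 401)
true_s21 = np.exp(-1j * 2 * np.pi * freq * 1e-9)   # hypothetical 1 ns through line

# 16 sweeps of the same DUT, each corrupted by independent receiver noise
sweeps = true_s21 + 0.05 * (rng.standard_normal((16, 401))
                            + 1j * rng.standard_normal((16, 401)))

# Average over SWEEPS at each frequency point -- never over frequency
averaged = sweeps.mean(axis=0)

# Residual error drops by about sqrt(16) = 4x, with no frequency blurring
print(np.mean(np.abs(sweeps[0] - true_s21)))   # single-sweep error
print(np.mean(np.abs(averaged - true_s21)))    # roughly 4x smaller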

Example of using smoothing to cheat

Let's apply the smoothing function to some measured filter data. Below is the frequency response of an edge-coupled bandpass filter (real data!). No smoothing is on. The poles of the filter create the well-known dips in the return loss in the passband. The point here is that you are looking at "real data", everyone knows that filters have dips in return loss throughout the passband.

[Figure: measured frequency response of the edge-coupled bandpass filter, smoothing off]

Let's let the hypothetical specification for this part be a maximum VSWR of 1.9:1, from 3000 to 4000 MHz. The engineer who designed it wisely extended the passband to allow for frequency shifts due to process tolerances... what a brilliant and talented individual he must be... except that the filter fails to meet 1.9:1 VSWR.

[Figure: the same filter data plotted against the hypothetical 1.9:1 VSWR specification, 3000 to 4000 MHz]

Now let's look at just the VSWR of the part, and add some traces to show the effects of smoothing. The raw data clearly flunks the specification. When we apply 1.25% or 2.5% smoothing, it still flunks. At 5% it just barely hits the spec (and maybe could be shipped). At 10% it fully meets the 1.9:1 maximum VSWR.

[Figure: filter VSWR with 0%, 1.25%, 2.5%, 5% and 10% smoothing applied against the 1.9:1 spec line]

Clearly, averaging the data over the full band to allow it to pass the VSWR specification is an extreme example of deceptively altering the true data. Perhaps it is less clear that smoothing the data over any frequency band is also a deceptive and unacceptable practice.

One situation where smoothing is acceptable

When you are measuring group delay on a network analyzer, measurements can be extremely noisy, especially if your circuit is lossy. The data gets even worse when your frequency points are close together.

Group delay should not be a choppy measurement. The "noise" on the data comes from limited phase accuracy: group delay is the derivative of phase with respect to frequency, so tiny phase errors get amplified, and all the more so when the frequency points are closely spaced. So feel free to crank up the smoothing in this case, until the noise on the data is small compared to the group delay value. But before you do that, try adding averaging (we use 16 averages) and reducing the IF bandwidth to 500 Hz to improve your measurement accuracy. Click here to go to a separate page on smoothing group delay measurements.
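Because group delay is a derivative, the computation and the smoothing interact in a simple way. Below is our own NumPy sketch of the general technique, not any instrument's algorithm; the smoothing convention follows the percentage definition earlier on this page:

import numpy as np

def group_delay_ns(freq_hz, s21, smooth_percent=0.0):
    """Group delay in ns from S21: tau = -d(phase)/d(omega).

    Differentiation amplifies phase noise, which is why raw group delay
    traces look so grassy; optional smoothing tames the derivative.
    """
    phase = np.unwrap(np.angle(s21))                   # radians, unwrapped
    tau = -np.gradient(phase, 2.0 * np.pi * freq_hz)   # seconds
    if smooth_percent > 0:
        n = len(tau)
        pts = int(round(smooth_percent / 100.0 * (n - 1))) + 1
        if pts % 2 == 0:
            pts += 1                                   # odd, symmetric aperture
        k = np.ones(pts)
        tau = np.convolve(tau, k, "same") / np.convolve(np.ones(n), k, "same")
    return tau * 1e9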

Our free S-Parameter Utilities spreadsheet allows you to smooth the group delay of previously measured S-parameters without having to go back to the network analyzer to remeasure with smoothing on!

An opposing viewpoint

This was sent from a Microwaves101 reader who tends to disagree with our statement about cheating... William's point about measuring an airline that is well-matched to 50 ohms and using smoothing to make the S21 data look more like the way it should look is valid, but we stand by our point that smoothing is more often used to cheat. We'd rather see all of the bumps in the data and decide ourselves whether they are noise or an unwanted resonance. In any case, whenever you are asked to approve acceptance test data from a supplier, ask whether smoothing was used, and then you decide whether that is acceptable! And tell them to read this page so we are all on the same page...

The following paragraph is not to dispute "Smoothing is Cheating!"; however, I feel that the referenced algorithm has a very useful function and is very necessary in vector network analysis. VNA smoothing is one of the most misunderstood systematic error correction algorithms within vector network analysis. Granted, the end user can use smoothing to distort already-corrected data to his or her advantage (a smoother S21 plot), but smoothing was never intended for that purpose. Unlike smoothing, reducing the IF BW and averaging are intended to reduce the measurement error due to random white noise. Smoothing, on the other hand, is intended to statistically improve measurement data corrupted by the residual VNA systematic errors that 12-term error correction cannot compensate for. For example, if we were to perform a full two-port calibration on your typical VNA and verified a 35 dB corrected directivity, the resultant corrected load and source match would typically be 0 to 3 dB worse than the corrected directivity. If we were to measure a bead-less airline or a two-port device with no internal connection to create voltage standing waves, we know that the actual |S21| of the device would have a naturally smooth (ripple-free) negative slope. However, because the calibration is imperfect, very small error vectors will be present on the actual S21 parameter. As the VNA sweeps from low to high, the displayed |S21| signal is really a combination of the actual S21 vector and the residual directivity, source match, and load match error vectors. The actual S21 vector plus the residual error vectors all rotate in the negative direction, with some errors rotating faster than others, producing a non-periodic ripple; the less peak-to-peak ripple, the more accurate the measurement. If the frequency resolution of the measurement is practical (fine enough to catch resonances), then adding smoothing will improve the measurement uncertainty. This is because we know the actual |S21| parameter is somewhere within the ripple.
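The letter's argument is easy to demonstrate numerically. In this sketch (entirely our own, with made-up residual error magnitudes and electrical delays), a smooth "true" |S21| is corrupted by three small error vectors rotating at different rates, and a modest moving average pulls the displayed trace back toward the true value inside the ripple:

import numpy as np

freq = np.linspace(1e9, 20e9, 801)

# Hypothetical smooth "true" |S21| of a bead-less airline: gentle negative slope
true_db = -0.02 * np.sqrt(freq / 1e9)
true_s21 = 10 ** (true_db / 20.0)

# Three small residual error vectors (directivity / source match / load match),
# each rotating at a different rate, summing to a non-periodic ripple
errors = sum(mag * np.exp(-1j * 2 * np.pi * freq * delay)
             for mag, delay in [(0.010, 0.5e-9), (0.008, 1.2e-9), (0.012, 2.4e-9)])

measured_db = 20 * np.log10(np.abs(true_s21 + errors))

# A 21-point moving average pulls the trace toward the center of the ripple
k = np.ones(21)
smoothed_db = np.convolve(measured_db, k, "same") / np.convolve(np.ones(801), k, "same")

print(np.mean(np.abs(measured_db - true_db)))   # ripple error before smoothing
print(np.mean(np.abs(smoothed_db - true_db)))   # smaller after smoothing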

Author : Unknown Editor