
Smoothing is Cheating!

Click here to go to our main network analyzer page

Click here to go to a separate page on smoothing group delay measurements

Read an opposing viewpoint at the bottom of this page!

Before we talk about smoothing, let's remember that there are two completely legitimate ways to reduce noise in network analyzer data: averaging and reducing IF bandwidth. With both of these techniques you trade increased data acquisition time for higher accuracy. We typically use 16 averages. The default IF bandwidth on Keysight's PNA-series network analyzers is 35 kHz; drop it to 500 Hz and your measurements will be more accurate and your plots more pleasing to look at.
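If you drive your analyzer remotely, both settings are a couple of SCPI writes away. Below is a minimal sketch using PyVISA, assuming a Keysight PNA-family analyzer on channel 1; the VISA address is a placeholder, and you should verify the exact command forms against your instrument's programming guide:

```python
import pyvisa  # assumes the pyvisa package and a VISA runtime are installed

# Hypothetical VISA resource string -- replace with your analyzer's address.
rm = pyvisa.ResourceManager()
pna = rm.open_resource("TCPIP0::192.168.1.100::inst0::INSTR")

# Trade sweep time for accuracy: 16 sweep-to-sweep averages...
pna.write("SENS1:AVER ON")
pna.write("SENS1:AVER:COUN 16")

# ...and a 500 Hz IF bandwidth in place of the 35 kHz default.
pna.write("SENS1:BAND 500")
```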

Now back to the topic at hand. This page discusses the smoothing feature that is sometimes used on measurement equipment such as network analyzers.

What is smoothing?

Percentage smoothing

Why do some measurements appear "choppy"?

Smoothing versus averaging

Example of using smoothing to cheat

One situation where smoothing is acceptable

An opposing viewpoint

What is smoothing?

Smoothing of data in the frequency domain is an option on most network analyzers. Smoothing makes "noisy" measurements seem more likable; this explains why Marketing tells Engineering to crank up the smoothing when gathering frequency response data for brochures. They have permission from the legal department, so long as they put the words "typical data" in the header! But you should expect a little variation in real data; smoothing is literally cheating.

The attraction of smoothing is that it can be used to turn a "noisy" measurement into a smoother, perhaps more believable (or marketable) one. The narrow VSWR bumps that are measured over frequency in complex or electrically long circuitry, especially circuits containing filter structures, are quite real if the measurement is performed accurately. The accuracy of modern network analyzers is such that if the calibration is done properly and sufficient IF averaging is applied, all of the individual data points are "good data". No one at Keysight or anywhere else suggests that averaging data over frequency improves its accuracy. Averaging the bumps in good data to smooth its appearance is not an acceptable means of helping hardware pass a specification; the accuracy of the data is reduced by smoothing, not improved. If your hardware misses a spec because a single data point exceeded the specification, you could recalibrate the equipment, remeasure it, or simply ask for a waiver and let the customer decide.

Percentage smoothing

What do we mean by percentage smoothing? 5% smoothing means that the averaging window (the "aperture") spans 5% of the sweep's points. For example, in a 401-point sweep, 5% smoothing averages 21 points: points 1 through 21 are averaged to produce point 11, points 2 through 22 are averaged to produce point 12, and so on. Fewer points are averaged below point 11 and above point 391, because a full 21-point window isn't available near the ends of the sweep.

On Keysight (and other manufacturers') network analyzers, a smoothing function enables the user to transform measured data by averaging it versus frequency. Percentage smoothing is calculated by dividing the span that is averaged (the "aperture") by the total swept bandwidth. Data is grouped in odd numbers of points (3, 5, 7, etc.) so that it is averaged symmetrically about each frequency point. In equation form, the percentage smoothing is:

percent smoothing = (points averaged - 1) / (total points - 1) × 100%

Thus some possible smoothing settings for a sweep of 401 points are:

Points averaged   Percent smoothing     Points averaged   Percent smoothing     Points averaged   Percent smoothing
1                 0%                    15                3.5%                  29                7.0%
3                 0.5%                  17                4.0%                  31                7.5%
5                 1.0%                  19                4.5%                  33                8.0%
7                 1.5%                  21                5.0%                  35                8.5%
9                 2.0%                  23                5.5%                  37                9.0%
11                2.5%                  25                6.0%                  39                9.5%
13                3.0%                  27                6.5%                  41                10%

Note that typical network analyzers allow the user to adjust the smoothing up to 20% (81 points out of 401 points averaged).
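Here's a minimal Python sketch of the sliding-window arithmetic described above, for previously captured trace data; the function names are ours, and a real analyzer's firmware may round the aperture slightly differently:

```python
import numpy as np

def smoothing_aperture(percent: float, n_points: int) -> int:
    """Convert a smoothing percentage to an odd number of points to average.

    Follows the relationship in the table above:
    percent = (points_averaged - 1) / (n_points - 1) * 100.
    """
    pts = round(percent / 100.0 * (n_points - 1)) + 1
    return pts if pts % 2 == 1 else pts + 1  # force an odd, symmetric aperture

def smooth(trace: np.ndarray, percent: float) -> np.ndarray:
    """Sliding-window average over frequency, shrinking the window at the
    sweep edges so each output point stays centered on its input point."""
    n = len(trace)
    half = smoothing_aperture(percent, n) // 2
    out = np.empty_like(trace, dtype=float)
    for i in range(n):
        k = min(half, i, n - 1 - i)          # shrink near the sweep edges
        out[i] = trace[i - k : i + k + 1].mean()
    return out

# 5% smoothing of a 401-point sweep averages 21 points per output point.
assert smoothing_aperture(5.0, 401) == 21
```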

Why do some measurements appear "choppy"?

Why do some "real" measurements appear choppy? For a variety of reasons: often because of problems in fixturing that yielded non-ideal, but repeatable, data. Perhaps a more important reason is that the data really does have all of those annoying peaks and valleys, particularly in the case of filters.

Smoothing versus averaging

Don't confuse smoothing with averaging, which is a good thing and improves measurement accuracy. Averaging means taking many measurements of the same thing over time, then literally averaging all of the data at a single frequency point and reporting that as the "final" data. If the source of choppiness is random noise in the measurement, then averaging is a perfectly acceptable means to increase the data accuracy.
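A quick numerical sketch of the difference, using made-up data: averaging sixteen noisy sweeps converges on the true trace, ripple and all, while a 21-point (5%) smooth of a single sweep reduces the noise but flattens the real ripple along with it:

```python
import numpy as np

rng = np.random.default_rng(0)
freq = np.linspace(3e9, 4e9, 401)
# A hypothetical DUT whose response really does ripple with frequency.
true_trace = -20.0 + 5.0 * np.cos(2 * np.pi * freq / 100e6)

# Averaging: 16 sweeps of the SAME stimulus; the noise averages away
# while the real ripple is untouched.
sweeps = true_trace + rng.normal(0.0, 1.0, size=(16, freq.size))
averaged = sweeps.mean(axis=0)

# Smoothing: ONE sweep, averaged across neighboring frequency points;
# the noise shrinks, but so does the real ripple.
one_sweep = true_trace + rng.normal(0.0, 1.0, size=freq.size)
smoothed = np.convolve(one_sweep, np.ones(21) / 21, mode="same")

mid = slice(10, -10)  # ignore the window's edge effects
rms = lambda x: np.sqrt((x ** 2).mean())
print("RMS error after averaging:", rms((averaged - true_trace)[mid]))
print("RMS error after smoothing:", rms((smoothed - true_trace)[mid]))
```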

Example of using smoothing to cheat

Let's apply the smoothing function to some measured filter data. Below is the frequency response of an edge-coupled bandpass filter (real data!), with no smoothing applied. The poles of the filter create the well-known dips in the return loss in the passband. The point here is that you are looking at "real data"; everyone knows that filters have dips in return loss throughout the passband.

[Figure: measured frequency response of the edge-coupled bandpass filter, smoothing off]

Let's let the hypothetical specification for this part be a maximum VSWR of 1.9:1 from 3000 to 4000 MHz. The engineer who designed it wisely extended the passband to allow for frequency shifts due to process tolerances... what a brilliant and talented individual he must be... except that the filter fails to meet the 1.9:1 VSWR spec.

[Figure: the filter's VSWR exceeds the 1.9:1 specification between 3000 and 4000 MHz]

Now let's look at just the VSWR of the part, and add some traces to show the effects of smoothing. The raw data clearly flunks the specification. When we apply 1.25% or 2.5% smoothing, it still flunks. At 5% it just barely hits the spec (and maybe could be shipped). At 10% it fully meets the 1.9:1 maximum VSWR.

[Figure: VSWR traces with 0%, 1.25%, 2.5%, 5% and 10% smoothing applied]

Clearly, averaging the data over the full band to allow it to pass the VSWR specification is an extreme example of deceptively altering the true data. Perhaps it is less clear that smoothing the data over any frequency band is also a deceptive and unacceptable practice.
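You can reproduce the effect in a few lines of Python on synthetic data (made-up numbers, not the measured filter above): apply increasing smoothing percentages and watch the worst-case VSWR slide toward the 1.9:1 line:

```python
import numpy as np

def smooth(trace: np.ndarray, percent: float) -> np.ndarray:
    """Symmetric sliding average with the aperture rule from the table above."""
    n = len(trace)
    half = (round(percent / 100 * (n - 1)) + 1) // 2
    return np.array([trace[max(0, i - half):i + half + 1].mean() for i in range(n)])

# Synthetic stand-in for the filter's passband VSWR: a decent match with
# sharp reflection peaks at the filter poles.
freq = np.linspace(3000, 4000, 401)                      # MHz
vswr = 1.4 + 1.1 * np.abs(np.sin(2 * np.pi * freq / 120)) ** 30

for pct in [0.0, 1.25, 2.5, 5.0, 10.0]:
    worst = smooth(vswr, pct).max()
    verdict = "PASS" if worst <= 1.9 else "FAIL"
    print(f"{pct:5.2f}% smoothing: max VSWR = {worst:.2f} -> {verdict}")
```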

One situation where smoothing is acceptable

When you are measuring group delay on a network analyzer, measurements can be extremely noisy, especially if your circuit is lossy. The data gets even worse when your frequency points are close together.

Group delay should not be a choppy measurement; the "noise" on the data is due to limited phase accuracy. So feel free to crank up the smoothing in this case, until the noise on the data is small compared to the group delay value. But before you do that, try adding averaging (we use 16 averages) and reducing the IF bandwidth to 500 Hz to improve your measurement accuracy. Click here to go to a separate page on smoothing group delay measurements.
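Here's a rough Python illustration of why group delay is so noisy, with made-up numbers: group delay is the negative derivative of phase with respect to radian frequency, and differentiating amplifies the phase noise; smoothing over frequency knocks it back down:

```python
import numpy as np

# Synthetic S21 of a hypothetical lossy line: ~3 ns true delay plus phase
# noise standing in for the analyzer's limited phase accuracy.
freq = np.linspace(3e9, 4e9, 401)                   # Hz
true_delay = 3e-9                                   # seconds
phase = -2 * np.pi * true_delay * freq              # radians, unwrapped
phase += np.random.default_rng(1).normal(0, 0.002, freq.size)  # phase noise

# Group delay = -dphi/domega; the finite difference amplifies the noise,
# and the closer the frequency points, the worse it gets.
group_delay = -np.gradient(phase, 2 * np.pi * freq)

# Smoothing over frequency averages the differentiation noise down.
kernel = np.ones(21) / 21                           # ~5% aperture of 401 points
smoothed_delay = np.convolve(group_delay, kernel, mode="same")

print("raw delay noise (std):     ", group_delay.std())
print("smoothed delay noise (std):", smoothed_delay[10:-10].std())
```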

Our free S-Parameter Utilities spreadsheet allows you to smooth the group delay of previously measured S-parameters without having to go back to the network analyzer to remeasure with smoothing on!

An opposing viewpoint

This was sent from a Microwaves101 reader who tends to disagree with our statement about cheating. William's point about measuring an airline that is well matched to 50 ohms and using smoothing to make the S21 data look more like the way it should look is valid, but we stand by our point that smoothing is more often used to cheat. We'd rather see all of the bumps in the data and decide for ourselves whether they are noise or an unwanted resonance. In any case, whenever you are asked to approve acceptance test data from a supplier, ask them if they used smoothing, then you decide whether that is acceptable! And tell them to read this page so we are all on the same page...

The following is not meant to dispute "Smoothing is Cheating!"; however, I feel that the referenced algorithm has a very useful function and is very necessary in vector network analysis. VNA smoothing is one of the most misunderstood systematic error correction algorithms. Granted, the end user can use smoothing to distort already-corrected data to his or her advantage (a smoother S21 plot), but smoothing was never intended for that purpose. Reducing the IF bandwidth and averaging are intended to reduce the measurement error due to random white noise. Smoothing, on the other hand, is intended to statistically improve measurement data affected by the residual VNA systematic errors that 12-term error correction cannot compensate out.

For example, if we were to perform a full two-port calibration on a typical VNA and verify a 35 dB corrected directivity, the resultant corrected load and source match would typically be 0 to 3 dB worse than the corrected directivity. If we were to measure a beadless airline, or a two-port device with no internal connection to create voltage standing waves, we know that the actual |S21| of the device would have a naturally smooth (ripple-free) negative slope. However, because the calibration is imperfect, very small error vectors will be present on the actual S21 parameter. As the VNA sweeps from low to high, the displayed |S21| is really a combination of the actual S21 vector and the residual directivity, source match, and load match error vectors. The actual S21 vector and the residual error vectors all rotate in the negative direction, with some errors rotating faster than others, producing a non-periodic ripple; the less peak-to-peak ripple, the more accurate the measurement. If the frequency resolution of the measurement is practical (fine enough to catch resonance), then adding smoothing will improve the measurement uncertainty, because we know the actual |S21| lies somewhere within the ripple.
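For what it's worth, the reader's argument is easy to demonstrate numerically. Here's a toy Python model (all magnitudes and electrical delays are made up): a smooth, ripple-free |S21| plus a few small residual error phasors rotating at different rates produces a non-periodic ripple, and a frequency average lands closer to the actual response than the rippled points do:

```python
import numpy as np

freq = np.linspace(1e9, 18e9, 801)                 # Hz
loss_db = -0.02 * freq / 1e9                       # hypothetical smooth loss slope
actual = 10 ** (loss_db / 20)                      # ripple-free |S21|, linear units

# Small residual error phasors (directivity, source/load match leftovers)
# rotating with different made-up electrical delays -> non-periodic ripple.
errors = (0.010 * np.exp(-2j * np.pi * freq * 2.0e-9)
          + 0.007 * np.exp(-2j * np.pi * freq * 3.3e-9)
          + 0.005 * np.exp(-2j * np.pi * freq * 4.7e-9))
displayed = np.abs(actual + errors)

# The actual |S21| lies inside the ripple, so averaging over frequency
# (here a 21-point window) pulls the trace toward it.
smoothed = np.convolve(displayed, np.ones(21) / 21, mode="same")
mid = slice(10, -10)                               # skip the window's edge effects
print("max error, displayed:", np.abs(displayed - actual)[mid].max())
print("max error, smoothed: ", np.abs(smoothed - actual)[mid].max())
```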

Author : Unknown Editor
