Optimization

Another great topic suggested on our message board. Optimization has been used since the 1960s in microwave CAD programs to flatten gain, increase bandwidth, improve stability, or fix most problems that can be expressed mathematically from circuit S-parameters.

Optimization is just like having infinite monkeys at your disposal. Use it properly and you're The Man, abuse it and you're wasting time and resources.

Some designers claim they never use optimization, as if they are all-knowing and can arrive at a solution from first principles with some slight tuning. To these people, we say "you're full of crap." That's like a furniture maker claiming he never uses sandpaper. Get over yourselves and get out of the closet, we know you use it!

Let's summarize this in a Microwaves101 rule of thumb!

Anyone who designs complex microwave circuits and claims they don't use the optimization function in their EDA software is one of three things: a liar, an idiot, or a super-genius with an IQ of 250. You pick which one, then tell them when they bring this up at their next peer review!

Types of optimizers

The two main types of optimizers are gradient and random. In practice you will need to use both: gradient optimization is far more powerful, but it can get stuck at local minima of the error function.

Optimization in Keysight's ADS

Here are some sample "OPT blocks" used in ADS.

This first one was used to optimize a switchable attenuator. The circuit is analyzed from 4 to 8 GHz, and the attenuator's loss (S43) is optimized over the full range. The input return loss is optimized over a subset of the frequency range, 5 to 6 GHz.

[Figure: ADS OPT block for the switchable attenuator]

This OPT block was used to optimize a Wilkinson power divider.

[Figure: ADS OPT block for the Wilkinson power divider]

Optimization is when you use linear analysis software to vary the values of certain elements within the schematic (selected by the user) in an attempt to improve the overall response. Optimization is the most powerful tool of linear simulators; you can perform the work of a thousand microwave designers of yesteryear with a few mouse clicks. However, if you don't know what you are doing, you will end up with poor results anyway. Before you attempt to optimize something, you should know the definition of a few terms.

Optimization goals are functions that are defined by the user. For example, in designing an amplifier you probably want a good input and output match, and flat gain. Your goals might be:

S11<-20 dB
S21>10 dB
S22<-10 dB

Each goal needs to be assigned a frequency band, and it is evaluated and averaged over that band.
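As a rough sketch of how a simulator might score goals like these (the names, data structures, and frequency bands here are hypothetical, not actual ADS syntax), each goal accrues error only where the response violates its limit, averaged over its assigned band:

```python
import numpy as np

# Hypothetical goal table: (S-parameter label, limit in dB, sense, band start/stop in GHz).
goals = [
    ("S11", -20.0, "<", 4.0, 8.0),
    ("S21",  10.0, ">", 4.0, 8.0),
    ("S22", -10.0, "<", 4.0, 8.0),
]

def goal_error(response_dB, freqs_GHz, limit_dB, sense, f_start, f_stop):
    """Average violation of one goal over its assigned band; zero when the goal is met."""
    band = (freqs_GHz >= f_start) & (freqs_GHz <= f_stop)
    if sense == "<":
        violation = np.maximum(response_dB[band] - limit_dB, 0.0)
    else:  # sense == ">"
        violation = np.maximum(limit_dB - response_dB[band], 0.0)
    return float(violation.mean())
```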

Circuit variables are elements that you permit to vary, in the hope that they help you achieve your optimization goals. It is always recommended that you restrict variables to ranges of values that are physically realizable. For example, on a microstrip circuit you probably don't want to use impedances outside of the range Z0/2 to 2Z0, or the line will get too narrow or too wide to deal with. When you limit the range that a variable can assume, it is called a constrained variable; otherwise it is called an unconstrained variable. Repeat after me... "constrained good... unconstrained ungood"...
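A minimal sketch of constraining a variable, assuming the microstrip example above in a 50 ohm system (the names are hypothetical): the optimizer's trial value is simply clamped to the realizable range before the circuit is re-analyzed.

```python
# Hypothetical bounds for a microstrip line impedance: Z0/2 to 2*Z0 in a 50 ohm system.
Z0 = 50.0
Z_MIN, Z_MAX = Z0 / 2, 2 * Z0   # 25 to 100 ohms

def clamp(value, lo=Z_MIN, hi=Z_MAX):
    """Constrain an optimizer's trial value to its physically realizable range."""
    return max(lo, min(hi, value))
```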

The error function is a number that is computed for your network, given the values of all of the elements including the circuit variables. It is a measure of how far off you are from your optimization goals.
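Continuing the hypothetical sketch from above (reusing the goals table and goal_error function), the total error function just sums the averaged violations of every goal, so zero means every goal is met:

```python
def error_function(sparams_dB, freqs_GHz):
    """Sum of all goal violations; the optimizer tries to drive this to zero.

    sparams_dB is assumed to map labels like "S11" to arrays of dB values,
    one value per point in freqs_GHz.
    """
    return sum(
        goal_error(sparams_dB[label], freqs_GHz, limit, sense, f1, f2)
        for label, limit, sense, f1, f2 in goals
    )
```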

Types of optimization include gradient optimization and random optimization. Gradient optimization implies that the software is calculating the slope of the error function with respect to each variable. It does this for each variable separately by making a calculation of the error function (over the complete frequency range), then slightly changing one variable, calculating the error function again, and using the difference of the two numbers to find the slope. With this information the software knows which way to move each variable in order to reduce the error function; it uses some smart algorithm to guess how much to change each variable, makes the change, and recalculates the error function. It does this until either it meets the error function goals, or it computes that each variable is at a "local" minimum, meaning that small changes in either direction do not improve the error function, they only make it worse. The downside of this type of optimization is that you may land in a local minimum when a much better solution exists with completely different values for one or more of the variables.
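The paragraph above maps almost directly onto code. Here is a toy sketch of one finite-difference gradient step over all the variables (a simplified illustration, not any vendor's actual algorithm):

```python
import numpy as np

def gradient_step(error_fn, x, delta=1e-4, rate=0.1):
    """One gradient-descent step: estimate the slope of the error function with
    respect to each variable by finite differences, then move all variables downhill."""
    base = error_fn(x)                   # error over the complete frequency range
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        trial = x.copy()
        trial[i] += delta                            # slightly change one variable...
        grad[i] = (error_fn(trial) - base) / delta   # ...and measure the slope
    return x - rate * grad               # step each variable toward lower error
```

Iterating gradient_step until the error stops improving lands you at exactly the "local" minimum described above.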

Random optimization is the equivalent of an infinite number of monkeys on an infinite number of computers. Here the software doesn't calculate any gradients; it just takes a wild guess at what might improve the error function and evaluates the error function with the guess values used for the variables. Chances are the error function doesn't improve, so the variables are moved to another guess and the error function is evaluated again. This goes on for many iterations, until an improvement is made. Then the variables are set to the improved values. If the error function goal is not reached, the guesswork continues.
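Random search is even simpler to sketch; this toy version (again, not any vendor's actual algorithm) just keeps guessing constrained variable values and holds onto the best set found so far:

```python
import numpy as np

def random_search(error_fn, lo, hi, iterations=10_000, seed=0):
    """Pure random optimization: guess variable values between bounds lo and hi
    (arrays, one entry per variable), keeping a guess only when it improves."""
    rng = np.random.default_rng(seed)
    best_x = rng.uniform(lo, hi)
    best_err = error_fn(best_x)
    for _ in range(iterations):
        trial = rng.uniform(lo, hi)      # a wild guess within the constraints
        err = error_fn(trial)
        if err < best_err:               # improvement: the variables move here
            best_x, best_err = trial, err
    return best_x, best_err
```

A common strategy is to let the random optimizer roam first, then hand its best result to the gradient optimizer to finish the job.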

There are many fancy names for optimization routines besides gradient and random. As a user, you don't need to care exactly how they work; trust us, they all operate as some combination of gradient and random search. If your software has more than one optimizer (and they all do), try them all, then pick the one that works best for you and just use it.

Author : Unknown Editor