Thursday, August 6, 2009

Genetic Algorithms in the Design of Comparators.

Successive approximation ADCs use a low-noise, multi-stage, auto-zeroed comparator to perform the conversion. The delay introduced by the comparator limits the throughput of the converter, and the power it dissipates forms a significant portion of the ADC's total power. In this post, I investigate an iterative procedure that treats the parameters of the comparator's gain stages as "genes" and uses a process of "natural selection" to identify an "improved" design.

A typical gain stage of the comparator uses a differential MOS input pair, a pair of MOSFETs to cascode the input pair, a tail current source and load resistors. These basic parameters of the gain stage are parameterized. The comparator contains five such gain stages, all of which have been parameterized. In addition, the design uses a couple of extra capacitors, which are also parameterized. In all, there are about 22 parameters in the design, each of which can take different values.

The iteration procedure is as follows. I start with a design that is reasonably close to what I want; this part of the design is not automated and was done by hand. The 22 parameters I've chosen become the basis for the rest of the iterative process. A "genome" is the combination of the 22 parameters that go into the actual circuit. From the base genome, a new set of 23 genomes is generated, each by randomly changing one parameter of the base genome. Simulations are then performed on all 23 circuits.
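The mutation step can be sketched as follows. The dictionary representation, the parameter names and the ±25% step size are my own illustration, not the actual Perl script:

```python
import random

# Sketch of the mutation step. A genome is a dict of circuit
# parameters; the names and +/-25% step size are illustrative.
def mutate(base, rng, step=0.25):
    """Copy the base genome and nudge one randomly chosen parameter."""
    child = dict(base)
    param = rng.choice(sorted(child))
    child[param] *= 1.0 + rng.uniform(-step, step)
    return child

def spawn_generation(base, children=23, seed=None):
    """Produce the 23 single-change genomes described above."""
    rng = random.Random(seed)
    return [mutate(base, rng) for _ in range(children)]
```

Each child differs from the base in exactly one parameter, which keeps the credit assignment simple: whatever made the best child better is a single identifiable change.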

A cost function is set up to evaluate which of these 23 circuits is the "best". I've used an equation of the form Noise/noise + Delay/delay + Power/power + gain/Gain, where Noise, Delay, Power and Gain are the desired values and noise, delay, power and gain are the results from the actual circuit. Each term is arranged so that a better circuit scores higher: targets over actuals for the quantities we want small, actual over target for the gain we want large. The best circuit is then chosen by a simple numeric comparison of these scores.
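As a sketch (the dictionary keys are my own naming, not the original script's), the scoring function might look like this:

```python
def fitness(meas, spec):
    """Score a simulated circuit against the desired spec.

    Target/actual for noise, delay and power (we want them small),
    actual/Target for gain (we want it large); higher total is better.
    """
    return (spec["Noise"] / meas["noise"]
            + spec["Delay"] / meas["delay"]
            + spec["Power"] / meas["power"]
            + meas["gain"] / spec["Gain"])
```

A circuit that exactly meets every target scores 4.0; beating any target pushes the score above that.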

The best of the 23 circuits then becomes the new base design: its genome is taken as the new base genome and mutated again at random. This process is repeated multiple times. I wrote a Perl script to perform the iterations. Spice3 performs the circuit simulations, and a wrapper script extracts the results. The Perl script reads the output of the wrapper and computes the "cost function" that represents an "evaluation" of the circuit.
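The whole selection loop can be sketched as below. The `simulate` stub is a toy stand-in for the Spice3-plus-wrapper flow, scoring a made-up "circuit"; only the loop structure (23 single-change genomes per generation, best one kept as the new base) mirrors the process described here:

```python
import random

def simulate(genome):
    """Stand-in for Spice3 + wrapper: score a toy 'circuit' whose
    optimum is every parameter at 1.0 (higher score is better)."""
    return -sum((v - 1.0) ** 2 for v in genome.values())

def evolve(base, generations=20, children=23, seed=0):
    rng = random.Random(seed)
    best = dict(base)
    history = []
    for _ in range(generations):
        # 23 genomes, each one random change away from the base ...
        brood = []
        for _ in range(children):
            child = dict(best)
            param = rng.choice(sorted(child))
            child[param] += rng.uniform(-0.1, 0.1)
            brood.append(child)
        # ... and the best of them becomes the new base.
        best = max(brood, key=simulate)
        history.append(simulate(best))
    return best, history
```

On the real design, each call to `simulate` is a full circuit simulation, so almost all of the run time is spent in Spice rather than in the script.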

The iteration keeps the best out of all the trials performed, so the "cost function" keeps increasing, with each increase representing a better circuit. It should also be noted that there is no notion of convergence in these iterations; iteration simply produces a faster, lower-noise, lower-power or higher-gain comparator. Any single iteration can be expected to produce only a marginally better comparator; it is the compounded effect of multiple iterations that produces a significantly better one.

It might be useful to set a limit on how good any given parameter can get by clamping the value used in the "evaluation" function to its desired value. With such a limit, a design in which one parameter improves beyond its desired level is no longer reported as better, so changes that improve the other parameters get chosen instead. For example, there is not much to be gained by having the comparator's gain increase beyond "Gain", so values above "Gain" should result in the gain/Gain ratio being limited to 1.
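One way to implement this clamping (again, the dictionary keys are my own illustration):

```python
def clamped_fitness(meas, spec):
    """Score against the spec, but cap every term at 1.0 so that
    exceeding a target stops contributing further improvement."""
    return (min(1.0, spec["Noise"] / meas["noise"])
            + min(1.0, spec["Delay"] / meas["delay"])
            + min(1.0, spec["Power"] / meas["power"])
            + min(1.0, meas["gain"] / spec["Gain"]))
```

Once a term saturates at 1, random changes can only raise the score by improving a parameter that is still short of its target.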

A random process like this is also likely to keep doing whatever is "easy". If increasing the gain through random changes is easier than improving the other parameters, the process may just keep increasing the gain rather than work on the rest. The limiting function in the evaluator also helps stop this runaway trend in the iterative process.

The design I tried this on had six parameters that mattered: two noise numbers (targets 25 and 6), power (target 900), two delays (targets 15 and 20) and gain (target 80). The table below shows how the design evolved through the iterative process. I let the Perl script run 20 iterations and looked at the measured parameters.


Noise1  Noise2  Power   Delay1  Delay2  Gain
 28.5    6.5    871.3    16.9    23.0    71.4
 28.4    6.5    880.6    16.6    22.9    73.0
 28.2    6.5    918.6    16.0    22.6    77.9
 28.4    6.3    918.7    15.9    22.2    77.2
 27.6    5.9    918.4    16.1    22.5    78.4
 27.4    5.9    937.6    15.8    22.3    81.4
 27.8    5.9    937.5    15.4    22.3    81.4
 28.1    5.9    897.9    15.5    22.4    79.7
 28.4    5.9    897.8    15.4    22.0    79.6
 28.1    5.9    917.2    15.1    22.0    82.0
 27.9    5.9    917.1    15.1    22.0    82.6
 28.1    5.9    917.1    15.0    22.0    82.6
 28.0    5.9    917.1    15.0    22.0    82.7
 28.3    5.9    917.1    15.0    21.6    81.8
 28.0    5.9    936.8    14.8    21.6    83.7
 27.4    5.9    956.5    14.7    21.6    84.8
 25.8    5.9    976.6    14.7    21.5    83.9
 25.1    5.9    976.7    14.8    21.7    83.6
 25.3    5.9    976.7    14.7    21.7    83.6
 24.9    5.8    986.7    14.7    21.7    83.8


It can be seen from the table above that the initial steps mostly improved the gain, and that by the end most of the target parameters of the design are close to their desired values.

I investigated this approach after reading and hearing a couple of suggestions from others that I thought were worth following up. The first is from Donald Knuth in "The Art of Computer Programming", in the context of improved searching algorithms; his comment was that "life" gives you good examples of how to build a good search algorithm. The second is from Steve Jones in the lecture "Is Human Evolution Over?", where he outlines how "natural selection" was used to build a better nozzle: change 10 things at random, evaluate the nozzles, pick the best and repeat the process many times. The Wikipedia entry on "Evolution Strategy" describes the application of this principle to optimization problems. I'm simply applying these suggestions to the problem of designing a better comparator.

The second suggestion is easily seen to be sensible and works quite well when applied to the problem of designing a comparator. You get a better comparator each time you go through the process; the iterations pick up small improvements that accumulate as the process is repeated.

The other part is what one can learn from the changes that these iterations have made (which I guess is how one should read Knuth’s comment).

The iterative process just picks up “improvements” along the way. The process might pick up different “improvements” when run again. The process does appear to pick up similar “improvements” when run multiple times though. Looking at these changes should give the designer some thoughts on how to “optimize” or “improve” the design.

One interesting trend I noticed was in the sizes of the input pairs. I'd used 48, 6, 6, 6 and 6 fingers in the five gain stages of the original design. The iterative process appears to prefer finger counts that first decrease and then increase: sequences like 48, 4, 2, 6 and 8 for the five stages show up at the end of most runs on the original design. This makes a fair bit of sense. The early stages see very little signal and should present the least possible load so that the signal is amplified faster. Beyond the point where the signals become large, the stages are better off driving larger gain stages, which also raises the overall gain.

The trend with the tail currents mirrors that of the input-pair sizes. The base design used 400, 160, 40, 40 and 40 as the tail currents; the iterative process modifies these to roughly 440, 120, 60, 90 and 90. A lesson one could draw from this is that the currents in the gain stages should drop towards the "middle" of the cascade and increase beyond that point; the final gain stages are very likely slewing all the time.
