Saturday, August 22, 2009

A lesson from life...

A couple of weeks ago, I'd blogged about learning lessons from life and applying them to other problems you encounter. I'd discussed how one can use principles of "evolution" to design a better comparator.

A follow-up thought would be to see if you can apply lessons from life to your own life. I happened to scan through Robin Sharma's book "The Greatness Guide". One of the ideas mentioned there is that "Greatness" in life can be achieved through "evolution". You can read more of his thoughts on the topic in his article "Greatness by Evolution Vs Revolution".

I'm wondering if design groups could benefit from adopting the "Evolution" approach in their design process. A lot of us tend to change just about everything in a design from one generation to the next. I've also observed that some groups take the "Evolution" approach instead: they keep most things about their design the same from one "Generation" to the next and restrict changes to a few target areas. The groups that do this seem to be more successful than the groups that take the "Change Everything" approach.

From a project management perspective, the "Evolution" approach makes a lot of sense. It is a good way to keep risks under control, and schedules become far easier to predict.

The "Change Everything" approach is believed to be superior since we tend to think that it is a good way to make large improvements. Is that really true? Is it better to be working on the problem to identify a way to get the same improvement with the least possible set of changes? Could innovators make such a change to their attitude to significantly improve the odds of success? Can projects with audacious goals be better executed by breaking them down into "Generations" which improve over time?

Just some thoughts. It would be interesting to see if they make a difference in real life.

Thursday, August 6, 2009

Genetic Algorithms in the Design of Comparators.

Successive Approximation ADCs use a low-noise, multi-stage, auto-zeroed comparator to perform the conversion process. The delay introduced by the comparator limits the throughput achieved by the converter. The power dissipated by the comparator forms a significant portion of the power dissipation of the ADC. In this post, I investigate an iterative procedure which treats the parameters of the gain stages used in the comparator as "genes" and uses a process of "natural selection" to identify an "improved" design.

A typical gain stage of the comparator uses a differential MOS input pair, a pair of MOSFETs to cascode the input pair, a tail current source and load resistors. These basic parameters of the gain stage are parameterized. The comparator contains five such gain stages, all of which have been parameterized. In addition, the design uses a couple of extra capacitors, which are also parameterized. There are about 22 parameters in the design, all of which can take different values.
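To make the parameterization concrete, a base genome can be thought of as a simple table of named values. The sketch below is in Perl (since that is what drives the iterations); the parameter names are made up for illustration, and only the finger counts and tail currents are taken from the base values quoted later in this post.

    # A genome is a set of named circuit parameters.  The names here are
    # illustrative; the finger counts and tail currents are the base values
    # mentioned later in this post, the rest are placeholders.
    my %base_genome = (
        fingers_stage1 => 48,   # input pair fingers, stages 1..5
        fingers_stage2 => 6,
        fingers_stage3 => 6,
        fingers_stage4 => 6,
        fingers_stage5 => 6,
        tail_stage1    => 400,  # tail currents, stages 1..5 (netlist units)
        tail_stage2    => 160,
        tail_stage3    => 40,
        tail_stage4    => 40,
        tail_stage5    => 40,
        # ... cascode sizes, load resistors and the two extra capacitors
        # follow the same pattern, for about 22 parameters in all.
    );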

The iteration procedure used is as follows. I start off with a design that is reasonably close to what I want; this part of the design procedure is not automated and was performed by me. The set of 22 parameters I've chosen becomes the basis for the rest of the iterative process. A genome is a combination of the 22 parameters that go into the actual circuit. A new set of 23 genomes is generated from the base genome by randomly changing one parameter in each new "genome". Simulations are performed on all 23 circuits.
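A minimal sketch of the mutation step might look like the following. The procedure only says that one parameter is changed at random; the size of the perturbation used here (roughly ±20%) is an assumption.

    # Create one candidate genome: copy the base and perturb one randomly
    # chosen parameter.  The size of the perturbation is an assumption.
    sub mutate_once {
        my ($base) = @_;
        my %child  = %$base;                      # copy all parameters
        my @names  = keys %child;
        my $pick   = $names[ int rand @names ];   # pick one parameter
        $child{$pick} *= 0.8 + 0.4 * rand();      # scale by a factor in [0.8, 1.2)
        return \%child;
    }

    # 23 candidates per generation, as described above.
    my @candidates = map { mutate_once(\%base_genome) } 1 .. 23;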

A cost function is set up to evaluate which of these 23 circuits is the "best". I've used an equation of the form Noise/noise + Delay/delay + Power/power + gain/Gain, where Noise, Delay, Power and Gain are the desired values and noise, delay, power and gain are the results from the actual circuit. The best of these circuits is chosen by simply making a numeric comparison of this function across the simulation results; a larger value represents a better circuit.
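In code, the evaluation might look like the sketch below. For brevity it uses a single noise term and a single delay term, although the actual design tracks two of each; the target values are the ones quoted with the results further down.

    # Desired values (targets).  Only one noise and one delay term are shown.
    my %target = ( noise => 25, delay => 15, power => 900, gain => 80 );

    # Score a set of measured results: target over measured value for noise,
    # delay and power (smaller measurements are better), measured over target
    # for gain (larger is better).  A larger score is a better circuit.
    sub score {
        my ($t, $m) = @_;
        return $t->{noise} / $m->{noise}
             + $t->{delay} / $m->{delay}
             + $t->{power} / $m->{power}
             + $m->{gain}  / $t->{gain};
    }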

The best of the 23 circuits then becomes the new base design: its genome is taken as the new base genome and changed again at random. This process is repeated multiple times. I wrote a Perl script to perform the iterations. Spice3 performs the circuit simulations and a wrapper is used to extract the results. The Perl script reads the output of the wrapper script and computes the "cost function" that represents an "evaluation" of the circuit.
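Putting the pieces together, the outer loop of such a script might look roughly like this. The netlist generation and the wrapper that runs Spice3 and extracts the results are represented by a placeholder subroutine, since their details depend on the actual simulation setup; only the structure of the loop follows the description above.

    # simulate_and_measure() is a placeholder: it would write a Spice3 netlist
    # for the genome, run the simulations through the wrapper script and
    # return the measured noise, delay, power and gain.
    my $best_genome = \%base_genome;
    my %base_meas   = simulate_and_measure($best_genome);
    my $best_score  = score(\%target, \%base_meas);

    # Keep the best design seen so far; mutate it 23 ways, simulate, score
    # and select, generation after generation.
    for my $generation (1 .. 20) {
        for my $genome (map { mutate_once($best_genome) } 1 .. 23) {
            my %meas = simulate_and_measure($genome);
            my $s    = score(\%target, \%meas);
            if ($s > $best_score) {
                ($best_score, $best_genome) = ($s, $genome);
            }
        }
        printf "generation %2d: best score %.3f\n", $generation, $best_score;
    }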

The iteration picks the best out of all the trials performed, so the "cost function" keeps increasing, with each increase representing a better circuit. It should also be noted that there is no such thing as convergence in these iterations; an iteration simply produces a faster, lower-noise, lower-power or higher-gain comparator. Any single iteration can be expected to produce only a marginally better comparator; the compounded effect of multiple iterations is to produce a significantly better one.

It might be useful to set up the "evaluation" function to limit how good any given parameter can get by clamping the value used in the function to its desired value. With such a limit, designs in which one parameter improves beyond the desired level are no longer reported as better, so the process ends up choosing changes that improve the other parameters instead. For example, there is not much to be gained by having the comparator's gain increase beyond "Gain". Values above "Gain" should result in the gain/Gain ratio being limited to 1.
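A clamped variant of the score function above is straightforward: each ratio is limited to 1 so that exceeding a target no longer adds to the score. This is only a sketch of the idea, using the same illustrative terms as before.

    # Same score, but no term can exceed 1: improving a parameter beyond its
    # target no longer makes a design look "better".
    sub clamp1 { my ($x) = @_; return $x > 1 ? 1 : $x; }

    sub score_clamped {
        my ($t, $m) = @_;
        return clamp1($t->{noise} / $m->{noise})
             + clamp1($t->{delay} / $m->{delay})
             + clamp1($t->{power} / $m->{power})
             + clamp1($m->{gain}  / $t->{gain});
    }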

A random process like this is also likely to keep doing things that are "easy" to do. If it is "easier" to achieve an increase in gain through random changes than to improve some of the other parameters, the process might just keep increasing the gain rather than trying to improve the other parameters. The "limiting" function in the evaluator also helps to stop this runaway trend in the iterative process.

The design I tried this on had six parameters that were important: two noise numbers (targets = 25 and 6), power (target = 900), two delays (targets = 15 and 20) and gain (target = 80). The table below shows how the design performed through the iterative process. I let the Perl script run 20 iterations through the design and looked at the measured parameters.


Noise1  Noise2  Power   Delay1  Delay2  Gain
28.5    6.5     871.3   16.9    23      71.4
28.4    6.5     880.6   16.6    22.9    73
28.2    6.5     918.6   16      22.6    77.9
28.4    6.3     918.7   15.9    22.2    77.2
27.6    5.9     918.4   16.1    22.5    78.4
27.4    5.9     937.6   15.8    22.3    81.4
27.8    5.9     937.5   15.4    22.3    81.4
28.1    5.9     897.9   15.5    22.4    79.7
28.4    5.9     897.8   15.4    22      79.6
28.1    5.9     917.2   15.1    22      82
27.9    5.9     917.1   15.1    22      82.6
28.1    5.9     917.1   15      22      82.6
28      5.9     917.1   15      22      82.7
28.3    5.9     917.1   15      21.6    81.8
28      5.9     936.8   14.8    21.6    83.7
27.4    5.9     956.5   14.7    21.6    84.8
25.8    5.9     976.6   14.7    21.5    83.9
25.1    5.9     976.7   14.8    21.7    83.6
25.3    5.9     976.7   14.7    21.7    83.6
24.9    5.8     986.7   14.7    21.7    83.8


It can be seen from the table above that the initial steps mostly improved the gain parameter. It can also be seen that, by the end of the run, most of the target parameters of the design are close to their desired values.

I investigated this approach after reading/hearing a couple of suggestions from others. The first is from Donald Knuth in "The Art of Computer Programming", in the context of improved searching algorithms. His comment was that "life" gives you good examples of how to build a good search algorithm. The second is from Steve Jones in the lecture "Is Human Evolution Over?", where he outlines how "Natural Selection" was used to build a better nozzle. The approach used there was to just change 10 things at random, evaluate the nozzles, pick the best and repeat the process multiple times. The Wikipedia entry on "Evolution Strategy" relates to the application of such a principle to optimization problems. I'm simply applying these suggestions to the problem of designing a better comparator.

The second suggestion is easily seen to be sensible and does work quite well when applied to the problem of designing a comparator. You do get a better comparator each time you go through the process. The iterations pick up small improvements which appear to accumulate as the process is repeated.

The other part is what one can learn from the changes that these iterations have made (which I guess is how one should read Knuth’s comment).

The iterative process just picks up “improvements” along the way. The process might pick up different “improvements” when run again. The process does appear to pick up similar “improvements” when run multiple times though. Looking at these changes should give the designer some thoughts on how to “optimize” or “improve” the design.

One interesting trend that I noticed was in the sizes of the input pairs. I'd used 48, 6, 6, 6 and 6 fingers in the five gain stages of the original design. The iterative process appears to prefer finger counts which reduce first and increase later. For example, sequences like 48, 4, 2, 6 and 8 for the five stages appear to be thrown up at the end of most runs starting from the original design. This does make a fair bit of sense. The initial stages have very little signal and should see the least possible load so that they can amplify their signals faster. There is a point beyond which the signals become large; subsequent stages are better off driving larger gain stages, which also results in higher overall gain.

The trend with the tail currents is very similar to that of the input pair sizes. The base design used 400, 160, 40, 40 and 40 as the tail currents. The iterative process appears to modify these to 440, 120, 60, 90 and 90. A lesson one could learn from this is probably that the currents in the gain stages should drop towards the "middle" of the cascade and increase beyond that point. The final gain stages are very likely to be slewing all the time.

Sunday, August 2, 2009

Thoughts on execution

A set of people working on creating a new product will at times find themselves wondering if they could have "executed" their project better. How does one go about improving execution? What rules and guidelines need to be followed to ensure that the development process is predictable and convergent?

It is important to acknowledge that the "complexity" of any project cannot be completely determined at the start. It is far more common for designers to make things up as they go about designing their products. Most designers are also "innovators" and are likely to generate "ideas" that can improve their design. These innovations also occur during the design process. Most designers would agree that their design ended up being a whole lot different from what they thought it would be at the start of the design cycle.

It is also very likely that "customers" identify changes that would benefit their end design while designers are creating a small part of it. Such changes could also come at any point in the design cycle resulting in changes in the "complexity" of the project during its development.

It is very rare for a product to be designed with just one customer or application in mind. Engineers involved in marketing a product will more often than not identify new "features" that would allow the design to be suitable for more customers and applications during the design process as well. These changes will also occur during the design cycle resulting in significant change in the complexity of the design.

It is possible for designers to limit changes that they cause to the complexity of the project by limiting the number of improvements and changes they make. Designers have very little control over changes that are caused due to new features and requirements being added from customers and marketing.

The challenge facing design groups is to manage the changing complexity of the project within the time that is available for the design process. Quite often designers will complain that the scope of the project changes too much during the design process. Interestingly they don't complain about the changes they've chosen to make. Designers need to be aware that the scope of the project is going to change quite significantly during the development process.

The design process usually has multiple steps. The first is to derive a block diagram that shows how the design performs the required function. The second is to create the set of blocks required to implement the block diagram. Ideally, the last phase of the design cycle should be an optimization phase where each block is "optimized" to perform its function "optimally" in the presence of the other blocks around it.

I've observed that the "optimization" step is often taken up too early in the design process. This step then tends to get repeated every time there is a change in the scope of the project, resulting in large cycle time overruns. Premature optimization is an avoidable cause of poor execution. I've also noticed that some designers manage this better than others. It is common for such designers to have "automated" the "optimization" process the first time around. This gives them the flexibility to use computers rather than their own time to perform the optimization. Computers are clearly better suited for this kind of work than human beings. Some designers are not particularly interested in optimization and would rather focus on methods and topologies. In general, teams that have people with such an attitude tend to execute better than the others. Designers of the kind that want to do the optimization themselves are very likely to cause large delays in project execution. It might be useful to train such people in "automated optimization" techniques.

A variant of "premature optimization" is trying to squeeze the maximum out of a given design too early. This is easily avoidable and, quite frankly, adds very little value to the design. In general, this sort of "optimization" focuses on second order effects and tries to improve the design by tweaking these effects to "precision". It might be preferable to delay, if not entirely avoid, such tricks in designs. I find that the time it takes to achieve these "tweaks" is better used inventing a better method or design.

Designers could avoid optimizing their designs in the early part of the design cycle so as to minimize rework. It would also be beneficial to use existing designs. Reuse in general requires less effort and minimizes the chances of errors.

In general, "high end" projects tend to be very poorly executed. Designers working on such projects tend to make many silly errors. It is quite common for such projects to be over schedule by the time the designer gets things "working" with minor shortcomings. It might be possible that high visibility and the potential for high rewards (self perceived and given by others) tends to diminish the designer's judgement. It is also noticed that management tends to take a liberal view of silly errors in such projects. You can get away with sloppy work on these kind of projects. Management should probably treat errors on face value and deal with them without being prejudiced by the "visibility" that a project has. As Scott Adams says "By Definition, risk-takers fail often. So do Morons. In practice it is difficult to sort them out". It is entirely possible that a smart designer makes dumb mistakes for which he should be held accountable. Holding the designer accountable is probably beneficial to that person in the long run. A consistent approach to dealing with "execution" errors on the part of management will go a long way in improving project execution in all kinds of projects.

It is quite common for designers to think that the "pressure" resulting from high visibility causes errors. I happen to think that high "visibility" just causes such errors to be noticed. I think "pressure" in "high end" projects is a result of poor execution and not a cause. It might be useful to focus on the work that needs to be done and not think about the "impact" and/or "results" while working on these kinds of projects. Designers working on high "visibility" projects should be offered a decent support structure. Management should also attempt to ensure that a modest amount of "perspective" is retained. It might be useful to reiterate that these high end projects are, in the end, just another project. History suggests that such projects rarely end on a high note. A lot of the "visibility" stuff just builds things up, making the inevitable fall hard to deal with. Keeping morale up through such projects will also require intervention from management. Low morale is certainly a recipe for poor execution.

In a group of people, good "executors" are bound to exist. Management should recognize this trait in these people and ensure that they are rewarded and recognized appropriately for it. The presence of such people is bound to have an influence on the others in the group. Most designers are capable of learning, and it is only reasonable to expect that they will pick up some tricks from the good "executors". A good recognition process for these designers will ensure that the group knows where to go for help on the topic of execution.

Clearly this post is getting too long, so I'll just sign off now. I don't think the thoughts are presented all that well either; I'm still organizing my thoughts on the topic.