Friday, November 27, 2009
Metamorphosis, we learn in high school, is the process that transforms the creepy, crawly caterpillar into a beautiful butterfly. The larva becomes a pupa, and then the butterfly breaks out and flies away. I often wonder why human beings do not, or cannot, transform themselves into something beautiful, develop capabilities they did not possess earlier, or set themselves free to do things they were not able to do before. I guess the trouble is that one never starts. And if one does want to start, does one know what one wants to metamorphose into?
Take, for instance, the question of freedom. There are dozens of things that one wants to be free from. We want to be free of our worries, our bosses, annoying colleagues, bigotry, violence in society and so on. We have concepts of freedom enshrined in our constitutions. Most of us live in nation states that guarantee us a right to freedom. Technology has set us free of much of the drudgery of life. It is far easier to cook, clean, commute and communicate (and those are just the words that start with "c") than it has ever been in the history of humanity. Yet most of us feel trapped and want to be free. Why?
Friedrich Nietzsche writes in Thus Spoke Zarathustra: "Free from what? What doth that matter to Zarathustra! Clearly, however, shall thine eye show unto me: free FOR WHAT?" That is actually a good point. Most of us never think about what we want to be free to do. Is that why we feel trapped?
I often find myself wanting a lot of free time or leisure. Who doesn't? At times I wonder if taking a long break from work might free me up, but I can't answer the question: free to do what? Is it possible that sorting out what I want to do would actually make a difference? There are many unplanned activities that I end up finding time for at work. It might be possible to do the same in other contexts if one can identify what the activity needs to be. Maybe one can create the time required to pursue a hobby or learn something new once one identifies what these pursuits should be.
Another year draws to a close. I'm sure a lot of us, whether we like it or not, end up reflecting upon our lives at this time of year. It might be a good idea to think about what one wants to be free for. Doing so might just initiate a metamorphosis and transform us into the butterflies we'd all love to be. Just a thought.
Sunday, November 22, 2009
The ones who care!
I've been thinking about the people who have played a significant part in making me the person I am today. I realized that some of these people were actually quite hard on me. The headmaster of my school, for instance, was especially tough on me. Quite frankly, I hated it then. Fifteen years later, I've realized a very simple fact: he did so because he cared.
Clearly not all people who give me a hard time do so because they care. There are some assholes in the world who are giving me a hard time because they can do no better. It would be nice to be able to tell the two kinds of people apart.
The ones who care will probably not hesitate to let me know what they think when I have screwed things up. That should be a good rule of thumb for deciding whether one wants to be around such people. I can easily see why such people would help you grow and become a better person.
The wrong kind of people to surround yourself with are the ones who offer you a lot of sympathy when you fail or make mistakes. Such sympathy doesn't really help; it perpetuates our misery and limits our ability to act on the circumstances that cause us to fail or make mistakes. I don't mean to say that such people are evil or should be avoided; they offer sympathy with the best of intentions. But we tend to like people who sympathize with us and dislike people who are pragmatic, and that might not be in our best interest.
The ones who care would also be the kind who are "truly" happy for you when you do succeed. One can usually detect jealousy or flattery in the wrong kind.
In short, there is a case for being grateful to the people who give us a hard time because they care. Just something to think about.
Monday, September 14, 2009
Thoughts on improving one's creativity.
Creativity is a topic that I’ve been interested in recently. What is creativity? How can one be creative?
Creativity is simply the process of coming up with a new solution to a known problem. A lot of the problems one encounters in life have so many solutions that it is possible to come up with one more. To be considered creative, a solution should not only be novel but also represent a significant improvement over known ideas. The people who research this topic have pointed out that there are different types of creativity. Margaret Boden, for example, lists three:
1. Combinational creativity.
2. Exploratory creativity.
3. Transformational creativity.
Combinational creativity is taking two known ideas and combining them in a novel way to make a new one.
Exploratory creativity is making changes to a known solution till you get a new one. Very often this does not result in dramatic improvements, but one can imagine arriving at a significantly different solution after many small improvements have accumulated. The creation of new species through evolution is an example.
Transformational creativity covers ideas that are simply new and cannot be traced to anything that existed before. These are the sort of ideas that strike you as "truly original". They usually come about when one thinks about a problem in a way it has never been thought of before.
Can we now think of how one might go about finding new solutions? The combinational approach is the easiest. List down all the attributes that you are looking for in your solution, then look at known solutions to the problem. There is a good chance that some of these solutions do some of what you want. Find a set of known solutions which spans all your requirements; you then need to combine them into a new one.
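As an illustration of the covering step (a toy sketch in Python; the requirement names and "solutions" are made up), one can greedily pick the known solution that satisfies the most still-unmet requirements until everything is covered:

def cover_requirements(requirements, known_solutions):
    # Greedy pass: repeatedly pick the known solution that satisfies the
    # most still-unmet requirements, until everything is covered or we stall.
    unmet = set(requirements)
    chosen = []
    while unmet:
        best = max(known_solutions, key=lambda s: len(known_solutions[s] & unmet))
        gained = known_solutions[best] & unmet
        if not gained:  # no known solution meets any remaining requirement
            break
        chosen.append(best)
        unmet -= gained
    return chosen, unmet

# Hypothetical example: what we need, and what existing solutions provide.
known = {"solution A": {"low power", "small area"},
         "solution B": {"high speed", "low noise"},
         "solution C": {"low noise", "small area"}}
print(cover_requirements({"low power", "high speed", "low noise"}, known))

Actually combining the chosen solutions (here B and A) is of course the hard, creative part; the listing and covering merely organize the search.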
A second approach would be to run a set of what-if experiments. Try changing things in ways you think will take the solution towards meeting the missing requirements; chances are you will stumble upon a way to solve your problem. You could also try to map your problem to a similar problem in a different domain and see how people have dealt with it there. You might find a solution in that domain that does what you want, and the exercise might give you clues about how to solve the problem in your own domain.
The brightest ideas come about when people take a problem and think about it in a totally new way. It is not clear to me whether there is a set of axioms people follow in such an approach. People who get such ideas can usually explain them in terms of known things, but such an explanation can be advanced only after the idea has been generated. I find that such ideas have an air of obviousness to them, though the obviousness is only a result of hindsight. One can usually trace the idea to one or more fundamental assumptions about known solutions having been changed.
An approach to take would be to list down all the fundamental assumptions that form part of your approach to solving the problem. You could then try to think of ways to solve the problem with one or more of these assumptions removed or changed.
I find that engineers have a preference for the exploratory style. We like to make small changes to a known solution. Usually the benefits are small as well. Another small change is then identified, and so on. Such an approach is acceptable for "optimization"-type problems where one is not looking for a dramatic improvement, but I don't see this way of working producing major breakthroughs. A preference for it also keeps engineers from reducing the problem to its basics to look for a fundamentally different solution. One needs to make a conscious attempt to restrict the time spent on such activities; they have a way of using up a lot of people's time. I think it makes more sense to abandon this approach if the first few things you change don't have the desired outcome. This way of solving problems is very much like evolution: it takes a long time to make an impact. It would be useful to realize that there are other ways to be creative as well; that could motivate people to break the loop and try them.
I should mention that no one invents anything by mechanically following the approaches listed above. I'm of the opinion that these are just ways to prepare your mind. My personal experience is that I spend a lot of time thinking about a problem using these approaches; then something happens and an idea is born. Chance, they say, favors the prepared mind. I'm only suggesting ways to prepare yours.
Happy inventing. May the force be with you!
Tuesday, September 8, 2009
Intelligence, Infallibility and Creativity.
I've been reading Roger Penrose's book "Shadows of the Mind". It is rather interesting reading and makes one think a lot about things that most of us don't think about much.
The basic point the book tries to make is that the human brain is more than a computing machine. One quote from Turing that caught my attention was:
If a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
The quote suggests that just a little bit of "fallibility" would go a long way in making machines intelligent, and possibly that allowing for a few mistakes would let machines approach the level of human intelligence. Clearly computers are better than humans at performing tasks using known procedures. I suspect that "creativity", the ability to generate new ideas, would be considered an important human attribute that a machine would be expected to match.
An honest look at myself suggests that the machine would need to be very significantly fallible. I find that a lot of the ideas I come up with do not actually work well. I would conservatively state that about one in ten of my ideas ends up surviving serious scrutiny. I also know quite a few people (whom I consider intelligent) who have at various times indicated that the fraction of their ideas that turn out correct is similarly low, on the order of one in ten.
In my own work on successive approximation ADCs, I've found many instances where allowing circuits to make errors results in the system working better than trying to make the circuit "correct" at all times. The ideas of error correction and redundancy are quite commonly used in many circuits to improve their performance. My experience with such circuits suggests that one needs to allow significant errors before the advantages become significant.
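To make the redundancy idea concrete, here is a toy Python simulation (a sketch, not one of the circuits referred to above; the radix, step counts and noise level are invented for illustration). In a successive-approximation search with sub-binary weights, later steps can undo a wrong comparator decision, so noisy early decisions barely degrade the result; a plain binary search has no such slack.

import random

def sar_convert(vin, weights, noisy_steps, sigma):
    # Signed successive-approximation search: at each step a comparator looks
    # at the sign of the residue, and the corresponding weight is added or
    # subtracted. The first `noisy_steps` decisions see Gaussian noise.
    residue, estimate = vin, 0.0
    for k, w in enumerate(weights):
        noise = random.gauss(0, sigma) if k < noisy_steps else 0.0
        s = 1 if residue + noise > 0 else -1
        residue -= s * w
        estimate += s * w
    return estimate

random.seed(0)
binary = [2.0 ** -k for k in range(1, 13)]     # weights sum to just under 1: no redundancy
redundant = [1.8 ** -k for k in range(1, 16)]  # sub-binary radix: weights sum to ~1.25

for name, w in (("binary   ", binary), ("redundant", redundant)):
    errors = []
    for _ in range(5000):
        vin = random.uniform(-0.9, 0.9)
        errors.append(abs(sar_convert(vin, w, noisy_steps=3, sigma=0.005) - vin))
    print(name, "worst-case error: %.5f" % max(errors))

The redundant weights absorb comparator mistakes made while the residue is small, which is exactly when noise flips a decision, illustrating the point that a deliberately fallible comparator backed by redundancy can beat one that must be right every time.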
In the biological sciences, one encounters evolution as a case where the fallibility of the DNA replication process results in the origin of new species over time. DNA replication is a very accurate process, which is probably why evolution takes such a long time. It is quite possible that the randomness of the process contributes to the slow pace at which new species are created as well. This could suggest that human creativity is not entirely a random process. It can also be argued that creativity is not deterministic either, since it is not clear what determines the creation. The even more bizarre phenomenon of intuition probably plays a bigger part in human creativity.
One question to the readers of this blog: what fraction of the new ideas you get end up being valid once you put them to serious scrutiny? I would appreciate it if you could leave your answer as a comment.
I also found myself wondering if creativity is indeed a result of intelligence. Again, simple first-person introspection throws up an alternative view. Let's say I want to solve a new problem that I've encountered. My intelligence tries to solve the problem using all the tricks it already knows. Some of these tricks are useful and make some headway towards a solution; a lot of them, however, don't contribute much. A lot of effort goes into the problem, after which a "flash of intuition" occurs and presents the "creative" solution. Another thing that triggers the flash is listening to someone else describe how they have solved the problem: one goes about hearing and understanding the other person, and suddenly a new idea emerges. I don't remember a time when I got a brilliant idea for a problem I had not already spent a lot of time thinking about.
I'm wondering if creativity occurs only when intelligence is put out of the way. In other words, creativity occurs only after intelligence has been "satisfied" and/or "exhausted". One might even be tempted to argue that intelligence is a hurdle to overcome before the creative process kicks in. One hears many examples where experts in a field fail to see a simple solution while a layman who just happens to walk past offers a shockingly better one. I know a lot of people who point out that they have been working in the same area for too long and find themselves unable to be as innovative as they had been in the past (the "I'm saturated" feeling). It is quite possible that such experts need to exhaust the large bag of tricks they already possess before they can think of a new solution to the problem.
It would be interesting to think of ways to "satisfy" one's intelligence as a way to improve one's creativity. Clearly, working very hard on a problem is one way. Reading or listening to others who have solved similar problems is another. A better alternative is brainstorming, where a group of people discusses the problem in detail. Usually a lot of ideas come up in such sessions, and quite often people do end up with some bright ideas in the end.
Saturday, August 22, 2009
A lesson from life...
A couple of weeks ago, I'd blogged about learning lessons from life and applying them to other problems you encounter. I'd discussed how one can use principles of evolution to design a better comparator.
A follow up thought would be to see if you can apply lessons from life to your own life. I happened to scan through Robin Sharma's book "The Greatness Guide". One of the ideas mentioned there is that "Greatness" in life can be achieved through "evolution". You can read more of his thoughts on the topic in his article "Greatness by Evolution Vs Revolution".
I'm wondering if design groups could benefit from adopting the "Evolution" approach in their design process. A lot of us tend to change just about everything in a design from one generation to the next. I've also observed that some groups take the "Evolution" approach: they keep many things about their design the same from one generation to another and restrict changes to a few target areas. The groups that do this appear to be more successful than groups that take the "Change Everything" approach.
From a project management perspective, the "Evolution" approach makes a lot of sense. It is a good way to keep risks under control, and schedules are far more likely to be predicted accurately.
The "Change Everything" approach is believed to be superior since we tend to think that it is a good way to make large improvements. Is that really true? Is it better to be working on the problem to identify a way to get the same improvement with the least possible set of changes? Could innovators make such a change to their attitude to significantly improve the odds of success? Can projects with audacious goals be better executed by breaking them down into "Generations" which improve over time?
Just some thoughts. It would be interesting to see if they make a difference in real life.
Thursday, August 6, 2009
Genetic Algorithms in the Design of Comparators.
Successive Approximation ADCs use a low noise multi-stage auto-zeroed comparator to perform the conversion process. The delay introduced by the comparator limits the throughput achieved by the converter. The power dissipated by the comparator forms a significant portion of the power dissipation of the ADC. In this blog, I investigate an iterative procedure which considers the parameters of the gain stages used in the comparator as “genes” and uses a process of “natural selection” to identify an “improved” design.
A typical gain stage of the comparator uses a differential MOS input pair, a pair of MOSFETs to cascode the input pair, a tail current source, and load resistors. These basic parameters of the gain stage are parameterized. The comparator contains five such gain stages, all of which have been parameterized. In addition, the design uses a couple of extra capacitors, which are also parameterized. There are about 22 parameters in the design, all of which can take different values.
The iteration procedure used is as follows. I start off with a design that is reasonably close to what I want; this part of the design was not automated and was performed by me. The set of 22 parameters I've chosen becomes the basis for the rest of the iterative process. A genome is a combination of the 22 parameters that go into the actual circuit. A new set of 23 genomes is generated, each by randomly changing one parameter of the base genome. Simulations are performed on all 23 circuits.
A cost function is set up to evaluate which of these 23 circuits is the "best". I've used an equation of the form Noise/noise + Delay/delay + Power/power + gain/Gain, where Noise, Delay, Power and Gain are the desired values and noise, delay, power and gain are results from the actual circuit. The best circuit is chosen by simply making a numeric comparison of this function across the simulation results.
The best of the 23 circuits then becomes the new base design: its genome is taken as the new base genome and mutated again at random. This process is repeated multiple times. I wrote a Perl script to perform the iterations. Spice3 performs the circuit simulations, and a wrapper extracts the results. The Perl script reads the output of the wrapper and computes the "cost function" that represents an evaluation of the circuit.
The iteration picks the best out of all the trials performed, so the cost function keeps increasing, with each iteration representing a better circuit. It should also be noted that there is no such thing as convergence in these iterations; iteration simply produces a faster, lower-noise, lower-power or higher-gain comparator. Any single iteration can be expected to produce only a marginally better comparator; the compounded effect of multiple iterations is a significantly better one.
It might be useful to set up the evaluation function to limit how good any given parameter gets by clamping the value used in the function to its desired value. Such a limit stops designs in which one parameter improves beyond the desired level from being reported as better, which steers the process towards choices that improve the other parameters. For example, there is not much to be gained by having the comparator's gain increase beyond "Gain"; values above "Gain" should result in the gain/Gain ratio being limited to 1.
A random process like this is also likely to keep doing whatever is "easy" to do. If it is easier to increase gain through random changes than to improve the other parameters, the process might just keep increasing the gain. The limiting function in the evaluator also helps to stop this runaway trend in the iterative process.
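The loop is small enough to sketch. The Python below is an illustration of the selection scheme described above, not my actual script (which was written in Perl and drove Spice3): simulate is a stand-in for the simulation-plus-wrapper step, a genome is a dictionary of parameter values, and the single noise and delay terms simplify the two of each used in the real cost function.

import random

TARGETS = {"noise": 25.0, "delay": 15.0, "power": 900.0, "gain": 80.0}

def cost(m):
    # Bigger is better. Each ratio is clamped at 1 so that overshooting a
    # target (excess gain, say) is not rewarded, pushing the search towards
    # the parameters that still fall short.
    return (min(TARGETS["noise"] / m["noise"], 1.0)
            + min(TARGETS["delay"] / m["delay"], 1.0)
            + min(TARGETS["power"] / m["power"], 1.0)
            + min(m["gain"] / TARGETS["gain"], 1.0))

def mutate(base):
    # A child genome is the base genome with exactly one parameter nudged.
    child = dict(base)
    name = random.choice(list(child))
    child[name] *= random.choice([0.8, 1.25])
    return child

def evolve(base, simulate, children=23, generations=20):
    # Keeping the base design in the candidate pool guarantees that the best
    # cost never decreases from one generation to the next.
    for _ in range(generations):
        pool = [base] + [mutate(base) for _ in range(children)]
        base = max(pool, key=lambda genome: cost(simulate(genome)))
    return base

In evolution-strategy terms this is (1+λ) selection with λ = 23 and a one-parameter mutation operator.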
The design I tried this on had six parameters that were important: two noise numbers (targets = 25 and 6), power (target = 900), two delays (targets = 15 and 20) and gain (target = 80). The table below shows how the design performed through the iterative process. I let the Perl script run 20 iterations through the design and looked at the measured parameters.
Noise1  Noise2  Power   Delay1  Delay2  Gain
28.5    6.5     871.3   16.9    23.0    71.4
28.4    6.5     880.6   16.6    22.9    73.0
28.2    6.5     918.6   16.0    22.6    77.9
28.4    6.3     918.7   15.9    22.2    77.2
27.6    5.9     918.4   16.1    22.5    78.4
27.4    5.9     937.6   15.8    22.3    81.4
27.8    5.9     937.5   15.4    22.3    81.4
28.1    5.9     897.9   15.5    22.4    79.7
28.4    5.9     897.8   15.4    22.0    79.6
28.1    5.9     917.2   15.1    22.0    82.0
27.9    5.9     917.1   15.1    22.0    82.6
28.1    5.9     917.1   15.0    22.0    82.6
28.0    5.9     917.1   15.0    22.0    82.7
28.3    5.9     917.1   15.0    21.6    81.8
28.0    5.9     936.8   14.8    21.6    83.7
27.4    5.9     956.5   14.7    21.6    84.8
25.8    5.9     976.6   14.7    21.5    83.9
25.1    5.9     976.7   14.8    21.7    83.6
25.3    5.9     976.7   14.7    21.7    83.6
24.9    5.8     986.7   14.7    21.7    83.8
It can be seen from the table that the initial steps mostly improved the gain parameter, and that by the end most of the parameters are close to their target values.
I investigated this approach after reading/hearing a couple of suggestions from others. The first is from Donald Knuth in "The Art of Computer Programming", in the context of improved searching algorithms; his comment was that "life" gives you good examples of how to build a good search algorithm. The second is from Steve Jones in the lecture "Is Human Evolution Over?", where he outlines how natural selection was used to build a better nozzle: change 10 things at random, evaluate the nozzles, pick the best, and repeat the process multiple times. The Wikipedia entry on "Evolution Strategy" describes the application of the same principle to optimization problems. I'm simply applying these suggestions to the problem of designing a better comparator.
The second suggestion is very easily seen to be sensible and does work quite well when applied to the problem of designing a comparator. You do get a better comparator each time you go through the process. The iterations do pick up some improvements which appear to accumulate as the process is repeated.
The other part is what one can learn from the changes that these iterations have made (which I guess is how one should read Knuth’s comment).
The iterative process just picks up “improvements” along the way. The process might pick up different “improvements” when run again. The process does appear to pick up similar “improvements” when run multiple times though. Looking at these changes should give the designer some thoughts on how to “optimize” or “improve” the design.
One interesting trend I noticed was in the sizes of the input pairs. I'd used 48, 6, 6, 6 and 6 fingers in the five gain stages of the original design. The iterative process appears to prefer finger counts that decrease first and increase later; sequences like 48, 4, 2, 6 and 8 for the five stages appear at the end of most runs starting from the original design. This makes a fair bit of sense. The initial stages carry very little signal and should see the least possible load so that they amplify the signal faster. Beyond the point where the signals become large, stages are better off driving larger gain stages, which also results in higher overall gain.
The trend with the tail currents is very similar to that of the input pair sizes. The base design used 400, 160, 40, 40 and 40 as the tail currents; the iterative process modifies these to around 440, 120, 60, 90 and 90. A lesson one could draw is that the currents in the gain stages should drop towards the middle of the cascade and increase beyond that point; the final gain stages are very likely slewing all the time.
Sunday, August 2, 2009
Thoughts on execution
A set of people working on a new product will at times find themselves wondering if they could have "executed" their project better. How does one go about improving execution? What rules and guidelines need to be followed to ensure that the development process is predictable and convergent?
It is important to acknowledge that the "complexity" of any project cannot be completely determined at the start. It is far more common for designers to make things up as they go about designing their products. Most designers are also "innovators" and are likely to generate "ideas" that can improve their design. These innovations also occur during the design process. Most designers would agree that their design ended up being a whole lot different from what they thought it would be at the start of the design cycle.
It is also very likely that "customers" identify changes that would benefit their end design while designers are creating a small part of it. Such changes could also come at any point in the design cycle resulting in changes in the "complexity" of the project during its development.
It is very rare for a product to be designed with just one customer or application in mind. Engineers involved in marketing a product will more often than not identify new "features" that would allow the design to be suitable for more customers and applications during the design process as well. These changes will also occur during the design cycle resulting in significant change in the complexity of the design.
It is possible for designers to limit the changes that they themselves cause to the complexity of the project by limiting the number of improvements and changes they make. Designers have very little control over changes caused by new features and requirements being added by customers and marketing.
The challenge facing design groups is to manage the changing complexity of the project within the time that is available for the design process. Quite often designers will complain that the scope of the project changes too much during the design process. Interestingly they don't complain about the changes they've chosen to make. Designers need to be aware that the scope of the project is going to change quite significantly during the development process.
The design process usually has multiple steps, the first of which is to derive a block diagram to show how the design performs the required function. The second step is to create the set of blocks required to implement the block diagram. Ideally, the last phase of the design cycle should be an optimization phase, where each block is "optimized" to perform its function in the presence of the other blocks around it.
I've observed that the "optimization" step is often taken up too early in the design process. This step then gets repeated every time there is a change in the scope of the project, resulting in large cycle-time overruns. Premature optimization is an avoidable cause of poor execution. I've also noticed that some designers manage this better than others. It is common for such designers to have automated the optimization process the first time around; this gives them the flexibility to spend computer time rather than their own on the optimization, and computers are clearly better suited to this kind of work than human beings. Some designers are not particularly interested in optimization and would rather focus on methods and topologies. In general, teams that have people with such an attitude tend to execute better than others. Designers who insist on doing the optimization by hand are very likely to cause large delays in project execution; it might be useful to train such people in automated optimization techniques.
A variant of premature optimization is trying to squeeze the maximum out of a given design too early. This is easily avoidable and, quite frankly, adds very little value to the design. In general, this sort of "optimization" focuses on second-order effects and tries to improve the design by tweaking these effects to "precision". It might be preferable to delay, if not entirely avoid, such tricks. I find that the time it takes to achieve these tweaks is better spent inventing a better method or design.
Designers could avoid optimizing their designs in the early part of the design cycle so as to minimize rework. It would also be beneficial to use existing designs: reuse in general requires less effort and minimizes the chances of errors.
In general, "high end" projects tend to be very poorly executed. Designers working on such projects tend to make many silly errors. It is quite common for such projects to be over schedule by the time the designer gets things "working" with minor shortcomings. It might be possible that high visibility and the potential for high rewards (self perceived and given by others) tends to diminish the designer's judgement. It is also noticed that management tends to take a liberal view of silly errors in such projects. You can get away with sloppy work on these kind of projects. Management should probably treat errors on face value and deal with them without being prejudiced by the "visibility" that a project has. As Scott Adams says "By Definition, risk-takers fail often. So do Morons. In practice it is difficult to sort them out". It is entirely possible that a smart designer makes dumb mistakes for which he should be held accountable. Holding the designer accountable is probably beneficial to that person in the long run. A consistent approach to dealing with "execution" errors on the part of management will go a long way in improving project execution in all kinds of projects.
It is quite common for designers to think that the "pressure" resulting from high visibility causes errors. I happen to think that high visibility just causes such errors to be noticed, and that pressure in "high end" projects is a result of poor execution, not a cause. It might be useful to focus on the work that needs to be done and not think about the "impact" and/or "results" while working on these kinds of projects. Designers working on high-visibility projects should be offered a decent support structure, and management should attempt to ensure that a modest amount of perspective is retained: it might be useful to reiterate that these high-end projects are, in the end, just another project. History suggests that such projects rarely end on a high note; a lot of the "visibility" stuff just builds things up, making the inevitable fall hard to deal with. Keeping morale up through such projects will also require intervention from management. Low morale is certainly a recipe for poor execution.
In any group of people, good "executors" are bound to exist. Management should recognize this trait and ensure that such people are rewarded and recognized appropriately. Their presence is bound to influence others in the group: most designers are capable of learning, and it is only reasonable to expect that they will pick up some tricks from the good executors. A good recognition process for these designers will also ensure that the group knows where to go for help on the topic of execution.
Clearly this blog is getting too long, so I'll just sign off now. I don't think the thoughts are presented well either; I'm still organizing my thinking on the topic.