Sunday, January 3, 2010

Genetic algorithms in the Design of Comparators - Part 2

This post follows up on my previous post on the use of Genetic Algorithms in the design of comparators. The mutation method used earlier relied on single-gene mutations only: each new generation was derived from one "parent" with one "gene" changed. It is easy to modify the process so that the base gene for the next iteration combines the changes from the previous generation that produced the two best "children".

Let C = F(X1, X2, ..., Xn) be the cost function of the "comparator", where X1, X2, ..., Xn are the genes. If mutating genes Xi and Xj (to new values Xi' and Xj') produces the two best comparators in an iteration, the evaluated cost functions are of the form

Ci = F(X1, X2, ..., Xi', ..., Xn)
Cj = F(X1, X2, ..., Xj', ..., Xn)

In the previous method, I would simply have used (X1, X2, ..., Xi', ..., Xn) as the base gene for the next iteration. I can just as easily combine the two best mutations and use (X1, X2, ..., Xi', ..., Xj', ..., Xn) instead. Since this combined base is new, it also makes sense to evaluate it in each iteration.

One can expect the effects of the two mutations to "superpose" and produce a larger improvement than either would individually, though this need not happen in every iteration. One might expect such a process to improve the design faster than the previous method, but I see no reason to expect dramatically different results.
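A minimal sketch of this scheme in Python, assuming a real-valued gene vector and a higher-is-better cost function. All names, parameters, and the Gaussian single-gene mutation are my illustrative choices; the actual comparator cost function from the original posts is not shown here:

```python
import random

def evolve(base, cost, n_trials=20, sigma=0.1, generations=100):
    """Each generation, try several single-gene mutations of `base`,
    then combine the two best changes into the next base.
    `cost` is assumed higher-is-better. A sketch, not the original code."""
    for _ in range(generations):
        trials = []
        for _ in range(n_trials):
            i = random.randrange(len(base))        # pick one gene to mutate
            child = list(base)
            child[i] += random.gauss(0.0, sigma)   # single-gene mutation
            trials.append((cost(child), i, child[i]))
        trials.sort(reverse=True)                  # best children first
        (c1, i1, v1), (c2, i2, v2) = trials[0], trials[1]
        # Superpose the two best single-gene changes into the new base,
        # provided they touched different genes.
        base = list(base)
        base[i1] = v1
        if i2 != i1:
            base[i2] = v2
        # The combined base is new, so re-evaluate it each iteration;
        # the superposed change can be better or worse than either mutation alone.
    return base, cost(base)
```

For example, hill-climbing a simple quadratic cost from a starting vector shows the combined-mutation step driving the cost upward over generations, with occasional temporary declines when the two changes interact badly.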

Extensions that combine the top three or more mutations are also possible. I live on earth, not in a parallel universe; hence my preference for two.

A variant of the idea is to pick up all "mutations" that improve on the "base" or "parent" gene in each iteration, where improvement means the "cost function" evaluates to a higher value for the "child" than for the "parent". The superposition principle would for the most part still apply, so one can expect the next "base" to be better than the previous one. I did notice that most "children" quickly end up no better than the parent, and very few changes actually improve the "cost function". I'm not sure these methods are any better than the method that just selects the best child in each iteration. It appears that "vanilla" natural selection performs just as efficiently as any "intelligent" variant of the idea. Maybe intelligence has a lot less practical value than you think.
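This variant can be sketched along the same lines. Again this is an illustrative Python sketch with a higher-is-better cost function; the function and parameter names are my own, and in practice the `improving` set tends to be small, as noted above:

```python
import random

def evolve_all_improving(base, cost, n_trials=20, sigma=0.1, generations=100):
    """Each generation, keep every single-gene mutation whose child beats
    the parent's cost, and fold all of them into the next base.
    A sketch of the variant described in the post, not the original code."""
    for _ in range(generations):
        parent_cost = cost(base)
        improving = {}  # gene index -> (child cost, mutated value)
        for _ in range(n_trials):
            i = random.randrange(len(base))
            child = list(base)
            child[i] += random.gauss(0.0, sigma)
            c = cost(child)
            if c > parent_cost:  # "improvement" = child cost exceeds parent cost
                # Keep only the best improving value seen for each gene.
                if i not in improving or c > improving[i][0]:
                    improving[i] = (c, child[i])
        # Superpose all improving changes; the combined result may still
        # turn out worse than the parent, since the changes can interact.
        base = list(base)
        for i, (_, v) in improving.items():
            base[i] = v
    return base, cost(base)
```

Note that because the combined base is accepted unconditionally, this variant (like the two-best one) can temporarily decline even though each folded-in mutation improved on the parent individually.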

Another aspect that distinguishes the two methods described here from the older one is that they can produce designs that are temporarily worse than the previous iteration: the "mutations" being combined in one generation may yield a worse cost function in the next. This is in contrast to the first method, where the cost function can never decrease from one iteration to the next. The possibility of a temporary decline is the price one pays for the possibility of a faster increase in the cost function. Intelligence invariably implies fallibility. The theory of natural selection, apparently, does not allow species to get temporarily worse, which might disqualify these methods from the category of Genetic Algorithms. Maybe the biologists have something to think about: can intelligent species temporarily decline?
