Talk:Evolutionary algorithm

I should have stated that the original was created on Python 3.x. Thanks for making it work on 2.X too. --[[User:Paddy3118|Paddy3118]] 23:10, 26 February 2010 (UTC)
 
== Are the Python and C++ solutions cheating? ==
 
The Python solution makes the mutation rate depend on the distance to the target. This sounds to me like cheating, because the target (and therefore the distance to it) should ideally be unknown. Note that I don't see a problem in principle with modifying the mutation rate; the problem is using information about the distance to the target to determine it. The mutation rate could well itself evolve, but knowledge about the target should only be used for selection, not for mutation. --[[User:Ce|Ce]] 09:00, 1 September 2010 (UTC)
 
'''Python'''
def mutaterate():
    'Less mutation the closer the fit of the parent'
    return 1 - ((perfectfitness - fitness(parent)) / perfectfitness * (1 - minmutaterate))
 
'''C++'''
double const mutation_rate = 0.02 + (0.9*fitness)/initial_fitness;
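
For contrast, here is a rough sketch (not taken from the task page; the names, population size, and constants are invented for illustration) of the alternative suggested above: each candidate carries its own mutation rate, that rate itself mutates and is inherited from whichever child wins selection, and the target is consulted only inside the fitness function used for selection.

import random
import string

target = "METHINKS IT IS LIKE A WEASEL"
charset = string.ascii_uppercase + " "

def fitness(candidate):
    # Selection is the only place the target is consulted.
    return sum(c == t for c, t in zip(candidate, target))

def mutate(candidate, rate):
    # Blind mutation: no knowledge of the target is used here.
    return "".join(random.choice(charset) if random.random() < rate else c
                   for c in candidate)

def drift(rate):
    # The rate itself mutates by a small random factor and simply
    # hitchhikes along with whichever string wins selection.
    return min(1.0, max(0.01, rate * random.uniform(0.9, 1.1)))

parent = "".join(random.choice(charset) for _ in target)
parent_rate, generation = 0.1, 0
while fitness(parent) < len(target):
    generation += 1
    children = [(mutate(parent, r), r) for r in (drift(parent_rate) for _ in range(100))]
    children.append((parent, parent_rate))   # elitism: never lose the best string so far
    parent, parent_rate = max(children, key=lambda c: fitness(c[0]))
print(generation, parent, round(parent_rate, 4))

Whether the rate settles near a useful value is left entirely to selection; the point is only that nothing outside fitness() ever looks at the target.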
 
: Looks like it's within the letter of the task, but as to the spirit, I don't know. Intuition tells me that knowing the true distance to optimal will help avoid problems of local minima, and some (but certainly not all!) problems that evolutionary algorithms are applied to have true distance (or a close approximation of such) available. This may be a good case for splitting the task and specifying a goal-agnostic algorithm. --[[User:Short Circuit|Michael Mol]] 16:19, 1 September 2010 (UTC)
: In a real problem, you've got a high-dimensional space that you're searching and the fitness function is only poorly known (the profusion of species is clear demonstration that there are many local minima in the problem space that is biology). However, the only effect of varying the mutation rate with fitness, given that we have a reasonable metric, is that it results in faster convergence with smaller populations at each step. It doesn't change the fact that you're ''still'' having to do the evolution towards a solution through random variation and selection, which is the whole point. –[[User:Dkf|Donal Fellows]] 08:26, 2 September 2010 (UTC)
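: A quick sketch of that point (illustrative only; the rate functions and constants below are not from either posted solution): the variation-and-selection loop is the same whichever rate schedule is plugged in, so a target-agnostic fixed rate still converges, and the schedules differ only in how many generations they take.

import random
import string

target = "METHINKS IT IS LIKE A WEASEL"
charset = string.ascii_uppercase + " "

def fitness(s):
    return sum(a == b for a, b in zip(s, target))

def mutate(s, rate):
    return "".join(random.choice(charset) if random.random() < rate else c for c in s)

def evolve(rate_fn, population=100):
    # Identical random variation plus cumulative selection in both runs;
    # only the mutation-rate schedule differs.
    parent = "".join(random.choice(charset) for _ in target)
    generations = 0
    while fitness(parent) < len(target):
        generations += 1
        rate = rate_fn(parent)
        candidates = [mutate(parent, rate) for _ in range(population)] + [parent]
        parent = max(candidates, key=fitness)
    return generations

fixed = lambda p: 0.05                                                     # target-agnostic
scaled = lambda p: 0.02 + 0.9 * (len(target) - fitness(p)) / len(target)   # uses distance to target
print("fixed:", evolve(fixed), "generations; scaled:", evolve(scaled), "generations")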
: Varying the mutation rate isn't necessarily cheating, but it does deviate from Richard Dawkins' purpose of demonstrating "random variation combined with non-random cumulative selection". The Weasel model uses a mutator and a selector. The mutator is intended to be random while the selector is non-random. If you add a non-random process to the mutator, it defeats the whole purpose of Dawkins' model. I don't understand why it's necessary to vary the mutation rate in the model. Is there biological evidence that nature reduces mutations as we near the ideal target? Dawkins states the notion of the ideal target is "absurd". It's important to stick with the purpose of the model and not change its essence simply to converge more quickly. It's not a competition about who has the most rapidly converging model. --[[User:Davidj|Davidj]] 18:02, 1 October 2011 (UTC)
: Out of curiosity, could the author of the C++ solution explain the constants 0.02 and 0.9 used to calculate the mutation_rate? Thank you. (double const mutation_rate = 0.02 + (0.9*fitness)/initial_fitness;) --[[User:Davidj|Davidj]] 18:02, 1 October 2011 (UTC)
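: Not the author, but reading the quoted formula: if fitness counts the remaining mismatches (so it falls from initial_fitness at the start to 0 at the target), then 0.02 is the floor of the rate once the string is nearly correct (0.02 + 0.9·0/initial_fitness = 0.02), and 0.9 sets the span above that floor, giving 0.02 + 0.9 = 0.92 for the initial, fully unmatched string. If fitness counted matches instead, the rate would grow as the string improves, which would contradict the stated intent, so the mismatch reading seems the likely one.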