Talk:Evolutionary algorithm

From Rosetta Code
Revision as of 16:19, 1 September 2010 by MikeMol (talk | contribs) (→‎Is the Python solution cheating?: Within the letter, but spirit depends on the problem space.)

Adherence to similarly named variables/functions?

When I added the text:

Note: to aid comparison, try and ensure the variables and functions mentioned in the task description appear in solutions

I was not sure if I might be "stepping on implementers' toes" too much, so if your language's normal idioms would be broken by the above, then please discuss, although case changes/length changes should just be made. (I would expect it to be broken by languages created solely to be obtuse, such as brainf*!ck, and calculator programs might have limits on name sizes). --Paddy3118 03:40, 7 October 2009 (UTC)

Controversy on Wikipedia

WP mentions that this is a controversial algorithm (with all the noise apparently coming from people who don't grok what it is actually showing, naturally). However, there are only really two controversial points:

  1. This is (somewhat) directed evolution, though this mostly just affects the fitness function.
  2. There is an end-point to the evolution (only really needed to make it into an algorithm rather than an ongoing process).

These issues can be technically resolved by making only the fitness function know what the target is, and by having the algorithm terminate only once perfect fitness is achieved (i.e., when no possible mutation will improve things). At that point, the scientific issues are moot, and the non-scientific objections are about whether the algorithm should even exist, which is out of scope for this site anyway. As a bonus, it also makes the presentation of the algorithm more aesthetically pleasing, as it removes the assumption that there is only one possible maximum (informal testing using a fitness function that takes the max of individual fitness functions, as in this task but with different target strings, indicates that this works quite well). —Donal Fellows 10:12, 9 October 2009 (UTC)
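A minimal Python sketch of that arrangement, assuming hypothetical target strings and parameter values (only the fitness function consults the targets, the loop terminates solely on perfect fitness, and the max-over-targets fitness gives multiple global maxima):

```python
import random

CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
# Hypothetical target strings; the evolutionary loop below only ever sees
# the combined black-box fitness, never the strings themselves.
TARGETS = ["METHINKS IT IS LIKE A WEASEL",
           "FORTUNE FAVOURS THE BOLD ONE"]

def closeness(candidate, target):
    # Fraction of positions that match (1.0 = perfect copy).
    return sum(a == b for a, b in zip(candidate, target)) / len(target)

def fitness(candidate):
    # Max over several hidden targets, so there are multiple global maxima.
    return max(closeness(candidate, t) for t in TARGETS)

def evolve(nprog=100, rate=0.05):
    parent = "".join(random.choice(CHARSET) for _ in TARGETS[0])
    # Terminate only on perfect fitness; no distance-to-target information
    # leaks out of fitness() into the mutation step.
    while fitness(parent) < 1.0:
        brood = ["".join(random.choice(CHARSET) if random.random() < rate else c
                         for c in parent) for _ in range(nprog)]
        parent = max(brood + [parent], key=fitness)
    return parent
```

Which target `evolve()` converges to depends on which maximum the population drifts toward first.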

Tacit J solution

I wanted to add a purely tacit (arguments not explicitly referenced) J solution. I came up with the following two options and am seeking opinions on which one is "better".
VERSION ONE
Uses a user-defined adverb and conjunction that help keep the rest of the code tidy.

<lang j>CHARSET=: 'ABCDEFGHIJKLMNOPQRSTUVWXYZ '
NPROG=: 100    NB. number of progeny (C)
MRATE=: 0.05   NB. mutation rate

create =: (?@$&$ { ])&CHARSET            NB. creates random list from charset of same shape as y
fitness =: +/@:~:"1
copy =: # ,:
mutate =: &(>: $ ?@$ 0:)(`(,: create))}  NB. adverb
select =: ] {~ (i. <./)@:fitness         NB. select fittest member of population

nextgen =: select ] , [: MRATE mutate NPROG copy ]
while =: conjunction def '(] , (u {:))^:(v {:)^:_ ,:'

evolve=: nextgen while (0 < fitness) create</lang>

VERSION TWO
Only uses verbs (functions), which can be easier to understand/parse, especially to start with.

<lang j>CHARSET=: 'ABCDEFGHIJKLMNOPQRSTUVWXYZ '
NPROG=: 100   NB. number of progeny (C)

create =: (?@$&$ { ])&CHARSET     NB. get random list from charset of same shape as y
fitness =: +/@:~:"1
copy =: # ,:
mrate =: %@:#                     NB. mutation rate
mutate =: (0&{:: >: $@(1&{::) ?@$ 0:)`((,: create)@(1&{::))}@;
select =: ] {~ (i. <./)@:fitness  NB. select fittest member of population

nextgen =: ] , [ select {:@] , mrate@[ mutate NPROG copy {:@]
notperfect =: (0 < fitness) {:

evolve=: nextgen ^: notperfect ^:_ ,:@create</lang> --Tikkanz 00:53, 4 November 2009 (UTC)

Put them both in, while explaining the ways in which each is better than the other (perhaps with a little more explanatory text than above). It's a shame that there isn't highlighting for J yet (especially the comments) as that would aid reading. –Donal Fellows 08:59, 5 November 2009 (UTC)

Problem in previous Python implementation

In the first Python example:

<lang Python>def mutaterate():
    'Less mutation the closer the fit of the parent'
    # This formula does not work as intended because it performs int division
    return 1 - ((perfectfitness - fitness(parent)) / perfectfitness * (1 - minmutaterate))</lang>
For example, ((20 - 10)/20*(1-0.09)) evaluates to 0, because (20 - 10)/20 is truncated to 0 by integer division.
Thus the program gets stuck in an infinite loop, waiting for a mutation that never occurs.
I edited the program to avoid this by casting perfectfitness to float.
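A standalone illustration of the divergence, using the example numbers above (perfectfitness = 20, a parent fitness of 10, minmutaterate = 0.09); the function names here are illustrative, not the task code, and `//` stands in for Python 2's truncating int division:

```python
perfectfitness = 20   # illustrative values from the example above
minmutaterate = 0.09

def rate_int(parentfitness):
    # Python 2 semantics: int / int truncates; // reproduces that here.
    return 1 - ((perfectfitness - parentfitness) // perfectfitness
                * (1 - minmutaterate))

def rate_float(parentfitness):
    # The fix: cast perfectfitness to float so the division is exact.
    return 1 - ((perfectfitness - parentfitness) / float(perfectfitness)
                * (1 - minmutaterate))

print(rate_int(10))    # 1 - 0 * 0.91 = 1.0: the rate never drops
print(rate_float(10))  # 1 - 0.5 * 0.91 ≈ 0.545: scales with parent fitness
```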

--lookog --14:22, 26 February 2010 (UTC)

I should have stated that the original was created on Python 3.x. Thanks for making it work on 2.X too. --Paddy3118 23:10, 26 February 2010 (UTC)

Is the Python solution cheating?

The Python solution makes the mutation rate depend on the distance to the target. This sounds to me like cheating, because the target (and therefore the distance to it) should ideally be unknown. Note that I don't see a principal problem with modifying the mutation rate; the problem is using information about the distance to the target in determining it. The mutation rate could well itself evolve, but the knowledge about the target should only be used for selection, not for mutation. --Ce 09:00, 1 September 2010 (UTC)

Looks like it's within the letter of the task, but as to the spirit, I don't know. Intuition tells me that knowing the true distance to optimal will help avoid problems of local minima, and some (but certainly not all!) problems that evolutionary algorithms are applied to have true distance (or a close approximation of such) available. This may be a good case for splitting the task and specifying a goal-agnostic algorithm. --Michael Mol 16:19, 1 September 2010 (UTC)
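A sketch of the goal-agnostic variant Ce describes, assuming hypothetical parameters: each genome carries its own mutation rate, which is jittered and inherited, so selection alone (the only place the target is consulted) tunes the rate over time:

```python
import random

CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"  # known only to fitness()
NPROG = 100

def fitness(candidate):
    # The only place the target is consulted, and it is used purely to
    # rank candidates for selection -- never to set the mutation rate.
    return sum(a != b for a, b in zip(candidate, TARGET))

def mutate(parent, rate):
    return "".join(random.choice(CHARSET) if random.random() < rate else c
                   for c in parent)

def evolve():
    parent = "".join(random.choice(CHARSET) for _ in TARGET)
    rate = 0.1  # hypothetical starting rate; evolves with the genome
    while fitness(parent) > 0:
        # Elitism: keep the parent (and its rate) in the selection pool.
        brood = [(parent, rate)]
        for _ in range(NPROG):
            # Each child inherits a jittered copy of the parent's rate,
            # clamped to hypothetical bounds for stability.
            r = min(0.5, max(0.005, rate * random.uniform(0.8, 1.25)))
            brood.append((mutate(parent, r), r))
        parent, rate = min(brood, key=lambda pr: fitness(pr[0]))
    return parent
```

The rate tends to fall as the population nears the target, because late in the run only low-rate children are likely to produce the next improvement, and selection passes their rate on.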