Talk:Evolutionary algorithm: Difference between revisions

:: I am running the Pike version right now on a 5600-char string. After 300000 iterations it found 89% if only one character is changed per mutation. If it changes 10 characters per mutation, it only finds 48% at that point, so a low mutation rate is clearly more effective. As for scalability, I am guessing the algorithm is linear: O(n).
::To speed it up, some parallelization may be helpful. By mutating only one character at a time, we know exactly which change caused a successful mutation, so we can run multiple mutations at the same time and then combine the results.--[[User:EMBee|eMBee]] 13:45, 11 October 2011 (UTC)
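For reference, a minimal sketch of the single-character-per-mutation hill climber described above (the experiment itself used Pike; the target string, 27-character alphabet, and fitness function here are illustrative assumptions, not taken from the thread):

<syntaxhighlight lang="python">
import random
import string

# Illustrative choices: 26 uppercase letters plus space gives the 27-character alphabet.
ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def fitness(candidate, target=TARGET):
    """Number of positions where the candidate already matches the target."""
    return sum(c == t for c, t in zip(candidate, target))

def mutate_one_char(candidate):
    """Replace exactly one randomly chosen position with a random alphabet character."""
    i = random.randrange(len(candidate))
    chars = list(candidate)
    chars[i] = random.choice(ALPHABET)
    return "".join(chars)

def hill_climb(target=TARGET):
    """Keep a mutant only when it improves the match count; return total mutations tried."""
    current = "".join(random.choice(ALPHABET) for _ in target)
    mutations = 0
    while current != target:
        mutant = mutate_one_char(current)
        if fitness(mutant, target) > fitness(current, target):
            current = mutant
        mutations += 1
    return mutations

if __name__ == "__main__":
    print(hill_climb(), "mutations to reach the target")
</syntaxhighlight>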
::: It's not linear. With your model, suppose the string length is <math>L</math> and the alphabet size is <math>n</math> (27 in this case). At any given stage where there are still <math>x</math> unmatched characters, a random mutation will improve the fitness iff: a) you picked an unmatched char, with chance <math>x/L</math>; b) it mutates into the matching char, with chance <math>1/n</math>. So the chance that a random mutation is an improvement is <math>{x \over n L}</math>, which is to say, with <math>x</math> unmatched chars, you'd expect an improvement that reduces <math>x</math> by 1 every <math>nL/x</math> mutations. Suppose you start from a string with zero matches (<math>x = L</math>) and work all the way to a full match (<math>x = 0</math>); the total expected number of mutations should be <math>\sum_{x=1}^L nL/x = nL(1/1 + 1/2 + 1/3\cdots+1/L) \approx nL\ln(L)</math>. Actually the sum doesn't go all the way up to <math>L</math>, because your initial random string should already match about <math>1/n</math> of the target, but that is a relatively small difference if <math>L</math> is large. --[[User:Ledrug|Ledrug]] 15:51, 11 October 2011 (UTC)
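One quick way to sanity-check the <math>nL\ln(L)</math> estimate is to simulate the probability model directly. The sketch below tracks only the count of unmatched positions rather than an actual string, and the parameters are illustrative, not taken from the 5600-char experiment:

<syntaxhighlight lang="python">
import math
import random

n = 27    # alphabet size
L = 100   # string length (kept small so the simulation runs quickly)

def simulate():
    """Count mutations until all L positions match, under the one-char-per-mutation model."""
    # Random start: each position is unmatched with probability (n-1)/n.
    unmatched = sum(random.randrange(n) != 0 for _ in range(L))
    mutations = 0
    while unmatched > 0:
        mutations += 1
        # A mutation helps iff it hits an unmatched position (prob unmatched/L)
        # and draws the right character there (prob 1/n).
        if random.random() < unmatched / (n * L):
            unmatched -= 1
    return mutations

trials = 20
avg = sum(simulate() for _ in range(trials)) / trials
harmonic = sum(1 / x for x in range(1, L + 1))
print("simulated average :", avg)
print("n*L*H_L           :", n * L * harmonic)
print("n*L*ln(L) estimate:", n * L * math.log(L))
</syntaxhighlight>

The simulated average comes out close to <math>nL H_L</math>, and slightly below it because of the initial <math>1/n</math> of positions that already match, as noted above.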