Talk:Sieve of Eratosthenes

::::And of course what we should be comparing are [http://en.wikipedia.org/wiki/Analysis_of_algorithms#Empirical_orders_of_growth empirical orders of growth], not constant factors. So if one is consistently slower than the other by the same factor ''at several size points'', that's fine. TD with postponement (i.e. the optimal TD, your <1>) runs pretty well at lower ranges (10k-20k primes) in Haskell too. That's because its complexity is O(n^1.5/(log n)^0.5), and at low ranges the ''log'' factor is approximated by a higher power-law coefficient, i.e. its influence can be sizeable (I've commonly seen n^1.3 at low ranges of OTD, going steadily to n^1.45..1.49 at higher ranges, as it should).
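::::For reference, a minimal sketch (with illustrative names, not necessarily the exact code under discussion) of trial division with postponement, where a candidate is divided only by primes whose square does not exceed it:
<syntaxhighlight lang="haskell">
-- Trial division with postponement: a candidate n is divided only by
-- primes p with p*p <= n, which is what gives the complexity quoted above.
primesTD :: [Integer]
primesTD = 2 : 3 : filter isPrime [5,7..]
  where
    isPrime n = all ((/= 0) . rem n)
                    (takeWhile (\p -> p*p <= n) primesTD)
</syntaxhighlight>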
::::About your edits: there's no need to add postponement to Bird's code; it achieves the same effect on its own, because it folds to the right. Adding postponement achieves that effect laboriously, and should still be computationally worse [http://www.haskell.org/haskellwiki/Prime_numbers#Linear_merging because it folds from the left]. My version should probably be restored (or maybe you have a tweak for it), because it is a stepping stone from the simple lazy version ''to'' Bird's version. (It's strange that your "postponed Bird" is slightly faster than the straight version of it.) -- [[User:WillNess|WillNess]] ([[User talk:WillNess|talk]]) 09:17, 14 September 2014 (UTC)
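::::For reference, a sketch in the general shape of Bird's sieve (illustrative names; details may differ from the article's version). The relevant point about folding to the right is that the merge step emits its first element before touching the rest of the fold, so each prime's multiples are demanded only after that prime has already been produced:
<syntaxhighlight lang="haskell">
-- A sketch in the shape of Bird's sieve: composites are the union of the
-- multiples of each prime, built by a right fold whose merge step emits
-- its first element before looking at the rest of the fold.
primesBird :: [Integer]
primesBird = 2 : ([3..] `minus` composites)
  where
    composites = union [map (p *) [p ..] | p <- primesBird]

    minus (x:xs) (y:ys)
      | x < y     = x : minus xs (y:ys)
      | x == y    =     minus xs ys
      | otherwise =     minus (x:xs) ys

    union = foldr merge []
      where
        merge (x:xs) ys = x : merge' xs ys
        merge' (x:xs) (y:ys)
          | x < y     = x : merge' xs (y:ys)
          | x == y    = x : merge' xs ys
          | otherwise = y : merge' (x:xs) ys
</syntaxhighlight>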
 
::::Elibarzilay: so I've switched my postponed version back in, in place of your postponed Bird's, and placed it ''before'' Bird's. You're welcome to tweak it with your ''when-bigger'', or I'll do it when I have more time (I'll have to check whether it's faster or not). It's more or less equivalent to Haskell's <code>span</code>-using versions; but with GHC, manually inlining the ''span'' as I did here was more efficient. -- [[User:WillNess|WillNess]] ([[User talk:WillNess|talk]]) 10:09, 14 September 2014 (UTC)
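::::For comparison, a sketch of the <code>span</code>-using postponed sieve referred to above (illustrative names; the article's version may differ, e.g. by manually inlining the <code>span</code>):
<syntaxhighlight lang="haskell">
-- Postponed sieve: multiples of a prime p are removed only from the
-- candidates at p*p and beyond; span splits off the candidates below p*p,
-- which are already prime, having survived division by all smaller primes.
primesPostponed :: [Integer]
primesPostponed = 2 : 3 : sieve (tail primesPostponed) [5,7..]
  where
    sieve (p:ps) xs = h ++ sieve ps [x | x <- t, rem x p /= 0]
      where
        (h, t) = span (< p*p) xs
</syntaxhighlight>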