# Talk:CalmoSoft primes

### Raku incorrect (now fixed)

I'm getting

For primes up to two hundred and fifty: The following sequence of 49 consecutive primes yields a prime sum: 11 + 13 + 17 + 19 + 23 + 29 +..+ 223 + 227 + 229 + 233 + 239 + 241 = 5,813

which disagrees with the Raku output. For ease of reference, the relevant Raku output is

For primes up to two hundred fifty: Longest sequence of consecutive primes yielding a prime sum: elements: 47 7 + 11 + 13 + 17 + 19 + 23...199 + 211 + 223 + 227 + 229, sum: 5,107 11 + 13 + 17 + 19 + 23 + 29...211 + 223 + 227 + 229 + 233, sum: 5,333

Sadly, I don't think there is an OEIS entry I can check this against (tee hee) --Petelomax (talk) 02:48, 9 April 2023 (UTC)
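In the absence of an OEIS entry, the figure above is easy to reproduce with a brute-force search. Here is a minimal Python sketch (the helper names are my own, not from any of the task's samples) that finds the longest run of consecutive primes below 250 whose sum is prime:

```python
# Brute-force check of the "primes up to 250" case: scan windows of
# consecutive primes, longest first, and return the first prime-sum window.

def primes_below(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i in range(limit) if sieve[i]]

def is_prime(n):
    """Plain trial division - fine for sums this small."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def longest_prime_sum_run(limit):
    ps = primes_below(limit)
    for length in range(len(ps), 0, -1):        # longest window first
        for start in range(len(ps) - length + 1):
            s = sum(ps[start:start + length])
            if is_prime(s):
                return ps[start:start + length], s

run, total = longest_prime_sum_run(250)
print(len(run), run[0], run[-1], total)   # -> 49 11 241 5813
```

This agrees with the result quoted above: 49 consecutive primes, 11 through 241, summing to 5,813.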

- I agree with your result for primes < 250 and also disagree with some of the other Raku results. Your 2-second target for the stretch goal may be a bit ambitious for the interpreted languages, as it takes Wren 2.6 seconds to sieve for primes up to 50 million before even starting to figure out the longest CalmoSoft prime sequence! --PureFox (talk) 10:29, 9 April 2023 (UTC)

- I also agree with your 250 prime result, and that interpreted languages may struggle to do it in 2 seconds. See the notes on the Algol 68 stretch sample. --Tigerofdarkness (talk) 10:53, 9 April 2023 (UTC)

- You are right - I've added a fixed version: the sum is now 72618848632313, which agrees with the Phix sample.
- Sadly, it takes somewhat longer than the Wren sample - does the Wren interpreter use a JIT compiler? --Tigerofdarkness (talk) 13:26, 9 April 2023 (UTC)

- No, there's a single-pass compiler which parses the Wren source directly to bytecode (no intermediate AST), which the VM then interprets. A number of devices are used to make Wren faster, though I can't see JIT compilation happening as it would make the VM too complicated. --PureFox (talk) 14:03, 9 April 2023 (UTC)

### Sequence length

I agree with the above comments about the majority of the time being spent finding the primes under 50 000 000; however, I did notice some performance improvement by considering the sequence lengths - assuming the sequence is more than one prime long:

- If the sequence starts with 2, it must have even length.
- If the sequence starts with 3 or more, it must have odd length.

--Tigerofdarkness (talk) 10:04, 10 April 2023 (UTC)
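The parity observation above follows from the fact that every prime except 2 is odd: a sum of k odd numbers has the parity of k, so it can only be an odd (hence possibly prime) total when k is odd, while a run starting with 2 adds one even term and flips the requirement. A small Python sketch of the filter (the function name is my own):

```python
# Parity filter for candidate runs: a sum of k odd primes is even (so
# composite, for k > 1) unless k is odd; with 2 included, the sum is odd
# only when the run length is even.

def run_length_can_be_prime(first_prime, length):
    """Return False when the run's sum is certainly even and > 2."""
    if length <= 1:
        return True                  # a single prime is its own prime sum
    if first_prime == 2:
        return length % 2 == 0       # 2 + (length-1) odds: odd iff length even
    return length % 2 == 1           # length odds: odd iff length odd

# Quick sanity check against direct summation on the first ten primes:
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
for start in range(len(primes)):
    for length in range(2, len(primes) - start + 1):
        s = sum(primes[start:start + length])
        assert (s % 2 == 1) == run_length_can_be_prime(primes[start], length)
```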

- It's a good point but, unfortunately, it makes no significant difference to my Wren example (0.1 seconds at most). Whilst it skips the primality test for sequences of the wrong length, this seems to be counterbalanced by the cost of interpreting the extra code, given that for these particular numbers there aren't many iterations needed anyway before the longest sequence is found.

- What would make a difference is to speed up the sieving. The C++ entry is using *primesieve* which, amongst other tricks, uses all available cores to maximize speed. Unfortunately, Wren is single-threaded and so this technique is not available to the CLI version, though, in an embedded application, one could write a function to utilize this library from a C/C++ host which Wren could then call. --PureFox (talk) 13:23, 10 April 2023 (UTC)
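Short of multi-threading, a cheap single-threaded improvement in any language is an odd-only sieve, which halves both memory and work versus a naive sieve. A Python sketch of the idea (my own illustration, not the *primesieve* library or any of the task's samples):

```python
# Odd-only sieve: flags[i] represents the odd number 2*i + 1, so multiples
# of 2 are never stored or crossed off at all.

def odd_sieve(limit):
    """Primes below limit, storing flags for odd numbers only."""
    if limit <= 2:
        return []
    half = limit // 2
    flags = bytearray([1]) * half
    flags[0] = 0                      # 1 is not prime
    i = 1
    while (2 * i + 1) ** 2 < limit:
        if flags[i]:
            p = 2 * i + 1
            # odd multiples of p step by 2p in value, i.e. by p in index space
            start = (p * p) // 2
            flags[start::p] = bytearray(len(flags[start::p]))
        i += 1
    return [2] + [2 * j + 1 for j in range(1, half) if flags[j]]

print(len(odd_sieve(100)))   # -> 25
```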

- An area I've been investigating is how best to check whether the candidate sequence sums are prime, which is quite slow in Wren for the larger sums. My 'Int.isPrime' routine uses an implicit {2, 3} basis for wheel factorization which, curiously, seems optimal for Wren - if I try to use {2, 3, 5}, as in the C++ example, it's slightly slower. 
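For reference, a {2, 3} wheel means trial division only tests candidate divisors of the form 6k ± 1, since every prime above 3 has that form. A Python sketch of the idea (not Wren's actual 'Int.isPrime', just the same technique):

```python
# Trial division with a {2, 3} wheel: after handling 2 and 3, candidate
# divisors are 5, 7, 11, 13, 17, 19, ... - i.e. 6k - 1 and 6k + 1 pairs.

def is_prime_23_wheel(n):
    if n < 2:
        return False
    if n < 4:
        return True                   # 2 and 3
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5                             # first 6k - 1 candidate
    while d * d <= n:
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6
    return True

print([n for n in range(2, 50) if is_prime_23_wheel(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```

A {2, 3, 5} wheel skips more candidates per revolution (8 residues mod 30 instead of 2 mod 6) but needs a longer stride table, which plausibly explains the interpreter-overhead trade-off described above.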

- However, what does help is to use the Miller-Rabin test instead, which brings the overall time down from 4.3 to 3.5 seconds. Unfortunately, you can't use this for checking primality generally, as it's much slower than wheel-based factorization if you're checking a block of numbers which are mostly non-prime. I checked the first 1 million integers for primality and Miller-Rabin took about 30 times as long. It's therefore best used for checking isolated large numbers, as we have for this task. --PureFox (talk) 11:06, 12 April 2023 (UTC)

- I've just realized that the above test was unfair because there's no need to use the full 'witness' array for smaller numbers. Using the table in A014233 means that Miller-Rabin takes about 8.5 times as long as wheel factorization to check the first million integers, and the multiple falls to around 7.1 for the first three million numbers. --PureFox (talk) 16:03, 12 April 2023 (UTC)
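To make the tiered-witness idea concrete, here is a Python sketch of a deterministic Miller-Rabin test that picks its witness prefix from the A014233 thresholds (the names and the exact tier list used are my own; A014233 continues beyond the bounds shown here):

```python
# Deterministic Miller-Rabin: below each A014233 bound, the listed witness
# prefix suffices, so small inputs need only one or two modular exponentiations.

A014233_BOUNDS = [
    (2047, [2]),
    (1373653, [2, 3]),
    (25326001, [2, 3, 5]),
    (3215031751, [2, 3, 5, 7]),
    (2152302898747, [2, 3, 5, 7, 11]),
    (3474749660383, [2, 3, 5, 7, 11, 13]),
    (341550071728321, [2, 3, 5, 7, 11, 13, 17]),
]
ALL_WITNESSES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]  # covers 2^64

def is_prime_mr(n):
    if n < 2:
        return False
    for p in ALL_WITNESSES:            # quick small-prime screen
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                  # write n - 1 as d * 2^s with d odd
        d //= 2
        s += 1
    witnesses = next((w for bound, w in A014233_BOUNDS if n < bound),
                     ALL_WITNESSES)
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False               # a is a witness that n is composite
    return True

print(is_prime_mr(72618848632313))     # the fixed sum discussed above
```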