# Talk:Sisyphus sequence

## Thirty billion primes

Regarding Pete's estimate that 30 000 000 000 primes with values up to around 570 000 000 000 would be needed for the extreme stretch goal...

It occurred to me when I started this that the sieve needn't be held in memory: the primes could be written to a file (probably several files) by another program, and the Sisyphus sequence program would just need to read them one at a time. The values would easily fit in 8-byte integers, or even 7-byte ones.

"All" that would be needed then would be a program to create the files of primes.

So how did Russ Cox do it?

--Tigerofdarkness (talk) 21:07, 9 June 2023 (UTC)

- The only other info I have (see here on Reddit) is that it only took him 2 hours! I'd imagine he used C++ on his own machine, as it's nothing to do with his work at Google. This probably rules out using some sort of segmented sieve and, on the face of it, the algorithm for generating the sequence can't be parallelized: if you split the numbers to be searched into chunks, the second chunk wouldn't know where to start in the prime sequence until the first chunk had been run.
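To make that sequential dependency concrete, here's a minimal Python sketch of the sequence as I understand the task (halve an even term; add the next unused prime to an odd term). The index into the prime sequence is part of the running state, which is exactly what a later chunk can't know in advance.

```python
# Minimal sketch of generating the Sisyphus sequence: start at 1; halve
# even terms, add the next prime (2, 3, 5, ...) to odd terms. The
# position in the prime sequence is carried forward from term to term,
# so chunk N+1 can't start until chunk N has finished.
def primes():
    """Naive trial-division prime generator (fine for a sketch)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def sisyphus(count):
    """Yield the first `count` terms of the Sisyphus sequence."""
    nextprime = primes()
    term = 1
    for _ in range(count):
        yield term
        term = term // 2 if term % 2 == 0 else term + next(nextprime)
```

The naive generator is obviously far too slow for the stretch goal; the point is only the shape of the state that has to flow from one term to the next.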

- The quickest way I know of to get from one prime to the next without using a sieve is to use GMP's *mpz_nextprime* function, though the numbers here are 'small' by GMP's standards and there may be quicker ways optimized for 8-byte integers. Even so, it seems an extremely tall order to brute-force this in only 2 hours, and you may be right that he has a pre-built file or files of prime numbers to use for his OEIS work. --PureFox (talk) 08:13, 10 June 2023 (UTC)
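For illustration, here's a pure-Python stand-in for a next-prime call (not GMP's actual implementation). Deterministic Miller-Rabin with a fixed base set is sufficient for all 8-byte integers, so no sieve or table is needed at all.

```python
# Pure-Python stand-in for a next-prime call such as GMP's mpz_nextprime.
# The Miller-Rabin base set below is known to be deterministic for all
# 64-bit integers.
def is_prime(n):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    n += 1
    while not is_prime(n):
        n += 1
    return n
```

Whether stepping prime-by-prime like this could ever compete with sieving 30+ billion primes is another matter; it's just the no-sieve approach made concrete.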

- According to these stats, this package, which is written in C++ and has memory usage of order O(sqrt(n)), can generate primes up to 10^12 (c. 37.6 billion of them, more than the estimated 30 billion needed for the extreme stretch task) in as little as 1104 seconds, using a highly optimized multi-threaded, multi-core segmented bit-sieving approach. Moreover, one can simply iterate the primes (which is all we need for this task) rather than pre-building a table of them, though I suspect this will be somewhat slower as it's single-threaded.
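For anyone curious what O(sqrt(n))-memory sieving looks like, here's a minimal Python sketch of a segmented sieve (nothing like the package's optimized code): only the base primes up to sqrt(limit) plus one segment are ever held in memory, and the primes are iterated rather than stored.

```python
# Sketch of a segmented sieve. Memory use is the base primes up to
# sqrt(limit) plus one fixed-size segment, so primes can be streamed
# to the consumer instead of being tabled.
def segmented_primes(limit, segment_size=32768):
    # Base primes up to sqrt(limit) via a small classic sieve.
    root = int(limit ** 0.5) + 1
    small = bytearray([1]) * (root + 1)
    small[0:2] = b"\x00\x00"
    for i in range(2, int(root ** 0.5) + 1):
        if small[i]:
            small[i * i :: i] = bytearray(len(small[i * i :: i]))
    base = [i for i in range(2, root + 1) if small[i]]

    # Sieve one segment [low, high] at a time.
    for low in range(2, limit + 1, segment_size):
        high = min(low + segment_size - 1, limit)
        seg = bytearray([1]) * (high - low + 1)
        for p in base:
            if p * p > high:
                break
            # First multiple of p in the segment, but never p itself.
            start = max(p * p, (low + p - 1) // p * p)
            idx = start - low
            seg[idx::p] = bytearray(len(seg[idx::p]))
        for i, flag in enumerate(seg):
            if flag:
                yield low + i
```

This uses a byte per candidate for simplicity; the real speed in the C++ package comes from bit packing, wheel factorization, and cache-sized segments, none of which this sketch attempts.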

- Now, the Python and Raku entries are using wrappers for this package, and (although I hadn't realized it before) the segmented sieve I use for Wren uses Kim Walisch's basic algorithm (single-threaded), which is available as a public-domain GitHub gist here. Wren takes about 65 seconds to sieve a billion primes compared to only 0.8 seconds for C++; allowing for the difference in speed of the test machines (theirs is about 1/3 faster than mine), that makes it 60 times slower, but that's no surprise. This is using a byte sieve, as Wren can't really do bit sieving without a significant loss in performance.

- I see that the Python author has now shown conclusively that the extreme stretch goal can be achieved in less than two hours using *primesieve* with a speed boost from PyPy. It turned out that primes up to about 677 billion were needed for this, so Peter's previous estimate of 570 billion was not too far away. --PureFox (talk) 14:21, 11 June 2023 (UTC)

- Thanks. --Tigerofdarkness (talk) 16:09, 11 June 2023 (UTC)

- I suspect the problem is that you're not installing the development version. I don't currently have a working Windows machine, but I just installed the C++ version on Ubuntu with: 'sudo apt install libprimesieve-dev'.