User talk:Petelomax


welcome

Hi, and welcome to RC :-)
(It's nice to get language authors' contributions).

Hi, your permutations example seems to be written to answer more than one task, but in doing so it becomes a not very good RC code example for those tasks, as it shows extraneous answers and code not needed by the task. It would be far better if you used a library or module import system and tailored each example to the task at hand. I have flagged these examples for attention, as I don't think it would be good for RC tasks if this unrelated output from examples were to spread.

The Executable library task specifically asks for the use of a library. --Paddy3118 (talk) 06:12, 8 August 2015 (UTC)

Thanks, I assume you meant the deranged anagrams task, which I have fixed. --Pete Lomax (talk)

That's great. Thanks. --Paddy3118 (talk) 11:00, 8 August 2015 (UTC)

I've uploaded http://phix.is-great.org/geshi/phix.php (which just gives you a blank page) to http://phix.is-great.org/geshi/phix.zip in case that helps --Pete Lomax (talk) 08:35, 15 September 2015 (UTC)

Boids

I've noticed that you've reposted the off-site link to Phix code for Boids. Is there no possibility to post this code on-site? This is after all the whole point of the wiki. Not only because it allows easy code comparisons, but also because it makes it possible for people to change the code. Also all code on Rosetta is posted under the same license. And links can become broken. Rosetta Code really isn't intended as a link directory. Fwend (talk) 14:26, 6 June 2017 (UTC)

OK, done. I also added a screenshot, trust that is acceptable as an off-site link ;-) Pete Lomax (talk) 10:37, 7 June 2017 (UTC)
Thanks, it looks great. :-) Fwend (talk) 20:28, 7 June 2017 (UTC)

Use of Factorization Wheels in Sieves

As per the comment you made in your contribution to Extensible_prime_generator#Phix, you said "I investigated the use of so-called "wheels", beguiled by the claim that "a 2-3-5-7 wheel saves 77%", until I realised the breakdown was 2: 50%, 3: 16%, 5: 7%, 7: 4% - it is unthinkable not to exclude even numbers, the added complexity (and subscripting) of a 30- or 210- element wheel does not seem worthwhile."

With respect, you are missing something: while it is true that for extremely large ranges of primes (approaching infinity) the majority of the gain comes from the low wheel factors alone, the benefit is much greater than expected over commonly used ranges. As per this link to the Wikipedia Sieve of Eratosthenes formula and tables, there is a reduction of a factor of about 2.67 using odds-only, but a further factor of about 4.16 using the extra 3-5-7 wheel, for a sieving range of a hundred thousand as the task requires, and a factor of about 2.5 for odds-only but a further factor of about 3.26 for a range of about a billion as you calculated (actually you calculated to about two billion). In addition, it is fairly easy to implement pre-culling of the array(s) for the even larger 11-13-17-19 wheel for a further, smaller, gain.
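Those fractions are easy to verify: the proportion of candidates a wheel keeps is just the count of residues coprime to the wheel circumference divided by that circumference. A throwaway Phix sketch along these lines (purely illustrative; the wheel_fraction helper is made up, not from any sieve entry) confirms the figures:
function wheel_fraction(sequence primes)
    integer circumference = 1, kept = 0
    for i=1 to length(primes) do
        circumference *= primes[i]
    end for
    for r=0 to circumference-1 do
        bool coprime = true
        for i=1 to length(primes) do
            if remainder(r,primes[i])=0 then
                coprime = false
                exit
            end if
        end for
        kept += coprime
    end for
    return kept/circumference
end function

?wheel_fraction({2})        -- 0.5 (odds-only: half the candidates remain)
?wheel_fraction({2,3,5})    -- 8/30 ~= 0.267
?wheel_fraction({2,3,5,7})  -- 48/210 ~= 0.229, ie the "saves 77%" claim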

You further mention added complexity in subscripting, which isn't necessary: while it is true that there is added complexity in a page-segmented version such as this one in computing the page start addresses per prime, the actual subscripting for the culling operations doesn't need to be any more complex than in the odds-only case, which can be viewed as just one of the 48 modulo planes used for 2-3-5-7 wheel factorization. Part of your perceived subscripting complexity may be because you didn't realize the reduced memory use that factorization allows: you can reduce the size of your "sieve" culling array by a factor of two, as the even values aren't stored, and a further reduction of over a factor of two is gained by the extended wheels, with the only added complexity being the start addresses required at the beginning of each page.
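To make the odds-only indexing concrete: index i of the culling array can stand for the odd number 2*i+1, so the array is half the size, and culling the odd multiples of p simply steps through it in strides of p. A minimal illustrative sketch (not taken from the Extensible_prime_generator entry):
constant limit = 100
sequence sieve = repeat(true, floor((limit-1)/2))  -- sieve[i] represents 2*i+1
for i=1 to length(sieve) do
    if sieve[i] then
        integer p = 2*i+1
        for j=floor((p*p-1)/2) to length(sieve) by p do
            sieve[j] = false   -- cull p*p, p*(p+2), p*(p+4), ...
        end for
    end if
end for
sequence primes = {2}
for i=1 to length(sieve) do
    if sieve[i] then primes &= 2*i+1 end if
end for
?primes  -- {2,3,5,7,11,...,97}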

BTW, about 27.5 seconds to find the 100 millionth prime for odds-only isn't particularly fast: even odds-only, Julia can cover this range about ten times faster using an equivalent page-segmented technique. The bit operations needed to pack the sieve into bytes, making maximal use of the CPU L1 cache, look to be available in Phix, and that reduces memory use by a further factor of eight. That is without the more complex technique of modulo bit planes for efficient wheel factorization as discussed above, for a further reduction of about four times, and then multi-processing can be added... --GordonBGood (talk) 05:25, 13 November 2018 (UTC)

Sorry for not getting back sooner, just before you posted my internet went down, and stayed that way for nearly a pigging month.
I took another look, and concede that I am probably missing something, but still not quite convinced. Anyway, looking forward to seeing your first phix contribution :-)
FYI, my mention of subscripting is biased by my internal knowledge of the workings of Phix. While it is heavily optimised, and rarely any cause for concern, Phix is reference-counted, and checking whether we need to do any reference counting can put it at a disadvantage when pitted against C-based subscripting; also, although I inline whenever possible, sometimes subscripting involves a call/return, which C-based languages never need. Note too that I did not mean subscripting the sieve, but was referring to the subscripting of the wheel itself.
BTW, running the latest Julia code (the Euler problem 10 one), after correcting a typo, I found it took about the same time as Phix, whereas the older Julia code (plus one extra line and a few global directives so it would actually run) took over seven minutes to display the 100,000,000th prime - 15 times slower than Phix; it looks like the standard "using Primes" could be improved somewhat. However, the one billion bench from the Page Segmented Algorithm was indeed about 18 times faster. Maybe one day I'll find the time to give that method a spin, unless you beat me to it. --Pete Lomax (talk) 15:50, 16 December 2018 (UTC)
No problem about not getting back; I posted this to you to maybe help your understanding of the Sieve.
As to my use of your Phix language, it probably won't happen anytime soon, as I have a toolbox full of languages and, although Phix looks quite simple, I am unsure about working around its limited data types. One thing I would really miss is a first-class function/procedure type, as that would allow one to defer execution, define lazy types, and not rely so heavily on memory-consuming large sequences. I see this, like just about anything else, can be worked around, in this case by using call ids, but it certainly isn't as beautiful as in languages designed to be more functional. I see that just about anything is possible in Phix, but I also wonder what kind of performance we would get if we did all of that.
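The call-id workaround is at least straightforward - a trivial made-up example, not that it reads as nicely as a genuine first-class function:
function double(atom x) return 2*x end function
constant r_double = routine_id("double")
?call_func(r_double, {21})   -- 42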
From your documentation it isn't clear to me what the internal memory structure of sequences is: a linked list, or a simple array in memory? I assume you lose the MSB of integers to be able to distinguish between atom/integer... At any rate, any limitations on the use of sequences could be worked around by direct memory manipulation, again at the cost of beautiful code.
a simple array, yes, and yes - you could even resort to inline assembly. --Pete Lomax (talk) 15:33, 17 December 2018 (UTC)
On the other hand, although Julia isn't at all my favorite language and I don't really like dynamic typing, it can be quite fast for doing a lot of things, but even at that it isn't usually my language of choice for any job at hand.
The "older Julia code" for the Extensible Generator isn't my code, and I just added a couple of alternate versions as I perceived its deficiencies. However, although the prime probability test that the Primes library uses is less complex and faster than testing for individual primes using trial division as some libraries do, it isn't really meant to be used as a generator of a sequence of primes. I think that you'll find that my alternate 1 is almost as slow although it better satisfies the requirements of being a true generator (in only one line of code).
As to speeding up your Phix code, you are running into some limitations of the language here in implementing page segmentation: it seems you only have an integer type ("almost" 64-bit on 64-bit machines, else using the 64-bit "atom" as an integer) and can only shift right/left by dividing/multiplying by powers of two, which is slow unless your compiler optimizes this into machine shift opcodes. I was curious about this, so actually wrote a quick test of divide speed, and as expected it's quite terrible, at a couple of hundred machine cycles per divide on my machine. I'm afraid that your language will never be able to even match JavaScript languages (which also use a float as an integer, but have proper shift operations) in running a sieve, which they can do only about three times slower than Julia (say with Fable, a really nice F# language implementation).
If you didn't have this limitation, you could implement "bit packing" and thereby achieve better cache locality in page segmentation, as one CPU L2 cache size of, say, 16 kilobytes could then contain 131,072 prime representations instead of the 2,048 you get using 64-bit integers. The Julia code uses a further optimization that requires addressing memory efficiently by bit-packed bytes to get an additional speed-up of a factor of two, which wouldn't appear to be possible in Phix (or at least not efficiently). Good luck on this!--GordonBGood (talk) 05:15, 17 December 2018 (UTC)
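Written as ordinary (non-inlined) routines over existing builtins, byte-level packing would look something like the hypothetical sketch below - 16KB held as bytes gives 131,072 one-bit flags, versus 2,048 flags stored one per 64-bit integer - though every get/set here is still an ordinary function call:
sequence bits = repeat(0, 16*1024)   -- 16KB of byte-sized slots = 131,072 flags

procedure set_bit(integer n)         -- n is 0-based
    integer bi = floor(n/8)+1
    bits[bi] = or_bits(bits[bi], power(2, remainder(n,8)))
end procedure

function test_bit(integer n)
    return and_bits(bits[floor(n/8)+1], power(2, remainder(n,8))) != 0
end function

set_bit(131071)
?test_bit(131071)   -- 1 (true)
?test_bit(131070)   -- 0 (false)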
There are in fact some attempts to use shifts for divide/multiply when that can be determined at compile time; I will see if I can do more. I totally agree there are some performance issues in bit-fiddling (just like subscripts), and the language would no doubt benefit from a new set of builtins that do such things much faster on blocks of pre-allocated memory. Then again, my personal definition of "performance" is kinda much more biased towards a fast edit/run cycle. Translation to JavaScript is in fact pretty much number 1 on my wish list for the language, btw. --Pete Lomax (talk) 15:33, 17 December 2018 (UTC)
It seems that builtins are function calls and are not inlined. Thus, although adding `set_bit`, `clear_bit`, `test_bit`, `shl_bits`, `shr_bits`, and `shrzf_bits` would make "bit-twiddling" easier to do, it would still take two function calls per cull to implement bit-packed culling, which would be about 120 machine cycles and too slow. You need to be able to specify simple builtins as "inline" to be fast enough. Even if the compiler recognizes when multiply/divide factors are powers of two (actually quite easy for literal constants: after checking for non-zero, `and_bits(x, x-1) == 0`), bitwise operations made via function calls are too slow unless inlined.
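The power-of-two test itself is cheap enough; for illustration:
function is_power_of_2(integer x)
    return x>0 and and_bits(x, x-1)=0
end function
?is_power_of_2(1024)  -- 1 (true)
?is_power_of_2(1000)  -- 0 (false)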
Agreed. Many/most builtins are indeed just lazily implemented as hll code, whereas some I try to inline, like and_bits(), remainder(), compare(), length(), etc. Oh yeah, there's another crafty trick I missed.
As an aside, `without type_check` doesn't seem to work, or I don't understand what it is supposed to do: it still flags `integer i = 1/2` as an error with the check disabled. Also, how do you turn off integer overflow checking? I can't seem to enter `-#3FFFFFFF - 1` as an integer, or rather it is accepted as an integer but prints as a float, and not an exact one at that, which I understood from the documentation should be possible; and, due to integer overflow checking, subtracting another one from it triggers the type check error. Flaky... And overflow checks (as well as type checks) take machine cycles.
Ah, `without type_check` only disables user-defined type routines, not internal checking - will clarify that in the manual. I wasn't able to reproduce any issue with -#3FFFFFFF - 1. I'm a fail-fast guy at heart, so I don't really understand/agree with the word Flaky, and there is no option, or plan for one, to make your program harder to maintain just so it can go a little faster (sorry).
I can understand that compilation speed is a concern and in fact is already slow on my low end machine - about 30 seconds for a `-c` pass of a basic test program, where my machine is over twice as slow as yours given the speed to run the extensible generator as a benchmark. But some improvements such as making designated builtin functions inline and (possibly) adding the slightly more sophisticated power of two multiply/divide shouldn't take much time.
It has only just occurred to me: 30s is indeed quite a long time & I'll bet what's happening is your a.v. is kicking in and giving the resultant exe a good once-over. After a few goes it should get bored with that. --Pete Lomax (talk) 16:47, 29 December 2018 (UTC)
Yes, I have experienced AV slowing expected performance before on occasion, and it seems you are right in this instance: Windows Defender (on Windows 10) is doing some kind of extensive scan triggered every time I start a compile cycle (no matter how trivial) and can't really get started until that scan is finished, which is probably the majority of the 30 seconds. It wasn't expected that AV would be triggered on every invocation.--GordonBGood (talk) 06:45, 31 December 2018 (UTC)
"performance issues on subscripting" I suppose means indexing of sequences? Not inlined? bounds checking is slow? Bounds checking can't be optionally turned off?
Inlined and disabled when known to be in bounds, but no option to forcibly turn it off.
As to generating JavaScript output, if it were me at the stage I understand you to be at, I wouldn't bother with JavaScript and would skip directly to WebAssembly output, as you don't require much that WebAssembly doesn't already deliver given you already use reference-counted memory allocation, and that includes multi-threading (or at least it is coming very soon, and is already available in Chrome). Performance is not particularly fast (no faster than JavaScript in many cases) and somewhat inconsistent across browsers (although so too is JavaScript), but that will surely get fixed by the time you have completed this.
I ran away screaming (sounds familiar, tee hee) from WebAssembly last time; maybe I'll give it another look.
In short, Phix seems to me to be too much of a "work in progress" and as it currently exists not worth my while in pursuing further adaptation of my algorithms to it. However, as before, good luck to you, and I may look at it again if some of these issues are fixed.--GordonBGood (talk) 05:18, 18 December 2018 (UTC)
No worries. There never was and probably never will be a programming language with no downsides, some we can live with, some we cannot. Many thanks for the input, will try to bear it all in mind as I move onward. --Pete Lomax (talk) 00:53, 20 December 2018 (UTC)
Sorry, I was mistaken: there is no bug with `-0x40000000` or `-0x3FFFFFFF - 1`; rather I got caught out by the difference between `?`, a shorthand for printing to the screen which adds a newline automatically, and `print(1,...)`, which does not. I took the next value printed immediately beside it, which was a float, to be part of the integer printed. Your documentation on `?` does mention this difference, but I hadn't read it.
As to language safety, blocking "subscripts" when they are out of range of a "sequence" is commonly done as part of language safety, but blocking integer overflows with no way to turn it off is not common, as an overflow doesn't cause a crash or a security hole, only potentially an error in programming logic. There are quite a few languages that do detect it, but most of the ones I know have a way of turning it off. Anyway, not a big issue, as there are ways of compensating when desired, but without inlined bit manipulations they may be expensive in performance.
As to language preferences, yes, lately I have come to prefer functional languages, and Phix in no way pretends to be functional - it is hard to imagine a more imperative language. I have been tempted to try Elixir (which might be considered a language of similar capabilities to Phix, but functional) for that reason, though again I don't expect it to be a highly performant language - just interesting...--GordonBGood (talk) 06:39, 20 December 2018 (UTC)

Modular_arithmetic

Hi Pete, I found f(10) to be 1E100, your program outputs 1e101, are you sure yours is correct? Dedalus (talk) 15:04, 15 February 2019 (UTC)

I just ran it again and the output was 1e100, I must have pasted it in wrong. Well spotted, thanks. --Pete Lomax (talk) 17:20, 15 February 2019 (UTC)

minor typo

Hi Pete. In the Phix code for QR decomposition I spotted a minor typo when translating to VBA: when you copy a into R, i is taken from 1 to n instead of m. For a non-square matrix the Phix solution won't work. Dedalus (talk) 11:03, 15 March 2019 (UTC)

Thanks, but m>=n, so changing it would only ever result in (m-n) null j-loops. I can see why you needed it for VBA though. --Pete Lomax (talk) 12:22, 15 March 2019 (UTC)
Oh, be warned: the distribution copy contains this, which I've now copied onto the RC page, and the extended loops of your VBA solution may be subject to a similar issue:
--
-- Programming note: The code of this main loop was not as easily
-- written as the first glance might suggest. Explicitly setting 
-- to 0 any a[i,j] [etc] that should be 0 but have inadvertently 
-- gotten set to +/-1e-15 or thereabouts may be advisable. The
-- commented-out code was retrieved from a backup and should be
-- treated as an example and not be trusted (iirc, it made no
-- difference to the test cases used, so I deleted it, and then
-- had second thoughts a few days later).
--
    for j=1 to min(m-1,n) do
        u = mat_col(a,j)                -- extract column j of a
        u[j] -= mat_norm(u)             -- subtract ||u|| from its j-th element
        v = sq_div(u,mat_norm(u))       -- normalise to get the Householder vector
        q = sq_sub(I,sq_mul(2,matrix_mul(vtranspose(v),{v}))) -- q := I - 2*v*v'
        a = matrix_mul(q,a)             -- apply the reflection to a
--      for row=j+1 to length(a) do
--          a[row][j] = 0
--      end for
        Q = matrix_mul(Q,q)             -- accumulate the orthogonal factor
    end for


Long literals, with continuations

Your Phix programming solution was   exactly   the minimalist type of entry that I was aiming for in regards to that task's requirements.     -- Gerard Schildberger (talk) 19:10, 24 March 2020 (UTC)

Heh, originally I had it doing this:
fmt = """Last revision: %s (%s)...
include builtins\timedate.e
timedate td = parse_date_string(last_updated,{"Mmmm dth, yyyy"})
atom d = timedate_diff(td,date(),DT_DAY)
string age = iff(d=0?"today":elapsed(d)&" ago")

printf(1,fmt,{last_updated,age,length(elements),elements[$]})
before deciding that stuff just gets in the way. --Pete Lomax (talk) 21:52, 24 March 2020 (UTC)

Weather routing task - Windows file line separators

Thanks for fixing the Go example so it can now read the data file on Windows using the usual line separator of "\r\n" for that platform. I've added a similar fix to the Wren example.

I tend to forget about poor old Windows as these days I'm 'Linux only'. --PureFox (talk) 19:19, 19 May 2020 (UTC)


Isqrt (integer square root) of X

In your Phix entry, you have:

  mpz q = mpz_init(1),
      r = mpz_init(0),
      t = mpz_init(0),
      z = mpz_init_set(x)

where   t   is initialized to zero.

Is that a necessary thing to do?

t   doesn't need to be initialized as it's set/defined later in the program.

You must call mpz_init(), but quite right it makes no difference whether you explicitly specify 0 or not (the default anyway). --Pete Lomax (talk)

Also, is the

 !=0 then ?9/0 

part of some semantic sugar that is needed to check for division by zero?

Bit of an idiosyncrasy, testing that the remainder is zero. Have replaced ?9/0 with crash("uh") which is basically the same thing. --Pete Lomax (talk)
And now replaced crash() with assert(). --Pete Lomax (talk) 19:41, 16 July 2020 (UTC)

I looked at the entry for   Integer_roots#Phix   and it appears that that example (program) requires   n   to be an integer and is apparently not an "mpz" thingy.

Ah, n is the nth root, you could make   a   an mpz and pass it directly to mpz_root(). I take it my comments on that page arrived after you made this post. --Pete Lomax (talk) 19:20, 16 July 2020 (UTC)

I am not familiar at all with the Phix language, so I am most assuredly missing something.

If you have a windows box you should give it a quick spin, though the linux version is a bit flakey (I think it needs kernel 5.3, and was broken on 5.0..5.2 at least) and the web version is still at the planning phase... --Pete Lomax (talk)


The function mentioned in your link may be simpler and shorter, but the Rosetta Code task was to implement an integer square root function as per the pseudocode.   I wouldn't object to you entering an alternative Phix version in the discussion page for this task.     -- Gerard Schildberger (talk) 06:59, 16 July 2020 (UTC)

My intention was just to dissuade anyone from using isqrt() when they should really be using mpz_root(). I actually use rc and/or my local demo/rosetta directory myself quite often, to remind me how to do things. --Pete Lomax (talk)

By the way, the (draft) task has been changed, only the odd powers of seven are to be shown.   I had inadvertently left out the word   odd.     -- Gerard Schildberger (talk) 06:59, 16 July 2020 (UTC)

Page updated, thanks --Pete Lomax (talk) 19:16, 16 July 2020 (UTC)

Issue in Pancake Numbers

Hi, your proposed solution in Phix for Pancake numbers appears to be wrong. It lists p(19) as 21, whereas A058986 lists it as 22. Worse yet, it seems every single solution on this page is just a translation of your algorithm. I question this algorithm as a solution to begin with. It’s just a sequence of numbers that *happen* to agree with the first 19 known solutions. But there’s not much value here as an exercise for RC, as opposed to, say, an exhaustive search for the solutions.

Can you give your feedback on Talk:Pancake numbers? I’d be tempted to just purge the page of anything that isn’t an actual solution. Monarchdodra (talk) 13:01, 9 March 2021 (UTC)

Good catch, page + talk updated. --Pete Lomax (talk) 16:56, 9 March 2021 (UTC)

Smallest power of 6 whose decimal expansion contains n

Hello Pete,
I'm investigating the runtime problem. Running on TIO.RUN I see the same increase of a factor of ~10x after 100,000.
I'm checking the mean power at which the string was found, and the mean power for the last 1024 checks.
One digit more means about 3x the power (aka the length of the search string) to find the search string; runtime is about 3x length * 3x count ~> 10x.
I've got no idea how to speed it up. Horsth 11:21, 9 April 2021 (UTC)

//power_Limit=  25000 -> used by strings 243,194,474.  Generating time to power limit 0.859s
checking every 1024:
 Generating all numerical strings up to power limit 6260 takes 0.056s 
 used by strings 15,252,500 ~ 16Mb

    Search    mean  last
    string   power  mean      Delta time
                    power
         0    9.00    0.01     0.000
      1024   46.62   46.65     0.002
      2048   94.98  143.40     0.018
      3072  111.74  145.27     0.017
      4096  120.09  145.16     0.017
      5120  125.31  146.17     0.017
      6144  129.45  150.19     0.018
      7168  131.38  142.94     0.017
      8192  133.10  145.11     0.017
      9216  134.22  143.21     0.016
     10240  142.94  221.43     0.052
     11264  170.38  444.83     0.156
     12288  193.64  449.45     0.159
     13312  213.66  453.99     0.164
     14336  230.34  447.23     0.161
     15360  245.28  454.43     0.169
.....
     98304  421.65  466.58     0.170
     99328  422.18  473.67     0.172
    100352  426.05  801.41     0.656
    101376  436.00 1410.97     1.561
    102400  446.06 1441.35     1.671
    103424  455.71 1421.37     1.607
    104448  465.18 1421.69     1.600
...
    117760  573.58 1397.76     1.458
    118784  580.85 1416.78     1.511
    119808  588.16 1436.94     1.566
mean power   589.35

real	0m44,996s
Not sure what to make of "mean power". All I did to test mine was save the last thing found, then use gmp to independently generate that power of 6 and prove that the final digit string did indeed occur somewhere in it. I transpiled it to JavaScript (I am actively working on that technology but have not yet released it) and that took 13mins 10s for the 10,000,000 run, and 35s for a 1,000,000 run. AH: have you just fixed this now? --Pete Lomax (talk) 16:35, 9 April 2021 (UTC)
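That final check might look something like the following untested sketch (the power and digit string here are placeholders, not values from the actual run):
include builtins\mpfr.e
constant pwr = 1000,          -- placeholder: the last power of 6 found
         digits = "9999999"   -- placeholder: the digit string searched for
mpz p6 = mpz_init()
mpz_ui_pow_ui(p6, 6, pwr)     -- p6 := 6^pwr, regenerated independently via gmp
string s = mpz_get_str(p6)
printf(1,"%s %s in 6^%d\n", {digits, iff(match(digits,s)?"found":"not found"), pwr})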
Hello Pete,
now I understand how you solved the task. Your Phix version takes 1m14s on my computer.
I translated the principle into Pascal with a small optimization to jump over the first already-found numbers. Especially for many digits it brings a speedup: 11s down to 9s for 10,000,000, aka 7 decimal digits.

congrats

Congratulations on singlehandedly occupying the top slot in Rank Languages by Popularity - I raise a metaphorical glass to you. :-)

Thanks!

Also thank you for pointing out my typo in Self Hosting Compiler #Quackery.

I noticed that in Josephus problem, you listed Quackery amongst the unclassified. FYI, I would classify it as "sliding queue". GordonCharlton (talk) 07:39, 6 June 2021 (UTC)

Have reclassified it, thanks. --Pete Lomax (talk) 12:47, 9 June 2021 (UTC)

example for LiveCode

Hello, Pete. I’m Dick.

While making my first wiki contribution, I mistakenly deleted the Lua entry. I apologize for causing you work, and thank you for doing it.

I meant to delete the previous code for LiveCode because it does not address the task at hand. The old code fails to apply pattern matching beyond just exactly matching the file’s extension.

Talk:Walk a directory/Recursively contains a restatement of the task:

Walk a given directory tree, calling a function for every filename which matches a given wildcard, UNIX glob, or regex pattern (whichever is easiest for the given language).

The old code therefore adds noise to Rosetta Code, reducing the signal-to-noise ratio, degrading rather than enhancing the value of the site.

But my concern seems to me to be way below the thoughts of the contributors to the Talk. It hasn’t been updated since 2017. Do you still advise me to seek counsel there about this edit?

No problem, everyone has to start somewhere. First, you can safely assume that if you reply on your talk page then whoever wrote a message there has that page on their watchlist and will get an email notification, just as you will when I post this reply (unless you unchecked "Watch this page" when you made this post). Also, all updates appear on Special:RecentChanges so I, and any moderators, would see any change to the talk page alongside changes to the task page, and of course that is how I spotted yours. It is not a case of "getting permission" or usually even agreement, but a statement of "why I did it" (brief being better). Please pay attention to the "Summary:" when you edit. You left it as {{header|LiveCode}} which is not any kind of crime but often not any kind of help either (that and five more words being best). When such an explanation will not fit in the summary and does not belong being left permanently on the task page, it belongs on the talk page, even unread. Not every change needs all that, but when I delete someone else's code it is only being civil to explain myself. It is quite helpful to show the output, so when you do update Walk a directory/Recursively I'll expect to see that. I'll add that the above attempt does not actually explain to me what you fixed; different output probably would. Lastly you need to learn to sign your comments by adding --~~~~. Any further questions? --Pete Lomax (talk) 14:18, 17 November 2021 (UTC)

thanks for the advice, Pete. My every attempt to edit the Summary failed to produce a lasting result. What's the trick? --~~~~

The summaries only ever appear on the History/Recent Changes reports, and yours are fine for Walk a directory/Recursively. Also, I used <nowiki> to show you what --~~~~ should look like when you type it, but you should only key the 2+4 characters shown and omit the nowiki tags. --Pete Lomax (talk) 17:57, 30 November 2021 (UTC)

Thanks again, Pete. Is there a basic training document you recommend? --Dick Kriesel 23:23, 30 November 2021 (UTC)

Update to D for Bézier intersections

You might like to know there was a bug in the translated-from D code that I kind of knew about but it actually shows up if you merely divide the tol by 10. It previously appeared in the Modula-2 code (which is similar to the D) and I used the same crude remedy. I figure crude remedies probably are best, anyway, when you are working in floating point. --Chemoelectric (talk) 12:33, 7 June 2023 (UTC)

If one really, really, really wanted to isolate the intersections BTW, go about it this way: implicitize one curve in exact arithmetic. Plug in the other in exact arithmetic. Break the quartic into square-free polynomials, in exact arithmetic. Now you can isolate the roots into intervals, exactly, and also know their multiplicities. :) --Chemoelectric (talk) 12:38, 7 June 2023 (UTC)

I wrote a Phix

I wrote a second implementation of Bézier_curves/Intersections in Phix. It uses "unique" in a way I think will function correctly enough but about which it will do no harm if you come up with a more "fuzzy" alternative. :) (I’m thinking of writing a Scheme implementation that does essentially what the "unique" does but does it by making endpoint checks a separate matter. [If two floating point numbers are equal, for instance because copied from the same source, they are equal, no matter what talking heads say!] Also adding an absolute tolerance besides the flatness tolerance. And then moving on to Bézier clipping, because I’ve never completed an implementation of intersections using that. Or implementing my algorithm but for cubics, as a side project.) --Chemoelectric (talk) 11:38, 12 June 2023 (UTC)

Yes, unique() only eliminates binary-exact [nested/all] floats, so I'd just write any such "fuzzy" filtering like this:
sequence t_pairs = unique(find_intersections(p, q, flatness_tolerance))
for i=length(t_pairs) to 2 by -1 do
  if fuzzy(t_pairs[i],t_pairs[i-1]) then t_pairs[i..i] = {} end if
end for
(Obviously the "by -1" ensures that deleting t_pairs[i] does not mess up any subsequent indexing.)
Note there may also be some cases where the natural sort order (on which unique is based) is not good enough, such as {{0.51,0.71}, {0.515,800}, {0.52,0.71}} where the first and last are "fuzzy-same" but the middle one clearly ain't. Rounding things before throwing them at unique() might perhaps prove better, or a full inner "j=i-1 to 1 by -1" might be needed, or maybe even replace unique() with a couple of sorts, first eliminating t_pairs sorted by X then doing the same sorted by Y. --Petelomax (talk) 16:41, 12 June 2023 (UTC)
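For the record, the rounding variant might be as simple as this (hypothetical; rounding to 6 decimal places is an arbitrary choice and would need picking to suit the data):
sequence t_pairs = find_intersections(p, q, flatness_tolerance)
for i=1 to length(t_pairs) do
    for j=1 to length(t_pairs[i]) do
        t_pairs[i][j] = round(t_pairs[i][j], 1e6)  -- keep 6 decimal places
    end for
end for
t_pairs = unique(t_pairs)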
Since you have clearly now actually used Phix, there is a semi-related short survey (as announced to me on this thread) I thought I might invite you to fill out if interested, no worries if not. --Petelomax (talk) 03:37, 15 June 2023 (UTC)

Legacy view

I've just switched back to the 14-year-old legacy view - what a revelation! The final trigger point was that stupid (really stupid) thing it does with the sidebar, whereby as you scroll (mousewheel) down and hit the bottom, it scrolls the right panel a smidge and jumps the side panel all the way back up to match it - very annoying. With the old style it all works so much more smoothly. Anyway, happy new year! --Petelomax (talk) 23:56, 1 January 2024 (UTC)