Talk:Balanced ternary: Difference between revisions

: Floating point involves too many things. Seemingly simple stuff like "0.1" has no exact finite representation in either base 2 (IEEE) or base 3, thus the meaning of "436.436" in a task requirement is dubious to begin with: Rdm ''will'' come along and bury me with questions, so I'd rather not do that.
: We could dodge this issue by limiting digit length after decimal point, which is essentially treating them as rationals, then we'd have different problems: either we don't cancel out common factors in numerator and denominator, which is obviously unsatisfactory; or we do cancel them out, which requires division and modulo, which would be difficult and long-winded. I don't want to make the task require more effort than necessary. In any event, if the task proves to be so popular that people flock to it like hot cakes (I doubt it), one could always make another task to extend it to floating point numbers. --[[User:Ledrug|Ledrug]] 16:42, 1 November 2011 (UTC)
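The point about "0.1" can be checked mechanically. As a sketch (using Python's <code>fractions</code> module; the helper name <code>finite_in_base</code> is made up for illustration): a reduced fraction has a finite expansion in a base exactly when every prime factor of its denominator divides the base, which is why 1/10 (denominator 2·5) terminates in neither base 2 nor base 3.

```python
import math
from fractions import Fraction

def finite_in_base(q: Fraction, base: int) -> bool:
    # Strip from the reduced denominator every factor it shares with
    # the base; the expansion terminates iff nothing is left over.
    d = q.denominator
    while (g := math.gcd(d, base)) > 1:
        d //= g
    return d == 1

print(finite_in_base(Fraction(1, 10), 2))  # False: the factor 5 survives
print(finite_in_base(Fraction(1, 10), 3))  # False in base 3 as well
print(finite_in_base(Fraction(3, 8), 2))   # True: 8 = 2^3
```

Note this also shows why the rational route drags in division and modulo: reducing the fraction in the first place is exactly a gcd computation.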
::Floating point numbers are essentially a fixed width approximation of rational numbers. So why would a person want arbitrary precision floating point? An answer might be to represent irrational function results, but you have to limit the precision of those results or you run out of memory before you even get started. The simplest approach for representing irrational results would probably be to use a fixed width representation for irrational numbers. But if your problem domain demands something else, then understanding that problem domain can really help in building a reasonable implementation. --[[User:Rdm|Rdm]] 12:31, 2 November 2011 (UTC)
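The "fixed width approximation of rational numbers" view is easy to make concrete in Python, since <code>Fraction</code> can recover the exact rational an IEEE double stores:

```python
from fractions import Fraction

# The IEEE double nearest to 0.1, exposed as an exact rational.
# Its denominator is a power of two (2**55), so it can only
# approximate 1/10, never equal it.
exact = Fraction(0.1)
print(exact)                      # 3602879701896397/36028797018963968
print(exact == Fraction(1, 10))  # False
print(float(exact - Fraction(1, 10)))  # tiny but nonzero error
```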
 
Ironically, this algorithm is basically the same as its binary equivalent (which also has yet to be added to Rosetta Code). Hence, for 2-, 4-, 8- and 16-byte precision, it is generally just ''built-in'' on everybody's computer.
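For the integer part at least, the parallel with binary is direct: it is the usual repeated div/mod base conversion, with one extra twist for the digit −1. A minimal sketch (digit characters <code>+</code>, <code>0</code>, <code>-</code> as the task uses them):

```python
def to_balanced_ternary(n: int) -> str:
    """Convert an integer to balanced ternary, digits {+, 0, -}.
    Same shape as div/mod conversion to binary, except a remainder
    of 2 is rewritten as digit -1 with a carry into the next place."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:          # remainder 2 -> digit -1, carry 1
            r = -1
            n += 1
        digits.append("+0-"[1 - r])  # 1 -> '+', 0 -> '0', -1 -> '-'
    return "".join(reversed(digits))

print(to_balanced_ternary(5))    # "+--"  (9 - 3 - 1)
print(to_balanced_ternary(-5))   # "-++"  (-9 + 3 + 1)
print(to_balanced_ternary(436))  # "+--+0++"
```

Negative inputs fall out for free because Python's <code>divmod</code> floors toward negative infinity, keeping the remainder in {0, 1, 2}.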