Talk:Generalised floating point addition: Difference between revisions

(→‎Clarify?: The BCD *requirement* needs to change to mere possible strategy)
 
:: FWIW, a fair number of languages have built-in arbitrary precision integers; they can do generalised floating point math without needing to fuss around with BCD. ''Requiring'' the use of non-idiomatic techniques when a high-quality idiomatic technique lies close by is a little odd. I'll look into rewriting the task a bit to make this possible. (BCD ''can'' be a possible implementation strategy, of course; I've no problem with that.) –[[User:Dkf|Donal Fellows]] ([[User talk:Dkf|talk]]) 18:41, 23 December 2013 (UTC)
 
::: After some thought I think I'd like to draw several distinctions here. One distinction has to do with the number base (BCD vs Binary, for example). Another distinction has to do with precision (53 bits of mantissa? arbitrary precision integers?). Another distinction has to do with the represented range of numbers (integers are different from rational numbers). And yet another distinction has to do with "floating point".
 
::: Floating point numbers, as I understand them, give us numbers of the form x*y^z, where y is typically a constant and where x and z typically have constrained ranges. So one way of representing a floating point number is as the pair (x,z) along with a specification of the remaining constraints. One might imagine this specification as part of an external standard, or as a given for users of particular computing hardware or of a specific programming language, library or environment; or each floating point number might be an object carrying a representation of its own constraints. But, of course, there will always be limits. If you have a floating point number that takes 16 gigabytes to represent, you will not be able to have as many of those as you would a floating point number that takes 4 bytes to represent.
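::: The (x,z) pair idea above can be sketched in a language with built-in arbitrary-precision integers, such as Python. This is only an illustrative sketch (the class name and the choice of base 10 are mine, not part of any standard): the mantissa x is an unbounded integer, the exponent z is an integer, and the value represented is x * 10**z.

```python
from dataclasses import dataclass

@dataclass
class BigFloat:
    """A number of the form x * 10**z, with x an arbitrary-precision int."""
    x: int  # mantissa (unbounded Python integer)
    z: int  # exponent

    def __add__(self, other):
        # Align exponents: rescale the mantissa of the operand with the
        # larger exponent down to the smaller exponent, then add mantissas.
        hi, lo = (self, other) if self.z >= other.z else (other, self)
        return BigFloat(hi.x * 10 ** (hi.z - lo.z) + lo.x, lo.z)

    def __mul__(self, other):
        # Multiply mantissas, add exponents.
        return BigFloat(self.x * other.x, self.z + other.z)

# 1.5 + 0.25: (15, -1) + (25, -2) aligns to (150, -2) + (25, -2)
a = BigFloat(15, -1)
b = BigFloat(25, -2)
print(a + b)  # BigFloat(x=175, z=-2), i.e. 1.75
```

::: Since Python integers are unbounded, addition and multiplication here are exact; the constrained-range tradeoffs discussed above only appear once you decide to round the mantissa back to some fixed precision.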
 
::: Anyway, to meaningfully choose tradeoffs, it's usually a good idea to express the purpose of the representation. As I understand it, floating point numbers are based on "scientific notation", where numbers are typically the result of measurements (and thus limited in precision, while often needing to span a wide range of magnitudes). Another aspect of floating point numbers is that they can efficiently represent approximate results of transcendental functions (sine, cosine, logarithm, ...). For many practical purposes approximations are more than adequate - excessive precision can be thought of as distracting attention from the important issues.
 
::: Hopefully some of these musings will help you in your rewrite? And, thank you. --[[User:Rdm|Rdm]] ([[User talk:Rdm|talk]]) 19:17, 23 December 2013 (UTC)