Talk:Kahan summation: Difference between revisions

==Task - The problem is badly formulated==
 
Rounding errors in sums do not occur because we use a certain number of digits after the decimal point. The source of such errors is the '''floating point''' representation of decimal fractions. As you can see, when a '''fixed point''' representation is used, there are no rounding errors. Of course, the calculated sum a / 3 + a / 3 + a / 3 is less than a, but that error is caused by the rounding at division, not by the summation.
<lang C>
/*
 * RosettaCode: Kahan summation, C99 (MS Visual Studio 2010).
 *
 * C has no fixed-point decimal type. Nevertheless, to obtain
 * "six-digit precision" we can use ordinary fractions with a fixed
 * denominator of 10^6. A 64-bit integer type is needed: 10000
 * scaled by 10^6 overflows a 32-bit long.
 */
#include <stdio.h>

#define DECIMAL_FRACTION long long int
#define DENOMINATOR 1000000LL
/* FRACTIONAL must already be scaled to six digits, e.g. 3.14159 -> 141590 */
#define DECIMAL_TO_FIXED(WHOLE,FRACTIONAL) ((WHOLE)*DENOMINATOR+(FRACTIONAL))
#define FIXED_TO_WHOLE(VALUE) ((VALUE) / DENOMINATOR)
#define FIXED_TO_FRACT(VALUE) ((VALUE) % DENOMINATOR)

int main(void)
{
    DECIMAL_FRACTION a = DECIMAL_TO_FIXED(10000, 0);      /* 10000.000000 */
    DECIMAL_FRACTION b = DECIMAL_TO_FIXED(3, 141590);     /*     3.141590 */
    DECIMAL_FRACTION c = DECIMAL_TO_FIXED(2, 718280);     /*     2.718280 */

    DECIMAL_FRACTION leftSum;
    DECIMAL_FRACTION rightSum;
    DECIMAL_FRACTION kahanSum;

    leftSum = a;
    leftSum += b;
    leftSum += c;

    rightSum = c;
    rightSum += b;
    rightSum += a;

    {
        /*
         * Actually we sum only a+b+c with an unrolled implementation
         * of the Kahan algorithm. KISS.
         */
        DECIMAL_FRACTION correction = 0;
        DECIMAL_FRACTION inputMinusCorrection = 0;
        DECIMAL_FRACTION updatedSum = 0;

        kahanSum = a;

        inputMinusCorrection = b - correction;
        updatedSum = kahanSum + inputMinusCorrection;
        correction = updatedSum - kahanSum;
        correction -= inputMinusCorrection;
        kahanSum = updatedSum;

        inputMinusCorrection = c - correction;
        updatedSum = kahanSum + inputMinusCorrection;
        correction = updatedSum - kahanSum;
        correction -= inputMinusCorrection;
        kahanSum = updatedSum;
    }

/* Zero-pad the fractional part, otherwise e.g. 3.014159 prints as 3.14159 */
#define PRINT(S,V) printf("%s = %lld.%06lld\n", S, FIXED_TO_WHOLE(V), FIXED_TO_FRACT(V))

    PRINT("a", a);
    PRINT("b", b);
    PRINT("c", c);
    putchar('\n');

    PRINT(" (a+b)+c", leftSum);
    PRINT(" a+(b+c)", rightSum);
    PRINT("Kahan sum", kahanSum);

    if (leftSum == kahanSum && rightSum == kahanSum)
        puts("\nC can compute on fixed point numbers without round-off errors");

    getchar();
    return 0;
}
</lang>
{{Out}}
<pre>
a = 10000.000000
b = 3.141590
c = 2.718280

 (a+b)+c = 10005.859870
 a+(b+c) = 10005.859870
Kahan sum = 10005.859870

C can compute on fixed point numbers without round-off errors
</pre>
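The two claims made above, exact fixed-point addition and the error introduced by dividing a by 3, can also be checked in a few lines of Python (a sketch using plain integers scaled by 10^6 to emulate the six-digit fixed-point representation; the variable names are mine):

<lang python># Fixed-point emulation: every value is an integer count of millionths.
DENOMINATOR = 10 ** 6

a = 10000 * DENOMINATOR            # 10000.000000
b = 3141590                        #     3.141590
c = 2718280                        #     2.718280

# Integer addition is exact, so the grouping of the sum does not matter.
assert (a + b) + c == a + (b + c)

# Dividing by 3 truncates, so summing three thirds falls short of a.
third = a // 3                     # 3333.333333
assert third + third + third < a</lang>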
 
==Task==
The idea of showing Kahan summation on Rosetta Code is good, but the task is not good yet. I suggest removing the requirement of using a Decimal module and making it optional, since many languages can use normal floating point values. I also suggest increasing the number of input values and the number of their digits, to better show why Kahan summation is useful.
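The point about using more input values can be seen with ordinary doubles (a sketch; the helper name <code>kahan_sum</code> is mine):

<lang python>def kahan_sum(values):
    """Kahan compensated summation over an iterable of floats."""
    total = 0.0
    compensation = 0.0
    for value in values:
        y = value - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total

# Ten thousand copies of 0.1: the naive running sum drifts away from
# 1000.0, while the compensated sum stays essentially exact.
data = [0.1] * 10000
naive = sum(data)
compensated = kahan_sum(data)
print(naive, compensated)</lang>

On a typical IEEE-754 double implementation the naive sum comes out slightly above 1000, while the compensated sum is accurate to the last bit or two.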
 
::::OK, indeed the options() call just restricts the number of digits being displayed, not the actual number used in calculations. In that respect it is in the same boat as PHP. AFAIK there is a package (not a base component) that can deal with arbitrary precision (http://cran.r-project.org/web/packages/Rmpfr/), but that is not what this particular task is aiming to show. I have redone the operations and results on a Windows 7 64-bit machine from a friend, will check again on my Ubuntu 64-bit box to corroborate the results. --[[User:Jmcastagnetto|jmcastagnetto]] ([[User talk:Jmcastagnetto|talk]]) 02:11, 17 December 2014 (UTC)
 
:For a clear example of the differences between summation methods, try adding the first billion terms of the harmonic series 1/i. The result also differs significantly between adding forward and backward. In 32-bit floats, the results are:
<pre> forward: 15.4036827
backward: 18.8079185
forward compensated: 21.3004818
backward compensated: 21.3004818</pre>
--[[User:Andy a|Andy a]] ([[User talk:Andy a|talk]]) 04:56, 21 April 2020 (UTC)
 
::::: The above topic (summation methods) is also mentioned in the &nbsp; ''talk/discussion'' &nbsp; page at &nbsp; [http://rosettacode.org/wiki/Talk:Sum_of_a_series#summing_backwards (Talk) &nbsp; Sum of a series, summing backwards]. &nbsp; &nbsp; &nbsp; &nbsp; -- [[User:Gerard Schildberger|Gerard Schildberger]] ([[User talk:Gerard Schildberger|talk]]) 09:24, 21 April 2020 (UTC)
 
==So far, in J and R==
::&mdash;[[User:Sonia|Sonia]] ([[User talk:Sonia|talk]]) 01:53, 21 December 2014 (UTC)
 
 
==Epsilon computation==
The "Epsilon computation around 1" sub task should be a task by itself.
<lang python>epsilon = 1.0
while 1.0 + epsilon != 1.0:
    epsilon = epsilon / 2.0</lang>
The "Epsilon computation around 1" sub task is a nice indicator about IEEE 754 floating point, used (or not) in the language implementation (specific compilers). Most of the time languages are not explicit about the precision required in floating point data type.
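The loop terminates with the largest power of two that disappears when added to 1.0; in IEEE-754 double precision that is 2<sup>-53</sup>, i.e. half of <code>sys.float_info.epsilon</code>, and the number of halvings is the loop count N used in the tables below. A sketch with a counter added (the name <code>count</code> is mine):

<lang python>import sys

epsilon = 1.0
count = 0
while 1.0 + epsilon != 1.0:
    epsilon = epsilon / 2.0
    count += 1

print(count, epsilon)          # 53 1.1102230246251565e-16
assert epsilon == 2.0 ** -53 == sys.float_info.epsilon / 2</lang>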
===IEEE 754===
IEEE 754 (1985 & 2008) floating point has 4 major data types:
<pre>
Precision Name     Bits Mantissa  Epsilon
------------------ ---- --------- -----------------
single precision    32  24 bits   5.9604E-08
double precision    64  53 bits   1.1102E-16
extended precision  80  64 bits   5.4210E-20
decimal128         128  34 digits 1.0000E-34
</pre>
C99, Fortran, and other languages have data types corresponding to the IEEE-754 formats:
<pre>
IEEE 754 name      GNU C       Intel C     Visual C Fortran 77 Fortran 95             VB .Net
------------------ ----------- ----------- -------- ---------- ---------------------- ----------
single precision   float       float       float    real*4     SELECTED_REAL_KIND(8)  Single
double precision   double      double      double   real*8     SELECTED_REAL_KIND(16) Double
extended precision long double long double n/a      real*10    SELECTED_REAL_KIND(20) n/a
decimal128         __float128  __Quad      n/a      real*16    SELECTED_REAL_KIND(34) n/a
</pre>
(Strictly speaking, GNU C's __float128, Intel's __Quad and Fortran's real*16 are binary128, i.e. quadruple precision, rather than decimal128, although both formats are 128 bits wide.) In Microsoft Visual C, long double is treated as double.
===Compilers===
The epsilon computation using different language implementations and different data types gives the
following results:
<pre>
Language Compiler Declaration N Epsilon IEEE-754
-------- -------- ------------ -- -------------------------- --------
C++ VC++ 6.0 float 53 1.110223E-16 Double
C++ VC++ 6.0 double 53 1.110223E-16 Double
C++ VC++ 6.0 long double 53 1.110223E-16 Double
Fortran Plato real*4 64 5.421011E-20 Extended
Fortran Plato real*8 64 5.421010862428E-20 Extended
Fortran Plato real*10 64 5.42101086242752217E-20 Extended
Fortran Plato real*16 64 5.42101086242752217E-20 Extended
Pascal Free real 64 5.42101086242752E-020 Extended
Pascal Free double 64 5.42101086242752E-020 Extended
Pascal Free extended 64 5.4210108624275222E-0020 Extended
Perl Strawberry 53 1.11022302462516e-016 Double
Python v335 53 1.1102230246251565e-16 Double
SmallBasic 1.2 94 1.0e-28 Fixed128*
VB6 VB 6.0 Single 24 5.960464E-08 Single
VB6 VB 6.0 Double 53 1.11022302462516E-16 Double
VBA VBA 7.1 Single 24 5.960464E-08 Single
VBA VBA 7.1 Double 53 1.11022302462516E-16 Double
VBScript Win 10 53 1.110223E-16 Double
VB.Net VS 2013 Single 53 1.110223E-16 Double
VB.Net VS 2013 Double 53 1.11022302462516E-16 Double
VB.Net VS 2013 Decimal 94 1.0e-28 Fixed128*
</pre>
N is the loop count.<br>
Fixed128: IEEE decimal128 floating point used to emulate fixed-point arithmetic.<br>
 
It is interesting to see that several compilers do not use the different IEEE-754 precisions to implement their different data types.
The trade-off between compiler simplicity and runtime efficiency is: why bother with several floating-point precisions, and all the implied cross-conversion routines, when the highest precision alone will do?<br>
===Kahan summation===
The Kahan summation algorithm task is a good idea, but the example numbers 10000.0, 3.14159 and 2.71828 are a bad choice, because no rounding error occurs when IEEE 754 double precision (64 bits) floating point is used by the language, and double precision is, unfortunately for this task, now the standard. Let us note that William Kahan is a father of the original IEEE 754 standard and of its revisions.<br>
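The point can be made concrete with the task's Decimal-based formulation (a sketch using Python's decimal module at six significant digits; at that precision the example numbers do expose the rounding error):

<lang python>from decimal import Decimal, getcontext

getcontext().prec = 6                      # six significant digits

a, b, c = Decimal('10000.0'), Decimal('3.14159'), Decimal('2.71828')

# Naive left-to-right sum: 10000.0 + 3.14159 rounds to 10003.1,
# so the low-order digits of b are lost before c is added.
naive = (a + b) + c                        # 10005.8

# Kahan compensated summation recovers the lost digits.
total = a
compensation = Decimal(0)
for term in (b, c):
    y = term - compensation
    t = total + y
    compensation = (t - total) - y
    total = t                              # ends at 10005.9

print(naive, total)</lang>

The exact sum is 10005.85987, so the compensated result 10005.9 is the correctly rounded six-digit answer, while the naive result 10005.8 is off by a full unit in the last place.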
<br>
--[[User:PatGarrett|PatGarrett]] ([[User talk:PatGarrett|talk]]) 16:43, 16 February 2019 (UTC)