Talk:Numerical integration

::A correct implementation in C of the mid rect can be seen at [[http://en.wikipedia.org/wiki/Rectangle_method]]. The important thing to notice is that the interval (h in most of the rosettacode examples) is divided in half to find the middle point at which to evaluate the function (the i * 0.5 in the middle). If you're not dividing the interval in half, you cannot determine the value of the function at that point. Averaging the beginning and ending function values of the rectangle is the same number only if the function is linear, because doing that (and multiplying by the width) is simply another way of calculating the area of a right trapezoid. For good "pseudo code" that isn't so pseudo, I suggest the Algol 68 example. Notice how it has h/2 in the mid rect to get to the middle of the rectangle. The trapezium is careful to weight the inner points twice as much as the endpoints. (But don't copy the right rect, I think it's wrong to subtract h from the end position, because we already added h to the start position, and that leaves out one of the rectangles. Maybe use the Common Lisp one for that, since it gets it right.) --[[User:TimToady|TimToady]] 03:18, 12 September 2010 (UTC)
:::On closer inspection, the Algol code was wrong too, so I just fixed it so you'd have something to look at. But you really need to learn to visualize the geometry of it, I think.--[[User:TimToady|TimToady]] 03:40, 12 September 2010 (UTC)
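For reference, here is a minimal C sketch of the three rules as described above. The function names, the integrand f(x) = x^3, and the step count are illustrative assumptions, not taken from the task's existing entries: the mid rect samples at x + h/2, the trapezium weights interior points twice as much as the endpoints, and the right rect sums f(a + h) through f(b) so that no slice is dropped.

<lang c>#include <stdio.h>

/* Illustrative integrand (assumption, not from the task): f(x) = x^3. */
static double f(double x) { return x * x * x; }

/* Midpoint rectangle rule: evaluate f at the middle of each slice, i.e. at x + h/2. */
static double mid_rect(double (*g)(double), double a, double b, int n) {
    double h = (b - a) / n, sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += g(a + (i + 0.5) * h);   /* the h/2 offset discussed above */
    return sum * h;
}

/* Trapezium rule: endpoints weighted once, interior points twice. */
static double trapezium(double (*g)(double), double a, double b, int n) {
    double h = (b - a) / n, sum = (g(a) + g(b)) / 2.0;
    for (int i = 1; i < n; i++)
        sum += g(a + i * h);
    return sum * h;
}

/* Right rectangle rule: evaluate at the right edge of each slice,
   a + h, a + 2h, ..., b, so the last rectangle is not left out. */
static double right_rect(double (*g)(double), double a, double b, int n) {
    double h = (b - a) / n, sum = 0.0;
    for (int i = 1; i <= n; i++)
        sum += g(a + i * h);
    return sum * h;
}

int main(void) {
    /* Exact integral of x^3 over [0,1] is 0.25. */
    printf("mid rect:   %f\n", mid_rect(f, 0.0, 1.0, 100));
    printf("trapezium:  %f\n", trapezium(f, 0.0, 1.0, 100));
    printf("right rect: %f\n", right_rect(f, 0.0, 1.0, 100));
    return 0;
}</lang>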

== f(x) = x cases. ==

My rationale for the two f(x) = x cases was threefold:
# I wanted a simple case that would expose accuracy differences between the approximation methods across function types, and a function with a linear slope would do that (at least when compared to x^3 and 1/x).
# I wanted a case for which it would be relatively easy to get an exact answer by hand.
# I wanted to see if I could expose differences in floating-point/rational number implementations between languages using a simple case.
Roughly speaking, IEEE 754 floating-point numbers (what most programmers are probably accustomed to working with) allow for up to 24 bits of integer precision in a 32-bit float. This means they will tend to precisely represent non-negative integers up to 2^24, or 16,777,216. (I'm sure it's slightly more complicated than that, or I would have mentioned it along with the test cases. In any case, a simple test program I wrote for incrementing a float detected integer precision loss when it tried to go past 2^24 or thereabouts.) The final result of the [0,5000] case is an integer below 2^24, and so it can be accurately and precisely represented. The final result of the [0,6000] case is an integer above 2^24, and may not be. (I'm sure there's someone who will read this who can identify the next greater positive integer above 2^24 that can be represented with an IEEE 32-bit float, and could speak to this directly.) Granted, in either case, the intermediate values during approximation are likely to lose precision by not always falling on exact representations. Using a different number of approximating steps (such as 5,000 and 6,000 respectively) could cover that. --[[User:Short Circuit|Michael Mol]] 08:03, 12 September 2010 (UTC)
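As a small, self-contained C demonstration of the 2^24 boundary described above (this is an illustrative sketch, not part of the task): in IEEE 754 single precision the significand carries 24 bits, so 2^24 + 1 rounds back to 2^24, while the two exact answers 12,500,000 and 18,000,000 fall below and above that boundary respectively.

<lang c>#include <stdio.h>
#include <float.h>

int main(void) {
    /* A 32-bit IEEE 754 float has a 24-bit significand (FLT_MANT_DIG == 24),
       so every non-negative integer up to 2^24 = 16,777,216 is exactly
       representable, but 2^24 + 1 is not and rounds back to 2^24. */
    printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);

    float below = 16777216.0f;        /* 2^24     */
    float above = below + 1.0f;       /* 2^24 + 1 -> rounds to 2^24 */
    printf("2^24     as float: %.1f\n", below);
    printf("2^24 + 1 as float: %.1f\n", above);
    printf("equal? %s\n", below == above ? "yes" : "no");

    /* Exact integrals of f(x) = x: one answer below 2^24, one above. */
    printf("integral of x over [0,5000] = %.1f  (12,500,000 < 2^24)\n", 5000.0 * 5000.0 / 2.0);
    printf("integral of x over [0,6000] = %.1f  (18,000,000 > 2^24)\n", 6000.0 * 6000.0 / 2.0);
    return 0;
}</lang>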