Talk:Numerical integration


The Trap function in the Java example does not call F(X). Is this an error? --Waldorf 14:21, 21 December 2007 (MST)

Good catch. I'm working on it now. --Mwn3d 14:29, 21 December 2007 (MST)

Parser note (C)

Writing the C code, I used the variable name If ("integrated function", in my mind) and noticed that the syntax highlighter highlights If as the keyword if! It should be case-sensitive! --ShinTakezou 23:38, 16 December 2008 (UTC)

Copying Bad Code

It seems a lot of people copied the Java and Ada entries without actually thinking about the code or testing it. The right rectangle examples were leaving out the final rectangle; this was probably originally a copy/paste error from the left rectangle, which does want to stop one h early. The mid rectangle routines were all just averaging the beginning and end point of each rectangle rather than sampling the middle, which essentially recreates the trapezium rule. In some cases, the trapezium rule is also wrong: the endpoints are counted only once, but each interior point's function value is used twice and should therefore be multiplied by 2. These particular entries were not doing that. Testing would have found most of these problems. --TimToady 07:51, 11 September 2010 (UTC)

IIRC I copied the C example to make the Java example. I had used the C example for a homework assignment in college on these algorithms. I'd like a little more explanation on these problems. Maybe some pseudocode? You understand my resistance since I probably spent a week or two talking about these integration rules in a class with a reputable professor at my college. It is highly possible that I still didn't code them right, though. --Mwn3d 16:19, 11 September 2010 (UTC)
A correct implementation in C of the mid rect can be seen at [[1]]. The important thing to notice is that the interval (h in most of the Rosetta Code examples) is divided in half to find the middle point at which to evaluate the function (the i * 0.5 in the middle). If you're not offsetting by half the interval, you're not evaluating the function at the midpoint at all. Averaging the beginning and ending function values of the rectangle gives the same number only if the function is linear, because doing that (and multiplying by the width) is simply another way of calculating the area of a right trapezoid. For good "pseudo code" that isn't so pseudo, I suggest the Algol 68 example. Notice how it has h/2 in the mid rect to get to the middle of the rectangle. The trapezium is careful to weight the inner points twice as much as the endpoints. (But don't copy the right rect; I think it's wrong to subtract h from the end position, because we already added h to the start position, and that leaves out one of the rectangles. Maybe use the Common Lisp one for that, since it gets it right.) --TimToady 03:18, 12 September 2010 (UTC)
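To put the above in code rather than pointers, here is a minimal C sketch of the four rules as described in this thread. The function names, signatures, and the slice count in main are illustrative and not taken from any particular task entry.

#include <stdio.h>

/* Illustrative names; each rule uses n slices of width h = (b - a) / n. */

double left_rect(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n, sum = 0.0;
    int i;
    for (i = 0; i < n; i++)         /* samples a, a+h, ..., b-h */
        sum += f(a + i * h);
    return sum * h;
}

double right_rect(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n, sum = 0.0;
    int i;
    for (i = 1; i <= n; i++)        /* samples a+h, ..., b; keeps the final rectangle */
        sum += f(a + i * h);
    return sum * h;
}

double mid_rect(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n, sum = 0.0;
    int i;
    for (i = 0; i < n; i++)         /* samples the middle of each slice */
        sum += f(a + (i + 0.5) * h);
    return sum * h;
}

double trapezium(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = f(a) + f(b);       /* endpoints counted once */
    int i;
    for (i = 1; i < n; i++)
        sum += 2.0 * f(a + i * h);  /* interior points counted twice */
    return sum * h / 2.0;
}

static double fx(double x) { return x; }   /* the f(x) = x case discussed below */

int main(void)
{
    /* f(x) = x on [0, 5000] with 5,000,000 slices (an arbitrary example count).
       For a linear f the mid rect and trapezium results are mathematically exact
       (12500000, up to summation rounding); the left and right rect results are
       off by h*(b-a)/2 = 2.5 in opposite directions. */
    printf("%.1f %.1f %.1f %.1f\n",
           left_rect(fx, 0, 5000, 5000000), right_rect(fx, 0, 5000, 5000000),
           mid_rect(fx, 0, 5000, 5000000),  trapezium(fx, 0, 5000, 5000000));
    return 0;
}

Checking the left/right results against that expected offset is an easy way to spot an implementation that has silently dropped or double-counted a rectangle.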
On closer inspection, the Algol code was wrong too, so I just fixed it so you'd have something to look at. But you really need to learn to visualize the geometry of it, I think. --TimToady 03:40, 12 September 2010 (UTC)
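As a concrete picture of that geometry, consider a single slice of a non-linear function, say f(x) = x^2 on [1, 2] (a made-up example, not one of the task cases): averaging the endpoints gives the area of the right trapezoid under the chord, while the mid rect samples the curve once at its middle, and the two are genuinely different numbers.

#include <stdio.h>

int main(void)
{
    double x = 1.0, h = 1.0;   /* one slice of f(x) = x*x on [1, 2] */

    /* Averaging the endpoint values (and multiplying by the width) is the
       area of a right trapezoid, i.e. a single step of the trapezium rule. */
    double trapezoid = (x * x + (x + h) * (x + h)) / 2.0 * h;   /* 2.5  */

    /* The mid rect evaluates the function once, at the middle of the slice. */
    double mid_rect  = (x + h / 2.0) * (x + h / 2.0) * h;       /* 2.25 */

    printf("trapezoid %.2f, mid rect %.2f\n", trapezoid, mid_rect);
    return 0;
}

The exact area is 7/3, roughly 2.33, so neither is right on its own; the two formulas only coincide when f is linear, which is why the bug was invisible on the f(x) = x cases discussed below.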

f(x) = x cases.

My rationale for the two f(x) = x cases was threefold:

  1. I wanted a simple case that would expose accuracy differences between the approximation methods across function types, and a linear function would do that (at least compared to x^3 and 1/x).
  2. I wanted a case for which it would be relatively easy to get an exact answer by hand.
  3. I wanted to see if I could expose differences in floating-point/rational number implementations between languages using a simple case.

Roughly speaking, IEEE 754 floating-point numbers (what most programmers are probably accustomed to working with) allow for up to 24 bits of integer precision in a 32-bit float. This means they will tend to precisely represent non-negative integers up to 2^24, or 16,777,216. (I'm sure it's slightly more complicated than that, or I would have mentioned it along with the test cases. In any case, a simple test program I wrote for incrementing a float detected integer precision loss when it tried to go past 2^24 or thereabouts.) The final result of the [0,5000] case is an integer below 2^24, and so it can be accurately and precisely represented. The final result of the [0,6000] case is an integer above 2^24, and may not be. (I'm sure there's someone who will read this who can identify the next greater positive integer above 2^24 that can be represented with an IEEE 32-bit float, and could speak to this directly.) Granted, in either case, the intermediate values during approximation are likely to lose precision by not always falling on exact representations. Using a different number of approximating steps (such as 5,000 and 6,000 respectively) could cover that. --Michael Mol 08:03, 12 September 2010 (UTC)
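For what it's worth, the exact answers do straddle that threshold: for f(x) = x the [0,5000] case comes to 5000^2/2 = 12,500,000, below 2^24 = 16,777,216, while the [0,6000] case comes to 6000^2/2 = 18,000,000, above it. A sketch of the kind of increment test described above (not the original program) might look like this:

#include <stdio.h>

/* Count upward in a 32-bit float until adding 1.0f no longer changes
   the stored value. */
int main(void)
{
    float x = 0.0f;
    for (;;) {
        float next = x + 1.0f;   /* assignment rounds the sum to float */
        if (next == x)           /* the 1.0f was absorbed: out of integer precision */
            break;
        x = next;
    }
    /* With IEEE 754 single precision this stops at 16777216.0 (2^24):
       16777217 is not representable as a float and rounds back down. */
    printf("integer counting in a float stalls at %.1f\n", x);
    return 0;
}

(As for the parenthetical: above 2^24, consecutive 32-bit floats are 2 apart, so 16,777,217 is not representable and the next representable integer is 16,777,218.)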