Polynomial regression: Difference between revisions
Underscore (talk | contribs) (Added Hy.)
m ({{out}})
Line 43:
Put (C (2), Aft => 3, Exp => 0);
end Fitting;</lang>
{{out}}
<pre>
1.000 2.000 3.000
Line 169:
print polynomial(d)
END # fitting #</lang>
{{out}}
<pre>
3x**2+2x+1
Line 177:
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
The code listed below is good for up to 10000 data points and fits an order-5 polynomial, so the test data for this task is hardly challenging!
<lang bbcbasic>      INSTALL @lib$+"ARRAYLIB"
Line 229:
PRINT ;vector(term%) " * x^" STR$(term%)
NEXT</lang>
{{out}}
<pre>
Polynomial coefficients =
Line 317:
return 0;
}</lang>
{{out}}
<pre>1.000000
2.000000
Line 465:
end program</lang>
{{out}} (lower powers first, i.e. the reverse of the Python output):
<pre>
1.0000
Line 553:
sleep
</lang>
{{out}}
<pre>
Polynomial Coefficients:
Line 655:
fmt.Println(c)
}</lang>
{{out}} (lowest order coefficient first)
<pre>
[0.9999999999999758 2.000000000000015 2.999999999999999]
Line 673:
mat = listArray ((1,1), (d,d)) $ liftM2 concatMap ppoly id [0..fromIntegral $ pred d]
vec = listArray (1,d) $ take d ry</lang>
{{out}} in GHCi:
<lang haskell>*Main> polyfit 3 [1,6,17,34,57,86,121,162,209,262,321]
[1.0,2.0,3.0]</lang>
Line 693:
Theory = p(1)*x(nr)^2 + p(2)*x(nr) + p(3)
END</lang>
{{out}}
<pre>SOLVE performs a (nonlinear) least-square fit (Levenberg-Marquardt):
p(1)=2.997135145; p(2)=2.011348347; p(3)=0.9906627242; iterations=19;</pre>
=={{header|Hy}}==
Line 711:
1 2 3 0 0 0 0 0 0 0 0</lang>
Note that this implementation does not use floating point numbers, so we do not introduce floating point errors. Using exact arithmetic has a speed penalty, but for small problems like this it is inconsequential.
The above solution fits a polynomial of order 11. If the order of the polynomial is known to be 3 (as is implied in the task description) then the following solution is probably preferable:
<lang j>   Y (%. (i.3) ^/~ ]) X
1 2 3</lang>
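The exact-arithmetic point is not specific to J. The following sketch (illustrative only, not part of any Rosetta entry here; written in Python with fractions.Fraction) solves the 3×3 Vandermonde system for the known-degree-2 case by Gauss-Jordan elimination, so the coefficients come out as exact rationals with no floating point error:

```python
# Illustrative sketch: exact-arithmetic polynomial fit for a known degree.
# Three points determine a quadratic exactly, so we solve the 3x3
# Vandermonde system using rational arithmetic (fractions.Fraction).
from fractions import Fraction

xs = [0, 1, 2]
ys = [1, 6, 17]          # first three values of the task's data set

# Vandermonde rows [1, x, x^2], all entries exact rationals
A = [[Fraction(x) ** p for p in range(3)] for x in xs]
b = [Fraction(y) for y in ys]

# Gauss-Jordan elimination without pivoting (safe here: distinct xs
# give nonzero pivots for this small system)
n = len(b)
for i in range(n):
    piv = A[i][i]
    for j in range(i, n):
        A[i][j] /= piv
    b[i] /= piv
    for k in range(n):
        if k != i and A[k][i] != 0:
            f = A[k][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]

print(b)  # coefficients in increasing order of degree: exactly 1, 2, 3
```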
=={{header|Julia}}==
The least-squares fit problem for a degree <i>n</i> polynomial can be solved with the built-in backslash operator:
<lang julia>function polyfit(x, y, n)
    A = [ float(x[i])^p for i = 1:length(x), p = 0:n ]
    A \ y
end</lang>
{{out}}
<pre>julia> x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
julia> y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
julia> polyfit(x, y, 2)
Line 737:
 1.0
 2.0
 3.0</pre>
(giving the coefficients in increasing order of degree).
Line 749:
=={{header|MATLAB}}==
Matlab has a built-in function "polyfit(x,y,n)" which performs this task. The arguments x and y are vectors which are parametrized by the index such that <math>point_{i} = (x_{i},y_{i})</math>, and the argument n is the order of the polynomial you want to fit. The output of this function is the vector of coefficients of the polynomial which best fits these x,y value pairs.
<lang MATLAB>>> x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
Line 793:
Lagrange interpolating polynomial:
<lang parigp>polinterpolate([0,1,2,3,4,5,6,7,8,9,10],[1,6,17,34,57,86,121,162,209,262,321])</lang>
{{out}}
<pre>3*x^2 + 2*x + 1</pre>
Line 816:
>>> yf
array([ 1., 6., 17., 34., 57., 86., 121., 162., 209., 262., 321.])</lang>
Find max absolute error:
<lang python>>>> '%.1g' % max(y-yf)
'1e-013'</lang>
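For comparison, a hedged sketch (not part of the original entry): the same fit can be done without polyfit by building the Vandermonde matrix explicitly and solving the least-squares problem with numpy.linalg.lstsq, which is essentially what polyfit does internally:

```python
# Sketch: the same quadratic fit without polyfit. Build the Vandermonde
# matrix (columns x^2, x, 1 -- highest power first, matching polyfit's
# coefficient order) and solve the least-squares problem directly.
import numpy as np

x = np.arange(11, dtype=float)
y = np.array([1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321], dtype=float)

A = np.vander(x, 3)                       # 11x3 Vandermonde matrix
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)                             # approximately [3. 2. 1.]
```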
Line 829:
      2
1.085 N + 10.36 N - 0.6164</lang>
Thus we confirm once more that for already sorted sequences the quicksort implementation considered has quadratic dependence on sequence length (see [[Query Performance|'''Example''' section for Python language on ''Query Performance'' page]]).
=={{header|R}}==
The easiest (and most robust) approach to solve this in R is to use the base package's ''lm'' function, which will find the least squares solution via a QR decomposition:
<lang R>
Line 845:
coef(lm(y ~ x + I(x^2)))</lang>
{{out}}
<pre>
(Intercept)           x      I(x^2)
          1           2           3
</pre>
=={{header|Racket}}==
Line 872:
(function (poly (fit xs ys 2)))))
</lang>
{{out}}
[[File:polyreg-racket.png]]
Line 892:
p betas</lang>
{{out}}
<pre>[1.00000000000018, 2.00000000000011, 3.00000000000001]</pre>
=={{header|Tcl}}==
{{tcllib|math::linearalgebra}}
<!-- This implementation from Emiliano Gavilan;
posted here with his explicit permission -->
<lang tcl>package require math::linearalgebra
Line 959:
Disp regeq(x)</lang>
<code>seq(''expr'',''var'',''low'',''high'')</code> evaluates ''expr'' with ''var'' bound to integers from ''low'' to ''high'' and returns a list of the results. <code>→</code> is the assignment operator.
<code>QuadReg</code>, "quadratic regression", does the fit and stores the details in a number of standard variables, including <var>regeq</var>, which receives the fitted quadratic (polynomial) function itself.
We then apply that function to the (undefined, as ensured by <code>DelVar</code>) variable x to obtain the expression in terms of x, and display it.
{{out}}
<code>3.·x<sup>2</sup> + 2.·x + 1.</code>
=={{header|Ursala}}==
{{libheader|LAPACK}}
The fit function defined below returns the coefficients of an nth-degree polynomial, in order of descending degree, fitting the lists of inputs x and outputs y. The real work is done by the dgelsd function from the LAPACK library. Ursala provides a simplified interface to this library whereby the data can be passed as lists rather than arrays, and all memory management is handled automatically.
<lang Ursala>#import std
#import nat
Line 987:
example = fit2(x,y)</lang>
{{out}}
<pre><3.000000e+00,2.000000e+00,1.000000e+00></pre>