Polynomial regression

Put (C (2), Aft => 3, Exp => 0);
end Fitting;</lang>
{{out}}
<pre>
1.000 2.000 3.000
print polynomial(d)
END # fitting #</lang>
{{out}}
<pre>
3x**2+2x+1
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
The code listed below is good for up to 10000 data points and fits an order-5 polynomial, so the test data for this task is hardly challenging!
<lang bbcbasic> INSTALL @lib$+"ARRAYLIB"
PRINT ;vector(term%) " * x^" STR$(term%)
NEXT</lang>
{{out}}
<pre>
Polynomial coefficients =
return 0;
}</lang>
{{out}}
<pre>1.000000
2.000000
end program</lang>
 
{{out}} (lower powers first, so this seems the opposite of the Python output):
 
<pre>
1.0000
sleep
</lang>
{{out}}
<pre>
Polynomial Coefficients:
fmt.Println(c)
}</lang>
{{out}} (lowest order coefficient first)
<pre>
[0.9999999999999758 2.000000000000015 2.999999999999999]
mat = listArray ((1,1), (d,d)) $ liftM2 concatMap ppoly id [0..fromIntegral $ pred d]
vec = listArray (1,d) $ take d ry</lang>
{{out}} in GHCi:
<lang haskell>*Main> polyfit 3 [1,6,17,34,57,86,121,162,209,262,321]
[1.0,2.0,3.0]</lang>
Theory = p(1)*x(nr)^2 + p(2)*x(nr) + p(3)
END</lang>
{{out}}
<pre>SOLVE performs a (nonlinear) least-square fit (Levenberg-Marquardt):
p(1)=2.997135145; p(2)=2.011348347; p(3)=0.9906627242; iterations=19;</pre>
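For comparison, the same Levenberg-Marquardt route can be sketched in Python, assuming SciPy is available (<code>scipy.optimize.curve_fit</code> defaults to that algorithm for unconstrained fits); this is only an analogue of, not the same code as, the HicEst SOLVE call:
<lang python>import numpy
from scipy.optimize import curve_fit

x = numpy.arange(11.0)
y = numpy.array([1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321], dtype=float)

# Illustrative only: fit a*x^2 + b*x + c by Levenberg-Marquardt.
quad = lambda t, a, b, c: a * t**2 + b * t + c
popt, _ = curve_fit(quad, x, y)
print(popt)  # approximately [3. 2. 1.]</lang>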
 
=={{header|Hy}}==
1 2 3 0 0 0 0 0 0 0 0</lang>
 
Note that this implementation does not use floating point numbers, so we do not introduce floating point errors. Using exact arithmetic has a speed penalty, but for small problems like this it is inconsequential.
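The same exact-arithmetic idea can be sketched in Python with the standard fractions module (this sketch is ours, not part of the J solution): build the normal equations for a quadratic fit and solve them entirely in rational arithmetic.
<lang python>from fractions import Fraction

# Illustrative sketch, not the J solution: solve the normal equations
# A^T A c = A^T y for a quadratic fit using exact rational arithmetic.
xs = range(11)
ys = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
A = [[Fraction(x) ** p for p in range(3)] for x in xs]
M = [[sum(row[i] * row[j] for row in A) for j in range(3)]
     + [sum(row[i] * y for row, y in zip(A, ys))]
     for i in range(3)]
for i in range(3):                        # Gauss-Jordan elimination
    M[i] = [v / M[i][i] for v in M[i]]
    for r in range(3):
        if r != i:
            M[r] = [a - M[r][i] * b for a, b in zip(M[r], M[i])]
print([row[3] for row in M])              # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]</lang>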
 
The J solution above fits a polynomial of order 11. If the order of the polynomial is known to be 3 (as is implied in the task description) then the following solution is probably preferable:
<lang j> Y (%. (i.3) ^/~ ]) X
1 2 3</lang>
 
=={{header|Julia}}==
The least-squares fit problem for a degree <i>n</i> polynomial can be solved with the built-in backslash operator:
<lang julia>function polyfit(x, y, n)
A = [ float(x[i])^p for i = 1:length(x), p = 0:n ]
A \ y
end</lang>
{{out}}
<pre>julia> x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
julia> y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
julia> polyfit(x, y, 2)
1.0
2.0
3.0</pre>
(giving the coefficients in increasing order of degree).
 
 
=={{header|MATLAB}}==
Matlab has a built-in function "polyfit(x,y,n)" which performs this task. The arguments x and y are vectors parametrized by the index such that <math>point_{i} = (x_{i},y_{i})</math>, and the argument n is the order of the polynomial to fit. The output of this function is the coefficients of the polynomial which best fits these x,y value pairs.
 
<lang MATLAB>>> x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
Lagrange interpolating polynomial:
<lang parigp>polinterpolate([0,1,2,3,4,5,6,7,8,9,10],[1,6,17,34,57,86,121,162,209,262,321])</lang>
{{out}}
<pre>3*x^2 + 2*x + 1</pre>
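Note that <code>polinterpolate</code> performs exact interpolation rather than least-squares regression; with eleven points it would in general return a degree-10 polynomial, and it collapses to a quadratic here only because the data lie exactly on one. A quick cross-check of that claim in Python, assuming SciPy is available (not part of the PARI/GP solution):
<lang python>from scipy.interpolate import lagrange

x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]

# The interpolant may nominally carry degree-10 terms, but their
# coefficients are (numerically) zero for this quadratic data.
p = lagrange(x, y)
print([round(c, 6) for c in p.coeffs])  # highest power first: ..., 3.0, 2.0, 1.0</lang>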
 
Line 803 ⟶ 816:
>>> yf
array([ 1., 6., 17., 34., 57., 86., 121., 162., 209., 262., 321.])</lang>
Find the maximum absolute error:
<lang python>>>> '%.1g' % max(y-yf)
'1e-013'</lang>
2
1.085 N + 10.36 N - 0.6164</lang>
Thus we confirm once more that for already sorted sequences the considered quick sort implementation has quadratic dependence on sequence length (see [[Query Performance|'''Example''' section for Python language on ''Query Performance'' page]]).
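The ''Query Performance'' experiment itself is not reproduced here, but a minimal sketch of its shape (our code, not that page's) is: time a first-element-pivot quicksort on already sorted input and fit a quadratic with <code>polyfit</code>:
<lang python>import timeit
import numpy

def quick_sort(seq):
    # First-element pivot: degrades to quadratic time on sorted input.
    if len(seq) <= 1:
        return seq
    pivot, rest = seq[0], seq[1:]
    return (quick_sort([v for v in rest if v < pivot]) + [pivot] +
            quick_sort([v for v in rest if v >= pivot]))

# Keep sizes modest: recursion depth equals the input length here.
sizes = [100, 200, 400, 800]
times = [timeit.timeit(lambda: quick_sort(list(range(n))), number=5)
         for n in sizes]
print(numpy.poly1d(numpy.polyfit(sizes, times, 2), variable='N'))</lang>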
 
=={{header|R}}==
The easiest (and most robust) approach to solve this in R is to use the base package's ''lm'' function which will find the least squares solution via a QR decomposition:
 
<lang R>
coef(lm(y ~ x + I(x^2)))</lang>
 
{{out}}
<pre>
(Intercept) x I(x^2)
1 2 3
</pre>
 
=={{header|Racket}}==
(function (poly (fit xs ys 2)))))
</lang>
{{out}}
[[File:polyreg-racket.png]]
 
 
p betas</lang>
{{out}}
<pre>[1.00000000000018, 2.00000000000011, 3.00000000000001]</pre>
 
=={{header|Tcl}}==
{{tcllib|math::linearalgebra}}
<!-- This implementation from Emiliano Gavilan; posted here with his explicit permission -->
<lang tcl>package require math::linearalgebra
 
Disp regeq(x)</lang>
 
<code>seq(''expr'',''var'',''low'',''high'')</code> evaluates ''expr'' with ''var'' bound to integers from ''low'' to ''high'' and returns a list of the results. <code>→</code> is the assignment operator. <code>QuadReg</code>, "quadratic regression", does the fit and stores the details in a number of standard variables, including <var>regeq</var>, which receives the fitted quadratic (polynomial) function itself. We then apply that function to the variable x (left undefined, as ensured by <code>DelVar</code>) to obtain the expression in terms of x, and display it.
 
{{out}}
<code>3.·x<sup>2</sup> + 2.·x + 1.</code>
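For readers without a TI-89, a rough Python/numpy transcription of the same steps (our illustration; <code>polyfit</code> stands in for <code>QuadReg</code>):
<lang python>import numpy

x = numpy.arange(11)                           # seq(x, x, 0, 10)
y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
regeq = numpy.poly1d(numpy.polyfit(x, y, 2))   # QuadReg x, y; regeq holds the fit
print(regeq)                                   # analogue of Disp regeq(x)</lang>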
 
=={{header|Ursala}}==
{{libheader|LAPACK}}
The fit function defined below returns the coefficients of an nth-degree polynomial in order of descending degree, fitting the lists of inputs x and outputs y. The real work is done by the dgelsd function from the lapack library. Ursala provides a simplified interface to this library whereby the data can be passed as lists rather than arrays, and all memory management is handled automatically.
<lang Ursala>#import std
#import nat
 
example = fit2(x,y)</lang>
{{out}}
<pre><3.000000e+00,2.000000e+00,1.000000e+00></pre>