Polynomial regression: Difference between revisions
m (→{{header|Phix}}: MINSIZE now ok)
Thundergnat (talk | contribs) m (syntax highlighting fixup automation)
Line 15:
{{trans|Swift}}
<syntaxhighlight lang="11l">
R sum(arr) / Float(arr.len)
Line 47:
V x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
V y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
poly_regression(x, y)</syntaxhighlight>
{{out}}
Line 69:
=={{header|Ada}}==
<syntaxhighlight lang="ada">
function Fit (X, Y : Real_Vector; N : Positive) return Real_Vector is
Line 80:
end loop;
return Solve (A * Transpose (A), A * Y);
end Fit;</syntaxhighlight>
The function Fit implements least-squares approximation of a function defined at the points specified by the arrays ''x''<sub>''i''</sub> and ''y''<sub>''i''</sub>. The basis functions are φ<sub>''j''</sub>(''x'') = ''x''<sup>''j''</sup>, ''j''=0,1,..,''N''. The implementation is straightforward: first the plane matrix A is created, with A<sub>''ji''</sub>=φ<sub>''j''</sub>(''x''<sub>''i''</sub>); then the linear problem AA<sup>''T''</sup>''c''=A''y'' is solved, and the resulting ''c''<sub>''j''</sub> are the coefficients. Constraint_Error is propagated when the dimensions of X and Y differ or when the problem is ill-defined.
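The normal-equations construction described above can be sketched in language-neutral form. The following is a hedged Python illustration (the name <code>polyfit_normal</code> and its structure are mine, not from any task entry): it builds A with A<sub>''ji''</sub> = ''x''<sub>''i''</sub><sup>''j''</sup>, forms AA<sup>''T''</sup>''c'' = A''y'', and solves by Gaussian elimination.

```python
def polyfit_normal(xs, ys, n):
    """Least-squares fit of a degree-n polynomial via the normal
    equations (A A^T) c = A y, where A[j][i] = x_i**j."""
    m = len(xs)
    A = [[float(x) ** j for x in xs] for j in range(n + 1)]
    # Normal matrix M = A A^T and right-hand side r = A y
    M = [[sum(A[j][i] * A[k][i] for i in range(m)) for k in range(n + 1)]
         for j in range(n + 1)]
    r = [sum(A[j][i] * ys[i] for i in range(m)) for j in range(n + 1)]
    # Gaussian elimination with partial pivoting on the (n+1)x(n+1) system
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda row: abs(M[row][col]))
        M[col], M[piv] = M[piv], M[col]
        r[col], r[piv] = r[piv], r[col]
        for row in range(col + 1, n + 1):
            f = M[row][col] / M[col][col]
            for k in range(col, n + 1):
                M[row][k] -= f * M[col][k]
            r[row] -= f * r[col]
    # Back substitution
    c = [0.0] * (n + 1)
    for row in range(n, -1, -1):
        s = sum(M[row][k] * c[k] for k in range(row + 1, n + 1))
        c[row] = (r[row] - s) / M[row][row]
    return c  # coefficients, lowest power first

coeffs = polyfit_normal(range(11),
                        [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321], 2)
print(coeffs)  # fit of y = 1 + 2x + 3x^2
```

For the task data the recovered coefficients agree with [1, 2, 3] to floating-point accuracy.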
===Example===
<syntaxhighlight lang="ada">
with Ada.Float_Text_IO; use Ada.Float_Text_IO;
Line 97:
Put (C (1), Aft => 3, Exp => 0);
Put (C (2), Aft => 3, Exp => 0);
end Fitting;</syntaxhighlight>
{{out}}
<pre>
Line 110:
<!-- {{does not work with|ELLA ALGOL 68|Any (with appropriate job cards AND formatted transput statements removed) - tested with release 1.8.8d.fc9.i386 - ELLA has no FORMATted transput}} -->
<syntaxhighlight lang="algol68">
MODE
Line 223:
);
print polynomial(d)
END # fitting #</syntaxhighlight>
{{out}}
<pre>
Line 232:
=={{header|AutoHotkey}}==
{{trans|Lua}}
<syntaxhighlight lang="autohotkey">
regression(xa,ya){
n := xa.Count()
Line 273:
eval(a,b,c,x){
return a + (b + c*x) * x
}</syntaxhighlight>
Examples:<syntaxhighlight lang="autohotkey">xa := [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ya := [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
MsgBox % result := regression(xa, ya)
return</syntaxhighlight>
{{out}}
<pre>y = 3.000000x^2 + 2.000000x + 1.000000
Line 297:
=={{header|AWK}}==
{{trans|Lua}}
<syntaxhighlight lang="awk">
BEGIN{
i = 0;
Line 382:
a = ym - b * xm - c * x2m;
}
</syntaxhighlight>
{{out}}
<pre>
Line 406:
and fits an order-5 polynomial, so the test data for this task
is hardly challenging!
<syntaxhighlight lang="bbcbasic">
Max% = 10000
Line 454:
FOR term% = 5 TO 0 STEP -1
PRINT ;vector(term%) " * x^" STR$(term%)
NEXT</syntaxhighlight>
{{out}}
<pre>
Line 470:
'''Include''' file (to make the code easily reusable) named <tt>polifitgsl.h</tt>
<syntaxhighlight lang="c">
#define _POLIFITGSL_H
#include <gsl/gsl_multifit.h>
Line 477:
bool polynomialfit(int obs, int degree,
double *dx, double *dy, double *store); /* n, p */
#endif</syntaxhighlight>
'''Implementation''' (the examples [http://www.gnu.org/software/gsl/manual/html_node/Fitting-Examples.html here] helped a lot in coding this quickly):
<syntaxhighlight lang="c">
bool polynomialfit(int obs, int degree,
Line 519:
return true; /* we do not "analyse" the result (cov matrix mainly)
to know if the fit is "good" */
}</syntaxhighlight>
'''Testing''':
<syntaxhighlight lang="c">
#include "polifitgsl.h"
Line 541:
}
return 0;
}</syntaxhighlight>
{{out}}
<pre>1.000000
Line 549:
=={{header|C sharp|C#}}==
{{libheader|Math.Net}}
<syntaxhighlight lang="csharp">
{
// Vandermonde matrix
Line 563:
var p = r.Inverse().Multiply(q.TransposeThisAndMultiply(yv));
return p.Column(0).ToArray();
}</syntaxhighlight>
Example:
<syntaxhighlight lang="csharp">
{
const int degree = 2;
Line 576:
Console.WriteLine("{0} => {1} diff {2}", x[i], Polynomial.Evaluate(x[i], p), y[i] - Polynomial.Evaluate(x[i], p));
Console.ReadKey(true);
}</syntaxhighlight>
=={{header|C++}}==
{{trans|Java}}
<syntaxhighlight lang="cpp">
#include <iostream>
#include <numeric>
Line 641:
return 0;
}</syntaxhighlight>
{{out}}
<pre>y = 1 + 2x + 3x^2
Line 661:
Uses the routine (lsqr A b) from [[Multiple regression]] and (mtp A) from [[Matrix transposition]].
<syntaxhighlight lang="lisp">
(defun polyfit (x y n)
(let* ((m (cadr (array-dimensions x)))
Line 669:
(setf (aref A i j)
(expt (aref x 0 i) j))))
(lsqr A (mtp y))))</syntaxhighlight>
Example:
<syntaxhighlight lang="lisp">
(y (make-array '(1 11) :initial-contents '((1 6 17 34 57 86 121 162 209 262 321)))))
(polyfit x y 2))
#2A((0.9999999999999759d0) (2.000000000000005d0) (3.0d0))</syntaxhighlight>
=={{header|D}}==
{{trans|Kotlin}}
<syntaxhighlight lang="d">
import std.range;
import std.stdio;
Line 727:
auto y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
polyRegression(x, y);
}</syntaxhighlight>
{{out}}
<pre>y = 1 + 2x + 3x^2
Line 747:
{{libheader|Calc}}
<syntaxhighlight lang="lisp">
(y '(1 6 17 34 57 86 121 162 209 262 321)))
(calc-eval "fit(a*x^2+b*x+c,[x],[a,b,c],[$1 $2])" nil (cons 'vec x) (cons 'vec y)))</syntaxhighlight>
{{out}}
Line 757:
=={{header|Fortran}}==
{{libheader|LAPACK}}
<syntaxhighlight lang="fortran">
contains
Line 820:
end function
end module</syntaxhighlight>
===Example===
<syntaxhighlight lang="fortran">
use fitting
implicit none
Line 841:
write (*, '(F9.4)') a
end program</syntaxhighlight>
{{out}} (lower powers first, so this seems the opposite of the Python output):
Line 852:
=={{header|FreeBASIC}}==
General regressions for different polynomials, here it is for degree 2, (3 terms).
<syntaxhighlight lang="freebasic">
Type vector
Line 1,039:
print show(ans())
sleep</syntaxhighlight>
{{out}}
<pre>+1 +2*x +3*x^2</pre>
=={{header|GAP}}==
<syntaxhighlight lang="gap">
local a;
a := List([0 .. n], i -> List(x, s -> s^i));
Line 1,055:
# Return coefficients in ascending degree order
PolynomialRegression(x, y, 2);
# [ 1, 2, 3 ]</syntaxhighlight>
=={{header|gnuplot}}==
<syntaxhighlight lang="gnuplot">
f(x) = a*x**2 + b*x + c
Line 1,082:
e
print sprintf("\n --- \n Polynomial fit: %.4f x^2 + %.4f x + %.4f\n", a, b, c)</syntaxhighlight>
=={{header|Go}}==
===Library gonum/matrix===
<syntaxhighlight lang="go">
import (
Line 1,126:
}
return x
}</syntaxhighlight>
{{out}}
<pre>
Line 1,136:
===Library go.matrix===
Least squares solution using QR decomposition and package [http://github.com/skelterjohn/go.matrix go.matrix].
<syntaxhighlight lang="go">
import (
Line 1,176:
}
fmt.Println(c)
}</syntaxhighlight>
{{out}} (lowest order coefficient first)
<pre>
Line 1,184:
=={{header|Haskell}}==
Uses module Matrix.LU from [http://hackage.haskell.org/package/dsp hackageDB DSP]
<syntaxhighlight lang="haskell">
import Data.Array
import Control.Monad
Line 1,194:
polyfit d ry = elems $ solve mat vec where
mat = listArray ((1,1), (d,d)) $ liftM2 concatMap ppoly id [0..fromIntegral $ pred d]
vec = listArray (1,d) $ take d ry</syntaxhighlight>
{{out}} in GHCi:
<syntaxhighlight lang="haskell">
[1.0,2.0,3.0]</syntaxhighlight>
=={{header|HicEst}}==
<syntaxhighlight lang="hicest">
x = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
Line 1,214:
! called by the solver of the SOLVE function. All variables are global
Theory = p(1)*x(nr)^2 + p(2)*x(nr) + p(3)
END</syntaxhighlight>
{{out}}
<pre>SOLVE performs a (nonlinear) least-square fit (Levenberg-Marquardt):
Line 1,220:
=={{header|Hy}}==
<syntaxhighlight lang="hy">
(setv x (range 11))
(setv y [1 6 17 34 57 86 121 162 209 262 321])
(print (polyfit x y 2))</syntaxhighlight>
=={{header|J}}==
<syntaxhighlight lang="j">
(%. ^/~@x:@i.@#) Y
1 2 3 0 0 0 0 0 0 0 0</syntaxhighlight>
Note that this implementation does not use floating point numbers,
Line 1,242:
(as is implied in the task description)
then the following solution is probably preferable:
<syntaxhighlight lang="j">
1 2 3</syntaxhighlight>
(Note that this time we used floating point numbers, so the result is approximate rather than exact; it only looks exact because of how J displays floating point numbers (by default, J shows six digits of accuracy). Changing (i.3) to (x:i.3) would give an exact result, if that mattered.)
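The exact-arithmetic point made by the J entry can be reproduced in other languages too. As a hedged illustration (the name <code>exact_polyfit</code> is mine, not from any task entry), here is a Python sketch that solves the square Vandermonde system over the rationals with the standard-library fractions module, so the coefficients come out exactly:

```python
from fractions import Fraction

def exact_polyfit(xs, ys):
    """Solve the square Vandermonde system V c = y exactly, where
    V[i][j] = xs[i]**j, using rational arithmetic throughout."""
    n = len(xs)
    V = [[Fraction(x) ** j for j in range(n)] for x in xs]
    y = [Fraction(v) for v in ys]
    # Gauss-Jordan elimination over Q; with exact arithmetic, pivoting
    # is only needed to avoid a zero pivot, not for accuracy.
    for col in range(n):
        piv = next(r for r in range(col, n) if V[r][col] != 0)
        V[col], V[piv] = V[piv], V[col]
        y[col], y[piv] = y[piv], y[col]
        inv = 1 / V[col][col]
        V[col] = [v * inv for v in V[col]]
        y[col] *= inv
        for r in range(n):
            if r != col and V[r][col] != 0:
                f = V[r][col]
                V[r] = [a - f * b for a, b in zip(V[r], V[col])]
                y[r] -= f * y[col]
    return y  # exact coefficients, lowest degree first

c = exact_polyfit(list(range(11)),
                  [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321])
print(c[:3])  # exactly [1, 2, 3]; all higher coefficients are exactly 0
```

Because the task data lie exactly on y = 1 + 2x + 3x², the degree-10 interpolant has exactly those three nonzero coefficients, with no floating-point fuzz.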
Line 1,249:
{{trans|D}}
{{works with|Java|8}}
<syntaxhighlight lang="java">
import java.util.function.IntToDoubleFunction;
import java.util.stream.IntStream;
Line 1,298:
polyRegression(x, y);
}
}</syntaxhighlight>
{{out}}
<pre>y = 1.0 + 2.0x + 3.0x^2
Line 1,319:
The least-squares fit problem for a degree <i>n</i>
can be solved with the built-in backslash operator (coefficients in increasing order of degree):
<syntaxhighlight lang="julia">
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
@show polyfit(x, y, 2)</syntaxhighlight>
{{out}}
Line 1,330:
=={{header|Kotlin}}==
{{trans|REXX}}
<syntaxhighlight lang="kotlin">
fun polyRegression(x: IntArray, y: IntArray) {
Line 1,365:
val y = intArrayOf(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321)
polyRegression(x, y)
}</syntaxhighlight>
{{out}}
Line 1,388:
=={{header|Lua}}==
{{trans|Modula-2}}
<syntaxhighlight lang="lua">
return a + (b + c * x) * x
end
Line 1,439:
local xa = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
local ya = {1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321}
regression(xa, ya)</syntaxhighlight>
{{out}}
<pre>y = 1 + 2x + 3x^2
Line 1,455:
=={{header|Maple}}==
<syntaxhighlight lang="maple">
PolynomialInterpolation([[0, 1], [1, 6], [2, 17], [3, 34], [4, 57], [5, 86], [6, 121], [7, 162], [8, 209], [9, 262], [10, 321]], 'x');
</syntaxhighlight>
Result:
<pre>3*x^2+2*x+1</pre>
Line 1,463:
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Using the built-in "Fit" function.
<syntaxhighlight lang="mathematica">
Fit[data, {1, x, x^2}, x]</syntaxhighlight>
Second version: using built-in "InterpolatingPolynomial" function.
Result:
<pre>1 + 2x + 3x^2</pre>
Line 1,475:
The output of this function is the coefficients of the polynomial that best fits these x,y value pairs.
<syntaxhighlight lang="matlab">
>> y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
>> polyfit(x,y,2)
Line 1,481:
ans =
2.999999999999998 2.000000000000019 0.999999999999956</syntaxhighlight>
=={{header|МК-61/52}}==
Part 1:
<syntaxhighlight lang="text">ПC С/П ПD ИП9 + П9 ИПC ИП5 + П5
ИПC x^2 П2 ИП6 + П6 ИП2 ИПC * ИП7
+ П7 ИП2 x^2 ИП8 + П8 ИПC ИПD *
ИПA + ПA ИП2 ИПD * ИПB + ПB ИПD
КИП4 С/П БП 00</syntaxhighlight>
''Input'': В/О x<sub>1</sub> С/П y<sub>1</sub> С/П x<sub>2</sub> С/П y<sub>2</sub> С/П ...
Part 2:
<syntaxhighlight lang="text">ИП5 ПC ИП6 ПD П2 ИП7 П3 ИП4 ИПD *
ИПC ИП5 * - ПD ИП4 ИП7 * ИПC ИП6
* - П7 ИП4 ИПA * ИПC ИП9 * -
Line 1,502:
ИПD ИП8 * ИП7 ИП3 * - / ПB ИПA
ИПB ИП7 * - ИПD / ПA ИП9 ИПB ИП6
* - ИПA ИП5 * - ИП4 / П9 С/П</syntaxhighlight>
''Result'': Р9 = a<sub>0</sub>, РA = a<sub>1</sub>, РB = a<sub>2</sub>.
=={{header|Modula-2}}==
<syntaxhighlight lang="modula2">
FROM FormatString IMPORT FormatString;
FROM RealStr IMPORT RealToStr;
Line 1,593:
ReadChar;
END PolynomialRegression.</syntaxhighlight>
=={{header|Nim}}==
{{trans|Kotlin}}
<syntaxhighlight lang="nim">
proc polyRegression(x, y: openArray[int]) =
Line 1,630:
let x = toSeq(0..10)
let y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
polyRegression(x, y)</syntaxhighlight>
{{out}}
Line 1,652:
{{trans|Kotlin}}
{{libheader|Base}}
<syntaxhighlight lang="ocaml">
open Stdio
Line 1,692:
let x = Array.init 11 ~f:Float.of_int in
let y = [| 1.; 6.; 17.; 34.; 57.; 86.; 121.; 162.; 209.; 262.; 321. |] in
regression x y</syntaxhighlight>
{{out}}
Line 1,715:
=={{header|Octave}}==
<syntaxhighlight lang="octave">
y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
coeffs = polyfit(x, y, 2)</syntaxhighlight>
=={{header|PARI/GP}}==
Lagrange interpolating polynomial:
In newer versions, this can be abbreviated:
{{out}}
<pre>3*x^2 + 2*x + 1</pre>
Least-squares fit:
<syntaxhighlight lang="parigp">
M=matrix(#V,3,i,j,(i-1)^(j-1));Polrev(matsolve(M~*M,M~*V))</syntaxhighlight>
<small>Code thanks to [http://pari.math.u-bordeaux.fr/archives/pari-users-1105/msg00006.html Bill Allombert]</small>
{{out}}
Line 1,735:
Least-squares polynomial fit in its own function:
<syntaxhighlight lang="parigp">
lsf([0..10], [1,6,17,34,57,86,121,162,209,262,321], 2)</syntaxhighlight>
=={{header|Perl}}==
This code is identical to that of the [[Multiple regression]] task.
<syntaxhighlight lang="perl">
use warnings;
use Statistics::Regression;
Line 1,753:
my @coeff = $reg->theta();
printf "%-6s %8.3f\n", $model[$_], $coeff[$_] for 0..@model-1;</syntaxhighlight>
{{output}}
<pre>const 1.000
Line 1,760:
PDL Alternative:
<syntaxhighlight lang="perl">
use strict;
Line 1,785:
print "\n" unless($_)
}
</syntaxhighlight>
{{output}}
<pre> 3.00000037788248 * $x**2 + 1.99999750988868 * $x + 1.00000180493936</pre>
Line 1,794:
{{libheader|Phix/pGUI}}
You can run this online [http://phix.x10.mx/p2js/Polynomial_regression.htm here].
<!--<syntaxhighlight lang="phix">-->
<span style="color: #000080;font-style:italic;">-- demo\rosetta\Polynomial_regression.exw</span>
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
Line 1,866:
<span style="color: #7060A8;">IupClose</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<!--</syntaxhighlight>-->
{{out}}
(plus a simple graphical plot, as per [[Polynomial_regression#Racket|Racket]])
Line 1,887:
=={{header|PowerShell}}==
<syntaxhighlight lang="powershell">
function qr([double[][]]$A) {
$m,$n = $A.count, $A[0].count
Line 1,968:
"X^2 X constant"
"$(polyfit $x $y 2)"
</syntaxhighlight>
{{out}}
<pre>
Line 1,979:
{{libheader|NumPy}}
<syntaxhighlight lang="python">
>>> y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
>>> coeffs = numpy.polyfit(x,y,deg=2)
>>> coeffs
array([ 3., 2., 1.])</syntaxhighlight>
Substitute back received coefficients.
<syntaxhighlight lang="python">
>>> yf
array([ 1., 6., 17., 34., 57., 86., 121., 162., 209., 262., 321.])</syntaxhighlight>
Find max absolute error:
<syntaxhighlight lang="python">
'1e-013'</syntaxhighlight>
===Example===
For input arrays `x' and `y':
<syntaxhighlight lang="python">
>>> y = [2.7, 2.8, 31.4, 38.1, 58.0, 76.2, 100.5, 130.0, 149.3, 180.0]</syntaxhighlight>
<syntaxhighlight lang="python">
>>> print p
      2
1.085 N + 10.36 N - 0.6164</syntaxhighlight>
Thus we confirm once more that for already sorted sequences
the considered quick sort implementation has
Line 2,012:
which will find the least squares solution via a QR decomposition:
<syntaxhighlight lang="r">
x <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y <- c(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321)
coef(lm(y ~ x + I(x^2)))</syntaxhighlight>
{{out}}
Line 2,025:
Alternately, use poly:
<syntaxhighlight lang="r">coef(lm(y ~ poly(x, 2, raw = T)))</syntaxhighlight>
<pre> (Intercept) poly(x, 2, raw = T)1 poly(x, 2, raw = T)2
1 2 3</pre>
=={{header|Racket}}==
<syntaxhighlight lang="racket">
#lang racket
(require math plot)
Line 2,049:
(plot (list (points (map vector xs ys))
(function (poly (fit xs ys 2)))))
</syntaxhighlight>
{{out}}
[[File:polyreg-racket.png]]
Line 2,057:
We'll use a Clifford algebra library.
<syntaxhighlight lang="raku">
constant @x1 = <0 1 2 3 4 5 6 7 8 9 10>;
Line 2,076:
(($y ∧ $J)·$I.reversion)/$I2,
(($y ∧ ($x2 ∧ $x0))·$I.reversion)/$I2,
(($y ∧ ($x0 ∧ $x1))·$I.reversion)/$I2;</syntaxhighlight>
{{out}}
<pre>1
Line 2,084:
=={{header|REXX}}==
<syntaxhighlight lang="rexx">
* Implementation of http://keisan.casio.com/exec/system/14059932254941
*--------------------------------------------------------------------*/
Line 2,134:
fun:
Parse Arg x
Return a+b*x+c*x**2</syntaxhighlight>
{{out}}
<pre>y=1+2*x+3*x**2
Line 2,152:
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">
def regress x, y, degree
Line 2,161:
((mx.t * mx).inv * mx.t * my).transpose.to_a[0].map(&:to_f)
end</syntaxhighlight>
'''Testing:'''
<syntaxhighlight lang="ruby">
[1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321],
2)</syntaxhighlight>
{{out}}
<pre>[1.0, 2.0, 3.0]</pre>
Line 2,174:
{{libheader|Scastie qualified}}
{{works with|Scala|2.13}}
<syntaxhighlight lang="scala">
private def xy = Seq(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321).zipWithIndex.map(_.swap)
Line 2,207:
polyRegression(xy)
}</syntaxhighlight>
=={{header|Sidef}}==
{{trans|Ruby}}
<syntaxhighlight lang="sidef">
var A = Matrix.build(x.len, degree+1, {|i,j|
x[i]**j
Line 2,230:
)
say coeff</syntaxhighlight>
{{out}}
<pre>[1, 2, 3]</pre>
Line 2,236:
=={{header|Stata}}==
See '''[http://www.stata.com/help.cgi?fvvarlist Factor variables]''' in Stata help for explanations on the ''c.x##c.x'' syntax.
<syntaxhighlight lang="stata">
. input x y
0 1
Line 2,268:
|
_cons | 1 . . . . .
------------------------------------------------------------------------------</syntaxhighlight>
=={{header|Swift}}==
{{trans|Kotlin}}
<syntaxhighlight lang="swift">
let x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Line 2,315:
polyRegression(x: x, y: y)
</syntaxhighlight>
{{out}}
Line 2,340:
<!-- This implementation from Emiliano Gavilan;
posted here with his explicit permission -->
<syntaxhighlight lang="tcl">
proc build.matrix {xvec degree} {
Line 2,388:
set coeffs [math::linearalgebra::solveGauss $A $b]
# show results
puts $coeffs</syntaxhighlight>
This will print:
1.0000000000000207 1.9999999999999958 3.0
Line 2,394:
=={{header|TI-83 BASIC}}==
<syntaxhighlight lang="ti83b">
seq(X,X,0,10) → L1
{1,6,17,34,57,86,121,162,209,262,321} → L2
QuadReg L1,L2</syntaxhighlight>
{{out}}
Line 2,408:
=={{header|TI-89 BASIC}}==
<syntaxhighlight lang="ti89b">
seq(x,x,0,10) → xs
{1,6,17,34,57,86,121,162,209,262,321} → ys
QuadReg xs,ys
Disp regeq(x)</syntaxhighlight>
<code>seq(''expr'',''var'',''low'',''high'')</code> evaluates ''expr'' with ''var'' bound to integers from ''low'' to ''high'' and returns a list of the results. <code> →</code> is the assignment operator.
Line 2,430:
whereby the data can be passed as lists rather than arrays,
and all memory management is handled automatically.
<syntaxhighlight lang="ursala">
#import nat
#import flo
(fit "n") ("x","y") = ..dgelsd\"y" (gang \/*pow float*x iota successor "n")* "x"</syntaxhighlight>
test program:
<syntaxhighlight lang="ursala">
y = <1.,6.,17.,34.,57.,86.,121.,162.,209.,262.,321.>
#cast %eL
example = fit2(x,y)</syntaxhighlight>
{{out}}
<pre><3.000000e+00,2.000000e+00,1.000000e+00></pre>
Line 2,447:
=={{header|VBA}}==
Excel VBA has a built-in capability for line estimation.
<syntaxhighlight lang="vb">
Private Function polynomial_regression(y As Variant, x As Variant, degree As Integer) As Variant
Dim a() As Double
Line 2,476:
Debug.Print "Degrees of freedom:"; result(4, 2)
Debug.Print "Standard error of y estimate:"; result(3, 2)
End Sub</syntaxhighlight>
<pre>coefficients : 1, 2, 3,
standard errors: 0, 0, 0,
Line 2,490:
{{libheader|Wren-seq}}
{{libheader|Wren-fmt}}
<syntaxhighlight lang="wren">
import "/seq" for Lst
import "/fmt" for Fmt
Line 2,525:
for (i in 1..10) x[i] = i
var y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
polynomialRegression.call(x, y)</syntaxhighlight>
{{out}}
Line 2,548:
=={{header|zkl}}==
Using the GNU Scientific Library
<syntaxhighlight lang="zkl">
xs:=GSL.VectorFromData(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
ys:=GSL.VectorFromData(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321);
Line 2,554:
v.format().println();
GSL.Helpers.polyString(v).println();
GSL.Helpers.polyEval(v,xs).format().println();</syntaxhighlight>
{{out}}
<pre>
Line 2,566:
Example:
<syntaxhighlight lang="zkl">
T(T(1.0,6.0,17.0,34.0,57.0,86.0,121.0,162.0,209.0,262.0,321.0)), 2)
.flatten().println();</syntaxhighlight>
{{out}}<pre>L(1,2,3)</pre>