Polynomial regression: Difference between revisions
Line 15:
{{trans|Swift}}
<syntaxhighlight lang="11l">
R sum(arr) / Float(arr.len)
Line 47:
V x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
V y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
poly_regression(x, y)</syntaxhighlight>
{{out}}
Line 69:
=={{header|Ada}}==
<syntaxhighlight lang="ada">
function Fit (X, Y : Real_Vector; N : Positive) return Real_Vector is
Line 80:
end loop;
return Solve (A * Transpose (A), A * Y);
end Fit;</syntaxhighlight>
The function Fit implements least squares approximation of a function defined in the points specified by the arrays ''x''<sub>''i''</sub> and ''y''<sub>''i''</sub>. The basis functions are φ<sub>''j''</sub>(''x'') = ''x''<sup>''j''</sup>, ''j''=0,1,..,''N''. The implementation is straightforward: first the plane matrix A is created, with A<sub>''ji''</sub>=φ<sub>''j''</sub>(''x''<sub>''i''</sub>); then the linear problem AA<sup>''T''</sup>''c''=A''y'' is solved. The resulting ''c''<sub>''j''</sub> are the coefficients. Constraint_Error is propagated when the dimensions of X and Y differ, or when the problem is ill-defined.
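The construction just described (build the plane matrix, then solve the normal equations AA<sup>''T''</sup>''c''=A''y'') can be cross-checked outside Ada; the following NumPy sketch, added here for illustration and not part of the original entry, reproduces it for the task's data:

```python
import numpy as np

# Task data: y = 3x^2 + 2x + 1 sampled at x = 0..10
x = np.arange(11)
y = np.array([1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321])

# Plane matrix A with A[j][i] = phi_j(x_i) = x_i**j, j = 0..N
N = 2
A = np.vander(x, N + 1, increasing=True).T   # shape (N+1, 11)

# Solve the normal equations A A^T c = A y for the coefficients c
c = np.linalg.solve(A @ A.T, A @ y)
print(c)   # approximately [1. 2. 3.]
```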
===Example===
<syntaxhighlight lang="ada">
with Ada.Float_Text_IO; use Ada.Float_Text_IO;
Line 97:
Put (C (1), Aft => 3, Exp => 0);
Put (C (2), Aft => 3, Exp => 0);
end Fitting;</syntaxhighlight>
{{out}}
<pre>
Line 110:
<!-- {{does not work with|ELLA ALGOL 68|Any (with appropriate job cards AND formatted transput statements removed) - tested with release 1.8.8d.fc9.i386 - ELLA has no FORMATted transput}} -->
<syntaxhighlight lang="algol68">
MODE
Line 223:
);
print polynomial(d)
END # fitting #</syntaxhighlight>
{{out}}
<pre>
Line 232:
=={{header|AutoHotkey}}==
{{trans|Lua}}
<syntaxhighlight lang="autohotkey">
regression(xa,ya){
n := xa.Count()
Line 273:
eval(a,b,c,x){
return a + (b + c*x) * x
}</syntaxhighlight>
Examples:<syntaxhighlight lang="autohotkey">
ya := [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
MsgBox % result := regression(xa, ya)
return</syntaxhighlight>
{{out}}
<pre>y = 3.000000x^2 + 2.000000x + 1.000000
Line 297:
=={{header|AWK}}==
{{trans|Lua}}
<syntaxhighlight lang="awk">
BEGIN{
i = 0;
Line 382:
a = ym - b * xm - c * x2m;
}
</syntaxhighlight>
{{out}}
<pre>
Line 406:
and fits an order-5 polynomial, so the test data for this task
is hardly challenging!
<syntaxhighlight lang="bbcbasic">
Max% = 10000
Line 454:
FOR term% = 5 TO 0 STEP -1
PRINT ;vector(term%) " * x^" STR$(term%)
NEXT</syntaxhighlight>
{{out}}
<pre>
Line 470:
'''Include''' file (to make the code reusable easily) named <tt>polifitgsl.h</tt>
<syntaxhighlight lang="c">
#define _POLIFITGSL_H
#include <gsl/gsl_multifit.h>
Line 477:
bool polynomialfit(int obs, int degree,
double *dx, double *dy, double *store); /* n, p */
#endif</syntaxhighlight>
'''Implementation''' (the examples [http://www.gnu.org/software/gsl/manual/html_node/Fitting-Examples.html here] helped a lot to code this quickly):
<syntaxhighlight lang="c">
bool polynomialfit(int obs, int degree,
Line 519:
return true; /* we do not "analyse" the result (cov matrix mainly)
to know if the fit is "good" */
}</syntaxhighlight>
'''Testing''':
<syntaxhighlight lang="c">
#include "polifitgsl.h"
Line 541:
}
return 0;
}</syntaxhighlight>
{{out}}
<pre>1.000000
Line 549:
=={{header|C sharp|C#}}==
{{libheader|Math.Net}}
<syntaxhighlight lang="csharp">
{
// Vandermonde matrix
Line 563:
var p = r.Inverse().Multiply(q.TransposeThisAndMultiply(yv));
return p.Column(0).ToArray();
}</syntaxhighlight>
Example:
<syntaxhighlight lang="csharp">
{
const int degree = 2;
Line 576:
Console.WriteLine("{0} => {1} diff {2}", x[i], Polynomial.Evaluate(x[i], p), y[i] - Polynomial.Evaluate(x[i], p));
Console.ReadKey(true);
}</syntaxhighlight>
=={{header|C++}}==
{{trans|Java}}
<syntaxhighlight lang="cpp">
#include <iostream>
#include <numeric>
Line 641:
return 0;
}</syntaxhighlight>
{{out}}
<pre>y = 1 + 2x + 3x^2
Line 661:
Uses the routine (lsqr A b) from [[Multiple regression]] and (mtp A) from [[Matrix transposition]].
<syntaxhighlight lang="lisp">
(defun polyfit (x y n)
(let* ((m (cadr (array-dimensions x)))
Line 669:
(setf (aref A i j)
(expt (aref x 0 i) j))))
(lsqr A (mtp y))))</syntaxhighlight>
Example:
<syntaxhighlight lang="lisp">
(y (make-array '(1 11) :initial-contents '((1 6 17 34 57 86 121 162 209 262 321)))))
(polyfit x y 2))
#2A((0.9999999999999759d0) (2.000000000000005d0) (3.0d0))</syntaxhighlight>
=={{header|D}}==
{{trans|Kotlin}}
<syntaxhighlight lang="d">
import std.range;
import std.stdio;
Line 727:
auto y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
polyRegression(x, y);
}</syntaxhighlight>
{{out}}
<pre>y = 1 + 2x + 3x^2
Line 743:
9 262 262.0
10 321 321.0</pre>
=={{header|EasyLang}}==
{{trans|Lua}}
<syntaxhighlight lang=easylang>
func eval a b c x .
return a + (b + c * x) * x
.
proc regression xa[] ya[] . .
n = len xa[]
for i = 1 to n
xm = xm + xa[i]
ym = ym + ya[i]
x2m = x2m + xa[i] * xa[i]
x3m = x3m + xa[i] * xa[i] * xa[i]
x4m = x4m + xa[i] * xa[i] * xa[i] * xa[i]
xym = xym + xa[i] * ya[i]
x2ym = x2ym + xa[i] * xa[i] * ya[i]
.
xm = xm / n
ym = ym / n
x2m = x2m / n
x3m = x3m / n
x4m = x4m / n
xym = xym / n
x2ym = x2ym / n
#
sxx = x2m - xm * xm
sxy = xym - xm * ym
sxx2 = x3m - xm * x2m
sx2x2 = x4m - x2m * x2m
sx2y = x2ym - x2m * ym
#
b = (sxy * sx2x2 - sx2y * sxx2) / (sxx * sx2x2 - sxx2 * sxx2)
c = (sx2y * sxx - sxy * sxx2) / (sxx * sx2x2 - sxx2 * sxx2)
a = ym - b * xm - c * x2m
print "y = " & a & " + " & b & "x + " & c & "x^2"
numfmt 0 3
for i = 1 to n
print xa[i] & " " & ya[i] & " " & eval a b c xa[i]
.
.
xa[] = [ 0 1 2 3 4 5 6 7 8 9 10 ]
ya[] = [ 1 6 17 34 57 86 121 162 209 262 321 ]
regression xa[] ya[]
</syntaxhighlight>
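The moment formulas used above (sxx, sxy, sxx2, sx2x2, sx2y) are easy to check in any language; the following Python transcription is a sketch added for illustration, with helper names mirroring the EasyLang variables:

```python
def quad_regression(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the moment formulas."""
    n = len(xs)
    xm = sum(xs) / n
    ym = sum(ys) / n
    x2m = sum(x * x for x in xs) / n
    x3m = sum(x ** 3 for x in xs) / n
    x4m = sum(x ** 4 for x in xs) / n
    xym = sum(x * y for x, y in zip(xs, ys)) / n
    x2ym = sum(x * x * y for x, y in zip(xs, ys)) / n

    # Central moments, exactly as in the EasyLang code
    sxx = x2m - xm * xm
    sxy = xym - xm * ym
    sxx2 = x3m - xm * x2m
    sx2x2 = x4m - x2m * x2m
    sx2y = x2ym - x2m * ym

    det = sxx * sx2x2 - sxx2 * sxx2
    b = (sxy * sx2x2 - sx2y * sxx2) / det
    c = (sx2y * sxx - sxy * sxx2) / det
    a = ym - b * xm - c * x2m
    return a, b, c

print(quad_regression(range(11), [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]))
```

For the task data this returns (a, b, c) close to (1, 2, 3), matching the output above.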
=={{header|Emacs Lisp}}==
{{libheader|Calc}}
<syntaxhighlight lang="lisp">
(y '(1 6 17 34 57 86 121 162 209 262 321)))
(calc-eval "fit(a*x^2+b*x+c,[x],[a,b,c],[$1 $2])" nil (cons 'vec x) (cons 'vec y)))</syntaxhighlight>
{{out}}
Line 757 ⟶ 802:
=={{header|Fortran}}==
{{libheader|LAPACK}}
<syntaxhighlight lang="fortran">
contains
Line 820 ⟶ 865:
end function
end module</syntaxhighlight>
===Example===
<syntaxhighlight lang="fortran">
use fitting
implicit none
Line 841 ⟶ 886:
write (*, '(F9.4)') a
end program</syntaxhighlight>
{{out}} (lower powers first, so this seems the opposite of the Python output):
Line 852 ⟶ 897:
=={{header|FreeBASIC}}==
General regression for polynomials of any degree; shown here for degree 2 (3 terms).
<syntaxhighlight lang="freebasic">
Type vector
Line 1,039 ⟶ 1,084:
print show(ans())
sleep</syntaxhighlight>
{{out}}
<pre>+1 +2*x +3*x^2</pre>
=={{header|GAP}}==
<syntaxhighlight lang="gap">
local a;
a := List([0 .. n], i -> List(x, s -> s^i));
Line 1,055 ⟶ 1,100:
# Return coefficients in ascending degree order
PolynomialRegression(x, y, 2);
# [ 1, 2, 3 ]</syntaxhighlight>
=={{header|gnuplot}}==
<syntaxhighlight lang="gnuplot">
f(x) = a*x**2 + b*x + c
Line 1,082 ⟶ 1,127:
e
print sprintf("\n --- \n Polynomial fit: %.4f x^2 + %.4f x + %.4f\n", a, b, c)</syntaxhighlight>
=={{header|Go}}==
===Library gonum/matrix===
<syntaxhighlight lang="go">
import (
Line 1,126 ⟶ 1,171:
}
return x
}</syntaxhighlight>
{{out}}
<pre>
Line 1,136 ⟶ 1,181:
===Library go.matrix===
Least squares solution using QR decomposition and package [http://github.com/skelterjohn/go.matrix go.matrix].
<syntaxhighlight lang="go">
import (
Line 1,176 ⟶ 1,221:
}
fmt.Println(c)
}</syntaxhighlight>
{{out}} (lowest order coefficient first)
<pre>
Line 1,184 ⟶ 1,229:
=={{header|Haskell}}==
Uses module Matrix.LU from [http://hackage.haskell.org/package/dsp hackageDB DSP]
<syntaxhighlight lang="haskell">
import Data.Array
import Control.Monad
Line 1,194 ⟶ 1,239:
polyfit d ry = elems $ solve mat vec where
mat = listArray ((1,1), (d,d)) $ liftM2 concatMap ppoly id [0..fromIntegral $ pred d]
vec = listArray (1,d) $ take d ry</syntaxhighlight>
{{out}} in GHCi:
<syntaxhighlight lang="haskell">
[1.0,2.0,3.0]</syntaxhighlight>
=={{header|HicEst}}==
<syntaxhighlight lang="hicest">
x = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
Line 1,214 ⟶ 1,259:
! called by the solver of the SOLVE function. All variables are global
Theory = p(1)*x(nr)^2 + p(2)*x(nr) + p(3)
END</syntaxhighlight>
{{out}}
<pre>SOLVE performs a (nonlinear) least-square fit (Levenberg-Marquardt):
Line 1,220 ⟶ 1,265:
=={{header|Hy}}==
<syntaxhighlight lang="hy">
(setv x (range 11))
(setv y [1 6 17 34 57 86 121 162 209 262 321])
(print (polyfit x y 2))</syntaxhighlight>
=={{header|J}}==
<syntaxhighlight lang="j">
(%. ^/~@x:@i.@#) Y
1 2 3 0 0 0 0 0 0 0 0</syntaxhighlight>
Note that this implementation does not use floating point numbers,
Line 1,238 ⟶ 1,283:
but for small problems like this it is inconsequential.
The above solution fits a polynomial of order 11 (or, more specifically, a polynomial whose order matches the length of its argument sequence).
If the order of the polynomial is known to be 3
(as is implied in the task description)
then the following solution is probably preferable:
<syntaxhighlight lang="j">
1 2 3</syntaxhighlight>
(Note that this time we used floating point numbers, so the result is approximate rather than exact; it only looks exact because of how J displays floating point numbers (by default, J assumes six digits of accuracy). Changing (i.3) to (x:i.3) would give an exact result, if that mattered.)
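The exact-arithmetic point generalizes beyond J. As an illustrative sketch, not part of the original entry, Python's fractions module can solve the normal equations over the rationals and recover the coefficients exactly:

```python
from fractions import Fraction

def polyfit_exact(xs, ys, deg):
    """Solve the degree-`deg` normal equations exactly over the rationals
    using Gauss-Jordan elimination on Fraction entries."""
    m = deg + 1
    # Normal equations: sum_k x_k^(i+j) * c_j = sum_k x_k^i * y_k
    A = [[Fraction(sum(x ** (i + j) for x in xs)) for j in range(m)]
         for i in range(m)]
    b = [Fraction(sum(x ** i * y for x, y in zip(xs, ys))) for i in range(m)]
    for col in range(m):
        # Pick any row with a nonzero pivot (exact arithmetic, so != 0 suffices)
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(m)]

print(polyfit_exact(range(11), [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321], 2))
# → [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]
```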
Line 1,249 ⟶ 1,294:
{{trans|D}}
{{works with|Java|8}}
<syntaxhighlight lang="java">
import java.util.function.IntToDoubleFunction;
import java.util.stream.IntStream;
Line 1,256 ⟶ 1,301:
private static void polyRegression(int[] x, int[] y) {
int n = x.length;
double xm = Arrays.stream(x).average().orElse(Double.NaN);
double ym = Arrays.stream(y).average().orElse(Double.NaN);
        double x2m = Arrays.stream(x).mapToDouble(a -> (double) a * a).average().orElse(Double.NaN);
        double x3m = Arrays.stream(x).mapToDouble(a -> (double) a * a * a).average().orElse(Double.NaN);
        double x4m = Arrays.stream(x).mapToDouble(a -> (double) a * a * a * a).average().orElse(Double.NaN);
double xym = 0.0;
for (int i = 0; i < x.length && i < y.length; ++i) {
Line 1,298 ⟶ 1,342:
polyRegression(x, y);
}
}</syntaxhighlight>
{{out}}
<pre>y = 1.0 + 2.0x + 3.0x^2
Line 1,314 ⟶ 1,358:
9 262 262.0
10 321 321.0</pre>
=={{header|jq}}==
'''Adapted from [[#Wren|Wren]]'''
'''Works with jq, the C implementation of jq'''
'''Works with gojq, the Go implementation of jq'''
'''Works with jaq, the Rust implementation of jq'''
<syntaxhighlight lang="jq">
def mean: add/length;
def inner_product($y):
. as $x
| reduce range(0; length) as $i (0; . + ($x[$i] * $y[$i]));
# $x and $y should be arrays of the same length
# Emit { a, b, c, z}
# Attempt to avoid overflow
def polynomialRegression($x; $y):
($x | length) as $length
| ($length * $length) as $l2
| ($x | map(./$length)) as $xs
| ($xs | add) as $xm
| ($y | mean) as $ym
| ($xs | map(. * .) | add * $length) as $x2m
| ($x | map( (./$length) * . * .) | add) as $x3m
| ($xs | map(. * . | (.*.) ) | add * $l2 * $length) as $x4m
| ($xs | inner_product($y)) as $xym
| ($xs | map(. * .) | inner_product($y) * $length) as $x2ym
| ($x2m - $xm * $xm) as $sxx
| ($xym - $xm * $ym) as $sxy
| ($x3m - $xm * $x2m) as $sxx2
| ($x4m - $x2m * $x2m) as $sx2x2
| ($x2ym - $x2m * $ym) as $sx2y
| {z: ([$x,$y] | transpose) }
| .b = ($sxy * $sx2x2 - $sx2y * $sxx2) / ($sxx * $sx2x2 - $sxx2 * $sxx2)
| .c = ($sx2y * $sxx - $sxy * $sxx2) / ($sxx * $sx2x2 - $sxx2 * $sxx2)
| .a = $ym - .b * $xm - .c * $x2m ;
# Input: {a,b,c,z}
def report:
def lpad($len): tostring | ($len - length) as $l | (" " * $l) + .;
def abc($x): .a + .b * $x + .c * $x * $x;
def print($p): "\($p[0] | lpad(3)) \($p[1] | lpad(4)) \(abc($p[0]) | lpad(5))";
"y = \(.a) + \(.b)x + \(.c)x^2\n",
" Input Approximation",
" x y y\u0302",
print(.z[]) ;
def x: [range(0;11)];
def y: [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
polynomialRegression(x; y)
| report
</syntaxhighlight>
{{output}}
<pre>
y = 1 + 2x + 3x^2
Input Approximation
x y ŷ
0 1 1
1 6 6
2 17 17
3 34 34
4 57 57
5 86 86
6 121 121
7 162 162
8 209 209
9 262 262
10 321 321
</pre>
=={{header|Julia}}==
Line 1,319 ⟶ 1,439:
The least-squares fit problem for a degree <i>n</i>
can be solved with the built-in backslash operator (coefficients in increasing order of degree):
<syntaxhighlight lang="julia">
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
@show polyfit(x, y, 2)</syntaxhighlight>
{{out}}
Line 1,330 ⟶ 1,450:
=={{header|Kotlin}}==
{{trans|REXX}}
<syntaxhighlight lang="kotlin">
fun polyRegression(x: IntArray, y: IntArray) {
Line 1,365 ⟶ 1,485:
val y = intArrayOf(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321)
polyRegression(x, y)
}</syntaxhighlight>
{{out}}
Line 1,388 ⟶ 1,508:
=={{header|Lua}}==
{{trans|Modula-2}}
<syntaxhighlight lang="lua">
return a + (b + c * x) * x
end
Line 1,439 ⟶ 1,559:
local xa = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
local ya = {1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321}
regression(xa, ya)</syntaxhighlight>
{{out}}
<pre>y = 1 + 2x + 3x^2
Line 1,455 ⟶ 1,575:
=={{header|Maple}}==
<syntaxhighlight lang="maple">
PolynomialInterpolation([[0, 1], [1, 6], [2, 17], [3, 34], [4, 57], [5, 86], [6, 121], [7, 162], [8, 209], [9, 262], [10, 321]], 'x');
</syntaxhighlight>
Result:
<pre>3*x^2+2*x+1</pre>
Line 1,463 ⟶ 1,583:
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Using the built-in "Fit" function.
<syntaxhighlight lang="mathematica">
Fit[data, {1, x, x^2}, x]</syntaxhighlight>
Second version: using built-in "InterpolatingPolynomial" function.
<
WolframAlpha version:
<syntaxhighlight lang="mathematica">curve fit (0,1), (1,6), (2,17), (3,34), (4,57), (5,86), (6,121), (7,162), (8,209), (9,262), (10,321)</syntaxhighlight>
Result:
<pre>1 + 2x + 3x^2</pre>
Line 1,475 ⟶ 1,597:
The output of this function is the coefficients of the polynomial which best fit these x,y value pairs.
<syntaxhighlight lang="matlab">
>> y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
>> polyfit(x,y,2)
Line 1,481 ⟶ 1,603:
ans =
2.999999999999998 2.000000000000019 0.999999999999956</syntaxhighlight>
=={{header|МК-61/52}}==
Part 1:
<syntaxhighlight lang="text">ПC С/П ПD ИП9 + П9 ИПC ИП5 + П5
ИПC x^2 П2 ИП6 + П6 ИП2 ИПC * ИП7
+ П7 ИП2 x^2 ИП8 + П8 ИПC ИПD *
ИПA + ПA ИП2 ИПD * ИПB + ПB ИПD
КИП4 С/П БП 00</syntaxhighlight>
''Input'': В/О x<sub>1</sub> С/П y<sub>1</sub> С/П x<sub>2</sub> С/П y<sub>2</sub> С/П ...
Part 2:
<syntaxhighlight lang="text">ИП5 ПC ИП6 ПD П2 ИП7 П3 ИП4 ИПD *
ИПC ИП5 * - ПD ИП4 ИП7 * ИПC ИП6
* - П7 ИП4 ИПA * ИПC ИП9 * -
Line 1,502 ⟶ 1,624:
ИПD ИП8 * ИП7 ИП3 * - / ПB ИПA
ИПB ИП7 * - ИПD / ПA ИП9 ИПB ИП6
* - ИПA ИП5 * - ИП4 / П9 С/П</syntaxhighlight>
''Result'': Р9 = a<sub>0</sub>, РA = a<sub>1</sub>, РB = a<sub>2</sub>.
=={{header|Modula-2}}==
<syntaxhighlight lang="modula2">
FROM FormatString IMPORT FormatString;
FROM RealStr IMPORT RealToStr;
Line 1,593 ⟶ 1,715:
ReadChar;
END PolynomialRegression.</syntaxhighlight>
=={{header|Nim}}==
{{trans|Kotlin}}
<syntaxhighlight lang="nim">
proc polyRegression(x, y: openArray[int]) =
Line 1,630 ⟶ 1,752:
let x = toSeq(0..10)
let y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
polyRegression(x, y)</syntaxhighlight>
{{out}}
Line 1,652 ⟶ 1,774:
{{trans|Kotlin}}
{{libheader|Base}}
<syntaxhighlight lang="ocaml">
open Stdio
Line 1,692 ⟶ 1,814:
let x = Array.init 11 ~f:Float.of_int in
let y = [| 1.; 6.; 17.; 34.; 57.; 86.; 121.; 162.; 209.; 262.; 321. |] in
regression x y</syntaxhighlight>
{{out}}
Line 1,715 ⟶ 1,837:
=={{header|Octave}}==
<syntaxhighlight lang="octave">
y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321];
coeffs = polyfit(x, y, 2)</syntaxhighlight>
=={{header|PARI/GP}}==
Lagrange interpolating polynomial:
<
In newer versions, this can be abbreviated:
<
{{out}}
<pre>3*x^2 + 2*x + 1</pre>
Least-squares fit:
<syntaxhighlight lang="parigp">
M=matrix(#V,3,i,j,(i-1)^(j-1));Polrev(matsolve(M~*M,M~*V))</syntaxhighlight>
<small>Code thanks to [http://pari.math.u-bordeaux.fr/archives/pari-users-1105/msg00006.html Bill Allombert]</small>
{{out}}
Line 1,735 ⟶ 1,857:
Least-squares polynomial fit in its own function:
<syntaxhighlight lang="parigp">
lsf([0..10], [1,6,17,34,57,86,121,162,209,262,321], 2)</syntaxhighlight>
=={{header|Perl}}==
This code is identical to that of the [[Multiple regression]] task.
<syntaxhighlight lang="perl">
use warnings;
use Statistics::Regression;
Line 1,753 ⟶ 1,875:
my @coeff = $reg->theta();
printf "%-6s %8.3f\n", $model[$_], $coeff[$_] for 0..@model-1;</syntaxhighlight>
{{output}}
<pre>const 1.000
Line 1,760 ⟶ 1,882:
PDL Alternative:
<syntaxhighlight lang="perl">
use strict;
Line 1,785 ⟶ 1,907:
print "\n" unless($_)
}
</syntaxhighlight>
{{output}}
<pre> 3.00000037788248 * $x**2 + 1.99999750988868 * $x + 1.00000180493936</pre>
Line 1,794 ⟶ 1,916:
{{libheader|Phix/pGUI}}
You can run this online [http://phix.x10.mx/p2js/Polynomial_regression.htm here].
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #000080;font-style:italic;">-- demo\rosetta\Polynomial_regression.exw</span>
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
Line 1,866 ⟶ 1,988:
<span style="color: #7060A8;">IupClose</span><span style="color: #0000FF;">()</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<!--</syntaxhighlight>-->
{{out}}
(plus a simple graphical plot, as per [[Polynomial_regression#Racket|Racket]])
Line 1,887 ⟶ 2,009:
=={{header|PowerShell}}==
<syntaxhighlight lang="powershell">
function qr([double[][]]$A) {
$m,$n = $A.count, $A[0].count
Line 1,968 ⟶ 2,090:
"X^2 X constant"
"$(polyfit $x $y 2)"
</syntaxhighlight>
{{out}}
<pre>
Line 1,979 ⟶ 2,101:
{{libheader|NumPy}}
<syntaxhighlight lang="python">
>>> y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
>>> coeffs = numpy.polyfit(x,y,deg=2)
>>> coeffs
array([ 3., 2., 1.])</syntaxhighlight>
Substitute back received coefficients.
<syntaxhighlight lang="python">
>>> yf
array([ 1., 6., 17., 34., 57., 86., 121., 162., 209., 262., 321.])</syntaxhighlight>
Find max absolute error:
<syntaxhighlight lang="python">
'1e-013'</syntaxhighlight>
===Example===
For input arrays `x' and `y':
<syntaxhighlight lang="python">
>>> y = [2.7, 2.8, 31.4, 38.1, 58.0, 76.2, 100.5, 130.0, 149.3, 180.0]</syntaxhighlight>
<syntaxhighlight lang="python">
>>> print p
2
1.085 N + 10.36 N - 0.6164</syntaxhighlight>
Thus we confirm once more that for already sorted sequences
the considered quick sort implementation has
Line 2,012 ⟶ 2,134:
which will find the least squares solution via a QR decomposition:
<syntaxhighlight lang="r">
x <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
y <- c(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321)
coef(lm(y ~ x + I(x^2)))</syntaxhighlight>
{{out}}
Line 2,025 ⟶ 2,147:
Alternately, use poly:
<syntaxhighlight lang="r">coef(lm(y ~ poly(x, 2, raw = T)))</syntaxhighlight>
<pre> (Intercept) poly(x, 2, raw = T)1 poly(x, 2, raw = T)2
1 2 3</pre>
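R's lm solves the least-squares problem via a QR decomposition rather than the explicit normal equations; the same numerically safer route can be sketched with NumPy's lstsq (an illustrative addition, not part of the original R entry):

```python
import numpy as np

x = np.arange(11.0)
y = np.array([1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321], dtype=float)

# Design matrix with columns 1, x, x^2; lstsq minimizes ||X beta - y||_2
# via an orthogonal factorization, avoiding the explicit normal equations
X = np.column_stack([np.ones_like(x), x, x * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # approximately [1. 2. 3.]
```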
=={{header|Racket}}==
<syntaxhighlight lang="racket">
#lang racket
(require math plot)
Line 2,049 ⟶ 2,171:
(plot (list (points (map vector xs ys))
(function (poly (fit xs ys 2)))))
</syntaxhighlight>
{{out}}
[[File:polyreg-racket.png]]
Line 2,055 ⟶ 2,177:
=={{header|Raku}}==
(formerly Perl 6)
We'll use a Clifford algebra library. Very slow.
Rationale:
The system of equations can be written as
<math>\left(a + b x_i + cx_i^2 = y_i\right)_{i=1\ldots N}</math>, where <math>(a,b,c)\in\mathbb{R}^3</math> is the unknown. Working in <math>\mathbb{R}^N</math>, we spread each equation along its own dimension:
<math> (a + b x_i + cx_i^2)\mathbf{e}_i = y_i\mathbf{e}_i</math>
Now define:
<math>
\mathbf{x}_0 = \sum_{i=1}^N \mathbf{e}_i,\,
\mathbf{x}_1 = \sum_{i=1}^N x_i\mathbf{e}_i,\,
\mathbf{x}_2 = \sum_{i=1}^N x_i^2\mathbf{e}_i,\,
\mathbf{y} = \sum_{i=1}^N y_i\mathbf{e}_i
</math>
The system of equations becomes <math>a\mathbf{x}_0+b\mathbf{x}_1+c\mathbf{x}_2 = \mathbf{y}</math>.
Hence:
<math>\begin{align}
a = \mathbf{y}\and\mathbf{x}_1\and\mathbf{x}_2/(\mathbf{x}_0\and\mathbf{x_1}\and\mathbf{x_2})\\
b = \mathbf{y}\and\mathbf{x}_2\and\mathbf{x}_0/(\mathbf{x}_1\and\mathbf{x_2}\and\mathbf{x_0})\\
c = \mathbf{y}\and\mathbf{x}_0\and\mathbf{x}_1/(\mathbf{x}_2\and\mathbf{x_0}\and\mathbf{x_1})\\
\end{align}</math>
<syntaxhighlight lang="raku" line>use MultiVector;
constant @x1 = <0 1 2 3 4 5 6 7 8 9 10>;
Line 2,067 ⟶ 2,214:
constant $y = [+] @y Z* @e;
.say for
$y∧$x1∧$x2/($x0∧$x1∧$x2),
$y∧$x2∧$x0/($x1∧$x2∧$x0),
$y∧$x0∧$x1/($x2∧$x0∧$x1);
</syntaxhighlight>
{{out}}
<pre>1
Line 2,084 ⟶ 2,227:
=={{header|REXX}}==
<syntaxhighlight lang="rexx">
* Implementation of http://keisan.casio.com/exec/system/14059932254941
*--------------------------------------------------------------------*/
Line 2,134 ⟶ 2,277:
fun:
Parse Arg x
Return a+b*x+c*x**2 </syntaxhighlight>
{{out}}
<pre>y=1+2*x+3*x**2
Line 2,150 ⟶ 2,293:
9 262 262.000
10 321 321.000</pre>
=={{header|RPL}}==
{{trans|Ada}}
≪ 1 + → x y n
≪ { } n + x SIZE + 0 CON
1 x SIZE '''FOR''' j
1 n '''FOR''' k
{ } k + j + x j GET k 1 - ^ PUT
'''NEXT NEXT'''
DUP y * SWAP DUP TRN * /
<span style="color:grey">@ the following lines convert the resulting vector into a polynomial equation</span>
DUP 'x' STO 1 GET
2 x SIZE '''FOR''' j 'X' * x j GET + '''NEXT'''
EXPAN COLCT
≫ ≫ '<span style="color:blue">FIT</span>' STO
[1 2 3 4 5 6 7 8 9 10] [1 6 17 34 57 86 121 162 209 262 321] 2 <span style="color:blue">FIT</span>
{{out}}
<pre>
1: '3+2*X+1*X^2'
</pre>
=={{header|Ruby}}==
<syntaxhighlight lang="ruby">
def regress x, y, degree
Line 2,161 ⟶ 2,325:
((mx.t * mx).inv * mx.t * my).transpose.to_a[0].map(&:to_f)
end</syntaxhighlight>
'''Testing:'''
<syntaxhighlight lang="ruby">
[1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321],
2)</syntaxhighlight>
{{out}}
<pre>[1.0, 2.0, 3.0]</pre>
Line 2,174 ⟶ 2,338:
{{libheader|Scastie qualified}}
{{works with|Scala|2.13}}
<syntaxhighlight lang="scala">
private def xy = Seq(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321).zipWithIndex.map(_.swap)
Line 2,207 ⟶ 2,371:
polyRegression(xy)
}</syntaxhighlight>
=={{header|Sidef}}==
{{trans|Ruby}}
<syntaxhighlight lang="ruby">
var A = Matrix.build(x.len, degree+1, {|i,j|
x[i]**j
Line 2,230 ⟶ 2,394:
)
say coeff</syntaxhighlight>
{{out}}
<pre>[1, 2, 3]</pre>
Line 2,236 ⟶ 2,400:
=={{header|Stata}}==
See '''[http://www.stata.com/help.cgi?fvvarlist Factor variables]''' in Stata help for explanations on the ''c.x##c.x'' syntax.
<syntaxhighlight lang="stata">
. input x y
0 1
Line 2,268 ⟶ 2,432:
|
_cons | 1 . . . . .
------------------------------------------------------------------------------</syntaxhighlight>
=={{header|Swift}}==
{{trans|Kotlin}}
<syntaxhighlight lang="swift">
let x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Line 2,315 ⟶ 2,479:
polyRegression(x: x, y: y)
</syntaxhighlight>
{{out}}
Line 2,340 ⟶ 2,504:
<!-- This implementation from Emiliano Gavilan;
posted here with his explicit permission -->
<syntaxhighlight lang="tcl">
proc build.matrix {xvec degree} {
Line 2,388 ⟶ 2,552:
set coeffs [math::linearalgebra::solveGauss $A $b]
# show results
puts $coeffs</syntaxhighlight>
This will print:
1.0000000000000207 1.9999999999999958 3.0
Line 2,394 ⟶ 2,558:
=={{header|TI-83 BASIC}}==
<syntaxhighlight lang="ti83b">
seq(X,X,0,10) → L1
{1,6,17,34,57,86,121,162,209,262,321} → L2
QuadReg L1,L2</syntaxhighlight>
{{out}}
Line 2,408 ⟶ 2,572:
=={{header|TI-89 BASIC}}==
<syntaxhighlight lang="ti89b">
seq(x,x,0,10) → xs
{1,6,17,34,57,86,121,162,209,262,321} → ys
QuadReg xs,ys
Disp regeq(x)</syntaxhighlight>
<code>seq(''expr'',''var'',''low'',''high'')</code> evaluates ''expr'' with ''var'' bound to integers from ''low'' to ''high'' and returns a list of the results. <code> →</code> is the assignment operator.
Line 2,430 ⟶ 2,594:
whereby the data can be passed as lists rather than arrays,
and all memory management is handled automatically.
<syntaxhighlight lang="ursala">
#import nat
#import flo
(fit "n") ("x","y") = ..dgelsd\"y" (gang \/*pow float*x iota successor "n")* "x"</syntaxhighlight>
test program:
<syntaxhighlight lang="ursala">
y = <1.,6.,17.,34.,57.,86.,121.,162.,209.,262.,321.>
#cast %eL
example = fit2(x,y)</syntaxhighlight>
{{out}}
<pre><3.000000e+00,2.000000e+00,1.000000e+00></pre>
Line 2,447 ⟶ 2,611:
=={{header|VBA}}==
Excel VBA has built in capability for line estimation.
<syntaxhighlight lang="vb">
Private Function polynomial_regression(y As Variant, x As Variant, degree As Integer) As Variant
Dim a() As Double
Line 2,476 ⟶ 2,640:
Debug.Print "Degrees of freedom:"; result(4, 2)
Debug.Print "Standard error of y estimate:"; result(3, 2)
End Sub</syntaxhighlight>
<pre>coefficients : 1, 2, 3,
standard errors: 0, 0, 0,
Line 2,490 ⟶ 2,654:
{{libheader|Wren-seq}}
{{libheader|Wren-fmt}}
<syntaxhighlight lang="wren">
import "./seq" for Lst
import "./fmt" for Fmt
var polynomialRegression = Fn.new { |x, y|
Line 2,525 ⟶ 2,689:
for (i in 1..10) x[i] = i
var y = [1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321]
polynomialRegression.call(x, y)</syntaxhighlight>
{{out}}
Line 2,548 ⟶ 2,712:
=={{header|zkl}}==
Using the GNU Scientific Library
<syntaxhighlight lang="zkl">
xs:=GSL.VectorFromData(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
ys:=GSL.VectorFromData(1, 6, 17, 34, 57, 86, 121, 162, 209, 262, 321);
Line 2,554 ⟶ 2,718:
v.format().println();
GSL.Helpers.polyString(v).println();
GSL.Helpers.polyEval(v,xs).format().println();</syntaxhighlight>
{{out}}
<pre>
Line 2,566 ⟶ 2,730:
Example:
<syntaxhighlight lang="zkl">
T(T(1.0,6.0,17.0,34.0,57.0,86.0,121.0,162.0,209.0,262.0,321.0)), 2)
.flatten().println();</syntaxhighlight>
{{out}}<pre>L(1,2,3)</pre>