Multiple regression

Line 20:
 
matrices.ads:
<syntaxhighlight lang="ada">generic
type Element_Type is private;
Zero : Element_Type;
Line 57:
Not_Square_Matrix : exception;
Not_Invertible : exception;
end Matrices;</syntaxhighlight>
 
matrices.adb:
<syntaxhighlight lang="ada">package body Matrices is
function "*" (Left, Right : Matrix) return Matrix is
Result : Matrix (Left'Range (1), Right'Range (2)) :=
Line 248:
return Result;
end Transpose;
end Matrices;</syntaxhighlight>
 
Example multiple_regression.adb:
<syntaxhighlight lang="ada">with Ada.Text_IO;
with Matrices;
procedure Multiple_Regression is
Line 318:
Output_Matrix (Float_Matrices.To_Matrix (Coefficients));
end;
end Multiple_Regression;</syntaxhighlight>
 
{{out}}
Line 361:
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
<langsyntaxhighlight lang="bbcbasic"> *FLOAT 64
INSTALL @lib$+"ARRAYLIB"
Line 389:
t() = t().m()
c() = y().t()
ENDPROC</syntaxhighlight>
{{out}}
<pre>
Line 396:
 
=={{header|C}}==
Using GNU gsl and c99, with the WP data<syntaxhighlight lang="c">#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_math.h>
Line 439:
gsl_multifit_linear_free(wspc);
 
}</syntaxhighlight>
 
=={{header|C++}}==
{{trans|Java}}
<langsyntaxhighlight lang="cpp">#include <array>
#include <iostream>
 
Line 669:
 
return 0;
}</syntaxhighlight>
{{out}}
<pre>[0.981818]
Line 677:
=={{header|C sharp|C#}}==
{{libheader|Math.Net}}
<langsyntaxhighlight lang="csharp">using System;
using MathNet.Numerics.LinearRegression;
using MathNet.Numerics.LinearAlgebra;
Line 694:
Console.WriteLine(β);
}
}</syntaxhighlight>
 
{{out}}
Line 706:
Uses the routine (chol A) from [[Cholesky decomposition]], (mmul A B) from [[Matrix multiplication]], (mtp A) from [[Matrix transposition]].
 
<langsyntaxhighlight lang="lisp">
;; Solve a linear system AX=B where A is symmetric and positive definite, so it can be Cholesky decomposed.
(defun linsys (A B)
Line 740:
(linsys (mmul (mtp A) A)
(mmul (mtp A) b)))
</syntaxhighlight>
 
To show an example of multiple regression, (polyfit x y n) from [[Polynomial regression]], which itself uses (linsys A B) and (lsqr A b), will be used to fit a second-degree polynomial to the data.
 
<langsyntaxhighlight lang="lisp">(let ((x (make-array '(1 11) :initial-contents '((0 1 2 3 4 5 6 7 8 9 10))))
(y (make-array '(1 11) :initial-contents '((1 6 17 34 57 86 121 162 209 262 321)))))
(polyfit x y 2))
#2A((0.9999999999999759d0) (2.000000000000005d0) (3.0d0))</syntaxhighlight>
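The data here satisfy <math>y = 1 + 2x + 3x^2</math> exactly, so the fit recovers the coefficients 1, 2 and 3 up to rounding error.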
 
=={{header|D}}==
{{trans|Java}}
<langsyntaxhighlight lang="d">import std.algorithm;
import std.array;
import std.exception;
Line 975:
v = multipleRegression(y, x);
v.writeln;
}</syntaxhighlight>
{{out}}
<pre>[0.981818]
Line 985:
{{libheader|calc}}
 
<langsyntaxhighlight lang="lisp">(let ((x1 '(0 1 2 3 4 5 6 7 8 9 10))
(x2 '(0 1 1 3 3 7 6 7 3 9 8))
(y '(1 6 17 34 57 86 121 162 209 262 321)))
(apply #'calc-eval "fit(a*X1+b*X2+c,[X1,X2],[a,b,c],[$1 $2 $3])" nil
(mapcar (lambda (items) (cons 'vec items)) (list x1 x2 y))))</syntaxhighlight>
 
{{out}}
Line 996:
 
=={{header|ERRE}}==
<syntaxhighlight lang="erre">PROGRAM MULTIPLE_REGRESSION
 
!$DOUBLE
Line 1,077:
END FOR
 
END PROGRAM</syntaxhighlight>
{{out}}
<pre>LINEAR SYSTEM COEFFICENTS
Line 1,097:
{{libheader|SLATEC}} [http://netlib.org/slatec/ Available at the Netlib]
 
<syntaxhighlight lang="fortran">*-----------------------------------------------------------------------
* MR - multiple regression using the SLATEC library routine DHFTI
*
Line 1,182:
STOP 'program complete'
END
</syntaxhighlight>
{{out}}
<pre>
Line 1,192:
The [http://en.wikipedia.org/wiki/Ordinary_least_squares#Example_with_real_data example] on WP happens to be a polynomial regression example, so code from the [[Polynomial regression]] task can be reused here. The only difference is that the givens x and y are computed in a separate function, as a task prerequisite.
===Library gonum/matrix===
<langsyntaxhighlight lang="go">package main
 
import (
Line 1,224:
x, y := givens()
fmt.Printf("%.4f\n", mat64.Formatted(mat64.QR(x).Solve(y)))
}</syntaxhighlight>
{{out}}
<pre>
Line 1,232:
</pre>
===Library go.matrix===
<langsyntaxhighlight lang="go">package main
 
import (
Line 1,277:
}
fmt.Println(c)
}</syntaxhighlight>
{{out}}
<pre>
Line 1,285:
=={{header|Haskell}}==
Using package [http://hackage.haskell.org/package/hmatrix hmatrix] from HackageDB
<langsyntaxhighlight lang="haskell">import Numeric.LinearAlgebra
import Numeric.LinearAlgebra.LAPACK
 
Line 1,296:
v :: Matrix Double
v = (3><1)
[1.745005,-4.448092,-4.160842]</syntaxhighlight>
Using lapack::dgels
<langsyntaxhighlight lang="haskell">*Main> linearSolveLSR m v
(3><1)
[ 0.9335611922087276
, 1.101323491272865
, 1.6117769115824 ]</syntaxhighlight>
Or
<langsyntaxhighlight lang="haskell">*Main> inv m `multiply` v
(3><1)
[ 0.9335611922087278
, 1.101323491272865
, 1.6117769115824006 ]</syntaxhighlight>
 
=={{header|Hy}}==
<langsyntaxhighlight lang="lisp">(import
[numpy [ones column-stack]]
[numpy.random [randn]]
Line 1,323:
(print (first (lstsq
(column-stack (, (ones n) x1 x2 (* x1 x2)))
y)))</syntaxhighlight>
 
=={{header|J}}==
 
<langsyntaxhighlight lang="j"> NB. Wikipedia data
x=: 1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83
y=: 52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46
 
y %. x ^/ i.3 NB. calculate coefficients b1, b2 and b3 for 2nd degree polynomial
128.813 _143.162 61.9603</syntaxhighlight>
 
Breaking it down:
<langsyntaxhighlight lang="j"> X=: x ^/ i.3 NB. form Design matrix
X=: (x^0) ,. (x^1) ,. (x^2) NB. equivalent of previous line
4{.X NB. show first 4 rows of X
Line 1,346:
NB. y %. X does matrix division and gives the regression coefficients
y %. X
128.813 _143.162 61.9603</syntaxhighlight>
In other words <tt> beta=: y %. X </tt> is the equivalent of:<br>
<math> \hat\beta = (X'X)^{-1}X'y</math><br>
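This closed form comes from minimising the residual sum of squares <math>\|y - X\beta\|^2</math>: setting its gradient <math>2X'X\beta - 2X'y</math> to zero gives the normal equations <math>X'X\hat\beta = X'y</math>, which have the solution above whenever <math>X'X</math> is invertible.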
 
To confirm:
<langsyntaxhighlight lang="j"> mp=: +/ .* NB. matrix product
NB. %.X is matrix inverse of X
NB. |:X is transpose of X
Line 1,360:
X (%.@:xpy@[ mp xpy) y
128.814 _143.163 61.9606
</syntaxhighlight>
 
LAPACK routines are also available via the Addon <tt>math/lapack</tt>.
<langsyntaxhighlight lang="j"> load 'math/lapack'
load 'math/lapack/gels'
gels_jlapack_ X;y
128.813 _143.162 61.9603</syntaxhighlight>
 
=={{header|Java}}==
{{trans|Kotlin}}
<langsyntaxhighlight lang="java">import java.util.Arrays;
import java.util.Objects;
 
Line 1,578:
printVector(v);
}
}</syntaxhighlight>
{{out}}
<pre>[0.9818181818181818]
Line 1,593:
Extends the Matrix class from [[Matrix Transpose#JavaScript]], [[Matrix multiplication#JavaScript]], [[Reduced row echelon form#JavaScript]].
Uses the IdentityMatrix from [[Matrix exponentiation operator#JavaScript]].
<langsyntaxhighlight lang="javascript">// modifies the matrix "in place"
Matrix.prototype.inverse = function() {
if (this.height != this.width) {
Line 1,639:
)
);
print(y.regression_coefficients(x));</syntaxhighlight>
{{out}}
<pre>0.9818181818181818
Line 1,656:
 
'''Preliminaries'''
<langsyntaxhighlight lang="jq">def dot_product(a; b):
reduce range(0;a|length) as $i (0; . + (a[$i] * b[$i]) );
 
Line 1,667:
reduce range(0; $p) as $j
(.;
.[$i][$j] = dot_product( A[$i]; $BT[$j] ) ));</syntaxhighlight>
 
'''Multiple Regression'''
<langsyntaxhighlight lang="jq">def multipleRegression(y; x):
(y|transpose) as $cy
| (x|transpose) as $cx
Line 1,693:
 
range(0; ys|length) as $i
| multipleRegression(ys[$i]; xs[$i])</syntaxhighlight>
{{out}}
<pre>
Line 1,707:
As in Matlab, the backslash or slash operator (depending on the matrix ordering) can be used for solving this problem, for example:
 
<langsyntaxhighlight lang="julia">x = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
X = [x.^0 x.^1 x.^2];
b = X \ y</syntaxhighlight>
{{out}}
<pre>
Line 1,721:
=={{header|Kotlin}}==
As neither the JDK nor the Kotlin Standard Library has matrix operations built-in, we re-use functions written for various other tasks.
<langsyntaxhighlight lang="scala">// Version 1.2.31
 
typealias Vector = DoubleArray
Line 1,841:
v = multipleRegression(y, x)
printVector(v)
}</syntaxhighlight>
 
{{out}}
Line 1,856:
First, build a random dataset:
 
<langsyntaxhighlight lang="maple">n:=200:
X:=<ArrayTools[RandomArray](n,4,distribution=normal)|Vector(n,1,datatype=float[8])>:
Y:=X.<4.2,-2.8,-1.4,3.1,1.75>+convert(ArrayTools[RandomArray](n,1,distribution=normal),Vector):</syntaxhighlight>
 
Now the linear regression, with either the LinearAlgebra package or the Statistics package.
 
<langsyntaxhighlight lang="maple">LinearAlgebra[LeastSquares](X,Y)^+;
# [4.33701132468683, -2.78654498997457, -1.41840666085642, 2.92065133466547, 1.76076442997642]
 
Line 1,880:
# R-squared: 0.9767, Adjusted R-squared: 0.9761
# 4.33701132468683 x1 - 2.78654498997457 x2 - 1.41840666085642 x3
# + 2.92065133466547 x4 + 1.76076442997642 c</syntaxhighlight>
 
=={{header|Mathematica}}/{{header|Wolfram Language}}==
<syntaxhighlight lang="mathematica">x = {1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83};
y = {52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46};
X = {x^0, x^1, x^2};
LeastSquares[Transpose@X, y]</syntaxhighlight>
{{out}}
<pre>{128.813, -143.162, 61.9603}</pre>
Line 1,894:
The slash and backslash operators can be used for solving this problem. Here, some random data are generated.
 
<langsyntaxhighlight Matlablang="matlab"> n=100; k=10;
y = randn (1,n); % generate random vector y
X = randn (k,n); % generate random matrix X
b = y / X
b = 0.1457109 -0.0777564 -0.0712427 -0.0166193 0.0292955 -0.0079111 0.2265894 -0.0561589 -0.1752146 -0.2577663 </syntaxhighlight>
 
In its transposed form yt = Xt * bt, the backslash operator can be used.
 
<langsyntaxhighlight Matlablang="matlab"> yt = y'; Xt = X';
bt = Xt \ yt
bt =
Line 1,914:
-0.0561589
-0.1752146
-0.2577663</syntaxhighlight>
 
Here is an example of estimating the polynomial fit:
 
<langsyntaxhighlight Matlablang="matlab"> x = [1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83]
y = [52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46]
X = [x.^0;x.^1;x.^2];
b = y/X
 
128.813 -143.162 61.960</syntaxhighlight>
 
Instead of "/", the slash operator, one can also write :
<langsyntaxhighlight Matlablang="matlab"> b = y * X' * inv(X * X') </langsyntaxhighlight>
or
<langsyntaxhighlight Matlablang="matlab"> b = y * pinv(X) </langsyntaxhighlight>
 
=={{header|Nim}}==
{{libheader|arraymancer}}
<syntaxhighlight lang="nim"># Using Wikipedia data sample.
 
import math
Line 1,948:
var a = stack(height.ones_like, height, height *. height, axis = 1)
 
echo toSeq(least_squares_solver(a, weight).solution.items)</syntaxhighlight>
 
{{out}}
Line 1,954:
 
=={{header|PARI/GP}}==
<langsyntaxhighlight lang="parigp">pseudoinv(M)=my(sz=matsize(M),T=conj(M))~;if(sz[1]<sz[2],T/(M*T),(T*M)^-1*T)
addhelp(pseudoinv, "pseudoinv(M): Moore pseudoinverse of the matrix M.");
 
y*pseudoinv(X)</syntaxhighlight>
 
=={{header|Perl}}==
<langsyntaxhighlight lang="perl">use strict;
use warnings;
use Statistics::Regression;
Line 1,972:
my @coeff = $reg->theta();
 
printf "%-6s %8.3f\n", $model[$_], $coeff[$_] for 0..@model-1;</langsyntaxhighlight>
{{out}}
<pre>const 128.813
Line 1,980:
=={{header|Phix}}==
{{trans|ERRE}}
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
<span style="color: #008080;">constant</span> <span style="color: #000000;">N</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">15</span><span style="color: #0000FF;">,</span> <span style="color: #000000;">M</span><span style="color: #0000FF;">=</span><span style="color: #000000;">3</span>
Line 2,036:
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"Solutions:\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #0000FF;">?</span><span style="color: #7060A8;">columnize</span><span style="color: #0000FF;">(</span><span style="color: #000000;">a</span><span style="color: #0000FF;">,</span><span style="color: #000000;">M</span><span style="color: #0000FF;">+</span><span style="color: #000000;">1</span><span style="color: #0000FF;">)[</span><span style="color: #000000;">1</span><span style="color: #0000FF;">]</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
Line 2,048:
 
=={{header|PicoLisp}}==
<langsyntaxhighlight PicoLisplang="picolisp">(scl 20)
 
# Matrix transposition
Line 2,090:
(car X) ) ) ) )
(T (> (inc 'Lead) Cols)) ) )
Mat )</syntaxhighlight>
{{trans|JavaScript}}
<langsyntaxhighlight PicoLisplang="picolisp">(de matInverse (Mat)
(let N (length Mat)
(unless (= N (length (car Mat)))
Line 2,110:
X (columnVector (2.0 1.0 3.0 4.0 5.0)) )
 
(round (caar (regressionCoefficients Y X)) 17)</syntaxhighlight>
{{out}}
<pre>-> "0.98181818181818182"</pre>
Line 2,117:
{{libheader|NumPy}}
'''Method with matrix operations'''
<langsyntaxhighlight lang="python">import numpy as np
 
height = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63,
Line 2,127:
y = np.mat(weight)
 
print(y * X.T * (X*X.T).I)</syntaxhighlight>
{{out}}
<pre>
Line 2,133:
</pre>
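Since <code>np.mat</code> is a legacy interface, here is a rough equivalent of the same normal-equations computation with plain NumPy arrays (a sketch, using the full Wikipedia height/weight data that appear in the other sections):
<syntaxhighlight lang="python">import numpy as np

height = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63,
          1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
weight = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93,
          61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

h = np.array(height)
X = np.column_stack((np.ones_like(h), h, h ** 2))  # design matrix, one row per observation
# Solve the normal equations X'X b = X'y rather than forming an explicit inverse.
b = np.linalg.solve(X.T @ X, X.T @ np.array(weight))
print(b)</syntaxhighlight>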
'''Using numpy lstsq function'''
<langsyntaxhighlight lang="python">import numpy as np
 
height = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63,
Line 2,143:
y = weight
 
print(np.linalg.lstsq(X, y)[0])</syntaxhighlight>
{{out}}
<pre>
Line 2,153:
R provides the '''lm''' function for linear regression.
 
<langsyntaxhighlight lang="rsplus">x <- c(1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83)
y <- c(52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46)
 
lm( y ~ x + I(x^2))</syntaxhighlight>
 
{{out}}
Line 2,170:
is useful to illustrate R's model description and linear algebra capabilities.
 
<langsyntaxhighlight lang="rsplus">simpleMultipleReg <- function(formula) {
 
## parse and evaluate the model formula
Line 2,185:
}
 
simpleMultipleReg(y ~ x + I(x^2))</syntaxhighlight>
 
This produces the same coefficients as lm()
Line 2,199:
than the method above, is to solve the linear system directly
and use the crossprod function:
<langsyntaxhighlight Rlang="r">solve(crossprod(X), crossprod(X, Y))</langsyntaxhighlight>
 
A numerically more stable way is to use the QR decomposition of the design matrix:
 
<langsyntaxhighlight lang="rsplus">lm.impl <- function(formula) {
mf <- model.frame(formula)
X <- model.matrix(mf)
Line 2,223:
 
# (Intercept) x I(x^2)
# 128.81280 -143.16202 61.96033</syntaxhighlight>
 
=={{header|Racket}}==
<langsyntaxhighlight lang="racket">
#lang racket
(require math)
Line 2,233:
(define (fit X y)
(matrix-solve (matrix* (T X) X) (matrix* (T X) y)))
</syntaxhighlight>
Test:
<langsyntaxhighlight lang="racket">
(fit (matrix [[1 2]
[2 5]
Line 2,246:
{{out}}
(array #[#[9 1/3] #[-3 1/3]])
</syntaxhighlight>
 
=={{header|Raku}}==
Line 2,276:
 
 
<syntaxhighlight lang="raku" perl6line>use Clifford;
my @height = <1.47 1.50 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.70 1.73 1.75 1.78 1.80 1.83>;
my @weight = <52.21 53.12 54.48 55.84 57.20 58.57 59.93 61.29 63.11 64.47 66.28 68.10 69.92 72.19 74.46>;
Line 2,291:
say "α = ", ($w∧$h1∧$h2)·$I.reversion/$I2;
say "β = ", ($w∧$h2∧$h0)·$I.reversion/$I2;
say "γ = ", ($w∧$h0∧$h1)·$I.reversion/$I2;</langsyntaxhighlight>
{{out}}
<pre>α = 128.81280357844
Line 2,304:
Using the standard library Matrix class:
 
<langsyntaxhighlight lang="ruby">require 'matrix'
 
def regression_coefficients y, x
Line 2,311:
 
(x.t * x).inverse * x.t * y
end</syntaxhighlight>
 
Testing in two dimensions:
<langsyntaxhighlight lang="ruby">puts regression_coefficients([1, 2, 3, 4, 5], [ [2, 1, 3, 4, 5] ])</langsyntaxhighlight>
{{out}}
<pre>Matrix[[0.981818181818182]]</pre>
Line 2,320:
Testing in three dimensions:
Points (x,y,z): [1,1,3], [2,1,4] and [1,2,5]
<langsyntaxhighlight lang="ruby">puts regression_coefficients([3,4,5], [ [1,2,1], [1,1,2] ])</langsyntaxhighlight>
{{out}}
<pre>Matrix[[0.9999999999999996], [2.0]]</pre>
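These three points lie exactly on the plane z = x + 2y, which is what the fitted coefficients (1 and 2, up to rounding) report.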
Line 2,328:
First, build a random dataset:
 
<syntaxhighlight lang="text">set rng=mc seed=17760704.
new file.
input program.
Line 2,341:
end input program.
compute y=1.5+0.8*x1-0.7*x2+1.1*x3-1.7*x4+rv.normal(0,1).
execute.</syntaxhighlight>
 
Now use the regression command:
 
<syntaxhighlight lang="text">regression /dependent=y
/method=enter x1 x2 x3 x4.</syntaxhighlight>
 
{{out}}
 
<syntaxhighlight lang="text">Regression
Notes
|--------------------------------------------------------------------|---------------------------------------------------------------------------|
Line 2,426:
| |x4 |-1,770 |,073 |-,656 |-24,306|,000|
|----------------------------------------------------------------------------------------------|
a Dependent Variable: y</syntaxhighlight>
 
=={{header|Stata}}==
Line 2,432:
First, build a random dataset:
 
<langsyntaxhighlight lang="stata">clear
set seed 17760704
set obs 200
Line 2,438:
gen x`i'=rnormal()
}
gen y=1.5+0.8*x1-0.7*x2+1.1*x3-1.7*x4+rnormal()</syntaxhighlight>
 
Now, use the '''[https://www.stata.com/help.cgi?regress regress]''' command:
 
<syntaxhighlight lang ="stata">reg y x*</langsyntaxhighlight>
 
'''Output'''
Line 2,467:
The regress command also sets a number of '''[https://www.stata.com/help.cgi?ereturn ereturn]''' values, which can be used by subsequent commands. The coefficients and their standard errors also have a [https://www.stata.com/help.cgi?_variables special syntax]:
 
<langsyntaxhighlight lang="stata">. di _b[x1]
.75252466
 
Line 2,477:
 
. di _se[_cons]
.06978623</syntaxhighlight>
 
See '''[https://www.stata.com/help.cgi?regress_postestimation regress postestimation]''' for a list of commands that can be used after a regression.
Line 2,483:
Here we compute [[wp:Akaike information criterion|Akaike's AIC]], the covariance matrix of the estimates, the predicted values and residuals:
 
<langsyntaxhighlight lang="stata">. estat ic
 
Akaike's information criterion and Bayesian information criterion
Line 2,507:
 
. predict yhat, xb
. predict r, r</syntaxhighlight>
 
=={{header|Tcl}}==
{{tcllib|math::linearalgebra}}
<langsyntaxhighlight lang="tcl">package require math::linearalgebra
namespace eval multipleRegression {
namespace export regressionCoefficients
Line 2,526:
}
}
namespace import multipleRegression::regressionCoefficients</syntaxhighlight>
Using an example from the Wikipedia page on the correlation of height and weight:
<langsyntaxhighlight lang="tcl"># Simple helper just for this example
proc map {n exp list} {
upvar 1 $n v
Line 2,543:
}
# Wikipedia states that fitting up to the square of x[i] is worth it
puts [regressionCoefficients $y [map n {map v {expr {$v**$n}} $x} {0 1 2}]]</syntaxhighlight>
{{out}} (a 3-vector of coefficients):
<pre>128.81280358170625 -143.16202286630732 61.96032544293041</pre>
Line 2,549:
=={{header|TI-83 BASIC}}==
{{works with|TI-83 BASIC|TI-84Plus 2.55MP}}
<langsyntaxhighlight lang="ti83b">{1.47,1.50,1.52,1.55,1.57,1.60,1.63,1.65,1.68,1.70,1.73,1.75,1.78,1.80,1.83}→L₁
{52.21,53.12,54.48,55.84,57.20,58.57,59.93,61.29,63.11,64.47,66.28,68.10,69.92,72.19,74.46}→L₂
QuadReg L₁,L₂</syntaxhighlight>
{{out}}
<pre>
Line 2,564:
the Lapack library [http://www.netlib.org/lapack/lug/node27.html],
which is callable in Ursala like this:
<syntaxhighlight lang="ursala">regression_coefficients = lapack..dgelsd</syntaxhighlight>
test program:
<syntaxhighlight lang="ursala">x =
 
<
Line 2,577:
#cast %eL
 
example = regression_coefficients(x,y)</syntaxhighlight>
The matrix x needn't be square, and has one row for each data point.
The length of y must equal the number of rows in x,
Line 2,594:
=={{header|Visual Basic .NET}}==
{{trans|Java}}
<langsyntaxhighlight lang="vbnet">Module Module1
 
Sub Swap(Of T)(ByRef x As T, ByRef y As T)
Line 2,823:
End Sub
 
End Module</syntaxhighlight>
{{out}}
<pre>[0.981818181818182]
Line 2,832:
{{trans|Kotlin}}
{{libheader|Wren-matrix}}
<langsyntaxhighlight lang="ecmascript">import "/matrix" for Matrix
 
var multipleRegression = Fn.new { |y, x|
Line 2,859:
System.print(v)
System.print()
}</syntaxhighlight>
 
{{out}}
Line 2,872:
=={{header|zkl}}==
Using the GNU Scientific Library:
<langsyntaxhighlight lang="zkl">var [const] GSL=Import("zklGSL"); // libGSL (GNU Scientific Library)
height:=GSL.VectorFromData(1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63,
1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83);
Line 2,880:
v.format().println();
GSL.Helpers.polyString(v).println();
GSL.Helpers.polyEval(v,height).format().println();</syntaxhighlight>
{{out}}
<pre>
Line 2,890:
Or, using Lists:
{{trans|Common Lisp}}
<langsyntaxhighlight lang="zkl">// Solve a linear system AX=B where A is symmetric and positive definite, so it can be Cholesky decomposed.
fcn linsys(A,B){
n,m:=A.len(),B[1].len(); // A.rows,B.cols
Line 2,945:
if(M.len()==1) M[0].pump(List,List.create); // 1 row --> n columns
else M[0].zip(M.xplode(1));
}</syntaxhighlight>
<langsyntaxhighlight lang="zkl">height:=T(T(1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63,
1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83));
weight:=T(T(52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93,
61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46));
polyfit(height,weight,2).flatten().println();</syntaxhighlight>
{{out}}
<pre>