Singular value decomposition
Revision as of 14:39, 7 December 2022
You are encouraged to solve this task according to the task description, using any language you may know.
<math>A</math> is any m by n matrix, square or rectangular. Its rank is r. We will diagonalize this A, but not by <math>X^{-1}AX</math>. The eigenvectors in <math>X</math> have three big problems: they are usually not orthogonal, there are not always enough eigenvectors, and <math>Ax=\lambda x</math> requires <math>A</math> to be a square matrix. The singular vectors of <math>A</math> solve all those problems in a perfect way.
The Singular Value Decomposition (SVD)
According to the web page above, any rectangular matrix <math>A</math> can be decomposed as <math>A=U\Sigma V^T</math>.
Task Description
Firstly, input two numbers "m" and "n".
Then, input a square/rectangular <math>m\times n</math> matrix <math>A</math>.
Finally, output <math>U,\Sigma,V</math> with respect to <math>A</math>.
Example
Sample Input
2 2
3 0
4 5
From the input above we can know that <math>A</math> is a 2 by 2 matrix.
Sample Output
0.31622776601683794 -0.9486832980505138
0.9486832980505138 0.31622776601683794
6.708203932499369 0
0 2.23606797749979
0.7071067811865475 -0.7071067811865475
0.7071067811865475 0.7071067811865475
The output may vary depending on your choice of data types.
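Because singular vectors are only determined up to sign, and libraries may sign or order them differently, correct outputs can legitimately differ from the sample above. A quick way to validate any result is to check that the factors multiply back to <math>A</math> and that <math>U</math> and <math>V</math> are orthogonal; here is a minimal sketch of such a check using Python with NumPy (the choice of NumPy here is illustrative, not required by the task):

```python
import numpy as np

# Sample matrix from the task description
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, VT = np.linalg.svd(A)

# Any valid decomposition must multiply back to A, regardless of how
# a given library signs or orders the singular-vector pairs.
reconstructed = U @ np.diag(s) @ VT
print(np.allclose(reconstructed, A))     # True

# U and V are orthogonal, so their transposes are their inverses.
print(np.allclose(U.T @ U, np.eye(2)))   # True
print(np.allclose(VT @ VT.T, np.eye(2))) # True
```

The same check applies to every language section below, whatever signs the factors come out with.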
Remark
1. You are encouraged to implement the algorithm yourself, though using libraries is still acceptable.
2. The algorithm should be applicable to the general case (<math>m\times n</math>).
Julia
Julia has an svd() function as part of its built-in LinearAlgebra package.
julia> using LinearAlgebra
julia> function testsvd()
rows, cols = [parse(Int, s) for s in split(readline())]
arr = zeros(rows, cols)
for row in 1:rows
arr[row, :] .= [tryparse(Float64, s) for s in split(readline())]
end
display(svd(arr))
end
testsvd (generic function with 1 method)
julia> testsvd()
2 2
3 0
4 5
SVD{Float64, Float64, Matrix{Float64}, Vector{Float64}}
U factor:
2×2 Matrix{Float64}:
-0.316228 -0.948683
-0.948683 0.316228
singular values:
2-element Vector{Float64}:
6.70820393249937
2.2360679774997894
Vt factor:
2×2 Matrix{Float64}:
-0.707107 -0.707107
-0.707107 0.707107
Phix
with javascript_semantics
requires("1.0.2") -- builtins/svd.e added, haha
include builtins/svd.e
sequence a = {{3,0},
              {4,5}}
sequence {u,w,v} = svd(a)
?u
?w
?v
- Output:
{{0.9486832981,0.316227766},{-0.316227766,0.9486832981}}
{2.236067977,6.708203932}
{{0.7071067812,0.7071067812},{-0.7071067812,0.7071067812}}
Python
This implementation uses the NumPy library.
import numpy as np

A = np.array([[3, 0], [4, 5]])
U, Sigma, VT = np.linalg.svd(A)
print(U)
print(Sigma)
print(VT)
- Output:
[[-0.31622777 -0.9486833 ]
 [-0.9486833   0.31622777]]
[6.70820393 2.23606798]
[[-0.70710678 -0.70710678]
 [-0.70710678  0.70710678]]
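The task's remark about the general <math>m\times n</math> case can be exercised with the same routine. The sketch below uses an illustrative 2×3 input (not part of the task) to show the shapes numpy.linalg.svd returns and how the rectangular <math>\Sigma</math> is rebuilt from the singular values:

```python
import numpy as np

# Illustrative rectangular input (2x3); any m x n matrix works the same way.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# With full_matrices=True (the default), U is m x m, VT is n x n,
# and s holds the min(m, n) singular values.
U, s, VT = np.linalg.svd(A, full_matrices=True)
print(U.shape, s.shape, VT.shape)      # (2, 2) (2,) (3, 3)

# Sigma must be rebuilt as an m x n matrix to verify A = U Sigma V^T.
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(U @ Sigma @ VT, A))  # True
```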
Wren
We don't have a built-in method for this so we need to work from first principles. The following only decomposes 2 x 2 matrices.
import "./matrix" for Matrix
import "./fmt" for Fmt
var svd2x2 = Fn.new { |a, width, prec|
var at = a.transpose
var p = a * at
// the eigenvalues, λ, are the solutions of the characteristic polynomial:
// λ^2 - (p[0,0] + p[1,1]) * λ + p[0,0]*p[1,1] - p[0,1]*p[1,0] = 0
var b = -(p[0, 0] + p[1, 1])
var c = p[0, 0]*p[1, 1] - p[0, 1]*p[1, 0]
var d = (b * b - 4 * c).sqrt
var l1 = (-b + d) / 2
var l2 = (-b - d) / 2
// hence the singular values and 'sigma' are:
var s1 = l1.sqrt
var s2 = l2.sqrt
var sigma = Matrix.new([[s1, 0], [0, s2]])
// now find the eigenvectors
p = at * a
var i = Matrix.identity(p.numRows)
var m1 = p - i*l1
var m2 = p - i*l2
m1.toReducedRowEchelonForm
m2.toReducedRowEchelonForm
// solve for null space and convert to unit vector
var ev1 = Matrix.new([[-m1[0,0] / m1[0, 1]], [1]])
ev1 = ev1 / ev1.norm
var ev2 = Matrix.new([[-m2[0,0] / m2[0, 1]], [1]])
ev2 = ev2 / ev2.norm
// we can now put these vectors together to get 'v' transpose
var vt = Matrix.new([[ev1[0, 0], ev2[0, 0]], [ev1[1, 0], ev2[1, 0]]])
// and since 'v' is orthogonal its transpose and inverse are the same
var u = a * vt * (sigma.inverse)
System.print("U:")
Fmt.mprint(u, width, prec)
System.print("\nΣ:")
Fmt.mprint(sigma, width, prec)
System.print("\nVT:")
Fmt.mprint(vt, width, prec)
}
var a = Matrix.new([[3, 0], [4, 5]])
svd2x2.call(a, 17, 14)
- Output:
U:
| 0.31622776601684 -0.94868329805051|
| 0.94868329805051  0.31622776601684|

Σ:
| 6.70820393249937  0.00000000000000|
| 0.00000000000000  2.23606797749979|

VT:
| 0.70710678118655 -0.70710678118655|
| 0.70710678118655  0.70710678118655|
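For comparison, the same first-principles 2×2 method can be sketched in Python using only the standard library. svd2x2 below is a name invented for this sketch; like the Wren version, it assumes distinct non-zero singular values, and the signs of its singular-vector pairs may come out flipped relative to the output above (both factorizations are valid):

```python
import math

# First-principles 2x2 SVD, mirroring the approach above: the squared
# singular values are the eigenvalues of A^T A, found from its
# characteristic polynomial; V collects unit eigenvectors of A^T A,
# and U follows from U = A V Sigma^{-1}.
# Assumes distinct, non-zero singular values.
def svd2x2(a):
    # p = A^T A, a symmetric 2x2 matrix
    p = [[a[0][0]**2 + a[1][0]**2,           a[0][0]*a[0][1] + a[1][0]*a[1][1]],
         [a[0][1]*a[0][0] + a[1][1]*a[1][0], a[0][1]**2 + a[1][1]**2]]
    # characteristic polynomial: l^2 + b*l + c = 0
    b = -(p[0][0] + p[1][1])
    c = p[0][0]*p[1][1] - p[0][1]*p[1][0]
    d = math.sqrt(b*b - 4*c)
    l1, l2 = (-b + d) / 2, (-b - d) / 2
    s1, s2 = math.sqrt(l1), math.sqrt(l2)

    # unit eigenvector of p for eigenvalue l: any vector orthogonal to
    # the first row of (p - l*I), e.g. (-p01, p00 - l), normalised
    def eigvec(l):
        x, y = -p[0][1], p[0][0] - l
        n = math.hypot(x, y)
        return (x / n, y / n)

    v1, v2 = eigvec(l1), eigvec(l2)

    # columns of U: u_i = A v_i / s_i
    def ucol(v, s):
        return ((a[0][0]*v[0] + a[0][1]*v[1]) / s,
                (a[1][0]*v[0] + a[1][1]*v[1]) / s)

    u1, u2 = ucol(v1, s1), ucol(v2, s2)
    U     = [[u1[0], u2[0]], [u1[1], u2[1]]]
    Sigma = [[s1, 0.0], [0.0, s2]]
    Vt    = [[v1[0], v1[1]], [v2[0], v2[1]]]
    return U, Sigma, Vt

A = [[3.0, 0.0], [4.0, 5.0]]
U, Sigma, Vt = svd2x2(A)
for name, m in (("U", U), ("Sigma", Sigma), ("Vt", Vt)):
    print(name, m)
```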