Decomposition

MultivariateSeries.decompose (Function)
decompose(p :: DynamicPolynomials.Polynomial, rkf :: Function)

Decompose the homogeneous polynomial $p$ as $∑_i ω_i (ξ_{i1} x_1 + \cdots + ξ_{in} x_n)^d$, where $d$ is the degree of $p$.

The optional argument rkf is the rank function used to determine the numerical rank from the vector S of singular values. Its default value eps_rkf(1.e-6) takes the rank to be the first index i such that S[i+1]/S[i] < 1.e-6.

If the rank function cst_rkf(r) is used, the SVD is truncated at rank r.
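
For illustration, a minimal usage sketch (assuming MultivariateSeries exports decompose together with the rank functions, and that the call returns the weights and the matrix of coefficients of the linear forms, as in the formula above):

using DynamicPolynomials, MultivariateSeries
@polyvar x1 x2
p = (x1 + 2*x2)^3 + 0.5*(x1 - x2)^3   # homogeneous of degree 3, rank 2
w, Xi = decompose(p)                  # default rank function eps_rkf(1.e-6)
w, Xi = decompose(p, cst_rkf(2))      # truncate the SVD at rank 2 instead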

decompose(T :: Array{C,3}, rkf :: Function)

Decompose the multilinear tensor T of order 3 as a weighted sum of tensor products of vectors of norm 1.

The optional argument rkf is the rank function used to determine the numerical rank from the vector S of singular values. Its default value eps_rkf(1.e-6) takes the rank to be the first index i such that S[i+1]/S[i] < 1.e-6.

If the rank function cst_rkf(r) is used, the SVD is truncated at rank r.

Slices along the mode m=1 of the tensor (i.e. T[i,:,:]) are used by default to compute the decomposition. The optional argument mode = m can be used to specify the sliced mode.

decompose(T, mode=2)
decompose(T, eps_rkf(1.e-10), mode=3)
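
For instance, a small order-3 tensor built as a weighted sum of rank-1 terms can be decomposed along a chosen mode (a sketch; the output is assumed to follow the weighted-sum format described above):

u = [1.0, 0.0]; v = [0.0, 1.0]
T = zeros(2, 2, 2)
for i in 1:2, j in 1:2, k in 1:2
    T[i,j,k] = u[i]*u[j]*u[k] + 2*v[i]*v[j]*v[k]   # T = u⊗u⊗u + 2·v⊗v⊗v
end
decompose(T)                           # default rank function
decompose(T, eps_rkf(1.e-6), mode=2)   # slice along mode 2 instead
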
TensorDec.rcg_decompose (Function)
rcg_decompose(p :: Polynomial{true,T}, rkf :: Function)

Decompose the homogeneous polynomial $p$ as $∑_i ω_i (ξ_{i1} x_1 + \cdots + ξ_{in} x_n)^d$, where $d$ is the degree of $p$.

The optional argument rkf is the rank function used to determine the numerical rank from the vector S of singular values. Its default value eps_rkf(1.e-6) takes the rank to be the first index i such that S[i+1]/S[i] < 1.e-6.

If the rank function cst_rkf(r) is used, the SVD is truncated at rank r.

A Riemannian conjugate gradient (RCG) algorithm is used in rcg_decompose to approximate the pencil of submatrices of the Hankel matrix by a pencil of simultaneously diagonalizable real matrices.
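
A minimal usage sketch, assuming rcg_decompose follows the same calling convention as decompose:

using DynamicPolynomials, TensorDec
@polyvar x1 x2
p = (x1 + x2)^4 + 2.0*(x1 - x2)^4   # homogeneous of degree 4, rank 2
rcg_decompose(p)                    # default rank function eps_rkf(1.e-6)
rcg_decompose(p, cst_rkf(2))        # fix the truncation rank to 2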

TensorDec.tensorsplit (Function)

Decompose V as $u \otimes v$ where $u$ is of dimension $n_1$ and $v$ of dimension $n_2$.

It is based on the SVD of the $n_1 \times n_2$ matrix associated with V.
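
The principle can be illustrated directly with LinearAlgebra (a sketch of the underlying computation, not of the tensorsplit signature itself):

using LinearAlgebra
n1, n2 = 2, 3
u = [1.0, 2.0]; v = [1.0, 0.0, -1.0]
V = vec(u * v')                  # vector of dimension n1*n2 encoding u ⊗ v
M = reshape(V, n1, n2)           # the associated n1 × n2 matrix
F = svd(M)                       # rank 1: the leading singular triple
u_rec = F.U[:, 1] * sqrt(F.S[1]) # u and v recovered up to scale:
v_rec = F.V[:, 1] * sqrt(F.S[1]) # u_rec * v_rec' reproduces M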

TensorDec.approximate (Function)
approximate(P::Polynomial, r::Int64; iter = :RNE, init = :Random)

This function approximates a symmetric tensor (real or complex valued) by a low-rank symmetric tensor.

Input:

  • P: The homogeneous polynomial associated to the symmetric tensor to approximate.
  • r: Approximation rank.

The option iter specifies the method used to compute the approximation; there are four options (the default is :RNE):

 * RNE: applies the function 'rne_n_tr', which computes a low-rank symmetric approximation of a complex-valued
 symmetric tensor by an exact Riemannian Newton iteration with dog-leg trust-region steps applied to the
 associated nonlinear least-squares problem. The optimization set is parameterized by weights and unit vectors.
 The approximation is a linear combination of r linear forms raised to the d-th power, ``∑w_i*(v_i^tx)^d`` with
 i=1,...,r. It is represented by a vector of strictly positive real numbers W=(w_i) (the weight vector) and a
 matrix of normalized columns V=[v_1;...;v_r];

 * RNER: applies the function 'rne_n_tr_r' (when the symmetric tensor is real and the approximation is required
 to be real), which computes a low-rank symmetric approximation of a real-valued symmetric tensor by an exact
 Riemannian Newton iteration with dog-leg trust-region steps applied to the associated nonlinear least-squares
 problem. The optimization set is parameterized by weights and unit vectors. The approximation is a linear
 combination of r linear forms raised to the d-th power, ``∑w_i*(v_i^tx)^d`` with i=1,...,r. It is represented
 by a vector of r real numbers W=(w_i) (the weight vector) and a matrix of real normalized columns
 V=[v_1;...;v_r];

 * RGN: applies the function 'rgn_v_tr', which computes a low-rank symmetric approximation of a complex-valued
 symmetric tensor by a Riemannian Gauss-Newton iteration with dog-leg trust-region steps applied to the
 associated nonlinear least-squares problem. The optimization set is a Cartesian product of Veronese manifolds.
 The approximation is a linear combination of r linear forms raised to the d-th power, ``∑(v_i^tx)^d`` with
 i=1,...,r. It is represented by a matrix V=[v_1;...;v_r];

 * SPM: applies the function 'spm_decompose': decomposition of the tensor with the Power Method.

The option init specifies how the initial point for the first three methods is chosen by the function decompose:

 * Random: a random combination (the default option);
 * Rnd: a non-random combination;
 * RCG: an approximation of the pencil of submatrices of the Hankel matrix by a pencil of simultaneously
 diagonalizable real matrices, computed with a Riemannian conjugate gradient algorithm.
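
A minimal usage sketch (the polynomial below is illustrative; the returned decomposition is assumed to follow the weight/vector format described above):

using DynamicPolynomials, TensorDec
@polyvar x1 x2 x3
P = (x1 + x2 + x3)^4 + 2.0*(x1 - x3)^4      # symmetric tensor of order 4
approximate(P, 2)                           # rank-2 approximation with :RNE
approximate(P, 2, iter = :RNER)             # keep the approximation real
approximate(P, 2, iter = :RGN, init = :RCG) # Gauss-Newton from an RCG initial point
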
approximate(P::Polynomial, w0, V0; iter = :RNE, init = :Random)

This function approximates a symmetric tensor (real or complex valued) by a low-rank symmetric tensor, starting from an initial decomposition (w0, V0).

Input:

  • P: The homogeneous polynomial associated to the symmetric tensor to approximate.
  • w0: Initial weights of the decomposition.
  • V0: Initial vectors of the decomposition.

The option iter specifies the method used to compute the approximation; there are four options (the default is :RNE):

 * RNE: applies the function 'rne_n_tr', which computes a low-rank symmetric approximation of a complex-valued
 symmetric tensor by an exact Riemannian Newton iteration with dog-leg trust-region steps applied to the
 associated nonlinear least-squares problem. The optimization set is parameterized by weights and unit vectors.
 The approximation is a linear combination of r linear forms raised to the d-th power, ``∑w_i*(v_i^tx)^d`` with
 i=1,...,r. It is represented by a vector of strictly positive real numbers W=(w_i) (the weight vector) and a
 matrix of normalized columns V=[v_1;...;v_r];

 * RNER: applies the function 'rne_n_tr_r' (when the symmetric tensor is real and the approximation is required
 to be real), which computes a low-rank symmetric approximation of a real-valued symmetric tensor by an exact
 Riemannian Newton iteration with dog-leg trust-region steps applied to the associated nonlinear least-squares
 problem. The optimization set is parameterized by weights and unit vectors. The approximation is a linear
 combination of r linear forms raised to the d-th power, ``∑w_i*(v_i^tx)^d`` with i=1,...,r. It is represented
 by a vector of r real numbers W=(w_i) (the weight vector) and a matrix of real normalized columns
 V=[v_1;...;v_r];

 * RGN: applies the function 'rgn_v_tr', which computes a low-rank symmetric approximation of a complex-valued
 symmetric tensor by a Riemannian Gauss-Newton iteration with dog-leg trust-region steps applied to the
 associated nonlinear least-squares problem. The optimization set is a Cartesian product of Veronese manifolds.
 The approximation is a linear combination of r linear forms raised to the d-th power, ``∑(v_i^tx)^d`` with
 i=1,...,r. It is represented by a matrix V=[v_1;...;v_r];

 * SPM: applies the function 'spm_decompose': decomposition of the tensor with the Power Method.
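
Continuing the sketch from the previous method, an initial decomposition can be supplied explicitly (w0 and V0 below are hypothetical rank-2 starting values):

w0 = [1.0, 1.0]                       # initial weights
V0 = [1.0 0.0; 0.0 1.0; 0.0 0.0]      # initial unit vectors as columns (3 variables, rank 2)
approximate(P, w0, V0, iter = :RNE)   # refine (w0, V0)
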
TensorDec.rne_n_tr (Function)
rne_n_tr(P, A0, B0, Dict{String,Any}("maxIter" => N, "epsIter" => ϵ)) -> A, B, Info

This function computes a low-rank symmetric approximation of a complex-valued symmetric tensor by applying an exact Riemannian Newton iteration with dog-leg trust-region steps to the associated nonlinear least-squares problem. The optimization set is parameterized by weights and unit vectors. Let r be the approximation rank. The approximation is a linear combination of r linear forms raised to the d-th power, $∑_{i=1}^{r} w_i (v_i^t x)^d$. It is represented by a vector of strictly positive real numbers W=(w_i) (the weight vector) and a matrix of normalized columns V=[v_1;...;v_r].

Input:

  • P: Homogeneous polynomial (associated to the symmetric tensor to approximate).
  • A0: Initial weight vector of size equal to the approximation rank.
  • B0: Initial matrix of row size equal to the dimension of P and column size equal to the approximation rank.
  • N: Maximal number of iterations (by default 500).
  • ϵ: The radius of the trust region (by default 1.e-3).

Output:

  • A: Weight vector of size equal to the approximation rank. It is a real strictly positive vector.
  • B: Matrix of row size equal to the dimension of P and column size equal to the approximation rank. The column vectors of B are normalized.
  • Info: 'd0' (resp. 'd*') is the initial (resp. final) residual error; 'nIter' is the number of iterations needed to find the approximation.
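
A minimal usage sketch with a hypothetical rank-2 starting point:

using DynamicPolynomials, TensorDec
@polyvar x1 x2
P = 2.0*(x1 + x2)^3 + (x1 - x2)^3
A0 = [1.0, 1.0]                      # initial weights
B0 = [1.0 1.0; 1.0 -1.0] / sqrt(2)   # initial matrix with normalized columns
A, B, Info = rne_n_tr(P, A0, B0, Dict{String,Any}("maxIter" => 100, "epsIter" => 1.e-3))
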
TensorDec.rne_n_tr_r (Function)
rne_n_tr_r(P, A0, B0, Dict{String,Any}("maxIter" => N, "epsIter" => ϵ)) -> A, B, Info

This function computes a low-rank symmetric approximation of a real-valued symmetric tensor by applying an exact Riemannian Newton iteration with dog-leg trust-region steps to the associated nonlinear least-squares problem. The optimization set is parameterized by weights and unit vectors. Let r be the approximation rank. The approximation is a linear combination of r linear forms raised to the d-th power, $∑_{i=1}^{r} w_i (v_i^t x)^d$. It is represented by a vector of r real numbers W=(w_i) (the weight vector) and a matrix of normalized columns V=[v_1;...;v_r].

Input:

  • P: Homogeneous polynomial (associated to the symmetric tensor to approximate).
  • A0: Initial weight vector of size equal to the approximation rank.
  • B0: Initial matrix of row size equal to the dimension of P and column size equal to the approximation rank.

The options are

  • N: Maximal number of iterations (by default 500).
  • ϵ: The radius of the trust region (by default 1.e-3).

Output:

  • A: Weight vector of size equal to the approximation rank.
  • B: Matrix of row size equal to the dimension of P and column size equal to the approximation rank. The column vectors of B are normalized.

  • Info: 'd0' (resp. 'd*') is the initial (resp. final) residual error; 'nIter' is the number of iterations needed to find the approximation.
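
The real variant is called the same way (a sketch reusing P, A0 and B0 from the rne_n_tr example above):

A, B, Info = rne_n_tr_r(P, A0, B0, Dict{String,Any}("maxIter" => 100, "epsIter" => 1.e-3))
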
TensorDec.rgn_v_tr (Function)
rgn_v_tr(P, B0, Dict{String,Any}("maxIter" => N, "epsIter" => ϵ)) -> B, Info

This function computes a low-rank symmetric approximation of a complex-valued symmetric tensor by applying a Riemannian Gauss-Newton iteration with dog-leg trust-region steps to the associated nonlinear least-squares problem. The optimization set is a Cartesian product of Veronese manifolds. Let r be the approximation rank. The approximation is a linear combination of r linear forms raised to the d-th power, $∑_{i=1}^{r} (v_i^t x)^d$. It is represented by a matrix V=[v_1;...;v_r].

Input:

  • P: Homogeneous polynomial (associated to the symmetric tensor to approximate).
  • B0: Matrix of row size equal to the dimension of P and column size equal to the approximation rank (initial point).
  • N: Maximal number of iterations (by default 500).
  • ϵ: The radius of the trust region (by default 1.e-3).

Output:

  • B: Matrix of row size equal to the dimension of P and column size equal to the approximation rank. This matrix contains the r vectors of the symmetric decomposition of the approximation.
  • Info: 'd0' (resp. 'd*') is the initial (resp. final) residual error; 'nIter' is the number of iterations needed to find the approximation.
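
A minimal usage sketch (reusing P from the rne_n_tr example; B0 is a hypothetical initial point):

B0 = [1.0 1.0; 1.0 -1.0] / sqrt(2)   # initial columns on the product of Veronese manifolds
B, Info = rgn_v_tr(P, B0, Dict{String,Any}("maxIter" => 100, "epsIter" => 1.e-3))
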
TensorDec.weights (Function)
weights(T, Xi::Matrix) -> Vector

Compute the weight vector in the decomposition of the homogeneous polynomial T as a weighted sum of powers of the linear forms associated with the columns of Xi.
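
A minimal sketch, assuming Xi stores the coefficient vectors ξ_i of the linear forms as columns:

using DynamicPolynomials, TensorDec
@polyvar x1 x2
T = 3.0*(x1 + x2)^2 + (x1 - 2*x2)^2
Xi = [1.0 1.0; 1.0 -2.0]   # columns ξ_1 = (1,1), ξ_2 = (1,-2)
w = weights(T, Xi)         # expected w ≈ [3.0, 1.0]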
