Optimization
MomentPolynomialOpt.annihilator
MomentPolynomialOpt.get_measure
MomentPolynomialOpt.get_minimizers
MomentPolynomialOpt.get_series
MomentPolynomialOpt.maximize
MomentPolynomialOpt.min_ellipsoid
MomentPolynomialOpt.minimize
MomentPolynomialOpt.mpo_optimizer
MomentPolynomialOpt.optimize
MomentPolynomialOpt.polar_minimize
MomentPolynomialOpt.sos_decompose
MomentPolynomialOpt.w_cholesky
Solving moment programs
MomentPolynomialOpt.mpo_optimizer — Function

mpo_optimizer(opt)

Define the default optimizer `opt` for the optimization problems created by MomentPolynomialOpt.

mpo_optimizer(opt, args...)

Define the default optimizer `opt` with its attributes `args...` for the optimization problems created by MomentPolynomialOpt.
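A minimal usage sketch, assuming the CSDP solver package is installed (any JuMP-compatible SDP solver could be substituted):

using MomentPolynomialOpt
using CSDP                      # assumed solver choice; any JuMP SDP solver works
mpo_optimizer(CSDP.Optimizer)   # subsequent moment programs use this optimizer by default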
MomentPolynomialOpt.minimize — Function

v, M = minimize(f, [e1, e2, ...], [p1, p2, ...], X, d, optimizer)

Compute the infimum of `f` under the constraints $e_i = 0$ and $p_i \geq 0$, using a relaxation of order `d` on the moments in the variables `X` and the optimizer `optimizer`.

See optimize.
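A sketch mirroring the example given for optimize below, assuming a default optimizer has been set with mpo_optimizer:

using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
e1 = x1^2 - 2; e2 = (x2^2 - 3)*(x1*x2 - 2)
p1 = x1; p2 = 2 - x2
v, M = minimize(-x1, [e1, e2], [p1, p2], X, 3)   # expected infimum ≈ -√2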
MomentPolynomialOpt.maximize — Function

v, M = maximize(f, [e1, e2, ...], [p1, p2, ...], X, d, optimizer)

Similar to the function minimize, but computes the supremum of `f`.

See optimize.
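A sketch with the same constraints as above, again assuming a default optimizer has been set:

using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
e1 = x1^2 - 2; e2 = (x2^2 - 3)*(x1*x2 - 2)
p1 = x1; p2 = 2 - x2
v, M = maximize(x1, [e1, e2], [p1, p2], X, 3)    # expected supremum ≈ √2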
MomentPolynomialOpt.optimize — Function

v, M = optimize(sense, f, [e1, e2, ...], [p1, p2, ...], X, d)

Compute the optimum of `f` under the constraints $e_i = 0$ and $p_i \geq 0$, using a relaxation of order `d` on the moments in the variables `X`.
- $f, e_i, p_i$ should be polynomials in the variables `X`.
- `sense` is a Symbol in `[:Inf, :inf, :Min, :min]` or `[:Sup, :sup, :Max, :max]`.
- `X` is a tuple of variables.
- `d` is the order of the relaxation.
- `optimizer` is the optimizer used to solve the moment relaxation. By default it is `MPO[:optimizer]`.
If the problem is feasible and has minimizers, it outputs
- v: the optimum value
- M: the moment model of type JuMP.Model
Example
using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
e1 = x1^2-2
e2 = (x2^2-3)*(x1*x2-2)
p1 = x1
p2 = 2-x2
v, M = optimize(:inf, -x1, [e1, e2], [p1, p2], X, 3)
To recover the optimizers, see get_minimizers, get_measure, get_series.
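Continuing this example, the optimizers can be recovered from the optimized model M; a sketch (the exact outputs are shown in the entries for these functions below):

Xi = get_minimizers(M)    # minimizer points, here x1 ≈ √2
w, pts = get_measure(M)   # weights and support points of the representing measure
nu = get_series(M)        # series of optimal moments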
v, M = optimize([(f, set), ...], X, d)

Solve the moment program of relaxation order `d` in the variables `X`, defined by the constraint or objective pairs `(f, set)`, where `f` is a polynomial and `set` is a string:
- "inf", "sup" to define the objective,
- "=0" to define zero constraints,
- ">=0", "<=0" to define sign constraints.
It outputs
- v: the optimal value
- M: the optimized moment program of type MOM.Model
Example
using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
e1 = x1^2-2
e2 = (x2^2-3)*(x1*x2-2)
p1 = x1
p2 = 2-x2
v, M = optimize([(-x1, "inf"), (e1, "=0"), (e2, "=0"), (p1, ">=0"), (p2, ">=0")], X, 3)
To recover the optimal values, see get_minimizers, get_measure, get_series.
MomentPolynomialOpt.polar_minimize — Function

v, M = polar_minimize(f, [h1, h2, ...], [g1, g2, ...], X, degree_approx, optimizer)

Compute the infimum of `f` under the equality constraints $h_i = 0$ and the sign constraints $g_i \geq 0$ on the moment side. It does so by calling minimize(...), replacing the equality constraints [h1, h2, ...] with generators of the polar ideal.

- $f, h_i, g_i$ should be polynomials in the variables `X`.
If the problem is feasible and has minimizers, it outputs
- v: the minimum value
- M: the moment model of type MOM.Model
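A usage sketch; the constraint polynomials and the approximation degree are illustrative choices, and a default optimizer is assumed to be set:

using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
f = (x1 - 1)^2 + (x2 - 1)^2              # distance-squared to the point (1, 1)
h = [x1^2 + x2^2 - 1]                    # minimize on the unit circle
v, M = polar_minimize(f, h, [x1], X, 4)  # sign constraint and degree 4 chosen for illustration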
Solving Sum-Of-Squares programs
MomentPolynomialOpt.min_ellipsoid — Function

Compute the minimal ellipsoid enclosing the points P of the semi-algebraic set defined by H[j] = 0, G[k] >= 0.

It outputs the center c and the matrix U whose columns are the principal axes of the ellipsoid.
Optimal solutions
MomentPolynomialOpt.get_minimizers — Function

get_minimizers(M, t::Int64 = Inf)

Return the minimizer points of the optimized moment program `M`, using moments of degree <= t (default: twice the order of the relaxation minus 2).

get_minimizers(M)
[1.41421 1.73205; 1.41421 1.41421; 1.41421 -1.73205]
MomentPolynomialOpt.get_measure — Function

w, Xi = get_measure(M)

Return the approximation of the moment sequence $\mu$ truncated to moments of degree <= t (default: twice the order of the relaxation minus 2), as a weighted sum of Dirac measures $\sum_{k=1}^{r} \omega_k \delta_{\xi_k}$, where
- `w` is the vector of weights of the Dirac measures,
- `Xi` is the $n \times r$ matrix of support points of the corresponding Dirac measures. The column `Xi[:,i]` is the support point $\xi_{i}$ of the i-th Dirac measure and its weight is `w[i]`.
w, Xi = get_measure(M)
([0.1541368146508854, 0.5889741915171074, 0.256888993597116], [1.4142135624216647 1.414213562080608 1.4142135620270329; -1.732052464639053 1.4141771454788292 1.7319839273833693])
MomentPolynomialOpt.get_series — Function

get_series(M)

Return the vector $\nu$ = `M[:nu]` of series of optimal moments of the optimized moment program `M`.
MomentPolynomialOpt.annihilator — Function

K, I, P, B = annihilator(sigma::Series{R}, L, eqzero = is_zero)

Compute a graded basis of the annihilator of the positive series $\sigma$ in the space spanned by the list of monomials `L`.
It outputs:
- K the graded basis
- I the initial monomials of K
- P the orthogonal basis
- B the monomial basis
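A heavily hedged sketch: the series is assumed to come from get_series on an optimized moment program (as in the optimize example above), the monomial list of degree at most 2 is an illustrative choice, and a default optimizer is assumed to be set:

using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
v, M = optimize(:inf, -x1, [x1^2 - 2, (x2^2 - 3)*(x1*x2 - 2)], [x1, 2 - x2], X, 3)
sigma = get_series(M)[1]            # first series of optimal moments (assumed non-empty)
L = monomials([x1, x2], 0:2)        # monomials of degree <= 2 (illustrative choice)
K, I, P, B = annihilator(sigma, L)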
Decompositions
MomentPolynomialOpt.sos_decompose — Function

Decompose f in the truncated quadratic module associated to the constraints H = 0, G >= 0:

WS, P, v, M = sos_decompose(f, H, G, X, d, optimizer; exact = false, round = Inf64)

such that

$f = \sum_k \omega_{0,k} q_{0,k}^2 + \sum_j \Big(\sum_k \omega_{j,k} q_{j,k}^2\Big) G[j] + \sum_i P[i] H[i] \quad (+)$

where
- f is the polynomial to decompose
- H = [...] are the equality constraints
- G = [...] are the non-negativity constraints
- X is the set of variables
- d is the order of the relaxation
- `optimizer` (optional) is the optimizer used to solve the SDP program. The default value is `MPO[:optimizer]`.
- if `exact = true`, then an exact decomposition with rational coefficients is computed.
- if the option `round = p` is provided, then a rounding to at most `p` digits is used.
In the output decomposition,
- WS[1] = [w0, Q0], where w0 is an array of weights $\omega_{0,k}$ in (+) and Q0 is an array of polynomials ($q_{0,k}$ in (+)) of degree $\le 2d - \deg(G[i])$
- WS[j+1] is as WS[1], for the weights $\omega_{j,k}$ and polynomials $q_{j,k}$ in (+)
- P[i] is a polynomial of degree $\le 2d - \deg(H[i])$
- v is the maximal smallest eigenvalue of $S_0$ such that $(X^d)^t S_0 (X^d) = \sum_k \omega_{0,k} q_{0,k}^2$ in (+)
- M is the JuMP optimization model
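A usage sketch; the polynomial and constraints are illustrative (x1*x2 equals x1^2 modulo the ideal generated by x1 - x2, so a low-degree certificate exists), and a default optimizer is assumed:

using MomentPolynomialOpt, DynamicPolynomials
X = @polyvar x1 x2
f = x1*x2
H = [x1 - x2]                              # equality constraints
G = [x1]                                   # sign constraints
WS, P, v, M = sos_decompose(f, H, G, X, 2)
# expected shape of a certificate: x1*x2 = x1^2 + (-x1)*(x1 - x2)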
MomentPolynomialOpt.w_cholesky — Function

w, L = w_cholesky(A)

Compute the weighted Cholesky factorisation, so that `L^t diagm(w) L = A`.
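A small numerical sketch checking the documented identity; the matrix is an arbitrary symmetric positive definite example:

using MomentPolynomialOpt, LinearAlgebra
A = [4.0 2.0; 2.0 3.0]             # symmetric positive definite test matrix
w, L = w_cholesky(A)
transpose(L) * diagm(w) * L ≈ A    # should hold by the documented factorisation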