Harry Garst

219 Reputation

5 Badges

16 years, 337 days

MaplePrimes Activity


These are questions asked by Harry Garst


 


restart; with(LinearAlgebra)

kernelopts(version); interface(version)

`Maple 2017.3, X86 64 WINDOWS, Sep 27 2017, Build ID 1265877`

 

`Standard Worksheet Interface, Maple 2017.3, Windows 10, September 27 2017 Build ID 1265877`

(1)



The following equation contains so many regularities that it is tantalizing to look for a compact matrix formulation.
I found a matrix expression, but it seems unnecessarily complex. Is there a Maple procedure that can help me find a more concise matrix formulation?

eq1 := i2*i3*i4*(i2+i3+i4)+i1*i3*i4*(i1+i3+i4)+i1*i2*i4*(i1+i2+i4)+i1*i2*i3*(i1+i2+i3)

i2*i3*i4*(i2+i3+i4)+i1*i3*i4*(i1+i3+i4)+i1*i2*i4*(i1+i2+i4)+i1*i2*i3*(i1+i2+i3)

(2)

expand(eq1)

i1^2*i2*i3+i1^2*i2*i4+i1^2*i3*i4+i1*i2^2*i3+i1*i2^2*i4+i1*i2*i3^2+i1*i2*i4^2+i1*i3^2*i4+i1*i3*i4^2+i2^2*i3*i4+i2*i3^2*i4+i2*i3*i4^2

(3)

V := Matrix(4, 1, [i1, i2, i3, i4])

Matrix(%id = 18446745919887783806)

(4)

one := Matrix(4, 1, 1)

Matrix(%id = 18446745919887784886)

(5)

This matrix expression works, but seems overly complex. Using Maple, is there a way to simplify it?

Trace(MatrixScalarMultiply(one^%T.(V.one^%T-DiagonalMatrix(Diagonal(V.one^%T))).convert(Diagonal(Adjoint(V.one^%T-DiagonalMatrix(Diagonal(V.one^%T)))), Matrix), 1/2))-eq1

0

(6)

Alternatively, but also not very simple:

Trace(DiagonalMatrix(Diagonal(MatrixScalarMultiply(1/(V.one^%T-DiagonalMatrix(Diagonal(V.one^%T))), (1/2)*Determinant(V.one^%T-DiagonalMatrix(Diagonal(V.one^%T)))))).(KroneckerProduct(V^%T.one, IdentityMatrix(4))-DiagonalMatrix(V)))-eq1

0

(7)
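As a cross-check outside Maple: the expanded form of eq1 coincides with e1·e3 − 4·e4, where the e_k are the elementary symmetric polynomials in i1..i4. This identity is my own observation, not taken from the worksheet; a quick SymPy sketch verifies it:

```python
# Cross-check (SymPy, outside Maple): eq1 equals e1*e3 - 4*e4, where the
# e_k are the elementary symmetric polynomials in i1..i4.
from itertools import combinations
import sympy as sp

i1, i2, i3, i4 = sp.symbols('i1 i2 i3 i4')
v = [i1, i2, i3, i4]

eq1 = (i2*i3*i4*(i2 + i3 + i4) + i1*i3*i4*(i1 + i3 + i4)
       + i1*i2*i4*(i1 + i2 + i4) + i1*i2*i3*(i1 + i2 + i3))

e1 = sum(v)                                        # sum of the variables
e3 = sum(a*b*c for a, b, c in combinations(v, 3))  # sum of triple products
e4 = i1*i2*i3*i4                                   # product of all four

print(sp.expand(eq1 - (e1*e3 - 4*e4)))             # 0
```

In matrix terms this only needs one^%T.V (for e1) together with e3 and the determinant-like term e4, which may be simpler than the Adjoint/Trace constructions above.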

Obviously, this does not help:

A, B := LinearAlgebra:-GenerateMatrix([eq1], [x])

Matrix(%id = 18446745919887762006), Vector[column](%id = 18446745919887761886)

(8)



 

Download Matrix_formulation.mw

Dear Maple experts,

As far as I know, premultiplication of matrix A with matrix B is only possible if the number of columns of A equals the number of rows of B (i.e., the matrices are conformable). Not so in Maple: strange.mw

I expected an error message, so that I would get feedback that I had made a mistake.

What is going on?
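For comparison, here is a minimal NumPy sketch of the conformability rule I expected (the shapes are hypothetical, not taken from the attached worksheet):

```python
import numpy as np

# Hypothetical shapes for illustration only.
A = np.ones((2, 3))            # 2 x 3
B = np.ones((4, 5))            # 4 x 5: columns of A (3) != rows of B (4)

try:
    A @ B                      # non-conformable product: should fail
    conformable = True
except ValueError:
    conformable = False

print(conformable)             # False: NumPy enforces conformability

C = np.ones((3, 4))            # 3 x 4: conformable with A
print((A @ C).shape)           # (2, 4)
```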

kind regards,

Harry Garst

I have probably worked too hard, but this result seems strange to me:

In a second example (not shown here, but in the attached file) all goes well. It is probably very simple, but at the moment I had better go for a walk outside.

best regards,

Harry Garst

mapleprimes.mw

The following code results in an error message: Error, (in forget) lexically scoped parameter out of context

If I click on this error message, it brings me to a help page that I have visited too often.

If I uncomment the irrelevant minimize command, the error message disappears.

How can I prevent this error without issuing irrelevant commands?

kind regards,

Harry Garst

Dear Maple experts,

I am struggling with a difference between the symbolic and numerical solution of an eigendecomposition of a symmetric positive definite matrix. Numerically the solution seems correct, but the symbolic solution puzzles me: in the symbolic solution the reconstructed matrix differs from the original matrix (although the difference between the two seems to be related to an unknown scalar multiplier).

restart;
with(LinearAlgebra);
Lambda := Matrix(5, 1, symbol = lambda);
Theta := Matrix(5, 5, shape = diagonal, symbol = theta);
#Ω is the matrix that will be diagonalized.
Omega := MatrixPower(Theta, -1/2) . Lambda . Lambda^%T . MatrixPower(Theta, -1/2);
#Ω is symmetric and in practice always positive definite, but I do not know how to specify the assumption of positive definiteness in Maple
IsMatrixShape(Omega, symmetric);

# the matrix Omega is very simple and Maple finds a symbolic solution
E, V := Eigenvectors(Omega);

# this will not return the original matrix

simplify(V . DiagonalMatrix(E) . V^%T)

# check this numerically with the following values.

lambda[1, 1] := .9;lambda[2, 1] := .8;lambda[3, 1] := .7;lambda[4, 1] := .85;lambda[5, 1] := .7;
theta[1, 1] := .25;theta[2, 2] := .21;theta[3, 3] := .20;theta[4, 4] := .15;theta[5, 5] := .35;

The dot product is not always zero, although I thought that the eigenvectors should be orthogonal.

I know eigenvector solutions may differ by scalar multiples, but here I am not able to understand the differences between the numerical and the symbolic solution.

I probably missed something, but I spent the whole Saturday trying to solve this problem and I cannot find the mistake.
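A NumPy cross-check with the numeric values above (outside Maple): `numpy.linalg.eigh` returns orthonormal eigenvectors for a symmetric input, so V·diag(E)·Vᵀ reconstructs Ω exactly. A symbolic solver that leaves its eigenvectors unnormalized will not, which is consistent with the discrepancy described here:

```python
import numpy as np

# Numeric values from the worksheet.
lam = np.array([0.9, 0.8, 0.7, 0.85, 0.7]).reshape(5, 1)
theta_diag = np.array([0.25, 0.21, 0.20, 0.15, 0.35])

t_inv_half = np.diag(theta_diag ** -0.5)        # Theta^(-1/2)
omega = t_inv_half @ lam @ lam.T @ t_inv_half   # symmetric, rank 1

E, V = np.linalg.eigh(omega)                    # orthonormal eigenvectors
print(np.allclose(V @ np.diag(E) @ V.T, omega)) # True: reconstruction works
print(np.allclose(V.T @ V, np.eye(5)))          # True: columns are orthogonal
```

So the numeric route is fine; for the symbolic solution the eigenvectors would need to be normalized (each column divided by its 2-norm) before V·diag(E)·Vᵀ can return the original matrix.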

I attached both files.

Anyone? Thanks in advance,

Harry

eigendecomposition_numeric.mw

eigendecomposition_symbolic.mw
