Carl Love

28085 Reputation

25 Badges

13 years, 96 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

As you may know, the matrix of all 2nd partial derivatives is
called the Hessian, and a critical point may be classified
by checking the signs of the Hessian's eigenvalues evaluated at that
point. This works for smooth real-valued functions of any number
of real variables, and it's a slight generalization of the
multivariate 2nd-derivative test given in most elementary calculus
textbooks (which only works for bivariate functions).

D2test:= proc(f::algebraic, v::list(name), crit::listlist(realcons))
uses LA= LinearAlgebra, VC= VectorCalculus;
local H:= unapply(VC:-Hessian(f, v), v[]), pt, R, E;
    for pt in crit do
        E:= {seq}(LA:-Eigenvalues(Matrix(evalf(H(pt[])), 'shape'= 'symmetric')));
        R[pt]:= if 0 in E then FAIL
            elif (min*max)(E) < 0 then "saddle point"
            elif E[1] < 0 then "maximum"
            else "minimum"
        fi
    od;
    <entries(R, 'pairs')>
end proc
:
f:= 4*y^3+x^2-12*y^2-36*y+x^3+2:  v:= [x,y]:
crit:= (rhs~)~(solve({seq}(VectorCalculus:-Gradient(f,v)), v));
D2test(f, v, crit);
plots:-display(
    plot3d(f, (v=~ -2..4)[]),
    plots:-pointplot3d([seq]([p[], eval(f, v=~ p)], p= crit), color= red, symbolsize= 16),
    seq(plots:-spacecurve([p[], eval(f, v=~ p)+t], t= -20..20, color= red), p= crit),
    orientation= [60, 50, 10]
);

You are overgeneralizing about what is "safer". In this context, it's better to use indexed variables:

my_vector:= Vector(3, symbol= v);

The vector's entries will now be v[1], v[2], v[3].

If you want subscripted variables, you can use the || operator:

Vector([v__||(1..3)]);

This is not safer, because variables created with || (or cat, nprintf, or parse) are necessarily global.

The entirety of the help for __ would amount to one or two sentences: The underscore character is treated exactly like any letter of the alphabet, and symbols with an embedded __ are prettyprinted with a subscript. That's all there is to know about it. It's a feature of the GUI display only, not of the Maple language. In the language, subscripting has no significance whatsoever, so you could just as well use Vector([v||(1..3)]).

Therefore, your v__i is seen by Maple as no different from, say, vabi. Thus, of course, the value of i, if any, has no connection to it.
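A quick sketch of a session illustrates this (the name v__7 here is just for comparison):

```maple
i:= 7:
v__i;               # a single symbol, displayed as v with subscript i; the 7 is not substituted
evalb(v__i = v__7); # false -- v__i and v__7 are distinct symbols
```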

Data:= [[0,0,0], [5,.2,.1], [10,.25,.2], [18,.3,.3], [25,.4,.4], [32,.6,.45],
    [38,.72,.5], [43,.6,.4], [47,.3,.3], [50,0,0]]:
Pts:= [seq](k[1..2]*~[1,-1], k= Data): n:= nops(Pts):
plot(
    [Pts$2, seq([[k[1],0], k], k= Pts)],     
    filled= [true, false$(n+1)], color= [cyan, blue, black$n], thickness= [3$2, 1$n],
    labels= [x*Unit(m), depth*Unit(m)], title= "       River cross section\n"
);

(P,R):= ([0,0], rand(1..4)):  plot([P, (to 10000 do P+= [[-1,0],[1,0],[0,-1],[0,1]][R()] od)]);

I think that it's best to keep units separate from the start by using the Unit(...) operator. Then they will stay separate in the result.

epsilon__0:= 8.8541878128e-12*Unit(F/m);
solve(150*Unit(V/m) = sigma/epsilon__0, sigma);

I could've also extracted epsilon[0] from the ScientificConstants package.

Like this:

Download Markov_chain.mw

P:= <0.6199, 0.1450, 0.0636, 0.1549, 0.0166;
     0.6550, 0.0002, 0.0870, 0.1827, 0.0751;
     0.2990, 0.1294, 0.0010, 0.4925, 0.0781;  
     0.2773, 0.7184, 0.0043, 0,      0;
     0.0009, 0.6229, 0.3762, 0,      0
>:
V:= [S, I__1, I__2, I__3, R]:
n:= nops(V):
GT:= GraphTheory:  LA:= LinearAlgebra:
MC:= GT:-Graph(
    V,
    {seq(seq([[V[i],V[j]], P[i,j]], i= 1..n), j= 1..n)}
);
GT:-DrawGraph(MC);
#Find long-term probability of being in each state:
V =~ LA:-LinearSolve(
    <P-Matrix(n, shape= identity) | <seq(1, 1..n)>>^+,
    <seq(0, 1..n), 1>
);

[S = 0.535156546280454, I__1 = 0.215609042590436, I__2 = 0.0648612328168834, I__3 = 0.154231678262430, R = 0.0301415000497959]

 

Like this:

plot3d(
    [[r, theta, r], [r, theta, 0]], r= 0..1, theta= 0..Pi, 
    coords= cylindrical, scaling= constrained
);

The 3D version of polar coordinates is coords= cylindrical. Making the 3rd coordinate 0 in the 2nd plot restricts it to the xy-plane, so you get the region of integration.

Explicit returns of true or false are almost always redundant. The following is equivalent to VV's procedure, but in boolean-operator form:

restart:
TypeTools:-AddType(
    ElementaryMatrix,
    A-> A::'Matrix'('square') and (
            nops((local Z:= op(2, A-1))) > 1 implies
                nops((local j:= [lhs~(Z)[]])) = 2 and
                Z = {j[]=1, j[[2,1]][]=1, (j[1]$2)=-1, (j[2]$2)=-1}
        )
):
M:= <0,1,0,0; 1,0,0,0; 0,0,1,0; 0,0,0,1>:
type(M, ElementaryMatrix);
                              true

I included the identity matrix as an elementary matrix. If this is not desired, change > 1 to <> 1.

Here's much shorter code to do the check that you want, where M is the matrix:

remove(
    type, 
    op(2, M), 
    anything= And(complexcons, satisfies(x-> is(x, ComplexRange(-90,90))))
)[];

 

To get just the numbers, you can do

rhs~(cozum)[]

but they will not be in the same order.

To retain the order, do

rhs~([cozum[]])[]

To get the number associated with a particular label, do

eval(g[-1], cozum)

Like Mariusz, I recommend that you build upon Maple's own laplace command. But unlike Mariusz, I recommend that unevaluated laplace transforms in the result (which will come from transforming unknown functions such as f(x) and its derivatives) be converted to unevaluated Elziki transforms. Like this:

Elziki:= (f::algebraic, t::name, v::name)->
    (expand@evalindets)(        
        v*inttrans:-laplace(f, t, 1/v),
        specfunc(:-laplace),
        L-> 'Elziki'(op(1..2, L), 1/op(3,L))*op(3,L)
    )
:
Elziki(diff(f(x),x$2) - diff(f(x),x) + 2*f(x), x, v);

Thus the use of laplace is transparent to the end user.

The problem is that the summand is evaluated before k has a value. This doesn't happen if you use the command add:

add(diff(M[1,1], x[k]) + diff(M[1,k], x[1]) - diff(M[k,1], x[1]), k= 1..2)

When you've seen LinearAlgebra algorithms labeled as FractionFree, that refers to the internal workings of the algorithm, not to its final output. If you have a matrix of rational numbers, the probability that its eigenvectors can be expressed as rational numbers is infinitesimal, so the example that you show is very special. And, as you've noted, it's easy to put a vector into the fraction-free form. Indeed, it's trivial:

FF:= v-> (rationalize@expand)~(mul(denom~(v))*v):
M:= LinearAlgebra:-RandomMatrix(3);
(FF@op@op)~(3, LinearAlgebra:-Eigenvectors(M, output= list))[];

 

The nesting depth to which statement output is printed in the absence of an explicit print statement is controlled by the environment variable printlevel (see ?printlevel). Its default value is 1.
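For example, here is a minimal sketch (each level of loop or if nesting adds 1 to a statement's level, and each level of procedure call adds 5, per ?printlevel):

```maple
p:= proc(x) local y; y:= x^2; y+1 end proc:
printlevel:= 5:  # also display statements inside procedures called from the top level
p(3);            # the intermediate assignment to y is now displayed along with the result
printlevel:= 1:  # restore the default
```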

The command parse will interpret a string as if it were Maple code. So, if S is the string you gave, then U:= parse(S) gives the equivalent expression, with units.
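A minimal sketch (the string here is just a stand-in for yours):

```maple
S:= "3.7*Unit(m/s)":
U:= parse(S);  # U is now the expression 3.7*Unit(m/s), with the unit intact
```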
