Carl Love
Wayland, Massachusetts, United States

MaplePrimes Activity

These are answers submitted by Carl Love

Use t= -15. .. 15. For t from -7 to 7, display a message such as "invalid t value" on a blank background by using a textplot.
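For the textplot part, a minimal sketch (the coordinates and exact wording here are arbitrary) would be something like

plots:-textplot([0, 0, "invalid t value"], axes= none);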

Here's another way that I thought was nifty. Regarding efficiency, I don't know how it compares to the others, but I'd be surprised if it was significantly worse.

F:= proc(
    p::polynom(anything,x),
    x::name, 
    n::depends(And(posint, Not(integer[1..d-1]))), 
    d:= degree(p,x)
)
    Matrix((n-d, n), proc(k) option remember; coeff(p, x, k) end proc@(-`-`)) # (i,j) entry is coeff(p, x, j-i)
end proc
:
P:= randpoly(x);
F(P, x, 10);

 

The job can be done by

remove(evalb, sol)

To use subsindets:

subsindets(sol, And(`=`, satisfies(evalb)), e-> ())

Since I don't recommend applyrule, someone else will need to show that.
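For instance, with a hypothetical solution set that contains a trivially true equation, either form strips it out:

sol:= {x = 2, y = y}:
remove(evalb, sol);                                   # {x = 2}
subsindets(sol, And(`=`, satisfies(evalb)), e-> ());  # {x = 2}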

The most commonly used definition for the "length", "magnitude", or "norm" of a finite-dimensional vector is the Euclidean norm (the one that you show). However, there are other possibilities that are useful for various mathematical purposes, such as the measurement of error in a calculation. For any p >= 1, the p-norm of v is sum(abs(v[i])^p, i= 1..n)^(1/p). So the Euclidean norm is the 2-norm, and this can be returned from LinearAlgebra by Norm(v, 2) or Norm(v, Euclidean). Taking the limit as p --> infinity gives the element of maximal absolute value; this is called the infinity-norm. AFAIK, the only commonly used values of p are 1, 2, and infinity. However, all these p-norms generate the same topology. The command Norm will also accept 0 <= p < 1 or p= -infinity. These are not norms in the usual sense of that word, although the computational formula is still defined.
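For example, a quick check of the three common norms with the vector <3, -4>:

v:= <3, -4>:
LinearAlgebra:-Norm(v, 1);         # 1-norm: |3| + |-4| = 7
LinearAlgebra:-Norm(v, 2);         # Euclidean norm: sqrt(3^2 + 4^2) = 5
LinearAlgebra:-Norm(v, infinity);  # infinity-norm: max(|3|, |-4|) = 4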

As you may know, the matrix of all 2nd partial derivatives is called the "Hessian", and a critical point may be classified by checking the signs of the Hessian's eigenvalues evaluated at that point. This works for smooth real-valued functions of any number of real variables, and it's a slight generalization of the multivariate 2nd-derivative test given in most elementary calculus textbooks (which only works for bivariate functions).

D2test:= proc(f::algebraic, v::list(name), crit::listlist(realcons))
uses LA= LinearAlgebra, VC= VectorCalculus;
local H:= unapply(VC:-Hessian(f, v), v[]), pt, R, E;
    for pt in crit do
        E:= {seq}(LA:-Eigenvalues(Matrix(evalf(H(pt[])), 'shape'= 'symmetric')));
        R[pt]:= if 0 in E then FAIL
            elif (min*max)(E) < 0 then "saddle point"
            elif E[1] < 0 then "maximum"
            else "minimum"
        fi
    od;
    <entries(R, 'pairs')>
end proc
:
f:= 4*y^3+x^2-12*y^2-36*y+x^3+2:  v:= [x,y]:
crit:= (rhs~)~(solve({seq}(VectorCalculus:-Gradient(f,v)), v));
D2test(f, v, crit);
plots:-display(
    plot3d(f, (v=~ -2..4)[]),
    plots:-pointplot3d([seq]([p[], eval(f, v=~ p)], p= crit), color= red, symbolsize= 16),
    seq(plots:-spacecurve([p[], eval(f, v=~ p)+t], t= -20..20, color= red), p= crit),
    orientation= [60, 50, 10]
);

You are overgeneralizing about what is "safer". In this context, it's better to use indexed variables:

my_vector:= Vector(3, symbol= v);

The vector's entries will now be v[1], v[2], v[3].

If you want subscripted variables, you can use the || operator:

Vector([v__||(1..3)]);

This is not safer, because variables created with || (or cat, nprintf, or parse) are necessarily global.
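A small sketch of the pitfall (the procedure here is hypothetical, purely to illustrate the point):

f:= proc() local v__1;  v__1:= 99;  Vector([v__||(1..1)]) end proc:
f();  # contains the unassigned global v__1, not the local value 99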

The entirety of help for __ would amount to one or two sentences. The underscore character is treated exactly like any letter of the alphabet. Symbols containing a double underscore are prettyprinted with the part after the __ shown as a subscript. That's all there is to know about it. It's a feature of the GUI display only, not of the Maple language. In the language, subscripting has no significance whatsoever, so you could just as well use Vector([v||(1..3)]).

Therefore, your v__i is seen by Maple as a single indivisible symbol, no different from a name such as vabi. Thus, of course, the value of i, if any, has no connection to v__i.
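A minimal illustration (the names are arbitrary):

i:= 3:
v__i;  # remains the single symbol v__i; the value of i is not substituted
v[i];  # an indexed name: this evaluates to v[3]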

Data:= [[0,0,0], [5,.2,.1], [10,.25,.2], [18,.3,.3], [25,.4,.4], [32,.6,.45],
    [38,.72,.5], [43,.6,.4], [47,.3,.3], [50,0,0]]:
Pts:= [seq](k[1..2]*~[1,-1], k= Data): n:= nops(Pts):
plot(
    [Pts$2, seq([[k[1],0], k], k= Pts)],     
    filled= [true, false$(n+1)], color= [cyan, blue, black$n], thickness= [3$2, 1$n],
    labels= [x*Unit(m), depth*Unit(m)], title= "       River cross section\n"
);

#Plot a 10000-step random walk on the integer lattice, starting at [0,0]:
(P,R):= ([0,0], rand(1..4)):  plot([P, (to 10000 do P+= [[-1,0],[1,0],[0,-1],[0,1]][R()] od)]);

I think that it's best to keep units separate from the start by using the Unit(...) operator. Then they will stay separate in the result.

epsilon__0:= 8.8541878128e-12*Unit(F/m);
solve(150*Unit(V/m) = sigma/epsilon__0, sigma);

I could've also extracted epsilon[0] from the ScientificConstants package.
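For instance, a sketch of that route (assuming the constant's symbol in ScientificConstants is epsilon[0]):

ScientificConstants:-GetValue(ScientificConstants:-Constant(epsilon[0]));  # numeric value only
evalf(ScientificConstants:-Constant(epsilon[0], units));                   # value with its unit attached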
 

Like this:
 

Download Markov_chain.mw

P:= <0.6199, 0.1450, 0.0636, 0.1549, 0.0166;
     0.6550, 0.0002, 0.0870, 0.1827, 0.0751;
     0.2990, 0.1294, 0.0010, 0.4925, 0.0781;  
     0.2773, 0.7184, 0.0043, 0,      0;
     0.0009, 0.6229, 0.3762, 0,      0
>:
V:= [S, I__1, I__2, I__3, R]:
n:= nops(V):
GT:= GraphTheory:  LA:= LinearAlgebra:
MC:= GT:-Graph(
    V,
    {seq(seq([[V[i],V[j]], P[i,j]], i= 1..n), j= 1..n)}
);
GT:-DrawGraph(MC);
#Find long-term probability of being in each state:
V =~ LA:-LinearSolve(
    <P-Matrix(n, shape= identity) | <seq(1, 1..n)>>^+,
    <seq(0, 1..n), 1>
);

[S = 0.535156546280454, I__1 = 0.215609042590436, I__2 = 0.0648612328168834, I__3 = 0.154231678262430, R = 0.0301415000497959]

 

Like this:

plot3d(
    [[r, theta, r], [r, theta, 0]], r= 0..1, theta= 0..Pi, 
    coords= cylindrical, scaling= constrained
);

The 3D version of polar coordinates is coords= cylindrical. Making the 3rd coordinate 0 in the 2nd plot restricts it to the xy-plane, so you get the region of integration.
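For comparison, a sketch of the 2D analogue with coords= polar, which draws just the outer arc of that half-disk:

plot([1, theta, theta= 0..Pi], coords= polar, scaling= constrained);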

Explicit returns of true or false are almost always redundant. The following is equivalent to VV's procedure, but in boolean-operator form:

restart:
TypeTools:-AddType(
    ElementaryMatrix,
    A-> A::'Matrix'('square') and (
            nops((local Z:= op(2, A-1))) > 1 implies
                nops((local j:= [lhs~(Z)[]])) = 2 and
                Z = {j[]=1, j[[2,1]][]=1, (j[1]$2)=-1, (j[2]$2)=-1}
        )
):
M:= <0,1,0,0; 1,0,0,0; 0,0,1,0; 0,0,0,1>:
type(M, ElementaryMatrix);
                              true

I included the identity matrix as an elementary matrix. If this is not desired, change > 1 to <> 1.

Here's much shorter code to do the check that you want, where M is the matrix:

remove(
    type, 
    op(2, M), 
    anything= And(complexcons, satisfies(x-> is(x, ComplexRange(-90,90))))
)[];

 

To get just the numbers, you can do

rhs~(cozum)[]

but they will not necessarily be in the same order as the equations in cozum.

To retain the order, do

rhs~([cozum[]])[]

To get the number associated with a particular label, do

eval(g[-1], cozum)
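For instance, with a made-up solution set cozum (labels and values chosen only for illustration):

cozum:= {g[1] = 0.25, g[2] = 0.80, g[3] = 0.10}:
rhs~(cozum)[];      # the numbers, re-sorted as a set, so the pairing with labels is lost
rhs~([cozum[]])[];  # the numbers in the same order as the equations are displayed
eval(g[2], cozum);  # the number for one particular label: 0.80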

Like Mariusz, I recommend that you build upon Maple's own laplace command. But unlike Mariusz, I recommend that unevaluated laplace transforms in the result (which will come from transforming unknown functions such as f(x) and its derivatives) be converted to unevaluated Elziki transforms. Like this:

Elziki:= (f::algebraic, t::name, v::name)->
    (expand@evalindets)(        
        v*inttrans:-laplace(f, t, 1/v),
        specfunc(:-laplace),
        L-> 'Elziki'(op(1..2, L), 1/op(3,L))*op(3,L)
    )
:
Elziki(diff(f(x),x$2) - diff(f(x),x) + 2*f(x), x, v);

Thus the use of laplace is transparent to the end user.
