Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

Using Stephen Wolfram's "New Kind of Science," we can simply define the derivative with respect to a forced integer argument to be 0, always. Thus, modifying your command slightly, we get

evalf(eval(diff(n*Zeta(n, 3), n), {n= 3, diff= 0})); 
      -0.3740436824

which is what you expected. I'd guess that this is the way that Mathematica views it.

A more-formal implementation of this idea could be made by making a small adjustment to `diff/Zeta`, which is only a 9-line procedure. Specifically, the unevaluated 'diff' in lines 4 and 7 could be replaced by 0. So, a one-line patch is

restart:
`diff/Zeta`:= subs(''diff''= 0, eval(`diff/Zeta`)):

#Now, redo your original command:
evalf(eval(diff(n*Zeta(n, 3), n), n= 3));
      -0.3740436824

 

Your procedure can be simplified to 

#construction of a square given 2 opposite vertices
car_2som_opp:= proc(U::list, V::list)  
local 
   X:= [add(V)+`-`(U[]), add(U)-`-`(V[])]/2, 
   Y:= [add(U)+`-`(V[]), add(V)-`-`(U[])]/2
;
   plot([U, X, V, Y, U], scaling= constrained, axes= none) 
end proc:

So, there's no need for solve, square roots, or distance formulas.
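For example (my own quick sketch, not part of the original Answer), with opposite corners [0, 0] and [2, 2] the procedure computes the other two corners [2, 0] and [0, 2] internally and draws the square:

#Sketch: hypothetical input; the two computed corners are [2, 0] and [0, 2].
car_2som_opp([0, 0], [2, 2]);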

Using the modern Statistics package and other innovations, your computation can be done like this:

restart:
N:= 400:
list_of_k:= [40, 20, 10, 5, 2, 1]:
st1:= Statistics:-Sample(EmpiricalDistribution([-1,1]), N):
st:= (evalf@table)(
   [seq(k= [seq(add(st1[i*k-k+1..k*i]), i= 1..N/k)]/~sqrt(k), k= list_of_k)]
);
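Each entry st[k] is the list of sums of consecutive length-k blocks of the +/-1 sample, scaled by 1/sqrt(k). As a quick sanity check (my addition, a sketch rather than part of the original answer), the central limit theorem says those scaled block sums should have standard deviation near 1:

#Sketch: the scaled block sums for k = 40 should have standard deviation close to 1.
Statistics:-StandardDeviation(evalf(st[40]));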

 

Your two questions, the first about the end-of-parameters marker and the second about LCState, are completely unrelated. But I'll try to answer both.

If the end-of-parameters marker is the only "parameter" of a procedure, then the procedure cannot be passed any arguments; in other words, its only valid argument sequence is NULL, i.e., the empty argument list (); in yet other words, the number of arguments must be 0.

The end-of-parameters marker "works" by making the name ` $` a parameter of the procedure. You can see that via 

P:= proc($) end proc:
lprint(op(1, eval(P))); 
#op(1, ...) of any procedure is its parameter sequence.
` $`

Almost anything enclosed in backquotes is a symbol (which is a kind of name) and a valid procedure parameter. If that's not enough information for you, please try asking a more-specific question. I don't have access to the kernel source code, but a great deal can be deduced about it from black-box experimentation.
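Here is the kind of black-box experiment I mean (a sketch; the exact error message may vary between Maple versions):

P:= proc($) end proc:
P();    #fine: zero arguments, returns NULL
P(5);   #error: too many arguments, because ` $` closes the parameter list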

Regarding LCState: It's not a command; it's just an integer-valued local variable in the module RandomTools:-LinearCongruence. Local variables are usually not publicly documented, and we unfortunately don't have access to comments in the original code. However, I know enough to tell you exactly what it is.

First, the following two commands help you to poke around inside modules:

  • interface(verboseproc= 2):
    This prevents the internal contents of system procedures and modules from being printed as "...".
  • kernelopts(opaquemodules= false):
    This lets you treat module locals as if they were exports, giving you both read and write access. This is a very powerful debugging tool.

Read the help page ?RandomTools,LinearCongruence. There you'll see that each random integer is generated by applying a trivial linear modular arithmetic computation to the previous random integer. To do this, that previous random integer must be stored somewhere. Indeed, it's stored in LCState. That's all that LCState is. In most discussions of random number generators, this state variable is called the "seed".

The following session confirms that what I said above is true:

interface(verboseproc= 2):  kernelopts(opaquemodules= false):
RCLC:= RandomTools:-LinearCongruence:
op(2, eval(RCLC)); 
#op(2, ...) of a module is its "header".

module ()
   local simpleLGInertProc, complexLGInertProc, Prime, Gen, Shift, LCState;
   export GenerateInteger, NewGenerator, GetState, SetState;
   option package, `Copyright (c) 2004 Waterloo Maple Inc. All rights reserved.`;
end module


op([2,2], eval(RCLC)); #Specifically op([2,2], ...) is its locals.
simpleLGInertProc, complexLGInertProc, Prime, Gen, Shift, LCState

RCLC:-LCState;
                               1
RCLC:-GenerateInteger();
                          427419669081
RCLC:-LCState;
                          427419669081
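Since opaquemodules= false also gives write access, you can even reset the "seed" by assigning to LCState; if what I said above is right, this should reproduce the same integer (a sketch, my addition):

RCLC:-LCState:= 1:        #reset the state to its initial value
RCLC:-GenerateInteger();  #expected: 427419669081 again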
 

 

You had another Question about a month ago about another Markov chain with a tridiagonal transition matrix. That matrix was very similar to the current one, although this chain has absorbing states (corresponding to the 1s on its diagonal) and that one did not. In my Answer to that Question, I showed you the easy way to enter a matrix that's organized by its diagonals (called a band matrix) using the scan= band option. You didn't use that here.

It looks like you want to compute the long-run probabilities for this Markov chain. In my previous Answer, I showed how to do that for a chain without absorbing states. The method to do that when there are absorbing states is different. The worksheet below repeats what you did, but with easier coding, and provides a general procedure for the long-run probabilities of an absorbing Markov chain.
 

A tridiagonal absorbing Markov chain

Author: Carl Love <carl.j.love@gmail.com>, 1 Jul 2019

restart:

#your data:
(A,B):= (8,5):
(q,p):= (0.4, 0.2):  r:= 1-q-p:

dim:= A+B-1: #This is more convenient than your actual dimension, A+B+1.

interface(rtablesize= dim+2): #This only affects the display of matrices.
:

A matrix whose structure is based on its diagonals can be easily constructed with the scan= band option to Matrix:

P:= Matrix([[q$dim, 0], [1, r$dim, 1], [0, p$dim]], scan= band[1,1]);

(Output: a 14x14 Matrix. Rows 2..13 have 0.4 on the subdiagonal, 0.4 on the diagonal, and 0.2 on the superdiagonal; rows 1 and 14 have a single 1 on the diagonal; all other entries are 0.)

#your initial state vector:
p0:= Vector[row]([0$A, 1, 0$B]);

Vector[row]([0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])

It is usually better to compute a high power of a matrix and then multiply it by a vector than it is to iteratively multiply by the matrix.

pV:= (n::integer)-> p0.P^n:

evalf~(pV(5), 3);

Vector[row]([0., 0., 0., 0.0102, 0.0512, 0.128, 0.205, 0.230, 0.189, 0.115, 0.0512, 0.0160, 0.00320, 0.000320])

evalf~(pV(200), 3);

Vector[row]([0.969, 1.17e-5, 1.60e-5, 1.61e-5, 1.42e-5, 1.14e-5, 8.54e-6, 6.04e-6, 4.02e-6, 2.50e-6, 1.43e-6, 7.07e-7, 2.57e-7, 0.0311])

Generally, what we really want for an absorbing Markov chain is pV(infinity).  That's surprisingly easy to compute. First, it'll be convenient to have a boolean check that a matrix is stochastic:

TypeTools:-AddType(
   Stochastic,
   P-> P::'Matrix'('square')
      and andmap(`>=`, P, 0)
      and ArrayTools:-IsZero(ArrayTools:-AddAlongDimension(P,2) -~ 1)
)
:   
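For example (my addition, a quick sanity check), the transition matrix P built above should satisfy the new type:

type(P, Stochastic);  #expected: true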

AbsorbingProbabilities:= proc(P::Stochastic)
local Jtrans, Jabs;
   #indices of absorbing and transient states:
   (Jabs, Jtrans):= selectremove(k-> P[k,k]=1, [$rtable_dims(P)[1]]);
   DataFrame(
      LinearAlgebra:-LinearSolve(
         Matrix(nops(Jtrans), 'shape'= 'identity') - P[Jtrans$2],
         P[Jtrans,Jabs]
      ),
      'rows'= cat~(`T `, Jtrans), 'columns'= cat~(`A `, Jabs)
   )
end proc
:

The above procedure returns a DataFrame (essentially a Matrix with nonstandard indices) whose (i,j) entry is the probability of being absorbed into state j, having started in state i. So, the rows are indexed by the transient states, and the columns by the absorbing states. The computation is the standard "fundamental matrix" formula B = (I - Q)^(-1).R, where Q = P[transient, transient] and R = P[transient, absorbing]; the LinearSolve call solves (I - Q).B = R rather than forming the inverse explicitly.

B:= AbsorbingProbabilities(P);

(DataFrame output, entries shown to 7 digits:)

              A 1          A 14
   T 2     0.9998779    0.0001221
   T 3     0.9996337    0.0003663
   T 4     0.9991454    0.0008546
   T 5     0.9981687    0.0018313
   T 6     0.9962154    0.0037846
   T 7     0.9923086    0.0076914
   T 8     0.9844952    0.0155048
   T 9     0.9688683    0.0311317
   T 10    0.9376145    0.0623855
   T 11    0.8751068    0.1248932
   T 12    0.7500916    0.2499084
   T 13    0.5000610    0.4999390

Compare the T 9 row in B (immediately above) with the first and last entries of pV(200)  (previously computed).

evalf~(B[cat(`T `,A+1), ..], 3);

(DataSeries output:)

   A 1     0.969
   A 14    0.0311

 



 

Download AbsorbingMarkov.mw


Your code works for me in Maple 2019. But its working depends on x and y (lowercase) being unassigned. To fix that, you should add x, y, and i to the local list.

Also, display does nothing in this case and isn't needed. Just remove display. The purpose of display is to combine multiple plots or to change the options of an existing plot. Since you're not doing either of those things, only the plot command is needed.
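Here is a minimal sketch of that advice (a hypothetical procedure, not your actual code): declare every plotting variable local and return the plot directly, with no display wrapper.

MyPlot:= proc(n::posint)
local i;  #all names used only inside the procedure are declared local
   plot([seq([i, sin(i)], i= 1..n)], style= point)
end proc:
MyPlot(10);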

Here is another way. It's similar to Joe's, but I prefer the elegant syntax of mine. I also do the exact example that you gave, doing it both in D form and in diff form.
 

This is my matrix-composition operator:

`&&@`:= (A::Matrix, B::And(Matrix, satisfies(B-> upperbound(B)[1]=upperbound(A)[2])))->
   Matrix(
      (upperbound(A)[1], upperbound(B)[2]),
      (i,j)-> add(zip(`?()`, A[i,..], `[]`~(B[..,j])))
   )
:   
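As a tiny sanity check (a sketch with hypothetical operators f and g, not part of the original worksheet): entry (i,j) of A &&@ B is the sum over k of the operator A[i,k] applied to the entry B[k,j].

f:= (x,y)-> x^2*y:  g:= (x,y)-> x*y^2:
A1:= Matrix([[D[1], D[2]]]):  #1x2 row of differential operators
B1:= Matrix([[f], [g]]):      #2x1 column of operators to act on
(eval@apply)~(A1 &&@ B1, (x,y));  #expected: the 1x1 Matrix [4*x*y]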

Now, I'll apply it to your example. First, construct the matrix of differential operators in D form:

Df:= (j-> `if`(j=0, 0, D[args]))~(
   < 1, 0;  #x will be 1st position; y second.
     0, 2;
     2, 1 >
);

[ D[1]    0   ]
[  0     D[2] ]
[ D[2]   D[1] ]

...and your matrix of the functions to be operated on:

Ns:= Matrix((2,12), (i,j)-> `if`((j-i)::even, N__||(1+(j-i)/2), 0));

(Output: the 2x12 Matrix whose first row is N__1, 0, N__2, 0, ..., N__6, 0 and whose second row is 0, N__1, 0, N__2, ..., 0, N__6.)

...and the specific functions that you gave for some of those:

N__1:= (x,y)-> 1 - x/2 - 3/4 * y + x * y/6 + x^2/18 + y^2/8:
N__2:= (x,y)-> -x/6 + x^2/18:
N__3:= (x,y)-> -y/4 + y^2/8:
N__4:= (x,y)-> x*y/6:
:

The following returns the result of the composition as a Matrix of algebraic expressions:

(eval@apply)~(Df &&@ Ns, (x,y));

(Output: the resulting 3x12 Matrix of algebraic expressions; not reproduced here.)

Surprisingly, the exact same composition operator can be used if the differential operators are in diff form:

Df:= (v-> `if`(v=0, 0, rcurry(diff, args)))~(
   < x, 0;
     0, y;
     y, x >
);

[ ()-> diff(args, x)           0          ]
[         0            ()-> diff(args, y) ]
[ ()-> diff(args, y)   ()-> diff(args, x) ]

Df &&@ apply~(Ns, (x,y));

(Output: a 3x12 Matrix of expressions; not reproduced here.)

(The following example requires Maple 2019 or later.)

The composition operator can be used associatively if the matrices of differentials are square. To show an example, I'll just take two square submatrices from Df:

Df[..2, ..] &&@ Df[2.., ..] &&@ apply~(Ns, (x,y));

(Output: a 2x12 Matrix of expressions; not reproduced here.)

 


 

Download DifferentialMatrixCompositions.mw

If procedure f1 is passed into procedure f2, then it's often useful to extract f1's parameters while in f2. That can be done with op(1, eval(f1)), which I use in my procedure DP below. That op command will return the parameters with their type specifications, if any were provided. These can be removed to obtain a list of just the names of the parameters like this:

(p-> `if`(p::`::`, lhs(p), p))~([op(1, eval(F))])
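For example (a sketch with a hypothetical procedure G):

G:= proc(a::numeric, b) a+b end proc:
op(1, eval(G));                                      #a::numeric, b
(p-> `if`(p::`::`, lhs(p), p))~([op(1, eval(G))]);   #[a, b]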

I recommend that you not try to use an Array as the input parameter to a procedure formed from a symbolic algebraic expression, such as you do with Ff. So, I'd make Ff

Ff:= (x,y)-> (sin(x)+20)^2 + (cos(y)+5)^2;

Then my DP returns the gradient as a procedure:

DP:= proc(F::procedure, N::posint:= infinity)
local P:= (p-> `if`(p::`::`, lhs(p), p))~([op(1, eval(F))]), n:= min(nops(P), N);
   unapply(VectorCalculus:-Vector(n, k-> D[k](F)(P[..n][])), P[..n])
end proc:

So that you can use it to get a symbolic gradient:

DP(Ff)(x,y)

or an evaluated gradient:

DP(Ff)(Pi/3, Pi/4)

So, the usage of DP is very much like D.
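As a quick check (my addition, not part of the original Answer), the symbolic gradient returned by DP agrees with differentiating the expression directly:

#Sketch: this difference should simplify to [0, 0].
simplify(convert(DP(Ff)(x,y), list) -~ [diff(Ff(x,y), x), diff(Ff(x,y), y)]);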

The second (optional) parameter of DP is an integer n that specifies that only the first n parameters of F are to be treated as independent variables in the sense of calculus; if n is omitted, then all are used, which should cover the vast majority of cases.

If you really need Ff to take its parameters in an Array, let me know why, and I'll come up with a solution.

This graph seems generic enough that it's worth generalizing to any size. Thus:

Truss:= proc(n::And(posint, even))
uses GT= GraphTheory;
local
   k, p, 
   B:= k-> seq(k+~p, p= ({-1,0}, {-1,1}, {0,1}, {0,2})),
   T:= GT:-Graph({seq(B(k), k= 2..n-2, 2), n-~{1,0}})
; 
   GT:-SetVertexPositions(T, [seq([[k,1],[k,0]][], k= 1..n/2)]); 
   T
end proc
:   

GraphTheory:-DrawGraph(Truss(8));
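A quick way to check that the construction scales correctly (my addition, a sketch): a truss built this way on n vertices should have 2*n - 3 edges.

GraphTheory:-NumberOfEdges(Truss(8));  #expected: 13 = 2*8 - 3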

Your C program is a top-level 'main' program which interacts directly with the user and doesn't explicitly return any nontrivial value. You need to change that to a function which takes arguments (specifically, at least one Matrix argument) and returns a value (a Matrix) and does NOT interact with the user[*1]. See the help page ?external_calling. The example mat_mult given on that page is very close to what you need.

[*1] Possibly there is some way that I'm not aware of that the external program can interact with the user while running in Maple, but this would be some unusual and advanced case.

Any rtable (which includes Vectors, Matrices, and Arrays) can be converted to a sequence simply with seq(V). I think that you're having some confusion about the distinction between sequences and lists (which is, admittedly, a minor and idiosyncratic distinction). If I say

S:= 1, 2, 3;

then S is a sequence with three entries. If I say

L:= [1, 2, 3];

or

L:= [S];

then L is a list with three entries. The most-important difference (and it may be the only significant difference) between a sequence and a list is that a sequence cannot ordinarily be passed as a single argument to a single procedure parameter (there are a few tricks to get around even this).

So, your n__a is a sequence of Vectors. It can be converted to a list of lists by

[seq]~([n__a]);
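For example (a sketch with a made-up n__a):

n__a:= Vector([1, 2]), Vector([3, 4]):
[seq]~([n__a]);   #expected: [[1, 2], [3, 4]]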

As Tom Leslie has pointed out, there is no way to specify the order of set elements. They are stored in a specific (and predictable) order that is convenient for Maple. Thus, it seems that sets aren't at all suitable for your purpose, so I won't discuss them further in this Answer.

Anything that you can do to narrow the search space will help fsolve (this does not apply to solve). In this case, a little bit of plotting should convince you that there is a negative solution. Simply providing that information to fsolve is enough:

params:= {
    k = 1.3806e-23,
    T = 300,
    h__bar = 1.05456e-34,
    L__z = 6e-9,
    m__hhw = 3.862216e-31,
    E__hh0 = 3.012136e-21,
    E__hh1 = 1.185628e-20
}:
p:= k*T*m__hhw*(ln(1+exp(E__fv-E__hh0)/(k*T))+
  ln(1+exp(E__fv-E__hh1)/(k*T)))/(Pi*h__bar^2*L__z) = 0.3e25;

fsolve(eval(p, params),
E__fv= -infinity..0);
                          -48.46001884
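The plot that I had in mind is something like this (a sketch using the same params; the left endpoint -100 is just a guess wide enough to show the sign change near -48):

plot(eval((lhs - rhs)(p), params), E__fv= -100..0);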
 

@mehdi jafari There are two completely separate issues involved in the cases that you show where simplify(...) assuming real doesn't return what you think it should. In the first case, you use

simplify(..., sqrt) assuming real

In this case, it's the sqrt that's preventing the simplification that you want, and this would still be true even if the input were a single algebraic expression rather than a Matrix. Instead, change your command to

simplify(...) assuming real

I don't consider the above to be a bug, or even a weakness.

In your second case, you use

simplify(S_NEW(2,2)) assuming real

where S_NEW is a Matrix. Here, you are using the so-called programmer indexing, i.e., you put the indices (2,2) in parentheses rather than square brackets, [2,2]. This case does seem to be a weakness of assuming (it's not a weakness of simplify); however, I recommend that you always index with square brackets unless you specifically know that programmer indexing is required for your operation (such a situation is rare). So, if you change the above to

simplify(S_NEW[2,2]) assuming real

then it'll work.

The command that you're thinking of is IterativeMaps:-Bifurcation. See its help page. However, I am confused by your usage of an equation rather than an expression.

When a worksheet (or document) is viewed as a slideshow (via menu View => Slideshow), its sections become the slides. This means that you need to make each section small enough that it fits on one screen. So, if you need to present a slideshow, it'd be wise to first perfect your edit of the flat (un-sectioned) worksheet, then divide it into sections. Also, the presentation is fixed slides rather than executable code. Generally, when I give a presentation, I prefer to execute key parts of the code directly for my audience. That helps maintain their attention.

This seems to me to be a very weak feature of Maple.
