Venkat Subramanian

MaplePrimes Activity


These are replies submitted by Venkat Subramanian.

@aylin 

 

Aylin, as shown in the paper, the control variables are given by min/max expressions, which are obtained by solving the stationarity conditions from the costate (adjoint) equations and then enforcing the bounds, if you were to write them out.
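For instance, if the bounds on the controls were 0 and 1 (an assumed choice here; use whatever bounds the paper actually specifies), the clipped optimality conditions could be sketched in Maple along these lines, using the stationary expressions sol_u1 and sol_u2 derived in the worksheet further below:

# hedged sketch: clip the unconstrained stationary point of H to assumed bounds 0..1
u1_opt := min(1, max(0, sol_u1));   # sol_u1 = beta*x[1]*x[3]*(psi[1](t)-psi[2](t))/A[1]
u2_opt := min(1, max(0, sol_u2));   # sol_u2 = -psi[3](t)*k*x[2]/A[2]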

 

Once you write down the u expressions, it is just a simulation and is easy to solve. The paper is trying to solve a multi-objective problem (by adding weights to different objectives). The problem is badly scaled (in the original Maple code). As of now, the major contribution to the objective comes from x[1], not from u[1] and u[2]. If x[1] is not changing much, optimizing just the other two terms might give faster convergence.

Thanks for this problem; when I find time, I will post a solution for it.

@aylin 

You are welcome. Based on the discussion so far, it is fair to say that you are new to this field. There are two main approaches to solving these problems: indirect methods (based on Pontryagin's principle) and direct methods. For unconstrained, unbounded problems the indirect approach is easy to apply; for bounded problems, you have to add costate/multiplier terms for each bound on every state and control variable.

Direct methods can be of three types: control vector parameterization, multiple shooting, and simultaneous optimization. If you care only about the solution, I can write something up for the direct methods and email it to you.

 

 

@ 

Once we have a meaningful profile for x[1], x[2], x[3] with some optimal constant values for u[1], u[2], one can provide this as an initial guess for the BVP by using approxsoln = sol0 (where sol0 is a good initial guess from the IVP solver).
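As a rough sketch (bvp_sys is a hypothetical name for the correctly posed state/costate boundary-value system, and sol0 is the parameterized IVP solution constructed further below), the call might look like:

# hedged sketch: reuse the IVP solution as the starting guess for the BVP solver
sol_bvp := dsolve(bvp_sys, numeric, method = bvp[midrich],
                  approxsoln = sol0, maxmesh = 1024);

If I recall correctly, approxsoln can also be given as a list of guess equations, which is useful for supplying starting profiles for the costates psi[i](t) that the IVP run does not provide.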

@aylin 

 

Is there a physical meaning for x[1], x[2], x[3]? Are they expected to be positive? I wrote a short piece of code in which I solved only the original ODEs, with u[1] and u[2] as parameters (not changing with time). Increasing u[1] and u[2] seems to increase your objective, but x[2] becomes negative.

 

If x[1], x[2], x[3] are expected to be positive, then you need to add adjoint or costate variables for that. You might also want to scale the variables and the objective so that they are all of order 1; right now they vary over a wide range.
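One way to do the scaling (a sketch only; the factors 1.7e8 for x[1] and 400 for x[3] come from the initial conditions of this problem, while 1.0e6 for x[2] is an arbitrary assumed magnitude) is to change variables, e.g. with PDEtools:-dchange:

# hedged sketch: nondimensionalize so the scaled states are O(1)
scl  := {x[1](t) = 1.7e8*X1(t), x[2](t) = 1.0e6*X2(t), x[3](t) = 400*X3(t)};
eq1s := PDEtools:-dchange(scl, eq1, [X1(t), X2(t), X3(t)]);
# repeat for the other equations, then divide each one by its scale factor so
# the left-hand side is just diff(Xi(t), t)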

 

restart:
unprotect('gamma');

L:=x[1]-A[1]*(u[1])^2/2-A[2]*(u[2])^2/2;

L := x[1]-(1/2)*(A[1]*u[1]^2)-(1/2)*(A[2]*u[2]^2)

(1)

H:=L+psi[1](t)*(lambda-mu*x[1]-(1-u[1])*beta*x[1]*x[3]+delta*x[2])+psi[2](t)*((1-u[1])*beta*x[1]*x[3]-sigma*x[2])
+psi[3](t)*((1-u[2])*k*x[2]-gamma*x[3]);

H := x[1]-(1/2)*(A[1]*u[1]^2)-(1/2)*(A[2]*u[2]^2)+psi[1](t)*(lambda-mu*x[1]-(1-u[1])*beta*x[1]*x[3]+delta*x[2])+psi[2](t)*((1-u[1])*beta*x[1]*x[3]-sigma*x[2])+psi[3](t)*((1-u[2])*k*x[2]-gamma*x[3])

(2)

du1:=diff(H,u[1]);

du1 := -A[1]*u[1]+psi[1](t)*beta*x[1]*x[3]-psi[2](t)*beta*x[1]*x[3]

(3)

du2:=diff(H,u[2]);

du2 := -A[2]*u[2]-psi[3](t)*k*x[2]

(4)

ddu1:=-A[1]*u[1]+psi[1](t)*beta*x[1]*x[3]-psi[2](t)*beta*x[1]*x[3];

ddu1 := -A[1]*u[1]+psi[1](t)*beta*x[1]*x[3]-psi[2](t)*beta*x[1]*x[3]

(5)

ddu2:=-A[2]*u[2]-psi[3](t)*k*x[2];

ddu2 := -A[2]*u[2]-psi[3](t)*k*x[2]

(6)

sol_u1 := solve(ddu1, u[1]);

sol_u1 := beta*x[1]*x[3]*(psi[1](t)-psi[2](t))/A[1]

(7)

sol_u2 := solve(ddu2, u[2]);

sol_u2 := -psi[3](t)*k*x[2]/A[2]

(8)

Dx2:=subs(u[1]=beta*x[1]*x[3]*(psi[1](t)-psi[2](t))/A[1],u[2]=-psi[3](t)*k*x[2]/A[2], H );

Dx2 := x[1]-(1/2)*(beta^2*x[1]^2*x[3]^2*(psi[1](t)-psi[2](t))^2/A[1])-(1/2)*(psi[3](t)^2*k^2*x[2]^2/A[2])+psi[1](t)*(lambda-mu*x[1]-(1-beta*x[1]*x[3]*(psi[1](t)-psi[2](t))/A[1])*beta*x[1]*x[3]+delta*x[2])+psi[2](t)*((1-beta*x[1]*x[3]*(psi[1](t)-psi[2](t))/A[1])*beta*x[1]*x[3]-sigma*x[2])+psi[3](t)*((1+psi[3](t)*k*x[2]/A[2])*k*x[2]-gamma*x[3])

(9)

ode1:=diff(psi[1](t),t)=-diff(H,x[1]);

ode1 := diff(psi[1](t), t) = -1-psi[1](t)*(-mu-(1-u[1])*beta*x[3])-psi[2](t)*(1-u[1])*beta*x[3]

(10)

ode2:=diff(psi[2](t),t)=-diff(H,x[2]);

ode2 := diff(psi[2](t), t) = -psi[1](t)*delta+psi[2](t)*sigma-psi[3](t)*(1-u[2])*k

(11)

ode3:=diff(psi[3](t),t)=-diff(H,x[3]);

ode3 := diff(psi[3](t), t) = psi[1](t)*(1-u[1])*beta*x[1]-psi[2](t)*(1-u[1])*beta*x[1]+psi[3](t)*gamma

(12)

restart:
#Digits:=10:
unprotect('gamma');
lambda:=5*10^5:
mu:=0.003:
beta:=4*10^(-10):
delta:=0:
alpha:=0.043:
sigma:=alpha+delta:
k:=6.24:
gamma:=0.65:
A[1]:=10:
A[2]:=2:
#u[1]:=beta*x[1](t)*x[3](t)*(psi[1](t)-psi[2](t))/A[1]:
#u[2]:=-psi[3](t)*k*x[2](t)/A[2]:

ics := x[1](0)=1.7*10^8, x[2](0)=0,x[3](0)=400,psi[1](20)=0,psi[2](20)=0,psi[3](20)=0:

ode1:=diff(x[1](t), t)=lambda-mu*x[1](t)-(1-u[1])*beta*x[1](t)*x[3](t)+delta*x[2](t),
diff(x[2](t), t) =(1-u[1])*beta*x[1](t)*x[3](t)-sigma*x[2](t),
diff(x[3](t), t) =(1-u[2])*k*x[2](t)-gamma*x[3](t),
diff(psi[1](t), t) =-1+mu*psi[1](t)+beta*x[3](t)*(1-u[1])*(psi[1](t)-psi[2](t)),
diff(psi[2](t), t) =delta*psi[1](t)+sigma*psi[2](t)-psi[3](t)*(1-u[2])*k,
diff(psi[3](t), t) = beta*x[1](t)*(psi[1](t)-psi[2](t))*(1-u[1])+gamma*psi[3](t);

ode1 := diff(x[1](t), t) = 500000-0.3e-2*x[1](t)-(1/2500000000)*((1-u[1])*x[1](t)*x[3](t)), diff(x[2](t), t) = (1/2500000000)*((1-u[1])*x[1](t)*x[3](t))-0.43e-1*x[2](t), diff(x[3](t), t) = 6.24*(1-u[2])*x[2](t)-.65*x[3](t), diff(psi[1](t), t) = -1+0.3e-2*psi[1](t)+(1/2500000000)*(x[3](t)*(1-u[1])*(psi[1](t)-psi[2](t))), diff(psi[2](t), t) = 0.43e-1*psi[2](t)-6.24*psi[3](t)*(1-u[2]), diff(psi[3](t), t) = (1/2500000000)*(x[1](t)*(psi[1](t)-psi[2](t))*(1-u[1]))+.65*psi[3](t)

(13)

sol:=dsolve([ode1,ics],numeric, method = bvp[midrich],maxmesh=500);

Error, (in dsolve/numeric/bvp/convertsys) too few boundary conditions: expected 8, got 6

 

 

eq1:=diff(x[1](t), t)=lambda-mu*x[1](t)-(1-u[1])*beta*x[1](t)*x[3](t)+delta*x[2](t);
eq2:=diff(x[2](t), t) =(1-u[1])*beta*x[1](t)*x[3](t)-sigma*x[2](t);
eq3:=diff(x[3](t), t) =(1-u[2])*k*x[2](t)-gamma*x[3](t);

eq1 := diff(x[1](t), t) = 500000-0.3e-2*x[1](t)-(1/2500000000)*((1-u[1])*x[1](t)*x[3](t))

eq2 := diff(x[2](t), t) = (1/2500000000)*((1-u[1])*x[1](t)*x[3](t))-0.43e-1*x[2](t)

eq3 := diff(x[3](t), t) = 6.24*(1-u[2])*x[2](t)-.65*x[3](t)

(14)

eq4:=diff(Q(t),t)=x[1](t)-A[1]*(u[1])^2/2-A[2]*(u[2])^2/2;

eq4 := diff(Q(t), t) = x[1](t)-5*u[1]^2-u[2]^2

(15)

ics:=x[1](0)=1.7*10^8, x[2](0)=0,x[3](0)=400,Q(0)=0;

ics := x[1](0) = 170000000.0, x[2](0) = 0, x[3](0) = 400, Q(0) = 0

(16)

sol0:=dsolve({eq1,eq2,eq3,eq4,ics},type=numeric,stiff=true,'parameters'=[u[1],u[2]],abserr=1e-15,relerr=1e-12,maxfun=0,range=0..20):

with(plots):

Q0:=0;

Q0 := 0

(17)

obj:=proc(u)

global sol0,Q0;
local ob1;
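# set the constant controls as IVP parameters, integrate to t = 20, and return
# the accumulated objective Q(20); fall back to 0 if the integration fails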
try
sol0('parameters'=[u[1],u[2]]):
ob1:=subs(sol0(20.),Q(t)):
catch :
ob1:=0;
end try;
#ob1:=subs(sol0(20.),Q(t));
if ob1>Q0 then Q0:=ob1;print(Q0,u);end;
ob1;
end proc;

obj := proc (u) local ob1; global sol0, Q0; try sol0('parameters' = [u[1], u[2]]); ob1 := subs(sol0(20.), Q(t)) catch: ob1 := 0 end try; if Q0 < ob1 then Q0 := ob1; print(Q0, u) end if; ob1 end proc

(18)

obj([1,1]);

3398039287.12889, [1, 1]

3398039287.12889

(19)

obj([3,2.5]);

10853691356.1391, [3, 2.5]

10853691356.1391

(20)

u0:=Vector(2,[0.,0.],datatype=float[8]);

u0 := Vector(2, {(1) = 0., (2) = 0.})

(21)

 

Q0:=0;

Q0 := 0

(22)

with(Optimization);

[ImportMPS, Interactive, LPSolve, LSSolve, Maximize, Minimize, NLPSolve, QPSolve]

(23)

sol2:=NLPSolve(2,obj,initialpoint=u0,method=nonlinearsimplex,maximize,evaluationlimit=100):

3.39794371555394*10^9, Vector[row](2, {(1) = 0., (2) = 0.})

3.39803930712891*10^9, Vector[row](2, {(1) = 1., (2) = 0.})

3.39804033045244*10^9, Vector[row](2, {(1) = 1.49794733500399, (2) = 1.50623284257974})

3.39805145390838*10^9, Vector[row](2, {(1) = 1.87243416875499, (2) = 1.69451194790220})

3.40712818224986*10^9, Vector[row](2, {(1) = 2.24692100250599, (2) = 2.63590747451454})

3.51729809603447*10^9, Vector[row](2, {(1) = 2.20011014828712, (2) = 3.27134945497786})

3.53967508580218*10^9, Vector[row](2, {(1) = 2.57459698203812, (2) = 2.70651213901046})

3.74232674396347*10^9, Vector[row](2, {(1) = 3.27675979532124, (2) = 2.21227948753899})

3.94430211657206*10^9, Vector[row](2, {(1) = 2.57313414284378, (2) = 2.86757902989179})

4.26156884348600*10^9, Vector[row](2, {(1) = 2.97422636444183, (2) = 2.47994287384092})

5.12441431374667*10^9, Vector[row](2, {(1) = 2.92537553525273, (2) = 2.54569519139944})

2.46892011415654*10^10, Vector[row](2, {(1) = 3.07878508154351, (2) = 2.44069316166875})

8.93112347088505*10^10, Vector[row](2, {(1) = 3.20642994395782, (2) = 2.35054875107964})

1.37406629736151*10^11, Vector[row](2, {(1) = 3.12422642131584, (2) = 2.40814677044507})

3.97704555922777*10^11, Vector[row](2, {(1) = 3.15722253153225, (2) = 2.38467382241256})

5.39937222970221*10^11, Vector[row](2, {(1) = 3.14072447642405, (2) = 2.39641029642882})

3.33023882593904*10^12, Vector[row](2, {(1) = 3.18182623774504, (2) = 2.36761128674610})

2.12196697026660*10^13, Vector[row](2, {(1) = 3.18125786060723, (2) = 2.36800986932868})

Warning, limiting number of function evaluations reached

(24)

sol0('parameters'=[3.18125786060723, 2.36800986932868]);

[`u[1]` = 3.18125786060723, `u[2]` = 2.36800986932868]

(25)

for i from 1 to 3 do odeplot(sol0,[t,x[i](t)],0..20,thickness=3,axes=boxed);od;

 

 

 

 

Download mapleprimeVS.mws

@aylin 

 

Aylin, thanks for the problem statement and the code. Can you clarify what your objective (cost function) is and how you obtained the boundary conditions?

 

If you are trying to minimize the integral of L, it might result in a singular control problem.

Carl,

Thanks. I asked the original poster to post the original control problem. My current thought is that the problem formulation in BVP form is wrong: typically the control variables appear in the original ODEs, not in the adjoint equations.

 

From the Hamiltonian and the minimum principle, one can sometimes write the control variables as functions of the adjoints and the state variables (the ODE variables).

Depending on the objective, one of the adjoint variables is likely to have a nonzero boundary condition. Without knowing the original problem statement, we cannot determine this.
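For reference, the first-order conditions used in the worksheet above (unconstrained controls, free terminal state, no terminal cost) are
$$\frac{\partial H}{\partial u_j} = 0, \qquad \dot{\psi}_i(t) = -\frac{\partial H}{\partial x_i}, \qquad \psi_i(t_f) = 0,$$
which is why the boundary conditions psi[1](20) = psi[2](20) = psi[3](20) = 0 appear there. A terminal cost would change the last (transversality) condition.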

 

To add to this, it is perfectly fine to have limitations arising from singularity and the like; it is always left to the user to work around such things.

 

An important point I would like to bring up: when a CAS is written, please implement the algorithm as is, and don't make changes that fail to reflect the research algorithms or the corresponding papers accurately.

 

http://www.mapleprimes.com/posts/149877-ODEs-PDEs-And-Special-Functions

 

In the link above, I showed an example of how Maple fails to solve DAEs. The truth of the matter is that, at this point in time, Maple does not have a direct DAE solver; it only has ODE solvers. It converts the DAEs to ODEs and then solves them. This point should be made clear in the help page for DAEs. I spent a lot of time hoping that Maple does what it claims it does. In my opinion, Maplesoft would get more credit if the help files reflected and conveyed these limitations.

All the DAE solvers in Maple have the same issue. The MEBDFI code is based on an algorithm from Dr. Jeff Cash. Unfortunately, the Maple code only does ODEs; for DAEs, it converts to ODEs and then solves. This is not the original MEBDFI code Professor Cash published in the literature (implemented in FORTRAN).
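As a minimal illustration of the kind of system being discussed (a toy index-1 DAE, not the model from the linked post), one can try the following and observe what dsolve does with it; whether the algebraic equation is integrated directly or first eliminated is exactly the point above:

# toy index-1 DAE: one differential equation and one algebraic constraint
dae := {diff(x(t), t) = -x(t) + y(t), 0 = y(t) - x(t)^2};
ic  := {x(0) = 1, y(0) = 1};                      # consistent initial values
sol := dsolve(dae union ic, numeric, method = mebdfi);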

 

My request is that when Maplesoft changes things in algorithms or methods while following a paper or other work, please indicate so a priori, so as not to waste researchers' and students' time.

PS: I have modified the Maple code so that one can solve this problem, and I can share it if there is interest. I am a longtime user of Maple (since Maple V) and will be a lifetime user of the Maple Classic worksheet.

Edit:

1. I want to add that the Maplesoft folks (in particular Allan Wittkopf) and others are very knowledgeable and very helpful. My guess is that they are not staffed well enough to update the help pages (but this is not an excuse to misrepresent facts). My suggestion would be to add a wiki page for each help page and let users update it, with the caveat that Maplesoft does not take responsibility for the content. This way, when a new user goes to a help page, a linked wiki page would document typos and the algorithms actually used or modified.

2. At this point in time, Maple converts DAEs to explicit ODEs (dy/dt = f). If it is not able to do so, the code will often crash, sometimes consuming more than 1 GB of RAM.

 

 

Hi All,

I just installed Maple 18 and have yet to check all the new features. It looks like Maple no longer provides the option to interact with NAG.

 

This is a significant step backward, and I am not sure why it was done.

 

Thanks

Venkat Subramanian

@Allan Wittkopf 

 

Allan,

Thanks for your attention. Just like fsolve/sysnewton, if there were specific commands that could be used to specify the equations, variables, and initial conditions directly in the proper (mapped) order, that would be great.

Also, please consider providing for the direct solution of M dy/dt = f, as discussed here, where M is singular for an index-1 DAE, using a direct implicit solver for the integration. That would help solve many difficult large-scale index-1 DAEs.
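In other words, the requested form is the semi-explicit mass-matrix system (a sketch; M is singular because the algebraic equations contribute zero rows):
$$M\,\frac{dy}{dt} = f(t, y), \qquad M = \begin{pmatrix} I & 0\\ 0 & 0 \end{pmatrix}, \qquad \text{equivalently}\quad \frac{dx}{dt} = f_1(x, z), \quad 0 = f_2(x, z),$$
which an implicit (BDF-type) integrator can handle directly without first eliminating z.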

Note that we should continue to have the current facility to map and identify equations, variables, the DAE index, etc. It is just that for large-scale (any reasonably sized/difficult) PDEs and DAEs, we need a different, direct approach.

 

Venkat Subramanian

www.maple.eece.wustl.edu

Preben's answer is correct.

Similar problems in cylindrical or other coordinates will have a different particular solution. Typically, separation-of-variables methods handle non-homogeneous boundary conditions by writing c(x,t) = u(x,t) + w(x).

But in this case that won't work; you can get the answer by using c(x,t) = u(x,t) + w(x) + v(t) and solving for w(x) and v(t).
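As an illustration (an assumed example with one sealed face and one constant-flux face, not necessarily the poster's exact problem): for $c_t = D\,c_{xx}$ on $0 < x < L$ with $c_x(0,t) = 0$ and $c_x(L,t) = J$, taking
$$c(x,t) = u(x,t) + w(x) + v(t), \qquad w(x) = \frac{J x^2}{2L}, \qquad v(t) = \frac{D J}{L}\,t,$$
leaves $u$ satisfying the same PDE with homogeneous flux conditions, so it can be found by separation of variables in the usual way.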

More details in the paper, http://www.maple.eece.wustl.edu/pubs/9-JPS-VS-RW-sepvar-2001.pdf. I have also shown this example in the book I wrote on using Maple for chemical engineers.

 

Good luck. The Laplace transform will solve this problem directly, but you will end up with repeated roots, and you will have to use the Heaviside expansion theorem to invert back to the time domain.

 

Two flux boundary conditions are common in electrochemical systems (batteries), and such problems do not actually reach a steady state.


@ecterrab 

 

Edgardo, thanks for trying the model I gave. My point is that we should not manipulate the index-1 DAE; just take it as given and solve it with an implicit formula, as explained before.

As you asked, I am attaching a MATLAB script for the same model (which, as of today, fails in Maple for N greater than 7). You will see that I am not very good at it; as of now you have to change both N1 and N (they mean the same thing) to set the number of node points. MATLAB solves it without any trouble. Plots for both N = 2 and N = 100 are attached. I even ran N = 1000 node points; it finished in 2-3 minutes or so. N = 100 was instantaneous, and N = 2 was comparable in speed to Maple. In addition, attaching the MATLAB file is a pain (I have to print multiple files, merge them into a PDF, and upload that).

For reasons unknown to me, MATLAB promotes ode45, but in our experience ode15i and ode15s are its best solvers. Note that although MATLAB is superior for this particular model, the same does not hold for ill-conditioned problems (as of today, Maple's analytical Jacobian makes dsolve/numeric more robust and efficient for small, ill-conditioned ODE systems). For index-1 DAEs, I think Maple's approach of manipulating and converting to ODEs should be modified, and an option to solve directly with a singular M would be great.

MATLABscripts_and_pl.pdf

I am sure one can code the same in Mathematica.

Venkat Subramanian

www.maple.eece.wustl.edu

@ecterrab 

 

Edgardo,

I completely agree about the C code conversion; for example, the codegen feature of Maple is a great tool. There is no argument from my side about the successful conversion to C format. My comments are about the implementation of the algorithm, the choice of solver, and the method of solving DAEs as ODEs.

When you use standalone C or FORTRAN solvers, very rarely would anyone enter a full dense analytic Jacobian for a large-scale system. There are more examples I can provide. For example, Burgers' equation in 1-D or 2-D is a standard problem (solved and demonstrated in Mathematica, MATLAB, etc.). As of now, one hits the RAM limit with dsolve/numeric as the number of node points is increased.
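For concreteness, here is a hypothetical sketch of the kind of test I have in mind (not code from the linked post): a method-of-lines discretization of the 1-D Burgers equation u_t = -u*u_x + nu*u_xx with homogeneous Dirichlet ends. Increasing NB is where the RAM limit mentioned above shows up, since the stiff solver builds a dense Jacobian:

restart:
nu := 0.01:  NB := 50:  hx := 1/(NB+1):
v[0] := t -> 0:  v[NB+1] := t -> 0:          # fixed boundary values
for i from 1 to NB do
  eqB[i] := diff(v[i](t), t) = -v[i](t)*(v[i+1](t)-v[i-1](t))/(2*hx)
            + nu*(v[i+1](t)-2*v[i](t)+v[i-1](t))/hx^2:
od:
icsB := seq(v[i](0) = evalf(sin(Pi*i*hx)), i = 1..NB):
solB := dsolve({seq(eqB[i], i = 1..NB), icsB}, numeric, stiff = true):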

Please do note that despite these limitations, our group's first choice is still, and will always be, Maple to start a model, with hand-coded C or FORTRAN when needed. We very rarely start with other software, because Maple lets us develop a very complicated model in 15 minutes to an hour. However, MATLAB has very good reach among the systems community; for optimization and other toolboxes MATLAB is far ahead, and we use it for that. In addition, MATLAB allows files to be sent to non-MATLAB users.

Using C directly would take weeks or months to develop these models.

 

@ecterrab 

 

Please see the attached code. It comes from a finite-difference discretization of one PDE with a time derivative coupled with one PDE without a time derivative. Don't run this code for high values of N: even N = 5 or 10, i.e., fewer than about 25 DAEs in total, would crash any computer.

 

The only way to run this code as of today would be to convert all the algebraic equations into ODEs and solve the resulting dy/dt = f system with implicit=true. There again size becomes the limitation.
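A sketch of that workaround for the worksheet below (differentiating each algebraic residual once with respect to t; the initial values u1[i](0) = 1, u2[i](0) = 0 already satisfy the original algebraic equations, which is required for this to be legitimate):

# hedged sketch: replace each algebraic equation 0 = g with d(g)/dt = 0 so that
# every unknown carries a time derivative
for i from 1 to N do
  eq2d[i] := diff((u2[i+1](t)-2*u2[i](t)+u2[i-1](t))/h^2
                  -(1-u1[i](t)^2)*exp(-u2[i](t)), t) = 0;
od:
# use eq2d[i] in place of the algebraic eq2[i], treat the boundary residuals
# eq1[0], eq1[N+1], eq2[0], eq2[N+1] the same way, and then call
# dsolve({Eqs, ics}, numeric, stiff=true, implicit=true);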

 

restart:

Digits:=15;

Digits := 15

 

Code written by Dr. Venkat Subramanian and MAPLE group members at WU.

N:=2;

N := 2

 

eq1[0]:=3*u1[0](t)-4*u1[1](t)+u1[2](t);

eq1[0] := 3*u1[0](t)-4*u1[1](t)+u1[2](t)

 

eq1[N+1]:=u1[N+1](t)-1;

eq1[3] := u1[3](t)-1

 

eq2[0]:=3*u2[0](t)-4*u2[1](t)+1*u2[2](t);

eq2[N+1]:=u2[N+1](t);

eq2[0] := 3*u2[0](t)-4*u2[1](t)+u2[2](t)

eq2[3] := u2[3](t)

 

 

h:=1/(N+1);

h := 1/3

 

 

# interior nodes: the first PDE carries a time derivative (central differences in x)
for i from 1 to N do eq1[i]:=diff(u1[i](t),t)=(u1[i+1](t)-2*u1[i](t)+u1[i-1](t))/h^2-u1[i](t)*(1+u2[i](t));od:

# interior nodes: the second PDE has no time derivative, so these equations are purely algebraic
for i from 1 to N do eq2[i]:=0=(u2[i+1](t)-2*u2[i](t)+u2[i-1](t))/h^2-(1-u1[i](t)^2)*(exp(-u2[i](t)));od:

Eqs:=seq(eq1[i],i=0..N+1),seq(eq2[i],i=0..N+1):

ics:=seq(u1[i](0)=1,i=0..N+1),seq(u2[i](0)=0,i=0..N+1):

#infolevel[all]:=10;

tt := time():
sol:=dsolve({Eqs,ics},type=numeric, optimize=false,stiff=true,compile=true, maxfun=0):
time()-tt;

.265

 

 

with(plots):

 

odeplot(sol,[t,u1[0](t)],0..1,thickness=3,axes=boxed);

 

odeplot(sol,[t,u2[0](t)],0..1,axes=boxed,thickness=3);

 

 

 

PDE-DAE-To_post.mw

I hope I am not annoying you; my objective is to keep restating the current limitations so that they can be improved in the future.

Venkat Subramanian

www.maple.eece.wustl.edu

pdsolve/numeric seems to use pdsolve/numeric/sparseLU. I am not sure whether it is really a sparse solver, or whether it is available in compiled form.

 
