Venkat Subramanian

MaplePrimes Activity


These are replies submitted by Venkat Subramanian.

@Carl Love 

Carl, thanks. Just fixed the mistake. Yes, I meant 2D input.

@Carl Love 

 

Don't you think Maple should drop 2D input in the future, or at least alert new users? What is the point of 2D input? I never figured out why.

@dharr 

Maple has implemented lsode incorrectly.

Please read my comments in the link given in this post (at the top of this page).

 

When an adaptive solver is used (with error control), there is no problem, because Maple takes a large number of time steps to correct the error (this is inefficient). In fact, I am not sure why you are posting the dsolve solution. I have posted the expected answer at t = 0.1. I am also highlighting what you are supposed to get when dt = 0.1 (a single step was chosen to integrate with the Euler backward/LSODE method).

Rosenbrock involves only linear solves. It seems to be implemented well for ODEs (not for DAEs, for which Maple tries to convert the DAEs to ODEs of the form dy/dt = f, which fails for genuinely nonlinear DAEs).

Lsode involves Newton-Raphson iterations. That is not done properly (it stops after one iteration).

 

When the error tolerance is set to one and the timestep to 0.1, you can force lsode to mimic Euler backward with a single time step. Lsode is not doing what it is supposed to do (the results do not match fsolve for Euler backward).
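To make this concrete, here is a small Python sketch (my own illustration, not Maple's internal code) of a single Euler backward step of size 0.1 for the nonlinear test problem y' = -y^2*exp(-y) - 10*y*(1 + 0.01*exp(y)) analyzed below. Newton-Raphson iterated to convergence reproduces the fsolve answer (0.488691...), while stopping after one iteration gives a different value (0.491203...):

```python
import math

# One Euler backward step for y' = g(y), from y0 = 1 with dt = 0.1:
# solve F(y) = (y - y0)/dt - g(y) = 0 by Newton-Raphson.

def g(y):
    return -y**2 * math.exp(-y) - 10.0 * y * (1.0 + 0.01 * math.exp(y))

def dg(y):
    # analytic derivative of g
    return (y**2 - 2.0*y) * math.exp(-y) - 10.0 - 0.1 * math.exp(y) - 0.1 * y * math.exp(y)

def backward_euler_step(y0, dt, max_iter):
    y = y0
    for _ in range(max_iter):
        F = (y - y0) / dt - g(y)
        dF = 1.0 / dt - dg(y)
        y -= F / dF
    return y

y_full = backward_euler_step(1.0, 0.1, 50)   # Newton iterated to convergence
y_one  = backward_euler_step(1.0, 0.1, 1)    # stop after a single iteration
print(y_full, y_one)
```

The two values differ in the third digit even for this mildly nonlinear problem, which is exactly the discrepancy discussed in this thread.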

 

I am not sure how much you know about Maple's implementation of numerical algorithms for differential equations (I don't mean to be offensive, but it looks to me now that some of the algorithms from the literature are just wrongly implemented in Maple, for example LSODE and MEBDFI). Even the mgear algorithm seems to do only 3 iterations (I am not sure it iterates Newton-Raphson to convergence). If you implemented mgear, then my guess/request is that you could help bring mgear back to Maple, perhaps with the ability to solve M dy/dt = f with M singular (zero rows for the algebraic variables); this would give Maple a real DAE solver. If I can be of any assistance, please let me know.
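For illustration, an implicit Euler step handles M dy/dt = f with singular M directly; here is a hedged Python sketch on a toy linear DAE of my own choosing (y1' = -2*y1 + y2 with the algebraic constraint 0 = y1 - y2, whose exact solution is y1 = y2 = exp(-t)):

```python
import numpy as np

# Euler backward for M*dy/dt = F*y with a singular mass matrix:
# M*(y_new - y_old)/h = F*y_new  =>  (M - h*F)*y_new = M*y_old.
M = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # zero row for the algebraic variable
F = np.array([[-2.0, 1.0],
              [ 1.0, -1.0]])        # second row encodes 0 = y1 - y2
h, y = 0.01, np.array([1.0, 1.0])   # consistent initial conditions
for _ in range(100):                # integrate to t = 1
    y = np.linalg.solve(M - h * F, M @ y)
print(y)  # both components close to exp(-1)
```

The algebraic constraint is satisfied at every step because it is part of the linear system being solved; no conversion of the DAE to an explicit ODE is needed.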

EDIT: Dharr, I just looked at your profile. It is clear that you are very knowledgeable about numerical methods and also electrochemistry (we share similar research interests). On the surface, Maple's dsolve/numeric is fine, but when you dig deeper, it turns out that some of the algorithms are not properly implemented.

 

The issue is not being able to solve this example. This simple example was chosen to show why lsode fails in Maple for the complicated example given in the link for "Can we trust Maple?". It also highlights the issue that Maple is not doing what it claims to be doing.

 

Note that this is not to say that Maple's solvers don't work. The objective is to highlight the buggy implementation of certain algorithms, which will bite you on difficult, real problems. As a professor and Maple Ambassador, I have to insist on accuracy to make sure Maple doesn't lose credibility. If Maple claims to do certain things, and they are not properly done, I am going to point that out. Hope that is fine.

 

 

@Preben Alsholm 

Thanks. As I mentioned in my post, mgear is hidden in Maple 7 (I used showstat to find it). Maple removed it from Maple 8 onwards, when it started including the Rosenbrock algorithms.

There should be a way to use it in Maple 7; I am not sure how.

@ 

Current tests suggest that mgear gives better results than lsode.

@abcd 

 

Glad you liked my analysis. You are welcome.

I agree with you that Maple should focus on existing Fortran or C algorithms and just provide wrappers for them instead of writing its own solvers. The issue might have been that some of the linear solvers require licenses; I am not sure.

 

I prefer Maple because at least I can learn the algorithms and concepts behind any command (at least up to Maple 15 or so; I am not sure about the new economics or other packages).

For some (perhaps commercial) reason, Maplesoft has made the decision to move towards Maple T.A. and elementary/high-school math teaching, and to move most of its interesting algorithms and applications to MapleSim (I hate the MapleSim interface).

MapleSim's solvers are more robust (including the ability to solve DAEs), so if you have the patience for the MapleSim interface, I would suggest trying it (after using the Maple classic worksheet, MATLAB, Mathematica, and the newer Maple modes are just too slow for me). My codes are more than 1000 lines long, just like yours.

 

Hope I didn't give the impression that the Maple folks are not smart. My interaction with Allan Wittkopf suggests that he knows a lot about algorithms, programming environments and solvers. The Maplesoft folks are probably restricted, for commercial reasons, to providing robust stiff algorithms with Newton-Raphson iterations only via MapleSim. I respect that. However, they should document lsode/mebdfi and the DAE solvers properly, noting the current limitation of handling only linear equations.

I have used DASSL, DASKR and DASPK in Fortran, IDA in C, and RADAU in Fortran. Those are the best solvers. I dream of the day when Maple will have wrappers for these algorithms.

In my group, we routinely use all of these algorithms, but we cheat: we write the equations out to C or Fortran from Maple, run a command prompt from Maple to compile these codes, execute them, and plot the profiles in Maple. So Maple is an interface, and it also provides analytic Jacobians and the ability to create a reduced set of model equations.

If I can do this, I am sure the Maplesoft folks can easily manage to write better wrappers and make better use of external calling.

 

@abcd 


Maple has unfortunately not implemented LSODE properly. Only a single iteration (or maybe two iterations) is made when solving the nonlinear equations. The attached code analyzes dsolve/numeric/lsode in Maple 14.

1. The codes were run with abserr = relerr = 1, with minstep = 0.1 and initstep = 0.1.

2. lsode[backfull] with Ctrl[9] = 1 was used. This is just the Euler backward method with a numerical Jacobian.

3. For linear problems, the expected results are obtained.

4. For nonlinear problems, wrong results are obtained when the error is not controlled with abserr or relerr. That is, lsode in Maple is not equivalent to lsode in Fortran for nonlinear problems.

5. For highly nonlinear problems, very bad results are obtained. This could be lsode picking the wrong answer among multiple solutions of the nonlinear equation. A manual Euler backward step is implemented with fsolve to show what the answer from lsode should have been.

 

When time permits (and if I am knowledgeable enough to do this), I will try to take the lsode algorithm and add a Newton-Raphson loop to fix it. It would be great if someone were to show how to run a processed code (with the procedures for f and y0 given by the user) and then directly call the final solver in lsode. If an algorithm is not implemented properly, then this should be documented in the help pages/manual. This explains why a DAE solver was not included. (It is straightforward to modify lsode-type algorithms to solve DAEs.)

 

restart;

Digits:=15;

Digits := 15

(1)

eq:=diff(y(t),t)=-y(t);

eq := diff(y(t), t) = -y(t)

(2)

C:=array([0$22]);

C := Vector[row](22, {(1) = 0, (2) = 0, (3) = 0, (4) = 0, (5) = 0, (6) = 0, (7) = 0, (8) = 0, (9) = 0, (10) = 0, (11) = 0, (12) = 0, (13) = 0, (14) = 0, (15) = 0, (16) = 0, (17) = 0, (18) = 0, (19) = 0, (20) = 0, (21) = 0, (22) = 0})

(3)

C[9]:=1;

C[9] := 1

(4)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .909090909090834]

(5)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1

(6)

fsolve(%,y1=0.5);

.909090909090909

(7)

While lsode gave the expected result for the linear problem, it gives wrong results for nonlinear problems.

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = .904837355407810]

(8)

eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-10*y(t)*(1+0.01*exp(y(t)));

eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-10*y(t)*(1+0.1e-1*exp(y(t)))

(9)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = .501579294869466]

(10)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-10*y1*(1+0.1e-1*exp(y1))

(11)

fsolve(%,y1=1);

.488691779256025

(12)

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = .349614721994122]

(13)

The results obtained are worse than a single Newton iteration using the analytic Jacobian.

eq2:=(lhs-rhs)(subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq));

eq2 := 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1))

(14)

jac:=unapply(diff(eq2,y1),y1);

jac := proc (y1) options operator, arrow; 20.+2*y1*exp(-y1)-y1^2*exp(-y1)+.10*exp(y1)+.10*y1*exp(y1) end proc

(15)

f:=unapply(eq2,y1);

f := proc (y1) options operator, arrow; 0.1e2*y1-0.1e2+y1^2*exp(-y1)+10*y1*(1+0.1e-1*exp(y1)) end proc

(16)

y0:=1;

y0 := 1

(17)

dy:=-evalf(f(y0)/jac(y0));

dy := -.508796088545793

(18)

ynew:=y0+dy;

ynew := .491203911454207

(19)

The following procedures confirm that lsode indeed calls the derivative procedure only at 0 and 0.1, with backdiag giving slightly better results.

myfun := proc(x, y)
  if not type(x, 'numeric') or not type(evalf(y), numeric) then
    'procname'(x, y);
  else
    lprint(`Request at x=`, x);
    -y^2*exp(-y) - 10*y*(1 + 0.01*exp(y));
  end if;
end proc;

myfun := proc (x, y) if not (type(x, 'numeric') and type(evalf(y), numeric)) then ('procname')(x, y) else lprint(`Request at x=`, x); -y^2*exp(-y)-10*y*(1+0.1e-1*exp(y)) end if end proc

(20)

sol1:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol1(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .501579304183583]

(21)

sol2:=dsolve({diff(y(x),x)=myfun(x,y(x)),y(0)=1},numeric,method=lsode[backdiag],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1,known={myfun}):

sol2(0.1);

`Request at x=`, 0.

`Request at x=`, 0.

`Request at x=`, .1

`Request at x=`, .1

[x = .1, y(x) = .497831388424072]

(22)

 

Very bad results are obtained if the nonlinearity is severe, for example if we replace 10 with 100:

eq:=diff(y(t),t)=-y(t)^2*exp(-y(t))-100*y(t)*(1+0.01*exp(y(t)));

eq := diff(y(t), t) = -y(t)^2*exp(-y(t))-100*y(t)*(1+0.1e-1*exp(y(t)))

(23)

sol:=dsolve({eq,y(0)=1},type=numeric,method=lsode[backfull],ctrl=C,initstep=0.1,minstep=0.1,abserr=1,relerr=1):

sol(0.1);

[t = .1, y(t) = -8.18645608888381]

(24)

subs(diff(y(t),t)=(y1-1)/0.1,y(t)=y1,eq);

0.1e2*y1-0.1e2 = -y1^2*exp(-y1)-100*y1*(1+0.1e-1*exp(y1))

(25)

fsolve(%,y1=1);

0.899472065885205e-1

(26)

sol1:=dsolve({eq,y(0)=1},type=numeric):

sol1(0.1);

[t = .1, y(t) = 0.402875805278245e-4]

(27)
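As an outside cross-check of this last case, iterating Newton-Raphson to convergence in Python (my own sketch, not Maple's code) on the same Euler backward equation, starting from y1 = 1, lands on the fsolve answer 0.0899472..., nowhere near the -8.186 that lsode returned:

```python
import math

# Euler backward step (dt = 0.1, y0 = 1) for the stiffer problem
#   y' = -y^2*exp(-y) - 100*y*(1 + 0.01*exp(y)).
# Solve F(y) = (y - 1)/0.1 - g(y) = 0 by Newton-Raphson to convergence.

def g(y):
    return -y**2 * math.exp(-y) - 100.0 * y * (1.0 + 0.01 * math.exp(y))

def dg(y):
    # analytic derivative of g
    return (y**2 - 2.0*y) * math.exp(-y) - 100.0 - math.exp(y) - y * math.exp(y)

y = 1.0
for _ in range(50):
    F = (y - 1.0) / 0.1 - g(y)
    y -= F / (1.0 / 0.1 - dg(y))
print(y)  # agrees with the fsolve result above
```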

 

 

 

Download Lsodeanalysis.mws

@abcd 

 

Though I am not exactly sure, reading the algorithms behind previous versions of mgear in Maple 6 suggests that lsode is perhaps not properly implemented in Maple. That is, instead of iterating Newton-Raphson to convergence for the BDF methods, it simply does one or maybe two iterations and stops. This can give wrong results. However, by starting from the initial values and adjusting the time steps, the accuracy is controlled (I am not sure this is stable). Whenever Newton-Raphson is not solved to high precision, accuracy is lost (the methods lose their order). If this is true, then it is unfortunate that Maple claims to implement standard methods, but implements them in an incomplete manner.

 

Rosenbrock methods seem to be correctly implemented for ODEs.

 

@ I am currently running some tests. lsode[backfull] may be close to mgear with a numerical Jacobian. Not sure, but I am checking.

@cskoog 

 

Chris, thanks. It works. I have figured out the issue. Maple 6 used to have method=mgear in dsolve/numeric. This was deprecated, which is sad.

method=mgear provided a numerical solution based on Gear's method (BDF) with a numerical Jacobian. Lsode cannot be made to run in this mode (I hope I am not wrong). It would be great if this could be brought back in future versions.
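For readers unfamiliar with the term, the "numerical Jacobian" here is just a finite-difference approximation of df/dy. A minimal Python sketch (my own illustration), checked against the analytic derivative of the nonlinear test function used earlier in this thread:

```python
import math

def numdiff(f, y, eps=1e-7):
    # forward-difference approximation of df/dy at y
    return (f(y + eps) - f(y)) / eps

def g(y):
    return -y**2 * math.exp(-y) - 10.0 * y * (1.0 + 0.01 * math.exp(y))

def dg(y):
    # analytic derivative, for comparison
    return (y**2 - 2.0*y) * math.exp(-y) - 10.0 - 0.1 * math.exp(y) - 0.1 * y * math.exp(y)

print(numdiff(g, 1.0), dg(1.0))  # the two values agree to several digits
```

A solver like mgear would build such columns for each variable and feed the resulting matrix to the Newton-Raphson linear solves.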

Just changing mgear to other stiff methods was crashing the codes because of RAM issues.

showstat shows that the procedure exists in Maple 7 (not Maple 8), but it is disabled when called from dsolve/numeric. Perhaps someone more knowledgeable (Allan Wittkopf) can comment on how to call the mgear method in Maple 7, or on storing the procedure as a text file and calling it in later versions.

 

My tests suggest that mgear was one of the best algorithms for large-scale problems (RAM issues), better than the Rosenbrock-type algorithms.

 

 

@Christopher2222 

I asked them only on Friday, so perhaps they will get back to me next week. What is clear is that Maplesoft no longer has downloadable links for Maple 6.

 

@taro and Cskoog,

 

I am able to open the files, but not run them. It may be that some of the commands and procedures were deprecated from Maple 7 onwards; I am not sure.

 

It is a big set of procedures that intentionally ignores warnings in multiple places, so it is hard to debug.

 

 

@John Fredsted 

 

Chinthan is in my group. I think the seq command is missing.

 

Try something like:

restart; with(CodeGeneration):

f := [x[1], x[2], x[3] - x[2]^2];

C([seq(g[i] = f[i], i = 1..3)]);

f := [x[1], x[2], x[3] - x[2] ^2]

g[0] = x[0];
g[1] = x[1];
g[2] = x[2] - pow(x[1], 0.2e1);

Optimization seems to find it. I think the trig form may be easier to prove; I will keep trying. The equality constraint is the equation of a circle with radius sqrt(2). Right now, the plots make sense.

restart;

eq:=x^(4*y)+y^(4*x);

eq := x^(4*y)+y^(4*x)

(1)

x:=sqrt(2)*cos(theta);y:=sqrt(2)*sin(theta);

x := 2^(1/2)*cos(theta)

y := 2^(1/2)*sin(theta)

(2)

eq;

(2^(1/2)*cos(theta))^(4*2^(1/2)*sin(theta))+(2^(1/2)*sin(theta))^(4*2^(1/2)*cos(theta))

(3)

eq2:=simplify(eq);

eq2 := 4^(2^(1/2)*sin(theta))*cos(theta)^(4*2^(1/2)*sin(theta))+4^(2^(1/2)*cos(theta))*sin(theta)^(4*2^(1/2)*cos(theta))

(4)

Optimization:-Maximize(%);

[2., [theta = .785390443327796]]

(5)

combine(eq2);

4^(2^(1/2)*sin(theta))*cos(theta)^(4*2^(1/2)*sin(theta))+4^(2^(1/2)*cos(theta))*sin(theta)^(4*2^(1/2)*cos(theta))

(6)

plot(eq2,theta=0..Pi/2,axes=boxed,thickness=3,axes=boxed);

 

 


Download Inequalityproof.mws

It may be possible to combine eq2 into a form like 2*sin(a*theta+b+...)^c or something similar.
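As a numeric sanity check of the trig form (a quick scan in Python, my own sketch, not a proof), the parametrized expression never exceeds 2 on the first-quadrant arc, and the value 2 is attained at theta = Pi/4 (i.e. x = y = 1), consistent with the Maximize output above:

```python
import math

# x = sqrt(2)*cos(t), y = sqrt(2)*sin(t) parametrizes the circle x^2 + y^2 = 2;
# scan f(t) = x^(4*y) + y^(4*x) over the open first-quadrant arc.
def f(t):
    x = math.sqrt(2.0) * math.cos(t)
    y = math.sqrt(2.0) * math.sin(t)
    return x**(4.0*y) + y**(4.0*x)

ts = [1e-4 + k * (math.pi/2 - 2e-4) / 20000 for k in range(20001)]
vals = [f(t) for t in ts]
print(max(vals))  # stays at (or just below) 2, attained near t = pi/4
```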

 

 

I gave up on copying and pasting large amounts of code/procedures in mw format. Try the mws format; it should be 100x faster for copy/paste/edits.

 

 
