Venkat Subramanian

431 Reputation

13 Badges

15 years, 265 days

MaplePrimes Activity


These are replies submitted by Venkat Subramanian

@acer 

Stiff ODE solvers (e.g., Rosenbrock methods) involve using the same coefficient matrix with different right-hand sides: for example, AX = b, followed by AX1 = b1 (where b1 is a linear function of b), etc. A simple LU decomposition allows one factorization followed by multiple back-substitutions. Is it possible to do this with the SparseLU or SparseDirect option?
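For what it is worth, here is a minimal dense-LinearAlgebra sketch of the factor-once/solve-many idea (the matrix and right-hand sides are made up, and I believe LinearSolve accepts the factor list returned by LUDecomposition; the sparse case is the question above):

with(LinearAlgebra):
A  := Matrix([[4., 1.], [1., 3.]], datatype = float[8]):
b  := Vector([1., 2.], datatype = float[8]):
b1 := 2*b:                              # second rhs, a linear function of b
(P, L, U) := LUDecomposition(A):        # one factorization
X  := LinearSolve([P, L, U], b):        # reuse the factors
X1 := LinearSolve([P, L, U], b1):       # ...for as many right-hand sides as needed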

Thanks

 

Being a professor, it pains me to see non-deterministic algorithms (genetic algorithms, etc.) being pushed everywhere. I have seen students directly try stochastic approaches on problems that are well defined and easily solved using gradient-based techniques.

There is a place for non-gradient approaches, but simply using stochastic or genetic algorithms and calling them global optimization is wrong.

This is unfortunately true of the GlobalOptimization package provided by Maple as well.

 

@Greg3141 

 

that will show the content of the code and the steps involved (in regular Maple). I don't think one can hide the content in Maple. This is a serious problem when confidential/classified information needs to be hidden while sharing code.

 

@Carl Love 

 

This is how Maple solves DAEs: differentiate the algebraic equations to arrive at dy/dt = f(y), and then use the algebraic equations as a residual.

@Carl Love 

 

That has been my biggest complaint. I am OK if Maple is not able to solve implicit DAEs, but it says it solves DAEs, when it only solves them by converting DAEs to ODEs (unless the algebraic equation is a simple linear equation, or linearly solvable in terms of the ODE variables).

 

Write any DAE and solve it with any of the DAE solvers in Maple. Use infolevel = 10 and you will see YP for every variable, including the algebraic ones.
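To illustrate, here is a small made-up DAE (not from this thread) that can be run to inspect the printed information:

infolevel[dsolve] := 10:
# toy DAE: y1' = -y1 + y2 together with the algebraic constraint y1 + y2 = 1
dae := {diff(y1(t), t) = -y1(t) + y2(t), y1(t) + y2(t) - 1 = 0,
        y1(0) = 0, y2(0) = 1}:
sol := dsolve(dae, numeric):   # the userinfo output shows YP entries for both variables
sol(0.5);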

 

That is the sad part (incorrect help file)

 

 

@patient

YP[2] = (-6*X*Y[3]*Y[1]^2*Y[4]^2+Y[1]*Y[2]*(-3*X^2*Y[1]^(1/2)+Y[4]^2*Y[3]*(X^2-8*Y[4]^2))+6*Y[4]^2*X*Y[2]^2*(Y[4]^2*Y[3]-Y[1]^(1/2)))/(Y[1]^(1/2)*X*(4*Y[4]^4*Y[1]^(1/2)*Y[3]+X^2*Y[1]+2*Y[4]^2*X*Y[2]))

YP[3] = Y[3]*(-3*X*Y[1]^2+2*Y[4]^2*(-Y[4]^2*Y[1]^(1/2)*Y[3]+X*Y[2]+Y[1])*Y[2])/(Y[1]*(4*Y[4]^4*Y[1]^(1/2)*Y[3]+X^2*Y[1]+2*Y[4]^2*X*Y[2]))

YP[1] = Y[2]

 

The variables are S, S' and U. Y[4] is x0.

X is the independent variable.

@tomleslie 

Maple does not have a true DAE solver. If Maple solves it with rkf45_dae, it probably means that it solves dy/dt = f, with the algebraic equations perhaps treated as constraints.

 

Patient, convert all of your equations to the form dy/dt = f. If needed, differentiate the algebraic equations; then you will be close to what Maple does.
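As a toy illustration of that conversion (my own example, not the OP's system), differentiate the algebraic equation and substitute the known derivative to get an explicit ODE:

# DAE: y1' = -y1 + y2 with the algebraic equation y1 + y2 - 1 = 0
ode := diff(y1(t), t) = -y1(t) + y2(t):
ae  := y1(t) + y2(t) - 1 = 0:
ae_diff := diff(lhs(ae) - rhs(ae), t) = 0;                    # y1'(t) + y2'(t) = 0
ode2 := isolate(subs(diff(y1(t), t) = rhs(ode), ae_diff),
                diff(y2(t), t));                              # explicit ODE for y2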

 

If someone posts a *.mws worksheet that works, I will try (or at least in that format, 1D math).

 

@momoklala 

 

If the code works for B = -0.5 and a particular value of L, use that result as approxsoln for B = 0.
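Here is a self-contained toy BVP (my own, not the poster's system) showing the idea of reusing an earlier solution through approxsoln:

# solve an easy parameter value first, then feed that solution in as the guess
toy := B -> {diff(y(x), x, x) + B*exp(y(x)) = 0, y(0) = 0, y(1) = 0}:
sol1 := dsolve(toy(-0.5), numeric, method = bvp[midrich]):
sol2 := dsolve(toy(1.0), numeric, method = bvp[midrich], approxsoln = sol1):
sol2(0.5);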

 

 

@Alejandro Jakubi 

 

Acer and Alejandro, I partially agree with both of you regarding the openness of Maple. I started using Maple as a student from Maple IV onwards, and have slowly learned about analytical methods, numerical methods, ODEs and DAEs. The kind of problems I face in my research even now cannot be solved by most solvers/platforms all the time. But Maple has helped me learn the different nuances of numerical methods easily (I couldn't handle Mathematica, which was not user-friendly for me).

While Maple is reasonably open (the Explore command from LibraryTools is great), the documentation is bad and many things are not properly explained. To me, it looks like the focus right now is on getting the attention of kids doing middle-school or high-school math, with little attention or care for users who have to solve large numbers of equations/ODEs/DAEs.

Another thing I hate is inaccuracy in the claims. Sorry to repeat this again: while Maple claims it can solve DAEs, it can only solve ODEs, and it should acknowledge the same. The worst thing is that users reading about Jeff Cash, who developed the mebdfi code, might wrongly assume that he is wrong and that his method cannot solve DAEs. This is not true; it is just that Maple has implemented it wrongly. Ironically, MapleSim can solve DAEs. Even if you have licenses for both, you can't use MapleSim's DAE solver without creating a custom component, which will take an hour or more. For a change, instead of just complaining, you will see me posting solutions for DAE problems in Maple in the next few days or weeks (a research paper was just accepted; this approach is different from MapleSim or any other DAE solvers published as of today). I don't know where else Maple has made such mistakes/inaccurate claims.

Recently Acer and I arrived at a compiled Newton-Raphson. I can't understand why Maple cannot include such a thing in its fsolve (which consumes too much RAM).

FWIW, here is my view on where Maple could play a big role but is not doing so.

Instead of codegen being able to convert Maple code to C, it should do it the other way around: take an external C file and convert it to Maple. Most algorithms have a specified number of input variables, parameters and expected output variables. I know that there is a way to do external calling, but it is not well documented and is difficult. Why do I want this? It would mean that Maple does not need to write anything new; it could just use existing good routines, or new things coming out in C or Fortran, directly.
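For what it's worth, the existing route is define_external from ExternalCalling. A hedged sketch, where the library name mylib and the C function myfun are hypothetical:

# suppose a compiled C library exports:  double myfun(double x);
lib := ExternalCalling:-ExternalLibraryName("mylib"):   # platform-specific file name
myfun := define_external('myfun',
                         'x'::float[8],
                         'RETURN'::float[8],
                         'LIB' = lib):
# myfun(2.0);   # would call the C routine directly from Maple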

The MapleSim web-based approach is a good step. You want to be able to share your results with others who may not have access to the software, and a good business model should allow this; only then will potential users sign up for a license. If having Maple is a prerequisite for sharing results, how can one expand the user pool?

 

 

@Preben Alsholm 

 

I am not surprised. I am actually pleased that I had the patience to browse through 2D math input code and manage to get working code out of it. Thanks for the catch.

I will work on this code more later. I think there is a singularity at both eta = 0 and eta = N. The midrich method avoids the end points, so exp(1/0) should not occur in the iteration, but the node point nearest the boundary probably has issues.

For problems like this, providing a mesh such as (0, 1e-6, ..., 0.2, 0.4, 0.99, 0.999999, 1) scaled by the total length might work.

 

 

@acer 

 

I used evalf when creating the F procedure (sometimes users might provide 2*x - x^2 instead of 2.0*x - x^2).

Also, since you are not adding up the error, you don't need to divide by N in the Newton procedure. This should be faster.

Let us get greedy next:

For a given list of equations and variables with guesses (eqs, ICs), can we create a DLL or a downloadable package/library addition for this?
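On the package idea, here is a hedged sketch of what a distributable library could look like; the module name NewtonPkg, the archive file name, and the placeholder Solve body are my own illustration:

# bundle the solver into a package and save it to a .mla archive that others
# can put on libname and load with with(NewtonPkg)
NewtonPkg := module()
  option package;
  export Solve;
  Solve := proc(eqs::set, guesses::set)
    # placeholder body; the inlined/compiled Newton machinery would go here
    fsolve(eqs, guesses);
  end proc;
end module:
LibraryTools:-Create("NewtonPkg.mla");
LibraryTools:-Save('NewtonPkg', "NewtonPkg.mla");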

 

restart:

Digits := 15:

N := 50;

N := 50

 

f := array([seq(evalf(-2*x[i]+x[i]^2+0.99/i^2),i=1..N)]):

CodeTools:-Usage(
  fsolve({seq(f[i],i=1..N)},{seq(x[i]=0.5,i=1..N)})  ):
type(eval(%,1),set(name=numeric));

memory used=43.53MiB, alloc change=64.00MiB, cpu time=2.59s, real time=2.60s, gc time=15.60ms
true

 

# s3 is not recreated for each new problem, so does not contribute
# to problem specific timing. Even compilation to s3c would be done
# at most once, upon package load say.
#
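# s3 performs an in-place LU factorization of A with partial pivoting
# (pivot indices stored in ip, multipliers stored below the diagonal of A),
# then applies the same pivots and multipliers to b and back-substitutes,
# so b is overwritten with the solution of A x = b.
#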
s3 := proc(n::posint,
           A::Array( datatype = float[8] ),
           ip::Array( datatype = integer[4] ),
           b::Array( datatype = float[8] ) )
   local i::integer, j::integer, k::integer, m::integer, t::float;
      ip[n] := 1;
      for k to n-1 do
        m := k;
        for i from k+1 to n do
          if abs(A[m,k]) < abs(A[i,k]) then
            m := i
           end if
         end do;
        ip[k] := m;
        if m <> k then
          ip[n] := -ip[n]
         end if;
       t := A[m,k];
       A[m,k] := A[k,k];
       A[k,k] := t;
       if t = 0 then
         ip[n] := 0;
         #return ip
         end if;
       for i from k+1 to n do
         A[i,k] := -A[i,k]/t
         end do;
       for j from k+1 to n do
         t := A[m,j];
         A[m,j] := A[k,j];
         A[k,j] := t;
         if t <> 0 then
           for i from k+1 to n do
             A[i,j] := A[i,j]+A[i,k]*t
             end do
           end if
         end do
       end do;
     if A[n,n] = 0 then
       ip[n] := 0
       end if;
     #ip[n]
      if ip[n] = 0 then
        error "The matrix A is numerically singular"
       end if;
      if 1 < n then
        for k to n-1 do
          m := ip[k];
          t := b[m];
          b[m] := b[k];
          b[k] := t;
          for i from k+1 to n do
           b[i] := b[i]+A[i,k]*t
           end do
         end do;
       for k from n by -1 to 2 do
         b[k] := b[k]/A[k,k];
         t := -b[k];
        for i to k-1 do
           b[i] := b[i]+A[i,k]*t
           end do
         end do
       end if;
     b[1] := b[1]/A[1,1];
     #return 0.0; # Better handling of explicit returns needed when inlining
end proc:

# Newtondummy is not recreated for each new problem. But its
# instantiation is done for each problem, and that is accounted
# for in the base timing below.
#
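# Newtondummy is the Newton-iteration skeleton: _DUMMY1 evaluates the
# residual into f0, _DUMMY2 fills the Jacobian j0, _DUMMY3 overwrites f0
# with the Newton step by solving the linear system in place, x is updated
# by that step, and the max-norm of the step is compared against tol.
#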
Newtondummy := proc(x::Array(datatype=float[8]),
               f0::Array(datatype=float[8]),
               j0::Array(datatype=float[8]),
               N::integer,
               ip::Array(datatype=integer[4]),
               maxiter::integer[4],
               tol::float[8])
   local ferr::float[8], temp::float[8],
         i::integer, k::integer, iter::integer;
      ferr := 10.0*tol;
      iter := 0;
      while ferr > tol and iter < maxiter do
        _DUMMY1; # f3(x, f0);
        _DUMMY2; # J3(x, j0);
        j0[1,1] := j0[1,1]; # else j0 not recognized as 2-dim?
        _DUMMY3; # s3c(N, j0, ip, f0);
        for i from 1 to N do
           x[i] := x[i] - f0[i];
        end do;
        ferr := abs(f0[1]);
        for i from 2 to N do
           temp := abs(f0[i]);
           if temp > ferr then
              ferr := temp;
           end if;
        end do;
        ferr := ferr;
        iter := iter + 1;
     end do:
     return ferr;
end proc:

# The following are all the problem specific assembly and compilation
# tasks, timed together. (They could be part of an appliable module,
# eventually).
#
(st,str) := time(),time[real]():

eqsA := [seq(F[i]=evalf(f[i]),i=1..N)]:
irform2 := StatSeq(seq(Assign(op(i)),i=eqsA)):
prccons:= codegen[intrep2maple](Proc(Parameters(x::Array,F::Array),irform2)):
eqsJ := remove(type,[seq(seq(Jac[i,j]=evalf(diff(f[i],x[j])),j=1..N),i=1..N)],name=0.0):
irformJ := StatSeq(seq(Assign(op(i)),i=eqsJ)):
prcconsJ:= codegen[intrep2maple](Proc(Parameters(x::Array,Jac::Array),irformJ)):

(basetime,basetimer) := time()-st,time[real]()-str;

basetime, basetimer := 0.31e-1, 0.38e-1

 

# I need to document what I've done here.
# (I ought to consider writing a more general purpose "function-call Inliner".)
#
(st,str) := time(),time[real]():

UNewtondummy := ToInert(eval(Newtondummy)):
UNdlocals := [op([2,..],UNewtondummy)]:
Uprccons := ToInert(eval(prccons)):
UprcconsJ := ToInert(eval(prcconsJ)):
Us3 := ToInert(eval(s3)):
Us3locals := [op([2,..],Us3)]:
newlocals := [seq(`if`(t::'specfunc(anything,_Inert_DCOLON)',
                  _Inert_DCOLON(_Inert_NAME(cat(op([1,1],t),"_n1")),op([2..],t)),NULL),
                  t=Us3locals)]:
Ls3 := [seq(_Inert_LOCAL(i)=_Inert_LOCAL(i+nops(op(2,UNewtondummy))),i=1..nops(Us3locals))]:
T1 := subs(_Inert_NAME("_DUMMY1")=op([5,..],Uprccons), # param sequence 1st 2nd entries match
           _Inert_NAME("_DUMMY2")=op(subs([_Inert_PARAM(2)=_Inert_PARAM(3)],op([5],UprcconsJ))),
           _Inert_NAME("_DUMMY3")=op(subs([_Inert_PARAM(1)=_Inert_PARAM(4),
                                           _Inert_PARAM(2)=_Inert_PARAM(3),
                                           _Inert_PARAM(3)=_Inert_PARAM(5),
                                           _Inert_PARAM(4)=_Inert_PARAM(2),
                                           op(Ls3)],op([5],Us3))),
           subsop(2=_Inert_LOCALSEQ(op([2,..],UNewtondummy),op(newlocals)),UNewtondummy)):

# This is like `Newton`, but with the code from prccons,prcconsJ,s3 inlined.
InlinedNewton := FromInert(T1):

InlinedNewtonc := Compiler:-Compile(InlinedNewton):

(fusedcompiletime,fusedcompiletimer) := time()-st,time[real]()-str;

fusedcompiletime, fusedcompiletimer := .983, 1.526

 

# And now, essentially what was done previously. Ie, all of J3, f3, and s3c are
# Compiled. And so is `Newton`, except now it is accessing those three lexically
# while before they were explicitly declared global in `Newton`.
#
(st,str) := time(),time[real]():

f3 := Compiler:-Compile(prccons):
J3 := Compiler:-Compile(prcconsJ):
s3c := Compiler:-Compile(s3):
Newton := subs([_DUMMY1='f3(x, f0)',
                _DUMMY2='J3(x, j0)',
                _DUMMY3='s3c(N, j0, ip, f0)'],eval(Newtondummy)):
Newtonc := Compiler:-Compile(Newton):

(separatedcompiletime,separatedcompiletimer) := time()-st,time[real]()-str;

Warning, the function names {J3, f3, s3c} are not recognized in the target language

 

separatedcompiletime, separatedcompiletimer := .780, 1.306

 

# These need only be created for the first problem, and could be
# efficiently grown (but not replaced) for new problems with larger size N.
#
x0 := Array(1..N,datatype=float[8]):
f0 := Array(1..N,datatype=float[8]):
j0 := Array(1..N,1..N,datatype=float[8]):
ip := Array(1..N,datatype=integer[4]):

repeats := 1000:

ArrayTools:-Fill(0.0,f0):
ArrayTools:-Fill(1,ip):
(t11,t11r) := time(),time[real]():
for ii from 1 to repeats do
  ArrayTools:-Fill(0.5,x0):
  ArrayTools:-Fill(0.0,j0):
  InlinedNewtonc(x0,f0,j0,N,ip,100,HFloat(2e-6)); # singly compiled
end do:
(tottime,totrtime) := time()-t11,time[real]()-t11r:
evalf[4](tottime/repeats)*'seconds',evalf[4](totrtime/repeats)*'seconds[real]';

ArrayTools:-Fill(0.5,x0):
ArrayTools:-Fill(0.0,j0):
InlinedNewtonc(x0,f0,j0,N,ip,100,HFloat(2e-6)); # singly compiled

0.8420e-3*seconds, 0.8500e-3*seconds[real]
0.463564004320658215e-6

 

ArrayTools:-Fill(0.0,f0):
ArrayTools:-Fill(1,ip):
(t11,t11r) := time(),time[real]():
for ii from 1 to repeats do
  ArrayTools:-Fill(0.5,x0):
  ArrayTools:-Fill(0.0,j0):
  Newtonc(x0,f0,j0,N,ip,100,HFloat(2e-6)); # separately compiled
end do:
(tottime,totrtime) := time()-t11,time[real]()-t11r:
evalf[4](tottime/repeats)*'seconds',evalf[4](totrtime/repeats)*'seconds[real]';

ArrayTools:-Fill(0.5,x0):
ArrayTools:-Fill(0.0,j0):
Newtonc(x0,f0,j0,N,ip,100,HFloat(2e-6)); # separately compiled

0.1077e-2*seconds, 0.1087e-2*seconds[real]
0.463564004320658215e-6

 

 

 

Download newtoncompiled6.mw

(1) First, please use only Maple input and 1D math if you can. The Classic worksheet is the easiest and least buggy. I spent a lot of time on this problem, and most of it was spent just copying and pasting your equations. (Is there a way to automatically convert an .mw worksheet or 2D math code to 1D math input and a Classic worksheet?) I wish common sense had prevailed and Maple had never moved to .mw.

 

I attach a partial solution. I think this is a semi-infinite-domain problem. I am using N = 2; N = 10 may be overkill.

 

Also, at N = 2, physical intuition suggests that the flux of theta can be taken to be zero (I am assuming that you are solving for velocity and temperature).

Low values of epsilon (that is a crazy symbol; I had never used it, and it was a pain to copy and paste) give a solution which can be used as an approxsoln (guess) for larger values. I leave the rest to you.

If you want a solution for your original BC, take my solution and use it as approxsoln for your case; it might work. Another trick is to write a continuation on theta(N) based on my BC.

A few tips: increase Digits, and use a continuation parameter based on the physics (ideally a linear problem, or a simple problem, from which you can grow towards your own problem).
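dsolve's BVP solver also has a built-in continuation option; here is a hedged, self-contained toy example (not the OP's system) of embedding a parameter that is ramped from 0 to 1:

# c = 0 gives an easy linear problem, c = 1 the target nonlinear one;
# dsolve increases c automatically
sysC := {diff(y(x), x, x) + c*exp(y(x)) = 0, y(0) = 0, y(1) = 0}:
solC := dsolve(sysC, numeric, method = bvp[midrich], continuation = c):
solC(0.5);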

 

restart; with(plots)

 

Digits:=15;

 

Digits := 15

 

P := {Ec = .5, N = 2, Re = 5, beta = 0.1e-1, lambda = -1, n = 2, sigma = 10, k[1] = .9};

P := {Ec = .5, N = 2, Re = 5, beta = 0.1e-1, lambda = -1, n = 2, sigma = 10, k[1] = .9}

 

#P := {N=6};

E := [.2, .5, .8, 1.1];

E := [.2, .5, .8, 1.1]

 

ode1 := theta(eta)^2*((diff(f(eta), eta))^2-f(eta)*(diff(f(eta), eta, eta))) = exp(beta/theta(eta))*((diff(f(eta), eta, eta, eta))*theta(eta)^2-beta*(diff(f(eta), eta, eta))*(diff(theta(eta), eta)))+k[1]*theta(eta)^2*(2*(diff(f(eta), eta))*(diff(f(eta), eta, eta, eta))+(diff(f(eta), eta, eta))^2-f(eta)*(diff(f(eta), eta, eta, eta, eta)));

ode1 := theta(eta)^2*((diff(f(eta), eta))^2-f(eta)*(diff(f(eta), `$`(eta, 2)))) = exp(beta/theta(eta))*((diff(f(eta), `$`(eta, 3)))*theta(eta)^2-beta*(diff(f(eta), `$`(eta, 2)))*(diff(theta(eta), eta)))+k[1]*theta(eta)^2*(2*(diff(f(eta), eta))*(diff(f(eta), `$`(eta, 3)))+(diff(f(eta), `$`(eta, 2)))^2-f(eta)*(diff(f(eta), `$`(eta, 4))))

 

ode2 := (1+`&epsilon;`*theta(eta))*(diff(theta(eta), eta, eta))+`&epsilon;`*(diff(theta(eta), eta))^2+sigma*f(eta)*(diff(theta(eta), eta))-n*sigma*(diff(f(eta), eta))*theta(eta)+n*(n-1)*(1+`&epsilon;`*theta(eta))*theta(eta)/Re+sigma*lambda*theta(eta)+sigma*Ec*(exp(beta/theta(eta))*(diff(f(eta), eta, eta))^2+k[1]*((diff(f(eta), eta))*(diff(f(eta), eta, eta))^2-f(eta)*(diff(f(eta), eta, eta))*(diff(f(eta), eta, eta, eta)))) = 0;

ode2 := (1+`&epsilon;`*theta(eta))*(diff(theta(eta), `$`(eta, 2)))+`&epsilon;`*(diff(theta(eta), eta))^2+sigma*f(eta)*(diff(theta(eta), eta))-n*sigma*(diff(f(eta), eta))*theta(eta)+n*(n-1)*(1+`&epsilon;`*theta(eta))*theta(eta)/Re+sigma*lambda*theta(eta)+sigma*Ec*(exp(beta/theta(eta))*(diff(f(eta), `$`(eta, 2)))^2+k[1]*((diff(f(eta), eta))*(diff(f(eta), `$`(eta, 2)))^2-f(eta)*(diff(f(eta), `$`(eta, 2)))*(diff(f(eta), `$`(eta, 3))))) = 0

 

bcs := f(0) = 0, (D(f))(0) = 1, (D(f))(N) = 0, ((D@@2)(f))(N) = 0, theta(0) = 1, (D(theta))(N) = 0;

bcs := f(0) = 0, (D(f))(0) = 1, (D(f))(N) = 0, ((D@@2)(f))(N) = 0, theta(0) = 1, (D(theta))(N) = 0

 

 

sys := eval([ode1, ode2, bcs], P);

sys := [theta(eta)^2*((diff(f(eta), eta))^2-f(eta)*(diff(f(eta), `$`(eta, 2)))) = exp(0.1e-1/theta(eta))*((diff(f(eta), `$`(eta, 3)))*theta(eta)^2-0.1e-1*(diff(f(eta), `$`(eta, 2)))*(diff(theta(eta), eta)))+.9*theta(eta)^2*(2*(diff(f(eta), eta))*(diff(f(eta), `$`(eta, 3)))+(diff(f(eta), `$`(eta, 2)))^2-f(eta)*(diff(f(eta), `$`(eta, 4)))), (1+`&epsilon;`*theta(eta))*(diff(theta(eta), `$`(eta, 2)))+`&epsilon;`*(diff(theta(eta), eta))^2+10*f(eta)*(diff(theta(eta), eta))-20*(diff(f(eta), eta))*theta(eta)+(2/5)*(1+`&epsilon;`*theta(eta))*theta(eta)-10*theta(eta)+5.0*exp(0.1e-1/theta(eta))*(diff(f(eta), `$`(eta, 2)))^2+4.50*(diff(f(eta), eta))*(diff(f(eta), `$`(eta, 2)))^2-4.50*f(eta)*(diff(f(eta), `$`(eta, 2)))*(diff(f(eta), `$`(eta, 3))) = 0, f(0) = 0, (D(f))(0) = 1, (D(f))(2) = 0, ((D@@2)(f))(2) = 0, theta(0) = 1, (D(theta))(2) = 0]

 

 

P;

{Ec = .5, N = 2, Re = 5, beta = 0.1e-1, lambda = -1, n = 2, sigma = 10, k[1] = .9}

 

 

#dsolve(eval(sys, `&epsilon;` = E[1]), [f(eta), theta(eta)], abserr=1e-4,numeric, method = bvp[midrich], maxmesh = 1000):

for k to 4 do
  if k = 1 then
    Q||k := dsolve(eval(sys, `&epsilon;` = E[k]), [f(eta), theta(eta)], numeric,
                   method = bvp[midrich], maxmesh = 10000)
  else
    Q||k := dsolve(eval(sys, `&epsilon;` = E[k]), [f(eta), theta(eta)], abserr = 0.1e-3,
                   numeric, method = bvp[midrich], maxmesh = 1000,
                   approxsoln = Q||(k-1))
  end if;
  T||k := plots:-odeplot(Q||k, [eta, theta(eta)], eta = 0 .. 2)
end do:

plots:-odeplot(Q1, [eta, f(eta)], eta = 0 .. 3);

Warning, right boundary of range exceeds domain of the problem, it has been adjusted

 

 

plots:-display(T || (1 .. 4));

 


 

Download VT1.mw

@ 

At least in Maple 14, you will see that fsolve consumes a huge amount of RAM compared to the approach discussed here. I think this is because fsolve stores the zero entries of the Jacobian.

 

@acer 

If you manually copy-paste F, Jac, s3 and Newton into a single procedure and compile it, that will be faster. (I did this before; I can't find it now.)

I think it is interesting to note that both Newton (the original code) and Newtonc have similar speed for a single iteration, but I expect that a single compiled procedure (without global calls) will be faster.

 

@acer 

 

Yes, I typically remove the zero entries before compiling, otherwise it is too slow (and RAM-intensive for large problems). While this is good, unfortunately this approach is not useful for dsolve/numeric/compile for DAEs. Please check my PM about DAEs, as I want this thread focused strictly on Newton-Raphson.

 

Also, fsolve does not provide an option for tolerance. Being able to give a tolerance of 1e-4 or 1e-6 is more than enough for many problems, so that is much appreciated.

 

You don't need to do s3 or s3c inside dsolve/numeric. You can easily call 'dsolve/numeric/SC/Decomp' followed by 'dsolve/numeric/Solve', both of which are recognized in dsolve/numeric.

 
