ecterrab


MaplePrimes Activity


These are replies submitted by ecterrab

@Alejandro
Your generalization of what Preben has done looks correct to me. Note however two things.

1) Both Preben and you used "assuming real" in key intermediate steps of what you show. Without "assuming real", your approach produces a solution as complicated as dsolve's solution, not simpler. On the other hand, the simpler solution presented by Markiyan when he posted this ODE system is valid in general, for complex x, y(x), z(x). So there must be a way to obtain it without "assuming real".

As an aside: while "assuming real" is perfectly valid during interactive algebraic exploration, Maple's default domain is complex, so "assuming real" cannot be coded into dsolve.

2) What you are presenting, a matrixDE approach, is actually the standard way to solve linear ODE systems (a quick search confirms it). It was Maple's approach until 1996. The related old subroutines are still in the library: write sys as a set (not a list), then enter `dsolve/diffeq/LinSysNonConst`(sys, {y, z}, x) and you see this matrixDE at work. The solution you will see is not simpler than dsolve(sys), just as happens with your presentation when not using "assuming real".
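For the record, a minimal session sketch of that comparison; the 2x2 linear system below is mine, chosen only for illustration, and the old routine's output form on any particular example may vary:

```maple
# An illustrative first-order linear ODE system (not Markiyan's)
sys := {diff(y(x), x) = z(x)/x, diff(z(x), x) = -y(x)/x};

# The pre-1997 matrixDE-based routine, still in the library;
# note that sys must be a set, not a list
`dsolve/diffeq/LinSysNonConst`(sys, {y, z}, x);

# Compare with the current differential-algebra-based solver
dsolve(sys);
```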

My take on this: the actual problem behind finding a simpler solution to Markiyan's ODE system, in brief, is that the solution can be written in infinitely many manners, because linear combinations of solutions are also solutions. This is not related to using a matrixDE approach, and I do not see any algorithmic way to tell which linear combination takes you to the simpler mix of trigonometric functions.

Details
1) Markiyan's sys can be decoupled into one algebraic equation (say, for z as a function of y) and a 2nd order linear ODE for y, as you see if you enter

> PDEtools:-casesplit(sys);

[z(x) = … (trigs and y(x)), y'' = a(x) y' + b(x) y]

where a(x) and b(x) also involve trig functions. Now take the equation for y(x) and solve it:

> dsolve(ode_y);

y(x) = C1 s_1(x) + C2 s_2(x)

where the two independent solutions s_1(x) and s_2(x) also involve trig functions. If you now substitute this solution into z(x) = … (trigs and y(x)), you get the solution for z(x).

Now: you can rewrite this solution as y(x) = C1 s_3(x) + C2 s_4(x), where s_3(x) and s_4(x) are linear combinations of s_1(x) and s_2(x), and s_3(x) and s_4(x) may be simpler, or more complicated, than s_1(x) and s_2(x). Likewise, if you now substitute y(x) = C1 s_3(x) + C2 s_4(x) into z(x) = … (trigs and y(x)), the resulting solution for z(x) may also be simpler or more complicated than the solution you get when you use s_1(x) and s_2(x).
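To see this non-uniqueness in a trivial setting (a toy example of mine, unrelated to Markiyan's system), take y'' + y = 0, for which {sin(x), cos(x)} and {exp(I*x), exp(-I*x)} are equally valid fundamental systems, each expressible as linear combinations of the other:

```maple
ode := diff(y(x), x, x) + y(x) = 0;

# dsolve picks one fundamental system (the trigonometric one)
dsolve(ode);

# Any independent linear combination is just as valid a solution;
# odetest returns 0 when the expression satisfies the ODE
odetest(y(x) = exp(I*x) - 3*exp(-I*x), ode);
```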

Summarizing: the actual problem behind "obtaining a simpler solution" is to decide which linear combination of s_1(x) and s_2(x), among infinitely many, is the most appropriate - I think it has no systematic solution. The matrixDE approach you suggest is standard, it was Maple's default 15+ years ago, and it has advantages in some cases; we may even revamp it for when it is advantageous, but it does not take you systematically to a simpler solution.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

@Alejandro
A matrixDE approach was the default in Maple for tackling systems of linear ODEs in 1996, before I rewrote dsolve. In 1997 I changed that to using diffalg. Both approaches work in all cases, with respective advantages/disadvantages depending on the example - and, as I said, it is possible to come up with a heuristic suggesting the use of this or that approach depending on the example, in order to get a simpler solution. The old matrixDE approach is still there: write sys as a set (not a list), then `dsolve/diffeq/LinSysNonConst`(sys, {y, z}, x). As you can see, the solution returned is not simpler than dsolve(sys). You mentioned some further tuning; I look forward to your next comment.

About the problem - itself - of obtaining a "simpler" solution for a linear ODE or a system of them: you know, linear combinations of solutions are also solutions, so depending on 1) the mathematical functions entering the system and 2) the default way of simplifying them (e.g. Maple and Mathematica work differently), some linear combinations of solutions will result in simpler solutions than others. That is what Preben did for this example. Nice, but just for this example, based on educated human exploration around it.

Regarding the transformation suggested by Markiyan *for this example*, i.e. tan instead of the cos used by dsolve: recall we are talking about an algorithm for a general problem, not for one ODE. The approach used in dsolve *for the general problem* is to first normalize the equation (different problems map into one and the same normal form) in order to decide what is (heuristically) a more convenient transformation among the many possible. In Markiyan's example, after decoupling, the system reduces to one equation: diff(diff(y(x), x), x) = (sin(x)*(diff(y(x), x))*cos(x)^3-5*y(x)*cos(x)^2+27*sin(x)*(diff(y(x), x))*cos(x)+9*y(x))/(cos(x)^4-6*cos(x)^2-27). You normalize it via convert(%, NormalForm, simplifier = simplify); and receive y'' = f(x)*y, where f(x) only involves cos. It is then natural to use t = cos(x). For this example (not others), using t = tan(x) rationalizes this ODE the same way as t = cos(x) does (try it - you have PDEtools:-dchange for that). By the way, providing an option to indicate the transformation has been in the plans for ages ...
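A sketch of those two steps, using the equation as given above; convert(..., NormalForm, ...) and the change-of-variables command (documented as PDEtools:-dchange) are taken from the text, while the exact output forms may differ between Maple versions:

```maple
ode := diff(y(x), x, x) = (sin(x)*diff(y(x), x)*cos(x)^3 - 5*y(x)*cos(x)^2
       + 27*sin(x)*diff(y(x), x)*cos(x) + 9*y(x))
       /(cos(x)^4 - 6*cos(x)^2 - 27);

# Step 1: normalize, mapping to the normal form y'' = f(x)*y,
# where here f(x) only involves cos
convert(ode, NormalForm, simplifier = simplify);

# Step 2: change variables t = tan(x), i.e. x = arctan(t),
# renaming the unknown y(x) -> u(t)
tr := {x = arctan(t), y(x) = u(t)};
PDEtools:-dchange(tr, ode, [t, u(t)]);
simplify(%);
```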

The summary is as said in previous comments: A) No one has identified here, so far, a class of problems for which the matrixDE decoupling instead of a direct differential algebra approach, or the tan instead of cos transformation, is more convenient. By class of problems, Alejandro, I do not mean "the class of linear ODE systems" (obvious), as you said, but a subclass of them for which one method is more convenient than the other, given that *both work in general* (unlike what you suggested). B) This is just about returning another solution, simpler than the already correct one returned, in a context where any linear combination of solutions is also a solution - there are infinitely many possibilities. C) dsolve has good heuristics in general for returning simpler solutions, and from time to time I invest more time in simplicity - it is important. The possibility of taking advantage of a matrixDE approach, even if only in seldom cases, has been on my mind since I changed it into using differential algebra. I may even return to it, motivated by your comments. But, honestly, this is not a top priority for me in the context of so many directions in which the current DE solvers can be enhanced.

As an aside: besides the Physics package, it is certainly exciting and *extremely* challenging for me to be on top of the development of the DE and PDE symbolic solutions and special functions code of a project as big as the Maple project. Two non-obvious but important remarks: it is just silly to approach this challenge as a one-person endeavor - your feedback and ideas (including the arXiv repository) are of the utmost importance, even when I may disagree with you. Also, not sitting in this cockpit, you can't imagine how many things can still be done. Literally, the more I augment the methods, simplifications, etc. in use, the 10x more I see further things and methods that can be derived and implemented. To add complexity: the interconnection between methods for ODEs, PDEs, special functions and simplification is, well, just brutal - I'd say impossible to present in writing alone. This post on "ODEs, PDEs and Special Functions" all in one is in part motivated by my perception of that connection, as you also see in the examples of the attached worksheet.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

@Venkat

Hi, I am glad that this conundrum about the speed at which dsolve/numeric computes solutions has arrived at a conclusion, and that you agree it runs at the speed of C code (it actually *is* compiled C code for the ODE system you bring).

Your observations about other things - the implementation of the algorithm, the choice of solver, and the method for solving nonlinear DAEs, perhaps in the presence of mathematical functions (your system has exp(u[i]) all around, where the u[i] are the unknowns) - are rather valuable, and I am sure Allan is already thinking about them. I think this debate is relevant and not annoying at all.

Your comments about Matlab and Mathematica are relevant too, although I think that providing files to be sent to non-Maple users (as you point out Matlab does) may not be as relevant as being able to tackle large systems within the Maple worksheet (when the system is tractable - see the next paragraph).

I note, however, only one thing, Venkat: while not the expert in numerical DE methods, I am sufficiently familiar with them, and the system you posted here, where the algebraic equations contain many exp(u[i]) and the u[i] are the unknowns, is *very hard* to decouple symbolically. I don't think a program exists for handling that kind of coupling with N >= 8 (which implies ~20 equations), and tracing dsolve/numeric I see that its symbolic preprocessing gets stuck precisely there, before writing or compiling any C code. So my question to you: can you tackle exactly this same system using Matlab, or by writing C code yourself? Or is this one example that you don't know how to solve using Maple or anything else?

Independent of that, using Maple on my laptop, dsolve/numeric takes 46 seconds to solve the problem for N = 7 (i.e., 18 equations), with the symbolic preprocessing of the algebraic equations containing exp(u[i]) taking most of the time. Then it hangs for N = 8, as said. There may be a problem in the symbolic preprocessing.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

@Alejandro

We have a different view on this one. I don't think Markiyan showed "an algorithmic way of finding a simpler solution for a class of problems". No class of problems is identified in this post. His approach also relies on an educated human guess, a transformation cos and sin -> tan (actually based on dsolve's userinfos), that works fine in the example he presented but can at other times produce the opposite, i.e. a more complicated solution. In the same line, Preben nicely refined Markiyan's presentation, but also didn't identify a class of problems for which the linear combination sys[1] + sys[2], followed by u = y + z, would systematically result in a simpler form of the solution.

None of that is found in that post; here in this thread, elaborating on top of Preben's approach to that example, you say that the system could be written in matrix form and decoupled. There are other ways of decoupling, Alejandro. Concretely, dsolve also decouples the system before solving it: check PDEtools:-casesplit(sys). Again, depending on the example, one way of decoupling may work better in one case and another way in other cases, and until you identify the class of problems for which one approach works better than the other, you have nada.

Having said all that, I do not discard that someone could come forward and show concretely, even easily, that for cases where "these" conditions are satisfied: a) "this" transformation is most of the time better than the one used by dsolve, and b) "a matrixDE decoupling" results in simpler solutions than "the differential algebra lexicographical decoupling used by dsolve". I'd be happy to read such a post and consider implementing the underlying ideas. However, I do not think such an approach could be *derived* [i.e. appear as a consequence] from using DE packages as Markiyan asked, hence my answer to him: perhaps his question is based on too-high expectations.

Anyway, at this point, given that the ODE system posted by Markiyan is actually solved correctly by dsolve - all this debate is only about getting a simpler solution from a command that more frequently than not already returns simpler solutions - I personally prefer to focus on other things that look really more urgent or relevant to me.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

This presentation, about ODEs, PDEs and Special Functions, claims among other things that dsolve numerically solves ODEs - not PDEs - at the speed of C code. This is rather relevant information, and most people are not aware of it. In the presentation, an example of a system with 50 coupled ODEs, 50 unknowns, and 50 Lagrange multipliers is solved as an illustration.

@Mac Dude and @Venkat

On the debate about the claim above: it is indisputable that dsolve writes and compiles C code on the fly for solving ODE systems when you pass compile = true. In this context, whether you think you can write better C code for a particular ODE system is a different issue, and I do not dispute it, although note that I've not seen an example of that in the 10 comments above. One concrete, complete example is always helpful to make a point; it also helps improve things. Likewise, having more options than currently available to tell dsolve to write the C code in this or that manner - a valuable suggestion, I really like it - is also a different issue.

I also read carefully the comments by Venkat and Allan's response. I do not think the technical details presented - valuable, perhaps, regarding improvements in dsolve/numeric - change the facts about the option compile, nor change my opinion that tackling ODE systems within a computer algebra worksheet is today possible at the speed of C code and tremendously advantageous.

By the way, Venkat, everywhere in computer algebra you find a similar situation, where more specific problems may require non-trivial use of optional arguments to get an optimal result - optional arguments that may not even exist today. And yes, having expertise, we sometimes come across, or can formulate, problems that beat a computer algebra system. We grow by studying these concrete examples when they are posted; that makes this debate potentially so valuable. But, hey, that also doesn't change the status of things regarding the speed at which dsolve/numeric runs. And by the way: no, I do not think at all that Maple's dsolve/numeric writes bad or mostly inconvenient C code when the system includes many ODEs, regardless of how much this automatic generation of C code can be improved.

But I don't want to close my summary by just reaffirming that dsolve/numeric does write, compile and run C code to solve ODE systems. I also see a very detailed debate that perhaps triggers valuable further developments/improvements, and I am frankly glad for that and for all your participation.

@Markiyan
In "Play a simple melody" you show an ODE system and the solution, but you do not show how to obtain that solution. Perhaps the advanced status of things in this or that matter creates an expectation difficult to fulfill … if the logic underlying a result is not understood (as in your post), then it cannot be coded. Your other post is more about computing a limit, and about using Green functions for solving PDEs with boundary conditions - some things are already in place, others not. I presented this post about ODEs, PDEs and Special Functions aiming to illustrate the current status of things, which is really extraordinary, I think, but there is still a long way to go in improving things; again, thanks for your comments, Markiyan.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi
As mentioned in a previous comment in this thread, we now provide access to the version of Physics under development, including adjustments/fixes/novelties as they become ready. A new update (today, Aug/1) is available for download at http://www.maplesoft.com/products/maple/features/physicsresearch.aspx.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

@Mac Dude

Taking real/good advantage of computer algebra in physics courses is to some extent a pioneering activity. As mentioned in the presentation, it is also one of the main targets of the Physics project. Please feel free to bring your related questions/suggestions, here or by writing to us at physics@maplesoft.com, and we may be able to help you / join efforts.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

@Mac Dude

When using computer algebra in physics, "casting expressions into certain forms" is exactly the kind of thing we (students or not) need to do all the time. One needs to learn these basic operations in order to take advantage of the computer in physics, and for these basic operations I think a basic and really short, to-the-point textbook/tutorial is best. Your first comment made me consider revamping/updating the tutorial I wrote for physics students years ago. In your second comment, however, you mention something else, like "pushing the limits of Maple", e.g. simplifying an expression for which simplify fails. I think these are more advanced operations, or more technical things like the evaluation-levels detail you linked, more relevant for people who go deeper into using computer algebra in general - as such, I feel these topics are outside the sphere of the Physics package.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi
Just revisiting the documentation about dsolve's speed to see if the information I have is also available to you, Mac Dude. Please check ?dsolve/rkf45 and ?dsolve/numeric/efficiency: the current dsolve runs as fast as C code when you specify the compile = true option in the call to dsolve/numeric. Using this option actually makes dsolve write the C code for you, compile it, and use it for solving the DE system, without you noticing; it is all automatic, as said in the presentation. The limitations of "dsolve at the speed of C code" are also just those of the regular C code you might nevertheless prefer to write yourself to use with a connection Toolbox: it only works with real-valued DE systems that contain at most elementary functions (no special functions allowed) - see the list of functions in ?Compile,compile under Runtime Support.
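As a minimal sketch of that option (the toy system and names below are mine, not from the help pages; sin is among the elementary functions the compiled path supports):

```maple
# A toy real-valued ODE system with only elementary functions,
# so it qualifies for the compiled path
sys := {diff(y(t), t) = z(t), diff(z(t), t) = -sin(y(t))};
ic  := {y(0) = 1, z(0) = 0};

# compile = true makes dsolve generate, compile and link C code
# for the system evaluation, all automatically
sol := dsolve(sys union ic, numeric, compile = true);

# Evaluate the compiled solution procedure at t = 10
sol(10.0);
```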

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi
These polemic questions, and the answers I present in this talk, actually passed through my mind at different times and honestly reflect what I think.

In 1996 I was interested in computing Poincaré sections in the context of a general relativity problem. There was Fortran code for that (see Calzetta and El Hasi, Phys. Rev. D 51, 1995). After struggling badly, because the number of numerical experiments to be done was really large, it turned out to be much simpler for me to write a complete program to compute Poincaré sections within a computer algebra system than to try to use the existing Fortran one. The program is today found in Maple as DEtools[poincare].

I learned two things: 1) the advantages of running numerical computations within a computer algebra environment were just enormous; 2) the drawback at that time was speed. I didn't like the speed. No, it was not satisfactory: C was about 30 to 100 times faster.

In 2000 this speed issue got on my nerves. I talked to Allan Wittkopf, my friend, at that time a Ph.D. student and an expert in C. We came up with some interesting ideas that ended up in the Maple DNA project for fast numerical ODE solutions (google "DNA Maple 6"). Surprisingly, with simple ideas we got a 15x to 30x speed-up - hey! I understood one more thing: the numerical slowness of computer algebra systems was not a real or serious obstacle.

Over the next ten years Allan became the Maple numerical DE guy - great - and voilà the speed, Dude. It is there, for real; even DNA is now in the dust. Try your examples; compare. I honestly believe and fully stand by this answer to polemic question 1. Yes, you can also use a connection Toolbox instead, as you say, and some people will prefer it, perhaps because they already have C or Fortran code they can reuse. Still, I think the CAS worksheet is really better at both small and large scale, for the opportunity of reuse, for symbolic preprocessing not available in Fortran or C, and including for the speed.

About polemic question #2: I agree with you that the computer does not exclude books. I see from your comment that the presentation could be misunderstood as suggesting "this or that". The intention is only to reflect a practice, though: people tend more and more to use one or the other, as in "only go to the books if the computer (including the web) is insufficient". And not a few people ask me about this regarding DEs. The last time I consulted DE books was, I believe, ~10 years ago. For ODEs the computers went just far beyond the textbooks - not beyond humans. I currently do consult the arXiv.

You mentioned Abramowitz. True, I have my copy too, and have always looked at it as a monumental piece of work. But then in 1999 I moved away from its paper version (google "abramowitz pdf", second link). Then I saw the DLMF (Digital Library of Mathematical Functions), and Marichev's and others, emerging as static repositories of special function information. The FunctionAdvisor in Maple came after all that, with a different idea in mind: do not present 'static' information. Instead: process information, interrelating it on the fly according to the user's input, using an increasing number of new algorithms popping up all around.

There is still a long way to go. I think, however, that the paradigm has shifted already. Textbooks and static presentations are entering the rear-view mirror. Core pieces of mathematical information, processed on the fly with each-day-more-varied algorithms, are in front of us. This is what is illustrated in the attached presentation.

The core of the special functions part was also presented in "Special Functions in the Digital Age" (IMA, 2002), while a previous version of most of this DE material, including the questions, was presented in the "Teaching and Learning Differential Equations" session of the CMS 2000 meeting.

Edgardo S. Cheb-Terrab 
Physics, Maplesoft

Hi
Thanks for your comments. The learning curve: basic or advanced quality textbooks? Struggling to do seemingly trivial things like casting expressions into certain forms … I'd say compact-and-basic is the remedy, with compact & quality in bold. For teaching in Brazil, at the State University of Rio de Janeiro, I tried to write such a mini-and-basic text for physics students; I may revamp it. For the advanced level: what would you like to see in such a text that would smooth out the learning curve?

Examples and textbook notation: I'm glad to hear that you find these examples useful. They are mostly those shown in ?Physics,Examples. Physics and mathematical methods are not the same thing, and Feynman's and Landau's books (not only them, of course) are great in that their problems illustrate both for real. BTW, three of the four examples under "Mechanics" are from Landau's vol. 1.

The 'notation' issue actually refers to keyboard input and the display of results on the screen, not palettes: to enter things, say, as you do when computing with paper and pencil, and to see the output as you see it in textbooks. It is impressive to me how this speeds up matters in my brain. Even after working with computer algebra for so many years, I'm still not used to (it sounds alien to me) computerish-style input representing mathematical objects so differently from the way we do it by hand, and ditto for the output. All the effort in Physics goes into not doing that.

New: we now provide access around the clock to the version of Physics under development (http://www.maplesoft.com/products/maple/features/physicsresearch.aspx). This includes ways for you to present your suggestions / report bugs / give feedback. The downloadable package, post 17.01, is at "zero known bugs" at this moment, and we intend to keep it that way - no more waiting for adjustments until the next release.

Edgardo S. Cheb-Terrab
Physics, Maplesoft



> restart; with(Physics):

The default metric already has the signature [+,-,-,-] you are asking for. Just recall that the component "0" is mapped into the component "4"; that signature, the same one used by default in the Landau books and others, is

> g_[];

g_[mu, nu] = Matrix([[-1,  0,  0, 0],
                     [ 0, -1,  0, 0],
                     [ 0,  0, -1, 0],
                     [ 0,  0,  0, 1]])                                    (1)

You can also always use 0 or 4 indistinctly to refer to the components, as in:

> g_[0, 0] = g_[4, 4];

1 = 1                                                                     (2)

Independent of that, the keyword 'signature' is from old times and needs to be either removed or updated, because it allows only for [+,-,-,-] or [+,+,+,+] (adapted to the dimension of spacetime, which is also settable).

The issue is that, in current versions of Physics, you can actually set the metric to whatever you want, making that value of 'signature' irrelevant. For example:

> Setup(coordinates = cartesian, metric = a*dx^2 + b*dy^2 + c*dz^2 + d*dt^2);

* Partial match of 'coordinates' against keyword 'coordinatesystems'
Default differentiation variables for d_, D_ and dAlembertian are: {X = (x, y, z, t)}
Systems of spacetime Coordinates are: {X = (x, y, z, t)}

[coordinatesystems = {X}, metric = {(1, 1) = a, (2, 2) = b, (3, 3) = c, (4, 4) = d}]   (3)

> g_[];

g_[mu, nu] = Matrix([[a, 0, 0, 0],
                     [0, b, 0, 0],
                     [0, 0, c, 0],
                     [0, 0, 0, d]])                                       (4)

Download you_can_set_g_to_wha.mw


@peter137 Yes, I have additional documentation in mind; that is key in this project, and chunks are added with each release, with some people now complaining about the current large size of ?Physics,examples ... These examples are mainly from the Landau books or from the other books shown in the references (footnote of ?Physics).

What would be your suggestion for showing a use of the package that is of interest to you and that, at the same time, you think would be of wide interest?

Edgardo S. Cheb-Terrab
Physics, Maplesoft

