ecterrab

MaplePrimes Activity


These are replies submitted by ecterrab

@Rouben Rostamian  

I'm happy to see exploration to the sides; the PDEtools symmetry commands are really powerful and have a ton of options. By the way, SimilaritySolutions, the one you used, an approach frequently presented in symmetry textbooks, is a rather restricted and watered-down version of InvariantSolutions, which in turn is only presented in full in more advanced symmetry textbooks.

Now on the sqrt(x): this is not really "a bug". If you pdetest(sol, pde) assuming x::real, you see the remainder has signum(1, x) as a factor, which is equal to 0 for all nonzero real x. So the solution returned could be seen as not appropriate only at x = 0.

But more important: where is this abs(x) coming from? It comes from simplify. The solution actually computed by SimilaritySolutions is
sol_0 := u(x, t) = _C1+erf((1/2)/(t/x^2)^(1/2))*_C2, which tests OK right away, even with something as simple as normal(eval(pde, sol_0)). Try now simplify(sol_0) assuming x::real; you see abs(x) in the output, which, when testing, generates signum(1, x).
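
For reference, a minimal sketch of that sequence of checks; here pde stands for the equation of this thread (not reproduced in this reply), so the lines below are illustrative rather than a verbatim session:

sol_0 := u(x, t) = _C1 + erf((1/2)/(t/x^2)^(1/2))*_C2;
normal(eval(pde, sol_0));                  # tests OK right away
sol := simplify(sol_0) assuming x::real;   # abs(x) now appears in the output
pdetest(sol, pde) assuming x::real;        # remainder has signum(1, x) as a factor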

Best

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@_Maxim_ 

I see now there are two assuming constructs in the same item 3. First of all: from the explanation in the previous reply, what works is assume; your comment can then only be applied to it (whether it tries to place assumptions on 0), and my take on this one is that assume(x(0) > 0) should be a valid assumption, because in general nothing is known about x. Regarding why "the first construct does not interrupt with an error": it is because assuming notices that abs(x(0)) has no indets of type name, and therefore shortcuts the computation without calling assume at all.

So one thing to think about: assume could handle assumptions on x(0), at least when x is not assigned; I will forward this comment to the person taking care of assume (I wrote assuming but am not involved in assume itself).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@illuminates 

Now it is clear. This is fixed for Maple 2017. The updates of the Physics package, including fixes and the new material being developed, are distributed at the Maplesoft R&D Physics webpage. So in Maple 2017 + updates you get -6, not -24.

This Maplesoft R&D Physics webpage also includes updates for Maple 2016 that I recommend, containing fixes and Physics developments that entered Maple 2017, but the last update for Maple 2016 happened Apr/15. Only updates for the current release get posted every week. A workaround in Maple 2016 for this example you posted is to use SumOverRepeatedIndices, as you indicated.

@digerdiga 

The answers are on the help pages.

For the signature, see Physics:-Setup and search for the word signature on that page; in more recent versions of Maple you can just search for signature within the whole help system, and Physics:-Setup is one of the options that comes close to the top. For D_ see ?D_; for Christoffel see ?Christoffel. To express covariant derivatives in terms of Christoffel symbols, also see Christoffel.
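
As a minimal illustration (the topic names are the ones mentioned above), in a worksheet you would enter:

?Physics,Setup     # then search for the word "signature" on that page
?D_
?Christoffel       # also covers expressing covariant derivatives via Christoffel symbols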

I am not sure how familiar you are with using computer algebra ... Help pages are a core part of it. There are so many commands that it is pointless to try to remember everything. You just consult the help pages, always, and you rapidly remember things about the commands you use the most. Otherwise, without consulting the documentation, computer algebra systems are of little use.

 

@digerdiga 

differentialoperators is a new feature of Maple 2017 (that is four releases after Maple 17), so it doesn't work in Maple 17. The ability to work with any of the four signatures mentioned in the first answer (above) was introduced in Maple 2015 or the release after, so that also doesn't work as such in the older Maple 17. Without the differentialoperators feature, you can compute with D_[mu] and A(X) but not as operands of a product: D_[mu] is a differentiation operator; to use it you need to apply it, as in D_[mu](A(X)). Of course the interesting thing is to have this working properly also when you use multiplication (with `*`, not `.`) instead of just application, but that requires updating to the latest Maple.
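
As a hedged sketch of the Maple 2017 behavior described above (A and X here are illustrative names, and the exact right-hand side of the Setup call should be checked against ?Physics,Setup):

with(Physics):
Coordinates(X):
Setup(differentialoperators = {[D_[mu], [X]]}):  # see ?Physics,Setup for the exact syntax
D_[mu](A(X));     # application: works also without this feature
D_[mu] * A(X);    # product form: understood as a differential operator only with differentialoperators set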

Regarding your question about Physics:-Simplify, recall that all these commands have a corresponding help page. The answer to your question is on that help page: Simplify is physics oriented; it performs simplifications of noncommutative products taking algebra rules into account, as well as simplification of contracted indices taking Einstein's sum rule and the tensors' symmetries into account, plus some other things. So it is complementary to the standard simplify.
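
For instance, a small sketch of the kind of simplification meant here (contraction of repeated tensor indices against the metric, something the standard simplify does not attempt):

with(Physics):
Simplify(g_[mu, nu] * g_[`~mu`, `~nu`]);   # contracts the repeated indices; expected result: the spacetime dimension, 4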

In summary, some of the topics presented in today's IOP talk work in Maple 17, but several only work in newer releases or in the latest one, Maple 2017.3.

 

@digerdiga 

Could you please post a worksheet with your input/output? (From your reply I am unable to understand what you are saying.) Also, which Maple version? The current one is Maple 2017. Thanks.

@_Maxim_ 

The case of _Y is different: it is produced by the DESol routines, and it has been a global since before the existence of the FunctionAdvisor. The same happens with _Z and RootOf.

Regarding the FunctionAdvisor, I don't remember exactly all the situations (there are too many), but almost always the variables introduced are local ones. This design has advantages and disadvantages. The most obvious disadvantage is precisely the situation you ran into: you expected the f in f(z) to be global, which is, of course, an understandable expectation; but when you stop to think about the design, returning globals that also have a reasonable visual appearance (e.g. 'f', not '_f', which is kind of FORTRANish) is nontrivial: the global 'f' can always be assigned, macroed, aliased, have attributes, etc., and then you need to spin around ideas such as returning a letter that looks different, making it indexed, etc., all not as simple as returning a local f.

In some more modern parts of the FunctionAdvisor, nonetheless, not entirely happy with locals, I made it work with globals (regarding some summation indices), but as said it was complicated, and with advantages and disadvantages. Either way, I made this design decision 15+ years ago, when writing the first version of the FunctionAdvisor; for now, this is how it works, and I do not foresee it changing.

@vv 

Thanks for your comment; it is either a complaint about the things implemented in Physics, or more a sign that you appreciate what is in this package but would like to see more of it outside the package? Physics needs to redefine a large number of things because, otherwise, the dense notation used in physics simply cannot be used in a computer algebra system (CAS), and so most of the computations done in theoretical physics are just not possible the way we do them with paper and pencil or in textbooks. Think about it: CAS do not implement even the addition of two non-projected vectors - nowadays part of the Physics:-Vectors package. This is the first thing that caught my attention when I discovered Mathematica and Maple years ago - not even that letter with an arrow on top ...

As a more advanced example, still basic, in the post above you see things in blue, olive and purple, respectively identifying commutative, noncommutative and anticommutative objects with respect to the product `*`. Such a thing is impossible in pre-Physics CAS, where the product operator `*` assumes all its operands commute - and the same goes for diff in that regard. You may think of the implementation of these things in Physics as "a state within a state", as you say; indeed, in all these respects Physics is unique. By the way, I already heard the same comment about PDEtools. Either way, Maple is the only CAS that implements this advanced functionality, and I think this openness of Maplesoft to extend functionality as you see is a very, very good thing, a strong feature of the Maple system. In the same line I would mention the official Maplesoft Physics updates and fixes distributed every week.

You ask whether other parts of Maple are going to evolve in these directions. Unfortunately I'm not sure anyone has the answer to that question; it is too general. I can tell you that several of the things you saw implemented in previous releases started as features in the Physics package (to mention but some, there is the "everything inert" approach and a large number of improvements in int, assume, is, Typesetting and the GUI in general), similar to what happened with former features in PDEtools, dsolve and pdsolve (to mention but three significant and non-obvious ones: `assuming`, the FunctionAdvisor and simplify/size). The merge of Physics:-Assume (which actually is an official Maple command) with the older assume (also an official Maple command) is most certainly going to happen too; there is agreement about that.

On why Maple is missing an AbstractLinearAlgebra (or a Rings package), I don't have the answer, other than noting (just my perception, not talking in the name of Maplesoft) that the Maplesoft development group looks small to me for the task at hand, while the amount of things being developed is not small. Anyway, in my experience, as soon as something starts to pop up more frequently as a request, it draws attention to the point where it gets developed.

@John Fredsted

The matrix command may appear in the documentation as 'deprecated', but you see it is not: its functionality is unique, not available - as such - elsewhere in the Maple language. BTW, there is an internal conversation about exactly that.

Then the use you suggest, of a mapping on the rhs of the algebra rule, has an issue: you know, in a CAS, multiplication and function application are not the same thing. So, if you use a mapping as you suggest, on the lhs of the algebra rule you 'multiply' but on the rhs you 'apply' ... (?). Not OK. On the other hand, as you do with paper and pencil, on the computer you multiply equation (6) - then you need product operands, and if you want the matricial dimension of the lhs and rhs to be explicitly the same you need a matrix with which you can also operate algebraically. That is what this post shows: how to do something like that.

Regarding changing Physics defaults to return a rhs that differs from the rhs of (6) in that it has the same matricial dimension as the lhs: I still think the current output in (6) is simpler for the purposes we use these algebra rules for (i.e. with the matricial dimensions omitted), but only after finishing the revamp of Dirac and other spinors typical of particle-physics formulations (on the way) will I be able to see the pros and cons for real, and then think about this again.

Finally, you enter that II symbol from the palette of symbols on the left, see under "Open face".

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

I just posted how to set the Algebra of Dirac matrices with an identity matrix on the right-hand side. I preferred to make it a post so that it doesn't get lost in the middle of this thread with so many replies. In a week maybe I finish revamping this spinor sector related to particle physics. Yes I agree with you that the spinor notation in GR is relevant, in my opinion mainly in connection with quantum gravity. That however will still take some more time to be all implemented. We will arrive there.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

So `.` now uses the same commutation rules as `*` with regards to Dgamma[mu] and an object set as anticommutative (in your example, psi). The fix is, again, available to everyone at the Maplesoft R&D Physics webpage.

You naturally moved the focus to something else, not mentioned in your original post, which is: "if we do not enter - say psi, a spinor - as a non-algebraic Vector construct, then how do we see the matricial form of an abstract expression involving psi only declared as anticommutative?" This is actually an entire question by itself.

The answer involves two or three things. First, within the Physics:-Library package there are the routines Library:-RewriteInMatrixForm and Library:-PerformMatrixOperations. The first one is expected to just display the given algebraic expression, replacing abstract objects by the corresponding underlying matrices; it works fine, but not yet with this psi. The second one not only replaces the abstract objects but also carries out the matrix operations; this one too is not yet handling "just-an-anticommutative-psi" as a 4-spinor. So these two routines are evolving into something similar to Physics:-TensorArray, but with regards to 'spinor indices'.
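
As a hedged sketch of these two routines in a case they already handle (Dirac matrices with numerical indices; the input expression is illustrative):

with(Physics):
e := Dgamma[1] * Dgamma[2]:
Library:-RewriteInMatrixForm(e);       # displays the underlying 4x4 matrices, product not performed
Library:-PerformMatrixOperations(e);   # also carries out the matrix product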

That last sentence touches on the representation problem you are focusing on: originally, my idea was to represent spinors just as indexed (tensorial) anticommutative objects. So, 'spinorindex' is one of the things you can set using Setup, and indeed in ?FeynmanDiagrams you see these indices used, and the equivalent of the mass term in a Lagrangian being represented. But then, general relativity, in its extended form, uses spacetimeindices, spaceindices and tetradindices, and will soon use spinorindices as well; to that you add su2indices and su3indices, introduced in Maple 2017 with the StandardModel package, plus the fact that, with paper and pencil, we (almost) never write spinor indices; instead we omit them.

All this to say that the original idea is by now obsolete. I am considering changing this into a concrete "omit Dirac-spinor indices" approach - in fact for all other particle-physics spinor indices as well - and instead enhancing Library:-RewriteInMatrixForm and Library:-PerformMatrixOperations to also handle the psi we are discussing, by automatically transforming psi -> Vector[row/column](4, symbol = psi). Then we will see the matrix form and also have the operations performed, while: a) being able to work the Lagrangian in compact form, b) not artificially expressing it with too many indices, and c) keeping spinor indices for something else, possibly GR or supersymmetry. If the wind doesn't change, this development step may be ready in a week, among other related things in the StandardModel package (including the default setting of the algebra rules for the Gell-Mann matrices, which are already present and implemented in the StandardModel package).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@John Fredsted 

1) The algebra rules for the Pauli and Dirac matrices are now implemented automatically, i.e. they are known to the system as soon as you load Physics.

The update is available to everybody at the Maplesoft R&D Physics webpage.

2) The Dirac matrices do not commute with anticommutative variables (and there is no need to declare the spinor as a non-algebraic Vector construct). But you need to use `*`, not `.`.

BTW: for any A, B, after loading Physics, Library:-Commute(A, B) will tell you whether A*B = B*A (see the sketch after this list).

3) The use of `.` in this case should have used the same commutation rules as `*`; I will take a look at this and fix it.
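
A minimal sketch of the check mentioned in the BTW above, under the assumption that psi has been set as an anticommutative prefix (as in the example being discussed):

with(Physics):
Setup(anticommutativeprefix = psi):
Library:-Commute(Dgamma[mu], psi);   # expected: false, per item 2) above
Library:-Commute(Dgamma[mu], x);     # x is a commutative name: expected true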

Finally, could you please post the cases where you either feel you "hit a roadblock", find the formalism "non-intuitive", or simply get frustrated for whatever reason.

The Physics project is really original in its attempt to bring alive, within a computer algebra worksheet, very^2 dense mathematical notation and methods. A project like this requires concrete, frequent and sufficiently detailed feedback. At this point several Maple users (mostly physicists) are providing this feedback. And then, from the development side, we are providing frequent updates, extensions, revisions, etc. at the Maplesoft R&D Physics webpage, which end up being useful for everyone.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Markiyan Hirnyk 
 

In Maple,

(%diff = diff)(Sum(f(n, x), n = 1 .. infinity), x)

%diff(Sum(f(n, x), n = 1 .. infinity), x) = Sum(diff(f(n, x), x), n = 1 .. infinity)

(1)

In Mathematica it is the same:

 

You are disputing these results, saying "The differentiation of a functional series is possible only under certain conditions". From there you go on to "bug in pdsolve", "untrusted developers!", "unreliable commands!".

 

Markiyan, we all know that, strictly speaking, when differentiating an infinite series, some conditions need to be met in order to commute diff and sum. But that is not how computer algebra systems work when there are arbitrary functions around, like this f(n, x). Instead, in order to proceed, computer algebra systems make the assumption that the function f(n, x) is sufficiently well behaved, so that the infinite series is uniformly convergent and diff and sum commute. That is what Maple and Mathematica tell you in the two images above.

 

You also say "Look in solid textbooks on PDEs for details."

 

Check "Partial Differential Equations and Boundary Value Problems with Maple" by G.A. Articolo,

• 

On page 194, the third formula from top, you see exactly the kind of solution of this thread, with _C1(n), for a problem that you passed with an incomplete number of boundary/initial conditions, so that some freedom on the choice of the series coefficients remained available:

 

 

• 

On page 195, you see the same and read "Again, all of the preceding operations are based on the assumptions that the infinite series is uniformly convergent and that the formal interchange between the differentiation and summation operators is legitimate"

 

What Articolo is telling you in his excellent book (BTW, thanks to Mariusz Iwaniuk for the reference) is what is standard in computer algebra, and also with paper and pencil: you develop a solution in the way indicated (it includes _C1(n) because the specification of the problem you passed is incomplete), and until the problem is completely specified you always assume that the problem you passed is a well-defined one, even if initial conditions are missing. The same applies to the computation of general exact solutions to ODEs, PDEs and systems of them, where the initial/boundary conditions are not present at all.

 


 

Download trustable_pdsolve_and_pdetest_(II).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Markiyan Hirnyk 
 

You say: "[based on this example] Can we trust [pdetest, pdsolve] and its developers?"

 

Let's see. This is the system you passed to pdsolve

sys := [diff(u(t, x), t, t)-(diff(u(t, x), x, x)) = 0, u(t, 0) = 0, u(t, Pi) = 0]

[diff(diff(u(t, x), t), t)-(diff(diff(u(t, x), x), x)) = 0, u(t, 0) = 0, u(t, Pi) = 0]

(1)

sol := pdsolve(sys)

u(t, x) = Sum(sin(n*x)*(_C1(n)*sin(n*t)+_C5(n)*cos(n*t)), n = 1 .. infinity)

(2)

Now, the definition of "solution" is, frankly, easy: the solution must cancel the equation. OK, let's not even use pdetest. Within sys there is one PDE and two boundary conditions.

 

Start with the PDE. Evaluate the PDE at this solution returned by pdsolve

eval(sys[1], sol)

Sum(sin(n*x)*(-_C1(n)*n^2*sin(n*t)-_C5(n)*n^2*cos(n*t)), n = 1 .. infinity)-(Sum(-n^2*sin(n*x)*(_C1(n)*sin(n*t)+_C5(n)*cos(n*t)), n = 1 .. infinity)) = 0

(3)

combine(%)

0 = 0

(4)

So the solution cancels the PDE. This is what we expect.

 

Now the first boundary condition, the value of this solution at x = 0

eval(sol, x = 0)

u(t, 0) = Sum(0, n = 1 .. infinity)

(5)

value(%)

u(t, 0) = 0

(6)

Compare with the first boundary condition you passed to pdsolve

sys[2]

u(t, 0) = 0

(7)

So the solution matches the first boundary condition.

 

The second boundary condition is the value of this solution at x = Pi

eval(sol, x = Pi)

u(t, Pi) = Sum(sin(n*Pi)*(_C1(n)*sin(n*t)+_C5(n)*cos(n*t)), n = 1 .. infinity)

(8)

value(%)

u(t, Pi) = 0

(9)

Compare with the second boundary condition you passed to pdsolve

sys[3]

u(t, Pi) = 0

(10)

So, the solution - returned by pdsolve - cancels the equation and matches the two boundary conditions you passed. That is what pdetest tells you if you use it, although in the above I didn't use it.

 

Still, because of this solution you ask whether pdsolve and pdetest can be trusted, and also their developers, and then you add "unreliable command!". Well, I see nothing wrong in pdsolve or pdetest regarding this example - I definitely trust myself :) - and, frankly, I think both pdsolve and pdetest are state of the art at what they do. Of course there is more to be done, always, but then what would be your point with that? Or with posting this result, shown above to be correct, as a "bug in pdsolve"?

 

A last comment, on constants of the form _C1(n): given that n is the summation index, in this case ranging from 1 to infinity, what _C1(n) represents is a possibly different arbitrary constant entering each of the terms of the sum in this solution, i.e. an arbitrary function of the summation index. If anything, I would post this example as "I understand what it means, and the ability to return this kind of solution is impressive, but could Maplesoft please update the documentation with one example of this kind?".

 



 

Download trustable_pdsolve_and_pdetest.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Pascal4QM 

Just to note that you can use notation that starts at 0, as in 0, 1, 2, 3, and the 0 as a tensor index is automatically mapped into the number 4 (when the signature has the timelike component in position 4, e.g. (- - - +)). Then, when you actually display the matrix (or Array) form of a tensor, the first column is column number 1, and the column "0" is actually column 4.

It is a bit trickier when you work with signatures that place the timelike component in position 1, e.g. (- + + +); in this case the number 0 actually points to position 1, but then the value 1 of a tensor index points to position number 2, and so on. This is difficult to remember, and so it is not the default signature.

For any of the four possible values of the signature, you can still set the system to "use coordinates as tensor indices" (see the Physics:-Setup option usecoordinatesastensorindices) and in this way, suppose your coordinates are [r, theta, phi, t], then just write g_[r,r] or g_[r, t] and they will have the natural meaning, without having to remember the position of r and t in the list of coordinates (the computer remembers that position when you set the coordinates).
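
As a hedged sketch of that setting (the coordinate names are illustrative; see ?Physics,Setup for usecoordinatesastensorindices):

with(Physics):
Coordinates(X = [r, theta, phi, t]):
Setup(usecoordinatesastensorindices = true):
g_[r, r];   # the component at position (1, 1), since r is the first coordinate
g_[r, t];   # no need to remember the positions of r and t within X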

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Arny 

After these corrections you will have the problem formulated in a way the computer understands. Regarding your use of DotProduct and CrossProduct: it made me think it may be convenient for you to take a further look at the help page ?Physics,Vectors and the pages linked therein, to see the names of the commands representing these operations.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
