ecterrab

MaplePrimes Activity


These are answers submitted by ecterrab

Hi nm,

There is no solution at the moment for the combination "alias & diff = dot & extended typesetting". On the other hand, if you are OK with a prime ' instead of the dot to indicate differentiation, you can use PDEtools:-declare and it works precisely as you expect. So, after alias(x = x(t)), input

> PDEtools:-declare(prime=t, x);

Then diff(x, t) will be displayed as x', and copy & paste work fine, etc.

Edgardo S. Cheb-Terrab
Physics, Maplesoft

 

Hi Indah

The definition of the covariant derivative is uniform in the literature; it is the one you see in the help page Physics,D_, and there is no scale factor around. On the other hand, when working with 3D vectors but a curvilinear metric - where the line element is not equal to the sum of the squares of the differentials of the coordinates - the relationship between 3D vectors and tensors is not as direct as you suggest in your post, but actually involves the scale factors you mention. A good discussion of this, easy to follow, with all the algebraic details, is found in the free "A Primer on Tensor Calculus". In particular, formulas (30) and (31) on page 12 of that material show the relationship you are missing.

 

What follows then only shows how you compute covariant derivatives using Physics, how you set the metric to work with spherical coordinates "as defined in Mathworld" (which is different from the main Textbook references used in Physics, listed in the Physics  page), and how you reobtain the Christoffel symbols shown in Mathworld.

 

restart; with(Physics)

Let's then follow the Mathworld conventions you are looking at, that is, define spherical coordinates according to Mathworld's formulas 4, 5, 6

x = r*sin(phi)*cos(theta)

y = r*sin(phi)*sin(theta)

z = r*cos(phi)

Regarding the ordering (1, 2, 3), Mathworld uses (r, theta, phi). So, the square of the line element is as shown in Mathworld's formula 13, that is

ds := dr^2 + r^2*sin(phi)^2*dtheta^2 + r^2*dphi^2

dr^2+r^2*sin(phi)^2*dtheta^2+r^2*dphi^2

(1)

So your setup to work in 3 dimensions, using coordinates [r, theta, phi] and a metric with this line element is

Setup(coordinates = (X = [r, theta, phi]), dimension = 3, metric = ds, quiet)

[coordinatesystems = {X}, dimension = 3, metric = {(1, 1) = 1, (2, 2) = r^2*sin(phi)^2, (3, 3) = r^2}]

(2)

Let's check the metric and compare with Mathworld's formulas 10, 11, 12

g_[]

g[mu, nu] = (Matrix(3, 3, {(1, 1) = 1, (1, 2) = 0, (1, 3) = 0, (2, 1) = 0, (2, 2) = r^2*sin(phi)^2, (2, 3) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = r^2}))

(3)

According to the convention for defining the Christoffel symbols of the second kind used in Physics (see the help page Christoffel ), that is, the standard convention shown in the Landau books, also the same convention used in Arfken indicated in Mathworld, the Christoffel symbols for this metric are as shown in Mathworld formulas 46, 47 and 48

"Christoffel[~1,alpha,beta, matrix]"

Physics:-Christoffel[`~1`, alpha, beta] = Matrix(3, 3, {(2, 2) = -r*sin(phi)^2, (3, 3) = -r})

(4)

"Christoffel[~2,alpha,beta, matrix]"

Physics:-Christoffel[`~2`, alpha, beta] = Matrix(3, 3, {(1, 2) = 1/r, (2, 1) = 1/r, (2, 3) = cos(phi)/sin(phi), (3, 2) = cos(phi)/sin(phi)})

(5)

"Christoffel[~3,alpha,beta, matrix]"

Physics:-Christoffel[`~3`, alpha, beta] = Matrix(3, 3, {(1, 3) = 1/r, (3, 1) = 1/r, (2, 2) = -sin(phi)*cos(phi)})

(6)
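As a cross-check outside Maple (my own illustration, not part of the original worksheet), the same Christoffel symbols can be recomputed directly from the metric with SymPy; the `christoffel` helper below is a name I chose:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
coords = [r, theta, phi]

# MathWorld's spherical metric: ds^2 = dr^2 + r^2 sin(phi)^2 dtheta^2 + r^2 dphi^2
g = sp.diag(1, r**2 * sp.sin(phi)**2, r**2)
ginv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, l] * (sp.diff(g[l, j], coords[i])
                                          + sp.diff(g[l, i], coords[j])
                                          - sp.diff(g[i, j], coords[l]))
        for l in range(3)))

# Nonzero symbols, matching MathWorld's formulas 46, 47 and 48
assert sp.simplify(christoffel(0, 1, 1) + r * sp.sin(phi)**2) == 0       # Gamma^r_{theta theta}
assert sp.simplify(christoffel(0, 2, 2) + r) == 0                        # Gamma^r_{phi phi}
assert sp.simplify(christoffel(1, 0, 1) - 1/r) == 0                      # Gamma^theta_{r theta}
assert sp.simplify(christoffel(1, 1, 2) - sp.cos(phi)/sp.sin(phi)) == 0  # Gamma^theta_{theta phi}
assert sp.simplify(christoffel(2, 0, 2) - 1/r) == 0                      # Gamma^phi_{r phi}
assert sp.simplify(christoffel(2, 1, 1) + sp.sin(phi)*sp.cos(phi)) == 0  # Gamma^phi_{theta theta}
```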

Regarding the covariant derivative in tensor notation for an arbitrary tensor of 1 index representing a vector that depends on the coordinates, define first this tensor of 1 index

Define(A)

`Defined objects with tensor properties`

 

{A, Physics:-D_[mu], Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-Ricci[mu, nu], Physics:-Riemann[mu, nu, alpha, beta], Physics:-Weyl[mu, nu, alpha, beta], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-Christoffel[mu, nu, alpha], Physics:-Einstein[mu, nu], Physics:-KroneckerDelta[mu, nu], Physics:-LeviCivita[alpha, mu, nu], Physics:-SpaceTimeVector[mu](X)}

(7)

Use compact notation (see PDEtools:-declare )

PDEtools:-declare(A(X))

A(r, theta, phi)*`will now be displayed as`*A

(8)

This is the covariant derivative

D_[mu](A[nu](X))

Physics:-D_[mu](A[nu](X), [X])

(9)

To see all the components (recall that, due to PDEtools:-declare, derivatives are displayed indexed)

TensorArray(Physics[D_][mu](A[nu](X), [X]))

(3 x 3 Matrix of the components of D_[mu](A[nu]); output not reproduced in this transcript)

(10)

Note in the above that the derivatives apply to the covariant components A[mu] (again: to relate these to the actual physical components of 3D vectors you need to introduce scale factors, as indicated on page 12 of the Primer on tensors mentioned at the beginning). To relate the covariant derivative to the 3D vectorial Divergence, see section 5 of that Primer.
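As an illustration of those scale factors (my own sketch, not part of the original worksheet): with h = (1, r sin(phi), r), the covariant divergence D_mu A^mu written in terms of the physical components reproduces the textbook 3D divergence in MathWorld's spherical coordinates. A quick SymPy cross-check (the names A_r, A_theta, A_phi are my own):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
Ar, At, Ap = [sp.Function(n)(r, theta, phi) for n in ('A_r', 'A_theta', 'A_phi')]

# Physical components relate to contravariant ones via the scale factors h = (1, r sin(phi), r)
sqrtg = r**2 * sp.sin(phi)
contravariant = [Ar, At / (r * sp.sin(phi)), Ap / r]

# Covariant divergence: D_mu A^mu = (1/sqrt(g)) d_mu (sqrt(g) A^mu)
cov_div = sum(sp.diff(sqrtg * A, v) for A, v in zip(contravariant, (r, theta, phi))) / sqrtg

# Textbook divergence in MathWorld's spherical coordinates (theta azimuthal, phi polar)
vec_div = (sp.diff(r**2 * Ar, r) / r**2
           + sp.diff(At, theta) / (r * sp.sin(phi))
           + sp.diff(sp.sin(phi) * Ap, phi) / (r * sp.sin(phi)))

assert sp.simplify(cov_div - vec_div) == 0
```

This is the kind of relationship discussed in section 5 of the Primer mentioned above.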

 

Download CovariantDerivative.mw

Edgardo S. Cheb-Terrab 
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

I just ran the test.mw worksheet you posted and am unable to reproduce your result; instead I receive no solution. In case this is about installing the latest updates for the Maple DE libraries, they are available for download at the Maplesoft R&D webpage for Differential Equations and Mathematical Functions.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

Hi nm,

You ask about odetesting solutions and say that you 'tried useInt as well', but of the 5 examples you posted, 3 can be odetested right away when using the option useInt. These are your examples 1, 3 and 4:

ode1 := diff(y(x), x)+2*tan(y(x))*tan(x)-1:

odetest(dsolve(ode1, useInt), ode1)

0

(1)

ode3 := (x^2+1)*(diff(y(x), x))+(y(x)^2+1)*(2*y(x)*x-1):

odetest(dsolve(ode3, useInt), ode3)

0

(2)

ode4 := x^7*(diff(y(x), x))+(2*(x^2+1))*y(x)^3+5*x^3*y(x)^2

odetest(dsolve(ode4, useInt), ode4)

0

(3)

So it remains to verify 2 examples for correctness: ode2 and ode5. For your ode5,

ode5 := (y(x)-x)*sqrt(x^2+1)*(diff(y(x), x))-a*sqrt((y(x)^2+1)^3) = 0

recall that when you have square roots of the dependent variable the solution is sensitive to branches; in these cases what frequently resolves the odetesting issue is to square the square root before testing, that is, rewrite your equation

map(proc (u) options operator, arrow; u^2 end proc, isolate(ode5, sqrt((y(x)^2+1)^3)))

(y(x)^2+1)^3 = (y(x)-x)^2*(x^2+1)*(diff(y(x), x))^2/a^2

(4)

odetest(dsolve(ode5), (y(x)^2+1)^3 = (y(x)-x)^2*(x^2+1)*(diff(y(x), x))^2/a^2)

0

(5)

Finally, for ode2 the testing of its solution is more challenging because it involves complicated algebraic compositions of special functions of Bessel type, radicals and exponentials, making the simplification-to-0 more difficult than what the computer can do automatically; you need to guide the computer. This is your equation

ode2 := 2*(diff(y(x), x))-3*y(x)^2-4*a*y(x)-b-c*exp(-2*x*a)

It is a first order Riccati equation; rewrite it as a second order linear ODE for u(t)

convert(ode2, linearODE, u(t))

diff(diff(u(t), t), t) = 2*a*(diff(u(t), t))+(-(3/4)*b-(3/4)*c*exp(-2*t*a))*u(t), {x = t, y(x) = -(2/3)*(diff(u(t), t))/u(t)}

(6)

In the output above you see a sequence of two elements: the first is a 2nd order linear ODE for u(t), and the second is the transformation used to obtain it starting from ode2. Verify first that this transformation is correct by substituting it into ode2 and arriving at the equation shown for u(t)

eval(ode2, (diff(diff(u(t), t), t) = 2*a*(diff(u(t), t))+(-(3/4)*b-(3/4)*c*exp(-2*t*a))*u(t), {x = t, y(x) = -(2/3)*(diff(u(t), t))/u(t)})[2])

-(4/3)*(diff(diff(u(t), t), t))/u(t)+(8/3)*a*(diff(u(t), t))/u(t)-b-c*exp(-2*t*a)

(7)

isolate(-(4/3)*(diff(diff(u(t), t), t))/u(t)+(8/3)*a*(diff(u(t), t))/u(t)-b-c*exp(-2*t*a), diff(u(t), t, t))

diff(diff(u(t), t), t) = -(3/4)*(-(8/3)*a*(diff(u(t), t))/u(t)+b+c*exp(-2*t*a))*u(t)

(8)

This is the same equation you got in (6):

normal((diff(diff(u(t), t), t) = -(3/4)*(-(8/3)*a*(diff(u(t), t))/u(t)+b+c*exp(-2*t*a))*u(t))-(diff(diff(u(t), t), t) = 2*a*(diff(u(t), t))+(-(3/4)*b-(3/4)*c*exp(-2*t*a))*u(t), {x = t, y(x) = -(2/3)*(diff(u(t), t))/u(t)})[1])

0 = 0

(9)
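The same verification can be done independently of Maple; for instance, this SymPy sketch (my own illustration of the check above) confirms that the substitution y = -(2/3) u'/u maps ode2 onto the linear equation shown in (6):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
u = sp.Function('u')

# Riccati substitution y = -(2/3) u'/u applied to ode2 = 2 y' - 3 y^2 - 4 a y - b - c exp(-2 a x)
y = -sp.Rational(2, 3) * u(x).diff(x) / u(x)
ode2 = 2 * y.diff(x) - 3 * y**2 - 4 * a * y - b - c * sp.exp(-2 * a * x)

# Residual of the linear ODE: u'' = 2 a u' - (3/4) (b + c exp(-2 a x)) u
linear = u(x).diff(x, 2) - 2 * a * u(x).diff(x) + sp.Rational(3, 4) * (b + c * sp.exp(-2 * a * x)) * u(x)

# Multiplying the transformed Riccati equation by -(3/4) u gives exactly that residual
assert sp.simplify(-sp.Rational(3, 4) * u(x) * ode2 - linear) == 0
```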

So the transformation used and the equation obtained are both correct. Now solve this ODE for u(t)

dsolve((diff(diff(u(t), t), t) = 2*a*(diff(u(t), t))+(-(3/4)*b-(3/4)*c*exp(-2*t*a))*u(t), {x = t, y(x) = -(2/3)*(diff(u(t), t))/u(t)})[1])

u(t) = _C1*exp(t*a)*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)+_C2*exp(t*a)*BesselY(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)

(10)

Verify this solution for u(t)

odetest(u(t) = _C1*exp(t*a)*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)+_C2*exp(t*a)*BesselY(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a), (diff(diff(u(t), t), t) = 2*a*(diff(u(t), t))+(-(3/4)*b-(3/4)*c*exp(-2*t*a))*u(t), {x = t, y(x) = -(2/3)*(diff(u(t), t))/u(t)})[1])

0

(11)
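For extra confidence, the Bessel-type solution for u(t) can also be checked numerically outside Maple, e.g. with mpmath (the parameter values below are arbitrary choices of mine, subject only to 4*a^2 - 3*b > 0):

```python
import mpmath as mp

# Arbitrary numeric parameters with 4*a^2 - 3*b > 0
a, b, c = mp.mpf('0.7'), mp.mpf('0.3'), mp.mpf('1.1')
nu = -mp.sqrt(4*a**2 - 3*b) / (2*a)

def u(t):
    # The BesselJ branch of the dsolve solution, with _C1 = 1, _C2 = 0
    return mp.exp(a*t) * mp.besselj(nu, mp.sqrt(3) * mp.sqrt(c) * mp.exp(-a*t) / (2*a))

def residual(t):
    # u'' - 2*a*u' + (3/4)*(b + c*exp(-2*a*t))*u, which should vanish identically
    return mp.diff(u, t, 2) - 2*a*mp.diff(u, t) + mp.mpf(3)/4 * (b + c*mp.exp(-2*a*t)) * u(t)

for t in (mp.mpf('0.5'), mp.mpf('1.0'), mp.mpf('2.0')):
    assert abs(residual(t)) < mp.mpf('1e-8')
```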

This is what dsolve did automatically. You now have all the pieces, so construct the solution for y(x) of ode2 just by substituting this solution for u(t) into {x = t, y(x) = -2*(diff(u(t), t))/(3*u(t))}; at this point you know that the resulting solution for ode2 is correct by construction

sol2 := eval((diff(diff(u(t), t), t) = 2*a*(diff(u(t), t))+(-(3/4)*b-(3/4)*c*exp(-2*t*a))*u(t), {x = t, y(x) = -(2/3)*(diff(u(t), t))/u(t)})[2][2], u(t) = _C1*exp(t*a)*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)+_C2*exp(t*a)*BesselY(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a))

Even though you know this solution is correct by construction, all the simplifications used fail to determine that this solution cancels the ODE, so odetest returns what it was unable to simplify, different from 0

odetest(sol2, ode2); evalb(% = 0)

false

(12)

One can try to simplify the approach further: the solution above for ode2 has 2 arbitrary constants, one of which can be set to 0

simplify(eval(sol2, _C2 = 0))

y(x) = -(1/3)*(BesselJ(-(1/2)*((4*a^2-3*b)^(1/2)-2*a)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)*3^(1/2)*c^(1/2)*exp(-t*a)+(4*a^2-3*b)^(1/2)*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)+2*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)*a)/BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)

(13)

But again, although you know the solution is correct by construction, the simplifiers fail to perform zero recognition

odetest(y(x) = -(1/3)*(BesselJ(-(1/2)*((4*a^2-3*b)^(1/2)-2*a)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)*3^(1/2)*c^(1/2)*exp(-t*a)+(4*a^2-3*b)^(1/2)*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)+2*BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a)*a)/BesselJ(-(1/2)*(4*a^2-3*b)^(1/2)/a, (1/2)*3^(1/2)*c^(1/2)*exp(-t*a)/a), ode2); evalb(% = 0)

false

(14)


By the way, zero recognition is an open problem, unsolvable in general, so the answer to one of your questions is: no, you cannot 'odetest to zero' in all cases - that would require automatic, always-successful zero recognition.

 

Download MaplePrimes_odetest.mw


Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

You pose several questions.

First of all, when Maple returns ODESolStruc it means it succeeded in reducing the ODE order, but not in solving the reduced ODE. So, unlike the DESol case you compare with, ODESolStruc contains information about the ODE solution - a reduction of order - not just the ODE passed to dsolve. The reduced ODE is of course a simpler problem, and the tools to construct a full solution to the original problem, in case you find a solution to the reduced ODE, are there: DEtools[buildsol]. All this is explained in ?ODESolStruc.

Second: the output you got from odetest can be seen as an indication that something is wrong, but not necessarily with the solution - it could also be with the testing approach, any of the intermediate simplifications, etc. Computer algebra systems are complex systems with all of their parts interdependent. In the example you posted, the reduction of order computed by dsolve is actually correct, but an intermediate call to solve within the algorithm to test ODESolStruc answers returns NULL, and from there odetest is unable to confirm the solution. Note that the help page for odetest says that the command either confirms the solution is correct by returning 0, or otherwise it only means it didn't succeed in verifying the solution - not necessarily that the solution is incorrect, even if, most of the time, one thing implies the other.

To get a feel for the issue, try this: a) compute the symmetry infinitesimals, b) use them to reduce the ODE order, and you will see basically the same solution returned by dsolve, just written slightly differently (one simplification less); and this solution verifies OK via odetest. To accomplish this, enter

X := DEtools[symgen](ode);  # pair of Lie symmetry infinitesimals

sol := DEtools[reduce_order](ode, X);  # reduction of order almost identical to the one returned by dsolve

odetest(sol, ode);

                                                 0

All that happened is that sol above has one simplification less, and with that all the intermediate steps succeed - including the call to solve mentioned above - and hence a full verification by odetest proceeds normally. To verify that sol above is actually the same as the solution returned by dsolve, enter

simplify(normal(sol, expanded) - normal(sol_by_dsolve, expanded))

                                                 0

So: the "not zero" output you got from odetest, that you posted, reveals a weakness in the testing code only. I already fixed that weakness and uploaded the improvement to the usual place, the Maplesoft R&D webpage for Differential Equations and Mathematical Functions. If you update your library with the download available on that page, odetest returns 0 for the solution returned by dsolve as well, instead of the "not zero" expression you posted.

Third: I see you would like a different typesetting for ODESolStruc. You can see 'ODESolStruc' using lprint - no need to convert to a string. I myself prefer the current typesetting of it, but in any case you can always remove the current one by entering

unassign(`print/ODESolStruc`)

And that will suffice. If in addition you want to program the typesetting of ODESolStruc in any other way, you can create the corresponding routine and assign it to `print/ODESolStruc`. All this is explained in ?print.

Summarizing: ODESolStruc does mean the equation got solved to some point, though not all the way; ODESolStruc conveys more information than DESol; the solution returned by dsolve for your example, presented as a reduction of order, is correct; to have odetest verify it you need to update your library as said above; and you can remove or redo the typesetting of ODESolStruc in any desired way.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi nm

Good catch. It's fixed now; the fix is available as usual at the Maplesoft R&D webpage for Differential Equations and Mathematical Functions. The zip available for download that includes the fix also includes instructions to install it.

After you install the fix:

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

Indeed, when dsolve returns DESol, or Mathematica's DSolve returns DifferentialRoot, you can consider that the system did not actually solve the ODE: the returned structure does not really contain more information than the given ODE. For linear equations, however, Maple's dsolve always returns DESol when no solutions were found because, as with RootOf representations of solutions of non-differential equations, in the case of linear ODEs Maple is able to do some manipulations with the DESol representation of the solution, for instance computing series.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

I updated the Maplesoft R&D Physics webpage with a new version of Physics; the issue is resolved. At the end there is also a link to your worksheet, reviewed, with your computation running with the fix in place.

Details.

As you noticed, there was a weakness that I'd briefly describe as: SubstituteTensor(T[m,n] A[m] = B[n], T[i,j] A[j]) was not substituting, because the position of the free index 'n' in T[m,n] A[m] (the left-hand side of the substitution equation) is not the same as the position of the free index 'i' in the target expression T[i,j] A[j].

The issue is that, as you said, T is symmetric (BTW there are different ways of indicating symmetry, which work differently; more about this in the last paragraphs), so T[n, m] = T[m, n] and one expects the substitution to be performed - i.e., the order of the indices in T should not be an issue.

The implementation, however, is not as straightforward, in that we are talking about something like subs(A*B = C, A*D*F*B), a sort of algsubs operation (i.e. A*B is not an operand of A*D*F*B); in addition there are the free and repeated tensor indices, which work like different kinds of dummies, plus the symmetry properties under permutation of the indices in both the substitution equation and the target expression.

Anyway, it is now working. And yes, the key to cracking the problem is the use of this new routine Physics:-Library:-RepositionRepeatedIndicesAsIn, which I still need to document.

A comment on how to indicate that a tensor is symmetric; there are three ways:

  1. Define(eta, symmetric)
  2. Define(eta = ... tensorial expression or Matrix, symmetric)
  3. Define(eta = ... tensorial expression or Matrix)

The 1st form permits you to compute with eta[i, j] <> eta[j, i] and you can apply the symmetry afterwards, when you prefer, using Simplify, i.e Simplify(eta[i, j] - eta[j, i]) returns 0.

The 2nd form automatically normalizes eta[j, i] -> eta[i, j]; it is more convenient most of the time, though not always. But if the right-hand side of the equation passed to Define is (anti)symmetric under permutation of the indices, one would expect that it is not necessary to additionally specify (anti)'symmetric' (as you did in your worksheet).

The 3rd form does not include the word 'symmetric' in the definition, and automatically assumes eta is symmetric if the code can prove that the tensorial expression on the right-hand side is actually symmetric; if so, this definition works as in 2. However, due to a weakness in the code, this automatic determination of symmetries was not working when the right-hand side was an (anti)symmetric matrix (2 indices) or (anti)symmetric Array (as many indices as you want).

With the changes introduced today, the 3rd form assumes (anti)symmetric automatically if the corresponding matrix or array on the right-hand side is (anti)symmetric, so this lateral issue is fixed and you do not need to specify (anti)'symmetric' additionally (you still can, to override if something is not working as you expect). SubstituteTensor also works fine regardless of the position of the free indices, provided the substitution equation can be mapped onto the target expression using symmetries (by means of this new Reposition routine), and it works in the same way whether you entered your definition using form 1, 2 or 3.

Maple_Question_7.7.14_(reviewed).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

Thanks for pointing at this one, it is fixed, the fix is available for download at Maplesoft's R&D Physics webpage, and this is your worksheet running your computation with the fix in place: Maple_Question_7.3.14_(reviewed).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

When you say that an ODE solution involves an 'arbitrary constant' - say _C1 - the only requirement on _C1 is that d/dx _C1 = 0. So, if y(x, _C1) is a solution, then y(x, _C2), where _C2 = -_C1, or more generally _C2 = F(_C1) with F an arbitrary function (i.e. F can be any function), are all different ways of expressing the same solution, because both _C1 and _C2 satisfy d/dx _C = 0. For details see ?dsolve,details, the section on Arbitrary Constants.

Having in mind what an arbitrary constant means, and going to your question: y(x)=a+_C1e^-(1/bc) and y(x)=a-_C1e^-(1/bc) are just two different ways of expressing the same solution (just replace the arbitrary constant _C1 by the arbitrary constant _C2 = -_C1), as opposed to one being right and the other wrong.
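To illustrate the point with a simple, unrelated example (y' = -y, not your equation), both sign choices of the arbitrary constant verify as solutions; in SymPy, for instance:

```python
import sympy as sp

x, C1 = sp.symbols('x C1')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x), -y(x))

# C1 and -C1 parametrize the same general solution: both verify against the ODE
sol_plus = sp.Eq(y(x), C1 * sp.exp(-x))
sol_minus = sp.Eq(y(x), -C1 * sp.exp(-x))
assert sp.checkodesol(ode, sol_plus)[0] is True
assert sp.checkodesol(ode, sol_minus)[0] is True
```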

Edgardo S. Cheb-Terrab
Physics, Maplesoft

Hi

The problem is fixed, and the fix is uploaded and available at the Maplesoft R&D Differential Equations and Mathematical Functions webpage. Your equation, however, is currently beyond Maple's pdsolve capabilities for exactly solving problems with boundary conditions, for the reasons mentioned by Preben. Just to be clear: that doesn't mean the problem cannot be solved using a human-guided approach. For example, you may want to try solving the PDE exactly - the solution involves an arbitrary function _F1, as shown by Preben - then reverse the RootOf using DEtools[remove_RootOf], then try giving 'mapping values' to _F1 such that you can match your boundary conditions.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi
The question, however, is not about which system is better in general, but about what would be "the Maple strengths compared to Matlab and Mathematica", and as such it looks to me properly presented.

I cannot present a complete list answering your question, but here are some Maple strengths compared to Mathematica or Matlab that in my opinion are indisputable - relevant functionality that, somewhat surprisingly, basically only exists in Maple.

  • The Maple Differential Equation programs. Give a look at the commands listed in DEtools and PDEtools, then search for similar functionality in Mathematica: mostly none of it exists. These Maple commands are at the root of Maple's much better performance in the exact solving of ODEs and PDEs. The results of the comparison of Maple versus Mathematica in solving Kamke's ODEs are hard to believe, and yet all reproducible. An equivalent comparison for PDEs does not exist yet, but if produced it would show even more disparity between what Maple and Mathematica can do: Mathematica can handle only a very tiny fraction of the PDE problems that Maple can handle. Not less important: in Maple you have commands for performing mostly all the solving steps of rather varied DE approaches. That allows one to tackle a problem that is beyond the computer's abilities by using a human-guided approach. These Maple commands, which do not exist in Mathematica, are also key when teaching DE courses.

  • The Maple Physics package. Give a look at Physics. Now open Mathematica and search for anything that would look similar. There is basically nothing. From vector analysis using coordinate-free algebraic vectors, to noncommutative quantum operators, from 3D Euclidean tensors to tensors in curved 3+1 spacetimes, to Dirac's notation for quantum mechanics and functional differentiation - the list is immense. Mathematica only recently started with tensors, and in a way that, frankly speaking, resembles FORTRAN syntax more than standard textbook or paper-and-pencil notation, not to mention that the functionality is fragmentary - for instance, nothing for general relativity. Physics is an area where Maple's strengths compared to Mathematica contrast even more than in DEs. You may also want to give a look at the Virtual User Summit webinar on Physics, a 24-minute presentation showing the use of the package in mechanics, quantum mechanics and relativity.

  • The Maple conversion network for mathematical functions. Check convert/to_special_function, as well as the whole FunctionAdvisor project. Now open Mathematica and search for this functionality. Again, you won't find it. In Maple, things are also not restricted to this network of conversions or the FunctionAdvisor. Maple also has the ability to represent, as PDE systems with rational coefficients, almost any linear or nonlinear algebraic expression involving almost any mathematical function. The same goes for Maple's ability to perform symbolic differentiation, e.g. d^n/dz^n cos(z) (where n is 'a symbol', not a number), or to compute branch cuts of algebraic expressions; all of this only exists in Maple.

  • The Maple DifferentialGeometry package. This is by all means the most resourceful mathematical software available for the area, with over 250 commands covering a wide range of topics, from basic jet calculus to the mathematics behind general relativity, thorough documentation for all of its commands, 19 differential geometry Lessons from beginner to advanced level, and 5 Tutorials illustrating the use of the package in applications. I am repeating myself, but again: search for something resembling this in Mathematica. This functionality is nonexistent in that system.

  • The Maple ability to perform differential algebra. I am talking both about standard differential algebra and about the completely non-standard ability to perform differential algebra in the presence of non-rational objects, including the whole set of mathematical functions. Search for something resembling that in Mathematica and again you will find none. This by itself already puts Maple's potential for algebraic computations beyond what Mathematica can do. Differential elimination can be used as a key tool in most non-trivial algebraic problems one could imagine. By the way, these differential algebra capabilities are at the root of the advantages Maple has in differential equations and differential geometry.

  • The readability and traceability of Maple programs. You can give a look at - literally read - basically any Maple program. Only a tiny set of kernel commands, typically of little interest, are not readable. Everything else is readable. Wow! In fact I *never* studied a single page of a Maple manual. Instead, I learned tons just by giving a look at the existing Maple programs and their help pages - and by tracing them while they run: try trace(command), then execute command(whatever), and you actually see how things are computed, all the steps; you can even use stopat to perform one step at a time, think, do your experiments, execute the next step. Maple is an incredibly open language. Now open Mathematica and search for any of this. In brief: none of it.

In summary: for physics (all kinds of problems), differential equations (all kinds of problems), mathematical function relations (their inter-connections or their representation with differential equations), differential geometry (all kinds of problems), differential algebra (all kinds of problems), or reading and tracing the actual programs that perform your computations, Maple has dramatic advantages compared to Mathematica or Matlab.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

Note first that r[0]_ is not correct input: in 2D input mode it results in the product of r[0] with `_`, while in 1D input mode it interrupts with an error. You can correct that input using either r_[0] or, better, r__0_.

Besides that, there is an issue regarding a Taylor expansion of the Norm of a vector: the Taylor expansion of f(x - a) around x = a involves the derivatives of f at 0, but d/dr_ Norm(r_) = r_/Norm(r_), so this derivative at 0 would result in a division by 0; hence you cannot expand the Norm in a Taylor series around a zero of its argument.
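The obstruction is easy to see explicitly outside Maple too; for instance, in SymPy (an illustration only, not Physics code):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
norm = sp.sqrt(x**2 + y**2 + z**2)

# d/dx ||r_|| = x/||r_||, the x-component of r_/Norm(r_)
dx = sp.diff(norm, x)
assert sp.simplify(dx - x / norm) == 0

# Along the x axis this is x/|x|: the one-sided limits at 0 disagree,
# so the derivative (and hence a Taylor expansion) does not exist at r_ = 0
on_axis = dx.subs([(y, 0), (z, 0)])
assert sp.limit(on_axis, x, 0, '+') == 1
assert sp.limit(on_axis, x, 0, '-') == -1
```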

Regarding the other question, on a generic expansion of a vector function around a vectorial argument: I agree with you, this would be a nice and relevant feature, and it is not implemented at the moment. I'm adding it to the list of things being developed.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi Fred

Thanks for pointing out the problem. There was a typo introduced in a previous change such that the computation was performed but the result not returned, resulting in what you noticed: the computation returned unperformed. It is fixed now; the fix is available to everybody in the usual place, the Maplesoft R&D Physics webpage. The PhysicsUpdates18.mw worksheet that comes within the zip with the update illustrates the fix.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi USPAS2014

(What a name! :) PlotExpression is a procedure I use frequently; here it is:

> PlotExpression := f -> plots[plotcompare](f, 0, _rest, 'expression_plot', 5);

Now: other people mentioned this approximately a month and a half ago - at that time I updated the mini-course in MaplePrimes, including the definition of PlotExpression above directly in the worksheet - so perhaps you have the previous version of the mini-course?

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

PS: I don't always monitor Mapleprimes; in cases like this one please feel free to write to physics@maplesoft.com and I will receive your email directly in my mailbox.
