ecterrab

MaplePrimes Activity


These are replies submitted by ecterrab

@mthkvv 

I looked; your feedback is good, and some improvements are needed here. I am busy at the moment, but I will try to address them next week, partially or entirely, and write here again.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@mthkvv 
Yes. I think you can get (96.8) directly from the definition of t that you make with the small expression (96.5). So isolate t[~i, ~k] in (96.5), then convert(%, Christoffel). You may need to collect g_, factor, or the like to get it written in an organized form, but you should arrive at (96.8). Take a look at Sec. II, subsection 20 of ?Physics,Tensors to familiarize yourself with the idea; the conversion network is vast and extremely useful for derivations like the ones you are pointing at in Landau's book.
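
A minimal sketch of those steps, where eq965 is just a placeholder for whatever name or equation label holds (96.5) in your worksheet:

with(Physics):
tmp := isolate(eq965, t[`~i`, `~k`]):   # solve (96.5) for t[~i, ~k]
tmp := convert(tmp, Christoffel):       # rewrite in terms of Christoffel symbols
Simplify(tmp);                          # then collect g_, factor, etc., to match the form of (96.8)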

Caveat: in the above I am describing the steps but did not actually do the computation. If you do it and find something not working as I say, please let me know; in that case, I will take a closer look.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft.

@mthkvv 

Yes, you can. Define the tensors lambda and h (92.[2,3]); then T is the EnergyMomentum tensor (or use the Einstein tensor). Next, define `t` using (96.5), then input (96.8) and verify the equality, for instance as done in the previous reply above. Once you have verified that you entered your tensor definitions without typographical mistakes, go with convert((96.8), g_), and you will obtain (96.9).
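
A rough sketch of that last step, where eq968 is just a placeholder for the label or name holding (96.8) once lambda, h, T and t have been Defined in your worksheet:

eq969 := convert(eq968, g_):                              # expected to return (96.9)
TensorArray((lhs - rhs)(eq969), simplifier = simplify);   # an array of zeros verifies the equality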

Important: all these things are explained in ?Physics,Tensors, including the conversion network in Sec. II, subsection 20; and you already know how to define the tensors (from your kerr.mw). Using inert tensors sometimes helps with these demonstrations (there is always more than one way to do them), and any intermediate tensorial manipulation should also be explained in ?Physics,Tensors. If it is not there, please let me know and I will include it.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@mthkvv 

with(Physics)

What follows works fine with g_[arbitrary], but for experimentation try something simpler that allows you to verify almost by eye:

g_[sc]

Physics:-g_[mu, nu] = (the Schwarzschild metric, displayed as a 4 x 4 Matrix in the worksheet)

(1)

To verify (86.9), define the tensor A[mu]:

Define(A[mu])

{A[mu], Physics:-D_[mu], Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-Ricci[mu, nu], Physics:-Riemann[mu, nu, alpha, beta], Physics:-Weyl[mu, nu, alpha, beta], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[i, j], Physics:-Christoffel[mu, nu, alpha], Physics:-Einstein[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], Physics:-SpaceTimeVector[mu](X)}

(2)

Now define F[mu, nu] as antisymmetric; use the minimizetensorcomponents option to ensure the antisymmetry is automatically taken into account when computing with components, as needed when verifying (86.10). Use compact display for both:

 

Define(F[mu, nu], antisymmetric, minimizetensorcomponents)

{A[mu], Physics:-D_[mu], Physics:-Dgamma[mu], F[mu, nu], Physics:-Psigma[mu], Physics:-Ricci[mu, nu], Physics:-Riemann[mu, nu, alpha, beta], Physics:-Weyl[mu, nu, alpha, beta], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-gamma_[i, j], Physics:-Christoffel[mu, nu, alpha], Physics:-Einstein[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], Physics:-SpaceTimeVector[mu](X)}

(3)

CompactDisplay((A, F)(X))

F(X)*`will now be displayed as`*F

(4)

This is Landau & Lifshitz (86.9)

D_[mu](A[mu](X)) = %d_[mu](sqrt(-%g_[determinant])*A[`~mu`](X))/sqrt(-%g_[determinant])

Physics:-D_[mu](A[`~mu`](X), [X]) = %d_[mu]((-%g_[determinant])^(1/2)*A[`~mu`](X))/(-%g_[determinant])^(1/2)

(5)

value(Physics[D_][mu](A[`~mu`](X), [X]) = %d_[mu]((-%g_[determinant])^(1/2)*A[`~mu`](X))/(-%g_[determinant])^(1/2))

Physics:-D_[mu](A[`~mu`](X), [X]) = ((1/2)*A[`~mu`](X)*(4*r^3*sin(theta)^2*Physics:-d_[mu](r, [X])+2*r^4*sin(theta)*Physics:-d_[mu](theta, [X])*cos(theta))/(r^4*sin(theta)^2)^(1/2)+(r^4*sin(theta)^2)^(1/2)*Physics:-d_[mu](A[`~mu`](X), [X]))/(r^4*sin(theta)^2)^(1/2)

(6)

expand(Physics[D_][mu](A[`~mu`](X), [X]) = ((1/2)*A[`~mu`](X)*(4*r^3*sin(theta)^2*Physics[d_][mu](r, [X])+2*r^4*sin(theta)*Physics[d_][mu](theta, [X])*cos(theta))/(r^4*sin(theta)^2)^(1/2)+(r^4*sin(theta)^2)^(1/2)*Physics[d_][mu](A[`~mu`](X), [X]))/(r^4*sin(theta)^2)^(1/2))

2*A[`~mu`](X)*Physics:-d_[mu](r, [X])/r+Physics:-d_[mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics:-d_[mu](A[`~mu`](X), [X]) = 2*A[`~mu`](X)*Physics:-d_[mu](r, [X])/r+Physics:-d_[mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics:-d_[mu](A[`~mu`](X), [X])

(7)

Check that the left and right-hand sides of this equation are equal:

evalb(2*A[`~mu`](X)*Physics[d_][mu](r, [X])/r+Physics[d_][mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics[d_][mu](A[`~mu`](X), [X]) = 2*A[`~mu`](X)*Physics[d_][mu](r, [X])/r+Physics[d_][mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics[d_][mu](A[`~mu`](X), [X]))

true

(8)

There are some additional manipulation capabilities for the intermediate steps, taking advantage of the implementation of inert tensors (see ?Physics,Tensors, Section 1, subsection 7). For example, perform only one step on the right-hand side of (5):

eval(Physics[D_][mu](A[`~mu`](X), [X]) = %d_[mu]((-%g_[determinant])^(1/2)*A[`~mu`](X))/(-%g_[determinant])^(1/2), %d_ = d_)

Physics:-D_[mu](A[`~mu`](X), [X]) = (-(1/2)*A[`~mu`](X)*Physics:-d_[mu](%g_[determinant], [X])/(-%g_[determinant])^(1/2)+(-%g_[determinant])^(1/2)*Physics:-d_[mu](A[`~mu`](X), [X]))/(-%g_[determinant])^(1/2)

(9)

Verify this expansion

expand(value(Physics[D_][mu](A[`~mu`](X), [X]) = (-(1/2)*A[`~mu`](X)*Physics[d_][mu](%g_[determinant], [X])/(-%g_[determinant])^(1/2)+(-%g_[determinant])^(1/2)*Physics[d_][mu](A[`~mu`](X), [X]))/(-%g_[determinant])^(1/2)))

2*A[`~mu`](X)*Physics:-d_[mu](r, [X])/r+Physics:-d_[mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics:-d_[mu](A[`~mu`](X), [X]) = 2*A[`~mu`](X)*Physics:-d_[mu](r, [X])/r+Physics:-d_[mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics:-d_[mu](A[`~mu`](X), [X])

(10)

evalb(2*A[`~mu`](X)*Physics[d_][mu](r, [X])/r+Physics[d_][mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics[d_][mu](A[`~mu`](X), [X]) = 2*A[`~mu`](X)*Physics[d_][mu](r, [X])/r+Physics[d_][mu](theta, [X])*cos(theta)*A[`~mu`](X)/sin(theta)+Physics[d_][mu](A[`~mu`](X), [X]))

true

(11)

Good. Now the same with (86.10)

D_[nu](F[`~mu`, nu](X)) = %d_[nu](sqrt(-%g_[determinant])*F[`~mu`, `~nu`](X))/sqrt(-%g_[determinant])

Physics:-D_[nu](F[`~mu`, `~nu`](X), [X]) = %d_[nu]((-%g_[determinant])^(1/2)*F[`~mu`, `~nu`](X))/(-%g_[determinant])^(1/2)

(12)

expand(value(Physics[D_][nu](F[`~mu`, `~nu`](X), [X]) = %d_[nu]((-%g_[determinant])^(1/2)*F[`~mu`, `~nu`](X))/(-%g_[determinant])^(1/2)))

Physics:-d_[nu](F[`~mu`, `~nu`](X), [X])+Physics:-Christoffel[`~mu`, alpha, nu]*F[`~alpha`, `~nu`](X)+Physics:-Christoffel[`~nu`, alpha, nu]*F[`~mu`, `~alpha`](X) = 2*F[`~mu`, `~nu`](X)*Physics:-d_[nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics:-d_[nu](theta, [X])*cos(theta)/sin(theta)+Physics:-d_[nu](F[`~mu`, `~nu`](X), [X])

(13)

As with (86.9), you also have some control over the steps:

eval(Physics[D_][nu](F[`~mu`, `~nu`](X), [X]) = %d_[nu]((-%g_[determinant])^(1/2)*F[`~mu`, `~nu`](X))/(-%g_[determinant])^(1/2), %d_ = d_)

Physics:-D_[nu](F[`~mu`, `~nu`](X), [X]) = (-(1/2)*F[`~mu`, `~nu`]*Physics:-d_[nu](%g_[determinant], [X])/(-%g_[determinant])^(1/2)+(-%g_[determinant])^(1/2)*Physics:-d_[nu](F[`~mu`, `~nu`], [X]))/(-%g_[determinant])^(1/2)

(14)

Here are three different ways to verify that these outputs are correct, i.e. that the left and right-hand sides have the same value.

 

1) First a direct approach as before

expand(value(Physics[d_][nu](F[`~mu`, `~nu`](X), [X])+Physics[Christoffel][`~mu`, alpha, nu]*F[`~alpha`, `~nu`](X)+Physics[Christoffel][`~nu`, alpha, nu]*F[`~mu`, `~alpha`](X) = 2*F[`~mu`, `~nu`](X)*Physics[d_][nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics[d_][nu](theta, [X])*cos(theta)/sin(theta)+Physics[d_][nu](F[`~mu`, `~nu`](X), [X])))

Physics:-d_[nu](F[`~mu`, `~nu`](X), [X])+Physics:-Christoffel[`~mu`, alpha, nu]*F[`~alpha`, `~nu`](X)-Physics:-Christoffel[`~nu`, alpha, nu]*F[`~alpha`, `~mu`](X) = 2*F[`~mu`, `~nu`](X)*Physics:-d_[nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics:-d_[nu](theta, [X])*cos(theta)/sin(theta)+Physics:-d_[nu](F[`~mu`, `~nu`](X), [X])

(15)

SumOverRepeatedIndices(Physics[d_][nu](F[`~mu`, `~nu`](X), [X])+Physics[Christoffel][`~mu`, alpha, nu]*F[`~alpha`, `~nu`](X)-Physics[Christoffel][`~nu`, alpha, nu]*F[`~alpha`, `~mu`](X) = 2*F[`~mu`, `~nu`](X)*Physics[d_][nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics[d_][nu](theta, [X])*cos(theta)/sin(theta)+Physics[d_][nu](F[`~mu`, `~nu`](X), [X]))

-Physics:-diff(F[`~1`, `~mu`], r)-Physics:-diff(F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta) = -Physics:-diff(F[`~1`, `~mu`], r)-Physics:-diff(F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta)

(16)

evalb(-Physics[diff](F[`~1`, `~mu`], r)-Physics[diff](F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta) = -Physics[diff](F[`~1`, `~mu`], r)-Physics[diff](F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta))

true

(17)

2) First Simplify (13) to take into account the antisymmetry of F[mu, nu]

Simplify(Physics[d_][nu](F[`~mu`, `~nu`](X), [X])+Physics[Christoffel][`~mu`, alpha, nu]*F[`~alpha`, `~nu`](X)+Physics[Christoffel][`~nu`, alpha, nu]*F[`~mu`, `~alpha`](X) = 2*F[`~mu`, `~nu`](X)*Physics[d_][nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics[d_][nu](theta, [X])*cos(theta)/sin(theta)+Physics[d_][nu](F[`~mu`, `~nu`](X), [X]))

-Physics:-Christoffel[`~nu`, nu, `~alpha`]*F[alpha, `~mu`](X)-Physics:-d_[alpha](F[`~alpha`, `~mu`](X), [X]) = 2*F[`~mu`, `~nu`](X)*Physics:-d_[nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics:-d_[nu](theta, [X])*cos(theta)/sin(theta)+Physics:-d_[nu](F[`~mu`, `~nu`](X), [X])

(18)

Turn the alpha index in F[alpha, `~mu`] contravariant so that when you sum over the repeated indices you compare contravariant with contravariant components of F

ToContravariant(-Physics[Christoffel][`~nu`, nu, `~alpha`]*F[alpha, `~mu`](X)-Physics[d_][alpha](F[`~alpha`, `~mu`](X), [X]) = 2*F[`~mu`, `~nu`](X)*Physics[d_][nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics[d_][nu](theta, [X])*cos(theta)/sin(theta)+Physics[d_][nu](F[`~mu`, `~nu`](X), [X]), only = alpha)

-Physics:-Christoffel[`~nu`, nu, `~alpha`]*Physics:-g_[alpha, beta]*F[`~beta`, `~mu`](X)-Physics:-g_[alpha, nu]*Physics:-d_[`~nu`](F[`~alpha`, `~mu`](X), [X]) = 2*F[`~mu`, `~nu`](X)*Physics:-d_[nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics:-d_[nu](theta, [X])*cos(theta)/sin(theta)+Physics:-d_[nu](F[`~mu`, `~nu`](X), [X])

(19)

SumOverRepeatedIndices(-Physics[Christoffel][`~nu`, nu, `~alpha`]*Physics[g_][alpha, beta]*F[`~beta`, `~mu`](X)-Physics[g_][alpha, nu]*Physics[d_][`~nu`](F[`~alpha`, `~mu`](X), [X]) = 2*F[`~mu`, `~nu`](X)*Physics[d_][nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics[d_][nu](theta, [X])*cos(theta)/sin(theta)+Physics[d_][nu](F[`~mu`, `~nu`](X), [X]))

-Physics:-diff(F[`~1`, `~mu`], r)-Physics:-diff(F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta) = -Physics:-diff(F[`~1`, `~mu`], r)-Physics:-diff(F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta)

(20)

evalb(-Physics[diff](F[`~1`, `~mu`], r)-Physics[diff](F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta) = -Physics[diff](F[`~1`, `~mu`], r)-Physics[diff](F[`~2`, `~mu`], theta)-2*F[`~1`, `~mu`]/r-cos(theta)*F[`~2`, `~mu`]/sin(theta))

true

(21)

3) This is frequently the simplest and fastest approach to verification: just ask the computer to compare component by component. Here I take the left-hand side minus the right-hand side and simplify:

TensorArray((lhs-rhs)(Physics[d_][nu](F[`~mu`, `~nu`](X), [X])+Physics[Christoffel][`~mu`, alpha, nu]*F[`~alpha`, `~nu`](X)+Physics[Christoffel][`~nu`, alpha, nu]*F[`~mu`, `~alpha`](X) = 2*F[`~mu`, `~nu`](X)*Physics[d_][nu](r, [X])/r+F[`~mu`, `~nu`](X)*Physics[d_][nu](theta, [X])*cos(theta)/sin(theta)+Physics[d_][nu](F[`~mu`, `~nu`](X), [X])), simplifier = simplify)

(an Array of zeros, displayed in the worksheet)

(22)

When you have two free indices, you see a Matrix of zeros. And what about the case where you have more free indices? The above will be an Array where you cannot see all the components. But if those components are all equal to zero, then ArrayElems applied to that Array returns an empty set. So you can take advantage of that as an easy way to see that all the components are zero. Try it with (22) to see that in action:

"ArrayElems(?)"

{}

(23)


 

Download Landau_formulas.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@mykola 

The main problem with an unspecified dimension is that you cannot compute the components of the metric, and so you cannot sum over repeated indices nor compute the determinant or the trace; with that, you lack support for operations with all the other tensors. You basically cannot compute any of them, since at the core you are missing an explicit expression for the Christoffel symbols. All of that together significantly restricts the computations you can perform.

That said, some things could be done, e.g. simplification using Einstein's sum rule, and differentiation. If you could formulate in a Maple worksheet a computation where an unspecified dimension is relevant (what the starting point is, what result you want to derive from it, and what the steps are), then maybe an option and support for an unspecified dimension could be implemented.

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@vv 

Well, not you, but I missed something. Wirtinger derivatives were implemented in Maple 18, years ago. You can see the routines: enter kernelopts(opaquemodules = false), followed by print(Physics:-DifferentiateComplexComponent), and display any `print/foo` where foo is a complex component.
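
For instance, a quick sketch of that inspection (nothing is computed here; you are only displaying code):

kernelopts(opaquemodules = false):              # make the module locals visible
print(Physics:-DifferentiateComplexComponent);  # the routine behind these derivatives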

But the Maple 18 implementation remained somewhat incomplete: differentiation w.r.t. z worked well on expressions involving z and conjugate(z), but differentiation w.r.t. conjugate(z) did not when there was a z around (your examples called my attention to that, thanks!). And I had completely forgotten about that, mainly because I always differentiate w.r.t. z, not conjugate(z).

Anyway, the topic overall is relevant, I think. So besides completing the few steps missing since Maple 18, I prepared a MaplePrimes post about Wirtinger derivatives. As usual, I don't rule out that there is more work to do, but up to what I could see on a Sunday morning :), the implementation is working as expected; you, the experts, can tell better.

Best

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@mthkvv 

:) In turn, I was impressed with the extensive use you make in your document kerr.mw of the ability to define tensors using tensorial expressions, including covariant derivatives and the determinant of the metric. By the way, maybe you are aware of it, maybe not: it is possible to work with the inert form of the determinant of the metric, including differentiation, instead of the computed form you use. Check the help page ?Physics,Tensors, Section II, subsection 22.
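
For instance, a quick sketch modeled on the worksheet of the previous reply, where %g_[determinant] is the inert form:

with(Physics):
g_[sc]:                                  # any metric with explicit components
e := %d_[mu](sqrt(-%g_[determinant]));   # inert: determinant and derivative stay unevaluated
value(e);                                # compute only when you want to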


PS: not everybody is aware of it: "distributed in the Maplesoft Physics Updates" means the fix is already present in the version of Maple 2022 under development.
 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

There is an issue in int, triggered after assumptions are (automatically) placed on the coordinates t, r, theta, phi when you set them. I'm taking a look. A fix will be included in the next version of the Maplesoft Physics Updates for Maple 2021. By the way, the first version of the Physics Updates for Maple 2021 was posted earlier today. To install new versions, you need to "install the package" first (it is not sufficient to have it installed in Maple 2020). Also, the webpage still says "2020", but v.927 is for "2021".
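
If it helps, and as far as I recall (take the exact calling sequence from the Physics Updates page if it differs), you can also check and install from within Maple:

Physics:-Version();          # shows which version of the Updates is currently active
Physics:-Version(latest);    # attempts to download and install the latest Updates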

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@J F Ogilvie 

Take a look at the what's new in Differential Equations in Maple 2021 page. That page answers not a generic question about the "number of solutions" (there are infinitely many possible ODE problems), but rather what I meant by skyrocketed (informal for "increased very steeply or rapidly").

In brief, the problem of solving 2nd order linear ODEs splits into those that admit Liouvillian solutions and those that do not. For the second set, the approach is to compute hypergeometric or Heun function solutions; the corresponding standard approaches, and some original ones, were implemented in previous Maple releases.

In Maple 2021, however, we implemented something far beyond that. For example, as said on that what's new in DEs page, none of the ODE problems shown there can be solved in Maple 2020 or before, or using other computer algebra systems. At the end of the ODE section of that what's new page, you will also see seven references to the scientific literature explaining the new methods and how they extend previously existing ones.

So while it is true that the Maple ODE and PDE solvers were already state-of-the-art in previous releases, in Maple 2021 the solving capabilities skyrocketed for 2nd order linear ODEs and for ODE / PDE problems that require solving that kind of equation as an intermediate step. This achievement is a milestone in computer algebra and differential equations.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Hi

Complementing Samir's comments, two advanced topics that are among the things making Maple 2021 absolutely unique and state-of-the-art are:

  • We literally skyrocketed our ability to solve second order linear ODEs. With that, our ability to solve higher-order linear and nonlinear ODEs, PDEs, and systems of them that require solving 2nd order linear ODEs as an intermediate step also increased significantly.
  • Building on the work of Maple 2020, in the new Maple 2021 Physics environment we can compute Feynman integrals (Particle Physics), and we significantly improved our ability to compute with non-commutative operators (Quantum Mechanics) as well as with tensors and tetrads in curved spacetimes (General Relativity).

The Maple system has acquired a maturity level in these subjects, including the LaTeX development, that was only possible because of people's systematic feedback, frequently on the novelties presented every week in the Maplesoft Physics Updates, which in truth include the differential equation and mathematical function novelties as well. I want to thank again all of you who contributed in that way here on MaplePrimes.

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@vv 

The Wirtinger derivatives d/dz and d/dzbar are indeed implemented in Maple. It is one of the Physics:-Setup settings (input Setup(); and you will see it there in the applet). Your point about diff/abs, however, is another matter. I will take a look.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

@itsme 

I somehow missed your reply 5 years ago (!). Attached is a worksheet showing the current output; it works fine, with all daggered operators to the left and SortProducts performing the operation as requested. This command got reworked significantly during the last 3 years.

Your worksheet reviewed: commutator_stuff_(from_2016)_reviewed.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

 

dharr is right. See also the help page for ODESolStruc

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@nm 

Without debating anything of what you are saying, I only want to point out two relevant things about debugging that you may not be aware of.

One is the MapleCloud Emacs package. It provides terrific additional debugging capabilities, using Emacs (right, not the GUI you talk about), but still a very significant step ahead of the default DEBUG window you mentioned.

Two: in addition to stopat(some_procedure), input debug(some_procedure), and place the DEBUG window side by side with the Maple worksheet window before starting the computation to be debugged. When the computation starts, you will see what you normally see in the DEBUG window, plus the computation evolving one step at a time, with full typesetting, in the worksheet. If you also have the underlying source code, input kernelopts(tracelineinfo = 2) to see the line numbers; a sketch of this setup follows.
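
Sketch (some_procedure is, of course, a placeholder for the procedure you want to debug):

stopat(some_procedure):          # breakpoint, as you already do
debug(some_procedure):           # additionally trace it; the steps get mirrored in the worksheet
kernelopts(tracelineinfo = 2):   # only useful if you have the source code: shows line numbers
# now place the DEBUG window beside the worksheet and start the computation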

I ordinarily debug using "Two", and when the debugging activity is heavier, I always use "One".

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

The error message is incorrect, but the expectation is a bit off too. You are not showing the answer; it is in parametric form, with parameter _T. There is no y(x) as you say; there are only x(_T) and y(_T). This is not a standard "implicit" solution. To test it, you would need to isolate _T in one of the equations and then substitute into the other one, say to obtain a solution involving y(x). Or differentiate both with respect to _T and somehow use that to eliminate _T and obtain dy/dx, and from there see whether it reduces the ode. Although possible, none of that is implemented; a sketch of the first route follows.
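
For the record, a sketch of that first route, with hypothetical parametric components x = f(_T), y = g(_T) standing in for the actual solution returned:

Tsol := solve(x = f(_T), _T):         # isolate the parameter from the first equation
yx := eval(y = g(_T), _T = Tsol);     # substitute into the second: a relation between y and x
# one could then try odetest with yx and the ode; as said, none of this is implemented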

PS: busy until the end of January.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
