ecterrab

MaplePrimes Activity


These are answers submitted by ecterrab

@nm
You are right in your "Guess"; it is the same thing I mentioned in the other Mapleprimes question you posted about essentially the same issue: you interrupt with timelimit (keep in mind that is a rather low-level interruption), not letting assuming restore the state of things as it was before you called it. So, without changes to the current implementation, the problem is not addressable.

And, you know, despite the Maplesoft icon under my name, recall that I have not been working for Maplesoft since the end of 2024. Anyway, here is my suggestion, i.e. what I would do:

  • A small touch in assuming, tracking in a single variable the restoring activity, as in: "evaluate the symbol `assuming/restore_previous_state`".
  • A touch to the kernel code so that if timelimit interrupts (ditto for "try ... catch ... end try", and also for a plain interrupt when pressing that formerly red button), the kernel evaluates `assuming/restore_previous_state`.

OK. What can I do to help you in your saga about differential equations (which I have always appreciated)? I will spend some spare time this week and code `assuming/restore_previous_state`. If my suggestion above is implemented, that day all of this will be resolved. Until then, you will be able to just add `assuming/restore_previous_state` to the catch branch, as in the sketch below.
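A minimal sketch of that pattern, assuming the helper gets written; the name `assuming/restore_previous_state`, the 10-second limit and the dsolve call are only placeholders:

# Sketch only: `assuming/restore_previous_state` is the helper proposed above,
# not something that exists in Maple today. The computation inside timelimit
# places temporary assumptions via assuming, and timelimit may interrupt it half-way.
try
    timelimit(10, dsolve(ode, y(x)));
catch:
    `assuming/restore_previous_state`;   # restore assume's state after the interruption
end try;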

Two other comments to shed light on this, so that you don't need to scratch your chin and post again about the same thing.

1. Note that you are not really computing with Physics. You are only computing with assume and its option renamevariables = false, used by assuming when placing temporary assumptions. So why is this under Physics:-Setup? For historical reasons: the Physics package, in addition to physics-related functionality, was also a powerhouse of ideas useful for physics-related computations but clearly of value beyond that (e.g., what you see today as simplify(expression, size), FunctionAdvisor, latex, ...). Over time, most of these ideas moved from Physics to outside the Physics package. The option renamevariables::truefalse of assume is one of those ideas; the advantages of not renaming variables are significant and relevant beyond physics-related computations (see the help page of Physics:-Assume). For reasons A, B, ..., this didn't get documented in assume, nor exported as an option to assuming, before the end of 2024. So to use it today, the documented way is Physics:-Setup(assumingusesAssume = true), shown below.
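A minimal sketch of that switch in action (the simplify example is just an illustration, not from your worksheet):

# Make assuming place temporary assumptions through Physics:-Assume,
# i.e. without renaming the variables (the documented way as of today)
Physics:-Setup(assumingusesAssume = true);

# assuming-based computations now work on x itself, not on a renamed x~
simplify(sqrt(x^2)) assuming x > 0;    # returns x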

2. Why doesn't this happen when renamevariables = true (the default behaviour of assuming)? Because in that case, each time assume is called a completely new variable is created. Although that automatically avoids the problem (who cares about tables filled with variables that were used before and will never be used again?), it not only fills assume's tables with assumptions (and their consequences) on variables that will never be used again, but it also has limitations (for details see ?Physics:-Assume).
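For contrast, a minimal sketch of that default renaming behaviour (standard assume; nothing here is from your worksheet):

# With the default behaviour, assume works on a renamed copy of the variable
assume(x > 0);
about(x);    # reports that x was renamed x~ and carries the assumption x > 0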

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions



Download noncommutative_example_(reviewed).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions

Consider how it works for method = laplace

ode := diff(diff(x(t), t), t)+4*x(t) = Dirac(t)+Dirac(t-Pi)

sol__L := dsolve(ode, x(t), method = laplace)

x(t) = cos(2*t)*x(0)+(1/2)*sin(2*t)*(1+Heaviside(t-Pi)+(D(x))(0))

(1)

You see there are no integration constants to adjust. Instead, you see x(0) and (D(x))(0) as part of the solution. Think about it: this is not the same process as "solve in general, then adjust the integration constants c__n". Next, you indicate, precisely, the values of x(0) and (D(x))(0):

ic := x(0) = 0, (D(x))(0) = 0


I.e., in this formulation of the ode+ic problem using method = laplace there is nothing to adjust; the answer follows from a simple substitution

subs(ic, sol__L)

x(t) = (1/2)*sin(2*t)*(1+Heaviside(t-Pi))

(3)

And that is what dsolve gives you when you pass both ode and ic with method = laplace

dsolve([ode, ic], x(t), method = laplace)

x(t) = (1/2)*sin(2*t)*(1+Heaviside(t-Pi))

(4)

Then this result

odetest(x(t) = (1/2)*sin(2*t)*(1+Heaviside(t-Pi)), ode)

-Dirac(t)

(5)

is related to the fact that Dirac(t) is not really a function but a distribution, i.e. there is no well-defined meaning for it unless you integrate it over a domain that includes its roots. Also, it is not that the Laplace method is flawed, but that your ode has Dirac(t), with the interpretation that at t = 0 you can't really tell what x(t), or its derivatives, are. So passing to odetest your ic, for a result computed in this way, and precisely about the value of x(t) and (D(x))(t) at t = 0, is kind of nonsense to me; put differently, you can expect any value for x(0) and (D(x))(0), even when you substituted their values as in (3).
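For example, Dirac only acquires a meaning under an integral sign over a domain containing its root:

int(Dirac(t), t = -1 .. 1);          # 1
int(Dirac(t - Pi), t = 0 .. 2*Pi);   # 1
int(Dirac(t), t = 1 .. 2);           # 0, the root t = 0 is outside the domain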

 

Anyway, dsolve has specialized code for handling Dirac(t) in the ode, but to see it working you need to let dsolve do its job:

sol := dsolve([ode, ic])

x(t) = (1/4)*sin(2*t)*(2*Heaviside(t-Pi)+2*Heaviside(t)-1)

(6)

odetest(x(t) = (1/4)*sin(2*t)*(2*Heaviside(t-Pi)+2*Heaviside(t)-1), [ode, ic])

[0, 0, 0]

(7)

How is this possible? By adjusting the integration constants properly (not as is done with method = laplace), that is: first compute the general solution,

dsolve(ode)

x(t) = sin(2*t)*c__2+cos(2*t)*c__1+(1/2)*(Heaviside(t)+Heaviside(t-Pi))*sin(2*t)

(8)

then adjust the values of c__1, c__2 accordingly, resulting in (6) above.
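As a check (the values below are read off by comparing (8) with (6); they are not returned by any dsolve option):

# Substituting c__1 = 0, c__2 = -1/4 into the general solution (8)
eval(x(t) = sin(2*t)*c__2 + cos(2*t)*c__1 + (1/2)*(Heaviside(t) + Heaviside(t - Pi))*sin(2*t), [c__1 = 0, c__2 = -1/4]);

# x(t) = -(1/4)*sin(2*t) + (1/2)*(Heaviside(t) + Heaviside(t - Pi))*sin(2*t),
# which is the same as (6) after factoring out (1/4)*sin(2*t)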



 

Download Dirac_in_the_ode.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions

Your computation is simpler if you use the native Maple Physics package, which comes with your copy of Maple. Here I follow your steps, adding a few illustrative comments. At the end there is a recommendation of a help page with everything you need to compute with GR using Maple, and a link to download what you see here as a Maple worksheet.

 

with(Physics)

Coordinates(X = [t, r, p, q])

{X}

(1)

 

Following your steps,

M := Matrix(4)

M[1, 1] := -1; M[2, 2] := H(X)^2; M[3, 3] := F(X)^2; M[4, 4] := F(X)^2

 

Set now the (all covariant) metric

Setup(metric = M)

[metric = {(1, 1) = -1, (2, 2) = H(X)^2, (3, 3) = F(X)^2, (4, 4) = F(X)^2}, spaceindices = lowercaselatin_is]

(2)

 

Physics:-g_[mu, nu] = Matrix(%id = 36893488153459666804)

(3)

For convenience, let's use a compact display avoiding the redundant display of X = (t, r, p, q)

"CompactDisplay(?):"


At any moment during your computation, to see what is being displayed in this compact way, you can use show: it displays the actual contents behind the last output.
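For example (a minimal sketch; any expression involving F(X) or H(X) would do):

diff(F(X), t) + H(X);    # displayed compactly, without the (X) dependency
show;                    # redisplays the last output with F(X), H(X) in full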
 

And that is all: in Maple's Physics, once you enter the metric everything gets set automatically. For example, you don't need to specify, in addition, the computation of the inverse of the metric; it is already there

"g_[~]"

Physics:-g_[`~mu`, `~nu`] = Matrix(%id = 36893488153459650676)

(5)

The same happens with the Christoffel symbols I see in your input lines. For example,

seq(Christoffel[mu, alpha, beta, matrix], mu = [1, 2, 3, 4])

Physics:-Christoffel[1, alpha, beta] = Matrix(%id = 36893488153487780788), Physics:-Christoffel[2, alpha, beta] = Matrix(%id = 36893488153487774532), Physics:-Christoffel[3, alpha, beta] = Matrix(%id = 36893488153487776092), Physics:-Christoffel[4, alpha, beta] = Matrix(%id = 36893488153487777532)

(6)

"seq(Christoffel[mu, alpha,beta, matrix], mu = [~1, ~2,~3,~4])"

Physics:-Christoffel[`~1`, alpha, beta] = Matrix(%id = 36893488153487821148), Physics:-Christoffel[`~2`, alpha, beta] = Matrix(%id = 36893488153487817892), Physics:-Christoffel[`~3`, alpha, beta] = Matrix(%id = 36893488153487814516), Physics:-Christoffel[`~4`, alpha, beta] = Matrix(%id = 36893488153487809100)

(7)

 

For a convenient way of exploring the components of tensors (with any number of indices), see TensorArray; in particular, check its explore option.
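For instance, a minimal sketch using the Ricci tensor computed automatically above:

TensorArray(Ricci[mu, nu]);            # all components, as a 4 x 4 array
TensorArray(Ricci[mu, nu], explore);   # the same, explored interactively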

 

For the other things you present, it is the same

Ricci[]

Physics:-Ricci[mu, nu] = Matrix(%id = 36893488152583639276)

(8)

"Ricci[~]"

Physics:-Ricci[`~mu`, `~nu`] = Matrix(%id = 36893488153484338948)

(9)

Or the mixed components:

"Ricci[alpha,~beta,matrix]"

Physics:-Ricci[alpha, `~beta`] = Matrix(%id = 36893488152583656140)

(10)

The Ricci scalars are also there

Ricci[scalars]

Phi__00 = (1/4)*(-2*F(X)*H(X)*(diff(diff(F(X), r), r))+4*H(X)^2*(diff(diff(F(X), r), t))*F(X)-2*H(X)^3*(diff(diff(F(X), t), t))*F(X)-H(X)^2*(diff(diff(H(X), p), p))-H(X)^2*(diff(diff(H(X), q), q))+2*((-2*H(X)*(diff(H(X), t))+diff(H(X), r))*(diff(F(X), r))+(diff(H(X), t))*H(X)^2*(diff(F(X), t)))*F(X))/(H(X)^3*F(X)^2), Phi__01 = (1/4)*(-F(X)*H(X)*(diff(diff(F(X), p), r))+H(X)^2*(diff(diff(F(X), p), t))*F(X)-I*H(X)*(diff(diff(F(X), q), r))*F(X)+I*H(X)^2*(diff(diff(F(X), q), t))*F(X)+H(X)*(diff(diff(H(X), p), t))*F(X)^2+I*H(X)*(diff(diff(H(X), q), t))*F(X)^2-((diff(F(X), t))*H(X)-(diff(F(X), r)))*(H(X)*(diff(F(X), p))+I*H(X)*(diff(F(X), q))+(I*(diff(H(X), q))+diff(H(X), p))*F(X)))/(H(X)^2*F(X)^3), Phi__02 = (1/4)*(-(diff(diff(H(X), p), p))*F(X)-(2*I)*(diff(diff(H(X), p), q))*F(X)+(diff(diff(H(X), q), q))*F(X)+((2*I)*(diff(H(X), q))+2*(diff(H(X), p)))*(diff(F(X), p))+2*(I*(diff(H(X), p))-(diff(H(X), q)))*(diff(F(X), q)))/(H(X)*F(X)^3), Phi__11 = (1/4)*((diff(F(X), t))^2*H(X)^2*F(X)^2-F(X)^4*H(X)*(diff(diff(H(X), t), t))-(diff(diff(F(X), p), p))*H(X)^2*F(X)-(diff(diff(F(X), q), q))*H(X)^2*F(X)-(diff(F(X), r))^2*F(X)^2+(diff(F(X), p))^2*H(X)^2+(diff(F(X), q))^2*H(X)^2)/(H(X)^2*F(X)^4), Phi__12 = (1/4)*(F(X)*H(X)*(diff(diff(F(X), p), r))+H(X)^2*(diff(diff(F(X), p), t))*F(X)+I*F(X)*H(X)*(diff(diff(F(X), q), r))+I*H(X)^2*(diff(diff(F(X), q), t))*F(X)+H(X)*(diff(diff(H(X), p), t))*F(X)^2+I*H(X)*(diff(diff(H(X), q), t))*F(X)^2-((diff(F(X), t))*H(X)+diff(F(X), r))*(H(X)*(diff(F(X), p))+I*H(X)*(diff(F(X), q))+(I*(diff(H(X), q))+diff(H(X), p))*F(X)))/(F(X)^3*H(X)^2), Phi__22 = (1/4)*(-2*F(X)*H(X)*(diff(diff(F(X), r), r))-4*H(X)^2*(diff(diff(F(X), r), t))*F(X)-2*H(X)^3*(diff(diff(F(X), t), t))*F(X)-H(X)^2*(diff(diff(H(X), p), p))-H(X)^2*(diff(diff(H(X), q), q))+2*((2*H(X)*(diff(H(X), t))+diff(H(X), r))*(diff(F(X), r))+(diff(H(X), t))*H(X)^2*(diff(F(X), t)))*F(X))/(H(X)^3*F(X)^2), Lambda = (1/12)*(-(diff(diff(F(X), p), p))*H(X)^3*F(X)-(diff(diff(F(X), q), q))*H(X)^3*F(X)-2*F(X)^3*(diff(diff(F(X), r), r))*H(X)+2*F(X)^3*(diff(diff(F(X), t), t))*H(X)^3-(diff(diff(H(X), p), p))*H(X)^2*F(X)^2-(diff(diff(H(X), q), q))*H(X)^2*F(X)^2+F(X)^4*H(X)^2*(diff(diff(H(X), t), t))-(diff(F(X), r))^2*H(X)*F(X)^2+2*F(X)^3*(diff(F(X), r))*(diff(H(X), r))+H(X)^2*((diff(F(X), t))^2*F(X)^2*H(X)+2*(diff(F(X), t))*(diff(H(X), t))*F(X)^3+H(X)*((diff(F(X), p))^2+(diff(F(X), q))^2)))/(F(X)^4*H(X)^3)

(11)

You may be curious about the definition used in Maple (main reference is Landau & Lifshitz textbook)

Ricci[scalars, definition]

Phi__00 = -(1/2)*Physics:-Ricci[`~mu`, `~nu`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-l_[nu], Phi__01 = -(1/2)*Physics:-Ricci[`~mu`, `~nu`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu], Phi__02 = -(1/2)*Physics:-Ricci[`~mu`, `~nu`]*Physics:-Tetrads:-m_[mu]*Physics:-Tetrads:-m_[nu], Phi__11 = -(1/4)*Physics:-Ricci[`~mu`, `~nu`]*(Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-n_[nu]+Physics:-Tetrads:-m_[mu]*Physics:-Tetrads:-mb_[nu]), Phi__12 = -(1/2)*Physics:-Ricci[`~mu`, `~nu`]*Physics:-Tetrads:-m_[mu]*Physics:-Tetrads:-n_[nu], Phi__22 = -(1/2)*Physics:-Ricci[`~mu`, `~nu`]*Physics:-Tetrads:-n_[mu]*Physics:-Tetrads:-n_[nu], Lambda = (1/24)*Physics:-Ricci[mu, `~mu`]

(12)

Also the Weyl tensor, or its scalars

Weyl[scalars]

psi__0 = (1/4)*((diff(diff(H(X), p), p))*F(X)+(2*I)*F(X)*(diff(diff(H(X), p), q))-(diff(diff(H(X), q), q))*F(X)+(-(2*I)*(diff(H(X), q))-2*(diff(H(X), p)))*(diff(F(X), p))-2*(I*(diff(H(X), p))-(diff(H(X), q)))*(diff(F(X), q)))/(H(X)*F(X)^3), psi__1 = (1/4)*(-F(X)*H(X)*(diff(diff(F(X), p), r))+H(X)^2*(diff(diff(F(X), p), t))*F(X)-I*H(X)*(diff(diff(F(X), q), r))*F(X)+I*H(X)^2*(diff(diff(F(X), q), t))*F(X)-H(X)*(diff(diff(H(X), p), t))*F(X)^2-I*H(X)*(diff(diff(H(X), q), t))*F(X)^2+(H(X)*(diff(F(X), p))+I*H(X)*(diff(F(X), q))+(I*(diff(H(X), q))+diff(H(X), p))*F(X))*(diff(F(X), r))+H(X)*(-H(X)*(diff(F(X), p))-I*H(X)*(diff(F(X), q))+(I*(diff(H(X), q))+diff(H(X), p))*F(X))*(diff(F(X), t)))/(F(X)^3*H(X)^2), psi__2 = (1/12)*(-2*(diff(diff(F(X), p), p))*H(X)^3*F(X)-2*(diff(diff(F(X), q), q))*H(X)^3*F(X)+2*F(X)^3*(diff(diff(F(X), r), r))*H(X)-2*F(X)^3*(diff(diff(F(X), t), t))*H(X)^3+(diff(diff(H(X), p), p))*H(X)^2*F(X)^2+(diff(diff(H(X), q), q))*H(X)^2*F(X)^2+2*F(X)^4*H(X)^2*(diff(diff(H(X), t), t))-2*(diff(F(X), r))^2*H(X)*F(X)^2-2*F(X)^3*(diff(F(X), r))*(diff(H(X), r))+2*(diff(F(X), t))^2*H(X)^3*F(X)^2-2*(diff(F(X), t))*F(X)^3*H(X)^2*(diff(H(X), t))+2*H(X)^3*((diff(F(X), p))^2+(diff(F(X), q))^2))/(H(X)^3*F(X)^4), psi__3 = (1/4)*(F(X)*H(X)*(diff(diff(F(X), p), r))+H(X)^2*(diff(diff(F(X), p), t))*F(X)-I*H(X)*(diff(diff(F(X), q), r))*F(X)-I*H(X)^2*(diff(diff(F(X), q), t))*F(X)-H(X)*(diff(diff(H(X), p), t))*F(X)^2+I*H(X)*(diff(diff(H(X), q), t))*F(X)^2+(-H(X)*(diff(F(X), p))+I*H(X)*(diff(F(X), q))+(I*(diff(H(X), q))-(diff(H(X), p)))*F(X))*(diff(F(X), r))-H(X)*(diff(F(X), t))*(H(X)*(diff(F(X), p))-I*H(X)*(diff(F(X), q))+(I*(diff(H(X), q))-(diff(H(X), p)))*F(X)))/(F(X)^3*H(X)^2), psi__4 = (1/4)*((diff(diff(H(X), p), p))*F(X)-(2*I)*(diff(diff(H(X), p), q))*F(X)-(diff(diff(H(X), q), q))*F(X)+((2*I)*(diff(H(X), q))-2*(diff(H(X), p)))*(diff(F(X), p))+2*(diff(H(X), q)+I*(diff(H(X), p)))*(diff(F(X), q)))/(H(X)*F(X)^3)

(13)

Weyl[scalars, definition]

psi__0 = -Physics:-Weyl[`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu]*Physics:-Tetrads:-l_[alpha]*Physics:-Tetrads:-m_[beta], psi__1 = -Physics:-Weyl[`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-n_[nu]*Physics:-Tetrads:-l_[alpha]*Physics:-Tetrads:-m_[beta], psi__2 = -Physics:-Weyl[`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu]*Physics:-Tetrads:-mb_[alpha]*Physics:-Tetrads:-n_[beta], psi__3 = -Physics:-Weyl[`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-n_[nu]*Physics:-Tetrads:-mb_[alpha]*Physics:-Tetrads:-n_[beta], psi__4 = -Physics:-Weyl[`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-n_[mu]*Physics:-Tetrads:-mb_[nu]*Physics:-Tetrads:-n_[alpha]*Physics:-Tetrads:-mb_[beta]

(14)

Note as well that all these outputs regarding definitions are actually computable expressions. For example,

(psi__0 = -Physics[Weyl][`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu]*Physics:-Tetrads:-l_[alpha]*Physics:-Tetrads:-m_[beta], psi__1 = -Physics[Weyl][`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-n_[nu]*Physics:-Tetrads:-l_[alpha]*Physics:-Tetrads:-m_[beta], psi__2 = -Physics[Weyl][`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu]*Physics:-Tetrads:-mb_[alpha]*Physics:-Tetrads:-n_[beta], psi__3 = -Physics[Weyl][`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-n_[nu]*Physics:-Tetrads:-mb_[alpha]*Physics:-Tetrads:-n_[beta], psi__4 = -Physics[Weyl][`~mu`, `~nu`, `~alpha`, `~beta`]*Physics:-Tetrads:-n_[mu]*Physics:-Tetrads:-mb_[nu]*Physics:-Tetrads:-n_[alpha]*Physics:-Tetrads:-mb_[beta])[1]

psi__0 = -Physics:-Weyl[`~alpha`, `~beta`, `~mu`, `~nu`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu]*Physics:-Tetrads:-l_[alpha]*Physics:-Tetrads:-m_[beta]

(15)

SumOverRepeatedIndices(psi__0 = -Physics[Weyl][`~alpha`, `~beta`, `~mu`, `~nu`]*Physics:-Tetrads:-l_[mu]*Physics:-Tetrads:-m_[nu]*Physics:-Tetrads:-l_[alpha]*Physics:-Tetrads:-m_[beta])

psi__0 = (1/4)*((2*I)*F(X)*(diff(diff(H(X), p), q))-(2*I)*(diff(F(X), p))*(diff(H(X), q))-(2*I)*(diff(H(X), p))*(diff(F(X), q))-(diff(diff(H(X), q), q))*F(X)+(diff(diff(H(X), p), p))*F(X)-2*(diff(F(X), p))*(diff(H(X), p))+2*(diff(F(X), q))*(diff(H(X), q)))/(F(X)^3*H(X))

(16)

The Einstein tensor that I see in your input lines is also there

Einstein[]

Physics:-Einstein[mu, nu] = Matrix(%id = 36893488153487790796)

(17)

"Einstein[~]"

Physics:-Einstein[`~mu`, `~nu`] = Matrix(%id = 36893488152583659140)

(18)

Check the help page ?Physics,Tensors for all you need to compute with tensors, symbolically or in components.

 

Finally, to explore the 4 x 4 tables of components of the Weyl tensor,

TensorArray(Weyl[alpha, beta, mu, nu], explore)

 

You see that you can specify which two indices are shown, and also their values; for details, see the TensorArray help page.




Download withPhysics.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions

In your mw, the success with eval@simplify happens for pdes, not pde. In your call to pdetest you use pde; use pdes instead and you get the expected validation of the solution.

Download validation.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions.

 

Good catch. It is fixed within the Physics Updates v.1872. To install, as usual, input Physics:-Version(latest).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions 

Redefine sum so that it does not have these premature evaluation problems:

Physics:-Setup(redefinesum = true)

[redefinesum = true]

(1)


Now you have:

sum(modp(5, product(2, t = 0 .. modp(q-1, 3))), q = 1 .. 2)

2

(2)


See the Physics:-Library help page, in the subsection for Physics:-Library:-Add. The problem of premature evaluation is an old one. It is addressed in the Physics context (package) by making sum use the Physics:-Library:-Add wrapper, which performs the summation using sum but in such a way that the computation is free of these old premature evaluation problems. I copy here the mini help page for this command from the help page of Physics:-Library. The command can be used as Physics:-Library:-Add, or directly as the sum command after entering Physics:-Setup(redefinesum = true), as shown below.
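For instance, the same summation calling the Library command directly, without redefining sum; it reproduces the result (2) above:

# Add delays evaluating the summand until the summation index q has a numerical
# value, so modp and product are computed in the right order
Physics:-Library:-Add(modp(5, product(2, t = 0 .. modp(q - 1, 3))), q = 1 .. 2);
# returns 2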

Add

 
  

Calling Sequence

  

Physics:-Library:-Add(f, k = m..n)

  

Parameters

  

f    : algebraic expression

  

k    : name; summation index

  

m, n : integers or arbitrary expressions, indicate the summation range

  

Description

  

Add unifies the standard add  and sum  commands into one that uses a more modern handling of arguments, is free of premature evaluation problems, and implements multi-index summation. The functionality and handling of arguments of Add can be plugged directly into the sum command using the option redefinesum of Physics:-Setup.

  

Examples

restart: with(Physics): with(Library):

  

Consider this multi-index summation.

Add(f[i, j], 1 <= i+j and i+j <= n);

Physics:-Library:-Add(f[i, j], i+j <= n, lowerbound = 1)

(1.1)
  

For instance, for n = 2,

eval((1.1), n = 2);

f[0, 1]+f[1, 0]+f[0, 2]+f[1, 1]+f[2, 0]

(1.2)
  

The formula for the integer power of a sum is:

(a+b+c)^n = Add(factorial(n)*a^p*b^q*c^r/(factorial(p)*factorial(q)*factorial(r)), p+q+r = n);

(a+b+c)^n = Physics:-Library:-Add(Physics:-`*`(factorial(n), Physics:-`^`(a, p), Physics:-`^`(b, q), Physics:-`^`(c, r), Physics:-`^`(Physics:-`*`(factorial(p), factorial(q), factorial(r)), -1)), p+q+r = n)

(1.3)

eval((1.3), n = 3);

(a+b+c)^3 = a^3+3*a^2*b+3*a^2*c+3*a*b^2+6*a*b*c+3*a*c^2+b^3+3*b^2*c+3*b*c^2+c^3

(1.4)
  

Regarding the more modern handling of arguments, consider a typical problem where a name, say j, is assigned a value and j is also used as the dummy summation variable

a, b, j := 1, 2, 3;

1, 2, 3

(1.5)
  

The value just assigned, j = 3, is not expected to interfere with the summation

Add(f[j], j = a..b);

f[1]+f[2]

(1.6)
  

while for this example sum interrupts with an error message, due to a premature evaluation of the summation index j. Another frequent case is a procedure that returns a value according to the value of the summation index.

g := k -> if k::odd then G[k] else 0 end if;

proc (k) options operator, arrow; if k::odd then G[k] else 0 end if end proc

(1.7)
  

Without giving a value to the summation limit f, the following summation cannot be performed.

Add(g(k), k = 1 .. f);

Physics:-Library:-Add(g(k), k = 1 .. f)

(1.8)
  

For this example, add interrupts with an error message while sum returns 0, again due to a premature evaluation of g(k) before k has a numerical value. Using a more modern handling of arguments, Add returns unevaluated instead, as seen above, which permits evaluating the summation at a later moment, for instance after giving a value to f.

eval((1.8), f = 3);

G[1]+G[3]

(1.9)
  

See Also

  

add , sum , Physics[Setup]

 

Download sum-redefined.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions

Hi
Unfortunately, it looks like a problem in the MapleCloud repository; I have no idea what that problem could be. In any case, here is a zip with the latest update for Maple 2022.

Physics_Updates.maple_(last_for_2022).zip

Edgardo S. Cheb-Terrab
Physics, Differential Equations, and Mathematical Functions
Maplesoft Emeritus
Research and Education, passionate about all that.

Hi
I am not sure I get the whole picture, but here is some feedback. 

with(Physics); Setup(mathematicalnotation = true)


In your image, a and i are Euclidean indices that run from 1 to 3, so you do not need spacetime indices, which run from 1 to 4. There is also no need for the Vectors package (more comments on this at the end).

 

The simplest approach is then to use su2indices, which run from 1 to 3 and for which KroneckerDelta is a tensor (actually, the metric with respect to those indices; this is explained in its help page). So,

Setup(su2indices = lowercaselatin, coordinates = cartesian)

[coordinatesystems = {X}, su2indices = lowercaselatin]

(1)

Define your unit-vector tensor. For visualization purposes, use this macro

macro(R = `#mover(mi("r"),mo("&and;"))`)


You can enter your position vector in this way

R[i] = [x, y, z]/sqrt(x^2+y^2+z^2)

`#mover(mi("r"),mo("&and;"))`[i] = [x/(x^2+y^2+z^2)^(1/2), y/(x^2+y^2+z^2)^(1/2), z/(x^2+y^2+z^2)^(1/2)]

(2)

Define(`#mover(mi("r"),mo("&and;"))`[i] = [x/(x^2+y^2+z^2)^(1/2), y/(x^2+y^2+z^2)^(1/2), z/(x^2+y^2+z^2)^(1/2)])

{Physics:-Dgamma[mu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("r"),mo("&and;"))`[i], Physics:-SpaceTimeVector[mu](X)}

(3)

Now enter your tensor V[i, a] as shown in your image (you can change it as you see appropriate). It improves readability if you use CompactDisplay

CompactDisplay((fA, fB, fC)(r))

fC(r)*`will now be displayed as`*fC

(4)

This is as the image you show

V[i, a] = (1-fA(r))*LeviCivita[a, i, j]*R[j]/(g*r)-fB(r)*(-R[a]*R[i]+KroneckerDelta[i, a])/(g*r)+fC(r)*R[i]*R[a]/(g*r)

V[i, a] = (1-fA(r))*Physics:-LeviCivita[a, i, j]*`#mover(mi("r"),mo("&and;"))`[j]/(g*r)-fB(r)*(-`#mover(mi("r"),mo("&and;"))`[a]*`#mover(mi("r"),mo("&and;"))`[i]+Physics:-KroneckerDelta[a, i])/(g*r)+fC(r)*`#mover(mi("r"),mo("&and;"))`[i]*`#mover(mi("r"),mo("&and;"))`[a]/(g*r)

(5)

Define(V[i, a] = (1-fA(r))*Physics[LeviCivita][a, i, j]*`#mover(mi("r"),mo("&and;"))`[j]/(g*r)-fB(r)*(-`#mover(mi("r"),mo("&and;"))`[a]*`#mover(mi("r"),mo("&and;"))`[i]+Physics[KroneckerDelta][a, i])/(g*r)+fC(r)*`#mover(mi("r"),mo("&and;"))`[i]*`#mover(mi("r"),mo("&and;"))`[a]/(g*r))

{Physics:-Dgamma[mu], Physics:-Psigma[mu], V[i, a], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("r"),mo("&and;"))`[i], Physics:-SpaceTimeVector[mu](X)}

(6)

These are the components of `#mover(mi("r"),mo("&and;"))`[i] and of V[i, a]; note there is no distinction between covariant and contravariant components because the indices refer to a Euclidean SU2 space. So,

R[]

`#mover(mi("r"),mo("&and;"))`[a] = Array(%id = 36893488158327921108)

(7)

V[]

V[a, b] = Matrix(%id = 36893488151942105436)

(8)

Alternatively, if you do not need to work with the square roots around, it might be simpler to redefine `#mover(mi("r"),mo("&and;"))`[i] as a function of the coordinates x, y, z as follows, and only use the definition (2) in terms of square roots when necessary. So,

R[i] = R[i](x, y, z)

`#mover(mi("r"),mo("&and;"))`[i] = `#mover(mi("r"),mo("&and;"))`[i](x, y, z)

(9)

Define(redo, `#mover(mi("r"),mo("&and;"))`[i] = `#mover(mi("r"),mo("&and;"))`[i](x, y, z))

{Physics:-Dgamma[mu], Physics:-Psigma[mu], V[i, a], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("r"),mo("&and;"))`[i], Physics:-SpaceTimeVector[mu](X)}

(10)

CompactDisplay(R(x, y, z))

`#mover(mi("r"),mo("&and;"))`(x, y, z)*`will now be displayed as`*`#mover(mi("r"),mo("&and;"))`

(11)

To use this new definition (9), you need to redo the definition of V[i, a] done in (5)

Define(redo, V[i, a] = (1-fA(r))*Physics[LeviCivita][a, i, j]*`#mover(mi("r"),mo("&and;"))`[j]/(g*r)-fB(r)*(-`#mover(mi("r"),mo("&and;"))`[a]*`#mover(mi("r"),mo("&and;"))`[i]+Physics[KroneckerDelta][a, i])/(g*r)+fC(r)*`#mover(mi("r"),mo("&and;"))`[i]*`#mover(mi("r"),mo("&and;"))`[a]/(g*r))

{Physics:-Dgamma[mu], Physics:-Psigma[mu], V[i, a], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("r"),mo("&and;"))`[i], Physics:-SpaceTimeVector[mu](X)}

(12)

Now you have

R[]

`#mover(mi("r"),mo("&and;"))`[a] = Array(%id = 36893488158265443860)

(13)

where all of them are functions of x, y, z. To see that, use show

show

`#mover(mi("r"),mo("&and;"))`[a] = Array(%id = 36893488158265443860)

(14)

And for V[a, b]

V[]

V[a, b] = Matrix(%id = 36893488158296509364)

(15)

A comment on the use of Vectors: currently, you can define a unit vector that is also a tensor. For example: after loading Vectors, everything that starts and ends with an underscore is a unit vector

with(Vectors)

_u_

_u_

(16)

Use the `.` multiplication operator instead of `*` to mean scalar product of vectors

%.%

1

(17)

Then you can define a unit vector as a tensor; here I use 1D input to make it clear what I am entering

_u_[j] = [_u__1_, _u__2_, _u__3_]

_u_[j] = [_u__1_, _u__2_, _u__3_]

(18)

Define(_u_[j] = [_u__1_, _u__2_, _u__3_])

{Physics:-Dgamma[mu], Physics:-Psigma[mu], V[i, a], _u_[j], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu], `#mover(mi("r"),mo("&and;"))`[i], Physics:-SpaceTimeVector[mu](X)}

(19)

Now you have

_u_[]

_u_[a] = Array(%id = 36893488158327943524)

(20)

_u_[1]._u_[1]

1

(21)

_u_[1]._u_[2]

Physics:-Vectors:-`.`(_u__1_, _u__2_)

(22)


_u_[a]._u_[b]

Physics:-Vectors:-`.`(_u_[a], _u_[b])

(24)

eval(Physics[Vectors][`.`](_u_[a], _u_[b]), [a = 1, b = 1])

1

(25)

eval(Physics[Vectors][`.`](_u_[a], _u_[b]), [a = 1, b = 2])

Physics:-Vectors:-`.`(_u__1_, _u__2_)

(26)

But then we have a problem with your expression: the first term,

(1 - fA(r))*1/g*1/r*LeviCivita[a, i, j]*_r_[j]

(1-fA(r))*Physics:-LeviCivita[a, i, j]*_r_[j]/(g*r)

(27)

The above is a vector (see Identify )

Identify((1-fA(r))*Physics[LeviCivita][a, i, j]*_r_[j]/(g*r))

5

(28)

But neither the second nor the third term is a vector, they are scalars:

fB(r)*1/g*1/r*(KroneckerDelta[i, a] - (_r_[i] . _r_[a]))

fB(r)*(Physics:-KroneckerDelta[a, i]-Physics:-Vectors:-`.`(_r_[i], _r_[a]))/(g*r)

(29)

Identify(fB(r)*(Physics[KroneckerDelta][a, i]-Physics[Vectors][`.`](_r_[i], _r_[a]))/(g*r))

0

(30)

fC(r)*1/g*1/r*(_r_[i] . _r_[a])

fC(r)*Physics:-Vectors:-`.`(_r_[i], _r_[a])/(g*r)

(31)

Identify(fC(r)*Physics[Vectors][`.`](_r_[i], _r_[a])/(g*r))

0

(32)

So, this expression is invalid because you are adding a vector with a scalar,

V[i, a] = (1 - fA(r))*1/g*1/r*LeviCivita[a, i, j]*_r_[j] - fB(r)*1/g*1/r*(KroneckerDelta[i, a] - (_r_[i] . _r_[a])) + fC(r)*1/g*1/r*(_r_[i] . _r_[a])

Error, (in Physics:-Vectors:-+) wrong sum of a vector with the scalar -fB(r)/g/r*(Physics:-KroneckerDelta[a,i]-(_r_[i] . _r_[a])) |lib/Physics/Vectors/src/operations.mm:87|

 



 

Download V_Tensor_(reviewed).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions,
Research and Education


Download FeynmanIntegral.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions
Research and Education, passionate about all that

Hi @nm
First, your expectation is not correct: determining the general form of a symmetry of a first-order ODE is as difficult as solving the ODE itself. I almost had my first paper in the area rejected because of that truth, which however can work as a prejudice in practice ... So what is all the hype about symgen, then? It scans for symmetries of simpler forms, something that can be done systematically. For this problem, the internal heuristic algorithm searching for a polynomial symmetry estimates degree = 2, which is not enough.

Unfortunately, the help page ?symgen says 'dgun' where it should say '`ODEtools/dgun`' or, alternatively, _Env_symgen_dgun. Both set the upper bound for the degree of the polynomial or rational forms of a symmetry, useful when the internal algorithm's estimate doesn't help.
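For instance, a hedged sketch of raising that bound (the value 4 is only an example; ode and y(x) stand for your ODE and unknown):

_Env_symgen_dgun := 4:        # upper bound for the degree of the polynomial ansatz
DEtools[symgen](ode, y(x));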

Independent of that, Hydon's book says, as you show, "... [this ode] is not easily solved by any standard method". So much for that ... Maple's dsolve solves it almost instantly, regardless of whether you set the value of `ODEtools/dgun`. This is an Abel-type equation with nonconstant invariant, belonging to the AIR class (try a higher value of infolevel for dsolve), so it can systematically be transformed into a 2nd order linear ODE admitting computable hypergeometric solutions, and from there the solution of the original equation follows.
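To watch that classification and the steps taken, a minimal sketch (the infolevel value is just an example):

infolevel[dsolve] := 3:    # print userinfo messages showing the classification and method used
dsolve(ode);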

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft (Emeritus), Canada
Research & Education, passionate about all that

Originally, the Updates contained only changes to the Physics package, and, by chance, I tied the update of the version's date to the saving of the Physics package. The project, however, quickly evolved into Updates that could involve changes to any part of the Maple library, while the update of the version's date still only happened when Physics was saved. Since the recent versions of the Updates involved changes that didn't require saving Physics, the version's date didn't change. In the latest one (from today), the version's date is again correct.

Best wishes for 2025!

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

Change_of_variables_x_to_g(x).mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
