ecterrab

14540 Reputation

24 Badges

20 years, 21 days

MaplePrimes Activity


These are replies submitted by ecterrab

@Area51 

Yes, you can define (add) tensors using tensorial expressions as definitions: the left-hand side (lhs) is the tensor being defined, and the right-hand side (rhs) is the defining tensorial expression. And yes, the rhs can also have contracted indices; the only requirement, naturally, is that the free indices of the lhs and rhs are the same. For all of that, see the help page ?Physics:-Define. I also see you used Box; there is Physics:-dAlembertian, the Galilean one, but you can define one in curved spaces as explained in the previous sentence.
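As a minimal sketch of what that help page describes (the tensor names A, F and T here are my own illustrative choices, not from your question):

```maple
with(Physics):
Setup(coordinates = X):        # default 4-D Minkowski spacetime
Define(A[mu], F[mu, nu]):      # two tensors to build with
# T[mu] defined by a tensorial expression; nu is contracted (repeated)
# on the rhs, and the free index mu matches on both sides, as required
Define(T[mu] = F[mu, nu]*A[~nu]):
T[definition];
```

The same Define syntax accepts a curved-space metric set beforehand, in which case the contraction is performed with that metric.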

If you don't know how to formulate your problem in a Maple worksheet using Physics, try to approximate the input lines representing your problem as closely as possible, and please post the resulting worksheet and your questions here again.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

You need to post the problem, explicitly.

@a_simsim 

Andras

I mentioned I was going to write again here regarding your suggestion concerning Den Iseger's algorithm. Katherina and I took a look at the paper. The three algorithms implemented work pretty well with all the examples of Table 1 (analytical test functions) and Table 4 (continuous non-analytical test functions), in both cases getting better accuracy, but they do not handle those of Table 2 (two discontinuous test functions). In brief, for these two discontinuous examples it is not possible to determine whether the accuracy is increasing or not, and so the numerical evaluation routine gives up. We are short of time but will see if we can still give Iseger's algorithm a try. If so, I will post again here. Thanks again for the reference.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Mariusz Iwaniuk 

Thanks for your feedback. I looked at your seven examples. Your example 5 is taken from Table 2 of the paper by Iseger mentioned by Andras, a discontinuous example. Examples 1, 2, 4 and 7 (with regard to tackling them using the methods implemented) are basically of the same type as example 5. You can see that by looking at the form of the Laplace transform being inverted, or at the exact form of the inverse Laplace transform. These examples - the one from Iseger's paper is the best representative, I think - turn out to be uncomputable (using the implemented algorithms) because for discontinuous inverse Laplace transforms it is sometimes not possible to determine whether the accuracy is increasing or not. Yesterday Katherina and I discussed the possibility of implementing one more algorithm for this case (basically, what Andras suggested).

Your example 6 is a piecewise function branching at t < Pi, and you ask for the numerical evaluation at t = Pi. Such a case is always difficult for any numerical algorithm. Your example 3 is, however, different: the expression being (Laplace) numerically inverted involves the Zeta function, whose numerical evaluation is itself a difficult problem, and the expected inverse Laplace transform is equal to harmonic(floor(exp(t)), 2). I am not a numerical analyst (more like a theoretical physicist), but I tend to think that numerical methods for problems like your 3 and 6, where you have discrete numerical functions, or where you ask for the numerical evaluation exactly where the discrete function branches, are not the standard problem, and in any case not the problems I am targeting in a first implementation round.

Summarizing: if we have time, we will try to implement Iseger's algorithm as suggested by Andras and with it handle the two discontinuous examples of Table 2 of Iseger's paper (that would handle your examples 1, 2, 4, 5 and 7). Your examples 6 and 3 may or may not be automatically solved at that point; otherwise they will require computing the exact transform first.
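For readers following along, the mechanism discussed here is evalf applied to the inert %invlaplace; a minimal sketch with a smooth (Table 1 style) transform of my own choosing, not one from Iseger's tables:

```maple
with(inttrans):
# Smooth test case: the exact inverse Laplace transform of 1/(p + 1) is exp(-t)
approx := evalf(eval(%invlaplace(1/(p + 1), p, t), t = 1));
exact := evalf(exp(-1));
abs(approx - exact);   # small for smooth transforms; discontinuous ones are the hard case
```

For the discontinuous examples of Table 2, the analogous comparison is exactly where the implemented algorithms cannot certify that the accuracy is increasing.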

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@a_simsim 

Hi Andras,
Thanks for the reference. Below is a worksheet with the testing example - comparing numerical inversion algorithms - of the paper you just mentioned. But the values of the numerical parameters they used are not shown. Do you have them?

Example from: Bruno Josso and Leif Larsen, "Laplace transform numerical inversion"

 

In what follows, each Eq.(number) indicated right before an input line corresponds to the equation number shown in the paper.


Eq.(10)

q__D := t -> Gamma*exp(-mu*t)


Eq.(11)

Gamma := 1/((1/2)*ln(4*A__D/(exp(gamma)*C__A))+S)

Eq.(12)

A__D := A/`r__&omega;`^2


Eq.(13)

mu := 2*Pi/(A__D*Gamma)


Formula within text, between Eq.(13) and Eq.(14)

`#mover(mi("F"),mo("&circ;"))` := p -> Gamma/(p+mu)

proc (p) options operator, arrow; Gamma/(p+mu) end proc

(1)

Check it out:

`#mover(mi("F"),mo("&circ;"))`(p)

1/(((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*(p+2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)/A))

(2)

Load the inttrans package

with(inttrans)

 

Eq.(14), that is:  `#mover(mi("F"),mo("&circ;"))`(p) = laplace(q__D(t), t, p), and q__D(t) = invlaplace(`#mover(mi("F"),mo("&circ;"))`(p), p, t)

`#mover(mi("F"),mo("&circ;"))`(p) = laplace(q__D(t), t, p)

1/(((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*(p+2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)/A)) = 1/((-(1/2)*gamma+S+(1/2)*ln(4*A/(`r__&omega;`^2*C__A)))*(p+Pi*`r__&omega;`^2*(-gamma+2*S+ln(4*A/(`r__&omega;`^2*C__A)))/A))

(3)

simplify((lhs-rhs)(1/(((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*(p+2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)/A)) = 1/((-(1/2)*gamma+S+(1/2)*ln(4*A/(`r__&omega;`^2*C__A)))*(p+Pi*`r__&omega;`^2*(-gamma+2*S+ln(4*A/(`r__&omega;`^2*C__A)))/A))))

0

(4)

q__D(t) = invlaplace(`#mover(mi("F"),mo("&circ;"))`(p), p, t)

exp(-2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*t/A)/((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S) = 2*exp((gamma-2*S-ln(4*A/(`r__&omega;`^2*C__A)))*`r__&omega;`^2*t*Pi/A)/(-gamma+2*S+ln(4*A/(`r__&omega;`^2*C__A)))

(5)

simplify((lhs-rhs)(exp(-2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*t/A)/((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S) = 2*exp((gamma-2*S-ln(4*A/(`r__&omega;`^2*C__A)))*`r__&omega;`^2*t*Pi/A)/(-gamma+2*S+ln(4*A/(`r__&omega;`^2*C__A)))))

0

(6)

Eq.(15)

(%limit = limit)(p*`#mover(mi("F"),mo("&circ;"))`(p), p = 0)

%limit(p/(((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*(p+2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)/A)), p = 0) = 0

(7)

Eq.(16): this other limit is more difficult to compute without further information on the values of the parameters involved; according to the paper, the right-hand side is expected to be equal to 0

`assuming`([(%limit = limit)(q__D(t), t = infinity)], [real, Gamma > 0])

%limit(exp(-2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*t/A)/((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S), t = infinity) = limit(exp(-2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*t/A)/((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S), t = infinity)

(8)

The problem now is that, in order to reproduce Figure 3 of the paper, that is, to compare the analytical solution q__D(t) = Gamma*exp(-mu*t) with a numerical Laplace inversion of `#mover(mi("F"),mo("&circ;"))`(p) (i.e., to numerically compute the right-hand side of q__D(t) = invlaplace(`#mover(mi("F"),mo("&circ;"))`(p), p, t)), we need numbers for all the letters (parameters of the model) you see in (8).

remove(type, `minus`(indets(%limit(exp(-2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*t/A)/((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S), t = infinity) = limit(exp(-2*Pi*`r__&omega;`^2*((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S)*t/A)/((1/2)*ln(4*A/(`r__&omega;`^2*exp(gamma)*C__A))+S), t = infinity), symbol), {t}), constant)

{A, C__A, S, `r__&omega;`}

(9)

I do not see the values of these parameters in the paper. Without them it is not possible to run this numerical inversion of q__D(t). Do you have the values of these parameters?



 

Download Example_from_Laplace_transform_numerical_inversion.mw

 

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@a_simsim 

At least for this release, I don't intend to work further on the numerical inversion of Laplace transforms. The three algorithms implemented are good enough. It is also unclear whether Iseger's algorithm is better than the ones of section 2 of this post, and there is still work to do regarding inversion algorithms for other transforms.

That said, I have now looked at the paper by Den Iseger you mentioned (it is from 2006, the same year as the one we used in our implementation; see the post above) and noticed it contains a Table 1 with results for the mean absolute error for a set of 8 test functions. As soon as I find the time, we will run the same examples using the implementation we developed using Euler, Talbot and Gaver-Stehfest, compare with the results shown in the paper, and comment here again.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@Mariusz Iwaniuk 

That is not what the post says. You can use evalf to numerically evaluate %invlaplace (as well as any other inert mathematical function) - that is what the post says. You can use MathematicalFunctions:-Evalf to select one numerical method among the several implemented, for the mathematical functions that have this feature - certainly the case of %invlaplace, as the post says. Taking a look at ?MathematicalFunctions:-Evalf, you see it is also the case for the four AppellF functions and the ten Heun and HeunPrime functions.
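A minimal sketch of the two usages (the specific transform is just an illustration, and the exact name of the method option is my assumption here - check ?MathematicalFunctions,Evalf for the precise calling sequence):

```maple
with(inttrans):
F := %invlaplace(1/(p^2 + 1), p, 1):   # inert; the exact value is sin(1)
evalf(F);                              # numerical evaluation with the default method
# Selecting one particular method among the implemented ones
# (option name assumed; see ?MathematicalFunctions,Evalf):
MathematicalFunctions:-Evalf(F, method = Talbot);
```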

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@nm 

Inert stuff: everything preceded by % is an inert representation of the object not preceded by %, and can be executed using the value command. It is a fantastic Maple feature. The diff, series, evalf, simplify and several other commands understand inert representations and can compute with them, taking into account the mathematical properties of these objects even when they are unevaluated (inert).
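For instance (a minimal sketch, with examples of my own choosing):

```maple
de := %diff(x*sin(x), x);          # inert derivative: displayed, not computed
value(de);                         # executes it: sin(x) + x*cos(x)
ie := %int(exp(-x^2), x = 0 .. 1); # inert integral
evalf(ie);                         # evalf understands %int and evaluates it numerically
```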

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@nm 

With the Physics Updates v.433 or higher we get

pde := diff(u(x, t), t) = diff(u(x, t), x, x)
bc := u(-Pi, t) = u(Pi, t), (D[1](u))(-Pi, t) = (D[1](u))(Pi, t)

pdsolve([pde, bc], u(x, t))

u(x, t) = (1/2)*(2*(Sum(exp(-n^2*t)*((Int(_F1(x)*sin(n*x), x = -Pi .. Pi))*sin(n*x)+(Int(_F1(x)*cos(n*x), x = -Pi .. Pi))*cos(n*x))/Pi, n = 1 .. infinity))*Pi+Int(_F1(x), x = -Pi .. Pi))/Pi

(1)

pdetest(u(x, t) = (1/2)*(2*(Sum(exp(-n^2*t)*((Int(_F1(x)*sin(n*x), x = -Pi .. Pi))*sin(n*x)+(Int(_F1(x)*cos(n*x), x = -Pi .. Pi))*cos(n*x))/Pi, n = 1 .. infinity))*Pi+Int(_F1(x), x = -Pi .. Pi))/Pi, [pde, bc])

[0, 0, 0]

(2)

This solution is actually the most general one. Comparing with yours,

hand_sol := u(x, t) = C[0]+Sum(exp(-n^2*t)*(C[n]*cos(n*x)+B[n]*sin(n*x)), n = 1 .. infinity)

u(x, t) = C[0]+Sum(exp(-n^2*t)*(C[n]*cos(n*x)+B[n]*sin(n*x)), n = 1 .. infinity)

(3)

in your hand_sol I see infinitely many arbitrary constants: C[0], and C[n] and B[n], both for n = 1 .. infinity. The two solutions, (1) and your hand_sol, are actually the same: your constants are the coefficients of the sine and cosine series expansions of _F1(x) (see for instance the formulas for the Fourier sine series, with the same notation you used, connecting _F1(x) with your constants C[0], C[n] and B[n]).

At this point the only question that remains is whether the form (1) of the solution contains more information than the form (3), and I think it does: in (1) there is only one arbitrary function, and by plugging in a value for _F1 I can get, for instance, a more convenient form of the solution for a problem, while in hand_sol the infinitely many integration constants C[0], C[n] and B[n] look disconnected (although we know they are interrelated), so it is unclear how I could take advantage of their apparent "arbitrariness" to get a more appropriate solution for a given problem. In short, (1) has, for me, more information / usability.
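That connection can be made explicit; comparing (1) with (3) term by term, your constants are the standard Fourier coefficients of _F1(x) on [-Pi, Pi]:

```maple
# Identifications obtained by matching (1) against hand_sol (3):
C[0] = (1/(2*Pi))*Int(_F1(x), x = -Pi .. Pi);
C[n] = (1/Pi)*Int(_F1(x)*cos(n*x), x = -Pi .. Pi);
B[n] = (1/Pi)*Int(_F1(x)*sin(n*x), x = -Pi .. Pi);
```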



 

Download more_general_solution_in_v.433_and_higher.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@J F Ogilvie 

As I already said, the Maple Heun functions include numerical evaluation routines, actually sophisticated ones. That these routines can be improved, like everything else, does not change that fact. I also know that you are well aware of the existence of those numerical evaluation routines - you actually reported a couple of problems with them. Writing here in "fake-news" style, saying things you know are not true, does not help your point. Anyway, I am out of this conversation.

@Jayaprakash J 

Open Maple and take a look at the help page. To open it, input ?PDEtools,dchange.
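As a small taste of what that page documents (the PDE and the change of variables here are my own illustrative choices):

```maple
with(PDEtools):
pde := diff(u(x, t), t) = diff(u(x, t), x, x):
# Move to travelling-wave coordinates xi = x - c*tau, tau = t
tr := {x = xi + c*tau, t = tau, u(x, t) = v(xi, tau)}:
dchange(tr, pde, [xi, tau, v(xi, tau)]);
```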

@Ribeyre 

The following is a worksheet with the input lines you posted (and please, next time, upload a worksheet; see the green arrow for uploading one in the MaplePrimes window where you write your post).

with(Physics)

 

I am quoting Define so that we can see what it is you are defining

('Define')(K[mu, nu] = Matrix(4, {(1, 1) = exp(nu), (2, 2) = -exp(lambda), (3, 3) = -r^2, (4, 4) = -r^2*sin(theta)^2}, shape = symmetric))

Physics:-Define(K[mu, nu] = Matrix(%id = 18446744078249969894))

(1)

Release the delayed evaluation of Define

%

`Defined objects with tensor properties`

 

{Physics:-Dgamma[mu], K[mu, nu], Physics:-Psigma[mu], Physics:-d_[mu], Physics:-g_[mu, nu], Physics:-LeviCivita[alpha, beta, mu, nu]}

(2)

But at this point the metric is Minkowski, not Schwarzschild:

g_[]

Physics:-g_[mu, nu] = Matrix(%id = 18446744078249948934)

(3)

Consequently, all of the following (what you posted) is actually correct

TensorArray(K[mu, `~nu`])

Matrix(%id = 18446744078388367718)

(4)

K[]

K[mu, nu] = Matrix(%id = 18446744078388365662)

(5)

"K[~]"

K[`~mu`, `~nu`] = Matrix(%id = 18446744078388356990)

(6)

Note also that you have this alternative syntax available

"K[mu,~nu,matrix]"

K[mu, `~nu`] = Matrix(%id = 18446744078388353974)

(7)

"K[~mu,~nu,matrix]"

K[`~mu`, `~nu`] = Matrix(%id = 18446744078388356990)

(8)

Now, if you want to work with the Schwarzschild metric, you first need to set it; the simplest way to do that is:

g_[sc]

_______________________________________________________

 

`Systems of spacetime coordinates are:`*{X = (r, theta, phi, t)}

 

`Default differentiation variables for d_, D_ and dAlembertian are:`*{X = (r, theta, phi, t)}

 

`The Schwarzschild metric in coordinates `*[r, theta, phi, t]

 

`Parameters: `[m]

 

_______________________________________________________

 

Physics:-g_[mu, nu] = Matrix(%id = 18446744078249949894)

(9)

The definition of K[mu, nu] remains unchanged

K[definition]

K[mu, nu1] = Matrix(%id = 18446744078249957726)

(10)

So,

K[]

K[mu, nu] = Matrix(%id = 18446744078249970974)

(11)

But of course K[`~mu`, `~nu`] is now different

"K[~]"

K[`~mu`, `~nu`] = Matrix(%id = 18446744078249956166)

(12)

What Maple is calculating is exactly what it displays:

"K[~mu,~nu] = g_[~mu,~alpha] g_[~nu,~beta] K[alpha,beta]"

K[`~mu`, `~nu`] = Physics:-g_[`~alpha`, `~mu`]*Physics:-g_[`~beta`, `~nu`]*K[alpha, beta]

(13)

TensorArray(K[`~mu`, `~nu`] = Physics[g_][`~alpha`, `~mu`]*Physics[g_][`~beta`, `~nu`]*K[alpha, beta])

Matrix(%id = 18446744078261614942)

(14)

To me all this looks OK. So, is this what you were expecting? Or, if not, what do you think would be wrong in these results?

 

 

Regarding an update for Maple 2018, you already have the last one. The "Physics Updates" is an interesting project, useful for everybody, but it is a significant amount of extra work; we cannot afford that for more than the current version of Maple.
 

Download correct_results.mw

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@nm 

Both dsolve and odetest are large software projects, not reducible to "do one thing", and since this is a generic conversation, don't expect all the details. Generally speaking things are as said, and trust me: I wrote both dsolve and odetest. If you want to know about the internals, the two routines you may want to trace are `odetest/implicit` (which returns FAIL for your example, because you removed _C1) and `odetest/explicit`. Also, in this same example, even if you don't remove _C1 from the input to odetest, `odetest/implicit` fails in finding 0.

So, what if one approach fails? Naturally, try other things. There are exactly 1800 lines of code distributed into 20 subroutines below odetest, implementing different catch-up strategies accumulated over the years, taking into account feedback from Maple users. In some cases the catch-up approaches work; in other cases they don't. In this, your new (third) example, the catch-up works.

Summarizing again: if you want to test with odetest, don't change the conventions of this code and let it do its job. If you change the conventions, don't expect it to do its job properly, even though, depending on the example, it may still succeed.
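A minimal sketch of the point, with a trivial ODE chosen just for illustration:

```maple
ode := diff(y(x), x) = y(x):
sol := dsolve(ode);                    # y(x) = _C1*exp(x), using dsolve's _C1 convention
odetest(sol, ode);                     # 0: the solution checks out
# Renaming the constant is harmless here only because this solution is explicit;
# for implicit solutions it is exactly what can make the test fail:
odetest(eval(sol, _C1 = C[1]), ode);
```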

Anyway, I can't continue in this thread, need to move to other pending things.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

@nm 

There is no such thing as random behavior with software. If the solution is explicit, like the one you show now in your 'reply', then there is no need to isolate the integration constant, and so you can replace _C1 with whatever you like (not depending on x). The behaviour that caught your attention (that you cannot change _C1 to C[1] and expect odetest to handle the problem) happens only with implicit solutions, for the reasons explained (in those cases the testing approach starts by isolating the integration constant, etc.).

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft.

@nm 

As mentioned in a previous thread with you, there is a distinction between essential and non-essential singular solutions. dsolve's default is to return only essential singular solutions, i.e. those that cannot be obtained from the general solution by specializing (possibly piecewise) the integration constants. On the other hand, casesplit's default is to return all singular solutions unless you explicitly indicate singsol = essential. When you solve(ode, y(x)), you are splitting (kind of manually) the original problem into two problems not having essential singular solutions.
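For instance (a classical example admitting a singular solution; the particular ODE is my choice, not from your post):

```maple
with(PDEtools):
ode := diff(y(x), x)^2 = 4*y(x):     # general solution y = (x + _C1)^2; y = 0 is singular
casesplit(ode);                       # default: all cases, including singular solutions
casesplit(ode, singsol = essential);  # restrict to the essential singular solutions only
```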

I realize as well that you are not familiar with essential singular solutions or that distinction - in the previous thread I mentioned you could take a look at the help pages ?DifferentialAlgebra,RosenfeldGroebner, or the old ?diffalg, where this distinction between essential and non-essential singular solutions - not mentioned in your posts in this thread - is explained; it is at the root of the difference in behaviors you are pointing at.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
