Axel Vogt

Munich, Bavaria, Germany

MaplePrimes Activity


These are replies submitted by Axel Vogt

It does not (XP, M10.06), but this also happens in some specific MS applications (say, Excel VBA). I do not find it very obstructive ... and it works in all the main applications, so I would say it is not due to my configuration.
Alec, I guess you mean "fire a question and forget, the web will answer". Quite common on Usenet (as it is minimalistic), but not on a board, yes. Maybe some do not even recognize their impoliteness (same as mails without greetings or magic words like "please" or "thanks"). Sometimes that annoys me too. And I have to admit that I am already a bit infected :-( But it does not frustrate me at all if I have found an answer. For the blogs ... maybe the originators are also a bit shy and do not understand it as an invitation, seeing it as _your_ blog (me as well, a little bit). Also want to say: fine to read and hear from you again :-)
One uses an argument of Abate & Whitt to transform the inversion problem
to the form Int( g(s)*cos(t*s), s = 0 .. infinity ), where g is real and
analytic on the positive reals up to some real constant, which is easily
chosen depending on the function to be inverted (it does not depend on the
point at which one wants to evaluate the Laplace inverse); see the file below.

These integrals with an oscillating factor can be computed efficiently according
to Mori & Ooura (the latter provides C and Fortran code; translate that to
Maple and mildly increase the precision for the nasty inversion problem).
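
As an illustration only (not the Ooura & Mori code from the file; just a plain
evalf/Int sketch, where the name cosInvert and the hand-picked abscissa sigma
are mine, and sigma is assumed to lie to the right of all singularities of F):

  cosInvert := proc(F, t, sigma)
  local u;
    # cosine form of the Bromwich inversion (Abate & Whitt)
    evalf( 2*exp(sigma*t)/Pi *
           Int( Re(F(sigma + I*u)) * cos(u*t), u = 0 .. infinity ) );
  end proc:

  # example: F(s) = 2/s^8 has the inverse t^7/2520
  cosInvert(s -> 2/s^8, 2.0, 0.5), evalf(2.0^7/2520);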

Download 102_A_Way_to_Compute_the_Numerical_Laplace_Inversion.zip
Here is the same in 'vector format', which is faster.

If the function to be inverted is real and without singularities then the
procedure converges after ~ 20 steps, but for the example where Bessel functions
are involved the evaluations may result in complex values (ditto for 1/sqrt(s-1))
and the thing becomes not very reliable (just try the last examples at a point
T = 12 or so). But for good situations it is fine.

As a variant I played with (pre-)computing the factorials at high precision and
then continuing with the working precision. It was not convincing (and one needs
to take too much care about the number of steps); 60 - 80 digits seem to be needed at least.

  num_invLaplace_2:=proc(F,tau) #option remember;
  local oldDigits, n, r,k, t, result, Y,B,W,G;

  n:=80; # number of summands (needs 2*n function evaluations)
  oldDigits:=Digits:
  Digits:=oldDigits + max(80, 2*n); # work at increased precision (Digits is restored on exit)

  t:= evalf(tau);
  Y:= Vector(1 .. 2*n, (k) -> F(k*ln(2.0)/t) );             # function values at the sampling points
  B:= Matrix(1 .. n, 1 .. n+1, (i,j) -> binomial(i,j-1) );  # B[k,r+1] = binomial(k,r)
  W:= Vector(1 .. n, (k) -> (-1)^(n-k)*k^n/k!/(n-k)!  *  (2*k)!/k!/(k-1)! ); # weights
  G:= Vector(1 .. n, (k) -> add((-1)^r*B[k,r+1]*Y[k+r], r = 0 .. k) );       # alternating sums

  result:= ln(2.0)/t * add( evalf(W[k]*G[k]), k=1 .. n);

  return Re( evalf[oldDigits](result) );
  end proc;


Likewise one can use a recursion to get the weights (a bit faster):

  W:= Vector(1 .. n);
  W[1]:= 2*(-1)^(n-1)/(n-1)!;
  for k from 1 to n-1 do
    W[k+1] := ( 2*(2*k+1)*(k-n)*((k+1)/k)^(n-1)/k^2 ) * W[k];
  end do; k:='k';
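
A quick sanity check (my own snippet): for a small n the recursion reproduces
the direct formula exactly, since all entries are rational:

  n := 8:
  W_direct := Vector(1 .. n, (k) -> (-1)^(n-k)*k^n/k!/(n-k)! * (2*k)!/k!/(k-1)!):
  W := Vector(1 .. n):
  W[1] := 2*(-1)^(n-1)/(n-1)!:
  for k from 1 to n-1 do
    W[k+1] := ( 2*(2*k+1)*(k-n)*((k+1)/k)^(n-1)/k^2 ) * W[k];
  end do:
  LinearAlgebra:-Norm(W - W_direct);  # gives 0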
See http://www.columbia.edu/~ww2040/JoC.pdf and Whitt's page for more of his recent work ...

The input: Thanks, but I hate that HTML cryptic stuff - other boards provide copy & paste for code.
num_invLaplace_1:=proc(F,tau)
# Stehfest algorithm, according to Kou
#option remember;
local oldDigits, w,g, result, n,nMax, r,k, t;

nMax:=20;
oldDigits:=Digits: Digits:=oldDigits + max(80, 2*nMax);

w:= unapply( (-1)^(n-k)*k^n/k!/(n-k)!, k,n);
g:= unapply( ln(2)/t*(2*k)!/k!/(k-1)!*Sum( (-1)^r*binomial(k,r)*F((k+r)*ln(2)/t)  ,r=0..k), k,t):

t:=evalf(tau);
result:= 0;

k:=1;
while k <= nMax do
  result:= evalf(result + w(k, nMax)*g(k,t));
  k := k+1;
end do;

return evalf[oldDigits](result);
end proc;


some test functions:

#F:= s -> 1 / (s + 0.5);  F:= s -> 1/((s^2-1)^(1/2)); F:= s -> s^2/(s^2+2^2)^(3/2);
F:= s -> 2/s^8;
inttrans[invlaplace](F(s), s, T):
f:=unapply(%,T); #plot( f(t), t=0..8);

                             F := s -> 2/s^8

                          f := T -> 1/2520*T^7
some test values:

tTst:=10;
num_invLaplace_1(F,tTst); #num_invLaplace(F,tTst); 
evalf(f(tTst)); 
`approximation error` = evalf(%-%%);
                              tTst := 10

                           3968.2539682541
                           3968.2539682540
                   approximation error = -0.1*10^(-9)

The other test functions involve Bessel functions and need more steps.
I like the suggestion to make NAG (completely?) available ...

There are special solutions; the following is a starting point for a search:
Abate, J. and Whitt, W. (1992) The Fourier-series method for inverting
transforms of probability distributions

If the function is real one can use the Stehfest algorithm; Google gives
some hits for Stehfest + Maple (but no code). I think one can use
http://www.pe.tamu.edu/valko/public_html/Nil/ (and translate his Mathematica
code). I have some old snippets, which I would have to clean up to
understand them again (I missed estimates of how many digits are needed
to get a given precision: for ~ 16 decimals one needs ~ 60 Digits,
but speed was fine).
Maybe a bit of it is due to my changed environment - but the input box
now finds its width (no, it is not resizable manually) and the entry
through tracking the latest postings seems much clearer.
I enjoyed the Alias command, a fine thing for memory manipulation ...

And with that one can improve CumulativeSum at least a bit, but I cannot
achieve a factor of 10, see BM4 below. A drawback, of course, is that the
code is not very readable if low-level commands are used. BTW it would not
become significantly faster if the transposes were omitted; most of the time
is consumed by generating the random vector and by the summing.

# about 0.01 for random vector + 0.03 for summing in BM4
gc(): time(tmp=BM2(200, 200));
gc(): time(tmp=BM4(200, 200));
                                0.164
                                0.040

# about 0.3 for random vector + 0.7 for summing in BM4
gc(): time(tmp=BM2(1000, 1000));
gc(): time(tmp=BM4(1000, 1000));
                                1.734
                                1.105

# about 1.0 for random vector + 3.8 for summing in BM4
gc(): time(tmp=BM2(2000, 2000));
gc(): time(tmp=BM4(2000, 2000));
                                5.911
                                5.389

Maybe a compiled version would do a faster summation, but I never tried
that for tables.
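
If one wanted to try it, a sketch could look like the following (my own,
untested; it assumes that Compiler:-Compile accepts a hardware float Vector
parameter, and it does the summation in place):

  csum := proc(V::Vector(datatype = float[8]), n::integer)
  local i, s;
    s := 0.0;
    for i from 1 to n do
      s := s + V[i];     # running sum
      V[i] := s;         # overwrite with the cumulative sum
    end do;
    s;                   # compiled procedures return a numeric value
  end proc:
  ccsum := Compiler:-Compile(csum):

  # usage on one row of increments, e.g.
  # V := Statistics:-Sample(Normal(0, 1/sqrt(1000)), 1000): ccsum(V, 1000):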

Axel

Edited to add that these timings are on my old PC (WinME, 900 MHz Intel with 256 MB).
On the new one (XP, Athlon64 non-dual, ~ 2.2 GHz) the difference is similar, but not
so much for a higher number of simulations (~ 10%). And as one needs many simulations
(if one would use it at all), it is probably not worth the effort.

---------------------------------------------------------------------

BM4 := proc(n::posint, m::posint)
local A, B, j , MV;
A := ArrayTools:-Alias(
     Statistics:-Sample(Normal(0, 1/sqrt(n)), m*n), # datatype = float[8]
     [1..m, 1..n]);

# transpose
MV := ArrayTools:-Alias(A,[m*n],column);
ArrayTools:-DataTranspose(MV,n,m);
B:=ArrayTools:-Alias(MV,[1..n,1..m]);

for j from 2 to n do
  B[j] := B[j] + B[j-1];
end do;

# for checking only
# for i to m do
#   A[i] := Statistics:-CumulativeSum(A[i]);
# end do;

# transpose back
MV := ArrayTools:-Alias(B,[m*n],column);
ArrayTools:-DataTranspose(MV,m,n);
B:=ArrayTools:-Alias(MV,[1..m,1..n]);

# for checking only
#print(B - A); #print(A); print(B);

MV:=NULL;
A:=NULL; 

return (B);
end proc:
BM3 := proc(n::posint, m::posint)
  local A, i, CS;
  A := ArrayTools:-Alias(Statistics:-Sample(Normal(0, 1/sqrt(n)), m*n), [1..m, 1..n]);

  # in-place cumulative sum of one row; the loop index is declared local to CS
  # so that it does not clash with the loop over the rows below
  CS:=proc(L,n)
  local LL, s, i;
    LL:=hfarray(1..n);
    s:=0;
    for i from 1 to n do
      s:= s + L[i];
      LL[i]:=s;
    end do;
    LL;
  end proc;

  #st:=time():
  for i to m do
    A[i] := CS(A[i],n);
  end do;
  #print( time()-st);
  A;
  #print( A[1]);
end proc:

gc(): time(tmp=BM1(200, 200));
gc(): time(tmp=BM2(200, 200));
gc(): time(tmp=BM3(200, 200));
                                4.519
                                0.189
                                0.019

gc(): time(tmp=BM2(1000, 1000));
gc(): time(tmp=BM3(1000, 1000));
                                1.780
                                0.334

PS: maybe one should clean the sheet, otherwise the (large) arrays are stored with it.

Nice stuff, Alec :-) I especially like your construct for W ...

The size comes through the plot. How about exporting or saving?
convert(sqrt(1/x^4+1),hypergeom,include=all): 
int(%,x=1..2);
simplify(%) assuming ...;
evalf(%);

where I always hate that Maple cannot deduce such assumptions about the variable by itself ...
What do you mean by average path? And: I do not see any attachment ...
int (lower-case i) computes a symbolic solution (usually; but if you use
floating point numbers, then it may give a numerical answer), compare:

 int(sqrt(1/x^4+1), x = 1   .. 2);
 int(sqrt(1/x^4+1), x = 1.0 .. 2);

Note that in the first case the solution is given using complex functions,
and if you apply evalf to it then a spurious imaginary part due to the
approximation error is displayed (there is an error in the real part as
well - always - but you do not notice it).

To get rid of it you can use Re().
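
For example (the exact symbolic form depends on the Maple version):

  J := int(sqrt(1/x^4+1), x = 1 .. 2):
  evalf(J);      # may show a tiny spurious imaginary part
  Re(evalf(J));  # discard it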

In the second case a numerical routine was used, not a symbolic one.

Note further that it gives you 10 digits. Usually for low precision (up to
14 - 15 digits) Maple tries to use hardware floats, so you can almost always set
Digits:=14.

If you want numerical values for integrals, the handling is as follows:

  Int(sqrt(1/x^4+1), x = 1   .. 2); evalf(%);

Observe the upper-case I. For more digits try evalf[20](%), see the documentation.
Thank you, it would be fine if you could find some solution and/or a fix.