mmcdara

7891 Reputation

22 Badges

9 years, 60 days

MaplePrimes Activity


These are replies submitted by mmcdara

@Carl Love 

Thanks

@acer @Carl Love

Thanks, acer, for correcting me about the number of digits that the _d01ajc method can handle.
If I thought there was such a limitation, it is because, while working on Mehdi's example, I got an error message that is easy to reproduce (with Maple 2015):

Digits:=20:
evalf(Int(x, x=0..1, method=_d01ajc))
Error, (in evalf/int) expect Digits<=15 for method _d01ajc but received 23


Two more points:

  1. Can you explain to me why evalf is of no use in your plot command?
    Example
    Digits:=10:
    
    f := Omega -> Int(subs(omega=Omega, omega*x), x=0..1, method=_d01ajc,epsilon=1e-4):
    
    f(1.0);   # unevaluated
    evalf(%); # evaluated
    
    # Below: Why is the definite integral evaluated without invoking evalf?
    # Would the plot command automatically apply evalf?
    
    plot(Omega -> Int(subs(omega=Omega, omega*x), x=0..1, method=_d01ajc,epsilon=1e-4),0..50)
    
      Int(Omega x, x = 0 .. 1, method = _d01ajc, epsilon = 0.0001)
         Int(x, x = 0 .. 1, method = _d01ajc, epsilon = 0.0001)
    
  2. About   int(eval(A1+A2,omega=20.099752),theta=0..Pi, numeric, digits=5)
    I agree with both of you that explicitly specifying the tolerance or using the epsilon option would be a better way to get the desired accuracy.
    In my reply I just focused on what happens for a particular value of omega when using the same command @mehdi jafari uses.
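Back to point 1: whatever plot does internally, one way to make the numerical evaluation explicit is to compose the operator with evalf. This is only a sketch of the workaround, not a claim about plot's internals:

```maple
Digits := 10:

f := Omega -> Int(subs(omega = Omega, omega*x), x = 0..1, method = _d01ajc, epsilon = 1e-4):

# f(1.0) alone returns an unevaluated Int; evalf @ f forces the
# _d01ajc quadrature, so plot receives a numeric value directly.
plot(evalf @ f, 0 .. 50);
```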

 

@MapleEnthusiast 

Counterexample
 

M := Matrix(2, 2, [a, 1-a, b, 1-b]):
LinearAlgebra:-Eigenvalues(M):
printf("%a",%);
Vector(2, [1,a-b])

# nothing proves that   a-b < 1

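To make the counterexample concrete, here is a hypothetical instantiation (the values a = 3, b = 1 are my own choice; nothing in the original question constrains a and b to [0, 1]):

```maple
# With a = 3, b = 1, the matrix becomes [[3, -2], [1, 0]] and the
# second eigenvalue a - b = 2 exceeds 1.
M := Matrix(2, 2, [3, -2, 1, 0]):
LinearAlgebra:-Eigenvalues(M);   # eigenvalues 1 and 2
```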
 

@acer 

For info: your own code plus symmetry and periodicity considerations

Maple 2015.2, APPLE UNIVERSAL OSX, Dec 20 2015, Build ID 1097895

Digits:=15:
CodeTools:-Usage(plot(Omega->Int(subs(omega=Omega,unapply(A1+A2,theta)),
                                 0..2*Pi,method=_d01ajc,epsilon=1e-4),
                      0..50)):

CodeTools:-Usage(plot(Omega->4*Int(subs(omega=Omega,unapply(A1+A2,theta)),
                                 0..Pi/2,method=_d01ajc,epsilon=1e-4),
                      0..50));

memory used=8.68MiB, alloc change=0 bytes, cpu time=1.64s, real time=1.64s, gc time=0ns
memory used=5.53MiB, alloc change=0 bytes, cpu time=445.00ms, real time=447.00ms, gc time=0ns

 

@mehdi jafari 

Very far from a  super computer:
Maple 2015.2
MacOS Mojave, 10.14.3, proc 2.9 GHz Intel Core i5  (7 years old I think)


"do you think solving it with more digits would change the results or not":
I don't think so; furthermore, I believe that evalf(Int(...)) returns an error message when used with more than 15 digits (see @acer's reply).

 

 

@acer 

Thank you acer.
Furthermore, I was completely unaware of the LargeExpression package; I just had a quick look at it and it looks extremely interesting.

 

@mehdi jafari 

Please excuse me @Preben Alsholm  if I take the liberty of intervening in this discussion.
To @mehdi jafari :

The cpu time seems (to me) reasonable given the complexity of A1 and A2

CodeTools:-Usage( plot(w->  int(eval(A1+A2,omega=w),theta=0..2*Pi, numeric, digits=5), 0..50, color=red) );

memory used=314.85MiB, alloc change=0 bytes, cpu time=4.79s, real time=4.64s, gc time=224.68ms


Given the symmetries of A1+A2 one can reduce it this way

CodeTools:-Usage(plot(w->  int(eval(A1+A2,omega=w),theta=0..2*Pi, numeric, digits=5), 0..50)):
CodeTools:-Usage(plot(w->4*int(eval(A1+A2,omega=w),theta=0..Pi/2., numeric, digits=5), 0..50)):
memory used=328.38MiB, alloc change=36.00MiB, cpu time=5.13s, real time=5.08s, gc time=179.32ms
memory used=215.62MiB, alloc change=40.00MiB, cpu time=2.90s, real time=2.84s, gc time=131.88ms


As A1+A2 contains a "boundary layer of length of order 1e-4" located around omega=16 (the integral is numerically null for omega < 16 - 1e-4)...

taylor(op(2, A1)+op(2, A2), omega=16, 3);
plot(w->int(eval(A1+A2,omega=w),theta=0..2*Pi,numeric,digits=5),16-2e-5..16+2e-5);

... one may think that computing the integral for omega >= 16 (maybe this value can be adjusted) is enough. What I propose is to plot this.

plot(w->4*int(eval(A1+A2,omega=w),theta=0..Pi/2., numeric, digits=5), 16-4e-5..50)

To make an objective (I think) comparison, each plot must be evaluated with the same number of points.
Doing

pp := plot(w->  int(eval(A1+A2,omega=w),theta=0..2*Pi, numeric, digits=5), 0..50):
plottools:-getdata(pp)[3]

shows 231 values of omega are used in the range [0..50].
Let's fix the omega step to the same value 50/230 and evaluate the integration time alone (plotting time discarded). One obtains:

J := proc() seq(int(eval(A1+A2,omega=w),theta=0..2*Pi, numeric, digits=5), w=0..50, 50/230): end proc:
CodeTools:-Usage(J(), output='cputime', iterations=10):
memory used=180.73MiB, alloc change=0 bytes, cpu time=2.24s, real time=2.13s, gc time=168.40ms

K := proc() seq(4*int(eval(A1+A2,omega=w),theta=0..Pi/2, numeric, digits=5), w=0..50, 50/230): end proc:
CodeTools:-Usage(K(), output='cputime', iterations=10):
memory used=162.65MiB, alloc change=0 bytes, cpu time=1.96s, real time=1.85s, gc time=152.76ms

Ka := proc() seq(4*int(eval(A1+A2,omega=w),theta=0..Pi/2, numeric, digits=5), w=16..50, 50/231): end proc:
CodeTools:-Usage(Ka(), output='cputime', iterations=10):
memory used=57.89MiB, alloc change=0 bytes, cpu time=718.70ms, real time=675.30ms, gc time=61.45ms

Comparing the cpu times between J and Ka shows the latter is three times faster than the former.

Next refinement (at least for me): precalculate the list of points [x, int(...)] and plot this list.

Kb := proc() 
  local u:=[ seq([w, 4*int(eval(A1+A2,omega=w),theta=0..Pi/2, numeric, digits=5)], w=16..50, 50/231) ]:
  plot(u):
end proc:

CodeTools:-Usage(Kb());
memory used=57.99MiB, alloc change=0 bytes, cpu time=879.00ms, real time=764.00ms, gc time=174.41ms

We are then 5 times faster than the original version given at the top of this reply.
Better performance can probably be obtained by:

  • choosing the integration method and its options carefully
  • adjusting the "omega step" (beyond, let's say, omega=30, the definite integral evolves only slightly)

 

To end this: I believe it would be more important to verify that the results do not depend on the integration method, on its options, or on the omega step.
For instance A1+A2 has discontinuities for all omega (at least for values to the right of the "boundary layer"):

discont(eval(A1+A2, omega=16.0001), theta);
{-0.5004339588e-1+3.141592654*_Z29, 0.5004339588e-1+3.141592654*_Z27, Pi*_Z67+(1/2)*Pi}

Are these discontinuities correctly handled with the integration procedure? Does this matter?

printf("%1.3e", evalf(eval(eval(A1+A2, omega=16.0001), theta=0.05004339586)))
1.600e+05

Interestingly, the integral cannot be computed for omega=20.099742 .. 20.099751; the peak obtained in the original plot is hugely underestimated:

int(eval(A1+A2,omega=20.099752),theta=0..Pi, numeric, digits=5)
                            2.3543

 

@maplefan123 

What I've done in my first answer does not apply to n > 4, for there is no general procedure to compute the roots of a polynomial of degree higher than 4 in closed form.
So you will have to use Kitonum's solution.

For instance:
 

restart:

form := proc(n, p1, p2)
  local B:
  local p, A, ptlist:

   B := proc(n) 
    local d:
    d := n -> factor(expand(add((-1)^k*k!*Stirling2(n, k)/binomial(k + p + 1, k), k = 0 .. n))); 
    return collect(add(binomial(n, k)*d(k)*x^(n - k), k = 0 .. n), x, factor); 
  end proc:

  for p from p1 to p2 do
    A[p] := fsolve(B(n), x, complex);
  end do:
  ptlist := convert(A,list):
  return 'plots:-complexplot(ptlist, style = point, title=cat("n = ", n))';
end proc:

M := Matrix(3, 3, (i, j) -> form(j+3*(i-1), 0, 5000)):
plots:-display(M)

numeric.mw

@Carl Love 

Extremely elegant, I vote up

@ecterrab 

Thank you for this clear explanation of how things are set up in Maple.

@DJJerome1976 

Maybe the process I used to construct correlated samples of non-Gaussian RVs seems strange to you, which would in fact be normal.
In the attached file I show what happens if you apply, to a couple of uniform RVs, the construction I used in my first file to correlate Gaussian RVs.

Correlated_uniform_RV.mw

@DJJerome1976

The attached file describes a method to correlate (more precisely, to linearly correlate, or to correlate in Pearson's sense) two arbitrary (continuous) RVs.
I tried to make things clear and didactic, which means everything I've written can be made more concise and probably more efficient (but that was not my purpose).

Perhaps this will sound like a trick to you?
But there is a rigorous and well-established theory behind this, named "Copula Theory".
What I have used here, without giving it its name, is a Gaussian copula.

Correlated_general_RV_2.mw
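For readers who do not open the worksheet, here is a condensed sketch of the general idea (my own abbreviation, not the code of the attached file; the target distributions Uniform(0,1) and Exponential(1) are arbitrary choices):

```maple
restart:
with(Statistics):

N   := 10^4:    # sample size
rho := 0.7:     # correlation of the underlying Gaussian pair

# Step 1: build two correlated standard normal samples
Z1 := Sample(Normal(0, 1), N):
Z2 := Sample(Normal(0, 1), N):
W  := rho *~ Z1 + sqrt(1 - rho^2) *~ Z2:

# Step 2: map each sample to (0, 1) through the normal CDF
# (the pair (U1, U2) is a sample of the Gaussian copula)
U1 := map(z -> evalf(CDF(Normal(0, 1), z)), Z1):
U2 := map(z -> evalf(CDF(Normal(0, 1), z)), W):

# Step 3: push the uniforms through the quantile functions of the
# two target distributions (chosen arbitrarily here)
X := map(u -> Quantile(Uniform(0, 1), u), U1):
Y := map(u -> Quantile(Exponential(1), u), U2):

# The Pearson correlation of (X, Y) is close to, but in general not
# equal to, rho: the copula preserves rank correlation, not Pearson's.
Correlation(X, Y);
```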

@vsnyder 

I keep seeing questions and answers that do not seem to have any relation to the original question. Am I right?
I simply do not understand where all of this is going.

Here are some personal reflections and questions that deserve to be asked:

  • It seems you WANT to use both Fortran and Maple. WHY?
  • Do you have any good reason to think that you couldn't do all of this in a single language?
  • If so, what is the language best suited to solving your problem?
    Once identified, use it as the "master" language and the other as the "slave" language.
    Now, are you sure you want to call Maple from Fortran and not the opposite?
  • Fortran is aimed at numerical computation while Maple is aimed (at least originally) at symbolic computation (even if a lot of numerical algorithms are available in Maple): I can't think of any numerical algorithm that would be implemented in Maple but not in Fortran (or in some specific Fortran library; see here Libraries).
    So, once again, which is the "master" language and which is the "slave"?
  • As a rule I do not see the point of calling Maple from Fortran to do numerical computations (without even mentioning computational performance).
    Is it worth calling Maple to do symbolic computations in the middle of a Fortran code where a lot of things have already been done in double (often) or quadruple (at best) precision?

 

Let me give you a few examples:

  • I launch Maple from an Eclipse application (Java) because only Eclipse enables me to develop GUIs based on class diagrams.
  • I launch R from Maple because the application I develop relies on features only Maple has (Graph Theory and symbolic calculus), but at some point requires statistical computations only R can do.
  • I launch Maple from Dakota to benefit from Dakota's features for describing a simulation chain, but I find it simpler to reuse specific algorithms I have written in Maple instead of rewriting them in C++.

For these three examples I would find it stupid to do the opposite (launching Eclipse or Dakota from Maple, or Maple from R).

@acer @wswain

Didn't you mean to say "it looks like you have assigned theta to 0 earlier"?
If you uncomment the third line below you get exactly @wswain's error; keeping it commented, the animation works as expected.

restart
with(plots):
# theta := 0:
animate(arrow, [<cos(theta), sin(theta)>, width = [0.05, relative], view = [-1 .. 2, -1 .. 2], color = "blue"], theta = 0 .. Pi/2)

 

@acer 

My purpose was more to (try to) write something a little more general than what you did, rather than something computationally efficient.

Page 86 of 154