acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer


 

restart;

col:=proc(ee, c::string)
  uses Typesetting;
  subsindets(Typeset(ee),
             'specfunc'(anything,{mi,mo,mn,ms,mfenced}),
             u->op(0,u)(op(u),':-mathcolor'=c));
end proc:

captext1 := col( "A plot of: ", "green" ):
capother := col( sin(x), "red" ):
captext2 := col( " from ", "green" ),
            col( -Pi, "green" ),
            col( " to ", "green"),
            col( Pi, "green" ):

plot(sin(x), x=-Pi..Pi, caption=typeset(captext1, capother, captext2));

 

 

Download colored_caption.mw

evalf(Int(Fg, [0..1, 0..2.2], epsilon=5e-10));
Error, (in evalf/int) Cannot obtain the requested accuracy

evalf(Int(Fg, [0..1, 0..2.2], epsilon=1e-9));
                           8.140784249

evalf(Int(Fg, [0..1, 0..2.2], epsilon=1e-6));

                           8.140784161
Or, construct your procedure to return unevaluated when its arguments are not both of type numeric. (That is more robust than using uneval quotes.)
newFg := proc(x0,y0)
  if not (x0::numeric and y0::numeric) then
    return 'procname'(args);
  end if;
  if (x0>=0)and(x0<=3) and (y0<=x0+2)
   and (y0>=x0-1) and (y0>=0) and (y0 <=3) then
    return y0*(3-y0)*x0*(3-x0)*(x0+2-y0)*(y0-x0+1);
  else
    return 0;
  end if:
end proc:

evalf(Int(newFg(x,y), [x=0..1, y=0..2.2]));

                                8.140784249
Or, do not use the nested form. This seems to be more friendly to the uneval-quote protection (against premature evaluation).
evalf(Int(Int('Fg'(x,y), x=0..1), y=0..2.2, epsilon=1e-9));
  
                              8.140784249

Remove the semicolon that appears before the keyword `local`.

proc(z0, u0, F0);

I mean that semicolon.
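For instance (with a hypothetical procedure body), the parser only accepts the local declarations when no statement terminator sits between the parameter list and the `local` keyword:

```maple
# Wrong: a semicolon right after the parameter list terminates the
# procedure header, so the subsequent `local` line is a syntax error.
#   f := proc(z0, u0, F0);  local t;  ...  end proc:

# Right: nothing between the parameter list and the declarations.
f := proc(z0, u0, F0)
  local t;
  t := z0 + u0*F0;
  t^2;
end proc:
```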

 

If your code is in plaintext then you can run the standalone shell tool mint against it. It has been kept much more up to date than the maplemint command, with regard to newer syntax and Maple language developments.

If you don't want to write out your procedure's source code to a text file (and if you haven't developed it as such, or if running shell scripts is awkward on your platform), and if the rather weak maplemint command cannot handle your procedure, then you can try the method in this old Post. That uses a temporary file to get access to the external mint utility.
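A rough sketch of that temporary-file approach, from within Maple itself (the file name, the -i information-level flag, and the mint location are assumptions; adjust for your platform):

```maple
# Write the procedure f's source to a plaintext file, then run the
# external mint utility against it via ssystem, collecting its report.
src := "/tmp/myproc.mpl":                      # hypothetical temp file
save f, src;                                   # writes f := proc...end proc;
ssystem(cat(kernelopts(bindir), "/mint -i 2 ", src));
```

The returned list contains the shell's exit status and mint's printed analysis as a string.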

It isn't clear to me whether you also want to obtain an explicit result, and whether you want it exact or as a float approximation.

expr:=1/(u+v)^2+4*u*v-1:

ans:=[solve({expr=0, u>0, v>0}, real, explicit)]:
ans:=simplify(combine(evalc(simplify(ans)))):

lprint(ans);

[{u = (2*cos(2/9*Pi)+5/4+3^(1/2)*sin(2/9*Pi))
      *(2*(19-6*cos(4/9*Pi)-12*3^(1/2)*sin(2/9*Pi)
      -6*cos(2/9*Pi))^(1/2)+6*cos(4/9*Pi)
      +3*3^(1/2)*sin(2/9*Pi)-3*cos(2/9*Pi)-2)^(1/2),
  v = 1/4*(2*(19-6*cos(4/9*Pi)-12*3^(1/2)*sin(2/9*Pi)
      -6*cos(2/9*Pi))^(1/2)+6*cos(4/9*Pi)
      +3*3^(1/2)*sin(2/9*Pi)-3*cos(2/9*Pi)-2)^(1/2)}]

evalf(ans);                                      
               [{u = 1.594539597, v = 0.1023339994}]

simplify(eval(expr, ans[1]));                    

                                0

It seems to me that if Digits is "high enough" then one may achieve enough numerical robustness to get reasonably close to 10 digits of accuracy with fewer terms in the nested sums. But also, the accelerated numerical summation techniques offered by evalf(Sum(...)) compute such results quite a bit more quickly than does add, for these examples.

On my (fast) 64bit Linux machine, using Maple 2016.2, I can compute J(3,Pi/6) to about 9 digits in 16sec using evalf(Sum(...)), while add takes about 54sec. For the expression a*b*(-A*J(3,Pi/6)+B*J(6,Pi/6)), evalf(Sum(...)) computes about 7 digits in about a quarter of the time that add takes. Your mileage may vary. Please double-check, as we can all make mistakes.

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

JSum := proc(n, phi, NN) options operator, arrow;
8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

Digits:=10: NN:=70:
''Digits''=Digits, ''JSum''(3,Pi/6,NN) = CodeTools:-Usage( evalf( JSum(3,Pi/6,NN) ) ): evalf[10](%);

memory used=2.74GiB, alloc change=36.00MiB, cpu time=15.26s, real time=15.27s, gc time=636.00ms

Digits = 10., JSum(3, (1/6)*Pi, 70) = 15.47952908

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

JSum := proc(n, phi, NN) options operator, arrow;
8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

Digits:=30: NN:=70:
''Digits''=Digits, ''JSum''(3,Pi/6,NN) = CodeTools:-Usage( evalf( JSum(3,Pi/6,NN) ) ): evalf[10](%);

memory used=3.49GiB, alloc change=36.00MiB, cpu time=20.12s, real time=20.14s, gc time=800.00ms

Digits = 30., JSum(3, (1/6)*Pi, 70) = .3995579519

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

Jadd := proc(n, phi, NN) options operator, arrow;
8*Pi^(3/2)*r*R*(add((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(add((-1)^j*cos(phi)^(2*j)*(add((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

Digits:=10: NN:=70:
''Digits''=Digits, ''Jadd''(3,Pi/6,NN) = CodeTools:-Usage( evalf( Jadd(3,Pi/6,NN) ) ): evalf[10](%);

memory used=3.65GiB, alloc change=32.00MiB, cpu time=20.35s, real time=20.39s, gc time=824.00ms

Digits = 10., Jadd(3, (1/6)*Pi, 70) = -155738.6548

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

Jadd := proc(n, phi, NN) options operator, arrow;
8*Pi^(3/2)*r*R*(add((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(add((-1)^j*cos(phi)^(2*j)*(add((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

Digits:=30: NN:=70:
''Digits''=Digits, ''Jadd''(3,Pi/6,NN) = CodeTools:-Usage( evalf( Jadd(3,Pi/6,NN) ) ): evalf[10](%);

memory used=5.22GiB, alloc change=36.00MiB, cpu time=28.27s, real time=28.30s, gc time=1.27s

Digits = 30., Jadd(3, (1/6)*Pi, 70) = .3995579519

 

The above are all in separate sessions, to ensure the performance measurements are fair, and absolutely no results
are cached, etc.

 

We can see from the above that using evalf(Sum(...)) gets a result of comparable accuracy to add when Digits=30 and the upper bounds of the summation indices are just 70. But the accelerated floating-point summation method done by evalf(Sum(...)) is faster here than the full explicit summation done by add(...).

Let's next try to show that, for at least the case of J(3, Pi/6), the value NN=70 suffices when done with Digits=30.
That is to say, more terms at even greater working precision are not necessary, for that example at least.

 

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

JSum := proc(n, phi, NN) options operator, arrow;
8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. NN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

Digits:=100: NN:=100:
''Digits''=Digits, ''JSum''(3,Pi/6,NN) = CodeTools:-Usage( evalf( JSum(3,Pi/6,NN) ) ): evalf[10](%);

memory used=15.43GiB, alloc change=114.57MiB, cpu time=82.20s, real time=80.09s, gc time=4.79s

Digits = 100., JSum(3, (1/6)*Pi, 100) = .3995579519

 

So the result with Digits=100 and NN=100 agrees to 10 digits with that obtained more quickly
with only Digits=30 and NN=70.


You can of course experiment to see whether those faster settings are also numerically robust for
other values of arguments n and phi.

It seems that we can also "simplify" the expression to be evaluated, with some additional performance
gain and only an apparent slight drop in accuracy (for this example at least).

 

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

JSum := proc(n, phi, NN) options operator, arrow;
8*Pi^(3/2)*r*R*(Sum((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(Sum((-1)^j*cos(phi)^(2*j)*(Sum((2*r*cos(phi))^(2*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, 1/2], [2*j+2*l+3/2], -1)*(1/2*Beta(l+1/2, n+2*i+l-1/2)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+3/2, l+1/2], [l+3/2], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, 1/2)*(R^2+r^2)^(n+2*i+l-1/2)), l = 0 .. NN))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. NN)) end proc:

raw:=JSum(ii, Pi/6, NNN):

Digits:=30:
new:=simplify(simplify(combine(convert(combine(raw),StandardFunctions))),size) assuming i::nonnegint, j::nonnegint, l::nonnegint, j<=i, ii::posint, NNN::nonnegint:

NN:=70:
''Digits''=Digits, ''JSum''(3,Pi/6,NN) = CodeTools:-Usage( evalf( subs([ii=3,NNN=NN],new) ) ): evalf[10](%);

memory used=3.06GiB, alloc change=4.00MiB, cpu time=16.58s, real time=16.03s, gc time=1.18s

Digits = 30., JSum(3, (1/6)*Pi, 70) = .3995579521

CodeTools:-Usage( evalf( a * b * ( -A * subs([ii=3,NNN=NN,W=Pi/6],new)
                                   + B * subs([ii=6,NNN=NN,W=Pi/6],new) ) ) ): evalf[10](%);

memory used=3.13GiB, alloc change=0 bytes, cpu time=16.92s, real time=16.38s, gc time=1.21s

.6541253939

Digits:=50:
new:=simplify(simplify(combine(convert(combine(raw),StandardFunctions))),size) assuming i::nonnegint, j::nonnegint, l::nonnegint, j<=i, ii::posint, NNN::nonnegint:

NN:=100:
''Digits''=Digits, ''JSum''(3,Pi/6,NN) = CodeTools:-Usage( evalf( subs([ii=3,NNN=NN],new) ) ): evalf[10](%);

memory used=10.66GiB, alloc change=0 bytes, cpu time=55.85s, real time=53.69s, gc time=4.84s

Digits = 50., JSum(3, (1/6)*Pi, 100) = .3995579521

CodeTools:-Usage( evalf( a * b * ( -A * subs([ii=3,NNN=NN,W=Pi/6],new)
                                   + B * subs([ii=6,NNN=NN,W=Pi/6],new) ) ) ): evalf[10](%);

memory used=10.63GiB, alloc change=40.00MiB, cpu time=55.66s, real time=53.61s, gc time=4.57s

.6541254115

restart;

r := 2.8749: a := 0.7747: b := 0.3812: A := 17.4: B := 29000: R := 5.4813: Z := 2:

J := proc (n, phi) options operator, arrow; 8*Pi^(3/2)*r*R*(add((2*r*R)^(2*i)*pochhammer((1/2)*n, i)*pochhammer((1/2)*n+1/2, i)*(add((-1)^j*cos(phi)^(2*j)*(add((2*r*cos(phi))^(2.*l)*pochhammer(n+2*i, 2*l)*hypergeom([2*j+2*l+1, .5], [2*j+2*l+1.5], -1)*(.5*Beta(l+.5, n+2*i+l-.5)-sin(arctan(-Z/sqrt(R^2+r^2)))^(2*l+1)*hypergeom([-n-2*i-l+1.5, l+.5], [l+1.5], sin(arctan(-Z/sqrt(R^2+r^2)))^2)/(2*l+1))/(factorial(2*l)*pochhammer(2*j+2*l+1, .5)*(R^2+r^2)^(n+2*i+l-.5)), l = 0 .. 70))/(factorial(i-j)*factorial(j)), j = 0 .. i))/factorial(i), i = 0 .. 70)) end proc:

Digits:=30:

CodeTools:-Usage( evalf(a*b*(-A*J(3, (1/6)*Pi)+B*J(6, (1/6)*Pi))) ): evalf[10](%);


memory used=10.44GiB, alloc change=36.00MiB, cpu time=57.16s, real time=57.21s, gc time=2.74s

.6541253936

 

Download sumproc.mw

A little poking around in the code reveals an undocumented method="alternate" option, which makes Maple 2016's plots:-contourplot command utilize plots:-implicitplot internally (via the internal procedure Plot:-ContourPlot).

restart;
kernelopts(version);

   Maple 2016.2, X86 64 LINUX, Jan 13 2017, Build ID 1194701

plots:-contourplot( 1/(x^2+y^2), x=-2 .. 2, y=-2 .. 2, method="alternate" );

plots:-contourplot( 1/(x^2+y^2), x=-2 .. 2, y=-2 .. 2, method="alternate",
                    contours=[seq(i,i=0.5..4,0.5)] );

Download alternate_contourplot.mw

It would be even nicer if that internal procedure Plot:-ContourPlot separated out additionally passed options specific to plots:-implicitplot (so that it didn't pass all of _rest to plots:-display, but allowed options such as gridrefine, etc, to be passed along to plots:-implicitplot). Or, for consistency with plots:-inequal, it might be made to accept them via an optionsimplicit option.

evalf[5]( evalf( expression ) )

will round the result to 5 digits. But the inner call to evalf will still use the current Digits working precision (which you may want done).
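For instance:

```maple
Digits := 15:
evalf( exp(1) );              # computed at the full 15 digits:
                              #   2.71828182845905
evalf[5]( evalf( exp(1) ) );  # same computation, rounded to 5 digits:
                              #   2.7183
```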

Since Maple 2015 all platforms of Maple (Linux, OSX, Windows) have used BLAS and LAPACK functions from the Intel Math Kernel Library (MKL) for hardware floating-point linear algebra. That functionality can be utilized by computations other than just overt calls to LinearAlgebra commands.

The kernelopts(numcpus) setting will not affect the number of cores that may be used by those functions. But the OS environment variable OMP_NUM_THREADS will restrict that number of cores. As the system administrator you could set that variable in the two shell scripts maple and xmaple, both found in the bin directory of the Maple installation.

You can verify that it is functioning with the following code:

M:=LinearAlgebra:-RandomMatrix(5000,datatype=float[8]):

CodeTools:-Usage(LinearAlgebra:-Eigenvectors(M)):

The printout of that last command might look like the following on an otherwise unloaded 4-core 64bit Linux machine with that variable unset (or set to 4):

memory used=0.75GiB, alloc change=0.78GiB, cpu time=3.39m, real time=51.59s, gc time=32.00ms

The "cpu time" reported is the sum from all threads. Note that it is about 4 times greater than the "real time", which essentially means that four cores were in heavy use. The Linux utility `top` showed a load average of as high as about 3.40 while that was computing.

Now, with that environment variable set to 1 (in the maple shell script) the printout I see is like this:

memory used=0.75GiB, alloc change=0.78GiB, cpu time=68.50s, real time=68.56s, gc time=32.00ms

In that case top showed a load average no higher than about 1.05. And the "real time" matches the "cpu time" which is the sum for just the single thread.

The reason that the 4-thread case's "real time" was not very much faster than the single-thread case's "real time" is that the nonsymmetric eigenvector algorithm cannot be parallelized as effectively throughout as can, say, matrix-matrix multiplication. But that's rather incidental to this illustration.

You can also check the value of an OS environment variable, from within Maple, using the getenv command. Eg,

getenv("OMP_NUM_THREADS");

Regarding what Carl mentioned about kernelopts(numcpus=n) only working once, as an initial command: that's good news for your situation, since if you set it within a system-wide Maple initialization file then it could hold forcibly throughout your users' sessions.

for a from 1 to 20 do
  printf("%s\n",sprintf("%a",[op(numtheory:-divisors(a))])[2..-2]);
end do:

1
1, 2
1, 3
1, 2, 4
1, 5
1, 2, 3, 6
1, 7
1, 2, 4, 8
1, 3, 9
1, 2, 5, 10
1, 11
1, 2, 3, 4, 6, 12
1, 13
1, 2, 7, 14
1, 3, 5, 15
1, 2, 4, 8, 16
1, 17
1, 2, 3, 6, 9, 18
1, 19
1, 2, 4, 5, 10, 20

Or, if you don't want to reproduce it as a triangle,

seq(op(numtheory:-divisors(a)), a=1..20);

1, 1, 2, 1, 3, 1, 2, 4, 1, 5, 1, 2, 3, 6, 1, 7, 1, 2, 4, 8, 1, 3, 9, 1, 2, 5, 10, 1, 11, 1, 2, 3, 4, 6, 12, 1, 13, 1, 2, 7, 14, 1, 3, 5, 15, 1, 2, 4, 8, 16, 1, 17, 1, 2, 3, 6, 9, 18, 1, 19, 1, 2, 4, 5, 10, 20

The OP has written that, "The format of the input shouldn't matter."

But of course it can matter. Symbolic integration of expressions containing floating-point approximations is not always a great idea. In this case alternatives include 1) floating-point numeric integration of the expression containing floats, 2) tricky, extra effort to get symbolic integration of the expression containing floats to behave better, and, as mentioned, 3) symbolic integration of the expression with exact rather than floating-point coefficients.

(This site doesn't render the worksheet below very nicely. Sorry.)
 

restart

with(VectorCalculus):

R := 1: r := 0.6: d := 0.8:

x := t -> (R - r)*cos(t) + d*cos((R - r)/r*t):

y := t -> (R - r)*sin(t) - d*sin((R - r)/r*t):

 

The ArcLength command allows us to generate the result as an inert integral.

 

Q := ArcLength(<x(t), y(t)>, t = 0 .. 6*Pi, inert);

Int(((-.4*sin(t)-.5333333334*sin(.6666666668*t))^2+(.4*cos(t)-.5333333334*cos(.6666666668*t))^2)^(1/2), t = 0 .. 6*Pi)


We can force so-called numeric integration (ie. floating-point quadrature) by applying the evalf command to that inert integral.

 

evalf(ArcLength(<x(t), y(t)>, t = 0 .. 6*Pi, inert));

11.52567160

 

Another way to force numeric integration is to supply the end-points as floats (or one as a float and the other of type numeric).

 

ArcLength(<x(t), y(t)>, t = 0 .. evalf(6*Pi));

11.52567160

 

Let's have another look at that inert integral.

 

P := ArcLength(<x(t), y(t)>, t = 0 .. B, inert);

Int(((-.4*sin(t)-.5333333334*sin(.6666666668*t))^2+(.4*cos(t)-.5333333334*cos(.6666666668*t))^2)^(1/2), t = 0 .. B)

 

There is a particular symbolic integration method which can produce a viable result. But note that its symbolic result is subject to round-off error during subsequent floating-point evaluation. Other choices of symbolic integration method can go awry in other ways (including heavy resource use, even in the presence of assumptions on B).

 

PP := int(op(P), method = ftocms) assuming B > 0, B <= 6*Pi;

piecewise(1. < .1326291193*B, 2.305134319*floor(.1326291193*B), 0.)+piecewise(0. < 4166666667.*B-0.1570796327e11, 2.305134319*floor(.1326291193*B-.5000000000)+2.305134319, 0.)+1.120000000*((-0.2133333334e20*cos(.8333333334*B)^2+0.2177777778e20)^(1/2)*EllipticE(cos(.8333333334*B), .9897433186)*cos(.8333333334*B)^2+1.029077821*sin(.8333333334*B)*(0.2133333334e20*cos(.8333333334*B)^4-0.4311111112e20*cos(.8333333334*B)^2+0.2177777778e20)^(1/2)-1.*EllipticE(cos(.8333333334*B), .9897433186)*(-0.2133333334e20*cos(.8333333334*B)^2+0.2177777778e20)^(1/2))/(sin(.8333333334*B)*(0.2133333334e20*cos(.8333333334*B)^4-0.4311111112e20*cos(.8333333334*B)^2+0.2177777778e20)^(1/2))

T := subs(B = 6*Pi, PP)

piecewise(1. < 2.500000001, 2.305134319*floor(2.500000001), 0.)+piecewise(0. < 0.6283185308e11, 2.305134319*floor(2.000000001)+2.305134319, 0.)+1.120000000*((-0.2133333334e20*cos(15.70796327)^2+0.2177777778e20)^(1/2)*EllipticE(cos(15.70796327), .9897433186)*cos(15.70796327)^2+1.029077821*sin(15.70796327)*(0.2133333334e20*cos(15.70796327)^4-0.4311111112e20*cos(15.70796327)^2+0.2177777778e20)^(1/2)-1.*EllipticE(cos(15.70796327), .9897433186)*(-0.2133333334e20*cos(15.70796327)^2+0.2177777778e20)^(1/2))/(sin(15.70796327)*(0.2133333334e20*cos(15.70796327)^4-0.4311111112e20*cos(15.70796327)^2+0.2177777778e20)^(1/2))

forget(evalf):

Successive attempts at floating-point evaluation of T produce:

Float(undefined)+Float(undefined)*I

Float(-infinity)

11.564284298113954460

 

Reiterating that last point: even forcing a particular method for symbolic integration, the presence of floats can cause severe difficulties.

 

RR := Int(op(subs(B = VectorCalculus:-`*`(6, Pi), [op(P)])), method = ftocms);

Int(((-.4*sin(t)-.5333333334*sin(.6666666668*t))^2+(.4*cos(t)-.5333333334*cos(.6666666668*t))^2)^(1/2), t = 0 .. 6*Pi, method = ftocms)

Successive attempts at floating-point evaluation of RR similarly produce:

Float(undefined)+Float(undefined)*I

Float(-infinity)

11.564284298113954460

 

Someone else mentioned that the symbolic integration works fine for this example if the floats in x(t), y(t) are turned into exact rationals.

 

VectorCalculus:-`<,>`(x(t), y(t));

map(combine, convert(%, rational));

Vector[column]([[.4*cos(t)+.8*cos(.6666666668*t)], [.4*sin(t)-.8*sin(.6666666668*t)]], ["x", "y"])

Vector[column]([[(2/5)*cos(t)+(4/5)*cos((2/3)*t)], [(2/5)*sin(t)-(4/5)*sin((2/3)*t)]], ["x", "y"])

(56/5)*EllipticE((4/7)*3^(1/2))

11.52567160

 


 

Download hypotrochoid_2.mw

Make your batch file print (whatever it thinks is) the PATH, when it runs successfully in a command shell.

Then adjust it to set PATH (for itself) explicitly to that very same thing.

[edit] If it relied on the current working directory (when functioning in cmd) then I suggest 1) that this is poor programming style, and 2) that you adjust the PATH anyway, making it all fully qualified rather than relative.

Then try running it from Maple by escaping to a shell, using ssystem or system.
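The Maple-side call might look like the following sketch (the batch file's name and path are hypothetical):

```maple
# Run the batch file from Maple; ssystem returns the shell's exit
# status together with the captured standard output as a string.
result := ssystem("C:\\scripts\\mybatch.bat"):
result[1];   # return code (0 on success)
result[2];   # captured output, e.g. the PATH the batch file printed
```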

Below I give a procedure presubsuper which allows you to build the so-called atomic identifier (aka typeMK) for a pre-sub-superscripted name.

I also show that an unevaluated function call like, say, Iso(U,92,238) can be made to be automatically prettyprinted using that presubsuper procedure, by having its own so-called print-slash extension to the printing mechanism.

(This whole business could also be done, probably quite gracefully, using the more modern Maple object mechanism. I won't go there right now, but it would be fun to see how much physical chemistry or nuclear physics could be incorporated into static exports of such objects...).


 

restart;

presubsuper := proc(e,sub,super)
  nprintf("#mscripts(mi(%a),none(),none(),none(),none(),mn(%a),mn(%a))",
          convert(e,string),convert(sub,string),convert(super,string));
end proc:

Iso := proc(e,sub,super,$)
  return 'procname'(args);
end proc:

`print/Iso` := proc()
  presubsuper(args);
end proc:

presubsuper(U,92,238);

`#mscripts(mi("U"),none(),none(),none(),none(),mn("92"),mn("238"))`

Iso(U,92,238);

Iso(U, 92, 238)

op(Iso(U,92,238));

U, 92, 238

u := Iso(X,Z,A);

Iso(X, Z, A)

u;

Iso(X, Z, A)

A := foo: Z := bar: X := blah:

u;

Iso(blah, bar, foo)

A := 238: Z := 92: X := U:

u;

Iso(U, 92, 238)

he := Iso(He,2,4);

Iso(He, 2, 4)

th := Iso(Th,90,234);

Iso(Th, 90, 234)

Uranium := u implies th+he

Iso(U, 92, 238) implies Iso(Th, 90, 234)+Iso(He, 2, 4)

 


 

Download presubsuper.mw

The DocumentTools package can be used to programmatically generate GUI Tables that are formatted like a report.

The DocumentTools:-Tabulate command provides an easy way to do that, though with ease-of-use comes less flexibility.

The subpackage DocumentTools:-Layout (and, less so here, DocumentTools:-Components) contains lower level commands with more flexibility. (It's what Tabulate uses). The results of the Layout commands can be saved to a new worksheet, opened in a new GUI tab, or embedded within the current worksheet (which you could also subsequently copy&paste).

The sticking point is your desire for LaTeX. The GUI's File->Export as... main menu item will produce LaTeX source for "the whole worksheet", as you probably know. It's a slight drawback that it requires use of one of Maple's own .sty files (bundled with the product). But the thorny bit is that support for such LaTeX export of GUI Tables is not good.
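As a minimal illustration of the easy route (the data here is made up), the following displays a small Matrix as a formatted GUI Table in the current document:

```maple
# Tabulate the Matrix M, left-aligned, occupying 40% of the window width.
# The first row serves as a simple header row.
M := Matrix([["quantity", "value"],
             [x^2 + 1,    42],
             [sin(x),     3.14]]):
DocumentTools:-Tabulate(M, alignment = left, width = 40);
```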

Let's call your AND(A->B, B->C) the statement P. And let's call your A->C the statement Q. It is the case that Q follows from P.

But you seem to be under the impression that Q is the only statement that follows from P. It isn't. It's also not the case that P follows from Q, and so they are not equivalent. So what's your rationale for thinking that Q should be, say, the obvious or useful statement that follows from P?

It happens that Q does not contain a reference to B. Is that your reason for wanting Q, because in that way it is simpler than P? I'm going to proceed along those grounds.

with(Logic):

p := (A &implies B) &and (B &implies C):

BooleanSimplify( eval(p, B=false) &or eval(p, B=true) );

                                  C &or &not(A)

# And you ought to be able to recognize the above as being
# equivalent to A->C

BooleanSimplify( A &implies C );

                                  C &or &not(A)