acer

32395 Reputation

29 Badges

19 years, 343 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are answers submitted by acer

If the result of a call to `int` has been assigned to `g`, then at the top-level you could test whether,

op(0,eval(g,1)) = int

in order to check whether it has returned as an unevaluated function call to `int`.

The use of 1-level eval is to prevent the active `int` from trying the computation over, especially if all relevant remember tables might have been cleared. (A note on using procedures for a similar effect, more generally.)

You can also test against :-int instead of just int, in case packages are loaded and the name rebound. Or test against inert `Int`, if you started with that and hit it with the `value` command.

I find the idea of `int` emitting an error instead of an unevaluated return to be generally poor; the unevaluated return can be quite conveniently useful in several situations.
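A minimal sketch of such a test (the particular integrand here is merely one which `int` is likely to return unevaluated; substitute your own):

```
restart:
g := int( exp(x^x), x ):      # likely comes back as an unevaluated int(...) call
if op(0, eval(g,1)) = int then
   print("int returned unevaluated");
end if;
```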

acer

Using Maple 15.01 I was able to get a result in approximately 13-15 seconds, using either 32bit or 64bit Maple (both run under Windows 7). For example,

restart:
H:=Int(exp(18.1818*(Int((0.579160e-1*sqrt(x)*Ei(1., 0.500000e-4*x)
                         +(1072.23*(.999950-1.*exp(-0.500000e-4*x)))
                         /sqrt(x))/sqrt(x),
                         x = 1. .. eta))
       -9.10000*eta)/eta,
       eta = 1. .. 100.):

CodeTools:-Usage( evalf(H) );
memory used=0.78GiB, alloc change=69.49MiB, cpu time=14.26s, real time=14.32s
                        0.0004666594253

I believe that the successful method used above was `_Dexp` (double exponential), internally split across several subregions.

It can be done more quickly and accurately. But let's also run through other suggestions.

Carl characterizes `evalf/int` as only ever using Digits to determine a target accuracy. But that is not quite true. The numeric integrator allows both the working precision (Digits, or its `digits` option) and a tolerance (its `epsilon` option) to be specified separately. Perhaps Carl was trying to express that, unless specified otherwise, the target accuracy (tolerance, epsilon) is determined from the working precision. It may happen that one has to specify the tolerance as being looser than what would otherwise be determined automatically from the working precision. In my opinion the clearest way to set about this is to specify all aspects explicitly.

This is what I see in 64bit Maple 15.01, for Carl's suggestion. It takes longer to find a less accurate result.

restart: # Carl Love
Digits:= 30:
#I distributed the sqrt(x) in the inner integral.
Inner:= Int(
     0.579160e-1*Ei(1., 0.500000e-4*x)+(1072.23*(.999950-1.*exp(-0.500000e-4*x)))/x,
     x = 1. .. eta,
     digits= 15
):
Int(exp(18.1818*Inner-9.10000*eta)/eta, eta = 1. .. 100., digits= 5):
CodeTools:-Usage( evalf(%) );

memory used=1.31GiB, alloc change=68.24MiB, cpu time=21.98s, real time=22.11s
                           0.00046645

This next is Markiyan's suggestion. It takes about the same time as the straight call to evalf done first above. It is (I believe) a more accurate result even though no non-default options are used. But, since no non-default options are supplied, it's not clear from just this result that it might be more accurate than the first result above. What it does do is force an iterated single-variable method. It succeeds by iterated single-dimension quadrature, with integrals split across several subregions, first failing in method _d01ajc (due to the working precision being too close to the requested accuracy) and then completing with method _Dexp.

restart:
F:=eta->int((0.579160e-1*sqrt(x)*Ei(1., 0.500000e-4*x)
            +(1072.23*(.999950-1.*exp(-0.500000e-4*x)))
            /sqrt(x))/sqrt(x),
            x = 1. .. eta, numeric):
CodeTools:-Usage( int(eta->exp(18.1818*F(eta)-9.10000*eta)/eta,
                      1. .. 100., numeric) );
memory used=0.68GiB, alloc change=11.12MiB, cpu time=13.18s, real time=13.19s
                        0.0004666594270

It's probably worth stating explicitly here that another difficulty for such iterated single-dimension integrals is that the inner integral's results might not be accurate enough for the numeric method used on the outer integral to properly do error estimation or adaptive control. In the following attempt the inner integral uses the fast _d01ajc method with digits set as high as that method allows while the tolerance is set as tight as can be to match. The outer integral is left to choose its own method (_Dexp) but must naturally use a looser tolerance because it's doing 1D quadrature with inner results that are only so accurate.

restart:
H:=Int(exp(18.1818*(Int((0.579160e-1*sqrt(x)*Ei(1., 0.500000e-4*x)
                         +(1072.23*(.999950-1.*exp(-0.500000e-4*x)))
                         /sqrt(x))/sqrt(x),
                         x = 1. .. eta, digits=15, epsilon=5e-15, method=_d01ajc))
       -9.10000*eta)/eta,
       eta=1. .. 100., digits=15, epsilon=1e-13):
CodeTools:-Usage( evalf(H) );
memory used=0.53GiB, alloc change=64.49MiB, cpu time=7.52s, real time=7.52s
                      0.000466659427019435

I suspect that is quite accurate, and a bit faster.

Now for Axel's observation that the inner integral can be solved. Using symbolic `int` on the inner integral produces an expression involving several calls to the special function `Ei`. That is itself just an abbreviation for an integral, albeit one which can be computed quickly and accurately via special methods. But even here it is important to note that this inner symbolic result has to be computed accurately enough, or else the outer numeric integral will be constrained in how accurate it can be. (Axel's habit, I believe, is to set Digits to 15 in such situations. And indeed that might even be necessary here.) The following makes exactly one call to any numeric integrator (_d01ajc, as it happens), which is successful.

restart:
Digits:=15:
Hinner:=Int((0.579160e-1*sqrt(x)*Ei(1., 0.500000e-4*x)
            +(1072.23*(.999950-1.*exp(-0.500000e-4*x)))
            /sqrt(x))/sqrt(x),
            x = 1. .. eta):
Hinner:=CodeTools:-Usage( value(Hinner) ) assuming eta>1, eta<100:
memory used=21.30MiB, alloc change=17.50MiB, cpu time=437.00ms, real time=447.00ms

H:=Int(exp(18.1818*(Hinner)-9.10000*eta)/eta,
       eta = 1. .. 100., digits=15, epsilon=1e-11, method=_d01ajc):

CodeTools:-Usage( evalf(H) );
memory used=15.19MiB, alloc change=9.50MiB, cpu time=234.00ms, real time=239.00ms
                      0.000466659427127310

I find it interesting that the previous attempt fails for epsilon=1e-12 or tighter with that forced method=_d01ajc, for which digits=15 is as high as the hard-coded quadrature weights of that method allow.

By removing the specified method from the previous attempt a result of even greater accuracy can be obtained (but still quite quickly). Digits is set to 1000 only to ensure that the inner "symbolic" integral is an adequately accurate expression. It is the `digits` and `epsilon` options to the outer integral which are key.

restart: infolevel[`evalf/int`]:=1:
Digits:=1000:
Hinner:=Int((0.579160e-1*sqrt(x)*Ei(1., 0.500000e-4*x)
            +(1072.23*(.999950-1.*exp(-0.500000e-4*x)))
            /sqrt(x))/sqrt(x),
            x = 1. .. eta):

Hinner:=CodeTools:-Usage( value(Hinner) ) assuming eta>1, eta<100:
memory used=105.24MiB, alloc change=47.12MiB, cpu time=1.19s, real time=1.18s

H:=Int(unapply(exp(18.1818*(Hinner)-9.10000*eta)/eta, eta),
       1. .. 100., digits=26, epsilon=1e-20):

CodeTools:-Usage( evalf(H) );
evalf/int/quadexp: applying double-exponential method
From quadexp, result = .466659427019467124447527497994e-3 integrand evals = 280 error = .24538325557333259051444632402e-23
memory used=29.27MiB, alloc change=0 bytes, cpu time=327.00ms, real time=322.00ms
                0.00046665942701946712444752750

acer

For a single 3D plot you can put it into a PlotComponent and then resize that component by adjusting its properties.

For a single 2D plot (including multiple curves, etc, all shown together using plots:-display) you could also put into a PlotComponent and adjust its width and height properties.

But for such a 2D plot, perhaps it's even easier to pass the additional option size=[k,p] to the plots:-display command, where k and p are positive integers representing the width and height in pixels at normal magnification. I am guessing that, of all the suggestions I'm making, this might be the most suitable for your situation.

For the case of an Array of plots (plots:-display(Array([...])), say) the plots are shown in cells of a worksheet "table", the sizing properties of which are available through the right-click context-menu (Table Properties).
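As a minimal sketch of the size option mentioned above (the particular curve and pixel dimensions are arbitrary):

```
restart:
plots:-display( plot( sin(x), x = 0 .. 2*Pi ),
                size = [800, 400] );   # 800 px wide, 400 px tall
```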

acer

If you have assigned a PLOT3D structure to a name (eg, `myplot`) then such changes will not be retrofitted to the structure assigned to that name when you do right-click context-menu actions on the inlined plot.

But if you create a PlotComponent and use DocumentTools:-SetProperty to set that plot as the `value` of the component, then after such context-menu actions on the plot shown in the component you can subsequently extract a modified structure using the DocumentTools:-GetProperty command.

For example, in the attached worksheet I first created a zgrayscale plot. Then, after putting it into the PlotComponent, I used the right-click menus to implement a custom "user" lighting. Then when I extract it from the component I pick off the LIGHT and AMBIENTLIGHT substructures and turn them into the syntax for the plot3d or plots:-display commands.

Note that the PlotComponent seems to retain the options set via the right-click. So if you re-run the worksheet the zgrayscale plot (or any 3D plot) will show with that same custom lighting inside the component.

restart:

P:=plot3d(x+y^2,x=-1..1,y=-1..1,shading=zgrayscale,grid=[4,4]):

DocumentTools:-SetProperty(Plot0,value,P);

G:=DocumentTools:-GetProperty(Plot0,value):

alopts:=proc(t) if nops(t)>0 then
                  '':-ambientlight''=[op(t[1])];
                else NULL; end if;
        end proc(indets(G,'specfunc(:-AMBIENTLIGHT)'));

'ambientlight' = [.2, .6, .4]

lopts:=proc(t) if nops(t)>0 then
                  ':-light'=[op(t[1])];
                else NULL; end if;
        end proc(indets(G,'specfunc(:-LIGHT)'));

light = [45.0, 90.0, .5019608, 0., .5019608]

plots:-display(P,lopts,alopts);

 


Download lightstuff.mw

acer

It works for me if I use the method=rkf45_dae or method=rosenbrock_dae options to dsolve(...,numeric), as well as supply another initial condition.

I used Maple 14.01 for 64bit Linux.

For example, it works with the same IC D(h)(0)=Pi/2 used by Dr Subramanian.

ODE := A*(diff(h(t), t))^2+B*(diff(h(t), t))*(h(t)+C)+E*h(t)
       = F*cos(diff(h(t), t)):
ODE_SOLUTION := dsolve({ODE, h(0) = 0,D(h)(0)=Pi/2}, numeric, 
                       method=rosenbrock_dae, range = 0 .. 10, 
                       parameters = [A, B, C, E, F]):
ODE_SOLUTION('parameters'=[0,0,1,1,1]):
with(plots):
odeplot(ODE_SOLUTION,[t,h(t)],0..3);
odeplot(ODE_SOLUTION,[t,D(h)(t)],0..1);

acer

It would be nice if `int` would try more changes-of-variable by itself.

restart:

f := Int( ln(x+1)/(1+x^2), x=0..1 ):

simplify( value( IntegrationTools:-Change(f, x=sin(v)/cos(v)) ) );

                                 1         
                                 - Pi ln(2)
                                 8         

acer

Using Maple 13.01,

fsolve(unapply(.956^2-PpRM(s, .185)-PkRM(s, .895), s), 0 .. 1);

                                0.7739453931

acer

If your procedure sumESQtotal is not written to return unevaluated for nonnumeric inputs then you could try it instead as,

  Minimize( 'sumESQtotal'(A,B) );

with unevaluation quotes. Otherwise Maple will evaluate sumESQtotal at unassigned names A and B.

acer

You could try this: Select with the mouse pointer everything you want converted, then use the right-click context-menu action 2-D Math -> Convert To -> 1-D Math Input.

Another way to do the same thing: from the main menubar choose Edit->Select All (keystrokes Ctrl-a), and then from the menubar Format->Convert To->1-D Math Input (keystrokes Alt-r v i).

The above doesn't change document blocks to pairs of execution groups, or insert "> " prompts, and so on. So it doesn't really reformat a Document as a Worksheet.

Maybe I should think about what it would take to write a procedure which does such a conversion, nicely!?

acer

Yes, that's right. A call to the print statement prints.

That allows a way to force a print even when a for-do loop is terminated with a full colon (say, to suppress an avalanche of other results). It is very deliberate and useful behaviour.

It would be terrible if issuing the restart statement cleared all output from a worksheet.

The easiest way to clear a whole block of previous output in one fell swoop is to make it all the result of a single function call.

Suppose that you have a procedure such as `f` below.

f:=proc()
     readline();
     print(plot(sin));
     printf("hey!\n");
     print("more");
     return "foo";
end proc:

Now, in a new Document Block or Execution Group issue the command,

  f();

The readline call pops up a dialogue, fine. Click OK, and a few various things are printed to the output of the block/group. Now place the cursor on that same call to f() and hit Enter/Return once again. The very first thing that will happen, even before the popup reappears, is that all the old output of that block/group will vanish. Hence I suggest that your "program" be implemented with an entry point which is a procedure or appliable module.

acer

You haven't shared the worksheet/document with us, but it sounds as if there are some fundamental misunderstandings -- perhaps on both sides here.

Quite often people prepare worksheets/documents with many executable paragraphs (document blocks) or execution groups. It is on purpose that re-executing just one of these will not wipe output of all the others. Some results are very time-consuming to compute, and it would be a much weaker GUI that would always require all output be tied together. I don't see the GUI being at fault in what you've described. It sounds to me more as if you're not using the GUI in the most suitable way for your own task.

It also sounds to me as if you might have spread the various inputs and outputs of your game across several or even many blocks/groups. That doesn't seem like a good idea to me, and if that is the case then I suspect you might be best off doing it otherwise.

My surmise could be wrong, but then you haven't really described the way in which your sheet is currently doing output.

How about making a run of the application (game) cover just one or two blocks/groups? Is the code for the game within procedures? If so, then why not have the procedure print carefully only what must be printed (because function call "output" done within an execution group is in fact replaced in one shot)? Why use multiple readline calls instead of maplets or embedded components which could be easily re-used for both input and output (and be easily cleared)?

acer

These two versions contain as much as I see in the file you uploaded originally (which abruptly ended in the middle of a deep subsection).

I attach two variants: one where I just closed up all the outstanding subsections, and the other the same except that I expanded all sections before saving. I also attached a zip file containing both.

FHPBedited.mw

FHPBexpandededited.mw

FHPBedited.zip

Your uploaded worksheet appears to have been last saved with Maple 13.02, so I used that to save the one with all-sections-expanded.

acer

Another mechanism for this is map2 (or map[2]).

L1 := [25,5,1,10,4,20];
                         L1 := [25, 5, 1, 10, 4, 20]

L2 := evalf( map( log10, L1 ) );
    L2 := [1.397940008, 0.6989700041, 0., 1., 0.6020599914, 1.301029996]

L2 := map( log10@evalf, L1 );
    L2 := [1.397940009, 0.6989700043, 0., 1., 0.6020599913, 1.301029996]

map2( `^`, 10, L2 );
        [25.00000002, 5.000000000, 1., 10., 4.000000000, 20.00000002]

map2( round@`^`, 10, L2 );
                            [25, 5, 1, 10, 4, 20]

You haven't said anything about precision/accuracy/efficiency needs, so it's not possible to be definitive about what would be best for you. But avoiding custom procs, using kernel builtins, leveraging evalhf, etc, etc, are aspects that might possibly matter to you.

acer

The keystrokes Alt-v l (lowercase L) collapse all sections and subsections, also available from the menubar choice

View->Collapse All Sections.

I don't know of any way to collapse just the outermost sections.

acer

Using Maple 18.02 in Windows 7 64bit, the closest I could get was to use densityplot instead of listdensityplot (so the "grid" lines appear thinner in the GUI, since the latter produces a bunch of polygons...).

restart:

export_plot_options:=font=[TIMES, roman, 30],
                     axis=[thickness=4, location=origin],
                     size=[850,850]:

P:=plots:-densityplot(exp(-(x^2+y^2)*(1/100)),
                      x = -10.0 .. 10.0, y = -10.0 .. 10.0,
                      export_plot_options,
                      grid=[21,21], scaling=constrained):

subsop([1,1]=0..21.5, subsop([1,2]=0..21.5, P));

[edited] I posted the above in the early morning. I now realize that the obscure double subsop step could be replaced by a single call to plottools:-translate. Ie,

restart:
export_plot_options:=font=[TIMES, roman, 30],
                     axis=[thickness=4, location=origin],
                     size=[850,850]:
P:=plots:-densityplot(exp(-(x^2+y^2)*(1/100)),
                      x = -10.0 .. 10.0, y = -10.0 .. 10.0,
                      export_plot_options,
                      grid=[21,21], scaling=constrained):
plottools:-translate(P, 10.5, 10.5);

Having done that, I now realize that the originally posted example can be handled the same way, using location=origin and a translation. Ie,

restart:
export_plot_options:=font=[TIMES, roman, 30], axis=[thickness=4, location=origin], size=[850,850]:
points := [seq([seq(exp(-(x^2+y^2)*(1/100)), x = -10.0 .. 10.0)], y = -10.0 .. 10.0)]:
P:=plots:-listdensityplot(points, export_plot_options):
plottools:-translate(P, -0.5, -0.5);

Of course, if you don't like the ranges 0..21 then you could adjust to taste. I'm not really sure whether you wanted the axes' ranges to be -10..10, or 0..21, or 0..21.5, etc. Or you might even want to go with densityplot and no translation, as it produces a nicer looking (to me) plot with less obtrusive gridding, and contains an efficient float[8] rtable rather than many polygons of list data.
