acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Hi Alec,

The main difference between d01amc and d01smc is that the latter is supposed to be thread-safe (in its own right). So, if the Maple integrand were also known to be thread-safe, then the define_external call to the wrapper around d01smc could reasonably get the 'THREAD_SAFE' option. Without that option to define_external (which is presently the case under `evalf/int` in M13 and M14), access to the library containing the NAG function entry points is only allowed for one thread at a time. See ?define_external for a paragraph on this.
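For illustration, a define_external call carrying that option might look roughly like this. This is a hypothetical sketch: the entry-point name, argument list, and library name are invented placeholders (not Maple's actual internal wrapper); only the 'THREAD_SAFE' option itself is the real define_external feature being discussed.

```maple
# Hypothetical sketch: 'my_quad', its arguments, and the LIB value are
# invented for illustration; 'THREAD_SAFE' is the real option that permits
# concurrent access from multiple Maple threads.
my_quad := define_external(
    'my_quad',
    'THREAD_SAFE',
    'f'::PROC( 'x'::float[8], 'RETURN'::float[8] ),  # callback integrand
    'a'::float[8],                                   # lower bound
    'epsabs'::float[8],                              # accuracy request
    'RETURN'::float[8],
    'LIB' = "libmyquad.so"
);
```

Without 'THREAD_SAFE' in that call, Maple serializes access to the external library, which is the point at issue above.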

But it's a tricky thing to automate correctly in general: the Maple integrand might be thread-unsafe. It's easier for everyone when the evalf/Int call is explicit in the user's code, but what about code that calls a Maple Library routine which in turn calls evalf/Int internally? In that case there is no convenient and automatic way to pass an evalf/Int option which correctly toggles the relevant define_external call. The user's code might be using Maple Threads, but correctness may still depend on recognizing that some particular integrand is actually thread-unsafe.

The d01smc routine also has an extra (new, relative to d01amc) argument of type (struct) Nag_Comm, for communicating with the function. But exposing that via evalf/Int might not be worthwhile for most people.

What I'm trying to suggest is that, in the absence of the relevant define_external call having its 'THREAD_SAFE' option, the choice between calling out from Maple to d01amc versus d01smc may be of little or no consequence. And the question of when and how to allow that define_external option might be problematic.

I don't know of a fast, efficient, and robust piece of code to ascertain thread-safety of a Maple routine. What with lexical scoping and other things, there are many devious ways for a Maple procedure to write to a higher scope. A user-defined integrand in operator form might have 'option threadsafe' or a similar flag. But in expression form, an integrand could contain a quoted function call to some other thread-unsafe procedure. And so on...

So perhaps it would be better for evalf/Int only to allow the user to select the thread-safe NAG function explicitly, either by name like _d01smc, _d01skc, etc., or by a general option such as, say, 'allowthreads', or by passing the integrand as a proc with 'option threadsafe' (which is a made-up term).
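Note that forcing a particular NAG method by name already works today; it is only the 'allowthreads' and 'option threadsafe' spellings above that are invented. The existing syntax looks like:

```maple
# Forcing the semi-infinite-range NAG routine by name (real syntax):
evalf( Int( exp(-x)*sin(x), x = 0 .. infinity, method = _d01amc ) );
```

A thread-safe selection could plausibly follow the same pattern, e.g. method = _d01smc.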

acer

Hi Axel,

Yes, it looks as if `evalf/int/control` might use `evalf/int/CreateProc` or something to make a proc from an expression-form integrand.

But before that happens, `evalf/int/control` seems to really poke expensively at an expression-form integrand when a NAG method is not specified. It may be looking for singularities. I can understand why it might do that: it wants to compute tough problems correctly by default. I wouldn't mind seeing another way to prevent that cost.

acer

Thanks, Alec. The minor mystery is now solved (see your Edit).

But it does puzzle me that evalf/Int can do so much more (symbolic?) work, digging away at investigating the integrand when it is in expression form. I suppose that passing the integrand as a procedure/operator merely tricks it into treating the integrand as a black box that cannot be poked. It's a little worrisome that one has to use a cheap trick, or know which forced method is appropriate, to disable it. It might be nicer to have an optional parameter that controls it, e.g. discont=false (assuming that I'm right and that it is discontinuity checking that makes the difference) or whatever.
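To make the contrast concrete, here is the shape of the trick (a sketch; the integrand is arbitrary and timings will vary):

```maple
# Expression form: evalf/Int may probe the integrand symbolically
# (e.g. for discontinuities) before starting numeric quadrature.
evalf( Int( sin(x)/x, x = 1 .. 1000 ) );

# Operator form: the integrand is a black box, so no symbolic probing.
evalf( Int( x -> sin(x)/x, 1 .. 1000 ) );
```

Both return the same numeric result; only the pre-analysis cost differs.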

acer

Are there any ideas for a more lightweight interface to MaplePrimes which might be more convenient on a mobile device (e.g. a smartphone)?

I am thinking about the sorts of issues that one encounters when accessing a dense site from a mobile device. There are a few sites (google, yahoo, wikipedia, facebook) where a specialized entry-point can make a world of difference.

Accessing the present (v.1) MaplePrimes isn't that great on a smartphone. I am wondering whether the imminent v.2 MaplePrimes might eventually offer a better experience.

acer

Hi Alec,

On my 64bit Linux Maple 13.01, the operator form I showed and the forced _d01amc method took very similar times. Trying each many times, always in fresh sessions (plotting from 2..100, say), showed as much timing variation for either alone as between the two of them.

I have found that the disable-discont-probing-using-operator trick can avoid timing blowups for some finite ranges too. Using forced methods like _d01amc for semi-infinite ranges, or _d01akc for oscillating integrands, etc, can mean that one has to remember or look up their names and purposes. So I tend to advise trying the operator form first for this trick (if one believes that there is less risk to avoiding discont checks).

I saw plot output for only the range 100..110 using either method. But I think I might know what you mean. Using the Standard GUI, the very small value is not plotted if there is a much larger value shown. So, for example, plotting f (or PP) from 100 to 200 only shows the plotted red line up to about 118 or so. This seems to be a Standard plotting bug. Similarly if plotting from only 200 to 300, or from only 300 to 400. I wonder if it's a single-precision display issue, or something. It seems to have trouble with the plotted red line varying by more than a factor of about 10^8. The problem does not seem to occur in the commandline interface.

acer

It was slightly more interesting when the arguments could get passed separately, as opposed to this code where the Maple command has to be preformatted. In this code, some user or other higher-level program must format or type out the Maple command. It's really just answering a different (IMO, easier) question.

I guess it depends on who's using it, and for what purpose and in what manner.

For example, this code could be more awkward to work with in a scenario where various commandline arguments were intended to be injected at multiple disjoint locations between distinct, given maple code fragments.

acer

The HDF5 technology suite is not just a storage format. It includes a library which implements a C application programming interface (API). And Maple has a C API too (via external calling or custom wrappers).

So it seems to me that it shouldn't be too hard to get Maple to talk to some parts of the HDF5 library, via Maple's external calling interface. For example, one might have Maple access the HDF5 library's H5TBread_table function.

For other HDF5 structure aspects (groups, b-trees) one natural question is: what do you want it to end up as in Maple? For pure datasets, the matter seems simpler.
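As a rough sketch of what such a hookup might look like: the C signature of H5TBread_table comes from the HDF5 high-level Table API, but the Maple type mappings chosen here (hid_t and size_t as integer[8], dst as a float[8] array) and the shared-library name are guesses that would need checking against one's own HDF5 build and platform.

```maple
# Hedged sketch of binding HDF5's H5TBread_table via define_external.
# ASSUMPTIONS: integer[8] for hid_t/size_t, float[8] records in 'dst',
# and "libhdf5_hl.so" as the high-level library name.
H5TBread_table := define_external(
    'H5TBread_table',
    'loc_id'::integer[8],                       # file or group identifier
    'table_name'::string,                       # name of the table dataset
    'dst_size'::integer[8],                     # size of one destination record
    'dst_offset'::ARRAY(datatype = integer[8]), # field offsets within a record
    'dst_sizes'::ARRAY(datatype = integer[8]),  # field sizes
    'dst'::ARRAY(datatype = float[8]),          # destination buffer
    'RETURN'::integer[4],                       # herr_t status (negative on error)
    'LIB' = "libhdf5_hl.so"
);
```

Opening the file (H5Fopen) and querying the table's field layout would need similar bindings before this could be called.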

acer

Let's forget about evalhf for a moment, and a forced quadrature method, and try to consider the numerical difficulties. One can crank up the working precision of `evalf/int` while keeping the numerical quadrature accuracy requirement loose, supposedly independently of Digits.

> integrand:=proc(t)
>     return subs({_n = n, _t = t}, eval(proc(x)
>     local k;
>         return x*(1 - add(binomial(_n, k)*(1/10*x - _t/10 + 1/2)^k*
>         (1/2 - 1/10*x + _t/10)^(_n - k), k = 0 .. ceil(10 - 1/5*x) - 2));
>     end proc));
> end proc:
>
> n:=7:
> numericIntegral:=t->evalf(Int(integrand(t),t-5..t+5,
>        digits=DD,epsilon=EE)):
>
> # These first 2 plots suggest a smooth monotonic function,
> # which might be expected to cross the x-axis "nicely".
> Digits,DD,EE:=10,10,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> Digits,DD,EE:=10,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> # What, then, should we make of these?
> Digits,DD,EE:=20,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

> Digits,DD,EE:=100,100,1.0*10^(-3):
> plot(numericIntegral,5.0..5.000000001);

Moreover,

> Digits,DD,EE:=100,100,1.0*10^(-3):
> fsolve(numericIntegral,4.0..5.0): evalf[40](%);
                   4.768430018711191230674619128944015414870

> numericIntegral(%%): evalf[40](%);
                                                            -116
               0.1947217706342926765829506543174745536626 10

> plot(numericIntegral,4.0..5.0); # looks ok
> plot(numericIntegral,4.76..4.80); # looks ok

> Digits,DD,EE:=10,10,1.0*10^(-3):
> plot(numericIntegral,4.0..5.0); # the mess

So, can the jitter in numericIntegral be put down to not enough working precision alongside too loose an accuracy tolerance during the numeric quadrature?

note: I think I got the same results even with liberal sprinkling of forget(evalf) and forget(`evalf/int`).

acer

No worries.

Does it work for you if you simply issue,

> f();

without wrapping it in a DocumentTools:-Do call?

I guess that I assumed that anyone would run the proc `f` that I'd provided. If one doesn't call it, a proc can not do much.

acer
