MaplePrimes Activity


These are replies submitted by acer

It's tricky but possible to get at the Theta/Phi/Psi values in the GUI's internally stored state programmatically, following rotation of a 3D plot with the mouse cursor. But I'm not sure that it can be done continuously (that is, repeatedly during a single set of movements). And it sounds as if you want it to update repeatedly during any single set of mouse movements over the plot. Is that right? Is that a strict requirement?

If you'd be OK with having a Slider component for each of the three quantities Theta/Phi/Psi, and using only those to do rotations, then sure, you can have the current values displayed and updated continuously while any one slider is adjusted. The rotations could be accomplished, in this scenario, by doing on-the-fly updates of the 3D plot in an embedded PlotComponent, as in the sketch below. This may not be satisfactory for you.
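Just to indicate the shape of it, here is a minimal sketch of the kind of code that could go in each Slider's value-changed action (the component names "Theta", "Phi", "Psi", and "Plot0", and the particular surface, are placeholders of my own, not from your worksheet):

use DocumentTools in
    SetProperty("Plot0", 'value',
        plots:-display(
            plot3d(sin(x)*cos(y), x = -2 .. 2, y = -2 .. 2),
            # slider values typically come back as strings, hence the parse
            'orientation' = [parse(GetProperty("Theta", 'value')),
                             parse(GetProperty("Phi", 'value')),
                             parse(GetProperty("Psi", 'value'))]));
end use;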

acer

@Carl Love I don't quite see how this guards against the situation in which all three roots are close. If they are, then it might generate the 2nd and 3rd "highest-in-t" roots.

I don't see how to compute safely which narrow `t` range to use, either, or how high Digits has to be at the nth step. Unless there is some more knowledge, such as (and I'm making this up now) that the close root pairs get closer from iteration to iteration, that the Digits from the nth step is always enough to resolve the (n+1)th close pair, that the 3rd root is not as close, etc., I don't see much that is as safe as trying to find (potentially) all three roots.

I haven't studied the modeling though. There may be justifications, based on the math.

ps. If the suggested process makes sense, then you could tweak it a little so that it stops looking once 3 roots are found (at a given iteration), as sketched below. That would save a bit of time. But it would still involve the costly fail-out for any iteration at which there are only 1 or 2 roots. This is all under the assumption that, as you wrote, there are at most 3 roots at each iteration.
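Here's one way that tweak might look, as a minimal sketch using repeated fsolve calls with its avoid option and a stand-in equation (sin(t) = t/4 is mine, chosen because it has exactly three real roots; your actual equation and t-range would go in its place):

eq := sin(t) - t/4:
r := fsolve(eq, t):
found := {t = r}:
while nops(found) < 3 do
    r := fsolve(eq, t, 'avoid' = found);
    if not type(r, numeric) then break end if;  # returned unevaluated: no further root found
    found := found union {t = r};
end do:
found;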

acer

Can you use the `avoid` option of fsolve?
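For example, with a stand-in equation having roots at 0 and near ±1.8955:

fsolve(sin(t) = t/2, t);                      # returns one of the roots
fsolve(sin(t) = t/2, t, 'avoid' = {t = 0});   # excludes t=0 from the search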

acer

As Robert Israel (and at some earlier date, Alec Mihailovs?) has shown, it can be done without recourse to integration.

Is there an easy way to change the color of the shaded region, when using VolumeOfRevolution? I'm not sure.

This could be put into a procedure, which might be able to accept general ?plot/options more readily than VolumeOfRevolution does. Here's a crude shot at that:

restart:

RegionBetweenCurves:=proc(f1,f2,rng::name=range(numeric))
    local v1, v2;
    v1:=lhs(rng);
    # Fill between f1-f2 and the horizontal axis, then shift that
    # filled region vertically by f2 so it lies between the two curves.
    plottools:-transform(unapply([v1,v2+f2],v1,v2))
       (plot(f1-f2,rng,'filled'=true,_rest));
end proc:

RegionBetweenCurves(x^2-1, -x-1, x=-1.5 .. 1.5, color=gold);
RegionBetweenCurves(x^2-1, -x-1, x=-1.5 .. 1.5, color=cyan);
RegionBetweenCurves(sqrt(x-2), x-4, x=4..6);

I do notice a change in behavior between Maple 15 and Maple 16 here. In Maple 16 one of the boundary curves may be displayed more darkly than the filled region, depending on the choice of color. In Maple 15 the shading appears uniform for both region and boundary. Compare with color=gold, for example.

It would be nicest to have the region shaded exactly like the boundary. A less attractive fallback is to force both boundary curves to display with the same shading as each other, even if that is darker than the region. The variant below does that by overlaying explicit plots of the two curves:

restart:
RegionBetweenCurves:=proc(f1,f2,rng::name=range(numeric))
    local v1, v2;
    v1:=lhs(rng);
    plots:-display(
      # the shifted filled region, as before
      plottools:-transform(unapply([v1,v2+f2],v1,v2))
         (plot(f1-f2, rng, 'filled'=true, _rest)),
      # overlay the two boundary curves with the same options
      plot([f1,f2], rng, _rest) );
end proc:
RegionBetweenCurves(x^2-1, -x-1, x=-1.5 .. 1.5, color=gold);
RegionBetweenCurves(x^2-1, -x-1, x=-1.5 .. 1.5, color=cyan);
RegionBetweenCurves(sqrt(x-2), x-4, x=4..6);

acer

I wonder, is there a real root somewhere "near" x=1.59e7 or so? I had some trouble with it, due to scaling issues I suppose.

acer

@PatD The procedure Cproc[5] evaluates the 5th expression from your `C` (for the fixed a1 and a2 values you gave), having been produced from that expression as an optimized procedure. When Cproc[5] is called, its arguments are used as values for the remaining variables (given that a1 and a2 have been fixed).

The procedure f[5] also accepts the same kind of arguments as Cproc[5]. In fact, all that f[5] does is raise the working precision (Digits) and then pass the arguments on to a call to Cproc[5].

Digits is an environment variable, and as such will be inherited in the call to Cproc[5] done within f[5]. So f[5] is just a slick way to get Cproc[5] to compute at higher working precision without having to raise Digits at the top level.
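In case it helps to see the shape of it, here is a minimal sketch of such a wrapper (Cproc[5] stands for the generated procedure, and 500 is an arbitrary choice of precision):

f[5] := proc()
    Digits := 500;     # environment variable: the raised value is seen by
                       # the Cproc[5] call below, and restored upon return
    Cproc[5](_passed);
end proc: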

I gave an example, where one can see that f[5] and Cproc[5] return results that differ in something like the 3rd decimal digit. This shows what you said, that the expressions are sensitive to working precision. Hopefully the result from f[5] is accurate enough.

And similarly, for all the f[i] and Cproc[i] for i=1..9.

I created such f[i] because of what Carl mentioned about fsolve. When you call fsolve it has to figure out its stopping/acceptance criteria, and it bases those on the number of variables and on Digits. If you want your expressions evaluated at high working precision (because of roundoff error or numerical instability) then the temptation is to raise Digits high at the level from which you call fsolve. But raising Digits at that level also causes fsolve to use a much tighter accuracy tolerance, which again does not get met. It's a push-me/pull-me dichotomy. What fsolve does not offer are options to raise its working precision to a user-specified value while forcibly keeping the accuracy requirements low. The problematic scenario is one in which no value of Digits at the level where fsolve is called provides working precision high enough for the expression evaluations to meet fsolve's accuracy tolerance (based on that same Digits value).

And that's where the f[i] come in. Using them as replacements for the Cproc[i], we can have the expressions evaluated numerically at high working precision while allowing fsolve to still use the lower Digits setting (at the outer level at which it's called) and thus make far weaker demands for acceptance of a root.

The idea is to leave Digits as it is, say at the default value of 10. Then call fsolve using the f[i]. Internally, fsolve will try to meet an accuracy tolerance based on Digits=10. And that will likely never succeed unless the individual expressions can be numerically evaluated to something near 10 accurate digits or so. And such accuracy for the numerical evaluations of the expressions may require a very high working precision indeed. (You may wish to experiment with the formation of the f[i], to see just how high they have to set Digits locally.)

This all assumes that fsolve is being called with its first argument as a set of procedures, rather than a set of expressions. Hence the parameter ranges are supplied like [...,1..900,...] instead of [...,c3=1..900,...], etc. This is similar to how `plot` and the `Optimization` routines treat ranges differently, according to whether the input is in procedure or expression form.
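For comparison, `plot` makes the very same distinction between its expression and operator (procedure) calling sequences:

plot(sin(x), x = 0 .. Pi);   # expression input: the range is named
plot(sin, 0 .. Pi);          # procedure input: the range has no name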

 

I don't know whether it's useful here, but one can write and use an extension to the programmatic convert mechanism for this. It might not be needed for this particular case, since the right-click context-menu acts nicely in place on a 2D Math table-reference input. Also (except in Maple 16, where it's weird?) there is the subliteral entry on the Layout palette, which allows one to enter such atomic subscripted names.

Anyway, using this code,

`convert/identifier`:=proc(x)
  # Typeset the expression, convert the result to use global names,
  # and prepend `#` to form an atomic (typeset) name.
  cat(`#`,convert(convert(:-Typesetting:-Typeset(x),
                           `global`),
                   name));
end proc:

T:=convert(H[deg],identifier);

`#msub(mi("H"),mi("deg"))`

lprint(T);

`#msub(mi("H"),mi("deg"))`

T - H[deg];

`#msub(mi("H"),mi("deg"))`-H[deg]

Download convertidentifier.mw

@Carl Love But that is not a comprehensive solution that can be applied automatically and easily. It's easy to accommodate any given dependency/assignment chain, but that does not mean that all unknown chains will be so easily handled automatically.

And grouping variables in a set does not help with the posted question's request for handling "specific variables". Knowing the memory used by a set of names such as {a,b,c,d} doesn't tell us which is which in an assignment chain, say, or which of them is primarily responsible for the allocation.
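As a small illustration of why per-name accounting can mislead, consider a chain in which two names reference the very same storage (the data here is a placeholder, just to show the effect):

a := [seq(i, i = 1 .. 10^5)]:
b := a:                                # b shares a's storage entirely
length(a), length(b);                  # the same size gets reported twice
evalb(addressof(a) = addressof(b));    # true: one and the same dag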

We still don't know what the Asker really wants, though, and why.
