acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Compare,

restart:
infolevel[solve]:=2:
solve({z*x^3-7*y*z,z*x*y-x+z^3*y^2-4,z*x^2-x*z^5-1},{z,x,y});

restart:
infolevel[solve]:=2:
solve({z*x^3-7*y*z,z*x*y-x+z^3*y^2-4,z*x^2-x*z^5-1},{z,x,y},split);

Supplying the 'split' option seems to force some factorization. Examination of SolveTools:-findsubs(), which gets called by SolveTools:-PolynomialSystem(), shows lines like,

    if `solve/split` = true or 50000 < leqns then
        userinfo(2, solve, `too big, factorization attempted, size=`, leqns);
        eqns := map(factor, ieqns);

acer

It must be an oversight that the Explicit option is not documented on that page.

The name Explicit is not protected and might be assigned, which is why it's safer to supply it quoted as the option name. It might also be the name of a local, if you use it within a procedure. Hence, for safety, I prefer the syntax which uses the quoted global name, e.g.,

solve(x^4+x-1,x,':-Explicit');

You can also control the behaviour of `solve` in this regard using the environment variable _EnvExplicit. That, I believe, is documented in ?solve,details.
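
For example (a small sketch; the exact form of the results varies by Maple version):

restart:
solve(x^4+x-1, x);      # default: solutions expressed via RootOf
_EnvExplicit := true:
solve(x^4+x-1, x);      # explicit radicals, as with the 'Explicit' option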

The other environment variables that control solve are,

  • _EnvConditionalSolutions
  • _EnvTryHard
  • _EnvExplicit
  • _MaxSols
  • _EnvAllSolutions

The environment variable _SolutionsMayBeLost is also used by solve, but as output rather than as input.

It would be nice if it were a unified help-page policy to list the relevant environment variables all together, right under the calling sequence.

acer

Hi Doug,

Usually I have in mind "things that one enters directly, or commands in a Maple script or worksheet". But that's not very precise. So maybe one could use the term to refer to calls which have no parent environment or scope. No doubt Jacques could tell us the formal CS term for it.

Note that some things are, as expected, slower at the top-level, because everything evaluates fully there. Within a procedure, by contrast, locals and parameter references may get just 1-level evaluation. One can quite easily cook up examples which run much faster within a procedure than outside it at the top-level, because of this.
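
A minimal illustration of the difference (this assignment chain is my own stock example, not taken from the help pages):

restart:
a := b: b := c: c := 1:
a;      # top-level: full evaluation, yields 1
f := proc() local a, b, c;
    a := b; b := c; c := 1;
    a;  # 1-level evaluation: yields the local b rather than 1
end proc:
f();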

So, within a worksheet or document, everything typed and displayed as input is (almost always; see ps.) parsed and evaluated at the top-level. A further, broader, and quite noticeable slowdown at the top-level is due to 2D Math input alone. I'm tempted to say "put Typesetting in the kernel", but I'm not really sure just how much of the effect is due to (Java) code in the GUI itself.

ps. Next Easter egg: special-evaluation rules, (effectively, if not literally) at the top-level.

acer

I have been noticing this effect recently. Several of the worksheets and Documents that MaplePrimes members have uploaded are written in 2D Math, and are often full of top-level code. Re-executing these documents often involves a surprising delay.

It seems worse on 64-bit Linux, where I can almost watch the 2D Math parser and Typesetting system crawl through each individual command. It's like using a 1200 baud modem all over again.

Actually, even using procedures doesn't avoid it completely, as the system pauses also while digesting the definition of procs written in 2D Math. Of course, subsequent execution of the procedures is OK.

acer

I'm not sure whether the "information jumps" to which Jacques alluded are better measured as (the ratio of) absolute deltas or actual entries.

Eg,

plots[pointplot]([seq([i, (S[i]-S[i+1])/(S[i+1]-S[i+2])], i = 1 .. 50)]);

That seems to bring out jump points that match reasonably well with the reciprocal-log plot in another comment below.

Like so much else with plots, it's sometimes far easier to see a quality than to measure and detect it with code in a reliable way.

acer

Thanks for clarifying that, Doug.

Apart from this Table element mutability issue, what would you want to see improved in Maplets? The way in which only a child maplet can have the current focus, but not the parent (until the child is closed), is another issue.

And how would you rank the importance of elements missing from Embedded Components? What about the inability to programmatically control or define those components?

You've obviously had a great deal of experience with Maplets. Where do you think efforts would be best spent?

acer

It does seem like there are jumps, at elements 2, 4, 8, 15, 18, 21, 26, 28, ...

plots[pointplot]([seq([i, 1/abs(ln(S[i]))], i = 1 .. 30)]);

acer

Is this illustrating how it might be done in Matlab?

acer

This looks like a job for simulation, to me. The sort of thing where one starts off with a set of rank correlation coefficients, and possibly a joint distribution or copula. I am not an expert in this area. If you are lucky, Axel Vogt might have some good advice for you. I too am curious as to how to do this with Maple -- whether it can be done reasonably easily with the Statistics package, or whether some key subtask would be facilitated by the Financial Modeling Toolbox.
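
For what it's worth, here is a rough sketch of the Gaussian-copula route using the Statistics package, for a hypothetical 2x2 target correlation matrix. It is only a sketch, and untested: the Cholesky call, the numeric CDF evaluation, and the distribution names should all be checked against your Maple version.

restart: with(Statistics): with(LinearAlgebra):
n := 1000:                              # sample size (arbitrary)
C := Matrix([[1.0, 0.7], [0.7, 1.0]]):  # desired correlation (illustrative)
L := LUDecomposition(C, method = 'Cholesky'):
Z := Matrix([convert(Sample(Normal(0, 1), n), list),
             convert(Sample(Normal(0, 1), n), list)]):  # independent normals
X := L . Z:                             # correlated normals
# push through the normal CDF, then the logistic quantile, so that the
# margins are logistic while the copula (rank) structure is retained
Y := map(x -> Quantile(Logistic(0, 1), CDF(Normal(0, 1), x, numeric)), X):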

acer

I don't understand what exactly it is that you want.

Do you want to generate samples of 10000 elements from each of 20 logistic random variables, so that their correlation matrix fits a 20x20 matrix which you know and supply in advance? Are you asking how to produce the 20 sets of random deviates, given the 20x20 matrix of desired correlation values? (I believe that is how Jacques, and consequently Robert, have responded.)

Or do you simply want (somehow) to generate 20 logistic random variables (correlated in some way, or not) and then compute their sample correlations as product-moment coefficients? If so, then do you care how it's done, so that the samples are deliberately made not to be uncorrelated?

acer
