acer

33193 Reputation

29 Badges

20 years, 214 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@Preben Alsholm Yes, thanks, that's why I included my second example. It's an oddity amongst oddities.

> restart:

> eval(`if`(r,sin(Pi),p),r=true); # hmm

                                    sin(Pi)

> f:=x->x:

> eval(`if`(r,f(r),p),r=true);

                                     true

> eval(`if`(r,f(2),p),r=true); # hmm

                                     f(2)

> seq(`if`(r,f(2),p),r=true);

                                       2

It looks like a bug.
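A possible workaround (just a sketch; the extra full-evaluation pass is my assumption about what is needed, not confirmed behavior): wrap the two-argument eval in a plain single-argument eval, so that the unevaluated f(2) gets a chance to complete.

```maple
restart:
f := x -> x:
eval(eval(`if`(r, f(2), p), r = true));  # the outer eval should yield 2
```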

acer

What is the purpose of the first loop in the SampleQ procedure (that loops for i from 1 to Size)?

acer

@AndreaAlp It wasn't clear to me what your exact problem was. I thought that perhaps you wanted emitted Matlab code that contained elementwise ~ operators, but weren't getting them.

Perhaps you could upload a short but representative example, so that it would be clearer what you need.
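For instance, something along these lines (a hypothetical sketch; the procedure name and body are my invention, not your code) would show whether the elementwise `~` operations survive translation:

```maple
# hypothetical example: does elementwise *~ get emitted as Matlab's .* ?
g := proc(V::Vector) V *~ V end proc:
CodeGeneration:-Matlab(g);
```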

I don't think that it is clear what you mean by, "Since g(x,y) has stationary  point at (0,0), I have to interpolate f(x,y) except (0,0) but also quite close to (0,0)."

Could you please justify that? Why do you believe that numeric quadrature would fail? Why do you believe that only a symbolic interpolatory result would suffice? And what would you do with such a result?

If you only intend to use a final symbolic interpolant for plotting or other computational tasks, then the assertion that only a symbolic interpolant would suffice seems suspect. A numeric black-box interpolating function might also do.

It's not even clear why you must interpolate, if you can poll your `f` and `g` at any x-y pair.
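For example, a numeric black-box interpolant could be built from sampled data on a grid that simply omits (0,0) (a sketch, with a stand-in expression in place of your actual f):

```maple
# hypothetical data on a grid that avoids the point (0,0)
xs := Vector([seq(0.1*i, i = 1 .. 10)]):
ys := Vector([seq(0.1*j, j = 1 .. 10)]):
Z  := Matrix(10, 10, (i, j) -> evalf(sin(xs[i]*ys[j]))):  # stand-in for f(x,y)
# black-box interpolating procedure
F := (x, y) -> CurveFitting:-ArrayInterpolation([xs, ys], Z, [[x, y]],
                                                method = spline)[1]:
F(0.25, 0.35);
```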

acer

I may be missing something, but is there some significant way (performance, or otherwise) in which this differs from splitting up and divvying out the work with the Threads:-Task model, to parallelize? (See the attachment in my worksheet in the comment to your earlier response above, where I did that.) It seemed to me that both get a small bit of speedup, but not by a large factor.

...Almost as if some Maple-level overhead was dealt with better (parallelized, perhaps) while a significant portion of the computation (the externally called bit, perhaps) was done in the same manner.

And CodeTools:-Usage shows a similar value for both the wallclock time [real] and the cpu time (summed over all threads), which suggests that not all of the code run under the Maple threads may actually be executing concurrently. See my comments about possible mutual blocking by call_externals -- which is not confirmed.

It is also possible that multiple cores (not maple threads) may already have been doing some of the external work in parallel, via BLAS.
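For reference, the Task-model splitting I have in mind looks roughly like this (a sketch with a toy workload; the attachment mentioned above has the real version):

```maple
# toy workload: sum sin(i) over a subrange
work := proc(a, b) local i, s;
  s := 0;
  for i from a to b do s := s + evalf(sin(i)) end do;
  s;
end proc:
# recursively split the range into Tasks, combining results with `+`
split := proc(a, b)
  if b - a < 1000 then work(a, b);
  else Threads:-Task:-Continue(`+`,
         Task = [split, a, iquo(a + b, 2)],
         Task = [split, iquo(a + b, 2) + 1, b]);
  end if;
end proc:
CodeTools:-Usage(Threads:-Task:-Start(split, 1, 10^5));
```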

acer

@Preben Alsholm Indeed. One of the important reasons that the `parameters` option exists is that it allows one to avoid a considerable amount of repeated preliminary overhead involved in code such as in 3) above.

In 3), dsolve/numeric is called each time the procedure is invoked, to solve for a different `p` value. This entails all the necessary determination of the nature of the IVP, the setup of internal structures for storing results, and so on.
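With the `parameters` option that setup happens once, and subsequent parameter changes are cheap. A minimal sketch (the ODE here is just an illustration, not the code from 3)):

```maple
# one-time setup of the numeric IVP solver, with p left as a parameter
sol := dsolve({diff(x(t), t) = p*x(t), x(0) = 1}, numeric, parameters = [p]):
sol(parameters = [p = 2.0]):  # set a new p value; no re-setup of the IVP
sol(1.0);                     # evaluate the solution at t = 1
```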

For various values of t1 there will be more than a single real value of t2 among the roots of mrdot0. The Asker may wish to consider whether it matters which value of t2 is taken.

If it matters which value of t2 is accepted then either fsolve's `avoid` option could be used alongside repeated fsolve calls, or RootFinding:-NextZero might be used with judicious use of the starting value (based on the previous t1's accepted t2 value?) and the `maxdistance` option (based on t1?).

Or you could take the `min`, say, of multiple results from `solve`. (...which does what? Call evalf/RootOf repeatedly using fsolve & avoid? It might be faster to use NextZero, using unapply just once of course.)

Also (implied in Carl's Answer) the first call to `solve` for mrdot0 would need to be brought inside the t1-loop for it to be changed to a numeric fsolve (or NextZero) call, so that t1 has a numeric value each time the root is computed.
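To illustrate the `avoid` and NextZero mechanics (with a stand-in expression, since I don't have mrdot0 in front of me):

```maple
g := t2 -> sin(t2) - 0.3:   # hypothetical stand-in for mrdot0 at a fixed t1
r1 := fsolve(g(t2), t2 = 0 .. 10);
r2 := fsolve(g(t2), t2 = 0 .. 10, avoid = {t2 = r1});  # a different root
# or walk rightward from a starting point:
s1 := RootFinding:-NextZero(g, 0.0);
s2 := RootFinding:-NextZero(g, s1, maxdistance = 10.0);
```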

acer

Carl, you wrote, "It is necessary to assume that they are positive."

What if `A` and `B` are of opposite sign, or one of them is zero, or both of them are negative with `a` an integer?
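A quick check of why the assumption matters (my own example, choosing a branch-cut case):

```maple
expr := A^a * B^a - (A*B)^a:
simplify(expr) assuming positive;       # should give 0
# but with both negative and a = 1/2 the combination fails:
eval(expr, [A = -1, B = -1, a = 1/2]);  # I*I - 1 = -2, not 0
```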

acer

Have you considered upgrading to Maple 7 (2001) or later?

acer
