acer

32343 Reputation

29 Badges

19 years, 327 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I believe that you can insert a URL by selecting some existing text with the mouse and then, with that text highlighted, using the Insert/Edit Link button.

If you haven't yet written that part of the text then you can insert the link first and then edit it afterwards as source. By that I mean switch to source-mode using the "Source" button at the top of the Editor. Then edit by hand what lies between the <a..> and </a> (leaving the href part alone).

It's mildly inconvenient if the text portion cannot be entered or edited in the Insert/Edit Link pop-up.

That numeric integral is, I think, hard to compute with good accuracy, something the author did not address. How accurate an answer did he want? How accurate an answer did he receive from Mathematica? He appeared to have ignored Mathematica's printed advice: he bumped up the MaxRecursion parameter but did not choose Method->Oscillatory. He also did not mention that Maple's evalf(Int(...)) has documented, separate options to control both the working precision and an accuracy tolerance.

What I liked about the example is that it illustrates a dichotomy for software. Should it always just try to be super smart and magically handle everything? Or should it provide lots of powerful options and controls to assist with the nastier problems? How should it balance ease of use versus power? And how should it guide the user? Maple's evalf(Int()) can dump out a lot of userinfo messages if infolevel[`evalf/int`] is set higher than 0. But I don't think that it will use Maple's WARNING() mechanism to offer succinct advice when it fails to converge and returns unevaluated.
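
To make that concrete, here is a sketch (the option names are as documented on the ?evalf,Int help page; the tolerance and precision values are purely illustrative):

```
# Turn on diagnostic userinfo messages from the numeric integrator.
infolevel[`evalf/int`] := 3:

# epsilon controls the accuracy tolerance, digits the working precision.
evalf(Int(BesselJ(0, 50001*x), x = 0 .. 1,
          epsilon = 1.0e-6, digits = 15));
```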

An alternative view of the performance on that example is how the two systems behaved when the author threw a nasty problem at them -- likely deliberately in a naive way. On the one hand I think that Maple does have more documented controls than the author claimed to have found (he hinted that he tried stuff), but on the other hand Mathematica was helpful.

Hmm. Why isn't Warning a help alias for WARNING?

acer

I get a lot of mileage out of trying to adhere to a simple principle: use function parameter names which do not coincide with global names in my expressions. It helps me keep things straight.

For example, you probably wouldn't have been puzzled at all if you'd written it as,

N1 := u+v;
M1 := (a,b) -> N1;

The unapply function is nice. Without it, we might be writing things like,

N1 := u+v;
M1 := (a,b)->subs({u=a,v=b},N1);
M1(s,t);

..and then we could get confused by this,

N1 := u+v;
M1 := (u,v)->subs({u=u,v=v},N1);
M1(s,t);

and resort to this,

N1 := u+v;
M1 := (u,v)->subs({:-u=u,:-v=v},N1);
M1(s,t);

And then we might realize that the above wouldn't work at all inside a procedure in which u and v were locals. In desperation,

p:=proc() local u,v,N1,M1;
  N1 := u+v;
  M1 := subs({U=u,V=v},(u,v)->subs({U=u,V=v},N1));
  M1(s,t);
end proc:
p();

And then we'd go crazy.
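
By contrast, unapply sidesteps the whole mess, since it evaluates the expression first and only then abstracts the parameters. It also works when u and v are procedure locals:

```
N1 := u + v;
M1 := unapply(N1, u, v);   # evaluates N1, then builds the operator (u,v) -> u+v
M1(s, t);                  # s + t, as expected
```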

acer

The number of things that people believe they know how to accomplish is greater than the number which can be done well with the time and people resources available. That's the case for Maple development, too.

I don't think highly of an article which seems to put so much weight on a single computational result. I agree with another poster here that, in effect, one could judge a CAS by how well it helps in a given discipline that is chosen up front.

Every CAS or piece of math software fails on some (different set of) simple problems. Look at the flood of bug reports about Mathematica 6 recently reported on comp.soft-sys.math.maple. I wouldn't pass definitive judgement on Mathematica based on just a few of those reports. Similarly I wouldn't discount Maple based on a single numerical quadrature result.

I was never a big fan of the Wester review that Jacques cited in an earlier comment in this thread. I can see that some people might like its attempt at breadth. But my view is that since complete coverage is next to impossible it is better to focus on one's particular needs. I'd rather see many discipline- or functionality-specific reviews. Pragmatism, that's all I have to offer.

There is one specialized review of which I'm aware. It focuses on statistics and data analysis, with a bit of overflow into general numerics, plots, i/o and format exchange, etc. It is the ncrunch review of Stefan Steinhaus. Granted it has a few problems, where use of the programs might have been better optimized. But he makes an effort to consult with a representative expert, to get the code into fair shape for all. It's not perfect, and not nearly as expert as Wester's, but it's a start toward discipline-specific analysis. I'd like to see a similar review by, say, mechanical engineers, or 1st-year-calculus college lecturers, or chemists, or...

acer

Why couldn't they be allowed to differ by a constant?

If two antiderivatives differ by a constant, and if you differentiate them, will you get the same result?
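
A quick check in Maple, for one concrete pair of antiderivatives of x:

```
F := x^2/2:               # an antiderivative of x
G := x^2/2 + 7:           # another, differing by a constant
diff(F, x), diff(G, x);   # both derivatives are x
```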

acer

There are a few things that might be holding Maple back here.

One of those is that the compiled external (NAG) routine d01akc, which specializes in oscillatory non-singular integrands, has a `max_num_subint` parameter which is not exposed at the user level in Maple. The accuracy tolerances are exposed, via evalf/Int's `epsilon` parameter, but the maximal number of allowed subintervals is not. So that specialized routine will fail for an integral behaving like BesselJ(0,50001*x) when it attempts to use more than 200 (a hard-coded default value) subintervals. With only 200 subintervals, the smallest epsilon one can request and still succeed, for the integrand at hand, is about 5e-2. I didn't try to subvert Maple to test whether that routine could handle 50001*x in reasonable time, if allowed a very high number of subintervals.
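
For the record, that particular NAG routine can be requested by name; a sketch (the method name _d01akc is documented for evalf/Int, and given the hard-coded 200-subinterval cap only a loose tolerance succeeds here):

```
# Force the oscillatory quadrature routine d01akc; requesting an
# epsilon much smaller than about 5e-2 makes it fail on this integrand.
evalf(Int(BesselJ(0, 50001*x), x = 0 .. 1,
          method = _d01akc, epsilon = 5.0e-2));
```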

Not having a super fast compiled hardware floating-point BesselJ0 may also affect the performance. Axel Vogt made some interesting posts and comments here a while back about fast BesselK and using that for numeric quadrature.

acer

I suspect that he was referring to g the acceleration due to gravity, or a force on a body of mass m under such acceleration.

acer

If we're lucky, someone will explain its relationship to the lambda calculus in detail.

Jacques has written in the past that, "Maple's unapply is the same as Church's lambda abstraction operator."

The help-page ?unapply says,

- The unapply command implements the lambda-expressions of lambda calculus.

For reference see ``An Implementation of Operators for Symbolic Algebra
Systems'' by G.H. Gonnet, SYMSAC July 1986.

What I wonder is whether the help ever claimed that, "The scoping behaviour of unbound names is not the same in the lambda calculus," and if so how that might be true.

acer

I was sitting there, wondering why eval(foo:-`:-2`) didn't work. But of course, it's a local name. So of course I can't eval what I type in as the global name.

> restart:
> read("bar.m");
> tmp:=[anames()];
                               tmp := [foo:-:-2]
 
> dismantle(tmp[1]);
 
NAME(4): foo:-`:-2` #[modulename = foo]
 
> eval(foo:-`:-2`);
Error, `foo` does not evaluate to a module
> eval(tmp[1]);
                   proc() print("can you see it?") end proc

So thank you very much. It is march('extractfile',...) that does what I was after.
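
In case it helps anyone searching later, the extraction looks something like this (the archive, member, and file names match my earlier experiment and are otherwise illustrative):

```
# Copy the stored member ":-2.m" out of the archive into a plain .m
# file, without referencing the module (so ModuleLoad never runs).
march('extractfile', "foo.mla", ":-2.m", "bar.m");
read("bar.m"):
```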

acer

Yes, that is what I was trying to figure out, thanks.

There is a (not insurmountable) difficulty if the .mla archive has many module members stored in it, as those all appear as ":-XXX.m" where XXX is a posint when listed by march. There may be many such archive members, but I don't mind searching, if it can be done programmatically. The key thing for me is that I don't want the module to be referenced and the ModuleLoad routine to get run.

In my simple experiment, I could extract ":-2.m" to a file, which I called "bar.m". It was the only module member in the foo.mla archive.

But I don't see how I can "load" that bar.m file. I can `read`() it, but then what would I look for?

> restart:
> read("bar.m");
> anames();
                                   foo:-:-2
 
> lprint(%);
foo:-`:-2`
> eval(foo:-`:-2`);
Error, `foo` does not evaluate to a module

I guess that this is a special case of a wider question: how can one view, individually, the contents of the ":-XXX.m" module members that are stored in a .mla archive?

acer

No, stopat() and trace() don't take effect until after the ModuleLoad executes, as can be shown by experiment.

But I realized a little later, that printlevel can be set high before the module name is first accessed and its ModuleLoad tripped.

> restart:
> libname:="./foo.mla",libname:
> kernelopts(opaquemodules=false):
> printlevel:=1000:
> foo();
{--> enter ModuleLoad, args =
                               "can you see it?"
 
<-- exit ModuleLoad (now at top level) = }
{--> enter evalapply, args = module () local ModuleLoad; option package;
ModuleLoad := proc () print("can you see it?") end proc; end module, []
{--> enter type/attributed, args = module () local ModuleLoad; option package;
ModuleLoad := proc () print("can you see it?") end proc; end module, generic
                                     false
 
<-- exit type/attributed (now in evalapply) = false}
          (module() local ModuleLoad; option package;  end module)()
 
<-- exit evalapply (now at top level) = module () local ModuleLoad; option
package; ModuleLoad := proc () print("can you see it?") end proc; end module()
}
          (module() local ModuleLoad; option package;  end module)()

acer

The one-time cost of defining all the StringTools exports as their session-dependent call_externals may be negligible compared to the cost of the initial dlopen of the mstring dynamic library.

Here is a crude illustration, done in order in a fresh TTY session.

> st:=time():
> try StringTools:-Join(): catch: end try:
> time()-st;
                                     0.002
 
> st:=time():
> try
> for j in exports(StringTools) do
>   StringTools[j]():
> end do:
> catch: end try:
> time()-st;
                                     0.001

> st:=time():
> try
> for j in exports(StringTools) do
>   StringTools[j]():
> end do:
> catch: end try:
> time()-st;
                                      0.

Ignoring effects due to try..catch overhead and the cost of raising errors, and assuming that the timer is accurate at such small granularity, it looks like the initial dlopen costs two-thirds as much as initializing/redefining all of the (approximately 200) StringTools exports.

It may only be for packages with many exports (which each need redefining) that having ModuleLoad deal with them all at once on the first access might be undesirable. So it may be reasonable to have StringTools:-ModuleLoad initialize them all, without any change to the persistent store mechanism.

acer

Yes, these waters can get muddy. I guessed that he was having problems with implicit multiplication because I had to fill in the missing `*`s when using the 1D input from the properties of his post's gif image. But I wasn't sure that it wasn't an artefact of some conversion by MaplePrimes. So I just filled in the corrections and added a sidenote. Implicit multiplication, and 2D Math input in general, is harder to debug when the code is not one's own, because of such additional ambiguities. I suppose that his claim about failing with combine was stronger evidence.

May I ask, how did you yourself get his expression into Maple? Is there some easier way, when the 2D Math gets put up here in a post as an image?

acer
