acer

32363 Reputation

29 Badges

19 years, 332 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Aside from what Joe has stated, it's not correct to try to save a module to a .m file. The (non-exported) local members of a module get "anonymized" and saved as separate .m images inside a .mla archive.

It is generally not a good idea to savelib things to a bare .m file in modern Maple. You may be able to save data and (simple) state to .m files, but that is done using save, which is a different command.

If there is no writable .mla archive in the libname path, then a savelib command can end up writing out separate .m files to the directory or folder. The fact that it does so doesn't imply that savelib'ing directly to .m is the right thing to do. This entire topic has always been a little unnecessarily complicated, on account of this behaviour.

Here is an example,

> restart:

> libname := kernelopts(homedir),libname:

> test := module () export f; option package; local g;
>   g:=proc(x) sin(x) end proc:
>   f:=proc(x) g(x); end proc:
> end module:

> # create a new, writable archive in the home directory
> LibraryTools:-Create(cat(kernelopts(homedir),
>                           kernelopts(dirsep),
>                          "test.mla")):
> # savelib stores test (and its anonymized locals) in that archive
> savelib(test):

> restart:

> libname := kernelopts(homedir),libname:
> test:-f(17);
                                    sin(17)

> # list the archive contents: test.m plus the anonymized locals
> march('list',cat(kernelopts(homedir),
>                  kernelopts(dirsep),"test.mla"));
[["test.m", [2009, 4, 27, 11, 12, 56], 41984, 96],

    [":-1.m", [2009, 4, 27, 11, 12, 56], 42145, 84],

    [":-2.m", [2009, 4, 27, 11, 12, 56], 42080, 65]]

If you had savelib'ed test to test.m instead of to test.mla, then (after restart and resetting libname) the local test:-g would not be available. The syntax for doing that (which I don't advise) would be more like savelib(test, `test.m`).
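
For contrast, here is a rough sketch of that non-advised route (assuming the bare test.m ends up somewhere on the libname path, such as the home directory); the exact failure mode may vary, but the anonymized local for g does not come along:

> restart:
> libname := kernelopts(homedir),libname:
> test := module () export f; option package; local g;
>   g:=proc(x) sin(x) end proc:
>   f:=proc(x) g(x); end proc:
> end module:
> savelib(test, `test.m`):  # writes a bare test.m, not a .mla archive

> restart:
> libname := kernelopts(homedir),libname:
> test:-f(17);              # no longer returns sin(17), since test:-g is missing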

Getting back to your suggestion, it might be a better idea to always prepend rather than to append to libname. And make your Maple installation folder read-only (if your OS supports that) so that you cannot accidentally clobber its contents.
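
In other words, something like the first of these rather than the second (the path here is just a placeholder):

> # prepend: your own archives get searched before the stock library
> libname := "/home/me/maple/mylib", libname:

> # append: the stock library is searched first, and wins on any name clash
> libname := libname, "/home/me/maple/mylib":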

It bothers me slightly that the ?savelib help-page gives an example using kernelopts(mapledir) instead of kernelopts(homedir). Apart from the fact that it might not even work for a networked Maple installation, it's not a great idea.

acer

Too many new users go wrong with this. The goal might have been for implicit multiplication in 2D Math entry to follow some "natural" mode, but the clash between round brackets used as delimiters and round brackets used for function application is a real issue.

There's a need for an explanation of the 2D Math implicit multiplication rules, on a help-page that is really easy to find (i.e. one that lots of useful aliased help-queries would lead to). The explanation should be as thorough as Doug's analysis. At present, the ?worksheet,documenting,2DMathDetails help-page is too hard to get to, and is too thin in its explanation of implicit multiplication in the presence of round brackets.

Maybe the system could detect some problematic instances and query the user as to the intention. Consider the separate case of function assignment: if one enters f(x):=x^2 in 2D Math mode then a dialogue pops up, allowing the user to specify whether a function definition or a remember-table assignment is intended. A similar approach could be implemented for problematic implicit multiplication situations, or the system might be made more robust, or the entire implicit multiplication scheme could be reconsidered altogether.

The mechanism offered by Typesetting:-Settings(numberfunctions = false) is too obscure. Also, it has an effect on 5.01(c) but not on (5.01)(c). It could also be more clearly documented that changing that setting doesn't affect copy-n-pasted expressions, which may not be re-parsed(?).

acer

From the ?proc help-page,

Implicit Local Variables
- For any variable used within a procedure without being explicitly mentioned
  in a local localSequence; or global globalSequence; the following rules are
  used to determine whether it is local or global:

  The variable is searched for amongst the locals and globals (explicit or
  implicit) in surrounding procedures, starting with the innermost. If the
  name is encountered as a parameter, local variable, or global variable of
  such a surrounding procedure, that is what it refers to.

  Otherwise, any variable to which an assignment is made, or which appears as
  the controlling variable in a for loop, is automatically made local.

  Any remaining variables are considered to be global.

An example to illustrate some of this follows,
> x := 3:                     # an assignment to the global x

> f:= proc() local g, x;
>   x := 17;
>   g := proc() x; end proc;  # this x refers to the enclosing f's local x
>   :-x, x, g();              # the global x, f's local x, and g's view of x
> end proc:

> f();
                                   3, 17, 17
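
The other implicit-local rules (assignment, or use as a for-loop control variable, makes a name local; anything left over is global) can be seen in a quick sketch like the following. Maple also warns at definition time that i and s are implicitly declared local.

> restart:
> t := 42:
> p := proc()
>   for i to 3 do end do;  # i is implicitly local (loop control variable)
>   s := 17;               # s is implicitly local (it gets assigned)
>   s, t;                  # t is neither, so it refers to the global t
> end proc:

> p();
                                    17, 42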

acer

It is interesting that verify gets this but the is command does not. It seems that verify gets it because signum gets it.
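
For reference, the two calls being compared look roughly like this (outcomes as just described):

> verify( (X+w)^2 + Y^2, 0, 'greater_equal' )
>   assuming X::real, w>0, Y::real;   # succeeds (returns true)

> is( (X+w)^2 + Y^2 >= 0 )
>   assuming X::real, w>0, Y::real;   # does not return true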

> signum( (X+w)^2 + Y^2)
>   assuming X::real, w>0, Y::real;
                                       1

I notice that neither verify nor signum handles the expanded expression, likely due to the resulting 2*X*w cross term (whose sign is not determined by the assumptions). It's possible that the is command is doing such an expansion internally.

> signum(expand((X+w)^2 + Y^2))
>   assuming X::real, w>0, Y::real;
                                 2            2    2
                         signum(X  + 2 X w + w  + Y )

> verify(expand((X+w)^2 + Y^2),0,'greater_equal')
>   assuming X::real, w>0, Y::real;
                                     FAIL

I have submitted this as a bug report.

acer

If you change the x-axis range to 0..1000, to represent thousandths of a second, then you have to accommodate that in the plot somehow. You could scale the functions, or you could simply adjust the tickmark values (both approaches are shown below). Either way, the axis label can be changed to milliseconds.

f1,f2 := 3, 10: # frequencies, as cycles/second

# Approach 1: scale the argument so that the t-axis runs 0..1000 in milliseconds
plot(sin(f1*t/1000*2*Pi),t=0..1000,
labels=[typeset(Unit(ms)),cycle],
legend=[typeset(f1*Unit(Hz))]);

plot(sin(f2*t/1000*2*Pi),t=0..1000,
labels=[typeset(Unit(ms)),cycle],
legend=[typeset(f2*Unit(Hz))]);

# Approach 2: keep t in seconds, but relabel the tickmarks in milliseconds
plot(sin(f1*t*2*Pi),t=0..1,
labels=[typeset(Unit(ms)),cycle],
legend=[typeset(f1*Unit(Hz))],
tickmarks=[[seq(i/5=1000*i/5,i=1..5)],default]);

plot(sin(f2*t*2*Pi),t=0..1,
labels=[typeset(Unit(ms)),cycle],
legend=[typeset(f2*Unit(Hz))],
tickmarks=[[seq(i/5=1000*i/5,i=1..5)],default]);

acer

Inspired by Axel's post, just a little shorter,

> expr := ln(x)*ln(1-x)^2:

> combine(convert(expr,Sum,dummy=k)) assuming x>0, x<=1:

> sum(int(op(1,%),x=0..1),op(2,%));

                                    2
                                  Pi
                             -6 + --- + 2 Zeta(3)
                                   3

It could look nicer without the op() calls, if SumTools had exports that acted similarly to IntegrationTools:-GetRange and friends.
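
For instance, with a couple of hypothetical little helpers (just named wrappers around the same op calls, by analogy with IntegrationTools:-GetRange; these are not actual SumTools exports):

> S := combine(convert(expr,Sum,dummy=k)) assuming x>0, x<=1:
> GetSummand := s -> op(1,s):
> GetRange := s -> op(2,s):
> sum(int(GetSummand(S),x=0..1), GetRange(S));  # same result as above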

acer

The difference should be mostly in terms of memory free for your applications (like Maple), rather than in terms of (cpu) system load. You can check it out by experiment.

Boot the machine to console mode only (runlevel 2, say). Enter the command free and see how much memory is used/free. Then start X (with startx, or by rebooting to runlevel 5 or whichever runlevel starts xdm). Again, issue free in an xterm, and compare how much memory is still available. This gives you an idea of how much memory X and your window-manager and/or desktop (gnome, kde) are using together.

You can also use top and uptime to gauge the cpu resources and system load, in both console mode and in an xterm. You'll likely discover that X itself doesn't use meaningful amounts of cpu, and unless you are running some spiffy piece of eye-candy with graphical effects (all the time) you may well not be able to detect a significant system load.

The bottom line is that (constantly running graphical eye-candy aside) there should not be much difference in the baseline system load. (It would be a disaster for Linux if running X alone involved some significant cpu overhead.)

If your Maple computation isn't memory intensive then commandline Maple should run pretty much the same in console mode as in an xterm. But if your Maple computation is huge and needs every little bit of physical memory you have (so as not to swap), then commandline Maple in console mode will do better, though only fractionally better, because X plus the desktop uses only a fraction of the total system memory. If that is the relevant case, then maybe consider running 64bit Maple and installing more RAM.

ps. I used to run my symbolic Maple calculations in console mode, back when 8MB was a lot of RAM. Nowadays, it doesn't make much difference.

acer

Quoting from the first paragraph of the Description section of the ?Optimization,NLPSolve help-page,

   Most of the algorithms used by the NLPSolve command assume
   that the objective function and the constraints are twice
   continuously differentiable. NLPSolve will sometimes succeed
   even if these conditions are not met.

So, yes, if the constraint is not continuous at the point in question, then there could be problems.

Keep in mind that these are "numerical" (here, floating-point) solvers working at a given precision. In rough terms, at any fixed precision for floating-point arithmetic there is a non-zero quantity epsilon (or "machine epsilon" for hardware double precision) for which x+epsilon is not arithmetically distinguishable from x. These issues are not specific to Maple's floating-point computation; they are general. See here and here for some detail. There may be other ways to implement numerical optimization (using interval computation or some "validated" scheme). But this is why the feasibility and optimality tolerances are options to NLPSolve, so that, combined with the Digits setting, some programmatic control is available.
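
As a minimal sketch of that kind of control (the objective, constraint, and tolerance values here are only placeholders, not tuned recommendations):

> Digits := 15:                                 # working precision
> Optimization:-NLPSolve( (x-1)^2 + 1, { x >= 1/2 }, x = 0 .. 3,
>                         'feasibilitytolerance' = 1e-8,
>                         'optimalitytolerance' = 1e-8 );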

acer
