MaplePrimes Activity

These are replies submitted by acer

I think that it would be useful to delineate the current status and restrictions of Threads in Maple.

For example,

  • What makes a Maple code routine Thread-unsafe? (There are lots of ways to cause global effects, so a clear list would help. This earlier post doesn't make it clear, sorry.)
  • Which low-level Library routines are Thread-unsafe (eg. limit, is, signum, etc)? These routines might not be usable inside one's own Threaded code.
  • What blocks all Threads? (eg. garbage collection? object uniquification in the kernel? Other?) These would affect performance, and create bottlenecks possibly unavoidable at present.
  • What actions can occur in only one Thread at a time? (eg. external calling to Maple's own auxiliary external compiled libraries? external calling to a Compiler:-Compile'd routine?) I mean what actions block other Threads from doing the same thing, not what is Thread-safe.
  • Can two Threads write to two different embedded components, with interleaved timing? Is Typesetting Thread-safe, for that?
  • The debugger status was mentioned in another post.

The question of Thread-safety of the Maple Library is very tricky. There may be a characterization of what is currently, typically Thread-safe, for example along the lines of a symbolic vs numeric distinction. People are going to want to write Threaded code which is not simply plain arithmetic and use of kernel built-ins. But a lot of symbolic Library code is not Thread-safe, and a lot of numeric Library code might call externally (and block duplicated external lib access). Characterizing what sort of code can be successfully Threaded would be very useful. Revising such a description with each released improvement would also help.
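
To illustrate the first bullet with a concrete (if simplistic) hazard: any routine which does an unguarded read-modify-write on a shared global is already Thread-unsafe. Here is a minimal sketch, assuming the documented Threads:-Create and Threads:-Wait calling sequences (the names `bump` and `counter`, and the loop counts, are mine):

counter := 0:
bump := proc(n) local i; global counter;
  for i to n do
    counter := counter + 1;   # unguarded read-modify-write on a global
  end do;
end proc:
id1 := Threads:-Create( bump(100000) ):
id2 := Threads:-Create( bump(100000) ):
Threads:-Wait( id1, id2 ):
counter;   # can come out less than 200000, since the two updates interleave

Wrapping the increment in Threads:-Mutex:-Lock/Unlock should restore correctness, at the price of serializing exactly the work one had hoped to parallelize.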

acer

I was thinking of trying that, but haven't found the spare time yet.

I also haven't read through that linked .pdf yet. But (after adjusting the Optimization call a little, so that it could handle the proc) I saw that the supplied data was generating some complex (nonreal) values (in the `ln` call). That made Optimization balk. So I haven't figured out whether the implementation of the algorithm is faulty, or how to handle such occasional complex returns. I notice that the proc does produce a plot. It's inconvenient to fiddle with it while it is so slow -- another reason I wanted to optimize/Compile the implementation.

It came up in another thread a while back that Compiling/evalhf'ing a proc that used Statistics (Mean, Variance, or samples) didn't work. A workaround was to replace such Statistics calls with explicit (finite) formulae or even with `add` calls.
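
For instance, a minimal sketch of that workaround (the names `avg` and `cavg` are mine): a sample mean computed with an explicit loop rather than with Statistics:-Mean, so that the proc is acceptable to Compiler:-Compile and to evalhf,

avg := proc( V::Vector(datatype=float[8]), n::integer )
  local i::integer, s::float;
  s := 0.0;
  for i to n do
    s := s + V[i];   # explicit sum, instead of Statistics:-Mean(V)
  end do;
  s/n;
end proc:
cavg := Compiler:-Compile( avg ):   # or simply call avg under evalhf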

acer

Your second (1-line) way is a procedure call. The first way could also be made into a proc, and also be 1 line.

Your second way requires calling dsolve/numeric each time. That involves repeating all sorts of setup: initialization, deduction of the problem class, algorithmic switching due to problem class, and so on. That is all unnecessary when one knows that the problem form will be the same, and that only some parameter will change. One purpose of the parameters option is to remove that duplicated, unnecessary overhead.

The overhead costs both time and memory.

    |\^/|     Maple 13 (X86 64 LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2009
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.
> ans:=(a,c)->dsolve({diff(y(x),x)+sin(x^2)*y(x)=a,y(0)=c},y(x),numeric):
> h:=rand(100):
> st,bu:=time(),kernelopts(bytesalloc):
> for ii from 1 to 1000 do
>   ans(h(),h())(5.7);
> end do: time()-st,kernelopts(bytesalloc)-bu;
                                5.424, 90161024
  
> quit
 
 
    |\^/|     Maple 13 (X86 64 LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2009
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.
> ans:=dsolve({diff(y(x),x)+sin(x^2)*y(x\
> )=a,y(0)=c},y(x),numeric,parameters=[a,c]):
> h:=rand(100):
> st,bu:=time(),kernelopts(bytesalloc):
> for ii from 1 to 1000 do
>   ans(parameters=[h(),h()]);
>   ans(5.7);
> end do: time()-st,kernelopts(bytesalloc)-bu;
                                1.613, 38003920

   |\^/|     Maple 13 (X86 64 LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2009
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.
> ans:=dsolve({diff(y(x),x)+sin(x^2)*
> y(x)=a,y(0)=c},y(x),numeric,parameters=[a,c]):
> h:=rand(100):
> nans:=proc(a,b,x) ans(parameters=[a,b]); ans(x); end proc:
> st,bu:=time(),kernelopts(bytesalloc):
> for ii from 1 to 1000 do
>   nans(h(),h(),5.7);
> end do: time()-st,kernelopts(bytesalloc)-bu;
                                1.632, 38003920

There may also be some technical reasons underneath. For example, using globals is not thread-safe.

acer

Hopefully, this is of some use. There may be easier ways.

> interface(warnlevel=0):
> with(DETools): with(plots): with(plottools):
> Diam := .4: m := 100: g := 9.8: c1 := 0.155e-3*Diam:
> c2 := .22*Diam^2: ynot := 50000*.3048: Wx := 60*.44704: Wy := 0:
> Vwx := diff(x(t), t)+Wx:
> Vwy := diff(y(t), t)+Wy:
> theta := arctan(Vwy/Vwx):
> Vwmag := sqrt(Vwx^2+Vwy^2):
> Fdragmag := -c1*Vwmag-c2*Vwmag^2:
> Fdragx := Fdragmag*cos(theta):
> Fdragy := Fdragmag*sin(theta):
> yeq := m*(diff(y(t), t, t)) = -m*g+Fdragy:
> xeq := m*(diff(x(t), t, t)) = Fdragx:
> ics := (D(x))(0) = ground, (D(y))(0) = 0, y(0) = 50000*.3048, x(0) = 0:
> ohthehumanity := dsolve({ics, xeq, yeq}, numeric, [y(t), x(t)],
>                         output = listprocedure):
> hite := rhs(ohthehumanity[2]):
> pos := rhs(ohthehumanity[4]):
>
> Z:=proc(GG,TT) global ground;
>   if not(type(GG,numeric)) then
>     return 'procname'(args);
>   else
>     ground:=GG;
>   end if;
>   pos(TT)^2 + hite(TT)^2;
> end proc:
>
> Zgrad := proc(V::Vector,W::Vector)
>   W[1] := fdiff( Z, 1, [V[1],V[2]] );
>   W[2] := fdiff( Z, 2, [V[1],V[2]] );
>   NULL;
> end proc:
>
> sol:=Optimization:-Minimize(Z, 'objectivegradient'=Zgrad,
>   'method'='nonlinearsimplex' ,initialpoint=[100,100]);
                                             [107.518346261289281]
           sol := [0.0000301543312111207006, [                   ]]
                                             [104.668066005965386]
 
>
> pos(sol[2][2]), hite(sol[2][2]);
                0.00454084653245279135, 0.00308788665268533435
 
> ground,sol[2][2];
                   107.518346261289281, 104.668066005965386
 
>
> Digits:=30:
> sol:=Optimization:-Minimize(Z, 'objectivegradient'=Zgrad,
>   'method'='nonlinearsimplex' ,initialpoint=[100,100]);
sol := [
 
                                       -13  [107.518073058989098792559364875]
    0.898257246759777661510402862893 10   , [                               ]]
                                            [104.668080856614961089921927934]
 
>                                                                                 
> pos(sol[2][2]), hite(sol[2][2]);
                                    -6                          -6
            -0.405056976848783279 10  , -0.339596677889630882 10
 

EDIT: the objective gradient routine `Zgrad` is not necessary when using the nonlinearsimplex method (which doesn't use derivatives, if I recall). I created it only because it helps when using some of Minimize's other 'method' choices. I had to try various methods before I found one that got the result nicely. Minimize's ProcedureForm or MatrixForm calling sequences sometimes run into trouble unless an objective gradient (and/or constraintjacobian) routine is provided explicitly for "black-box" objective (and/or constraint) procedures which cannot be handled automatically by codegen[GRADIENT] (and/or codegen[JACOBIAN]). The same result returns from simply,

Optimization:-Minimize(Z,
  'method'='nonlinearsimplex' ,initialpoint=[100,100]);

The purpose of routine Z is to find a matching pair of 'ground' and time values for which both `hite` and `pos` (Y and X) are simultaneously zero.

acer

That's an interesting find.

That routine works by computing approximations (at Digits=8) to the zeros, extrema, and inflections of the expression. Then it reverts Digits and determines convexity/concavity by taking signum(eval(diff(expr,x,x),x=mid)) for 'mid' the midpoint between each pair of those points of interest.

I see using signum as mildly superior to using `is` directly, since signum can sometimes use evalr and a few other techniques as well as `is`.
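
A rough sketch of that midpoint check, purely as illustration (this is not the actual Calculus1 source, and I use only the critical points here rather than the full set of points of interest):

expr := x^4 - 2*x^2:
crit := sort( [ solve( diff(expr,x) = 0, x ) ] ):   # [-1, 0, 1]
mid := ( crit[1] + crit[2] )/2:                     # midpoint, -1/2
signum( eval( diff(expr,x,x), x = mid ) );          # -1, so concave on -1..0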

I note that these Calculus1 routines (including `Roots`) demand a finite range. I'm not so sure about the wisdom of setting Digits to 8 while finding critical points.

While determining subintervals on which the expression is (piecewise) convex is interesting, that routine doesn't quite support all the same inferences. For one thing, it uses only a finite range. It also doesn't actually make a statement about adjoining subintervals, I suspect. Sure, x^2 is convex from a..0 and also from 0..b, which it shows. But that's not as strong a statement as saying that, yes, "x^2 is convex from a..b" or, yes, "x^2 is convex".

acer

In order to use the test of the 2nd derivative being nonnegative, the expression should be twice differentiable. That is one way of seeing why it falls down for 1/(x*x), which is not even once differentiable at the origin.

The code might be altered to check differentiability, by running limits of the derivative(s) at the points returned by discont or fdiscont. (There is an existing "property" named 'differentiable' known to is&assume&assuming, but it's for "functionals".)
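
A rough sketch of one such check (the helper name `isdiff` is mine, and this is only a partial test: it looks at candidates reported by discont for the expression and its derivative, and discont's coverage of derivative expressions such as abs(1,x) is itself hit-and-miss):

isdiff := proc( expr, x::name, A::realcons, B::realcons )
  local d, p;
  d := diff(expr, x);
  for p in select( type, discont(expr, x) union discont(d, x), 'realcons' ) do
    if is(p >= A) = true and is(p <= B) = true then
      # at each candidate point, the one-sided limits of
      # the derivative must agree and be finite
      if limit(d, x = p, 'left') <> limit(d, x = p, 'right')
         or not type( limit(d, x = p), 'realcons' ) then
        return false;
      end if;
    end if;
  end do;
  true;
end proc:

For 1/(x*x) over the whole real line this reports false, since the one-sided limits of -2/x^3 at the origin disagree.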

If the method is edited to force use of the definition, avoiding the 2nd derivative test, then one gets behaviour like,

> isconvex(1/(x*x));
isconvex:   falling back to the definition
                                     false
 
>
> isconvex(-sqrt(x^2));
isconvex:   falling back to the definition
                                     false
 
> isconvex(1/(x*x),0,infinity);
isconvex:   falling back to the definition
                                     true
 
> isconvex(-sqrt(x^2),-infinity,0);
isconvex:   falling back to the definition
                                     true
 
> isconvex(abs(x));
isconvex:   falling back to the definition
                                     FAIL

10-15 years ago I used to hear experts say that it was only a misdemeanor if Maple's answer was wrong only on a set of measure zero. Nowadays more Maple routines return and accept piecewise results, and in turn people have come to expect more of the same. Given that not all of Maple is prepared to handle piecewise(), what havoc would be caused by having diff(1/x^2,x) return piecewise(x=0,undefined,-2/x^3)?

acer

I value your comments, Alec.

acer

As is, it fails for lots of things. That is what I was implying when I wrote that there is lots of room for improvement.

Note also that there was a comment in the code about whether/how to normalize. Testing for differentiability (another tricky area) is also possible.

Testing for convexity is similar to lots of computer algebra tasks in that there are lots of theoretical and practical holes. But unless someone starts it off, even if only for discussion, there will always be nothing.

"Pragmatism. Is that all you have have to offer?" - Guildenstern

acer

I was just minimizing a squared expression, so that if NLPSolve found values where the objective was (approximately) zero then that could be a zero of the non-squared expression. (I used NLPSolve as an alternative approach, because fsolve requires as many expressions/functions as variables. You might have just one function but several variables.)

I have to rush off home. Since I wasn't sure of your exact query, I didn't make use of x(t) in my earlier code. I used just y(t). I'll look again, if nobody posts a methodology for your clarified task before then.
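
In other words, a minimal sketch with made-up names (`f` here stands in for whatever single expression one has in two unknowns):

f := (x,y) -> sin(x) + y^2 - 1:
# fsolve would want two equations for the two unknowns, but the
# squared residual can be minimized as-is:
sol := Optimization:-NLPSolve( (x,y) -> f(x,y)^2,
                               'initialpoint' = [1, 1] ):
# If sol[1] is (approximately) zero, then sol[2] holds a zero of f itself.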

acer

There's a lot of scope for improving this code below. (A probabilistic fallback might also be worthwhile. I suspect that a fallback using testeq & signum might not work; testeq is rather limited.)

Note that `is` can depend upon Digits.

> isconvex := proc(expr,A::realcons:=-infinity,B::realcons:=infinity)
> local a,b,t,x,use_expr,res;
>   x := indets(expr,name) minus {constants};
>   if nops(x) = 1 then
>     x := op(x);
>   else error;
>   end if;
>   # For more than one variable, one could test
>   # positive-definiteness of the Hessian.
>   res := is( diff(expr,x,x) >=0 )
>            assuming x>=A, x<=B;
>   if res<>FAIL then
>     return res;
>   else
>     userinfo(1,'isconvex',`falling back to the definition`);
>     # Is it better to expand, or simplify, or put
>     # Normalizer calls around the two-arg evals or
>     # their difference, oder...?
>     use_expr:=expand(expr);
>     is( eval(use_expr,x=(1-t)*a+t*b) <=
>         (1-t)*eval(use_expr,x=a)+t*eval(use_expr,x=b) )
>       assuming a<=b, t>=0, t<=1, b<=B, a>=A;
>   #else
>   ##Is global optimization another fallback?
>   end if;
> end proc:

> # note: this shows that none of the examples
> # below had to fall back to the defn after failing
> # a 2nd deriv test. But many such will exist.
> infolevel[isconvex]:=1:

> isconvex( y^2 );
                                     true

>
> isconvex( y^3 );
                                     false

> isconvex( y^3, 0 );
                                     true

>
> isconvex( (y-3)^2 );
                                     true

>
> isconvex( (y-3)^4 );
                                     true

>
> isconvex( exp(y) );
                                     true

> isconvex( -exp(y) );
                                     false

>
> isconvex( -sin(y), 0, Pi );
                                     true

acer

Perhaps someone will add it as the block comment syntax for Maple on Wikipedia's language syntax comparison page (both here, and here).

acer
