acer

32405 Reputation

29 Badges

19 years, 346 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

This is a much more interesting example. After looking at the plot, I was tempted to brush it off as another case where the pathological spike was being missed with the default evaluationlimit and nodelimit options. But, as you showed, cranking up those option values doesn't help. And I didn't have success with increasing Digits simultaneously with that (did Axel?!).

But a tiny shift in the range had success. So I am guessing that the internal routines of the univariate branch-and-bound solver go wrong when estimating lower bounds for a cell whose left endpoint is 0.0 or less. Maybe something like that. (Or perhaps the interpolating polynomial is not right. I looked with infolevel[Optimization] at 6, but didn't debug it.)

> restart:

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.39 .. 1.4, method=branchandbound, maximize);

              [2.00611111585267299, [x = 1.00326066115002321]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.41,   method=branchandbound, maximize);

              [2.00611111585267166, [x = 1.00326066227561483]]

> Optimization[NLPSolve](x^2 +1- exp(-1/(110*(x-1))^2), x= -1.4 .. 1.4,   method=branchandbound, maximize);

              [1.96001434792277918, [x = -1.39999999999999990]]
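For illustration, here is a rough plain-Python sketch (not Maple, and not the branch-and-bound algorithm itself) of why this objective is nasty: a sparse sampling of the interval misses the narrow spike near x = 1.003 and only sees values near the endpoint value 1.96, while a dense sampling finds the true maximum of about 2.00611.

```python
import math

def f(x):
    # the objective from the session above; at x = 1 the exp term
    # vanishes in the limit, so we guard the division by zero there
    if x == 1.0:
        return x * x + 1.0
    return x * x + 1.0 - math.exp(-1.0 / (110.0 * (x - 1.0)) ** 2)

def grid_max(a, b, n):
    # evaluate f on n+1 equally spaced points and keep the best value
    best = -math.inf
    for k in range(n + 1):
        best = max(best, f(a + (b - a) * k / n))
    return best

coarse = grid_max(-1.4, 1.4, 6)       # 7 samples: the spike is missed
fine = grid_max(-1.4, 1.4, 100000)    # dense sampling finds ~2.00611
```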

acer

It's not just a comparison between `subs` and `algsubs` here, since the posted examples supplied them with different inputs. In the `subs` case, the results came following isolation via `solve`.

acer

The defaults for evaluationlimit and nodelimit could be set higher for the univariate branch-and-bound solver. Very often the objective function evaluates quickly, so there is no great delay in allowing more work via higher default limits.

One can remove the "almost" in the statement in the reply above, and its principal clause will still be correct. Any numerical method can be beaten by a bad enough function.

I would not call the OP's example a case of an outright bug, as there is no set of default values for such options that satisfies everyone all the time. But the relevant default values for the nodelimit and evaluationlimit options might be raised (doubled?), to catch more problems by default without adding any great delay.

> restart: kernelopts(printbytes=false):
> st:=time():
> Optimization[NLPSolve](x^2/10 + sin(1000*x), x= -5 .. 5,
>    method=branchandbound, maximize, evaluationlimit=12000, nodelimit=2000);
               [3.49984521055628761, [x = -4.99984570315593846]]
 
> time()-st;
                                     5.583

Obviously the option values used for the above example are not suitable as defaults. They would make the computation take too long and work too hard in general. But one can see that the options can be adjusted so as to get a better (than default) result even for this objective.
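As a rough illustration (plain Python, not the actual branch-and-bound internals) of why the evaluation budget matters for this oscillatory objective: with only a couple hundred samples the best value found can fall well short, while a large budget gets close to the true maximum, which sits just under 3.5.

```python
import math

def g(x):
    # the oscillatory objective from the session above
    return x * x / 10.0 + math.sin(1000.0 * x)

def best_on_grid(a, b, n):
    # n+1 equally spaced evaluations; a coarse grid is a crude stand-in
    # for a solver allowed only a small evaluation budget
    return max(g(a + (b - a) * k / n) for k in range(n + 1))

small_budget = best_on_grid(-5.0, 5.0, 200)     # 201 evaluations
large_budget = best_on_grid(-5.0, 5.0, 100000)  # 100001 evaluations
```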

acer

Maybe this bug could be alleviated by changing some instance of a call to `normal` inside the internal routines used by JordanForm to instead be a call to `Normalizer`. (I didn't debug in order to figure out for which lines, if any, this might be true.)

> restart:

> unprotect(normal):
> _orig_normal:=eval(normal):
> normal:=x->Algebraic:-Normal(x):

> A:=Matrix([[1,sqrt(2),-sqrt(6)],[sqrt(2),2,sqrt(3)],[-sqrt(6),sqrt(3),0]]):

> LinearAlgebra:-JordanForm(A);
                                [-3    0    0]
                                [            ]
                                [ 0    3    0]
                                [            ]
                                [ 0    0    3]
 
> Q1:=LinearAlgebra:-JordanForm(A, output='Q'):
> Q2,J2:=LinearAlgebra:-JordanForm(A, output=['Q','J']):

> normal:=eval(_orig_normal):
> protect(normal):

> simplify(Q1);
                 [             1/2  1/2             1/2  1/2]
                 [            2    3               2    3   ]
                 [ 1/3      - --------- + 2/3    - ---------]
                 [                2                    2    ]
                 [                                          ]
                 [   1/2           1/2                      ]
                 [  2             2                         ]
                 [- ----          ----                0     ]
                 [   6             6                        ]
                 [                                          ]
                 [  1/2              1/2                    ]
                 [ 6                6                       ]
                 [ ----         1 - ----              1     ]
                 [  6                6                      ]
 
> simplify(Q2);
                 [             1/2  1/2             1/2  1/2]
                 [            2    3               2    3   ]
                 [ 1/3      - --------- + 2/3    - ---------]
                 [                2                    2    ]
                 [                                          ]
                 [   1/2           1/2                      ]
                 [  2             2                         ]
                 [- ----          ----                0     ]
                 [   6             6                        ]
                 [                                          ]
                 [  1/2              1/2                    ]
                 [ 6                6                       ]
                 [ ----         1 - ----              1     ]
                 [  6                6                      ]
 
> simplify(Q2^(-1) . A . Q2 - J2);
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]
                                 [           ]
                                 [0    0    0]

The idea is that, for a Matrix with radicals present, one might first assign `Normalizer` to be a routine capable of handling radicals (e.g. `radnormal`, `Algebraic:-Normal`, or `simplify`).
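As a quick cross-check (plain Python, independent of Maple) that the JordanForm result above is right: the matrix A really does have eigenvalues -3, 3, 3, since its characteristic polynomial vanishes at both 3 and -3, its trace is 3 = -3 + 3 + 3, and its determinant is -27 = (-3)*3*3.

```python
import math

# float version of the radical matrix A from the session above
r2, r3, r6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
A = [[1.0, r2, -r6],
     [r2, 2.0, r3],
     [-r6, r3, 0.0]]

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def char_poly(lam):
    # det(A - lam*I)
    shifted = [[A[i][j] - (lam if i == j else 0.0) for j in range(3)]
               for i in range(3)]
    return det3(shifted)

trace = A[0][0] + A[1][1] + A[2][2]
```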

acer

It is mostly explained above. If the Matrix has the symmetric or hermitian indexing function, and if the data is all of type complex(numeric) with some float present (or of complex[8] or float[8] datatype, etc.), then the underlying method used by the Eigenvectors command will sort the eigenvalues in the returned Vector.

And the eigenvectors are columns in the returned Matrix which correspond to entries in that returned Vector of sorted eigenvalues, so the columns are thus sorted for these cases.

This is all due to which external clapack routine is used.

The key thing is that the Eigenvectors and Eigenvalues commands do not try to automatically detect symmetry -- the symmetric or hermitian indexing function must be specified on the data Matrix.

I think that is OK behaviour: since there is no nonsymmetric indexing function, there would otherwise be no way to force use of the nonsymmetric method when float data merely appeared close to hermitian. An automatic symmetry test would likely involve a tolerance in the float case.
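The correspondence between a sorted eigenvalue Vector and eigenvector columns can be illustrated in plain Python (a sketch only; the real work here is done by clapack, not this code): a tiny Jacobi eigensolver for real symmetric matrices, with the eigenpairs sorted together so that column j still matches eigenvalue j.

```python
import math

def jacobi_eig(A, sweeps=100, tol=1e-12):
    # Classical cyclic Jacobi rotations for a real symmetric matrix:
    # returns (eigenvalues, V) with eigenvectors as the columns of V.
    n = len(A)
    a = [row[:] for row in A]
    V = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = math.sqrt(sum(a[i][j] ** 2
                            for i in range(n) for j in range(n) if i != j))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if a[p][q] == 0.0:
                    continue
                # rotation angle that zeroes the (p, q) entry
                theta = 0.5 * math.atan2(2.0 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # A <- A J (update columns p, q)
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
                for k in range(n):  # A <- J^T A (update rows p, q)
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):  # accumulate V <- V J
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p] = c * vkp - s * vkq
                    V[k][q] = s * vkp + c * vkq
    return [a[i][i] for i in range(n)], V

def sorted_eigenpairs(A):
    # sort eigenvalues ascending and permute the eigenvector COLUMNS in
    # the same order, so column j still belongs to eigenvalue j
    vals, V = jacobi_eig(A)
    n = len(vals)
    order = sorted(range(n), key=lambda j: vals[j])
    svals = [vals[j] for j in order]
    svecs = [[V[i][order[j]] for j in range(n)] for i in range(n)]
    return svals, svecs
```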

acer

It was bothering me that I got the wrong result when I first tried to use Quantile for this. (This is what Quantile is for, in general. So it's a problem if it is weak at this.) Here's a bit of analysis.

> restart:with(Statistics):
> X:=RandomVariable('Normal'(0,1)):

> # This next goes wrong because `evalf/RootOf` doesn't tell fsolve to use a real range!
> evalf[50](Quantile(X,1-10^(-18)));
              -3.9377187255341233799589551305177585550470808666755
            - 4.5754487927888755526460423197296824251197610676463 I

> # Observe that for exact arguments Quantile returns a RootOf that lacks "realness" info.
> # Really, that is a bug of Quantile (certainly not fsolve's fault).
> forget(evalf):
> T:=Quantile(X,1-10^(-18));
                                                                (1/2)
       RootOf(500000000000000000 erf(_Z) - 499999999999999999) 2     

> # Sometimes this next call (which gets to `fsolve`) will "get lucky"
> # and find the real root. But often not!
> evalf(%);
                                 8.757290342

> # Let's fix that, by injecting a real-range into the RootOf
> forget(evalf):
> subsindets(T,specfunc(anything,RootOf),t->RootOf(op(t),1..20));evalf[50](%);
                                                                     (1/2)
   RootOf(500000000000000000 erf(_Z) - 499999999999999999, 1 .. 20) 2     
             8.7572903487823150638811286221420828183378493880373
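The same fix can be mimicked with only the Python standard library (a sketch, not Maple): restrict the root search to a real interval. For the standard normal, the p = 1 - 10^(-18) quantile x satisfies P(X > x) = 0.5*erfc(x/sqrt(2)) = 10^(-18), and math.erfc retains relative accuracy out in that tail even though 1 - 10^(-18) rounds to 1.0 in double precision.

```python
import math

def normal_tail_quantile(tail, lo=1.0, hi=20.0, iters=200):
    # bisection on the strictly decreasing function
    # erfc(x/sqrt(2)) - 2*tail over the real interval [lo, hi]
    f = lambda x: math.erfc(x / math.sqrt(2.0)) - 2.0 * tail
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid   # root lies above mid
        else:
            hi = mid   # root lies below mid
    return 0.5 * (lo + hi)

q = normal_tail_quantile(1e-18)   # quantile at p = 1 - 10^(-18)
```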

Now for some easier general purpose workarounds (read: better use of Statistics).

> # Gosh, silly me. Statistics can do its own real root-finding,
> # but we have to let it know that it's a float problem up front.
> forget(evalf):
> Quantile(X,1.0-10^(-18)); # notice 1.0 instead of exact 1
                                 7.391904314

> # And at default precision that answer is poor.
> # But it seems reasonable to expect to have to raise Digits above
> # 18 (at least), if we throw terms like 1.0-10^(-18) around.
> Digits:=20:
> Quantile(X,1.0-10^(-18));
                            8.7572903487832079711

> # Similarly, the 'numeric' option also tells Quantile to use floats.
> forget(evalf):
> Digits:=20:
> Quantile(X,1-10^(-18),numeric);
                            8.7572903487832079711

I ought to have come up with that last pair before.

acer

Since this thread has involved exporting to image files, Robert may have had in mind something close to the trusty, dusty 640x480 (4:3) VGA standard graphics mode. See also VESA.

acer

Did you mean to instead pass [result] as the first argument to pointplot, i.e. the expression sequence result wrapped in a list?

And did you want to instead create result so that it took the rhs() of f(i)? You want result to be a list of lists, with each inner list having two purely numeric entries (not equations).

result := seq([i, rhs(f(i))], i = 0 .. 25);

plots:-pointplot([result]);
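In Python terms (purely illustrative; f here is a hypothetical stand-in returning an equation-like pair, not the OP's actual f), the data shaping being suggested is:

```python
def f(i):
    # hypothetical stand-in for the OP's f(i), which returns an
    # equation y = value; modeled here as a (lhs, rhs) tuple
    return ("y", i**2 + 1)

def rhs(eq):
    # analogue of Maple's rhs(): keep only the right-hand side
    return eq[1]

# analogue of: result := seq([i, rhs(f(i))], i = 0 .. 25);
# a list of [x, y] pairs with purely numeric entries
result = [[i, rhs(f(i))] for i in range(26)]
```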

acer
