acer

32333 Reputation

29 Badges

19 years, 325 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Thanks, Paulina. We figured out the method of converting to an Atomic Identifier, for non-1D-parsable 2D objects. But ?plot,typesetting also mentions using left-name-quotes to do this (in the same paragraph that mentions atomic identifiers, for such problematic 2D objects). I couldn't get the left-quotes to do it. Could you give an example?
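(For what it's worth, here is the sort of thing I can get to work via the Atomic Identifier route. The backquoted `#msup(...)` name below is just a hypothetical example of the kind of name that Convert-to-1D produces from a toggled 2D object; it's not from any particular input of mine.)

plots:-textplot([3, 4, `#msup(mi("p"),mo("&dagger;"))`]);  # atomic-identifier name rendered as typeset text

acer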
From the Maple program's top menubar, choose View -> Palettes -> Arrange Palettes. That should pop up a new window, which lets you configure which palettes will show in the side pane. Remember, after you create the 2D object, toggle it as Atomic Identifier before you Convert it to 1D notation. acer
I also tried to enter the 2D Math expression directly into the session, surround it with left-name-quotes, and use that directly within the textplot() call, much as the help-page suggests.

At first I used 2D Math left-quotes, i.e., inside the object. That ended up producing something with Typesetting errors embedded within it, which is weird. When trying to use the left-quotes, I also produced a plot whose structure was,

PLOT(TEXT([3., 4.], _TYPESET()))

despite there being some nice 2D typeset input in the actual call.

Then I tried with 1D notation left-quotes, just outside but around the 2D Math input object. That is, the quotes were red instead of black. That resulted in a textplot which contained, literally,

`LinearAlgebra:-HermitianTranspose(p[gt]);`

The thing inside the textplot call was all nicely typeset, though. This was the weirdest.

I think that precise instructions would help the user most here. Precise sets of key-strokes, named in the help-pages, seem almost necessary to provide satisfaction. Also (and this may not be true of this situation; I don't know yet, about the left-quotes), anything which can only be done by using the mouse is not best implemented. acer
I also tried to cut and paste the 2D nicely-typeset input object, surrounded by name-quotes (i.e., left-single quotes, not uneval right-single quotes), for use in the typeset() call within textplot(). That did not work for me. The help-page ?plot,typesetting indicated that those name quotes should have worked, though, if I understood it correctly.
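(In contrast, building the call programmatically with typeset() does work for me in 1D notation -- a minimal sketch, with the expression inside typeset() chosen just for illustration:)

plots:-textplot([3, 4, typeset("value: ", sqrt(x^2 + 1))]);  # typeset() mixes strings and math

acer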
The original poster asked about the determinant, given that the entries were multivariate expressions involving floats. It wasn't my suggestion to compute a determinant for that; rather, it was the original poster's. Now, it may well be that the poster wanted to know because the underlying motivation was a wish to gain insight into whether the system was full rank (in some sense), or some such similar thing. In that case, yes, of course it would be much more appropriate to compute singular values than it would be to try to compute the determinant, *if* the entries were purely numeric floats. But the entries are stated as *not* being purely numeric -- there are multiple symbols present. For a multivariable system, trying to compute the singular values is even harder than trying to compute the determinant. So that's not a very good suggestion, purely in itself. Maybe the original poster can tell us why she wants to compute this determinant. Perhaps there's another way to get at her end goal.
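(For the purely numeric float case, the singular-value route is easy -- a minimal sketch, with the 3x3 float Matrix below being just a hypothetical stand-in:)

A := Matrix(3, 3, (i, j) -> evalf(1/(i + j - 1))):     # hypothetical numeric Matrix
LinearAlgebra:-SingularValues(A, 'output' = 'list');   # tiny values suggest rank deficiency

acer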
If you are feeling brave, you might try the following. Suppose that your Matrix has been assigned to m.

M := map(convert, m, rational):
Normalizer := x -> x;  # the brave bit
(p, u) := LinearAlgebra:-LUDecomposition(M, 'output' = ['P', 'U']):
sol1 := LinearAlgebra:-Determinant(p) * mul(u[i, i], i = 1 .. 13):

What I've done above is similar to Gaussian elimination. The determinant will then be the product of the diagonal entries of u (which is the result of that process), further corrected with the sign (+/-1) by using the determinant of the "pivot matrix" P.

You can play around with it a bit, to try to check correctness. Some possibilities for that are below.

Idea 1
------
Take M := M[1..7, 1..7], say, and repeat the above calculation. You may be able to compute the determinant of that smaller M using LinearAlgebra:-Determinant. If you're lucky, that won't be a result of zero.

sol2 := LinearAlgebra:-Determinant(M):
radnormal(sol1 - sol2);  # trying to test sol1 = sol2

If sol1 and sol2 agree, that might give you confidence in the sol1 obtained for the full-sized M.

Idea 2
------
Alternatively, you could try to instantiate sol1 and sol2, computed for the smaller-sized M, at specific (but randomly chosen by you) floating-point values for a, b, and c. I.e.,

evalf[50]( eval(sol1, [a = 0.05, b = 0.03, c = 0.07]) );
evalf[50]( eval(sol2, [a = 0.05, b = 0.03, c = 0.07]) );

Idea 3
------
Or, you could take the sol1 computed for the full-sized M, instantiate it at some values as in Idea 2, and compare with this:

LinearAlgebra:-Determinant( evalf[50]( eval(m, [a = 0.05, b = 0.03, c = 0.07]) ) );

That is the determinant of the Matrix obtained by first instantiating Matrix m with those same values. In other words, you are here comparing these two things:

A) the formula sol1 for the determinant of m, then instantiated at certain values.
B) the Matrix m, instantiated at those same values, of which you then take the determinant.

Again, if those two things agree, for a variety of triples of floating-point values for a, b, and c, then you may gain some confidence in the answer sol1.

I think that I'd prefer Idea 3 over Ideas 1 and 2, which are a bit shaky as they use only an upper-left portion of Matrix m.

The danger of setting Normalizer to x->x is that a pivot might be selected during the (Gaussian elimination) LU computation that is actually zero, even though it doesn't immediately appear to be zero. If that were to happen, then the result would be invalid, and hidden divisions by zero would be contained in the results. Another potential problem is that sol1 might be a very much larger expression than its most simplified form, when computed with the Normalizer as x->x.

It's not so uncommon for multivariate problems with floating-point coefficients to present Maple with difficulties. Converting those floats to rationals is one way to try to handle such problems.
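(A self-contained illustration of the whole recipe, on a small hypothetical symbolic Matrix, including an Idea 3 style check:)

m := Matrix([[a, 0.3, b], [0.1, a + b, 0.2], [b, 0.5, a]]):  # hypothetical 3x3 stand-in
M := map(convert, m, rational):
Normalizer := x -> x:  # the brave bit, as above
(p, u) := LinearAlgebra:-LUDecomposition(M, 'output' = ['P', 'U']):
sol1 := LinearAlgebra:-Determinant(p) * mul(u[i, i], i = 1 .. 3):
# Idea 3 check: both of these should agree (up to roundoff)
evalf[50]( eval(sol1, [a = 0.05, b = 0.03]) );
LinearAlgebra:-Determinant( evalf[50]( eval(m, [a = 0.05, b = 0.03]) ) );

acer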
That article talks about the possible need for security when running software. It's discussing older software, without the most recent security patches. But that just makes me think about a piece of software's current security state of affairs. I've recently considered setting up a separate unix/linux account, just for the purpose of my running Maple. This should add security, as Maple sessions run under that new account shouldn't be able to affect my own true user account and its files. I worry most about .mw worksheets that I pick up off the web. What do you think, unnecessary overkill? acer
> f := proc() return args; end proc:
> stopat(f):
> f(x-x=0);
f:
   1*  return args

DBG> args
Execution stopped: Stack limit reached.

acer
It may be that, for procedural form, there can be problems internal to the solvers when they come to compute numerical estimates of the gradient of the objective. Supplying the means to compute the gradient numerically is a possible workaround for such an issue.

The above did not manifest itself in D.J.Keenan's suggested usage, as in that case the first argument immediately evaluates to an expression (for which Maple can compute a formulaic gradient, symbolically).

So, one might wonder, how hard is it to supply the objective gradient oneself, given that the objective is taken as a "black box"? It's not so hard. Matrix form input, as the Optimization package defines that concept, is probably the easiest way to do it. I realize that the original posting mentioned procedural form input, but it should be possible to set up Matrix form input programmatically.

Below I'll show my attempt at solving with a supplied objective gradient, for the original example. I'm doing this despite the fact that it's clear that the `objective` procedure is equivalent to a simple expression. If the internal computation of derivatives numerically is really what's at fault underneath, then this technique may be necessary to not see *any* procedural form multivariable problem stall (by mistakenly computing that the gradients are all zero).

Here's the example, in full, using this approach. (I made a minor edit to the `objective` procedure, where it looks as if the assignment N := nargs was omitted.)

objective := proc(x1, x2, x3, x4, x5, x6, x7, x8, x9)
  local phi, i, p, N;
  N := nargs;
  phi[0] := Pi/2;
  for i from 1 to nargs do
    phi[i] := args[i];
  end do;
  p := seq((sin(phi[i-1]) * mul(cos(phi[j]), j = i .. N-1))^2, i = 1 .. N);
  return p[7];
end proc:

some_point := [seq(RandomTools[Generate](float(range = 0 .. evalf(2*Pi),
                                               method = uniform)), i = 1 .. 9)]:

p := proc(V)
  global objective;
  objective(V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]);
end proc:

objgrd := proc(V, W)
  W[1] := fdiff(objective, [1], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[2] := fdiff(objective, [2], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[3] := fdiff(objective, [3], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[4] := fdiff(objective, [4], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[5] := fdiff(objective, [5], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[6] := fdiff(objective, [6], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[7] := fdiff(objective, [7], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[8] := fdiff(objective, [8], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  W[9] := fdiff(objective, [9], [V[1], V[2], V[3], V[4], V[5], V[6], V[7], V[8], V[9]]);
  NULL;
end proc:

bl := Vector(9, fill = 0.0, datatype = float):
bu := Vector(9, fill = evalhf(2*Pi), datatype = float):

Optimization[NLPSolve](9, p, [], [bl, bu], maximize = true,
                       objectivegradient = objgrd,
                       initialpoint = Vector(some_point),
                       method = sqp);  # or method=modifiednewton
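(As an aside, the nine repetitive fdiff assignments in objgrd could be written more compactly as a loop -- an equivalent sketch:)

objgrd := proc(V, W)
  local i;
  for i from 1 to 9 do
    W[i] := fdiff(objective, [i], convert(V, 'list'));  # i-th partial at the point V
  end do;
  NULL;
end proc:

acer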
You could first try units of days (if you hope to integrate out as far as a year). Then, if you have some success, you could try something finer. Trying to integrate out as far as a year, in seconds, with a requested tolerance of 10^(-8) or so, is asking for a lot.

Creating one system out of all three that you had originally seems to me to be a good idea, possibly even necessary. As for method, I would start out with rkf45, and then if you find that you really need a stiff solver then perhaps move to rosenbrock (or gear?). I seemed to be able to integrate out as far as a few thousand seconds, using your original setup (extremely fine scale, high requested accuracy, three separate systems) but with method=rkf45.

Using the three-system setup that you had, the stiff solver method=rosenbrock will probably require procedures like `diff/X1` (where that is formed by requesting more from the listprocedure output). I suspect that this can be accomplished something like,

dX1 := subs(sol2, diff(x(t), t)):
`diff/X1` := proc() global dX1; dX1(args[1]) end proc:

But maybe that's not right. By doing that for all of dX, dY, dZ, dZ1, dY1, dX1, I was able to get dsolve/numeric to produce sol3. But I wasn't able to get it to produce numbers without first running out of resources. I concluded this was due to the three-system approach and the very high accuracy requirements.
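(To illustrate the units-of-days suggestion: rescaling the independent variable can be done with PDEtools:-dchange -- a minimal sketch on a hypothetical single ODE in t seconds:)

deq := diff(x(t), t) = -k * x(t):                  # hypothetical ODE, t in seconds
PDEtools:-dchange({t = 86400*tau, x(t) = X(tau)},  # 86400 seconds per day
                  deq, [tau, X]);

acer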