acer

32333 Reputation

29 Badges

19 years, 325 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

There's nothing wrong with your syntax, quantum. He did not place uneval quotes around the argument,

    objective(x1, x2, x3, x4, x5, x6, x7, x8, x9)

and the result was that this evaluated immediately to

    sin(x6)^2*cos(x7)^2*cos(x8)^2

which is in fact what got used as the objective for him. If the uneval quotes are added, so as to instead pass

    'objective'(x1, x2, x3, x4, x5, x6, x7, x8, x9)

then the same warning about "no iterations performed" shows up, along with (I believe) an inferior answer.

So, what to do? I would suggest supplying the derivatives (in procedural form) and specifying a non-default method (as an option).

acer
I tried to watch Maple do its work here, but...

    > restart:
    > stopat(`PD/PD`,22):  # Maple 11.00
    > D(x->sin(x));
    []
    `PD/PD`:
      22*  body := pointto(disassemble(addressof(p))[6]);
    DBG> next
    Error, (in D/procedure) invalid keyword expression

OK, I can imagine that pointto() might not work properly within the debugger. So I trace `PD/PD` instead. It returns x->cos(x) to `D`. OK. So, continuing to watch D at work, it looks like it's the call reduce_opr(eval(d,1)) which turns x->cos(x) into cos.

How dangerous would it be to replace one's D with a version which did not call reduce_opr?

acer
Here's what I thought, when I first read this. (It seems similar to Axel's thoughts, too...)

The given curve describes a circle, not a line. Any line radiating from the center of the circle will intersect that circle perpendicularly. This is true, in particular, of the line which emanates from the circle's center and passes through the point in question.

So, find the line which passes through the circle's center and the given point. Finding the equation of the line which passes through two distinct points is basic.

Calculating the tangent line, and its slope, the reciprocal, etc, seems needlessly difficult. I would guess, from the easiness of the question, that such a complicated approach is beyond the poster's current knowledge.

On the other hand, perhaps the fact that every radius intersects the circle perpendicularly is not supposed to be known by the original poster. Maybe that's unclear.

acer
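The radius approach above can be sketched in a few lines of Maple. The circle center and point used here are made-up values purely for illustration; the original question's data wasn't quoted in this reply.

```maple
# Hypothetical data: center (2,3) and a point (5,7) on the circle.
a, b := 2, 3:       # assumed center of the circle
px, py := 5, 7:     # assumed point on the circle
m := (py - b)/(px - a):          # slope of the line through both points
line_eq := y = m*(x - a) + b;    # the perpendicular line, in point-slope form
```

Since this line passes through the center, it meets the circle at right angles, so no tangent-slope or negative-reciprocal computation is needed.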
If you suspect that your problem contains expressions or procedures whose floating-point evaluation shows numerical issues when computed at the default working precision of Maple, then consider raising that setting. The same goes for an optimization problem which, just by eye-balling its set-up, might look badly scaled. Or you could try to improve the scaling of the formulation.

The numerical issues might be due to round-off error accumulated during the floating-point evaluation of Qmess at numeric values of (lam,a). For "non-atomic" expressions such as the formula in the body of Qmess, Maple's numeric computation model will not promise a particular level of accuracy. Rather, Maple will use a fixed working precision (possibly with guard digits for some subcalculations).

So, try raising the value of the Maple environment variable Digits. You might try 14 first, then move on to a value of 20, and then perhaps double after that. You might also try to conceive of a means of corroborating your computed results within Maple itself (usually a good idea, with most numerical computations, in most any program).

If you find that higher working precision makes all your calculations unbearably slow, then you could try to find out just which parts of the calculation require the higher setting in order to get a satisfactory accuracy. For example, it may be that only Qmess requires it. You could try reformulating Qmess for numeric evaluation, or bump up Digits only within Qmess. You can also look at `evalf` as a means to do floating-point evaluation at an alternate fixed precision.

You might also wish to read about the options for NLPSolve, in particular the 'optimalitytolerance' option whose default depends on the value of Digits.

acer
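As a minimal sketch of bumping Digits only inside the one procedure that needs it (the body of Qmess below is a stand-in, since the actual formula isn't shown in this reply):

```maple
# Digits is an environment variable, so an assignment made inside a
# procedure is automatically undone when the procedure exits.
Qmess := proc(lam, a)
  Digits := 20;                  # raised working precision, local in effect
  evalf( exp(lam) - a*lam );     # placeholder formula (assumed)
end proc:
```

Alternatively, a single evaluation can be done at an alternate precision with the indexed form, e.g. evalf[20](expr), without touching Digits at all.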
Maple is (mostly) interpreted, rather than compiled. The Maple kernel interprets the source of the routines which are stored in its system archives (libraries). There are some very nice benefits for us all, partially brought about by this aspect of Maple.

One such benefit is the runtime debugger; see the help-page ?stopat for some introduction.

Another such benefit is that the contents of Maple's own library functions can be viewed by the user. For example,

    showstat(discont);  # show with line numbers
    interface(verboseproc=3):
    eval(discont);      # lineprint the routine

One can even view routines local to (but not exported from) a module, which are normally hidden. Eg,

    restart:
    showstat(Statistics:-Visualization:-BuildPlotStruct);
    kernelopts(opaquemodules=false):
    showstat(Statistics:-Visualization:-BuildPlotStruct);

Some compiled routines, built into the kernel, are not accessible. Eg,

    showstat(print);
    showstat(irem);

While there is some variation in quality due to the long span over which Maple has evolved, the quality is rather good, and studying it can help one learn some math as well as the Maple language.

acer
One might call it isolating for an expression.

Now, it may matter to you whether it is allowed to introduce any subexpressions of the expression you are trying to isolate. Consider this example, in which Vo/Vs does not appear explicitly.

    eq := (Vs - Vo) = R;

If both the lhs and rhs of eq are divided by either Vs or Vo then the subexpression Vo/Vs (or its reciprocal Vs/Vo) will manifest itself. I can imagine that you might say yes or no to allowing that, depending on what you want to do with the answers.

So, would you say that either of these next two are valid examples of the sort of solution that you're after, given example eq above?

    try1 := Vo/Vs = Vo/(R+Vo);
    try2 := Vo/Vs = (-R+Vs)/Vs;

Alternatively, you might not only disallow the above manipulations, but also discount recognition of the reciprocal Vs/Vo.

acer
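If the division is considered allowable, one way to obtain a result like try2 programmatically is to give the ratio a new name (r here, an invented name for illustration) and solve for it:

```maple
eq := (Vs - Vo) = R:
# Substitute Vo = r*Vs, where r stands for the ratio Vo/Vs, then
# solve for r.  This reproduces try2's right-hand side, (Vs - R)/Vs.
r_sol := solve( eval(eq, Vo = r*Vs), r );
```

This sketch bakes in the decision to allow dividing through by Vs; it won't produce a form like try1, which comes from dividing by Vo instead.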
    A := Matrix(7,7,[[1,-5,0,0,0,0,0], [0,1,-5/4,0,0,0,0],
                     [0,0,1,-5/4,0,0,0], [0,0,0,1,-5/4,0,0],
                     [0,0,0,0,1,-5/4,0], [0,0,0,0,0,1,-5/4],
                     [0,0,0,0,0,0,1]]):
    B := Vector(7,[1,1/4,1/4,1/4,1/4,0,204]):
    with(LinearAlgebra):

    # Here's what happened at some of the steps.

    B[7]:=B7:                # generalization, instead of a particular B[7]
    sol:=LinearSolve(A,B);   # solve that system
    eqs:=seq(sol[i],i=1..7); # not really necessary; same as sol[i], but shows them

    # You want to know which values of B7 give integer values to all the
    # members of that sequence. Each eqs[i] must be equal to a (very
    # possibly different) integer. Integer solutions to the equation
    # x=eqs[i] are sought. The isolve routine is for computing such
    # things, and it allows one to specify a name in the event that the
    # solutions are parametrized.
    # Take the equation x=eqs[3] for example.
    #> isolve(x=eqs[3],Z||3);
    #          {x = 499 + 625 Z3, B7 = 204 + 256 Z3}
    # The equation in the above result which specifies x = 499 + 625*Z3
    # is of no immediate interest. If you know B7 then you can find all
    # the x's using the eqs sequence, or using sol.
    # And yes, the key is to extract the equation of the form B7=....
    # from the above result. Fortunately, Maple's `type` routine can
    # check for such patterns easily. One of those two equations will be
    # found to be of the right type, and one will not.
    #> type( x = 499 + 625*Z3, identical(B7)=anything );
    #          false
    #> type( B7 = 204 + 256*Z3, identical(B7)=anything );
    #          true

    # Generate all those equations, including those with B7 given by
    # parametrized equations, at once.
    seq(isolve(x=eqs[i],Z||i), i=1..7);

    # Extract from that only the equations of the form B7 = ...., and do
    # it all at once. The `map` routine lets it all be done at once. The
    # % token is Maple's programmatic way to allow access to the previous
    # result.
    # I put {} around % so that there would be something for `map` to get
    # a grip on. Otherwise, `map` would see all the sequence at once and
    # wouldn't know which arguments were which.
    # The above is a common trick in Maple. Suppose that you have a
    # routine, `foo`, which can take either two or three arguments. How
    # do you know whether the call foo(a,b,c) was meant to be
    # foo( a, (b,c) ), a two-argument call with a sequence of two things
    # b and c as the second argument, *or* foo(a,b,c), a three-argument
    # call? How convenient if you can instead do it as foo(a,{b,c}), when
    # it's clear what the second argument is. So, passing sequences is
    # generally ambiguous. Better to somehow encapsulate any sequence
    # that you want to get passed as a single argument.
    # Encapsulating a sequence in a list with [] or in a set with {} is a
    # pretty common Maple trick.
    # In my actual case, `map` is going to return a set if I pass it {%}
    # instead of the sequence which is what % actually was. I used `op`
    # on the result because I want the contents of the result set and not
    # the actual set.
    map(y->op(select(x->type(x,identical(B7)=anything),y)),{%}):

    # So now there are seven equations of the form B7 = ...., for your
    # particular Matrix A and Vector B. There are other examples of A and
    # B for which the seven equations B7=... would contain an x. In such
    # a case, this crude approach would break down and not work without
    # some extra effort. As it is, the example is simple enough.
    # All seven of these equations B7=.... must have integer solutions,
    # for each solution to your problem. So I called isolve on them all
    # at once, because they must *all* hold at once for the B7 to be a
    # valid solution value.
    isolve(%);

    # The above call gives a solution in which all the variables are
    # parametrized. Only the B7 or Z7 part of the result is of interest.
    # So extract it, once again using `select` and % to denote the
    # previous result.
    select(x->type(x,identical(Z7)=anything),%);

    # So, that might explain some of my thinking at the time.

    # As for your other questions, you could consider these results.
    > S1 := {u = v, a = b, f = g, a = g}:
    > select(x->type(x,identical(a)=anything),S1);
                              {a = b, a = g}
    > op(select(x->type(x,identical(a)=anything),S1));
                               a = b, a = g
    > S := {u=v,a=b,a=g,f=g}, {q=r,a=d,a=p,j=k}:
    > map( y->op(select(x->type(x,identical(a)=anything),y)), {S});
                        {a = b, a = g, a = d, a = p}

    # At an even lower level,
    S1;
    op(S1);

    # I hope that helps some.

acer
Your Matrix and Vector are given by,

    A := Matrix(7,7,[[1,-5,0,0,0,0,0],
                     [0,1,-5/4,0,0,0,0],
                     [0,0,1,-5/4,0,0,0],
                     [0,0,0,1,-5/4,0,0],
                     [0,0,0,0,1,-5/4,0],
                     [0,0,0,0,0,1,-5/4],
                     [0,0,0,0,0,0,1]]);
    B := Vector(7,[1,1/4,1/4,1/4,1/4,0,204]);

You may be able to get rid of some overhead, and make the thing faster, by doing the most expensive part of the linear solving just once. The first and more expensive bit can be the LU decomposition of the Matrix. But if you compute that part first, just once, then what's left within the loop is the less expensive back- and forward-substitution done for each different B.

    with(LinearAlgebra):
    (p,lu):=LUDecomposition(A,output='NAG'):
    for i from 4 by 4 to 2000 do
      B[7] := i;
      X := LinearSolve([p,lu], B);
      if not type(X,'Vector'(integer)) then next;
      else print(X^%T);
      end if;
    end do:

Faster still, one could compute the symbolic answer, and then within the loop merely instantiate at values for the parameter. Ie,

    B[7]:=B7:
    X:=LinearSolve(A,B):
    for i from 4 by 4 to 2000 do
      X_inst := eval(X,B7=i):
      if not type(X_inst,'Vector'(integer)) then next;
      else print(X_inst^%T);
      end if;
    end do:

On the other hand, the example seems simple enough to grind out an answer. Maybe there are things in the numtheory package to do this, but that's not my field. So I take a crude approach below. (If anyone wants to teach me some number theory here, that could be fun.)

    > B[7]:=B7:
    > sol:=LinearSolve(A,B):
    > eqs:=seq(sol[i],i=1..7):
    > seq(isolve(x=eqs[i],Z||i), i=1..7):
    > map(y->op(select(x->type(x,identical(B7)=anything),y)),{%}):
    > isolve(%);
    {Z3 = 4 _Z1, Z7 = 204 + 1024 _Z1, Z4 = 3 + 16 _Z1, Z6 = 51 + 256 _Z1,
     Z1 = _Z1, Z5 = 12 + 64 _Z1, Z2 = _Z1, B7 = 204 + 1024 _Z1}
    > select(x->type(x,identical(Z7)=anything),%);  # the answer
    {Z7 = 204 + 1024 _Z1}

So it looks like the solution is just 204 + 1024*n .

acer
Some experiment seems to show that the bulk of the overhead may be in analysis of the problem type. If I replace,

    result := Optimization:-Maximize(eq,Q=0..1);

with,

    result := Optimization:-NLPSolve(eq,Q=0..1,maximize=true,method=quadratic);

then the computation to obtain the plot gets about 10 times faster and also uses less memory. In full,

    Payoff := proc(Q, s, p, pa)
      1 - 1/24*s^3 - 1/2*s*min(1, Q + 1/2*s)^2 + 1/4*s^2*min(1, Q + 1/2*s)
      - 1/3*min(1, Q + 1/2*pa)^3 + 1/3*min(1, Q + 1/2*s)^3
      + Q*min(1, Q + 1/2*pa)^2 - Q*min(1, Q + 1/2*s)^2
      - Q^2*min(1, Q + 1/2*pa) + Q^2*min(1, Q + 1/2*s)
      + s*Q*min(1, Q + 1/2*s) - 1/2*pa + 1/2*pa*min(1, Q + 1/2*pa)^2
      + 1/4*pa^2 - 1/4*pa^2*min(1, Q + 1/2*pa) - s*Q + pa*Q
      - pa*Q*min(1, Q + 1/2*pa) - p*Q;
    end proc:

    Payoffeq:=Payoff(Q, s, p, pa);

    Qopt2 := proc(sval, pval, paval)
      local eq, result;
      global Payoffeq;
      if not type({args},'set'(numeric)) then
        return 'procname'('args');
      end if;
      eq := subs({s=sval,p=pval,pa=paval},Payoffeq);
      result := Optimization:-NLPSolve(eq,Q=0..1,maximize=true,method=quadratic);
      if type(result,'list'({numeric,list})) then
        return rhs(op(result[2]));
      else
        return 'procname'(args);
      end if;
    end proc:

    Qopt2(.2, 0.1e-1, 1);
    plot('Qopt2'(s, 0.1e-1, 1),s=0.1..0.5);

I traced through the code in the debugger, when Optimization:-Maximize was used instead of Optimization:-NLPSolve with a supplied method. It looked to me as if the method chosen internally was 'default', which later got used as 'quadratic' and not as 'branchandbound'. Both separate timings and also userinfo corroborate the idea that method=branchandbound is not used when Optimization:-Maximize is used here. The userinfo suggests that method=quadratic is selected. So that means that the problem analysis and method selection overhead is very large, as forcing method=quadratic and calling NLPSolve directly shows a great performance improvement.

acer
Thanks for pointing out that error, Paulina. One thing that I found slightly disappointing was that, using Optimization, the final plot took more resources (time, and memory judging by all the gc calls). I haven't checked to see whether using a "solution module" is leaner. acer
An array is a Maple table data structure which is indexed in a particular way, ie. by integers. A vector and a matrix are two further specializations of arrays which have 1 and 2 dimensions respectively and whose indexing starts at the value of 1. The older, deprecated linalg package is for dealing with the older, deprecated vector and matrix objects. The evalm command is for evaluating vector and matrix expressions (formulae, say) because those objects have what is called "last name evaluation". See ?last_name_eval.

The rtable is a newer sort of general data structure. One instance of it is the newer Array, which is indexed by integer values. The newer Vector and Matrix objects are 1 and 2 dimensional instances of rtable whose indexing starts at the value of 1. The LinearAlgebra package is for dealing with Vectors and Matrices. Some top-level arithmetic operators such as `.`, `+`, `^`, etc, also know how to deal with these objects. The rtable objects do not have the "last name evaluation" property. There is no need to use evalm on them, and doing so is just generally inefficient.

An important difference is that Vector objects have an orientation (as a row or as a column), which vector objects lack. Maple is picky about the orientation, insofar as it relates to things like the validity of multiplication.

The two families, rtable and array based objects, should almost never be used in a mixed way.

Your example could also be written this way,

    A := Matrix(7, 7, [[1, -5, 0, 0, 0, 0, 0],
                       [0, 1, -5/4, 0, 0, 0, 0],
                       [0, 0, 1, -5/4, 0, 0, 0],
                       [0, 0, 0, 1, -5/4, 0, 0],
                       [0, 0, 0, 0, 1, -5/4, 0],
                       [0, 0, 0, 0, 0, 1, -5/4],
                       [0, 0, 0, 0, 0, 0, 1]]);
    B := Vector(1 .. 7, [1, 1/4, 1/4, 1/4, 1/4, 0, 204]);
    A^(-1) . B;

You might also look at ?LinearAlgebra[LinearSolve] , in contrast to using the above method in which B is premultiplied by the inverse of A.

As an illustration of the orientation issue, using Matrix A and Vector B as above, compare,

    1/A . B;

with,

    B/A;  # or B . 1/A;

You can use the type() command to distinguish programmatically between, say, Matrices and matrices. Eg, type(A,Matrix) or type(A,matrix). You can also use the convert() command to convert from one to the other. Eg, convert(A,matrix).

You can use the syntax A^%T to transpose the Matrix A. You can also use this syntax to produce a copy of a Vector B with the other orientation.

acer