acer

32333 Reputation

29 Badges

19 years, 325 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

There may be something even simpler, but:

A := Array(1..2,1..3,1..4,(i,j,k)->i*j-k^2);
B := Array(seq([rtable_dims(A)][i],i in [1,3,2]),(i,j,k)->A[i,k,j]);

acer
Array, Matrix, and Vector are the three flavours of rtable. They can also be formed using the rtable() constructor; see ?rtable . See also ?rtable_indexfcn for documentation of the indexing function capabilities.

An Array may have more than 2 dimensions, and its index ranges do not need to start from 1 (or even to be positive). Eg,

Array( -2..-1, 4..7, -3..3 );

Arrays are not accepted by LinearAlgebra routines. There are some useful commands for manipulating Arrays in the ArrayTools package. The arithmetic operators like `+` can handle them, but there are some differences. For example, `.` will do an elementwise product rather than the usual 2D Matrix linear algebra multiplication. Eg, compare,

A := Array(1..2,1..2,(i,j)->i);
A.A;
A := Matrix(2,2,(i,j)->i);
A.A;

The Matrix and Vector (Vector[row] or Vector[column]) objects are accepted by LinearAlgebra routines. The ArrayTools package also works with these. (Maybe it should be called the RTableTools package..) The `.` operator acts on these in the usual linear algebra senses. There is a distinction between Matrices with a single row or column and Vectors (unlike in Matlab). One way to consider them is to view Vectors as elements of vector spaces and to view Matrices as mappings (which is a rationale for that distinction). 1x1 Matrices and Vectors are not scalars (also unlike Matlab?).

Matrices and Vectors are for doing linear algebra. Arrays are useful as mutable data structures with fixed sizes (and for which some top-level operators work nicely). Some packages export their own versions of similar objects, eg. VectorCalculus exports its own Vector constructor; see ?VectorCalculus,Details .

The term `rtable` stands for "rectangular table". It was introduced in Maple 6 so that hardware-precision rtables could have their data stored as contiguous arrays in memory. The purpose of that was to allow their data to be passed to external routines for computation with compiled code. (The address of the start of the data may be passed as a pointer, with no copying required.) Hence rtables with floating-point datatypes are used this way in much of LinearAlgebra and ArrayTools.

acer
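As a small sketch of two of the points above (the rtable() constructor, and indexing functions such as shape=symmetric), something like the following should work; the particular sizes and values here are my own choices for illustration, not from the original discussion:

```maple
# Build a Matrix via the low-level rtable() constructor directly.
M := rtable(1..3, 1..3, subtype = Matrix, datatype = float[8]):

# An indexing function (here, symmetric shape) governs how entries
# are stored and retrieved; see ?rtable_indexfcn.
S := Matrix(3, 3, shape = symmetric):
S[1, 2] := 5:
S[2, 1];   # the symmetric indexing function makes this 5 as well
```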
Why not permute directly with the Array constructor?

B := Array(rtable_dims(A),(i,j,k)->A[i,k,j]);

acer
Just looking for clarification. Your post's title had the names capitalized, but the post did not. acer
Thanks for those details. Do you suspect that, even if you find reasonably good defaults for those magic numbers, some user-based control via options may be necessary for some problems to be solved? Will there be some mechanism for repeated solving using different RHS's (but *not* all done at once)? I ask because depending on the method this can require saving the factorization or preconditioning. There is also a real floating-point sparse direct solver available from LinearAlgebra[LinearSolve] with the method=SparseLU option. The userinfo messages indicate that this uses NAG routine f01brf to factorize and routine f04axf to do the subsequent solving for a given RHS. There is an indication that the NAG f01brf approach is based upon the somewhat well-known MA28 FORTRAN code. I don't have much experience with either SuperLU or MA48 (the successor to MA28?). acer
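To make the SparseLU remark above concrete, a minimal sketch of calling that sparse direct solver might look like the following; the particular Matrix entries are my own invented example, and the float[8] datatype with sparse storage is what routes the call to that method:

```maple
with(LinearAlgebra):
infolevel[LinearAlgebra] := 1:   # show the underlying NAG routine names

# A small sparse system, given as (i,j)=value equations.
A := Matrix(4, 4, {(1,1)=4.0, (2,2)=3.0, (3,3)=2.0, (4,4)=1.0, (1,4)=1.0},
            datatype = float[8], storage = sparse):
b := Vector(4, [1.0, 2.0, 3.0, 4.0], datatype = float[8]):

x := LinearSolve(A, b, method = SparseLU);
```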
You can still access the global name using the :- notation, even when the "same" name is used as a procedure parameter. For example, to distinguish the local and global names,

p := proc(array::posint) local f; f := :-array(1..1,[p]); end proc;
p(3);
p := proc(array::posint) local f; f := array(1..1,[p]); end proc;
p(3);

It is certainly not wrong to code it as you did, and I didn't mean to imply that. Some people will like the fact that the namespace can be used in this way, and that `array` is available for such use. Personally, I find it a bit unnecessary, and suspect that it might confuse some newer users.

acer
In the floating-point case, solving large sparse linear systems can be difficult. The existing solvers for such floating-point systems (hooked into LinearSolve) accept some options -- to control things like the choice of method, amount of fill-in, tolerances, etc. See ?IterativeSolver . What sort of options do you expect that your rational solver implementation might need, if any? ps. The documentation makes it appear that only symmetric/hermitian floating-point sparse systems can be solved with iterative methods. Yet trying it with real nonsymmetric float[8] sparse Matrices shows via userinfo that there are also distinct nonsymmetric solvers. All the sparse floating-point methods show NAG function names through userinfo, ie. with infolevel[LinearAlgebra] > 0 . acer
You are right, of course, sorry. I was thrown by the use of protected top-level names as parameters for your indexing functions. acer
I would still prefer to use rtable_scanblock over `index/XXX`. The posted method using `index/makeIndex` produces an array, and an Array, and then finally a set. It may be that with rtable_scanblock one can produce only a table (array) and then finally a set. Both incur the cost of the function calls for each entry, of course, either of `index/makeIndex` or of checkindex.

FindIndex := proc(comparison, A)
  local checkindex, G;
  checkindex := proc(val, ind)
    if comparison(val) then G[ind] := ind end if;
    NULL;
  end proc;
  rtable_scanblock(A, [rtable_dims(A)], checkindex, []);
  eval(G);
end proc:

A := Array(1..3, 1..4, (i,j) -> i-j):
T := FindIndex(x -> type(x, nonnegint), A):
convert(T, set);

acer
You can use rtable_elems() to get the indices of an Array. How about,

seq([lhs(x)], x in rtable_elems(A));

It shouldn't be necessary to convert an Array to an array, in general. If it really couldn't be avoided for some task, then that'd be an indication that functionality was missing.

acer
Isn't that what implicitplot3d() is for? See ?plots,implicitplot3d acer
I think that the original poster was wanting the exact symbolic integral in particular.

What's the difference between evalf(int(...)) and evalf(Int(...)), you might wonder, if they both returned a floating-point result here? Well, the first of those tries to do an exact integration, just as int() does, and then evaluates the result under evalf. The second, evalf(Int(...)), uses numeric methods and may not attempt to compute the exact symbolic integral at all. (It can however do some fancy symbolic analysis, to find singularities, etc.) But when int() returns unevaluated, because it does not compute or find an integration result, then subsequently hitting that unevaluated result with evalf() will result in doing the numeric computation all the same. If int(foo) returns unevaluated, then evalf(int(foo)) should give you the same thing as evalf(Int(foo)), except that it may take longer, possibly considerably longer, while it tries and fails to do the exact symbolic integral.

There are cases, too, where evalf of a formulaic symbolic exact integral result (at default Digits precision) will have measurable round-off error, and it could also be that, for the same integrand, evalf(Int(...)) will produce a more accurate result. That's certainly not always the case. I'm not trying to muddy the waters by mentioning that, but it's a good idea to find a way to check results.

acer
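A hedged illustration of the distinction described above; the integrand is my own choice, picked because int() is likely to return it unevaluated:

```maple
f := exp(-x^3)*sin(x):

r_exact := int(f, x = 0..1);   # symbolic attempt; may come back unevaluated
evalf(r_exact);                # then falls through to numeric evaluation
evalf(Int(f, x = 0..1));       # goes straight to numeric quadrature
```

The last two calls should agree, but the middle pair pays for the failed symbolic attempt first.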
Which interface are you using with Maple 11? Classic or Standard? If it's Standard, then which "mode", Document or Worksheet? acer