acer

MaplePrimes Activity


These are replies submitted by acer

Another trap: if a space is inserted between `int` and the opening bracket, then the parsed meaning is different in 1D and 2D input.
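A small 1D-input sketch of the difference (the 2D case has to be seen in the Standard GUI, where a typed space is taken as implicit multiplication):

int (x, x);    # in 1D input the space is insignificant, so this is still the call int(x,x) and returns (1/2)*x^2

# In 2D math input, typing that same space makes `int` get multiplied rather
# than applied, so the result is not the integral at all.

acer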
Yes, and there are connections between the authors. Aside from papers on calculating Pi, etc, there also seems to have been some contention in the past, though I know nothing about the truth of it all. acer
You have described close to the usual way in which LaTeX output, and Maple's latex() command, are supposed to work. But instead of having to copy the .sty files all over, it should be possible just to append the Maple installation's etc directory (/etc under the install location, or \etc I guess, if you use Windows) to the TEXINPUTS environment variable. That environment variable is used by latex and tex as a search path. acer
I hardly ever use any CAS other than Maple. I sometimes use Matlab. But even then, it is mostly for Simulink. I usually do that from Maple's own evalM() command. On occasion, I will benchmark parts of the two systems. I only very rarely use Mathematica. I do not like Mathematica's model for numerical computations. acer
Thanks, but I was hoping that finer, low-level detail would be instructional for us all. For example, given a Maple set S whose elements are ordered by memory address, adding a new element could be fast since no comparison with the current order is necessary. (That seems to me to be what you were driving at.) But I also wonder about the mechanics of accessing elements. What precisely happens when one issues `for i in S do ... end do`? Does memory address ordering help there, or hinder, and why? acer
It's just a guess, but I suspect that some internal routine with option `remember,system` may be caching some of the partial sum computations. But then, if a garbage collection takes place and its remember table is cleared, the full computation might take a different path. Just a wild guess, though. The bug doesn't seem intermittent, no? Are both 0 and -infinity wrong results, even if they alternate?
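For anyone unfamiliar with that mechanism, here is a minimal sketch (the procedure f below is made up, and is not the internal routine in question) of how `option remember, system` interacts with garbage collection:

f := proc(n) option remember, system; add(1.0/k, k=1..n); end proc:
f(100);   # computed, and the result gets stored in f's remember table
gc();     # a collection is allowed to clear the remember tables of `system` procedures
f(100);   # may be recomputed from scratch, if the cached entry was discarded

acer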
Relying on creation order to be the same as memory address order would of course be misguided. Once memory management kicks in, and garbage is collected and memory freed, new objects might have addresses all over the place. I wanted to give the original poster (a Maple newcomer) a simple explanation for the behaviour being seen by the students. I believe those examples might illustrate what was going on. Perhaps you might expound a bit, Bryan, on why Maple behaves this way. I.e., for performance in accessing objects, or to save memory by holding unique representations of some objects no matter how often they are input or arise, etc. We generally know some of the drawbacks of session-dependent ordering of results, but it'd be refreshing to hear some of the benefits. acer
Maple can sort objects (in sets, or sums) by memory address, or by the order in which they first appear in the session. Consider,

restart:
seq( addressof(t), t in [op(expand((x+y+z)^6))] );

restart:
60*x*y^2*z^3: 30*x*y*z^4: 6*y^5*z:  # notice now which terms appear first
expand((x+y+z)^6);

acer
What was the method that you used? It might be possible to speed up a high-precision "software" floating-point Matrix calculation by increasing the garbage collection frequency. See ?gc .
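A minimal sketch of what I mean, assuming a Maple version in which kernelopts(gcfreq) controls the number of words allocated between collections:

old := kernelopts(gcfreq):            # the current words-between-collections setting
kernelopts(gcfreq = iquo(old, 2)):    # halving it makes garbage collection run roughly twice as often

acer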
For the original poster's benefit: Values which approximate exact zero, but which have small nonzero components due to floating-point evaluation under fixed precision, can also be handled by judicious use of Maple's `fnormal` routine. For example,

`+`( seq( `if`(Im(x)=0.0, signum(Re(x)), NULL), x in map(fnormal, evals) ) );

Additional optional arguments to fnormal() allow one to fine-tune what is taken to be "close enough" to zero.
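If I recall the calling sequence correctly, the second argument to fnormal is the number of digits used in the comparison and the third is the threshold below which a float gets replaced by zero. A small made-up illustration:

fnormal( 2.0 + 3.0e-11*I, 10, 1.0e-9 );                       # the tiny imaginary part becomes 0.
simplify( fnormal( 2.0 + 3.0e-11*I, 10, 1.0e-9 ), zero );     # and the residual 0.*I gets removed

acer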
That's very nice. It makes me wonder about usability and nice defaults for students. The default value of the discont option to plot() could be reconsidered. An option to show asymptotes easily, in Maple's own plotting routines, would be nice. acer
That does sound more likely. I'm not so good a guesser. I wonder whether in future it might be possible to get jump discontinuities shown by dashed lines, like those that appear in so many texts. Is there already an easy way to get that, does anyone know? It might look nice if jumps from a curve to a finite point, or vertical asymptotes, could be shown as dashed lines through some nice option such as 'discont'='dashed'.
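In the meantime, one manual workaround is to overlay dashed vertical segments at the known asymptote locations. A sketch only (the cot example and the -3..3 window are just for illustration, and linestyle=dash presumes a version that accepts symbolic linestyle names):

plots:-display(
    plot( cot(x), x=-Pi..Pi, view=[-Pi..Pi, -3..3], discont=true ),
    plot( [[-Pi,-3],[-Pi,3]], linestyle=dash, colour=grey ),
    plot( [[0,-3],[0,3]], linestyle=dash, colour=grey ),
    plot( [[Pi,-3],[Pi,3]], linestyle=dash, colour=grey )
);

acer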
We can only guess what you think is wrong with it. Is it because there are no vertical bars indicating the jumps at -Pi and Pi? If so, you might try extending the range by a tiny amount on each side, so as to help Maple realize that there are jumps there.

plot( cot(x), x=-Pi-0.001..Pi+0.001, view=[-Pi..Pi,-3..3] );

acer
It might be useful to know how bad the conditioning can be. Does it get worse as the size of problems in your class grows? (For example, the condition number for solving linear systems with the so-called Hilbert Matrix grows with the size N.) I ask because, if you needed 128 decimal digits at smaller sizes, and if the conditioning gets worse as the size grows, then at size 7000 the conditioning might be very much worse. If you can generate problems from your "class" in sizes of multiples of 10, say, then you could set Digits high and look for a pattern. Something like this,
Digits := 500:
kernelopts(printbytes=false):
with(LinearAlgebra):
for k from 1 to 20 do
    # M := ... ;   construct the example here, however you do that, with size k*10 by k*10
    print( evalf[10]( ConditionNumber(M) ) );
end do:
If the above shows that the condition number grows with the size, then you might need a very high working precision to deal with the 7000x7000 case. The required working precision might be prohibitively high.

There are iterative sparse solvers for high precision floating-point linear systems, available in LinearSolve. The method='SparseIterative' option forces use of (only) such methods. Specifying the method means that it won't fall back to any other (much slower) method if it encounters difficulties. You might experiment with using that on a small problem in the same class; a rough sketch of such an experiment follows the list below. You could try a smaller system at both hardware and software precision, to compare the performance effect of switching to software precision. Be prepared to see a 15-20 times slowdown just by switching between Digits=14 and Digits=16, which crosses the hardware double precision cutoff. Further increases to Digits will result in a further (gradual but steady) slowdown. See the help-page ?IterativeSolver for more details on using this method. That page describes symmetric problems only, but experimentation reveals that there is also a nonsymmetric iterative solver.

If you go that route, be sure to create your Matrix directly with datatype=sfloat. You may need to use ImportMatrix to get the data into an sfloat Matrix in the first place (it depends on how sparse it is). Do not(!) try to do general computations with such a Matrix; Maple might try to copy it to a dense Matrix and get bogged down. That includes calling ConditionNumber(). I'd suggest setting infolevel[LinearAlgebra]:=2 and breaking any computation that didn't show a NAG f11 function in the printed progress output. Any attempt to form the dense "rectangular" storage version of your large sparse Matrix might end up exceeding your memory resources. The few things that you might reasonably do with such a huge sparse high-precision floating-point system include,
  • linear solving
  • matrix-vector multiplication
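Here is the kind of small-scale experiment I have in mind. It is only a sketch: the tiny tridiagonal system, the Digits setting, the size, and the right-hand side below are all made up, standing in for whatever your class of problems actually looks like.

with(LinearAlgebra):
infolevel[LinearAlgebra] := 2:       # watch the printed progress for the NAG f11 calls
Digits := 32:                        # software floating-point precision
n := 100:
A := Matrix(n, n, storage=sparse, datatype=sfloat):
for i from 1 to n do
    A[i,i] := 4.0;
    if i < n then A[i,i+1] := -1.0; A[i+1,i] := -1.0; end if;
end do:
b := Vector(n, fill=1.0, datatype=sfloat):
x := LinearSolve(A, b, method=SparseIterative):
Norm( A . x - b );                   # a crude backward-error check; matrix-vector products are cheap here

The infolevel output should make it clear whether the sparse iterative path really got used; if the messages suggest a copy to dense rectangular storage, interrupt the computation.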
You might also need to know the answers to questions like these. What sort of accuracy are you after? What would characterize a valid solution for you (forward error? backward error?)? How long are you prepared to wait for a result? Do you need to solve for multiple right-hand sides (a Matrix b instead of a Vector b, in A.x=b)?
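On that last point, LinearSolve does accept a Matrix of right-hand sides in general, though whether the sparse iterative path handles that form directly is something to test on a small case. The hypothetical shape of such a call:

# B is an n x m Matrix whose columns are the individual right-hand sides
X := LinearSolve(A, B):
# column j of X then solves A . x = Column(B, j)

acer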