acer

32333 Reputation

29 Badges

19 years, 326 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

Very nice, as proof of concept, and in performance.

> (st,ba,bu):=time(),kernelopts(bytesalloc),kernelopts(bytesused):
> test4(30000,100):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                        0.322, 3276200, 3816512

Even the popular C rng's re-use arrays like this. Your code shows the benefit so directly. Both the speed and the memory usage are lower. With normal use of current Statistics, it's only possible to get one of the two so low while the other remains several powers of 2 greater.

An interesting question then becomes: should Statistics:-Sample(X) return a proc that allows an optional Vector argument, or would processing for that option add too much overhead? Would it be better to have a new routine like SampleRtable(X) which returned a proc expecting exactly two arguments? What's the tradeoff between the confusion of more user-level routines and the ability to get very low overhead generated procedures?

I mention the idea of two arguments to the procedure returned by SampleRtable(X) since one might want to generate fewer random numbers than would fit in the rtable.

acer
That's quite neat, Joe. Looking at the help-page ?Statistics,Sample , I don't see where even the simpler invocation Sample(X) is shown. When it is called as Sample(X), for a random variable X, omitting the second parameter N results in a procedure being returned. That procedure can itself be polled, to return separate Vector samples. This usage appears in several of the examples higher up in this thread; the first person to show it may have been user "alex".

The ?Statistics,Sample help-page could be filled out with more details on these otherwise hidden bits of functionality. I wonder what other eggs may be found in Statistics.

acer
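To make the calling sequence concrete, here is a sketch of the single-argument usage being discussed (the distribution is chosen just for illustration):

```maple
# Sample(X) with no size argument returns a procedure.
X := Statistics:-RandomVariable(Normal(0, 1)):
S := Statistics:-Sample(X):
# The returned procedure can then be polled repeatedly for fresh samples.
V := S(1000):
rtable_options(V, datatype);   # float[8], a hardware-datatype Vector
```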
You are quite right, it does create just one generator procedure, sorry. When using the RandomTools generator, more Maple function calls are done to fill in each Vector element, and that too is overhead. But garbage collection does indeed happen here, as evinced by the big difference between bytesused and bytesalloc and the regular memory usage "updates".

I notice that you omitted the datatype=float[8] in your testing loop. If you add it in, that might give some assurance that the memory use is not due to storing many software-float Vectors. It doesn't clear everything up, by far, but adding it back might help lay the cause elsewhere. Why bytesalloc keeps growing through your testing loop, while garbage collections are happening, I don't know. It might take expert knowledge of the inner workings of the garbage collection mechanism to explain that properly.

I think that Statistics, rather than RandomTools, is the better tool for this sort of computation with many generated random numbers. I don't see how to use RandomTools:-Generate to produce a hardware-datatype Vector without incurring the overhead of producing many intermediate software-float objects or of multiple Maple function calls per element.

acer
You have the RandomTools:-Generate makeproc call inside the Vector constructor call, as the initializer. Doesn't that mean that for every element it will try to create a distinct random number generator? If so, that's a lot of (garbage) procedures being made. So instead, you could pull the makeproc of the random number generator outside of the Vector() constructor call. Ie,

> restart:
> kernelopts(printbytes=false):
> st,ba,bu:=time(),kernelopts(bytesalloc),kernelopts(bytesused):
> g := RandomTools:-Generate('distribution(Normal(0, 1))', makeproc = true):
> Vector(10000,i->g()):
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                        9.339, 11139080, 349536216

Notice how much bytesused grew by.

Even better, you could use Statistics to do it very leanly. The random number generator produced above by RandomTools might itself supply "software" float numbers. But you intend to put them into a datatype=float[8] Vector. In a sense those intermediate software-float objects all end up as collectible garbage. That's more garbage, on top of any unwanted distinct generator procedures. The Statistics package can be seen by inspection to call external code, the speed (and apparent lack of garbage production) of which indicates that it populates a hardware-datatype Vector with no intermediate "software" float garbage. It generates hardware float random numbers directly, and places them into the hardware-datatype Vector right away (externally).

> restart:
> kernelopts(printbytes=false):
> st,ba,bu:=time(),kernelopts(bytesalloc),kernelopts(bytesused):
> X := Statistics:-RandomVariable(Normal(0,1)):
> S := Statistics:-Sample(X):
> result:=S(10000):
> rtable_options(result,subtype),rtable_options(result,datatype);
                        Vector[row], float[8]
> time()-st,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;
                        0.019, 1048384, 1384072

See this post for more discussion of using Statistics effectively.

acer
As an earlier post suggested, verify() may be used here, as a way to apply testfloat for the individual floating-point number comparisons.

Digits:=50:
x:=evalf(Pi):
x2:=x+Float(6.0,-49):
S1:={{[x,evalf(sqrt(2))],[x2,evalf(sqrt(2))]}}
    union {{x} union {[1.0,0]}}:
S2:={{[x2,evalf(sqrt(2))],[x2,evalf(sqrt(2))+Float(7.0,-30)]}}
    union {{x} union {[1.0+Float(7.0,-29),0+Float(1.0,-30)]}}:
verify(S1,S2,'set'('set'({'float'(5,digits=11,test=2),
                          'list'('float'(1,digits=10,test=2))})));

But you also wanted it to work for scalar floats too, it seems. And maybe sets of scalar floats.

F:='float'(5,digits=11,test=2):
verify(x,x2,{F,'set'({F,'set'({F,'list'(F)})})});
verify(S1,S2,{F,'set'({F,'set'({F,'list'(F)})})});

It should be possible to programmatically construct the set of checks that you want, as long as you know the possible nestings in advance.

I'm not really a big fan of the evalindets method posted, or of other schemes which essentially do some sort of conversion to exact values. It's tricky to get that right, especially if the accuracy bound, precision, and test model ever need to vary. Unless the size of the data to be compared or the level of nesting is large, I'd go with something like this using verify(). Of course, if those are big, then the number of unnecessary checks would be prohibitive; the number of combinations (scalars, lists, sets) would grow too big and be too expensive to apply. But even then, I would try to construct some other scheme that used testfloat() at its heart.

acer
The routine testfloat is available as a flexible and powerful means to compare floating-point numbers. Why reinvent this functionality? For example,

Digits:=50:
epsilon:=Float(1.0,-30):
for x in [0,0.0,1,sqrt(2),Pi] do
    x, testfloat(evalf(x),evalf(x+epsilon),5,digits=11,test=2);
od;

acer
This is the kind of post that makes me want bookmarking capability within Mapleprimes. acer
If you mean csc(x)^3 , then yes, it does seem to endlessly hint and use the Parts rule. You can start it off better by doing a sequence of changes of variables (the Change rule), starting with, say, u=csc(x)^2. Or you can roll some of those Changes together and apply the change u=sqrt(1-1/csc(x)^2) immediately.

acer
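For anyone following along in Student[Calculus1], the suggested substitution can be applied explicitly rather than waiting on hints; a sketch, assuming the Rule[change, ...] calling sequence documented on the ?Student,Calculus1,Rule help-page:

```maple
with(Student[Calculus1]):
# Apply the Change rule with the substitution suggested above,
# instead of letting the stepper keep hinting Parts.
Rule[change, u = csc(x)^2](Int(csc(x)^3, x));
```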
Hi Alex,

Only by mentioning such issues can they be resolved. The more that get pointed out here, the more kinks can be ironed out. For example, the mod help-page which mentions &^ could have a new subsection specifically for 2D Math entry. This could work well: a user tries the obvious, encounters a problem, looks at the most obvious help-page, and sees a solution.

Moving down to the very next example after &^ on that mod help-page shows another item. How does one bring this next one about, as 2D Math input, in a Document? I just get an error.

`mod` := mods

I'm not sure how I'd want to define this next one, but why can't I enter it as 2D Math input? It gives me an error in 11.00, "invalid neutral operator". But as 1D Math input it seems fine.

x &? w

Why does entering this next one (and a few others I've seen) give an error message about an "invalid minus"? Where's the minus?

x &\? w

While I'm mentioning these things, why is there a `sum` item in the Expression palette, but no `Sum` item? They are not the same, even when the former returns unevaluated (hence the colour difference as 2D output). How does one use the summation symbol (sigma) in the large operators palette? All I ever got from it was an error message "invalid sum (need variable)", with no apparent way to specify the variable of summation. And it just seems to be `&sum`. Where is inert Sum in the palette?

Why is the fraction bar in 2D Math input a/(b/c) (without the brackets) so much wider than that in a/b/c ? To enter those, type them in as shown below. Why should it matter to horizontal width considerations like fraction bars whether another fraction appears in the numerator vs the denominator?

a/b/c
a/b/c

acer
The difference would be efficiency. One might not want to actually compute a^b before computing a modulo operation on the "result". That straight non-modular powering might not even be possible, in time or memory. That is why it is `&^` rather than `^`, as the original submitter noticed. Consider,

> a,b,c:=12345678917,987654321,111777:
> a &^ b mod c; # cool
                             75146
> `^`(a,b) mod c;
Error, Maple was unable to allocate enough memory to complete this computation. Please see ?alloc

Jacques's suggestion to enter it by typing &\^ works for me, in 2D Math input in a Document.

acer
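The same inert modular powering can also be written with the inert Power command, which, like &^, lets mod use fast modular exponentiation instead of forming a^b explicitly:

```maple
a, b, c := 12345678917, 987654321, 111777:
# Power(a,b) stays unevaluated until mod acts on it,
# so the full integer a^b is never constructed.
Power(a, b) mod c;   # same result as a &^ b mod c
```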
That's usually called the domain of the function. No, a procedure in Maple doesn't usually allow for a domain to be nicely specified. And that's a pity, because in maths a function is usually meant to mean both a rule (mapping) and a domain. That's what I was taught all through school, at least. It can be a problem that Maple doesn't have such a basic concept built into its "functions".

Oh, you can add type-checks on parameters, so that inputs not of the correct type will cause an error to be thrown. Or you can create piecewise structures which evaluate according, say, to the position of the input. Or you can code your procedure with calls to is(), and then call that procedure using the `assuming` facility. None of those is a really good domain facility.

There are some ways to restrict computations to a particular domain, using special packages, but they're not so strong and not everything works well in them. And they seem to be for specifying a different field or ring, not for nicely specifying closed subsets. Now, there are some Maple routines (like `solve`) which accept constraints as extra arguments, as one of the ways to get some restricted-domain effects. But observe that `solve` is one of the hold-outs that doesn't respect the `assume`/`assuming` facilities well (or at all).

acer
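As an illustration of the type-check workaround mentioned above (a minimal sketch; the procedure name and predicate are just for demonstration):

```maple
# f is meant to have domain x >= 1; any other input raises an error
# at call time, via the parameter's type-check.
f := proc(x::And(realcons, satisfies(t -> is(t >= 1))))
    sqrt(x - 1);
end proc:
f(5);    # accepted
f(0);    # rejected with an "invalid input" error
```

This enforces the domain only at the boundary of the procedure; it's an error mechanism, not a true domain concept.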