epostma

1534 Reputation

19 Badges

16 years, 254 days
Maplesoft

I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft is a company that solves more interesting math and computer science related questions than just about anywhere else, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity


These are replies submitted by epostma

Let me try to help you with a general remark, because I'm not sure what you are trying to do exactly. If you want to find some statistical property (such as the expected value, or the covariance) of a random variable, or a group of random variables, you'll need to choose between two options.

The first option is to do everything symbolically. The hallmark of this approach is that you never take a sample (using Statistics:-Sample or Finance:-SamplePath) and never use data from the outside - you work purely with the abstract distribution. For example, you can ask for the ExpectedValue of the square of a random variable that follows a normal distribution with parameters mu and sigma. This is almost exclusively the domain of the Statistics package - the Finance package typically requires the second approach.
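
For example, a minimal sketch of the symbolic route (mu and sigma are left as symbolic parameters):

with(Statistics):
X := RandomVariable(Normal(mu, sigma)):   # abstract random variable, no data anywhere
ExpectedValue(X^2);                       # should return mu^2 + sigma^2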

The second option is to use data. You generate SamplePaths or Samples and then measure the properties of these realizations of the random variables. In this case, you need to make sure that, for every statistical property you want to compute, you have a sample of data to pass to that one call. In particular, if you want the covariance between, say, a property of a process at t=1 and a property of the process at t=2, then you need a sufficient number of replications of the sample path as a whole, and you pass the values at t=1 and at t=2 of all sample paths together to a single function call. You cannot compute the covariance by calling ExpectedValue on every data point individually.
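
And a small sketch of the data-based route, using only the Statistics package (the sample size and the way the two correlated data sets are built are arbitrary, just for illustration):

with(Statistics):
X := RandomVariable(Normal(0, 1)):
Y := RandomVariable(Normal(0, 1)):
A := Sample(X, 10^4):              # first data set
B := A + Sample(Y, 10^4):          # second data set, correlated with A by construction
Covariance(A, B);                  # estimated from the data; should come out close to 1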

Let us know if this helps, and if it doesn't then tell us exactly what you are trying to compute.

Hope this helps,

Erik Postma
Maplesoft.

It turns out that the issue is mathematically much simpler - it is simply a use of the ExpectedValue command in a way that it's not meant to be used. In every call to ExpectedValue that you make, the argument is just a constant floating point number. The expected value of a floating point number is always that same floating point number - it's constant! Similarly, the covariance between two constants is always zero - neither of them will ever change.
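
A tiny illustration of the difference (the numbers are arbitrary):

with(Statistics):
ExpectedValue(3.7);              # just 3.7 - there is no randomness left in a constant
S := Sample(Normal(0, 1), 10):
ExpectedValue(S[3]);             # same story: S[3] is just one floating point number
Mean(S);                         # whereas this uses the whole sample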

What you meant to do is take the covariance of the random variables that generated those floating point numbers. I'm looking into how your computation works and will get back to you in a few minutes. (I started writing my previous answer just before you posted yours, and sadly we don't get notified of newer answers or comments when writing answers. But it is still useful information for the case that you described in your initial question, so I'll let the answer stand.)

Erik Postma
Maplesoft.

@serena88: As to your first question / method, ?LinearAlgebra:-DotProduct works only on pairs of Vectors, not on Matrices. It is meant to represent the inner product on real or complex vector spaces. If the product you're trying to compute involves Matrices, use . (the dot), as you found already, or ?LinearAlgebra:-Multiply (which does the same thing, I think).

As to your second question: actually, B . C is well defined: B is a 3x1 Matrix and C is a 1x2 Matrix, so you can multiply them to obtain a 3x2 Matrix (an nxk Matrix and a kxm Matrix can be multiplied for any n, k, m, giving an nxm Matrix). If you were to use, say, C^+ (a 2x1 Matrix) instead of C, then Maple would complain.
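
For example (with made-up entries):

with(LinearAlgebra):
B := Matrix([[1], [2], [3]]):   # a 3x1 Matrix
C := Matrix([[4, 5]]):          # a 1x2 Matrix
B . C;                          # well defined: the result is a 3x2 Matrix
# B . Transpose(C);             # 3x1 times 2x1 - Maple raises a dimension error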

Hope this helps,

Erik Postma
Maplesoft.

+1, nice solution. I must admit I didn't try to optimize for speed at all since the demo input [2, 1, 4] was so small - but that's not likely to be the problem that Markiyan wanted to solve :)

If the highest possible speed is required, it would probably be possible, with some effort, to write a Compiler:-Compile'able version: iterate over the integers 0 .. 2^n - 1 and use the binary representation of each integer to determine the signs of the coefficients (that caps n at 63 or so, but anything near that size would take far too long anyway). A rough sketch of that loop is below. Still, it's probably better to invest the effort in a good branch-and-bound strategy: the problem is NP-complete, since Knapsack reduces to it, so solving it exactly for truly large problem sizes is not going to work.
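
Here is what that brute-force enumeration could look like in plain Maple (not compiled; this assumes the goal is to make the absolute value of the signed sum as small as possible - adjust the objective to the actual problem):

BruteForce := proc(L :: list(numeric))
local n, k, i, s, best, bestsum;
    n := nops(L);
    best := 0;
    bestsum := -add(L[i], i = 1 .. n);   # k = 0 corresponds to all minus signs
    for k from 1 to 2^n - 1 do
        # bit i-1 of k chooses the sign of L[i]
        s := add(`if`(irem(iquo(k, 2^(i-1)), 2) = 1, L[i], -L[i]), i = 1 .. n);
        if abs(s) < abs(bestsum) then
            best := k;
            bestsum := s;
        end if;
    end do;
    return bestsum, best;
end proc:

BruteForce([2, 1, 4]);   # minimal |signed sum| is 1, attained e.g. by +2 + 1 - 4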

Erik Postma
Maplesoft.

Indeed the condition m=0 is the problem here: Maple is correctly telling us that there is, in general, no solution to that combination of equations. If we substitute m = 0 in both equations, then we get the solution x(t) = y2(t) = 0 for all t. Otherwise, if we leave m symbolic as is, we can get a solution if we leave the initial condition x(0) = 0 out; it's kind of complicated so I won't include it here. Leaving out the condition y2(0) = 0 still leaves an inconsistent system.
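
As a generic illustration of what "no solution" looks like here - this is not your system, just the same phenomenon on a toy ODE:

dsolve({diff(x(t), t) = x(t), x(0) = 0, x(1) = 1});   # should return nothing: the conditions are inconsistent
dsolve({diff(x(t), t) = x(t), x(0) = 0});             # dropping one condition gives x(t) = 0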

Hope this helps,

Erik Postma
Maplesoft.

A mini addition to your nice answer: seeing your code, I realized that you can now get some of the speed-up of in-kernel code for filling Vectors with a constant without using ?ArrayTools/Fill, as follows:

(**) bl := Vector(10^7, datatype=float[8]):
(**) kernelopts(bytesused);                 
                                                        82026976

(**) bl[1..5*10^6] := 1:
(**) kernelopts(bytesused);
                                                        82041496

(**) bl[..] := 2:
(**) kernelopts(bytesused);
                                                        82055672

So if you assign a constant to a subrange of an rtable, essentially no extra memory is allocated - certainly nothing close to a copy of the 80 MB that the Vector itself occupies. (ArrayTools:-Fill is still faster, though, presumably because it is more specialized: the assignment statement handles more complicated subsets of the data than ArrayTools:-Fill can.)
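
Assignment to an arbitrary list of index positions should be handled the same way, for example (continuing the session above):

(**) bl[[1, 3, 5]] := 3:    # assign to a few scattered positions, still without copying the Vector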

Hope this helps,

Erik Postma
Maplesoft.

For what it's worth, I've entered this bug into our internal tracking system; that's not a promise that I will have time to do something about it soon, but I'll do my best.

Erik Postma
Maplesoft.

One more little tweak: starting from acer's last version, replace the line

data[y, ix] := data[y, ix] + 1;

by

data[y, ix] := data[y, ix] + iy^(-0.5);

This emphasizes the first couple of iterations over the later ones, and thus shows the 'trails' better as the expression converges towards the attractor. See below. Of course iy^(-0.5) can be replaced by any other function of iy; maybe you'd want to emphasize the first couple of iterations even more, or you'd want to emphasize a few iterations around iy=500, or...
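
For example, to emphasize a band of iterations around iy=500, one could instead try something like this (the width of 50 is arbitrary):

data[y, ix] := data[y, ix] + exp(-((iy - 500)/50.0)^2);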

Erik.

@acer Awesome :) I'd like to lighten up the darker areas a bit, so I inserted

map[evalhf,inplace](x -> x^0.8, img_h); 

immediately after the call to worker. Using 50000 iterations per column instead of 500 (which took a measly 8 seconds), I got this:
