epostma

I am the manager of the Mathematical Software Group, working mostly on the Maple library. I have been working at Maplesoft since 2007, mostly on the Statistics and Units packages and on contract work with industry users. My background is in abstract algebra, in which I completed a PhD at Eindhoven University of Technology. During my studies I was always searching for interesting questions at the crossroads of math and computer science. When I came to Canada in 2007, I had nothing but a work permit, some contacts at Maplesoft (whom I had met at a computer algebra conference a year earlier), and a plan to travel around beautiful Canada for a few months. Since Maplesoft tackles more interesting math and computer science questions than just about any other company, I was quite eager to join them, and after about three months, I could start.

MaplePrimes Activity


These are replies submitted by epostma

@Markiyan Hirnyk : Another option that is equally fast (measured to within measurement accuracy on an i7 running linux-64) is:

nops(select(`<`, thelist, 0));

The big advantage of this over the version with c -> c < 0 is that it only uses a kernel-internal procedure in the inner loop. 

Met vriendelijke groet,

Erik Postma
Maplesoft. 


It uses default values of 10^2 (equally long) time steps and 10^4 iterations (replications).

This is determined by the procedures Finance:-ExpectedValue:-ProcessParameters and Finance:-ExpectedValue:-ProcessCommonOptions. You can inspect the source code for these procedures as follows:

kernelopts(opaquemodules=false):
showstat(Finance:-ExpectedValue:-ProcessParameters);
showstat(Finance:-ExpectedValue:-ProcessCommonOptions);

and if you showstat(Finance:-ExpectedValue) you can see how these two are called.
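
For intuition about what those defaults mean, here is a rough sketch of the underlying Monte Carlo scheme, outside Maple and in Python (the helper name `expected_value` is mine, not a Maple or Finance API): simulate many paths of 10^2 equal time steps each, evaluate the expression on each, and average.

```python
import math
import random

def expected_value(f, T=1.0, timesteps=10**2, replications=10**4, seed=1):
    """Monte Carlo estimate of E[f(W(T))] for a standard Brownian motion W,
    using the same defaults as above: 10^2 time steps, 10^4 replications."""
    rng = random.Random(seed)
    dt = T / timesteps
    total = 0.0
    for _ in range(replications):
        w = 0.0
        for _ in range(timesteps):
            w += rng.gauss(0.0, math.sqrt(dt))  # one Euler step of the path
        total += f(w)
    return total / replications

# E[W(1)^2] = Var(W(1)) = 1 for a standard Wiener process,
# so the estimate should come out close to 1.
print(expected_value(lambda w: w * w))
```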

Hope this helps,

Erik Postma
Maplesoft.


Hi mah00,

Here are a few hints to help you get started.

Statistics' ExpectedValue command doesn't know how to deal with Finance's processes - just with Statistics' RandomVariables. (For Statistics, the BrownianMotion process you define looks like a simple non-stochastic variable - you could say, a parameter - and the expected value of a parameter is that parameter itself. _X is the name Finance gives to the process.) You'll need to use Finance's ExpectedValue command, which estimates the expected value from a number of realizations (sample paths).

Also, you'll need to specify a scalar involving the time values at which you want to know the expected value. For example:

ExpectedValue(abs(xi1(1)[1] - xi1(1)[2]), timesteps=100, replications=10^4);

to get the average absolute difference between the first and second of the four components of xi1 at t=1.

Whether you want a Brownian motion or a Wiener process is up to you - you'll need to decide which of those two fits your needs.

Erik Postma
Maplesoft.


Let me try to help you with a general remark, because I'm not sure what you are trying to do exactly. If you want to find some statistical property (such as the expected value, or the covariance) of a random variable, or a group of random variables, you'll need to choose between two options.

The first option is to do everything symbolically. You could characterize this approach by the fact that you never take a sample, using Statistics:-Sample or Finance:-SamplePath, or use data from the outside - it's just the use of the abstract distribution. You can for example ask for the ExpectedValue of the square of the normal distribution with parameters mu and sigma. This is almost exclusively the domain of the Statistics package - the Finance package typically requires the use of the second approach.

The second option is to use data. You can generate SamplePaths or Samples and then measure the properties of these realizations of the random variables. In this case, you'll need to make sure that for every call to a statistical property you want to compute, you have a sample of data for that call. In particular, if you want to find the covariance between, say, a property of a process at t=1 and a property of the process at t=2, then you'll need a sufficient number of replications of the sample path as a whole and pass the values at t=1 and t=2 of all sample paths to the proper function call. You can't compute the covariance by calling ExpectedValue on every data point individually.
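
To make the second approach concrete, here is a Python sketch (outside Maple; the helper name is my own, not part of any package) that estimates Cov(W(1), W(2)) by keeping the values at t=1 and t=2 of each replicated sample path together, exactly as described above; for a standard Wiener process the true covariance is min(1, 2) = 1.

```python
import math
import random

def sample_paths_cov(t1=1.0, t2=2.0, timesteps=200, replications=10**4, seed=7):
    """Estimate Cov(W(t1), W(t2)) from replicated Brownian sample paths,
    pairing the two time values taken from the *same* path."""
    rng = random.Random(seed)
    dt = t2 / timesteps
    xs, ys = [], []
    for _ in range(replications):
        w, w1 = 0.0, None
        for k in range(1, timesteps + 1):
            w += rng.gauss(0.0, math.sqrt(dt))
            if w1 is None and k * dt >= t1:
                w1 = w          # value of this path at t = t1
        xs.append(w1)
        ys.append(w)            # value of the same path at t = t2
    mx = sum(xs) / replications
    my = sum(ys) / replications
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (replications - 1)

print(sample_paths_cov())  # theoretical value is min(1, 2) = 1
```

Note that calling a covariance routine on one path's values alone, or on per-point averages, would discard exactly the pairing across replications that the estimate needs.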

Let us know if this helps, and if it doesn't then tell us exactly what you are trying to compute.

Hope this helps,

Erik Postma
Maplesoft.


It turns out that the issue is mathematically much simpler - it is simply a use of the ExpectedValue command in a way that it's not meant to be used. For every call to ExpectedValue that you make, the argument is just a constant floating-point number. The expected value of a floating-point number is always that same number - it's constant! Similarly, the covariance between two constants is always zero - neither of them will ever change.

What you meant to do is take the covariance of the random variables that generated those floating point numbers. I'm looking into how your computation works and will get back to you in a few minutes. (I started writing my previous answer just before you posted yours, and sadly we don't get notified of newer answers or comments when writing answers. But it is still useful information for the case that you described in your initial question, so I'll let the answer stand.)

Erik Postma
Maplesoft.


@serena88: As to your first question / method, ?LinearAlgebra:-DotProduct works only on pairs of Vectors, not on Matrices. It is meant to represent the inner product on real or complex vector spaces. If the product you're trying to compute involves Matrices, use . (the dot), as you found already, or ?LinearAlgebra:-Multiply (which does the same thing, I think).

As to your second question: actually, B . C is well defined: B is a 3x1 Matrix and C a 1x2 Matrix, so you can multiply them to obtain a 3x2 Matrix (since nxk and kxm matrices can be multiplied for any n, k, m to obtain nxm matrices). If you would use, say, C^+ (a 2x1 Matrix) instead of C, then Maple would complain.
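
As a plain-Python illustration of that dimension rule (my own helper, not Maple's LinearAlgebra), an n x k matrix times a k x m matrix gives an n x m matrix, and anything else is rejected:

```python
def matmul(A, B):
    """Multiply an n-by-k matrix A by a k-by-m matrix B (lists of rows)."""
    n, k = len(A), len(A[0])
    if len(B) != k:
        raise ValueError("inner dimensions must agree")
    m = len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

B = [[1], [2], [3]]        # 3x1, like the Matrix B above
C = [[4, 5]]               # 1x2, like the Matrix C above
print(matmul(B, C))        # a 3x2 result: [[4, 5], [8, 10], [12, 15]]
# matmul(B, [[4], [5]])   # 3x1 times 2x1 would raise ValueError
```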

Hope this helps,

Erik Postma
Maplesoft.


+1, nice solution. I must admit I didn't try to optimize for speed at all since the demo input [2, 1, 4] was so small - but that's not likely to be the problem that Markiyan wanted to solve :)

It would probably be possible to generate a Compiler:-Compile'able version with some effort, if the highest possible speed is required: iterate over the integers 0 .. 2^n - 1 and use the binary representation to give the signs of the coefficients. That caps n at 63 or so, but larger n would take far too long anyway. Still, it's probably better to invest the effort in a good branch-and-bound strategy. The problem is NP-complete, since Knapsack reduces to it, so solving it exactly for truly large problem sizes is not going to work.
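
A quick Python sketch of that brute-force enumeration (the helper name is mine), workable only for small n: bit j of the counter chooses the sign of the j-th value.

```python
def min_abs_signed_sum(values):
    """Find signs s_i in {-1, +1} minimizing |sum(s_i * v_i)| by brute force.

    O(n * 2^n) time, so this is only feasible for small n - consistent with
    the NP-completeness noted above.
    """
    n = len(values)
    best_total, best_signs = None, None
    for mask in range(2 ** n):
        signs = [1 if mask & (1 << j) else -1 for j in range(n)]
        total = abs(sum(s * v for s, v in zip(signs, values)))
        if best_total is None or total < best_total:
            best_total, best_signs = total, signs
    return best_total, best_signs

print(min_abs_signed_sum([2, 1, 4]))  # minimum is 1, e.g. +2 +1 -4
```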

Erik Postma
Maplesoft.


Indeed the condition m=0 is the problem here: Maple is correctly telling us that there is, in general, no solution to that combination of equations. If we substitute m = 0 in both equations, then we get the solution x(t) = y2(t) = 0 for all t. Otherwise, if we leave m symbolic as is, we can get a solution if we leave the initial condition x(0) = 0 out; it's kind of complicated so I won't include it here. Leaving out the condition y2(0) = 0 still leaves an inconsistent system.

Hope this helps,

Erik Postma
Maplesoft.
