mah00

35 Reputation

7 Badges

12 years, 337 days

MaplePrimes Activity


These are replies submitted by mah00

So even if k is purely symbolic, Maple cannot perform the integral?

The subs version is the best fit for my application.

Thanks for your help.


Thanks!

I think it is exactly what I want.


Here is an example:

with(LinearAlgebra):

U := Vector[column]([x, y]);

A := Matrix(2, 2, (i, j) -> 1);

f := U -> exp(Transpose(U) . A . U);

Then my question is how to calculate int(f(U), [x = -infinity .. infinity, y = -infinity .. infinity]);

But in 2D it is easy, and I just wrote it out explicitly.

My question is: is there a cleverer way to write this in higher dimensions?

 

I hope it is clear enough.

Actually, I have a function f of U = Vector[column]([x, y, z]) and I want to calculate the integral of f over U, something like:

int(f(U),U=-infinity..infinity);

 

In my case, U is a 12-dimensional vector.
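To make the question concrete, here is a rough sketch of how I imagine the n-dimensional version could be written, building the list of ranges with seq (n, X, ranges and the identity matrix are placeholders here, and I have not verified this):

with(LinearAlgebra):
n := 3:                                            # 12 in my real case
X := [seq(x[i], i = 1 .. n)]:                      # integration variables x[1] .. x[n]
U := Vector[column](X):
A := IdentityMatrix(n):                            # stand-in for the actual symmetric matrix
f := exp(-(Transpose(U) . A . U)):                 # minus sign so the integral converges
ranges := [seq(x[i] = -infinity .. infinity, i = 1 .. n)]:
int(f, ranges);                                    # should give Pi^(n/2) for the identity matrix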

In this example, the covariance matrix is given and it is not computed using the random vector.

 

Here is how he did it:

CorrMat := Matrix([seq([seq(`if`(i = j, 1, rho), j = 1 .. N)], i = 1 .. N)]);

C1 := simplify(S.Transpose(S));

CoVar := Zip(`*`, C1, CorrMat);
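For concreteness, a small instance of that construction might look like this (N, rho and S are arbitrary values I picked here, not his actual data):

with(LinearAlgebra):
N := 2: rho := 1/2:
S := Matrix([[1, 0], [2, 3]]):        # arbitrary example matrix
CorrMat := Matrix([seq([seq(`if`(i = j, 1, rho), j = 1 .. N)], i = 1 .. N)]);
C1 := simplify(S . Transpose(S));
CoVar := Zip(`*`, C1, CorrMat);       # entrywise product of C1 and CorrMat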

 

Here is my code; maybe it is clearer this way:

T := 20;

d := 1;

with(Finance): with(LinearAlgebra):
xi0 := Vector([0, 0, 0, 0]):
Mxi := Vector([0, 0, 0, 0]):
Rxi := sqrt(d)*IdentityMatrix(4):
xi1 := BrownianMotion(xi0, Mxi, Rxi):
xi := SamplePath(xi1(t), t = 0 .. T, timesteps = T/d):
eta0 := Vector([0, 0]):
Meta := Vector([0, 0]):
Reta := sqrt(d)*IdentityMatrix(2):
eta1 := BrownianMotion(eta0, Meta, Reta):
eta := SamplePath(eta1(t), t = 0 .. T, timesteps = T/d, replications = 4):
 

zeta := k -> Vector[column]([seq(xi[1, i, k+2*d] - xi[1, i, k+d], i = 1 .. 4),
                             seq(seq(eta[r, j, k+2*d] - eta[r, j, k+d], j = 1 .. 2), r = 1 .. 4)]):

sigma1:=k->Matrix(12,12,(i,j)->2*nu*(Statistics[ExpectedValue](zeta(k)[i]*zeta(k)[j])-Statistics[ExpectedValue](zeta(k)[i])*Statistics[ExpectedValue](zeta(k)[j])));

Now when you compute Determinant(sigma1(k)), it equals 0 for any k!


Thanks, I will do that.

Great explanation. Thanks.

 

My problem is actually the following:

 

I want to create a Gaussian PDF, so I need to compute Determinant(sigma), where sigma is the covariance matrix of a Gaussian variable.

If we call this variable alpha, then sigma[i, j] = ExpectedValue(alpha[i]*alpha[j]) - ExpectedValue(alpha[i])*ExpectedValue(alpha[j]),

and this is zero most of the time! So the covariance matrix is singular and the determinant is zero. And this is because ExpectedValue(alpha[i]) is different from zero (at least, that's what I think).
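For reference, the density I am trying to build is the standard multivariate Gaussian one, which is exactly where Determinant(sigma) enters; here is a minimal sketch with an arbitrary nonsingular 2x2 sigma (n, mu, X, gpdf and the numbers are placeholders, not my actual data):

with(LinearAlgebra):
n := 2:
mu := Vector([0, 0]):                             # mean vector
sigma := Matrix([[2, 1], [1, 2]]):                # example nonsingular covariance matrix
X := Vector([x1, x2]):
# standard multivariate normal density; requires sigma to be nonsingular
gpdf := exp(-(1/2)*(Transpose(X - mu) . MatrixInverse(sigma) . (X - mu)))
        / sqrt((2*Pi)^n * Determinant(sigma));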

 

Do you think that the problem is elsewhere?

