luigidavinci

MaplePrimes Activity


These are replies submitted by luigidavinci

Thanks Joe, that was a neat solution, just what I needed. Unfortunately, it does not seem to work if I am querying the BasisIndex table with a separately created vector that is element-wise identical to one of the vectors in the basis structure, as in this example:

 

basis:=Vector(1..4):

basis[1]:=Vector([4,0,0,0]): basis[2]:=Vector([3,2,0,0]): basis[3]:=Vector([2,0,0,2]): basis[4]:=Vector([1,1,2,0]):

BasisIndex := table([seq(basis[i] = i, i=1..upperbound(basis,1))]):

v:=basis[1]: v2:=Vector([4,0,0,0]):

if assigned(BasisIndex[v]) then BasisIndex[v]; else print("Not assigned"): end if;
                               1
if assigned(BasisIndex[v2]) then BasisIndex[v2]; else print("Not assigned"): end if;
                         "Not assigned"

 

If I do is(v=v2), I get false, so I am not surprised that it is not working, but why does this happen? I have created both vectors in exactly the same way, so they should be identical, shouldn't they?
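For what it is worth, one workaround I am considering (only a sketch, assuming a value-based lookup is what I need) is indexing the table by lists instead of Vectors, since two lists with the same entries end up being the same object. With the same basis as above:

BasisIndex := table([seq(convert(basis[i], list) = i, i = 1 .. upperbound(basis, 1))]):
v2 := Vector([4,0,0,0]):
BasisIndex[convert(v2, list)];   # gives 1 here, even though v2 was created separately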

 

Cheers.

restart:kernelopts(gctimes), kernelopts(gcbytesreturned), kernelopts(gcbytesavail), kernelopts(bytesalloc);

                        1, 0, 0, 786288
temp:=Matrix(17000):
kernelopts(gctimes), kernelopts(gcbytesreturned), kernelopts(gcbytesavail), kernelopts(bytesalloc);
                  2, 13220, 257836, 1156786288
unassign('temp'); a:='a': b:='b': c:='c': %, %%, %%%;
                            c, b, a
kernelopts(gctimes), kernelopts(gcbytesreturned), kernelopts(gcbytesavail), kernelopts(bytesalloc);
                  2, 13220, 257836, 1156786288
gc();
kernelopts(gctimes), kernelopts(gcbytesreturned), kernelopts(gcbytesavail), kernelopts(bytesalloc);
                     3, 280, 254108, 786288
temp:=Matrix(17000):
kernelopts(gctimes), kernelopts(gcbytesreturned), kernelopts(gcbytesavail), kernelopts(bytesalloc);
                  4, 3952, 255500, 1156786288
temp2:=Matrix(17000):
Error, (in Matrix) not enough memory to allocate rtable

 

So the garbage collector seems to be working correctly in this case. It must be a bug in Maple 16.

Hey Paulina, thanks for your answer. However, I want the passing to be as quick as possible, for instance passing a pointer to the memory address of the Matrix or something similar, since I have to pass large sparse matrices in loops around 200-500 times, for which binary files are clearly not the fastest option.

@Acer, I just tried the wrapper you coded in the link you provided. It improves speed by a factor of 3 for size 400x400 (a toy problem), given that the Matrix cannot be overwritten (preserve=true) and that sometimes I need to calculate up to the first 20 or 50 eigenvalues. It helps, though it's not really what I was expecting from what I've heard about the Lanczos-type eigensolvers I was hoping to find in the NAG routines. I hope you can still help me out with the NAG wrappers.

I should have mentioned that 2000x2000 is just the initial Matrix size I have to deal with; I'd like to push things up to 10,000 x 10,000 or larger if possible. My matrix depends on a parameter, and it is not unusual that I need to compute the eigenvalues and eigenvectors for a thousand different values of this parameter. Therefore, I'm concerned about both speed and memory.
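To make the loop I have in mind concrete, here is roughly the pattern I am following (FillMatrix and the parameter values below are just placeholders): one dense float[8] Matrix is allocated once and overwritten on every pass, so only the eigen-output is freshly allocated per parameter value.

with(LinearAlgebra):
N := 2000:                                     # the initial size mentioned above
A := Matrix(N, N, datatype = float[8]):        # full (rectangular) storage
for p in [0.1, 0.2, 0.3] do                    # stand-in parameter values
    # FillMatrix(A, p);                        # placeholder: overwrite A in place for this p
    evals, evecs := Eigenvectors(A, output = ['values', 'vectors']);
end do: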

I checked your link, and I still have to try that code to see how it improves things in my case. It looks like it will improve my calculations a bit.

As far as I know (since I haven't been able to play around with the F12 functions), I will be using the suite of basic routines F12FAF, F12FBF, F12FCF, F12FDF and F12FEF, which implement a Lanczos method for the iterative solution of the real symmetric eigenproblem.

I am using full storage because, if I'm not mistaken, I saw a post of yours where you recommended not using storage=sparse with the CLAPACK routines on which LinearAlgebra:-Eigenvectors() is based.

Thanks acer, respect.

Thanks Acer for your quick response. It makes sense that Maple only has wrappers for Mark 7 and that F12 is therefore missing.

I stopped programming in C a long, long time ago, so I probably wouldn't have any idea how to write a proper hand-written wrapper. I am doing research on Bose-Einstein condensates, and in my code I need to find eigenvalues and eigenvectors (at least for some of the smallest eigenvalues) of a real symmetric sparse Matrix (size around 2000x2000), so I'd really appreciate your help, Acer. For the time being, I think I'm going to try to call MATLAB code from another computer, which I understand uses ARPACK for the sparse eigenvalue problem. I think I would be alright with ARPACK as opposed to NAG F12.
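Here is a rough sketch of what I mean by calling MATLAB (assuming the Maple-MATLAB link is configured on that machine); eigs is MATLAB's ARPACK-based sparse eigensolver, and 'sa' asks for the smallest algebraic eigenvalues. The test Matrix below is just a small symmetric stand-in.

with(Matlab):
A := Matrix(100, 100, (i, j) -> 1.0/(i + j), datatype = float[8]):  # small symmetric test Matrix
setvar("A", A):
evalM("[V, D] = eigs(sparse(A), 20, 'sa');"):
evals := getvar("D"):   # diagonal Matrix holding the 20 requested eigenvalues
evecs := getvar("V"):   # the corresponding eigenvectors, one per column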

My platform is Win XP 32-bit.

 

Thanks so much.

atataaV := proc(m1, m2, n1, n2, V)
  # Raise the occupations of modes m1, m2 and lower those of modes n1, n2 in the
  # Fock-state column V (row 1 holds the amplitude, rows 2.. the occupation numbers).
  local vtmp, coef, cell;
  vtmp := copy(V);
  if (m1 + m2 = n1 + n2) then   # only act when the total excitation is conserved
    # lower modes n2 and n1, collecting the sqrt(n) factors
    cell := vtmp[n2+2,1]; coef := sqrt(cell);        vtmp[n2+2,1] := cell-1;
    cell := vtmp[n1+2,1]; coef := coef*sqrt(cell);   vtmp[n1+2,1] := cell-1;
    # raise modes m2 and m1, collecting the sqrt(n+1) factors
    cell := vtmp[m2+2,1]; coef := coef*sqrt(cell+1); vtmp[m2+2,1] := cell+1;
    cell := vtmp[m1+2,1]; coef := coef*sqrt(cell+1); vtmp[m1+2,1] := cell+1;
    # scale the amplitude; g, h and w are globals defined elsewhere in the worksheet
    vtmp[1,1] := g*h*w*(m1+m2)!/(Pi*2^(m1+m2+2)*sqrt(m1!*m2!*n1!*n2!))*coef*vtmp[1,1];
    return vtmp;
  else
    return 0;
  end if;
end proc;

 

And the definition of u1 is

u1:=FockKet([n-6,6,0,0,0,0,0]);

where

FockKet := proc(Fstate)
  # Build the column Matrix for a Fock state: amplitude 1 in row 1,
  # followed by the occupation numbers listed in Fstate.
  local i, vtmp;
  vtmp := Matrix(1..(nops(Fstate)+1), 1);
  vtmp[1,1] := 1;
  for i from 1 to nops(Fstate) do
    vtmp[i+1,1] := op(i, Fstate);
  end do;
  return vtmp;
end proc;

 

I didn't want to post all the code because I'm still developing and debugging it, so it looks very clumsy, but I suppose you can infer more from it. Actually, I observed that the problem is u1: if I use assume before creating u1, then everything works, as it also does if I use this:

>assume(n,positive);

>Equal(SubMatrix(atataaV(1,1,1,1,u1), 2..-1, 1), SubMatrix(u1, 2..-1, 1));

true
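For completeness, here is a minimal usage sketch of the two procedures above with purely numeric occupation numbers (g, h and w stay symbolic), just to show the bookkeeping:

with(LinearAlgebra):                 # for Equal and SubMatrix, as above
u := FockKet([2, 2, 0, 0]):          # amplitude 1 in row 1, occupations 2,2,0,0 below it
res := atataaV(1, 1, 1, 1, u):       # lower mode 1 twice, then raise it twice
res[1, 1];                           # the amplitude has picked up the g*h*w prefactor
Equal(SubMatrix(res, 2..-1, 1), SubMatrix(u, 2..-1, 1));   # occupations unchanged: true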

@luigidavinci 

 

I found I can use the following (I don't know if it's the best programming practice for this need):

for i from 1 to 3 do U:=A||i+op(U); end do;

Still, is there a way to make `add` work? I read that `add` is sometimes more efficient than `for` loops.
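For reference, this is the kind of `add` form I have been experimenting with (A1, A2, A3 here are just small test Matrices), though I am not sure it is the recommended way:

A1 := Matrix(2, 2, fill = 1): A2 := Matrix(2, 2, fill = 2): A3 := Matrix(2, 2, fill = 3):
U := add(M, M in [seq(A || i, i = 1 .. 3)]);   # build the operands with seq, then sum them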

Also, can I define a new type so that my "matrices" and their algebra can be treated separately from the regular `Matrix` type? I'm not sure how to do this, since there would be no real difference between my special matrices and any other object of type `Matrix`.
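The only idea I have so far for the type question is tagging my Matrices with an attribute and registering a type that checks for it, roughly like this (MySpecialMatrix and the 'special' attribute are just placeholder names):

TypeTools:-AddType('MySpecialMatrix', m -> type(m, 'Matrix') and member('special', [attributes(m)]));
M := Matrix(2, 2, attributes = ['special']):
type(M, 'MySpecialMatrix');   # true, while a plain Matrix(2, 2) is not of this type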

 

Cheers.
