acer

32405 Reputation

29 Badges

19 years, 345 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

What exactly do you mean by "full type"? Please be general, giving a clear, consistent, and useful definition that holds for as many different Maple structures as you can.

You might end up looking toward `disassemble` wrapped around `addressof`, or `ToInert`.
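
For instance, one way to inspect the internal structure of an expression is (a sketch; the exact inert output can differ between Maple versions):

> ToInert( a + 2 );
> disassemble( addressof( a + 2 ) );

The first returns an inert representation built from _Inert_... function calls, and the second returns the kernel's raw DAG header and component addresses. Either might serve as the basis for a definition of "full type".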

acer

That's great to know, Will. (Sorry if that was a duplicate question; I might have asked, and been answered, before.)

acer

I suppose it might depend on how much you find yourself having to program computations which are not handled automatically (efficiently enough, or at all) in those larger programs.

One big attraction of CUDA, as I understand it, is its general-purpose nature as far as numerical scientific programming goes. It's not just using the GPU for graphics calculations: one can solve numeric PDEs on it, or whatever else, provided someone has written the code.

I do not yet know whether Maple's external-calling mechanism can simply call out to a program compiled within CUDA to run on the GPU. It would be great if it did, and good even if it took a little extra effort. It'd be even better if one could cobble together a pseudo-automated process like this to make use of it on "numeric-typed" Maple procedures.
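
Just as a rough sketch of what such external-calling could look like, supposing the GPU routine were exported with an ordinary C interface (the library name, function name, and signature here are entirely hypothetical):

> f := define_external( 'gpu_scale',
>         'n'::integer[4],
>         'x'::ARRAY( 'datatype' = 'float[8]' ),
>         'LIB' = "./libgpudemo.so" ):

Whether a CUDA-compiled shared library can actually be hooked up this way is exactly the open question.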

acer

If I were going to buy a new computer then (since I enjoy scientific programming and computing) I'd consider getting an NVIDIA video card that allowed general computation via the GPU. See CUDA. I'd likely consider a card capable of double precision for such GPU computation, such as the 260 chipset or higher.

acer

Hi Bryon,

Are you planning on importing the existing MaplePrimes posts (all of them) into the new system?

acer

The relatively very small imaginary components in the computed eigenvalues of C are merely floating-point artefacts of using an algorithm for general (non-hermitian) Matrices. Maple does not "know" that your C is hermitian.

If one instead does Eigenvalues(Matrix(C,shape=hermitian)) then purely real results are returned.

> infolevel[LinearAlgebra]:=1:
> Eigenvalues(C);
Eigenvalues:   "calling external function"
Eigenvalues:   "CLAPACK"   hw_zgeevx_
             [                                              -16  ]
             [0.956982330395369285 - 0.128946438434221290 10    I]
             [                                                   ]
             [                                              -16  ]
             [0.268572763565393391 - 0.260648346513817324 10    I]
             [                                                   ]
             [                                              -16  ]
             [-1.10288545306076191 + 0.549243554846528987 10    I]
 
> Eigenvalues(Matrix(C,shape=hermitian));
Eigenvalues:   "calling external function"
Eigenvalues:   "CLAPACK"   hw_zhpevd_
                            [-1.10288545306076191]
                            [                    ]
                            [0.268572763565393335]
                            [                    ]
                            [0.956982330395368730]
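
One can also declare the shape when first constructing the Matrix, so that subsequent LinearAlgebra calls select the hermitian routines automatically (a sketch; the entries are arbitrary):

> C := Matrix( 2, 2, [[2, 1+I], [1-I, 3]],
>              'shape' = 'hermitian', 'datatype' = 'complex[8]' ):
> LinearAlgebra:-Eigenvalues( C );

With the hermitian shape declared, the returned eigenvalues are purely real, with no spurious small imaginary parts.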

acer


It is quite untrue that ArrayTools:-Copy does the same thing as N:=M.

> M:=Matrix(1,1,[[17]]):
> N:=Matrix(1,1):
> ArrayTools:-Copy(1,M,0,1,N,0,1);
> NS:=M:
> evalb(M=N);
                                     false
 
> evalb(M=NS);
                                     true

But you can still use N, after that Copy call, for comparison with M.

> LinearAlgebra:-Equal(M,N);
                                     true

Also, evalm is unjustified here. For one thing, it makes a switch from an Array or Matrix to a lowercase array (which is a different structure, with last_name_eval, etc). And using evalm() is just as bad as using lowercase copy() for this task, as far as efficiency goes, since it entails creation of new structures with each iteration of the loop.

A much better way is to create N just once outside the loop, and then use a tool like ArrayTools:-Copy to get the latest entries into N or to reinitialize it with entries from M.

If I understood, you might also be able to use a hardware datatype for your problem, and get even better efficiency (esp. if using ArrayTools:-Copy, but likely elsewhere too with care).
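
As a sketch of that pattern (the computation that refills M each iteration is just a placeholder comment):

> M := Matrix( 3, 3, 'datatype' = 'float[8]' ):
> N := Matrix( 3, 3, 'datatype' = 'float[8]' ):  # created once, outside the loop
> for i to 100 do
>    # ... some computation that updates the entries of M ...
>    ArrayTools:-Copy( 9, M, 0, 1, N, 0, 1 );    # copy all 9 entries of M into N
> end do:

No new Matrix is allocated inside the loop, so there is nothing extra for the garbage collector to clean up.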

acer


I believe that you are mistaken. The extra right-quote character (') was meant to denote matrix transposition, and so was not incorrect syntax. That was all discussed in the earlier replies in this thread.

acer


Yes, just read my first response in this same thread.

This is mentioned in the What's New for Maple 12, for Array/Matrices/Vectors.

acer


And that is a shame, because as far as efficiency goes you've almost certainly chosen a suboptimal structure and method. Every time you append an element you create a brand new list, which involves overhead for creation, possibly simultaneous storage of the old and new copies, and eventual garbage collection (memory management). It can turn a task that optimally uses linear storage into one with quadratic memory use, with potentially worse performance effects due to the memory-management load. It is perhaps one of the best known of the less desirable programming techniques in Maple.
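
By way of illustration, here is the growing-list pattern alongside a pre-allocated Array (a sketch):

> # quadratic behaviour: a brand new list is built at every iteration
> L := []:
> for i to 10^4 do L := [ op(L), i^2 ]; end do:

> # linear behaviour: one Array, written in place
> A := Array( 1 .. 10^4 ):
> for i to 10^4 do A[i] := i^2; end do:

If a list is genuinely needed at the end, convert( A, 'list' ) produces it in a single step.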

acer

