Jean-Claude Arbaut

I briefly used Maple in the late 1990s during my undergraduate studies. Back then it was Maple V R4, if I remember correctly, and a few years later Maple 6. I have been using Maple again since around 2018.

MaplePrimes Activity


These are replies submitted by Jean-Claude Arbaut

@vv 

I didn't think of trying identify; nice solution. It's not entirely satisfying, as it's "only" a numerical derivation, but still nice: it gives a couple of ideas for a derivation by hand.

Regarding MeijerG: Maple gives me a (wrong) numerical value, after a few minutes.

@vv 

Thanks for the link. I didn't think of convert(...,RootOf); I would instead have tried factor(expand(ChebyshevU(16,x)),sqrt(17)), which also works, of course, except you then have four polynomials of degree 4. I guess this way is more direct with Maple: you just have to check which root is the right one, while the method above is closer to the usual way by hand.

@acer 

I never load CUDA, actually.

First, ssystem("processor full") gives nothing useful: [-1, ""]

I am pretty sure the problem comes from matrix multiplication in both cases. As I said, the crash does not occur until the last line in the question. Of course, a failing SVD /could/ leave Maple in an unstable state, but I have exactly the same problem with vv's code (which involves no SVD), so it's plausible that the matrix multiply is to blame.

 

I already mentioned that I have an i7-9700F. Windows tells me:

Intel(R) Core(TM) i7-9700F CPU @ 3.00GHz, 3000 MHz

It has 8 cores.

 

OMP_NUM_THREADS is not defined by default. When I set it to 1, the code does not crash anymore (both the original problem in the question and vv's code). It still crashes if OMP_NUM_THREADS is set to 2 or 4.
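The value the kernel actually sees can be checked from inside Maple with getenv (a small sketch; it returns NULL when the variable is unset):

```maple
# Check what the Maple kernel process sees for OMP_NUM_THREADS
getenv("OMP_NUM_THREADS");
```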

However, it is worth repeating that Maple 2020.2 does not crash, with or without OMP_NUM_THREADS set.

I also installed Intel oneAPI (after I noticed the bug, so there is no interaction here), and a simple Fortran program calling DGEMM works like a charm. Of course this proves nothing; Maple likely ships a different version.

Anyway, the fact that the very same code works on the same machine in the same conditions with Maple 2020.2, and crashes with Maple 2021.1, seems a good indication that there is something wrong in the latter. It may be a problem in MKL, and setting OMP_NUM_THREADS definitely helps.

@Jean-Claude Arbaut 

Additionally, with infolevel set, I get this *before* the crash (in vv's code, replace A^2 with A.A, otherwise nothing is printed, but I guess the call is ultimately the same).

unknown: entering mplCudaInit
unknown: leaving mplCudaInit
Enable: Enable false
Enable: entering mplCudaShutdown
Enable: leaving mplCudaShutdown
Enable: GetBLAS proc (func::name, hwfunc::procedure,  $) eval(hwfunc) end proc
Properties: entering mplCudaGetProps
Properties: leaving mplCudaGetProps
unknown: NAG hw_f06yaf

f06yaf is NAG's name for dgemm: https://www.nag.com/numeric/nl/nagdoc_27cpp/flhtml/f06/f06yaf.html

The output is the same in Maple 2020, except it doesn't crash.

@Jean-Claude Arbaut 


From ?CUDA,supported_routines, I get that LinearAlgebra[MatrixMatrixMultiply] is CUDA-accelerated. It's the only one, apparently.

However, from ?CUDA,windows_display_reset, I get:

Some releases of Windows may report "Display driver stopped responding and has recovered" after resetting the display driver (a two second blackout) when running a longer computation on a CUDA device.  This should only occur on machines where the CUDA card is being used as both a compute device and as the display driver (which is not suggested).  In order to reduce display lock-ups resulting from the GPU not responding, Microsoft added a Timeout Detection and Recovery (TDR) mechanism that resets the card after the GPU scheduler detects that it has been busy for longer than two seconds (default timeout setting).

The same page gives a registry key to change the delay. And indeed, the Intel i7-9700F does not have integrated graphics, so both the display and the computations run on the NVidia card.

However, even with a higher delay (30 seconds), Maple crashes almost immediately.

And this does not make much sense, as Maple 2020 is supposed to do the same, yet it does not crash at all: even for n=10000, it just runs an OpenMP matrix multiply (all CPU cores at 100%). Incidentally, this means it does not use the GPU.

@vv 

I can confirm that it passes at 2000, and crashes at 2001.

In case it makes a difference:

Windows 10 Home Edition, version 21H1

16 GB RAM

Intel i7 9700F

Nvidia GeForce RTX 2060 (I don't know for sure whether the GPU is used for these computations).
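For completeness, a minimal reproduction consistent with this thread (a sketch, not vv's exact code; the size and datatype are assumptions based on the discussion above):

```maple
# Hypothetical minimal reproduction: a float[8] matrix product just
# above the size threshold reported in this thread.
with(LinearAlgebra):
A := RandomMatrix(2001, 2001, datatype = float[8]):
B := A . A:   # reported fine at 2000, crashing at 2001 in Maple 2021.1
```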

@nm 

thaw does not 'unfreeze' recursively. Call thaw once more, or use subsindets[flat].

restart;
expr:=exp(sin(x));
thaw(expand(subsindets[flat](expr,specfunc({sin,cos,exp}),freeze)));
                      expr := exp(sin(x))

                          exp(sin(x))

 

restart;
expr:=exp(sin(x));
thaw(thaw(expand(subsindets(expr,specfunc({sin,cos,exp}),freeze))));
                      expr := exp(sin(x))

                          exp(sin(x))

 

Now fonts display correctly even with font antialiasing disabled. I believe this comes from the switch to Java 15. Though not a big enhancement, it's very nice, as I can't bear font antialiasing.

A little regression: when executing a group, the cursor jumps to the second line of the next group (or sometimes to the end of that group), when there is more than one.

@tomleslie 

Slightly shorter with a seq:

(assign('x', 11), " it was 10")

@Carl Love 

Yes, I saw the "uses" clause in your code, which I didn't know, and even though I didn't use it in my earlier function, I certainly will in the future.

I also learned about typed parameters (I knew typing existed, but I now see it is much more flexible than I thought), as well as a few tricks for functional programming with @ and ~ (which I knew, more or less).

There is one thing, though, that I have already seen in the Maple library source and haven't yet found in the documentation: what does the ':-' mean in quoted symbols, such as ':-symmetric'? I know the package:-function syntax, but what about this usage?

Edit: I found it in the manual (help page ?:-): the ":-" operator can also be used as a unary, prefix operator, whose sole operand is a symbol. The expression :-sym evaluates to the global instance of "sym", even if there is a local binding for "sym" in scope.
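A small illustration of the unary :- operator (a sketch; the local name symmetric is only there for demonstration):

```maple
p := proc()
  local symmetric := "local binding";
  # plain 'symmetric' resolves to the local;
  # ':-symmetric' forces the global symbol
  print(symmetric, :-symmetric);
end proc:
p();
```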

 

Thanks for your help.

@tomleslie 

Thank you too. Fortunately the problem mentioned by Carl Love didn't occur in my case because I don't have bidirectional edges, but it may happen of course.

@Carl Love 

Thank you!

In the meantime, I also found a workaround, though it uses less advanced syntax:

Dir2Undir := proc(G)
  local W := WeightMatrix(G),
        d := ArrayTools[Diagonal] @@ 2;  # applied twice: extract the diagonal, then rebuild it as a diagonal Matrix
  W += W^%T - d(W);                      # symmetrize the weights without doubling the diagonal
  Graph(Vertices(G), map(convert, Edges(G), set), W)
end proc;

Of course, it would need some polishing to check input, and it would fail on an undirected graph.
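A quick usage sketch (the small weighted digraph here is hypothetical):

```maple
with(GraphTheory):
G := Graph({[[1, 2], 3.0], [[2, 3], 5.0]});  # weighted, directed
H := Dir2Undir(G);
IsDirected(H);      # should now be false
WeightMatrix(H);    # symmetric weight matrix
```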

By the way, I learned new programming tricks with your answer, so thank you for this too.

@Mac Dude 

I can't help you with your config, but I can tell you what I get with the basic CUDA example (?CUDA in Maple).

For all values of n between 1000 and 8000, the matrix product is always faster without CUDA (by around 30%-40%).
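For reference, the timing comparison follows the lines of the ?CUDA help example (a sketch; sizes and timings are machine-dependent):

```maple
with(LinearAlgebra):
n := 4000:
A := RandomMatrix(n, n, datatype = float[8]):
CUDA:-Enable(false):
cpu_t := time[real](MatrixMatrixMultiply(A, A)):
CUDA:-Enable(true):
gpu_t := time[real](MatrixMatrixMultiply(A, A)):
cpu_t, gpu_t;   # on this machine the CPU run was consistently faster
```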

The program does not raise an error, and I can see in the Task Manager that the GPU is actually used when CUDA is enabled in Maple. In non-CUDA mode, all 8 CPU cores run at 100%. With CUDA, however, the Task Manager shows the GPU used at only up to 12%. No idea why.

Config: Maple 2020.2, Windows 10, CPU: i7-9700K (8 cores), GPU: Nvidia RTX 2060, 16 GB RAM.

Conclusion: for the time being, considering my hardware, I don't see a reason to use CUDA in Maple.

@acer 

 

Maybe Intel oneAPI, with its new OpenMP offload to GPU? I don't know yet how it behaves, but it looks promising, and I believe it was the point of Intel's switch to these new compilers.

@vv It's doing exactly what I was looking for. I'll give it a try. Thanks!
