
MaplePrimes Activity


These are answers submitted by Carl Love

The vertical axis's tickmarks are positioned at -1/2, -3/2, ..., -(2*n-1)/2. Thus, this works:

restart:
n:= 10:
RM:= LinearAlgebra:-RandomMatrix(n):
NewTickMarks:= [($1..n)+~1/2 =~ A||(1..n)]:
Statistics:-HeatMap(RM, tickmarks= [NewTickMarks, (1-lhs=rhs)~(NewTickMarks)]);

 

You have 4 real-valued dimensions: x1, x2, x3, and f1. You're only allowed 3. If you set one of those 4 to a constant value, then you could plot the remaining 3 as a level surface. Then you could use time as a 4th dimension and make an animation. Sometimes I use gradations of color to represent extra dimensions.
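For example, here's a minimal sketch of the level-surface-plus-animation idea. The f1 below is just a hypothetical stand-in (I don't know your actual function); each frame is the surface f1(x1,x2,x3) = c, and c plays the role of the animation parameter:

f1:= (x1, x2, x3)-> x1^2 + x2^2 - x3^2:  #hypothetical f1, for illustration only
plots:-animate(
   plots:-implicitplot3d,
   [f1(x, y, z) = c, x= -2..2, y= -2..2, z= -2..2],
   c= -1..1
);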

On the left side of the assignment in the double loop, you have uppercase P. I think that you meant lowercase p.

As you read through this explanation, if you get to any point that you don't understand, please ask about it. You need to understand each step before going to the next step.

First you should strive to understand s(x). Using some plots if you need to, convince yourself that:

  1. s is periodic with period 1,
  2. on its fundamental period 0 <= x <= 1, s is equivalent to piecewise(x <= 1/2, x, x <= 1, 1-x).

Let me know if you need help with the plot command.
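For instance, here's a minimal sketch (assuming that s is the triangular wave described above, i.e., the distance from x to the nearest integer) that you could use to convince yourself of both points:

s:= x-> abs(x - round(x)):  #one realization of the s described above
plot([s(x), piecewise(x <= 1/2, x, x <= 1, 1-x)], x= -1..2, thickness= [3, 1], color= [red, blue]);
#The two curves coincide on 0 <= x <= 1, and the red curve repeats with period 1.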

It follows that s'(x) is also periodic with period 1, and is equivalent on its fundamental period to 

piecewise(x=0, undefined, x < 1/2, 1, x = 1/2, undefined, x < 1, -1, x = 1, undefined)

Note that s'(x) is always an integer when it's defined. We'll use that fact later. Also, s'(x) is defined for all real x except when x = p/2 for some integer p.

Now define SM(x) = s(M*x)/M, for any positive integer M. It follows from basic facts about periodic functions (no calculus required) that SM(x) has fundamental period 1/M. From the chain rule, we have that SM'(x) =  s'(M*x) wherever it's defined. In particular, it's either 1 or -1 wherever it's defined. And it's defined for all real x except when x=p/(2*M) for some integer p.

Proposition 1: If you have two periodic functions f(x) and g(x) with fundamental periods P and Q such that P/Q is an integer, then P is a period of h(x) = f(x) + g(x).


Now define bN(x) = sum(s(2^n*x)/2^n, n= 1..N). It has period 1/2 (by applying that proposition repeatedly, since the terms have fundamental periods 1/2, 1/4, ..., 1/2^N), and it's differentiable for all real x except when x = p/2^(N+1) for some integer p. When it's defined, bN'(x) is a sum of exactly N values, each being either 1 or -1. So, it's an integer between -N and N, inclusive.

Your original problem was b4'(9.05), which equals b4'(.05) by periodicity. The critical points (where b4' is undefined) are p/32 for any integer p, and .05 isn't one of them. The value is S2'(.05) + S4'(.05) + S8'(.05) + S16'(.05) = 1 + 1 + 1 + (-1) = 2 because 1/32 < .05 < 2/32.

Note that we can easily compute bN'(x) for any positive integer N and any real x where it's defined by looking at binary (base-2) digits 2 through N+1 after the point in the expansion of frac(x): just add 1 for every 0 and subtract 1 for every 1.
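Here's a minimal sketch of that digit-counting idea (my own illustration; it assumes x isn't one of the points where bN' is undefined, and x should be passed as an exact rational to avoid float round-off):

bNdiff:= proc(N::posint, x::rational)
local f:= frac(x), n;
   #Binary digit n+1 of f is floor(2^(n+1)*f) mod 2; it determines the sign of s'(2^n*x).
   add(`if`(irem(floor(2^(n+1)*f), 2) = 0, 1, -1), n= 1..N)
end proc:

bNdiff(4, 905/100);  #returns 2, agreeing with the hand computation above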

 

It can be created easily like this:

n:= 4:
M:= Matrix(n, scan= band[1,0], [[1$(n-1)], [$1..n]]);

So, the band[1,0] says that there is 1 subdiagonal and 0 superdiagonals, and the initializer is given by those diagonals, in that order.

To optimize the storage, use

M:= Matrix(
   n, 
   shape= triangular[lower], storage= band[1,0],
   scan= band[1,0], [[1$(n-1)], [$1..n]]
);

So, when you use scan= band[...], then you can make your initializer logically follow your diagonal structure, rather than using some complicated indexing scheme that goes by rows and columns.

Yes, it is easy to build a matrix by its submatrices. I usually use the angle-bracket constructors for that. If you give an example, I'll create it.
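In the meantime, here's a sketch with four hypothetical blocks of compatible sizes (the names A1..A4 are mine, just for illustration):

A1:= Matrix(2, 2, fill= 1):  A2:= Matrix(2, 3, fill= 2):
A3:= Matrix(3, 2, fill= 3):  A4:= Matrix(3, 3, fill= 4):
M:= <<A1 | A2>, <A3 | A4>>;  #a 5x5 Matrix assembled from the four blocks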
 

I implemented the L.D.L^T algorithm for hardware-float matrices for you. It's not as fast as NAG code, but it's not bad. I think that you'll find it useful. Let me know.

Would someone who knows more about compiled Maple than I do suggest some improvements for the timing of this?
 

LDLT_internal:= proc(n::nonnegint, A::Matrix, R::Vector, D::Vector)::nonnegint;
description
   "L.D.L^T factorization of real symmetric Matrix. "
   "Adapted from Algorithm 5.1-2 of _Matrix Computations_ by "
   "Gene H Golub and Charles F Van Loan."
;
option autocompile;
local
   k::nonnegint, k1::nonnegint, p::nonnegint, i::nonnegint,
   S::float, a::float, d::float;
   for k to n do
      k1:= k-1;
      S:= 0;
      for p to k1 do
         a:= A[k,p];
         R[p]:= D[p]*a;
         S:= S + a*R[p]
      od;
      d:= A[k,k] - S;
      if d = 0 then return 0 fi; #Signal bad matrix.
      D[k]:= d;
      for i from k+1 to n do
         S:= 0; for p to k1 do S:= S + A[i,p]*R[p] od;
         A[i,k]:= (A[i,k] - S)/d
      od
   od;
   1 #Signal error-free return.
end proc:

LDLT:= proc(A::Matrix(symmetric, datatype= float[8]))
description
   "Wrapper for externally compiled L.D.L^T factorization of float[8] matrices"
;
option `Author: Carl Love <carl.j.love@gmail.com> 4-Sep-2018`;
local
   n:= LinearAlgebra:-RowDimension(A),
   R:= Vector(n, datatype= float[8]), #workspace
   D:= Vector(n, datatype= float[8]),
   AA:= Matrix(A, datatype= float[8]) #Copy A: This procedure doesn't work "inplace".
;
   if LDLT_internal(n, AA, R, D) = 0 then return FAIL fi;
   Matrix(AA, shape= triangular[lower,unit]), Matrix(n, shape= diagonal, D)
end proc:
 

Digits:= trunc(evalhf(Digits)):

A:= LinearAlgebra:-RandomMatrix(9$2, shape= symmetric, datatype= float[8]):

(L,DD):= LDLT(A):

#Check residuals:
map2(LinearAlgebra:-Norm, L.DD.L^+ - A, [1, 2, Frobenius, infinity]);

[0.997424365323240636e-12, 0.839238148273220e-12, 0.956720711319157462e-12, 0.125943699913477758e-11]

#Time test larger matrix:
A:= LinearAlgebra:-RandomMatrix(2^10$2, shape= symmetric, datatype= float[8], generator= rand(0. .. 1.)) +
   LinearAlgebra:-RandomMatrix(2^10$2, shape= diagonal, datatype= float[8], generator= rand(2^10..2^11))
:
(L,DD):= CodeTools:-Usage(LDLT(A)):

memory used=64.09MiB, alloc change=8.00MiB, cpu time=4.16s, real time=3.87s, gc time=703.12ms

map2(LinearAlgebra:-Norm, L.DD.L^+ - A, [1, 2, Frobenius, infinity]);

[0.282413849250762183e-12, 0.228836495180473e-12, 0.308486829254582010e-11, 0.279835399644157157e-12]

#Compare with NAG's Cholesky:
L:= CodeTools:-Usage(LinearAlgebra:-LUDecomposition(A,  method= Cholesky)):

memory used=20.08MiB, alloc change=20.02MiB, cpu time=0ns, real time=25.00ms, gc time=0ns

map2(LinearAlgebra:-Norm, L.L^+ - A, [1, 2, Frobenius, infinity]);

[0.786496952634441193e-12, 0.682766685755396e-12, 0.706102768947285013e-11, 0.786594097149095894e-12]

 


 

Download LDLT.mw

The seq command does preserve the order. However, you took the output of seq and made it into a set by using {...}. Sets do not preserve order, which is consistent with their mathematical definition. If you simply remove those {...}, then you'll get what you expect.
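To see the difference (a tiny illustration, not your actual code):

[seq(5 - k, k= 1..4)];  #a list preserves the order produced by seq: [4, 3, 2, 1]
{seq(5 - k, k= 1..4)};  #a set reorders its members: {1, 2, 3, 4}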

You can assign a default value to a parameter, and thereby make it optional, by directly assigning to it in the proc line:

sim:= proc(x, n, o:= exactly) 

You can do this and also put a type restriction on the parameter like this:

sim:= proc(x, n, o::identical(exactly,minimum,maximum):= exactly)
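For example, here's a toy body (just a placeholder, since I don't know what your sim actually does) showing how the default gets filled in:

sim:= proc(x, n, o::identical(exactly, minimum, maximum):= exactly)
   [x, n, o]  #placeholder body: just return the arguments
end proc:

sim(a, 2);           #returns [a, 2, exactly]; o gets its default
sim(a, 2, minimum);  #returns [a, 2, minimum]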

There's a huge number of different parameter types, and those help pages that you referenced could use a lot more examples.

Why not just use @@ in the first place?

MyOp:= ((A+B)@@2)(x); #your 1st question

subs(A= f, B= g, MyOp); #your 2nd question

 

Yes, the function is NumberTheory:-PrimeCounting. If your Maple is too old to have that, identical functionality is provided by numtheory[pi].
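A quick check, in either spelling:

NumberTheory:-PrimeCounting(100);  #25, the number of primes <= 100
numtheory[pi](100);                #25 (deprecated package form, for older Maple)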

I suppose that your F is a PDF. If it is reasonably smooth[1], then this can be done by Statistics:-Sample. See the third example on its help page, which deals with a custom distribution. I suspect that the efficiency of this example could've been improved by including the basepoints and range options. I'll look into that later.

[1]Twice differentiable with finitely many inflection points on its support.
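In case it helps, here's a minimal sketch of that custom-distribution approach, using a simple made-up PDF (2*t on 0..1) as a stand-in for your F:

MyDist:= Statistics:-Distribution(PDF= (t-> piecewise(t < 0, 0, t < 1, 2*t, 0))):
X:= Statistics:-RandomVariable(MyDist):
S:= Statistics:-Sample(X, 10^4):  #a Vector of 10000 pseudo-random values
Statistics:-Mean(S);              #should be near the true mean, 2/3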

You are severely misreading the Wikipedia page that you linked to, which states clearly, in very prominent locations on the page, that A = L.L* is the standard Cholesky factorization[1] and that A = L.D.L* is "a closely related variant"[2].

Regarding "the output described in books": I think that you'd be hard pressed to find a more standard or respected textbook on the subject than Golub and Van Loan's Matrix Computations[3], where is found

  • Theorem 5.2-3: Cholesky Decomposition. If A ... is symmetric positive definite, then there exists a lower triangular G ... with positive diagonal entries such that A = G.G^T.

A pseudocode algorithm for computing G is given as Algorithm 5.2-1. The L.D.L^T decomposition is given as Theorem 5.1-2, but no name, Cholesky or otherwise, is attributed to it.

[1]This is in the first sentence of the second section.

[2]This is in the first sentence of the third section.

[3]Gene H Golub and Charles F Van Loan, Matrix Computations, 1983, Johns Hopkins University Press, ISBN: 0-8018-3010-9

Here is another way to do it. I think that it's more general.

simplify((lhs-rhs)(eqq)) assuming positive;  #[1]
`if`(%::`*`, select(depends, %, tau), %) = 0;  #[2]
collect(%, indets(%, {specfunc(diff), typefunc(q[posint])}));  #[3]

[1]simplify operates independently on the two sides of an equation. In particular, it doesn't divide out common factors. (lhs-rhs)(eqq) subtracts the right side of the equation from the left side, forming an expression rather than an equation. The assuming positive allows the square roots to be simplified.

[2]If the resulting expression is a product, select the factors containing tau; the rest are discarded.

[3]collect with respect to any derivatives and any functions of q. Note that these do not need to be explicitly listed.
 

Consider the univariate case: limit(f(x), x= a). There are only two paths along which x can travel to get to a, and we call those "from the left" and "from the right" (and those two limits may be different). But in the multivariate case, there is an infinitude of paths along which (x,y) can travel to get to (a,b), and there may be many different limits. To find them, you need to parameterize the path. For example,

X:= a+t; Y:= b-t; limit(f(X,Y), t= 0, right);
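The classic illustration (mine, not from your worksheet) of different straight-line paths giving different limits is f(x,y) = x*y/(x^2+y^2) at (0,0):

f:= (x, y)-> x*y/(x^2 + y^2):
limit(f(t, t), t= 0, right);    #along y = x: 1/2
limit(f(t, 2*t), t= 0, right);  #along y = 2*x: 2/5
limit(f(t, 0), t= 0, right);    #along the x-axis: 0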

Since tanh(p) ->  1 as p -> infinity, just replace tanh(p) by 1 in HeunG and you get Float(infinity) + Float(infinity)*I. Now, I'll be the first to admit that this technique for finding a limit is not 100% reliable. But it can be backed up with numeric and graphical evidence.

The divergence of the imaginary part is much slower than the divergence of the real part.
