MaplePrimes Activity


These are answers submitted by Carl Love

Here's my GaussSeidel, coded for Maple 2019 or later. The Maple 2019 syntax makes it easy to see that the algorithm is simply adding the residuals until the residual is small. I post this because this procedure is competitive (both in efficiency and accuracy) with stock methods provided by LinearAlgebra:-LinearSolve. Also, I show a little-known trick for including a plot in a userinfo statement.
 

restart:

GaussSeidel:= proc(
    A0::Matrix(square), B0::Vector, X0::Vector,
    abstol::positive:= 10.^(1-Digits), maxiters::posint:= 2*Digits #optional arguments
)
option `Author: Carl Love <carl.j.love@gmail.com> 2020-Jun-13`;
uses St= Statistics;
local
    n:= numelems(X0), k, r,
    R:= Array(1..0), cr, XY, #These are only used for the optional convergence analysis.
    X:= rtable(X0, 'datatype'= 'sfloat', 'subtype'= Vector['column']),
    #Precondition A0 and B0 by dividing by diagonal of A0:
    B:= rtable(1..n, i-> B0[i]/A0[i,i], 'subtype'= Vector['column'], 'datatype'= 'sfloat'),
    A:= rtable(
        (1..n)$2, (i,j)-> A0[i,j]/A0[i,i],
        'subtype'= 'Matrix', 'datatype'= 'sfloat'
    )
;
    Digits++; #1 guard digit

    #The entire computation and convergence check are in the next one line:
    for k to maxiters do X+= (r:= B-A.X) until (R,= add(CopySign~(r, 1)))[-1] < abstol;
    #After the preconditioning done in the setup, the computation is simply adding
    #the residuals until the residual is small!

    #These userinfo statements are just to perform and report
    #an analysis of the convergence:
    userinfo(1, 'GaussSeidel', "Used", k, "iterations.");
    userinfo(
        2, 'GaussSeidel', "Convergence rate: |DeltaX[_i]| ~",
        (cr:= (evalf[4]@St:-ExponentialFit)((XY:= <<seq(1..numelems(R))> | R^+>), _i))
    );
    userinfo(
        3, 'GaussSeidel',
        print(plot(
            [cr, XY], _i= 1..XY[-1,1], 'style'= ['line', 'point'],
            'axis'[2]= ['mode'= 'log'],
            'title'= "Convergence to the solution vector X",
            'labels'= [iteration, abs(Delta('X'))]
        ))
    );

    #return or error:
    if k > maxiters then error "did not converge" else X fi
end proc
:

#Tests:
LA:= LinearAlgebra: CT:= CodeTools:
Digits:= 1+trunc(evalhf(Digits)):

#Random example generator:
ABexample:= proc(n::posint)
local A:= LA:-RandomMatrix(n$2, datatype= sfloat), k;
    #Make A strictly diagonally dominant to ensure convergence:
    for k to n do A[k,k]:= 1+add(abs~(A[k])) od;
    A, LA:-RandomVector(n, datatype= sfloat)
end proc
:

#Test our procedure:
infolevel[GaussSeidel]:= 0:
n:= 2^10:
(A,B):= ABexample(n):
X1:= CT:-Usage(GaussSeidel(A, B, Vector(n))):

memory used=2.90GiB, alloc change=-8.00MiB, cpu time=18.70s, real time=14.84s, gc time=5.89s

#Compare with a standard method:
X2:= CT:-Usage(LA:-LinearSolve(A, B, method= hybrid)):

memory used=2.88GiB, alloc change=80.01MiB, cpu time=33.16s, real time=17.87s, gc time=19.45s

LA:-Norm~('A'.~[X1,X2] -~ 'B', infinity); #Compare residuals

[0.73e-12, 0.73e-12]

LA:-Norm(X1-X2, 1); #Compare results

0.9962429e-15

#Redo GaussSeidel with convergence analysis:
infolevel[GaussSeidel]:= 3:
CT:-Usage(GaussSeidel(A, B, Vector(n))):

GaussSeidel: Used 12 iterations.

GaussSeidel: Convergence rate: |DeltaX[_i]| ~ 23.26*exp(-3.306*_i)

GaussSeidel: [plot: "Convergence to the solution vector X" -- |Delta X| vs. iteration, log-scale vertical axis]

memory used=2.91GiB, alloc change=8.00MiB, cpu time=18.80s, real time=14.93s, gc time=6.12s

The convergence is very close to textbook-linear: the residual shrinks by a nearly constant factor (about exp(-3.3), roughly 27x) at each iteration, which is exactly what the exponential fit shows.

#Compare at higher precision:
Digits:= 2*trunc(evalhf(Digits)):
X1:= CT:-Usage(GaussSeidel(A, B, Vector(n))):

GaussSeidel: Used 22 iterations.

GaussSeidel: Convergence rate: |DeltaX[_i]| ~ 22.80*exp(-3.312*_i)

GaussSeidel: [plot: "Convergence to the solution vector X" -- |Delta X| vs. iteration, log-scale vertical axis]

memory used=7.94GiB, alloc change=32.00MiB, cpu time=61.47s, real time=44.50s, gc time=24.22s

X2:= CT:-Usage(LA:-LinearSolve(A, B, method= hybrid)):

memory used=4.90GiB, alloc change=16.01MiB, cpu time=50.76s, real time=26.81s, gc time=30.44s

LA:-Norm~('A'.~[X1,X2] -~ 'B', infinity);

[0.110e-25, 0.108e-25]

LA:-Norm(X1-X2, 1);

0.9681626e-29

#New hybrid method: Use stock hfloat solution as initial point to GaussSeidel:
X3:= CT:-Usage(
    GaussSeidel(
        A, B,
        evalf[trunc(evalhf(Digits))](
            LA:-LinearSolve(Matrix(A, datatype= hfloat), Vector(B, datatype= hfloat))
        )
    )
):

GaussSeidel: Used 11 iterations.

GaussSeidel: Convergence rate: |DeltaX[_i]| ~ 0.2479e-13*exp(-3.306*_i)

GaussSeidel: [plot: "Convergence to the solution vector X" -- |Delta X| vs. iteration, log-scale vertical axis]

memory used=4.21GiB, alloc change=48.01MiB, cpu time=33.42s, real time=23.48s, gc time=12.20s

LA:-Norm(A.X3-B, infinity);

0.110e-25

LA:-Norm(X1-X3, 1);

0.63817e-30

This new hybrid method is faster than the stock method provided by LinearAlgebra:-LinearSolve and has comparable accuracy.


 

Download GaussSeidel.mw

See the Sockets package: ?Sockets
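For example, a minimal client exchange looks something like this (the host name, port, and message below are only placeholders, not tailored to your application):

sid:= Sockets:-Open("some.server.example", 12345):  #connect to a server
Sockets:-Write(sid, "hello"):                       #send a string
reply:= Sockets:-Read(sid):                         #read whatever the server sends back, as a string
Sockets:-Close(sid):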

In your first line of NEUZMinus,

NEUZMinus:= proc(Unten, Oben, f,G,Liste,n)::real;

the ::real is what I call a return-value type assertion[*1]. It needs to be terminated with a semicolon, which you have. The semicolon terminates the assertion itself; its purpose is not to terminate the proc(...), which needs no terminator. However, your first line of Basenwechsel

Basenwechsel:=proc(Dividend, m);

is incorrect because of the semicolon without a type assertion. It is only a coincidence that this particular procedure is syntactically acceptable in Maple 2020; it would be incorrect if there were any declarations other than local on the next line, even in Maple 2020.

[*1] There is no stock type named real; instead, real is a property. If you want to include a type assertion, you should make it realcons. Type assertions other than those on procedure parameters are only checked if you set kernelopts(assertlevel= 2).
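For example (the procedure Halve below is just a made-up illustration):

kernelopts(assertlevel= 2):
Halve:= proc(x::realcons)::realcons; x/2 end proc:
Halve(3);  #returns 3/2; with assertlevel = 2, a return value that weren't realcons would raise an assertion error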

 

Your problem has nothing to do with the Task programming model. If you use lists and nops, then nops(L) is always a valid index of list L. The same is not always true of rtable_num_elems applied to a 1-dimensional Array: if the Array's indexing starts at something other than 1, rtable_num_elems(A) need not be a valid index. The command lowerbound(A) is threadsafe and returns the first index of A.
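A quick illustration (the Array here is just made up):

A:= Array(5..8, i-> i^2):  #a 1-D Array whose first index is 5
lowerbound(A), upperbound(A), rtable_num_elems(A);  #5, 8, 4 -- note that A[4] is not a valid index of A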

The command coeffs has a long-standing flaw: it's awkward to match the returned coefficients to their monomials. The following procedure corrects that:

Coeffs:= proc(f, V::{set,list}(name))
local C, t;
    C:= coeffs(f, V, t);
    table(sparse, [t]=~[C])
end proc
:
F:= 2*x+6*y+4*x^2+12*x*y-5*y^2:
C:= Coeffs(F, [x,y]):
C[x], C[x^2], C[x*y];
                            2, 4, 12

 

Here's a start:

EvalPts:= proc(
    a::algebraic, h::algebraic, n::posint, 
    {method::identical(left, right, midpoint):= right}
)
local x:= a-(if method='left' then h elif method='midpoint' then h/2 else 0 fi);
    [to n do x+= h od]
end proc
:
RiemannSum:= proc(
    f, intv::name=range, n::posint, 
    {
        method::identical(left, right, midpoint):= right, 
        output::identical(value, plot):= value
    }
)
local 
    a:= lhs(rhs(intv)), b:= rhs(rhs(intv)), h:= (b-a)/n,
    E:= EvalPts(a, h, n, ':-method'= method), F:= eval~(f, lhs(intv)=~ E),
    X:= [a, EvalPts(a, h, n)[]]
;
    if output='value' then return h*add(F) fi;
    plot(
        [f, [seq]([[X[k],0],[X[k],F[k]],[X[k+1],F[k]],[X[k+1],0]][], k= 1..n)],
        intv, color= [red, gray],
        caption= typeset( 
            "The ", method, " Riemann sum for ", f, " on ", intv,
            " using ", n, " evaluations is ", evalf(h*add(F)). "."
        )
    )
end proc
:
RiemannSum(1/(1+x^2), x= -2..2, 9);
                           1101459956
                           ----------
                           498612465 

evalf(%);
                          2.209050181

RiemannSum(1/(1+x^2), x= -2..2, 9, method= midpoint, output= plot);

Like this

<xl12, xl13, xl23> =~ 
   LinearAlgebra:-LinearSolve(<1,1,-1; 1,-1,1; -1,1,1>/2, <X1,X2,X3>)
;
 

You almost had it. You defined RHS but you didn't use it in the plot. You used sys instead. You also have a typo in the bounds for y1.

You only need 1 loop. I can't even conceive of using two.

Edit: Upon further thought, I do see that two nested loops could be used: The outer loop checks convergence conditions, and the inner loop does the updating of the solution vector. 
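Something like this rough Gauss-Seidel-style skeleton (A, B, X, n, abstol, and maxiters are placeholders for whatever you're using, not working code for your specific system):

for iter to maxiters do
    for i to n do  #inner loop: update each component of the solution vector in place
        X[i]:= X[i] + (B[i] - add(A[i,j]*X[j], j= 1..n))/A[i,i]
    od;
    #outer loop: check convergence
    if LinearAlgebra:-Norm(B - A.X, infinity) < abstol then break fi
od: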

You need a multiplication sign after the first x. A space may also work, but I prefer an explicit sign.

Simple solve works:

solve(abs(x^2-3*x) + 4*x - 6, x);
              1, -3 

Or are you looking for a way to solve it "by hand" that doesn't involve breaking it into two cases? I don't think that there's any better way. It is the way that I teach it.
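If you do want to see Maple handle the two cases separately, you can give it each case's sign condition along with the corresponding equation; this is just the textbook case split, mechanized:

solve({x^2 - 3*x >= 0, (x^2 - 3*x) + 4*x - 6 = 0}, x);  #case |x^2-3x| = x^2-3x; should give {x = -3}
solve({x^2 - 3*x < 0, -(x^2 - 3*x) + 4*x - 6 = 0}, x);  #case |x^2-3x| = -(x^2-3x); should give {x = 1}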

Indeed, there is a package named Student:-VectorCalculus, but it's not needed to create vector-valued functions. No package is needed. For example,

Helix:= t-> <cos(t), sin(t), t>

is a vector-valued function.
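It can be applied and plotted like any other procedure, for example:

Helix(Pi/2);                              #returns the Vector <0, 1, Pi/2>
plots:-spacecurve(Helix(t), t= 0..4*Pi);  #plots the curve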

Simply

ColorTools:-NearestNamedColor~(c);

Surely something like that must've occurred to you?
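For example, with a couple of arbitrary colors (the particular colors below are only for illustration):

c:= [ColorTools:-Color("#3366CC"), ColorTools:-Color("RGB", [0.95, 0.2, 0.1])]:
ColorTools:-NearestNamedColor~(c);  #returns the name of the closest named color for each entry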

The order of a permutation is the least common multiple of the lengths of its orbits. So, the order of the permutation that you show is 3. The order of the permutation from your last question is 20. 

If the order is o, then P^e = P^(e mod o).
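For example, using the GroupTheory package (the cycle structure below is just made up):

P:= GroupTheory:-Perm([[1,2,3], [4,5,6,7]]):
GroupTheory:-PermOrder(P);  #12, the lcm of the cycle lengths 3 and 4
ilcm(3, 4);                 #the same thing computed directly from the cycle lengths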

The mathematical constant that you have as "pi" is spelled Pi (with uppercase P) in Maple. Once you make this correction, the pdetest will take an extremely long time (20 minutes or so), and will not provide a very useful answer (it'll have an unevaluated limit).
