mmcdara


MaplePrimes Activity


These are Posts that have been published by mmcdara

Hi,
 

This post concerns the simulation of a physical system whose behavior is governed by ODEs.
It is likely that some people will think that all that follows is nothing but embellishment, or a waste of time, and in some sense they will be right.
I thought the same until I received some sharp remarks on the occasion of a few presentations of my work.
Experience has taught me that giving a presentation to project managers with only 2D curves often leads to smiles, not to mention those who raise their eyes to heaven at the poverty of the slides.
Tired of this attitude, I decided to replace these 2D curves with a short film, which of course does not reveal more than the 2D curves already did, but which is pretty enough for the financing to keep flowing.

For those of you who might regret this situation, just consider this work as a demonstration of the capabilities of Maple in 3D rendering.


PS: all the display outputs have been removed to avoid loading an unnecessarily huge file.
      The last two commands must be uncommented to play the animation.

 

Download ODE_Movie.mw

 

I found in the Application Center a rather old work (2010) titled Generation of correlated random numbers (see https://fr.maplesoft.com/applications/view.aspx?SID=99806).
This work contains a few errors that I thought were worth correcting.

Basically, the work I refer to concerns the sampling of linearly correlated random variables (correlation in the Pearson sense). Classical textbooks generally discuss this topic by considering only gaussian random variables and present two methods to generate linearly correlated samples: one based on the Cholesky decomposition of the correlation matrix, the other based on its SVD.

Now the question is: can we apply either of these two procedures to generate linearly correlated samples of arbitrary random variables?
The answer is NO, and the reason is strongly related to a fundamental property of gaussian random variables (GRVs): any linear combination of GRVs is still a GRV.
But things are not that simple, because even the multi-gaussian case handled with Cholesky's decomposition or SVD can lead to undesired results if no precautions are taken.

The aim of this post is to show what wrong results we obtain by thoughtlessly applying these decompositions and, of course, to show how we must proceed to avoid them.

Let's start with a very simple point of common sense: suppose U1 and U2 are two independent identically distributed (iid) random variables, and that we have some "function" F which, when applied to the couple (U1, U2), generates a couple (A1, A2) of linearly correlated random variables; thus F(U1, U2) = (A1, A2).
Let's suppose this same relation holds if we replace U1 and U2 by "a sample of U1" and "a sample of U2", and thus (A1, A2) by "a sample of the bivariate (A1, A2) whose components are linearly correlated". Let's call S the cloud one could obtain by using, for instance, Maple's ScatterPlot(A1, A2) procedure.

Let's suppose now that instead of computing F(U1, U2) I decide to compute F(U2, U1). Let's call (A1', A2') the corresponding joint sample and write S' := ScatterPlot(A2', A1').
It seems natural (and it is!) to think that S and S' will be the same up to sampling artifacts.

Any correct method to generate samples from (linearly or not) correlated random variables must verify this similarity of patterns between S and S'. But this is not the case in the work cited above.

The safest way to correlate random variables, even in the Pearson sense, is to use the concept of COPULAS (there is a work on copulas in the Application Center, but for a quick overview see https://en.wikipedia.org/wiki/Copula_(probability_theory)).
For this special case of linear correlation one can use copulas without knowing it, and it is very simple: as soon as the procedure F introduced above gives correct results when U1 and U2 are standard GRVs,

  • take any couple (R1, R2) of arbitrary random variables,
  • build a map M(R1, R2) --> (U1, U2),
  • generate (A1, A2) = F(U1, U2),
  • compute M^(-1)(A1, A2) to obtain the correlated couple with the marginals of (R1, R2).
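Here is a minimal Maple sketch of the map M and its inverse for one component (the helper names ToGaussian and FromGaussian are mine; this just mirrors, in a compact form, what is done further down in the worksheet):

with(Statistics):
G := RandomVariable(Normal(0, 1)):
# map a sample s of the rv R to standard gaussian values (one component of M)
ToGaussian := (R, s) -> Vector[row](numelems(s),
                  q -> Quantile(G, Probability(R > s[q], numeric), numeric)):
# map gaussian values g back to the scale of R (one component of M^(-1))
FromGaussian := (R, g) -> Vector[row](numelems(g),
                  q -> Quantile(R, Probability(G > g[q], numeric), numeric)):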


What is the point of correcting a work that is 10 years old?
A very simple answer is that the Cholesky decomposition (or SVD) is still the emblematic method for linearly correlating random variables. It is the only one presented in academic textbooks, the only one a lot of students have been taught (unless they have had an extensive background in probability or statistics), and thus a systematic source of wrong results that users are not even aware of.


Next point: it's well known that the Pearson correlation cannot be lower than -1 or higher than +1, but it is a common mistake to think that any value between -1 and +1 can be reached.
This is guaranteed for GRVs, but not for some other random variables.
For a classical counter-example see 04_correlation_2016_cost_symposium_fkuo_tagged.pdf
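To make this concrete, here is a small numerical illustration of my own (not taken from the paper above): for two perfectly monotonically dependent LogNormal rvs, the Pearson correlation stays far from +1 and -1.

with(Statistics):
Z  := Sample(Normal(0, 1), 10^5):
X  := exp~(Z):
Yc := exp~( 2 *~ Z):   # comonotonic with X: the largest Pearson correlation this couple can reach
Ya := exp~(-2 *~ Z):   # antimonotonic with X: the smallest one
Correlation(X, Yc);    # about 0.67, not +1
Correlation(X, Ya);    # about -0.09, far from -1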

The notations used in the attached file are mainly those of the initial work https://fr.maplesoft.com/applications/view.aspx?SID=99806.

restart:


This work aims to correct the procedure used in https://fr.maplesoft.com/applications/view.aspx?SID=99806
to correlate arbitrary random variables in the (common) Pearson sense.

with(LinearAlgebra):
with(plots):
with(Statistics):

 

GAUSSIAN RANDOM VARIABLES

 

# First example: both A1 and A2 are centered gaussian random variables.
#                The order we use (A1 then A2, or A2 then A1) to define Ma doesn't matter

Y   := RandomVariable(Normal(0, .25)):
rho := .9:
Q   := 10^4:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:
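# Optional sanity check (my addition, not in the original worksheet): the sample
# correlation matrix of the correlated couple should be close to Cor
CorrelationMatrix(Rs2);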


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:
A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Second example: both A1 and A2 are non-centered gaussian random variables with equal standard deviations.
#                 The order we use to define Ma does matter

Y   := RandomVariable(Normal(1, .25)):
rho := .9:
Q   := 10^4:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

 

# Second example corrected: to avoid order dependency proceed this way
#    1/ center A1 and A2
#    2/ correlate the now-centered rvs
#    3/ uncenter the couple of correlated rvs


C1  := convert(Scale(A1, scale=Mean), Vector[row]):
C2  := convert(Scale(A2, scale=Mean), Vector[row]):

Ma  := `<,>`(`<,>`(C1), `<,>`(C2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1)+~Mean(A1), Column(Rs2, 2)+~Mean(A2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(C2), `<,>`(C1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2)+~Mean(A1), Column(Rs2, 1)+~Mean(A2), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Third example: both A1 and A2 are centered gaussian random variables with unequal standard deviations.
#                The order we use to define Ma does matter

rho := .9:
Q   := 10^4:
A1  := Sample(Normal(0, 1), Q):
A2  := Sample(Normal(0, 2), Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Third example corrected: to avoid order dependency proceed this way
#    1/ scale A1 and A2
#    2/ correlate the now-scaled rvs
#    3/ unscale the couple of correlated rvs


C1  := A1 /~ StandardDeviation(A1):
C2  := A2 /~ StandardDeviation(A2):

Ma  := `<,>`(`<,>`(C1), `<,>`(C2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1)*~StandardDeviation(A1), Column(Rs2, 2)*~StandardDeviation(A2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(C2), `<,>`(C1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2)*~StandardDeviation(A1), Column(Rs2, 1)*~StandardDeviation(A2), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# More generally: to avoid order dependency proceed this way
#    1/ transform A1 and A2 into standard gaussian random variables (mean and standard deviation scalings)
#    2/ correlate the now-standardized rvs
#    3/ unscale the couple of correlated rvs
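A minimal sketch of this general recipe wrapped as a single procedure (the name CorrelateGaussian and the packaging are mine; it just chains the three steps above on the pattern already used in this worksheet):

CorrelateGaussian := proc(A1, A2, rho)
  local C1, C2, Cd, Rs;
  # 1/ reduce both samples to standard gaussian form
  C1 := (A1 -~ Mean(A1)) /~ StandardDeviation(A1);
  C2 := (A2 -~ Mean(A2)) /~ StandardDeviation(A2);
  # 2/ correlate the standardized samples (Cholesky factor of the target correlation matrix)
  Cd := LUDecomposition(Matrix([[1, rho], [rho, 1]]), method = 'Cholesky', output = ['U']);
  Rs := Transpose(`<,>`(`<,>`(C1), `<,>`(C2))) . Cd;
  # 3/ restore the original means and standard deviations
  return Column(Rs, 1) *~ StandardDeviation(A1) +~ Mean(A1),
         Column(Rs, 2) *~ StandardDeviation(A2) +~ Mean(A2)
end proc:

# usage on the samples defined above
B1, B2 := CorrelateGaussian(A1, A2, rho):
ScatterPlot(B1, B2, title = "Correlated Normal RV", opts, color = blue);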

 

A MORE COMPLEX EXAMPLE:

NON-GAUSSIAN RANDOM VARIABLES
(here two LogNormal rvs)

 

 

# Preliminary:
#   the expectation (mean) of a LogNormal rv cannot be 0;
#   as a consequence it is expected that the order used to build Ma will matter
#
# Proceed as Igor Hlivka did

 

Y   := RandomVariable(LogNormal(.5, .25)):
rho := .9:
Q   := 1000:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

ScatterPlot(A1, A2, color = red, title = ["Raw LogNormal RV", font = [TIMES, BOLD, 12]]):
A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated LogNormal RV", opts, color=blue):

# And now change, as usual, the order in Ma

Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated LogNormal RV", opts, color=red):

display(A1A2, A2A1);

 

# How can we make the order used to assemble Ma irrelevant?
#
# A close examination of what was done with gaussian rvs shows that in all cases we
# went back to standard gaussian rvs before correlating them.
# So let's just do the same thing here.
#
# Of course it's not as immediate as previously...
# (please do not focus on the slowness of the code, it is written to clearly explain
# what is done, not to be fast)



#-------------------------------------- from Y to standard gaussian
G  := RandomVariable(Normal(0, 1)):
G1 := Vector[row](Q, q -> Quantile(G, Probability(Y > A1[q], numeric), numeric)):
G2 := Vector[row](Q, q -> Quantile(G, Probability(Y > A2[q], numeric), numeric)):
# could be replaced by this faster code
#   cdf_Y := unapply(CDF(Y, z), z) assuming z > 0;
#   cdf_G := unapply(CDF(G, z), z);
#   S1    := sort(A1):
#   ini   := -10:
#   V     := Vector[row](Q):
#   for q from 1 to Q do
#     V[q] := fsolve(cdf_G(z)=cdf_Y(S1[q]), z=ini);
#     ini  := V[q]:
#   end do:
#------------------------------------------------------------------

Ma  := `<,>`(`<,>`(G1), `<,>`(G2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


#-------------------------------------- from standard gaussian to Y
C1 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 1], numeric), numeric)):
C2 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 2], numeric), numeric)):
#------------------------------------------------------------------
A1A2 := ScatterPlot(C1, C2, title = "Correlated LogNormal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(G2), `<,>`(G1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

#-------------------------------------- from standard gaussian to Y
C1 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 1], numeric), numeric)):
C2 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 2], numeric), numeric)):
#------------------------------------------------------------------

A2A1 := ScatterPlot(C2, C1, title = "Correlated LogNormal RV", opts, color=red):

display(A1A2, A2A1);

 

 

CONCLUSION: Be extremely careful when correlating non-standard gaussian random variables,
                             and more generally non-gaussian random variables.


Correlating rvs the way Igor Hlivka did can be placed in the more general framework of COPULA THEORY.

Mathematically, a bidimensional copula C is a function from [0, 1] x [0, 1] to [0, 1] which is the joint CDF of a bivariate random variable whose two marginals are uniform on [0, 1].
See for instance https://en.wikipedia.org/wiki/Copula_(probability_theory)

What I did here to "correlate" A1 and A2 was nothing but applying, step by step, a GAUSSIAN COPULA to the bivariate
(A1, A2) random variable.
In Quantile(G, Probability(Y > A1[q], numeric), numeric), the inner expression Probability(Y > A1[q], numeric) maps A1 onto [0, 1] (as needed
in the definition of a copula), while the outer Quantile(G, ..., numeric) (together with the same operation on A2) realizes the copula itself.

 

 


 

Download LInear_Correlated_Random_Variables.mw

A lot of scientific software products offer packages for drawing figures in XKCD style.
Up to now I thought this was restricted to open products (R, Python, ...), but I recently discovered that Matlab and even Mathematica users do the same.

Layton S (2012). "XKCDIFY! Adding flair to boring Matlab Axes one plot at a time." Last accessed December 08, 2014. URL: https://github.com/slayton/matlab-xkcdify

Woods S (2012). "xkcd-style graphs." Last accessed December 08, 2014. URL: http://mathematica.stackexchange.com/questions/11350/xkcd-style-graphs/11355#11355

 

So why not Maple?

As a regular user of R, I could have looked at the body of the corresponding procedures to see how these drawings are made and just translated them into Maple.
But copying for the sake of copying is not of much interest.
So I started to develop some primitives for "XKCD-drawing" lines, polygons, circles and even histograms.
My goal is not to write an XKCD package (I don't have the skills for that) but just to arouse the interest of (maybe) a few people here who could continue this preliminary work.


A main problem is that of the XKCD fonts: redefining them in Maple is out of the question, and I guess using them in a commercial product is not even legal (?). So there is no XKCD font in this first work, nor the funny stick figure who recurrently appears in the drawings (but he could easily be constructed in Maple).

In a recent post (Plot styling - experimenting with Maple's plotting...) Samir Khan proposed a few styles made of several plotting options, some of which he named "Excel style" or "Oscilloscope style"... maybe a future "XKCD style" in Maple?


This work was done with Maple 2015 and reuses an old version of a 1D-Kriging procedure.

 

restart:

with(LinearAlgebra):
with(plots):
with(Statistics):

 

The principle is always the same:
    1/   Let L be a straight line, either defined by its two ending points (xkcd_hline, xkcd_line) or taken as the default line from [0, 0] to [1, 0].
          In xkcd_line the perturbation is built on the default horizontal line, which is finally rotated and scaled to match the given ending points.

    2/   Let P1, ..., PN be N points on L, each Pn writing [xn, yn].

    3/   A random perturbation rn is added to the values y1, ..., yN.

    4/   A stationary random process with gaussian correlation function is used to build a smooth curve passing through the points
          (x1, y1+r1), ..., (xN, yN+rN) (procedure KG, where "KG" stands for "Kriging").

    5/   The result is drawn or mapped onto some predefined shape:
                  xkcd_hist,
                  xkcd_polyline,
                  xkcd_circle

    6/   A procedure xkcd_func is also provided to draw functions defined by an explicit relation.
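For step 4/, the smooth curve is the simple Kriging predictor y(x) = mu + k(x) . K^(-1) . (Y - mu), where K is the node-to-node correlation matrix, k(x) the row vector of correlations between the point x and the nodes, and mu the mean of the perturbed values; this is exactly what the procedure KG below computes.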
 

KG := proc(X, Y, psi, sigma)
  # Simple 1D Kriging with gaussian correlation function.
  # X     : list of nodes
  # Y     : list of (perturbed) values at the nodes
  # psi   : correlation length
  # sigma : standard deviation of the underlying random process
  # Returns an expression in the global name x interpolating the points (X[i], Y[i]).
  local NX, DX, K, mu, k, y:
  NX := numelems(X);
  DX := < seq(Vector[row](NX, X), k=1..NX) >^+:
  K  := (sigma, psi) -> evalf( sigma^2 *~ exp~(-((DX - DX^+) /~ psi)^~2) ):  # node-to-node correlation matrix
  mu := add(Y) / NX;                                                         # constant trend (mean of the values)
  k  := (x, sigma, psi) -> evalf( convert(sigma^2 *~ exp~(-((x -~ X ) /~ psi)^~2), Vector[row]) ):  # point-to-node correlations
  y  := mu + k(x, sigma, psi) . (K(sigma, psi))^(-1) . convert((Y -~ mu), Vector[column]):  # Kriging predictor
  return y
end proc:


xkcd_hline := proc(p1::list, p2::list, a::nonnegative, lc::positive, col)
  # p1 : first ending point
  # p2 : second ending point
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local roll, NX, LX, X, Z:
  roll := rand(-1.0 .. 1.0):
  NX   := 10:
  LX   := p2[1]-p1[1]:
  X    := [seq(p1[1]..p2[1], LX/(NX-1))]:
  Z    := [p1[2], seq(p1[2]+a*roll(), k=1..NX-1)]:
  return plot(KG(X, Z, lc*LX, 1), x=min(X)..max(X), color=col, scaling=constrained):
end proc:


xkcd_line := proc(L::list, a::nonnegative, lc::positive, col, {lsty::integer:=1})
  # L  : list which contains the two ending point
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local T, roll, NX, DX, DY, LX, A, m, M, X, Z, P:
  T    := (a, x0, y0, l) ->
             plottools:-transform(
               (x,y) -> [ x0 + l * (x*cos(a)-y*sin(a)), y0 + l * (x*sin(a)+y*cos(a)) ]
             ):
  roll := rand(-1.0 .. 1.0):
  NX   := 5:
  DX   := L[2][1]-L[1][1]:
  DY   := L[2][2]-L[1][2]:
  LX := sqrt(DX^2+DY^2):
  if DX <> 0 then
     A := arcsin(DY/LX):
  else
     A:= Pi/2:
  end if:
  X := [seq(0..1, 1/(NX-1))]:
  Z := [ seq(a*roll(), k=1..NX)]:
  P := plot(KG(X, Z, lc, 1), x=0..1, color=col, scaling=constrained, linestyle=lsty):
  return T(A, op(L[1]), LX)(P)
end proc:


xkcd_func := proc(f, r::list, NX::posint, a::positive, lc::positive, col)
  # f  : function to draw
  # r  : plot range
  # NX : number of equidistant "nodes" in the range r (boundaries included)
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local roll, F, LX, Pf, Xf, Zf:
  roll := rand(-1.0 .. 1.0):
  F    := unapply(f, indets(f, name)[1]);
  LX   := r[2]-r[1]:
  Pf   := [seq(r[1]..r[2], LX/(NX-1))]:
  Xf   := Pf +~ [seq(a*roll(), k=1..numelems(Pf))]:
  Zf   := F~(Pf) +~ [seq(a*roll(), k=1..numelems(Pf))]:
  return plot(KG(Xf, Zf, lc*LX, 1), x=min(Xf)..max(Xf), color=col):
end proc:




xkcd_hist := proc(H, ah, av, ax, ay, lch, lcv, lcx, lcy, colh, colxy)
  # H   : Histogram
  # ah  : amplitude of the random perturbations on the horizontal boundaries of the bins
  # av  : amplitude of the random perturbations on the vertical boundaries of the bins
  # ax  : amplitude of the random perturbations on the horizontal axis
  # ay  : amplitude of the random perturbations on the vertical axis
  # lch : correlation length on the horizontal boundaries of the bins
  # lcv : correlation length on the vertical boundaries of the bins
  # lcx : correlation length on the horizontal axis
  # lcy : correlation length on the vertical axis
  # colh : color of the histogram
  # colxy: color of the axes
  local data, horiz, verti, horizontal_lines, vertical_lines, po, rpo, p1, p2:
  data  := op(1..-2, op(1, H)):
  verti := sort( [seq(data[n][3..4][], n=1..numelems([data]))] , key=(x->x[1]) );
  verti := verti[1],
           map(
                n -> if verti[n][2] > verti[n+1][2] then
                        verti[n]
                      else
                        verti[n+1]
                      end if,
                [seq(2..numelems(verti)-2,2)]
           )[],
           verti[-1];
  horiz := seq(data[n][[4, 3]], n=1..numelems([data])):

  horizontal_lines := NULL:
  for po in horiz do
    horizontal_lines := horizontal_lines, xkcd_hline(po[1], po[2], ah, lch, colh):
  end do:

  vertical_lines := NULL:
  for po in [verti] do
    rpo := po[[2, 1]]:
    vertical_lines := vertical_lines, xkcd_hline([0, rpo[2]], rpo, av, lcv, colh):
  end do:

  p1 := [2*verti[1][1]-verti[2][1], 0]:
  p2 := [2*verti[-1][1]-verti[-2][1], 0]:

  return
    display(
      horizontal_lines,
      T~([vertical_lines]),
      xkcd_hline(p1, p2, ax, lcx, colxy),
      T(xkcd_hline([0, 0], [1.2*max(op~(2, [verti])), 0], ay, lcy, colxy)),
      axes=none,
      scaling=unconstrained
    );
end proc:


xkcd_polyline := proc(L::list, a::nonnegative, lc::positive, col)
  # xkcd_polyline reduces to xkcd_line when L has 2 elements
  # L  : List of points
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local T, roll, NX, n, DX, DY, LX, A, m, M, X, Z, P:
  T    := (a, x0, y0, l) ->
             plottools:-transform(
               (x,y) -> [ x0 + l * (x*cos(a)-y*sin(a)), y0 + l * (x*sin(a)+y*cos(a)) ]
             ):
  roll := rand(-1.0 .. 1.0):
  NX   := 5:
  for n from 1 to numelems(L)-1 do
    DX   := L[n+1][1]-L[n][1]:
    DY   := L[n+1][2]-L[n][2]:
    LX := sqrt(DX^2+DY^2):
    if DX <> 0 then
      A := evalf(arcsin(abs(DY)/LX)):
      if DX >= 0 and DY <= 0 then A := -A end if:
      if DX <= 0 and DY >  0 then A := Pi-A end if:
      if DX <= 0 and DY <= 0 then A := Pi+A end if:
    else
      A:= Pi/2:
      if DY < 0 then A := 3*Pi/2 end if:
    end if:
    if n=1 then
      X := [seq(0..1, 1/(NX-1))]:
      Z := [seq(a*roll(), k=1..NX)]:
    else
      X := [0    , seq(1/(NX-1)..1, 1/(NX-1))]:
      Z := [Z[NX], seq(a*roll(), k=1..NX-1)]:
    end if:
    P    := plot(KG(X, Z, lc, 1), x=0..1, color=col, scaling=constrained):
    P||n := T(A, op(L[n]), LX)(P):
  end do;
  return seq(P||n, n=1..numelems(L)-1)
end proc:


xkcd_circle := proc(a::nonnegative, lc::positive, r::positive, cent::list, col)
  # a   : amplitude of the random perturbations
  # lc  : correlation length
  # r   : radius of the circle
  # cent: center of the circle
  # col : color
  local roll, NX, LX, X, Z, xkg, A:
  roll := rand(-1.0 .. 1.0):
  NX   := 10:
  X    := [seq(0..1, 1/(NX-1))]:
  Z    := [0, seq(a*roll(), k=1..NX-1)]:
  xkg  := KG(X, Z, lc, 1):
  A    := Pi*roll():
  return plot([cent[1]+r*(1+xkg)*cos(2*Pi*x+A), cent[2]+r*(1+xkg)*sin(2*Pi*x+A), x=0..1], color=col)
end proc:

T := plottools:-transform((x,y) -> [y, x]):   # swaps x and y, turning horizontal constructs into vertical ones
 

# Axes plot

x_axis := xkcd_hline([0, 0], [10, 0], 0.03, 0.5, black):
y_axis := xkcd_hline([0, 0], [10, 0], 0.03, 0.5, black):
display(
  x_axis,
  T(y_axis),
  axes=none,
  scaling=constrained
)

 

# A simple function

f := 1+10*(x/5-1)^2:
F := xkcd_func(f, [0.5, 9.5], 6, 0.3, 0.4, red):

display(
  x_axis,
  T(y_axis),
  F,
  axes=none,
  scaling=constrained
)

 

# A histogram

S := Sample(Normal(0,1),100):
H := Histogram(S, maxbins=6):
xkcd_hist(H,   0, 0.02, 0.001, 0.01,   1, 0.1, 0.01, 1,   red, black)

 

# Axes plus grid with two red straight lines

r := rand(-0.1 .. 0.1):

x_axis := xkcd_line([[-2, 0], [10, 0]], 0.01, 0.2, black):
y_axis := xkcd_line([[0, -2], [0, 10]], 0.01, 0.2, black):
d1     := xkcd_line([[-1, 1], [9, 9]] , 0.01, 0.2, red):
d2     := xkcd_line([[-1, 9], [9, -1]], 0.01, 0.2, red):
display(
  x_axis, y_axis,
  seq( xkcd_line([[-2+0.3*r(), u+0.3*r()], [10+0.3*r(), u+0.3*r()]], 0.005, 0.5, gray), u in [seq(-1..9, 2)]),
  seq( xkcd_line([[u+0.3*r(), -2+0.3*r()], [u+0.3*r(), 10+0.3*r()]], 0.005, 0.5, gray), u in [seq(-1..9, 2)]),
  d1, d2,
  axes=none,
  scaling=constrained
)

 

# Axes and a couple of polygonal lines

d1 := xkcd_polyline([[0, 0], [1, 3], [3, 5], [7, 1], [9, 7]], 0.01, 1, red):
d2 := xkcd_polyline([[0, 9], [2, 8], [5, 2], [8, 3], [10, -1]], 0.01, 1, blue):

display(
  x_axis, y_axis,
  d1, d2,
  axes=none,
  scaling=constrained
)

 

# A few polygonal shapes

display(
  xkcd_polyline([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], 0.01, 1, red),
  xkcd_polyline([[1/3, 1/3], [2/3, 1/3], [2/3, 4/3], [-1, 4/3], [1/3, 1/3]], 0.01, 1, blue),
  xkcd_polyline([[-1/3, -1/3], [4/3, 1/2], [1/2, 1/2], [1/2,-1], [-1/3, -1/3]], 0.01, 1, green),
  axes=none,
  scaling=constrained
)

 

# A few circles

cols  := [red, green, blue, gold, black]:                                # colors
cents := convert( Statistics:-Sample(Uniform(-1, 3), [5, 2]), listlist): # centers
radii := Statistics:-Sample(Uniform(1/2, 2), 5):                         # radii
lcs   := Statistics:-Sample(Uniform(0.2, 0.7), 5):                       # correlation lengths

display(
  seq(
    xkcd_circle(0.02, lcs[n], radii[n], cents[n], cols[n]),
    n=1..5
  ),
  axes=none,
  scaling=constrained
)

 

# A 3D drawing

x_axis := xkcd_line([[0, 0], [5, 0]], 0.01, 0.2, black):
y_axis := xkcd_line([[0, 0], [4, 2]], 0.01, 0.2, black):
z_axis := xkcd_line([[0, 0], [0, 5]], 0.01, 0.2, black):

f1 := 4*cos(x/6)-1:
F1 := xkcd_func(f1, [0.5, 5], 6, 0.001, 0.8, red):
F2 := xkcd_line([[0.5, eval(f1, x=0.5)], [3, 4]], 0.01, 0.2, red):
f3 := 4*cos((x-2)/6):
F3 := xkcd_func(f3, [3, 7], 6, 0.001, 0.8, red):
F4 := xkcd_line([[5, eval(f1, x=5)], [7, eval(f3, x=7)]], 0.01, 0.2, red):

dx := xkcd_line([[2, 1], [4, 1]], 0.01, 0.2, gray, lsty=3):
dy := xkcd_line([[2, 0], [4, 1]], 0.01, 0.2, gray, lsty=3):
dz := xkcd_line([[4, 1], [4, 3]], 0.01, 0.2, gray, lsty=3):

po := xkcd_circle(0.02, 0.3, 0.1, [4, 3], blue):

# Numerical values come from "probe info + copy/paste"

nvect   := xkcd_polyline([[4, 3], [4.57, 4.26], [4.35, 4.1], [4.57, 4.26], [4.58, 4.02]], 0.01, 1, blue):
tg1vect := xkcd_polyline([[4, 3], [4.78, 2.59], [4.49, 2.87], [4.78, 2.59], [4.46, 2.57]], 0.01, 1, blue):
tg2vect := xkcd_polyline([[4, 3], [4.79, 3.35], [4.70, 3.13], [4.79, 3.35], [4.46, 3.35]], 0.01, 1, blue):
rec1    := xkcd_polyline([[4.118, 3.286], [4.365, 3.396], [4.257, 3.108]], 0.01, 1, blue):
rec2    := xkcd_polyline([[4.257, 3.108], [4.476, 2.985], [4.259, 2.876]], 0.01, 1, blue):



display(
  x_axis, y_axis, z_axis,
  F1, F2, F3, F4,
  dx, dy, dz,
  po,
  nvect, tg1vect, tg2vect, rec1, rec2,
  axes=none,
  scaling=constrained
)

 

# Arrow

d1 := xkcd_polyline([[0, 0], [1, 0], [0.9, 0.05], [1, 0], [0.9, -0.05]], 0.01, 1, red):


T := (a, x0, y0, l) ->
             plottools:-transform(
               (x,y) -> [ x0 + l * (x*cos(a)-y*sin(a)), y0 + l * (x*sin(a)+y*cos(a)) ]
             ):


display(
  seq( T(2*Pi*n/10, 0.5, 0, 1/2)(
           display(
              xkcd_polyline(
                  [[0, 0], [1, 0], [0.9, 0.05], [1, 0], [0.9, -0.05]],
                  0.01,
                  1,
                  ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12])
               )
           )
        ),
       n=1..10
  ),
  axes=none,
  scaling=constrained
)

 

 


 

Download XKCD.mw

 

Hi, 

I would like to share this work I've done. 
No big math here, just a demonstration of Maple's capabilities in 3D visualization.

All the plots in the file have been discarded to reduce the size of this post. Here is a screen capture to give you an idea of what is inside the file.

Download 3D_Visualization.mw

Hi, 

In a recent post (Monte Carlo Integration) Radaar shared their work on the numerical integration, with the Monte Carlo method, of a function defined in polar coordinates.
Radaar used a raw strategy based on sampling in cartesian coordinates plus an ad hoc transformation.
Radaar obtained reasonably good results, but I posted a comment to show how Monte Carlo summation in polar coordinates can be done in a much simpler way. Behind this is the choice of a "good" sampling distribution which makes the integration problem as simple as Monte Carlo integration over a 2D rectangle with sides parallel to the coordinate axes.
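For the record, here is a minimal sketch of that idea (my notation; the integrand f below is just a hypothetical example): sampling theta uniformly and r with density 2r/R^2 (i.e. r = R*sqrt(u) with u uniform on [0, 1]) absorbs the jacobian r dr dtheta, and the integral over the disk of radius R becomes Pi*R^2 times a plain sample mean.

with(Statistics):
f := (r, theta) -> r*cos(theta)^2:               # hypothetical integrand in polar coordinates
R := 2:  N := 10^5:
Theta := Sample(Uniform(0, evalf(2*Pi)), N):
Rad   := R *~ sqrt~(Sample(Uniform(0, 1), N)):   # r has density 2r/R^2 on [0, R]
evalf(Pi*R^2) * Mean(f~(Rad, Theta));            # exact value here: Pi*R^3/3, about 8.378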

That comment pushed me to share the present work on Monte Carlo integration over simple polygons ("simple" meaning that no two sides intersect).
Here again one can use raw Monte Carlo integration on the rectangle the polygon is inscribed in. But as in Radaar's post, a specific sampling distribution can be used to make the summation method more elegant.

This work relies on three main ingredients (a minimal sketch of the first two follows the list):

  1. The Dirichlet distribution, one form of which enables sampling the 2D simplex in a uniform way.
  2. The construction of a one-to-one mapping from this simplex onto any non-degenerate triangle (a mapping whose Jacobian is a constant equal to the ratio of the areas of the two triangles).
  3. A tessellation into triangles of the polygon to integrate over.
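Here is the announced sketch of ingredients 1 and 2 (the triangle vertices and the integrand are hypothetical examples of mine): a Dirichlet(1, 1, 1) sample, obtained by normalizing three iid exponential variates, gives barycentric coordinates uniform on the 2-simplex, and the barycentric combination of the vertices is then uniform on the triangle.

with(Statistics):
A := [0, 0]:  B := [2, 0]:  C := [1, 3]:         # hypothetical triangle
N := 10^4:
E1 := Sample(Exponential(1), N):
E2 := Sample(Exponential(1), N):
E3 := Sample(Exponential(1), N):
S  := E1 +~ E2 +~ E3:
W1 := E1 /~ S:  W2 := E2 /~ S:  W3 := E3 /~ S:   # Dirichlet(1,1,1): uniform on the 2-simplex
X  := W1*~A[1] +~ W2*~B[1] +~ W3*~C[1]:          # uniform points of the triangle
Y  := W1*~A[2] +~ W2*~B[2] +~ W3*~C[2]:
f  := (x, y) -> x^2 + y:                         # hypothetical integrand
AreaT := abs((B[1]-A[1])*(C[2]-A[2]) - (C[1]-A[1])*(B[2]-A[2])) / 2:
AreaT * Mean(f~(X, Y));                          # Monte Carlo estimate of Int f over the triangle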


This work was carried out in Maple 2015, which required the development of a module to do the tessellation. Maybe more recent Maple versions contain built-in procedures to do that.
 

Monte_Carlo_Integration.mw

 
