mmcdara


MaplePrimes Activity


These are Posts that have been published by mmcdara

A lot of scientific software proposes packages for drawing figures in XKCD style.
Up to now I thought this was restricted to open-source products (R, Python, ...), but I recently discovered that Matlab and even Mathematica do the same.

Layton S (2012). “XKCDIFY! Adding flair to boring Matlab Axes one plot at a time.” Last accessed on December 08, 2014, URL https://github.com/slayton/matlab-xkcdify.

Woods S (2012). “xkcd-style graphs.” Last accessed on December 08, 2014, URL http://mathematica.stackexchange.com/questions/11350/xkcd-style-graphs/11355#11355.

 

So why not Maple?

As a regular user of R, I could have looked at the body of the corresponding procedures to see how these drawings are made and simply translated them into Maple.
But copying for the sake of copying is not of much interest.
So I started to develop some primitives for "XKCD-drawing" lines, polygons, circles and even histograms.
My goal is not to write an XKCD package (I don't have the skills for that) but just to arouse the interest of (maybe) a few people here who could continue this preliminary work.


A main problem is that of the XKCD fonts: redefining them in Maple is out of the question, and I guess using them in a commercial product would not be legal (?). So no XKCD font in this first work, nor the funny guy who appears recurrently in the drawings (but he could easily be constructed in Maple).

In a recent post (Plot styling - experimenting with Maple's plotting...) Samir Khan proposed a few styles made of several plotting options, some of which he named "Excel style" or "Oscilloscope style"... maybe a future "XKCD style" in Maple?


This work has been done with Maple 2015 and reuses an old version of a 1D-Kriging procedure.

 

restart:

with(LinearAlgebra):
with(plots):
with(Statistics):

 

The principle is always the same:
    1/   Let L be a straight line, either defined by its two ending points (xkcd_hline, xkcd_line) or taken as the default [0, 0], [1, 0] line.
          For xkcd_line the construction is done on this default horizontal line and the result is then rotated and scaled to match L.

    2/   Let P1, ..., PN be N points on L, each Pn being written [xn, yn].

    3/   A random perturbation rn is added to the values y1, ..., yN.

    4/   A stationary random process with Gaussian correlation function is used to build a smooth curve passing through the points
          (x1, y1+r1), ..., (xN, yN+rN) (procedure KG, where "KG" stands for "Kriging").

    5/   The result is drawn or mapped to some predefined shape:
                  xkcd_hist,
                  xkcd_polyline,
                  xkcd_circle

    6/   A procedure xkcd_func is also provided to draw functions defined by an explicit relation.
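For reference, the smooth curve built at step 4 is the simple-Kriging (Gaussian-process regression) mean that procedure KG below implements. Writing Y_n = y_n + r_n for the perturbed values, it reads, in LaTeX form,

$$
y(x) \;=\; \mu + k(x)\, K^{-1} (Y - \mu), \qquad
K_{ij} = \sigma^2 e^{-\left(\frac{x_i - x_j}{\psi}\right)^2}, \qquad
k(x)_i = \sigma^2 e^{-\left(\frac{x - x_i}{\psi}\right)^2}, \qquad
\mu = \frac{1}{N}\sum_{n=1}^{N} Y_n,
$$

where psi is the correlation length and sigma the standard deviation of the process (the xkcd_* procedures below always call KG with sigma = 1).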
 

KG := proc(X, Y, psi, sigma)
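  # X    : list of the abscissas of the interpolation nodes
  # Y    : list of the (perturbed) values at these nodes
  # psi  : correlation length of the Gaussian kernel
  # sigma: standard deviation of the process (the callers below pass 1)
  # The result is an expression in the global name x (the kriged curve), ready to be plotted.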
  local NX, DX, K, mu, k, y:
  NX := numelems(X);
  DX := < seq(Vector[row](NX, X), k=1..NX) >^+:
  K  := (sigma, psi) -> evalf( sigma^2 *~ exp~(-((DX - DX^+) /~ psi)^~2) ):
  mu := add(Y) / NX;
  k  := (x, sigma, psi) -> evalf( convert(sigma^2 *~ exp~(-((x -~ X ) /~ psi)^~2), Vector[row]) ):
  y  := mu + k(x, sigma, psi) . (K(sigma, psi))^(-1) . convert((Y -~ mu), Vector[column]):
  return y
end proc:


xkcd_hline := proc(p1::list, p2::list, a::nonnegative, lc::positive, col)
  # p1 : first ending point
  # p2 : second ending point
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local roll, NX, LX, X, Z:
  roll := rand(-1.0 .. 1.0):
  NX   := 10:
  LX   := p2[1]-p1[1]:
  X    := [seq(p1[1]..p2[1], LX/(NX-1))]:
  Z    := [p1[2], seq(p1[2]+a*roll(), k=1..NX-1)]:
  return plot(KG(X, Z, lc*LX, 1), x=min(X)..max(X), color=col, scaling=constrained):
end proc:


xkcd_line := proc(L::list, a::nonnegative, lc::positive, col, {lsty::integer:=1})
  # L  : list which contains the two ending points
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local T, roll, NX, DX, DY, LX, A, m, M, X, Z, P:
  T    := (a, x0, y0, l) ->
             plottools:-transform(
               (x,y) -> [ x0 + l * (x*cos(a)-y*sin(a)), y0 + l * (x*sin(a)+y*cos(a)) ]
             ):
  roll := rand(-1.0 .. 1.0):
  NX   := 5:
  DX   := L[2][1]-L[1][1]:
  DY   := L[2][2]-L[1][2]:
  LX := sqrt(DX^2+DY^2):
  if DX <> 0 then
     A := arcsin(DY/LX):
  else
     A:= Pi/2:
  end if:
  X := [seq(0..1, 1/(NX-1))]:
  Z := [ seq(a*roll(), k=1..NX)]:
  P := plot(KG(X, Z, lc, 1), x=0..1, color=col, scaling=constrained, linestyle=lsty):
  return T(A, op(L[1]), LX)(P)
end proc:


xkcd_func := proc(f, r::list, NX::posint, a::positive, lc::positive, col)
  # f  : function to draw
  # r  : plot range
  # NX : number of equidistant "nodes" in the range r (boundaries included)
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local roll, F, LX, Pf, Xf, Zf:
  roll := rand(-1.0 .. 1.0):
  F    := unapply(f, indets(f, name)[1]);
  LX   := r[2]-r[1]:
  Pf   := [seq(r[1]..r[2], LX/(NX-1))]:
  Xf   := Pf +~ [seq(a*roll(), k=1..numelems(Pf))]:
  Zf   := F~(Pf) +~ [seq(a*roll(), k=1..numelems(Pf))]:
  return plot(KG(Xf, Zf, lc*LX, 1), x=min(Xf)..max(Xf), color=col):
end proc:




xkcd_hist := proc(H, ah, av, ax, ay, lch, lcv, lcx, lcy, colh, colxy)
  # H   : Histogram
  # ah  : amplitude of the random perturbations on the horizontal boundaries of the bins
  # av  : amplitude of the random perturbations on the vertical boundaries of the bins
  # ax  : amplitude of the random perturbations on the horizontal axis
  # ay  : amplitude of the random perturbations on the vertical axis
  # lch : correlation length on the horizontal boundaries of the bins
  # lcv : correlation length on the vertical boundaries of the bins
  # lcx : correlation length on the horizontal axis
  # lcy : correlation length on the vertical axis
  # colh: color of the histogram
  # colxy: color of the axes
  local data, horiz, verti, horizontal_lines, vertical_lines, po, rpo, p1, p2:
  data  := op(1..-2, op(1, H)):
  verti := sort( [seq(data[n][3..4][], n=1..numelems([data]))] , key=(x->x[1]) );
  verti := verti[1],
           map(
                n -> if verti[n][2] > verti[n+1][2] then
                        verti[n]
                      else
                        verti[n+1]
                      end if,
                [seq(2..numelems(verti)-2,2)]
           )[],
           verti[-1];
  horiz := seq(data[n][[4, 3]], n=1..numelems([data])):

  horizontal_lines := NULL:
  for po in horiz do
    horizontal_lines := horizontal_lines, xkcd_hline(po[1], po[2], ah, lch, colh):
  end do:

  vertical_lines := NULL:
  for po in [verti] do
    rpo := po[[2, 1]]:
    vertical_lines := vertical_lines, xkcd_hline([0, rpo[2]], rpo, av, lcv, colh):
  end do:

  p1 := [2*verti[1][1]-verti[2][1], 0]:
  p2 := [2*verti[-1][1]-verti[-2][1], 0]:

  return
    display(
      horizontal_lines,
      T~([vertical_lines]),
      xkcd_hline(p1, p2, ax, lcx, colxy),
      T(xkcd_hline([0, 0], [1.2*max(op~(2, [verti])), 0], ay, lcy, colxy)),
      axes=none,
      scaling=unconstrained
    );
end proc:


xkcd_polyline := proc(L::list, a::nonnegative, lc::positive, col)
  # xkcd_polyline reduces to xkcd_line when L has 2 elements
  # L  : List of points
  # a  : amplitude of the random perturbations
  # lc : correlation length
  # col: color
  local T, roll, NX, n, DX, DY, LX, A, m, M, X, Z, P:
  T    := (a, x0, y0, l) ->
             plottools:-transform(
               (x,y) -> [ x0 + l * (x*cos(a)-y*sin(a)), y0 + l * (x*sin(a)+y*cos(a)) ]
             ):
  roll := rand(-1.0 .. 1.0):
  NX   := 5:
  for n from 1 to numelems(L)-1 do
    DX   := L[n+1][1]-L[n][1]:
    DY   := L[n+1][2]-L[n][2]:
    LX := sqrt(DX^2+DY^2):
    if DX <> 0 then
      A := evalf(arcsin(abs(DY)/LX)):
      if DX >= 0 and DY <= 0 then A := -A end if:
      if DX <= 0 and DY >  0 then A := Pi-A end if:
      if DX <= 0 and DY <= 0 then A := Pi+A end if:
    else
      A:= Pi/2:
      if DY < 0 then A := 3*Pi/2 end if:
    end if:
    if n=1 then
      X := [seq(0..1, 1/(NX-1))]:
      Z := [seq(a*roll(), k=1..NX)]:
    else
      X := [0    , seq(1/(NX-1)..1, 1/(NX-1))]:
      Z := [Z[NX], seq(a*roll(), k=1..NX-1)]:
    end if:
    P    := plot(KG(X, Z, lc, 1), x=0..1, color=col, scaling=constrained):
    P||n := T(A, op(L[n]), LX)(P):
  end do;
  return seq(P||n, n=1..numelems(L)-1)
end proc:


xkcd_circle := proc(a::nonnegative, lc::positive, r::positive, cent::list, col)
  # a   : amplitude of the random perturbations
  # lc  : correlation length
  # r   : radius of the circle
  # cent: center of the circle
  # col : color
  local roll, NX, LX, X, Z, xkg, A:
  roll := rand(-1.0 .. 1.0):
  NX   := 10:
  X    := [seq(0..1, 1/(NX-1))]:
  Z    := [0, seq(a*roll(), k=1..NX-1)]:
  xkg  := KG(X, Z, lc, 1):
  A    := Pi*roll():
  return plot([cent[1]+r*(1+xkg)*cos(2*Pi*x+A), cent[2]+r*(1+xkg)*sin(2*Pi*x+A), x=0..1], color=col)
end proc:

T := plottools:-transform((x,y) -> [y, x]):
 

# Axes plot

x_axis := xkcd_hline([0, 0], [10, 0], 0.03, 0.5, black):
y_axis := xkcd_hline([0, 0], [10, 0], 0.03, 0.5, black):
display(
  x_axis,
  T(y_axis),
  axes=none,
  scaling=constrained
)

 

# A simple function

f := 1+10*(x/5-1)^2:
F := xkcd_func(f, [0.5, 9.5], 6, 0.3, 0.4, red):

display(
  x_axis,
  T(y_axis),
  F,
  axes=none,
  scaling=constrained
)

 

# A histogram

S := Sample(Normal(0,1),100):
H := Histogram(S, maxbins=6):
xkcd_hist(H,   0, 0.02, 0.001, 0.01,   1, 0.1, 0.01, 1,   red, black)

 

# Axes plus grid with two red straight lines

r := rand(-0.1 .. 0.1):

x_axis := xkcd_line([[-2, 0], [10, 0]], 0.01, 0.2, black):
y_axis := xkcd_line([[0, -2], [0, 10]], 0.01, 0.2, black):
d1     := xkcd_line([[-1, 1], [9, 9]] , 0.01, 0.2, red):
d2     := xkcd_line([[-1, 9], [9, -1]], 0.01, 0.2, red):
display(
  x_axis, y_axis,
  seq( xkcd_line([[-2+0.3*r(), u+0.3*r()], [10+0.3*r(), u+0.3*r()]], 0.005, 0.5, gray), u in [seq(-1..9, 2)]),
  seq( xkcd_line([[u+0.3*r(), -2+0.3*r()], [u+0.3*r(), 10+0.3*r()]], 0.005, 0.5, gray), u in [seq(-1..9, 2)]),
  d1, d2,
  axes=none,
  scaling=constrained
)

 

# Axes and a couple of polygonal lines

d1 := xkcd_polyline([[0, 0], [1, 3], [3, 5], [7, 1], [9, 7]], 0.01, 1, red):
d2 := xkcd_polyline([[0, 9], [2, 8], [5, 2], [8, 3], [10, -1]], 0.01, 1, blue):

display(
  x_axis, y_axis,
  d1, d2,
  axes=none,
  scaling=constrained
)

 

# A few polygonal shapes

display(
  xkcd_polyline([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], 0.01, 1, red),
  xkcd_polyline([[1/3, 1/3], [2/3, 1/3], [2/3, 4/3], [-1, 4/3], [1/3, 1/3]], 0.01, 1, blue),
  xkcd_polyline([[-1/3, -1/3], [4/3, 1/2], [1/2, 1/2], [1/2,-1], [-1/3, -1/3]], 0.01, 1, green),
  axes=none,
  scaling=constrained
)

 

# A few circles

cols  := [red, green, blue, gold, black]:                                # colors
cents := convert( Statistics:-Sample(Uniform(-1, 3), [5, 2]), listlist): # centers
radii := Statistics:-Sample(Uniform(1/2, 2), 5):                         # radii
lcs   := Statistics:-Sample(Uniform(0.2, 0.7), 5):                       # correlation lengths

display(
  seq(
    xkcd_circle(0.02, lcs[n], radii[n], cents[n], cols[n]),
    n=1..5
  ),
  axes=none,
  scaling=constrained
)

 

# A 3D drawing

x_axis := xkcd_line([[0, 0], [5, 0]], 0.01, 0.2, black):
y_axis := xkcd_line([[0, 0], [4, 2]], 0.01, 0.2, black):
z_axis := xkcd_line([[0, 0], [0, 5]], 0.01, 0.2, black):

f1 := 4*cos(x/6)-1:
F1 := xkcd_func(f1, [0.5, 5], 6, 0.001, 0.8, red):
F2 := xkcd_line([[0.5, eval(f1, x=0.5)], [3, 4]], 0.01, 0.2, red):
f3 := 4*cos((x-2)/6):
F3 := xkcd_func(f3, [3, 7], 6, 0.001, 0.8, red):
F4 := xkcd_line([[5, eval(f1, x=5)], [7, eval(f3, x=7)]], 0.01, 0.2, red):

dx := xkcd_line([[2, 1], [4, 1]], 0.01, 0.2, gray, lsty=3):
dy := xkcd_line([[2, 0], [4, 1]], 0.01, 0.2, gray, lsty=3):
dz := xkcd_line([[4, 1], [4, 3]], 0.01, 0.2, gray, lsty=3):

po := xkcd_circle(0.02, 0.3, 0.1, [4, 3], blue):

# Numerical values come from "probe info + copy/paste"

nvect   := xkcd_polyline([[4, 3], [4.57, 4.26], [4.35, 4.1], [4.57, 4.26], [4.58, 4.02]], 0.01, 1, blue):
tg1vect := xkcd_polyline([[4, 3], [4.78, 2.59], [4.49, 2.87], [4.78, 2.59], [4.46, 2.57]], 0.01, 1, blue):
tg2vect := xkcd_polyline([[4, 3], [4.79, 3.35], [4.70, 3.13], [4.79, 3.35], [4.46, 3.35]], 0.01, 1, blue):
rec1    := xkcd_polyline([[4.118, 3.286], [4.365, 3.396], [4.257, 3.108]], 0.01, 1, blue):
rec2    := xkcd_polyline([[4.257, 3.108], [4.476, 2.985], [4.259, 2.876]], 0.01, 1, blue):



display(
  x_axis, y_axis, z_axis,
  F1, F2, F3, F4,
  dx, dy, dz,
  po,
  nvect, tg1vect, tg2vect, rec1, rec2,
  axes=none,
  scaling=constrained
)

 

# Arrow

d1 := xkcd_polyline([[0, 0], [1, 0], [0.9, 0.05], [1, 0], [0.9, -0.05]], 0.01, 1, red):


T := (a, x0, y0, l) ->
             plottools:-transform(
               (x,y) -> [ x0 + l * (x*cos(a)-y*sin(a)), y0 + l * (x*sin(a)+y*cos(a)) ]
             ):


display(
  seq( T(2*Pi*n/10, 0.5, 0, 1/2)(
           display(
              xkcd_polyline(
                  [[0, 0], [1, 0], [0.9, 0.05], [1, 0], [0.9, -0.05]],
                  0.01,
                  1,
                  ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12])
               )
           )
        ),
       n=1..10
  ),
  axes=none,
  scaling=constrained
)

 

 


 

Download XKCD.mw

 

Hi, 

I would like to share this work I've done. 
No big math here, just a demonstrator of Maple's capabilities in 3D visualization.

All the plots in the file have been discarded to reduce the size of this post. Here is a screen capture to give you an idea of what is inside the file.

Download 3D_Visualization.mw

Hi, 

In a recent post (Monte Carlo Integration) Radaar shared their work on the numerical integration, by the Monte Carlo method, of a function defined in polar coordinates.
Radaar used a raw strategy based on sampling in Cartesian coordinates plus an ad hoc transformation.
Radaar obtained reasonably good results, but I posted a comment to show how Monte Carlo summation in polar coordinates can be done in a much simpler way. Behind this is the choice of a "good" sampling distribution which makes the integration problem as simple as Monte Carlo integration over a 2D rectangle with sides parallel to the coordinate axes.

That comment pushed me to share the present work on Monte Carlo integration over simple polygons ("simple" meaning that no two sides intersect).
Here again one could use raw Monte Carlo integration over the rectangle the polygon is inscribed in. But, as in Radaar's post, a specific sampling distribution can be used that makes the summation method more elegant.

This work relies on three main ingredients:

  1. The Dirichlet distribution, one form of which enables uniform sampling of the 2D simplex.
  2. The construction of a one-to-one mapping from this simplex onto any non-degenerate triangle (a mapping whose Jacobian is constant, equal to the ratio of the areas of the two triangles); see the sketch right after this list.
  3. A tessellation into triangles of the polygon to integrate over.
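To make the first two ingredients concrete, here is a minimal sketch (the names MCTriangle and MCConvexPolygon are mine, not taken from the attached worksheet). The Dirichlet(1, 1, 1) weights are obtained by normalizing three independent Exponential(1) draws; combining the triangle vertices with these weights gives points uniformly distributed in the triangle, and the constant Jacobian reduces to the area of the triangle.

with(Statistics):

# Illustrative sketch only (hypothetical names, not the attached module).
# Monte Carlo estimate of the integral of f over the triangle (A, B, C),
# using Dirichlet(1,1,1) barycentric weights to sample the triangle uniformly.
MCTriangle := proc(f, A::list, B::list, C::list, N::posint)
  local V, E, W, P, area, i, k, p:
  V    := [A, B, C]:
  E    := Sample(Exponential(1), [N, 3]):                                  # exponential draws
  W    := Matrix(N, 3, (i, j) -> E[i, j] / (E[i, 1] + E[i, 2] + E[i, 3])): # Dirichlet(1,1,1) rows
  P    := [seq([add(W[i, k]*V[k][1], k=1..3), add(W[i, k]*V[k][2], k=1..3)], i=1..N)]:
  area := abs((B[1]-A[1])*(C[2]-A[2]) - (C[1]-A[1])*(B[2]-A[2])) / 2:      # constant Jacobian factor
  return area * add(f(p[1], p[2]), p in P) / N
end proc:

# example: f(x,y) = x*y over the unit right triangle (exact value 1/24 ~ 0.0417)
MCTriangle((x, y) -> x*y, [0, 0], [1, 0], [0, 1], 10^4);

# For a convex polygon [P1, ..., Pm], a naive fan triangulation plus the additivity
# of the integral already gives an estimator (the attached module deals with general
# simple polygons through a proper tessellation instead):
MCConvexPolygon := (f, P, N) -> add(MCTriangle(f, P[1], P[k], P[k+1], N), k = 2 .. numelems(P)-1):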


This work was carried out with Maple 2015, which required the development of a small module to do the tessellation. Maybe more recent Maple versions contain built-in procedures to do that.
 

Monte_Carlo_Integration.mw

 

Hi, 

The present work aims to show how Bayesian inference methods can be used to infer (= to assess) the probability that a person detected as infected by SARS-CoV-2 will die (note that I did not write "will die of it", because one can never be sure of the cause of death).
A lot of details are available in the attached pdf file (I tried to be pedagogical enough that people not familiar with Bayesian inference can get a global understanding of the subject; many links are provided for quick access to the different notions).

In particular, I explain why simple mathematics cannot provide a reliable estimate of this probability of death (sometimes referred to as the "death rate") as long as the epidemic continues to spread.

Even if the approach presented here is rather original, that is not the purpose of this post.
For a long time I have had in mind to post an application of Bayesian methods here; the CoVid19 outbreak has simply provided me with the most high-profile topic to do so.
I will say no more about the inference procedure itself (all the material is given in the attached pdf file) and will concentrate only on the Maple implementation of the solution algorithm.

Bayesian inference generally relies on simple algorithms such as MCMC (Markov Chain Monte Carlo) or ABC (Approximate Bayesian Computation), to mention a few, and the corresponding pseudo-code usually takes a few tens of lines.
This is something I had already done in other languages, but I found the task comparatively more difficult in Maple. Probably I was too obsessed with not coding in Maple the way you code in Matlab or R, for instance.
In the end the code I wrote is rather slow, because of the amount of memory it allocates.
In a question I posted weeks ago (How can I prevent the creation of random variables...) Preben gave a solution to limit the memory blow-up: the trick works well, but I am still stuck with memory-size problems (Acer also proposed a solution but I wasn't able to make it work... maybe I was too lazy to modify my code deeply).

Anyway, the code is there, in case anyone would like to take up the challenge to make it more efficient (in which case I'll take it).
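For readers who have never met these samplers, here is a minimal, generic random-walk Metropolis sketch (purely illustrative: this is not the ABC/MCMC code of the attached worksheet, and the target density and proposal width are placeholders). Pre-drawing all the random numbers outside the loop is also one simple way to limit the number of calls to the random generators.

with(Statistics):

# Generic random-walk Metropolis sampler (illustrative sketch only).
# target: un-normalized density (placeholder), sigma: std dev of the Gaussian proposal.
Metropolis := proc(target, x0::numeric, sigma::positive, N::posint)
  local U, G, chain, x, y, n:
  U     := Sample(Uniform(0, 1), N):     # pre-drawn acceptance thresholds
  G     := Sample(Normal(0, sigma), N):  # pre-drawn random-walk increments
  chain := Vector(N, datatype=float):
  x     := evalf(x0):
  for n from 1 to N do
    y := x + G[n]:                          # proposal
    if U[n] < target(y) / target(x) then    # accept with probability min(1, ratio)
      x := y
    end if:
    chain[n] := x:
  end do:
  return chain
end proc:

# example: sampling a standard normal from its un-normalized density
res := Metropolis(z -> exp(-z^2/2), 0, 1, 10^4):
Mean(res), StandardDeviation(res);   # should be close to 0 and 1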

Note 1: this code contains a small "Maplet" to help you choose, from the data file, any country on which you would like to run the inference.
Note 2: Be careful: doing statistics, even Bayesian statistics, needs enough data: some countries have history records spanning only a few days, or no recorded deaths at all; inferring something from such scant data will probably be disappointing.

The attached files:

  • The pdf file is the "companion document" where all, or most, of it is explained. It was written a few days ago for another purpose, and the results it presents were not obtained from the latest data (March 21, 2020, coronavirus).
  • The xls files are data files; they were loaded yesterday (March 28, 2020) from here: coronavirus.
  • The mw file... well, I guess you know what it is.
     

Bayesian_inference.pdf

total-cases-covid-19_NF.xls

total-deaths-covid-19_NF.xls

Bayesian_Inference_ABC+MCMC_NF_2.mw


 

Hi,

Two weeks ago, I started loading data on the CoVid19 outbreak in order to understand, independently of any official communication from any country, what is really going on.

From February 29 to March 9 these data come from https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/ and, from March 10 until now, from https://www.worldometers.info/coronavirus/#repro. In all cases the loading was done manually (copy-paste into a LibreOffice spreadsheet, plus corrections, and saving to an xls file) because I wasn't able to find csv data (csv data do exist here https://github.com/CSSEGISandData/COVID-19, but they end on February 15th).
So I copied-pasted the results from the two sources above into a LibreOffice spreadsheet, adjusted the names of some countries which appeared differently (for instance "United States" instead of "USA"), removed the unnecessary commas and saved the result in an xls file.

I also used data from https://www.worldometers.info/world-population/population-by-country/ to get the populations of more than 260 countries around the world and, finally, csv data from https://ourworldindata.org/coronavirus#covid-19-tests to get synthetic histories of confirmed and death cases (I discovered this site only yesterday evening and I think it could replace all the data I initially loaded).

The two worksheets here are aimed at exploration and visualization only.
Another one is in progress, whose goal is to infer the true death rate (also known as CFR, Case Fatality Rate).

No analysis is presented, if for no other reason than that the available data (except the numbers of deaths) are extremely dependent on the testing policies in place. But some features can be drawn from the data used here.
For instance, if you select country = "China" in file Covid19_Evolution_bis.mw, you will observe the well-known behaviour that the "Apparent Death Rate", which I define as the ratio of the cumulated number of deaths at time t to the cumulated number of confirmed cases at the same time, always underestimates the death rate that can only be known once the outbreak has ended. With this in mind, changing the country in this worksheet from China to Italy seems to lead to frightening interpolations... But here again, without knowing the testing policy no solid conclusion can be drawn: maybe Italy mainly tests elderly people with acute symptoms, hence the huge "Apparent Death Rate" Italy seems to have?
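As a purely illustrative aside (the numbers below are made up, not taken from the worksheets), this "Apparent Death Rate" is just the element-wise ratio of the two cumulated series:

# hypothetical cumulated counts on four successive dates
Deaths := [1, 3, 10, 25]:
Cases  := [50, 120, 400, 900]:
ADR    := evalf(Deaths /~ Cases);   # apparent death rate at each date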


The work has been done with Maple 2015 and some graphics could be improved in a newer version (for instance, as Maple 2015 doesn't allow changing the direction of tickmarks, I overcame this limitation by assigning the dates to the vertical axis on some plots).
The second Explore plot could probably be improved by using a newer version, Maplets, or Embedded Components.

Explore data from https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/ and https://www.worldometers.info/coronavirus/#repro
Files to use
Covid19_Evolution.mw
Covid19_Data.m.zip
Population.xls

Explore data from  https://ourworldindata.org/coronavirus#covid-19-tests
Files to use
Covid19_Evolution_bis.mw
daily-deaths-covid-19-who.xls
total-cases-covid-19-who.xls
Population.xls


I would be interested in any open collaboration with people interested in this post (it is not my intention to write papers on the subject; my only motivation is scientific curiosity).

 
