mmcdara

7891 Reputation

22 Badges

9 years, 55 days

MaplePrimes Activity


These are replies submitted by mmcdara

@acer 

I admit that I have not yet understood everything in your example, but I will think about it more seriously.

The error Error, numeric exception: division by zero raised after T remains disturbing, even if your explanation about levels of evaluation helps to understand why it happens.

Indeed, if you write 

expr := 1/(GAMMA(x*2/3)-56*Pi*sqrt(3)/81):  # or expr := 1/(GAMMA(x*2/3)-56*Pi*sqrt(3)/(GAMMA(2/3))):
U := eval(expr, [x=5]);
U;
signum(U);

No error occurs. Then the question is: what in your definition of "expr" really makes the eval(T, 1) necessary?

 


 

@acer 

Thanks for all these examples.
 

  1. subs(x=1.2, sin(x))
    It's true, I had forgotten this kind of situation, which often requires me to apply eval after subs (or to use subs[eval]).
     
  2. subs([a=b,b=c], [a,b])
    As a basic Maple user I would never have tried to write this: it seems rather weird and I would not have been certain of its result.
    Moreover, subs accepts "rewriting rules" in set or list form. Using a list (as you did) seems to imply some precedence among the "rewriting rules", for instance "apply a=b BEFORE b=c".
    Here I would have bet that subs([a=b,b=c], [a,b]) was equivalent to subs(b=c, subs(a=b, [a, b]));
    it's obviously not the case. Moreover, subs([a=b,b=c], [a,b]) is identical to subs([b=c, a=b], [a,b]).
    This point would not have surprised me if you had written subs({a=b, b=c}, [a,b]) instead.
     
  3. subs(a=b,b=c, [a,b])
    Here equivalent to subs(b=c, subs(a=b, [a, b]));
    Similarly subs(b=c, a=b, [a,b]) = subs(a=b, subs(b=c, [a, b]));
    Looking at the help pages I understand why: "The substitutions are performed sequentially starting with s1".
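Not Maple, but to fix ideas: here is a small Python sketch of the two semantics (the helper names simultaneous/sequential are mine, and plain string replacement stands in for symbolic substitution):

```python
def simultaneous(rules, terms):
    """One single pass: each term is looked up in the original rules,
    never in an already-substituted result (like subs([a=b, b=c], ...))."""
    mapping = dict(rules)
    return [mapping.get(t, t) for t in terms]

def sequential(rules, terms):
    """Apply the rules one after the other, each acting on the result
    of the previous one (like subs(a=b, b=c, ...))."""
    for lhs, rhs in rules:
        terms = [rhs if t == lhs else t for t in terms]
    return terms

print(simultaneous([("a", "b"), ("b", "c")], ["a", "b"]))  # ['b', 'c']
print(simultaneous([("b", "c"), ("a", "b")], ["a", "b"]))  # ['b', 'c'] (order-independent)
print(sequential([("a", "b"), ("b", "c")], ["a", "b"]))    # ['c', 'c'] (a=b feeds b=c)
```

This reproduces the observation above: the list form acts in one pass and is order-independent, while the sequential form chains the rules.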


These examples give me a better understanding of the behaviour of "subs" and of its differences from "eval".

Thank you acer

PS: you wrote "I'm not sure which aspects are most relevant to your query. There are other subtleties or examples which might interest you." ... I'm curious to read about the latter.

@Kitonum 

Exotic situations I would never have thought about (I would "naturally" have written eval(int(f(x),x=0..a), a=1)).

But they really emphasize the different behaviours of eval and subs (even if I don't understand why eval returns a result that fits what someone might expect when writing eval(int(f(x),x=0..x), x=1) ...)

Thanks for that.

@Harry Garst 

I think I have something better at my office.
I'll verify this tomorrow and possibly send you the code.

You can do a lot of formal computations and obtain the exact distribution of Qstar in the simple case where f is constant (f being the pdf of some uniform random variable).
 

Download ExactSolution.mw

 


Please read this in connection with my previous post.

@Carl Love 

I was surprised by the original question about randperm not being present in old Maple versions.
So I gave a first answer using rand only.
The OP then answered that rand(0.0 .. 1.0) wasn't working in his old Maple version.
I thus opened a 1996 reference book about Maple V and saw that rand, at the time, only worked for integer ranges.
I thought it could be the same for the OP's version and thus proposed the piece of code contained in the book by Monaghan, Geddes ...

From the other responses, it seems there was a misunderstanding about the lack of combinat:-randperm in Maple 15.

@Magma 

My apologies. I wasn't able at the time to upload the code and I made typos rewriting it by hand.

Here is the code in Maple 2015

uniform := proc(r::constant..constant)
  local intrange, f:
  intrange := map(x -> round(x*10^Digits), evalf(r)):
  f := rand(intrange):
  return (evalf @ eval(f)) / 10^Digits:
end proc:

# usage

N := 100:
U := uniform(1..N):
s := [seq(U(), n=1..N)]:
t := sort(s, output=permutation);
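The idea of the procedure — draw an integer in a scaled range and divide back by 10^Digits — is language-independent. Here is a hedged Python sketch of the same trick (the DIGITS constant plays the role of Maple's Digits; the names are mine):

```python
import random

DIGITS = 10  # stands in for Maple's Digits environment variable

def uniform(lo, hi):
    """Return a 0-argument generator of floats uniform on lo..hi,
    built from a purely integer RNG, as in the Maple procedure above."""
    a, b = round(lo * 10**DIGITS), round(hi * 10**DIGITS)
    return lambda: random.randint(a, b) / 10**DIGITS

U = uniform(1, 100)
print([U() for _ in range(5)])  # five floats in [1.0, 100.0]
```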


BTW: I just realized you are using Maple 15, not Maple 2015.
 

@cinderella 

It's sand15, now from my home account.

Apologies for the poor work I sent you before (I wasn't able to upload the code with the big red arrow, so I just rewrote it in the post, with a lot of typos).

Here is the correction (I will delete my previous post next Monday, when I'm at the office).

 

restart:
with(Statistics):

z := proc(S, c0, cs, F)
  local a, b, M, N, ECost, DCost, f:
  f := unapply(F, x):
  a := min(S):
  b := max(S):
  M := int((Q-x)*f(x), x=a..Q):
  N := int((Q-x)*f(x), x=Q..b):
  ECost := c0*M + cs*N:
  DCost := diff(ECost, Q):
  return fsolve(DCost=0, Q)
end proc:

Sam := Sample(Uniform(10, 20), 10):

 

# usage

z(Sam, 20, 25, 1);        # the last argument means "the function f(x) is a constant"
z(Sam, 20, 25, Pi);       # same result whatever the constant is
z(Sam, 20, 25, x);        # f(x) = x
z(Sam, 20, 25, exp(x));   # f(x) = exp(x)

        54.34281056
        54.34281056
        -38.05989360, 38.05989360
        21.25818640

# Naive coding

R := 1000:
AllQs := Vector[row](R):
for r from 1 to R do
   Sam := Sample(Uniform(10, 20), 10):
   AllQs[r] := z(Sam, 20, 25, 1);
end do:
AllQs;   # a 1 .. 1000 row Vector

# More elaborate coding

Sam := Sample(Uniform(10, 20), [R, 10]):   # a 1000 x 10 matrix of R samples
AllQs := Vector[row](R, map(r -> z(Sam[r, ..], 20, 25, 1), [$1..R]));

Histogram(AllQs);
DataSummary(AllQs);

    mean              = 51.9516928573001
    standarddeviation =  4.92778749452987
    skewness          = -0.808531480190925
    kurtosis          =  3.43837474404132
    minimum           = 32.6881569200000
    maximum           = 59.9389348400000
    cumulativeweight  = 1000.

 


 

Download Apologies.mw

 

Are you looking for an EM algorithm to estimate the parameters of a mixture of several random variables, or for something more general?

@Carl Love

I think a good way to verify whether Maple treats the weights correctly is to generate an unweighted sample from a weighted one by replicating, in an ad hoc way, the points of the latter.

An example is given below.
Strangely, the option weights=... gives the exact weighted mean (see the data summary on "LargeS") but not the other weighted statistics (which show gross errors).
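The replication trick itself is language-independent; here is a minimal Python sketch (illustrative names, same weights as in the worksheet below) showing that replicating each outcome by its integer weight reproduces the weighted mean exactly:

```python
# Weighted sample: die outcomes with integer replication counts
S = [1, 2, 3, 4, 5, 6]
K = [861, 759, 751, 890, 301, 992]      # integer weights
W = [k / sum(K) for k in K]             # normalized weights

# Fictitious unweighted sample with the same properties as (S, W)
LargeS = [s for s, k in zip(S, K) for _ in range(k)]

weighted_mean = sum(s * w for s, w in zip(S, W))
plain_mean    = sum(LargeS) / len(LargeS)
print(weighted_mean, plain_mean)   # the two means coincide
```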

 

restart:
with(Statistics):

# simulate a weighted die
S := [$1..6];   # outcomes of the die

# randomly generate a collection of weights
roll := rand(1..1000):
rep  := [seq(roll(), k=1..6)];
K    := rep:
W    := Vector(6, K/~add(K)):  W^+;

# The couple (S, W) forms a weighted sample

# Create a fictitious unweighted sample with the same properties
# as the weighted sample (S, W)
LargeS := [ seq(S[n]$K[n], n=1..6) ]:

S := [1, 2, 3, 4, 5, 6]
rep := [861, 759, 751, 890, 301, 992]
Vector[row]([287/1518, 1/6, 751/4554, 445/2277, 301/4554, 496/2277])

# the two summaries should be the same
DataSummary(S, weights=W), DataSummary(LargeS)

                        (weighted S)          (LargeS)
    mean              = 3.4363197189284143    3.436319718928405
    standarddeviation = 1.9617232221910301    1.7758187785899078
    skewness          = 0.11583435203566941   0.12796065739110243
    kurtosis          = 1.4176722719630659    1.73003154918457
    minimum           = 1.0                   1.0
    maximum           = 6.0                   6.0
    cumulativeweight  = 1.0                   4554.0

# let's look at the median and the 50% quantile
Median  (LargeS);        # 3.0
Quantile(LargeS, 0.5);   # 3.0

 


 

Download Medians_and_Quantiles_2.mw

 


@Carl Love 

I answer here only to your first point: "Do you feel that those two should be equal?"

This reminds me of an animated discussion we had a few months ago.
From my memories as a student, I remember that the median (in Statistics) was defined as the 50% quantile. I think this definition is universally accepted for continuous distributions.

First, it's probably necessary to distinguish the definition from Probability theory from the "practical" Statistical definition used for finite sets of numbers (this is my feeling).

Next comes the case of discrete distributions, and thus of finite samples of any continuous distribution. Here things seem a little more complex, for it is sometimes written that the median is the "mid point" of the sample.
For instance, the median of a sorted series S of size 2P+1 is universally acknowledged to be S[P+1] ... and is often set to
(S[P]+S[P+1])/2 if the series has an even size 2P.
But this is not a definition consistent with the definition of a quantile: "For a real valued random variable X with distribution function F(x), and any p between 0 and 1, the p-th quantile of X is defined as inf {y | F(y) >= p}" (Maple help pages).
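Applying the help-page definition literally to a fair die makes the issue concrete. A sketch in Python, with exact rational arithmetic to avoid floating-point edge effects (the function name is mine):

```python
from fractions import Fraction

def quantile(outcomes, probs, p):
    """p-quantile as inf { y : F(y) >= p } for a finite discrete distribution."""
    F = Fraction(0)
    for y, pr in sorted(zip(outcomes, probs)):
        F += pr
        if F >= p:          # first outcome whose CDF reaches p
            return y
    return max(outcomes)

die   = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
print(quantile(die, probs, Fraction(1, 2)))  # 3 : F(3) = 1/2 >= 1/2
```

Under this definition the median of a fair die is 3, not 7/2: F(3) = 1/2 already reaches the level 1/2.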

To summarize: Do you feel that those two should be equal? I would say "They should!"

But even Maple is not itself convinced of that: in the example below, the same experiment (rolling a die) is modelled in two different, but a priori equivalent, ways. Nevertheless the medians Maple returns are not the same: one (median = 3) agrees with the quantile definition, the other (7/2) doesn't.

 

restart:
with(Statistics):

# two different ways to model the outcomes of tossing a die
X := RandomVariable(ProbabilityTable([(1/6)$6]));   # _R6
Y := RandomVariable(DiscreteUniform(1, 6));         # _R7

# same distributions
plots:-display(
  plot(CDF(X, k), k=-1..8, color=red, transparency=0.7, thickness=10),
  plot(CDF(Y, k), k=-1..8, color=blue)
);

# thus same quantiles?
Quantile(X, 1/2);   # FAIL ???
Quantile(Y, 1/2);   # 3

# and same medians?
Median(X);   # 3
Median(Y);   # 7/2

 


 

Download Medians_and_Quantiles.mw

 


@Carl Love 

It seems that the DataSummary(X, weights=X) problem didn't exist in Maple 2015.

Moreover, it is unclear how Maple computes weighted statistics.


 

restart

interface(version)

`Standard Worksheet Interface, Maple 2015.2, Mac OS X, December 21 2015 Build ID 1097895`


with(Statistics):

with(RandomTools):

X := [seq(i+.5, i = 11 .. 30)]:

Y := Generate(list(distribution(DiscreteUniform(10, 200)), 20)):

# the problem you put your finger on (accounted weights = X instead of Y) was not present in Maple 2015

DataSummary(X, weights=Vector(Y)), DataSummary(X, weights=Vector(X))

                        (weights = Y)          (weights = X)
    mean              = 21.35698070374574      22.58333333333333
    standarddeviation =  6.1675934284771685     5.700003494791689
    skewness          = -0.09214603163463658   -0.3170191063842826
    kurtosis          =  1.7161383756034603     1.8686485006609248
    minimum           = 11.5                   11.5
    maximum           = 30.5                   30.5
    cumulativeweight  = 2643.0                  420.0

# Which doesn't mean the results were then correct...
# There is an obvious lack of references explaining how Maple computes weighted statistics,
# in particular whether it computes biased or unbiased statistics.
#
# The formulas below come from
# Reference:  https://arxiv.org/pdf/1304.6564.pdf


`Weighted Mean` := add(X *~ Y) / add(Y);

`Weighted Central Moment` := R -> add((X -~ `Weighted Mean`)^~R *~ Y) / add(Y):  # see ref §4

# Examples

`Weighted Variance`  := `Weighted Central Moment`(2);
`Weighted St Dev`    :=  sqrt(%);

`Weighted Skewness`  := `Weighted Central Moment`(3);
`Weighted Kurtosis`  := `Weighted Central Moment`(4);

`Weighted Skewness coefficient` := `Weighted Skewness`  / `Weighted St Dev`^3;
`Weighted Kurtosis coefficient` := `Weighted Kurtosis`  / `Weighted St Dev`^4;
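These weighted-moment formulas are easy to sanity-check by transcribing them into another language. A Python transliteration (the data set and helper names are mine; with equal weights the formulas must reduce to the ordinary population moments, e.g. the classic list below has population standard deviation 2):

```python
def w_mean(X, Y):
    """Weighted mean: sum(x*y) / sum(y)."""
    return sum(x * y for x, y in zip(X, Y)) / sum(Y)

def w_central_moment(X, Y, R):
    """Biased weighted central moment of order R (ref. section 4)."""
    m = w_mean(X, Y)
    return sum((x - m)**R * y for x, y in zip(X, Y)) / sum(Y)

X = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
Y = [1.0] * len(X)                  # equal weights -> ordinary moments

var = w_central_moment(X, Y, 2)
sd  = var ** 0.5
skew_coeff = w_central_moment(X, Y, 3) / sd**3
kurt_coeff = w_central_moment(X, Y, 4) / sd**4
print(w_mean(X, Y), sd)   # 5.0 2.0 for this equal-weight example
```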



 

    Weighted Mean                 =   21.356980703745744
    Weighted Variance             =   35.727559101784294
    Weighted St Dev               =    5.9772534747812305
    Weighted Skewness             =  -20.30465992067974
    Weighted Kurtosis             = 2332.3145591927905
    Weighted Skewness coefficient =   -0.0950803343990422
    Weighted Kurtosis coefficient =    1.827176204228115

# Unbiased weighted moments (ref § 5)

V := R -> add(Y^~R):   # see ref §2 (ii)
 

`Unbiased Weighted Mean` := add(X *~ Y) / V(1);

`Unbiased Weighted Variance` := V(1)^2 / (V(1)^2 - V(2)) * `Weighted Central Moment`(2);
`Unbiased Weighted St Dev`   :=  sqrt(%);
`Unbiased Weighted Skewness` := (
                                  V(1)^3
                                  /
                                  (V(1)^3 - 3*V(1)*V(2) + 2*V(3))
                                  *
                                  `Weighted Central Moment`(3)
                                );
`Unbiased Weighted Kurtosis` := (
                                  V(1)^2 * ( V(1)^4 - 3*V(1)^2*V(2) + 2*V(1)*V(3) + 3*V(2)^2 - 3*V(4) )
                                  /
                                  ( (V(1)^2 - V(2)) * ( V(1)^4 - 6*V(1)^2*V(2) + 8*V(1)*V(3) + 3*V(2)^2 - 6*V(4) ) )
                                  *
                                 `Weighted Central Moment`(4)

                                  -

                                  3*V(1)^2 * ( 2*V(1)^2*V(2) - 2*V(1)*V(3) - 3*V(2)^2 + 3*V(4) )
                                  /
                                  ( (V(1)^2 - V(2)) * ( V(1)^4 - 6*V(1)^2*V(2) + 8*V(1)*V(3) + 3*V(2)^2 - 6*V(4) ) )
                                  *
                                 `Weighted Central Moment`(2)^2
                                );


`Unbiased Weighted Skewness coefficient` := `Unbiased Weighted Skewness`  / `Unbiased Weighted St Dev`^3;
`Unbiased Weighted Kurtosis coefficient` := `Unbiased Weighted Kurtosis`  / `Unbiased Weighted St Dev`^4;
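A quick sanity check of the unbiased variance factor: with equal weights y_i = 1 we get V(1) = n and V(2) = n, so V(1)^2/(V(1)^2 - V(2)) = n^2/(n^2 - n) = n/(n-1), the usual Bessel correction. A Python sketch (names mine, same data as above):

```python
def V(Y, R):
    """Sum of the R-th powers of the weights (ref. section 2 (ii))."""
    return sum(y**R for y in Y)

def unbiased_w_variance(X, Y):
    """Unbiased weighted variance: V1^2/(V1^2 - V2) times the biased
    weighted central moment of order 2 (ref. section 5)."""
    v1, v2 = V(Y, 1), V(Y, 2)
    m  = sum(x * y for x, y in zip(X, Y)) / v1
    m2 = sum((x - m)**2 * y for x, y in zip(X, Y)) / v1
    return v1**2 / (v1**2 - v2) * m2

X = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
Y = [1.0] * len(X)
print(unbiased_w_variance(X, Y))   # 32/7, the usual sample variance of X
```

Note that the result is invariant under a rescaling of the weights, as it should be.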

    Unbiased Weighted Mean                 =   21.356980703745744
    Unbiased Weighted Variance             =   38.03920869899475
    Unbiased Weighted St Dev               =    6.1675934284771685
    Unbiased Weighted Skewness             =  -24.592416137821843
    Unbiased Weighted Kurtosis             = 2445.130743359775
    Unbiased Weighted Skewness coefficient =   -0.10482237320046656
    Unbiased Weighted Kurtosis coefficient =    1.689814911225736

 


 

Download WeightedStatistics.mw



PS: For info, the R package Weighted.Desc.Stat computes the biased estimators, not the unbiased ones (the discrepancy between R and Maple for the biased estimators of the kurtosis remains mysterious).

@omkardpd 

I am glad that my answer was useful to you.

I was not aware of the existence of this "tutorial" portal, but your suggestion is interesting. However, I would first have to verify that the worksheet doesn't contain mistakes or typos, add the appropriate references, and present a model that would highlight the interest of resampling methods...
I'll keep your idea in mind and think about it.

Thank you for your answer

Just a remark: unless I'm mistaken, G cannot be a realizable (proper) transfer function, for the degree in s of its numerator (3) is strictly larger than the degree of its denominator (2).
In such a case the response of the system contains a "pulse of infinite power" (mathematically, a Dirac distribution).

So I do not know how you could realise this improper transfer function G with "real" components.
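To see where the improperness shows up: polynomial long division of a degree-3 numerator by a degree-2 denominator leaves a polynomial part in s, which corresponds to differentiators (Dirac derivative terms) in the impulse response. A sketch in Python with a made-up G, since the actual G is not reproduced here:

```python
def polydiv(num, den):
    """Long division of polynomials given as coefficient lists,
    highest degree first; returns (quotient, remainder)."""
    num = list(num)
    q = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        q.append(c)
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)          # the leading coefficient is now zero
    return q, num

# Hypothetical improper G(s) = (s^3 + 2 s^2 + 3 s + 4) / (s^2 + s + 1)
q, r = polydiv([1, 2, 3, 4], [1, 1, 1])
print(q, r)   # [1.0, 1.0] [1.0, 3.0] : G = (s + 1) + (s + 3)/(s^2 + s + 1)
```

The polynomial part s + 1 is exactly what no network of "real" components can realize; only the strictly proper remainder term is realizable.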

@WebSuccess 

Modify your procedure isInAreaIncludeBorders this way (a simple transposition of what Kitonum wrote):

isInAreaIncludeBorders := proc(u::float, v::float)
  if eval(FisInAreaIncludeBorders, [x=u, y=v]) then return 'true' else return 'false' end if
end proc:

isInAreaIncludeBorders(0.5, 0.5)

 
