mmcdara

7891 Reputation

22 Badges

9 years, 54 days

MaplePrimes Activity


These are replies submitted by mmcdara

@Carl Love 

What I meant by "it imposes a strict discipline of regular backup" is: before launching an operation that you are not sure will run correctly in a reasonable amount of time, do not forget to back up the worksheet.
We cannot rely upon the Autosave feature.

This regularly happens to me (I am probably a bit absent-minded) and I often blame myself for not having thought of saving my work before. 
 

It is indeed quite painful.
When I'm fed up going back and forth between the two worksheets, I simply kill the attached process, or even all of Maple (in the task manager, the "guilty" mserver process is not released and keeps consuming memory).
This imposes a strict discipline of regular backup.

@schachar 

You realize, I presume, that the plots you provide have nothing to do with the data you provided before?
Just use 

dataplot(X)

to visualize X.
The abscissa is a number from 1 to 781 and the ordinate is X with, possibly, some unknown unit. So something completely different from what you present here.
Could it be that each plot here has abscissa equal to X and ordinate equal to Y?
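A minimal sketch of the difference (the vectors below are hypothetical stand-ins for your data, and I assume X and Y have the same length):

```maple
restart:

# Toy vectors standing in for your data (assumption: equal lengths).
X := Vector([seq(0.02*i, i = 1 .. 100)]):
Y := Vector([seq(sin(0.02*i)^2, i = 1 .. 100)]):

dataplot(X);     # abscissa = the index 1..100, ordinate = the entries of X
dataplot(X, Y);  # abscissa = X, ordinate = Y
```

The two-argument form is what your plots seem to correspond to, which is why I ask.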

You wrote contradictory things:

  • The object is to use Gaussian Smoothing and to ensure the endpoints fit.  
  • Therefore please use the method that will work best

Either I have to use Gaussian smoothing, or you leave me free to choose the method I consider best.
For instance, my initial reply uses Gaussian kernel smoothing, a method that seems good, at least to me. The problem is that, as I never used the term "2D Gaussian smoothing" explicitly, you seem to have swept it aside with a wave of your hand. Am I wrong?
Without any good reason to use Gaussian smoothing, I don't see why I should use it.


I'm sorry to tell you that I have the strong feeling that I'm wasting my time.
I think you should wait for someone else to help you. Good luck.

@minhthien2016 

Things end here then; you haven't been very cooperative (you could have asked your "friend", if he really exists).

Helpers here do not appreciate having to decode LaTeX and translate it into Maple code.
What they appreciate is when the member who asks a question has made a minimum effort to deliver a Maple code. We do not ask for this code to be perfect, but it is really the bare minimum to do.

To illustrate this let me take this simple line of yours

Parameters: m =1; a= 10^3/cm; M = 6.3*10^{17}; D = 10^{-5}cm^2/sec; \Omega = 1.7*10^{23} cm^3/Atoms; \tau = A * D* \Omega * t = A^{'}t = 10^{-14}cm^3/s;

In Maple syntax there are a lot of errors (curly brackets instead of parentheses, missing multiplication symbols, names like \tau or \Omega are not allowed in Maple, triple equality [in bold font above], ...).
I don't understand what A^{'} is meant to be for you. Is A a matrix and A' its transpose?

Whatever, this is what a correct definition of Parameters should look like in Maple:

restart:
local D;  # D is a protected name in Maple
Parameters := {
  m = 1, 
  a = 10^3 * Units:-Unit(cm),
  M = 6.3*10^(17), 
  D = 10^(-5) * Units:-Unit(cm^2/s), 
  Omega = 1.7*10^(23) * Units:-Unit(cm^3), 
  #  tau = A * D * Omega * t = A^{'}  ### incomprehensible
  t = 10^(-14) * Units:-Unit(cm^3/s)
};

Parameters := simplify(Parameters union {eval(tau = A*D*Omega*t, Parameters)});

# To integrate numerically wrt epsilon, A must be given a numeric value.

It already took me some effort to do this, and it will take me, or another member, a larger one to decipher the expression of
W^{+,-}(x, \tau, M) and code it correctly in Maple.

Last but not least, your plot is unreadable and it is completely impossible to understand what you draw.

Could you do some effort and code your relations in syntax error free Maple?
Without this minimum work I doubt you will find help here.

If you want to numerically integrate a function F(x, a, b, c, ...) with respect to x, you must give a, b, c, ... numerical values.
I have only little knowledge of Mathematica, but I guess it is the same there.
So do with Maple what you did with Mathematica.

Illustration

restart:

f := (epsilon, a, x, tau) -> exp(-(epsilon/2/a)^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2

proc (epsilon, a, x, tau) options operator, arrow; exp(-(1/4)*epsilon^2/a^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2 end proc

# No numerical value returned because a, x, tau are names

J := Int(f(epsilon, a, x, tau), epsilon=0..+infinity);
evalf(J);  # no numerical value returned

Int(exp(-(1/4)*epsilon^2/a^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2, epsilon = 0 .. infinity)

Int(exp(-.2500000000*epsilon^2/a^2-1.*tau*epsilon^3)*(cos(epsilon*x)-1.)/epsilon^2, epsilon = 0. .. Float(infinity))

# Once a, x, tau are given numeric values the integral is computed

J := Int(f(epsilon, 1, 1, 1), epsilon=0..+infinity);
evalf(J);

Int(exp(-(1/4)*epsilon^2-epsilon^3)*(cos(epsilon)-1)/epsilon^2, epsilon = 0 .. infinity)

-.3981488764

Download evalf_Int.mw

@acer 

... but I don't think NLPSolve can still handle the problem for large n.
For instance, a single run of NLPSolve takes about 10 seconds for n=50, which means my naive strategy, consisting in starting from 1000 different points to try and catch a possible global maximum, rapidly becomes prohibitive.

I'd never heard of the OP's problem, but I'd already been interested in statistical problems related to packing problems. For example, how to place N points in the unit square so that they are as far apart as possible (similar to placing N disks of identical radius in the unit square).
Those problems are usually addressed by specific algorithms, mostly stochastic (greedy, genetic, moving particles, ...) or Centroidal Voronoi Tessellation (see here pp 648-650).

Just for fun, here is the best solution I got for packing 14 spheres in the unit cube in such a way that the sum of their radii is maximum.
The result is absolutely stunning: all the spheres have the same radius!
 

restart

with(plots):
with(plottools):
with(Statistics):


SPHERES WITHIN THE UNIT CUBE

The packing should be such that the sum of the radii is maximum

epsilon := 1e-8:

n := 14:
J := add(r[i], i=1..n):
P := [seq(<x[i], y[i], z[i]>, i=1..n)]:
cons_1 := seq(seq(add((P[i]-P[j])^~2) >= (r[i]+r[j])^2, j=i+1..n), i=1..n-1):
cons_2 := seq(seq(op([P[i][k]-r[i] >= 0, P[i][k]+r[i] <= 1]), i=1..n), k=1..3):
cons_3 := seq(r[i] >= epsilon, i=1..n):


SoR : Sum of the Radii

Max_trials : maximum number of trials
                     Note: I used a while loop instead of a for loop because NLPSolve sometimes returns an error

Packing := proc(n, Max_trials)
  local SoR, SoR_max, error_found, error_count, rep, StartAt, opt, sor, error_type, Best_packing:

  SoR     := Vector(Max_trials):
  SoR_max := 0:

  error_found := false:
  error_count := 0:

  rep := 0:
  while rep < Max_trials do
    StartAt := {
                 seq(x[i]=rand(0. .. 1.)(), i=1..n),
                 seq(y[i]=rand(0. .. 1.)(), i=1..n),
                 seq(z[i]=rand(0. .. 1.)(), i=1..n),
                 seq(r[i]=rand(0. .. 1e-6)(), i=1..n)
               };
  
    try
      opt := Optimization:-NLPSolve(J, {cons_1, cons_2, cons_3}, maximize, iterationlimit=10^4, initialpoint=StartAt):
    catch:
      error_found := true
    end try:
  
    if `not`(error_found) then
      rep := rep+1:
      sor := evalf[6](eval(add(r[i], i=1..n), opt[2]));
      SoR[rep] := sor:
  
      DocumentTools:-Tabulate(
        [sprintf("%d", rep), sprintf("%1.6f", SoR_max)],
        'alignment'='left',
        'widthmode'='percentage',
        'width'=20
      ):
      
      if sor > SoR_max then
        SoR_max      := sor;
        Best_packing := opt[2];
      end if:
    else
      error_found := false:
      error_count := error_count+1:
      error_type  := lastexception:
    end if:
  end do:

  return  table([
            Errors = `if`(error_count = 0, None, [Type = error_type, Occurrences = error_count]),
            RadSum = SoR,
            Best   = Best_packing
          ])
end proc:

res := Packing(14, 1000):
res[Errors];

SoR := res[RadSum]:
Best_packing := res[Best]:

[Type = (Optimization:-NLPSolve, "no improved point could be found"), Occurrences = 136]

FivePointSummary([SoR]):
Histogram([SoR], minbins=100):

Histogram(
  (evalf[6]@rhs)~(select(has, Best_packing, r))
  , discrete=true
  , size=[1200, 100]
  , color=red
  , thickness=3
  , title="Radii"
);

couleur := () -> ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]):
  
display(
  seq(
    sphere( eval([x[i], y[i], z[i]], Best_packing), eval(r[i], Best_packing), color=couleur(), style=surface)
    , i=1..n
  ),
  cuboid([0,0,0], [1,1,1], color="LightGray", transparency=0.8)
  , title=typeset(Sum(r[i], i=1..n) = evalf[6](eval(add(r[i], i=1..n), Best_packing)))
  , scaling=constrained
)

 

 

# Radii

print~(sort((evalf@rhs)~(select(has, Best_packing, r)))):

HFloat(0.2071067811865475)
HFloat(0.20710678118654763)
HFloat(0.2071067811865472)
HFloat(0.20710678118654754)
HFloat(0.20710678118654754)
HFloat(0.20710678118654752)
HFloat(0.20710678118654757)
HFloat(0.20710678118654768)
HFloat(0.2071067811865474)
HFloat(0.20710678118654757)
HFloat(0.2071067811865474)
HFloat(0.20710678118654763)
HFloat(0.20710678118654763)
HFloat(0.20710678118654752)


From packomania, click on Spheres in a cube at the bottom of the page.

What is the largest radius that 14 identical spheres packed in a unit cube may have?
The value given on that page is 0.207106781186547524400844362105

It is remarkable that two a priori different problems have the same solution.


Download Disks_within_unit_square_3D.mw

Add-on: 3D packings of 9 to 14 spheres: Disks_within_unit_square_3D_9_14.mw

Question: optimal 3D packings of n identical spheres which are also optimal 3D packings of n spheres whose sum of radii is maximum are obtained for n = 1, 2, 3, 4, 8, 14. Which n comes next?

@schachar 

Your file Data_for_2nd_Order_Gassian_Smoothing_6_21_2024.maple can't be downloaded.
Change the extension into mw (not maple); this should, or at least could, work.

Unfortunately the paper is indeed not available for free.
I'm going to do some research on Google Scholar from its title.

See you soon.

PS: I'm sorry if you found me a bit rough, that wasn't my intention.

@spalinowy 

A few questions:

  • Are z[F0](t) and z[R0](t) explicitly known?
    If that is the case you just have to define them as I did in my file. More precisely I wrote
    SYS_example := eval(SYS, RemainingFunc =~ t):

    which is, I agree, quite unclear. 
    RemainingFunc is equal to  { z[F0](t) , z[R0](t) } and  RemainingFunc =~ t means 

    { z[F0](t) = t, z[R0](t) = t } 

    If, for instance, z[F0](t) = cos(t) and  z[R0](t) = exp(-t) then define SYS_example this way

    SYS_example := eval(SYS,  {z[F0](t) = cos(t) , z[R0](t) = exp(-t)}):
    

 

  • If you don't explicitly know these functions, for instance if they are known only at some values of t, you must construct from these points two analytic expressions to represent both of them.
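For that second case, a minimal sketch (the sample values T and Z below are hypothetical; replace them with your own data points):

```maple
restart:

# Hypothetical sampled values of z[F0](t).
T := [0, 0.5, 1.0, 1.5, 2.0]:
Z := [1.0, 0.8, 0.45, 0.2, 0.1]:

# Build an analytic (piecewise cubic) representation of z[F0](t).
zF0 := CurveFitting:-Spline(T, Z, t):

# Do the same for z[R0](t) from its own data, then substitute both:
# SYS_example := eval(SYS, {z[F0](t) = zF0, z[R0](t) = zR0}):
```

A spline is only one possible choice; a least-squares fit would do as well if your data are noisy.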

@schachar 

Where is the complete paper, not just an excerpt which means nothing out of context?
Where is this Workbook?
Can't you explain your problem in simple words instead of referring to a complete paper which can be lengthy to read?

You write "My understanding of 2nd Order Gaussian smoothing is that it is done for both x and y"... so you are not even sure of what you want, are you?

Are you talking about the convolution of X and Y by a 1D Gaussian signal?
Or about the convolution of the field (X, Y) by a 2D Gaussian signal?
Or about something else?
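For reference, this is what I mean by the first option, 1D Gaussian kernel smoothing; a minimal sketch with made-up data (the bandwidth h is an assumption you would tune to your own data):

```maple
restart:

n := 50:
X := Vector(n, i -> 0.1*i):
Y := Vector(n, i -> sin(X[i]) + 0.1*sin(37.0*X[i])):  # noisy toy signal

h := 0.3:  # bandwidth (assumption; tune to your data)

# Each smoothed value is a Gaussian-weighted average of all the Y values.
Smooth := proc(x)
  local w, j;
  w := Vector(n, j -> exp(-((x - X[j])/h)^2/2));
  add(w[j]*Y[j], j = 1 .. n) / add(w[j], j = 1 .. n)
end proc:

Ys := Vector(n, i -> Smooth(X[i])):
dataplot(X, Ys);
```

If this is what you mean, the only real question left is how to handle the endpoints.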

@minhthien2016 

There are probably a lot of reasons why the two results differ.

  • Algorithms are different.
     
  • One result is not "converged", or maybe both.
     
  • One result is right and the other is wrong (maybe both are wrong).
     
  • At the end the better result will be the one with the largest sum of radii, but maybe an even better packing does exist.
     
  • Do not look at the pattern but carefully study the solutions (they are obviously not unique, and different packings can lead to the same sum of radii).
     

Why don't you go here and say "with n = 14, the answer of Maple and your result are different"?

(look to vv's reply)

First point: in the future, remember to upload your worksheet using the green up arrow.

Next: X has length 781 and Y has length 782, so even a simple scatterplot to look at the data is impossible. Fix this and deliver two vectors with the same length.

Last but not least: what do you mean precisely by "2nd Order Gaussian Smoothing"?

Do I understand correctly if I suggest this Gaussian_Kernel_smoothing.mw ?

@acer Thanks for the correction, I had browsed the help page in a hurry.

@C_R 

It was kind of you.

Besides, I've often thought it would be useful to know who voted for you. It would at least reveal whether the OP was really interested in the responses he received... because I have some doubts about that.

Have a good day.

@C_R 

This is indeed an interesting proposal to account for this "Unanswered question issue".

Allow me to comment on your sentence "To respect the personal choice of an experienced user".
The key point is experienced, as you underline it.

So, what is an experienced user on Mapleprimes? Someone with more than 2000 points, more than 20000...?

If you look at the Users item you will see I rank thirteenth with about 6000 points: is this enough to qualify me as experienced?
I'm pretty sure that if you ask @acer or @Carl Love, to cite a few, whether I am experienced or not, they will probably say no if they are honest. Asking the same question of someone who has "only" 30 points could receive the opposite answer. So being judged as experienced is an extremely subjective statement.

There is a French proverb which literally translates as "In the kingdom of the blind, the one-eyed man is king" and, in case you don't have it in English, it means something like "Surrounded by ignorant people, an individual with meagre knowledge passes for a genius."
With all due respect to all Mapleprimes members, being rewarded by a newbie or a low-ranking member (obviously everyone starts with 0 points, and some newbies will become genuine references in the future while others will remain simple users) often makes me feel I am this kind of king. It flatters my ego, gives me points, moves me up the rankings, but so what? Am I more experienced for that?
Surely not: I just did what I knew how to do and passed it on to someone else, but I didn't become more experienced because I probably didn't learn anything new(*).
So, with more than 6000 points I do not consider myself more experienced in Maple than I was several years ago (or just a little bit more, to be honest).

One way of not suggesting to an OP that I'm a king is to only make comments: the OP won't be able to vote for me and increase my experience in the eyes of others. 
If the OP is satisfied with my comment it's enough for him/her to say so.


A perverse effect of Mapleprimes' ranking strategy is that simply asking very general questions (like mine) fictitiously increases experience (without doing anything, I have gained a few points since this thread opened!). I guess the badge system exists to provide a more detailed idea of the real experience of a member, but who really looks at the badges a member has earned, or even understands what these 39 badges really mean?


(*) A suggestion: once a member has reached a certain number of points, he/she should be obliged to fill in a profile defining his/her areas of expertise.
