
## Disk packing within the unit square...

Check this

 > restart

DISKS WITHIN THE UNIT SQUARE

The packing should be such that the sum of the radii is maximal.

 > epsilon := 1e-8:
   n := 23:
   J := add(r[i], i=1..n):
   P := [seq([x[i], y[i]], i=1..n)]:
   cons_1 := seq(seq(add((P[i]-P[j])^~2) >= (r[i]+r[j])^2, j=i+1..n), i=1..n-1):
   cons_2 := seq(op([P[i][1]-r[i] >= 0, P[i][1]+r[i] <= 1, P[i][2]-r[i] >= 0, P[i][2]+r[i] <= 1]), i=1..n):
   cons_3 := seq(r[i]+epsilon >= 0, i=1..n):
 > opt := Optimization:-NLPSolve(J, {cons_1, cons_2, cons_3}, maximize, iterationlimit=1000):
   Sum(r[i], i=1..n) = evalf[6](eval(add(r[i], i=1..n), opt[2]))
 (1)
 > use plottools in
     plots:-display(
       seq(
         disk(eval([x[i], y[i]], opt[2]), eval(r[i], opt[2]), color=blue)
         , i=1..n
       ),
       rectangle([0, 0], [1, 1], color="LightGray")
     )
   end use;
 > Packing := proc(n)
     local J      := add(r[i], i=1..n);
     local P      := [seq([x[i], y[i]], i=1..n)];
     local cons_1 := seq(seq(add((P[i]-P[j])^~2) >= (r[i]+r[j])^2, j=i+1..n), i=1..n-1);
     local cons_2 := seq(op([P[i][1]-r[i] >= 0, P[i][1]+r[i] <= 1, P[i][2]-r[i] >= 0, P[i][2]+r[i] <= 1]), i=1..n);
     local cons_3 := seq(r[i]+epsilon >= 0, i=1..n);
     local ini, opt, sol;
     try
       opt := Optimization:-NLPSolve(J, {cons_1, cons_2, cons_3}, maximize, iterationlimit=1000);
     catch:
       try
         ini := initialpoint = {seq(r[i]=1/n, i=1..n)};
         opt := Optimization:-NLPSolve(J, {cons_1, cons_2, cons_3}, maximize, ini, iterationlimit=1000);
       end try;
     end try;
     sol := opt[2];
     plots:-display(
       seq(
         plottools:-disk(eval([x[i], y[i]], sol), eval(r[i], sol), color=blue)
         , i=1..n
       )
       , plottools:-rectangle([0, 0], [1, 1], color="LightGray")
       , title=typeset(Sum(r[i], i=1..n) = evalf[6](eval(add(r[i], i=1..n), sol)))
       , scaling=constrained
     )
   end proc:
 > M := Matrix(4$2, (i, j) -> Packing(4*(i-1)+j)):
   plots:-display(M):

## How to get the numeric solution...

If you want to get a NUMERIC solution, please look at this file.

There are several points that are left for you to fix:

1. There are two functions
` {z[F0](t), z[R0](t)}`

that are not defined.
If you want to solve the system numerically, you must give them an analytic expression.
I made an arbitrary choice to go further.

2. You missed 4 initial conditions.
I made an arbitrary choice to go further.

3. Either you give the parameters a numeric value OR you use (this is what I did) the 'parameters' option.
I made an arbitrary choice for their values to get a fully instantiated solution.

 > restart

 > ODES := EQ1, EQ2, EQ3, EQ4, EQ5, EQ6, EQ7, EQ8: SYS  := {ODES, inc}:
 > # A few verifications
   Unknowns := map(u -> op([1, 1], u), lhs~([ODES]));
   AllFunc  := indets(SYS, function);
   RemainingFunc := remove(has, AllFunc, map2(op, 0, Unknowns))
 (1)
 > # Before using dsolve you must define what these two remaining functions are.
   #
   # Here is an example
   SYS_example := eval(SYS, RemainingFunc =~ t):
 > # Then find all parameters of SYS_example:
   param := convert(indets(SYS_example, name) minus {t}, list)
 (2)
 > # Now try to solve numerically
   numsol := dsolve(SYS_example, numeric, parameters=param)
 > # The error is clear.
   IC_required := 2*numelems(Unknowns);
   IC_given    := numelems({inc});
   # 4 IC are missing
   for u in Unknowns do
     printf("%-10a : %d IC given\n", u, numelems(select(has, {inc}, op(0, u))))
   end do:
 z[F1](t)   : 2 IC given
 z[R1](t)   : 2 IC given
 z[F2](t)   : 2 IC given
 z[R2](t)   : 2 IC given
 z[B3](t)   : 2 IC given
 phi[F2](t) : 0 IC given
 phi[R2](t) : 0 IC given
 phi[B3](t) : 2 IC given
 > # So you missed IC for phi[F2] and phi[R2].
   #
   # Let me set them arbitrarily:
   MyIC := phi[F2](0) = 0, (D(phi[F2]))(0) = 0, phi[R2](0) = 0, (D(phi[R2]))(0) = 0
 (3)
 > # Update SYS_example:
   SYS_example := SYS_example union {MyIC}:
   print~(%):
 (4)
 > # Now try to solve again
   numsol := dsolve(SYS_example, numeric, parameters=param)
 (5)
 > # How to use this solution?
   #
   # Let us define some specific set of parameters, for instance:
   MyData := [seq(p = rand(0. .. 1.)(), p in param)];
 (6)
 > # Instantiate numsol with these data:
   numsol(parameters=MyData);
 (7)
 > # Then you have access to the solution for this specific set of data:
   numsol(10);
   # or:
   plots:-odeplot(numsol, [t, phi[B3](t)], t=0..10)

## @Aung Thanks for your feedback.Feel...

Feel free to ask for more help on this problem.

By the way: the Statistics:-NonlinearFit solver is built on Optimization:-NLPSolve, and the latter supports constraints (for instance p3 > p2, p1 > 0, ...), which makes it, in my opinion, more versatile and powerful than the former for solving nonlinear least-squares problems.
The main differences between Statistics:-NonlinearFit and Optimization:-NLPSolve are:

• the former builds the function to be minimized (the sum of squared residuals) while you must define it explicitly with the latter;

• the former provides more outputs than the latter... but it is very easy to reconstruct them as soon as Optimization:-NLPSolve has provided the location of the minimizer.
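To make the point concrete, here is a minimal sketch (the model p1*exp(-p2*x) and the data are invented for illustration) of the same fit done both ways:

```
restart:
X := [seq(0.1*i, i=1..20)]:
Y := [seq(2.0*exp(-0.5*x) + 0.01*sin(37*x), x in X)]:  # pseudo-noisy data

# NonlinearFit builds the sum of squared residuals itself:
Statistics:-NonlinearFit(p1*exp(-p2*x), X, Y, x);

# With NLPSolve you must write the residual sum explicitly,
# but you may add constraints such as p1 >= 0, p2 >= 0:
J := add((Y[i] - p1*exp(-p2*X[i]))^2, i=1..numelems(X)):
Optimization:-NLPSolve(J, {p1 >= 0, p2 >= 0});
```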

## What is sigma_t[j]?As you wrote it ...

THANK YOU

What is sigma_t[j]?
As you wrote it sigma_t[j] is a name which doesn't depend on any quantity, as you can verify doing this

```indets(sigma_t[1], name)
{}
```

So:

```diff(sigma_t[1], p[1]);
diff(sigma_t[1], E_0[90]);
diff(sigma_t[1], epsilon_dot);
0
0
0
```

Defining sigma_t does not imply defining sigma_t[j].

Finally:

1. Define sigma_t[j] correctly (or make sigma_t depend on some index j).

2. Once that is done, g should be defined this way

```g:= add(( sigma[j]-sigma_t[j])^2,j=1..10):
```
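A minimal sketch of point 1 (the model body below is invented; the only point is that sigma_t[j] then really contains the parameters):

```
# Invented expression for the predicted stresses, to illustrate only:
for j from 1 to 10 do
  sigma_t[j] := p[1]*true_strain[j]^p[2] + p[3]*epsilon_dot
end do:

indets(sigma_t[1], name);   # no longer the empty set
diff(sigma_t[1], p[1]);     # no longer 0
```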

Infering what you want to achieve...

(in relation to your previous question), it seems to me that your problem could be stated this way:

•  sigma[j] is the outcome of an experiment done on some (viscoelastic?) material submitted to a loading epsilon.

• You have a physical model sigma_t : true_strain ---> sigma, parameterized by a vector p = (p[1], p[2], p[3]), which is aimed at explaining the results observed during 10 experiments whose input-output couples are ( true_strain[j], sigma[j] ), j=1..10.

• Then g appears as the residual sum of squares between the observed outcomes sigma[j] and their predicted values sigma_t( true_strain[j] ).

• Would your goal be to find the vector p which minimizes g?
If so, the attached file will show you how to set your problem up correctly.

 > restart:
 (1)
 > True_strain := Vector(10, symbol=epsilon): Sigma := Vector(10, symbol=sigma):
 > g := add( (Sigma - sigma_t~(True_strain))^~2 ):
 > data   := indets(Sigma) union indets(True_strain); params := indets(g, name) minus data minus {seq(p[i], i=1..3)}
 (2)
 > # Once data are known numerically, you must give "params" numeric values
   #
   # Then use something like Optimization:-NLPSolve(g) to find p


Here is a notional example

 > num_params := params =~ 1
 (3)
 > Observed_strain := Vector(10, i -> i);
   # Hypothesis:
   True_p := {seq(p[i]=i, i=1..3)}:
   # Then noisy values of observed sigma could be
   Observed_Sigma := Vector(10, i -> eval( sigma_t(Observed_strain[i]), num_params union True_p ) + rand(-1. .. 1.)());
 (4)
 > g := add( (Observed_Sigma - eval(sigma_t~(Observed_strain), num_params))^~2 );
 (5)
 > Optimization:-NLPSolve(g, initialpoint=True_p, assume=nonnegative)
 (6)

## a little remark:...

Before the 2nd Tabulate you should change opts:

```opts := fillcolor = (proc (T, i, jj) options operator, arrow; `if`(jj = 1, "LightBlue", "PeachPuff") end proc), widthmode = pixels, exterior = none:
```

By the way, the OP loads Student:-Statistics and Statistics: can this generate conflicts?
For instance, if one calls Mean, which one is used?
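For the record, a quick way to check which binding wins (a sketch; the rule is that the package loaded last rebinds the common exports, and Maple prints a warning listing them):

```
restart:
with(Student:-Statistics):
with(Statistics):   # the warning lists the rebound names, Mean among them

# Mean now refers to Statistics:-Mean; to leave no ambiguity,
# use the long forms:
Statistics:-Mean([1, 2, 3]);             # 2
Student:-Statistics:-Mean([1, 2, 3]);    # 2
```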

## Forgetful or rude?...

Whatever the case, I see you understood how to build an unweighted sample from a weighted one, so at least you read my reply...

By the way (1), the formula you use to compute EcartType is incorrect: the denominator should be add(Weights)-1.

By the way (2), still another Maple Bug:
StandardDeviation(data, 'weights'=Weights) and Variance(data, 'weights'=Weights) return wrong results.
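For integer (frequency) weights, here is a minimal sketch of the unbiased formula stated above (the name WStdDev and the procedure are mine):

```
# Unbiased weighted standard deviation: add(Weights)-1 in the
# denominator, assuming integer frequency weights.
WStdDev := proc(data::list, Weights::list)
  local W, m, n;
  n := numelems(data);
  W := add(Weights);
  m := add(Weights[i]*data[i], i=1..n)/W;
  sqrt(add(Weights[i]*(data[i]-m)^2, i=1..n)/(W-1))
end proc:

# Check against the expanded, unweighted sample:
WStdDev([1, 2, 3], [1, 1, 2]);
Statistics:-StandardDeviation([1., 2., 3., 3.]);  # should agree
```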

## @MaPal93 Then what do you mean by &...

@MaPal93

Then what do you mean by "the interpolator depends on a single parameter named psi"?
The command to construct the kriging emulator is

```krig := KG_Matern52(data[[seq(1..59, 1)], ..], 5):
```

where 5 is THE SINGLE parameter psi the solution depends on (compare to the 6 parameters of the Lennard-Jones model).

Feel free to change this value, for instance use 20 instead of 5: you will get a curve which still passes through the points (it is an interpolating curve) but which strongly oscillates between these points.
The value of psi can:

• either be set to a convenient value from a visual observation of the resulting interpolation curve (which is what I did),
• or searched for by solving a specific minimization problem (which I didn't implement here).

The expression is indeed complex and is never displayed. A kriging emulator is always used as a black-box function: you give it the value of the input(s) and it returns the corresponding value of the output (and possibly some other useful information).
To draw a parallel, its usage is exactly the same as sol in

`sol := dsolve(ode_system, numeric)`

Kriging is an extremely powerful and versatile method which is widely used in M&S (Modelling and Simulation) to build analytic models (aka metamodels) of outputs of a costly computer code.

For instance your Maple code can be seen as a complex function F : Gamma_1 --> Gamma_2 = F(Gamma_1) for which no closed form is known.
Observing a collection C = { (Gamma_1[n], F(Gamma_1[n])),  n=1..N } of points is sometimes enough to infer a possible expression for F.
But not always; for instance I'm not sure that a lot of people would have said here "A correct metamodel is a Lennard-Jones type model".

The main problem with classical metamodels (a linear one, for instance) is that there is almost no chance that one passes through all the points C contains.
How can you justify this when F is a deterministic function and when there is no observation noise on the outputs Gamma_2?
(The situation is completely different when points come from empirical (aka experimental) observations.)

So you have to build an interpolator and not an estimator, which means that if M is the metamodel, you want M(Gamma_1[n]) = F(Gamma_1[n]) for all n=1..N.

There exist an infinity of such interpolators (probably the best known are spline interpolators of any odd order); Kriging (aka Gaussian Emulator, GE in the sequel) is another type.
The simplest GE depends on a single parameter (which I denoted psi) for scalar inputs. For vector inputs psi is then a vector of the same size. Beyond this simple GE there exist more complex variants depending on parameters whose natures differ from psi's.
To put it simply, psi characterizes the smoothness of the interpolator. If you believe the interpolator must be very regular, or smooth, you must use a "small" value for psi, and a "large" one if you believe the interpolator can have large deviations away from the points in C.
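To fix ideas, here is a minimal 1D sketch of such an interpolator with a Matern-5/2 correlation (a hypothetical analogue of the KG_Matern52 procedure used above; the name KrigSketch and the whole body are mine):

```
# Build a 1D kriging interpolator from points (X[i], Y[i]);
# psi is the correlation length, i.e. the "smoothness" parameter.
KrigSketch := proc(X::Vector, Y::Vector, psi)
  local n, k, K, w;
  n := LinearAlgebra:-Dimension(X);
  # Matern-5/2 kernel:
  k := (u, v) -> (1 + sqrt(5)*abs(u-v)/psi + 5*(u-v)^2/(3*psi^2))
                 * exp(-sqrt(5)*abs(u-v)/psi);
  K := Matrix(n, n, (i, j) -> k(X[i], X[j]));
  w := LinearAlgebra:-LinearSolve(evalf(K), Y);
  # The emulator: a black-box function of the input x.
  x -> add(w[i]*k(x, X[i]), i=1..n)
end proc:

M := KrigSketch(Vector([0., 1., 2., 3.]), Vector([0., 1., 0., 1.]), 0.5):
M(1.0);   # interpolation: returns (numerically) 1.0
```

Playing with psi in this sketch reproduces the behaviour described above: small psi gives a smooth curve, large psi an oscillating one, but both pass through the data points.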

The fact that the smoothness of the GE is almost the only parameter to fit makes Kriging one of the most powerful tools to build high-dimensional metamodels (in dimension d, psi contains d parameters: compare this number to fitting a polynomial of total degree q in dimension d, which depends on (q+d)!/q!/d! parameters).

In the end the GE is always used as a black-box function: you give it the values of the inputs (Gamma_1 in your case) and it returns the output (here Gamma_2).
Despite the fact that it has a closed-form expression, this latter can be so complex that it is never displayed (it is exactly the same thing for a cubic spline interpolator, for instance).

Unfortunately there is no easy-to-read synthetic reference about Gaussian emulators.
In case you go further I can browse the web to try and find a few.

## Not a simplification but an approximatio...

An approximation is not a simplification.
If you think the opposite, there are two even simpler expressions:

```
taylor(expr1, x=0, 5);
        2*x - 31/3*x^3 + O(x^5)     (basically what S22 gives)
taylor(expr1, x=0, 1);
        O(x)   (4 characters alone)
mtaylor(expr1, x=0, 1);
        0   (only one single character: you can't do simpler)
```

## The BoxPlot command accepts SEVERAL list...

You are wrong writing "The BoxPlot command only accepts one list".
See the corresponding help page.

QBoxPlot_mmcdara.mw

## ...

I won't promote any commercial product: you can easily find some yourself.
Here is an example (likely among many) of a free one: FreeFEM.

## My own opinion...

Don't you think it would be much simpler to use a product, commercial or otherwise, capable of solving elliptic equations numerically, rather than thinking about writing FD or FE code in Maple?
Personally, if I wanted to be efficient, I'd choose the first option.

Writing a FD code to solve a Laplace/Poisson equation over a rectangle with Dirichlet bcs on a regular mesh is quite simple.
If the domain is not rectangular it's better to use FE, maybe on an unstructured mesh... and this becomes quite complex to code even if the methods are well known.
If your PDE contains nonlinearities or discontinuous parameters, special care must be taken in approximating the numerical fluxes. Here again well-known methods exist, but coding them adds another level of complexity.
I advise you to think twice about the complexity of what you ask.
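To illustrate the "simple" end of the scale, here is a minimal Jacobi sketch for the Laplace equation on the unit square (grid size, boundary values and iteration count are arbitrary choices of mine):

```
# 5-point Jacobi iteration for u_xx + u_yy = 0 on a regular grid;
# Dirichlet bc: u = 1 on the top edge, u = 0 elsewhere.
N := 20:
U := Matrix(N+1, N+1, 0.):
for j from 1 to N+1 do U[1, j] := 1. end do:   # top edge

for it from 1 to 500 do
  V := copy(U):
  for i from 2 to N do
    for j from 2 to N do
      U[i, j] := (V[i-1, j] + V[i+1, j] + V[i, j-1] + V[i, j+1])/4
    end do
  end do
end do:

plots:-matrixplot(U);
```

Everything beyond this (general domains, unstructured meshes, flux limiters) is where the real coding effort starts.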

Last point: the problem you refer to in your attached file is not well posed; you can't write simultaneously limit(u[2](x, y), x = infinity) = 0 and u[1](0, y) = u[2](0, y): is your domain bounded or not, and is u[1](0, y) = u[2](0, y) a (weak) bc or an internal condition?
If the domain is bounded, what is it?

## Try this...

If you want to know which simplification method is responsible for the result you get, do this

 > restart;
 > # version 1.0
   # increment version number each time when making changes.
   full_simplify := proc(e::anything)
      local result::list;
      local f := proc(a, b)
        RETURN(MmaTranslator:-Mma:-LeafCount(a) < MmaTranslator:-Mma:-LeafCount(b))
      end proc:
 > # test cases
   T := [
     (-192*cos(t)^6 + 288*cos(t)^4 - 912*cos(t)^3 - 108*cos(t)^2 + 684*cos(t) - 54)
       / (4608*cos(t)^9 - 10368*cos(t)^7 + 6208*cos(t)^6 + 7776*cos(t)^5 - 9312*cos(t)^4
          - 2440*cos(t)^3 + 3492*cos(t)^2 + 372*cos(t) - 1169),
     (10*(5+sqrt(41)))/(sqrt(70+10*sqrt(41))*sqrt(130+10*sqrt(41))),
     ((6-4*sqrt(2))*ln(3-2*sqrt(2))+(3-2*sqrt(2))*ln(17-12*sqrt(2))+32-24*sqrt(2))
       / (48*sqrt(2)-72)*(ln(sqrt(2)+1)+sqrt(2))/3,
     (1/2)*exp((1/2)*x)*(cosh((1/2)*x)-cosh((3/2)*x)+sinh((1/2)*x)+sinh((3/2)*x))
   ]:
 > print~(full_simplify~(T)):
 (1)

EDITED:
As several simplification methods lead to the same result, you could be interested in knowing which ones give the same one. Here is a way

```
a := full_simplify(T[2]):
b := convert(rhs~(a), set):
t := map(u -> lhs~(a[[ListTools:-SearchAll(u, rhs~(a))]]) = u, b):
print~(t):

[s17] = 1/10*(50 + 10*41^(1/2))*(1/((7 + 41^(1/2))*(13 + 41^(1/2))))^(1/2)

[s3] = (5 + 41^(1/2))/((7 + 41^(1/2))^(1/2)*(13 + 41^(1/2))^(1/2))

[s4] = (5 + 41^(1/2))*(1/((7 + 41^(1/2))*(13 + 41^(1/2))))^(1/2)

[s9, s8, s7, s6, s5] = 1/2*2^(1/2)

[s16, s15, s14, s13, s12, s11, s10, s2, s1]
    = 10*(5 + 41^(1/2))/((70 + 10*41^(1/2))^(1/2)*(130 + 10*41^(1/2))^(1/2))
```