mmcdara

MaplePrimes Activity


These are replies submitted by mmcdara

@MaPal93 

opt[1] is the minimum value of the objective function J.
Given the definition of J, opt[1] is the sum of squares of the residuals and is exactly what

Statistics:-NonlinearFit(K(Gamma), convert(pts, Matrix), Gamma, output=residualsumofsquares)

would return if it were capable (in Maple 2015) of finding the best fit (which would be opt[2]).
As a rule I prefer Optimization:-NLPSolve over Statistics:-NonlinearFit: the former is more versatile (you can account for ranges, initial points and your own expression of the objective function) and fails less often than the latter.
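
As a purely illustrative sketch (the data X, Y and the model mdl below are hypothetical, not the problem treated in the attached file), this is the kind of direct NLPSolve call I have in mind: build the sum of squares yourself and pass ranges and an initial point.

X := [1., 2., 3., 4.]:  Y := [0.9, 0.55, 0.54, 0.56]:
mdl := a + b/x^c:
SSQ := add((Y[i] - eval(mdl, x = X[i]))^2, i = 1..numelems(X)):
opt := Optimization:-NLPSolve(SSQ, a = 0..2, b = 0..5, c = 0..3, initialpoint = {a = 1, b = 1, c = 1});
# opt[1] is the residual sum of squares, opt[2] the list of fitted parameter values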


You wrote "Can you please help me? (I didn't find something like this)"
I believe the attached file could help you. It contains a procedure named NonLinearLeastSquares which wraps Optimization:-NLPSolve in order to solve nonlinear least squares problems.
Its calling sequence mimics that of Statistics:-NonlinearFit.

restart

# use interface(warnlevel=0) to suppress warnings

NonLinearLeastSquares := proc(
                           Model, Data, Regressor,
                           {
                             output::{name, list(name)}:=leastsquaresfunction,
                             ranges::set:={},
                             hypothesis::name:=NULL
                           }
                         )

  local outputs, asked, K, J, opt, info, f, res:

  outputs := {
               leastsquaresfunction,
               parametervalues,
               parametervector,
               residuals,
               residualmeansquare,
               residualstandarddeviation,
               residualsumofsquares
             }:

  asked := convert(output, set) minus outputs:
  if asked <> {} then
    error cat("allowed options are ", op(outputs), " received ", op(asked))
  end if:

  # model as a function of the regressor, and sum of squared residuals
  K   := unapply(Model, Regressor);
  J   := add((Data[.., 2] -~ K~(Data[.., 1]))^~2):

  if ranges = {} then
    if hypothesis <> NULL then
      opt := Optimization:-NLPSolve(J, assume=hypothesis);
    else
      opt := Optimization:-NLPSolve(J);
    end if:
  else
    opt := Optimization:-NLPSolve(J, op(ranges));
  end if:

  f    := unapply(eval(Model, opt[2]), Regressor):
  res  := Data[.., 2] -~ f~(Data[..,1]):
  info := table(
            [
              leastsquaresfunction      = f(Regressor),
              residuals                 = res,
              residualsumofsquares      = add(res^~2),
              residualstandarddeviation = Statistics:-StandardDeviation(res),
              residualmeansquare        = Statistics:-Mean(res),
              parametervalues           = opt[2],
              parametervector           = < opt[2] >
            ]
          ):

  if numelems(convert(output, list)) = 1 then
    return info[output]
  else
    return [seq(info[out], out in convert(output, list))]
  end if
end proc:
 

pts := [[1.020408163, .9618194785], [1.021860259, .9591836735], [1.047329717, .9183673469], [1.077147904, .8775510204], [1.112217401, .8367346939], [1.153643677, .7959183673], [1.202786007, .7551020408], [1.224489796, .7394037725], [1.266731737, .7142857143], [1.350499968, .6734693878], [1.428571429, .6426647396], [1.463093584, .6326530612], [1.632653061, .5927592700], [1.638566951, .5918367347], [1.836734694, .5653094517], [2.024654644, .5510204082], [2.040816327, .5499100928], [2.244897959, .5415404104], [2.448979592, .5375577894], [2.653061224, .5364038532], [2.857142857, .5371174931], [3.061224490, .5390909888], [3.265306122, .5419306191], [3.469387755, .5453753581], [3.673469388, .5492483720], [3.759428515, .5510204082], [3.877551020, .5532080969], [4.081632653, .5571718264], [4.285714286, .5612391621], [4.489795918, .5653739340], [4.693877551, .5695506292], [4.897959184, .5737510616], [5.102040816, .5779621428], [5.306122449, .5821743709], [5.510204082, .5863807930], [5.714285714, .5905762856], [5.775983717, .5918367347], [5.918367347, .5943431266], [6.122448980, .5978993957], [6.326530612, .6014214453], [6.530612245, .6049104007], [6.734693878, .6083674439], [6.938775510, .6117937612], [7.142857143, .6151905094], [7.346938776, .6185587956], [7.551020408, .6218996660], [7.755102041, .6252141001], [7.959183673, .6285030098], [8.163265306, .6317672398], [8.219364040, .6326530612], [8.367346939, .6345736540], [8.571428571, .6371910423], [8.775510204, .6397830180], [8.979591837, .6423509843], [9.183673469, .6448962158], [9.387755102, .6474198713], [9.591836735, .6499230055], [9.795918367, .6524065794], [10.00000000, .6548714697]]:

model := a[1] + a[2]*((a[3]/Gamma)^a[4] - (a[5]/Gamma)^a[6]);

a[1]+a[2]*((a[3]/Gamma)^a[4]-(a[5]/Gamma)^a[6])

(1)

NonLinearLeastSquares(model, convert(pts, Matrix), Gamma, output=fit)

Error, (in NonLinearLeastSquares) allowed options are parametervaluesparametervectorresidualsleastsquaresfunctionresidualmeansquareresidualsumofsquaresresidualstandarddeviation received fit

 

NonLinearLeastSquares(model, convert(pts, Matrix), Gamma)

Error, (in Optimization:-NLPSolve) complex value encountered

 

NonLinearLeastSquares(model, convert(pts, Matrix), Gamma, hypothesis=nonnegative)

Warning, limiting number of major iterations has been reached

 

HFloat(0.7470488435814877)+HFloat(1.3535649663432212)*(1/Gamma)^HFloat(1.8973232834731362)-HFloat(1.122055326348035)*(1/Gamma)^HFloat(0.9850486348743975)

(2)

NonLinearLeastSquares(model, convert(pts, Matrix), Gamma, hypothesis=nonnegative, output=residualsumofsquares)

Warning, limiting number of major iterations has been reached

 

HFloat(0.001648876586099211)

(3)

NonLinearLeastSquares(model, convert(pts, Matrix), Gamma, hypothesis=nonnegative, output=[leastsquaresfunction, residualsumofsquares, residuals])

Warning, limiting number of major iterations has been reached

 

[.747048885556404+1.35356502427243*(1/GAMMA)^1.89732314927265-1.12205542536442*(1/GAMMA)^.985048517861938, 0.164887657072576e-2, Vector(4, {(1) = ` 1 .. 59 `*Vector[column], (2) = `Data Type: `*anything, (3) = `Storage: `*rectangular, (4) = `Order: `*Fortran_order})]

(4)

NonLinearLeastSquares(model, convert(pts, Matrix), Gamma, ranges={seq(a[i]=1..3, i=1..6)})

HFloat(1.0)+HFloat(9.001471509131882)*(1/Gamma)^HFloat(1.1669942103125366)-HFloat(9.0)*(1/Gamma)^HFloat(1.0)

(5)

 


Download NonLinearLeastSquares.mw

Icing on the cake: NonLinearLeastSquares_and_Kriging.mw completes the previous file by adding a simple implementation of Kriging.
The "approximation function" is an exact interpolator which passes through the points pts.  Its approximation error away from these points is less than 1e-7.
Keep in mind this Kriging part of the file is purely illustrative and serves only to demonstrate the power of the Kriging approach (note that the interpolator depends on a single parameter named psi, whose meaning is a correlation length, that is, how far from a point x[i] the information it contains (y[i]) "propagates" to affect the value y[k] at any other point x[k]).
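
To make the idea concrete, here is a minimal sketch of such an interpolator (this is NOT the implementation of the attached file, just a hypothetical Gaussian-correlation version in which psi plays the role of the correlation length):

KrigingSketch := proc(X::list, Y::list, psi::positive)
  local n, R, w;
  n := numelems(X);
  # correlation between two data points decays with their distance, scaled by psi
  R := Matrix(n, n, (i, j) -> exp(-((X[i] - X[j])/psi)^2));
  # weights chosen so that the interpolator passes exactly through the data
  w := LinearAlgebra:-LinearSolve(R, Vector(Y));
  return unapply(add(w[i]*exp(-((x - X[i])/psi)^2), i = 1..n), x);
end proc:

# usage on a few hypothetical points
# Z := KrigingSketch([1., 2., 3., 4.], [0.96, 0.55, 0.54, 0.56], 0.5):
# Z(2.5);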

 

@MaPal93 


 

  1. If I am not satisfied with the linear fit...
    There exist very versatile interpolation models that could be used instead of more classical nonlinear least squares.
    I am thinking more specifically of kriging models.
    The CurveFitting package has offered such a model since Maple 2019, but having had a look at its implementation, it's not really suited to modeling the output of a computer code.
    Take a look at this approach and let me know if you're interested.
     
  2. I guess that using option grid=[N, N] with N larger than the default value (around 30 if I'm not mistaken) should generate more points on the level curve, but be aware that the computation time probably varies like N^2 (a minimal call is sketched right after this list).
     
  3. The explanation is provided in the last part of file slices_of_three_surfaces_mmcdara.mw
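
Regarding point 2, a purely illustrative call (d2f, Gamma and rho are the names used in the worksheet further down; N = 100 is just an example value, not a recommendation):

# a denser sampling grid gives more points on the level curve, at roughly N^2 evaluations
pc0 := plots:-contourplot('d2f'(Gamma, rho), Gamma = 0..10, rho = -1.0..1.0,
                          contours = [0], grid = [100, 100]);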

@MaPal93 
 

  1. I didn't check the value, but yes, the Lennard-Jones potential has an asymptote.
  2. The green curve is indeed the graph of the potential.
    And yes, its expression is
    eval(K(Gamma), opt[2])
  3. Yes, at least if we consider the domain 0 < Gamma < 10, -1 < rho < 1.

 

"Why don't you embed the L-J snippet in the worksheet and post your comment as an answer I can upvote/give best answer?".
Because I'm fed up with this voting and ranking system.
I'm not participating to gain notoriety but to provide help and share my limited experience. That is why I decided to publish only comments and to content myself with simple positive feedback.


Error, (in CurveFitting:-Spline) data points not in recognizable format
I use Maple 2015. 
Here is the detail of the operands of pc0:


# with Maple 2015 op(pc0) returns something like this:

CURVES(
    [[x[1], y[1]], [x[2], y[2]]],
    [[x[3], y[3]], [x[4], y[4]]],
    ...
    [[x[n], y[n]], [x[n+1], y[n+1]]],
    COLOUR(RGB, .47058824, 0., 0.54901961e-1)
),
AXESLABELS(GAMMA, rho)

# so op(1, pc0) returns 

CURVES(
    [[x[1], y[1]], [x[2], y[2]]],
    [[x[3], y[3]], [x[4], y[4]]],
    ...
    [[x[n], y[n]], [x[n+1], y[n+1]]],
    COLOUR(RGB, .47058824, 0., 0.54901961e-1)
)

# and [op(1..-2, op(1, pc0))] returns 

[
    [[x[1], y[1]], [x[2], y[2]]],
    [[x[3], y[3]], [x[4], y[4]]],
    ...
    [[x[n], y[n]], [x[n+1], y[n+1]]]
]

It is highly likely that the structure of pc0 that your newer version builds is not the same.
Can you execute the command op(pc0) in your worksheet and send it to me?


Here is an alternative to fsolve for the "Surface 1" case only.

Explanation:

  1. plot the level curve  H(Gamma, rho) = 0 where H(Gamma, rho ) = D[2](f)(Gamma, rho),
  2. extract the points (Gamma[i], rho[i]) (remove duplicate abscissas),
  3. build the spline approximation of the curve passing through points  (Gamma[i], rho[i]): this curve is a piecewise representation of the function Z : Gamma --> Z(Gamma) such that H(Gamma, Z(Gamma)) = 0.
     

First surface

restart;

with(plots):

f := proc(Gamma,rho) local i;
   Digits := 2*Digits;
   max(remove(type,simplify(fnormal~([seq(evalf(RootOf(-8*(rho + 1)^4*_Z^4 + 12*(rho + 1)^3*Gamma*(rho - 1)*_Z^3 - 5*(rho + 1)^2*(-4/5 + Gamma^2*rho^2 + 2*(-2/5 - Gamma^2)*rho + Gamma^2)*_Z^2 - 4*(rho + 1)*Gamma*(rho^2 - 1)*_Z + Gamma^2*(rho + 1)*(rho - 1)^2, index=i)),i=1..4)]),'zero'),nonreal));
end proc:

d2f := unapply(D[2](f)(Gamma,rho), (Gamma, rho)):

with(plots):

t0 := time():
pc0 := contourplot('d2f(Gamma,rho)',
       Gamma=0..10, rho=-1.0..1.0, contours=[0]);
time()-t0;

 

131.265

(1)

pts := fnormal~(map2(op, 1, [op(1..-2, op(1, pc0))])):
pts := convert( convert(pts, set), list):
pts := sort(pts, key = (x -> op(1, x)));


use CurveFitting in
  rho__spline := Spline(pts, Gamma, order=1);
  print(
    display(
      plot(pts, style=point, symbol=circle, color=blue),
      plot(rho__spline(Gamma), Gamma=1..10, color=red)
    )
  );
end use:

[[1.020408163, .9618194785], [1.021860259, .9591836735], [1.047329717, .9183673469], [1.077147904, .8775510204], [1.112217401, .8367346939], [1.153643677, .7959183673], [1.202786007, .7551020408], [1.224489796, .7394037725], [1.266731737, .7142857143], [1.350499968, .6734693878], [1.428571429, .6426647396], [1.463093584, .6326530612], [1.632653061, .5927592700], [1.638566951, .5918367347], [1.836734694, .5653094517], [2.024654644, .5510204082], [2.040816327, .5499100928], [2.244897959, .5415404104], [2.448979592, .5375577894], [2.653061224, .5364038532], [2.857142857, .5371174931], [3.061224490, .5390909888], [3.265306122, .5419306191], [3.469387755, .5453753581], [3.673469388, .5492483720], [3.759428515, .5510204082], [3.877551020, .5532080969], [4.081632653, .5571718264], [4.285714286, .5612391621], [4.489795918, .5653739340], [4.693877551, .5695506292], [4.897959184, .5737510616], [5.102040816, .5779621428], [5.306122449, .5821743709], [5.510204082, .5863807930], [5.714285714, .5905762856], [5.775983717, .5918367347], [5.918367347, .5943431266], [6.122448980, .5978993957], [6.326530612, .6014214453], [6.530612245, .6049104007], [6.734693878, .6083674439], [6.938775510, .6117937612], [7.142857143, .6151905094], [7.346938776, .6185587956], [7.551020408, .6218996660], [7.755102041, .6252141001], [7.959183673, .6285030098], [8.163265306, .6317672398], [8.219364040, .6326530612], [8.367346939, .6345736540], [8.571428571, .6371910423], [8.775510204, .6397830180], [8.979591837, .6423509843], [9.183673469, .6448962158], [9.387755102, .6474198713], [9.591836735, .6499230055], [9.795918367, .6524065794], [10.00000000, .6548714697]]

 

 

# Verify

RHO := unapply(rho__spline, Gamma):

k := 5;
true_Gamma := pts[k][1];
true_rho   := RHO(true_Gamma);

# fsolve gets the correct value of rho if the initial point is true_Gamma

estim_rho := fsolve('d2f'(true_Gamma, rho), rho=true_Gamma );

# and still finds it when started a little bit away from true_Gamma

estim_rho := fsolve('d2f'(true_Gamma, rho), rho=true_Gamma + 0.1 );

5

 

1.112217401

 

HFloat(0.8367346939)

 

.8349598268

 

.8349598268

(2)

 


Download Surface1_fsolve_workaround_mmcdara.mw



In case an approximation of the function Z(Gamma) is enough for you, you can observe that the level curve H(Gamma, rho) = 0 looks like a Lennard-Jones potential.
Then you can try to fit it this way:

K := Gamma -> a+b*((c/Gamma)^d-(e/Gamma)^f):
J := add((map2(op, 2, pts) -~ K~(map2(op, 1, pts)))^~2):
opt := evalf[6]( Optimization:-NLPSolve(J, assume=nonnegative) );

display(
  plot(pts, style=point, symbol=circle, color=blue), 
  plot(eval(K(Gamma), opt[2]), Gamma=1..10, color=green, view=0.5..1)
);
opt := [0.164888e-2, [a = .747049, b = .980991, c = 1.18492, d = 1.89732, e = 1.14613, f = .985049]]

Here is the result.

You already wrote quite impressive Maple code (or maybe did you copy it from some source?)

If you indeed wrote the code which implements the N-soft set method, I suppose you know what this method is and how to use it (personally, I don't code a method I don't know or whose results I don't understand).
If you know what the N-soft set method is, you should know what results it provides and how to analyze them, shouldn't you?
 

Maybe you should use a method whose results you know how to analyze.

In any case, your question isn't strictly a Maple question (you've already written (?) the code) but a Statistics question.
 

@MaPal93 

I tried to find something explaining the results you get with ncal3 but I failed.
Probably because I didn't know what I was looking for...?

By the way, your code stops at the "discontinuity" with Maple 2015.

@Andiguys 

You're welcome

@FDS 

M[2..-1] -~ M[1..-2]

But if you prefer a procedure which returns a vector:

tst := proc(M)
   local N, i, m;
   N := numelems(M);       # number of elements of M
   m := Vector(N-1);       # vector of consecutive differences
   for i from 2 to N do
      m[i-1] := M[i] - M[i-1]
   end do;
   return m
end proc:

tst(M)

 

@Andiguys 

As you have a more recent version of Maple than mine, replace 

InsertContent(Worksheet(Group(Input( Z )))):

by

InsertContent(Worksheet( Z )):

@MaPal93 

I'm curious, could you upload this calibration case?
Thanks

Here are a few workarounds for your ncal2 case: Ideas_2__mmcdara_2.mw

@nm 

not ISODINE but ISOCLINE  isocline.mw

In case you read French, this is a historical description of a method: graphical_solution_of_ode (if not, the plots are pretty, in particular the mechanical devices that draw the solutions of odes).

@Andiguys 

I notice you are still unclear about the expression of Q2, which you write in two different ways:

Q2 = -Q1*delta*h+d*h*x
# and
Q2 = max(0, -Q1*delta*h+d*h*x)

So I consider the two cases in the attached file.
Do not be surprised that the value of I1 has no significant influence:

# In
Q1 = 0.2e-1*exp(.4*I1) +  4.000000000*10^7-4.000000000*10^7*exp(.5*I2);
# the influence of I1 is negligible when I1 and I2 are in 0..6

# same thing happens for Q2 and TRC

q2_mmcdara.mw

@MaPal93 

I started from INDEX=0.001 and used a smaller INDEX step: fsolve cannot find any solution for INDEX > 0.31100616.
The ability of fsolve to go beyond this limit seems to depend strongly on the step you use and on the starting point (which "orients" it towards some initial solution).
Indeed, with the same INDEX step of 1e-4 I was able to reach INDEX = 0.8731 while starting from INDEX=0.25.
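
To illustrate what I mean by stepping INDEX and reusing the previous solution as a starting point, here is a hypothetical continuation sketch (eqs, the unknowns x, y and the initial guess are placeholders, not the actual ncal2 system):

sol := {x = 1.0, y = 1.0}:       # solution found at the smallest INDEX (placeholder values)
for idx from 0.001 by 1e-4 to 1.0 do
  cand := [ fsolve(eval(eqs, INDEX = idx), sol) ];   # warm start from the previous solution
  if cand = [] or not type(cand[1], set) then        # fsolve failed: the continuation stops here
    printf("no solution found beyond INDEX = %a\n", idx);
    break;
  end if;
  sol := cand[1];
end do: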

So I believe I'm done now with this thread because I have no more ideas to propose.

@Andiguys 

My mistake, the correct command is 

# First thing: construct TRC correctly

TRC_ok := eval(TRC, {alpha = (u -> 0.02*exp(0.4*u)), beta = (u -> 0.2e-1*(1-exp(.5*u)))})

The syntax eval(TRC, alpha = (u -> 0.02*exp(0.4*u))) means "replace alpha by the function u -> 0.02*exp(0.4*u)" (the name u could be any other name).
I mistakenly wrote beta = (u -> 0.2e-1*(1-exp(.5*u))), but the result is the same because TRC contains only beta(I2).
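
A tiny generic illustration of this substitution mechanism (the expression below is hypothetical, it is not your TRC):

expr := alpha(I1) + beta(I2);
eval(expr, {alpha = (u -> 0.02*exp(0.4*u)), beta = (u -> 0.02*(1 - exp(0.5*u)))});
#          0.02*exp(0.4*I1) + 0.02*(1 - exp(0.5*I2))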

How can we calculate Q1a and Q1b (not to mention that in your previous reply you said Q1a and Q2a...)?

I suggest you write a simple and clear new worksheet with the exact equation to solve, the correct conditions and/or constraints, and the relations between the auxiliary variables (Q1a, Q1b, Q2a?) which do not appear in TRC, and express clearly what you want.

Your points 1 to 5 in your last reply only add more confusion.
So take the time to write your problem correctly and upload the corresponding worksheet.

@MaPal93 

Increasing Digits is very unlikely to help here (IMO).
Dynamic control of the INDEX step is extremely complex to implement: even if you find a good strategy for one calibration case, there is no guarantee it still performs well for another one.
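
To fix ideas, a naive sketch of what such a dynamic control could look like (purely hypothetical, with the same placeholder names eqs and sol as in the continuation sketch above; a real strategy would be far more subtle):

idx := 0.25:  step := 1e-2:
while idx < 1.0 and step > 1e-8 do
  cand := [ fsolve(eval(eqs, INDEX = idx + step), sol) ];
  if cand <> [] and type(cand[1], set) then
    idx := idx + step;  sol := cand[1];   # accept the step
    step := min(2*step, 1e-2);            # cautiously re-enlarge it
  else
    step := step/2;                       # fsolve failed: retry with a smaller step
  end if;
end do: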

Here are two questions you should think about:

  • Why do you believe positive solutions exist for INDEX > 0.8731 ?
  • Do you feel they do or are you sure they really do?

I'm not done looking at this thread; it's just that I'm not sure your ncal2 problem can be solved up to INDEX=1, and spending time trying to solve it may be a waste of time.
