pantole

MaplePrimes Activity


These are answers submitted by pantole

Giving arbitrary (not necessarily realistic) values for a, b, c, d0 (formerly d), j and k, and of course initial values for T(0) and c(0) (provided that it is an IVP), you may do the following [with T(t)=y[1](t) and c(t)=y[2](t)]:

> restart: a:=1: b:=1: c:=1: d0:=-0.75: j:=0.88: k:=15: odeset:={diff(y[1](t),t)-a*diff(y[2](t),t)+b,diff(y[2](t),t)+c^2*d0*exp(j*(1/y[1](t)-1/k))};
> initcond:={y[1](0)=2,y[2](0)=1};
> ans1 := dsolve(odeset union initcond, numeric, method=gear, abserr=1e-6, relerr=1e-6, output=procedurelist):
> n:=1000: aq:=seq(ans1(i),i=1..n): dataset:=[seq([i,rhs(op(2,aq[i]))],i=1..n)]: datatemp:=seq(dataset[i,2],i=1..nops(dataset)):
> for i to n do if(datatemp[i]=max(datatemp)) then maximum:=[dataset[i,1],datatemp[i]]; end if; end do; maximum;
> plot(dataset);
 
The above code numerically solves the system of ODEs with the initial values I gave, at the times t = 1, 2, ..., n. Provided that the maximum of y[1] does not fall between two successive sample times i and i+1, the code gives you the maximum of the dataset (time values, temperature values), which (I think) is the global one. For a smoother temperature profile you can use a larger n or a smaller time step, say ans1(0.1*i), but then you have to be careful with the arguments of dataset; see the sketch below.
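[A minimal sketch of the finer-step variant (same ans1 as above); the step 0.1 and the number of samples are my own choices:]

> dt:=0.1: n:=1000:                                    # sample at t = 0.1, 0.2, ..., 100
> aq:=seq(ans1(dt*i),i=1..n): dataset:=[seq([dt*i,rhs(op(2,aq[i]))],i=1..n)]: datatemp:=seq(dataset[i,2],i=1..n):
> for i to n do if(datatemp[i]=max(datatemp)) then maximum:=[dataset[i,1],datatemp[i]]; end if; end do; maximum;
> plot(dataset);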

 

Very elegant. This is more than I had expected when posting the question about distributions. I didn't know that an analytical solution existed.

[Does the term 'closed-form solution' still hold for an expression that involves an infinite series, i.e. the hypergeometric function?]

Thank you both.

pantole

Reproducing the same example as in the help page of Statistics[LinearFit], we first write the postulated model function:

with(linalg): f:=b[1]+b[2]*p+b[3]*p^2:

where p is the independent variable and b the parameter vector. We then compute the (row) vector of the derivatives of f with respect to the parameters, and its transpose:

gT:=matrix(1,3,[seq(diff(f,b[i]),i=1..3)]): g:=transpose(gT):

Skipping the optimization procedure that would normally be needed to find the parameters, let us assume that the parameter vector is the one appearing in the Maple help page, while r is the vector of the data (the values of the independent variable):

b:=[1.95999999999999996,0.165000000000000063,0.110714285714285710]:     r:=[1, 2, 3, 4, 5, 6]:

We then define the procedure atab, which builds the matrix X (the sum of g.gT evaluated at each data point):

A:=evalm(multiply(g,gT)): m:=nops(r):
atab:=proc() global r, p, A, m; local i, X;
  p:=r[1]; X:=evalm(A);                               # evalm evaluates the entries of A at the current p
  for i from 2 to m do p:=r[i]; X:=evalm(X+A); end do;
  p:='p';                                             # unassign p again
  evalm(X);
end proc:

Finally, the variance-covariance matrix (covmatrix) can be evaluated by:

C:=evalm(atab());

Corig:=evalm(inverse(C));

of:=1.28771428571428581: covmatrix:=evalm(of/(m-nops(b))*Corig);

where of is the residual sum of squares, as it appears in the Maple help page.

I understand that the procedure atab is not the best way in the world to evaluate the matrix, but I am not too ashamed to upload it.
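[For what it is worth, a minimal alternative sketch with the same linalg toolset and the same r, b and of as above; it builds the m x 3 design matrix directly and should reproduce the same covmatrix:]

G:=matrix(m,3,(i,j)->r[i]^(j-1)):       # rows are [1, p, p^2] evaluated at p=r[i]
X:=multiply(transpose(G),G):            # the same matrix that atab accumulates
covmatrix2:=evalm(of/(m-nops(b))*inverse(X));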

You can find theory for the above analysis in Himmelblau (1970), Process Analysis by Statistical Methods, pp. 197-198 (for Nonlinear Models).

pantole 

> f:=s->s^s*(1-s)^(1-s)*sin(Pi*s)/Pi;
> evalf(int(f(s),s=0..1));
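[If I recall correctly, this integral even has a closed form, int(sin(Pi*s)*s^s*(1-s)^(1-s),s=0..1) = Pi*exp(1)/24, so with the 1/Pi factor the evalf above should return exp(1)/24:]

> evalf(exp(1)/24);    # approximately 0.11326

pantole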
First of all, if you use the Maple BVP solver as posted, it will return an error: '…can only solve two-point boundary value problems'. Maybe you have to define the last BC at the point x=4 (somehow), or use a different solver, for which I have no idea. If it is the former, you can write the following (I hope I wrote the second-order BCs correctly):

> restart:
> sys:=diff(y(x),x$3)+1/2*y(x)*diff(y(x),x$2), diff(g(x),x$3)+1/2*g(x)*diff(g(x),x$2);
> bcs:=y(0)=0, g(0)=0, D(y)(0)=D(g)(0), D(D(g))(0)=D(D(y))(0), D(y)(4)=1, D(g)(4)=1;
> dsn:=dsolve({sys,bcs},numeric);
> plots[odeplot](dsn,[x,D(D(g))(x)]);

Of course, this is not your system, so the result as such is wrong, but perhaps it illustrates how you could change one or two of the BCs. A question: is this a sort of Blasius equation (or a pair of them)? pantole
Try different integral representations of BesselK(0,1):

> int(cos(x)/sqrt(x^2+1),x=0..infinity);
                                     BesselK(0, 1)
> int(cos(sinh(x)),x=0..infinity);
                                     BesselK(0, 1)

These are included in Abramowitz and Stegun, "Handbook of Mathematical Functions" (1972), p. 376. Nevertheless, evalf still suits me...
> restart:
> evalf(int(exp(-x)/sqrt(x^2-1),x=1..infinity));
                                     0.4210244382
> evalf(BesselK(0,1));
                                     0.4210244382
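[One more classical representation, BesselK(0,1) = Int(exp(-cosh(t)),t=0..infinity), also from Abramowitz and Stegun; I only checked it numerically:]

> evalf(Int(exp(-cosh(t)),t=0..infinity));    # should again return 0.4210244382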
Since you have BCs in terms of derivatives set equal to a constant, these are of Neumann type. But I still didn't understand the BCs with respect to r. Is du/dr=0 (at r=1)? And which is the second one?
Still, one can discretize the derivatives with finite differences (or, better, orthogonal collocation: a similar problem is solved with this method on p. 593 of Rice & Do, Applied Mathematics and Modeling for Chemical Engineers), ending up with a set of coupled algebraic equations. Of course, the corresponding boundary conditions are needed. Nevertheless, there might be an analytical solution as well, depending on the BCs. Thank you, Allan, for the feedback [I didn't know that]. pantole
Once you enter the boundary conditions, get back to the forum; maybe we can help you. Note that you (may) have periodic conditions, since u(r,0)=u(r,2*Pi), which is a bit more complex (if not scary), but you may well find something relevant in the literature by adding the keywords "periodic boundary conditions". pantole
If I understood correctly, you have a PDE of second order with respect to both r and theta. So you will need 2 boundary conditions for r and 2 for theta. But you have 3 for theta (one of them is not needed) and none for r. Perhaps you have to treat the range [0.5, 1] as the continuous domain of r, and not as part of the BCs. Nevertheless, there is a helpful book for this case: David Betounes, Partial Differential Equations for Computational Science: With Maple & Vector Analysis (CD-ROM included). Try Chapter 12, which deals with the Laplacian and its BCs/ICs in polar coordinates. pantole
In the previous post, the points command should read:

> points:=[seq([2*Pi*(i-1)/m,f[i]],i=1..m+1)];

Sorry. I hope this is the only mistake. pantole
Waiting for more experienced users to share their thoughts, I will try an approximation. The differential equation together with the boundary conditions constitutes a nonlinear boundary value problem (a BVP refers to a situation where the auxiliary conditions are specified at more than one value of the independent variable). I don't know if Maple has a built-in routine to solve boundary value problems (maybe others know better), but one way to solve such a system numerically is to discretize the derivatives using finite difference formulas (this is not the best way: collocation methods are better, but not simple). For example, using m intervals (so that you have m+1 node points), the second derivative becomes (f[i+1]-2*f[i]+f[i-1])/h^2, where h=1/m and f=y. The first condition becomes f[1]-f[m+1] (=0). For the second boundary condition you need first-order finite differences: (-f[m+1]+4*f[m]-3*f[m-1])/(2*h)+(f[3]-4*f[2]-3*f[1])/(2*h) (=0). Eventually, the finite difference scheme might be [with x=(i-1)*h], in Maple notation:

> restart: m:=10: h:=1/m:
> s1:=fsolve({seq((f[i+1]-2*f[i]+f[i-1])/h^2-(1+0.2*cos((i-1)*h)*sinh(f[i])+0.3*cosh(f[i])), i=2..m), f[1]-f[m+1], -f[m+1]+4*f[m]-3*f[m-1]+f[3]-4*f[2]-3*f[1]}, {seq(f[i],i=1..m+1)});
> Solutions:=subs(s1,[seq(f[i],i=1..m+1)]): f:=[seq(op(i,Solutions),i=1..m+1)];

Note that you need to define a different range in order to print a plot:

> points:=[seq([2*Pi*i/(m+1),f[i]],i=1..m+1)]; plot(points);

Provided that the above analysis is flawless, you may notice some things:

1. The differential equation became a system of m+1 algebraic equations. f[1] is y(0) and f[m+1] is y(2*Pi).
2. For values of m larger than 20, Maple cannot return a solution. Then you need to define a range for f[1], that is, f[1]=0..0.02 in the fsolve command (see the sketch below).
3. The plot becomes smoother for larger values of m, but at the expense of computational time.
4. The set of algebraic equations might have more than one solution. For example, for m=4, f[1]=f[5]=4.0059, but also f[1]=f[5]=0.053. I don't know what this means for your problem.
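[A minimal sketch of what I mean in point 2, for, say, m=30; only the fsolve call changes. I have not rechecked whether mixing a range for f[1] with plain names is accepted in every Maple version; if not, give a (generous) range for every f[i].]

> restart: m:=30: h:=1/m:
> s1:=fsolve({seq((f[i+1]-2*f[i]+f[i-1])/h^2-(1+0.2*cos((i-1)*h)*sinh(f[i])+0.3*cosh(f[i])), i=2..m), f[1]-f[m+1], -f[m+1]+4*f[m]-3*f[m-1]+f[3]-4*f[2]-3*f[1]}, {f[1]=0..0.02, seq(f[i],i=2..m+1)});

pantole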
Remember two things.

1st: optimization (maximize/minimize) is a numerical procedure. As far as I know, the Maple Optimization package does not use an analytical method (I don't know if one exists) to find maxima/minima. This means (I guess) that a numerical approximation is returned. For a one-variable function, such as

> f:=t->t^2*(1-t)^2*(1+t^2)*(1+(1-t)^2);

the (local) optima are among the roots of the first derivative, e.g.

> g:=D(f): solve(g(t));

[and the roots are: 0, 1, 1/2, 1/2-1/2*sqrt(-2+I*sqrt(7)), 1/2+1/2*sqrt(-2+I*sqrt(7)), 1/2-1/2*sqrt(-2-I*sqrt(7)), 1/2+1/2*sqrt(-2-I*sqrt(7))] - I would call this an analytical procedure [it does not tell you whether a root is a maximum or a minimum].

2nd: the Maple Optimization package performs LOCAL search, that is, it finds only local optima. It can use a branch-and-bound method (via the corresponding option, method=branchandbound) to find global optima [with far more accurate results for f(t)], but Maple rather advertises the Global Optimization Toolbox for global search. A plot of the function over the required range would show the global optima in the first place; see the sketch below.
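[For illustration, a minimal sketch of the comparison I have in mind, over the range 0..1 (my own choice); the accuracy reported may differ between versions:]

> with(Optimization):
> f:=t->t^2*(1-t)^2*(1+t^2)*(1+(1-t)^2):
> Maximize(f(t), t=0..1);                          # local search
> Maximize(f(t), t=0..1, method=branchandbound);   # branch-and-bound on the interval
> plot(f(t), t=0..1);                              # the global maximum is at t=1/2, where f(1/2)=25/256

pantole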