Robert Israel

6577 Reputation

21 Badges

18 years, 217 days
University of British Columbia
Associate Professor Emeritus
North York, Ontario, Canada

MaplePrimes Activity


These are replies submitted by Robert Israel

One of the best ways to smoothly connect data points is with a spline curve.  You can get that with Spline in the CurveFitting package.  For example:

> Data := [[0,3],[1,1],[2,4],[3,1],[4,5],[5,9]];
   S := CurveFitting[Spline](Data, t);
   with(plots):
   display([ plot(S, t = 0 .. 5), 
       pointplot(Data)]);

 

Sure, let's try it with the actual values. 

Again, an equilibrium is a stationary solution, i.e. a solution that is constant in time.
There is no such thing as "an equilibrium at some point in time": if it's there, it's always there.  

It's probably better not to think of "running time backwards" in terms of changing signs in the equation: just keep the same equations, but instead of looking at the future (t -> +infinity) look at the past (t -> -infinity). 

By the way, I took another look at your equations.  To find the equilibria, you just make all the dependent variables constant (i.e. evaluate the equations with R(t) = R, W(t) = W, H(t) = H) and solve for R, W and H.  I get

{H = 1000000.308, R = 40656.07694, W = 83013.66824}
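
To illustrate the recipe on a made-up three-variable system (the equations below are placeholders, not your model), the same steps look like this:

> odes := {diff(R(t),t) = R(t)/10 - R(t)*W(t)/1000000,
           diff(W(t),t) = R(t)*W(t)/500000 - W(t)/20,
           diff(H(t),t) = H(t)/5 - H(t)^2/10000000};
  # Freezing the dependent variables turns each derivative into 0,
  # leaving a purely algebraic system
  eqs := eval(odes, {R(t) = R, W(t) = W, H(t) = H});
  solve(eqs, {R, W, H});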

This seems to be another branch cut problem. 

I think it's a bit more convenient to take a = 1. 

> J := eval(Int(1/H, z=0..infinity), a=1);

J := Int(1/((b+3*b*z+3*b*z^2+b*z^3+1)^(1/2)),z = 0 .. infinity)

> W := simplify(value(J));

W := 1/3*1/b^(1/3)*3^(3/4)*EllipticF(2*3^(1/4)*(1+b^(1/3))^(1/2)/(1+b^(1/3)+3^(1/2)),1/4*2^(1/2)*3^(1/2)+1/4*2^(1/2))

 

Now EllipticF(z,k) has a branch point at z=1, corresponding to b = -10 + 6*sqrt(3) = .39230485..., and plotting J and W near there shows that J (the actual integral, as evaluated numerically) appears to be smooth there but W is not.

> plot([W,J], b= 0.35 .. 0.45, colour=[blue,red]);

My guess is that for b < -10 + 6*sqrt(3), the wrong branch of EllipticF is being used.
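
To check which of the two misbehaves, you can compare them numerically on either side of the branch point (a quick sketch using the J and W computed above):

> b0 := evalf(-10 + 6*sqrt(3)):               # the branch point, approx 0.3923
  evalf(eval([J, W], b = b0 - 0.01));         # numerical integral vs. closed form, just below
  evalf(eval([J, W], b = b0 + 0.01));         # and just above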

An equilibrium is just a stationary solution.  Reversing the direction of time does not change this.  What may change is the stability of the equilibrium.  If the equilibrium is a repeller for the forward direction of time, you won't see it in simulations run forwards, because there are no initial conditions (other than the equilibrium point itself) from which the solution approaches the equilibrium.  However, running time backwards will make the equilibrium appear to be an attractor: solutions with final conditions anywhere near the equilibrium approach it as t -> -infinity.
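
In Maple you don't need to alter the equations to look backwards: dsolve/numeric will evaluate the solution at negative t just as readily. A toy example (a single ODE standing in for your system) where x = 0 is a repeller in forward time:

> ode := diff(x(t), t) = x(t)*(1 - x(t)):
  Sol := dsolve({ode, x(0) = 0.5}, numeric):
  Sol(-10);                                   # already very close to the repelling equilibrium x = 0
  plots[odeplot](Sol, [t, x(t)], t = -10 .. 3);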

 

 


Expand the integrand and combine the trig functions, and you get some terms such as cos((a-2)*t)/(4*t^3).  These are basically where the non-analyticity at a=2 comes from.  For convenience, I'll write a = 2 + p.  Then:

> int(cos(p*t)/t^3, t=1..infinity) assuming p > 0;

1/2*cos(p)-1/2*sin(p)*p+1/2*Ci(p)*p^2

> series(%,p=0);

1/2+(-3/4+1/2*gamma+1/2*ln(p))*p^2-1/48*p^4+O(p^6)

Note the term in ln(p).

You could also think about it this way.  Let

F(p) = Int(cos(p*t)/t^3,t = 1 .. infinity)

This converges nicely for real p, and will be a continuous function of p (by the Lebesgue Dominated Convergence Theorem, since cos(p*t) is continuous and uniformly bounded and 1/t^3 is integrable on [1,infinity)).

However, differentiate with respect to p under the integral sign and you get a factor of t coming out.  Differentiate twice and you have

F''(p) = Int(-cos(p*t)/t, t = 1 .. infinity)

which no longer converges when p=0.  So it should not be surprising that F is not twice differentiable at p=0.
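
A quick way to see Maple agree with this, differentiating the inert integral under the integral sign:

> F := Int(cos(p*t)/t^3, t = 1 .. infinity):
  Fpp := diff(F, p, p);        # diff goes under the (inert) integral sign
  value(eval(Fpp, p = 0));     # Int(-1/t, t = 1 .. infinity), which diverges to -infinity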

 

There seems to be no closed form solution.  We've said that several times, sorry if we didn't make it clear enough.


However, if you put in numerical values for the parameters and initial values, you can get numerical solutions.  For example:

> sys := {
  D(y)(t) = m*(a + b*y(t) + c*x(t) + d*x(t)*y(t) + f*y(t)^2),
  D(x)(t) = n*(a + b*x(t) + c*y(t) + d*x(t)*y(t) + f*x(t)^2)};
  params:= {a = 1, b = -2, c = 3, d = -4, f = 5, m = 0.1, n = 0.2}; 
  ics:= {x(0)=1, y(0)=0};
  Sol := dsolve(eval(sys,params) union ics, numeric);

> Sol(1);

[t = 1., x(t) = 4.84578624552315418, y(t) = .464864176325355926]

 

> plots[odeplot](Sol, [t, y(t)/x(t)], t=0 .. 1);


Your eq4 and eq5 both involve du/dt.  In order for them to be consistent, you need an algebraic relation between the dependent variables.  That's what makes it a DAE (differential-algebraic equation) system.  After plugging in the initial conditions, eq4 says u'(0) =  -267.7090873/Pi, while eq5 says  u'(0) = -3.676419607*10^7/(4775.755209+273.6096309*Pi).  Those are not equal: the first is approximately -85.21444909, the second is approximately -6523.881882.  So the error message is quite correct: the initial conditions do not satisfy the algebraic constraints.
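
If you want to see the conflict in your own worksheet, a sketch along these lines (assuming eq4, eq5 are your equations, u is the variable whose derivative they share, and ics is the set of initial conditions you already have) shows it directly:

> # eq4, eq5 and ics are assumed to be the objects already defined in your worksheet
  c4 := solve(eval(eval(eq4, t = 0), ics), D(u)(0)):
  c5 := solve(eval(eval(eq5, t = 0), ics), D(u)(0)):
  evalf([c4, c5]);    # the two competing values of u'(0); a DAE solver needs them to agree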

I'm no great fan of 2D input, but I don't think this is simply a matter of "getting it wrong".  The fact is that if you're using 2D input you can't just use the same keystrokes as with 1D input.  That's not the way it's designed to work.  In particular, "/" takes you into a denominator, and to get out of that you can use e.g. the right arrow key.

Suppose X and Y are Bernoulli random variables, both with mean p.

We have E[XY] = Prob(X=1,Y=1) = 2*p-1 + Prob(X=0,Y=0) >= max(0, 2*p-1),
since Prob(X=1,Y=1) = 1 - Prob(X=0) - Prob(Y=0) + Prob(X=0,Y=0) by inclusion-exclusion.
Thus Cov[X,Y] = E[XY] - p^2 >= max(-p^2, -(1-p)^2), and dividing by Var[X] = Var[Y] = p*(1-p) gives
Corr(X,Y) >= max(p/(p-1), (p-1)/p).

Moreover, this inequality is best possible.
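
For instance, when p <= 1/2 the bound is attained by taking Prob(X=1,Y=1) = 0 (and symmetrically Prob(X=0,Y=0) = 0 when p >= 1/2); a one-line check of the first case:

> Corr := (0 - p^2)/(p*(1 - p)):   # E[XY] = 0 when Prob(X=1,Y=1) = 0
  simplify(Corr - p/(p - 1));      # 0, so the lower bound is attained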
