Carl Love

28025 Reputation · 25 Badges · 12 years, 313 days
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

Regardless of differences between versions, I can make the whole thing run in 12 seconds by a trivial application of evalhf. After your definition of procedure J, add the line

Jhf:= (x,y)-> evalhf(J(x,y)):

Then plot3d(Jhf, ...) instead of plot3d(J, ...).

I thought that plot3d already used evalhf by default, but I guess that I was wrong about that.

One benefit of the OP having posted a screenshot rather than plain text is that it clearly shows that two visually distinct P's (one italic, one upright) appear in the input. My guess is that they are being treated as different variables and that this is the source of the anomaly. Furthermore, I guess that the upright P is actually the Greek capital letter Rho copied from a palette.

Simply include the argument scene= [x(t),y(t)] in your DEplot command and you'll get the corresponding phase-plane plot.
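A minimal sketch of the syntax (the system, ranges, and initial conditions here are placeholders, not the OP's actual ODEs):

```maple
with(DEtools):
# Hypothetical first-order system; replace with your own equations.
sys:= [diff(x(t),t) = y(t), diff(y(t),t) = -x(t)]:
DEplot(
     sys, [x(t), y(t)], t= 0..10, [[x(0)=1, y(0)=0]],
     # scene selects the plot axes: here y(t) versus x(t) rather than versus t
     scene= [x(t), y(t)]
);
```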


restart:

RS:= Student:-Calculus1:-RiemannSum(
    f(x), x= a..b, method= left, output= sum, partition= n
);

(b-a)*(Sum(f(a+i*(b-a)/n), i = 0 .. n-1))/n

Approx:= evalf(eval(RS, [f= sqrt, a= 0, b= 4, n= 100]));

5.291703576

Exact:= int(sqrt(x), x= 0..4);

16/3

evalf(Exact);

5.333333333


Download RiemannSum.mw

While Kitonum's brute-force search does find the solutions to the equation, it doesn't use the algorithm given in the Question, which is much better. I implemented the algorithm in the Question, and then I made a few improvements to it.

My procedure, like the algorithm in the Question, does not list the symmetric and negative solutions. If those are desired, they are trivial to generate by post-processing the output of the procedure.

quadsum:= proc(n::nonnegint)
local
     k:= 0, mylist:= table(),
     x:= isqrt(iquo(n,2)), y:= x, x2:= x^2, y2:= y^2
;
     if 2*x2 <> n then   x:= x+1;  x2:= x2+2*x-1;  y:= x;  y2:= x2  end if;
     while x2 <= n do
          y:= isqrt(n-x2);  y2:= y^2;
          if x2+y2 = n then  k:= k+1;  mylist[k]:= [x,y]  end if;
          x:= x+1;  x2:= x2+2*x-1
     end do;
     convert(mylist, list)
end proc:
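As a sketch of the post-processing mentioned above (the helper `post` is hypothetical, not part of the original algorithm): reflect each solution and attach all sign combinations.

```maple
# Hypothetical helper: expand quadsum's output (which lists only
# solutions with x >= y >= 0) to the full set of integer solutions.
post:= (L::list)-> {seq(seq(seq(
     [s1*p[1], s2*p[2]], s1= [1,-1]), s2= [1,-1]),
     p= map(q-> (q, [q[2],q[1]]), L)
)}:
post(quadsum(25));  # the 12 lattice points with x^2 + y^2 = 25
```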

Applying this procedure to random input on the order of 100 billion (10^11), it returns the solutions nearly instantaneously (156 ms). I let Kitonum's procedure run for over 23 minutes with this input with no results.

n:= eval('rand'(2^17..2^18)()^2 + rand(2^17..2^18)()^2);

st:= time():  quadsum(n);  T:= time()-st;

[[221539, 205782], [236579, 188298], [238766, 185517],
  [257901, 157838], [260634, 153283], [269181, 137722],
  [270774, 134563], [281011, 111618], [285826, 98637],
  [287843, 92586], [297357, 54814], [297549, 53762],
  [297978, 51331], [300477, 33754], [302242, 8691], [302323, 5154]
  ]
                             0.156

The algorithm used by isolve is even faster, by about a factor of two.
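For example (unlike quadsum, isolve returns the symmetric and negative solutions as well):

```maple
# isolve solves the Diophantine equation directly.
isolve(x^2 + y^2 = 25);
# returns all integer pairs, e.g. {x = 3, y = 4}, {x = -5, y = 0}, ...
```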

Sin:= unapply(subs(infinity= n-1, convert(sin(x), FormalPowerSeries)), [x,n]);

plot(Sin(x,20), x= -2*Pi..2*Pi);

Yes, plots:-arrow (not plottools:-arrow) is the answer. Here's a procedure to do what you want. Any optional arguments that you pass to this procedure (such as options controlling the size, shape, and color of the arrows or general plot options such as those controlling the display of axes) will be passed on to plots:-arrow.

A:= [<1,1>, <2,2>, <3,3>]:

HeadToTail:= proc(L::list(Vector(2)))
local L0:= [<0,0>, L[..-2][]];
     plots:-arrow(L0, L -~ L0, args[2..]);
end proc:
     
HeadToTail(A);

I entered your equation into dsolve in Maple 16 and I got a solution.

eq:= (-2*r*beta(t)+J)*diff(beta(t),t$2)+sin(beta(t))*(r*diff(beta(t),t)^2+g)=0:
dsolve(eq);

In this case, it means that you made a syntax error. The y in your difeq should be y(x). That is how you indicate to Maple that y is the dependent variable and x is the independent variable.
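For illustration (a generic first-order ODE, not the OP's actual difeq):

```maple
ode:= diff(y(x), x) = y;     # bare y: treated as an unknown constant, not as y(x)
dsolve(ode, y(x));
ode:= diff(y(x), x) = y(x):  # correct: y(x) marks y as the dependent variable
dsolve(ode, y(x));           # y(x) = _C1*exp(x)
```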

Of course you get errors: You're using Matlab code in Maple. The syntax is completely different. Maple syntax is much more compact and intuitive. Here is how to do it in Maple:

a0:= 80;  a1:= -5;  b1:= -5*(3)^0.5;  w:= Pi/12;  k:= 0.2;
U0:= [65,70,95]:
T:= 0..100:
c0:= u0-a0-(k^2*a1-k*w*b1)/(k^2+w^2);
c1:= (k^2*a1-k*w*b1)/(k^2+w^2);
d1:= (k*w*a1+k^2*b1)/(k^2+w^2);
u:= a0+c0*exp(-k*t)+c1*cos(w*t)+d1*sin(w*t);
plot(
     [seq(u, u0= U0)], t= T, legend= [seq('u'(0)= u0, u0= U0)],
     color= [red, black, green], linestyle= [solid, dash, solid]
);

Using the S and t from your original Question, the idea of Tom Leslie can be implemented like this:

S3:= zip((x,y)-> [x[1],x[2]*y], S, CurveFitting:-ArrayInterpolation(rtable(t), S[..,1]));

By the way, "serie" is not an English word. The word is "series" whether it's singular or plural.

Your understanding of the theory is a bit off. The function has a minimum if the matrix of second derivatives (called the Hessian) is positive definite. This is the same as the eigenvalues of the Hessian being all positive. The determinant of a matrix is the product of its eigenvalues, but it alone will not reveal the signs of the eigenvalues.

The worksheet below answers your problem #1.

 

f:= 4*x^2+y^2+2*z^2+2*a*x*y-4*x*z+2*y*z:
V:= [x,y,z]:

G:= VectorCalculus:-Gradient(f, V):

solve(convert(G,list), V);

[[x = 0, y = 0, z = 0]]


So (0,0,0) is a critical point regardless of a.

H:= VectorCalculus:-Hessian(f, V);

H := Matrix([[8, 2*a, -4], [2*a, 2, 2], [-4, 2, 4]])


The critical point will be a relative minimum if all three eigenvalues of the Hessian are positive.

E:= LinearAlgebra:-Eigenvalues(H):

plot(convert(E,list), a= -10..10, legend= [E1,E2,E3], gridlines= false);

 

There is apparently a small region where all three are positive.

solve(E[2], a);

0, -2


So (0,0,0) is a relative minimum for a in (-2,0). What happens at a = -2 and a = 0 is unclear. For a outside [-2,0], (0,0,0) is a saddle point.

 

Download 2nd_derivative_test.mw

 

@tomleslie A second alternative:

M:= Matrix(5,`+`);

seq(`*`(L4[1..k]), k= 1..nops([L4]));

There is a scaling problem because the values in X are so close together. You can rescale simply by subtracting 616.3 from each element of X:

g:= expand(subs(x= x+616.3, Statistics:-Fit(a+b*x+c*x^2, X -~ 616.3, Y, x)));

Now, you just get a warning message saying that there are zero degrees of freedom, which just means that there are the same number of data points as parameters.

Similar results can be achieved with

CurveFitting:-PolynomialInterpolation(X,Y,x);
