Carl Love

28025 Reputation

25 Badges

12 years, 313 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

Maple does not understand your use of square brackets; however, your usage is close enough to being valid that Maple won't reject it outright.

Do this:

j:= (n,z)-> sqrt(2)*sqrt(Pi/z)*BesselJ(n + 1/2, z)/2;
plot(j(1,z), z= 0..25);
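These j(n, z) are the spherical Bessel functions j_n(z) = sqrt(Pi/(2*z))*BesselJ(n + 1/2, z). For n = 1 there's an elementary closed form, which makes a quick cross-check possible outside Maple; a minimal Python sketch (the function name is mine):

```python
import math

def spherical_j1(z):
    """j_1(z) = sqrt(Pi/(2*z)) * BesselJ(3/2, z), which reduces to the
    elementary form sin(z)/z^2 - cos(z)/z."""
    return math.sin(z)/z**2 - math.cos(z)/z

# Small-argument behavior: j_1(z) ~ z/3 as z -> 0
print(spherical_j1(1e-3))   # ~ 3.333e-4
print(spherical_j1(2.0))    # ~ 0.43540
```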

Assuming that you're only interested in the computed results rather than a symbolic representation of the spline function, the following is more computationally efficient and shorter codewise:

x1:= Vector(
   [0.8e-1, .28, .48, .68, .88, 1, 1.2, 1.4, 1.6, 1.8, 2, 
    2.2, 2.4, 2.6, 2.8, 3, 3.2, 3.4, 3.6, 3.8, 4, 4.2
   ], 
   datatype= hfloat
):
y1:= Vector(
   [-10.081, -10.054, -10.018, -9.982, -9.939, -9.911, -9.861, -9.8, -9.734, -9.659, -9.601, 
    -9.509, -9.4, -9.293, -9.183, -9.057, -8.931, -8.806, -8.676, -8.542, -8.405, -8.265
   ],
   datatype= hfloat
):
x2:= log10~(2*10^~x1):
y2:= CurveFitting:-ArrayInterpolation(x1, y1, x2, method= spline):
plot([<x1|y1>, <x2|y2>], style= [point,line], legend= [original, interpolated]);

As is often possible in Maple, the above code treats all vectors as fundamental units of data, without any need for indexing or paying any attention to the number of elements in the vectors.

To understand better what's going on here, it helps to look at the exact values. I parameterized your sum by replacing (-1)^n by a^n and summed it symbolically for those a for which it converges (which, of course, doesn't include a = -1):

sum(a^n*n^2/(n+1), n= 1..infinity) assuming abs(a) < 1:
simplify(%);
                (-ln(-a + 1)*(a - 1)^2 + 2*a^2 - a)/(a*(a - 1)^2)

Then "analytically extend" that solution by using a outside the original restrictions:

eval(%, a= -1);
                          ln(2) - 3/4
evalf(%);
                         -0.0568528194

That decimal value is the same as you got from your evalf(Sum(...)), and the corresponding exact value is ln(2) - 3/4.
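This "analytic extension" is essentially Abel summation, and it can be checked numerically outside Maple; a Python sketch (function names are mine):

```python
import math

def abel_partial(a, N=10000):
    """Partial sum of Sum(a^n * n^2/(n+1), n = 1..infinity),
    convergent for |a| < 1."""
    return sum(a**n * n*n/(n + 1) for n in range(1, N + 1))

def closed(a):
    """The symbolic sum found above, valid for |a| < 1 and, by analytic
    extension, at a = -1."""
    return (-math.log(1 - a)*(a - 1)**2 + 2*a**2 - a)/(a*(a - 1)**2)

# Inside the disk of convergence the closed form matches the series:
print(abel_partial(0.5), closed(0.5))          # both ~ 1.3862944 (= 2 ln 2)
# Approaching a = -1, it approaches ln(2) - 3/4:
print(closed(-0.999999), math.log(2) - 0.75)   # both ~ -0.0568528
```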

Using Alt - and Alt + allows for the zoom factor to be any multiple of 25% between 50% and 400%. That's 15 levels, which is more than are listed on the menu (7 levels). But it would be nice to have finer gradations for the values close to 100%, such as 90% and 110%.

The following treatment gives more flexibility using code of about the same complexity:

  • The degree of the curves is determined by the number of control points; it need not be cubic;
  • The dimension is determined by the number of elements of the control points; it can be any positive integer;
  • Plot options can be passed in.
restart;
Bezier:= proc(P::seq(list(realcons)))
local 
   i, j, t, n:= nops([P]), d:= max(3, nops~([P])[]), #defaults to 3D. 
   #Implicit 0-padding is intentional; it allows easy switching between dimensions:
   V:= rtable~(1..d, [P]) 
;
   seq(unapply(add(V[j][i]*binomial(n-1,j-1)*(1-t)^(n-j)*t^(j-1), j= 1..n), t), i=  1..d)
end proc:

Bezier_plot3d:= proc(P::seq(list(realcons)))
local t;
   plots:-spacecurve([Bezier(P)[..3](t)], t= 0..1, _rest)   
end proc:

Example:

Bezier([2,5], [3.5,4], [3.5,1], [2,0]);

Bezier_plot3d([2,5], [3.5,4], [3.5,1], [2,0], thickness= 4, orientation= [-90,0]);
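The Bernstein-basis formula above interpolates the first and last control points at t = 0 and t = 1, whatever the degree or dimension. A minimal Python analogue (not Maple code; the function name is mine, and the index j runs from 0 here rather than 1):

```python
from math import comb

def bezier(points, t):
    """Evaluate the degree-(n-1) Bezier curve with the given control
    points at parameter t, coordinate by coordinate."""
    n = len(points)
    return tuple(
        sum(p[i]*comb(n - 1, j)*(1 - t)**(n - 1 - j)*t**j
            for j, p in enumerate(points))
        for i in range(len(points[0]))
    )

P = [(2, 5), (3.5, 4), (3.5, 1), (2, 0)]
print(bezier(P, 0))     # equals the first control point
print(bezier(P, 1))     # equals the last control point
print(bezier(P, 0.5))   # midpoint of the curve: (3.125, 2.5)
```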

Here's a somewhat generalized treatment of these situations:

  • a system of low-degree multivariate polynomials,
  • the dimension (algebraic, not topological) of the complex solution set is higher than that of the real solution set,
  • only the real solutions are wanted.

This is similar to Tom's solution, but doesn't use assume (which usually causes solve to complain); and to Christian's, but doesn't use (which is totally bizarre).

restart;
Eqns:= {x^2+y^2+z^2-4*x+6*y-2*z-11 = 0, 2*x+2*y-z = -18};
n:= nops(Eqns);
V:= indets(Eqns, And(name, Not(constant)));
sol:= [solve(Eqns, V[..n], explicit)]; #So, same number of eqns as solved-for vars
#Solve for imaginary parts of free vars to be 0:
free_v:= map(solve, ((evalc@Im)~)~(sol), V[n+1..]);
#Re-evaluate original solution(s) with those free-var values:
{(eval~(sol, free_v) union~ free_v)[]};

This treatment allows for the possibility of multiple solutions. The first solve result is put in a list so that the free-variable solutions are matched with the correct primary solution.

There may not be any situations where this result is better than that from parametricreal.

Like this:

restart:
PDE1:= diff(m(t,x),t)+diff((u(t,x)^2-(diff(u(t,x), x))^2)*m(t,x), x) = 0;
PDE2:= eval(PDE1, m(t,x)= u(t,x)-diff(u(t,x), x,x)); 
ODE:= convert(simplify(eval(PDE2, u(t,x)= U(x-a*t)), {x-a*t= y}), diff);
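The reduction works by the chain rule; writing y = x - a*t, a sketch of the step that eval and simplify carry out:

```latex
u(t,x) = U(y), \qquad y = x - a t
\;\Longrightarrow\;
\frac{\partial u}{\partial t} = -a\,U'(y), \qquad
\frac{\partial u}{\partial x} = U'(y),
```

so each d/dt becomes -a*d/dy and each d/dx becomes d/dy, and the PDE in u(t,x) collapses to an ODE in U(y).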

 

The main problem with your code is that the initialization of the product to 1 is inside the for loop. This should be easy for you to correct. Let me know if you need further help with that.

An alternative is the simpler

mul(ArrayTools:-Diagonal(M)),

where M is the Matrix.
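The same point, sketched in Python (helper names are mine): initialize the accumulator once, before the loop, or avoid the loop entirely, as mul does:

```python
import math

def diag_product(M):
    """Product of the diagonal entries of a square matrix (list of rows).
    Note the accumulator is initialized to 1 once, BEFORE the loop."""
    p = 1
    for i in range(len(M)):
        p *= M[i][i]
    return p

def diag_product_short(M):
    """Loop-free version, analogous to mul(ArrayTools:-Diagonal(M))."""
    return math.prod(M[i][i] for i in range(len(M)))

M = [[2, 7, 1],
     [0, 3, 5],
     [4, 6, 5]]
print(diag_product(M), diag_product_short(M))   # 30 30  (= 2*3*5)
```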

You'll get a much smoother plot if you let plot itself choose the abscissas (the values of i for which the fsolve is done). This is very easy to do:

F:= proc(i)
local
   x, ThetaBn:= (1/180)*i*Pi,
   s:= cos(2*ThetaBn)*x+(2*sin(ThetaBn)*sin(ThetaBn))*sin(x)
;
   180.0/Pi*fsolve(s = 0, x, 1 .. 6)
end proc:

plot(F, 50..85);
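For comparison, here's a Python sketch of the same root-finding, with plain bisection standing in for fsolve on the same bracket (1, 6) (helper names are mine):

```python
import math

def F(i):
    """For ThetaBn = i degrees, return (in degrees) the root on (1, 6) of
    cos(2*ThetaBn)*x + 2*sin(ThetaBn)^2*sin(x) = 0, found by bisection."""
    th = math.radians(i)
    f = lambda x: math.cos(2*th)*x + 2*math.sin(th)**2*math.sin(x)
    lo, hi = 1.0, 6.0
    for _ in range(80):            # 80 halvings: full double precision
        mid = (lo + hi)/2
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return math.degrees((lo + hi)/2)

print(F(60))   # for i = 60 the equation reduces to sin(x) = x/3
```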

For your second example, it's even easier to get plot to do most of the work because there's no fsolve:

alpha:= 
   sqrt(V^2-2*V*sin(theta)*(V*sin(theta)-1)+4*sin(theta)^2*(V*sin(theta))^2)
   /(-1/cos(theta)-2*cos(theta)*(V*sin(theta)-1))
;
plot(eval(180/Pi*alpha-140, theta= 70*Pi/180), V= 0.1..0.8);

In either case, there's no need for an array or any other explicit container structure to hold the numeric data.

Here is some simpler code:

IntegerTriangles:= proc(p::posint)
local x, y, `p/2`:= iquo(p,2)+1;
   {seq(seq([x, y, p-x-y], y= max(x, `p/2`-x)..iquo(p-x, 2)), x= 1..iquo(p,3))}
end proc
:

So, it's possible to ensure that the triangle inequality is satisfied just by using the loop bounds of the middle number, y, so there's no need to explicitly check whether p-x-y < x+y (as Kitonum did).
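To see that the y-bounds really do enforce the triangle inequality, here's a Python cross-check of the bounded loops against a brute-force search with the explicit x + y > z test (function names are mine):

```python
def triangles_bounded(p):
    """Unordered integer triangles (x, y, z), x <= y <= z, x+y+z = p,
    with the triangle inequality enforced purely by the bounds on y."""
    h = p//2 + 1                    # analogue of `p/2` := iquo(p,2)+1
    return {(x, y, p - x - y)
            for x in range(1, p//3 + 1)
            for y in range(max(x, h - x), (p - x)//2 + 1)}

def triangles_brute(p):
    """Same set, with an explicit x + y > z check."""
    return {(x, y, p - x - y)
            for x in range(1, p + 1)
            for y in range(x, p + 1)
            if p - x - y >= y and x + y > p - x - y}

print(triangles_bounded(12))   # the 3 triangles of perimeter 12
```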

Where you have used irem (integer remainder), you meant to use iquo (integer quotient).

Using assume on a variable which will be assigned a value does nothing useful. In other words, assume is only effective for symbolic variables. If -- for debugging purposes -- you want to restrict the type of values that can be assigned to a variable, there is a way to do that such that an error will be thrown if the restriction is violated while you're in debugging mode and there'll be no overhead for the checking code when you're not. Plus, the checking code can be isolated into the declarations section. If that was your purpose for using assume, let me know.

Creating a sequence (as your code does with L), list, or set by adding elements one at a time is extremely inefficient due to a Maple idiosyncrasy of these container structures. (That same idiosyncrasy makes these structures more efficient for other purposes.) Kitonum shows one way around that: use a table as the container structure. I show another: use seq instead of for.

Since your code never changes p, there's no need to copy the parameter n to the local p; hence there's no need for the local variable -- just use the parameter instead.

There are at least four anomalies[*1] in your code:

  1. There is a contradiction between your exposition and your code as to the condition for the product to be 0: The former says m <> l, while the latter says n <> l. Please clarify which is correct.
  2. It makes no sense for to be a local variable. Either make it a parameter to the procedure, or leave it undeclared.
  3. The `.` operator in your main line of computation should be ordinary multiplication, `*`.
  4. If phii is a procedure external to your code above, okay; otherwise it should be phi[n, m+s-2*j](t) (note the square brackets). In any case, this must be the final return value.

And while the following is not an error, it's an easily avoidable inefficiency: g (with parameters m, j, and s) and a should be defined external to multiply.

If I'm reading your exposition correctly, this multiply is non-commutative. Is that correct?

[*1]By anomaly I mean, in this context, something that is not manifestly an error but is suspicious as a potential source of error once this code is put into the broader context for which it's intended.

I don't trust solve in any situation where the number of equations is greater than the number of variables being solved for. In those cases, I use eliminate instead:

eliminate({x^2= 2, x^3=sqrt(2)^3}, {x});

The return value is always a list of two sets, the first containing the solutions and the second the residual information (possibly empty). The residual information is expressions (other than 0 itself) which, when equated to 0, represent what's left after the solved-for variables are eliminated from the equations.

Now here's my unsubstantiated guess as to why solve gets this wrong sometimes, as in the present example: In situations where the residual information is not the empty set, solve correctly and intentionally returns NULL. Here are two such situations, shown here from the point of view of eliminate:

eliminate({x=2, y=3}, {x});
                       [{x = 2}, {y - 3}]
eliminate({x=2, x=3}, {x});
                        [{x = 2}, {-1}]

Note that in the second case, the residual information is self-contradictory, yet eliminate intentionally returns it anyway, as it may be useful for some purposes (that there's no need to go into here). My guess about solve is that it performs a preliminary and superficial analysis of the system of equations, and if it thus guesses that this is one of those situations where the residual information will be non-empty, it returns NULL.

Because my guess about what solve is doing in these situations is something that I've thought about for years, I'd appreciate it if someone could provide some evidence for or against it. The code underneath solve is very long, and has been patched many times over the years, so direct evidence may be difficult to obtain; but perhaps an experiment could be devised to collect some indirect evidence.
 

The issue that you're describing is merely a feature of Maple's GUI's pretty printing of Vectors and Matrices. It has absolutely nothing to do with the form by which a Matrix is imported, where it's imported from, or whether it's imported at all; it has nothing to do with how the Matrix is stored in Maple; there is no distinction between a "standard" form and some other form(s). So, there's no need (and no way) to "recast" a Matrix from one form to another because those other forms simply don't exist. The complete Matrix exists in Maple's memory even if it is displayed in the summarized form that you described.

Do:

interface(rtablesize);

If you have a default setup, this will return 10. A Vector or Matrix with a number of rows or columns greater than this number will be displayed in the summarized form (when it's prettyprinted). If you want to change that, do something like

interface(rtablesize= 30);

This will affect the prettyprinted display of all Vectors and Matrices until you change it back, start a new session, or do restart.

[This solution is similar to MMcDara's/Sand15's "Exact solution", although I derived it independently before his was posted. I explicitly use the BetaDistribution. He uses it also, but not explicitly by name. In my (limited) experience, Statistics works better the closer one sticks to the pre-defined distributions.]

It may surprise you that your problem can be solved completely symbolically (and even using fewer lines of code than you used), such that your QStar can be expressed as a fairly simple parameterized random variable in Maple's Statistics package. After that, it's trivial to compute the mean, standard deviation, etc., as symbolic expressions and to generate samples in the millions (rather than the 1000 that you asked for), all without a single loop.

Indeed, QStar can be expressed as an affine (i.e., linear plus a constant) combination of two Beta random variables. There are five parameters to the combination:

  • a: the lower bound (not the sample min) of the underlying Uniform distribution
  • b: the upper bound (not the sample max) of the underlying Uniform distribution
  • n: the sample size (the 10 that you used)
  • co: exactly as you used
  • cs: exactly as you used

Here is a worksheet with the analysis:
 

Analyzing a random variable derived from the minimum and maximum of a uniform sample

Author: Carl Love <carl.j.love@gmail.com> 23-March-2019

restart
:

St:= Statistics
:

Set lower and upper limits of uniform distribution (a and b) and sample size n. Then generate sample and find its min A and max B.

Note that this code is commented out because I want these five variables to remain symbolic, at least for the time being.

(*
(a,b,n):= (10, 20, 10):
(A,B):= (min,max)(St:-Sample('Uniform'(a,b), n)):
*)

Your computation, with symbolic variables:

Ecost:= co*int((Q-x)*f(x), x= A..Q) + cs*int((x-Q)*f(x), x= Q..B):

DCost:= diff(Ecost, Q);

co*(int(f(x), x = A .. Q))+cs*(int(-f(x), x = Q .. B))

QStar:= solve(eval(DCost, [f(x)= 1/(B-A)]), Q);

(A*co+B*cs)/(co+cs)

A and B are derived by mathematical operations (min and max) performed on a random sample of a particular distribution (Uniform(a,b)); thus they are random variables, and QStar, a simple linear combination of them, is itself a random variable. Surprisingly, we can express and evaluate QStar as a random variable that Maple can easily work with.

 

The k-th order statistic of a sample of size n drawn from Uniform(0,1) has distribution BetaDistribution(k, n-k+1) (see the Wikipedia article "Order statistic" https://en.wikipedia.org/wiki/Order_statistic#Probability_distributions_of_order_statistics). The minimum corresponds to k=1 and the maximum to k=n. So, if we translate from the interval [0,1] to [a,b], the min and max have distributions A:= (b-a)*BetaDistribution(1,n) + a and B:= (b-a)*BetaDistribution(n,1) + a.
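This Beta connection is easy to spot-check by simulation: the min of n i.i.d. Uniform(0,1) draws should have mean 1/(n+1) (the Beta(1,n) mean) and the max should have mean n/(n+1). A Python sketch (pure standard library; the sample sizes are illustrative):

```python
import random

random.seed(1)
n, trials = 10, 100_000
mins, maxs = [], []
for _ in range(trials):
    s = [random.random() for _ in range(n)]   # one Uniform(0,1) sample
    mins.append(min(s))
    maxs.append(max(s))

mean = lambda v: sum(v)/len(v)
# Beta(1, n) mean = 1/(n+1);  Beta(n, 1) mean = n/(n+1)
print(mean(mins), 1/(n + 1))    # both ~ 0.0909
print(mean(maxs), n/(n + 1))    # both ~ 0.9091
```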

Q:= eval(
   QStar,
   [
      A= a + (b-a)*St:-RandomVariable(BetaDistribution(1,n)),
      B= a + (b-a)*St:-RandomVariable(BetaDistribution(n,1))
   ]
)
:

We can get Maple to derive symbolic expressions for the parametric statistics of Q:

Stats:= seq(St[p], p= (Mean, StandardDeviation, Skewness, Kurtosis)):
QStats:= <cat~([mu, sigma, gamma, kappa], __Q)> =~ <Stats(Q)>;

mu__Q = (a*co*n + b*cs*n + a*cs + b*co)/((co + cs)*(n + 1)), with similarly explicit (but much longer) closed forms for sigma__Q, gamma__Q, and kappa__Q, omitted here.

Now use the particular numeric values that you used in your Question:

NumVals:= [a= 10, b= 20, n= 10, co= 20, cs= 25]
:

We can generate huge samples from Q instantaneously:

n:= eval(n, NumVals):  Sam:= St:-Sample(eval(Q, NumVals), 10^6):  n:= 'n'
:

Compare the theoretical and sample statistics:

<Stats(Sam)>, evalf(eval(QStats, NumVals));

Sample:       15.4545, 0.58976, -0.35137, 4.44998
Theoretical:  mu__Q = 15.45454545, sigma__Q = 0.5904268660, gamma__Q = -0.3524307757, kappa__Q = 4.454789470

St:-Histogram(Sam, gridlines= false);

 


 

Download OrderStats.mw

You just need to add sin(x+t) and sin(x-t), letting x be the space parameter and t the time parameter:

plots:-animate(
   plot,
   [
      [sin(x+t), sin(x-t), sin(x+t)+sin(x-t)], x= -2*Pi..2*Pi, 
      color= [red,blue,black], thickness= [0,0,2]
   ],
   t= 0..2*Pi, size= [967,610], frames= 100
);

 

That's all that's needed mathematically. You can add various aesthetic doo-dads like the red dots, of course.
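Mathematically, the black curve is a standing wave, since sin(x+t) + sin(x-t) = 2*sin(x)*cos(t); its nodes stay put while its amplitude oscillates. A quick numerical check of the identity in Python:

```python
import math

def travelling_sum(x, t):
    """Sum of the two travelling waves."""
    return math.sin(x + t) + math.sin(x - t)

def standing(x, t):
    """Equivalent standing-wave form."""
    return 2*math.sin(x)*math.cos(t)

# The identity holds for all x and t; check it on a small grid:
for x in (0.0, 0.7, 1.9, 3.1):
    for t in (0.0, 0.5, 2.0, 5.5):
        assert abs(travelling_sum(x, t) - standing(x, t)) < 1e-12
print("sin(x+t) + sin(x-t) == 2 sin(x) cos(t) verified on grid")
```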
