MaplePrimes Activity


These are answers submitted by Carl Love

The line with the add commands should be (at least in the current version of Maple):

L:= (1/3)*h*(y(x[1])+4*add(y(x[i]), i = 2 .. N, 2)+2*add(y(x[i]), i = 3 .. N-1, 2)+y(x[N+1]));

The result, after evalf, should be 4354.742601 (as you got). If you get something else (not close) or an error, use this:

L:= (1/3)*h*(y(x[1])+4*add(y(x[2*i]), i= 1 .. N/2)+2*add(y(x[2*i+1]), i= 1 .. N/2-1)+y(x[N+1]));

I'm sure that this latter syntax will work in Maple 16. (Current Maple allows a third argument for add--the "skip"--just as you used as a third argument for seq.)

You may compare your results with:

Student:-Calculus1:-ApproximateInt(y(X), X= a..b, method= simpson, partition= N);
evalf(%);

Also note that your function y(X) is equivalent to a hyperbolic cosine cosh(X/C).
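
For example, here is a self-contained check with made-up values for a, b, C, and N (your actual values aren't shown here, so these are assumptions purely for illustration). The two results should agree closely:

C:= 100:  a:= 0:  b:= 50:  N:= 10:   #N must be even for Simpson's rule
y:= X-> (exp(X/C) + exp(-X/C))/2:    #equivalent to cosh(X/C)
h:= (b - a)/N:
x:= Array(1..N+1, i-> a + (i-1)*h):
L:= (1/3)*h*(y(x[1]) + 4*add(y(x[2*i]), i= 1..N/2)
    + 2*add(y(x[2*i+1]), i= 1..N/2-1) + y(x[N+1])):
evalf(L);
evalf(Student:-Calculus1:-ApproximateInt(y(X), X= a..b, method= simpson, partition= N));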

Unit should be capitalized, not unit. But you probably don't need to set anything to access all 32 GB of memory from Maple.

Write each method as a procedure (proc(...)). Let's say that these procedures are named Jacobi, GaussSeidel, and SOR. For each, do

CodeTools:-Usage(Jacobi(...)),

etc.

This returns the result of the inner procedure. The times and memory are printed as side effects.

If you're timing things that take "minutes", this should be sufficiently accurate. Some special care is needed to time very small things (say, < 0.2 seconds).
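
For example (the procedure below is a trivial stand-in of my own, just to show the pattern; your Jacobi, GaussSeidel, and SOR procedures would go in its place):

Dummy:= proc(n::posint)
local k, s;
   s:= 0.;
   for k to n do s:= s + 1./k od;  #some busy work to time
   s
end proc:

#Prints memory and timing statistics as side effects; returns the result unchanged:
result:= CodeTools:-Usage(Dummy(10^6));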

Algorithm (a Maple sketch follows the list):

  1. Select points Xj at random in C. Discard any at distance <= R1 from X1. Continue until you have m-1 points.
  2. Compute min pairwise distance among X2, ..., Xm. Call this eps1.
  3. Compute min({|X1-Xj| - R1 : 2 <= j <= m}). Call this eps2.
  4. Let all of R2, ..., Rm = min(eps1, eps2) / 3.
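
Here is a minimal Maple sketch of the above, assuming (purely for illustration) that C is the unit square and that X1, R1, and m >= 2 are given. It returns a list of [center, radius] pairs.

RandomDisks:= proc(X1::list, R1::positive, m::posint)
local dist, rnd, X, P, i, j, eps1, eps2, r;
   dist:= (U,V)-> sqrt((U[1]-V[1])^2 + (U[2]-V[2])^2);
   rnd:= rand(0..10^9);
   X[1]:= X1;
   #Step 1: random points in C, discarding any within R1 of X1:
   j:= 1;
   while j < m do
      P:= [rnd()/1e9, rnd()/1e9];
      if dist(P, X1) > R1 then j:= j+1; X[j]:= P fi
   od;
   #Step 2: minimum pairwise distance among X2, ..., Xm:
   eps1:= min(seq(seq(dist(X[i], X[j]), j= i+1..m), i= 2..m-1));
   #Step 3: min({|X1-Xj| - R1 : 2 <= j <= m}):
   eps2:= min(seq(dist(X1, X[j]) - R1, j= 2..m));
   #Step 4: common radius for the remaining disks:
   r:= min(eps1, eps2)/3;
   [seq([X[j], `if`(j=1, R1, r)], j= 1..m)]
end proc:

RandomDisks([0.5, 0.5], 0.1, 5);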

For each instance of beta(10) you need to put a space between the beta and the left parenthesis. 

The appropriate command is add, not sum. It seems that everyone wants to use sum, which is primarily for infinite, symbolic, and/or indefinite summation. If you have specific integer values of the index, use add. Yes, you could use quote marks, as suggested by NM, but the better solution is to use add.

Also, you should use eval instead of subs. It seems that everyone wants to use subs, a command which has no knowledge of mathematics.

add(eval(diff(x^2, [x$k]), x= 0), k= 1..2);

The linear system of equations is inconsistent. In particular, the coefficients of tau[1] and tau[4] are identical.

Situations where string manipulations are needed to perform mathematical operations are extremely rare. Your string work can be replaced easily by using eval to change exp, like this:

a:= exp(2*t);
b:= exp(1)^(2*t);
eval(a-b, exp= (x-> exp(1)^x));

Likewise, numeric operations are not needed to verify the equality of expressions; simplify can be used, like this:

answer:= 5*exp(7.5*t);
response:= 5*e^(7.5*t);
simplify(eval(answer-response, e= exp(1)));

or, better yet,

verify(answer, eval(response, e= exp(1)), simplify);

Note that these solutions still work even if the symbol e has not been used in the response. 

The situations that you described as "terrible computations" are unavoidable consequences of the rounding of decimal arithmetic. You'll find them in any system of decimal arithmetic, be it a computer program, a hand calculator, or pencil and paper. It's called loss of significance, or, more dramatically, catastrophic cancellation. You'll find umpteen hits web-searching either of these expressions.

The way to avoid this is to actually use your CAS as something more than a fancy calculator---replace numeric computations with symbolic ones.
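
For example (this particular expression is my own illustration, not something from your worksheet):

f:= x-> (1 - cos(x))/x^2:
evalf(f(1/10^7));      #catastrophic cancellation at the default Digits: returns 0.
evalf[30](f(1/10^7));  #more working precision recovers a value close to 0.5
limit(f(x), x= 0);     #the purely symbolic computation gives the exact value, 1/2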

 

A negative number to a fractional (or decimal) power is complex. You have (f '')^0.4.
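
A quick generic illustration:

(-8.)^0.4;  #returns a complex float, approximately 0.710 + 2.185*I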

You didn't try expand(res), which I think will work, possibly followed by simplify(..., symbolic).

Just answering on spec; not at my computer to test it.

Your code is correct, and should not give an error. I suspect an installation or licensing issue.

What do you mean by "multiple roots"? A sequence can only converge to one value. Yes, an iterative root finder can be used to find multiple roots, but only if given multiple initial values, and there'll be separate convergent sequences for each root.

We'll define the absolute error of the kth term of a sequence as the absolute value of the difference between this term and the limit. If a sequence converges, we'll say that the order of convergence is r if the absolute error of the kth term of the sequence can be approximated by A^(r^k) for some 0 < A < 1 and r > 0. The values of A and r can be estimated by linear regression. Here are some Maple procedures for doing that:

restart
:
(*>>>-------------------------------------------------------------------------
This procedure approximates the order of convergence of a convergent sequence
S. It outputs 3 values: the order of convergence, the error-approximating 
function, and the coefficient of determination (aka r-squared) of the 
regression.

This syntax requires Maple 2018 or later.
-------------------------------------------------------------------------<<<*)
ConvergenceOrder:= proc(S::~Vector)
local Err:= (-ln@abs)~(S[..-2] -~ S[-1]), n:= numelems(Err), k, j, R, r;
   Digits:= 15;

   #Check integrity of sequence:
   for k to n do until Err[k] > 0;
   if k > n-2 then
      if k > n then 
         error "sequence doesn't converge sufficiently rapidly"
      fi;
      error "need more terms in sequence"
   fi;
   for j from k to n-1 do 
      if Err[j] >= Err[j+1] then 
         error "absolute errors must be strictly decreasing" 
      fi
   od;

   R:= Statistics:-LinearFit(
      [1, _k], <($k..n)>, ln~(Err), _k, 
      ':-output'= [':-rsquared', ':-parametervector']
   );
   Digits:= 5;
   evalf([(r:= exp(R[2][2])), _e = exp(-exp(R[2][1]))^(r^_k), R[1]])[]
end proc
:
(*>>>----------------------------------------------------------------------
This procedure takes a function f whose root we want and a template T for a
root-finding method that uses only one initial value and returns the 
iteration function.
-----------------------------------------------------------------------<<<*)
MakeIterator:= proc(f::procedure, T::procedure)
local x;
   unapply(simplify(T(x,f)), x)
end proc
:
(*>>>----------------------------------------------------------------------
This procedure applies the iteration J, starting with initial value v0, and
returns the sequence of iterates as an Array.

This uses Maple 2019 syntax.
-----------------------------------------------------------------------<<<*)
Sequence:= proc(
   J::procedure, #unary iteration function 
   x0::complexcons, #initial value
   {  #keyword parameters:
      abserr::And(realcons, positive):= 10.^(1-Digits), #convergence criterion
      max_iters::posint:= 99 #max number of iterations
   }
)
local x:= x0, x1, X:= Array([x]), n;
   for n to max_iters while abs((x1:= evalf(J(x))) - x) > abserr do
      X,= (x:= x1)
   od;
   if n > max_iters then error "did not converge" fi;
   X
end proc
: 

The iterative method that you gave is close to---but not quite the same as---Newton's method. If you provide a function and initial value for which your method converges, I'll apply these procedures to it. In lieu of that, I'll just show an example using regular Newton's method:

#Example usage:
Digits:= 999: #Large Digits is needed to get sufficient data.

M:= (x,f)-> (x - f(x)/D(f)(x)): #method--Newton's method in this case.
f:= x-> x^3 - x - 2: #function to find root of
x0:= 1: #initial guess of root

ConvergenceOrder(Sequence(MakeIterator(f,M), x0));
             2.0391, _e = 0.77243^(2.0391^_k), 0.99721

So, the order of convergence is about 2.04, the error-approximating function is as shown, and the coefficient of determination is > 99.7%, indicating that the regression was highly accurate.

I'll give you two methods---the first is easy and lazy and the second guarantees uniform selection from the available primes (meaning each prime is equally likely to be chosen):

EasyRandPrime:= (R::range(realcons))->
   nextprime(rand(ceil(lhs(R))-1 .. prevprime(floor(rhs(R)))-1)())
:
#This syntax requires Maple 2018 or later:
UniformRandPrime:= proc(R::range(realcons))
local rnd:= rand(ceil(lhs(R))..floor(rhs(R))), r;
   do until isprime((r:= rnd()));
   r
end proc:

The usage is the same for both:

EasyRandPrime(3..30);
UniformRandPrime(3..30);

If you require a large number of random selections from the same range, the above procedures can be made more efficient fairly easily.
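
Here is a sketch of one such way (my own, not part of the original Answer): compute the primes in the range once, and return a generator that selects uniformly from that list.

RandPrimeGen:= proc(R::range(realcons))
local P, rnd;
   P:= select(isprime, [$ceil(lhs(R))..floor(rhs(R))]);
   if P = [] then error "no primes in the given range" fi;
   rnd:= rand(1..nops(P));
   ()-> P[rnd()]  #zero-argument generator; P and rnd are remembered via lexical scoping
end proc:

NewRandPrime:= RandPrimeGen(3..30):
seq(NewRandPrime(), j= 1..10);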


 

The Answer by dharr exploits the low-level structure of a procedure in a way that won't work if f's parameters have type declarations and is potentially dangerous if f's parameters have default values.

The Answer by Christian Wolinski does something that could be done more simply with the commands curry and/or rcurry.

What you want can be achieved much more robustly by simply declaring the procedure f such that its parameters have default values that are global names:

f:= proc(a:= ':-a', b:= ':-b', c:= ':-c') a*b*c end proc:
f(3);

                             
3*b*c

Another option, which is even more robust and eliminates the correspondence between parameters and the order in which they're declared--but which requires a little more typing--is to use keyword parameters:

f:= proc({a::algebraic:= ':-a', b::algebraic:= ':-b', c::algebraic:= ':-c'})
   a*b*c 
end proc:
f(b=7);

                     7*a*c

Now f's parameters can be given in any order.
 

 

The values of keyword parameters of type name and the keywords themselves (the word output in this case) are always global. When there is a conflict between a local (your local Q) and a global (your desired setting of the keyword) with ostensibly the same name, the local name takes precedence. In those cases, it's necessary to distinguish the global with the :- prefix, thus, ':-Q'.

I give this Answer just to explain the reasoning behind the other Answer.

Some Maple programmers always use the ':-...' syntax for these unevaluated global names.
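
Here's a toy example of that conflict (the procedures are my own, not from your Question):

f:= proc({output:= ':-standard'}) output end proc:

g:= proc()
local standard;
   standard:= "my local";
   f(output= standard),     #the local's value is passed: "my local"
   f(output= ':-standard')  #the unevaluated global name standard is passed
end proc:

g();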
