Carl Love


28085 Reputation

25 Badges

13 years, 98 days
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

It looks like evalc won't automatically assume that D(x11)(1) and D(y11)(1) are real. It will make that assumption if the diff form is used instead of D. So, this gets you your desired result:

convert(evalc(Re(convert(F1,diff))), D)

If you plan on doing a lot of this, I could add code to evalc (specifically, a procedure named `evalc/D`) that would do the above automatically.
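For example, here's a sketch using a hypothetical F1 (your actual expression isn't shown above):

F1:= D(x11)(1) + I*D(y11)(1):
convert(evalc(Re(convert(F1, diff))), D);
                      # expected: D(x11)(1)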

There seems to be a critical-severity bug in fsolve in Maple 2019.1 such that every attempt that I've made at solving this with fsolve (using various formulations of the function whose root is wanted) has led to a kernel crash. This is particularly disturbing in this case because the function is differentiable, strictly increasing, and has a unique real root.

Another root-finding command--an alternative to fsolve--is RootFinding:-NextZero. Here it is applied to your problem. I've done this in a way that can be applied to most arclength problems:

restart:

#These first 2 steps are simply to reconstruct your original function
#from its derivative:
C:= 5.557990765:
f:= int(<-C,-7.3,-5.6,7.3>.<[sin, cos, sinh, cosh](C*x)>, x);

x0:= 0: #Start of arclength measurement.
L:= 0.5: #Desired arclength.

F:= subs(  #Construct numeric procedure whose root we want.
   [_ArcLInt= VectorCalculus:-ArcLength(<x, f>, x0..X, 'inert'), _ArcL= L],
   proc(X) local x; evalf(_ArcLInt) - _ArcL end proc
):    
x1:= RootFinding:-NextZero(F, x0);
                       x1 := 0.1550088023
plot(f, x= x0..x1);

An easy way to get a close upper bound on the desired x-value is to use the straight-line distance between two points on the curve:

x_ub:= fsolve((x0-x)^2 + (eval(f, x= x0) - f)^2 = L^2);
                     x_ub := 0.1570169605

This upper bound might be useful if you're using a root finder whose performance is improved by specifying bounds.
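For example, if I recall its options correctly, RootFinding:-NextZero accepts a maxdistance option (measured from the starting point), which keeps it from searching past the bound:

x1:= RootFinding:-NextZero(F, x0, maxdistance= x_ub - x0);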

An example of an fsolve that leads to a kernel crash is

fsolve([F], [x0 .. L])

using the procedure F and the values of x0 and L from above.

To permute the rows of a Matrix M so that they're sorted (ascending) by column C, do

M[sort(M[..,C], output= permutation), ..]

To change that to descending, make `>` the second argument to sort.

To count the entries of column C that are larger than V, do

add(`if`(x > V, 1, 0), x= M[..,C])
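For example (the Matrix M, column C, and threshold V below are made up for illustration):

M:= <3, 1; 1, 4; 2, 0>:  C:= 1:  V:= 1:
M[sort(M[..,C], output= permutation), ..];       # rows reordered so column 1 reads 1, 2, 3
M[sort(M[..,C], `>`, output= permutation), ..];  # the descending version
add(`if`(x > V, 1, 0), x= M[..,C]);              # 2 entries of column 1 exceed 1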

You can read through the code using showstat(fsolve), showstat(fsolve::sysnewton), showstat(fsolve::linsolve), etc.

One eye-opener for me is that linsolve uses a directly coded Gaussian elimination rather than using the extensively optimized LinearAlgebra:-LinearSolve. This is done for every iteration of Newton's method.

It only evaluates when you press RETURN. If instead you end lines with SHIFT-RETURN, you'll get a new line without any evaluation happening. You can enter any number of lines this way. Each group of such lines is called an execution group.

You need to use algsubs instead of subs:

eqn:= N*x*y - x*y + f(x,y) = 0;
algsubs(x*y= 1, eqn);

Both subs and eval (and even subsindets and evalindets) can only substitute for subexpressions that occur as syntactic subunits in the main expression, but algsubs can find algebraic subunits. It would require a lengthy digression into the minutiae of computer algebra for me to precisely define syntactic subunit, but if you'd like, I'll try later. I think an example will suffice as a definition for now: Let's consider the present expression, but written in functional form (I'd also call it a tree form):

`=`(`+`(`*`(N, x, y), `*`(-1, x, y), f(x,y)), 0);

The above is not just a pedantic presentation form, but is actually executable Maple code that produces the same eqn as the original. You will note that our target expression x*y (that is, `*`(x, y)) doesn't appear in the above functional form, which is why subs doesn't see it.

The above functional form is very close to (but not identical to) Maple's internal representation of the expression. In reality, the numeric coefficients are handled a little bit differently, which is why subs does find the x*y in `*`(-1, x, y).
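To make the contrast concrete, here are the two substitutions side by side (the results in the comments are what I'd expect from the representation described above):

subs(x*y= 1, eqn);     # only the bare -x*y term is replaced: N*x*y - 1 + f(x,y) = 0
algsubs(x*y= 1, eqn);  # both occurrences are replaced: N - 1 + f(x,y) = 0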

 

For the purpose of checking associativity, we can ignore the phi on the left side of the equation. Then do this:

`&*`:= (u__1,u__2)-> ln(exp(phi(u__1))+exp(phi(u__2))) + Q(phi(u__1) - phi(u__2));
AssocLeft:= (u__1 &* u__2) &* u__3;
AssocRight:= u__1 &* (u__2 &* u__3);
CriticalEqn:= AssocLeft - AssocRight = 0;

So, to check any particular Q, we substitute it into the CriticalEqn, along with some arbitrary phi and u's. Here, I let both Q and phi be the identity function:

eval(CriticalEqn, [Q= (x->x), phi= (x->x), u__1= 1, u__2= 0, u__3= -1]);
evalf(%);

The left side isn't even close to 0, which shows immediately that it's not associative for this Q.

Regardless of whether the equations are linear or the solutions are exact, you can achieve what you asked for like this:

eval(Vector[row](nops(SOL[0]), symbol= tau), SOL[0]);
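For illustration only, here's a made-up SOL[0] of the form that I'm assuming you have (a set of equations whose left sides are the indexed tau's):

SOL[0]:= {tau[1]= 0.3, tau[2]= 1.7, tau[3]= -0.2}:
eval(Vector[row](nops(SOL[0]), symbol= tau), SOL[0]);
                     # expected: the row Vector [0.3, 1.7, -0.2]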

Here is code to exactly duplicate in Maple 2019 the functionality of rept that you show in your worksheet:

rept:= (f, R::(name= range(realcons)))-> 
   ((t,a,r)-> eval(f, t= t - r*floor((t-a)/r)))((lhs, (lhs, rhs-lhs)@~rhs)(R))
: 

Your example:

F:= rept(piecewise(t<1, t/2+1/2, t>=1, -t+2), t= -1..2);
F := piecewise(t - 3*floor(t/3 + 1/3) < 1, t/2 - (3*floor(t/3 + 1/3))/2 + 1/2, 1 <= t - 3*floor(t/3 + 1/3), -t + 3*floor(t/3 + 1/3) + 2)
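Plotting it over a few periods (the period here is 3, the width of the original range) shows the periodic extension:

plot(F, t= -4..8);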

Note that my code gives an exact symbolic representation (in algebraic form) of the periodic extension, not merely a procedure or a function only suitable for plotting.

I think that you're saying that 1 shouldn't be a solution under RealDomain. I agree with you, and I think that this is a bug.

In order to use RealDomain properly, you should load it with with(RealDomain) so that the operations `^` and sqrt use the versions from the package. However, even after doing this, I still get 1 in the solution set.

The following worksheet shows DirectSearch performing superbly when applied to a black-boxed Ackley function where I've hidden the minimum at a random location to make it more challenging.
 

Global optimization of a 2-D black-boxed Ackley function with DirectSearch

Author: Carl Love <carl.j.love@gmail.com> 10-July-2018

restart:

Digits:= 15:

The Ackley function below comes from the Wikipedia article "Ackley function": https://en.wikipedia.org/wiki/Ackley_function

Ackley:= (x,y)-> -20*exp(-sqrt((x^2+y^2)/2)/5) - exp((cos(2*Pi*x)+cos(2*Pi*y))/2) + exp(1) + 20:

plot3d(Ackley(x,y), x= -5..5, y= -5..5);

Its global minimum is known to be at (0,0). To make the minimum more challenging to find, we shift it at random and put it in a "black box".

(R||(0..1)):= (rand(-5.0..5.0), rand(0.0..5.0)):

randomize():

x||(0..2):= R0(),'R1'()$2:  y||(0..2):= R0(), 'R1'()$2:

Minimum:= <x0,y0>;

Vector(2, {(1) = -2.49751216120529, (2) = 3.22783673670402})

BlackBox:= subs(
   [seq]([X0,Y0]=~ Minimum),
   proc(x,y)
      if not [args]::list(numeric) then return 'procname'(args) fi;
      Ackley(x-X0, y-Y0)
   end proc
);

proc (x, y) if not [args]::(list(numeric)) then return ('procname')(args) end if; Ackley(x+2.49751216120529, y-3.22783673670402) end proc

Bounds:= [x= -5-x1 .. 5+x2, y= -5-y1 .. 5+y2];

[x = -7.65098815580740 .. 7.74869690455387, y = -6.38285102812737 .. 9.65238456868460]

Sol:= CodeTools:-Usage(DirectSearch:-GlobalOptima(BlackBox, Bounds));

memory used=1.23GiB, alloc change=-16.00MiB, cpu time=12.41s, real time=11.72s, gc time=1.48s

[8.328163048076931*10^(-9), Vector(2, {(1) = -2.497512161605566, (2) = 3.227836733786906}), 406]

Examine the relative errors:

RelErrs:= (Sol[2] - Minimum) /~ Minimum;

Vector(2, {(1) = 0.1602699930e-9, (2) = -0.9037365715e-9})

 


 

Download Ackley.mw

I will need to see your code to diagnose the problem, but my first guess is that you're storing large intermediate results from your complicated calculation in indexed variables. Thus, the memory that they use can't be reused; in our vernacular, we say that they aren't being garbage collected. The solution may be as simple as changing indexed variables to plain ones.
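Here's a sketch of what I mean; the procedure names are hypothetical stand-ins for parts of your calculation:

#Indexed variable: every intermediate result stays reachable through the table A,
#so none of that memory can be garbage collected.
for k to 1000 do A[k]:= BigIntermediateResult(k) end do:

#Plain variable: each pass overwrites tmp, so the previous result becomes collectible.
for k to 1000 do
   tmp:= BigIntermediateResult(k);
   Answer[k]:= ExtractSmallPart(tmp)
end do: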

The diff_table command allows you to use indexing (not subscripting) to specify the independent variables with respect to which derivatives are taken. Indices in Maple are placed in square brackets on input, and appear as subscripts in the prettyprinted output[*1]. So, you must do it like this:

restart:
with(PDETools):
U:= diff_table(u(x,t)):
declare(U[]);
PDE:= U[x,x] - U[t] = 0;  #Note the square brackets!
Infinitesimals(PDE);

Your input U_x,x - U_t = 0 is pretty much nonsense to Maple in a PDE context. Sometimes it's unfortunate that Maple allows you to enter such things without complaining.

[*1] As opposed to the prettyprinted output, you can view the raw, plain-text form of any command's output by doing

lprint(%);

after the command. Or you can apply it to a name, such as 

lprint(PDE);

Then you'll see the square brackets. This can often provide a "reality check" when you're baffled by some output.

The answer to your immediate problem is that you need to use expand to force through the multiplication. This case and more-general cases can be handled by

numer(e)  #numerator

where e is the expression that you want to work on. 
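For example (a made-up expression, since yours isn't shown):

e:= (x+1)*(1/x + 1/(x+1)):
numer(e);   # the multiplication is forced through; expected result: 2*x+1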

While it's not directly related to your specific question, I thought that it's worth pointing out that your procedure can be vastly simplified to 

MinDistPoint:= proc(
   A::[algebraic, algebraic], B::[algebraic, algebraic], P::[algebraic, algebraic]
)
description "Point on line AB at minimum distance from P";
local D:= <A-B>, R:= <0,-1;1,0>, V:= <<P>.D, -R.<A>.<B>>;
   [V.D, R.V.D] /~ (D.D)
end proc:

There are probably numerous other formulations of similar simplicity. This was just the first that I derived.
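As a quick sanity check, here it is applied to a made-up case, the line through A = [0, 0] and B = [2, 0] (the x-axis) and the point P = [1, 3]:

MinDistPoint([0,0], [2,0], [1,3]);
                      # expected: [1, 0]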
