Carl Love

Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are answers submitted by Carl Love

From what you've said in your two most-recent replies, I'm now convinced that this epsilon nonsense is the source of your troubles. Let's summarize what you're doing:

  1. You make up a somewhat arbitrary small positive number epsilon. You've used 1e-10. You have to admit that there's no scientific basis for this particular value.
  2. You compute tan(Pi/2+epsilon), thereby obtaining an arbitrary negative number of very large magnitude (approximately -1/epsilon).
  3. After several pages of calculations, you divide some terms by this number to produce yet another arbitrary small factor of approximately -epsilon.
  4. You claim to do this because the terms being divided are "negligible".
  5. You claim that when you do this in a "similar model", "I'm getting the results I want."
  6. Since a larger value of Digits (15) is giving you results that you don't "want", you ignore that truth and settle for the false security of using a smaller value of Digits (10). 
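To see how arbitrary the resulting factor is, compare two choices of epsilon (a sketch; the values 1e-10 and 1e-12 are just examples, like your 1e-10):

```maple
Digits:= 15:
evalf(tan(Pi/2 + 1e-10));  #roughly -1/epsilon = -1e10
evalf(tan(Pi/2 + 1e-12));  #roughly -1e12: a different epsilon, a different "large number"
```

Whatever you divide by this is made exactly as "negligible" as your choice of epsilon dictates, no more and no less.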

When summarized like that, it seems almost crazy. The terms are negligible because you're dividing them by a number of large magnitude; that doesn't mean that they were inherently negligible. If for some other valid reason they are in fact inherently negligible, why did you expend the effort to type them into the formula? You should simply omit them from the formula and include a comment such as "I am omitting the term ___ because it is negligible because of (some scientific reason)."

Are you under some political pressure to include those terms and you're trying to obfuscate the fact that they're artificially being removed?

While it's true that you can't divide by tan(Pi/2) in a numerical computation, you can multiply by cot(Pi/2) = 0. Isn't that equivalent?
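In Maple, the difference between the two is concrete:

```maple
cot(Pi/2);  #returns 0, exactly
tan(Pi/2);  #raises a division-by-zero error
```

So multiplying by cot(Pi/2) gives you the exact limiting behavior that dividing by tan(Pi/2 + epsilon) only approximates.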

A scientist's goal ought to be getting correct results, not the results that they want.

The differential order of the equation is 2, so you need 2 initial or boundary conditions, whereas you've supplied 1. The error message indicates that it's expecting you to supply a value for D(u)(0), i.e., u'(0). So, you can add something like D(u)(0) = 1 to SYS2.
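As a sketch with a made-up second-order ODE (your actual equation, and the value 1 for u'(0), are assumptions here):

```maple
ode:= diff(u(x), x, x) + u(x) = 0:
dsolve({ode, u(0)= 0, D(u)(0)= 1}, u(x));  #returns u(x) = sin(x)
```

With only u(0)= 0 supplied, dsolve would be left with an undetermined constant, which is what the error message is telling you.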

It would help if you show your code. When you say "the numerical solver", I assume that you mean LinearAlgebra:-Eigenvectors. I'll call your constructed matrix M. As you no doubt realize, M is also symmetric. The solver can produce more accurate results if you tell it this. So, instead of doing

LinearAlgebra:-Eigenvectors(M),

do

LinearAlgebra:-Eigenvectors(Matrix(M, shape= symmetric, datatype= float[8])).

Without having your actual matrix, I can't be sure if this will help. So, if it doesn't, please post your actual matrix.

You can't have a derivative satisfy some nice, smooth, normal condition at all points except for a few specific points at which you specify its value. That makes no sense mathematically. You might as well say that you can have more than one boundary value per derivative.

The Answer by user sand15 shows you how to correct the syntax so that dsolve won't reject your input, but its output is not a solution to your posed problem, nor can it ever be. On the other hand, if you change the problem to include an epsilon, as in his Reply to Kitonum, then it becomes possible.

The computational difference that gc() makes is the deletion and reinitialization of the remember tables of all procedures declared with option system. I suspect that the additional anomaly that you're having with kernelopts(memusage) is because its usage is coincidentally provoking a call to gc().

So, it would be entirely appropriate for you to call gc() before simplify. The only reason that use of gc is discouraged is its potential impact on efficiency (i.e., each extra gc() takes some time).

Your second argument to int should be of type list(name= range), such as [px= -infinity..infinity, py= -infinity..infinity, pz= -infinity..infinity], which could be abbreviated to [px,py,pz]=~ -infinity..infinity.
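For example, with a hypothetical Gaussian integrand standing in for yours:

```maple
int(exp(-(px^2 + py^2 + pz^2)), [px, py, pz] =~ -infinity..infinity);
#returns Pi^(3/2)
```

The elementwise operator =~ simply expands the abbreviated form into the full list [px= -infinity..infinity, py= -infinity..infinity, pz= -infinity..infinity].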

I'm not claiming that this will give you the result that you ultimately want. I'm just saying that you need to do this to overcome the syntax error that you currently face.

This is a univariate problem with finite bounds and no other constraints. For such a problem, the algorithm "Quadratic Interpolation"---which doesn't use an initial point---may be used; and indeed it is used in this case. You can see which algorithm is used and get a bunch of other information by setting infolevel[Optimization]:= 5. The algorithms are described on the help page ?Optimization,General,Methods. If you change the method, then an initial point can be used:

Optimization:-NLPSolve(sin(x)/x, x= 1..30, initialpoint= {x = 17}, method= branchandbound);

It seems that pdsolve is having trouble with your boundary condition at infinity. So, I changed infinity to a variable, Infinity, before passing the input system to pdsolve, and I applied limit(..., Infinity= infinity) to the output. This seemed to work (no undefined in the output), but I don't have time right now to check the accuracy of those results. Perhaps you can do that and report back here?
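In outline, the trick looks like this (a sketch only: SYS stands for your actual system, and it's assumed that pdsolve returns a symbolic equation to which limit can be mapped):

```maple
SYS1:= eval(SYS, infinity= Infinity):  #replace infinity by an unassigned name
sol:= pdsolve(SYS1):
map(limit, sol, Infinity= infinity);   #restore infinity in the returned solution
```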

This is a bug in Maple's 2D Input. It misinterprets &xor as &x or. I suggest that you switch to 1D Input (aka Maple Input). Using xor instead of &xor will not (in general) give you correct results when using the Logic package! You need to use the & forms of all logical operators because the rest of Maple will consider them inert and only the Logic commands will process them.

2D Input is complete garbage with no redeeming value. However, if you insist on using it, you can work around this bug by using `&xor`(a,b) wherever you would've used a &xor b.

You have an eigenvalue with algebraic multiplicity 4. Depending on the numeric values of the parameters, the matrix may be defective---that is, have an eigenvalue for which the number of linearly independent eigenvectors is less than the algebraic multiplicity of the eigenvalue.

It may be possible to break down the cases into elaborate nested piecewise expressions. These'll specify the eigenvectors for all possible inequality ranges of the parameters. It may be possible to do the required polynomial[1] algebra with solve(..., parametric...) (see ?solve,parametric), which is essentially the same thing as SolveTools:-Parametric (see ?SolveTools,Parametric). Then it may be possible to process each case with LinearAlgebra:-NullSpace to get the eigenvectors. If you decide that this is worth pursuing (and note where I've emphasized "may"), the first step is to specify allowed ranges for all parameters and to state whether they are real. This'll reduce the number of cases.


It may also be possible to get a "generic" solution: one where it's implicitly assumed that the parameters avoid all divide-by-zero[2] conditions, so that there are four linearly independent eigenvectors corresponding to the eigenvalue of multiplicity 4. This'll be easier than what I described in the previous paragraph, but we'll be assuming things that may not be true. That may not matter if you're going to substitute approximate decimal values for the parameters.

[1]These commands are only implemented for polynomials. To apply them to rational functions requires that the denominators be cleared and have their zeroing conditions analyzed separately.

[2]Row reduction of a matrix of polynomials requires division by polynomials. Careful case-wise analysis requires considering separately the conditions that would make the denominators zero.
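A tiny example of what "defective" means here (a 2x2 Jordan block standing in for your 4x4 case):

```maple
M:= <<2, 0> | <1, 2>>;         #eigenvalue 2 with algebraic multiplicity 2
LinearAlgebra:-Eigenvectors(M);
#The returned eigenvector matrix contains a zero column: only one linearly
#independent eigenvector exists, so M is defective.
```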

There are undocumented commands SaveSession and RestoreSession. These are written in top-level Maple in a clean, simple style, so they're easy to read, understand, and modify. They save and restore more than just user variables: kernelopts settings, interface settings, debugger settings, environment variables, etc. These procedures may not have been updated for several years, so there may be some newer things worth saving (such as recently added kernelopts and interface settings) that they don't yet handle and that you may want to add.

When you need to scan through an entire rtable (any Vector, Matrix, or Array), probably the best command to use is rtable_scanblock. This command is builtin and extremely fast. The following worksheet contains code to solve your reverse-index problem and a comprehensive collection of convenient access methods. It addresses all the issues discussed by the other respondents. I doubt that there can be anything faster.

This code builds the reverse index once, and quickly, so that all subsequent inquiries are essentially instantaneous. It also handles the lookup of individual entries, subsets of entries, non-existent entries, or any combination of those.

RtableReverseIndex.mw

MaplePrimes is refusing to display the worksheet inline, so you'll need to download it to get all the details. The main block of code in the worksheet uses a feature new to Maple 2018. If you're using an earlier version, I can make a minor change (it'd only take 2 minutes) so that it'll work in earlier versions. Let me know.

Here's the worksheet in plaintext with the output removed:

restart:
This procedure contains everything needed to make a reusable reverse index for any rtable (any Vector, Matrix, or Array) of any number of dimensions.
RtableRI:= proc(A::rtable, $)
local T:= table(), v;
   rtable_scanblock(A, [], (v,j)-> (T[v][j]:= ())); #Build reverse index.
   for v in indices(T, 'nolist') do #for each entry...
      #...count indices; make the entry's indices a list; store both in a vector:  
      T[v]:= (J-> <nops(J)|J>)([indices(T[v], 'nolist')])     
   od;
   eval(T)
end proc:

The above procedure uses a coding feature (embedded assignment) that was only made available in Maple 2018. If you're using an earlier Maple version, use the following:
RtableRI:= proc(A::rtable, $)
local T:= table(), v;
   rtable_scanblock(A, [], proc(v,j) T[v][j]:= () end proc); #Build reverse index.
   for v in indices(T, 'nolist') do
      T[v]:= (J-> <nops(J)|J>)([indices(T[v], 'nolist')])     
   od;
   eval(T)
end proc:

It works like this....
A:= <a,a; b,c>;
T:= RtableRI(A);
Lookup entry a:
T[a];
It occurs 2 times and at the given indices. 

A tabular presentation of the entire table can be made thus:
M:= (T-> <<lhs~(T)> | <rhs~(T)[]>>)([entries(T, 'pairs')]);
And a DataFrame can be used for a slightly more-elegant presentation:
DF:= DataFrame(op~(M[..,2..]), rows= M[..,1], columns= [Count, Indices]);
All of the above, plus some additional convenient access methods to the data, are collected together in the following module:
RtableReverseIndex:= module()
description
   "Builds a reverse index of any rtable so that subsequent lookup of entries "
   "is essentially instantaneous. Lookup results are returned as DataFrames."
;
option 
   `Author: Carl Love <carl.j.love@gmail.com> 23-Aug-2018`, 
   object
;
local 
   #just a convenient abbreviation:
   Inds::static:= proc(T::table) option inline; indices(T, 'nolist') end proc,

   #common output row for non-existent entries, if they're searched for:
   ZRow::static:= rtable(1..2, {1= 0, 2= ()}),

   ###### some stuff for formatting DataFrames ##########################################
   Cols::static:= 'columns'= ['Count', 'Indices'],                                      #
   Names::static:= proc(J::{list,set}) option inline; convert~([J[]], 'name') end proc, #
   Rows::static:= (J::{list,set})-> 'rows'= Names([J[]]),                               #
   ModulePrint::static:= (A)-> A:-DF, #Prettyprint raw objects as their DataFrame.      #
   ###### end of formatting stuff #######################################################

   #main access point: the new-object-constructing procedure:
   ModuleApply::static:= proc(A::rtable, $)::RtableReverseIndex; 
   local New:= Object(RtableReverseIndex);
      New:-DF:= DataFrame(
         <entries((New:-Tab:= RtableRI(A)), 'nolist', 'indexorder')>, 
         Rows((New:-Ent:= {Inds(New:-Tab)})), Cols
      ); 
      New
   end proc
;
export
   ###### the dynamic part of the object #
   Tab::table,    #the master index      #
   DF::DataFrame, #the master DataFrame  #
   Ent::set,      #the rtable's entries  #
   ###### end of dynamic part ############

   #the index-creating procedure:
   RtableRI::static:= proc(A::rtable, $)::table; 
   local T:= table(), v;
      rtable_scanblock(A, [], (v,j)-> (T[v][j]:= ())); #Build index.
      #Count each entry's indices and convert them to a sequence.
      for v in Inds(T) do T[v]:= (()-> rtable(1..2, {1= nargs, 2= args}))(Inds(T[v])) od; 
      eval(T)
   end proc,

   #the index-accessing procedure: 
   `?[]`::static:= proc(A::RtableReverseIndex, J::list, $)
   local 
      J0:= {J[]} minus A:-Ent,
      R:= `if`(
         J0 = {}, #all requested entries exist
         A:-DF,
         #Append a row for each non-existent entry:  
         Append(A:-DF, DataFrame(`<,>`(ZRow $ nops(J0)), Rows(J0), Cols))
      )[Names(`if`(J=[], A:-Ent, J)), ..]
  ;
      `if`(nops(J)=1, [R[1,1], [R[1,2]]][], R) 
   end proc 
;
end module: 
      
Some examples:
interface(rtablesize= 30):
randomize(1): #just to stabilize these examples
A:= LinearAlgebra:-RandomMatrix((5,5), generator= rand(-5..5));
ARI:= RtableReverseIndex(A);
You can request the sub-table for any subset of entries, in any order, including entries that don't exist:
ARI[4,-1,7,1];
If you request a single entry, then just the count and the list of indices is returned, without the surrounding DataFrame:
ARI[4];
That applies to non-existent entries also:
ARI[7];
Test on a large example:
A:= LinearAlgebra:-RandomMatrix((2^10)$2, generator= rand(-5..5));
So, over a million entries. 
ARI:= CodeTools:-Usage(RtableReverseIndex(A)):
Once the index is created, all subsequent lookups are essentially instantaneous. Here, I extract the column of counts:
CodeTools:-Usage(
   ARI[][Count]
);
And here I extract (without displaying) the list of all indices for entry 0:
ZeroIndices:= CodeTools:-Usage(
   ARI[0][2]
):
nops(ZeroIndices);
A small sample of those:
ZeroIndices[100..150];

 

You can simply integrate your expression. There's no need to simplify anything or assume that anything is real. The following returns 2*Pi:

int(conjugate(exp(I*phi))*exp(I*phi), phi= 0..2*Pi);

Also, the product of a complex function and its conjugate is the square of its absolute value (regardless of whether the parameters are real). So, the following also returns 2*Pi:

int(abs(exp(I*phi))^2, phi= 0..2*Pi);

You are trying to compute the expected value (E(.)) of a linear combination of r.v.s. As is very well known, expected value is a linear operator. So the answer is 0.3*E(p1) + 0.7*E(p2). Now, you can get this with Statistics:-Mean, as has been pointed out, or you can even use a pocket calculator: 0.3*1/(1+100) + 0.7*1/(1+50).
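The pocket-calculator check (the means 1/(1+100) and 1/(1+50) are taken from your stated distributions):

```maple
evalf(0.3*1/(1+100) + 0.7*1/(1+50));  #0.0166957...
```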

Maple's Statistics package is not very good at finding the PDFs of combinations of r.v.s, even linear combinations. I suspect that this has something to do with difficulty simplifying combinations of the piecewise expressions that are used for the PDFs of distributions whose support is not the whole real line.

For array input: As far as I can tell, it's not directly possible. But you can put your array data through CurveFitting:-Spline to turn it into a suitably smooth piecewise polynomial that interpolates the data.

For array output: The easiest thing is to make a plot and extract the data matrix from it. For most plots P, the data matrix is op([1,1], P).
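Sketches of both directions (the data points and the sin(x) plot are made up for illustration):

```maple
#Array input -> smooth function: interpolate the data with a spline.
X:= [0, 1, 2, 3]:  Y:= [0, 1, 4, 9]:
f:= CurveFitting:-Spline(X, Y, x);  #a piecewise cubic polynomial in x

#Array output: extract the data matrix from a plot.
P:= plot(sin(x), x= 0..Pi):
M:= op([1,1], P);  #Matrix of (x,y) points sampled by plot
```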
