pagan


 

 

"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix a path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are replies submitted by pagan

@Jim Spamming us with requests isn't going to help. You should put your followups in the earlier thread. Or at the very least show worksheets of what you've tried so far and what parts don't work as you'd like.

The "e" in the abserr and initstep options likely denotes exponentiation by 10. Those seem clear.

The unclear ones were the S(0)=5/e^7 and E(0)=2/e^7. I suspect that the submitter mistranscribed, and that those refer to 5e-7 and 2e-7 with exponentiation in base 10 and not to 5*exp(7) and 2*exp(7).
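The three readings differ enormously in magnitude, which is easy to check (a quick side calculation, not from the original post):

```maple
evalf(5e-7);      # 5.0*10^(-7)
evalf(5/exp(7));  # roughly 0.00456
evalf(5*exp(7));  # roughly 5483.2
```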

That would make the whole thing more like this.

dsys := {diff(E(t), t) = -kf*E(t)*S(t)+kr*ES(t)+kcat*ES(t),

                diff(ES(t), t) = kf*E(t)*S(t)-kr*ES(t)-kcat*ES(t),

                diff(P(t), t) = kcat*ES(t),

                diff(S(t), t) = -kf*E(t)*S(t)+kr*ES(t)};
  
kf := 10^6; kr := 10^(-4); kcat := .1;
                        
isc := {E(0) = 2e-7, ES(0) = 0, P(0) = 0, S(0) = 5e-7};
        
dsol := dsolve(dsys union isc, numeric, method = gear[polyextr],
                initstep = 0.15e-1, minstep = 10^(-11),
                abserr = 0.1e-4, relerr = 0.1e-5);

We'll only know for sure if the submitter tells us.


@viraghj Well, Maple will not raise Digits even to accommodate floating-point roundoff error for arithmetic computation of compound scalar expressions, in general. Its model of floating-point computation, vis-a-vis precision and accuracy, is a deeper issue. The evalf(Int(...)) routines do admit a tolerance (epsilon) option and strive to attain it, but even there the working precision is not necessarily raised to account for roundoff error during functional evaluation of the integrand. Your illustration by example does not demonstrate the contrary (as the need might not have arisen for the particular example).

Back to your example: in principle, Maple could try to raise Digits internally following estimation of condition numbers at various internal stages.
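For comparison, the epsilon option on evalf(Int(...)) can be requested like this (a minimal sketch; the integrand is just for illustration):

```maple
# ask for an accuracy target of 1e-12; note that Digits itself is not
# raised automatically to cover roundoff in evaluating the integrand
evalf(Int(exp(-x^2), x = 0 .. 10, epsilon = 1e-12));
```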


@Markiyan Hirnyk I used the GlobalOptimization package to find this point.

I did not have to supply a good initial point in order for it to get that good a fit. In my opinion, the less specific detail that you are forced to supply, the better.

And so obtaining a particular fit is done better without requiring a close initial point. And better also if done with a wider range for the parameters than a narrow one (and better still with no range than with any).

Indeed, having to supply a very judicious or fortuitous initial point goes against the spirit of having a global rather than a local optimizer. The best global solvers will succeed, without being told just where to look, even in the presence of many local minima.

Comparison of solvers is still hard to do comprehensively, because there will always be some problems that solver A can do but solver B cannot, and vice versa.

Using an objective procedure `obj` that simply added the squares of the reciprocals, I did the following and obtained a reproducible result in under a minute on an Intel i5 processor:

GlobalOptimization:-GlobalSolve(
                 'obj'(a,b,c,d,e)
               , method=branchandbound
               , evaluationlimit=100000
               , a = 0..10, b = -10..10, c = 0..100, d = 0..7, e = 0..4
                                );

          [0.00112973296049516327, [a = 2.6174720167848577,
                                    b = -1.7195008832497265,
                                    c = 2.309346760329378, 
                                    d = 1.5033851325407128,
                                    e = 1.8458792860566562] ]

I would believe that increasing the timelimit and evaluationlimit to be very high, and possibly switching to the multistart method, might produce an even better residual. But (again), what impressed me here is that a good result was found quickly, without needing a tight range or initial point.
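Such a variant might look like the following (a sketch only; `obj` is the same objective as above, and the timelimit value is arbitrary):

```maple
GlobalOptimization:-GlobalSolve(
                 'obj'(a,b,c,d,e)
               , method=multistart
               , timelimit=600
               , evaluationlimit=10^6
               , a = 0..10, b = -10..10, c = 0..100, d = 0..7, e = 0..4
                                );
```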


And if the working precision is higher, then the coefficients of those unexpected terms will become small. Once small "enough", they could also be sieved out after the fact. Or a different method could be used.

restart;
A := Matrix([[1,x^2,x^4,x^6,x^8,x^10],
             [1.,1.4,2.1,3.0,4.3,6.3],
             [1.,3.6,13.,46.,160.,590.],
             [1.,3.7,14.,51.,190.,690.],
             [1.,3.7,14.,52.,200.,740.],
             [1.,2.8,7.9,22.,63.,180.]]):

LinearAlgebra:-Determinant(A,method=minor);

  -804.820 + 4224.20 x^2 - 3846.41 x^4 + 982.420 x^6 - 29.183 x^10 + 47.673 x^8

evalf[16](LinearAlgebra:-Determinant(A));

  -3.2 10^(-11) x + 4224.199999999948 x^2 + 982.4200000000010 x^6
     - 3846.409999999956 x^4 + 47.67299999998294 x^8 - 29.18299999999682 x^10
     + 2.2 10^(-11) x^3 + 3.91 10^(-11) x^5 - 2.123 10^(-11) x^7
     - 804.819999999943 - 4.612 10^(-11) x^9

fnormal(%);

  -804.8200000 + 4224.200000 x^2 + 982.4200000 x^6 - 3846.410000 x^4
     + 47.67300000 x^8 - 29.18300000 x^10
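For example, repeating at higher working precision and then sieving with an explicit tolerance (a sketch; the 30 digits and the 1e-20 cutoff are arbitrary choices):

```maple
# fnormal(e, digits, epsilon) zeroes float coefficients smaller than epsilon
fnormal( evalf[30](LinearAlgebra:-Determinant(A)), 30, 1e-20 );
```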


There are quite a few usage problems in your revised worksheet.

Some expressions are misused as if they were procedures. Some of those are also passed arguments as if to set the dsolve/numeric parameters, even though no machinery was set up to allow that. Some expressions containing global `r` are made into procedures by wrapping them with proc(r)..end proc (even though that fails to work, by confusing global r with formal parameter r) instead of using `unapply` or some other valid technique.
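The distinction can be seen on a toy expression (a made-up example, not from the worksheet):

```maple
expr := r^2 + sin(r):

f := proc(r) expr end proc:  # broken: the r inside expr is the global r
f(3);                        # still returns r^2 + sin(r)

g := unapply(expr, r):       # correct: substitutes into the expression
g(3);                        # returns 9 + sin(3)
```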

Attached is an edited copy. Hopefully, it will help. Please let me know if parts are unclear.

Your triply-nested loops entail many evaluations, all told. Perhaps you could consider reducing the number of iterations. Also, you'll likely want to store the computed values if you intend to refer to them later on, e.g. for plotting.

At the start, you said you wanted it quicker. But there is no free lunch. Using dsolve/numeric's "parameters" options can avoid unnecessary processing overhead due to repeatedly calling dsolve(...numeric). But it still has to re-solve numerically, for each parameter value set.
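The pattern is roughly as follows (a sketch on a toy ODE, not your system; the parameter name k is made up):

```maple
dsol := dsolve( {diff(y(t),t) = -k*y(t), y(0) = 1},
                numeric, parameters = [k] ):

# set a parameter value, then query; only the numerical integration is
# redone for each new value, not the whole dsolve setup
dsol(parameters = [k = 2.5]):
dsol(1.0);
```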

scalingedited.mw

In the worksheet I attach here, I give a few alternate ways to do some of the subtasks.

I meant to ask earlier: are you using ApproximateInt just to do numeric integration, and if so then why not use a more appropriate evalf(Int(..)) technique instead? I wonder, too, whether that "integration" could be incorporated right into the dsolve/numeric solving, perhaps by coupling more terms into the system.


@Mac Dude It works ok in Maple 16.

You can submit bug reports here.

The workaround for Maple 15 can probably be made to work automatically, adjusting itself for the data at hand.


@dman How many correct digits you need to retain in the roots depends on what subsequent computations you intend to do with them.

