acer


MaplePrimes Activity


These are replies submitted by acer

The recursive version calls itself. Your original version (and Joe's amended version) does not. That's the difference: one of them recurses, and the others do not.
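
To see the distinction in the smallest possible setting, here is a toy pair of procedures (my own example, not yours or Joe's): one computes a factorial by calling itself, the other by looping.

> fact_rec := proc(n)
>   if n <= 1 then 1
>   else n * fact_rec(n-1)  # the procedure calls itself: recursion
>   end if;
> end proc:

> fact_iter := proc(n)
>   local r, i;
>   r := 1;
>   for i from 2 to n do r := r*i end do;  # a plain loop, no self-call
>   r;
> end proc:

> fact_rec(5), fact_iter(5);
                                   120, 120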

acer

I will submit an SCR against evalhf returning Float(infinity) when evaluating abs(z) for your example of z=1e200+I*1e200, Axel.

I notice that Compiler:-Compile seems to do OK with that.

> restart:

> z:=10^200+I*10^200:
> evalhf(abs(z));
Float(infinity)

> p:=proc(x::complex(numeric)) abs(x); end proc:
> cp:=Compiler:-Compile(p):
> trace(evalf):
> cp(z); # is not using evalf
                   0.141421356237309504 10^201

It would be better if evalhf and Compiler:-Compile shared their more robust (complex arithmetic) runtime features.
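
For what it's worth, the Float(infinity) presumably arises from forming x^2+y^2 directly, which overflows the double-precision range even though |z| itself does not. The usual guard is a hypot-style rescaling; the following is only a sketch of that idea (whether the compiled runtime actually does something like this is my guess).

> rabs := proc(x, y)
>   local m;
>   m := max(abs(x), abs(y));
>   if m = 0 then 0.
>   else m * sqrt((x/m)^2 + (y/m)^2)  # squares of the scaled parts cannot overflow
>   end if;
> end proc:

> evalhf( rabs(1e200, 1e200) );  # finite, about 0.1414 10^201, where evalhf(abs(z)) overflowed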

acer

It is recursive because the procedure bisection calls itself.

It calls itself with different arguments, according to whether f(c) has the same sign as f(a). (I just reused your logic there; you could improve it.)

When it calls itself, it replaces either argument a or b by c, according to your logic as mentioned.

That whole process, of bisection calling itself with new arguments, over and over, happens until the tolerance eps is met.

You can issue trace(bisection) before calling it with example inputs, to see more printed detail of what it does as it runs. See the ?trace help page.
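
For reference, a bare-bones version of that scheme looks something like the following (a sketch only; I am assuming a calling sequence bisection(f, a, b, eps) like yours).

> bisection := proc(f, a, b, eps)
>   local c;
>   c := (a + b)/2;                 # midpoint of the current bracket
>   if abs(b - a) < eps then c      # tolerance met: stop recursing
>   elif sign(f(c)) = sign(f(a)) then
>     bisection(f, c, b, eps)       # f(c) has the sign of f(a): replace a by c
>   else
>     bisection(f, a, c, eps)       # otherwise: replace b by c
>   end if;
> end proc:

> bisection( x -> x^2 - 2, 1.0, 2.0, 1e-6 );  # converges to sqrt(2), about 1.414214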

acer

yankyank (with its extra simplify) isn't actually needed here; yank will do. Indeed, if you hit yank's result with a final combine then you get pretty much what I posted above. I.e., I did what yank does.

acer

It often surprises me that floating-point accuracy (as opposed to working precision!) does not come up here on MaplePrimes as a topic of intense discussion.

Routines such as evalf/Int and Optimization's exports have numerical tolerance parameters. And "atomic" arithmetic operations, single calls to trig functions, and some special functions compute results accurate to within so many ulps. But other than that, for compound expressions, all bets are off. The expression might be badly conditioned for floating-point evaluation.

Now, Maple has Digits as a control over working precision, but it has no more general specifier of requested accuracy. Compare with Mathematica, which claims to have both. So, in Maple one must either do the analysis oneself or raise Digits to some problem-specific mystery value. The evalr command (and its less well-known friend, shake) is not really strong enough to help much. An argument that I have sometimes heard against making any progress in this area is that it's an open field of study, and partial, fuzzily-bounded coverage is to be avoided. If we all accepted that sort of thinking all the time, Maple might have normal but no radnormal, evala/Normal, or simplify/trig, let alone simplify proper.
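
As a small illustration of the distinction, consider catastrophic cancellation (my own toy example): the working precision is honoured throughout, yet the accuracy of the final result is far less than Digits might suggest.

> Digits := 10:
> 1 - cos(1e-8);  # returns 0. -- every significant digit lost to cancellation
> Digits := 30:
> 1 - cos(1e-8);  # now about 0.5e-16, though with roughly 16 fewer accurate digits than Digits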

acer

This is shorter than what I had before, but still multiplies and divides by exp(t). Of course, it also does ln-of-exp as another "arbitrary" operation-with-its-inverse.

> ee:=ln(cosh(t)):

> combine(simplify(expand(
>   ln(exp(t)*expand(exp(-t)*(exp(convert(ee,exp)))))
>                         ))) assuming real;
                          ln(1/2 + 1/2 exp(-2 t)) + t

> # or, using 1/exp(t), to show it does not rely on that particular exp "form".
> combine(simplify(expand(
>   ln(exp(t)*expand(1/exp(t)*(exp(convert(ee,exp)))))
>                         ))) assuming real;
                          ln(1/2 + 1/2 exp(-2 t)) + t
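
(As a quick sanity check, one can test the difference against zero rather than eyeballing forms -- though I have not verified that simplify recognizes it without further prodding:)

> simplify( ee - (ln(1/2 + 1/2*exp(-2*t)) + t) ) assuming real;  # expect 0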

acer

You've started with lots of the interesting bits already done, such as gracefully pulling out that exp(t) factor while keeping exp(-2*t) instead of 1/exp(2*t).

So, one can do it by first multiplying by exp(-t), then multiplying shortly after by exp(t). But I consider that a kludge, since it's not fully automatic -- it sorta depends on fortuitously choosing that particular factor of exp(t) to introduce. And still there can be the problem of multiplying through by the exp(-t) and getting 1/exp(2*t) rather than exp(-2*t).

Below, the intermediate conversion back to trigh is there merely to allow the exp(-t) factor to survive multiplication by exp(t). Quite ugly, all told.

> ee:=ln(cosh(t)):

> exp(t)*convert(exp(-t)*(exp(convert(ee,exp))),trigh);
                      exp(t) (cosh(t)^2 - sinh(t) cosh(t))
 
> combine(simplify(convert(expand(ln(%)),exp))) assuming real;
                          t + ln(1/2 + 1/2 exp(-2 t))

acer

Hi Doug,

Thanks for the interesting write-up.

I was wondering what you might be able to report about two items that you have yourself mentioned here in the past.

The first relates to the inability to download .mw files using Microsoft Outlook's webmail access, which you reported as a serious restriction for your university's use of Maple in courses.

The second relates to a post you made about using Maplets vs. Embedded Components, in which you mentioned the clear lack of programmatic control of Embedded Components.

May I ask, did you bring up these subjects?

acer

What I was trying to get at before is that testing whether the difference of two expressions is zero is a more practical general strategy than trying to simplify one form into the other with a general manipulation rule.

Nobody is going to argue -- sensibly, I suppose -- about the "simplest" form of an expression that is exactly equal to zero. So rather than wondering what the "simplest" form of either the RHS or the LHS of your (here, trigonometric) equation is, you can instead subtract them and test for zero.
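
For example (a toy identity of my own choosing):

> eq := sin(t)^2 = 1 - cos(t)^2:
> simplify( lhs(eq) - rhs(eq) );
                                       0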

acer
