Dr. John May

Maplesoft
Pasadena, California, United States

I have been a part of the Mathematical Software Group at Maplesoft since 2007. I have a Ph.D in Mathematics from North Carolina State University as well as Masters and Bachelors degrees from the University of Oregon. I have been working on research in computational mathematics since 1997. I currently work on symbolic solvers and visualization as well as other subsystems of Maple.

MaplePrimes Activity


These are replies submitted by John May

...Except this is not quite right:

sols2:=solve(Equations(sol[1], R));
sys2:=remove(x->is(x=0),eval(sys,sols2));
# set y2 = 1 and solve for x8
solve(eval(sys2,y2=1));
allvalues(%)[1];
                                             1/2
                  61042969   3726368095584961
            {x8 = -------- + -------------------}
                   31250            31250

sols3:={(op(%), y2=1)};
radnormal(eval(eval(sys, sols2), sols3));
                 [0, 0, 0, 0, 0, 0, 0, 0, 0]

So clearly y2 and x8 are not really free variables, and something weird is happening inside solve (probably related to why vanilla solve does not find an answer to this system).

John

It seems pretty rough and pretty out of date, but there is a Maple project at WikiBooks for any aspiring Wikinauts:

http://en.wikibooks.org/wiki/Maple

John

For what it is worth, I was able to install and run the Maple Standard GUI on an Asus Eee 701 (4 GB SSD and 512 MB RAM).  The space is pretty tight on the SSD, so I installed to an ext2 formatted SDHC card, but if you started with a full 1.3 GB free, you would probably have enough space to install it directly to the SSD.  If you just have the 700MB free for Maple, you can play some games to use a USB stick or SD card as tmp space,  and still install just fine.

Anyway, I plan to mostly run TTY maple on this.  Some initial tests show that it is about 1/4 to 1/3 the speed of my Core2 desktop machine running some solve() benchmarks.

John

Here is what I tried (edit: jakubi's solution above --^ is a bit more compact)

A1a := A4*(cosh(B*L)*v-cos(B*L)*v+2*B*L*sin(B*L))/((sinh(B*L)-sin(B*L))*v);

A1b := A4*((exp(B*L))^2*v+v-2*cos(B*L)*exp(B*L)*v+4*sin(B*L)*exp(B*L)*B*L) / (v*((exp(B*L))^2-1-2*sin(B*L)*exp(B*L)));

Do this one part at a time:

aa1, aa2:= op(collect(A1a, v));
bb1, bb2:= op(collect(A1b, v));

Start with the parts without v:

aa1:

                           A4 (cosh(B L) - cos(B L))
                           -------------------------
                             sinh(B L) - sin(B L)

bb1:

                               2
                   A4 (exp(B L)  + 1 - 2 cos(B L) exp(B L))
                   ----------------------------------------
                             2
                     exp(B L)  - 1 - 2 sin(B L) exp(B L)

work on bb1:

algsubs(exp(B*L)=sinh(B*L)+cosh(B*L), bb1):
simplify(%):
cc1:=factor(%);

                A4 (sin(B L) + sinh(B L)) (-cosh(B L) + cos(B L))
              - -------------------------------------------------
                                    2               2
                           cosh(B L)  - 2 + cos(B L)

numer(cc1)/subs(cos(B*L)^2=-sin(B*L)^2+1, cosh(B*L)^2=sinh(B*L)^2+1, denom(cc1)):
dd1:=factor(%); # =normal(aa1)

And something similar should work for the part with v.  Maybe someone else has a simpler way to do this.
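As a quick sanity check that A1a and A1b really are the same expression overall, you can compare them numerically at a random test point. The same idea works inside Maple with evalf of the difference; the sketch below uses Python/SymPy purely so it is easy to reproduce outside a Maple session, and the test point values are arbitrary:

```python
import sympy as sp

A4, B, L, v = sp.symbols('A4 B L v')

# the two forms of the expression from the post
A1a = A4*(sp.cosh(B*L)*v - sp.cos(B*L)*v + 2*B*L*sp.sin(B*L)) \
      / ((sp.sinh(B*L) - sp.sin(B*L))*v)
A1b = A4*(sp.exp(B*L)**2*v + v - 2*sp.cos(B*L)*sp.exp(B*L)*v
          + 4*sp.sin(B*L)*sp.exp(B*L)*B*L) \
      / (v*(sp.exp(B*L)**2 - 1 - 2*sp.sin(B*L)*sp.exp(B*L)))

# evaluate the difference at an arbitrary nonzero test point
vals = {A4: 1.3, B: 0.7, L: 2.1, v: 0.9}
diff = (A1a - A1b).evalf(subs=vals)
print(abs(diff) < 1e-10)  # → True
```

Agreement at a random point is not a proof, of course, but it is a cheap way to confirm you have not dropped a sign while massaging the expressions by hand.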

What was the original command that gave these expressions?  It might be easier to work from that side.

John

"Wow, thank you very much Georgios. How can it be that Maple does not automatically use expand first?"

It turns out that it is not always a good idea to expand. Here is an extreme example:

solve((x^2+x-1)^10000,x);

is perhaps slower than it should be, but it returns an answer, while

solve(expand((x^2+x-1)^10000),x);

uses more time and all available memory on my machine before I kill it.

John

This seems to be one of those systems where Maple's resultant based solver surprises us all and cracks it in less than a minute while Gröbner and Triangular decomposition methods don't seem to have a chance.

It definitely helps that it is very sparse and only quadratic.

John

Of course you're right. :D  Post now corrected.

It is also worth noting that many of the exact routines that do float->rational conversion (dsolve, solve, limit, etc.) use convert/rational with the argument 'exact', which gives the following behavior:

convert(evalf(1/3), rational, exact);
                                  3333333333
                                  -----------
                                  10000000000
convert(evalf(1/3), rational);       
                                      1/3

identify(evalf(1/3));         
                                      1/3

And it can be worse with hardware floats:

convert(evalhf(1/3), rational, exact);
                              66666666666666663
                              ------------------
                              200000000000000000

convert(evalhf(1/3), rational);       
                                      1/3

identify(evalhf(1/3));                
                             0.333333333333333315

If you want to compute the pixel-by-pixel error between the original image and the low-rank "compressed" image you can compute the matrix norm of the differences between the two images:
LinearAlgebra:-MatrixNorm(faceM-face8, Frobenius); # face8 is the rank=4 image
   78.987

A well-known theorem (Eckart-Young) says that this norm is equal to the Euclidean norm of the vector S[5..512], that is: sqrt(S[5]^2+S[6]^2+...+S[511]^2+S[512]^2).
Thus, it is pretty easy to plot a graph of how the pixel-by-pixel error decreases as you increase the rank of the compressed image:

[5480_rankvserror.jpeg: plot of pixel-by-pixel error vs. rank of the compressed image]

And the curve stays pretty smooth, decreasing to 0 error at rank 512.  This plot confirms the information from the gaps in the singular values: most of the information about the image is in the first 3 singular vectors, and after rank 7, additional increases in rank don't buy you much.  But that rank 3 approximation is still pretty far from the original image.
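The claim that the rank-k Frobenius error equals the norm of the discarded singular values is easy to check numerically. Here is a minimal NumPy sketch on a random stand-in matrix (not the face image from the worksheet):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))  # stand-in for the image matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 4
Ak = (U[:, :k] * S[:k]) @ Vt[:k, :]  # best rank-k approximation

err = np.linalg.norm(A - Ak, 'fro')  # "pixel-by-pixel" error
tail = np.sqrt(np.sum(S[k:] ** 2))   # norm of the discarded singular values
print(np.isclose(err, tail))  # → True
```

This is why the error curve is completely determined by the singular value vector S: no extra norm computations are actually needed to draw it.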

You could also use _EnvExplicit or the "Explicit" argument to solve to force it to return radical form instead of RootOf form:
solve(1/(40^2-x^2)^(1/2)+1/(30^2-x^2)^(1/2) = 1/10,x,'Explicit');

 


John May
Mathematical Software
Maplesoft

It is actually not true that all matrices have a clear natural gap.  In approximate algebra applications you can run across matrices where the singular values are pretty much evenly spaced.

That said, this image has a very large ("natural") gap between the third and fourth singular values:
# S is the vector of singular values from the worksheet

Srat := [seq(S[i]/S[i+1], i = 1 .. 511)]:
SSrat:=sort(Srat);
evalf[5](SSrat[-8..-1]);
  [1.0875, 1.1044, 1.1639, 1.1741, 1.2214, 1.2647, 1.2871, 4.1747]
member(SSrat[-1], Srat, 'loc'): loc;
  3

And this is the largest gap by a pretty large margin.  The next five biggest gaps are at 4, 6, 7, 5, and 1 respectively.  This is pretty interesting: it suggests you get a lot of information when adding in the first 7 or so singular vector pairs, but that it tapers off quite a bit after that.  I think you can kind of see this in the rank 8 vs. rank 32 pictures.
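The same gap-hunting trick is a standard way to estimate numerical rank in general. A small NumPy sketch on a synthetic low-rank-plus-noise matrix (made up for illustration, not the image data):

```python
import numpy as np

rng = np.random.default_rng(1)
# rank-3 signal plus small noise: the big singular value gap marks the true rank
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 100))
A += 1e-3 * rng.standard_normal((100, 100))

S = np.linalg.svd(A, compute_uv=False)
ratios = S[:-1] / S[1:]            # consecutive ratios, like Srat above
gap = int(np.argmax(ratios)) + 1   # 1-based position of the largest gap
print(gap)  # → 3
```

When the noise is small relative to the signal, the largest ratio reliably lands at the true rank; as noted above, though, not every matrix has such a clean gap.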

I'll post the rank 3 image tomorrow for comparison.

Tip for older computer (e.g. first-gen G4 PowerBook) users: run the worksheet on a 256x256 image instead.
