Dr. John May
Maplesoft
Pasadena, California, United States

I have been a part of the Mathematical Software Group at Maplesoft since 2007. I have a Ph.D. in Mathematics from North Carolina State University as well as Master's and Bachelor's degrees from the University of Oregon. I have been working on research in computational mathematics since 1997. I currently work on symbolic solvers and visualization as well as other subsystems of Maple.

MaplePrimes Activity


These are replies submitted by John May

Here is what I tried (edit: jakubi's solution above --^ is a bit more compact)

A1a := A4*(cosh(B*L)*v-cos(B*L)*v+2*B*L*sin(B*L))/((sinh(B*L)-sin(B*L))*v);

A1b := A4*((exp(B*L))^2*v+v-2*cos(B*L)*exp(B*L)*v+4*sin(B*L)*exp(B*L)*B*L) / (v*((exp(B*L))^2-1-2*sin(B*L)*exp(B*L)));

Do this one part at a time:

aa1, aa2:= op(collect(A1a, v));
bb1, bb2:= op(collect(A1b, v));

Start with the parts without v:

aa1:

                           A4 (cosh(B L) - cos(B L))
                           -------------------------
                             sinh(B L) - sin(B L)

bb1:

                               2
                   A4 (exp(B L)  + 1 - 2 cos(B L) exp(B L))
                   ----------------------------------------
                             2
                     exp(B L)  - 1 - 2 sin(B L) exp(B L)

work on bb1:

algsubs(exp(B*L)=sinh(B*L)+cosh(B*L), bb1):
simplify(%):
cc1:=factor(%);

                A4 (sin(B L) + sinh(B L)) (-cosh(B L) + cos(B L))
              - -------------------------------------------------
                                    2               2
                           cosh(B L)  - 2 + cos(B L)

numer(cc1)/subs(cos(B*L)^2=-sin(B*L)^2+1, cosh(B*L)^2=sinh(B*L)^2+1, denom(cc1)):
dd1:=factor(%); # =normal(aa1)

And something similar should work for the part with v.  Maybe someone else has a simpler way to do this.
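For instance, the same sequence of rewrites might be repeated on the v-parts (an untested sketch, reusing bb2 and aa2 from the collect step above):

```maple
# Untested sketch: apply the same exp -> sinh/cosh rewrite to the part with v
algsubs(exp(B*L) = sinh(B*L) + cosh(B*L), bb2):
simplify(%):
cc2 := factor(%);
numer(cc2)/subs(cos(B*L)^2 = -sin(B*L)^2 + 1,
                cosh(B*L)^2 = sinh(B*L)^2 + 1, denom(cc2)):
dd2 := factor(%);  # hopefully = normal(aa2)
```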

What was the original command that gave these expressions?  It might be easier to work from that side.

John

Wow, thank you very much Georgios. How can it be that Maple does not automatically use expand first?

It turns out that it is not always a good idea to expand. Here is an extreme example:

solve((x^2+x-1)^10000,x);

is slower than it should be maybe, but it returns an answer. While

solve(expand((x^2+x-1)^10000),x);

uses more time and all available memory on my machine before I kill it.
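One way to see why: the expanded form is a dense degree-20000 polynomial whose integer coefficients are themselves enormous. A quick (hypothetical) way to gauge the blowup on a smaller exponent:

```maple
# Gauge the blowup on a smaller exponent before trying the full problem
p := expand((x^2 + x - 1)^100):
nops(p);                           # number of terms in the expansion
max(map(length, [coeffs(p, x)]));  # digit length of the largest coefficient
```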

John

This seems to be one of those systems where Maple's resultant-based solver surprises us all and cracks it in less than a minute, while Gröbner basis and triangular decomposition methods don't seem to have a chance.

It definitely helps that it is very sparse and only quadratic.

John

Of course you're right. :D  Post now corrected.

It is also worth noting that many of the exact routines that do float->rational conversion (dsolve, solve, limit, etc.) use convert/rational with the argument 'exact', which gives the following behavior:

convert(evalf(1/3), rational, exact);
                                  3333333333
                                  -----------
                                  10000000000
convert(evalf(1/3), rational);       
                                      1/3

identify(evalf(1/3));         
                                      1/3

And it can be worse with hardware floats:

convert(evalhf(1/3), rational, exact);
                              66666666666666663
                              ------------------
                              200000000000000000

convert(evalhf(1/3), rational);       
                                      1/3

identify(evalhf(1/3));                
                             0.333333333333333315

If you want to compute the pixel-by-pixel error between the original image and the low-rank "compressed" image, you can compute the matrix norm of the difference between the two images:
LinearAlgebra:-MatrixNorm(faceM-face8, Frobenius); # face8 is the rank=4 image
   78.987

A well-known theorem (Eckart-Young) says that this norm is equal to the Euclidean norm of the vector S[5..512], that is: sqrt(S[5]^2+S[6]^2+...+S[511]^2+S[512]^2).
Thus, it is pretty easy to plot a graph of how the pixel-by-pixel error decreases as you increase the rank of the compressed image:
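The data for such a plot can be generated along these lines (a sketch, assuming S is the 512-entry Vector of singular values from the worksheet):

```maple
# Error of the rank-k approximation, using the norm identity above
errs := [seq(sqrt(add(S[i]^2, i = k+1 .. 512)), k = 1 .. 512)]:
plots:-listplot(errs);  # pixel-by-pixel error vs. rank k
```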

[Plot (5480_rankvserror.jpeg): pixel-by-pixel error vs. rank of the compressed image]

And the curve stays pretty smooth, decreasing to 0 error at rank 512.  This plot confirms the information from the gaps in the singular values: most of the information about the image is in the first 3 singular vectors, and after rank 7, additional increases in rank don't buy you much.  But that rank 3 approximation is still pretty far from the original image:



 

You could also use _EnvExplicit or the "Explicit" argument to solve to force it to return radical form instead of RootOf form:
solve(1/(40^2-x^2)^(1/2)+1/(30^2-x^2)^(1/2) = 1/10,x,'Explicit');
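Equivalently, the environment variable can be set before the call:

```maple
# Setting _EnvExplicit := true makes solve return radicals instead of RootOfs
_EnvExplicit := true:
solve(1/(40^2-x^2)^(1/2) + 1/(30^2-x^2)^(1/2) = 1/10, x);
```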

 


John May
Mathematical Software
Maplesoft

It is actually not true that all matrices have a clear natural gap.  In approximate algebra applications you can run across matrices where the singular values are pretty much evenly spaced.

That said, this image has a very large ("natural") gap between the third and fourth singular values:
# S is the vector of singular values from the worksheet

Srat := [seq(S[i]/S[i+1], i = 1 .. 511)]:
SSrat:=sort(Srat);
evalf[5](SSrat[-8..-1]);
  [1.0875, 1.1044, 1.1639, 1.1741, 1.2214, 1.2647, 1.2871, 4.1747]
member(SSrat[-1], Srat, 'loc'): loc;
  3

And this is the largest gap by a pretty large margin.  The next five biggest gaps are at 4, 6, 7, 5, and 1, respectively.  This is pretty interesting: it suggests you get a lot of information when adding in the first 7 or so singular vector pairs, but that it tapers off quite a bit after that.  I think you can kind of see this in the rank 8 vs. rank 32 pictures.

I'll post the rank 3 image tomorrow for comparison.
 

Tip for users of older computers (e.g., a first-generation G4 PowerBook): run the worksheet on a 256x256 image instead.

Warning, solving for expressions other than names or functions is not
recommended.

The reason this is not recommended is that solve will try to automatically replace the expression you are solving for with a new variable.  That can lead to some unexpected results.  You are much better off doing the substitution yourself first, and then solving.

Or, in this case, when you are just trying to manipulate an equation and not actually "solve", try using isolate instead:

a2 := op(eliminate({x=cos(v), y=sin(v)},{v})[2]);
isolate(a2, y^2);


John May
Mathematical Software
Maplesoft

For what it's worth, this is pretty much what Maple's solve command does under the hood to try to solve this problem (solve({diff(y,x)},{x})).  In this case, it generates some much larger RootOf expressions and then gets bogged down for a long time while trying to make its answer explicit, but it does eventually produce an answer - it is just a lot larger than the one Robert gets.


John May
Mathematical Software
Maplesoft

For problem 11, "Calculation of 10,000,000 Fibonacci numbers":

Original code:

with(combinat): with(LinearAlgebra):

frnd:=rand(100..1000):
A := [seq(frnd(),i=1..10000000)]:
time(evalf(Map(a->fibonacci(a), A)));

was reported as taking about 821.140 sec on their "Intel Quad Core Q6600 processor with 2.4 GHz and 2 GB RAM running under Windows Vista Home".

It took 1180 seconds on my 2.13GHz Core2 machine.

Since floating-point versions of the Fibonacci numbers are being asked for, it doesn't make sense to compute them exactly and then convert to floats. It looks like all the other platforms are computing them numerically:

frnd := rand(100..1000):
A := Vector([seq(frnd(), i = 1 .. 10000000)], datatype = float[8]):
f := subs(is5 = evalhf(1/sqrt(5)), phi = evalhf((1+sqrt(5))/2),
          a -> (phi^a - (-phi)^(-a))*is5);
time(map[evalhf](f, A));

takes 18.38s on my machine.
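Of course, hardware floats only carry about 15 significant digits, so the Binet formula here is a genuine approximation for the larger arguments. A quick spot check (sketch, using the f defined above):

```maple
# Compare the hardware-float Binet value against the exact integer value
evalhf(f(1000));
evalf[18](combinat:-fibonacci(1000));
```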


John May
Mathematical Software
Maplesoft
