
## @acer Thanks for completing what I ...

Thanks for completing what I did.
By the way, I've just sent a reply to @Hullzie16's last comment, and I think you might add more precision about the use of is and its misuses?

## @Hullzie16  You say "I was ju...

@Hullzie16

You say "I was just showing that everything appeared to work when not inside of the loop. "

But you are wrong, this is not what happens: in the loop you use an if..end if condition, whereas outside the loop your sequence uses an is command.
The problem is that using the integration method _Dexp returns an unevaluated integral (Int(...)), and that

`is(Int(...) > some_value)`

is always false.

But... the test

`if Int(...) > some_value then... end if`

generates an error: basically the comparison between the objects on both sides of the > sign cannot be done (which is natural).

For more info about how is behaves and when it should be used, see the corresponding help page (help(assume)), from which this excerpt comes.

Look closely at the end of the attached file Mistake.mw

## For my understanding...

@Kitonum

The expression I get is by far simpler... did I miss something?
Maybe using LargeExpressions:-Veil "freezes" the coefficients and evalc no longer works as expected?

 > restart;
 > with(LargeExpressions):
 > eq := T*z^2 - (R + S*I)*z +G-K + H*I;
 (1)
 > veq := collect(eq, z, Veil[C]);
 (2)
 > sol:=solve(veq,z);
 > sol1,sol2:=evalc~([%])[];
 > expand(sol1*sol2)=a^2+b^2;
 > simplify(sol1+sol2=2*a);
 > simple_1 := simplify(sol1-sol2=2*b);
 (3)
 > rew := op(select(has, indets(simple_1), signum)[]) = Phi;
 (4)
 > simple_2 := map(factor, algsubs(rew, simple_1))
 (5)
 > print~([(rhs=lhs)(rew), seq( C[i] = Unveil[C]( C[i] ), i=1..LastUsed[C] ) ]):
 (6)
 > eval(simple_2, (rhs=lhs)(rew));
 (7)
 > Unveil[C](%);
 (8)
 >

## Pretty good...

@dharr

I was sure the challenge would tempt you.
I vote up, of course.

## I understand now!!!...

@Scot Gould

Starting point:
Computing the integral of a function f(x) over an interval x=a..b can always be interpreted as computing the expectation of the random variable Y = f(X), where X is a Uniform random variable with support a..b.
Indeed, denoting by phi(x) the probability density function of X (phi(x) = 1/(b-a) if a <= x <= b and 0 elsewhere),

`J = int(f(x), x=a..b) = (b-a) * int(f(x)*phi(x), x=a..b)`

The last integral represents the Expectation (aka mean) E(Y) of the random variable Y.
Then J = (b-a)*E(Y).
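This identity is easy to check numerically. Below is a minimal, language-neutral sketch in Python (the test integral int(sin(x), x=0..pi) = 2 is an arbitrary choice of mine, not taken from the worksheets):

```python
import math
import random

def mc_integral(f, a, b, n=100_000, seed=0):
    # Crude Monte-Carlo: J = (b - a) * E(f(X)) with X uniform on a..b,
    # estimated by (b - a) times the sample mean of f over uniform draws.
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

# int(sin(x), x = 0..pi) = 2
est = mc_integral(math.sin, 0.0, math.pi)
print(est)  # close to 2 for this sample size
```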

The Monte-Carlo* method, originally designed to assess the Expectation of a random variable, generalizes to estimate the integral of a function over a bounded (generally connected) domain.
*This name has been adapted into Monegasque: Monte-Carlu [ˌmõteˈkaʀlu]. The typographical rules for toponyms used by the Imprimerie Nationale [a French institution whose name translates as "national printing house"] require Monte-Carlo to be written with a hyphen. It's generally pronounced /Monté-Carlo/, but some say /Monté-Carl'/.

Here is "your" problem:
How to assess the mean value of func(X), where X is a Uniform random variable with support 0.8..3?
One simple idea is to use a Monte-Carlo approach:

• draw a sample S from X, and let n be the size of this sample,
• then add(func~(S))/n is the Monte-Carlo estimator of Mean(func(X)).

What may go wrong in doing this?
Let us suppose that func(x) is a function whose value is close to zero over a large portion of the interval 0.8..3, and which takes very large values in a narrow subdomain of 0.8..3. Let us say the "effective" domain of variation of func has a size equal to w (for instance 0.001).
Drawing the sample S randomly will give you about n*w/(3-0.8) points in this "effective" domain of variation, meaning that we have a small chance of capturing the true behaviour of func in the region where it changes strongly. As a consequence your Monte-Carlo estimator add(func~(S))/n can be of poor quality.
Being of poor quality means that doing the same computation again with another sample S' (independent from the first one) can lead to a result significantly different from the first one, unless the value of n is huge.
So the estimator, which is a random variable of which add(func~(S))/n is one particular realization and add(func~(S'))/n is another, may present large deviations: one says its variance is high.

The goal being to get reliable values for the estimator of Mean(func(X)), the naive way is to increase the value of n (the variance of the Monte-Carlo estimator decreases as n^(-1)). The drawback is an increase of the computational time.
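The n^(-1) decay of the variance can be observed empirically; here is a small sketch (the test function sin on 0..pi, the sample sizes and the number of repetitions are arbitrary choices of mine):

```python
import math
import random
import statistics

def mc_mean(f, a, b, n, rng):
    # Sample mean of f(X), X uniform on a..b: the Monte-Carlo estimator of E(f(X)).
    return sum(f(rng.uniform(a, b)) for _ in range(n)) / n

def estimator_variance(n, reps=200, seed=0):
    # Empirical variance of the estimator over `reps` independent samples of size n.
    rng = random.Random(seed)
    vals = [mc_mean(math.sin, 0.0, math.pi, n, rng) for _ in range(reps)]
    return statistics.variance(vals)

v1 = estimator_variance(100, seed=1)
v2 = estimator_variance(400, seed=2)
print(v1 / v2)  # close to 4: multiplying n by 4 divides the variance by about 4
```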
All the other ways consist in applying a more astute strategy, where the sampling is done so as to capture the behaviour of func.
The simplest strategy is named Importance Sampling.
Its goal is to reduce the variance of the Monte-Carlo estimator by introducing an auxiliary random variable Z whose PDF (Probability Density Function) is closer to func than the PDF of X is.
Then drawing a sample from Z will generate points whose spreading gives a closer picture of the regions where func varies significantly.

The basic idea and an illustrative example are provided in this file Importance_Sampling_Idea.mw.
Look how Importance Sampling dramatically decreases the variance of the Monte-Carlo estimator.
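For readers without the worksheet, the idea can be sketched as follows (the sharply peaked integrand and the Gaussian proposal are my own illustrative choices, not the ones used in Importance_Sampling_Idea.mw):

```python
import math
import random

# Integrand close to zero on most of [0, 1] and large in a narrow region near 0.5.
def func(x):
    return math.exp(-(x - 0.5) ** 2 / (2 * 0.01 ** 2))

def crude_mc(n, rng):
    # X uniform on 0..1: plain sample mean, high variance for a peaked func.
    return sum(func(rng.uniform(0.0, 1.0)) for _ in range(n)) / n

def importance_mc(n, rng):
    # Auxiliary Z ~ Normal(0.5, 0.02), whose PDF mimics func;
    # each draw is reweighted by 1/pdf(Z) so the estimator stays unbiased.
    mu, sigma = 0.5, 0.02
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu, sigma)
        pdf = math.exp(-(z - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += func(z) / pdf
    return total / n

exact = 0.01 * math.sqrt(2 * math.pi)  # integral over R; the tails outside [0,1] are negligible
rng = random.Random(0)
print(crude_mc(1000, rng), importance_mc(1000, rng), exact)
```

With only 1000 draws, the importance-sampling estimate is typically within a few percent of the exact value, whereas the crude one fluctuates strongly from one sample to the next.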

Go back to the MMA code.
Here is a new file which ALMOST does in Maple what is done with MMA: TruncatedNormal_3.mw.
That is, it assesses the mean value of func over the interval 0.8..3 in two ways: by using the crude Monte-Carlo method (which implicitly considers that the random variable X is uniform with some support, which I arbitrarily took as 0.8..3 but could be something else, in which case the estimator is known up to an arbitrary multiplicative constant), and by using an auxiliary random variable Z whose PDF is "close" to func (Z is the truncated Gaussian random variable).

You will see that the formula used to assess the Importance Sampling Estimator (ISE) does not exactly correspond to the formula the MMA code uses:

1. The term
```func[RV]/Distrib[RV, 1, 0.399]
# which translates to
func~(RV) /~ Distrib~(RV, 1, 0.399)```
should be
```func[RV]*f[RV]/Distrib[RV, 1, 0.399]
# which translates to
func~(RV) *~ f~(RV) /~ Distrib~(RV, 1, 0.399)
# where f is the PDF of the "native" random variable

...

# unless f = 1 for any real (which seems to be nonsense, as its integral
# from -oo to +oo is not equal to 1!!!) but is classically used under the
# name of "improper distribution" in Bayesian statistics```
2. The presence of the term
`Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}]`
Which seems to mean that Distrib is not a true distribution (this integral should naturally be equal to 1, which is the case with the construction of the TruncatedGaussian I made).

The MMA code doesn't assess a mean value of func(X) but directly jumps to using the Monte-Carlo method to assess the integral of func(x) by using a majorizing function (the PDF of the truncated Gaussian random variable).
This is why the red line above is replaced by the pink one (see also the double green equality at the top of this reply).
The code is by far unnecessarily complex. For instance there is no need to use a truncated Gaussian distribution to assess this integral; moreover, the "1.1*x-0.1" stuff is useless and brings nothing but confusion.
Whatever... this new file explains what Monte-Carlo integration is, from its historical point of view to its classical present-day use.

Monte_Carlo.mw

"[You] wondered if the sampling technique provided was going to be more useful than the crude method as a demonstration of a single-variable example of Monte Carlo integration.  Of course, Maple and Mathematica can quickly integrate the function. "
The L2 norm of the integration error of the Monte-Carlo method is n^(-1/2), whatever the dimension of the integration domain. This converges more slowly than a simple rectangle-rule integration, whose error is n^(-1) ... in dimension 1, but n^(-1/d) in dimension d.
A Simpson method has an integration error which evolves as n^(-2/d) in dimension d, and is then outperformed by Monte-Carlo integration in domains of dimension higher than 4.
In fact the main interest of Monte-Carlo integration lies in the simplicity of the method and the fact that it is efficient in domains of any shape.

Last point:
"Could you recommend a book, website, etc., that would help me get up to speed with probability distributions?"
I'll get back to you shortly.

I have heard positive feedback about Khan Academy (which I just browsed very quickly to form a first idea).
If by chance you speak French, WikiStat is really quite good (not sure there is an english version).
"Good" off-the-shelf books are not that easy to find (they surely do exist).

## Ideas...

As I am very lazy, my line of conduct is "why redo what has already been done elsewhere?"
For example, Geogebra has a module for creating nets of solids; as I think you're a French speaker (aren't you?), see the video here: Obtenir un patron avec Geogebra

About your question concerning Maple and an easy way to develop a solid, I believe the answer is no, it is not that easy.
What we can do easily as a first step is to:

• "extract" the faces of a polyhedron defined by plottools
• use GraphTheory to create cuts, duplicate vertices and edges and so on.

Here is an example... but as you will see the cuts are done by hand and the final result has to be rescaled given the coordinates you use in plottools.
First_bricks.mw

Another way could be to use geom3d (instead of plottools): the faces are then obtained in a simple way, for instance

```restart
with(geom3d):
solid := tetrahedron(t, point(o,1,2,3), 3):
draw(solid);
faces(solid):
```

You will find HERE a detailed explanation of an algorithm based on the spanning tree of the graph
associated to a convex polyhedron.

I remember @dharr recently did a lot of work concerning graphs; try to browse his answers/replies on this subject.

## @jrive What if Rhi < RIo? Do you ...

What if Rhi < RIo?
Are imaginary resistors physical resistors?

Had you spoken of impedances instead of resistors (a resistor's impedance is real, whereas it is purely imaginary for a capacitor, for instance), I wouldn't have sent my comment.

Note that "my" equation (4) indicates there is no solution when Rhi < RIo.

## @acer To avoid any misunderstanding...

To avoid any misunderstanding, I will not say that you did something wrong, but that the OP did not consider some natural / physical conditions.
With this in mind, the solutions you get are not physical (as the OP speaks about resistors, all the Rs are strictly positive values, which leads to the condition Rhi > RIo and excludes your second solution).

Anticipating a possible clarification from @jrive this solution seems more physical:

 >
 >
 (1)

 >
 (2)
 >
 (3)
 > # The previous solutions require Rhi > RIo and all the impedances to be strictly positive
 > # for these solutions to be physical.
 > physical_sol := solve({eq1, eq2}, {R1, R2}, explicit, useassumptions) assuming positive
 (4)
 > # So, I would say the physical solutions are
 > physical_sol := (eval(physical_sol) assuming Rlo < Rhi)[]
 (5)

## It would be clearer if you uploaded you...

It would be clearer if you uploaded your Maple worksheet (use the big green arrow in the menu bar).

## Is this that you want?...

 > restart
 > u(xi):=c[0]+c[1]*a^(f(xi))
 (1)
 > Eq1 := -(alpha^2 + beta)*u(xi) + k^2*diff(u(xi), xi \$ 2) + b*u(xi)^3 = 0
 (2)
 > D1 := diff(f(xi), xi) = (a^(-f(xi))*p + q + r*a^f(xi))/ln(a);
 (3)
 > D2 := diff(f(xi), xi\$2) = eval(diff(rhs(D1), xi), D1);
 (4)
 > DEq1_0 := eval(Eq1, {D1, D2})
 (5)
 > # Setting Z = a^f(xi)
 > DEq1_1 := collect(eval(expand(DEq1_0), a^f(xi) = Z), Z)
 (6)
 > # I don't understand why this command doesn't work ???
 > coeffs(DEq1_1, Z);
 > # Workaround
 > with(LargeExpressions):
 > collect(DEq1_1, Z, Veil[C] );
 > CoefficientNullity := [ seq( 0 = factor(Unveil[C]( C[i] )), i=1..LastUsed[C] ) ];
 > sols := solve({CoefficientNullity[], b > 0}, [alpha, c[0], c[1]]):
 > sols := eval(sols) assuming b > 0;
 > whattype(sols);
 > print(cat('_'\$120)):
 > print~(sols):
 (7)
 > # In case you would like to remove complex solutions:
 > RealSols := remove(has, sols, I):
 > print~(RealSols):
 (8)
 > # Check:
 > eval(DEq1_1, RealSols[1]);
 > simplify(eval(DEq1_1, RealSols[2]));
 > simplify(eval(DEq1_1, RealSols[3]));
 (9)

## I agree with Rouben...

@CaptNnFalcon

See HERE for some hint.

## Welcome on the thread :-)...

Thanks for taking over.

## @MaPal93 Z has a trivial root thet...

Z has a trivial root theta=0, and I focus on the other one.

What is HDT? Read my code (equation (5)).

The sign of -HDT is "+" if HDT < 0 and "-" if HDT > 0.

By the way, if you are puzzled by the meaning of HDT, make your choice here: acronyms

## Poor strategy...

What do you think will happen if you add this constraint?
Do you sincerely believe that this will change something?

I believe you should question your intuition or, if you are certain of it, write the assumptions in your problem PRIOR to verifying whether your intuition is right or not.
What you are trying to achieve right now is to find, through trial and error, the assumptions that will prove A POSTERIORI that your intuition is good... which has never been a scientific way to proceed.

## @MaPal93 You "might be missing...

You "might be missing out some additional constraint"?
No, you surely are.

```convert(difference_term, horner, theta)

theta*((-lambda__1 - lambda__2 - lambda__3)*theta
       - X__3*lambda__3 + X__2*lambda__2 + X__1*lambda__1
       + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2)
       - lambda__3*(X__3 + delta__3))
```

Considered as a polynomial in theta, it has 2 zeros (one of them being trivial).
The other one is:

`Z = (2*X__1*lambda__1+2*X__2*lambda__2-2*X__3*lambda__3+delta__1*lambda__1+delta__2*lambda__2-delta__3*lambda__3)/(lambda__1+lambda__2+lambda__3)`

Then, to the right of Z, difference_term has one sign, and to the left of Z it has the opposite sign.
The only possibility would be that, for any values of the Xs, deltas and lambdas, difference_term > 0 to the right of Z AND Z < 0.
Nothing in your assumptions enables one to think this is the case.
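For what it's worth, the root above can be sanity-checked numerically by substituting it back into the quoted horner form (the random positive values for the Xs, deltas and lambdas are my own sketch, not part of the original discussion):

```python
import random

rng = random.Random(0)
X1, X2, X3 = (rng.uniform(0.1, 2.0) for _ in range(3))
d1, d2, d3 = (rng.uniform(0.1, 2.0) for _ in range(3))
l1, l2, l3 = (rng.uniform(0.1, 2.0) for _ in range(3))

def difference_term(theta):
    # The horner form quoted above, with delta -> d and lambda -> l.
    return theta * ((-l1 - l2 - l3) * theta
                    - X3 * l3 + X2 * l2 + X1 * l1
                    + l1 * (X1 + d1) + l2 * (X2 + d2)
                    - l3 * (X3 + d3))

# The non-trivial root Z given above.
Z = (2*X1*l1 + 2*X2*l2 - 2*X3*l3 + d1*l1 + d2*l2 - d3*l3) / (l1 + l2 + l3)
print(difference_term(0.0), difference_term(Z))  # both vanish, up to rounding
```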
