ABond

35 Reputation

8 Badges

12 years, 233 days

MaplePrimes Activity


These are replies submitted by ABond

@acer Well, the standard formulation of an optimal control problem assumes no further restrictions on y(t) (even continuity is not an issue in most cases).

So yes, y(t) is an arbitrary function of t (time), chosen so that it maximizes (minimizes) the functional f(x,y,t),

i.e.

f(x,y,t) = int_0^T ( x(t) - (1/2)*y(t)^2 ) dt -> max over y(t),

subject to

dx/dt = a*y(t) - b*x(t),    (1)

for t in [0, T],

which is a standard linear-quadratic optimal control problem.

This particular problem is solvable analytically, since the DE (1) is solvable analytically:

the optimal control satisfies dH/dy = 0, where H is the Hamiltonian

H = x(t) - (1/2)*y(t)^2 + lambda(t)*(a*y(t) - b*x(t)),

which gives, say, y^opt = a*lambda(t),    (2)

where lambda(t) is itself given through a DE, the costate equation (system):

dlambda/dt = -dH/dx.    (3)

Then one may substitute for y(t) and solve the resulting system (1), (3) analytically. If that is not possible, one has to approximate the solution to (2).

Sorry if I am writing something obvious here, but I want to clarify myself a bit.

So, if one has a larger or more complicated system (say, not one x(t) but a system of two DEs for x_1(t), x_2(t)), the DE (1) is no longer analytically solvable, and thus one needs numerical optimization.

One of the ways is gradient projection, but this is slow and not always efficient. Hence the idea of using an optimization package.
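To make this concrete, here is a minimal Maple sketch of the analytic route for the toy problem above. The parameter values a = 1, b = 1, T = 1, the initial state x(0) = 1, and the transversality condition lambda(T) = 0 are my illustrative assumptions:

a := 1: b := 1: T := 1:
# state equation (1) with the optimal control y = a*lambda substituted,
# together with the costate equation (3): dlambda/dt = -dH/dx = b*lambda - 1
sys := { diff(x(t), t) = a^2*lambda(t) - b*x(t),
         diff(lambda(t), t) = b*lambda(t) - 1 }:
bcs := { x(0) = 1, lambda(T) = 0 }:    # initial state plus assumed transversality
sol := dsolve(sys union bcs, {x(t), lambda(t)});
yopt := eval(a*lambda(t), sol);        # the optimal control (2)

Since the system is linear, dsolve returns the closed-form x(t) and lambda(t), from which y^opt follows directly.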


@acer Indeed, this is the problem of finding the functions x(t) and y(t).

However, I know of a couple of examples of using 'standard' optimization techniques for optimal control problems as well.

One in this forum, another in Mathematica.

Hence the idea.

However, due to this t-dependence I have no clue how it can be done.

There is a law of motion

dx/dt = f(x, y, t),

with y(t) being the optimal control to be found,

and a functional f(x, y, t).

Hence x(t) is a differentiable function and y(t) just a continuous one, for which you have a first-order condition like df/dy = 0.

Now maximize f(x, y, t) with respect to y(t).

That's it.
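Since the question is how a 'standard' optimizer can cope with this t-dependence, here is one rough sketch (my own illustration, not code from this thread; all parameter values are assumptions): approximate y(t) by a piecewise-constant control with n pieces, integrate the state equation dx/dt = a*y - b*x with a crude explicit Euler scheme, and hand the resulting (negated) objective to Optimization:-NLPSolve. I am using what I understand to be the Matrix form of NLPSolve, where the number of decision variables is followed by a procedure acting on a Vector:

with(Optimization):
n := 10: T := 1.0: a := 1: b := 1: h := T/n:
J := proc(Y::Vector)
  local xi, i, cost;
  xi := 1.0;     # assumed initial state x(0) = 1
  cost := 0.0;
  for i from 1 to n do
    cost := cost - (xi - Y[i]^2/2)*h;  # accumulate -(x - y^2/2)*h (negated: NLPSolve minimizes)
    xi := xi + (a*Y[i] - b*xi)*h;      # explicit Euler step for dx/dt = a*y - b*x
  end do;
  cost
end proc:
res := NLPSolve(n, J, initialpoint = Vector(n, fill = 0.5));

Each entry of the returned point then approximates y(t) on one of the n subintervals, so the optimizer's "value" really is a (discretized) function.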

@acer Indeed this is the problem of finding x(t) and y(t) functions.

However, I know of a couple of examples of using 'standard' optimization techniqes for optimal control problems also.

One in this forum, another in Mathematica.

Hence the idea.

However, due to this t-dependence I have no clue how it can be done.

There is a law of motion

dx/dt=f(x,y,t),

y(t) to be the optimal control

and a functional f(x,y,t)

Hence x(t) is differentiabke function, y(t) - just continious one. for which you have first-order condition like df/dy=0.

Now maximize f(x,y,t) subject to y(t).

That's it

@pagan Hi,

Your suggestion turned out to be very useful.

However, now I am getting stuck with the implementation of the same thing with two different functions.

It seems that the algorithm, although updating both of them with every i, does not optimize w.r.t. the second of them, a.

In the following code I use the sequential solution of two systems of DEs (called newsys1 and newsys2) with two functions, F[i] and G[i], which are updated according to the solution obtained at the i-th step. However, the results do not differ much from the case G[i] = const: the same form of the F[i] function is obtained. Is there a mistake in my code, or do I have to use some sort of double-cycle (nested-loop) algorithm instead of a single one here?

Here is the code:

for i from 1 to N do
  print(i);

  # Step 1: solve the first system with the current controls F[i-1], G[i-1]
  newsys1 := subs(u(t) = F[i-1](t), a(t) = G[i-1](t), sys1):
  sol1[i] := dsolve(newsys1, numeric,
                    output = listprocedure, known = [F[i-1], G[i-1]]):
  KK[i] := subsop(3 = remember, eval(k(t), sol1[i])):
  MM[i] := subsop(3 = remember, eval(m(t), sol1[i])):
  TT[i] := subsop(3 = remember, eval(tau(t), sol1[i])):

  # Step 2: solve the second system, feeding in the solution of the first
  newsys2 := subs(u = F[i-1], a = G[i-1], k = KK[i], m = MM[i], tau = TT[i], sys2):
  sol2[i] := dsolve(newsys2, numeric,
                    output = listprocedure,
                    known = [F[i-1], G[i-1], KK[i], MM[i], TT[i]]):

  # Step 3: update the two controls from the recursion expressions REC[1], REC[2]
  F[i] := unapply('evalf'(subs(
            psi[1](t) = subsop(3 = remember, eval(psi[1](t), sol2[i]))(t),
            k(t) = subsop(3 = remember, eval(k(t), sol1[i]))(t),
            tau(t) = subsop(3 = remember, eval(tau(t), sol1[i]))(t),
            FF[1] = F[i-1],
            REC[1])),
          t, numeric, proc_options = [remember]):

  G[i] := unapply('evalf'(subs(
            psi[1](t) = subsop(3 = remember, eval(psi[1](t), sol2[i]))(t),
            psi[3](t) = subsop(3 = remember, eval(psi[3](t), sol2[i]))(t),
            k(t) = subsop(3 = remember, eval(k(t), sol1[i]))(t),
            tau(t) = subsop(3 = remember, eval(tau(t), sol1[i]))(t),
            FF[2] = G[i-1],
            REC[2])),
          t, numeric, proc_options = [remember]):

  # gradient expressions GRAD[1], GRAD[2], evaluated at the new solution
  GR[i] := unapply('evalf'(subs(
            psi[1](t) = subsop(3 = remember, eval(psi[1](t), sol2[i]))(t),
            psi[3](t) = subsop(3 = remember, eval(psi[3](t), sol2[i]))(t),
            k(t) = subsop(3 = remember, eval(k(t), sol1[i]))(t),
            tau(t) = subsop(3 = remember, eval(tau(t), sol1[i]))(t),
            FF[1] = F[i-1], FF[2] = G[i-1],
            GRAD[1])),
          t, numeric, proc_options = [remember]):

  DR[i] := unapply('evalf'(subs(
            psi[1](t) = subsop(3 = remember, eval(psi[1](t), sol2[i]))(t),
            psi[3](t) = subsop(3 = remember, eval(psi[3](t), sol2[i]))(t),
            k(t) = subsop(3 = remember, eval(k(t), sol1[i]))(t),
            tau(t) = subsop(3 = remember, eval(tau(t), sol1[i]))(t),
            FF[1] = F[i-1], FF[2] = G[i-1],
            GRAD[2])),
          t, numeric, proc_options = [remember]):

end do:
This is my version of what you suggested.

Thanks for any hints.
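One quick check I would add after the loop (my own suggestion, not from the thread; it assumes the horizon is [0, 1]) is to compare successive iterates of G on a time grid. If the change is essentially zero from the first step, the update of a is indeed not firing:

tgrid := [seq(0.1*j, j = 0 .. 10)]:
dG := max(seq(abs(G[N](t0) - G[N-1](t0)), t0 in tgrid)):
print("max change in G over the last step:", dG);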


Well, this might be in line with what I am asking about.

However, what I need returned are the optimal FUNCTIONS x(t), y(t), not an optimal value at some t = const.

Basically this is a question of how to use an optimizer for an optimal control problem.

That's it; in the example, say, x(t) is given while y(t) has to be found...


Well, I found a loss of optimality when using the coupled system in the BVP case (in the IVP case it is the same, right?), not to speak of efficiency; the solution differs.

But indeed, I need only the final iteration (or, for the sake of completeness, a couple of them), so this would increase performance.

I'll report the result if I succeed.

Anyway, thanks for the hint.

P.S. Can anyone give solid grounds for the coupled system being equivalent to the decoupled one in the sense of their solutions (in the optimal control case, etc.)?

If so, I would be ever grateful =)


@pagan Now it seems to be exactly what I meant.

Of course I still have to check this on my real system, but the algorithm itself is exactly what I thought of.

Many thanks for your invaluable help.


@pagan Sorry.

I used the symbol 'f' just to refer to some generic function.

What I mean, though, is that I need to evaluate F[i] = F[i-1] + f(sol[i]),

where 'f' denotes some function and sol[i] is the (numeric) solution of the given DE.

Suppose 'f' is something like sin(y) or ln(y); it does not matter.

The point is that x drives the solution of the DE for y (as in your example), but x is defined recursively as

x = F[i] = F[i-1] + sin(y),

where y is the solution, y = eval(y(t), sol[i]), with sol[i] being defined as in your example.

I hope this time I am more precise.

Again, thanks.
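For what it's worth, here is a minimal sketch of that recursion, mirroring the unapply/subsop pattern used elsewhere in this thread. The dynamics dy/dt = F(t) - y, the initial data, the constant initial guess, and the iteration count N are illustrative assumptions of mine:

N := 5:
F[0] := unapply('evalf'(0.5), t, numeric):   # assumed constant initial guess for the control x
for i from 1 to N do
  # solve the DE for y, driven by the current control F[i-1]
  sol[i] := dsolve({diff(y(t), t) = F[i-1](t) - y(t), y(0) = 1},
                   numeric, known = [F[i-1]], output = listprocedure):
  Y[i] := subsop(3 = remember, eval(y(t), sol[i])):   # y(t) from the i-th solution
  # next control: F[i](t) = F[i-1](t) + sin(y(t)), built as a numeric procedure
  F[i] := unapply('evalf'(F[i-1](t) + sin(Y[i](t))), t, numeric,
                  proc_options = [remember]):
end do:

The quoted 'evalf' delays evaluation until F[i] is called with a numeric t, at which point Y[i] and F[i-1] are evaluated through their remember tables.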


@pagan A bit inconsistent.

The code does not work for obtaining a function of the solution like F[i] = F[i-1] + f(sol[i])...
