# PROPT


## Revision as of 05:03, 14 February 2012

## Contents

- 1 PROPT Guide Overview
- 2 Introduction to PROPT
- 3 Modeling optimal control problems
- 4 A simple example
- 5 Multi-phase optimal control
- 6 Scaling of optimal control problems
- 7 Setting solver and options
- 8 Solving optimal control problems
- 9 Optimal Control Examples

## PROPT Guide Overview

The following sections describe the functionality of PROPT. It is recommended to study a few of the examples included with the software to become accustomed to the notation used.

### Installation

The PROPT software is included in the general TOMLAB installation executable. Please refer to the TOMLAB Installation manual for further information. The following Matlab commands can be used to verify a successful installation.

```matlab
>> cd c:\tomlab\
>> startup
>> vanDerPol
```

After a few seconds two graphs should display the state and control variables for the example.

### Foreword to the software

The software is provided without limitations to all registered customers in order to demonstrate what PROPT is capable of. The implementation of the extensive example library (more than 100 test cases are included) has eliminated most bugs, but there may still be a few that have escaped attention.

If problems are encountered please email a copy of the problem for debugging. (All examples and data are treated as strictly confidential unless other instructions are given.) A status report can be expected within 24 hours.

### Initial remarks

PROPT is a combined modeling, compilation and solver engine for the generation of highly complex optimal control problems. Users need licenses for the TOMLAB Base Module and TOMLAB /SNOPT (or another suitable nonlinear TOMLAB solver); both are always included with a demo license.

It is highly recommended to take a look at the example collection included with the software before building a new problem. (Tip: Copy one of the examples and use it as a starting point.)

The brysonDenham problems in the examples folder contain very extensive documentation, explanations and general information about the usage and software design.

PROPT aims to encompass all areas of optimal control including:

- Aerodynamic trajectory control

- Bang bang control

- Chemical engineering

- Dynamic systems

- General optimal control

- Large-scale linear control

- Multi-phase system control

- Mechanical engineering design

- Non-differentiable control

- Parameter estimation for dynamic systems

- Singular control

At least one example from each area is included in the example suite.

## Introduction to PROPT

PROPT is a software package intended to solve dynamic optimization problems. Such problems are usually described by

- A state-space model of a system. This can be either a set of ordinary differential equations (**ODE**) or differential-algebraic equations (**DAE**).

- Initial and/or final conditions (sometimes also conditions at other points).

- A cost functional, i.e. a scalar value that depends on the state trajectories and the control function.

- Sometimes, additional equations and variables that, for example, relate the initial and final conditions to each other.

The goal of PROPT is to enable the formulation of such problem descriptions as seamlessly as possible, without having to worry about the mathematics of the actual solver. Once a problem has been properly defined, PROPT will take care of all the necessary steps in order to return a solution.

PROPT uses pseudospectral collocation methods (and other options) for solving optimal control problems. This means that the solution takes the form of a polynomial, and this polynomial satisfies the DAE and the path constraints at the collocation points (Note that both the DAE and the path constraints can be violated between collocation points). The default choice is to use Gauss points as collocation points, although the user can specify any set of points to use.
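To make the collocation idea concrete, here is a minimal generic sketch (in Python/NumPy rather than Matlab, since it illustrates only the language-independent mathematics and uses no PROPT code): the ODE x' = -x on [0, 1] is solved by finding a polynomial whose derivative matches the right-hand side at the Gauss points, with the initial point handled separately, exactly as described above.

```python
import numpy as np

n = 10  # number of collocation points
# Legendre-Gauss points on (-1, 1), mapped to the interval (0, 1)
tau, _ = np.polynomial.legendre.leggauss(n)
t = np.concatenate(([0.0], (tau + 1.0) / 2.0))  # initial point + collocation points

# Differentiation matrix via a monomial Vandermonde basis:
# (D @ x)[i] equals p'(t[i]) for the degree-n polynomial p through the points.
k = np.arange(n + 1)
V = t[:, None] ** k
Vp = np.zeros_like(V)
Vp[:, 1:] = k[1:] * t[:, None] ** (k[1:] - 1)
D = Vp @ np.linalg.inv(V)

# Collocate x' = -x at the Gauss points only (rows 1..n);
# the initial condition x(0) = 1 is imposed separately, not as a state equation.
A = D[1:, :].copy()
A[np.arange(n), np.arange(1, n + 1)] += 1.0  # enforce p'(t_i) + p(t_i) = 0
x = np.concatenate(([1.0], np.linalg.solve(A[:, 1:], -A[:, 0])))

err = np.max(np.abs(x - np.exp(-t)))
print(err)  # spectral accuracy: tiny error even with only 10 points
```

Between the collocation points the polynomial is free, which is why the DAE and path constraints can be violated there, as noted above.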

It should be noted that the code is written in a general way, allowing for a DAE rather than just an ODE formulation with path constraints.

Parameter estimation for dynamic systems is intrinsically supported by the framework as scalar decision variables can be introduced in the formulation.

### Overview of PROPT syntax

A PROPT problem is defined with tomSym objects and standard Matlab expressions (usually in cell arrays), which contain information about different aspects of the problem.

In general an optimal control problem consists of the following:

**Phases:** A problem may contain several different phases, each with its own set of variables and equations.

**Independent:** Variables that are independent (normally time).

**States:** (Could also be called "dependent".) States (short for *state variables*) are continuous functions of the independent variable. Control variables are (possibly) discontinuous functions of the same variable.

**Controls:** Control variables (states that may be discontinuous).

**Scalars:** A problem may also contain unknown scalar variables that are solved for at the same time as the controls and state trajectories.

**Parameters:** Numeric constants that are used to define a problem.

**Expressions:** Intermediate results in computations.

**Equations and inequalities:** The core of the problem definition.

**Cost:** A performance index or objective function to be minimized.

(Solving such a problem is accomplished by translating the dynamic problem into a nonlinear programming problem, generating Matlab function files and a TOMLAB problem that can be passed to a numeric solver.)

**PROPT has many built-in features:**

- Computation of the constant matrices used for the differentiation and integration of the polynomials used to approximate the solution to the trajectory optimization problem.

- Code manipulation to turn user-supplied expressions into Matlab code for the cost function *f* and constraint function *c* (with two levels of analytical derivatives and sparsity patterns). The files are passed to a nonlinear programming solver (by default) in TOMLAB for final processing.

- Functionality for plotting and computing a variety of information for the solution to the problem.

- Integrated support for non-smooth (hybrid) optimal control problems.

Note: The tomSym and PROPT packages are the first Matlab modules to enable complete source transformation for optimal control problems.

### Vector representation

In PROPT, each state is represented by a scalar (its value at the initial point) and a vector of values, such that the i:th element of the vector corresponds to the value of the state at the i:th collocation point.

State equations are written in vector form, and hence apply at all collocation points. The initial point is treated separately, so the state equations do not apply there.

The final state is not a free variable. Instead, it is computed via the interpolating polynomial.
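A generic sketch of this computation (plain NumPy, not the PROPT implementation): given the state values at the initial point and the collocation points, fit the interpolating polynomial and evaluate it at the end time.

```python
import numpy as np

# Collocation points on the phase (0, 1): initial point plus mapped Gauss points
tau, _ = np.polynomial.legendre.leggauss(3)
t = np.concatenate(([0.0], (tau + 1.0) / 2.0))

# Suppose the solver returned these state values; here they sample x(t) = t**2,
# which the cubic interpolating polynomial reproduces exactly.
x = t ** 2

coeffs = np.polyfit(t, x, deg=len(t) - 1)  # interpolating polynomial
x_final = np.polyval(coeffs, 1.0)          # final state at t = 1
print(x_final)
```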

### Global optimality

PROPT does not use Pontryagin's maximum principle; instead it uses a pseudospectral method, but the results are mathematically equivalent. This means that if the solver prints "an optimal solution was found", the solution satisfies necessary, but not sufficient, conditions of optimality. It is guaranteed that the returned solution cannot be improved by an infinitesimal change in the trajectory, but there may exist completely different trajectories that are better.

A recommended procedure for finding other solutions is to set conservative bounds for all variables (states, controls and parameters) and change the solver to "multiMin". This will run a large number of optimizations from random starting points. If they all converge to the same minimum, this is an indication, but no guarantee, that the solution is indeed the global optimum.

In order to guarantee that the global optimum is obtained, one must either solve the Hamilton-Jacobi-Bellman equation (which PROPT does not) or show that the problem is convex and therefore only has one optimum (which may not be the case). If ezsolve reports that the problem type is "LP" or "QP", the problem is convex, and the solution is a global optimum.

(Note: The final state could be computed by integrating the ODE; however, since a DAE formulation is used, it is more convenient to go via the interpolating polynomial. Proofs that the pseudospectral method and the maximum principle are mathematically equivalent have been published.)

It is also worth mentioning that a solution computed by PROPT only satisfies the ODE and constraints at the specified collocation points. There is no guarantee that the solution holds between those points. A common way of testing the integrity of a solution is to re-run the computation using twice as many collocation points. If nothing changes, then there were probably enough points in the first computation.
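The grid-doubling check can be illustrated with plain Gaussian quadrature (a NumPy sketch, not PROPT code): evaluate a quantity with n points and again with 2n points; if the two results agree, the coarser grid was most likely fine enough.

```python
import numpy as np

def gauss_integral(f, n, a=-1.0, b=1.0):
    # Integrate f over [a, b] with an n-point Legendre-Gauss rule
    x, w = np.polynomial.legendre.leggauss(n)
    xm = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(xm))

I10 = gauss_integral(np.exp, 10)
I20 = gauss_integral(np.exp, 20)  # doubled "grid"
exact = np.e - 1.0 / np.e
print(abs(I10 - I20), abs(I20 - exact))  # both tiny: 10 points were enough
```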

## Modeling optimal control problems

In order to understand the basic modeling features in PROPT it is best to start with a simple example. Open the file called *brysonDenham.m* located in /tomlab/propt/examples.

There are also *brysonDenhamShort.m*, *brysonDenhamDetailed.m* and *brysonDenhamTwoPhase.m*, which solve the same problem utilizing different features of PROPT.

To solve the problem, simply enter the following in the Matlab command prompt:

```matlab
>> brysonDenham
```

## A simple example

The following example can be found in *vanDerPol.m* in /tomlab/propt/examples. The objective is to solve the following problem:

minimize:

$$J = x_3(t_F)$$

subject to:

$$\dot{x}_1 = (1 - x_2^2)\,x_1 - x_2 + u$$

$$\dot{x}_2 = x_1$$

$$\dot{x}_3 = x_1^2 + x_2^2 + u^2$$

with $t \in [0, 5]$, initial conditions $(x_1, x_2, x_3)(0) = (0, 1, 0)$ and the control bounded by $-0.3 \le u \le 1$.

To solve the problem with PROPT the following compact code can be used:

```matlab
toms t
p = tomPhase('p', t, 0, 5, 60);
setPhase(p);
tomStates x1 x2 x3
tomControls u

% Initial guess
x0 = {icollocate({x1 == 0; x2 == 1; x3 == 0})
    collocate(u == -0.01)};

% Box constraints
cbox = {-10 <= icollocate(x1) <= 10
    -10 <= icollocate(x2) <= 10
    -10 <= icollocate(x3) <= 10
    -0.3 <= collocate(u) <= 1};

% Boundary constraints
cbnd = initial({x1 == 0; x2 == 1; x3 == 0});

% ODEs and path constraints
ceq = collocate({dot(x1) == (1-x2.^2).*x1-x2+u
    dot(x2) == x1
    dot(x3) == x1.^2+x2.^2+u.^2});

% Objective
objective = final(x3);

% Solve the problem
options = struct;
options.name = 'Van Der Pol';
solution = ezsolve(objective, {cbox, cbnd, ceq}, x0, options);
```

### Code generation

It is possible to compile the problem permanently, in order to keep the auto-generated code:

```matlab
>> Prob = sym2prob('lpcon', objective, {cbox, cbnd, ceq}, x0, options);
>> Prob.FUNCS

ans =
      f: 'lp_f'
      g: 'lp_g'
      H: 'lp_H'
      c: 'c_AFBHVT'
     dc: 'dc_AFBHVT'
    d2c: 'd2c_AFBHVT'
      r: []
      J: []
    d2r: []
     fc: []
    gdc: []
     fg: []
    cdc: []
     rJ: []

>> edit c_AFBHVT
```

The generated code can be found in the $TEMP directory. The objective in this case is linear and can be found in the Prob structure (*Prob.QP.c*).

Here is what the auto generated constraint code may look like:

```matlab
function out = c_AFBHVT(tempX, Prob)
% c_AFBHVT - Autogenerated file.
tempD = Prob.tomSym.c;
x3_p = reshape(tempX(183:243),61,1);
u_p  = reshape(tempX(1:60),60,1);
x2_p = reshape(tempX(122:182),61,1);
x1_p = reshape(tempX(61:121),61,1);
tempC7 = tempD{2}*x2_p;
tempC8 = tempC7.^2;
tempC10 = tempD{2}*x1_p;
out = [(tempD{3}-tempC8).*tempC10-tempC7+u_p-0.2*(tempD{1}*x1_p);...
    tempC10.^2+tempC8+u_p.^2-0.2*(tempD{1}*x3_p)];
```

And the objective function to optimize (in this case a simple linear objective already available in TOMLAB, hence not auto-generated but defined by the *Prob* field used):

```matlab
function f = lp_f(x, Prob)
f = Prob.QP.c'*x(:);
```

### Modeling

The PROPT system uses equations and expressions (collected in cells) to model optimal control problems. Equations must be written using the equality/inequality markers (==, <=, >=).

It is possible to include more than one equation on the same line. For example:

```matlab
toms a b c
cnbd = {a <= b <= c};
```

does the same job as:

```matlab
toms a b c
cnbd = {a <= b
    b <= c};
```

(The same is true for equalities.)

Be careful! This syntax is allowed in tomSym and PROPT because it is very common in mathematical notation, and allows for more compact code. Since it is not correct MATLAB notation, it will give unexpected results if the middle object is not a symbol. For example:

```matlab
a = 1; b = 2;
toms c
cbnd = {a <= b <= c}  % This gives an error!
```

When working with optimal control problems, one typically works with expressions that are valid at all collocation points. The functions *collocate* and *icollocate* can be used to extend an arbitrary expression to the necessary collocation points.

Consider for example the starting points:

```matlab
tomStates x1
tomControls u1
x0 = {icollocate(x1 == 1); collocate(u1 == 1)};
```

Note: icollocate, since it works with a state variable, also includes the starting point.

**Scale the problem:**

Proper scaling may speed up convergence, and conversely, improperly scaled problems may converge slowly or not at all. Both unknowns and equations can be scaled.

**Don't use inf in equations:**

It is strongly discouraged to use -inf/inf in equations. This is because the equation may be evaluated by subtracting its left-hand side from the right-hand side and comparing the result to zero, which could have unexpected consequences.

Equations like *x <= Inf* are better left out completely (although in many cases tomSym will know to ignore them anyway).

**Avoid using non-smooth functions:**

Because the optimization routines rely on derivatives, non-smooth functions should be avoided. A common example of a non-smooth function is the Heaviside function *H*, defined as *H*(*x*) = 1 for *x* > 0 and *H*(*x*) = 0 otherwise, which in Matlab code can be written as `(x > 0)`. A smoother expression with the same behavior for *x* >> *a* is, for example, the logistic function 1/(1 + exp(-*x*/*a*)), where *a* controls the width of the smoothed step.
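A minimal numeric comparison of the two (Python sketch; the logistic function used here is one common smooth approximation, an assumption rather than necessarily the expression intended in the original manual):

```python
import numpy as np

a = 0.1  # assumed smoothing width

def heaviside(x):
    # Non-smooth step: derivative is zero almost everywhere
    return (x > 0).astype(float)

def smooth_step(x, a):
    # Smooth logistic approximation of the step
    return 1.0 / (1.0 + np.exp(-x / a))

x = np.array([-1.0, 0.0, 1.0])
print(heaviside(x))       # [0. 0. 1.]
print(smooth_step(x, a))  # ≈ [0.0000, 0.5, 1.0000]: matches H(x) for |x| >> a
```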

When the problems are non-smooth in the independent variable (time, t), the problem should normally be separated into phases.

**Use descriptive names for equations, states, controls, and variables:**

The names used for states, controls, and variables make the code easier to read. Consider using names such as "speed", "position", "concentration", etc. instead of "x1", "x2", "x3".

The names used for equations do not matter in most cases. However, they will be useful when accessing Lagrange multipliers.

**Re-solve on successively finer grids:**

If a very fine grid is needed in order to obtain a good solution, it is usually a good idea to solve the problem on a less dense grid first, and then re-solve using the obtained solution as a starting point. The following code will do the trick:

```matlab
for n = [10 40]
    p = tomPhase('p', t, 0, 6, n);
    setPhase(p);
    tomStates x1 x2
    % Initial guess
    if n == 10
        x0 = {p1 == 0; p2 == 0};
    else
        x0 = {p1 == p1opt; p2 == p2opt
            icollocate({x1 == x1opt; x2 == x2opt})};
    end
    ...
    solution = ezsolve(objective, {cbox, cbnd, ceq}, x0, options);
    % Optimal x, p for starting point
    x1opt = subs(x1, solution);
    x2opt = subs(x2, solution);
    p1opt = subs(p1, solution);
    p2opt = subs(p2, solution);
end
```

See for example *denbighSystem.m* and *drugScheduling.m*.

#### Modeling notes

The examples included with the software in many cases just scratch the surface of the capabilities of PROPT. The software is designed for maximum user flexibility and with many "hidden" features. The following notes are worth keeping in mind when modeling optimal control problems.

**Left-hand side treatment:**

The left-hand side of an equation does not have to be a single time-derivative. Things like `collocate(m*dot(v) == F)` work just fine, even if m is a state variable (a constraint that prevents m from becoming too small may improve convergence).

**Second derivatives:**

Second derivatives are allowed, as in `collocate(dot(dot(x)) == a)`, although convergence is almost always better when extra states are introduced to avoid this.

**Equations:**

Equations do not have to include any time-derivative. For example `collocate(0.5*m*v^2 + m*g*h) == initial(0.5*m*v^2 + m*g*h)` will work just fine in the PROPT framework.

**Fixed end times:**

Problems with fixed end-times enable the use of any form of expression involving the collocation points in time, since the collocation points are pre-determined.

The following illustrates how t can (should) be used.

```matlab
toms t
% Will cause an error if myfunc is undefined for tomSym
myfunc(t);
% This works since collocate(t) is a double vector and not a
% tomSym object.
myfunc(collocate(t))
```

It is also worth noting that feval bypasses tomSym, so it is possible to call any function, at the expense of slower derivatives. In the case of a fixed end time, there are no derivatives:

```matlab
toms t
% Works since tomSym is bypassed.
collocate(feval('myfunc', t))
```

**Segmented constraints:**

With PROPT it is possible to define constraints that are valid for only a part of a phase. In addition, it is possible to define constraints at points other than the collocation points, which can assist in avoiding constraint violations in between the collocation points.

*mcollocate* automates this process for the entire phase and imposes twice the number of constraints, i.e. the solution will be constrained in between each collocation point as well.

The following example shows how to set segmented constraints (valid for a part of the phase) for time-free and time-fixed problems.

```matlab
% For variable end time (the number of constraints needs to be fixed).
% Relax all constraints below the cutoff, while 0 above.
con = {collocate((8*(t-0.5).^2-0.5-x2) >= -1e6*(t<0.4))};

% When the end time is fixed there are many possible solutions.
% One can use custom points or a set of collocation points if needed.
tsteps = collocate(t);
n = length(tsteps);
y = atPoints(p, tsteps(round(n/3):round(n/2)), (8*(t-0.5).^2-0.5-x2));
con = {y >= 0};

% Alternatively one could write for custom points:
con = atPoints(p, linspace(0.4, 0.5, 10), (8*(t-0.5).^2-0.5-x2) >= 0);
```

**Constant unknowns:**

It is possible to mix constant unknowns (created with "toms") into the equations, and those will become part of the solution: `icollocate(0.5*m*v^2 + m*g*h == Etot)`.

**Higher-index DAEs:**

Higher-index DAEs usually converge nicely with PROPT. One can use dot() on entire expressions to get symbolic time derivatives to use when creating lower index DAEs.

**Lagrangian equations:**

Deducing Lagrangian equations manually is usually not necessary, but if needed, one can use tomSym for derivatives. TomSym allows for computation of symbolic derivatives with respect to constant scalars and matrices using the "derivative" function, but this does not work for tomStates or tomControls. The current implementation is therefore to create expressions using normal tomSym variables (created with the "toms" command), use the derivative function and then use "subs" function to replace the constants with states and controls before calling collocate.

### Independent variables, scalars and constants

Independent variables can be defined as follows (yes, it is that simple!):

```matlab
toms t   % Time
toms tf  % Final time
% Phase name, time, start, end, number of nodes
p = tomPhase('p', t, 0, tf, 25);
cbox = {20 <= tf <= 30};  % Bounds on end time
x0 = {tf == 22};  % Starting point for end time (must be set before
                  % used in other expressions)
```

It is also possible to use scalar variables within the PROPT framework:

```matlab
toms p1
cbox = {1 <= p1 <= 2};
x0 = {p1 == 1.3};
```

where the expressions define the lower and upper bounds and a suitable initial guess.

A constant variable ki0 can be defined with the following statement (i.e. no difference from normal Matlab code):

ki0 = [1e3; 1e7; 10; 1e-3];

### State and control variables

The difference between state and control variables is that states are continuous between phases, while control variables can be discontinuous.

Examples:

Unconstrained state variable *x1* with a starting guess going from 0 to 1:

```matlab
tomStates x1
x0 = icollocate(x1 == t/tf);
```

State variable *x1* with bounds of 0.5 and 10. The starting guess should be set as well to avoid any potential singularities:

```matlab
tomStates x1
cbox = {0.5 <= icollocate(x1) <= 10};
```

Control variable T with a "table" defining the starting point:

```matlab
tomControls T
x0 = collocate(T == 273*(t<100) + 415*(t>=100));
```

### Boundary, path, event and integral constraints

Boundary constraints are defined as expressions, since this reduces the problem size. For example, if state variable *x1* has to start at 1 and end at 1, and *x2* has to travel from 0 to 2, define:

```matlab
cbnd1 = initial(x1 == 1);
cbnd2 = initial(x2 == 0);
cbnd3 = final(x1 == 1);
cbnd4 = final(x2 == 2);
```

A variety of path, event and integral constraints are shown below:

```matlab
x3min = icollocate(x3 >= 0.5);         % Path constraint
integral1 = {integrate(x3) - 1 == 0};  % Integral constraint
final3 = final(x3) >= 0.5;             % Final event constraint
init1 = initial(x1) <= 2.0;            % Initial event constraint
```

**NOTE**: When defining integral constraints that span several phases, use either of the following notations:

```matlab
integrate(p1,x3p1) + integrate(p2,x3p1) + integrate(p3,x3p1) <= 2.1e3
```

or with expressions:

```matlab
p1qx3 == integrate(p1,x3p1);
p2qx3 == integrate(p2,x3p1);
p3qx3 == integrate(p3,x3p1);
qx2sum = {p1qx3 + p2qx3 + p3qx3 <= 2.1e3};
```

## Multi-phase optimal control

*brysonDenhamTwoPhase.m* is a good example of multi-phase optimal control. The user simply has to link the states to complete the problem formulation.

```matlab
link = {final(p1,x1p1) == initial(p2,x1p2)
    final(p1,x2p1) == initial(p2,x2p2)};
```

It is important to keep in mind that the costs have to be individually defined for the phases in the case that an integrating function is used.

## Scaling of optimal control problems

Scaling of optimal control problems is best done after they are defined. The levels of the cost function, variables, states and controls should be balanced around 1.

PROPT does however, have an automated scaling feature that could be tested if needed. The option can be activated by setting:

```matlab
options.scale = 'auto';
```

*shuttleEntry.m* and *minimumClimbEng.m* exemplify how to use and enable the scaling of optimal control problems.

## Setting solver and options

The solver options can be set in the *options* structure. See *help ezsolve* for more information. The following code exemplifies some basic settings:

```matlab
options.solver = 'knitro';  % Change the solver
options.type = 'lpcon';     % Set the problem type to avoid identification
                            % (nonlinear problem with linear objective)
```

It is also possible to gain complete control over the solver by bypassing the *ezsolve* command and converting the problem to a standard TOMLAB Prob structure.

The following code will, for example, run vanDerPol (it is possible to edit the example directly of course), then convert the problem and run it with ALG 3 (SLQP) in KNITRO, and finally extract the standard solution.

```matlab
vanDerPol;
Prob = sym2prob(objective, {cbox, cbnd, ceq}, x0, options);
Prob.KNITRO.options.ALG = 3;
Result = tomRun('knitro', Prob, 1);
solution = getSolution(Result);
```

## Solving optimal control problems

The preceding sections show how to set up and define an optimal control problem using PROPT. Many additional features have been implemented, such as advanced input error identification, to facilitate fast modeling and debugging.

*(Table: PROPT functions for setting up an ODE.)*

### Advanced functions and operators

Some users may wish to use the more advanced functions of PROPT as well. The following functions can be useful in many situations.

#### atPoints - Expand a propt tomSym to a set of points on a phase.

y = atPoints(phase, t, x): for an m-by-n tomSym object x and a vector t of length p, returns a p-by-m*n tomSym with the values of x for each value of t (this is done via substitution).

If x is a cell array of tomSym objects, then atPoints is applied recursively to each element in the array.

If x is an equality or inequality, and the left-hand side is divided by a symbolic phase.tdelta, then both sides are multiplied by phase.tdelta.

See also: tomPhase

Overloaded methods: tomSym/atPoints

#### interp1p - Polynomial interpolation.

yi = interp1p(x,y,xi) fits a polynomial to the points (x,y) and evaluates that polynomial at xi to compute yi. The behavior of interp1p is different from that of interp1 in that the same high-degree polynomial is fitted to all points. This is usually not a good idea, and for general interpolation interp1 is to be preferred. However, it works well in some cases, such as when the points x are the roots of a Legendre polynomial.

Overloaded methods: tomSym/interp1p
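The remark about Legendre roots can be verified with a generic experiment (NumPy sketch, not the interp1p implementation): fit one high-degree polynomial to Runge's function at equispaced points and at Legendre-Gauss points, and compare the maximum interpolation error.

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)   # Runge's function
n = 15
xg, _ = np.polynomial.legendre.leggauss(n)  # roots of a Legendre polynomial
xe = np.linspace(-1.0, 1.0, n)              # equispaced points
xf = np.linspace(-1.0, 1.0, 1001)           # fine evaluation grid

def interp_error(x):
    # One degree-(n-1) polynomial through all the points, as interp1p does
    c = np.polyfit(x, f(x), deg=n - 1)
    return np.max(np.abs(np.polyval(c, xf) - f(xf)))

err_gauss = interp_error(xg)
err_equi = interp_error(xe)
print(err_gauss, err_equi)  # Gauss nodes behave well; equispaced nodes oscillate wildly
```

This is the Runge phenomenon: the same high-degree interpolation that fails at equispaced points works well at Legendre roots, which is why interp1p is suited to collocation grids.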

#### proptGausspoints - Generate a list of Gauss points.

[r, w] = proptGausspoints(n)

Input: n - the number of points requested.

Output: r - the Gauss points (roots of a Legendre polynomial of order n); w - the weights for Gaussian quadrature.
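Equivalent points and weights are available in NumPy, which makes it easy to check the defining property of Gaussian quadrature (a generic sketch, not the PROPT implementation): an n-point rule integrates polynomials up to degree 2n-1 exactly.

```python
import numpy as np

n = 3
r, w = np.polynomial.legendre.leggauss(n)  # roots of the order-n Legendre polynomial
# Exact for x^k, k <= 2n-1 = 5, on [-1, 1]:
for k in range(2 * n):
    exact = 0.0 if k % 2 else 2.0 / (k + 1)
    assert abs(np.dot(w, r ** k) - exact) < 1e-12
print("3-point rule exact through degree 5")
```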

#### proptDiffMatrix - Differentiation matrix for interpPoly.

M = proptDiffMatrix(X,XI,N) creates a matrix M of the values of the N:th derivatives of the interpolating polynomials defined by the roots X, at the points XI. Each column of M corresponds to a specific root, and each row corresponds to a component of XI.
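A generic differentiation matrix with the same layout (rows indexed by XI, columns by the roots X) can be built from a Vandermonde basis; this is a NumPy sketch under those assumptions, not the proptDiffMatrix source.

```python
import numpy as np

def diff_matrix(x, xi, N=1):
    # M[i, j] = N:th derivative at xi[i] of the j:th Lagrange basis polynomial
    # defined by the nodes x (columns correspond to nodes, rows to xi)
    n = len(x)
    k = np.arange(n)
    V = x[:, None] ** k                    # interpolation Vandermonde at the nodes
    Vd = np.zeros((len(xi), n))
    for j in range(N, n):                  # N:th derivative of each monomial x^j
        coef = np.prod(np.arange(j - N + 1, j + 1))
        Vd[:, j] = coef * xi ** (j - N)
    return Vd @ np.linalg.inv(V)

x, _ = np.polynomial.legendre.leggauss(5)
xi = np.linspace(-1.0, 1.0, 7)
M = diff_matrix(x, xi, N=1)
# Differentiating samples of x^3 gives 3x^2 exactly (degree < number of nodes)
err = np.max(np.abs(M @ x ** 3 - 3.0 * xi ** 2))
print(err)  # ~ machine precision
```

Matrices of this kind are the "constant matrices used for differentiation" mentioned among the built-in features above: once computed, applying them to the state values at the collocation points is a single matrix-vector product.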

### Screen output

The screen output depends on the functionality of the nonlinear programming solver. In most cases screen output can be generated with the following command:

```matlab
options.PriLevOpt = 1; % or higher for more output
```

A print level of 3 is recommended when using the global NLP solver *multiMin*.

### Solution structure

The solution structure from PROPT contains the relevant results.

It is possible to use the solution information to evaluate any custom expressions. For example:

```matlab
subs(x1.*x2, solution);
```

### Plotting

The best way to plot the data is to directly use the data fields as needed. There are also some automated plotting routines included, see for example *help tomSym/ezplot*.