# Improving the speed of tomSym

Speed is often important when solving optimization problems. The main advantage of tomSym is that it allows for high overall speed: the time from when you start coding until the solution is presented should be as short as possible. Often, much more time is spent coding than numerically solving the problem, so emphasis should be placed on writing code that is easy to read and debug. Sometimes, however, the time used by the numerical solver is critical, and this section gives some hints on how to make it run faster.

### Pre-process the symbolic problem

See `tomsym_mpc.m`

### Avoid loops

Manipulating tomSym expressions inside a loop can be very slow. For example, the following loop generates a long symbolic expression.

```matlab
>> v = tom('v',1,10);
>> s = 0;
>> for i = 1:length(v); s = s+v(i); end
>> disp(s);
((((((((v(1)+v(2))+v(3))+v(4))+v(5))+v(6))+v(7))+v(8))+v(9))+v(10)
```

In this case, the loop can simply be replaced by `s = sum(v)`, which will run much faster.
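The same idea applies to accumulations that are not plain sums. As a further illustration (a sketch; the weights `w` are hypothetical example data), a weighted sum built term by term in a loop can be replaced by a single matrix product:

```matlab
% Slow: builds a deeply nested symbolic expression, one term per iteration.
v = tom('v',1,10);
w = 1:10;            % numeric weights (example data)
s = 0;
for i = 1:length(v)
    s = s + w(i)*v(i);
end

% Fast: one compact symbolic expression using a matrix product.
s = v*w';            % v is 1-by-10, w' is 10-by-1
```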

### Use separate symbols for separate things

It is common in some optimization frameworks to use a single vector `x` for all the optimization variables, and then use indexes to represent different things (`x(1)` is position in m, `x(2)` is speed in m/s, etc.).

Doing this in tomSym is a bad idea for several reasons. The generated code will be less efficient, because tomSym tries to generate derivatives with respect to the whole vector even though it is only used one element at a time. Automatic scaling will not work as expected, because tomSym tries to scale the entire vector by one factor, when the individual elements may have different magnitudes.

Also, it is much easier to read code that uses `x` for position and `v` for speed, than code that uses `x` for both.
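As a sketch (the variable names here are hypothetical), compare the two declaration styles:

```matlab
% Discouraged: one vector for everything, indexed by convention.
z = tom('z',2,1);
position = z(1);     % meters
speed    = z(2);     % m/s

% Preferred: one descriptively named symbol per physical quantity.
x = tom('x');        % position (m)
v = tom('v');        % speed (m/s)
```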

### Don't vectorize expressions that don't belong together

It is faster to say `{ x >= 0, cos(x) >= 0.5 }` than `[x; cos(x)] >= [0; 0.5]`.

Here we have one linear constraint and one nonlinear constraint. Normally, the solver will handle these separately, and linear constraints are treated much more efficiently than nonlinear ones. But if they are combined into a nonlinear constraint with two components, then both are handled by the nonlinear part of the solver.
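In code, the two formulations look as follows (a sketch, assuming a scalar symbol `x` created with `tom`):

```matlab
x = tom('x');

% Preferred: keep the constraints separate, so the solver can treat
% x >= 0 as a simple linear bound.
cons = { x >= 0, cos(x) >= 0.5 };

% Discouraged: stacking them makes both components nonlinear.
cons = { [x; cos(x)] >= [0; 0.5] };
```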

### Avoid using `abs` and `max`

It is much more efficient to say `{ v >= -1, v <= 1 }` than `{ max(abs(v)) <= 1 }`. Again, this is because linear constraints are treated more efficiently than nonlinear ones. For the same reason it is faster to minimize `y` under the constraint `{ v >= -y, v <= y }` than to minimize `max(abs(v))`.

The `rewriteV` function (called by `ezsolve`) attempts to get rid of instances of `abs` and `max`, but it is only able to handle simple cases.
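The minimax reformulation described above can be sketched like this (the `ezsolve` calls are shown commented out, since the remaining problem data is hypothetical):

```matlab
v = tom('v',1,10);   % quantity whose worst-case magnitude we minimize
y = tom('y');        % auxiliary bound variable

% Discouraged: a nonlinear, nonsmooth objective.
% solution = ezsolve(max(abs(v)), cons);

% Preferred: minimize the bound y subject to linear constraints
% -y <= v <= y (plus any problem-specific constraints).
cons = { v >= -y, v <= y };
% solution = ezsolve(y, cons);
```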

### Don't put equations in the objective

Using an objective that requires solving equations, such as `sum((A\b).^2)`, is very inefficient. A better way to handle this is to rewrite the objective as `sum(x.^2)` and add the constraint `A*x == b`. This formulation gives much more information to the solver, and avoids having to compute the derivative of the solution to an equation (which is numerically unstable).

For this reason, functions like `fzero` should never be used with tomSym. The function `tomfzero` can often be used in its place.
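The reformulation above can be sketched as follows (the data `A` and `b` are hypothetical examples; for a square, nonsingular `A` the constraint pins `x` to the same value as `A\b`):

```matlab
A = [2 1; 1 3];  b = [1; 2];   % example data
x = tom('x',2,1);

% Discouraged: the objective itself solves a linear system.
% objective = sum((A\b).^2);

% Preferred: make x an explicit unknown and hand the equations
% to the solver as a linear constraint.
objective = sum(x.^2);
cons = { A*x == b };
% solution = ezsolve(objective, cons);
```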

### Balance the scale

Equations like `a/1000000 + b == 1` don't work well with numeric solvers, because `a` and `b` will have very different magnitudes in the solution.

One way to overcome this problem is to set `OPTIONS.scale = 'auto'`, which will cause `ezsolve` to try to guess a scaling factor for each unknown. This works best if each variable has been given reasonable upper and lower bounds.

Another way to fix the problem is to use scaled variables from the start. For example, if we use the scaled variable `as`, and let `a` equal `1000000*as`, then the equation becomes `as + b == 1`, which is well-scaled.
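The manual scaling described above can be sketched like this (variable names as in the example):

```matlab
as = tom('as');      % scaled variable, expected to be of order 1
b  = tom('b');
a  = 1000000*as;     % recover the original quantity where needed

% Well-scaled version of a/1000000 + b == 1:
cons = { as + b == 1 };
```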

### Multiply rather than divide

Multiplication is "less nonlinear" than division. Hence, equations like `m*x == f` typically result in better convergence than `x == f/m`. This is particularly true for matrices: `M*x == f` can be much faster than `x == M\f`.

This method also prevents the problem where the solver shuts down because of a division-by-zero. (However, if `m` is zero then `m*x == f` reduces to `0 == f`, which any `x` satisfies whenever `f == 0`, so watch out!)
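As a sketch (the dimensions and data here are hypothetical), the matrix case looks like this:

```matlab
M = [4 1 0; 1 3 1; 0 1 2];   % example data (could also be symbolic)
x = tom('x',3,1);
f = tom('f',3,1);

% Preferred: multiplicative form, linear in x for known M.
cons = { M*x == f };

% Discouraged: requires inverting M inside the expression.
% cons = { x == M\f };
```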