==Solving Least Squares and Parameter Estimation Problems==

This section describes how to define and solve different types of linear and nonlinear least squares and parameter estimation problems. Several examples show how to proceed, depending on whether a quick solution is wanted or more advanced tests are needed. TOMLAB is also compatible with the MathWorks Optimization Toolbox. See Appendix E for more information and test examples.

All demonstration examples that use the TOMLAB format are collected in the directory examples. The examples relevant to this section are lsDemo and llsDemo. The full paths to these files are always given in the text.

The section Large Scale LS Problems with Tlsqr below contains information on solving extremely large linear least squares problems with Tlsqr.


===Linear Least Squares Problems===

This section shows examples of how to define and solve linear least squares problems using the TOMLAB format. As a first illustration, the example lls1Demo in file llsDemo shows how to fit a linear least squares model with linear constraints to given data. This test problem is taken from the User's Guide of LSSOL [29].

Name='LSSOL  test  example';

%  In TOMLAB it is best to use Inf and -Inf, not  big  numbers. 
n = 9; 	% Number  of  unknown  parameters
x_L = [-2 -2  -Inf, -2*ones(1,6)]';
x_U = 2*ones(n,1);

A       = [ ones(1,8) 4;  1:4,-2,1 1 1 1;  1 -1  1 -1, ones(1,5)];
b_L = [2     -Inf  -4]';
b_U = [Inf     -2  -2]';

y = ones(10,1);
C   = [ ones(1,n); 1 2 1 1 1 1 2 0 0;  1 1 3 1 1 1 -1 -1 -3; ...
        1 1 1 4 1 1 1 1 1; 1 1 1 3 1 1 1 1 1; 1 1 2 1 1 0 0 0 -1; ...
        1 1 1 1 0 1 1 1 1; 1 1 1 0 1 1 1 1 1; 1 1 0 1 1 1 2 2 3;  ...
        1 0 1 1 1 1 0 2 2];

x_0 = 1./[1:n]';

t           = [];   %  No time set for y(t) (used  for plotting)
weightY     = [];   %  No weighting 
weightType  = [];   %  No weighting type  set

x_min	    = [];   %  No lower bound for plotting 
x_max	    = [];   %  No upper bound for plotting

Prob   = llsAssign(C, y, x_L, x_U, Name, x_0, t, weightType, weightY, ...
                   A, b_L, b_U, x_min, x_max);
Result = tomRun('lsei', Prob, 2);

It is trivial to change the solver in the call to tomRun to a nonlinear least squares solver, e.g. clsSolve, or a general nonlinear programming solver.
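
For example, the same Prob structure can be handed to a different solver simply by changing the solver name in the tomRun call. A minimal sketch (which solvers are available depends on the installed TOMLAB license):

Result = tomRun('clsSolve', Prob, 2); % nonlinear least squares solver
Result = tomRun('snopt', Prob, 2);    % general NLP solver, if licensed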

===Linear Least Squares Problems using the SOL Solver LSSOL===

The example lls2Demo in file llsDemo shows how to fit a linear least squares model with linear constraints to given data using a direct call to the SOL solver LSSOL. The test problem is taken from the User's Guide of LSSOL [29].

%   Note that when calling the  LSSOL   MEX  interface directly,  avoid  using
%   Inf and -Inf. Instead use big  numbers that indicate Inf.
%   The standard  for the  MEX  interfaces is 1E20 and -1E20,  respectively.

n = 9; % There are  nine  unknown  parameters, and 10 equations 
x_L = [-2 -2  -1E20,  -2*ones(1,6)]';
x_U = 2*ones(n,1);

A   = [ ones(1,8) 4;  1:4,-2,1 1 1 1;  1 -1  1 -1, ones(1,5)];
b_L = [2 	-1E20 -4]';
b_U = [1E20	   -2 -2]';
% Must put lower and upper bounds on variables and constraints together 
bl = [x_L; b_L];
bu = [x_U; b_U];

H   = [ ones(1,n); 1 2 1 1 1 1 2 0 0;  1 1 3 1 1 1 -1  -1  -3; ...
        1 1 1 4 1 1 1 1 1;1  1 1 3 1 1 1 1 1;1  1 2 1 1 0 0 0 -1; ...
        1 1 1 1 0 1 1 1 1;1  1 1 0 1 1 1 1 1;1  1 0 1 1 1 2 2 3;  ...
        1 0 1 1 1 1 0 2 2];
y   = ones(10,1);

x_0 = 1./[1:n]';

% Set  empty indicating default values  for most variables
c         = [];	          % No linear coefficients, they are for LP/QP 
Warm	  = [];	          % No  warm  start
iState	  = [];	          % No  warm  start
Upper	  = [];	          % C   is not  factorized 
kx 	  = [];	          % No  warm  start
SpecsFile = [];	          % No   parameter  settings in a SPECS  file 
PriLev 	  = [];	          % PriLev  is not  really used in LSSOL 
ProbName  = [];           % ProbName  is not  really used in LSSOL 
optPar(1) = 50;           % Set  print level at  maximum
PrintFile  = 'lssol.txt'; % Print result on the  file with  name  lssol.txt

z0 = (y-H*x_0);
f0  = 0.5*z0'*z0;
fprintf('Initial function value  %f\n',f0);

[x,  Inform, iState, cLamda, Iter, fObj, r, kx]  = ... 
     lssol( A,  bl, bu,  c, x_0,  optPar, H,  y, Warm, ...
     iState, Upper,  kx,  SpecsFile, PrintFile,  PriLev, ProbName );

%  We could equally well use the following shorter call:
%  [x,  Inform, iState, cLamda, Iter, fObj, r, kx]  = ...
%       lssol( A,  bl, bu,  c, x_0, optPar, H,  y);

z = (y-H*x);
f = 0.5*z'*z;
fprintf('Optimal  function value  %f\n',f);

===Nonlinear Least Squares Problems===

This section shows examples of how to define and solve nonlinear least squares problems using the TOMLAB format. As a first illustration, the example ls1Demo in file lsDemo shows how to fit a nonlinear model of exponential type with three unknown parameters to experimental data. This problem, Gisela, is also defined as problem three in ls_prob. A weighting parameter K is sent to the residual and Jacobian routines using the Prob structure. The solver clsSolve is called directly. Note that the user only defines the routine to compute the residual vector and the Jacobian matrix of derivatives. TOMLAB has special routines ls_f, ls_g and ls_H that compute the nonlinear least squares objective function value, given the residuals, as well as the gradient and the approximate Hessian, see Table 39. The residual routine for this problem is defined in file ls1_r in the directory examples with the statements

function r = ls1_r(x, Prob)

% Compute  residuals to  nonlinear least squares  problem  Gisela

% US_A is the  standard  TOMLAB  global parameter  to  be used in the
% communication  between the  residual and the  Jacobian  routine 

global US_A

% The extra weight parameter  K  is sent as part of the structure
K = Prob.user.K;
t = Prob.LS.t(:); % Pick  up the  time  points

% Exponential  expressions to be later used when computing the Jacobian
US_A.e1 = exp(-x(1)*t);
US_A.e2 = exp(-x(2)*t);

r = K*x(1)*(US_A.e2 - US_A.e1) / (x(3)*(x(1)-x(2))) - Prob.LS.y;

Note that this example also shows how to communicate information between the residual and the Jacobian routine. It is best to use one of the predefined global variables US_A and US_B, because then there will be no conflicts with respect to global variables if recursive calls are used. In this example the global variable US_A is used as a structure storing two vectors of exponential expressions. The Jacobian routine for this problem is defined in the file ls1_J in the directory examples. The global variable US_A is accessed to obtain the exponential expressions, see the statements below.

function J = ls1_J(x,  Prob)

% Computes the Jacobian to least squares problem Gisela. J(i,j) is dr_i/dx_j
% Parameter K is input in the structure Prob
a = Prob.user.K * x(1)/(x(3)*(x(1)-x(2)));
b      = x(1)-x(2);
t      = Prob.LS.t;

% Pick up the globally saved exponential computations 
global US_A
e1 = US_A.e1; e2 = US_A.e2;

% Compute the three columns in the Jacobian, one for each variable
J = a * [ t.*e1+(e2-e1)*(1-1/b), -t.*e2+(e2-e1)/b, (e1-e2)/x(3) ];

The following statements solve the ''Gisela'' problem.

%   ---------------------------------------------------------------------
%   ls1Demo - Nonlinear parameter estimation with 3 unknowns
%   ---------------------------------------------------------------------

Name='Gisela';

% Time values
t = [0.25; 0.5; 0.75; 1;  1.5; 2;  3;  4;  6;  8;  12;  24;  32;  48;  54;  72;  80;...
     96;  121;  144;  168;  192;  216;  246;  276;  324;  348;  386];

%   Observations
y = [30.5; 44;  43;  41.5; 38.6; 38.6; 39;  41;  37;  37;  24;  32;  29;  23;  21;...
      19;  17;  14;  9.5; 8.5; 7;  6;  6;  4.5; 3.6; 3;  2.2; 1.6];

x_0 = [6.8729, 0.0108, 0.1248]'; %   Initial values for unknown x

%   Generate the  problem  structure using  the  TOMLAB  format  (short call)
%   Prob = clsAssign(r, J, JacPattern, x_L,  x_U, Name, x_0,  ...
%	             y, t, weightType,   weightY, SepAlg,  fLowBnd, ...
%	             A,  b_L,  b_U, c, dc,  ConsPattern, c_L,  c_U, ...
%	             x_min,  x_max, f_opt, x_opt);

Prob = clsAssign('ls1_r', 'ls1_J', [], [], [], Name, x_0, y, t);

% Weighting  parameter  K  in model is sent  to  r and J computation  using  Prob
Prob.user.K = 5;

Result = tomRun('clsSolve', Prob, 2);

The second example ls2Demo in file lsDemo solves the same problem as ls1Demo, but using numerical differences to compute the Jacobian matrix in each iteration. To make TOMLAB avoid using the Jacobian routine, the variable Prob.NumDiff has to be set nonzero. Also in this example the flag Prob.optParam.IterPrint is set to enable one line of printing for each iteration. The changed statements are

...
Prob.NumDiff            = 1; % Use standard  numerical differences
Prob.optParam.IterPrint = 1; % Print one line each iteration

Result    = tomRun('clsSolve',Prob,2);

The third example ls3Demo in file lsDemo solves the same problem as ls1Demo, but six times for different values of the parameter K in the range [3.8, 5.0]. It illustrates that it is not necessary to remake the problem structure Prob for each optimization; instead just change the parameters needed. The Result structure is saved as a vector of structures, to enable post analysis of the results. The changed statements are

for i=1:6
   Prob.user.K = 3.8 + 0.2*i;
   Result(i)   = tomRun('clsSolve', Prob, 2);
end

fprintf('\nWEIGHT PARAMETER  K  is %9.3f\n\n\n',Prob.user.K);
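
Since each run's Result is kept, the individual solutions can be compared afterwards. A minimal post-analysis sketch, assuming the standard TOMLAB Result fields x_k (solution) and f_k (objective value):

for i=1:6
   fprintf('K = %5.2f  f(x*) = %12.6f\n', 3.8 + 0.2*i, Result(i).f_k);
   disp(Result(i).x_k'); % estimated parameters for this K
end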

Table 39 describes the low level routines and the initialization routines needed for the predefined constrained nonlinear least squares (cls) test problems. Similar routines are needed for the nonlinear least squares (ls) test problems (here no constraint routines are needed).

Table 39: Constrained nonlinear least squares (cls) test problems.
Function Description
cls_prob Initialization of cls test problems.
cls_r Compute the residual vector r_i(x), i = 1, ..., m, x ∈ R^n, for cls test problems.
cls_J Compute the Jacobian matrix J_ij(x) = ∂r_i/∂x_j, i = 1, ..., m, j = 1, ..., n, for cls test problems.
cls_c Compute the vector of constraint functions c(x) for cls test problems.
cls_dc Compute the matrix of constraint normals ∂c(x)/∂x for cls test problems.
cls_d2c Compute the second part of the second derivative of the Lagrangian function for cls test problems.
ls_f General routine to compute the objective function value f(x) = 0.5 r(x)^T r(x) for nonlinear least squares type of problems.
ls_g General routine to compute the gradient g(x) = J(x)^T r(x) for nonlinear least squares type of problems.
ls_H General routine to compute the Hessian approximation H(x) = J(x)^T J(x) for nonlinear least squares type of problems.
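
These general routines can also be called directly once a problem structure exists. A minimal sketch, assuming the standard TOMLAB low-level signature f = ls_f(x, Prob) (and similarly for ls_g), applied to the Gisela problem defined above:

f0 = ls_f(x_0, Prob);   % objective value 0.5*r(x_0)'*r(x_0)
g0 = ls_g(x_0, Prob);   % gradient J(x_0)'*r(x_0)
fprintf('f(x_0) = %f, norm(g(x_0)) = %f\n', f0, norm(g0));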

===Fitting Sums of Exponentials to Empirical Data===

In TOMLAB the problem of fitting sums of positively weighted exponential functions to empirical data may be formulated either as a nonlinear least squares problem or a separable nonlinear least squares problem [66]. Several empirical data series are predefined, and artificial data series may also be generated. There are five different types of exponential models with special treatment in TOMLAB, shown in Table 40. In research in cooperation with Todd Walton, Vicksburg, USA, TOMLAB has been used to estimate parameters using maximum likelihood in simulated Weibull distributions, and in Gumbel and Gamma distributions with real data. TOMLAB has also been useful for parameter estimation in stochastic hydrology using real-life data.

Table 40: Exponential models treated in TOMLAB.
(The five model formulas in this table were rendered as images on the original page and are not reproduced here.)

Algorithms to find starting values for different numbers of exponential terms are implemented. Test results show that these initial value algorithms come very close to the true solution for equidistant problems and are fairly good for non-equidistant problems; see the thesis by Petersson [61]. Good initial values are extremely important when solving real life exponential fitting problems, because these problems are so ill-conditioned. Table 41 shows the relevant routines.


Table 41: Exponential fitting test problems.
Function Description
expAssign Assign exponential fitting problem.
exp_ArtP Generate artificial exponential sum problems.
expInit Find starting values for the exponential parameters λ.
expSolve Solve exponential fitting problems.
exp_prob Defines an exponential fitting type of problem, with data series (t, y). The file includes data from several different empirical test series.
Helax_prob Defines 335 medical research problems supplied by Helax AB, Uppsala, Sweden, where an exponential model is fitted to data. The actual data series (t, y) are stored in one file each, i.e. 335 data files of together 8 MB, and are not distributed. A sample of five similar files is part of exp_prob.
exp_r Compute the residual vector r_i(x), i = 1, ..., m, x ∈ R^n.
exp_J Compute the Jacobian matrix ∂r_i/∂x_j, i = 1, ..., m, j = 1, ..., n.
exp_d2r Compute the second part of the second derivative for the nonlinear least squares exponential fitting problem.
exp_c Compute the constraints λ_1 < λ_2 < ... on the exponential parameters λ_i, i = 1, ..., p.
exp_dc Compute the matrix of constraint normals for the constrained exponential fitting problem.
exp_d2c Compute the second part of the second derivative matrix of the Lagrangian function for the constrained exponential fitting problem. This is a zero matrix, because the constraints are linear.
exp_q Find starting values for the exponential parameters λ_i, i = 1, ..., p.
exp_p Find the optimal number of exponential terms, p.
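
As an illustration of the general pattern, a sum of two exponentials can also be fitted by posing it directly as a nonlinear least squares problem with clsAssign, exactly as in the Gisela example above. This is a minimal sketch under stated assumptions: the residual file exp2_r is hypothetical and must be saved on the MATLAB path, and the data here are synthetic.

function r = exp2_r(x, Prob)
% Hypothetical residual routine: two-term model a1*exp(-b1*t) + a2*exp(-b2*t)
% with x = [a1 b1 a2 b2]'
t = Prob.LS.t(:);
r = x(1)*exp(-x(2)*t) + x(3)*exp(-x(4)*t) - Prob.LS.y;

The driving script could then read:

t    = (0:20)';                     % synthetic time points
y    = 3*exp(-0.5*t) + exp(-0.1*t); % synthetic observations
x_0  = [1 1 1 1]';                  % crude starting values
Prob = clsAssign('exp2_r', [], [], zeros(4,1), [], 'exp2', x_0, y, t);
Prob.NumDiff = 1;                   % no Jacobian routine: numerical differences
Result = tomRun('clsSolve', Prob, 2);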

The algorithmic development implemented in TOMLAB is further discussed in [49]. An overview of the field is also given in this reference.

===Large Scale LS Problems with Tlsqr===

The Tlsqr MEX solver provides special parameters for advanced memory handling, enabling the user to solve extremely large linear least squares problems.

We take the problem of solving Ax = b in the least squares sense, i.e. minimizing ||Ax - b||, as the prototype problem for this section. Here, A ∈ R^(m×n) is a dense or sparse matrix and b ∈ R^m.

====Controlling memory allocation in Tlsqr====

The normal mode of operation of Tlsqr is that memory for the A matrix is allocated and deallocated each time the solver is called. In a real-life situation with a very large A where the solver is called repeatedly, this may become inefficient and may even cause allocation failures because of memory fragmentation.

The Tlsqr solver provides a parameter Alloc, given as the second element of the first input parameter, to control the memory handling. The possible values of Alloc and their meanings are given in Table 42.


Table 42: Alloc values for Tlsqr
Alloc (m(2)) Meaning
0 Normal operation: allocate - solve - deallocate
1 Only allocate, no results returned
2 Allocate and solve, no deallocate
3 Only solve, no allocate/deallocate
4 Solve and deallocate
5 Deallocate only, no results returned


An example of the calling sequence is given below.

>> m = 60000;  n = 500;  d = 0.01;    % Size and density of A
>> A  = sprand(m,n,d);	              % Sparse random matrix
>> b = ones(m,1);	              % Right  hand side
>> whos A

  Name       Size         Bytes     Class
  A          60000x500    3584784   sparse array

Grand total is 298565 elements using 3584784 bytes

%   =======================================================================
%   Simple standard call to Tlsqr, Alloc is set to default 0 if m is scalar

>> x=Tlsqr(m,n,A,[],[],b);

% =======================================================================
% To solve  repeatedly with  e.g. the  same  A  but  different b,
% the  user  may  do:

% Indicate to Tlsqr to allocate and solve the problem

>> m(2) = 2 
m =
      60000	2

>> x = Tlsqr(m,n,A,[],[],b); % First solution

%   Indicate to  Tlsqr that memory  is  already allocated,
%   and that no deallocation should  occur  on exit

>> m(2) = 3 
m =
      60000	3

% Loop 100 times, calling  Tlsqr each time  - without re-allocation of  memory

>> for  k=1:100
>>	b = (...);	          % E.g. alter the right hand side each time
>>	x = Tlsqr(m,n,A,[],[],b); % Call Tlsqr, now with m(2)=3
>> end

% Final call, with m(2) = 4: Solve and deallocate

>> m(2) = 4 
m =
      60000	4

>> x=Tlsqr(m,n,A,[],[],b);

%   Alternatively, to  just deallocate, the  user  could  do
>> m(2) = 5;
>> Tlsqr(m,n,A,[],[],b); % Nothing is returned

Further Memory Control: The maxneA Parameter

If the number of non-zero elements in a sparse A matrix increases in the middle of a Tlsqr-calling loop, the initially allocated space will not be sufficient. One solution is that the user checks this prior to calling Tlsqr and reallocates if necessary. The other solution is to set m(3) to an upper limit (maxneA) on the number of nonzero elements in A in the first allocation call. For example:

>> m = [ 60000 1 1E6 ]

m =
    60000	1	1000000

will initiate a Tlsqr session, allocating sufficient memory to allow A matrices with up to 1,000,000 nonzeros. If the allocated memory is still insufficient, Tlsqr will try to reallocate enough space for the operation to continue.
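
The manual check-and-reallocate alternative can be sketched as follows, reusing the Alloc values from Table 42 (the growth factor and the variable maxneA are illustrative assumptions):

>> if nnz(A) > maxneA                   % current allocation too small
>>    m(2) = 5; Tlsqr(m,n,A,[],[],b);   % deallocate the old workspace
>>    maxneA = 2*nnz(A);                % hypothetical new upper limit
>>    m(2) = 1; m(3) = maxneA;          % allocate only, with larger maxneA
>>    Tlsqr(m,n,A,[],[],b);
>>    m(2) = 3;                         % back to solve-only mode
>> end
>> x = Tlsqr(m,n,A,[],[],b);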

====Using Global Variables with Tlsqr and Tlsqrglob.m====

For cases where it is not possible to send the A matrix to Tlsqr because it is simply too large, the user may choose to use the tomlab/mex/Tlsqrglob.m routine.

This function, which more often than not needs to be customized to the application at hand, should provide the following functionality:

function y = Tlsqrglob( mode, m, n,  x, Aname, rw )

global A

if mode==1 
  y = A*x; 
else
  y = A'*x;
end

The purpose is to make it possible to define a global variable A and perform the multiplication without transferring this potentially very large matrix to the MEX function Tlsqr.

If several matrices are involved, for example if A = [A1; A2], this approach can be used to eliminate the need to repeatedly form the composite matrix A explicitly during a run. Tlsqrglob.m should then be (copied and) modified as:

function y = Tlsqrglob( mode, m, n,  x, Aname, rw )

global A1 A2 

if mode==1
  y = A1*x;
  y = [y ; A2*x];
else
  M = size(A1,1);
  y = A1' * x(1:M) + ...
      A2' * x(M+1:end);
end

To use the global approach, Tlsqr must be called with the name of the global multiplication routine, for example:

[ x, ... ] = Tlsqr(m,n,'Tlsqrglob',...);
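
A complete calling sequence under this approach might look as follows. This is a minimal sketch under stated assumptions: the remaining Tlsqr arguments are passed as in the earlier examples, and the global variable name must match the one used inside Tlsqrglob.m.

>> global A
>> A = sprand(60000, 500, 0.01);   % build the large matrix once
>> b = ones(60000, 1);
>> x = Tlsqr(60000, 500, 'Tlsqrglob', [], [], b);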