srchbac

1-D minimization using backtracking

Syntax

[a,gX,perf,retcode,delta,tol] = srchbac(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)

Description

srchbac is a line search routine. It searches in a given direction to locate the minimum of the performance function in that direction. It uses a technique called backtracking.

[a,gX,perf,retcode,delta,tol] = srchbac(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf) takes these inputs,

net

Neural network

X

Vector containing current values of weights and biases

Pd

Delayed input vectors

Tl

Layer target vectors

Ai

Initial input delay conditions

Q

Batch size

TS

Time steps

dX

Search direction vector

gX

Gradient vector

perf

Performance value at current X

dperf

Slope of performance value at current X in direction of dX

delta

Initial step size

tol

Tolerance on search

ch_perf

Change in performance on previous step

and returns

a

Step size that minimizes performance

gX

Gradient at new minimum point

perf

Performance value at new minimum point

retcode

Return code with three elements. The first two elements are the numbers of function evaluations in the two stages of the search. The third element is a status code, with the possible values listed below (a small decoding sketch follows this list). Status codes have different meanings for different search algorithms, and some might not be used by this function.

 0  Normal
 1  Minimum step taken
 2  Maximum step taken
 3  Beta condition not met

delta

New initial step size, based on the current step size

tol

New tolerance on search
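
The three-element retcode output is convenient for logging how much work a search performed. The following sketch decodes it; the retcode value shown is made up for illustration, not taken from a real srchbac run.

retcode = [4 2 0];                  % 4 + 2 function evaluations, status 0
nEvals = retcode(1) + retcode(2);   % total function evaluations
switch retcode(3)
    case 0, status = 'normal';
    case 1, status = 'minimum step taken';
    case 2, status = 'maximum step taken';
    case 3, status = 'beta condition not met';
end
fprintf('%d evaluations, %s\n', nEvals, status);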

Parameters used for the backtracking algorithm are

alpha

Scale factor that determines sufficient reduction in perf

beta

Scale factor that determines sufficiently large step size

low_lim

Lower limit on change in step size

up_lim

Upper limit on change in step size

maxstep

Maximum step length

minstep

Minimum step length

scale_tol

Parameter that relates the tolerance tol to the initial step size delta, usually set to 20

The defaults for these parameters are set in the training function that calls them. See traincgf, traincgb, traincgp, trainbfg, and trainoss.
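
For example, the following sketch selects srchbac as the line search for trainbfg and overrides some of these parameters through net.trainParam. It assumes Deep Learning Toolbox and its simplefit_dataset sample data; the parameter values are illustrative, not recommendations.

[x, t] = simplefit_dataset;            % sample regression data
net = feedforwardnet(10, 'trainbfg');  % quasi-Newton training
net.trainParam.searchFcn = 'srchbac';  % use backtracking line search
net.trainParam.alpha = 0.001;          % sufficient-reduction scale factor
net.trainParam.low_lim = 0.1;          % lower limit on step-size change
net.trainParam.up_lim = 0.5;           % upper limit on step-size change
net = train(net, x, t);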

Dimensions for these variables are

Pd

Nl-by-Ni-by-TS cell array

Each element Pd{i,j,ts} is a Dij-by-Q matrix.

Tl

Nl-by-TS cell array

Each element Tl{i,ts} is a Vi-by-Q matrix.

Ai

Nl-by-LD cell array

Each element Ai{i,k} is an Si-by-Q matrix.

where

Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
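
As a quick check of this bookkeeping, the following sketch (assuming Deep Learning Toolbox) configures a small static network and reads off several of the quantities above.

net = feedforwardnet(10);                    % one hidden layer of 10 neurons
net = configure(net, rand(3,5), rand(1,5));  % 3-element inputs, Q = 5
Ni = net.numInputs;       % 1
Nl = net.numLayers;       % 2
LD = net.numLayerDelays;  % 0 for a static network
Si = net.layers{1}.size;  % 10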

More About

Backtracking Search

The backtracking search routine srchbac is best suited for use with the quasi-Newton optimization algorithms. It begins with a step multiplier of 1 and then backtracks until an acceptable reduction in performance is obtained. On the first step it uses the value of the performance function at the current point and at a step multiplier of 1, together with the derivative of performance at the current point, to obtain a quadratic approximation to the performance function along the search direction. The minimum of the quadratic approximation becomes a tentative optimum point (under certain conditions), and the performance at this point is tested. If the performance is not sufficiently reduced, a cubic interpolation is obtained, and the minimum of the cubic interpolation becomes the new tentative optimum point. This process continues until a sufficient reduction in performance is obtained.
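
To make the quadratic and cubic steps concrete, here is a minimal standalone sketch of the general backtracking technique from Dennis and Schnabel, not the toolbox implementation. Here f is the performance function, x the current weight vector, d the search direction, fx = f(x), dfx the (negative) directional derivative of f at x along d, and alpha the sufficient-reduction scale factor described above.

function lambda = backtrack(f, x, d, fx, dfx, alpha)
% Illustrative quadratic/cubic backtracking line search (a sketch, not srchbac).
lambda = 1;                            % start with a full step
f1 = f(x + lambda*d);
while f1 > fx + alpha*lambda*dfx       % sufficient-reduction test
    if lambda == 1
        % First backtrack: minimize the quadratic fitted to fx, dfx,
        % and the full-step value f1.
        lamNew = -dfx / (2*(f1 - fx - dfx));
    else
        % Later backtracks: minimize the cubic fitted to the two most
        % recent trial points (Dennis and Schnabel, p. 126).
        r = [f1 - fx - lambda*dfx; f2 - fx - lamPrev*dfx];
        ab = [1/lambda^2, -1/lamPrev^2; -lamPrev/lambda^2, lambda/lamPrev^2] ...
            * r / (lambda - lamPrev);
        a = ab(1); b = ab(2);
        if a == 0
            lamNew = -dfx / (2*b);     % cubic degenerated to a quadratic
        else
            lamNew = (-b + sqrt(b^2 - 3*a*dfx)) / (3*a);
        end
    end
    lamPrev = lambda;
    f2 = f1;
    lambda = min(max(lamNew, 0.1*lambda), 0.5*lambda);  % clamp the change
    f1 = f(x + lambda*d);
end
end

For example, backtrack(@(z) sum(z.^2), [3; 4], [-3; -4], 25, -50, 0.001) returns 1, because the full step already satisfies the sufficient-reduction test.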

The backtracking algorithm is described in Dennis and Schnabel. It is used as the default line search for the quasi-Newton algorithms, although it might not be the best technique for all problems.

Algorithms

srchbac locates the minimum of the performance function in the search direction dX, using the backtracking algorithm described on pages 126 and 328 of Dennis and Schnabel's book, cited below.

References

Dennis, J.E., and R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs, NJ: Prentice-Hall, 1983.

Version History

Introduced before R2006a
