
trainbu

Batch unsupervised weight/bias training

Syntax

net.trainFcn = 'trainbu'
[net,tr] = train(net,...)

Description

trainbu trains a network with weight and bias learning rules using batch updates. Weight and bias updates occur at the end of an entire pass through the input data.

trainbu is not called directly. Instead, the train function calls it for networks whose NET.trainFcn property is set to 'trainbu', thus:

net.trainFcn = 'trainbu' sets the network trainFcn property.

[net,tr] = train(net,...) trains the network with trainbu.
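For example, a self-organizing map created with selforgmap can be trained this way. The map dimensions and the random input data below are arbitrary illustrations:

x = rand(2,200);           % example unsupervised input vectors (no targets)
net = selforgmap([3 3]);   % standard network that supports trainbu
net.trainFcn = 'trainbu';  % sets NET.trainParam to trainbu defaults
[net,tr] = train(net,x);   % returns the updated network and the training record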

Training occurs according to trainbu training parameters, shown here with the following default values:

net.trainParam.epochs = 1000

Maximum number of epochs to train

net.trainParam.show = 25

Epochs between displays (NaN for no displays)

net.trainParam.showCommandLine = false

Generate command-line output

net.trainParam.showWindow = true

Show training GUI

net.trainParam.time = inf

Maximum time to train in seconds

Validation and test vectors have no impact on training for this function, but act as independent measures of network generalization.
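For example, after net.trainFcn is set to 'trainbu', these defaults can be overridden before calling train. The values below are arbitrary illustrations:

net.trainParam.epochs = 200;        % stop after at most 200 epochs
net.trainParam.show = 10;           % update the display every 10 epochs
net.trainParam.showWindow = false;  % suppress the training GUI
net.trainParam.time = 60;           % stop after at most 60 seconds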

Network Use

You can create a standard network that uses trainbu by calling selforgmap. To prepare a custom network to be trained with trainbu (a sketch follows these steps):

  1. Set NET.trainFcn to 'trainbu'. (This option sets NET.trainParam to trainbu default parameters.)

  2. Set each NET.inputWeights{i,j}.learnFcn to a learning function.

  3. Set each NET.layerWeights{i,j}.learnFcn to a learning function.

  4. Set each NET.biases{i}.learnFcn to a learning function. (Weight and bias learning parameters are automatically set to default values for the given learning function.)
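The following sketch illustrates these steps using a competitive layer created with competlayer, with the learnk and learncon learning rules chosen purely as examples; substitute whichever weight and bias learning functions suit your network:

net = competlayer(6);                       % example custom network with one input weight and one bias
net.trainFcn = 'trainbu';                   % step 1: also resets NET.trainParam to trainbu defaults
net.inputWeights{1,1}.learnFcn = 'learnk';  % step 2: input weight learning function
                                            % step 3: this network has no layer weights, so nothing to set
net.biases{1}.learnFcn = 'learncon';        % step 4: bias learning function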

To train the network (see the sketch after these steps):

  1. Set NET.trainParam properties to desired values.

  2. Set weight and bias learning parameters to desired values.

  3. Call train.
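Continuing the sketch above, a minimal training run might look like the following. The epoch count, learning rates, and random data are arbitrary illustrations; lr is the standard learning-rate parameter exposed by learnk and learncon:

net.trainParam.epochs = 200;                 % step 1: training parameters
net.trainParam.showWindow = false;
net.inputWeights{1,1}.learnParam.lr = 0.05;  % step 2: weight learning rate (learnk)
net.biases{1}.learnParam.lr = 0.001;         % step 2: bias learning rate (learncon)
x = rand(2,100);                             % example unsupervised input data
[net,tr] = train(net,x);                     % step 3: call train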

See selforgmap for training examples.

Algorithms

Each weight and bias is updated according to its learning function after each epoch (one pass through the entire set of input vectors).

Training stops when any of these conditions is met (the sketch after this list shows how to check which one applied):

  • The maximum number of epochs (repetitions) is reached.

  • Performance is minimized to the goal.

  • The maximum amount of time is exceeded.

  • Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).
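As a rough check on which condition ended a run, inspect the training record returned by train. The fields below are standard training-record fields; tr.stop may not be present in older releases:

fprintf('Epochs run: %d\n', tr.num_epochs);       % compare with net.trainParam.epochs
fprintf('Elapsed time: %.1f s\n', tr.time(end));  % compare with net.trainParam.time
fprintf('Final performance: %g\n', tr.perf(end)); % compare with the performance goal
% In recent releases, tr.stop contains a text reason such as 'Maximum epoch reached.'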

Version History

Introduced in R2010b

See Also

train | selforgmap