
learngd

Gradient descent weight and bias learning function

Syntax

[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learngd('code')

Description

learngd is the gradient descent weight and bias learning function.

[dW,LS] = learngd(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs:

W

S-by-R weight matrix (or S-by-1 bias vector)

P

R-by-Q input vectors (or ones(1,Q))

Z

S-by-Q weighted input vectors

N

S-by-Q net input vectors

A

S-by-Q output vectors

T

S-by-Q layer target vectors

E

S-by-Q layer error vectors

gW

S-by-R gradient with respect to performance

gA

S-by-Q output gradient with respect to performance

D

S-by-S neuron distances

LP

Learning parameters

LS

Learning state, initially should be []

and returns

dW

S-by-R weight (or bias) change matrix

LS

New learning state

Learning occurs according to learngd’s learning parameter, shown here with its default value.

LP.lr = 0.01

Learning rate
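
For example, one way to obtain this default and override it before adapting a network is the following minimal sketch (the local variable name lp is arbitrary):

lp = learngd('pdefaults');   % struct of default learning parameters, lp.lr = 0.01
lp.lr = 0.1;                 % use a larger learning rate instead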

info = learngd('code') returns useful information for each supported code character vector:

'pnames'

Names of learning parameters

'pdefaults'

Default learning parameters

'needg'

Returns 1 if this function uses gW or gA
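
For example, the three queries can be issued directly; the comments note the expected results for learngd:

learngd('pnames')      % names of learngd's learning parameters ('lr')
learngd('pdefaults')   % default values of those parameters (lr = 0.01)
learngd('needg')       % 1, because learngd uses the gradient gW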

Examples

Here you define a random gradient gW for a weight going to a layer with three neurons from an input with two elements. Also define a learning rate of 0.5.

gW = rand(3,2);
lp.lr = 0.5;

Because learngd only needs these values to calculate a weight change (see “Algorithms” below), use them to do so.

dW = learngd([],[],[],[],[],[],[],gW,[],[],lp,[])
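
Because learngd implements dw = lr*gW (see “Algorithms”), the returned dW here is simply 0.5*gW.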

Algorithms

learngd calculates the weight change dW for a given neuron from the neuron’s input P and error E, and the weight (or bias) learning rate LR, according to gradient descent:

dw = lr*gW
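
A minimal sketch of this rule in plain MATLAB, for comparison with learngd itself (this assumes only that LP is a structure with an lr field, as in the example above):

gW = rand(3,2);            % gradient with respect to performance
lp.lr = 0.01;              % learning rate
dW = lp.lr * gW;           % gradient descent weight change
% The same change computed by learngd:
dW2 = learngd([],[],[],[],[],[],[],gW,[],[],lp,[]);
isequal(dW, dW2)           % returns logical 1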

Version History

Introduced before R2006a
