
learnsom

Self-organizing map weight learning function

Syntax

[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnsom('code')

Description

learnsom is the self-organizing map weight learning function.

[dW,LS] = learnsom(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,

W

S-by-R weight matrix (or S-by-1 bias vector)

P

R-by-Q input vectors (or ones(1,Q))

Z

S-by-Q weighted input vectors

N

S-by-Q net input vectors

A

S-by-Q output vectors

T

S-by-Q layer target vectors

E

S-by-Q layer error vectors

gW

S-by-R weight gradient with respect to performance

gA

S-by-Q output gradient with respect to performance

D

S-by-S neuron distances

LP

Learning parameters

LS

Learning state, initially should be = []

and returns

dW

S-by-R weight (or bias) change matrix

LS

New learning state

Learning occurs according to learnsom’s learning parameters, shown here with their default values.

LP.order_lr = 0.9

Ordering phase learning rate

LP.order_steps = 1000

Ordering phase steps

LP.tune_lr = 0.02

Tuning phase learning rate

LP.tune_nd = 1

Tuning phase neighborhood distance

info = learnsom('code') returns useful information for each code character vector:

'pnames'

Names of learning parameters

'pdefaults'

Default learning parameters

'needg'

Returns 1 if this function uses gW or gA
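
For example, assuming learnsom is on the path, you can query these codes directly:

names    = learnsom('pnames')      % names of the four learning parameters
defaults = learnsom('pdefaults')   % structure of default parameter values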

Examples

Here you define a random input P, output A, and weight matrix W for a layer with a two-element input and six neurons. You also calculate positions and distances for the neurons, which are arranged in a 2-by-3 hexagonal pattern. Then you define the four learning parameters.

p = rand(2,1);
a = rand(6,1);
w = rand(6,2);
pos = hextop(2,3);
d = linkdist(pos);
lp.order_lr = 0.9;
lp.order_steps = 1000;
lp.tune_lr = 0.02;
lp.tune_nd = 1;

Because learnsom only needs these values to calculate a weight change (see “Algorithms” below), use them to do so.

ls = [];
[dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls)
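
To continue learning, add dW to the weights and call learnsom again with the returned learning state, which is how the function advances through its phases. A minimal sketch (the loop and the fixed input are illustrative assumptions, not toolbox code):

for step = 1:5
    [dW,ls] = learnsom(w,p,[],[],a,[],[],[],[],d,lp,ls);
    w = w + dW;   % apply the weight change before the next presentation
end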

Algorithms

learnsom calculates the weight change dW for a given neuron from the neuron’s input P, activation A2, and learning rate LR:

dw = lr*a2*(p'-w)

where the activation A2 is found from the layer output A, neuron distances D, and the current neighborhood size ND:

a2(i,q) = 1,   if a(i,q) = 1
        = 0.5, if a(j,q) = 1 and D(i,j) <= nd
        = 0,   otherwise
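
As a hand-worked illustration of these formulas, reusing p, a, w, and d from the example above (lr and nd here are local stand-ins for the values learnsom chooses internally):

lr = 0.9;                          % stand-in learning rate
nd = 2;                            % stand-in neighborhood distance
i  = find(a == max(a), 1);         % treat the largest output as the winner
a2 = 0.5 * (d(:,i) <= nd);         % neighbors within nd learn at half rate
a2(i) = 1;                         % the winning neuron learns at full rate
dw = lr * (a2 .* (p' - w));        % elementwise form of dw = lr*a2*(p'-w)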

The learning rate LR and neighborhood size ND are altered through two phases: an ordering phase and a tuning phase.

The ordering phase lasts as many steps as LP.order_steps. During this phase LR is adjusted from LP.order_lr down to LP.tune_lr, and ND is adjusted from the maximum neuron distance down to 1. It is during this phase that neuron weights are expected to order themselves in the input space, consistent with the associated neuron positions.

During the tuning phase LR decreases slowly from LP.tune_lr, and ND is always set to LP.tune_nd. During this phase the weights are expected to spread out relatively evenly over the input space while retaining their topological order, determined during the ordering phase.
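
The reference page does not spell out the exact schedules; a plausible sketch consistent with this description (linear interpolation during ordering, 1/step decay during tuning, with step a hypothetical counter) is:

max_nd = max(d(:));                          % maximum neuron distance
if step <= lp.order_steps                    % ordering phase
    pct = 1 - step/lp.order_steps;
    lr  = lp.tune_lr + (lp.order_lr - lp.tune_lr)*pct;
    nd  = 1 + (max_nd - 1)*pct;              % shrinks from max_nd down to 1
else                                         % tuning phase
    lr  = lp.tune_lr * lp.order_steps/step;  % decreases slowly from tune_lr
    nd  = lp.tune_nd;                        % fixed neighborhood distance
end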

Version History

Introduced before R2006a
