newpnn

Design probabilistic neural network

Syntax

net = newpnn(P,T,spread)

Description

Probabilistic neural networks (PNN) are a kind of radial basis network suitable for classification problems.

net = newpnn(P,T,spread) takes two or three arguments,

P        R-by-Q matrix of Q input vectors
T        S-by-Q matrix of Q target class vectors
spread   Spread of radial basis functions (default = 0.1)

and returns a new probabilistic neural network.

If spread is near zero, the network acts as a nearest neighbor classifier. As spread becomes larger, the designed network takes into account several nearby design vectors.
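
For instance, this sketch (a hypothetical illustration using the same design data as the example below, not part of the original documentation) designs two networks that differ only in spread and classifies a test input that falls between design vectors:

P  = [1 2 3 4 5 6 7];
Tc = [1 2 3 2 2 3 1];
T  = ind2vec(Tc);
netNarrow = newpnn(P,T,0.01);   % near-zero spread: nearest neighbor behavior
netWide   = newpnn(P,T,3);      % large spread: several design vectors contribute
p = 3.4;                        % test input between design vectors 3 and 4
vec2ind(sim(netNarrow,p))       % class of the single nearest design vector
vec2ind(sim(netWide,p))         % class can change when nearby vectors are pooled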

Examples

Here a classification problem is defined with a set of inputs P and class indices Tc.

P = [1 2 3 4 5 6 7];
Tc = [1 2 3 2 2 3 1];

The class indices are converted to target vectors, and a PNN is designed and tested.

T = ind2vec(Tc)       % convert class indices to target vectors
net = newpnn(P,T);    % design the probabilistic neural network
Y = sim(net,P)        % simulate the network on the design inputs
Yc = vec2ind(Y)       % convert outputs back to class indices
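
As a further, hypothetical follow-up (not part of the original example), the designed network can also classify an input that was not among the design vectors:

p = 4.5;              % input between design vectors 4 and 5
y = sim(net,p);
yc = vec2ind(y)       % predicted class index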

Algorithms

newpnn creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net inputs with netprod. The second layer has compet neurons, and calculates its weighted inputs with dotprod and its net inputs with netsum. Only the first layer has biases.

newpnn sets the first-layer weights to P' and the first-layer biases to 0.8326/spread, resulting in radial basis functions that cross 0.5 at weighted inputs of ±spread. The second-layer weights W2 are set to T.
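
The following sketch is an illustrative reconstruction of that computation (an assumption about how the pieces fit together, not the toolbox source). Because radbas(n) = exp(-n^2) and 0.8326 ≈ sqrt(-log(0.5)), a bias of 0.8326/spread drives the first-layer response down to 0.5 when an input lies exactly spread away from a design vector.

% Manual PNN forward pass using the functions named above (sketch only)
P  = [1 2 3 4 5 6 7];
Tc = [1 2 3 2 2 3 1];
T  = full(ind2vec(Tc));
spread = 0.1;

W1 = P';                      % first-layer weights: the design vectors
b1 = 0.8326/spread;           % scalar stand-in for the (all-equal) bias vector
W2 = T;                       % second-layer weights: the target vectors

p  = 3;                       % input to classify
a1 = radbas(dist(W1,p)*b1);   % layer 1: radial response to each design vector
a2 = compet(W2*a1);           % layer 2: per-class sums, winner takes all
vec2ind(a2)                   % recovered class index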

References

Wasserman, P.D., Advanced Methods in Neural Computing, New York: Van Nostrand Reinhold, 1993, pp. 35–55.

Version History

Introduced before R2006a