Neural Network Toolbox

Accelerated Training and Large Data Sets

You can speed up neural network training and simulation of large data sets by using Neural Network Toolbox with Parallel Computing Toolbox. Training and simulation involve many parallel computations, which can be accelerated with multicore processors, CUDA-enabled NVIDIA GPUs, and computer clusters with multiple processors and GPUs.

Distributed Computing

Parallel Computing Toolbox lets neural network training and simulation run across multiple processor cores on a single PC, or across multiple processors on multiple networked computers using MATLAB Distributed Computing Server. Using multiple cores speeds up calculations. Using multiple computers lets you solve problems whose data sets are too big to fit in the system memory of any single machine. The only limit on problem size is the total system memory available across all computers.
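As a minimal sketch of multicore training (assuming Parallel Computing Toolbox is installed and that inputs x and targets t are already in the workspace), the 'useParallel' option of train requests parallel execution:

```matlab
% Sketch: multicore training with Parallel Computing Toolbox.
% Assumes inputs x and targets t are already loaded in the workspace.
net = feedforwardnet(10);            % a small feedforward network

% 'useParallel' distributes training calculations across the workers
% of the current parallel pool (a default pool opens if none exists).
net = train(net, x, t, 'useParallel', 'yes');

% Simulation accepts the same option.
y = net(x, 'useParallel', 'yes');
```

The 'showResources' option (for example, train(..., 'showResources', 'yes')) reports which workers or devices were actually used for the computation.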

GPU Computing

Parallel Computing Toolbox enables Neural Network Toolbox simulation and training to be parallelized across the multiprocessors and cores of a general-purpose graphics processing unit (GPU). GPUs execute highly parallel algorithms, such as neural network training and simulation, very efficiently. You can achieve higher levels of parallelism by using multiple GPUs, or by using GPUs and processor cores together. With MATLAB Distributed Computing Server, you can harness all the processors and GPUs on a network cluster of computers for neural network training and simulation.
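A corresponding sketch for GPU execution (again assuming x and t are already loaded, and that a CUDA-enabled NVIDIA GPU is available) uses the 'useGPU' option of train:

```matlab
% Sketch: GPU-accelerated training on a CUDA-enabled NVIDIA GPU.
% Assumes inputs x and targets t are already loaded in the workspace.
net = feedforwardnet(10);

% 'useGPU','yes' moves supported calculations to the GPU;
% 'useGPU','only' requires that all calculations run there.
net = train(net, x, t, 'useGPU', 'yes');
y = net(x, 'useGPU', 'yes');

% Options can be combined to use multiple GPUs and CPU workers together
% across the workers of a parallel pool.
net = train(net, x, t, 'useParallel', 'yes', 'useGPU', 'yes');
```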
