
MATLAB and Simulink in the World: Parallel Computing


Researchers and engineers must often solve computationally intensive problems involving very large data sets or numerous simulations while meeting tight deadlines. The organizations profiled here dramatically accelerated their work by using MathWorks parallel computing tools on multicore computers and computer clusters.

Max Planck Institute

Using cryo-electron microscopy data to reconstruct protein complexes

Structural and computational biologists at the Max Planck Institute of Biochemistry have reconstructed the 3D structure of key proteins from 2D images obtained by cryo-electron microscopes. They used MATLAB® as the platform for their entire workflow: controlling the microscope, selecting projections from images, averaging and processing projections to improve image quality, and reconstructing a 3D density map of the protein. A GUI developed using MATLAB enabled them to automatically collect millions of projections. They distributed computation of these huge data sets over a 64-node cluster, speeding up the process by 20 to 30 times and reducing a week’s job to an overnight run.
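
A minimal sketch of this kind of data-parallel projection processing with MATLAB and Parallel Computing Toolbox follows. The directory layout, the variable stored in each file, and the processProjection helper are assumptions for illustration only, not the institute's actual code.

    % Distribute per-projection processing across the workers of a pool.
    % The 'projections' folder, the stored variable 'image', and
    % processProjection are hypothetical placeholders.
    projectionFiles = dir(fullfile('projections', '*.mat'));
    classAverages   = cell(numel(projectionFiles), 1);

    parfor k = 1:numel(projectionFiles)
        % Each worker loads one 2-D projection and applies the same
        % alignment and averaging steps independently of the others.
        data = load(fullfile('projections', projectionFiles(k).name));
        classAverages{k} = processProjection(data.image);
    end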

 

Argonne National Laboratory


To evaluate designs and technologies for next-generation fuel-cell, hybrid electric, and plug-in hybrid electric vehicles, Argonne developed a modeling environment using Simulink®. The advanced vehicle framework and scalable powertrain components, modeled using Simulink and Stateflow®, are automatically assembled using MATLAB scripts. Argonne used this environment to generate models for various configurations, automatically scaling battery size, engine power, and other parameters. Design studies required at least 30 iterations and 400 simulations per vehicle configuration. By running the simulations on a computing cluster, Argonne reduced the time needed for one set of simulations from two weeks to one day.
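
The pattern below is a hedged sketch of such a parallel design sweep. The model name vehicle_model, the swept parameter, and the extractMetrics helper are assumptions for illustration, not Argonne's actual environment.

    % Sweep one design parameter across independent Simulink runs.
    batterySizes = linspace(5, 25, 20);     % hypothetical sweep values
    results      = cell(numel(batterySizes), 1);

    parfor k = 1:numel(batterySizes)
        % Each worker simulates one configuration in its own MATLAB session.
        assignin('base', 'batterySize', batterySizes(k));  % variable the model reads
        simOut     = sim('vehicle_model');
        results{k} = extractMetrics(simOut);                % hypothetical post-processing
    end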

 

C-COR Incorporated

Optimizing designs for high-density QAM systems

C-COR’s high-density edge quadrature amplitude modulation (QAM) systems convert video content and Internet data into signals used by home set-top boxes and cable modems. To debug and optimize the system design, C-COR engineers evaluated multiple filter topologies and dozens of filter types and values, running Simulink models on an eight-node cluster assembled from decommissioned machines. This approach not only enabled them to design systems that met their performance objectives and complied with ITU standards, but also reduced design times by 30%.

 

Massachusetts Institute of Technology


To identify proteins that could signal the presence of cancer, Massachusetts Institute of Technology researchers analyzed mass spectrometry data that included millions of data points from hundreds of patients. They modeled a network of interacting biological molecules consisting of more than 20,000 nodes and 100,000 edges. After performing statistical calculations and other analyses of the network properties, they combined the results with the mass spectrometry data. To speed up calculation of network properties and statistics, they divided the network into chunks and ran the MATLAB algorithms in parallel on a large computer cluster.
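
A sketch of the chunking strategy, assuming the network is held as a sparse adjacency matrix A and that a hypothetical nodeStatistics function computes the per-node properties:

    % Split the node set into blocks and compute properties per block in parallel.
    numNodes  = size(A, 1);
    chunkSize = 1000;
    starts    = 1:chunkSize:numNodes;
    stats     = cell(numel(starts), 1);

    parfor c = 1:numel(starts)
        idx = starts(c):min(starts(c) + chunkSize - 1, numNodes);
        % Each worker handles one block of nodes against the full network.
        stats{c} = nodeStatistics(A, idx);   % hypothetical helper
    end
    nodeStats = vertcat(stats{:});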

 

Queen Mary, University of London


As part of the International Linear Collider (ILC) Global Design Effort, Queen Mary researchers developed a real-time beam alignment control system. The ILC will accelerate beams of electrons and positrons toward each other through two 20-kilometer-long linear accelerators, producing collision energies of up to 1,000 gigaelectronvolts (1 TeV). To tune the control system, the group runs simulations that track more than 600 particle bunches modeled as 80,000 individual particles. They accelerated this process by running more than 100 simulations in parallel on a computer cluster.
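
One way to express such a batch of independent runs, using the Parallel Computing Toolbox job and task interface (trackBunches is a hypothetical wrapper around the group's simulation code):

    clust = parcluster;                     % default cluster profile
    job   = createJob(clust);
    for k = 1:100
        % One task per simulation run, each with its own seed.
        createTask(job, @trackBunches, 1, {k});
    end
    submit(job);
    wait(job);
    results = fetchOutputs(job);            % one cell entry per task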

 

University of Geneva

Executing portfolio optimization algorithms

University of Geneva researchers developed an automated system for selecting optimal portfolios, employing heuristic approaches that account for advanced risk functions and constraints. Before running the optimization algorithm, the MATLAB-based application automatically determines an effective threshold sequence based on the data and the specified constraints. The application runs dozens of times, with 100 or more different starting points, to find an optimal solution. At three to seven minutes per run, finding a single solution initially took up to 11 hours. Using a 32-node system, the researchers executed the application 32 times in the time previously needed to run it once on a single processor.
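
A minimal sketch of the multi-start strategy, assuming a hypothetical optimizePortfolio function that runs one heuristic search and returns the portfolio and its objective value (returnsData and constraints are placeholders for the problem data):

    numStarts  = 100;
    values     = zeros(numStarts, 1);
    portfolios = cell(numStarts, 1);

    parfor k = 1:numStarts
        % Each worker runs the full heuristic search from its own starting point.
        [portfolios{k}, values(k)] = optimizePortfolio(returnsData, constraints, k);
    end

    [bestValue, bestIdx] = min(values);     % keep the best solution found
    bestPortfolio = portfolios{bestIdx};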

Published 2009 - 91787v00