
Scaling Deep Learning for Science

Inspired by the brain's web of neurons, deep neural networks consist of thousands or millions of simple computational units. Leveraging the GPU computing power of the Cray XK7 Titan, ORNL researchers were able to auto-generate custom neural networks for science problems in a matter of hours as opposed to the months needed using conventional methods.

ORNL-designed algorithm leverages Titan to create high-performing deep neural networks

November 28, 2017 – Deep neural networks, a form of artificial intelligence, have demonstrated mastery of tasks once thought uniquely human. Their triumphs have ranged from identifying animals in images to recognizing speech, among other successes.

Now, researchers are eager to apply this computational technique, commonly referred to as deep learning, to some of science's most persistent mysteries. But because scientific data often looks much different from the data used for animal photos and speech, developing the right artificial neural network can feel like an impossible guessing game for nonexperts. To expand the benefits of deep learning for science, researchers need new tools to build high-performing neural networks that don't require specialized knowledge.

Using the Titan supercomputer, a research team led by Robert Patton of the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) has developed an evolutionary algorithm capable of generating custom neural networks that match or exceed the performance of handcrafted artificial intelligence systems. Better yet, by leveraging the GPU computing power of the Cray XK7 Titan, the leadership-class machine managed by the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility at ORNL, these auto-generated networks can be produced quickly, in a matter of hours as opposed to the months needed using conventional methods.

The research team's algorithm, called MENNDL (Multinode Evolutionary Neural Networks for Deep Learning), is designed to evaluate, evolve, and optimize neural networks for unique datasets. Scaled across Titan's 18,688 GPUs, MENNDL can test and train thousands of potential networks for a science problem simultaneously, eliminating poor performers and averaging high performers until an optimal network emerges. The process eliminates much of the time-intensive, trial-and-error tuning traditionally required of machine learning experts.
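The selection loop described above, generating candidate networks, discarding weak performers, and carrying strong traits into the next generation, can be sketched in miniature. Everything here is hypothetical: the two hyperparameters, the stand-in fitness function, and the population sizes are placeholders for MENNDL's far richer search over network designs and its real fitness measure, validation performance after training.

```python
import random

# Hypothetical search space: MENNDL's real genome covers layer types,
# ordering, kernel sizes, and more; this sketch varies only depth and width.
SEARCH_SPACE = {"num_layers": range(2, 9), "units_per_layer": (16, 32, 64, 128)}

def random_network():
    """Sample one candidate network configuration at random."""
    return {key: random.choice(list(values)) for key, values in SEARCH_SPACE.items()}

def fitness(net):
    # Stand-in for "train the candidate briefly and return its validation
    # score"; here we pretend a 5-layer, 64-unit network is ideal.
    return -abs(net["num_layers"] - 5) - abs(net["units_per_layer"] - 64) / 64

def evolve(generations=20, population_size=30):
    population = [random_network() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]  # drop poor performers
        children = []
        for parent in survivors:
            child = dict(parent)
            key = random.choice(list(child))            # mutate one hyperparameter
            child[key] = random.choice(list(SEARCH_SPACE[key]))
            children.append(child)
        population = survivors + children               # next generation
    return max(population, key=fitness)

best = evolve()
```

On Titan the expensive step, evaluating `fitness`, is what runs in parallel across nodes; the evolutionary bookkeeping itself is cheap.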

"There's no clear set of instructions scientists can follow to tweak networks to work for their problem," said research scientist Steven Young, a member of ORNL's Nature Inspired Machine Learning team. "With MENNDL, they no longer have to worry about designing a network. Instead, the algorithm can quickly do that for them, while they focus on their data and ensuring the problem is well-posed."

Pinning down parameters

Inspired by the brain's web of neurons, deep neural networks are a relatively old concept in neuroscience and computing, first popularized by two University of Chicago researchers in the 1940s. But because of limits in computing power, it wasn't until recently that researchers had success in training machines to independently interpret data.

Today's neural networks can consist of thousands or millions of simple computational units, the "neurons," arranged in stacked layers, like the rows of figures spaced across a foosball table. During one common form of training, a network is assigned a task (e.g., to find photos with cats) and fed a set of labeled data (e.g., photos of cats and photos without cats). As the network pushes the data through each successive layer, it makes correlations between visual patterns and predefined labels, assigning values to specific features (e.g., whiskers and paws). These values contribute to the weights that define the network's model parameters. During training, the weights are continually adjusted until the final output matches the targeted goal. Once the network learns to perform from training data, it can then be tested against unlabeled data.
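The weight-adjustment loop described above can be illustrated with a toy single-"neuron" classifier. The data, learning rate, and sigmoid model below are hypothetical stand-ins, a sketch of the training principle, not the million-weight networks the article discusses.

```python
import math

# Toy labeled data: a single feature value and a 0/1 label
# (a stand-in for "whisker/paw features" and "cat/not cat").
data = [(0.2, 0), (0.9, 1), (0.8, 1), (0.1, 0)]

w, b, lr = 0.0, 0.0, 1.0          # one weight, one bias, learning rate

for _ in range(200):              # repeat over the training set
    for x, y in data:
        pred = 1 / (1 + math.exp(-(w * x + b)))   # forward pass (sigmoid)
        err = pred - y                            # compare output to target
        w -= lr * err * x                         # nudge the weight...
        b -= lr * err                             # ...and bias toward the goal
```

After training, inputs near 0.9 score above 0.5 and inputs near 0.1 score below it; real networks repeat this same adjust-until-it-matches loop across every layer.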

Although many parameters of a neural network are determined during the training process, initial model configurations must be set manually. These starting points, known as hyperparameters, include variables like the order, type, and number of layers in a network.

Finding the optimal set of hyperparameters can be the key to efficiently applying deep learning to an unusual dataset. "You have to experimentally adjust these parameters because there's no book you can look in and say, 'These are exactly what your hyperparameters should be,'" Young said. "What we did is use this evolutionary algorithm on Titan to find the best hyperparameters for varying types of datasets."

Unlocking that potential, however, required some creative software engineering by Patton's team. MENNDL homes in on a neural network's optimal hyperparameters by assigning a neural network to each Titan node. The team designed MENNDL to use a deep learning framework called Caffe to carry out the computation, relying on the Message Passing Interface (MPI) parallel computing standard to divide and distribute data among nodes. As Titan works through individual networks, new data is fed to the system's nodes asynchronously, meaning once a node completes a task, it's quickly assigned a new task independent of the other nodes' status. This ensures that the 27-petaflop Titan stays busy combing through possible configurations.

"Designing the algorithm to really work at that scale was one of the challenges," Young said. "To really leverage the machine, we set up MENNDL to generate a queue of individual networks to send to the nodes for evaluation as soon as computing power becomes available."
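The asynchronous work-queue pattern Young describes can be sketched with Python's `concurrent.futures` standing in for the MPI machinery: each "node" (worker) is handed a new candidate network the moment it finishes its last one, independent of the other workers' progress. The `evaluate` function and its toy scoring rule are hypothetical placeholders for training a network on one node.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(network_id):
    # Stand-in for "train candidate network `network_id` on one node and
    # return its fitness"; the modular formula is an arbitrary placeholder.
    return network_id, (network_id * 37) % 100

queue_of_networks = range(50)                     # candidate networks to score
results = {}

with ThreadPoolExecutor(max_workers=4) as pool:   # four toy "nodes"
    futures = [pool.submit(evaluate, n) for n in queue_of_networks]
    for future in as_completed(futures):          # harvest results as each
        net, score = future.result()              # worker finishes, in any order
        results[net] = score

best = max(results, key=results.get)              # keep the top performer
```

The key property, as in MENNDL, is that no worker ever idles waiting for a slower peer; results are collected in completion order, not submission order.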

To demonstrate MENNDL's versatility, the team applied the algorithm to several datasets, training networks to identify sub-cellular structures for medical research, classify satellite images with clouds, and categorize high-energy physics data. The results matched or exceeded the performance of networks designed by experts.

Networking neutrinos

One science domain in which MENNDL is already proving its value is neutrino physics. Neutrinos, ghost-like particles that pass through your body at a rate of trillions per second, could play a major role in explaining the formation of the early universe and the nature of matter, if only scientists knew more about them.

Large detectors at DOE's Fermi National Accelerator Laboratory (Fermilab) use high-intensity beams to study elusive neutrino reactions with ordinary matter. The devices capture a large sample of neutrino interactions that can be transformed into basic images through a process called "reconstruction." Like a slow-motion replay at a sporting event, these reconstructions can help physicists better understand neutrino behavior.

"They almost look like a picture of the interaction," said Gabriel Perdue, an associate scientist at Fermilab.

Perdue leads an effort to integrate neural networks into the classification and analysis of detector data. The work could improve the efficiency of some measurements, help physicists understand how certain they can be about their analyses, and lead to new avenues of inquiry.

Teaming up with Patton's team under a 2016 Director's Discretionary application on Titan, Fermilab researchers produced a competitive classification network in support of a neutrino scattering experiment called MINERvA (Main Injector Experiment for ν-A). The task, known as vertex reconstruction, required a network to analyze images and precisely identify the location where neutrinos interact with the detector, a challenge for events that produce many particles.

In only 24 hours, MENNDL produced optimized networks that outperformed handcrafted networks, an achievement that would have taken months for Fermilab researchers. To identify the high-performing network, MENNDL evaluated approximately 500,000 neural networks. The training data consisted of 800,000 images of neutrino events, steadily processed on 18,000 of Titan's nodes.

"You need something like MENNDL to explore this effectively infinite space of possible networks, but you want to do it efficiently," Perdue said. "What Titan does is bring the time to solution down to something practical."

Having recently been awarded another allocation under the Advanced Scientific Computing Research Leadership Computing Challenge program, Perdue's team is building off its deep learning success by applying MENNDL to additional high-energy physics datasets to generate optimized algorithms. In addition to improved physics measurements, the results could provide insight into how and why machines learn.

"We're just getting started," Perdue said. "I think we'll learn really interesting things about how deep learning works, and we'll also have better networks to do our physics. The reason we're going through all this work is because we're getting better performance, and there's real potential to get more."

AI meets exascale

When Titan debuted 5 years ago, its GPU-accelerated architecture boosted traditional modeling and simulation to new levels of detail. Since then, GPUs, which excel at carrying out hundreds of calculations simultaneously, have become the go-to processor for deep learning. That fortuitous development made Titan a powerful tool for exploring artificial intelligence at supercomputer scales.

With the OLCF's next leadership-class system, Summit, set to come online in 2018, deep learning researchers expect to take this blossoming technology even further. Summit builds on the GPU revolution pioneered by Titan and is expected to deliver more than five times the performance of its predecessor. The IBM system will contain more than 27,000 of Nvidia's newest Volta GPUs in addition to more than 9,000 IBM Power9 CPUs. Furthermore, because deep learning requires less mathematical precision than other types of scientific computing, Summit could potentially deliver exascale-level performance for deep learning problems, the equivalent of a billion billion calculations per second.

"That means we'll be able to evaluate larger networks much faster and evolve many more generations of networks in less time," Young said.

In addition to preparing for new hardware, Patton's team continues to develop MENNDL and explore other types of experimental techniques, including neuromorphic computing, another biologically inspired computing concept.

"One thing we're looking at going forward is evolving deep learning networks from stacked layers to graphs of layers that can split and then merge later," Young said. "These networks with branches excel at analyzing things at multiple scales, such as a close-up photograph in comparison to a wide-angle shot. When you have 20,000 GPUs available, you can actually start to think about a problem like that."

Related Publication: Steven R. Young, Derek C. Rose, Travis Johnston, William T. Heller, Thomas P. Karnowski, Thomas E. Potok, Robert M. Patton, Gabriel Perdue, and Jonathan Miller, "Evolving Deep Networks Using HPC." In Proceedings of the Machine Learning on HPC Environments. Paper presented at The International Conference for High Performance Computing, Networking, Storage and Analysis, Denver, Colorado (November 2017).

Adam M. Terwilliger, Gabriel N. Perdue, David Isele, Robert M. Patton, and Steven R. Young, "Vertex Reconstruction of Neutrino Interactions Using Deep Learning." In 2017 International Joint Conference on Neural Networks (IJCNN), IEEE (2017): 2275–2281.

Oak Ridge National Laboratory is supported by the US Department of Energy's Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit the DOE Office of Science website.