
Communications of the ACM

ACM TechNews

ORNL Researchers Use Titan to Accelerate Design, Training of Deep-Learning Networks


Oak Ridge researchers Steven Young (left) and Travis Johnston with the Titan supercomputer.


Credit: Oak Ridge National Laboratory

Researchers at Oak Ridge National Laboratory (ORNL) have combined artificial intelligence with high-performance computing to achieve a peak speed of 20 petaflops in the creation and training of deep-learning networks on the Titan supercomputer.

ORNL's Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond (ASCEND) project aims to use deep learning to understand the massive datasets produced by the world's most sophisticated scientific experiments. However, analyzing those datasets normally requires existing neural networks to be modified so they can produce valid results.

ORNL researchers say they have demonstrated that this modification process can be dramatically expedited with a sufficiently capable computing system.

The team developed two codes for evolving and fine-tuning deep neural network architectures: MENNDL and RAvENNA. They say both codes can generate and train up to 18,600 neural networks concurrently, and peak performance can be estimated by randomly sampling and profiling several hundred of the independently trained networks.
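The workflow described above — evolving a population of candidate network architectures, training them independently, and estimating aggregate performance by profiling a random sample — can be sketched in miniature. The helper names and the toy fitness function below are stand-ins, not the MENNDL or RAvENNA APIs; on Titan, each candidate would be trained on its own node, with up to 18,600 running concurrently.

```python
import random

random.seed(0)

# An "architecture" here is just a list of layer widths; the fitness function
# is a stand-in for validation accuracy after a short training run.
WIDTHS = (16, 32, 64, 128, 256)

def random_architecture(max_layers=5):
    return [random.choice(WIDTHS) for _ in range(random.randint(1, max_layers))]

def evaluate(arch):
    # Stand-in fitness: a toy preference for moderate depth and larger widths.
    return sum(arch) / (1 + abs(len(arch) - 3) * 50)

def mutate(arch):
    # Change one layer's width to produce a child architecture.
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(WIDTHS)
    return child

def evolve(pop_size=20, generations=10):
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: pop_size // 2]            # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=evaluate)

best = evolve()

# Estimating aggregate performance by random sampling, as the team describes:
# profile a few hundred networks and scale the mean to the full population
# (here the "profile" is the stand-in fitness score).
sample = [evaluate(random_architecture()) for _ in range(300)]
estimate = sum(sample) / len(sample) * 18600
```

The key design point mirrored here is that each candidate trains independently, so the population scales out trivially across nodes, and a random sample is enough to estimate the whole run's throughput.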

From Oak Ridge National Laboratory

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
