Abstract
High Performance Computing (HPC) has recently attracted growing attention in remote sensing applications due to the challenges posed by the increasing volume of open data acquired daily by Earth Observation programs. The parallel computing environments and programming techniques available on HPC systems make it possible to solve large-scale problems such as training classification algorithms on large amounts of remote sensing data. This webinar will explain how to distribute the training of deep neural networks with parallel implementation techniques on HPC systems that include a large number of Graphics Processing Units (GPUs). To show that distributed training can drastically reduce training time while preserving accuracy, the webinar will present recent experimental results obtained on the HPC systems at the Jülich Supercomputing Centre.
🔌High Performance Computing for Deep Learning
🎛Hardware Levels of Parallelism
🖥Distributed Training with Data Parallelism
- Message Passing Interface (MPI)
- Horovod Framework
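The core idea behind data parallelism, as covered in this webinar, is that each worker computes gradients on its own shard of the batch and the gradients are then averaged across workers (the role played by `MPI_Allreduce` in MPI, or `hvd.allreduce` in Horovod) before a shared weight update. A minimal pure-Python sketch of that pattern, using a toy linear model and illustrative helper names (the data, learning rate, and function names are assumptions for demonstration, not the webinar's code):

```python
# Data-parallel training sketch: each "worker" holds a data shard,
# computes a local gradient, and the gradients are averaged before
# the shared weights are updated. On a real HPC system the workers
# run on separate GPUs and the averaging is an MPI/Horovod allreduce.

def local_gradient(w, shard):
    """Mean-squared-error gradient of the toy model y = w * x on one shard."""
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

def allreduce_mean(values):
    """Average across workers -- stand-in for an MPI_Allreduce(SUM) / size."""
    return sum(values) / len(values)

def train_step(w, shards, lr=0.01):
    grads = [local_gradient(w, s) for s in shards]  # computed in parallel in reality
    g = allreduce_mean(grads)                       # synchronize gradients
    return w - lr * g

# Four workers, each holding a shard of data generated from y = 3x.
data = [(float(x), 3.0 * x) for x in range(8)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, shards)
# w converges toward 3.0, matching single-worker training on the full batch.
```

Because the averaged gradient equals the full-batch gradient, this synchronous scheme preserves the model's accuracy behavior while the per-worker compute shrinks with the number of workers.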
🏰Jülich Supercomputing Centre
- HPC systems
- Computing Time and Training Courses
🚀Distributed Deep Learning for Remote Sensing Data
- Classification and Super-Resolution Problems
🔧New Distributed Training Strategies and Supercomputers for AI