USA - 2021
For his pioneering contributions to numerical algorithms and libraries that enabled high performance computational software to keep pace with exponential hardware improvements for over four decades
Jack Dongarra has led the world of high-performance computing through his contributions to efficient numerical algorithms for linear algebra operations, parallel computing programming mechanisms, and performance evaluation. For nearly forty years, Moore's Law produced exponential growth in hardware performance. During that same time, while most software failed to keep pace with these hardware advances, high performance numerical software did -- in large part due to Dongarra's algorithms, optimization techniques, and production quality software implementations. Dongarra recognized that linear algebra operations could be designed and implemented in a largely hardware-independent way by choosing suitable abstractions and optimization methods. His innovations have been key to mapping linear algebra operations efficiently to increasingly complex computer architectures.
For over four decades, Dongarra has been the primary implementor or principal investigator for many libraries such as LINPACK, BLAS, LAPACK, ScaLAPACK, PLASMA, MAGMA, and SLATE. These libraries have been written for single processors, parallel computers, multicore nodes, and multiple GPUs per node. His software libraries are used, practically universally, for high performance scientific and engineering computation on machines ranging from laptops to the world's fastest supercomputers.
These libraries embody many deep technical innovations such as:
- Autotuning: through the ATLAS project, winner of the 2016 Supercomputing Conference Test of Time Award, Dongarra pioneered methods for automatically finding algorithmic parameters that produce linear algebra kernels of near-optimal efficiency, often outperforming vendor-supplied codes.
- Mixed precision arithmetic: In his 2006 Supercomputing Conference paper, "Exploiting the Performance of 32 bit Floating Point Arithmetic in Obtaining 64 bit Accuracy," Dongarra pioneered harnessing multiple precisions of floating-point arithmetic to deliver accurate solutions more quickly. This work has become instrumental in machine learning applications, as showcased recently in the HPL-AI benchmark, which achieved unprecedented levels of performance on the world's top supercomputers.
- Batch computations: Dongarra pioneered the paradigm of breaking computations of large dense matrices, which are commonly used in simulations, modeling, and data analysis, into many computations of smaller tasks over blocks that can be calculated independently and concurrently. Based on his 2016 paper, "Performance, design, and autotuning of batched GEMM for GPUs," Dongarra led the development of the Batched BLAS Standard for such computations; batched routines also appear in the software libraries MAGMA and SLATE.
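The autotuning idea described above can be illustrated with a toy parameter search (the function names and candidate values here are illustrative sketches, not ATLAS's actual machinery): time a blocked matrix multiply at several block sizes and keep whichever runs fastest on the machine at hand.

```python
import time
import numpy as np

def blocked_matmul(A, B, block):
    """Multiply A @ B one block x block tile at a time (n divisible by block)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                )
    return C

def autotune(n=256, candidates=(16, 32, 64, 128)):
    """Empirically pick the fastest block size for this machine's caches."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    best, best_time = None, float("inf")
    for block in candidates:
        start = time.perf_counter()
        blocked_matmul(A, B, block)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = block, elapsed
    return best
```

The key design point is that the winning parameter is discovered by measurement rather than assumed from a hardware model, which is what lets autotuned kernels track architectures the library authors never saw.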
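The mixed-precision approach from the 2006 paper can be sketched as iterative refinement: do the expensive solve in fast 32-bit arithmetic, then cheaply correct the answer using 64-bit residuals. This is a minimal sketch of the idea; production codes factor the matrix once in low precision and reuse that factorization across refinement steps.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b: solve in float32, refine with float64 residuals."""
    A32 = A.astype(np.float32)
    # Initial low-precision solve (fast, ~8 correct digits at best).
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full float64 precision
        # Correction solved cheaply in float32, applied in float64.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For reasonably conditioned systems, a few refinement steps recover full 64-bit accuracy while the dominant cost stays in 32-bit arithmetic, which is exactly the trade that modern GPU tensor cores (and the HPL-AI benchmark) exploit at scale.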
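The batched idea can be shown in miniature: many small, independent matrix products expressed as one batched call instead of a loop, so the whole batch can run concurrently (on a GPU, a single kernel launch covers every slice). This sketch uses NumPy's broadcasting over the leading axis as a stand-in for a Batched BLAS GEMM routine.

```python
import numpy as np

def batched_gemm(As, Bs):
    """Compute many small GEMMs at once.

    As has shape (batch, m, k) and Bs has shape (batch, k, n); each
    pair of slices is multiplied independently, so the batch dimension
    exposes trivial parallelism to the underlying hardware.
    """
    return np.matmul(As, Bs)  # broadcasts over the leading batch axis
```

The result is identical to looping `As[i] @ Bs[i]` over the batch, but presenting the work as one call lets the library schedule all the small multiplies together instead of paying per-call overhead thousands of times.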
Dongarra has collaborated internationally with many people on the efforts above, always serving as the driving force for innovation, continually developing new techniques to maximize performance and portability while maintaining numerically reliable results. Other examples include the Message Passing Interface (MPI), the de facto standard for portable message passing on parallel computing architectures, and the Performance API (PAPI), which provides an interface for collecting and synthesizing performance data from the components of a heterogeneous system. The standards he helped create, such as MPI, the LINPACK Benchmark, and the Top500 list of supercomputers, underpin computational tasks ranging from weather prediction to climate modeling to analyzing data from large-scale physics experiments.
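The LINPACK Benchmark mentioned above measures how fast a machine solves a dense linear system. A toy version in its spirit (not the actual HPL code) times an LU-based solve, converts the roughly (2/3)n³ floating-point operations into Gflop/s, and validates the answer with a scaled residual; the residual formula here mirrors the flavor of HPL's correctness check, using 2-norms for simplicity.

```python
import time
import numpy as np

def linpack_like(n=500):
    """Solve a random dense Ax = b, report (Gflop/s, scaled residual)."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    gflops = (2 / 3) * n**3 / elapsed / 1e9   # ~ (2/3) n^3 flops for LU
    # Scaled residual: should be O(1)-ish for a correct, stable solve.
    resid = np.linalg.norm(A @ x - b) / (
        np.finfo(float).eps * np.linalg.norm(A) * np.linalg.norm(x) * n
    )
    return gflops, resid
```

The same two numbers, rate and residual, are what ranks machines on the Top500 list: the rate must be high and the residual must stay small, so speed never comes at the cost of a wrong answer.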
USA - 2019
For his key role in the development of software and software standards, software repositories, performance and benchmarking software, and in community efforts to prepare for the challenges of exascale computing, especially in adapting linear algebra infrastructure to emerging architectures.
USA - 2013
For influential contributions to mathematical software, performance measurement, and parallel programming, and significant leadership and service within the HPC community
USA - 2001
For contributions in the field of scientific computing, the development of mathematical software, parallel methods, and enabling technologies for high-performance computing.