About the ACM A.M. Turing Award
The A.M. Turing Award was named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing, and who was a key contributor to the Allied cryptanalysis of the Enigma cipher during World War II. Since its inception in 1966, the Turing Award has honored the computer scientists and engineers who created the systems and underlying theoretical foundations that have propelled the information technology industry.
Often referred to as the “Nobel Prize in Computing,” the ACM A.M. Turing Award is accompanied by a prize of $1,000,000. ACM’s most prestigious award recognizes contributions of a technical nature that are of lasting and major technical importance to the computing field. Financial support for the A.M. Turing Award is provided by Google Inc.
Recent A.M. Turing Award News
2023 ACM A.M. Turing Award
ACM has named Avi Wigderson as recipient of the 2023 ACM A.M. Turing Award for foundational contributions to the theory of computation, including reshaping our understanding of the role of randomness in computation, and for his decades of intellectual leadership in theoretical computer science.
Wigderson is the Herbert H. Maass Professor in the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey. He has been a leading figure in areas including computational complexity theory, algorithms and optimization, randomness and cryptography, parallel and distributed computation, combinatorics, and graph theory, as well as connections between theoretical computer science and mathematics and science.
What is Theoretical Computer Science?
Theoretical computer science is concerned with the mathematical underpinnings of the field. It poses questions such as “Is this problem solvable through computation?” or “If this problem is solvable through computation, how much time and other resources will be required?”
Theoretical computer science also explores the design of efficient algorithms. Every computing technology that touches our lives is made possible by algorithms. Understanding the principles that make for powerful and efficient algorithms deepens our understanding not only of computer science, but also the laws of nature. While theoretical computer science is known as a field that presents exciting intellectual challenges and is often not directly concerned with improving the practical applications of computing, research breakthroughs in this discipline have led to advances in almost every area of the field—from cryptography and computational biology to network design, machine learning, and quantum computing.
Why is Randomness Important?
Fundamentally, computers are deterministic systems; the set of instructions of an algorithm applied to any given input uniquely determines its computation and, in particular, its output. In other words, the deterministic algorithm is following a predictable pattern. Randomness, by contrast, lacks a well-defined pattern, or predictability in events or outcomes. Because the world we live in seems full of random events (weather systems, biological and quantum phenomena, etc.), computer scientists have enriched algorithms by allowing them to make random choices in the course of their computation, in the hope of improving their efficiency. And indeed, many problems for which no efficient deterministic algorithm was known have been solved efficiently by probabilistic algorithms, albeit with some small probability of error (that can be efficiently reduced). But is randomness essential, or can it be removed? And what is the quality of randomness needed for the success of probabilistic algorithms?
These and many other fundamental questions lie at the heart of understanding randomness and pseudorandomness in computation. An improved understanding of the dynamics of randomness in computation can lead us to develop better algorithms as well as deepen our understanding of the nature of computation itself.
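To make the flavor of a probabilistic algorithm concrete, here is a minimal, illustrative sketch written for this article (it is not drawn from Wigderson’s work): Freivalds’ algorithm checks a claimed matrix product A·B = C using random 0/1 vectors, so each round costs only a few matrix-vector products, errs with probability at most 1/2 on a wrong C, and repetition drives the error probability down exponentially.

```python
import numpy as np

def freivalds(A, B, C, rounds=20, rng=None):
    """Probabilistically check whether A @ B == C.

    Each round multiplies by a random 0/1 vector, so the cost is a few
    matrix-vector products instead of a full matrix multiplication.
    A wrong C passes a single round with probability at most 1/2, so
    `rounds` independent repetitions leave an error probability <= 2**-rounds.
    """
    rng = rng or np.random.default_rng()
    n = A.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=n)           # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                          # certainly not equal
    return True                                   # equal with high probability

# Usage: a correct product passes, a corrupted one is almost surely caught.
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (100, 100))
B = rng.integers(0, 10, (100, 100))
C = A @ B
print(freivalds(A, B, C))          # True
C[3, 7] += 1
print(freivalds(A, B, C))          # False (with overwhelming probability)
```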
Wigderson’s Contributions
A leader in theoretical computer science research for four decades, Wigderson has made foundational contributions to the understanding of the role of randomness and pseudorandomness in computation.
Computer scientists have discovered a remarkable connection between randomness and computational difficulty (i.e., identifying natural problems that have no efficient algorithms). Working with colleagues, Wigderson authored a highly influential series of works on trading hardness for randomness. They proved that, under standard and widely believed computational assumptions, every probabilistic polynomial time algorithm can be efficiently derandomized (namely, made fully deterministic). In other words, randomness is not necessary for efficient computation. This sequence of works revolutionized our understanding of the role of randomness in computation, and the way we think about randomness. The series includes the following three influential papers:
- “Hardness vs. Randomness” (co-authored with Noam Nisan)
Among other findings, this paper introduced a new type of pseudorandom generator, and proved that efficient deterministic simulation of randomized algorithms is possible under much weaker assumptions than previously known.
- “BPP Has Subexponential Time Simulations Unless EXPTIME has Publishable Proofs” (co-authored with László Babai, Lance Fortnow, and Noam Nisan)
This paper used “hardness amplification” to demonstrate that bounded-error probabilistic polynomial time (BPP) can be simulated in subexponential time for infinitely many input lengths under weaker assumptions.
- “P = BPP if E Requires Exponential Circuits: Derandomizing the XOR Lemma” (co-authored with Russell Impagliazzo)
This paper introduced a stronger pseudorandom generator with essentially optimal hardness vs. randomness trade-offs.
Importantly, the impact of these three papers by Wigderson goes far beyond the areas of randomness and derandomization. Ideas from these papers were subsequently used in many areas of theoretical computer science and led to impactful papers by several leading figures in the field.
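The basic intuition behind derandomization can be sketched in toy form. This is only an illustration of seed enumeration, not the Nisan–Wigderson generator itself, and the `randomized_alg` and `generator` arguments are hypothetical placeholders: if a pseudorandom generator stretches a short seed into the random bits an algorithm consumes, a deterministic simulation can run the algorithm on the output of every possible seed and take the majority answer, and with seeds of O(log n) bits that enumeration costs only a polynomial factor.

```python
from collections import Counter

def derandomize(randomized_alg, x, generator, seed_bits):
    """Toy deterministic simulation: run the algorithm on the pseudorandom
    bits produced from every possible short seed and return the majority
    answer. With seeds of O(log n) bits, the 2**seed_bits enumeration is
    only a polynomial overhead."""
    votes = Counter()
    for seed in range(2 ** seed_bits):
        bits = generator(seed, x)               # expand seed -> "random" bits
        votes[randomized_alg(x, bits)] += 1
    return votes.most_common(1)[0][0]           # majority answer
```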
Still working within the broad area of randomness in computation, in papers with Omer Reingold, Salil Vadhan, and Michael Capalbo, Wigderson gave the first efficient combinatorial constructions of expander graphs, which are sparse graphs that have strong connectivity properties. They have many important applications in both mathematics and theoretical computer science.
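Expansion is often certified spectrally: for a d-regular graph, a large gap between d and the second-largest eigenvalue of the adjacency matrix implies strong connectivity despite sparsity. The short check below is an illustration of that property on a random regular graph (random regular graphs are expanders with high probability); it is not the combinatorial constructions described above.

```python
import numpy as np
import networkx as nx

def spectral_gap(graph, degree):
    """Return d - lambda_2 for a d-regular graph: the larger the gap,
    the better the expansion (sparse yet highly connected)."""
    A = nx.to_numpy_array(graph)
    eigenvalues = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending order
    return degree - eigenvalues[1]

G = nx.random_regular_graph(d=4, n=1000, seed=0)
print(f"spectral gap: {spectral_gap(G, 4):.3f}")
```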
Outside of his work in randomness, Wigderson has been an intellectual leader in several other areas of theoretical computer science, including multi-prover interactive proofs, cryptography, and circuit complexity.
Mentoring
In addition to his groundbreaking technical contributions, Wigderson is recognized as an esteemed mentor and colleague who has advised countless young researchers. His vast knowledge and unrivaled technical proficiency—coupled with his friendliness, enthusiasm, and generosity—have attracted many of the best young minds to pursue careers in theoretical computer science.
Background
Avi Wigderson is the Herbert H. Maass Professor in the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey. He has been a leading figure in areas including computational complexity theory, algorithms and optimization, randomness and cryptography, parallel and distributed computation, combinatorics, and graph theory, as well as connections between theoretical computer science and mathematics and science.
Wigderson’s honors include the Abel Prize, the IMU Abacus Medal (previously known as the Nevanlinna Prize), the Donald E. Knuth Prize, the Edsger W. Dijkstra Prize in Distributed Computing, and the Gödel Prize. He is an ACM Fellow and a member of the U.S. National Academy of Sciences and the American Academy of Arts and Sciences.
2022 ACM A.M. Turing Award
ACM has named Bob Metcalfe as recipient of the 2022 ACM A.M. Turing Award for the invention, standardization, and commercialization of Ethernet.
Metcalfe is an Emeritus Professor of Electrical and Computer Engineering (ECE) at The University of Texas at Austin and a Research Affiliate in Computational Engineering at the Massachusetts Institute of Technology (MIT) Computer Science & Artificial Intelligence Laboratory (CSAIL).
Invention of The Ethernet
In 1973, while a computer scientist at the Xerox Palo Alto Research Center (PARC), Metcalfe circulated a now-famous memo describing a “broadcast communication network” for connecting some of the first personal computers, PARC’s Altos, within a building. The first Ethernet ran at 2.94 megabits per second, which was about 10,000 times faster than the terminal networks it would replace.
Although Metcalfe’s original design proposed implementing this network over coaxial cable, the memo envisioned “communication over an ether,” making the design adaptable to future innovations in media technology including legacy telephone twisted pair, optical fiber, radio (Wi-Fi), and even power networks, to replace the coaxial cable as the “ether.” That memo laid the groundwork for what we now know today as Ethernet.
Metcalfe’s Ethernet design incorporated insights from his experience with ALOHAnet, a pioneering computer networking system developed at the University of Hawaii. Metcalfe recruited David Boggs (d. 2022), a co-inventor of Ethernet, to help build a 100-node PARC Ethernet. That first Ethernet was then replicated within Xerox to proliferate a corporate internet.
In their classic 1976 Communications of the ACM article, “Ethernet: Distributed Packet Switching for Local Computer Networks,” Metcalfe and Boggs described the design of Ethernet. Metcalfe then led a team that developed the 10Mbps Ethernet to form the basis of subsequent standards.
Standardization and Commercialization
After leaving Xerox in 1979, Metcalfe remained the chief evangelist for Ethernet and continued to guide its development while working to ensure industry adoption of an open standard. He led an effort by Digital Equipment Corporation (DEC), Intel, and Xerox to develop a 10Mbps Ethernet specification—the DIX standard. The IEEE 802 committee was formed to establish a local area network (LAN) standard. A slight variant of DIX became the first IEEE 802.3 standard, which is still vibrant today.
As the founder of his own Silicon Valley Internet startup, 3Com Corporation, in 1979, Metcalfe bolstered the commercial appeal of Ethernet by selling network software, Ethernet transceivers, and Ethernet cards for minicomputers and workstations. When IBM introduced its personal computer (PC), 3Com introduced one of the first Ethernet interfaces for IBM PCs and their proliferating clones.
Today, Ethernet is the main conduit of wired network communications around the world, handling data rates from 10 Mbps to 400 Gbps, with 800 Gbps and 1.6 Tbps technologies emerging. Ethernet has also become an enormous market, with revenue from Ethernet switches alone exceeding $30 billion in 2021, according to the International Data Corporation.
Metcalfe insists on calling Wi-Fi by its original name, Wireless Ethernet, for old times’ sake.
Biographical Background
Robert Melancton Metcalfe is an Emeritus Professor of Electrical and Computer Engineering (ECE), having retired after 11 years at The University of Texas at Austin. He has recently become a Research Affiliate in Computational Engineering at his alma mater, the Massachusetts Institute of Technology (MIT) Computer Science & Artificial Intelligence Laboratory (CSAIL). Metcalfe graduated from MIT in 1969 with Bachelor’s degrees in Electrical Engineering and Industrial Management. He earned a Master’s degree in Applied Mathematics in 1970 and a PhD in Computer Science in 1973 from Harvard University.
Metcalfe’s honors include the National Medal of Technology, IEEE Medal of Honor, Marconi Prize, Japan Computer & Communications Prize, ACM Grace Murray Hopper Award, and IEEE Alexander Graham Bell Medal. He is a Fellow of the US National Academy of Engineering, the American Academy of Arts and Sciences, and the National Inventors, Consumer Electronics, and Internet Halls of Fame.
2021 ACM A.M. Turing Award
ACM named Jack J. Dongarra recipient of the 2021 ACM A.M. Turing Award for pioneering contributions to numerical algorithms and libraries that enabled high performance computational software to keep pace with exponential hardware improvements for over four decades. Dongarra is a University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He also holds appointments with Oak Ridge National Laboratory and the University of Manchester.
Dongarra has led the world of high-performance computing through his contributions to efficient numerical algorithms for linear algebra operations, parallel computing programming mechanisms, and performance evaluation tools. For nearly forty years, Moore’s Law produced exponential growth in hardware performance. During that same time, while most software failed to keep pace with these hardware advances, high performance numerical software did—in large part due to Dongarra’s algorithms, optimization techniques, and production-quality software implementations.
These contributions laid a framework from which scientists and engineers made important discoveries and game-changing innovations in areas including big data analytics, healthcare, renewable energy, weather prediction, genomics, and economics, to name a few. Dongarra’s work also helped facilitate leapfrog advances in computer architecture and supported revolutions in computer graphics and deep learning.
Dongarra’s major contribution was in creating open-source software libraries and standards which employ linear algebra as an intermediate language that can be used by a wide variety of applications. These libraries have been written for single processors, parallel computers, multicore nodes, and multiple GPUs per node. Dongarra’s libraries also introduced many important innovations including autotuning, mixed precision arithmetic, and batch computations.
As a leading ambassador of high-performance computing, Dongarra led the field in persuading hardware vendors to optimize these methods, and software developers to target his open-source libraries in their work. Ultimately, these efforts resulted in linear algebra-based software libraries achieving nearly universal adoption for high performance scientific and engineering computation on machines ranging from laptops to the world’s fastest supercomputers. These libraries were essential in the growth of the field—allowing progressively more powerful computers to solve computationally challenging problems.
“Today’s fastest supercomputers draw headlines in the media and excite public interest by performing mind-boggling feats of a quadrillion calculations in a second,” explains ACM President Gabriele Kotsis. “But beyond the understandable interest in new records being broken, high performance computing has been a major instrument of scientific discovery. HPC innovations have also spilled over into many different areas of computing and moved our entire field forward. Jack Dongarra played a central part in directing the successful trajectory of this field. His trailblazing work stretches back to 1979, and he remains one of the foremost and actively engaged leaders in the HPC community. His career certainly exemplifies the Turing Award’s recognition of ‘major contributions of lasting importance.’”
“Jack Dongarra's work has fundamentally changed and advanced scientific computing,” said Jeff Dean, Google Senior Fellow and SVP of Google Research and Google Health. “His deep and important work at the core of the world's most heavily used numerical libraries underlie every area of scientific computing, helping advance everything from drug discovery to weather forecasting, aerospace engineering and dozens more fields, and his deep focus on characterizing the performance of a wide range of computers has led to major advances in computer architectures that are well suited for numeric computations.”
Dongarra will be formally presented with the ACM A.M. Turing Award at the annual ACM Awards Banquet, which will be held this year on Saturday, June 11 at the Palace Hotel in San Francisco.
Select Technical Contributions
For over four decades, Dongarra has been the primary implementor or principal investigator for many libraries such as LINPACK, BLAS, LAPACK, ScaLAPACK, PLASMA, MAGMA, and SLATE. These libraries have been written for single processors, parallel computers, multicore nodes, and multiple GPUs per node. His software libraries are used, practically universally, for high performance scientific and engineering computation on machines ranging from laptops to the world’s fastest supercomputers.
These libraries embody many deep technical innovations such as:
Autotuning: Through his 2016 Supercomputing Conference Test of Time award-winning ATLAS project, Dongarra pioneered methods for automatically finding algorithmic parameters that produce linear algebra kernels of near-optimal efficiency, often outperforming vendor-supplied codes.
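A minimal sketch of the autotuning idea, written for this article rather than taken from ATLAS: time a tunable kernel over several candidate block sizes and keep whichever runs fastest on the machine at hand.

```python
import time
import numpy as np

def blocked_matmul(A, B, block):
    """Naive blocked matrix multiply; `block` is the tunable parameter.
    Assumes square matrices whose size is divisible by `block`."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C

def autotune(n=512, candidates=(32, 64, 128, 256)):
    """Pick the block size that runs fastest on this machine."""
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    timings = {}
    for block in candidates:
        start = time.perf_counter()
        blocked_matmul(A, B, block)
        timings[block] = time.perf_counter() - start
    return min(timings, key=timings.get)

print("best block size:", autotune())
```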
Mixed precision arithmetic: In his 2006 Supercomputing Conference paper, “Exploiting the Performance of 32 bit Floating Point Arithmetic in Obtaining 64 bit Accuracy,” Dongarra pioneered harnessing multiple precisions of floating-point arithmetic to deliver accurate solutions more quickly. This work has become instrumental in machine learning applications, as showcased recently in the HPL-AI benchmark, which achieved unprecedented levels of performance on the world’s top supercomputers.
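The flavor of the technique can be sketched as mixed-precision iterative refinement. This is a simplified illustration, not the paper’s implementation (which reuses a single low-precision factorization rather than re-solving): do the expensive solve in float32 for speed, then apply cheap float64 residual corrections until the answer reaches double-precision accuracy.

```python
import numpy as np

def mixed_precision_solve(A, b, iterations=5):
    """Solve Ax = b: the expensive solves run in float32, while cheap
    residual corrections computed in float64 recover double-precision
    accuracy (a production code would reuse the float32 LU factors)."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)      # fast low-precision solve
    for _ in range(iterations):
        r = b - A @ x                                      # residual in float64
        correction = np.linalg.solve(A32, r.astype(np.float32))
        x += correction.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.random((500, 500)) + 500 * np.eye(500)             # well-conditioned system
b = rng.random(500)
x = mixed_precision_solve(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```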
Batch computations: Dongarra pioneered the paradigm of dividing computations of large dense matrices, which are commonly used in simulations, modeling, and data analysis, into many computations of smaller tasks over blocks that can be calculated independently and concurrently. Based on his 2016 paper, “Performance, design, and autotuning of batched GEMM for GPUs,” Dongarra led the development of the Batched BLAS Standard for such computations, and they also appear in the software libraries MAGMA and SLATE.
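The batched idea can be illustrated with a stacked multiply: many small, independent products expressed as one operation, which is what lets a GPU schedule them concurrently. The NumPy analogue below is only an illustration, not the Batched BLAS interface itself.

```python
import numpy as np

# A batch of 10,000 independent 8x8 multiplications, expressed as one
# stacked operation instead of a Python loop: the same idea Batched BLAS
# exposes so accelerators can run all the small products concurrently.
rng = np.random.default_rng(0)
A = rng.random((10_000, 8, 8))
B = rng.random((10_000, 8, 8))
C = np.matmul(A, B)              # batched GEMM over the leading dimension
assert C.shape == (10_000, 8, 8)
```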
Dongarra has collaborated internationally with many people on the efforts above, always serving as the driving force for innovation, continually developing new techniques to maximize performance and portability while maintaining numerically reliable results. Other examples of his leadership include the Message Passing Interface (MPI), the de facto standard for portable message-passing on parallel computing architectures, and the Performance API (PAPI), which provides an interface that allows the collection and synthesis of performance data from the components of a heterogeneous system. The standards he helped create, such as MPI, the LINPACK Benchmark, and the Top500 list of supercomputers, underpin computational tasks ranging from weather prediction to climate modeling to analyzing data from large-scale physics experiments.
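As an illustration of the message-passing style that MPI standardizes, here is a minimal sketch using mpi4py, one common Python binding. It assumes an MPI installation and is not code from Dongarra’s projects.

```python
from mpi4py import MPI                      # Python bindings for the MPI standard

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                      # this process's id
size = comm.Get_size()                      # total number of processes

# Each process contributes a partial sum; allreduce combines the values and
# delivers the result to every rank -- a typical HPC communication pattern.
local = sum(range(rank * 1000, (rank + 1) * 1000))
total = comm.allreduce(local, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks computed a global sum of {total}")
```

Launched under an MPI runtime (for example, mpirun with several processes), every rank runs the same program and receives the same combined sum.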
Biographical Background
Jack J. Dongarra has been a University Distinguished Professor at the University of Tennessee and a Distinguished Research Staff Member at the Oak Ridge National Laboratory since 1989. He has also served as a Turing Fellow at the University of Manchester (UK) since 2007. Dongarra earned a B.S. in Mathematics from Chicago State University, an M.S. in Computer Science from the Illinois Institute of Technology, and a Ph.D. in Applied Mathematics from the University of New Mexico.
Dongarra’s honors include the IEEE Computer Pioneer Award, the SIAM/ACM Prize in Computational Science and Engineering, and the ACM/IEEE Ken Kennedy Award. He is a Fellow of ACM, the Institute of Electrical and Electronics Engineers (IEEE), the Society for Industrial and Applied Mathematics (SIAM), the American Association for the Advancement of Science (AAAS), the International Supercomputing Conference (ISC), and the International Engineering and Technology Institute (IETI). He is a member of the National Academy of Engineering and a foreign member of the British Royal Society.
2020 ACM A.M. Turing Award
ACM named Alfred Vaino Aho and Jeffrey David Ullman recipients of the 2020 ACM A.M. Turing Award for fundamental algorithms and theory underlying programming language implementation and for synthesizing these results and those of others in their highly influential books, which educated generations of computer scientists. Aho is the Lawrence Gussman Professor Emeritus of Computer Science at Columbia University. Ullman is the Stanford W. Ascherman Professor Emeritus of Computer Science at Stanford University.
Computer software powers almost every piece of technology with which we interact. Virtually every program running our world—from those on our phones or in our cars to programs running on giant server farms inside big web companies—is written by humans in a higher-level programming language and then compiled into lower-level code for execution. Much of the technology for doing this translation for modern programming languages owes its beginnings to Aho and Ullman.
Beginning with their collaboration at Bell Labs in 1967 and continuing for several decades, Aho and Ullman have shaped the foundations of programming language theory and implementation, as well as algorithm design and analysis. They made broad and fundamental contributions to the field of programming language compilers through their technical contributions and influential textbooks. Their early joint work in algorithm design and analysis techniques contributed crucial approaches to the theoretical core of computer science that emerged during this period.
“The practice of computer programming and the development of increasingly advanced software systems underpin almost all of the technological transformations we have experienced in society over the last five decades,” explains ACM President Gabriele Kotsis. “While countless researchers and practitioners have contributed to these technologies, the work of Aho and Ullman has been especially influential. They have helped us to understand the theoretical foundations of algorithms and to chart the course for research and practice in compilers and programming language design. Aho and Ullman have been thought leaders since the early 1970s, and their work has guided generations of programmers and researchers up to the present day.”
“Aho and Ullman established bedrock ideas about algorithms, formal languages, compilers and databases, which were instrumental in the development of today’s programming and software landscape,” added Jeff Dean, Google Senior Fellow and SVP, Google AI. “They have also illustrated how these various disciplines are closely interconnected. Aho and Ullman introduced key technical concepts, including specific algorithms, that have been essential. In terms of computer science education, their textbooks have been the gold standard for training students, researchers, and practitioners.”
A Longstanding Collaboration
Aho and Ullman both earned their PhD degrees at Princeton University before joining Bell Labs, where they worked together from 1967 to 1969. During their time at Bell Labs, their early efforts included developing efficient algorithms for analyzing and translating programming languages.
In 1969, Ullman began a career in academia, ultimately joining the faculty at Stanford University, while Aho remained at Bell Labs for 30 years before joining the faculty at Columbia University. Despite working at different institutions, Aho and Ullman continued their collaboration for several decades, during which they co-authored books and papers and introduced novel techniques for algorithms, programming languages, compilers and software systems.
Influential Textbooks
Aho and Ullman co-authored nine influential books (including first and subsequent editions). Two of their most widely celebrated books include:
The Design and Analysis of Computer Algorithms (1974)
Co-authored by Aho, Ullman, and John Hopcroft, this book is considered a classic in the field and was one of the most cited books in computer science research for more than a decade. It became the standard textbook for algorithms courses throughout the world when computer science was still an emerging field. In addition to incorporating their own research contributions to algorithms, The Design and Analysis of Computer Algorithms introduced the random access machine (RAM) as the basic model for analyzing the time and space complexity of computer algorithms using recurrence relations. The book also codified disparate individual algorithms into general design methods. The RAM model and general algorithm design techniques introduced in this book now form an integral part of the standard computer science curriculum.
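As a worked illustration of that style of analysis (an example chosen for this article, not one quoted from the book): in the RAM model, mergesort splits an n-element array in half, sorts each half recursively, and merges the results in linear time, giving the recurrence T(n) = 2T(n/2) + cn; unrolling the recurrence over its roughly log2 n levels, each of which costs about cn, yields T(n) = O(n log n).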
Principles of Compiler Design (1977)
Co-authored by Aho and Ullman, this definitive book on compiler technology integrated formal language theory and syntax-directed translation techniques into the compiler design process. Often called the “Dragon Book” because of its cover design, it lucidly lays out the phases in translating a high-level programming language to machine code, modularizing the entire enterprise of compiler construction. It includes algorithmic contributions that the authors made to efficient techniques for lexical analysis, syntax analysis, and code generation. The current edition of this book, Compilers: Principles, Techniques and Tools (co-authored with Ravi Sethi and Monica Lam), was published in 2007 and remains the standard textbook on compiler design.
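Those phases can be made concrete with a deliberately tiny pipeline, written for this article rather than taken from the book: a lexer turns characters into tokens, a recursive-descent parser applies the grammar, and a code generator emits instructions for a toy stack machine.

```python
import re

# Toy compiler pipeline for expressions like "2 + 3 * 4":
# lexical analysis -> syntax analysis -> code generation (a stack machine).

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def lex(source):
    """Lexical analysis: characters -> (kind, value) tokens."""
    tokens = []
    for number, op in TOKEN.findall(source):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    return tokens

def parse(tokens):
    """Syntax analysis: recursive-descent parser producing an AST,
    with '*' binding tighter than '+'."""
    pos = 0
    def factor():
        nonlocal pos
        kind, value = tokens[pos]; pos += 1
        assert kind == "NUM"
        return ("num", value)
    def term():
        nonlocal pos
        node = factor()
        while pos < len(tokens) and tokens[pos] == ("OP", "*"):
            pos += 1
            node = ("mul", node, factor())
        return node
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] == ("OP", "+"):
            pos += 1
            node = ("add", node, term())
        return node
    return expr()

def codegen(node):
    """Code generation: AST -> instructions for a simple stack machine."""
    if node[0] == "num":
        return [("PUSH", node[1])]
    left, right = codegen(node[1]), codegen(node[2])
    return left + right + [("ADD",) if node[0] == "add" else ("MUL",)]

print(codegen(parse(lex("2 + 3 * 4"))))
# [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('MUL',), ('ADD',)]
```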
Biographical Background
Alfred Vaino Aho
Alfred Aho is the Lawrence Gussman Professor Emeritus at Columbia University. He joined the Department of Computer Science at Columbia in 1995. Prior to Columbia, Aho was Vice President of Computing Sciences Research at Bell Laboratories where he worked for more than 30 years. A graduate of the University of Toronto, Aho earned his Master’s and PhD degrees in Electrical Engineering/Computer Science from Princeton University.
Aho’s honors include the IEEE John von Neumann Medal and the NEC C&C Foundation C&C Prize. He is a member of the US National Academy of Engineering, the American Academy of Arts and Sciences, and the Royal Society of Canada. He is a Fellow of ACM, IEEE, Bell Labs, and the American Association for the Advancement of Science.
Jeffrey David Ullman
Jeffrey Ullman is the Stanford W. Ascherman Professor Emeritus at Stanford University and CEO of Gradiance Corporation, an online learning platform for various computer science topics. He joined the faculty at Stanford in 1979. Prior to Stanford, he served on the faculty of Princeton University from 1969 to 1979, and was a member of the technical staff at Bell Labs from 1966 to 1969. A graduate of Columbia University, Ullman earned his PhD in Computer Science from Princeton University.
Ullman’s honors include receiving the IEEE John von Neumann Medal, the NEC C&C Foundation C&C Prize, the Donald E. Knuth Prize, and the ACM Karl V. Karlstrom Outstanding Educator Award. He is a member of the US National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, and is an ACM Fellow.
2019 ACM A.M. Turing Award
ACM named Patrick M. (Pat) Hanrahan and Edwin E. (Ed) Catmull recipients of the 2019 ACM A.M. Turing Award for fundamental contributions to 3-D computer graphics, and the revolutionary impact of these techniques on computer-generated imagery (CGI) in filmmaking and other applications. Catmull is a computer scientist and former president of Pixar and Disney Animation Studios. Hanrahan, a founding employee at Pixar, is a professor in the Computer Graphics Laboratory at Stanford University.
Ed Catmull and Pat Hanrahan have fundamentally influenced the field of computer graphics through conceptual innovation and contributions to both software and hardware. Their work has had a revolutionary impact on filmmaking, leading to a new genre of entirely computer-animated feature films beginning 25 years ago with Toy Story and continuing to the present day.
Today, 3-D computer animated films represent a wildly popular genre in the $138 billion global film industry. 3-D computer imagery is also central to the booming video gaming industry, as well as the emerging virtual reality and augmented reality fields. Catmull and Hanrahan made pioneering technical contributions which remain integral to how today’s CGI imagery is developed. Additionally, their insights into programming graphics processing units (GPUs) have had implications beyond computer graphics, impacting diverse areas including data center management and artificial intelligence.
“CGI has transformed the way films are made and experienced, while also profoundly impacting the broader entertainment industry,” said ACM President Cherri M. Pancake. “We are especially excited to recognize Pat Hanrahan and Ed Catmull, because computer graphics is one of the largest and most dynamic communities within ACM, as evidenced by the annual ACM SIGGRAPH conference. At the same time, Catmull and Hanrahan’s contributions demonstrate that advances in one specialization of computing can have a significant influence on other areas of the field. For example, Hanrahan’s work with shading languages for GPUs has led to their use as general-purpose computing engines for a wide range of areas, including my own specialization of high-performance computing.”
“Because 3-D computer graphic imagery is now so pervasive, we often forget what the field was like just a short time ago when a video game like Pong, which consisted of a white dot bouncing between two vertical white lines, was the leading-edge technology,” said Jeff Dean, Google Senior Fellow and SVP, Google AI. “The technology keeps moving forward, yet what Hanrahan and Catmull developed decades ago remains standard practice in the field today—that’s quite impressive. It’s important to recognize scientific contributions in CGI technology and educate the public about a discipline that will impact many areas in the coming years—virtual and augmented reality, data visualization, education, medical imaging, and more.”
Background and Development of Recognized Technical Contributions
Catmull received his PhD in Computer Science from the University of Utah in 1974. His advisors included Ivan Sutherland, a father of computer graphics and the 1988 ACM A.M. Turing Award recipient. In his PhD thesis, Catmull introduced the groundbreaking techniques for displaying curved patches instead of polygons, out of which arose two new techniques: Z-buffering (also described by Wolfgang Strasser at the time), which manages image depth coordinates in computer graphics, and texture mapping, in which a 2-D surface texture is wrapped around a three-dimensional object. While at Utah, Catmull also created a new method of representing a smooth surface via the specification of a coarser polygon mesh. After graduating, he collaborated with Jim Clark, who would later found Silicon Graphics and Netscape, on the Catmull-Clark Subdivision Surface, which is now the preeminent surface patch used in animation and special effects in movies. Catmull’s techniques have played an important role in developing photo-real graphics, and eliminating “jaggies,” the rough edges around shapes that were a hallmark of primitive computer graphics.
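The Z-buffering idea can be sketched in a few lines (an illustration written for this article, not Catmull’s original implementation): store a depth value alongside each pixel’s color and accept a new fragment only if it is closer than what the buffer already holds.

```python
import numpy as np

WIDTH, HEIGHT = 64, 64
color = np.zeros((HEIGHT, WIDTH, 3))             # framebuffer (RGB)
depth = np.full((HEIGHT, WIDTH), np.inf)         # Z-buffer: nearest depth seen so far

def write_fragment(x, y, z, rgb):
    """Keep a fragment only if it is closer than whatever is already drawn."""
    if z < depth[y, x]:
        depth[y, x] = z
        color[y, x] = rgb

# Two overlapping "surfaces": the nearer (smaller z) one wins per pixel.
write_fragment(10, 10, z=5.0, rgb=(1.0, 0.0, 0.0))   # red, farther away
write_fragment(10, 10, z=2.0, rgb=(0.0, 0.0, 1.0))   # blue, nearer -> overwrites
print(color[10, 10])                                  # [0. 0. 1.]
```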
After the University of Utah, Catmull founded the New York Institute of Technology (NYIT) Computer Graphics Lab, one of the earliest dedicated computer graphics labs in the US. Even at that time, Catmull dreamed of making a computer-animated movie. He came a step closer to his goal in 1979, when George Lucas hired Catmull, who in turn hired many who made the advances that pushed graphics toward photorealistic images. At LucasFilm, Catmull and colleagues continued to develop innovations in 3-D computer graphic animation, in an industry that was still dominated by traditional 2-D techniques. In 1986, Steve Jobs bought LucasFilm’s Computer Animation Division and renamed it Pixar, with Catmull as its President.
One of Catmull’s first hires at Pixar was Pat Hanrahan. Hanrahan had received a PhD in BioPhysics from the University of Wisconsin-Madison in 1985 and had worked briefly at NYIT’s Computer Graphics Laboratory before joining Pixar.
Working with Catmull and other members of the Pixar team, Hanrahan was the lead architect of a new kind of graphics system, which allowed curved shapes to be rendered with realistic material properties and lighting. A key idea in this system, later named RenderMan, was shaders (used to shade CGI images). RenderMan’s functions separated the light reflection behavior from the geometric shapes, and computed the color, transparency, and texture at points on the shapes. The RenderMan system also incorporated the Z-buffering and subdivision surface innovations that Catmull had earlier contributed to the field.
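The core shader idea, computing the color at a surface point from local material and lighting information independently of how the geometry was produced, can be sketched as a toy diffuse shader. This is an illustration in Python, not RenderMan’s shading language.

```python
import numpy as np

def lambert_shader(normal, light_dir, surface_color, light_color):
    """Toy diffuse shader: the color at a surface point depends only on the
    local normal, the light, and material parameters -- not on how the
    underlying geometry was modeled."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n, l = n / np.linalg.norm(n), l / np.linalg.norm(l)
    diffuse = max(float(np.dot(n, l)), 0.0)          # cosine falloff, clamped at zero
    return np.asarray(surface_color) * np.asarray(light_color) * diffuse

print(lambert_shader(normal=[0, 0, 1], light_dir=[0, 1, 1],
                     surface_color=[0.8, 0.2, 0.2], light_color=[1, 1, 1]))
```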
During his time at Pixar, Hanrahan also developed techniques for volume rendering, which allows a CGI artist to render a 2-D projection of a 3-D data set, such as a puff of smoke. In one of his most cited papers, Hanrahan, with co-author Marc Levoy, introduced light field rendering, a method for giving the viewer the sense that they are flying through scenes by generating new views from arbitrary points without depth information or feature matching. Hanrahan went on to develop techniques for portraying skin and hair using subsurface scattering, and for rendering complex lighting effects—so-called global illumination or GI—using Monte Carlo ray tracing.
Hanrahan published his RenderMan research in a seminal 1990 paper that was presented at ACM SIGGRAPH. It would take five more years, however, for the computing hardware to develop to a point where the full-length 3-D computer animated movie Toy Story could be produced using Hanrahan’s RenderMan system.
Under Catmull’s leadership, Pixar would make a succession of successful films using RenderMan. Pixar also licensed RenderMan to other film companies. The software has been used in 44 of the last 47 films nominated for an Academy Award in the Visual Effects category, including Avatar, Titanic, Beauty and the Beast, The Lord of the Rings trilogy, and the Star Wars prequels, among others. RenderMan remains the standard workflow for CGI visual effects.
After he left Pixar in 1989, Hanrahan held academic posts at Princeton and Stanford universities. Beginning in the 1990s, he and his students extended the RenderMan shading language to work in real time on powerful GPUs that began to enter into the marketplace. The programming languages for GPUs that Hanrahan and his students developed led to the development of commercial versions (including the OpenGL shading language) that revolutionized the writing of video games.
The prevalence and variety of shading languages that were being used on GPUs ultimately required the GPU hardware designers to develop more flexible architectures. These architectures, in turn, allowed the GPUs to be used in a variety of computing contexts, including running algorithms for high performance computing applications, and training machine learning algorithms on massive datasets for artificial intelligence applications. In particular, Hanrahan and his students developed Brook, a language for GPUs that eventually led to NVIDIA’s CUDA.
Catmull remained at Pixar, which later became a subsidiary of Disney Animation Studios, for over 30 years. Under his leadership, dozens of researchers at these labs invented and published foundational technologies (including image compositing, motion blur, cloth simulation, etc.) that contributed to computer animated films and computer graphics more broadly. Both Hanrahan and Catmull have received awards from ACM SIGGRAPH, as well as the Academy of Motion Picture Arts & Sciences for their technical contributions.
Background
Edwin E. (Ed) Catmull is co-founder of Pixar Animation Studios and a former President of Pixar and Walt Disney Animation Studios. He earned Bachelor of Science degrees in Physics and Computer Science (1970) and a PhD in Computer Science (1974) from the University of Utah. During his career, Catmull was Vice President of the Computer Division of Lucasfilm Ltd., where he managed development in areas of computer graphics, video editing, video games and digital audio. He founded the Computer Graphics Lab at the New York Institute of Technology.
Catmull received the 1993 ACM SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics, and the 2006 IEEE John von Neumann Medal for fundamental contributions to computer graphics and a pioneering use of computer animation in motion pictures. He is a Fellow of ACM and of the Visual Effects Society. He is a member of the Academy of Motion Picture Arts & Sciences and of the National Academy of Engineering.
Background
Patrick M. (Pat) Hanrahan is the CANON Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. He received a Bachelor of Science degree in Nuclear Engineering (1977) and a PhD in Biophysics (1985) from the University of Wisconsin-Madison. He held positions at the New York Institute of Technology and Digital Equipment Corporation in the 1980s before serving as a Senior Scientist at Pixar (1986-1989). He later served as an Associate Professor at Princeton University (1991-1994) and Professor at Stanford University (1994-present), where he has advised more than 40 PhD students. Hanrahan co-founded Tableau Software, a data analytics company that was acquired by Salesforce in August 2019.
Hanrahan’s many honors include the 2003 ACM SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics. He is a Fellow of ACM and of the American Academy of Arts & Sciences. He is a member of the National Academy of Engineering, in addition to induction into many other prestigious organizations.
2018 ACM A.M. Turing Award
ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. Bengio is Professor at the University of Montreal and Scientific Director at Mila, Quebec’s Artificial Intelligence Institute; Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute, and University Professor Emeritus at the University of Toronto; and LeCun is Professor at New York University and VP and Chief AI Scientist at Facebook.
Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications.
While the use of artificial neural networks as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s, by the early 2000s, LeCun, Hinton and Bengio were among a small group who remained committed to this approach. Though their efforts to rekindle the AI community’s interest in neural networks were initially met with skepticism, their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field.
“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said ACM President Cherri M. Pancake. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools—in areas ranging from medicine, to astronomy, to materials science.”
"Deep neural networks are responsible for some of the greatest advances in modern computer science, helping make substantial progress on long-standing problems in computer vision, speech recognition, and natural language understanding,” said Jeff Dean, Google Senior Fellow and SVP, Google AI. “At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year's Turing Award winners, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor."
Machine Learning, Neural Networks and Deep Learning
In traditional computing, a computer program directs the computer with explicit step-by-step instructions. In deep learning, a subfield of AI research, the computer is not explicitly told how to solve a particular task such as object classification. Instead, it uses a learning algorithm to extract patterns in the data that relate the input data, such as the pixels of an image, to the desired output such as the label “cat.” The challenge for researchers has been to develop effective learning algorithms that can modify the weights on the connections in an artificial neural network so that these weights capture the relevant patterns in the data.
Geoffrey Hinton, who has been advocating for a machine learning approach to artificial intelligence since the early 1980s, looked to how the human brain functions to suggest ways in which machine learning systems might be developed. Inspired by the brain, he and others proposed “artificial neural networks” as a cornerstone of their machine learning investigations.
In computer science, the term “neural networks” refers to systems composed of layers of relatively simple computing elements called “neurons” that are simulated in a computer. These “neurons,” which only loosely resemble the neurons in the human brain, influence one another via weighted connections. By changing the weights on the connections, it is possible to change the computation performed by the neural network. Hinton, LeCun and Bengio recognized the importance of building deep networks using many layers—hence the term “deep learning.”
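A minimal sketch of such a network, included here purely as an illustration: a few layers of simulated “neurons,” each computing a weighted sum of its inputs passed through a nonlinearity, so that changing the weight matrices changes the function the network computes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny three-layer network: each layer is a weight matrix, and each
# "neuron" computes a weighted sum of its inputs passed through a
# nonlinearity. Changing the weights changes the computed function.
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    activation = np.asarray(x, dtype=float)
    for W in weights:
        activation = np.tanh(activation @ W)     # weighted connections + nonlinearity
    return activation

print(forward([0.5, -1.0, 0.2, 0.9]))            # a 2-dimensional output
```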
The conceptual foundations and engineering advances laid by LeCun, Bengio and Hinton over a 30-year period were significantly advanced by the prevalence of powerful graphics processing unit (GPU) computers, as well as access to massive datasets. In recent years, these and other factors led to leap-frog advances in technologies such as computer vision, speech recognition and machine translation.
Hinton, LeCun and Bengio have worked together and independently. For example, LeCun performed postdoctoral work under Hinton’s supervision, and LeCun and Bengio worked together at Bell Labs beginning in the early 1990s. Even while not working together, there is a synergy and interconnectedness in their work, and they have greatly influenced each other.
Bengio, Hinton and LeCun continue to explore the intersection of machine learning with neuroscience and cognitive science, most notably through their joint participation in the Learning in Machines and Brains program, an initiative of CIFAR, formerly known as the Canadian Institute for Advanced Research.
Select Technical Accomplishments
The technical achievements of this year’s Turing Laureates, which have led to significant breakthroughs in AI technologies, include, but are not limited to, the following:
Geoffrey Hinton
Backpropagation: In a 1986 paper, “Learning Internal Representations by Error Propagation,” co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.
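The mechanics can be sketched with a tiny NumPy network trained on XOR, a task a single layer cannot solve. This is an illustration of the algorithm, not code from the 1986 paper: the forward pass computes predictions, and the backward pass propagates the error gradient through each layer with the chain rule to update the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learn XOR with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1)                      # hidden representation
    out = sigmoid(h @ W2)                    # prediction

    # Backward pass: propagate the error gradient through each layer (chain rule).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2).ravel())                  # approaches [0, 1, 1, 0]
```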
Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.
Improvements to convolutional neural networks: In 2012, with his students, Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.
Yoshua Bengio
Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into a system used by AT&T/NCR for reading handwritten checks and were considered a pinnacle of neural network research in the 1990s; modern deep learning speech recognition systems are extending these concepts.
High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper “A Neural Probabilistic Language Model,” which introduced high-dimensional word embeddings as a representation of word meaning. Bengio’s insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism, which led to breakthroughs in machine translation and forms a key component of sequential processing with deep learning.
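The attention idea can be sketched generically (an illustration written for this article, not the exact formulation in Bengio’s papers): score each input position against a query, turn the scores into weights with a softmax, and return the weighted average of the values.

```python
import numpy as np

def attention(query, keys, values):
    """Soft attention: score each position, softmax the scores into weights,
    and return a weighted average of the values."""
    scores = keys @ query                                  # one score per position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                               # softmax
    return weights @ values, weights

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 16))      # 5 input positions, 16-dim representations
values = rng.normal(size=(5, 16))
query = rng.normal(size=16)
context, weights = attention(query, keys, values)
print(weights.round(2))              # how much each position contributes
```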
Generative adversarial networks: Since 2010, Bengio’s papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.
Yann LeCun
Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.
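At its core, a convolutional layer slides one small set of learned weights across the whole image, reusing the same weights at every location. The sketch below is a minimal illustration of that operation, not LeCun’s implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one small filter over the image; the same weights are reused
    at every location, which is what makes convolutional nets efficient."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0                                # a vertical stroke
edge_filter = np.array([[-1.0, 0.0, 1.0]] * 3)   # responds to vertical edges
print(conv2d(image, edge_filter))
```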
Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.
Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representation can be learned in neural networks—a concept that is now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, used in every modern deep learning software, that learning systems can be built as complex networks of modules where backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.
Biographical Background
Geoffrey Hinton
Geoffrey Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute and a University Professor Emeritus at the University of Toronto. Hinton received a Bachelor’s degree in experimental psychology from Cambridge University and a Doctoral degree in artificial intelligence from the University of Edinburgh. He was the founding Director of the Neural Computation and Adaptive Perception (later Learning in Machines and Brains) program at CIFAR.
Hinton’s honors include Companion of the Order of Canada (Canada’s highest honor), Fellow of the Royal Society (UK), foreign member of the National Academy of Engineering (US), the International Joint Conference on Artificial Intelligence (IJCAI) Award for Research Excellence, the NSERC Herzberg Gold medal, and the IEEE James Clerk Maxwell Gold medal. He was also selected by Wired magazine for “The Wired 100—2016’s Most Influential People” and by Bloomberg for the 50 people who changed the landscape of global business in 2017.
Yoshua Bengio
Yoshua Bengio is a Professor at the University of Montreal, and the Scientific Director of both Mila (Quebec’s Artificial Intelligence Institute) and IVADO (the Institute for Data Valorization). He is Co-director (with Yann LeCun) of CIFAR's Learning in Machines and Brains program. Bengio received a Bachelor’s degree in electrical engineering, a Master’s degree in computer science and a Doctoral degree in computer science from McGill University.
Bengio’s honors include being named an Officer of the Order of Canada, Fellow of the Royal Society of Canada and the Marie-Victorin Prize. His work in founding and serving as Scientific Director of the Quebec Artificial Intelligence Institute (Mila) is also recognized as a major contribution to the field. Mila, an independent nonprofit organization, now counts 300 researchers and 35 faculty members among its ranks. It is the largest academic center for deep learning research in the world, and has helped put Montreal on the map as a vibrant AI ecosystem, with research labs from major companies as well as AI startups.
Yann LeCun
Yann LeCun is Silver Professor of the Courant Institute of Mathematical Sciences at New York University, and VP and Chief AI Scientist at Facebook. He received a Diplôme d'Ingénieur from the Ecole Superieure d'Ingénieur en Electrotechnique et Electronique (ESIEE), and a PhD in computer science from Université Pierre et Marie Curie.
His honors include being a member of the US National Academy of Engineering; Doctorates Honoris Causa, from IPN Mexico and École Polytechnique Fédérale de Lausanne (EPFL); the Pender Award, University of Pennsylvania; the Holst Medal, Technical University of Eindhoven & Philips Labs; the Nokia-Bell Labs Shannon Luminary Award; the IEEE PAMI Distinguished Researcher Award; and the IEEE Neural Network Pioneer Award. He was also selected by Wired magazine for “The Wired 100—2016’s Most Influential People” and its “25 Geniuses Who are Creating the Future of Business.” LeCun was the founding director of the NYU Center of Data Science, and is a Co-director (with Yoshua Bengio) of CIFAR's Learning in Machines and Brains program. LeCun is also a co-founder and former Member of the Board of the Partnership on AI, a group of companies and nonprofits studying the societal consequences of AI.
ACM will present the 2018 ACM A.M. Turing Award at its annual Awards Banquet on June 15, 2019 in San Francisco, California.
2017 ACM A.M. Turing Award
ACM named John L. Hennessy, former President of Stanford University, and David A. Patterson, retired Professor of the University of California, Berkeley, recipients of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry. Hennessy and Patterson created a systematic and quantitative approach to designing faster, lower power, and reduced instruction set computer (RISC) microprocessors. Their approach led to lasting and repeatable principles that generations of architects have used for many projects in academia and industry. Today, 99% of the more than 16 billion microprocessors produced annually are RISC processors, and they are found in nearly all smartphones, tablets, and the billions of embedded devices that comprise the Internet of Things (IoT).
Hennessy and Patterson codified their insights in a very influential book, Computer Architecture: A Quantitative Approach, now in its sixth edition, reaching generations of engineers and scientists who have adopted and further developed their ideas. Their work underpins our ability to model and analyze the architectures of new processors, greatly accelerating advances in microprocessor design.
“ACM initiated the Turing Award in 1966 to recognize contributions of lasting and major technical importance to the computing field,” said ACM President Vicki L. Hanson. “The work of Hennessy and Patterson certainly exemplifies this standard. Their contributions to energy-efficient RISC-based processors have helped make possible the mobile and IoT revolutions. At the same time, their seminal textbook has advanced the pace of innovation across the industry over the past 25 years by influencing generations of engineers and computer designers.”
Attesting to the impact of Hennessy and Patterson’s work is the assessment of Bill Gates, principal founder of Microsoft Corporation, that their contributions “have proven to be fundamental to the very foundation upon which an entire industry flourished.”
Development of MIPS and SPARC
While the idea of reduced complexity architecture had been explored since the 1960s—most notably in the IBM 801 project—the work that Hennessy and Patterson led, at Stanford and Berkeley respectively, is credited with firmly establishing the feasibility of the RISC approach, popularizing its concepts, and introducing it to academia and industry. The RISC approach differed from the prevailing complex instruction set computer (CISC) computers of the time in that it required a small set of simple and general instructions (functions a computer must perform), requiring fewer transistors than complex instruction sets and reducing the amount of work a computer must perform.
Patterson’s Berkeley team, which coined the term RISC, built and demonstrated their RISC-1 processor in 1982. With 44,000 transistors, the RISC-1 prototype outperformed a conventional CISC design that used 100,000 transistors. Hennessy co-founded MIPS Computer Systems Inc. in 1984 to commercialize the Stanford team’s work. Later, the Berkeley team’s work was commercialized by Sun Microsystems in its SPARC microarchitecture.
Despite initial skepticism of RISC by many computer architects, the success of the MIPS and SPARC entrepreneurial efforts, the lower production costs of RISC designs, as well as more research advances, led to wider acceptance of RISC. By the mid-1990s, RISC microprocessors were dominant throughout the field.
Groundbreaking Textbook
Hennessy and Patterson presented new scientifically based methodologies in their 1990 textbook Computer Architecture: A Quantitative Approach. The book has influenced generations of engineers and, through its dissemination of key ideas to the computer architecture community, is credited with significantly increasing the pace of advances in microprocessor design. In Computer Architecture, Hennessy and Patterson encouraged architects to carefully optimize their systems to allow for the differing costs of memory and computation. Their work also enabled a shift from seeking raw performance to designing architectures that take into account issues such as energy usage, heat dissipation, and off-chip communication. The book was groundbreaking in that it was the first text of its kind to provide an analytical and scientific framework, as well as methodologies and evaluation tools for engineers and designers to evaluate the net value of microprocessor design.
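The quantitative style can be illustrated with a standard processor performance model of the kind the book teaches: execution time as the product of instruction count, cycles per instruction (CPI), and clock period. The numbers below are hypothetical, chosen only to show how two designs can be compared on equal footing.

```python
def cpu_time(instruction_count, cycles_per_instruction, clock_rate_hz):
    """Classic quantitative comparison: execution time as the product of
    instruction count, CPI, and clock period."""
    return instruction_count * cycles_per_instruction / clock_rate_hz

# Hypothetical numbers: a RISC-style design executes more (simpler)
# instructions but at a much lower CPI, and still comes out ahead.
cisc = cpu_time(1.0e9, cycles_per_instruction=5.0, clock_rate_hz=2.0e9)
risc = cpu_time(1.3e9, cycles_per_instruction=1.5, clock_rate_hz=2.0e9)
print(f"CISC-like: {cisc:.3f}s  RISC-like: {risc:.3f}s  speedup: {cisc / risc:.2f}x")
```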
Biographical Background
John L. Hennessy
John L. Hennessy was President of Stanford University from 2000 to 2016. He is Director of the Knight-Hennessy Scholars Program at Stanford, a member of the boards of Cisco Systems and the Gordon and Betty Moore Foundation, and Chairman of the Board of Alphabet Inc. Hennessy earned his Bachelor’s degree in electrical engineering from Villanova University and his Master’s and doctoral degrees in computer science from the State University of New York at Stony Brook.
Hennessy’s numerous honors include the IEEE Medal of Honor, the ACM-IEEE CS Eckert-Mauchly Award (with Patterson), the IEEE John von Neumann Medal (with Patterson), the Seymour Cray Computer Engineering Award, and the Founders Award from the American Academy of Arts and Sciences. Hennessy is a Fellow of ACM and IEEE, and is a member of the National Academy of Engineering, the National Academy of Sciences and the American Philosophical Society.
David A. Patterson
David A. Patterson is a Distinguished Engineer at Google and serves as Vice Chair of the Board of the RISC-V Foundation, which offers a free and open instruction set architecture with the aim of enabling a new era of processor innovation through open standard collaboration. Patterson was Professor of Computer Science at UC Berkeley from 1976 to 2016. He received his Bachelor’s, Master’s and doctoral degrees in computer science from the University of California, Los Angeles.
Patterson’s numerous honors include the IEEE John von Neumann Medal (with Hennessy), the ACM-IEEE CS Eckert-Mauchly Award (with Hennessy), the Richard A. Tapia Award for Scientific Scholarship, Civic Science, and Diversifying Computing, and the ACM Karl V. Karlstrom Outstanding Educator Award. Patterson served as ACM President from 2004 to 2006. He is a Fellow of ACM, AAAS and IEEE, and was elected to the National Academy of Engineering and the National Academy of Sciences.
ACM will present the 2017 ACM A.M. Turing Award at its annual Awards Banquet on June 23, 2018 in San Francisco, California.
2016 ACM A.M. Turing Award
ACM named Sir Tim Berners-Lee, a Professor at Massachusetts Institute of Technology and the University of Oxford, the recipient of the 2016 ACM A.M. Turing Award. Berners-Lee was cited for inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale. Considered one of the most influential computing innovations in history, the World Wide Web is the primary tool used by billions of people every day to communicate, access information, engage in commerce, and perform many other important activities.
“The first-ever World Wide Web site went online in 1991,” said ACM President Vicki L. Hanson. “Although this doesn’t seem that long ago, it is hard to imagine the world before Sir Tim Berners-Lee’s invention. In many ways, the colossal impact of the World Wide Web is obvious. Many people, however, may not fully appreciate the underlying technical contributions that make the Web possible. Sir Tim Berners-Lee not only developed the key components, such as URIs and web browsers that allow us to use the Web, but offered a coherent vision of how each of these elements would work together as part of an integrated whole.”
“The Web has radically changed the way we share ideas and information and is a key factor for global economic growth and opportunity,” said Andrei Broder, Google Distinguished Scientist. “The idea of a web of knowledge originated in a brilliant 1945 essay by Vannevar Bush. Over the next decades, several pieces of the puzzle came together: hypertext, the Internet, personal computing. But the explosive growth of the Web started when Tim Berners-Lee proposed a unified user interface to all types of information supported by a new transport protocol. This was a significant inflection point, setting the stage for everyone in the world, from high schoolers to corporations, to independently build their Web presences and collectively create the wonderful World Wide Web.”
Development of the World Wide Web
Berners-Lee, who graduated from Oxford University with a degree in Physics, submitted the proposal for the World Wide Web in 1989 while working at CERN, the European Organization for Nuclear Research. He noticed that scientists were having difficulty sharing information about particle accelerators. By 1989, interconnectivity among computers via Transmission Control Protocol/Internet Protocol (TCP/IP) had been in existence for a decade, and while segments of the scientific community were using the Internet, the kinds of information they could easily share were limited. Berners-Lee envisioned a system where CERN staff could exchange documents over the Internet using readable text that contained embedded hyperlinks.
To make his proposed information-sharing system work, Berners-Lee invented several integrated tools that would underpin the World Wide Web, including:
- Uniform Resource Identifier (URI), which allows any object on the Internet (such as a document or image) to be named, and thus identified
- Hypertext Transfer Protocol (HTTP), which allows for the exchange, retrieval, or transfer of an object over the Internet
- Web browser, a software application that retrieves and renders resources on the World Wide Web along with clickable links to other resources, and which, in its original version, allowed users to modify webpages and make new links
- Hypertext Markup Language (HTML), which allows web browsers to interpret documents and other resources and render them as multimedia webpages
Berners-Lee launched the world’s first website, http://info.cern.ch, on August 6, 1991.
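Those building blocks still interact in essentially the same way today. The sketch below, which uses only Python's standard library and the reserved example domain example.com (an illustrative stand-in, not Berners-Lee's original software), shows a client naming a resource with a URI, fetching it over HTTP, and receiving the HTML that a browser would render:

    # Illustrative sketch of URI + HTTP + HTML working together, using only the
    # Python standard library. The URL is the reserved example domain, chosen
    # purely for demonstration.
    from urllib.parse import urlparse
    from urllib.request import urlopen

    uri = "http://example.com/"            # a URI names the resource
    parts = urlparse(uri)                  # scheme, host, and path say where and what
    print(parts.scheme, parts.netloc, parts.path)

    with urlopen(uri) as response:         # HTTP retrieves the named resource
        html = response.read().decode("utf-8", errors="replace")

    # HTML is the markup a browser would render as a page; here we simply show
    # the first line of the returned document.
    print(html.splitlines()[0])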
Central to the universal adoption of the World Wide Web was Berners-Lee’s decision to develop it as open and royalty-free software. Berners-Lee released his libwww software package in the early 1990s, granting the rights to anyone to study, change, or distribute the software in any way they chose. He then continued to guide the project and worked with developers around the world to develop web-server code. The popularity of the open source software, in turn, led to the evolution of early web browsers, including Mosaic, that are credited with propagating the Web beyond academic and government research settings and making it a global phenomenon.
By 1994, the number of websites had grown to nearly 3,000, and today, there are more than 1 billion websites online.
ACM will present the 2016 ACM A.M. Turing Award at its annual Awards Banquet on June 24, 2017 in San Francisco, California.
Biographical Background
Tim Berners-Lee is a graduate of Oxford University, where he received a first-class Bachelor of Arts degree in Physics. Berners-Lee is the 3Com Founders Professor of Engineering in the School of Engineering with a joint appointment in the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at Massachusetts Institute of Technology (MIT), where he also heads the Decentralized Information Group (DIG). He is also a Fellow at Christ Church and a Professorial Research Fellow at the Department of Computer Science, University of Oxford.
Berners-Lee founded the World Wide Web Consortium (W3C) in 1994, where he continues to serve as Director. W3C is an international community that develops open standards to ensure the interoperability and long-term growth of the Web. In 2009, he established the World Wide Web Foundation, which works to advance the Open Web as a public good and a basic human right. He is the President of the Open Data Institute (ODI) in London.
He has received many awards and honors, including the ACM Software System Award in 1995. Berners-Lee was knighted in 2004 and received the Order of Merit in 2007, becoming one of only 24 living members entitled to hold the honor. He is a Fellow of the Royal Society, and has received honorary degrees from a number of universities around the world, including Manchester, Harvard, and Yale. TIME magazine included him as one of the 100 Most Important People of the 20th Century.
Cryptography Pioneers Receive 2015 ACM A.M. Turing Award
Whitfield Diffie, former Chief Security Officer of Sun Microsystems, and Martin E. Hellman, Professor Emeritus of Electrical Engineering at Stanford University, are the recipients of the 2015 ACM A.M. Turing Award for critical contributions to modern cryptography. The ability of two parties to communicate privately, even over an insecure channel, is fundamental for billions of people around the world. On a daily basis, individuals establish secure online connections with banks, e-commerce sites, email servers and the cloud. Diffie and Hellman’s groundbreaking 1976 paper, “New Directions in Cryptography,” introduced the ideas of public-key cryptography and digital signatures, which are the foundation for most regularly used security protocols on the Internet today. The Diffie-Hellman Protocol protects daily Internet communications and trillions of dollars in financial transactions.
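The key-agreement idea introduced in that paper can be stated in a few lines: each party combines a private value with shared public parameters, the two exchange the results, and both independently arrive at the same secret without ever transmitting it. The Python sketch below uses deliberately tiny, insecure toy numbers solely to illustrate the arithmetic; real deployments rely on very large, carefully chosen parameters:

    # Toy illustration of Diffie-Hellman key agreement. The prime p and
    # generator g are far too small to be secure; they only make the modular
    # arithmetic easy to follow.
    p = 23   # public prime modulus (toy value)
    g = 5    # public generator (toy value)

    alice_private = 6                           # Alice's secret
    bob_private = 15                            # Bob's secret

    alice_public = pow(g, alice_private, p)     # Alice sends g^a mod p
    bob_public = pow(g, bob_private, p)         # Bob sends g^b mod p

    # Each side combines the other's public value with its own secret.
    alice_shared = pow(bob_public, alice_private, p)   # (g^b)^a mod p
    bob_shared = pow(alice_public, bob_private, p)     # (g^a)^b mod p

    assert alice_shared == bob_shared           # both compute the same shared secret
    print("shared secret:", alice_shared)       # 2 with these toy values

An eavesdropper who sees only p, g, and the two public values faces the discrete logarithm problem, which is believed to be computationally intractable at realistic parameter sizes.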
Biographical Background
Whitfield Diffie is a former Vice President and Chief Security Officer of Sun Microsystems, where he became a Sun Fellow. As Chief Security Officer, Diffie was the chief exponent of Sun’s security vision and responsible for developing Sun’s strategy to achieve that vision. Diffie is a graduate of the Massachusetts Institute of Technology (MIT).
Diffie received the 1996 ACM Paris Kanellakis Theory and Practice Award (with Leonard Adleman, Martin Hellman, Ralph Merkle, Ronald Rivest and Adi Shamir), and received the 2010 IEEE Richard W. Hamming Medal (with Martin Hellman and Ralph Merkle). He is a Marconi Fellow, a Fellow of the Computer History Museum, and received an honorary doctorate from the Swiss Federal Institute of Technology.
Diffie has authored more than 30 technical papers, and has testified several times to the U.S. Senate and House of Representatives on the public policy aspects of cryptography.
Martin Hellman is Professor Emeritus of Electrical Engineering at Stanford University, where he was Professor of Electrical Engineering for 25 years. A graduate of New York University, Hellman earned his Master's degree and his Ph.D. from Stanford. Hellman received the 1996 ACM Paris Kanellakis Theory and Practice Award (with Leonard Adleman, Whitfield Diffie, Ralph Merkle, Ronald Rivest and Adi Shamir), as well as the 2010 IEEE Richard W. Hamming Medal (with Whitfield Diffie and Ralph Merkle). He is a Marconi Fellow, a Fellow of the Computer History Museum, and a member of the US National Academy of Engineering.
Hellman has authored more than 70 technical papers and holds 12 U.S. patents, along with a number of corresponding international patents. View Hellman's publications in the ACM DL.
Avi Wigderson Delivers Turing Lecture at STOC 2024
Avi Wigderson received the 2023 ACM A.M. Turing Award for foundational contributions to the theory of computation, including reshaping our understanding of the role of randomness in computation, and for his decades of intellectual leadership in theoretical computer science. Wigderson is the Herbert H. Maass Professor in the School of Mathematics at the Institute for Advanced Study in Princeton, New Jersey.
Wigderson delivered his Turing Award Lecture "Alan Turing: A TCS Role Model," at STOC 2024: ACM Symposium on Theory of Computing.
Learn About the Computing Pioneers Who Have Received the A.M. Turing Award
ACM's History Committee maintains the A.M. Turing Award website where you can find essays about the recipients of the A.M. Turing Award, their A.M. Turing Award Lectures, video interviews and transcripts, annotated bibliographies, photos and more.
Spotlight on Turing Laureates
The ACM A.M. Turing Award, computing’s most prestigious honor, acknowledges individuals who have made lasting and major contributions to the field. Here, we look back at some of the technologies and breakthroughs that continue to impact our lives, and the remarkable innovators who helped shape them.
ACM Awards by Category
- Career-Long Contributions
- Early-to-Mid-Career Contributions
- Specific Types of Contributions
  - ACM Charles P. "Chuck" Thacker Breakthrough in Computing Award
  - ACM Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics
  - ACM Frances E. Allen Award for Outstanding Mentoring
  - ACM Gordon Bell Prize
  - ACM Gordon Bell Prize for Climate Modeling
  - ACM Luiz André Barroso Award
  - ACM Karl V. Karlstrom Outstanding Educator Award
  - ACM Paris Kanellakis Theory and Practice Award
  - ACM Policy Award
  - ACM Presidential Award
  - ACM Software System Award
  - ACM Athena Lecturer Award
  - ACM AAAI Allen Newell Award
  - ACM-IEEE CS Eckert-Mauchly Award
  - ACM-IEEE CS Ken Kennedy Award
  - Outstanding Contribution to ACM Award
  - SIAM/ACM Prize in Computational Science and Engineering
  - ACM Programming Systems and Languages Paper Award
- Student Contributions
- Regional Awards
  - ACM India Doctoral Dissertation Award
  - ACM India Early Career Researcher Award
  - ACM India Outstanding Contributions in Computing by a Woman Award
  - ACM India Outstanding Contribution to Computing Education Award
  - IPSJ/ACM Award for Early Career Contributions to Global Research
  - CCF-ACM Award for Artificial Intelligence
- SIG Awards
- How Awards Are Proposed