Experts Come to ISI to Ponder Problems of Exascale Computing

August 1, 2011

Discussions included systemic problems in ultrahigh-performance computing.

Photo: ISI’s Bob Lucas

Robert Lucas hosted specialists from around the world who came to the Marina del Rey, California, campus of the USC Information Sciences Institute to discuss the challenges of “Exascale” computing.

The event was a workshop held under the auspices of the U.S. Department of Energy’s Office of Science and its Office of Advanced Scientific Computing Research (ASCR). In recent years, ISI and sister institutions have encountered systemic problems in ultrahigh-performance computing, which they discussed at “Exascale and Beyond: Gaps in Research, Gaps in Our Thinking.”

Lucas, director of ISI’s Computational Science group, described these emerging issues in the presentation that opened the two-day event, “Exascale: Can My Code Get from Here to There?”

Exascale (“extreme scale”) refers to new computing systems that run at rates of more than a million trillion (10^18) floating-point operations per second (flops) – that is, roughly the computing power of one billion laptop computers running simultaneously. This stretches the limits of the state of the programming art.
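The arithmetic behind that laptop comparison is simple to check. Here is a minimal back-of-the-envelope sketch, assuming a circa-2011 laptop sustains about one gigaflop (that per-laptop figure is an illustrative assumption, not a number from the article):

```python
# Back-of-the-envelope check: how many laptops add up to one exaflop?

EXAFLOP = 10**18       # exascale rate: a million trillion floating-point ops/sec
LAPTOP_FLOPS = 10**9   # assumed sustained rate of one laptop (~1 gigaflop, an assumption)

laptops_needed = EXAFLOP // LAPTOP_FLOPS
print(f"{laptops_needed:,} laptops")  # prints: 1,000,000,000 laptops
```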

“Today’s high-end scientific and engineering software is formulated to fit an execution model that we have evolved to over half a century,” noted Lucas.

Computers are no longer standalone machines that do all of their work independently inside a single information-processing chip. Instead, engineers have been moving to systems in which computing functions and memory are distributed across vast industrial parks full of linked processing and memory units.

This creates a different world for programmers, Lucas noted. “It is getting increasingly difficult for application developers to map their codes to today’s petascale systems given a programming environment that is cobbled together from a mixture of programming languages, extensions, and libraries, almost all designed for the systems fielded in the last millennium. Features expected in Exascale systems are not well represented in today’s programming model. The objective will be to bring these problems to the fore, not propose solutions to them.”

As the “statement of challenges to be addressed” for the event noted, “Energy efficiency constraints and growth in explicit on-chip parallelism will require a mass migration to new algorithms and software architecture that is as broad and disruptive as the migration from vector to parallel computing systems that occurred 15 years ago.”

The conference attracted participants from across the computer science world, including researchers from the Argonne, Sandia, Lawrence Livermore, Lawrence Berkeley, Los Alamos, and Oak Ridge National Laboratories, as well as from M.I.T., Carnegie Mellon, and other major universities.

ISI researchers Pedro Diniz and Jacqueline Chame took part in the event, along with their longtime ISI colleague Mary Hall, who is now at the University of Utah. ISI’s Larry Godinez coordinated venue arrangements.

The event was a working conference, aimed at producing recommendations for addressing and solving the problems of Exascale computing.
