SC98 High Performance Computing Challenge

Participants have two sessions to present their latest leading-edge work to a panel of judges representing industry, universities and government laboratories. Groups will be asked to give a poster presentation at the Gala Opening on Monday night and a 20-minute demonstration to the judging panel on Wednesday. Awards will be presented to the winners on Thursday afternoon at the Awards Session.

Contacts

Stephen Jones
sjones@wes.army.mil

Philip Papadopoulos
phil@msr.epm.ornl.gov

Important Deadlines

November 1 - Final title and list of team members

November 9 - Monday Night Poster Session at Gala Opening

November 11 - Online demo and judging

High Performance Computing Challenge Judges

Jack Dongarra, University of Tennessee, Knoxville, and Oak Ridge National Laboratory
Tom Kitchens, DOE
John Grosh, DoD High Performance Computing Modernization Office
Kay Howell, National Coordination Office for Computing, Information, and Communications
Chuck Koelbel, National Science Foundation
Robyn MacFarlane, Eglin Air Force Base

High Performance Computing Challenge Schedule

Poster Presentations: Monday, November 9, 7:00 p.m. to 9:00 p.m.
Location: Research booths, as noted below

Judging: Wednesday, November 11, 1:00 p.m. to 4:30 p.m.
Location: Research booths, as noted below

Awards will be presented at the Awards Session, Thursday, November 12, 1:30 p.m., Room 110A

Titles, Demonstrations, Booths and Times

Title: Dual-level Parallel Analysis of Harbor Wave Response Using MPI and OpenMP
Demonstration: DoD HPC Modernization Program
Booth: R780
Time: 1:00 p.m.

Title: Metacomputing the Einstein Theory of Spacetime: Colliding Black Holes and Neutron Stars Across the Atlantic Ocean
Demonstration: NCSA
Booth: R660
Time: 1:30 p.m.

Title: A Dynamic Master-Slave Approach for the Simulation of Dispersed Multiphase Flows
Demonstration: University of Illinois Urbana-Champaign
Booth: R125
Time: 2:00 p.m.

Title: Industrial Mold Filling Simulation Using an Internationally Distributed Software Component Architecture
Demonstration: iGRID
Booth: R130
Time: 2:30 p.m.

Title: Innovative Wide Area Applications on the GUSTO Grid Testbed
Demonstration: Argonne National Laboratory
Booth: R570
Time: 1:00 p.m.

Title: The Terabyte Challenge: An Open Testbed for Managing, Mining and Modeling Massive and Distributed Data
Demonstration: National Center for Data Mining (NCDM) / The National Scalable Cluster Project (NSCP)
Booth: R370
Time: 1:30 p.m.

Title: Legion: Seamless, Secure Wide-Area Metacomputing
Demonstration: Legion
Booth: R730
Time: 2:00 p.m.

Title: Everyware: Combining Disparate Software Infrastructures for Performance
Demonstration: NPACI
Booth: R665
Time: 2:30 p.m.

Title: Near Real-time Imaging of Human Brain Activity
Demonstration: Pittsburgh Supercomputing Center
Booth: R565
Time: 3:30 p.m.

Title: Semi-Transparent Supercomputing: Dynamic Volume Rendering on Remote HPC Systems
Demonstration: NPACI
Booth: R665
Time: 4:00 p.m.

High Performance Computing Challenge Teams

DUAL-LEVEL PARALLEL ANALYSIS OF HARBOR WAVE RESPONSE USING MPI AND OPENMP

Primary Contact: Henry A. Gabb, gabb@ibm.wes.hpc.mil

Steve W. Bova, Mississippi State University; Clay P. Breshears, Rice University; Henry A. Gabb and Christine Cuicchi, Waterways Experiment Station DoD Major Shared Resource Center; Richard Strelitz, Science Applications International Corporation; Zeki Demirbilek, Waterways Experiment Station Coastal and Hydraulics Laboratory

The application, CGWAVE, models harbor response, taking into account outside sea state, harbor shape and man-made structures (e.g., piers, breakwaters, naval vessels). It is a forecasting and nowcasting tool used in coastal and military planning and civil engineering. Historically, a lack of computing power has forced approximations that limited the predictive capability of the model. We will show how simultaneous MPI/OpenMP parallelism drastically improves both program performance and simulation accuracy.
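
Although no code accompanies the abstract, the dual-level pattern it describes can be sketched briefly. The following is a minimal illustration, not CGWAVE itself; the case decomposition, the kernel and all names are hypothetical. Each MPI rank takes one coarse-grained case, and OpenMP threads share the loop inside it.

    /* Minimal sketch of dual-level (MPI + OpenMP) parallelism in the
     * spirit of the approach described above. Hypothetical, not CGWAVE.
     * Build with, e.g.: mpicc -fopenmp hybrid.c -o hybrid
     */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000

    /* Hypothetical per-case kernel: OpenMP threads share this loop. */
    static double solve_case(int case_id)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += (double)(i % (case_id + 2));
        return sum;
    }

    int main(int argc, char **argv)
    {
        int rank, size, provided;
        /* Threads never call MPI here, so FUNNELED support suffices. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = solve_case(rank);  /* coarse grain: one case per rank */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("aggregate result over %d cases: %g\n", size, total);
        MPI_Finalize();
        return 0;
    }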

METACOMPUTING THE EINSTEIN THEORY OF SPACETIME: COLLIDING BLACK HOLES AND NEUTRON STARS ACROSS THE ATLANTIC OCEAN

Primary Contact: Ed Seidel, eseidel@aei-potsdam.mpg.de

Werner Benger, Max-Planck-Institut fuer Gravitationsphysik and Konrad-Zuse-Institut; Bernd Bruegmann, Max-Planck-Institut fuer Gravitationsphysik; Ian Foster and Olle Larsson, Argonne National Laboratory; Joan Masso, University of the Balearic Islands; Mark Miller, Washington University; Jason Novotny, NCSA; Marcus Pattloch, DFN-Verein; Edward Seidel and John Shalf, NCSA; Warren Smith, Argonne National Laboratory; Wai-Mo Suen and Malcolm Tobias, Washington University

Using tightly coupled supercomputers in Europe and America, we propose to perform an intercontinental, distributed simulation of the full 3D Einstein equations of general relativity, calculating the collision of black holes and neutron stars. The simulation itself will be distributed across machines on both continents, utilizing Globus, and will be controlled and displayed live on an ImmersaDesk virtual reality system at SC98. The domain decomposition will involve one compact object (either a neutron star or a black hole) in Europe and one in America. Fully utilizing a transatlantic ATM network, the two objects will collide and merge (in a virtual space "somewhere" over the Atlantic Ocean).
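
As background on what such a decomposition entails, here is a minimal halo-exchange sketch for a two-subdomain split, written against plain MPI. It is a hypothetical illustration only; the actual run uses Globus-enabled communication between the continents, and the array size and update step below are placeholders.

    /* Minimal halo-exchange sketch for a two-domain decomposition like
     * the one described above (one subdomain per site). Hypothetical.
     */
    #include <mpi.h>
    #include <string.h>

    #define NX 1024   /* interior points per subdomain (placeholder size) */

    int main(int argc, char **argv)
    {
        double u[NX + 2];   /* one ghost cell on each side */
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* expect 2 ranks */

        memset(u, 0, sizeof u);
        int left  = (rank == 0) ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        for (int step = 0; step < 100; step++) {
            /* Exchange boundary values with the neighboring subdomain.
             * Over a WAN this latency dominates, so a real code would
             * overlap it with interior computation. */
            MPI_Sendrecv(&u[NX], 1, MPI_DOUBLE, right, 0,
                         &u[0],  1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[1],      1, MPI_DOUBLE, left,  1,
                         &u[NX + 1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* ... update interior points u[1..NX] here ... */
        }
        MPI_Finalize();
        return 0;
    }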

A DYNAMIC MASTER-SLAVE APPROACH FOR THE SIMULATION OF DISPERSED MULTIPHASE FLOWS

Primary Contact: Bernard Bunner, bunner@engin.umich.edu

Bernard Bunner and Gretar Tryggvason, Department of Mechanical Engineering and Applied Mechanics, University of Michigan, Ann Arbor

Simulations of dispersed multiphase flows present some of the most difficult challenges in computational fluid dynamics because of the existence of a deformable boundary within the flow domain. A finite-difference/front-tracking method has been developed to accurately calculate the motion of bubbles and drops in a suspending fluid. The interface between the two fluids is tracked explicitly by a moving two-dimensional mesh superimposed on the fixed three-dimensional computational grid on which the Navier-Stokes equations are solved. A simple and efficient master-slave approach is implemented to parallelize the linked-list structures describing the moving mesh, while domain decomposition is the natural choice for parallelizing the fixed-grid data. This state-of-the-art method has already produced revolutionary results, yielding new insight into the dynamics of multiphase flows.
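
A minimal sketch of a dynamic master-slave distribution of this kind appears below. It is illustrative only: the task payload is a bare index standing in for the moving-mesh structures, and all names are hypothetical.

    /* Minimal sketch of dynamic master-slave work distribution of the
     * kind described above. The "task" here is just an index; in the
     * real code the master manages front-tracking elements.
     */
    #include <mpi.h>

    #define NTASKS 64
    #define TAG_WORK 1
    #define TAG_DONE 2

    static double do_task(int t) { return (double)t * t; }  /* placeholder */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                      /* master */
            int next = 0, active = 0;
            MPI_Status st;
            /* Prime every slave with one task. */
            for (int r = 1; r < size && next < NTASKS; r++, next++, active++)
                MPI_Send(&next, 1, MPI_INT, r, TAG_WORK, MPI_COMM_WORLD);
            while (active > 0) {
                double result;
                MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                active--;
                if (next < NTASKS) {          /* hand the idle slave more work */
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                    next++; active++;
                }
            }
            int stop = -1;                    /* tell every slave to quit */
            for (int r = 1; r < size; r++)
                MPI_Send(&stop, 1, MPI_INT, r, TAG_DONE, MPI_COMM_WORLD);
        } else {                              /* slave */
            for (;;) {
                int task;
                MPI_Status st;
                MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TAG_DONE) break;
                double result = do_task(task);
                MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }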

INDUSTRIAL MOLD FILLING SIMULATION USING AN INTERNATIONALLY DISTRIBUTED SOFTWARE COMPONENT ARCHITECTURE

Primary Contact: Randall Bramley, bramley@cs.indiana.edu

Martin Audet, Jean-Francois Hetu and Florin Ilinca, Industrial Materials Institute, NRC, Quebec; Peter Beckman and William F. Humphrey, Los Alamos National Laboratory; Randall Bramley, Fabian Breg, Prafulla Deuskar, Shridhar Diwan, Donald F. McMullen, Dennis Gannon, Madhusudhan Govindaraju, John N. Huffman, Juan Villacis and Eric Wernert, Indiana University-Bloomington; Ian Foster and Steve Tuecke, Argonne National Laboratory; Benoit Ozell, Centre de Recherche en Calcul Applique, Montreal

This project will connect high performance software and hardware systems to provide an integrated solution environment for a 3D parallel finite element code modeling industrial material processes such as casting and injection molding. It will encompass problems of resource allocation and management, component systems for problem-solving environments, parallel solution of implicit finite element systems on complex geometries, solution of large-scale distributed systems of equations and 3D immersive visualization. One goal of this project is to demonstrate a complete, end-to-end high performance parallel solution of problems of major importance to industry, all occurring in a geographically distributed hardware and software environment.

INNOVATIVE WIDE AREA APPLICATIONS ON THE GUSTO GRID TESTBED

Primary Contact: Ian Foster, foster@mcs.anl.gov

Joe Bester, Ian Foster, Joe Insley, Stuart Martin, Warren Smith, Brian Toonen, Steve Tuecke and Gregor von Laszewski, Mathematics and Computer Science Division, Argonne National Laboratory; Karl Czajkowski, Steve Fitzgerald, Carl Kesselman, Mei Su and Marcus Thiebaux, USC Information Sciences Institute; Sharon Brunett, California Institute of Technology; Russ Miller, SUNY Stony Brook; Steve Wang, Ian McNulty and Mark Rivers, Advanced Photon Source, Argonne National Laboratory

We showcase the Globus metacomputing toolkit and the associated GUSTO testbed, the first large-scale realization of a high-performance distributed computing infrastructure. We do this by using Globus and GUSTO to perform three unique computations, none of which would have been possible without Globus services and GUSTO resources: (1) collaborative online analysis of data from a microtomographic beamline at the Advanced Photon Source at Argonne; (2) record-setting distributed interactive simulation, using multiple supercomputers; and (3) high-throughput computing for crystallographic phase problems.

THE TERABYTE CHALLENGE: AN OPEN TESTBED FOR MANAGING, MINING AND MODELING MASSIVE AND DISTRIBUTED DATA

Primary Contact: Robert Grossman, grossman@uic.edu

Robert Grossman, Stuart Bailey, Simon Kasif, Don Mon, Vijay Natarajan, Harinath Sivakumar, Gutti Srinath, Guruvayurappan Subramanyam, Andriy Turinskiy, Mohamed Zaheer and Suma Batchu, University of Illinois at Chicago; Robert Hollebeek, Peter Buneman, Don Benton and Pavlos Protopapas, University of Pennsylvania; David Rocke, University of California at Davis; Ken Sevcik, University of Toronto; Yike Guo, Imperial College, London; Drew Baden, University of Maryland at College Park; Peter Milne, Cooperative Research Center for Advanced Computational Systems, Australia; Bernie O'Lear, National Center for Atmospheric Research (NCAR); Robert Grossman and Michael Cornelison, Magnify, Inc.; Bob Hollebeek, HUBS, Inc.

The Terabyte Challenge Testbed (TCTB) is an open, distributed testbed for experimental studies and demonstrations involving data mining, predictive modeling and data intensive computing. We will demonstrate software tools that work with clusters of workstations, metaclusters consisting of geographically distributed clusters and superclusters consisting of geographically distributed clusters connected with high performance networks. The focus will be on mining massive and distributed data sets. We will demonstrate a variety of applications, including mining scientific and engineering data, medical and health care data and business data.

LEGION: SEAMLESS, SECURE WIDE-AREA METACOMPUTING

Primary Contact: Marty Humphrey, humphrey@cs.virginia.edu

Marty Humphrey, Andrew Grimshaw and the Legion team at the University of Virginia

Legion is an object-based metasystems software project that provides the appearance of a single virtual machine from a collection of geographically dispersed, heterogeneous resources. Applications will be shown that illustrate the programming and run-time environments of Legion. These applications, including hydrodynamic turbulence simulation with remote visualization (MPI and DICE), gene sequence matching and a parameter space study, will be executed using hundreds of heterogeneous machines spread across DoD, NPACI, UVa and our booth. A visual display will be used to monitor application performance on the nationwide Legion net. The emphasis in the accompanying discussion will be on ease of use, performance and security.

EVERYWARE: COMBINING DISPARATE SOFTWARE INFRASTRUCTURES FOR PERFORMANCE

Primary Contact: Rich Wolski, rich@cs.ucsd.edu

Rich Wolski, UC San Diego, NPACI, University of Tennessee at Knoxville; John Brevik, UC Berkeley; Alan Su, UC San Diego; Neil Spring, UC San Diego, University of Washington; Chandra Krintz, UC San Diego, University of Tennessee at Knoxville; Graziano Obertelli, UC San Diego

Our goal is to demonstrate the ways in which various distributed and metacomputing infrastructures such as Legion, Globus, Condor and Java can be profitably combined with "vanilla" Unix systems to solve large-scale computational problems. Everyware will allow processes and computational agents running on supercomputers, workstations, personal computers and within web browsers to be used in concert by a single application.

NEAR REAL-TIME IMAGING OF HUMAN BRAIN ACTIVITY

Primary Contact: Greg Hood, ghood@psc.edu

Greg Hood, Chad Vizino and Joel Welling, Pittsburgh Supercomputing Center; Doug Noll and Andy Stenger, University of Pittsburgh Medical Center; Jana Asher, Carnegie Mellon University

We will demonstrate near real-time imaging of brain activity with a live human subject in an MRI scanner at the University of Pittsburgh Medical Center. Data will be sent, as it is acquired, to the Cray T3E at the Pittsburgh Supercomputing Center for processing, and the results will then be sent to the show floor for visualization in the PSC research booth on an SGI Onyx with InfiniteReality graphics. Our presentation will describe the nature of functional imaging (fMRI), the methods that we use to reconstruct three-dimensional volumes from the raw scanner output and the prospective uses for this technology.

SEMI-TRANSPARENT SUPERCOMPUTING: DYNAMIC VOLUME RENDERING ON REMOTE HPC SYSTEMS

Primary Contact: Gregory Johnson, johnson@sdsc.edu

Gregory Johnson, Jon Genetti and Mike Gannis, SDSC, NPACI

Volume rendering of medical data produces accurate, highly detailed images of internal anatomy not available by other means. However, multi-gigabyte data sets, such as those from the Visible Human Project (NLM), exceed the capability of workstation-class systems to generate images at the rates required for effective exploration of the subject.

SDSC researchers have developed a distributed direct volume rendering system called the Massively Parallel Interactive Rendering Environment (MPIRE). MPIRE includes a Java applet through which the user configures the desired rendering parameters, target MPP system and compute resources. A scalable parallel rendering engine is then automatically started on the MPP over the specified number of nodes, the data is loaded, and an image is created and sent back to the Java applet. From there, the image is automatically updated by the engine as the user modifies the camera position, lighting, coloration or any other rendering parameter.
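
The server side of such a loop can be sketched in outline. The following is not MPIRE's actual interface; the structures, image size and the max-reduction compositing (standing in for proper back-to-front blending) are hypothetical assumptions. Each user update is broadcast to the render nodes, each node renders its brick of the volume, and the partial images are combined into a frame on the root.

    /* Minimal sketch of the render loop a system like MPIRE implies.
     * Hypothetical structures and sizes; not MPIRE's actual interface.
     */
    #include <mpi.h>
    #include <string.h>

    #define W 256
    #define H 256

    typedef struct { double camera[3]; double light[3]; } ViewParams;

    /* Hypothetical per-node renderer: fills a partial image from the
     * subvolume ("brick") owned by this rank. */
    static void render_brick(const ViewParams *vp, float *img)
    {
        memset(img, 0, W * H * sizeof(float));
        (void)vp;  /* a real renderer would ray-cast the local brick here */
    }

    int main(int argc, char **argv)
    {
        int rank;
        static float partial[W * H], frame[W * H];
        ViewParams vp = { {0, 0, 5}, {1, 1, 1} };

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int f = 0; f < 10; f++) {   /* one iteration per UI update */
            /* The root would read new camera/light settings from the
             * applet; every node must agree on them before rendering. */
            MPI_Bcast(&vp, sizeof vp, MPI_BYTE, 0, MPI_COMM_WORLD);
            render_brick(&vp, partial);
            /* Composite: a max-reduction stands in for proper
             * back-to-front alpha blending of the partial images. */
            MPI_Reduce(partial, frame, W * H, MPI_FLOAT, MPI_MAX, 0,
                       MPI_COMM_WORLD);
            /* if (rank == 0): send frame[] back to the Java applet ... */
        }
        MPI_Finalize();
        return 0;
    }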
