HPC 2008

 

High Performance Computing

and grids

 

An International Advanced Research Workshop

 

 

 

June 30th – July 4th, 2008, Cetraro, Italy

 

 

 

 

 

 

 

 

Final Programme

 

 

 

 

 

 

 

Programme Committee

 

D. ABRAMSON

Monash University

AUSTRALIA

F. BAETKE

Hewlett Packard

U.S.A.

R. BUYYA

University of Melbourne

AUSTRALIA

F. CAPPELLO

INRIA

FRANCE

C. CATLETT

Argonne National Laboratory and University of Chicago

U.S.A.

J. Dongarra

Univ. of Tennessee

U.S.A.

I. Foster

Argonne National Laboratory and University of Chicago

U.S.A.

G. FOX

Community Grid Computing Laboratory and Indiana University

U.S.A.

W. GENTZSCH

DEISA and Duke University - U.S.A.

GERMANY

L. Grandinetti

University of Calabria

Italy

C. JESSHOPE

University of Amsterdam

NETHERLANDS

H. JIN

Huazhong University of Science and Technology

CHINA

G. Joubert

Technical University of Clausthal

Germany

C. KESSELMAN

University of Southern California

U.S.A.

J. Kowalik

Gdansk University - POLAND

U.S.A.

M. LIVNY

University of Wisconsin

U.S.A.

S. MATSUOKA

Tokyo Institute of Technology

JAPAN

D. REED

Microsoft Research

U.S.A.

S. SEKIGUCHI

National Institute of Advanced Industrial Science and Technology        

JAPAN

H. SIMON

Lawrence Berkeley National Laboratory

U.S.A.

P. SLOOT

University of Amsterdam

Netherlands

D. TALIA

University of Calabria

Italy

M. VALERO

Universidad Politecnica de Catalunya

Spain

 

 

 

Organizing Committee

 

JACK DONGARRA

PASQUALE LEGATO

LUCIO GRANDINETTI

MEHIDDIN AL-BAALI

MARIA CARMEN INCUTTI

 

 

 

 

Sponsors

 

HEWLETT PACKARD

 

 

IBM

 

 

MICROSOFT

 

 

NEC

 

 

SUN

 

 

INTEL

 

 

 

 

 

 

Altair Engineering

 

 

 

 

ENEA     Italian National Agency for New Technologies, Energy and the Environment

 

 

 

 

CINECA

 

 

SPACI     Southern Partnership for Advanced Computational Infrastructures

 

 

 

 

DataDirect Networks

 

 

 

 

ClusterVision

 

 

 

 

FZJ   Juelich Supercomputing Center

 

 

 

 

Nice

 

 

 

 

SiCortex

Harvard Medical School

 

 

 

 

IEEE Computer Society

 

 

 

 

TABOR COMMUNICATIONS HPCWire, GridToday

 

 

 

 

 

UNIVERSITY OF CALABRIA, D.E.I.S. ITALY

 

 

 

 

Speakers

 

 

David Abramson

Clayton School of Information Technology

Monash University

Clayton, Vic

AUSTRALIA

 

Avner Algom

The Israeli Association of Grid Technologies

ISRAEL

 

Mehiddin Al-Baali

Dept. of Mathematics and Statistics

Sultan Qaboos University

Muscat

OMAN

 

Giovanni Aloisio

University of Salento

Lecce

ITALY

 

Marcos Athanasoulis

Harvard Medical School

Harvard University

U.S.A.

 

Frank Baetke

Global HPC Technology

Hewlett Packard

Richardson, TX

U.S.A.

 

Toine Beckers

DataDirect Networks Inc.

Netherlands

 

Pete Beckman

Maths & Computer Science Division

Argonne National Laboratory

Argonne, IL

U.S.A.

 

Patrizia Beraldi

Dept. of Electronics, Informatics and Systems

University of Calabria

Rende, Cosenza

ITALY

 

John R. Boisseau

Texas Advanced Computing Center

The University of Texas at Austin

Austin, Texas

U.S.A.

 

Marian Bubak

Institute of Computer Science and

Academic Computer Centre CYFRONET

AGH University of Science and Technology

Krakow

POLAND

 

Franck Cappello

Laboratoire de Recherche en Informatique

INRIA

Orsay

FRANCE

 

Umit Catalyurek

Department of Biomedical Informatics

The Ohio State University

Columbus, Ohio

U.S.A.

 

Charlie Catlett

Maths and Computer Science Division

Argonne National Laboratory

Argonne, IL

and

University of Chicago

Chicago, IL

U.S.A.

 

Kihyeon Cho

e-Science Division

KISTI - Korea Institute of Science and Technology Information

Daejon

KOREA

 

Antonio Congiusta

NICE

Cortanze, Asti

ITALY

 

Tim David

Centre for Bioengineering

University of Canterbury

Christchurch

NEW ZEALAND

 

Martijn De Vries

ClusterVision BV

Amsterdam

NETHERLANDS

 

Jack Dongarra

Innovative Computing Laboratory

Computer Science Dept.

University of Tennessee

Knoxville, TN

U.S.A.

 

Giovanni Erbacci

System and Technology Department

CINECA - Inter-University Consortium

Casalecchio di Reno

ITALY

 

Sandro Fiore

University of Salento

Lecce

ITALY

 

Ian Foster

Math & Computer Science Div.

Argonne National Laboratory

Argonne, IL

and

Dept. of Computer Science

The University of Chicago

Chicago, IL

U.S.A.

 

Geoffrey Fox

Community Grid Computing Laboratory

Indiana University

Bloomington, IN

U.S.A.

 

Alan Gara

Blue Gene Supercomputers

IBM

Watson Research Center

U.S.A.

 

Wolfgang Gentzsch

DEISA Distributed European Infrastructure for Supercomputing Applications

and

Duke University

Durham, North Carolina

U.S.A.

 

Stephan Gillich

Intel - HPC EMEA

GERMANY

 

Lucio Grandinetti

Dept. of Electronics, Informatics and Systems

University of Calabria

Rende, Cosenza

ITALY

 

Atul Gurtu

Tata Institute of Fundamental Research

Mumbai

INDIA

 

Rick Hetherington

Microelectronics

Sun Microsystems, Inc

U.S.A.

 

André Höing

Electrical Engineering and Computing Science

Technical University of Berlin

Berlin

GERMANY

 

Weiwu Hu

Institute of Computing Technology

Chinese Academy of Sciences

Beijing

CHINA

 

Chris Jesshope

Informatics Institute

Faculty of Science

University of Amsterdam

Amsterdam

NETHERLANDS

 

William Johnston

Computational Research Division

Lawrence Berkeley National Laboratory

Berkeley, CA

U.S.A.

 

Carl Kesselman

Information Sciences Institute

University of Southern California

Marina del Ray, Los Angeles, CA

U.S.A.

 

Thomas Lippert

John von Neumann-Institute

for Computing (NIC)

FZ Jülich

GERMANY

 

Miron Livny

Computer Sciences Department
University of Wisconsin

Madison, WI

U.S.A.

 

Ignacio Llorente

Distributed Systems Architecture Group

Universidad Complutense de Madrid

Madrid

SPAIN

 

Fabrizio Magugliani

SiCortex EMEA

Maynard, MA

U.S.A.

 

Satoshi Matsuoka

Department of Mathematical and Computing Sciences

Tokyo Institute of Technology

Tokyo

JAPAN

 

Mirco Mazzucato

INFN - Istituto Nazionale di Fisica Nucleare

University of Padova

ITALY

 

Paul Messina

formerly

Caltech

and

Argonne National Lab.

U.S.A.

 

Barton Miller

Computer Sciences Dept.

University of Wisconsin

Madison, Wisconsin

U.S.A.

 

Per Öster

CSC – Finnish IT Center for Science

Espoo

FINLAND

 

Marcelo Pasin

École Normale Supérieure de Lyon

Laboratoire de l’informatique du parallélisme

Lyon

FRANCE

 

Robert Pennington

National Center for Supercomputing Applications

University of Illinois at Urbana - Champaign

Urbana, IL

U.S.A.

 

Daniel Reed

Microsoft Research

Redmond, WA

formerly

University of North Carolina at Chapel Hill

and

Renaissance Computing Institute

University of North Carolina

Chapel Hill, NC

U.S.A.

 

Yves Robert

Ecole Normale Supérieure de Lyon

FRANCE

 

Anatoly Sachenko

American-Ukrainian School of Computer Science

Department of Information Computing Systems and Control

Ternopil State Economic University

Ternopil

UKRAINE

 

Rizos Sakellariou

University of Manchester

Manchester

UNITED KINGDOM

 

Takayuki Sasakura

NEC HPCE

GERMANY

 

Alex Shafarenko

Department of Computer Science

University of Hertfordshire

Hatfield

UNITED KINGDOM

 

Mark Silberstein

Technion-Israel Institute of Technology

Haifa

ISRAEL

 

Derek Simmel

Pittsburgh Supercomputing Center

Pittsburgh, PA

U.S.A.

 

Peter Sloot

Faculty of Science

University of Amsterdam

Amsterdam

NETHERLANDS

 

Achim Streit

Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich

GERMANY

 

Domenico Talia

Dept. of Electronics, Informatics and Systems

University of Calabria

Rende, Cosenza

ITALY

 

Abderezak Touzene

Computer Science Department

Sultan Qaboos University

AL-Khod

OMAN

 

Anne Trefethen

Oxford e-Research Center

Oxford University

Oxford

U.K.

 

Paolo Trunfio

Dept. of Electronics, Informatics and Systems

University of Calabria

Rende, Cosenza

ITALY

 

Jeffrey Vetter

Computer Science and Maths Division

Oak Ridge National Laboratory

Oak Ridge, TN

and

Georgia Institute of Technology

Atlanta, GA

U.S.A.

 

 

 

 

Workshop Agenda

 

 

MONDAY, June 30

 

Session

Time

Speaker/Activity

 

9.00 – 9.10

Welcome Address

Session I

 

State of the Art and Future Scenarios of HPC and Grid

 

9.10 – 9.45

J. Dongarra

“Scheduling for Numerical Linear Algebra Library at Scale”

 

9.45 – 10.15

I. Foster

“Towards an Open Analytics Environment”

 

10.15 – 10.45

D. Reed

“Clouds and ManyCore: The Revolution”          

 

10.45 – 11.15

A. Gara

“Present and future challenges as we architect for the Exascale”

 

11.15 – 11.45

COFFEE BREAK

 

11.45 – 12.15

A. Trefethen

“Effective computing on heterogeneous platforms”

 

12.15 – 12.45

W. Johnston

“The Evolution of Research and Education Networks and their Essential Role in Modern Science”

 

12.45 – 13.00

Concluding Remarks

Session II

 

Emerging Computer Systems and Solutions

 

17.00 – 17.25

F. Baetke

“Grids, Clouds and HPC: Opportunities and Challenges”

 

17.25 – 17.50

S. GILLICH

“Intel - Delivering Leadership HPC Technology Today and Tomorrow”

 

17.50 – 18.15

T. SASAKURA

“NEC’s HPC Strategy - Consistency and Innovation”

 

18.15 – 18.45

COFFEE BREAK

 

18.45 – 19.10

T. BECKERS

“High Performance Storage Solutions from DataDirect Networks”

 

19.10 – 19.35

M. DE VRIES

“Next-Generation Cluster Management with ClusterVisionOS”

 

19.35 – 20.00

F. MAGUGLIANI

“Green Scalable High Performance Supercomputing”

 

20.00 – 20.10

Concluding Remarks

 

 

TUESDAY, July 1

 

Session

Time

Speaker/Activity

Session III

 

Advances in HPC Technology and Systems 1

 

9.00 – 9.25

W. HU

“The Godson-3 multi-core CPU and its application in High Performance Computers”

 

9.25 – 9.50

R. Hetherington

“Aggressively Threaded Systems: A Wise Choice for HPC”

 

9.50 – 10.15

C. Jesshope

“Managing resources dynamically in SVP - from many-core to Grid”

 

10.15 – 10.40

 A. Shafarenko

“Nondeterministic Coordination using S-Net”

 

10.40 – 11.05

F. CAPPELLO

“Fault Tolerance for PetaScale Systems: Current Knowledge, Challenges and Opportunities”

 

11.05 – 11.35

COFFEE BREAK

 

11.35 – 12.00

P. Beckman

“The Path to Exascale Computing”

 

12.00 – 12.25

S. Matsuoka

“Ultra Low Power HPC --- scaling supercomputing by three orders of magnitude”

 

12.25 – 12.50

J. Vetter

“HPC Interconnection Networks – The Key to Exascale Computing”

 

12.50 – 13.00

Concluding Remarks

Session IV

 

Advances in HPC Technology and Systems 2

 

16.30 – 17.00

J. Boisseau

“Deployment Experiences, Performance Observations, and Early Science Results on Ranger”

 

17.00 – 17.25

R. PENNINGTON

“NCSA Blue Waters: Preparing for the Sustained Petascale System”

 

17.25 – 17.50

T. LIPPERT

“The Impact of Petacomputing on Models and Theories”

 

17.50 – 18.15

B. MILLER

“Scalable Middleware for Large Scale Systems”

 

18.15 – 18.45

COFFEE BREAK

 

18.45 – 20.00

PANEL DISCUSSION 1: “EXASCALE COMPUTING”

Chairman: P. Messina

Co-organizers: P. Beckman, P. Messina

Panelists: P. Beckman, A. Gara, D. Reed, S. Matsuoka, J. Vetter

 

 

WEDNESDAY, July 2

 

Session

Time

Speaker/Activity

Session V

 

Grid Technology and Systems 1

 

9.00 – 9.25

 M. Livny

“Old problems never die – managing the multi-programming mix”

 

9.25 – 9.50

D. Abramson

“Active Data: Blurring the distinction between data and computation”

 

9.50 – 10.15

D. Talia

“Using Peer-to-Peer Dynamic Querying in Grid Information Services”

 

10.15 – 10.40

Y. ROBERT

“Algorithms and scheduling techniques for clusters and grids”

 

10.40 – 11.05

R. SAKELLARIOU

“Feedback control for efficient autonomic solutions on the Grid”

 

11.05 – 11.35

COFFEE BREAK

 

11.35 – 12.00

C. Catlett

“Accidentally Using Grid Services”

 

12.00 – 12.25

A. Algom

“From Grid Computing to Cloud Computing - The evolution of the Grid Marketplace”

 

12.25 – 12.50

I. Llorente

“Cloud Computing for on-Demand Resource Provisioning”

 

12.50 – 13.00

Concluding Remarks

Session VI

 

Grid Technology and Systems 2

 

17.00 – 17.25

M. PASIN

“Network resource reservation and virtualization for grid applications”

 

17.25 – 17.50

A. TOUZENE

“A Performance Based Distribution Algorithm for Grid Computing Heterogeneous Tasks”

 

17.50 – 18.15

C. KESSELMAN

“Applications of Grid Technology to Health Care Systems”

 

18.15 – 18.45

COFFEE BREAK

 

18.45 – 20.30

Panel discussion 2: “FROM GRIDS TO CLOUD SERVICES”

Organizer:  C. CATLETT

Panelists: Avner Algom, Pete Beckman, Charlie Catlett, Ignacio Llorente, Satoshi Matsuoka

 

 

THURSDAY, July 3

 

Session

Time

Speaker/Activity

Session VII

 

Infrastructures, Instruments, Products, Solutions for High Performance Computing and Grids

 

9.00 – 9.25

G. FOX

“Parallel Data Mining from Multicore to Cloudy Grids”

 

9.25 – 9.50

A. HÖING

“Summary-based Distributed Semantic Database for Resource and Service Discovery”

 

9.50 – 10.15

A. STREIT

“UNICORE 6 – A European Grid Technology”

 

10.15 – 10.40

W. GENTZSCH

“e-Science Applications on Grids - The DEISA Success Story”

 

10.40 – 11.05

M. SILBERSTEIN

“Superlink-online - delivering the power of GPUs, clusters and opportunistic grids to geneticists”

 

11.05 – 11.35

COFFEE BREAK

 

11.35 – 12.00

M. BUBAK

“Building collaborative applications for system-level science”

 

12.00 – 12.25

D. SIMMEL

“DMOVER: Scheduled Data Transfer for HPC Grid Workflows”

 

12.25 – 12.50

A. CONGIUSTA

“Grid Computing or the Internet of services? Opportunities and perspectives from research to business”

 

12.50 – 13.00

Concluding Remarks

Session VIII

 

National and International Grid Infrastructures and Projects

 

17.00 – 17.25

D. ABRAMSON

“e-Research & Grid computing in Australia: From Infrastructure to Research”

 

17.25 – 17.50

K. CHO

“Grid and e-Science in Korea”

 

17.50 – 18.15

A. GURTU

“Grid Activity in India”

 

18.15 – 18.45

COFFEE BREAK

 

18.45 – 19.10

A. SACHENKO

“National Grid Initiative of Ukraine”

 

19.10 – 19.35

P. ÖSTER

“European Grid Initiative”

 

19.35 – 20.00

M. MAZZUCATO

“Italian Grid Infrastructure”

 

20.00 – 20.10

Concluding Remarks

 

 

FRIDAY, July 4

 

Session

Time

Speaker/Activity

Session IX

 

Challenging Applications of HPC and Grids

 

9.00 – 9.25

 M. ATHANASOULIS

“Building Shared High Performance Computing Infrastructure for the Biomedical Sciences”

 

9.25 – 9.50

P. SLOOT

“ViroLab: Distributed Decision Support in a virtual laboratory for infectious diseases”

 

9.50 – 10.15

U. CATALYUREK

“Processing of Large-Scale Biomedical Images on a Cluster of Multi-Core CPUs and GPUs”

 

10.15 – 10.40

T. DAVID

“A Heterogeneous Computing Model for a Grand Challenge Problem”

 

10.40 – 11.05

L. GRANDINETTI – P. BERALDI

“Grid Computing for Financial Applications”

 

11.05 – 11.35

COFFEE BREAK

 

11.35 – 12.00

G. ALOISIO – S. FIORE

“Challenging Applications of HPC and Grid”

 

12.00 – 12.25

G. ERBACCI

“An advanced HPC infrastructure in Italy for challenging scientific applications”

 

12.25 – 12.50

K. CHO

“The e-Science for High Energy Physics”

 

12.50 – 13.00

Concluding Remarks

 

 

ABSTRACTS

 

Scheduling for Numerical Linear Algebra Library at Scale

 

Jack Dongarra

Innovative Computing Laboratory

Computer Science Dept.

University of Tennessee

Knoxville, TN, U.S.A.

 

In this talk we will look at some of the issues numerical library developers are facing when using manycore systems with millions of threads of execution.

 

Back to Session I

Clouds and ManyCore: The Revolution

 

Daniel A. Reed

Microsoft Research

Redmond, WA, U.S.A.

 

As Yogi Berra famously noted, “It’s hard to make predictions, especially about the future.” Without doubt, though, scientific discovery, business practice and social interactions are moving rapidly from a world of homogeneous and local systems to a world of distributed software, virtual organizations and cloud computing infrastructure.  In science, a tsunami of new experimental and computational data and a suite of increasingly ubiquitous sensors pose vexing problems in data analysis, transport, visualization and collaboration. In society and business, software as a service and cloud computing are empowering distributed groups.

 

Let’s step back and think about the longer term future. Where is the technology going and what are the research implications?  What architectures are appropriate for 100-way or 1000-way multicore designs?  How do we build scalable infrastructure? How do we develop and support software?  What is the ecosystem of components in which they will operate? How do we optimize performance, power and reliability?  Do we have ideas and vision or are we constrained by ecosystem economics and research funding parsimony? 

 

Biographical Sketch

Daniel A. Reed is Microsoft’s Scalable and Multicore Computing Strategist, responsible for re-envisioning the data center of the future.  Previously, he was the Chancellor’s Eminent Professor at UNC Chapel Hill, as well as the Director of the Renaissance Computing Institute (RENCI) and the Chancellor’s Senior Advisor for Strategy and Innovation for UNC Chapel Hill.  Dr. Reed is a member of President Bush’s Council of Advisors on Science and Technology (PCAST) and a former member of the President’s Information Technology Advisory Committee (PITAC).  He recently chaired a review of the federal networking and IT research portfolio, and he is chair of the board of directors of the Computing Research Association.

 

He was previously Head of the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC). He has also been Director of the National Center for Supercomputing Applications (NCSA) at UIUC, where he also led the National Computational Science Alliance. He was also one of the principal investigators and chief architect for the NSF TeraGrid. He received his PhD in computer science in 1983 from Purdue University.

Back to Session I

Present and future challenges as we architect for the Exascale

 

Alan Gara

Blue Gene Supercomputers

IBM, Watson Research Center

U.S.A.

 

In this presentation current trends toward achieving Petascale computing are examined. These current trends will be contrasted with what is needed to reach the Exascale. Possible directions and critical enabling technologies will be discussed.

 

Back to Session I

Effective computing on heterogeneous platforms

 

Anne Trefethen

Oxford University

Oxford, U.K.

 

We have entered an era where at every scale of computing - desktop, high-performance and distributed - we need to deal with heterogeneity.  Systems are made up of multicore chips and accelerators in an assortment of hardware architectures and software environments.  This has created a complexity for scientific application developers and algorithm developers alike.  Our focus is on effective algorithms and environments across these scales to support efficient scientific application development.

 

Back to Session I

The Evolution of Research and Education Networks
and their Essential Role in Modern Science

 

William E. Johnston

Senior Scientist and Energy Sciences Network (ESnet) Department Head

Lawrence Berkeley National Laboratory

Berkeley, CA, U.S.A.

 

In the past 15 years there has been a remarkable increase in the volume of data that must be analyzed in world-wide collaborations in order to accomplish the most advanced science and a corresponding increase in network bandwidth, deployment, and capabilities to meet these needs. Further, these changes have touched all aspects of science including, in addition to data analysis, remote conduct of experiments and multi-component distributed computational simulation.

 

Terabytes of data from unique and very expensive instruments must be collaboratively analyzed by the many science groups involved in the experiments. The highly complex, long-running simulations needed to accurately represent macro-scale phenomenon such as the climate, stellar formation, in-vivo cellular functioning in complex organisms, etc., all involve building applications that incorporate and use components that are located at the home institutions of many different scientific groups.

 

The volume of traffic in research and education networks has increased exponentially since about 1990. Virtually all of this increase – demonstrably so in the past five years – is due to increased use of the network for moving vast quantities of data among scientific instruments and widely distributed analysis systems, and among supercomputers and remote analysis centers. Further, this data movement is no longer optional for science: Increasingly large-scale science is dependent on network-based data movement in order for the science to be successful.

 

Modern science approaches require that networks provide not only high bandwidth, but also advanced services. Scheduled and on-demand bandwidth enables the connection and simultaneous operation of instruments, local compute clusters, supercomputers, and large storage systems. Low latency, high bandwidth, secure circuits interconnect components of simulations running on systems scattered around the country and internationally. Comprehensive, global monitoring and reporting allows distributed workflow systems to know exactly how end-to-end paths that transit many different networks are performing. At the same time, the network must provide a level of reliability commensurate with the billion-dollar instrument systems, scarce supercomputers, and hundreds of collaborating scientific groups that are typical of large-scale science.

 

In this talk I will look at how network architectures, technologies, and services have evolved over the past 15 years to meet the needs of science that now uses sophisticated distributed systems as an integral part of the process of doing science. One result of this is that the R&E community has some unique communications requirements and some of the most capable networks in the world to satisfy those requirements. I will also look at the projected requirements for science over the next 5 to 10 years and how the R&E networks must further expand and evolve to meet these future requirements.

Back to Session I

Grids, Clouds and HPC: Opportunities and Challenges

 

Dr. Frank Baetke - Global HPC Technology Program Manager

Hewlett Packard

Richardson, TX, U.S.A.

 

New trends in the HPC area can be derived from increasing growth rates at the lower end of the market, specifically at the workgroup and departmental level, and from concepts based on the original promises of computational grids. Those trends, combined with the ever increasing demand for higher component densities and higher energy efficiency, generate additional challenges; examples of new products that specifically address those issues will be shown.

 

Back to Session II

Intel - Delivering Leadership HPC Technology Today and Tomorrow

 

Stephan Gillich

Director HPC EMEA

Enterprise Marketing EMEA

Intel GmbH, GERMANY

 

We are excited about the opportunity that lies in front of us as our manufacturing processes move from 45nm to 32nm to 22nm and on to 16nm. This progress will, for example, enable us to pack more cores into each processor. But performance does not come from adding cores alone: it comes from creating the right infrastructure to keep all those cores busy, combining the energy efficiency and performance gains in hardware with advances in software and programming techniques, and from providing the software tools that enable organizations to create sustainable, high-performing applications that scale in performance. This talk will cover Intel's HPC technology, fueled by our manufacturing leadership and driven by our research, providing a robust and powerful infrastructure. It will conclude with an outlook on Intel's Tera-scale research program.

Back to Session II

High Performance Storage Solutions from DataDirect Networks 

 

Toine Beckers

DataDirect Networks Inc., Netherlands

 

As the need for High Performance Computing clusters (from GFlops to TFlops and even PFlops systems) grows in many application fields, the need for more and more data storage capacity increases as well. This often leads to complex, difficult-to-manage storage solutions. The Silicon Storage Appliance products from DataDirect Networks provide an easy-to-manage, scalable and high-performance solution which is becoming widely accepted in the High Performance Computing community.

 

Back to Session II

Next-Generation Cluster Management with ClusterVisionOS

 

Martijn De Vries

ClusterVision BV

Amsterdam, Netherlands

 

Setting up and managing a large cluster can be a challenging task without
the right tools at hand. With the upcoming release of ClusterVisionOS 4,
ClusterVision is introducing the next generation of its widely used
cluster operating system. The new release of ClusterVisionOS features a
brand new SOAP-based cluster management infrastructure that allows
administrators to monitor and control all aspects of their clusters
through graphical and command-line based interfaces.

In this presentation, various aspects of the ClusterVisionOS cluster
management infrastructure and associated tools will be described.

 

Back to Session II

Green Scalable High Performance Supercomputing

 

Fabrizio Magugliani

EMEA Business Development Director, SiCortex

Maynard, MA, U.S.A.

 

As CPU speeds have reached a point where simply increasing the clock frequency is no longer an option, the world has turned to multi-core CPUs in an attempt to continue to get more computational cycles out of essentially the same system architecture. This has led to an increasing imbalance between brute computational capability and other aspects of the system.

By coupling an energy-efficient CPU core design with a very fast and reliable interconnect, all on a single piece of silicon, we have created a low-power/low-heat system that balances computational capability with high-performance I/O. This system provably scales to thousands of processors in about one cubic meter of space at 18 KW.

 

Back to Session II

The Godson-3 multi-core CPU and its application in High Performance Computers

 

Weiwu Hu, Xiang Gao, Yunji Chen

Institute of Computing Technology, Chinese Academy of Sciences

Beijing, China

 

Godson-3 is a multi-core processor based on the 64-bit superscalar Godson-2 CPU core. It adopts a scalable CMP architecture in which processor cores and globally addressed L2 cache modules are connected in a distributed way, and coherence of multiple L1 copies of the same L2 block is maintained with a directory-based cache coherence protocol.

The Godson-2 CPU core is a four-issue, out-of-order execution CPU which runs the MIPS64 instruction set. The latest Godson-2F, which integrates the Godson-2 CPU core, 512KB of L2 cache, a 333MHz DDR2 controller and a PCI/PCIX controller, achieves 1GHz in 90nm STMicro CMOS technology, and has been volume-produced and shipped to the market for low-cost PCs, low-end servers and many embedded applications. The CPU core of Godson-3 is enhanced from the Godson-2 CPU core to support efficient X86-to-MIPS binary translation, and to optimize performance, power consumption, reliability and debug methods.

Godson-3 adopts a two-dimensional mesh topology. Each node in the mesh includes an 8*8 crossbar which connects four processor cores, four shared L2-cache banks and the four adjacent nodes to the East, South, West and North. A 2*2 mesh network can connect a 16-core processor, and a 4*4 mesh network can connect a 64-core processor. The distributed on-chip L2 cache modules are globally addressed. Each L1 cache block has a fixed L2 cache home node in which the cache directory is maintained by a directory-based cache coherence protocol. Each node has one (or more) DDR2 memory controllers. IO controllers are connected through free crossbar ports of boundary nodes.
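
The mesh arithmetic above can be sanity-checked with a minimal Python sketch (the per-node composition is taken from this abstract; everything else is illustrative):

    # Core counts for a Godson-3 style 2D mesh: each node carries
    # 4 processor cores (plus 4 shared L2 banks and 4 neighbor links).
    CORES_PER_NODE = 4

    def mesh_cores(rows: int, cols: int) -> int:
        """Total processor cores in a rows x cols mesh of nodes."""
        return rows * cols * CORES_PER_NODE

    assert mesh_cores(2, 2) == 16   # 2*2 mesh -> 16-core processor
    assert mesh_cores(4, 4) == 64   # 4*4 mesh -> 64-core processor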

Based on the Godson-3 architecture, several product chips are defined and will be physically implemented. The 4-core Godson-3 chip is designed and fabricated in 65nm STMicro CMOS technology. It includes one 4-core node, 4MB L2 cache, two DDR2/3 ports, two HT1.0 ports, two PCIE ports, one PCI port and one LPC port. It will be taped out in the first half of 2008.

One important application of Godson-3 is low-cost high performance computers (HPC). Based on Godson-3, the designs of a national PetaFLOPS HPC system and a personal TeraFLOPS HPC system are planned. This presentation will introduce the HPC plans based on the Godson-3 multi-core processor.

 

Back to Session III

Aggressively Threaded Systems: A Wise Choice for HPC

 

Rick Hetherington

Vice President, CTO

Microelectronics

Sun Microsystems, Inc.

U.S.A.

 

Niagara technology, in its infancy, targeted the commercial computing market.

These throughput workloads were not very computationally intensive but demanded memory subsystems that provided high bandwidth and high capacity.

The second and third generations of Niagara added greatly increased computation capability to the processing cores while continuing to focus on high throughput.

The result is a set of products that efficiently deliver high levels of computational throughput.

This talk will discuss the UltraSparc T2 and T2+ processor designs as well as an analysis of their behavior while executing 'technical' workloads.

 

Back to Session III

Managing resources dynamically in SVP – from many-core to Grid

 

Chris Jesshope

Professor of Computer Systems Architecture

University of Amsterdam

Amsterdam, Netherlands

 

Our computer systems are becoming pervasive and ubiquitous. It is now possible for everyday consumer products to have networks comprising thousands of nodes, and Grids will multiply this by another large factor. These systems may comprise a huge number of heterogeneous computing elements and will certainly evolve. How can such complex systems be programmed when we may not know which nodes are available at any given time? Taking advantage of the huge computing power offered by this collaboration of elements will require the dynamic management of concurrency, and this is a significant challenge. To solve these issues, a disruptive approach is promoted in the AETHER European project, which embeds self-adaptivity at each level of the system, giving autonomy to the components and enabling the application designer to concentrate on the application instead of having to cope with all possible events in the lifetime of a computing resource in such a rapidly evolving environment. For this purpose, we have introduced the SANE concept (Self-Adaptive Networked Entity) and a programming model based on a SANE Virtual Processor (SVP). This provides a dynamic model of concurrency based on dynamically binding resources to families of blocking threads in a hierarchical manner. This approach presents a new dynamic architecture and protocols to enable the sharing of resources and the consequent management of concurrency.

This presentation will describe the resource management protocols that enable delegation of work. SANEs are autonomous and from time to time may be given jobs to execute; a local user may submit a job, or one may be delegated from the environment. In the latter case, the SANE will have contracted with an external thread to run that job and to meet certain expectations in its execution, for example performance. The contract is negotiated using a credit exchange, where the cost of executing a job is initially assumed to be the energy expended by the contracted SANE but can be modified by market forces. Thus the contracting thread, which may be acting on behalf of another SANE, transfers credit for the agreed amount of energy to execute the work on the contracted SANE. In response, the contracted SANE agrees to meet the deadlines or performance constraints imposed by the contracting SANE.
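
The credit-exchange handshake can be pictured with a minimal Python sketch (all class names, the energy-cost model and the numbers are illustrative assumptions, not the actual SVP/AETHER protocol):

    # Toy model of credit-based work delegation between SANEs: credit
    # covering the estimated energy cost is transferred, and the
    # contracted SANE commits to the job's constraints.
    from dataclasses import dataclass

    @dataclass
    class Job:
        work_units: float
        deadline_s: float

    class Sane:
        def __init__(self, name: str, joules_per_unit: float, credit: float = 0.0):
            self.name = name
            self.joules_per_unit = joules_per_unit
            self.credit = credit

        def quote(self, job: Job) -> float:
            # Cost is initially the energy expended; market forces
            # could scale this figure.
            return job.work_units * self.joules_per_unit

        def accept(self, job: Job, payment: float) -> bool:
            if payment < self.quote(job):
                return False          # insufficient credit offered
            self.credit += payment    # contract formed; deadline binds
            return True

    def delegate(client: "Sane", server: "Sane", job: Job) -> bool:
        price = server.quote(job)
        if client.credit < price:
            return False
        client.credit -= price
        return server.accept(job, price)

    client = Sane("A", joules_per_unit=1.0, credit=100.0)
    server = Sane("B", joules_per_unit=0.5)
    assert delegate(client, server, Job(work_units=40, deadline_s=60.0))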

Back to Session III

Nondeterministic Coordination using S-Net

 

Prof Alex Shafarenko

Department of Computer Science

University of Hertfordshire

Hatfield, AL10 9AB, UK

 

Coordination languages have been used for many years in order to separate computation and concurrency/communication, that is coordination, concerns. Despite that, a typical coordination language intrudes into the computational part of the code even though it provides some abstract projection of those distributed computing realities. As a result, units of an application program become barely readable in isolation, without having the "big picture" in mind --- and that big picture in turn is overburdened with interface details. 

We believe that the reason why coordination has these problems is that true separation between computation and concurrency concerns is only possible using a nondeterministic glue. Indeed, deterministic coordination abstracts application code as a state-transition system, introducing synchronization over and above the minimum needed for correct functioning of the application code. Nondeterministic coordination, which we describe in this paper, leans towards loose, data-flow-style composition using asynchronous computational structures --- and synchronisers where necessary to ensure that the correct data sets are worked on by fully encapsulated application code units.

The paper will present a coordination language S-Net, developed and implemented by the authors.

The language is very compact, only using 4 combinators acting on user-defined boxes to create hierarchical networks of asynchronously communicating components. The boxes are written in a conventional language and use a conventional stream interface for output, while the input comes as a standard parameter list.

We expect ordinary engineers to be able to provide these components. There is only one special box which the user cannot create and which comes with the S-Net language: the synchrocell. The significant expressive power of coordination in such a small language is achieved by using a sophisticated type system with subtyping, which influences the network "wiring" provided by the combinators. The coordination program is thus a large algebraic formula using the combinators, or several such formulae, and it is written by a concurrency engineer who needs no detailed knowledge of the application domain.

Concurrency and self-adaptivity in S-Net are helped by the fact that user-defined boxes are assumed to be without persistent state, i.e. after the output stream has been flushed and the box terminates, all local state is destroyed, so that the next invocation of the box can take place at a different location in the distributed system. Synchrocells retain their state between invocations, but they do not perform computations and consequently consume no computing power.

In conclusion, we will briefly dwell on the recent success in applying S-Net to a signal processing problem in radar systems industry at Thales Research & Technology, France.

Back to Session III

Fault Tolerance for PetaScale Systems: Current Knowledge, Challenges and Opportunities

 

Franck Cappello

INRIA

France

 

The emergence of PetaScale systems reinvigorates the community interest about how to manage failures in such systems and ensure that large applications successfully complete. Existing results for several key mechanisms associated with fault tolerance in HPC platforms will be presented during this talk.

Most of these key mechanisms come from the distributed system theory.

Over the last decade, they have received a lot of attention from the community and there is probably little to gain by trying to optimize them again. We will describe some of the latest findings in this domain.

Unfortunately, despite their high degree of optimization, existing approaches do not fit well with the challenging evolutions of large scale systems. There is room and even a need for new approaches. Opportunities may come from different origins like adding hardware dedicated to fault tolerance or relaxing some of the constraints inherited from the pure distributed system theory. We will sketch some of these opportunities and their associated limitations.

 

Back to Session III

Ultra Low Power HPC --- scaling supercomputing by three orders of magnitude

 

Satoshi Matsuoka

Tokyo Institute of Technology

Tokyo, Japan

 

Low power supercomputing, as represented by power-efficient architectures such as the IBM BlueGene and by power-aware methods, is starting to receive considerable attention in the light of the global agenda to reduce energy consumption and to alleviate increasing heat density problems. Our new project, Ultra Low-Power HPC, greatly extends this horizon by taking innovative approaches to fundamentally slash the energy consumption of supercomputing by up to three orders of magnitude in 10 years. This is achieved by the comprehensive use of new energy-efficient hardware devices and power-saving algorithms that are modeled and optimized in a systemwide fashion. Early results from the project show 10-100 times better energy efficiency, mostly through the use of acceleration and new memory device technologies.

Back to Session III

HPC Interconnection Networks – The Key to Exascale Computing

 

Jeffrey Vetter

Oak Ridge National Laboratory and Georgia Institute of Technology

 

Interconnection networks play a critical role in the design of next generation HPC architectures and the performance of important applications. Despite the significance of interconnects, current trends in HPC interconnects do not appear to fulfill the requirements for next generation multi-petaflop and exaflop systems. Application requirements drive networks with high bandwidth, low latency, and high message rate, while practical constraints, such as signaling, packaging, and cost, limit improvements in hardware bandwidth and latencies.  To address these challenges, Sandia and Oak Ridge National Laboratories have established the Institute for Advanced Architectures and Algorithms (IAA). In this talk, I will present some of the challenges and potential solutions for exa-scale interconnection networks, which are being considered by IAA.

 

Back to Session III

Deployment Experiences, Performance Observations, and Early Science Results on Ranger

 

John (Jay) R. Boisseau, Ph.D.

Director, Texas Advanced Computing Center

The University of Texas at Austin

 

The Texas Advanced Computing Center (TACC) at The University of
Texas at Austin has deployed the largest open-science supercomputing system
in the world. Ranger is a 1/2 petaflop Sun Constellation Cluster comprising
15,744 AMD Quad-core Opteron processors and a new Sun InfiniBand
interconnect, and uses a fully open-source software stack. The challenges of
deploying a Linux cluster of unprecedented scale with next-generation
components have been numerous, and the experiences gained in resolving them
should have great value in the HPC community. As the system is being
optimized, understanding of performance and scalability characteristics is
increasing, usage is growing, and some researchers are already computing at
32K processing cores with excellent scalability. This talk will summarize
the deployment experience, performance characteristics, and early science
results of Ranger and discuss some future challenges and implications.

 

Back to Session IV

NCSA Blue Waters: Preparing for the Sustained Petascale System

 

Robert Pennington, National Center for Supercomputing Applications

University of Illinois at Urbana - Champaign

Urbana, IL, U.S.A.

 

The NCSA Blue Waters system will be installed at the University of Illinois for production use in 2011. To prepare for this, there is significant work to be done on software, applications, and education/outreach, as well as a new building that will be constructed to house the system. The talk will be an overview of the plans and timeline for preparing for the system and a preliminary, high-level summary of the system capabilities.

Back to Session IV

The Impact of Petacomputing on Models and Theories

 

Thomas Lippert

John von Neumann-Institute

for Computing (NIC)

FZ Jülich, Germany

 

In 2008, supercomputers have reached the Petaflop/s performance level. Machines like the IBM Blue Gene/P, the Los Alamos Roadrunner or the Sun Ranger at TACC achieve their unprecedented power using O(100,000) cores. In my talk I will, on the one hand, discuss the question of whether we have arrived at the limits of scalability – I will present first scalability results from the Jülich Blue Gene/P system with 64k cores – and, on the other hand, argue how Petacomputers with hundreds of thousands of processors might transform science itself.

 

Back to Session IV

Scalable Middleware for Large Scale Systems

 

Barton P. Miller

Computer Sciences Department

University of Wisconsin

Madison, Wisconsin, U.S.A.

 

I will discuss the problem of developing tools for large scale parallel environments. We are especially interested in systems, both leadership class parallel computers and clusters that have 10,000's or even millions of processors. The infrastructure that we have developed to address this problem is called MRNet, the Multicast/Reduction Network. MRNet's approach to scale is to structure control and data flow in a tree-based overlay network (TBON) that allows for efficient request distribution and flexible data reductions.

 

The second part of this talk will present an overview of the MRNet design, architecture, and computational model and then discuss several of the applications of MRNet.  The applications include scalable automated performance analysis in Paradyn, a vision clustering application and, most recently, an effort to develop our first petascale tool, STAT, a scalable stack trace analyzer running currently on 100,000's of processors.

 

I will conclude with a brief description of a new fault tolerance design that leverages natural redundancies in the tree structure to provide recovery without checkpoints or message logging.
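
The TBON idea can be illustrated with a short sketch (generic Python, not MRNet's actual API): each internal node of the overlay combines only its children's partial results, so the root never touches the full data volume.

    # Tree-based overlay network (TBON) reduction: every parent sees
    # at most fan-out partial results, regardless of system size.
    from typing import Callable, List, Optional

    class TbonNode:
        def __init__(self, local_value, children: Optional[List["TbonNode"]] = None):
            self.local_value = local_value
            self.children = children or []

    def reduce_tree(node: TbonNode, combine: Callable):
        partial = node.local_value
        for child in node.children:
            partial = combine(partial, reduce_tree(child, combine))
        return partial

    # Example: maximum load across 4 leaves via 2 intermediate nodes.
    leaves = [TbonNode(v) for v in (0.3, 0.9, 0.1, 0.7)]
    root = TbonNode(0.0, [TbonNode(0.0, leaves[:2]), TbonNode(0.0, leaves[2:])])
    assert reduce_tree(root, max) == 0.9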

Back to Session IV

Old problems never die – managing the multi-programming mix

 

Miron Livny

Computer Sciences Department

University of Wisconsin – Madison, WI, U.S.A.

 

Old problems never die; they just fade away as technologies and tradeoffs change.  As the state of the art in hardware and applications evolves further, they resurface.  When virtual memory was introduced almost 50 years ago, computer systems had to find a way to prevent thrashing by controlling the number and properties of the applications allowed to share their physical memory. The recent proliferation of multi-core processors, usage of virtual machines and deployment of complex I/O sub-systems require the development of similar capabilities to control and manage at several scales the mix of applications that share the compute and storage resources of today’s systems.

 

Back to Session V

Active Data: Blurring the distinction between data and computation

 

Tim Ho and David Abramson

Monash University

Clayton, Vic, Australia

 

The amount of data being captured, generated, replicated and archived on the Grid is growing at an astonishing rate; some researchers predict a tenfold increase every five years from 2000 to 2015 alone. Managing this growth will be challenging, and will require significant effort in the areas of data management and curation.

While it may now be feasible to store everything forever, it is wasteful, and we should not assume that we can continue doing this for very long. In addition, much of the data requires active management, so that it can be discovered, understood and reused by scientists and researchers within and across disciplines. Clearly, there is a need to balance and justify the costs, benefits and anticipated needs of future generations when considering maintaining large, distributed archives indefinitely.

Fortunately, some datasets, such as those created by computations, are reproducible. These datasets are typically derived from other data through a process of computation, as opposed to those captured by instruments that are often non-reproducible. Derived datasets can be reproduced by re-running the computations that previously created them. Clearly, this requires sufficient information on how data is computed, and a set of mechanisms that can create it on demand.

This talk concerns problems that arise in the management and curation of derived data. Specifically, we have developed a data lifecycle that models the whole replication process, and, based on this lifecycle, designed and prototyped an Active Data System that provides a complete, automated solution to the above problems. In particular, our system allows users to recompute data rather than necessarily store it, and adds a layer that provides efficient access to remotely distributed replicated sources across different middleware stacks.
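
The store-versus-recompute decision at the core of this approach reduces to a cost comparison; the sketch below is illustrative only (the cost model and figures are assumptions, not the Active Data System's actual policy):

    # Keep a derived dataset on disk only if storing it for a year is
    # cheaper than regenerating it for every expected access.
    def keep_stored(size_gb: float, storage_cost_per_gb_year: float,
                    recompute_cpu_hours: float, cpu_hour_cost: float,
                    accesses_per_year: float) -> bool:
        storage = size_gb * storage_cost_per_gb_year
        recompute = recompute_cpu_hours * cpu_hour_cost * accesses_per_year
        return storage < recompute

    # A rarely read, cheaply derived dataset is better recomputed:
    print(keep_stored(500, 0.4, 2, 0.1, 1))   # -> False: recompute on demand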

 

Back to Session V

Using Peer-to-Peer Dynamic Querying in Grid Information Services

 

Domenico Talia and Paolo Trunfio

DEIS, University of Calabria

Rende, Italy

 

Dynamic querying (DQ) is a technique adopted in unstructured Peer-to-Peer (P2P) networks to minimize the number of nodes that must be visited to obtain the desired number of results. In this talk we describe the use of the DQ technique over a distributed hash table (DHT) to implement a scalable Grid information service. The DQ-DHT (dynamic querying over a distributed hash table) algorithm has been designed to perform DQ-like searches over DHT-based networks. The aim of DQ-DHT is two-fold: allowing arbitrary queries to be performed in structured P2P networks, and providing dynamic adaptation of search according to the popularity of resources to be located.

Through the use of the DQ-DHT technique it is possible to implement a scalable Grid information service supporting both structured search and the execution of arbitrary queries for locating Grid resources on the basis of complex criteria or semantic features.
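
The flavor of dynamic querying can be conveyed with a small sketch (the probing and broadening schedule here are illustrative assumptions, not the published DQ-DHT algorithm):

    # Dynamic querying: probe a few nodes first and widen the search
    # only while the desired number of results has not been reached.
    import random

    def dynamic_query(nodes, match, desired_results, batch=4):
        pending = list(nodes)
        random.shuffle(pending)
        results = []
        while pending and len(results) < desired_results:
            probe, pending = pending[:batch], pending[batch:]
            for node in probe:
                results.extend(r for r in node if match(r))
            batch *= 2   # widen only while results are scarce
        return results[:desired_results]

    # Popular resources are found after contacting only a few nodes:
    nodes = [[("cpu", i)] for i in range(1000)]
    hits = dynamic_query(nodes, lambda r: r[0] == "cpu", desired_results=5)
    assert len(hits) == 5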

Back to Session V

Algorithms and scheduling techniques for clusters and grids

 

Yves Robert

Ecole Normale Supérieure de Lyon, France

 

In this talk we provide several examples to
illustrate key algorithmic concepts required to efficiently
execute applications on clusters and grids.
The idea is to give a lively exposition of the necessity to
inject whatever static knowledge is available into the design
of typical applications, such as master-slave tasking,
numerical kernels, and job workflows. We claim that this is the
key to an efficient deployment of these applications onto
large-scale distributed computational platforms.
The talk will proceed through examples to explain how to cope
with resource selection, memory constraints, platform heterogeneity, etc.
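
One concrete instance of injecting static knowledge is the classic minimum-completion-time heuristic for mapping independent tasks onto machines of known speed; the sketch below is a generic illustration, not an algorithm taken from the talk:

    # Greedy minimum-completion-time mapping of independent tasks onto
    # heterogeneous processors with known (static) speeds.
    def mct_schedule(task_sizes, speeds):
        ready = [0.0] * len(speeds)            # when each processor frees up
        placement = []                         # processor per task, size order
        for size in sorted(task_sizes, reverse=True):
            finish = [ready[p] + size / speeds[p] for p in range(len(speeds))]
            best = min(range(len(speeds)), key=lambda p: finish[p])
            ready[best] = finish[best]
            placement.append(best)
        return placement, max(ready)           # mapping and makespan

    mapping, makespan = mct_schedule([8, 4, 4, 2, 1], speeds=[2.0, 1.0])
    print(mapping, makespan)                   # -> [0, 1, 0, 1, 0] 6.5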

 

Back to Session V

Feedback control for efficient autonomic solutions on the Grid

 

Rizos Sakellariou

University of Manchester

Manchester, United Kingdom

 

This talk will consider different approaches for
autonomic solutions on the Grid. The talk will argue for the
need to use autonomic techniques to improve performance in
volatile environments such as those typically associated with
Grids and large-scale distributed computing. Examples will be
presented drawn from our work in the context of a
UK and a
European research project.

Further information about the speaker:
http://www.cs.man.ac.uk/~rizos

 

Back to Session V

Accidentally Using Grid Services

 

Charlie Catlett

Maths and Computer Science Division

Argonne National Laboratory

Argonne, IL

and

University of Chicago

Chicago, IL, U.S.A.

 

Though the term "grid" has fallen from the front page headlines, there is an extremely active market of "grid services" - based on web services and other standards - emerging.  The web originally empowered Internet users to create services and products with very little infrastructure, and signs of success a decade ago included server meltdown from high demand.  Today one need not own any infrastructure at all to launch a new service or product, and the combination of virtual and web services offers not only near unlimited scaling but also reliability.  This talk will focus on a number of examples of new services, illustrating that at least one measure of success is not only "ease of use" but "accidental use" of transparent, but foundational, services.

 

Back to Session V

From Grid Computing to Cloud Computing

The evolution of the Grid Marketplace

 

Avner Algom

The Israeli Association of Grid Technologies

Israel

 

Over the last few years we have seen grid computing evolve from a niche technology associated with scientific and technical computing, into a business-innovating technology that is driving increased commercial adoption. Grid deployments accelerate application performance, improve productivity and collaboration, and optimize the resiliency of the IT infrastructure.

Today, the maturity of the Virtualization technologies, both at the VM and at the IT infrastructure levels, and the convergence of the Grid, Virtualization and SOA concepts, enables the business implementation of the Cloud Computing for utility and SaaS services.

At last, the Grid Computing vision becomes a reality: just as people get electricity from their electrical outlet on demand, they can now get applications, computing and storage services from the network on demand. We can dynamically scale our computation and storage power in almost no time, and we pay only for what we use.

This is going to change the marketplace as we know it.

 

Back to Session V

Cloud Computing for on-Demand Resource Provisioning

 

Ignacio Llorente

Distributed Systems Architecture Group

Universidad Complutense de Madrid

Madrid, Spain

 

The aim of the presentation is to show the benefits of the separation of resource provisioning from job execution management in different deployment scenarios. Within an organization, the incorporation of a new virtualization layer under existing Cluster and HPC middleware stacks decouples the execution of the computing services from the physical infrastructure. The dynamic execution of working nodes, on virtual resources supported by virtual machine managers such as the OpenNEbula Virtual Infrastructure Engine, provides multiple benefits, such as cluster consolidation, cluster partitioning and heterogeneous workload execution. When the computing platform is part of a Grid Infrastructure, this approach additionally provides generic execution support, allowing Grid sites to dynamically adapt to changing VO demands, so overcoming many of the obstacles for Grid adoption.

The previous scenario can be modified so the computing services are executed on a remote virtual infrastructure. This is the resource provision paradigm implemented by some commercial and scientific infrastructure Cloud Computing solutions, such as Globus VWS or Amazon EC2, which provide remote interfaces for control and monitoring of virtual resources. In this way a computing platform could scale out using resources provided on demand by a provider, supplementing local physical computing services to satisfy peak or unusual demands. Cloud interfaces can also provide support for the federation of virtualization infrastructures, allowing virtual machine managers to access resources from remote resource providers or Cloud systems in order to meet fluctuating demands. The OpenNEbula Virtual Infrastructure Engine is being enhanced to access on-demand resources from EC2 and Globus-based clouds. This scenario is being studied in the context of the RESERVOIR (Resources and Services Virtualization without Barriers) EU-funded initiative.
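
Stripped to its essentials, the scale-out decision is a simple rule; the following sketch is illustrative only (parameter names and the policy are assumptions, not OpenNEbula code):

    # Lease remote (cloud) nodes only when pending demand exceeds what
    # the local virtual infrastructure can absorb.
    def nodes_to_lease(pending_jobs: int, jobs_per_node: int,
                       local_free_nodes: int, max_cloud_nodes: int) -> int:
        needed = -(-pending_jobs // jobs_per_node)   # ceiling division
        overflow = max(0, needed - local_free_nodes)
        return min(overflow, max_cloud_nodes)

    print(nodes_to_lease(pending_jobs=120, jobs_per_node=8,
                         local_free_nodes=10, max_cloud_nodes=20))  # -> 5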

 

Back to Session V

Network resource reservation and virtualization for grid applications

 

Marcelo Pasin

INRIA, École Normale Supérieure de Lyon

Laboratoire de l’informatique du parallélisme

Lyon, France

 

The coordination of grid resource allocation often needs a service to transfer large datasets from one site to another within a specified time interval. Generally the transfer can start at any time after its request and use any time-variant bandwidth, as long as it is completed before its deadline. We present BDTS, a new service for Bulk Data Transfer Scheduling in the Grid, which provides users, applications or grid middleware a service to specify transfer jobs and ensure transparent control of them. It divides the active window of a transfer job into multiple intervals and independently assigns bandwidth values for each of them, producing bandwidth profiles. It builds dynamic virtual networks by combining network bandwidth reservation and on-demand optical link provisioning. In this presentation, we discuss the conceptual foundations of the problem and some implementation issues, and we present some experimental results.
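
A toy version of the profile computation makes the interval idea concrete (the unit-length intervals and greedy fill are assumptions for illustration; BDTS uses its own scheduler):

    # Split a transfer's active window [start, deadline) into unit
    # intervals and greedily assign bandwidth within each interval's
    # free capacity until the requested volume fits.
    def bandwidth_profile(volume_gb, start, deadline, free_capacity_gb):
        # free_capacity_gb[t]: spare gigabits transferable in interval t
        profile, remaining = {}, volume_gb
        for t in range(start, deadline):
            if remaining <= 0:
                break
            alloc = min(free_capacity_gb[t], remaining)
            if alloc > 0:
                profile[t] = alloc
                remaining -= alloc
        return profile if remaining <= 0 else None   # None: infeasible

    cap = {0: 2.0, 1: 0.5, 2: 2.0, 3: 2.0}
    print(bandwidth_profile(5.0, 0, 4, cap))
    # -> {0: 2.0, 1: 0.5, 2: 2.0, 3: 0.5}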

 

Back to Session VI

Parallel Data Mining from Multicore to Cloudy Grids

 

Geoffrey Fox

Indiana University

Bloomington, IN, U.S.A.

 

We describe a suite of data mining tools that cover clustering, Gaussian modeling, and dimensional reduction and embedding. These are applied to three classes of applications: geographical information systems, cheminformatics and bioinformatics. The data vary in dimension from low (2), to high (thousands), to undefined (sequences with dissimilarities but no vectors defined). We use deterministic annealing to provide more robust algorithms that are relatively insensitive to local minima. We use embedding algorithms both to associate vectors with sequences and to map high dimensional data to low dimensions for visualization. We discuss the algorithm structure and its mapping to parallel architectures of different types, and look at the performance of the algorithms on three classes of systems (multicore, cluster and Grid) using a MapReduce style algorithm. Each approach is suitable in different application scenarios.
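
The MapReduce style mentioned above can be sketched generically (an illustration of the pattern, not the talk's implementation): a clustering update becomes a map over data blocks producing partial sums, followed by a reduction that merges them.

    # One k-means-style centroid update as map (per-block partial sums)
    # plus reduce (merge), the pattern that ports across multicore,
    # cluster and Grid back ends.
    from functools import reduce

    def map_block(block, centroids):
        sums = {}
        for x in block:
            c = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            s, n = sums.get(c, (0.0, 0))
            sums[c] = (s + x, n + 1)
        return sums

    def merge(a, b):
        out = dict(a)
        for c, (s, n) in b.items():
            s0, n0 = out.get(c, (0.0, 0))
            out[c] = (s0 + s, n0 + n)
        return out

    blocks = [[1.0, 1.2, 0.8], [3.9, 4.1], [4.0, 1.1]]
    partials = [map_block(b, centroids=[1.0, 4.0]) for b in blocks]   # "map"
    totals = reduce(merge, partials)                                  # "reduce"
    print([s / n for s, n in (totals[c] for c in sorted(totals))])    # ~[1.025, 4.0]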

 

Back to Session VII

Summary-based Distributed Semantic Database for Resource and Service Discovery

 

André Höing

Electrical Engineering and Computing Science

Technical University of Berlin

Berlin, Germany

 

Today's RDF triple stores that are based on distributed hash tables (DHTs) distribute the knowledge of all participating peers in the P2P network. They use hash values of the subject, predicate, and object of each triple in order to identify three nodes in the network that shall store a copy of the triple. Query processors collect relevant triples by identifying responsible nodes using the hash values of literals and constants occurring in the query.

Usually, a soft state update mechanism is used to discard obsolete information. This requires a periodic re-dissemination of all data which creates significant network traffic despite the network being a bottleneck of such systems. When considering resource and service discovery with highly dynamic data like the system load, it is expensive and difficult to maintain up-to-date information in the database.

This presentation introduces two novel ideas about managing semantically enriched data in a DHT Peer-to-Peer network without such an enormous overhead. We assume that in most cases RDF documents describing services and resources have a structure that does not change much over time. Furthermore, descriptions of similar instances also have a similar structure.

The first idea uses annotation of triples to label specific triples as dynamic triples, where literals are substituted by pointers to locations where dynamic data is stored.

The second idea goes one step further. Inspired by the idea of XML Data Guides, we can build summaries of RDF documents that represent only the structure but not the actual content of the documents. Such summaries are smaller and more stable. By distributing the summaries in the network while keeping the original data local to their source, the network overhead is largely reduced. Most updates can be performed locally at a document's source and do not generate any network traffic.

Before collecting the needed information for query processing, the query algorithm must now identify nodes and documents that contain the desired information by utilizing the distributed summary. This step can produce false positives depending on the degree of detail in the summary. A main challenge is to keep the number of false positives and the amount of network traffic as small as possible, which requires a sensible balance between the degree of detail and the size of summaries. Once nodes with relevant information have been determined, the query processor collects this information and processes the query.

 

Back to Session VII

UNICORE 6 – A European Grid Technology

 

Achim Streit

Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich

Jülich, Germany

 

The development of UNICORE started back in 1997 with two projects funded by the German Ministry of Education and Research (BMBF). UNICORE is a vertically integrated Grid middleware which provides seamless, secure, and intuitive access to distributed resources and data, offering components on all levels of a Grid architecture, from an easy-to-use graphical client down to the interfaces to the Grid resources. Furthermore, UNICORE has strong support for workflows, while security is established through X.509 certificates. Since 2002, UNICORE has been continuously improved to mature, production-ready quality and enhanced with more functionality in several European projects. Today UNICORE is used in several national and international Grid infrastructures such as D-Grid and DEISA, and it also provides access to the national supercomputer of the NIC in Germany.

The talk will give details about the new version, UNICORE 6, which is web-services enabled, OGSA-based and standards-compliant. To begin with, the underlying design principles and concepts of UNICORE are presented. A detailed architecture diagram shows the different components of UNICORE 6 and their interdependencies, with a special focus on workflows. This is followed by a view on the adoption of common open standards in UNICORE 6, which allows interoperability with other Grid technologies and the realisation of an open and extensible architecture. The talk closes with some interesting use cases in which the UNICORE Grid technology is used.

The European UNICORE Grid Middleware is available as Open Source from http://www.unicore.eu.

 

Back to Session VII

e-Science Applications on Grids - The DEISA Success Story

 

Wolfgang Gentzsch

DEISA Distributed European Infrastructure for Supercomputing Applications

and

Duke University

Durham, North Carolina, U.S.A.

 

We will present selected compute- and data-intensive applications which have been ported to the DEISA Distributed European Infrastructure for Supercomputing Applications. DEISA connects 11 powerful supercomputers in Europe into a supercomputing grid offering secure, transparent and remote access to supercomputing cycles to every scientist in Europe. The presentation will include lessons learned and recommendations for applications to be ported to grids.

 

Bio:

 

Wolfgang Gentzsch

 

DEISA, Duke University

 

Wolfgang Gentzsch is Dissemination Advisor for the DEISA Distributed European Infrastructure for Supercomputing Applications. He is adjunct professor of computer science at Duke University in Durham, and visiting scientist at RENCI, the Renaissance Computing Institute at UNC Chapel Hill, both in North Carolina. From 2005 to 2007, he was Chairman of the German D-Grid Initiative. Recently, he was Vice Chair of the e-Infrastructure Reflection Group (e-IRG) and Area Director of Major Grid Projects of the OGF Open Grid Forum Steering Group, and he is a member of the US President's Council of Advisors for Science and Technology (PCAST-NIT). Earlier, he was Managing Director of MCNC Grid and Data Center Services in North Carolina; Sun's Senior Director of Grid Computing in Menlo Park, CA; President, CEO, and CTO of the start-up companies Genias and Gridware; and professor of mathematics and computer science at the University of Applied Sciences in Regensburg, Germany. Wolfgang Gentzsch studied mathematics and physics at the Technical Universities in Aachen and Darmstadt, Germany.

 

Back to Session VII

Superlink-online - delivering the power of GPUs, clusters and opportunistic grids to geneticists

 

M. Silberstein

Technion-Israel Institute of Technology

Haifa, Israel

 

Genetic linkage analysis is a statistical tool used by geneticists for mapping disease-susceptibility genes in the study of genetic diseases. The analysis is based on exact inference in very large probabilistic (Bayesian) networks, which is often computationally hard, with runtimes ranging from seconds to years on a single CPU.

We present a distributed system for faster analysis of genetic data, called Superlink-online. The system achieves high performance through parallel execution of linkage analysis tasks over thousands of computational resources residing in multiple opportunistic computing environments, also known as grids. It utilizes the resources of many available grids, unifying thousands of CPUs across campus grids at the Technion and the University of Wisconsin-Madison, EGEE, the Open Science Grid, and the community computing grid Superlink@Technion.
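
The multi-environment execution can be pictured with a toy escalation policy (pool names and time budgets below are hypothetical; the actual Superlink-online scheduler is considerably more sophisticated):

    # Pools ordered by increasing scale; each has a runtime budget in seconds.
    POOLS = [("local cluster", 60.0),
             ("campus grids", 3600.0),
             ("community grid", float("inf"))]

    def assign_pool(estimated_seconds: float) -> str:
        """Send a task to the smallest environment whose budget covers its
        estimated runtime, keeping short interactive analyses responsive."""
        for name, budget in POOLS:
            if estimated_seconds <= budget:
                return name

    print(assign_pool(5.0))        # 'local cluster'
    print(assign_pool(86_400.0))   # 'community grid'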

Notably, the system is available online, which allows geneticists to perform computationally intensive analyses with no need for either software installation or maintenance of a complicated distributed environment. It has been used extensively by medical centers worldwide, running over 15,000 interactive genetic analysis tasks since 2006 and consuming about 360 CPU years across all grids.

While the grids potentially provide an enormous amount of computing power, we also explore an alternative approach: using Graphics Processing Units (GPUs) to accelerate the genetic linkage computations. We achieve speedups of up to two orders of magnitude on average, and up to three orders of magnitude on some particularly complex problem instances, versus the optimized application performance on a single CPU. The use of GPUs is particularly appealing in the context of community grids, considering the number of high performance GPUs available worldwide.

In this talk we describe various aspects of the system architecture which drives Superlink-online, including scheduling, resource allocation, fault tolerance and reliability, as well as our recent GPU-related results.

Back to Session VII

Building Collaborative Applications for System-Level Science

 

Marian Bubak

Institute of Computer Science AGH, al. Mickiewicza 30,

30-059 Krakow, Poland

ACC CYFRONET AGH, Krakow, ul. Nawojki 11,

30-950 Krakow, Poland

 

A novel approach to scientific investigation, going beyond the analysis of individual phenomena, integrates different interdisciplinary sources of knowledge about a complex system to obtain an understanding of the system as a whole. This innovative way of doing research has recently been called system-level science [1].

 

Problem-solving environments and virtual laboratories have been the subject of research and development for many years [2]. Most of them are built on top of workflow systems [3]. Their main drawbacks include the limited expressiveness of the programming model and the lack of mechanisms for integrating computing resources from grids, clusters and dedicated computers.

 

The ViroLab project [4] is developing a virtual laboratory [5] for research on infectious diseases, to facilitate medical knowledge discovery and provide decision support for HIV drug resistance [6]; this virtual laboratory may also be useful in other areas of system-level science.

 

To overcome the limitations of existing programming methods, we have defined an experiment plan notation based on a high-level scripting language, Ruby. For easy interfacing of different technologies, we have introduced a grid object abstraction level hierarchy [7]. Each grid object class is an abstract entity which defines the operations that can be invoked from the script; each class may have multiple implementations representing the same functionality; and an implementation may have multiple instances, running on different resources [8].
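
An illustrative Python analogue of this class/implementation/instance hierarchy (ViroLab experiment plans are actually written in Ruby, and all names here are invented):

    from abc import ABC, abstractmethod

    class SequenceAligner(ABC):
        """Grid object *class*: the abstract operations a script may invoke."""
        @abstractmethod
        def align(self, seq_a: str, seq_b: str) -> float: ...

    class LocalAligner(SequenceAligner):
        """One *implementation* of the class, running in-process."""
        def align(self, seq_a, seq_b):
            return float(sum(a == b for a, b in zip(seq_a, seq_b)))

    class RemoteAligner(SequenceAligner):
        """Another implementation; an *instance* is bound to one resource."""
        def __init__(self, endpoint: str):
            self.endpoint = endpoint
        def align(self, seq_a, seq_b):
            raise NotImplementedError("stub: would invoke the remote service")

    def create_grid_object() -> SequenceAligner:
        """Stand-in for the Grid Operation Invoker: selects an implementation
        and instance; the experiment script sees only the abstract class."""
        return LocalAligner()

    aligner = create_grid_object()
    print(aligner.align("ACGT", "ACGA"))   # 3.0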

 

The Experiment Planning Environment is an Eclipse-based tool supporting rapid experiment plan development, while the Experiment Management Interface enables loading and execution of experiments. The Experiment Repository stores experiment plans prepared by developers and published for future use, and the laboratory database holds the obtained results. To enable high-level programming, the virtual laboratory engine, called GridSpace, includes the Grid Operation Invoker, which instantiates grid object representatives and handles remote operation invocations. The GridSpace Application Optimizer is responsible for optimal load balancing on computational servers. The Data Access Service (DAS) acquires data from remote databases located in research institutions and hospitals. To meet the specific requirements for exchanging biomedical information within such a virtual environment, the solution introduced in DAS is based on existing Grid technologies: Globus Toolkit, OGSA-DAI, and Shibboleth. The provenance approach [9] in the ViroLab virtual laboratory brings together ontology-based semantic modeling, monitoring of applications and the runtime infrastructure, and database technologies, in order to collect rich information concerning the execution of experiments, represent it in a meaningful way, and store it in a scalable repository [10].

 

The virtual laboratory has already been used to plan and execute several virological experiments involving various types of analysis of HIV genotypes, such as calculation of drug resistance based on the virus genotype, querying historical and provenance information about experiments, a drug resistance system based on the Retrogram set of rules, data mining and classification with Weka [5], and the molecular dynamics NAMD application, which has been installed on the CYFRONET EGEE site.

 

The virtual laboratory provides an environment in which to collaboratively plan, develop and use collaborative applications; it is dedicated to multi-expertise, task-oriented groups running complex computer simulations. Its basic features are: mechanisms for user-friendly experiment creation and execution, the possibility of reusing existing libraries and tools, gathering and exposing provenance information, integration of geographically distributed data resources, access to WS, WSRF and MOCCA components and jobs, and secure access to data and applications.

 

Acknowledgments

 

The Virtual Laboratory is being developed at the Institute of Computer Science and CYFRONET AGH, Gridwise Technologies, Universiteit van Amsterdam, and HLRS Stuttgart in the framework of the EU IST ViroLab and CoreGRID projects as well as the related Polish SPUB-M and Foundation for Polish Science grants.

 

References

 

[1] I. Foster and C. Kesselman: Scaling System-Level Science: Scientific Exploration and IT Implications, IEEE Computer, vol. 39, no. 11, pp. 31-39, 2006

[2] K. Rycerz, M. Bubak, P.M.A. Sloot, V. Getov: Problem Solving Environment for Distributed Interactive Simulations, in: S. Gorlatch, M. Bubak, T. Priol (Eds.), Achievements in European Research on Grid Systems, CoreGRID Integration Workshop 2006 (Selected Papers), ISBN 978-0-387-72811-7, pp. 55-66, Springer, 2008

[3] Y. Gil, E. Deelman, M. Ellisman, T. Fahringer, G. Fox, D. Gannon, C. Goble, M. Livny, L. Moreau, and J. Myers: Examining the Challenges of Scientific Workflows, IEEE Computer, vol. 40, no. 12, pp. 24-32, 2007

[4] ViroLab - EU IST STREP Project 027446; www.virolab.org

[5] ViroLab Virtual Laboratory, http://virolab.cyfronet.pl

[6] P.M.A. Sloot, I. Altintas, M. Bubak, Ch.A. Boucher: From Molecule to Man: Decision Support in Individualized E-Health, IEEE Computer, vol. 39, no. 11, pp. 40-46, 2006

[7] T. Gubala, M. Bubak: GridSpace - Semantic Programming Environment for the Grid, PPAM 2005, LNCS 3911, pp. 172-179, 2006

[8] M. Malawski, M. Bubak, M. Placek, D. Kurzyniec, V. Sunderam: Experiments with Distributed Component Computing Across Grid Boundaries, Proc. HPC-GECO/CompFrame Workshop, HPDC 2006, Paris, 2006

[9] D. De Roure, N.R. Jennings, N. Shadbolt: The Semantic Grid: A Future e-Science Infrastructure, in: Grid Computing - Making the Global Infrastructure a Reality, Wiley, 2003, pp. 437-470

[10] B. Balis, M. Bubak, and J. Wach: User-Oriented Querying over Repositories of Data and Provenance, in: G. Fox, K. Chiu, and R. Buyya (Eds.), Third IEEE International Conference on e-Science and Grid Computing (e-Science 2007), Bangalore, India, 10-13 December 2007, pp. 77-84, IEEE Computer Society, 2007

 

Back to Session VII

A Performance-Based Distribution Algorithm for Heterogeneous Tasks in Grid Computing

 

Abderezak Touzene, Hussein AlMaqbali, Ahmed AlKindi, Khaled Day

Department of Computer Science, Sultan Qaboos University

AL-Khod, Oman

 

Recently, in [1], we proposed a performance-based load-balancing algorithm for independent tasks with similar computing needs, in the sense that the tasks are almost identical. This paper extends that work and proposes a load distribution algorithm for independent tasks with different computing requirements, including short and long tasks. We assume a preprocessing phase that predicts the number of instructions (TNI) needed for each task in the grid. Our load distribution algorithm takes into account both the CPU speed of the computing units and the TNI of the different tasks. We design a steady-state simulation model based on NS2 to study the performance of our load distribution algorithm.
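
As a rough illustration of the idea, the sketch below distributes tasks greedily by earliest predicted finish time, using instruction counts and CPU speeds (a generic heuristic for illustration; the paper's own algorithm may differ):

    def distribute(tnis, speeds):
        """Give each task (longest first) to the computing unit that would
        finish it earliest, where runtime = TNI / CPU speed."""
        finish = [0.0] * len(speeds)        # projected completion time per unit
        plan = [[] for _ in speeds]         # task ids assigned to each unit
        for tid, tni in sorted(enumerate(tnis), key=lambda t: -t[1]):
            unit = min(range(len(speeds)),
                       key=lambda u: finish[u] + tni / speeds[u])
            finish[unit] += tni / speeds[unit]
            plan[unit].append(tid)
        return plan, finish

    # Four tasks (instruction counts) on a fast and a slow unit (instr/sec).
    plan, finish = distribute([9e9, 2e9, 7e9, 1e9], speeds=[3e9, 1e9])
    print(plan, [round(f, 2) for f in finish])   # [[0, 2], [1, 3]] [5.33, 3.0]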

 

Keywords: grid computing, load-balancing, steady-state, resource management, performance evaluation, simulation models.

 

Back to Session VII

DMOVER: Scheduled Data Transfer for HPC Grid Workflows

 

Derek Simmel

Pittsburgh Supercomputing Center

Pittsburgh, PA, U.S.A.

 

TeraGrid users have expressed a need for better tools to schedule and manage data transfer among distributed HPC resources and instruments involved in computational workflows. Users require assured access to data transfer services and reliable means to coordinate data transfers before, during, and after computational jobs and instrument data collection events. The interface to these coordinated data transfer services must be easy to use, consistent and interoperable among different HPC resources, instruments, and computational workflow tools. Scheduled data transfer tasks must be reliable, repeatable, and exhibit consistent performance for similar transfer parameters and conditions.

DMOVER addresses these needs using familiar job scheduling methods to allocate and prepare WAN-connected nodes dedicated to data transfer tasks. DMOVER is a portable harness used to schedule and automate data transfers via Globus GridFTP and GSI-SCP. Users submit DMOVER jobs using the dsub command-line client, with applicable parameters and a task file containing a list of transfer source and destination URLs. When a DMOVER job starts, it automatically configures and deploys optimized GridFTP and GSI-SSH servers on the allocated nodes. Transfers specified in the task file are then executed, supported by automated retries as well as user credential monitoring and renewal. Detailed logs are collected to facilitate monitoring of data transfer tasks. Coordinated workflows involving computational and DMOVER jobs are implemented via compatible job managers or Globus job submission services.
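
A hypothetical sketch of preparing and submitting such a job (the task-file layout, endpoints and dsub invocation below are illustrative assumptions, not the documented DMOVER interface):

    import subprocess
    import tempfile

    # Invented endpoints; a real task file lists transfer source and
    # destination URLs, one pair per task.
    transfers = [
        ("gsiftp://tg-source.example.org/data/run42.tar",
         "gsiftp://tg-dest.example.org/scratch/run42.tar"),
    ]

    with tempfile.NamedTemporaryFile("w", suffix=".tasks", delete=False) as f:
        for src, dst in transfers:
            f.write(f"{src} {dst}\n")     # one "source destination" line per task
        task_file = f.name

    # Submit the scheduled transfer job through the command-line client;
    # requires a site where DMOVER's dsub is actually installed.
    subprocess.run(["dsub", task_file], check=True)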

 

Back to Session VII

Grid Computing or the Internet of Services?

Opportunities and perspectives from research to business

 

Antonio Congiusta

NICE-ITALY, Cortanze, Asti, Italy

 

Experience has shown that solutions which better enable organizations to take advantage of the benefits of Grid computing are based on a clear identification of the requirements and on the application of the best available standardized and reliable technologies.

Relevant examples of this principle, with related best practices, can be extracted from some of the success stories that have recently involved EnginFrame in the Oil & Gas, Energy and Automotive sectors, in HPC support ranging from collaboration facilities to infrastructure provision and management, and in some fruitful cooperations with strategic partners.

In particular, beyond the well-established HPC activities within a primary European consortium providing a production-quality infrastructure, a new trend has been undertaken towards the integration of collaboration facilities into HPC environments. Also of interest are the activities devoted to enabling workflow management and distributed visualization, some of which are part of European-wide research projects.

From all these experiences we can envision, as the future of the Grid, a strong evolution towards interoperable key services, within a scenario in which comprehensive, all-inclusive software is ever less important. In such a scenario, a key role is played by integration technologies capable of homogenizing and enforcing service interactions and access.

Back to Session VII

e-Research & Grid Computing in Australia: From Infrastructure to Research

 

David Abramson

Monash University

Clayton, Vic, Australia

 

Over the past few years the Australian government has performed a major review of its research infrastructure needs, from hard technological areas to the social sciences. Along with this review, it has investigated the electronic platforms required to support these various disciplines. What has evolved is a grid computing strategy called "Platforms for Collaboration" that addresses computation, networking and data management. In addition, various computer science groups are developing grid technologies that underpin this platform. In this talk I will give an overview of the Australian e-Research agenda and highlight a few major research activities in grid computing.

 

Back to Session VIII

Grid and e-Science in Korea

 

Kihyeon Cho

e-Science Division

Korea Institute of Science and Technology Information, Daejeon, 305-806, Korea

 

In Korea, Grid and e-Science efforts focused on research into Grid technology and Grid infrastructure until 2006. Since 2007, we have been in the stage of enabling science on the cyber-infrastructure for four major application areas: e-Life Science, e-Physics, e-Engineering, and e-Geo Science. In order to support these areas we also work on scientific workflow technology and WSRF (Web Service Resource Framework) based Grid service technology, and we support researcher collaboration with both infrastructure and technology. In this talk, we will present the current status of Grid and e-Science projects and activities in Korea.

 

Back to Session VIII

Grid Activity in India

 

Atul Gurtu

Tata Institute of Fundamental Research

Mumbai, INDIA

 

Grid technology has changed the way advanced research is conducted today. In India too, the main driver for the introduction of Grid activity has been participation in High Energy Physics (HEP) experiments at the LHC. To extract meaningful physics results within a reasonable span of time from petabytes of data of unprecedented complexity, and to include effective participation of collaborating institutions spread worldwide, the only way was to develop Grid-based distributed computing. Integration of various distributed environments in different administrative domains with varied security policies poses new challenges in data security and data sharing. The required computational performance is supported through the Worldwide LHC Computing Grid (WLCG). The progress and status of setting up WLCG networking and computing in India will be described, with the main emphasis on High Energy Physics related activity. The status of Tier-2 centers for participation in the ALICE and CMS experiments at the LHC at CERN will be presented. The role of the EU-India Grid project in developing Grid-based solutions in other areas of science, such as earth sciences, biology and condensed matter physics, will be mentioned, as well as some domestic Grid initiatives within the country.

 

Back to Session VIII

National Grid Initiative of Ukraine

 

Anatoly Sachenko

American-Ukrainian School of Computer Science

Department of Information Computing Systems and Control

Ternopil State Economic University

Ternopil, Ukraine

 

The uniting of the existing Grid segments and supercomputer centers in the scientific and educational areas into a joint Ukrainian National Grid Initiative (UNGI), and the issues of UNGI integration into the European Grid infrastructure, are considered in this paper. The peculiarities of the Grid segment at the National Academy of Sciences, as well as the UGrid project of the Ministry of Education and Science, are also described. Emphasis is placed on the joint UNGI project for EGI and on other integration possibilities within the INTAS, NATO and Framework 7 programmes. Finally, an advanced approach for strengthening security in Grid systems is proposed.

 

Back to Session VIII

The European Grid Initiative

 

Per Öster

CSC – Finnish IT Center for Science

Espoo, Finland

 

The European Grid Initiative (EGI) has as its goal to ensure the long-term sustainability of grid infrastructures in Europe. This is to be done through the establishment of a new federated model bringing together National Grid Infrastructures (NGIs) to build the EGI Organisation. For this purpose, the European Commission has funded a specific 27-month project, the EGI Design Study (EGI_DS), to work out the conceptual setup and operation of a new organisational model for a sustainable pan-European grid infrastructure. The goals of the design study are to evaluate use cases for the applicability of a coordinated effort, to identify processes and mechanisms for establishing EGI, to define the structure of a corresponding body, and ultimately to initiate the construction of the EGI Organisation. The project started in September 2007, and a very important milestone is the Blueprint of the EGI Organisation. In this talk the EGI Blueprint and feedback from the NGIs will be presented, together with a discussion of the role of EGI in the European “HPC Ecosystem”.

Back to Session VIII

Building Shared High Performance Computing Infrastructure for the Biomedical Sciences

 

Marcos Athanasoulis, Dr.PH, MPH

Harvard Medical School

U.S.A.

In recent years high performance computing has moved from the sidelines to the mainstream of biomedical research. Increasingly, researchers are employing computational methods to facilitate their wet lab research, and some emerging laboratories and approaches are based on a 100% computational framework. While there are many lessons to be learned from the computational infrastructure put into place for the physical and mechanical sciences, the character, nature and demands of biomedical computing differ from the needs of the other sciences. Biomedical computational problems, for example, tend to be less computationally intensive but more “bursty” in their needs. This creates both an opportunity (it is easier to meet capacity needs) and a challenge (job scheduling rules are more complicated to accommodate the bursts).

Harvard Medical School provides one of the most advanced shared high performance research computing centers at an academic medical center. In 2007, Harvard convened the first Biomedical High Performance Computing Leadership Summit to explore the issues in creating shared computing infrastructure for the biomedical sciences. We brought together over 100 leaders in the field to exchange ideas and approaches. Through special sessions and direct participant surveys, a number of themes emerged around best practices in deploying shared computational infrastructure for the biomedical sciences.

Based on prior experience and the summit findings, we summarize the obstacles and opportunities facing those who wish to provide biomedically oriented high performance computing infrastructure. We will provide quantitative results about biomedical HPC implementations, along with examples of current problems in biomedical computation, including whole genome processing, massively parallel correlation analysis and natural language processing of clinical notes.

Back to Session IX

ViroLab: Distributed Decision Support in a virtual laboratory for infectious diseases

 

P. Sloot

University of Amsterdam

Amsterdam, Netherlands

 

In future years, genetic information is expected to become increasingly significant in many areas of medicine. This expectation comes from the recent and anticipated achievements in genomics, which provide an unparalleled opportunity to advance the understanding of the role of genetic factors in human health and disease, to allow more precise definition of the non-genetic factors involved, and to apply this insight rapidly to the prevention, diagnosis and treatment of diseases.

ViroLab integrates biomedical information from viruses (proteins and mutations), patients (e.g. viral load) and literature (drug resistance experiments), resulting in a rule-based distributed decision support system for drug ranking. In addition, ViroLab includes advanced tools for (bio)statistical analysis, visualization, modelling and simulation, enabling prediction of the temporal virological and immunological response of viruses with complex mutation patterns to drug therapy.
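
A toy sketch of rule-based drug ranking (the mutations, weights and drug names below are invented for illustration; they are not the Retrogram rule set used by ViroLab):

    # Each rule maps a resistance mutation to a penalty for a given drug.
    RULES = {
        "drugA": {"M184V": 60, "K65R": 30},
        "drugB": {"K103N": 80},
        "drugC": {},                        # no known resistance mutations
    }

    def rank(mutations):
        """Rank drugs by accumulated resistance penalties (lower is better)."""
        scores = {drug: sum(w for m, w in rules.items() if m in mutations)
                  for drug, rules in RULES.items()}
        return sorted(scores.items(), key=lambda item: item[1])

    # A genotype carrying M184V and K103N: drugC (0) ranks above
    # drugA (60) and drugB (80).
    print(rank({"M184V", "K103N"}))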

Back to Session IX

Processing of Large-Scale Biomedical Images on a Cluster of Multi-Core CPUs and GPUs

 

Umit Catalyurek

Department of Biomedical Informatics

The Ohio State University

Columbus, Ohio, U.S.A.

 

As microprocessor manufacturers strain to continue to increase performance, multi-core chips are quickly becoming the norm. Demand in the computer gaming industry has also brought us GPUs as an alternative: fast, general-purpose streaming co-processors. Commodity GPUs and multi-core CPUs bring together an unprecedented combination of high performance at low cost, and provide an ideal environment for biomedical image analysis applications.

In this talk we will present our ongoing efforts on developing optimized biomedical image analysis kernels for heterogeneous multi-core CPUs and GPUs. We will also present how a cooperative cluster of multi-core CPUs and GPUs can be efficiently used for large-scale biomedical image analysis.
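
One common pattern for such CPU/GPU cooperation is a demand-driven work queue, sketched below with mock devices (a generic illustration under assumed names; the speakers' actual kernels and runtime are not shown):

    import queue
    import threading

    def run_heterogeneous(tiles, workers):
        """Each worker (a CPU core or a GPU, abstracted as a processing
        function) pulls the next image tile when it becomes free, so
        faster devices naturally process more tiles."""
        q = queue.Queue()
        for tile in tiles:
            q.put(tile)
        results, lock = [], threading.Lock()

        def drain(process):
            while True:
                try:
                    tile = q.get_nowait()
                except queue.Empty:
                    return
                r = process(tile)
                with lock:
                    results.append(r)

        threads = [threading.Thread(target=drain, args=(w,)) for w in workers]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results

    # Two mock CPU workers and one mock GPU worker processing eight tiles.
    cpu = lambda tile: ("cpu", tile)
    gpu = lambda tile: ("gpu", tile)
    print(run_heterogeneous(range(8), [cpu, cpu, gpu]))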

 

Back to Session IX

Grid Computing for Financial Applications

 

M. Al-Baali§, P. Beraldi*, L. Grandinetti*, G. Aloisio^, I. Epicoco^, A. Violi**, C. Figà Talamancaç

 

§ Dept. of Mathematics and Statistics, Sultan Qaboos University, Muscat, Oman

* Department of Electronics, Informatics and Systems, University of Calabria

** CESIC - University of Calabria

^ University of Salento

ç Innova SpA

 

In recent years financial operators have shown an increasing interest in quantitative tools able to efficiently measure, control and manage risk. This interest is motivated by the necessity of operating in a very competitive and volatile environment, whose high level of complexity is increased by the globalization of economic activities and the continuous introduction of innovative financial products. The complexity of the problems to be dealt with and the necessity of operating in real time have highlighted the serious computational constraints imposed by conventional numerical platforms, prompting the need to take advantage of high performance computing systems.

 

In this talk we present a prototype system designed to support financial operators in investment decisions concerning the strategic asset allocation problem. The system has been designed and tested within the European project BEINGRID.

At the core of the system is the formulation of sophisticated optimization models able to capture the specific features of the application problem with a greater level of realism than traditional approaches. Moreover, the system integrates advanced scenario generation procedures with efficient methods for solving the resulting very large problems.
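
A toy scenario-based sketch of the underlying idea (three invented assets and a coarse grid search standing in for the real scenario generator and solver):

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated yearly return scenarios for three hypothetical assets.
    scenarios = rng.multivariate_normal(
        mean=[0.04, 0.06, 0.02],
        cov=[[0.02, 0.005, 0.0], [0.005, 0.04, 0.0], [0.0, 0.0, 0.001]],
        size=10_000)

    def objective(w, risk_aversion=2.0):
        """Mean-risk objective evaluated over all scenarios."""
        r = scenarios @ w               # portfolio return in each scenario
        return r.mean() - risk_aversion * r.std()

    # Coarse search over the allocation simplex; a real stochastic
    # programming solver would replace this enumeration.
    grid = [np.array([i, j, 10 - i - j]) / 10.0
            for i in range(11) for j in range(11 - i)]
    best = max(grid, key=objective)
    print("allocation:", best, "objective:", round(objective(best), 4))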

 

The system has been deployed on the SPACI grid infrastructure. In particular, a user-friendly web grid environment has been realized using GRB technology for resource management and the GRelC services for distributed data.

 

Back to Session IX

Data Issues in a challenging HPC application to Climate Change

 

Giovanni Aloisio

University of Salento

Lecce, Italy

 

Earth Science is rapidly becoming a data-intensive, data-oriented activity. Petabytes of data, big collections and huge datasets are continuously produced, managed and stored, as well as accessed, transferred and analyzed by many scientists and researchers at multiple sites. From the data grid perspective, a key element for searching, discovering, managing and accessing huge amounts of data stored in distributed storage systems is the related data and metadata framework. A new supercomputing centre, the Euro-Mediterranean Centre for Climate Change (CMCC), was recently created by the Italian Government to support research on climate change. The SPACI Consortium, one of the main CMCC Associate Centres, provides know-how and expertise on High Performance and Grid Computing. The GRelC Middleware (provided by the SPACI Consortium) has recently been adopted as part of the CMCC Data Grid framework in order to provide a secure, transparent and scalable grid-enabled metadata management solution.

We present the CMCC initiative and the supercomputing facility, as well as the data grid architectural and infrastructural issues concerning the adopted grid data/metadata handling systems.

Back to Session IX

A Heterogeneous Computing Model for a Grand Challenge Problem

 

Tim David

Centre for Bioengineering, University of Canterbury

Christchurch, New Zealand

 

Back to Session IX

The e-Science for High Energy Physics

 

Kihyeon Cho, Ph.D.

KISTI (Korea Institute of Science and Technology Information)

Daejon, Korea

 

e-Science for High Energy Physics aims to enable the study of High Energy Physics (HEP) any time and anywhere, even away from the accelerator laboratories. Its components are 1) data production, 2) data processing and 3) data analysis, any time and anywhere. Data production means remote control of the detector and taking shifts remotely. Data processing means running jobs anytime, anywhere using Grid farms. Data analysis means working together to publish papers using a collaborative environment. We apply this concept to the LHC experiment at CERN and the Tevatron experiment at Fermilab. In this talk we will present the current status and embodiment of the idea.

 

Back to Session IX

A HPC infrastructure at the service of Scientific Research in Italy

 

Giovanni Erbacci

CINECA - System and Technology Department

CINECA Inter-University Consortium, Casalecchio di Reno, Italy

 

State-of-the-art HPC infrastructures are fundamental to support scientific research and to advance science at the European level. For many years, at the national level, CINECA has assured the Italian scientific community a competitive advantage by putting into timely production advanced HPC systems that have proven widely applicable and successful.

The CINECA HPC infrastructure de facto represents the national facility for supercomputing, and the CINECA HPC systems are part of the Italian research infrastructure system, integrated by means of the Italian academic and research network facility (GARR).

In this work we present the CINECA HPC infrastructure, its evolution, and the service model. Moreover, we outline CINECA's role in the context of the main HPC infrastructure projects operating at the European level: DEISA, PRACE and HPC-Europa.

DEISA is a consortium of the most advanced HPC centres in Europe, whose aim is to deploy and operate a persistent, production-quality, distributed supercomputing environment with continental scope.

This infrastructure is mainly intended to support challenging scientific applications by integrating supercomputers in different centres and making them easily accessible.

PRACE is a feasibility project intended to build the next generation of European HPC infrastructure and services. The infrastructure will consist of a limited number (3 to 5) of PetaFlop/s-class HPC systems integrated into a network of HPC systems on a pyramidal basis, with three layers (European, national and regional) in the European HPC ecosystem.

HPC-Europa supports the human network of knowledge, experience and expertise exchange within the scientific research communities using advanced HPC systems.

HPC-Europa actively promotes this mission by supporting the mobility of European researchers among the main research institutions and by providing access to the computational resources offered by the main European HPC infrastructures.

Back to Session IX

 

 

PANELS

 

PANEL 1: “Exascale Computing”

 

Chairman: Paul Messina

Co-organizers: Pete Beckman, Paul Messina

Panelists: Pete Beckman, Alan Gara, Dan Reed, Satoshi Matsuoka, Jeffrey Vetter

 

The Petascale computing era has just begun, almost exactly in the timeframe that was estimated in the mid-1990s. What new issues and challenges will need to be addressed to create Exascale computing environments? When might they become a reality?

The panelists will discuss exascale computing from various points of view:

- system architecture, including power consumption

- system software and tools

- performance bottlenecks

- data center and data-intensive applications

- integration of exascale systems into users' computing environments.

 

Back to Session IV

PANEL 2: “From Grids to Cloud Services”


Organizer: Charlie Catlett

Panelists: Avner Algom, Pete Beckman, Charlie Catlett, Ignacio Llorente, Satoshi Matsuoka


This panel will explore the adoption of "grid" concepts in industry and the emergence of "cloud" computing as well as related services (cloud storage, etc.). The focus of the panel will be to examine what services seem to be succeeding, why, and how we believe "Cloud Services" will evolve over the next 12-24 months. A key question for the panel is: based on where the industry is going, what are the key research questions that will emerge for computer scientists, and how might computational scientists plan for computational (and storage, etc.) services over the next several years?

 

Back to Session VI