HPC 2023

 

High Performance Computing

 

State of the Art, Emerging Disruptive Innovations and Future Scenarios

 

An International Advanced Workshop

 

 

 

June 26 – 30, 2023, Cetraro, Italy

 

 


 

 

Programme Committee

Organizers

Sponsors &

Media Partners

Speakers

Agenda

Chairpersons

Panel

Abstracts

 

 

Final Programme

 

Programme Committee

LUCIO GRANDINETTI (Chair)

Department of Computer Engineering, Electronics, and Systems

University of Calabria – UNICAL

and

Center of Excellence for High Performance Computing

ITALY

 

JAMES AHRENS

Los Alamos National Laboratory

Information Science and Technology Institute

Los Alamos, NM

USA

 

FRANK BAETKE

EOFS

European Open File System Organization

formerly

Hewlett Packard Enterprise

Munich

GERMANY

 

RUPAK BISWAS

NASA

Exploration Technology Directorate

High End Computing Capability Project

NASA Ames Research Center

Moffett Field, CA

USA

 

SUSAN COPPERSMITH

Head, School of Physics

University of New South Wales Sydney

Sydney

AUSTRALIA

 

GIUSEPPE DE PIETRO

National Research Council of Italy

Director, ICAR - Institute for High Performance Computing and Networks

Naples

ITALY

 

SUDIP DOSANJH

Director National Energy Research Scientific Computing Center

Lawrence Berkeley National Laboratory

Berkeley, CA

USA

 

WOLFGANG GENTZSCH

The UberCloud, Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

VLADIMIR GETOV

Distributed and Intelligent Systems Research Group

School of Computer Science and Engineering

University of Westminster

London

UNITED KINGDOM

 

VICTORIA GOLIBER

D-Wave Systems Inc.

GERMANY and USA

 

HIROAKI KOBAYASHI

Architecture Laboratory

Department of Computer and Mathematical Sciences

Graduate School of Information Sciences

Tohoku University

JAPAN

 

SATOSHI MATSUOKA

Director RIKEN Center for Computational Science

Kobe

and

Department of Mathematical and Computing Sciences

Tokyo Institute of Technology

Tokyo

JAPAN

 

KEVIN OBENLAND

Quantum Information and Integrated Nanosystems

Lincoln Laboratory

Massachusetts Institute of Technology MIT

Boston, MA

USA

 

PER OSTER

Director Advanced Computing Facility

CSC-IT Center for Science

Espoo

FINLAND

 

VALERIO PASCUCCI

Center for Extreme Data Management, Analysis and Visualization

and

Scientific Computing and Imaging Institute

School of Computing, University of Utah

and

Laboratory Fellow, Pacific Northwest National Laboratory

USA

 

KRISTEN PUDENZ

Director Advanced Research Programs

Atom Computing

Berkeley, California

USA

 

DANIEL REED

Department of Electrical and Computer Engineering

School of Computing

University of Utah

Salt Lake City, Utah

USA

 

MARK SAFFMAN

INFLEQTION Quantum Technologies

and

University of Wisconsin-Madison

USA

 

THOMAS STERLING

President & CSO

Simultac LLC

Bloomington, IN

formerly

AI Computing Systems Laboratory (AICSL)

School of Informatics, Computing, and Engineering

Indiana University

Bloomington, IN

USA

 

WILLIAM TANG

Princeton University Dept. of Astrophysical Sciences

Princeton Plasma Physics Laboratory

and

Center for Statistics and Machine Learning (CSML)

and

Princeton Institute for Computational Science & Engineering (PICSciE)

Princeton University

USA

 

MICHELA TAUFER

The University of Tennessee

Electrical Engineering and Computer Science Dept.

Knoxville, TN

USA

 

Organizing Committee

 

L. GRANDINETTI (Co-chair)

ITALY

T. LIPPERT (Co-chair)

GERMANY

M. ALBAALI

OMAN

J. DONGARRA

USA

W. GENTZSCH

GERMANY

R. BISWAS

USA

 

 

 

Sponsors

 

AMAZON WEB SERVICES


ATOM COMPUTING

CEREBRAS


CMCC Euro-Mediterranean Center on Climate Change

CSC Finnish Supercomputing Center

CSCS

Swiss National Supercomputing Centre


CSIR

Council for Scientific and Industrial Research - South Africa

DIRAQ

EOFS

Hewlett Packard Enterprise

INFLEQTION Quantum Technologies

Juelich Supercomputing Center, Germany


LENOVO (t.b.c.)

National Research Council of Italy - ICAR - Institute for High Performance Computing and Networks

NEC


NEXT SILICON

NVIDIA

PARTEC

PSIQUANTUM

SAMBANOVA SYSTEMS


University of Calabria

Department of Computer Engineering, Electronics, and Systems


 

 

 

Media Partners

 

 

HPCwire

 

HPCwire is a news portal and weekly newsletter covering the fastest computers in the world and the people who run them. As the trusted source for HPC news since 1987, HPCwire has served as the publication of record on the issues, challenges, opportunities, and conflicts relevant to the global High Performance Computing space. Its reporting covers the vendors, technologies, users, and the uses of high performance, AI- and data-intensive computing within academia, government, science, and industry.

Subscribe now at www.hpcwire.com.

 

 

 

 

 


 

https://insidehpc.com/about/

 

About insideHPC

Founded in 2006, insideHPC is a global publication recognized for its comprehensive and insightful coverage across the HPC-AI community, linking vendors, end-users and HPC strategists. insideHPC has a large and loyal audience drawn from public and private companies of all sizes, government agencies, research centers, industry analysts and academic institutions. In short: the buyers and influencers of HPC, HPC-AI and associated emerging technologies.

 

THE News Analysis Site for HPC Insiders: Written and edited by seasoned technology journalists, we’re all about HPC and AI, offering feature stories, commentary, news coverage, podcasts and video interviews with HPC’s leading voices. Like the evolving HPC ecosystem, insideHPC’s coverage continually expands into emerging focus areas to better serve our growing readership and advertising base. In 2023, insideHPC will deliver an updated format and new spotlight coverage of enterprise HPC, HPC-AI, exascale (and post-exascale) supercomputing, quantum computing, cloud HPC, edge computing, High Performance Data Analytics and the geopolitical implications of supercomputing.

 

 

 

 

 

UberCloud

 

UberCloud provides Cloud With One Click – a fully automated, secure, on-demand, browser-based and self-service Engineering Simulation Platform for engineers and scientists to build, compute, and analyze their engineering simulations. Our unique HPC software containers facilitate software packaging and portability, simplify access to and use of any public, private, hybrid, and multi-cloud resources, and ease software maintenance and support for end-users, IT teams, and their cloud service providers.

 

Please follow UberCloud on LinkedIn and contact us about performing a Proof of Concept in the Cloud.

 

 

 

 

 

Speakers

 

JAMES AHRENS

Director of the Information Science Technology Institute

Los Alamos National Laboratory

Los Alamos, NM

USA

 

FRANK BAETKE

EOFS

European Open File System Organization

GERMANY

 

RUPAK BISWAS

NASA

Exploration Technology Directorate

High End Computing Capability Project

NASA Ames Research Center

Moffett Field, CA

USA

 

SERGIO BOIXO

GOOGLE

Quantum Artificial Intelligence Laboratory, Google AI

Santa Barbara, CA

USA

 

FERNANDO BRANDAO

California Institute of Technology (Caltech)

and

Director Quantum Applications at Amazon-AWS

Los Angeles, CA

USA

 

RONALD BRIGHTWELL

SANDIA National Laboratories Center for Computing Research

Albuquerque, NM

USA

 

JERRY CHOW

IBM Fellow and Director of Quantum Infrastructure

IBM Quantum

T. J. Watson Research Center

Yorktown Heights, NY

USA

 

MARCUS DOHERTY

Co-Founder & Chief Scientific Officer

Quantum Brilliance

and

Australian National University

ACT, Canberra

AUSTRALIA

 

SUDIP DOSANJH

Director National Energy Research Scientific Computing Center

Lawrence Berkeley National Laboratory

Berkeley, CA

USA

 

DANIELE DRAGONI

Leonardo S.p.A.

High Performance Computing Lab.

Genova

ITALY

 

ANDREW DZURAK

School of Electrical Engineering & Telecommunications

University of New South Wales

Sydney, Australia

and

Australian Research Council

and

Founder & CEO of Diraq

Sydney, NSW

AUSTRALIA

 

WOLFGANG GENTZSCH

The UberCloud

Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

VLADIMIR GETOV

Distributed and Intelligent Systems Research Group

School of Computer Science and Engineering

University of Westminster

London

UNITED KINGDOM

 

VLAD GHEORGHIU

Institute for Quantum Computing, University of Waterloo

and

SoftwareQ Inc, Waterloo

Waterloo, Ontario

CANADA

 

JUSTIN GING

Atom Computing

Berkeley, California

USA

 

ROBERT HOEKSTRA

Extreme Scale Computing

Computing Research Center

Sandia National Laboratories

Albuquerque, NM

USA

 

TOSHIYUKI IMAMURA

RIKEN Center for Computational Science

Kobe

JAPAN

 

NOBUYASU ITO

RIKEN Center for Computational Science

Kobe

JAPAN

 

HIROAKI KOBAYASHI

Architecture Laboratory

Department of Computer and Mathematical Sciences

Graduate School of Information Sciences

Tohoku University

JAPAN

 

KRZYSZTOF KUROWSKI

Technical Director, Poznan Supercomputing and Networking Center

POLAND

 

SALVATORE MANDRA

Senior Research Scientist and Task Lead

Quantum Artificial Intelligence Lab (QuAIL)

KBR, Inc.

NASA, Ames Research Center

CA, USA

 

STEFANO MARKIDIS

KTH Royal Institute of Technology

Computer Science Department / Computational Science and Technology Division

Stockholm

SWEDEN

 

MARTIN MUELLER

SambaNova Systems Inc

Palo Alto, CA

USA

 

JOSH MUTUS

Rigetti Computing

Director Quantum Devices

USA/CANADA

 

YUICHI NAKAMURA

Executive Professional, NEC Corporation

JAPAN

 

KEVIN OBENLAND

Quantum Information and Integrated Nanosystems

Lincoln Laboratory

Massachusetts Institute of Technology MIT

Boston, MA

USA

 

PER OSTER

Director Advanced Computing Facility

CSC-IT Center for Science

Espoo

FINLAND

 

VALERIO PASCUCCI

Center for Extreme Data Management, Analysis and Visualization

and

Scientific Computing and Imaging Institute

School of Computing

University of Utah, Salt Lake City

and

US DOE Pacific Northwest National Laboratory

USA

 

NICOLAI PETKOV

Faculty of Science and Engineering, Intelligent Systems

University of Groningen

Groningen

THE NETHERLANDS

 

VALERIO RIZZO

EMEA Head of AI & Subject Matter Expert for Lenovo

ITALY

 

MARK SAFFMAN

INFLEQTION Quantum Technologies

and

University of Wisconsin-Madison

USA

 

THOMAS SCHULTHESS

CSCS

Swiss National Supercomputing Centre

Lugano

and

ETH

Zurich

SWITZERLAND

 

PETE SHADBOLT

Co-founder

PsiQuantum Corp.

Palo Alto, California

USA

 

GILAD SHAINER

NVIDIA

Santa Clara, CA

USA

 

THOMAS STERLING

President & CSO

Simultac LLC

Bloomington, IN

formerly

AI Computing Systems Laboratory (AICSL)

School of Informatics, Computing, and Engineering

Indiana University

Bloomington, IN

USA

 

FRED STREITZ

Center for Forecasting and Outbreak Analytics (CFA/CDC)

USA

and

National AI Research Resource Task Force (NAIRR-TF)

USA

and

Lawrence Livermore National Laboratory (LLNL/DOE)

Livermore, California

USA

 

SERGII STRELCHUK

Department of Applied Mathematics and Theoretical Physics

and

Centre for Quantum Information and Foundations

University of Cambridge

Cambridge

UK

 

WILLIAM TANG

Princeton University Dept. of Astrophysical Sciences

Princeton Plasma Physics Laboratory

and

Center for Statistics and Machine Learning (CSML)

and

Princeton Institute for Computational Science & Engineering (PICSciE)

Princeton University

USA

 

MICHELA TAUFER

The University of Tennessee

Electrical Engineering and Computer Science Dept.

Knoxville, TN

USA

 

MIWAKO TSUJI

RIKEN Center for Computational Science

Kobe

JAPAN

 

ERIC VAN HENSBERGEN

ARM Research

Austin, TX

USA

 

NATALIA VASSILIEVA

Cerebras Systems

Sunnyvale, CA

USA

 

ANDREW WHEELER

HPE Fellow & VP

Hewlett Packard Labs

Fort Collins, CO

USA

 

 

Workshop Agenda

Monday, June 26th

 

Session

Time

Speaker/Activity

9:45 – 10:00

Welcome Address

Session I

State of the art and future scenarios

 

10:00 – 10:30

T. STERLING

Active Memory Architecture

 

10:30 – 11:00

S. DOSANJH

Towards a Unified Infrastructure for Computation, Experimental Data Analysis and AI

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

A. WHEELER

What Comes After Exascale?

12:00 – 12:30

R. HOEKSTRA

The Promise of Neuromorphic Computing

12:30 – 12:45

CONCLUDING REMARKS

Session II

 

Emerging Computer Systems and Solutions

 

17:00 – 17:30

Y. NAKAMURA

Simulated annealing on HPC vs. quantum annealing: qualitative and quantitative analysis

 

17:30 – 18:00

V. GETOV

New Frontiers in Energy-Efficient Application-Architecture Co-Design of Multi-Core Processors

 

18:00 – 18:30

E. VAN HENSBERGEN

Addressing Heterogeneity and Disaggregation in Future Ecosystems

18:30 – 19:00

COFFEE BREAK

 

19:00 – 19:30

V. RIZZO

How AI is unlocking the potential of the Metaverse

19:30 – 19:45

CONCLUDING REMARKS

 

 

Tuesday, June 27th

 

Session

Time

Speaker/Activity

Session III

Advances in HPC Technology and Systems, Architecture and Software

 

9:30 – 10:00

F. STREITZ

The ADMIRRAL Project

 

10:00 – 10:30

J. AHRENS

To Exascale and Beyond: Accomplishments and Challenges for Large Scale Scientific Visualization

 

10:30 – 11:00

T. IMAMURA

Numerical challenges and libraries: from large-scale capacity computing to capability computing on Fugaku

 

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

H. KOBAYASHI

Potential and Limitations of Quantum Annealing as an Accelerator for Conventional HPC

 

12:00 – 12:30

R. BRIGHTWELL

Evaluation of HPC Workloads Running on Open-Source RISC-V Hardware

 

12:30 – 12:45

CONCLUDING REMARKS

Session IV

 

BIG DATA Processing: Challenges and Perspectives

 

17:00 – 17:30

V. PASCUCCI

The National Science Data Fabric: Democratizing Data Access for Science and Society

 

17:30 – 18:00

G. SHAINER

Addressing HPC/AI Performance Bottlenecks with BlueField Data Processing Units

 

18:00 – 18:30

P. OSTER

Accelerated Computing with EuroHPC LUMI - a Research Infrastructure for Advanced Computing

 

18:30 – 19:00

COFFEE BREAK

 

19:00 – 19:30

F. BAETKE

Open-Source for HPC and AI - The File System Example

19:30 – 19:45

CONCLUDING REMARKS

 

 

Wednesday, June 28th

 

Session

Time

Speaker/Activity

Session V

AI on HPC Platforms

 

9:15 – 9:40

W. TANG

Impact of Advances in HPC/AI/Machine Learning on Fusion Energy Development

 

9:40 – 10:05

M. MUELLER

Advanced Use Cases of Reconfigurable Dataflow Architecture in Science

 

10:05 – 10:30

N. VASSILIEVA

Training Large Language Models on Cerebras Wafer Scale Clusters

 

10:30 – 10:55

M. MORAES

Molecular Dynamics + Machine Learning = Deeper Insight for Drug Discovery

 

10:55 – 11:25

COFFEE BREAK

Session VI

The QUANTUM COMPUTING Promises 1

 

11:30 – 11:55

J. GING

Advances in Atomic Array Quantum Computing

 

11:55 – 12:20

A. DZURAK

Quantum Processing based on Silicon-CMOS technology

 

12:20 – 12:45

K. OBENLAND

Developing and Analyzing Quantum Computing Circuits for Applications in Physical Science

 

12:45 – 13:00

CONCLUDING REMARKS

Session VII

 

The QUANTUM COMPUTING Promises 2

 

17:00 – 17:25

S. MANDRA

Improved Simulations of Random Quantum Circuits

 

17:25 – 17:50

S. STRELCHUK

Simulating quantum circuits using efficient tensor network contraction algorithms with subexponential upper bound

 

17:50 – 18:15

V. GHEORGHIU

What does it take to run a quantum algorithm?

 

18:15 – 18:45

COFFEE BREAK

 

18:45 – 19:10

M. DOHERTY

Hybrid computing using a diamond quantum computer directly integrated into a supercomputer and the pathway to massive parallelization and hybridization

 

19:10 – 19:35

M. TSUJI

Quantum HPC Hybrid Computing Platform toward Cooperative Computation of Classical and Quantum Computers

 

19:35 – 19:45

CONCLUDING REMARKS

 

 

Thursday, June 29th

 

Session

Time

Speaker/Activity

 

Session VIII

The Quantum Computing Promises 3

 

 

9:30 – 9:55

J. CHOW

The next wave of computing, quantum-centric supercomputing

 

 

9:55 – 10:20

S. BOIXO

Quantum Computing at Google

 

 

10:20 – 10:45

J. MUTUS

Algorithm specific resource estimates for fault tolerant applications on superconducting qubit architectures

 

 

10:45 – 11:10

M. SAFFMAN

Circuit model quantum computing with neutral atom arrays

 

 

11:10 – 11:40

COFFEE BREAK

 

 

11:40 – 12:05

F. BRANDAO

Building a Concatenated Bosonic Logical Qubit

 

 

12:05 – 12:30

P. SHADBOLT

A manufacturable platform for fault-tolerant photonic quantum computing

 

 

12:30 – 12:45

CONCLUDING REMARKS

 

Session IX

The Quantum Computing Promises 4

 

 

17:00 – 17:25

D. DRAGONI

QUANTUM COMPUTING at Leonardo: an industrial end-user standpoint

 

 

17:25 – 17:50

N. ITO

HPC-QC hybrid challenge on the Fugaku

 

 

17:50 – 18:15

K. KUROWSKI

t.b.a.

 

18:15 – 18:45

COFFEE BREAK

 

18:45 – 19:45

PANEL DISCUSSION

“The Intersection of Quantum Computing and HPC”

 

Chairperson: Rupak BISWAS, NASA Ames Research Center

 

Panelists: Sergio Boixo, Jerry Chow, Nobuyasu Ito (t.b.c.), Pete Shadbolt, William Tang, Andrew Wheeler

 

During the past several decades, supercomputing speeds have gone from Gigaflops to Teraflops, to Petaflops and Exaflops. As the end of Moore’s law approaches, the HPC community is increasingly interested in disruptive technologies that could help continue these dramatic improvements in capability. This interactive panel will identify key technical hurdles in advancing quantum computing to the point it becomes useful to the HPC community. Some questions to be considered:

 

·       When will quantum computing become part of the HPC infrastructure?

·       What are the key technical challenges (hardware and software)?

·       What HPC applications might be accelerated through quantum computing?


Is the “belle époque” of classical High Performance Computer Systems coming to an end?

 

 

Friday, June 30th

 

Session

Time

Speaker/Activity

Session X

Key Projects, Novel Developments and Challenging Applications

9:30 – 10:00

M. TAUFER

Building Trust in Scientific Applications through Data Traceability and Results Explainability

10:00 – 10:30

W. GENTZSCH

Latest Trends and Developments in Cloud HPC

 

10:30 – 11:00

T. SCHULTHESS

Piz Daint on Alps: a modern day view of extreme computing and data in science

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

N. PETKOV

Machine learning based prediction of excess returns of stocks

 

12:00 – 12:30

S. MARKIDIS

Plasma-PEPSC: Enabling Exascale Simulations for Plasma Science Grand Challenges

12:30 – 12:45

CONCLUDING REMARKS

 

 

 

Chairpersons

 

 

SESSION I

 

WOLFGANG GENTZSCH

The UberCloud

GERMANY AND USA

 

SESSION II

 

ELAD RAZ

NextSilicon

ISRAEL

 

SESSION III

 

THOMAS STERLING

Simultac LLC

USA

 

SESSION IV

 

VLADIMIR GETOV

University of Westminster

UNITED KINGDOM

 

SESSION V

 

ROBERT HOEKSTRA

Sandia National Laboratories

USA

 

SESSION VI

 

MARK SAFFMAN

INFLEQTION and University of Wisconsin

USA

 

SESSION VII

 

HIROAKI KOBAYASHI

Architecture Laboratory

Tohoku University

JAPAN

 

SESSION VIII

 

RUPAK BISWAS

NASA Ames Research Center

USA

 

SESSION IX

 

STEFANO MARKIDIS

KTH Royal Institute of Technology

SWEDEN

 

SESSION X

 

VLADIMIR GETOV

University of Westminster

UNITED KINGDOM

 

 

 

Panel

 

 

“The Intersection of Quantum Computing and HPC”

Thursday, June 29th

18:45 – 19:45

 

Chairperson:

Rupak BISWAS, NASA Ames Research Center

 

Panelists:

Sergio Boixo, Jerry Chow, Nobuyasu Ito (t.b.c.), Pete Shadbolt, William Tang, Andrew Wheeler

 

During the past several decades, supercomputing speeds have gone from Gigaflops to Teraflops, to Petaflops and Exaflops. As the end of Moore’s law approaches, the HPC community is increasingly interested in disruptive technologies that could help continue these dramatic improvements in capability. This interactive panel will identify key technical hurdles in advancing quantum computing to the point it becomes useful to the HPC community. Some questions to be considered:

 

  • When will quantum computing become part of the HPC infrastructure?
  • What are the key technical challenges (hardware and software)?
  • What HPC applications might be accelerated through quantum computing?

 

 

 

 

Abstracts

To Exascale and Beyond: Accomplishments and Challenges for Large Scale Scientific Visualization

 

James Ahrens

Los Alamos National Laboratory, Los Alamos, NM, USA

 

Short Abstract

Highlighting accomplishments from exascale visualization projects and presenting a vision of how to support visual analysis for the evolving modern scientific process.

 

Long Abstract

Visualization plays a critical role in the scientific understanding of the massive streams of data from scientific simulations and experiments. Continued growth in the performance and availability of large-scale supercomputing resources (e.g. exascale soon, and faster over the next decade) enables both increasing simulation resolutions and an increasing number and breadth of simulation ensemble runs. In the modern scientific process these simulation ensembles are verified for correctness and then validated with experimental ensembles to increase our overall scientific knowledge. Effective visualization of the verification and validation (V&V) process is a significant challenge. Additional challenges include the significant gap between supercomputer processing and data storage speeds. In this talk, I will highlight current accomplishments from the U.S. Exascale Computing Project to address these challenges, including high-dimensional visual analysis, comparative visualization, in situ visualization, portable multi-threaded visualization algorithms, and automated techniques. I will present a vision of a set of needed initiatives to support the visual understanding of the complex and evolving modern scientific process.

 

Bio

Dr. James Ahrens is the director of the Information Science Technology Institute at Los Alamos National Laboratory. He is also the Department of Energy Exascale Computing Project (ECP) Data and Visualization lead for seven storage, data management and visualization projects that will be a key part of a vibrant exascale supercomputing application and software ecosystem. His research interests include visualization, data science and parallel computing. Dr. Ahrens is the author of over 120 peer-reviewed papers and the founder/design lead of ParaView, an open-source visualization tool designed to handle extremely large data. ParaView is broadly used for scientific visualization and is in use at supercomputing and scientific centers worldwide. Dr. Ahrens received his B.S. in Computer Science from the University of Massachusetts at Amherst in 1989 and a Ph.D. in Computer Science from the University of Washington in 1996. Dr. Ahrens is a member of the IEEE and the IEEE Computer Society. Contact him at ahrens@lanl.gov.

 

Back to Session III

Open-Source for HPC and AI - The File System Example

 

Frank Baetke

EOFS, European Open File System Organization, Germany

 

This talk will address aspects of parallel filesystems, which are key components of today’s HPC infrastructures but seem to be a bit underrepresented in discussions compared to the hardware architectures of current and future high-end systems.

Three open-source filesystems used today in production-ready HPC systems will be used as examples of open-source designs. Development and test concepts that have been classified as the “Cathedral” and the “Bazaar” approach will be discussed.

It will also be shown to what extent we will see either competition with proprietary offerings or potential for cooperation to cover different and sometimes overlapping requirements. The role of academic environments and curricula, with a focus on filesystems specifically and on HPC middleware in general, will also be covered.

Note that this talk is not a comparison of features, roadmaps and performance numbers of different products and offerings.

Back to Session IV

Quantum Computing at Google

 

Sergio Boixo

Google, Quantum Artificial Intelligence Laboratory, Google AI

 

The Google Quantum AI group's long-term objective is to develop a fault-tolerant universal quantum computer. Last year, we experimentally demonstrated that quantum error correction begins to improve performance as the number of qubits in a scalable error-correcting code increases. At the same time, our experimental quantum processor outperformed state-of-the-art supercomputers in one specific benchmark. I will also review some recent scientific demonstrations.

 

Back to Session VIII

Building a Concatenated Bosonic Logical Qubit

 

Fernando Brandao

California Institute of Technology (Caltech) and Director Quantum Applications at Amazon-AWS, Los Angeles, CA, USA

 

I’ll discuss recent progress towards building a logical qubit concatenating a bosonic qubit with an outer quantum error correcting code.

 

Back to Session VIII

Evaluation of HPC Workloads Running on Open-Source RISC-V Hardware

 

Ronald Brightwell

SANDIA National Laboratories Center for Computing Research, USA

 

The emerging RISC-V ecosystem has the potential to improve the speed, fidelity, and quality of hardware/software co-design R&D activities. However, the suitability of the RISC-V ecosystem for co-design targeting HPC use cases is not yet well understood. This talk examines the performance of several HPC benchmark workloads running on simulated open-source RISC-V cores under the FireSim FPGA-accelerated simulation tool. To provide a realistic and reproducible HPC software stack, we ported the Spack package manager to RISC-V and used it to build our workloads. Our key finding is that each of the RISC-V cores evaluated can run complex HPC workloads executing for many trillions of instructions under simulation at rates of approximately 1/50th real-time. Additionally, we provide a baseline set of performance results for comparison in future studies. Our results highlight the readiness of the RISC-V ecosystem for performing open co-design activities for HPC. We expect performance to improve as co-design activities targeting RISC-V increase and the RISC-V community continues to make advancements.
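
To put the reported simulation rate in perspective, the wall-clock cost of such runs is simple arithmetic. A minimal Python sketch, assuming an illustrative 1 GHz simulated clock and an IPC of 1 (neither figure is from the talk):

    import datetime

    SLOWDOWN = 50           # ~1/50th real-time, as reported in the abstract
    TARGET_CLOCK_HZ = 1e9   # assumed simulated-core clock (illustrative)
    INSTR_PER_CYCLE = 1.0   # assumed IPC (illustrative)

    def simulation_walltime(instructions: float) -> datetime.timedelta:
        """Wall-clock time to simulate a given instruction count."""
        target_seconds = instructions / (TARGET_CLOCK_HZ * INSTR_PER_CYCLE)
        return datetime.timedelta(seconds=target_seconds * SLOWDOWN)

    # "Many trillions of instructions": e.g. 5e12 instructions
    print(simulation_walltime(5e12))   # 2 days, 21:26:40 under these assumptions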

Back to Session III

The next wave of computing, quantum-centric supercomputing

 

Jerry M. Chow

IBM Fellow and Director of Quantum Infrastructure, IBM Quantum, T. J. Watson Research Center, USA

 

The last few years have witnessed a strong evolution in quantum computing technologies, moving from research labs to unprecedented access by the general public via the cloud. Recent progress in quantum processor size, speed, and quality has cleared the picture towards a long-term vision in computing, where quantum processors will play a key role in extending the computational reach of supercomputers. In this talk I will describe how modularity will enable scaling, and how quantum communication will increase computational capacity. All of this is orchestrated by a hybrid cloud middleware for quantum with seamless integration of classical and quantum workflows, in an architectural construct that we call a quantum-centric supercomputer.

 

Back to Session VIII

Hybrid computing using a diamond quantum computer directly integrated into a supercomputer and the pathway to massive parallelization and hybridization

 

Marcus Doherty

Chief Scientist, Quantum Brilliance, Canberra, Australia

 

Quantum Brilliance is the world’s largest diamond quantum computing company. Quantum Brilliance is exploiting the remarkable ability of diamond qubits to operate in ambient conditions to pursue the development of quantum accelerators: compact quantum computers that are the same size, weight and power as CPUs/GPUs and uniquely capable of massive parallelization in high-performance computing (HPC) and deployment in edge computing. Quantum Brilliance is unique in already integrating desktop-sized quantum computers into supercomputing systems, and in its release of the software development kit Qristal, which is specifically designed for massive parallelization and deep hybridization of classical and quantum computers.

In this presentation, I will first report key lessons learned from integrating quantum computers into HPC systems, as well as a demonstration of hybrid computational chemistry using such systems, which attained the decisive goal of chemical accuracy. I will then introduce the concept of Quantum Utility and how this informs in which applications quantum accelerators will deliver the greatest and earliest advantage in HPC and edge computing. I will finally outline the pathway forward for further miniaturizing diamond quantum computers, whilst simultaneously increasing their qubit numbers and further engineering their integration and hybridization with HPC systems.

 

Back to Session VII

Towards a Unified Infrastructure for Computation, Experimental Data Analysis and AI

 

Sudip S. Dosanjh

Lawrence Berkeley National Laboratory, USA

 

NERSC’s mission is to accelerate scientific discovery at the U.S. Department of Energy (DOE) Office of Science through high performance computing and data analysis. NERSC supports the largest and most diverse research community of any supercomputing facility within the U.S., providing large-scale, state-of-the-art computing for unclassified research programs in alternative energy sources, environmental science, materials research, astrophysics and other science areas related to DOE’s science mission.

 

Data-intensive computing has been of growing importance at NERSC. Considerably more data is transferred to NERSC than away from NERSC. Experimental facilities are being inundated with data due to advances in detectors, sensors and sequencers — in many cases these instruments are improving at a rate even faster than Moore’s law for semiconductors. Scientists are finding it increasingly difficult to analyze these large scientific data sets and, as a consequence, they are often transferring data to supercomputing centers like NERSC. Examples range from cosmology to particle physics to biology. Berkeley Lab is partnering with other institutions to create a Superfacility for Science through advanced networking, the development of new supercomputing technologies and advances in software and algorithms.  The goal is to integrate experimental and observational facilities and supercomputing centers through the ESnet network.

 

Supercomputers at NERSC are increasingly being designed to support complex workflows that combine computation, experimental data analysis and AI. This presentation discusses some of the workflows driving the design of NERSC-10, which will be deployed in 2026, and their architectural implications. We have also started pathfinding for NERSC-11, which will be deployed in 2030+. It is possible that this system will be even more heterogeneous than previous systems and may include an array of different accelerators.

Back to Session I

QUANTUM COMPUTING at Leonardo: an industrial end-user standpoint

 

Daniele Dragoni

Leonardo S.p.A., High Performance Computing Lab, Genova, ITALY

 

Quantum Computing is an emerging paradigm that offers the potential to solve complex problems that are considered intractable within the classical/digital computing domain. Although no quantum advantage has yet been demonstrated on practical problems, many industries have already started to investigate the potential benefits associated with this technology in an attempt to gain competitive advantages in their sector of reference.

 

In this talk, I will present the approach Leonardo is taking to concretely assess the potential and limitations associated with QC in the aerospace, security, and defense sector. I will discuss our positioning with respect to QC from an industrial end-user perspective, introducing examples of activities and use cases that we are currently pursuing via combined HPC-QC methodologies as part of a national strategy.

 

Back to Session IX

Quantum Processing based on Silicon-CMOS technology

 

Andrew Dzurak

UNSW, Sydney, Australia

Diraq, Sydney, Australia

 

In this talk I will discuss the advantages and challenges facing the development of quantum computers employing spin-based quantum processors that can be manufactured using industry-standard silicon CMOS technology. I will begin by discussing the development of SiMOS quantum dot qubits, including the demonstration of high-fidelity single-qubit gates [1], the first demonstration of a two-qubit logic gate [2], and assessments of silicon qubit fidelities [3,4]. I will then explore the technical issues related to scaling a CMOS quantum processor [5] up to the millions of qubits that will be required for fault-tolerant QC, including demonstrations of silicon qubit operation above one kelvin [6] and the use of global microwave fields capable of controlling millions of qubits [7].

 

References

[1] M. Veldhorst et al., Nature Nanotechnology 9, 981 (2014).

[2] M. Veldhorst et al., Nature 526, 410 (2015).

[3] H. Yang et al., Nature Electronics 2, 151 (2019).

[4] W. Huang et al., Nature 569, 532 (2019).

[5] M. Veldhorst et al., Nature Communications 8, 1766 (2017).

[6] H. Yang et al., Nature 580, 350 (2020).

[7] Vahapoglu et al., Science Advances 7, eabg9158 (2021).

 

Back to Session VI

Latest Trends and Developments in Cloud HPC

 

Wolfgang Gentzsch

The UberCloud, Regensburg, Germany and Sunnyvale, CA, USA

 

Market analysts like Hyperion and Intersect360 Research predict continuous growth of Cloud HPC of 20% annually, while on-premise HPC is growing 6.8% over the coming years. Another study among 740 engineers found that 24% of respondents are using the cloud for engineering simulation today, with another 24% planning to use it over the next 12 months. And because HPC is now more and more entering relatively new fields such as digital twins, big data analytics, machine learning, natural language processing, edge computing, predictive maintenance, and more, Hyperion at ISC stated that Cloud HPC will also benefit from this trend and predicted that 2024 will be another tipping point for Cloud HPC.

 

There are several reasons for this accelerating trend. In our presentation, we will discuss the following: ease of access and use of Cloud HPC resources; flexible cloud services are more agile than on-premise hardware; energy cost for hyperscalers is generally cheaper than for on-premise data centers; the fastest processors, e.g. from AMD and NVIDIA, are more easily available in the cloud, and specific processors, especially for AI, are only available in the cloud; high failure rates from do-it-yourself approaches can be dramatically reduced when collaborating with cloud infrastructure and experienced cloud service providers; and (last but not least) there is a growing number of published Cloud HPC success stories. Finally, we will demonstrate several of these items with one specific Cloud HPC technology which provides all these benefits.

 

Back to Session X

New Frontiers in Energy-Efficient Application-Architecture Co-Design of Multi-Core Processors

 

Vladimir Getov

Distributed and Intelligent Systems Research Group, University of Westminster, London, U.K.

 

Over the last two decades, further developments of computer architecture and microprocessor hardware have been hitting the so-called “energy wall” because of their excessive demands for more energy. Subsequently, we have been ushering in a new era with electric power and temperature as the primary concerns for scalable computing. Therefore, significantly reducing the energy consumption for data processing and movement has been the most important challenge towards achieving higher computer performance at exascale level and beyond. This is a very difficult and complex problem which requires revolutionary disruptive methods with a stronger integration among hardware features, system software and applications. Moreover, the interplay between power, temperature and performance adds another layer of complexity to this already difficult group of challenges.

Since existing methodologies and tools are limited by hardware capabilities and their lack of information about the application code, a promising approach is to consider together the characteristics of both the processor and the application-specific workload. Indeed, it is pivotal for hardware to expose mechanisms for optimizing dynamically consumed power and thermal energy for various workloads and for reducing data motion, a major component of energy use. Therefore, our thermal energy model is based on application-specific parameters such as consumed power, execution time, and equilibrium temperature as well as hardware-specific parameters such as the half time for thermal rise or fall. As observed with both out-of-band and in-band instrumentation and monitoring infrastructures on our experimental cluster, the temperature changes follow a relatively slow capacitor-style charge-discharge process. Therefore, we use the lumped thermal model that initiates an exponential process whenever there is a change in the processor's power consumption.

In our recent work we have also been investigating the use of barium titanate-based materials for building an intelligent thin film thermoelectric (TFTE) converter attached to a multi-core processor with dynamic workload management. Reviewing and comparing the specific properties of barium titanate-based materials confirms the potential to achieve a rapid heating-cooling cycle and therefore recover substantial wasted heat per unit time. Building upon these initial results, the ongoing and future research efforts involve the development of a novel tuning methodology and the evaluation of its advantages in real use cases. Early experiments demonstrate the efficient use of the model for analyzing and significantly improving the application-specific balance between power, temperature, and performance.
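
A minimal sketch of the lumped thermal model described above, assuming a single exponential time constant derived from the hardware-specific half time for thermal rise or fall (all numerical values are illustrative, not measurements from the cluster):

    import math

    def lumped_temperature(t, T_start, T_equilibrium, half_time):
        """Capacitor-style charge/discharge: after a step change in consumed
        power, temperature relaxes exponentially from T_start toward the
        workload's equilibrium temperature T_equilibrium."""
        tau = half_time / math.log(2)   # time constant from the half time
        return T_equilibrium + (T_start - T_equilibrium) * math.exp(-t / tau)

    # Illustrative step: a processor at a 45 C idle equilibrium starts a
    # workload whose equilibrium temperature is 80 C, with a 30 s half time.
    for t in (0, 30, 60, 120, 300):
        print(f"t = {t:3d} s  T = {lumped_temperature(t, 45.0, 80.0, 30.0):.1f} C")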

Back to Session II

What does it take to run a quantum algorithm?

 

Vlad Gheorghiu

Institute for Quantum Computing, University of Waterloo and SoftwareQ Inc, Waterloo, Canada

 

Software engineers know well that asymptotically optimal algorithms can be outperformed by alternatives in practice; the O(n log n) time algorithm for integer multiplication is not necessarily the best algorithm for multiplying 64-bit integers. With that in mind: Does a known quantum algorithm outperform its classical counterpart in practice? E.g., does Grover search outperform classical exhaustive search for some given objective function? And if so, how much of an advantage does it provide? A satisfactory answer will depend on future technological progress. Nevertheless, we can begin to estimate the cost of particular quantum circuits using current proposals for quantum architectures. In this talk I will discuss the resources required for quantum computation using the surface code and how to realistically estimate the 'quantum advantage' provided by a quantum algorithm.
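
The flavor of such an estimate can be conveyed with a toy query-count comparison; the sketch below assumes an illustrative constant overhead per fault-tolerant oracle call, which is a placeholder rather than a figure from the talk:

    import math

    def classical_queries(n_bits: int) -> float:
        """Expected objective-function evaluations for exhaustive search."""
        return 2 ** n_bits / 2

    def grover_queries(n_bits: int) -> float:
        """Oracle calls for Grover search, ~(pi/4) * sqrt(N)."""
        return (math.pi / 4) * math.sqrt(2 ** n_bits)

    # Placeholder: a fault-tolerant quantum oracle call may cost many orders
    # of magnitude more wall-clock time than a classical evaluation.
    QUANTUM_OVERHEAD = 1e6

    for n in (32, 48, 64):
        c, q = classical_queries(n), grover_queries(n)
        print(f"n={n}: classical {c:.2e}, Grover {q:.2e}, "
              f"net advantage incl. overhead: {c / (q * QUANTUM_OVERHEAD):.2e}x")

Under these assumptions, the quadratic speedup pays off only once the search space is large enough to amortize the per-call overhead.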

 

Back to Session VII

Advances in Atomic Array Quantum Computing

 

Justin Ging

Atom Computing, USA

 

Scalability is key for quantum computer design and implementation. Atom Computing's atomic array devices offer advances toward large gate model quantum computers. We will show results from multiple atomic array quantum processors supporting progress toward NISQ algorithm implementation and error correction to carry us beyond the NISQ era.

Back to Session VI

The Promise of Neuromorphic Computing

 

Robert Hoekstra

Sandia National Laboratories, USA

 

Neuromorphic computing (NMC) is an emerging paradigm that aims to emulate the brain’s architecture and algorithms to achieve transformational computational capabilities at brain-like (~20 watt) power levels. While NMC potentially provides a path for AIs with more human-like capabilities, the primary value to DOE in the near-to-medium term is the promise of an extremely low-power alternative to conventional computing approaches. NMC algorithms have recently been developed that enable efficient and effective AI and scientific computing. The AI applications are of particular interest in edge computing, where NMC can provide a more efficient, robust and low-power alternative to conventional AI algorithms and accelerators. In scientific computing, NMC is potentially valuable for more diverse computing workloads and can deliver both power and speed advantages due to its extremely parallel non-von Neumann architecture. These include applications highly relevant to DOE’s broad scientific computing missions, including Monte Carlo sampling for solving stochastic differential equations, graph analytics, and discrete optimization.

Back to Session I

Numerical challenges and libraries: from large-scale capacity computing to capability computing on Fugaku

 

Toshiyuki Imamura

RIKEN Center for Computational Science, Japan

 

Fugaku is a platform that enables comprehensive and all-encompassing support for state-of-the-art scientific computing, AI, and quantum computation, utilizing various toolchains as part of the Arm ecosystem. The role of numerical libraries is to provide shortcuts to problems, with high accuracy and speed, for complex algorithms. Our team has been developing an eigenvalue calculation library and deepening cooperation with the materials science field. This library enables the diagonalization of dense matrices of one million dimensions using Fugaku and assists in more precise analysis than approximations in reduced spaces for large-scale capability computing. Capacity computing, from another point of view, is also accelerated by our libraries, for example through batched eigensolvers. These technologies consolidate an advanced quantum simulator project running in our center. We are also deepening our research on computational precision with partner organizations and promoting an approach that cooperatively improves both computational accuracy and computation time by skilfully utilizing mixed-precision arithmetic. In this session, we will report on the status of our eigenvalue solver EigenExa, also highlighting batch processing libraries, FFTE-C, various mixed-precision arithmetic libraries, and the deepening of numerical libraries that support large-scale applications.
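
The mixed-precision idea can be illustrated in a few lines of NumPy: compute a cheap approximate eigenpair in single precision, then refine it in double precision (a toy sketch, not EigenExa's actual algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))
    A = (A + A.T) / 2                      # symmetric test matrix

    # Step 1: cheap approximate solve in single precision.
    w32, V32 = np.linalg.eigh(A.astype(np.float32))
    v = V32[:, 0].astype(np.float64)       # approximate eigenvector, promoted

    # Step 2: refine in double precision with Rayleigh-quotient iteration.
    for _ in range(3):
        mu = v @ A @ v                     # Rayleigh quotient (float64)
        v = np.linalg.solve(A - mu * np.eye(500), v)
        v /= np.linalg.norm(v)

    w64 = np.linalg.eigh(A)[0][0]          # reference double-precision value
    print(f"single-precision error: {abs(w32[0] - w64):.2e}")
    print(f"after refinement      : {abs(v @ A @ v - w64):.2e}")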

Back to Session III

HPC-QC hybrid challenge on the Fugaku

 

Nobuyasu Ito

RIKEN Center for Computational Science, Japan

 

The current status of the HPC-QC hybrid activities at R-CCS and of RIKEN Quantum will be overviewed, together with a focus on QC simulation on Fugaku. A state-vector simulator, “braket” [1], has been developed for HPC [2,3] and is now being tuned for Fugaku. It makes 40-qubit-scale simulation easy, with an execution time of about one second or less per gate, and it will reach 48-qubit simulation in double precision using the full nodes of Fugaku. As an example, the estimation of the ground-state energy of a spin-1/2 Heisenberg chain will be shown for up to 40 spins using 41 circuits of 40 qubits, which implies the exact calculation of 1641-qubit circuits. Another QC simulator using the tensor network method is also being developed, and it will reach 10,000-qubit simulation with moderately entangled circuits.
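
The memory wall that makes roughly 48 qubits the practical limit for double-precision state-vector simulation is simple arithmetic (Fugaku's aggregate memory, about 5 PiB, is an approximate public figure):

    def state_vector_bytes(n_qubits: int) -> int:
        """A full state vector holds 2**n complex amplitudes; double
        precision means 16 bytes (two 8-byte floats) per amplitude."""
        return (2 ** n_qubits) * 16

    for n in (40, 44, 48):
        tib = state_vector_bytes(n) / 2 ** 40
        print(f"{n} qubits: {tib:,.0f} TiB")

    # 40 qubits:    16 TiB  -> a modest slice of a large machine
    # 48 qubits: 4,096 TiB  -> roughly Fugaku-scale aggregate memory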

 

Back to Session IX

Potential and Limitations of Quantum Annealing as an Accelerator for Conventional HPC

 

Hiroaki Kobayashi

Department of Computer and Mathematical Sciences, Graduate School of Information Sciences, Tohoku University, JAPAN

 

In this talk, I will be presenting our ongoing project entitled Quantum-Annealing Assisted Next-Generation HPC Infrastructure. In this project, we try to realize transparent access not only to classical HPC resources with heterogeneous computing platforms, such as x86 and vector accelerators, but also to quantum computing resources, in a unified fashion.

 

Within the project, I am focusing on different types of annealing machines: quantum annealing machines and quantum-inspired ones. Through an evaluation using combinatorial clustering as a benchmark program, I will discuss the potential and limitations of annealing machines as accelerators for conventional HPC infrastructure.
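
To hand a clustering task to an annealer, it is first cast as a QUBO (quadratic unconstrained binary optimization). A minimal sketch of one such formulation, using one-hot assignment variables; the distance-based objective and penalty weight are illustrative, not the project's exact model:

    import numpy as np

    def clustering_qubo(D: np.ndarray, k: int, penalty: float) -> np.ndarray:
        """QUBO for assigning n points to k clusters.
        Binary variable x[i*k + c] = 1 iff point i is in cluster c.
        Objective: pairwise distances within each cluster, plus a
        penalty enforcing exactly one cluster per point."""
        n = D.shape[0]
        Q = np.zeros((n * k, n * k))
        # Intra-cluster distance terms: D[i, j] * x[i, c] * x[j, c]
        for i in range(n):
            for j in range(i + 1, n):
                for c in range(k):
                    Q[i * k + c, j * k + c] += D[i, j]
        # One-hot penalty per point: penalty * (sum_c x[i, c] - 1)**2
        for i in range(n):
            for c in range(k):
                Q[i * k + c, i * k + c] -= penalty           # linear part
                for c2 in range(c + 1, k):
                    Q[i * k + c, i * k + c2] += 2 * penalty  # quadratic part
        return Q

The resulting matrix can then be minimized by a quantum annealer or by the classical simulated-annealing sketch that follows the Nakamura abstract below.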

Back to Session III

Improved Simulations of Random Quantum Circuits

 

Salvatore Mandra

Senior Research Scientist and Task Lead, Quantum Artificial Intelligence Lab (QuAIL), KBR, Inc., NASA, Ames Research Center, CA, USA

 

In the past few years, numerical techniques to classically simulate quantum circuits, in particular random circuit sampling (RCS), have steadily improved. In my presentation, I will present our latest RCS result [arXiv:2304.11119], with particular attention to the numerical simulation of Sycamore-like circuits using tensor network contraction and matrix product states.
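
The tensor-network view of circuit simulation can be shown on a toy circuit: contract the gate tensors of a two-qubit Bell-state circuit directly with np.einsum and check the result against a plain state-vector computation (real RCS simulations contract vastly larger networks with carefully optimized contraction orderings):

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    e0 = np.array([1.0, 0.0])                # |0> for each qubit

    # Tensor-network contraction: reshape CNOT to a rank-4 tensor indexed as
    # (out_ctrl, out_tgt, in_ctrl, in_tgt) and contract the whole network.
    cnot_t = CNOT.reshape(2, 2, 2, 2)
    psi_tn = np.einsum('ctij,ia,a,j->ct', cnot_t, H, e0, e0)

    # Reference: plain state-vector simulation with matrix products.
    psi_sv = CNOT @ np.kron(H, np.eye(2)) @ np.kron(e0, e0)

    print(psi_tn.reshape(-1))                          # [0.707 0 0 0.707]
    print(np.allclose(psi_tn.reshape(-1), psi_sv))     # True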

 

Back to Session VII

Plasma-PEPSC: Enabling Exascale Simulations for Plasma Science Grand Challenges

 

Stefano Markidis

KTH Royal Institute of Technology, Computer Science Department / Computational Science and Technology Division, Stockholm, Sweden

 

Plasma-PEPSC (Plasma Exascale-Performance Simulation CoE) aims to bring plasma science to new frontiers through the power of exascale computing and extreme-scale data analytics. Our project focuses on maximizing the parallel performance and efficiency of four flagship plasma codes—BIT, GENE, PIConGPU, and Vlasiator—to address critical challenges in plasma science. By leveraging algorithmic advancements in load balancing, resilience, and data compression, along with programming model and library developments such as MPI, accelerator and data movement APIs, and in-situ data analysis, we want to enable unprecedented simulations on current and future exascale platforms. Plasma-PEPSC adopts an integrated HPC software engineering approach, ensuring the deployment, verification, and validation of extreme-scale kinetic plasma simulations that can serve as a community standard. We employ a continuous and integrated co-design methodology, collaborating closely with the EPI Processor, accelerator design and development, and European quantum computing initiatives. In this presentation, I will provide an overview of Plasma-PEPSC, highlighting our objectives, methodologies, and anticipated impact. I will showcase the advancements in plasma science made possible by our project and our collaborative efforts to drive innovation and community-wide adoption of our optimized plasma codes.

 

Back to Session X

Molecular Dynamics + Machine Learning = Deeper Insight for Drug Discovery

 

Mark Moraes

Leader, Engineering Group, D. E. Shaw Research, USA

 

At D. E. Shaw Research, we have designed and built Anton, a massively parallel special-purpose architecture for molecular dynamics simulation.

Now in its third generation, our Anton 3 machines achieve simulation speeds at least 100-fold faster than the fastest general-purpose supercomputers on a wide range of biomolecular systems.

Anton machines are an essential foundational technology for our group’s scientific and drug discovery efforts, which we further augment with deep learning molecular models to identify and optimize drug candidates, and to correct systematic errors in quantum-mechanical approximations.

This talk will describe how we co-design hardware, software and molecular models to enable both research and drug discovery.

Back to Session V

Advanced Use Cases of Reconfigurable Dataflow Architecture in Science

 

Martin Mueller

SambaNova Systems Inc., USA

 

SambaNova Systems developed a novel approach to processing neural-network-like artificial intelligence challenges of nearly arbitrary size and at low latency. This session will briefly introduce the company and its “Reconfigurable Dataflow Architecture”. It focuses on scientific use cases and examples from real-life customers, including very low-latency scenarios and the application of large language models to research problems.

Back to Session V

Algorithm specific resource estimates for fault tolerant applications on superconducting qubit architectures

 

Josh Mutus

Rigetti Computing, Director Quantum Devices, USA/CANADA

 

I will describe in detail what a fault-tolerant quantum computer (FTQC), based on superconducting qubits, might look like. We have developed an architectural model for such a machine, informed by the need to create a framework for benchmarking generic quantum algorithms. By applying methodologies developed in measurement-based quantum computing to separate algorithms into Clifford+T form, we developed a microarchitecture with specialized elements and detailed resource estimates. The resulting resource estimates have reduced overheads compared to existing techniques. The software tooling that accompanies this architecture allows us to compute the space, time, and energy requirements to execute FTQC algorithms, and allows us to examine the tradeoffs between possible embodiments of the architecture.
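
A sketch of the kind of estimate involved, using generic textbook surface-code scaling rather than Rigetti's specific microarchitecture: logical error per operation ~ A*(p/p_th)**((d+1)/2), and roughly 2*d**2 physical qubits per logical qubit (all constants below are assumptions):

    P_PHYS = 1e-3   # assumed physical error rate
    P_TH   = 1e-2   # assumed surface-code threshold
    A      = 0.1    # assumed prefactor

    def code_distance(target_logical_error: float) -> int:
        """Smallest odd distance d whose logical error rate beats the target."""
        d = 3
        while A * (P_PHYS / P_TH) ** ((d + 1) / 2) > target_logical_error:
            d += 2
        return d

    def physical_qubits(logical_qubits: int, op_count: float):
        """Distance and physical-qubit total for an algorithm that must
        survive ~op_count logical operations."""
        d = code_distance(1.0 / op_count)   # keep total failure odds ~O(1)
        return d, logical_qubits * 2 * d * d

    # E.g. a 100-logical-qubit algorithm with ~1e9 T gates:
    d, nq = physical_qubits(100, 1e9)
    print(f"distance {d}, ~{nq:,} physical qubits")   # distance 15, ~45,000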

 

Funding acknowledgment:

“The views, opinions and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.  This research was developed with funding from the Defense Advanced Research Projects Agency under Agreement HR00112230006.”

 

Back to Session VIII

Simulated annealing on HPC vs. quantum annealing: qualitative and quantitative analysis

 

Yuichi Nakamura

Executive Professional, NEC Corporation, Japan

 

Combinatorial optimization problems arise in many of the social problems that need to be solved. According to the “no free lunch” theorem, the best approach would be to develop and apply a special, exclusive method for each target problem; however, we have to solve many problems, and the problems themselves change with social conditions. Therefore, instead of special exclusive methods, which take a long time to develop, a general method that obtains relatively good results across many problems should be applied to the target social problems. One such general method is annealing. Annealing can solve many kinds of combinatorial optimization problems, such as multi-dimensional problems. There are many approaches to annealing, both classical and quantum based. In this talk, evaluations from various investigations comparing simulated annealing on HPC with quantum annealing are presented.
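
A minimal sketch of classical simulated annealing for a QUBO, the problem form both approaches target: minimize x^T Q x over binary x with single-bit-flip Metropolis moves and a geometric cooling schedule (parameters are illustrative; production HPC implementations are heavily parallelized):

    import numpy as np

    def simulated_annealing(Q, sweeps=2000, T_hot=5.0, T_cold=0.01, seed=0):
        """Minimize the QUBO energy x^T Q x over binary vectors x."""
        rng = np.random.default_rng(seed)
        n = Q.shape[0]
        S = Q + Q.T              # (S @ x)[i] collects all couplings of bit i
        x = rng.integers(0, 2, size=n)
        for sweep in range(sweeps):
            T = T_hot * (T_cold / T_hot) ** (sweep / (sweeps - 1))
            for i in rng.permutation(n):
                # Exact energy change of flipping bit i.
                dE = (1 - 2 * x[i]) * (Q[i, i] + S[i] @ x - 2 * Q[i, i] * x[i])
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    x[i] ^= 1
        return x, x @ Q @ x

    # Toy instance: 20-variable QUBO with random upper-triangular couplings.
    Q = np.triu(np.random.default_rng(1).standard_normal((20, 20)))
    x_best, e_best = simulated_annealing(Q)
    print(x_best, e_best)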

Back to Session II

Developing and Analyzing Quantum Computing Circuits for Applications in Physical Science

 

Kevin Obenland

Quantum Information and Integrated Nanosystems, Lincoln Laboratory, Massachusetts Institute of Technology MIT, USA

 

Quantum computing has the potential to fundamentally speed up the solution of numerous problems from physical science. Examples include quantum chemistry, electronic structure systems, materials science, and fluid/plasma dynamics. Small-scale examples of quantum computers exist today, but machines that can provide a meaningful impact on physical science will need to be much larger than today’s machines. Much like classical computers, these future machines will require programs (or circuits) to run on them. These programs will need to be efficient, easy to construct, and applicable across numerous problem instances. In this talk I will describe the process of constructing quantum circuits for these future machines that will hopefully provide “quantum advantage” over classical machines. I will describe a software library (pyLIQTR) that we are developing at MIT Lincoln Laboratory and show how it can be used to understand the cost of creating circuit implementations as well as the quantum computing resources required for specific problems.

 

Back to Session VI

Accelerated Computing with EuroHPC LUMI - a Research Infrastructure for Advanced Computing

 

Per Oster

CSC - IT Center for Science Ltd.

 

Since the installation of the first phase of LUMI in September 2021, the system has been in constant evolution. The second phase, with the installation of AMD GPUs in 2022, took the system to 3rd place on the TOP500.

LUMI consists of a diverse set of resources to accommodate advanced workflows, including accelerated computing with GPUs and quantum computers. The use of AMD GPUs has turned out to be less of a hurdle than anticipated. Accelerated computing in the form of quantum computing (QC) is around the corner, and LUMI has been connected to two quantum computers, in Finland and Sweden, respectively. All of this serves to develop how QC and HPC can be integrated and to give researchers a chance to explore the possibilities of QC. This talk will present how LUMI is exploiting accelerated computing and evolving into a research infrastructure for advanced computing supporting a very diverse set of applications, such as digital twins, training of large language models, and complex life science workflows.

 

Back to Session IV

The National Science Data Fabric: Democratizing Data Access for Science and Society

 

Valerio Pascucci

John R. Parks Endowed Chair, University of Utah, Professor, School of Computing

Faculty, Scientific Computing and Imaging Institute, Director, Center for Extreme Data Management Analysis and Visualization (CEDMAV), USA

 

Effective use of data management techniques to analyze and visualize massive scientific data is a crucial ingredient for the success of any experimental facility, supercomputing center, or cyberinfrastructure that supports data-intensive science. This is particularly true for high-volume/high-velocity datasets and resource-constrained institutions. However, universal data delivery remains elusive, limiting the scientific impact of these facilities.

This talk will present the National Science Data Fabric (NSDF) testbed, which introduces a novel trans-disciplinary data fabric integrating access to and use of shared storage, networking, computing, and educational resources. The NSDF technology addresses the key data management challenges in constructing complex streaming workflows that take advantage of data processing opportunities that may arise while data is in motion. This technology finds practical use in many research and industrial applications, including materials science, precision agriculture, ecology, climate modeling, astronomy, connectomics, and telemedicine. Practical use cases include the real-time data acquisition from an Advanced Photon Source (APS) beamline to allow remote users to monitor the progress of an experiment and direct integration in the Materials Commons community repository. Full integration with Python scripting facilitates the use of external libraries for data processing. For example, hundreds of terabytes of climate modeling data from NASA can be easily distributed and visualized with a Jupyter notebook that I will demonstrate live.

Overall, this leads to building flexible data streaming workflows for massive models without compromising the interactive nature of the exploratory process, the most effective characteristic of discovery activities in science and engineering. The presentation will be combined with a few live demonstrations including running Jupyter notebooks that show (i) how hundreds of terabytes of NASA climate data from the cloud can be easily distributed and visualized on any computer and (ii) how undergraduate students of a minority-serving institution (UTEP) can be provided with real-time access to large-scale materials science data normally used only by established scientists in well-funded research groups.
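
The coarse-to-fine access pattern at the heart of such streaming workflows can be sketched generically; fetch_resolution_level below is a hypothetical placeholder for a data-fabric client call, not NSDF's actual API:

    import numpy as np

    def fetch_resolution_level(level: int, full: np.ndarray) -> np.ndarray:
        """Hypothetical stand-in for a data-fabric request: return the field
        subsampled by a factor of 2**level along each axis."""
        step = 2 ** level
        return full[::step, ::step]

    # Stand-in for a remote multi-terabyte field (e.g. one climate variable).
    field = np.random.default_rng(0).standard_normal((4096, 4096))

    # Progressive refinement: render coarse data immediately, then stream
    # finer levels only while the user keeps exploring the same region.
    for level in (6, 4, 2, 0):
        tile = fetch_resolution_level(level, field)
        print(f"level {level}: shape {tile.shape}, mean {tile.mean():+.4f}")
        # ...hand `tile` to the renderer; stop refining if the view changes.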

 

Bio

Valerio Pascucci is the Inaugural John R. Parks Endowed Chair, the founding Director of the Center for Extreme Data Management Analysis and Visualization (CEDMAV), a Faculty of the Scientific Computing and Imaging Institute, and a Professor of the School of Computing of the University of Utah. Valerio has received the 2022 IEEE VGTC Visualization Technical Achievement Award and the 2022-2023 Distinguished Research Award (DRA) from the University of Utah and was inducted into the IEEE VGTC Visualization Academy in 2022.

Valerio is also the President of ViSOAR LLC, a University of Utah spin-off, and the founder of Data Intensive Science, a 501(c) nonprofit providing outreach and training to promote the use of advanced technologies for science and engineering. Valerio's research interests include Big Data management and analytics, progressive multi-resolution techniques in scientific visualization, discrete topology, and compression. Valerio is the coauthor of more than two hundred refereed journal and conference papers and was an Associate Editor of the IEEE Transactions on Visualization and Computer Graphics.

 

Back to Session IV

How AI is unlocking the potential of the Metaverse

 

Valerio Rizzo

EMEA Head of AI & Subject Matter Expert for Lenovo, Italy

 

The session will focus on Lenovo's business strategy and technological approach to AI and the Metaverse. It will also cover the current state of the art of both technologies and how each contributes to the empowerment and enablement of the other.

 

Speaker bio and LinkedIn profile:

EMEA Head of AI & Subject Matter Expert for Lenovo, Dr. Valerio Rizzo is a key member of an expert team of Artificial Intelligence, Machine Learning, and Deep Learning specialists operating within the EMEA field sales organization and its business development team. He is a recognized expert in the fields of neuroscience and neurophysiology, with a ten-year track record in brain research conducted in Italy and the USA.

 

https://www.linkedin.com/in/valerio-rizzo-phd

 

Back to Session II

Circuit model quantum computing with neutral atom arrays

 

Mark Saffman

Infleqtion, Inc., and University of Wisconsin-Madison, USA

 

Neutral atom arrays have made remarkable progress in the last few years, to the point where they are a competitive platform for scalable circuit model quantum computing. Progress on improving gate fidelities, the design of multi-qubit gate operations, low-crosstalk mid-circuit measurements, and the introduction of neural-network-based signal analysis for improved performance will be presented.

 

Back to Session VIII

Piz Daint on Alps: a modern-day view of extreme computing and data in science

 

Thomas Schulthess

CSCS Swiss National Supercomputing Centre, Lugano, and ETH Zurich, SWITZERLAND

 

High Performance Computing (HPC), i.e. scientific computing where performance matters, has been somewhat intimidating for non-experts. With the recent surge of machine learning, HPC technologies have found massive adoption in the commercial software world, which in turn allows us to make better use of extreme-scale computing and data in scientific workflows. Alps is the name of the new supercomputing infrastructure at CSCS, on which we are providing a practical synthesis of cloud-native, HPC, and AI technologies for science. We will discuss our plans, the opportunities to deal better with large-scale scientific data and its analysis, and what we believe are the main investments that (domain) science communities should be making now.

 

Back to Session X

A manufacturable platform for fault-tolerant photonic quantum computing

 

Pete Shadbolt

Co-founder PsiQuantum Corp., Palo Alto, California, USA

 

PsiQuantum is developing a large-scale, fault-tolerant quantum computer based on integrated photonic components – originally developed for optical networking. In this talk we will describe a manufacturable platform for fault-tolerant photonic quantum computing. This includes a manufacturing capability for integrated photonic chips incorporating low-loss silicon nitride waveguides, spontaneous single-photon sources and superconducting nanowire single-photon detectors, as well as new results on optical switching and modulation using beyond-state-of-the-art electro-optic thin films. We will also describe subsystem prototypes including high-performance qubits, pseudo-number-resolving single-photon detectors, cryogenic opto-electronic packaging, cryogenic control electronics, high-performance qubit interconnects, and novel high-power cryogenic cooling systems. We will describe recent performance improvements as well as outstanding technical challenges. This talk will also cover future directions, including an overview of the full system architecture and recent progress on fault-tolerant algorithms and quantum applications.

 

Back to Session VIII

Addressing HPC/AI Performance Bottlenecks with BlueField Data Processing Units

 

Gilad Shainer

NVIDIA, Santa Clara, CA, USA

 

AI and scientific workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these computing requirements continue to grow, the traditional GPU-CPU architecture increasingly suffers from imbalanced computing, data latency, and a lack of parallel or in-flight pre-processing of data. The introduction of the Data Processing Unit (DPU) brings a new tier of computing to address these bottlenecks and to enable, for the first time, compute overlapping and nearly zero communication latency. The session will deliver a deep dive into DPU computing and how it can help address long-standing performance bottlenecks. Performance results for a variety of HPC and AI applications will be presented as well.
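
As a rough sketch of the overlap a DPU enables: with a standard non-blocking MPI collective the host must still spend cycles progressing the operation, whereas a DPU can progress it in the network. The pattern looks like the following (buffer size illustrative):

```python
# Hedged sketch of compute/communication overlap using a non-blocking MPI
# collective. Run with, e.g.: mpirun -n 4 python overlap_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
local = np.random.rand(1_000_000)
result = np.empty_like(local)

# Start the reduction; with DPU offload, progress happens on the adapter.
req = comm.Iallreduce(local, result, op=MPI.SUM)

# Overlap: do useful work while the collective is in flight.
partial = np.square(local).sum()

req.Wait()  # the reduced result is now available in `result`
```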

 

Back to Session IV

Active Memory Architecture

 

Thomas Sterling

Simultac LLC, USA

 

The US IARPA AGILE research program was launched within the last year to develop innovative HPC architectures for large-scale graph-based applications. Six collaborative performer teams were selected with the goal of creating novel computer architectures and designs capable of achieving orders-of-magnitude performance improvements for data analytics and dynamic graph-driven computation. As part of this aggressive program, Simultac LLC is developing the Active Memory Architecture (AMA), a radical departure from conventional practices intended to circumvent legacy challenges and exploit emerging opportunities. AMA is a message-driven, memory-centric, non-von Neumann scalable architecture guided by the new HPX* distributed runtime system, based on a derivative of the shared-memory ParalleX execution model. This presentation will describe in depth the AGILE AMA system structure, semantics, and dynamic asynchronous operational control methods. The small Fonton compute cell, a highly replicated smart memory bank, will be detailed with some early A-SST simulation results to convey advances for irregular, time-varying graph processing. This research is driven by the AGILE industry benchmarks and specified workflows and will be evaluated by the IARPA program’s “Test and Evaluation” teams supported by DOE National Laboratories. Questions and comments from the participants will be welcome throughout the discussion.
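
As a loose conceptual sketch of what "message-driven, memory-centric" execution means (assuming nothing about Simultac's actual AMA design): work is expressed as messages delivered to the memory banks that own the data, rather than data being shipped to a central processor.

```python
# Heavily hedged, purely conceptual sketch: compute cells as smart memory
# banks that process messages arriving at the data they own.
from collections import deque

class ComputeCell:
    """A stand-in for a smart memory bank owning a partition of a graph."""
    def __init__(self):
        self.vertices = {}       # vertex id -> value, held locally
        self.inbox = deque()     # messages arriving from other cells

    def handle(self, msg):
        op, vid, payload = msg
        if op == "update":       # mutate state in place, where it lives
            self.vertices[vid] = self.vertices.get(vid, 0) + payload
        elif op == "read":       # payload is a continuation callback
            payload(self.vertices.get(vid, 0))

def run(cells):
    # Event loop standing in for an asynchronous hardware message fabric.
    while any(c.inbox for c in cells):
        for c in cells:
            while c.inbox:
                c.handle(c.inbox.popleft())

cells = [ComputeCell(), ComputeCell()]
cells[0].inbox.append(("update", 42, 7))                 # send work to the data
cells[0].inbox.append(("read", 42, lambda v: print(v)))  # prints 7
run(cells)
```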

Back to Session I

The ADMIRRAL Project

 

Fred Streitz

Center for Forecasting and Outbreak Analytics (CFA/CDC), USA

 

The powerful combination of high-performance computing (HPC) and machine learning (ML) has been especially fruitful in computational biology, where ML techniques have helped ameliorate the lack of clear mechanistic models and the often poor statistics that have impeded progress in our understanding. I will discuss the status of the ADMIRRAL (AI-Driven Machine-learned Investigation of RAS-RAF Activation Lifecycle) Project, which is investigating the behavior of an oncogenic protein in the context of a cellular membrane. I will present our progress in the development of a novel hybrid ML/HPC approach that exploits machine-learned latent spaces to substantially advance molecular dynamics simulations.
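
As a heavily hedged illustration of latent-space-guided sampling in general (PCA standing in for a learned latent space, with synthetic data; this is not the ADMIRRAL methodology itself), one can bin simulation frames in a low-dimensional embedding and seed new MD runs from under-sampled regions:

```python
# Hedged sketch: steer compute toward under-sampled configurations by
# embedding MD frames and restarting from the rarest occupied latent bin.
import numpy as np
from sklearn.decomposition import PCA

frames = np.random.rand(10_000, 300)   # stand-in: MD frames as feature vectors
latent = PCA(n_components=2).fit_transform(frames)

# Histogram the latent space and locate the least-populated nonempty bin.
hist, xe, ye = np.histogram2d(latent[:, 0], latent[:, 1], bins=20)
ix = np.unravel_index(np.argmin(np.where(hist > 0, hist, np.inf)), hist.shape)

# Select the frames falling in that bin as restart candidates for new runs.
mask = ((np.digitize(latent[:, 0], xe[1:-1]) == ix[0]) &
        (np.digitize(latent[:, 1], ye[1:-1]) == ix[1]))
seeds = frames[mask][:10]
```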

 

*This work was performed under the auspices of the U.S. Department of Energy (DOE) by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344 and under the auspices of the National Cancer Institute (NCI) by Frederick National Laboratory for Cancer Research (FNLCR) under Contract 75N91019D00024. This work has been supported by the NCI-DOE Collaboration established by the U.S. DOE and the NCI of the National Institutes of Health.

 

Back to Session III

Simulating quantum circuits using efficient tensor network contraction algorithms with subexponential upper bound

 

Sergi Strelchuk

Department of Applied Mathematics and Theoretical Physics and Centre for Quantum Information and Foundations, University of Cambridge, UK

 

We derive a rigorous upper bound on the classical computation time of finite-ranged tensor network contractions in d ≥ 2 dimensions. By means of the Sphere Separator Theorem, we are able to take advantage of the structure of quantum circuits to speed up contractions, showing that quantum circuits of single-qubit and finite-ranged two-qubit gates can be classically simulated in subexponential time in the number of gates. In many practically relevant cases this beats standard simulation schemes. Moreover, our algorithm leads to speedups of several orders of magnitude over naive contraction schemes for two-dimensional quantum circuits on lattices as small as 8 × 8. We obtain similarly efficient contraction schemes for Google’s Sycamore-type quantum circuits, instantaneous quantum polynomial-time circuits, and non-homogeneous (2+1)-dimensional random quantum circuits.
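
To see concretely why contraction order dominates cost, the toy sketch below contracts a 2 × 2 patch of rank-4 tensors with the opt_einsum library and reports the cost of the path it finds; the index layout and bond dimension are illustrative assumptions, and the separator-based orderings of the talk are not implemented here.

```python
# Hedged sketch: contraction-path search on a tiny grid tensor network,
# a stand-in for a small piece of a 2D circuit tensor network.
import numpy as np
import opt_einsum as oe

chi = 2  # bond dimension
tensors = [np.random.rand(chi, chi, chi, chi) for _ in range(4)]

# Shared letters encode the grid's bonds; unshared letters remain open.
expr = "abcd,cefg,bhfi,djgk->aehijk"  # hypothetical index layout

path, info = oe.contract_path(expr, *tensors)
print(info)          # reports the FLOP count of the chosen contraction order

result = oe.contract(expr, *tensors)  # shape (2, 2, 2, 2, 2, 2)
```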

 

Back to Session VII

Impact of Advances in HPC/AI/Machine Learning on Fusion Energy Development

 

William Tang

Princeton University, Center for Statistics and Machine Learning (CSML) and Princeton Plasma Physics Laboratory (PPPL), USA

 

William Tang of Princeton University is a Professor in Astrophysical Sciences, Participating Faculty at the Center for Statistics and Machine Learning (CSML), an Executive Committee member of the Princeton Institute for Computational Science & Engineering (PICSciE), and Principal Research Physicist at the Princeton Plasma Physics Laboratory (PPPL), where he served as Chief Scientist from 1998 to 2008.  A Fellow of the American Physical Society and recipient of awards including the 2018 NVIDIA Global Impact Award, he has authored over 200 journal publications and is currently the Principal Investigator (PI) of the new AURORA Exascale Early Science Project at Argonne National Laboratory.  A co-author of the NATURE (April 2019) article on “Predicting Tokamak Disruptions Using Deep Learning at Scale,” Prof. Tang has advised Ph.D. students including recipients of the US Presidential Early Career Award for Scientists and Engineers in 2000 and 2005.  He recently presented an invited talk on AI/ML/HPC-enabled Digital Twins and chaired the associated featured session at the international HPC conference ISC-2023 in Hamburg, Germany (May 22-25, 2023).

 

Advances in HPC together with AI/machine-learning methods are now enabling the accurate predictions and data-driven discoveries, in science and industry alike, that are essential for realizing the potential of fusion energy. As emphasized, for example, by the 2022 US White House Summit on developing “a bold decadal vision for commercial Fusion Energy” [1], accelerating the fusion energy development timeline to meet the climate challenge will rely heavily on the scientific and engineering advances being driven by HPC together with advanced statistical methods featuring artificial intelligence/deep learning/machine learning (AI/DL/ML). An especially time-urgent problem is the need to reliably predict and avoid large-scale “major disruptions” in MFE (magnetic fusion energy) tokamak systems such as DIII-D in San Diego, the EUROfusion Joint European Torus (JET), and the international ITER device scheduled to produce 500 MW of fusion power by the mid-2030s, while hopefully maximizing such production. This mission requires innovation in the development of improved data-driven and model-based approaches that maximize plasma performance in existing experiments, with impact on optimizing operational scenarios.

Encouraging advances include the deployment of recurrent and convolutional neural networks in Princeton's deep learning code "FRNN," which enabled the first adaptable predictive DL model capable of efficient "transfer learning" for accurate predictions of disruptive events across different tokamak devices [2].  The demonstrated validation of the FRNN software on a huge observational FES database provides still stronger evidence that deep learning approaches using large-scale classical supercomputers can predict disruptions with unprecedented accuracy. More recent publications have further shown that this AI/DL capability can provide, in real time, not only a “disruption score,” as an indicator of the probability of an imminent disruption, but also a “sensitivity score” indicating the underlying reasons for the predicted disruption, i.e., “explainable AI” [3].
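
To make the shape of such a predictor concrete, the minimal sketch below maps a multichannel time series of plasma diagnostics to a per-timestep disruption score; the architecture, layer sizes, and signal count are illustrative assumptions, not the FRNN implementation.

```python
# Hedged sketch: a recurrent per-timestep disruption-score model.
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    def __init__(self, n_signals=14, hidden=128):   # illustrative sizes
        super().__init__()
        self.rnn = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, n_signals)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.head(h))   # per-timestep disruption score

model = DisruptionPredictor()
shot = torch.randn(1, 500, 14)   # one synthetic 500-step "shot"
scores = model(shot)             # alarm when scores cross a chosen threshold
print(scores.shape)              # torch.Size([1, 500, 1])
```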

 

Moreover, detailed information for a plasma control system (PCS) can now be provided to improve disruption avoidance in near-real time and help optimize plasma performance.  In particular, the application of these AI/DL methods to real-time prediction and control has recently been further advanced with the introduction of a surrogate model of an HPC simulator ("SGTC") [4]. SGTC models satisfy the compatibility requirements of a PCS, deliver inference times on the order of milliseconds (ms), and can deliver results five orders of magnitude faster than the validated first-principles-based global particle-in-cell GTC code running on advanced leadership-class HPC systems.  These capabilities are now leading to exciting avenues for moving from passive prediction to active control and, ultimately, to the optimization of the design for a first-of-a-kind fusion pilot plant (FPP), as highlighted in a recent invited talk on AI/ML/HPC-enabled Digital Twins at the international HPC conference ISC-2023 in Hamburg, Germany (May 22-25, 2023) [5].

 

References

 

[1] https://www.whitehouse.gov/ostp/news-updates/2022/04/19/readout-of-the-white-house-summit-on-developing-a-bold-decadal-vision-for-commercial-fusion-energy/

 

[2] Julian Kates-Harbeck, Alexey Svyatkovskiy, and William Tang, "Predicting Disruptive Instabilities in Controlled Fusion Plasmas Through Deep Learning," NATURE 568, 526 (2019)

 

[3] William Tang, Ge Dong, Jayson Barr, Keith Erickson, Rory Conlin, Dan Boyer, Julian Kates-Harbeck, Kyle Felker, Cristina Rea, N. C. Logan, et al., “Implementation of AI/Deep Learning Disruption Predictor into a Plasma Control System,” arXiv preprint arXiv:2204.01289, 2021; updated version with “Explainable AI/ML Focus” in CONTRIBUTIONS TO PLASMA PHYSICS, Special Issue dedicated to Machine Learning, accepted for publication (April 2023)

 

[4] Ge Dong, et al., “Deep Learning-based Surrogate Model for First-principles Global Simulations of Fusion Plasmas,” NUCLEAR FUSION 61, 126061 (2021)

 

[5] William Tang, et al., “Fusion Digital Twin Tokamak Enabled by AI-Machine Learning,” Proceedings of the International Supercomputing Conference, ISC-2023, Hamburg, Germany, to be published (2023)

Back to Session V

Building Trust in Scientific Applications through Data Traceability and Results Explainability

 

Michela Taufer

Dongarra Professor, University of Tennessee, Knoxville, USA

 

To trust findings in computational science, scientists need workflows that trace data provenance and support results explainability. As workflows become more complex, tracing data provenance and explaining results become more challenging. In this talk, we propose a computational environment that automatically creates a record trail of a workflow execution and invisibly attaches it to the workflow’s output, enabling data traceability and results explainability. Our solution transforms existing container technology, includes tools for automatically annotating provenance metadata, and allows effective movement of data and metadata across the workflow execution. We demonstrate the capabilities of our environment with a study of SOMOSPIE, an earth science workflow. This workflow uses machine learning modeling techniques to predict soil moisture values, downscaling 27 km resolution satellite data to the higher resolutions necessary for policy-making and precision agriculture. By running the workflow in our environment, we can identify the causes of different accuracy measurements for predicted soil moisture values at different resolutions of the input data, and link different results to the different machine learning methods used during the soil moisture downscaling, all without requiring scientists to know the details of the workflow’s design and implementation.
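
As a minimal stand-in for the kind of record trail described above (the actual environment integrates with container technology), the sketch below wraps a workflow step, hashes its inputs and output, and attaches a provenance record beside the output file; the traced_step helper and the step's call signature are assumptions for illustration only.

```python
# Hedged sketch: automatic provenance capture around one workflow step.
import hashlib, json, platform, time
from pathlib import Path

def traced_step(func, inputs: list, params: dict, output: Path):
    record = {
        "step": func.__name__,
        "inputs": {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
                   for p in inputs},
        "params": params,
        "platform": platform.platform(),
        "started": time.time(),
    }
    func(inputs, params, output)   # run the actual step (assumed signature)
    record["finished"] = time.time()
    record["output_sha256"] = hashlib.sha256(output.read_bytes()).hexdigest()
    # Attach the record trail invisibly next to the workflow's output.
    Path(str(output) + ".provenance.json").write_text(
        json.dumps(record, indent=2))
```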

 

Back to Session X

Quantum HPC Hybrid Computing Platform toward Cooperative Computation of Classical and Quantum Computers

 

Miwako Tsuji

RIKEN Center for Computational Science, Kobe, JAPAN

 

Quantum computers (QC) are systems based on the principles of quantum theory. They are expected to play an important role in fields where classical computers can offer little further growth. At the same time, significant computational capabilities from supercomputers are required to make effective use of quantum computers.

Here, we present an overview of and plan for the quantum HPC hybrid computing platform at RIKEN R-CCS, a new project at RIKEN to exploit quantum computing technologies through integration with high-end supercomputers.

We focus on a programming environment that supports the cooperative computation of quantum computers and supercomputers, offloading selected kernels in an application to quantum computers, based on their characteristics, using a remote procedure call (RPC). We also discuss the role of supercomputers in enhancing the development and evolution of quantum computers, for example through circuit optimization, circuit cutting/knitting, and error correction/mitigation.
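
A minimal sketch of the RPC offload pattern follows, with a hypothetical gateway endpoint and run_circuit method standing in for the platform's actual interface:

```python
# Hedged sketch: a classical application ships a circuit description to a
# quantum (or emulated) backend over RPC and continues with the returned
# counts. Endpoint and method names are hypothetical placeholders.
from xmlrpc.client import ServerProxy

qpu = ServerProxy("http://qpu-gateway.example:8000")  # hypothetical gateway

circuit = {"qubits": 2,
           "ops": [["h", 0], ["cx", 0, 1], ["measure", [0, 1]]]}

counts = qpu.run_circuit(circuit, 1024)  # offload the quantum kernel via RPC

# Classical post-processing continues on the supercomputer side.
print(counts)
```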

 

Back to Session VII

Addressing Heterogeneity and Disaggregation in Future Ecosystems

 

Eric Van Hensbergen

ARM Research

 

The quest for higher performance, energy efficiency, and lower total cost of ownership has driven a trend towards increasing heterogeneity in computing environments.  Within the Arm ecosystem this has materialized as heterogeneity within system-on-chip (SoC) designs as well as disaggregation of computing resources within data centers.  Within an SoC, Arm has long supported heterogeneous environments in the IoT and mobile segments, but the supply chain's move towards chiplet-based packaging has opened the data center and HPC markets to domain specialization without the dramatic increase in costs associated with producing custom silicon.  Meanwhile, disaggregated computing in the form of computational storage (CSD), smart NICs (DPU), and CXL memory pooling provides different opportunities for placing compute near different resources.  These hybrid computing models present new challenges in how to use them effectively with existing application ecosystems.  This talk will discuss the variety of options available for hybrid computing models within the Arm ecosystem and present some of the ways we are working to make the use of these hybrid technologies more seamless for software.

Back to Session II

Training Large Language Models on Cerebras Wafer Scale Clusters

 

Natalia Vassilieva

Cerebras Systems, USA

 

Large Language Models (LLMs) are shifting “what’s possible,” but on traditional hardware they require massive compute and the massive complexity of distributed training across thousands of accelerators. Cerebras Wafer Scale Clusters make training LLMs faster and easier than on GPUs, thanks to near-perfect linear scaling and a simple data-parallel distribution strategy for models of any size. In this talk we will share our experience and insights from training various LLMs, including the open-source Cerebras-GPT family of models, on Cerebras hardware.
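
As a back-of-envelope illustration of why near-linear data-parallel scaling matters (all numbers below are illustrative assumptions, not Cerebras benchmarks), time-to-train shrinks in proportion to the number of systems:

```python
# Hedged sketch: training time under ideal linear data-parallel scaling,
# using the common ~6*N*D approximation for training FLOPs.
params, tokens = 13e9, 300e9      # illustrative 13B-parameter model, 300B tokens
train_flops = 6 * params * tokens
sustained = 5e15                  # assumed sustained FLOP/s per system

for n in (1, 2, 4, 8):
    days = train_flops / (n * sustained) / 86400
    print(f"{n} system(s): ~{days:.0f} days at linear scaling")
```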

 

Back to Session V

What Comes After Exascale?

 

Andrew Wheeler

Hewlett Packard Labs, USA

 

Exascale computing excellence was performance engineered by design: posing and addressing the challenge of building supercomputers, within power and cost budgets, that perform a quintillion math operations a second. In this new post-Exascale era of converged analytics, HPC, and AI, “workflows are the new applications.”  Building supercomputers for end-to-end workflows will be about engineering capability that adapts dynamically to (i) availability/accessibility, (ii) efficiency (code portability, energy), and (iii) configurability for performance. This HPE approach enables architectural creativity to deliver flexible consumption models of Exaflop-seconds for analytics, Exaflop-hours for HPC codes, and Exaflop-months for AI models. This talk will cover the technical vision, the challenges, and focus areas of research and development.

Back to Session I