HPC 2025

 

High Performance Computing

 

State of the Art, Emerging Disruptive Innovations and Future Scenarios

 

An International Advanced Workshop

 

 

 

June 23 – 27, 2025, Cetraro, Italy

 

 


 

 

Programme Committee

Organizers

Sponsors &

Media Partners

Speakers

Agenda

Chairpersons

Panels

Abstracts

 

 

Final Programme

 

Programme Committee

LUCIO GRANDINETTI (Chair)

Department of Computer Engineering, Electronics, and Systems Science

University of Calabria – UNICAL

and

Center of Excellence for High Performance Computing

ITALY

 

GIOVANNI ALOISIO

Euro Mediterranean Center on Climate Change (CMCC Foundation)

CMCC Strategic Advisor

Former Director, CMCC Supercomputing Center

and

University of Salento

ITALY

 

FRANK BAETKE

EOFS

European Open File System Organization

formerly

Hewlett Packard Enterprise

Munich

GERMANY

 

RUPAK BISWAS

NASA

Exploration Technology Directorate

High End Computing Capability Project

NASA Ames Research Center

Moffett Field, CA

USA

 

GEOFFREY FOX

University of Virginia

USA

 

WOLFGANG GENTZSCH

Co-founder & President of Simr

SimOps Simulation Operations Org.

Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

VLADIMIR GETOV

Distributed and Intelligent Systems Research Group

School of Computer Science and Engineering

University of Westminster

London

UNITED KINGDOM

 

VLAD GHEORGHIU

University of Waterloo

Institute for Quantum Computing

Waterloo, Ontario

CANADA

 

KIMMO KOSKI

CSC The Finnish IT Center for Science

Helsinki

FINLAND

 

SALVATORE MANDRÀ

Research Scientist

Google Quantum AI

Mountain View, CA

USA

 

STEFANO MARKIDIS

KTH Royal Institute of Technology

Computer Science Department

Stockholm

SWEDEN

 

SATOSHI MATSUOKA

RIKEN

Director Center for Computational Science

Kobe

and

Department of Mathematical and Computing Sciences

Tokyo Institute of Technology

Tokyo

JAPAN

 

JOSH MUTUS

Rigetti Computing

Berkeley, CA

USA

 

KEVIN OBENLAND

Quantum Information and Integrated Nanosystems

Massachusetts Institute of Technology

Lincoln Laboratory

Boston, MA

USA

 

VALERIO PASCUCCI

Center for Extreme Data Management, Analysis and Visualization

and

Scientific Computing and Imaging Institute

School of Computing, University of Utah

and

Laboratory Fellow, Pacific Northwest National Laboratory

USA

 

RAFFAELE SANTAGATI

Quantum Group

Boehringer Ingelheim

GERMANY

 

THOMAS STERLING

Senior Research Scientist

University of Texas at Austin

Texas Advanced Computing Center

Austin, TX

USA

 

WILLIAM TANG

Princeton University Dept. of Astrophysical Sciences,

Princeton Plasma Physics Laboratory

and

Center for Statistics and Machine Learning (CSML)

and

Princeton Institute for Computational Science & Engineering (PICSciE)

Princeton University

USA

 

MICHELA TAUFER

The University of Tennessee

Electrical Engineering and Computer Science Dept.

Knoxville, TN

USA

 

ROBERT WISNIEWSKI

Chief Architect in HPC and AI

Solutions Organization

HPE

USA

 

Organizing Committee

 

L. GRANDINETTI (Chair)

ITALY

M. ALBAALI

(OMAN)

F. BAETKE

(GERMANY)

W. GENTZSCH

(GERMANY)

L. MIRTAHERI

(ITALY)

 

 

 

Sponsors

 

CEREBRAS

CMCC Euro-Mediterranean Center on Climate Change

CSC Finnish Supercomputing Center

CSCS

DWAVE Systems

EOFS

Juelich Supercomputing Center, Germany

LENOVO

NVIDIA

PARTEC

PSIQUANTUM

QUANTINUUM

QUANTUM BRILLIANCE

QUANTUM MACHINES

RIGETTI Computing

SAMBANOVA SYSTEMS

ThinkParQ

University of Calabria

Department of Computer Engineering, Electronics, and Systems

UNIVERSITY OF SALENTO

VastData

 

 

Media Partners

 

 

 


 

HPCwire is a news portal and weekly newsletter covering the fastest computers in the world and the people who run them. As the trusted source for HPC news since 1987, HPCwire has served as the publication of record on the issues, challenges, opportunities, and conflicts relevant to the global High Performance Computing space. Its reporting covers the vendors, technologies, users, and the uses of high performance, AI- and data-intensive computing within academia, government, science, and industry.

Subscribe now at www.hpcwire.com.

 

 

 

 

 


 

https://insidehpc.com/about/

 

About insideHPC

Founded in 2006, insideHPC is a global publication recognized for its comprehensive and insightful coverage across the HPC-AI community, linking vendors, end-users and HPC strategists. insideHPC has a large and loyal audience drawn from public and private companies of all sizes, government agencies, research centers, industry analysts and academic institutions. In short: the buyers and influencers of HPC, HPC-AI and associated emerging technologies.

 

THE News Analysis Site for HPC Insiders: Written and edited by seasoned technology journalists, we’re all about HPC and AI, offering feature stories, commentary, news coverage, podcasts and video interviews with HPC’s leading voices. Like the evolving HPC ecosystem, insideHPC’s coverage continually expands into emerging focus areas to better serve our growing readership and advertising base. In 2023, insideHPC will deliver an updated format and new spotlight coverage of enterprise HPC, HPC-AI, exascale (and post-exascale) supercomputing, quantum computing, cloud HPC, edge computing, High Performance Data Analytics and the geopolitical implications of supercomputing.

 

 

 

 

 


 

Simr, formerly UberCloud, is dedicated to helping manufacturers thrive by advancing SimOps, a framework of best practices for automating simulation processes that enhance decision-making, innovation, efficiency, and quality in product design and engineering. Founded by engineers for engineers, Simr builds on a versatile, high-performance platform compatible with any compute infrastructure, setting a new standard in manufacturing design. Our platform allows engineers to securely design and test product concepts using existing workflows and tools, ensuring complete control over their simulation workflows and data. Visit us at simr.com and follow us on LinkedIn.

 

 

 

 

 

Speakers

 

 

FRANK BAETKE

EOFS

European Open File System Organization

GERMANY

 

MANDY BIRCH

CEO and founder of TreQ

UNITED KINGDOM

 

GEORGE BOSILCA

NVIDIA

Tel Aviv

ISRAEL

 

SVEN BREUNER

VAST DATA

GERMANY

 

TIM CLARKE

Sambanova Systems Inc.

Palo Alto, CA

USA

 

MADISON COTTERET

University of Groningen

Groningen

THE NETHERLANDS

 

DANIELE DRAGONI

Leonardo S.p.A.

High Performance Computing Lab.

Genova

ITALY

 

WOLFGANG GENTZSCH

Co-founder & President of Simr

SimOps Simulation Operations Org.

Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

XAVIER GEOFFRET

Quandela

FRANCE

 

VLADIMIR GETOV

Distributed and Intelligent Systems Research Group

School of Computer Science and Engineering

University of Westminster

London

UNITED KINGDOM

 

VLAD GHEORGHIU

Institute for Quantum Computing, University of Waterloo

and

SoftwareQ Inc, Waterloo

Waterloo, Ontario

CANADA

 

BETTINA HEIM

NVIDIA

USA

 

FRANK HEROLD

ThinkParQ GmbH

GERMANY

 

NOBUYASU ITO

RIKEN Center for Computational Science, Kobe

JAPAN

 

MICHAEL JAMES

Cerebras Systems

Sunnyvale, California

USA

 

HIROAKI KOBAYASHI

Architecture Laboratory

Department of Computer and Mathematical Sciences

Graduate School of Information Sciences

Tohoku University

JAPAN

 

DHIREESHA KUDITHIPUDI

University of Texas at San Antonio

San Antonio, TX

USA

 

LORENZO LEANDRO

Quantum Machines

Tel Aviv

ISRAEL

 

PEKKA MANNINEN

CSC Director of Science and Technology

Finnish IT Center for Science

Espoo

FINLAND

 

STEFANO MARKIDIS

KTH Royal Institute of Technology

Computer Science Department / Computational Science and Technology Division

Stockholm

SWEDEN

 

SATOSHI MATSUOKA

Director RIKEN Center for Computational Science, Kobe

and

Tokyo Institute of Technology, Tokyo

JAPAN

 

CHRISTIAN MAYR

Technical University Dresden

Dresden

GERMANY

 

LUCAS MENGER

University of Frankfurt

GERMANY

 

ERIC MUELLER

University of Heidelberg

Heidelberg

GERMANY

 

JOSH MUTUS

Rigetti Computing

Director Quantum Devices

USA/CANADA

 

KEVIN OBENLAND

Quantum Information and Integrated Nanosystems

Lincoln Laboratory

Massachusetts Institute of Technology MIT

Boston, MA

USA

 

IRWAN OWEN

D-Wave Systems Inc

CANADA - USA

 

Nash PALANISWAMY

Quantinuum

UK - USA

 

TROY PATTERSON

ThinkParQ

GERMANY

 

DAVID RIVAS

Rigetti Computing

Berkeley, CA

USA

 

RAFFAELE SANTAGATI

Quantum Group

Boehringer Ingelheim

GERMANY

 

JOHANNES SCHEMMEL

European Institute for Neuromorphic Computing

and

Kirchhoff Institute for Physics

Heidelberg University

Heidelberg

GERMANY

 

THOMAS SCHULTHESS

CSCS Swiss National Supercomputing Centre

Lugano

SWITZERLAND

 

PETE SHADBOLT

Co-founder

PsiQuantum Corp.

Palo Alto, California

USA

 

SARAH SHELDON

IBM

Yorktown Heights, NY

USA

 

THOMAS STERLING

Senior Research Scientist

University of Texas at Austin

Texas Advanced Computing Center

Austin, TX

USA

 

ANNA STOCKKLAUSER

Quantum Motion

UK

 

SERGII STRELCHUCK

Oxford University

Oxford

UK

 

ANDREA TABACCHINI

Quantum Brilliance

AUSTRALIA and GERMANY

 

WILLIAM TANG

Princeton University Dept. of Astrophysical Sciences,

Princeton Plasma Physics Laboratory

and

Center for Statistics and Machine Learning (CSML)

and

Princeton Institute for Computational Science & Engineering (PICSciE)

Princeton University

USA

 

ZACHARY VERNON

XANADU

CANADA

 

ALEKSANDER WENNERSTEEN

Pasqal

FRANCE

 

 

Workshop Agenda

Monday, June 23rd

 

Session

Time

Speaker/Activity

9:45 – 10:00

Welcome Address

Session I

State of the art and future scenarios

 

10:00 – 10:30

T. STERLING

Towards an Active Memory Architecture for Graph Processing beyond Moore's Law

 

10:30 – 11:00

V. GETOV

Application-Driven Development and Evolution of the Computing Continuum

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

E. MUELLER

Sustainable Management of Complex Software Ecosystems for Novel Accelerators

12:00 – 12:30

N. PALANISWAMY

Reinventing HPC with Quantum – A year in perspective

12:30 – 12:45

CONCLUDING REMARKS

Session II

 

Emerging Computer Systems and Solutions

 

17:00 – 17:30

t.b.a.

 

17:30 – 18:00

H. KOBAYASHI

Graph-based Data Analysis of Three-dimensional Electron Diffraction Data

 

18:00 – 18:30

W. GENTZSCH

SimOps Introduces HPC Software Stack with HPC Software Certification and Training

18:30 – 19:00

COFFEE BREAK

 

19:00 – 19:30

F. BAETKE

Needs of the HPC Community vs. Computer Science Curricula – a Widening Gap

19:30 – 19:45

CONCLUDING REMARKS

 

 

Tuesday, June 24th

 

Session

Time

Speaker/Activity

Session III

Advanced AI Processing: Challenges and Perspectives

 

9:30 – 9:55

W. TANG

AI-Powered Machine Learning for Accelerating Scientific Grand Challenges

 

9:55 - 10:20

M. JAMES

Scaling Physics Simulations on AI Hardware: Insights from Molecular Dynamics and Planetary Modeling

 

10:20 – 10:45

G. BOSILCA

Unlocking the Full Potential of AI With Next-Gen Networking

 

10:45 – 11:15

COFFEE BREAK

 

11:15 – 11:40

P. MANNINEN

Towards AI supercomputing with LUMI-AI

 

11:40 – 12:05

S. BREUNER

The VAST Data Platform: Modern HPC and AI Storage as it should be

 

12:05 – 12:30

T. CLARKE

Agentic AI at Scale: How SambaNova’s Architecture Powers Next-Gen LLM Workflows in National Labs

 

12:30 – 12:55

T. PATTERSON and F. HEROLD

Challenges for Software-defined storage in the modern HPC and AI World

 

12:55 – 13:10

CONCLUDING REMARKS

Session IV

 

Neuromorphic Computing

 

17:00 – 17:30

D. KUDITHIPUDI

THOR: The Neuromorphic Commons

 

17:30 – 18:00

C. MAYR

Neuromorphic Computing at Cloud Level

 

18:00 – 18:30

COFFEE BREAK

 

18:30 – 19:00

J. SCHEMMEL

Scaling Analog Neuromorphic Computing

 

19:00 – 19:30

M. COTTERET

Vector-Symbolic Architectures for Scalable Neuromorphic Systems Design

 

19:30 – 19:45

CONCLUDING REMARKS

 

 

Wednesday, June 25th

 

Session

Time

Speaker/Activity

Session V

 

The QUANTUM COMPUTING Promises 1

 

10:00 – 10:30

S. SHELDON

t.b.a.

 

10:30 – 11:00

J. MUTUS

Superconducting qubits at the utility scale: the potential and limitations of modularity

 

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

P. SHADBOLT

Progress towards large-scale fault-tolerant quantum computing with photons

 

12:00 – 12:30

Z. VERNON

Photonic fault-tolerant quantum computing: Scaling, networking, and modularity

 

12:30 – 12:45

CONCLUDING REMARKS

Session VI

 

The QUANTUM COMPUTING Promises 2

 

17:30 – 18:00

K. OBENLAND

Logical Resource Analysis of Utility-Scale Quantum Applications using pyLIQTR

 

18:00 – 18:30

V. GHEORGHIU

Embedding classical logic into quantum computation, or, mid-circuit operations on steroids

 

18:30 – 19:00

COFFEE BREAK

 

19:00 – 19:30

I. OWEN

Quantum Realised: The Energy-Efficient Frontier For HPC

 

19:30 – 19:45

CONCLUDING REMARKS

 

 

Thursday, June 26th

 

Session

Time

Speaker/Activity

 

Session VII

Quantum Computing Challenging Applications

 

 

9:30 – 10:00

R. SANTAGATI

Accelerating Quantum Chemistry Simulations on Quantum Computers

 

 

10:00 – 10:30

D. DRAGONI

Quantum Path at Leonardo: Enabling Innovation

 

 

10:30 – 11:00

S. STRELCHUCK

Quantum Pangenomics

 

 

11:00 – 11:30

COFFEE BREAK

 

Session VIII

 

The Quantum Computing Prospects and Deployments

 

 

11:30 – 12:00

B. HEIM

Defining the quantum accelerated supercomputer

 

 

12:00 – 12:30

A. WENNERSTEEN

Towards Quantum-Classical Supercomputers with Neutral Atoms

 

 

12:30 – 13:00

A. TABACCHINI

The Diamond Integrated Quantum Chip Revolution

 

 

13:00 – 13:15

CONCLUDING REMARKS

 

Session IX

 

The Quantum Computing Prospects and Deployments 2

 

 

17:00 – 17:25

L. LEANDRO

The Research Driving Control System Innovation Towards Utility Scale Quantum Computing

 

 

17:25 – 17:50

M. BIRCH

Preparing HPC for Quantum Acceleration: Why Open Architecture Matters

 

 

17:50 – 18:15

S. MARKIDIS

Rethinking Quantum Parallelism: From Superposition to Speed-Up

 

18:15 – 18:45

COFFEE BREAK

 

18:45 – 19:45

PANEL DISCUSSION

Chairperson: t.b.a.

 

Panelists:

Vladimir Getov, University of Westminster, UK

William Tang, Princeton University, USA

Pete Shadbolt, PsiQuantum Corp., USA (t.b.c.)

David Rivas, Rigetti Computing, USA

 

The Intersection of Quantum Computing and HPC

 

During the past several decades, supercomputing speeds have gone from Gigaflops to Teraflops, to Petaflops and Exaflops. As the end of Moore’s law approaches, the HPC community is increasingly interested in disruptive technologies that could help continue these dramatic improvements in capability. This interactive panel will identify key technical hurdles in advancing quantum computing to the point it becomes useful to the HPC community. Some questions to be considered:

 

  • When will quantum computing become part of the HPC infrastructure?
  • What are the key technical challenges (hardware and software)?
  • What HPC applications might be accelerated through quantum computing?

Is the “belle époque” of classical High Performance Computer Systems coming to an end?

 

 

Friday, June 27th

 

Session

Time

Speaker/Activity

Session X

Key Projects, Novel Developments and Challenging Applications

 

10:00 – 10:30

S. MATSUOKA

t.b.a.

 

10:30 – 11:00

N. ITO

JHPC-quantum project: HPC and QPU hybrid challenge of RIKEN

11:00 – 11:30

COFFEE BREAK

 

11:30 – 12:00

L. MENGER

Realizing Hybrid Quantum-Classical Applications in OmpSs-2

 

12:00 – 12:30

X. GEOFFRET

Bridging HPC and Quantum Computing: Quandela’s full-stack approach

12:30 – 12:45

CONCLUDING REMARKS

 

 

 

Chairpersons

 

 

SESSION I

 

WOLFGANG GENTZSCH

Co-founder & President of Simr

SimOps Simulation Operations Org., Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

SESSION II

 

WOLFGANG GENTZSCH

Co-founder & President of Simr

SimOps Simulation Operations Org., Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

SESSION III

 

VLADIMIR GETOV

Distributed and Intelligent Systems Research Group

School of Computer Science and Engineering

University of Westminster, London

UNITED KINGDOM

 

SESSION IV

 

WOLFGANG GENTZSCH

Co-founder & President of Simr

SimOps Simulation Operations Org., Regensburg

GERMANY

and

Sunnyvale, CA

USA

 

SESSION V

 

RAFFAELE SANTAGATI

Quantum Group

Boehringer Ingelheim

GERMANY

 

SESSION VI

 

JOSH MUTUS

Rigetti Computing

Berkeley, CA

USA

 

SESSION VII

 

FRANK BAETKE

EOFS

European Open File System Organization

formerly

Hewlett Packard Enterprise, Munich

GERMANY

 

SESSION VIII

 

VLAD GHEORGHIU

University of Waterloo

Institute for Quantum Computing

Waterloo, Ontario

CANADA

 

SESSION IX

 

HIROAKI KOBAYASHI

Architecture Laboratory

Department of Computer and Mathematical Sciences

Graduate School of Information Sciences

Tohoku University

JAPAN

 

SESSION X

 

VLADIMIR GETOV

Distributed and Intelligent Systems Research Group

School of Computer Science and Engineering

University of Westminster

London

UNITED KINGDOM

 

 

Panel

 

 

The Intersection of Quantum Computing and HPC

 

Chairperson: t.b.a.

 

Panelists:

Vladimir Getov, University of Westminster, UK

William Tang, Princeton University, USA

Pete Shadbolt, PsiQuantum Corp., USA (t.b.c.)

David Rivas, Rigetti Computing, USA

 

 

During the past several decades, supercomputing speeds have gone from Gigaflops to Teraflops, to Petaflops and Exaflops. As the end of Moore’s law approaches, the HPC community is increasingly interested in disruptive technologies that could help continue these dramatic improvements in capability. This interactive panel will identify key technical hurdles in advancing quantum computing to the point it becomes useful to the HPC community. Some questions to be considered:

 

  • When will quantum computing become part of the HPC infrastructure?
  • What are the key technical challenges (hardware and software)?
  • What HPC applications might be accelerated through quantum computing?

 

Back to Session IX

 

 

 

 

Abstracts

Needs of the HPC Community vs. Computer Science Curricula – a Widening Gap

 

Frank Baetke

EOFS, European Open File System Organization, Germany

 

The talk will summarize observations made at an ISC panel and three EOFS workshops in 2022, 2024 and 2025 with representatives of the academic community and major European HPC Centers.

Core components of any HPC installation such as operating systems, storage, I/O and file systems are no longer considered interesting and important topics in computer science or information technology curricula. Related lectures at several universities have been abandoned in favor of AI, web services and other areas that are considered more relevant and/or interesting.

Communication between users of large HPC centers and the IT staff responsible for system scheduling and storage/file-system management is becoming an increasing problem, as many application developers and/or users are unaware of, or uninterested in, the operational aspects of large application programs and the associated challenges such as multiple storage hierarchies, demand scheduling, etc.

This disconnect often causes issues related to load balancing, resource efficiency and system responsiveness and leads to frustration on both sides.

Back to Session II

Preparing HPC for Quantum Acceleration: Why Open Architecture Matters

 

Mandy Birch

CEO & Founder, TreQ, United Kingdom

 

As quantum computing advances, its role in high-performance computing (HPC) is becoming clear—not as a standalone solution, but as a specialized accelerator integrated with classical systems.

This talk explores how open-architecture design principles can future-proof HPC environments for quantum integration. We’ll examine how modularity, hardware diversity, and upgradeability enable adaptation to rapidly evolving quantum technologies across qubit modalities, control systems, and software layers.

Through practical examples and architectural strategies, we’ll discuss:

- Aligning infrastructure decisions with long-term interoperability

- Supporting heterogeneous quantum-classical workflows

- Enabling experimentation with configurable system components

As a concrete example, we’ll share insights from a UK-based testbed project that integrates multiple quantum processors, control stacks, and software environments—yielding eight distinct configurations in a single system. This work illustrates the practical value of modular design and the development of open specifications that span the quantum stack.

This session is intended for system architects, research leaders, and HPC infrastructure strategists aiming to stay adaptive while managing long-term risk.

 

Back to Session IX

The VAST Data Platform: HPC and AI Storage as it should be

 

Sven Breuner

Field CTO International @ VAST Data

 

Storage in HPC lacks innovation and has become far too complicated and inconvenient in recent years - for the system administrators who have to manage a whole zoo of different systems, and for the researchers who should be able to focus on their research instead of data management, access patterns and protocols. Time for a change before it gets even worse over the next years!

 

Back to Session III

Agentic AI at Scale: How SambaNova’s Architecture Powers Next-Gen LLM Workflows in National Labs

 

Tim Clarke

Account Executive for Public Sector, SambaNova Systems, USA

 

As Large Language Models (LLMs) evolve from standalone tools to interconnected Agentic AI systems, traditional GPU-based infrastructure struggles with the demands of multi-model orchestration, memory constraints, and rapid task-switching. This presentation explores how SambaNova’s Reconfigurable Dataflow Unit (RDU)—designed for trillion-parameter workloads—enables National Labs to deploy state-of-the-art AI with unmatched efficiency.

 

We’ll highlight:

  • World-record inference performance and unified training/inference on a single system, eliminating GPU bottlenecks.

  • TBs of memory per node, allowing hundreds of models (or trillion-parameter LLMs) to reside in-memory simultaneously - critical for Agentic AI’s complex workflows.

  • Microsecond model switching (100x faster than GPUs), enabling dynamic multi-model pipelines for scientific research, data analysis, and operational automation.

  • Real-world use cases from National Labs leveraging these capabilities to accelerate tasks like hypothesis generation, knowledge synthesis, and scalable simulation.

 

Back to Session III

Vector-Symbolic Architectures for Scalable Neuromorphic Systems Design

 

Madison Cotteret

University of Groningen, Netherlands

 

As neuromorphic hardware grows in scale and complexity, its development is limited by the difficulty of configuring large neural systems to realise high-level behaviours. In conventional digital hardware, such complexity is managed through a hierarchy of abstraction layers, which permit development at one layer (e.g. machine code) without requiring expert knowledge of the lower levels (e.g. gate implementations). Neuromorphic computing sorely lacks a similar robust-yet-flexible hierarchy of abstractions, making it prohibitively difficult to build up to complex high-level function.

I present vector-symbolic architectures (VSAs) as a candidate abstraction layer for neuromorphic computing. VSAs form distributed representations of symbolic data structures using high-dimensional random vectors, and are inherently fault tolerant. When combined with attractor network theory, they give a reliable method to embed symbolic computational structures (e.g. state machines, generalised line attractors) into neuromorphic hardware, independent of the underlying neural representations. This is a significant step towards a hierarchical theory of neural processing suitable for programming large-scale neuromorphic systems that are capable of performing extended cognitive tasks.
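To make the binding/bundling mechanics concrete, the following minimal C++ sketch implements a MAP-style VSA (binding by elementwise multiplication, bundling by majority vote); all names and parameters are illustrative and not drawn from any particular neuromorphic toolchain:

```cpp
// Minimal MAP-style VSA sketch: symbols are random bipolar hypervectors;
// binding is elementwise multiplication, bundling is an elementwise majority.
#include <iostream>
#include <random>
#include <vector>

using HV = std::vector<int>; // hypervector with entries in {-1, +1}
constexpr int D = 10000;     // high dimension makes random vectors quasi-orthogonal

HV random_hv(std::mt19937& rng) {
    std::bernoulli_distribution coin(0.5);
    HV v(D);
    for (int& x : v) x = coin(rng) ? 1 : -1;
    return v;
}

HV bind(const HV& a, const HV& b) {           // role-filler binding
    HV out(D);
    for (int i = 0; i < D; ++i) out[i] = a[i] * b[i];
    return out;
}

HV bundle(const std::vector<HV>& vs) {        // superposition by majority vote
    HV out(D, 0);
    for (const HV& v : vs)
        for (int i = 0; i < D; ++i) out[i] += v[i];
    for (int& x : out) x = (x >= 0) ? 1 : -1; // ties broken towards +1
    return out;
}

double sim(const HV& a, const HV& b) {        // normalized dot product
    long dot = 0;
    for (int i = 0; i < D; ++i) dot += a[i] * b[i];
    return static_cast<double>(dot) / D;
}

int main() {
    std::mt19937 rng(42);
    HV role_state = random_hv(rng), role_input = random_hv(rng);
    HV fill_s1 = random_hv(rng), fill_a = random_hv(rng);
    // Encode the record {state: s1, input: a} as one distributed vector.
    HV record = bundle({bind(role_state, fill_s1), bind(role_input, fill_a)});
    // Unbinding (multiplication is its own inverse) recovers a noisy filler.
    HV probe = bind(record, role_state);
    std::cout << "sim(probe, s1)     = " << sim(probe, fill_s1) << '\n';      // ~0.5
    std::cout << "sim(probe, random) = " << sim(probe, random_hv(rng)) << '\n'; // ~0
}
```

Because binding is its own inverse, multiplying the bundled record by a role vector recovers a noisy version of its filler, which a similarity search against the symbol codebook can then clean up; this redundancy is the source of the fault tolerance mentioned above.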

 

Back to Session IV

Quantum Path at Leonardo: Enabling Innovation

 

Daniele Dragoni

Leonardo S.p.A., High Performance Computing Lab, Genova, Italy

 

Quantum Computing (QC) represents a transformative paradigm with the potential to address computational challenges that remain out of reach for classical systems. Although a clear quantum advantage in real-world applications is yet to be demonstrated, the momentum is growing, and several industries are investing early to explore its disruptive potential and secure future competitiveness.

 

In my presentation, I will outline Leonardo's strategic approach to thoroughly evaluate the capabilities and limitations of QC within the aerospace, security, and defense domains. I will delve into our stance on QC both from an industrial end-user perspective, showcasing ongoing initiatives and practical applications pursued through integrated HPC and QC methodologies in alignment with national strategic objectives, and from the viewpoint of a technology provider developing capabilities to support clients interested in targeted quantum experimentation.

 

Back to Session VII

SimOps Introduces HPC Software Stack with HPC Software Certification and Training

 

Wolfgang Gentzsch

Co-founder & President of Simr, SimOps Simulation Operations Org., Regensburg, Germany and Sunnyvale, CA, USA

 

SimOps (Simulation Operations Automation) recently introduced the SimOps Software Stack, a suite of HPC tools designed to simplify, optimize, and automate the use, operation, and management of HPC infrastructures—both in the cloud and on-premises—for HPC system administrators and simulation engineers. The SimOps Software Stack is a curated collection of HPC tools, services, and platforms that enable the implementation of SimOps principles to improve the engineering simulation and HPC operations lifecycle. SimOps defines tools that support these principles as ‘SimOps-compliant.’

 

Much like the DevOps Software Stack - used to optimize software development workflows - the SimOps Software Stack includes components across multiple layers such as provisioning and HPC middleware, platform and access layers, HPC infrastructure and workload management, workflow automation, data management, analytics, visualization and observability, CI/CD and DevOps for SimOps, as well as security and compliance tools. Together, these components accelerate and automate engineering simulations on optimized HPC infrastructures.

 


What is SimOps? SimOps explores the potential of streamlining and automating on-premises and cloud-based simulation infrastructures, which are vital for enhancing and accelerating scientific inquiries and engineering designs. SimOps is a new community initiative and non-profit organization (you could say in short: the “DevOps of HPC”) bringing simulation experts and IT operations experts closer together by developing best practices, guidelines, and educational training courses for setting up, operating, maintaining, supporting and efficiently using HPC/AI infrastructures for complex scientific and engineering applications. SimOps aims at automating and accelerating simulation processes, significantly increasing scientific and engineering productivity and organizational contributions (to innovation and competitiveness). SimOps will examine and collaborate on ways to reduce the complexities traditionally associated with high-performance computing environments.

 

Back to Session II

Bridging HPC and Quantum Computing: Quandela’s full-stack approach

 

Xavier Geoffret

Quandela, France

 

As quantum computing (QC) advances at a fast pace, its integration with high-performance computing (HPC) is becoming a critical topic for both researchers and industry leaders. Significant challenges remain in making hybrid HPC-QC workflows practical and scalable, and in opening QC to the broad community of users.

This talk will explore the key challenges in bridging HPC and QC, including hardware and software integration, workload partitioning, and the need for new programming paradigms. We will discuss how quantum processors can complement HPC for applications in optimization, machine learning, and simulation, as well as the technical and economic considerations for HPC centers looking to incorporate QC into their infrastructure.

 

Back to Session X

Application-Driven Development and Evolution of the Computing Continuum

 

Vladimir Getov

Distributed and Intelligent Systems Research Group, University of Westminster, London, U.K.

 

Over the last decade, a new concept – the computing continuum – has been gaining attention amongst the professional community. It encompasses the growing variety of interconnected computational, network, and storage resources across multiple layers of a high-speed distributed infrastructure. The most important components of the computing continuum include specialized cyber-physical systems, personal augmentation equipment, cloud and high-performance computing data centers, as well as Internet-of-Things edge devices.

At present, we still have centralized and limited visibility over the system performance, quality of service, and quality of data. Meanwhile, the rapidly evolving computing fabric is already composed of all traditional and emerging computational resources. A seamless integration of the computing continuum infrastructure leverages the best of each component. The representative application domains, such as artificial intelligence, physical system simulation, cryptography, machine learning, and multimedia, can be characterized by their service level objectives and requirements, which specify the development and evolution of the computing continuum components. Application domains with similar objectives and requirements can be merged to reduce the overall number of application domains under consideration. Some other domains, although distinctive and representative for specific applications, are negligible due to lower customer interest – e.g. they may not be recognized as “market drivers” – and can be left out of consideration.

We constantly need to understand better and improve the relationship between service-level objectives/requirements and the underlying architectures. Since both the application domains and the computing continuum components are rapidly developing and evolving entities, the most appropriate development approach is the application-architecture co-design which is considered and described in this presentation.

 

Back to Session I

Embedding classical logic into quantum computation, or, mid-circuit operations on steroids

 

Vlad Gheorghiu

Institute for Quantum Computing, University of Waterloo and SoftwareQ Inc, Waterloo, Ontario, Canada

 

We present our innovative solution on integrating arbitrary classical logic into quantum circuits at compile time. This feature reduces the complexity of quantum circuit design, particularly for fundamental tasks such as syndrome extraction, mid-circuit measurements, and variational quantum algorithms. It also lays a foundation for the seamless hybridization of classical and quantum computing. Live demonstrations will be provided using our open-source quantum computing framework, Quantum++, https://github.com/softwareqinc/qpp.
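As a toy illustration of what embedding classical logic at compile time means (a conceptual sketch only, not the Quantum++ API; see the qpp repository above for the actual interface), the fragment below "compiles" a classical XOR into reversible CNOT gates and checks the resulting circuit on computational-basis inputs:

```cpp
// Illustrative sketch: lowering classical logic into reversible gates.
// c = (a XOR b) becomes two CNOTs onto a target qubit prepared in |0>;
// AND would lower to a Toffoli. Simulation here tracks basis states only.
#include <cstdint>
#include <iostream>
#include <vector>

enum class Op { X, CNOT, TOFFOLI };
struct Gate { Op op; int a; int b = -1; int c = -1; };

// Hypothetical "compiler" pass: lower c = a XOR b onto target qubit t.
std::vector<Gate> compile_xor(int qa, int qb, int t) {
    return { {Op::CNOT, qa, t}, {Op::CNOT, qb, t} };
}

// Apply the circuit to a computational-basis state encoded as a bitmask.
std::uint64_t run(std::uint64_t state, const std::vector<Gate>& circ) {
    auto bit = [&](int q) { return (state >> q) & 1ULL; };
    for (const Gate& g : circ) {
        switch (g.op) {
            case Op::X:       state ^= 1ULL << g.a; break;
            case Op::CNOT:    if (bit(g.a)) state ^= 1ULL << g.b; break;
            case Op::TOFFOLI: if (bit(g.a) && bit(g.b)) state ^= 1ULL << g.c; break;
        }
    }
    return state;
}

int main() {
    auto circ = compile_xor(/*qa=*/0, /*qb=*/1, /*t=*/2);
    for (std::uint64_t a : {0, 1})
        for (std::uint64_t b : {0, 1}) {
            std::uint64_t in  = a | (b << 1);   // qubit 2 starts in |0>
            std::uint64_t out = run(in, circ);
            std::cout << "a=" << a << " b=" << b
                      << " -> c=" << ((out >> 2) & 1) << '\n'; // c = a XOR b
        }
}
```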

 

Back to Session VI

Challenges for Software-defined storage in the modern HPC and AI World

 

Frank Herold and Troy Patterson

ThinkParQ, Germany

 

This session will cover some of the challenges faced not only by end users but also by the providers who create the solution infrastructure, such as ThinkParQ with its parallel file system BeeGFS. The session will highlight traditional HPC and AI environments that have been utilizing BeeGFS for their workflows, and will also showcase the latest data management features of the newly launched BeeGFS 8.

 

Back to Session III

JHPC-quantum project: HPC and QPU hybrid challenge of RIKEN

 

Nobuyasu Ito

RIKEN Center for Computational Science, Kobe, Japan

 

The development of quantum information technology in recent years has been remarkable, and its applications are expanding the scope of computation beyond current computational and environmental limits. So far, QPUs have been operated as standalone processors, which has hindered the smooth and efficient development of QPU applications. Now is the time to introduce quantum computers into the IT environment, and last year, RIKEN, with financial support from NEDO JAPAN, began an effort to fuse QPUs with HPC. This project covers everything from hardware preparation to the development of industrial applications. In this presentation, we will provide an overview and current status of the JHPC-quantum project.

Back to Session X

Scaling Physics Simulations on AI Hardware: Insights from Molecular Dynamics and Planetary Modeling

 

Michael James

Cerebras Systems, Sunnyvale, California, USA

 

The evolution of AI hardware platforms is unlocking new opportunities for scaling physics simulations, offering capabilities previously absent in CPU and GPU-based platforms. Notably, these platforms provide the bandwidth necessary for high-utilization PDE solutions and network capabilities that support strong scaling.

 

With the advancements in modern AI hardware, we can now extend the reach of traditional high-performance computing (HPC) methods. In this talk, we will explore how AI-driven architectures can revolutionize physics simulations, enabling us to approach problems that have been beyond the reach of exascale platforms. We will delve into the Cerebras wafer-scale platform, showing its capabilities with examples in molecular dynamics and planetary modeling.

 

Bio:

Michael is the Founder and Chief Architect of Advanced Technologies at Cerebras, the company renowned for creating the world’s largest and most powerful computer processor. He leads the initiative to redefine the algorithmic foundations for the next generation of AI technologies. Before Cerebras, Michael was a Fellow at AMD, where he developed adaptive and self-healing circuits based on cellular automata, enhancing the reliability of distributed fault-tolerant machines. Throughout his career, Michael has focused on the intersection of natural phenomena, mathematics, and engineered systems. His degree from UC Berkeley is in Molecular Neurobiology, Computer Science, and Mathematics.

 

Back to Session III

Graph-based Data Analysis of Three-dimensional Electron Diffraction Data

 

Hiroaki Kobayashi

Tohoku University, Japan

 

In this talk, I will present a graph-based data analysis of three-dimensional electron diffraction data. Three-dimensional electron diffraction is an emerging technique that allows researchers to obtain detailed molecular structures from their small crystals using transmission electron microscopy. However, due to the relatively poor signal-to-noise ratio of many diffraction images, a large amount of data is generated, including data that misrepresents the molecule. Therefore, the development of automatic molecular structure identification from a large amount of data containing both correct and incorrect structure data is crucial. We will present an automatic graph-generation method that represents the molecular structure, and an identification method that uses multiple features to distinguish correct molecular structures.

 

Back to Session II

The Research Driving Control System Innovation Towards Utility Scale Quantum Computing

 

Lorenzo Leandro

Quantum Solutions Physicist at Quantum Machines

 

Scaling quantum processors introduces new requirements on control, such as ensuring high-fidelity qubit operations by optimizing the analog front-end, automating calibration workflows, and integrating hybrid control for quantum error correction. To make significant progress, we need a clear understanding of both present technology and the demands of future large-scale quantum computers. Deep research is needed, both in academia and industry, to unveil the important bottlenecks and their possible solutions. In this talk, we will explore key technical challenges and focus on how the research done at QM facilitates an informed definition of control system requirements, paving the way towards useful quantum computing.

 

Back to Session IX

Towards AI supercomputing with LUMI-AI

 

Pekka Manninen

CSC Finland

 

As a part of the AI Innovation Package of the European Union, the EuroHPC Joint Undertaking is currently planning a set of AI supercomputers in Europe, deployed within the announced 13 AI Factories. In this talk, we will discuss the largest of these upcoming systems, LUMI-AI, to be located in Kajaani, Finland. LUMI-AI will be one of the most powerful and advanced quantum-accelerated supercomputing systems in the world at the time of its completion. In this talk, I will present the 6-country consortium behind it, some history, the technical vision of the LUMI-AI infrastructure, its current status, as well as plans and ambitions.

 

Back to Session III

Rethinking Quantum Parallelism: From Superposition to Speed-Up

 

Stefano Markidis

KTH Royal Institute of Technology, Computer Science Department, Stockholm, Sweden

 

Quantum computing's power lies in its ability to explore multiple computational paths simultaneously through quantum parallelism, a concept often misunderstood or oversimplified. In this talk, we revisit the fundamental nature of quantum parallelism, drawing analogies with classical parallel computing models such as data and task parallelism. We introduce the concept of quantum dataflow diagrams as a tool for visualizing and quantifying parallelism in quantum circuits. By analyzing quantum algorithms, such as the Quantum Fourier Transform and Amplitude Amplification, we examine how quantum interference, both constructive and destructive, impacts algorithm efficiency. Furthermore, we challenge the direct applicability of classical parallelism laws (Amdahl's and Gustafson’s) in quantum computing, highlighting the unique role of classical-quantum I/O and the non-trivial relationship between parallelism and speed-up. This talk aims to deepen our understanding of quantum parallelism and its implications for algorithm design and performance evaluation.
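For reference, the contrast drawn here can be stated compactly (our notation, not the talk's): Amdahl's law bounds classical parallel speedup by the serial fraction, while a Grover-type quantum speedup comes from reduced query complexity rather than from adding workers:

```latex
% Amdahl's law: parallelizing a fraction f of the work across p processors
S_{\mathrm{Amdahl}}(p) \;=\; \frac{1}{(1-f) + f/p},
\qquad \lim_{p\to\infty} S_{\mathrm{Amdahl}} = \frac{1}{1-f}

% Grover-type speedup: fewer oracle queries, not more workers
T_{\mathrm{classical}} = O(N) \quad\longrightarrow\quad T_{\mathrm{quantum}} = O(\sqrt{N})
```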

 

Reference: Markidis, Stefano. "What is quantum parallelism, anyhow?" ISC High Performance 2024 Research Paper Proceedings (39th International Conference), 2024.

 

Back to Session IX

Neuromorphic Computing at Cloud Level

 

Christian Georg Mayr

TU Dresden/SpiNNcloud Systems, Germany

 

AI is having an increasingly large impact on our daily lives. However, current AI hardware and algorithms are still only partially inspired by the major blueprint for AI, i.e. the human brain. In particular, even the best AI hardware is still far away from the 20W power consumption, the low latency and the unprecedented large scale, high-throughput processing offered by the human brain.

In this talk, I will describe our bio-inspired AI hardware, in particular our award-winning SpiNNaker2 system, which achieves a unique fusion of GPU, CPU, neuromorphic and probabilistic components. It takes inspiration from biology not just at the single-neuron level like current neuromorphic chips, but throughout all architectural levels.

On the algorithm front, I will give examples of how to use general neurobiological computing principles (hierarchy, asynchrony, dynamic sparsity and distance-dependent topologies/hierarchical computing) to reframe conventional AI algorithms, usually achieving an order of magnitude improvement in energy-delay product, for both inference and training.

Back to Session IV

Realizing Hybrid Quantum-Classical Applications in OmpSs-2

 

Lucas Menger

MSQC @ Goethe Universität Frankfurt, Germany

 

High-Performance Computing increasingly explores heterogeneous architectures to accelerate demanding workloads. We present an extension of the OmpSs-2 programming model that enables offloading computations to quantum computers in an HPC context. By modifying the Clang compiler and the Nanos6 runtime, we integrate quantum devices into the OmpSs-2 ecosystem, allowing developers to write hybrid quantum-classical applications in a unified way. A custom-built simulator models quantum nodes in a networked environment, receiving and executing offloaded jobs. We illustrate the approach with four representative use cases: random number generation, a mean-field ansatz parameter scan, a variational quantum-classical algorithm, and a hybrid neural network for handwritten digit recognition.
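The offload pattern described above might look roughly as follows (a hedged sketch: "#pragma oss task" and "taskwait" are standard OmpSs-2 directives, but the device(quantum) clause and the circuit/result types are hypothetical stand-ins for the authors' Clang/Nanos6 extension; a stock compiler ignores the pragmas and runs the code serially):

```cpp
// Conceptual sketch of hybrid quantum-classical offload in OmpSs-2 style.
// The device(quantum) clause and the Circuit/Counts types are hypothetical,
// following the abstract's description of the extended compiler and runtime.
#include <cstdio>

struct Circuit { int n_qubits; /* gate list elided */ };
struct Counts  { int zeros, ones; };

void sample_qrng(const Circuit* qc, Counts* out) {
    // On real hardware this task would be dispatched to a quantum node;
    // here we fake a 50/50 outcome so the sketch is self-contained.
    out->zeros = 512; out->ones = 512;
}

int main() {
    Circuit qc{1};
    Counts counts{};

    // Offload the quantum kernel as a task; in()/out() dependences would
    // let the runtime overlap it with classical work.
    #pragma oss task device(quantum) in(qc) out(counts)
    sample_qrng(&qc, &counts);

    // ... classical pre/post-processing tasks could run concurrently here ...

    #pragma oss taskwait // block until the quantum result is available
    std::printf("0: %d  1: %d\n", counts.zeros, counts.ones);
}
```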

 

Back to Session X

Sustainable Management of Complex Software Ecosystems for Novel Accelerators

 

Eric Mueller

University of Heidelberg, Germany

 

Accelerators are ubiquitous in today's HPC environments. However, the integration of non-numerical accelerators - such as those leveraging physical computing principles - presents new and unique challenges. These accelerators often require novel programming paradigms that differ significantly from traditional numerical computing, both in the way they interface with conventional systems and in the way computation is expressed. The software ecosystems that support these accelerators can be particularly complex, with deep dependency trees and high coupling between modules, posing significant challenges to the development and deployment processes. In addition, the lack of standardized approaches to managing these environments adds to the difficulty, especially in the HPC context.

This talk will focus on the software ecosystem for novel accelerators and present sustainable strategies for managing, building, deploying, and containerizing these complex systems. We will use a real-world case study to illustrate best practices for addressing the challenges of modern accelerator-driven HPC environments.

 

Back to Session I

Superconducting qubits at the utility scale: the potential and limitations of modularity

 

Josh Mutus

Rigetti Computing, Director Quantum Devices, USA/Canada

 

The development of fault-tolerant quantum computers (FTQCs) is receiving increasing attention within the quantum computing community. Like conventional digital computers, FTQCs, which utilize error correction and millions of physical qubits, have the potential to address some of humanity’s grand challenges. However, accurate estimates of the tangible scale of future FTQCs, based on transparent assumptions, are uncommon. How many physical qubits are necessary to solve a practical problem intractable for classical hardware? What costs arise from distributing quantum computation across multiple machines? We present an architectural model of a potential FTQC based on superconducting qubits, divided into discrete modules and interconnected via coherent links. We employ a resource estimation framework and software tool to assess the physical resources required to execute specific quantum algorithms compiled into their graph-state form and arranged onto a modular superconducting hardware architecture.

 

Back to Session V

Logical Resource Analysis of Utility-Scale Quantum Applications using pyLIQTR

 

Kevin Obenland

MIT Lincoln Laboratory, USA

 

As part of the DARPA Quantum Benchmarking program, MIT Lincoln Laboratory is actively evaluating proposed applications in physical science for their utility and amenability to fault-tolerant quantum computing platforms. Our team is developing a tool called pyLIQTR, which provides implementations of important quantum algorithms and encodings used in the workflows of applications in physical science. With the implementations provided by our tool, one can measure the quantum logical resources required for applications at utility scale in a number of different ways. pyLIQTR can provide a breakdown and count of gates used in an application, and it can produce a detailed time-schedule of the execution of the logical quantum circuit. Our logical circuits can also be used as the input to resource analysis that targets physical platforms. In this talk, we will describe the logical circuit implementations available in pyLIQTR and demonstrate the tool’s logical resource estimation capabilities by showing analysis of particular applications developed in the QB program.
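To illustrate the kind of output a logical resource analysis produces (a generic sketch, not pyLIQTR's actual API), the fragment below tallies a gate histogram and the T-count, the quantity that typically dominates fault-tolerant cost estimates:

```cpp
// Generic illustration of a logical resource count: a histogram of gate
// types plus the T-count for a toy circuit given as a flat gate list.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> circuit =
        {"H", "T", "CNOT", "T", "Tdg", "CNOT", "H", "T"};

    std::map<std::string, int> histogram;
    for (const auto& g : circuit) ++histogram[g];

    int t_count = histogram["T"] + histogram["Tdg"];
    for (const auto& [gate, n] : histogram)
        std::cout << gate << ": " << n << '\n';
    std::cout << "T-count: " << t_count << '\n'; // 4 for this toy circuit
}
```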

 

Back to Session VI

Quantum Realised: The Energy-Efficient Frontier For HPC

 

Irwan Owen

VP of Business Development, D-Wave Systems Inc., CANADA - USA

 

In this presentation Irwan Owen, VP of Business Development, will provide updates on D-Wave’s progress in its technology roadmap and commercial use-cases. He will also discuss why today’s HPC centres require energy-efficient compute for hard problems with quantum technology. Topics will include AI, research, and the quantum technology platform supporting production applications for industry use.

 

Irwan Owen is Vice President of Business Development at D-Wave, responsible for building strategic relationships with D-Wave’s largest customers and partners. He is a technology industry veteran with 30 years of international experience in computing, web and mobile markets and has been instrumental in the commercialization of a number of ubiquitous technologies including the UNIX operating system and the Java platform. Prior to D-Wave, Irwan held sales leadership roles at Red Bend Software (now part of Samsung), Palm Inc., and Symbian Ltd. He also spent five years at Sun Microsystems, where he was a founding member of JavaSoft Europe. Irwan holds a BSc. (Hons) in Computation from the University of Manchester Institute of Science and Technology, and has held roles in engineering, pre-sales support, product marketing, sales and business development.

 

Back to Session VI

Reinventing HPC with Quantum – A year in perspective

 

Nash Palaniswamy

Quantinuum, UK – USA

 

Quantum is here — and like AI in the past, it is reinventing HPC. I share my learnings and perspectives on some key questions that I have encountered over the past year – such as: What does good look like? How does quantum work with AI and HPC? What are the buying criteria? What are the benchmarks? What can I do with these machines? And many more.

Back to Session I

Accelerating Quantum Chemistry Simulations on Quantum Computers

 

Raffaele Santagati

Quantum Group, Boehringer Ingelheim, Germany

 

Quantum chemistry simulations represent one of the most promising applications for fault-tolerant quantum computers. While recent algorithmic advancements, such as qubitization, and improved Hamiltonian representations, like tensor hyper-contraction, have significantly reduced resource requirements, achieving practical runtimes for industrially relevant systems remains challenging.

To address this, we combine these advancements with a novel active volume (AV) compilation technique. This technique optimizes resource utilization by eliminating the overhead associated with idling logical qubits, though it necessitates a specialized AV architecture. When paired with modifications to the tensor hyper-contraction method, AV compilation achieves substantial runtime reductions of two orders of magnitude.

We apply this approach to a challenging cytochrome P450 system, a key enzyme in drug metabolism. This demonstration highlights the potential of our combined strategy to bring quantum computing closer to practical applications in pharmaceutical research and other industries.

Back to Session VII

Scaling Analog Neuromorphic Hardware

 

Johannes Schemmel

European Institute for Neuromorphic Computing and Kirchoff Institute for Physics Heidelberg University, Germany

 

Event-based neuromorphic computing is a promising technology for energy-efficient bio-inspired AI. It also enables continuous learning based on local learning algorithms. For maximum energy efficiency, a brain-like in-memory realization is desirable. The Heidelberg BrainScaleS platform is an example of a neuromorphic architecture that combines true in-memory computing with hardware support for continuous local learning.

For real-world applications as well as neuroscience, some minimum network sizes are required. To realize the necessary upscaling, BrainScaleS has pioneered wafer-scale integration. For future generations of BrainScaleS, this will not be feasible due to the high mask costs of modern semiconductor processes. This talk presents an alternative solution based on chiplet technology. It introduces concepts that not only allow BrainScaleS-based networks to scale but will also provide a general platform for upscaling all kinds of neuromorphic technologies.

 

Back to Session IV

Progress towards large-scale fault-tolerant quantum computing with photons

 

Pete Shadbolt

Co-founder PsiQuantum Corp., Palo Alto, California, USA

 

In this talk we will describe progress towards large-scale, fault-tolerant quantum computing with photons. This talk will span materials innovations for high-performance photonics, improvements in photonic component performance with an emphasis on improved optical loss, prototype systems of entangled photonic qubits, qubit networking, and novel high-power cryogenic cooling solutions designed for future datacenter-scale quantum computers. We will show new prototype systems designed to progressively overcome the key challenges to scaling up photonic quantum computers. We will also give an overview of the architecture of fusion-based photonic quantum computers, describe near-term systems milestones, and give a view on the long-term roadmap to useful, fault-tolerant machines.

Back to Session V

Towards an Active Memory Architecture for Graph Processing beyond Moore’s Law

 

Thomas Sterling

Senior Research Scientist, University of Texas at Austin, Texas Advanced Computing Center, Austin, TX, USA

 

Three significant challenges constrain the future development of semiconductor-based computing architecture: 1) the end of Dennard and Moore's scaling, 2) the limitations of current parallel execution models, and 3) the lack of fine-grain graph processing. The Active Memory Architecture (AMA) addresses these issues through its innovative memory-centric non-von Neumann parallel computer architecture. AMA is under development at TACC as a smart memory component eliminating conventional processor cores, dramatically advancing the parallel execution model, exploiting semantics for dynamic graph processing, and incorporating a dynamic scheduling and resource management runtime. This presentation will describe the new principles and innovative mechanisms being pursued at the Texas Advanced Computing Center at near nanoscale.

 

Back to Session I

Quantum Pangenomics

 

Sergii Strelchuk

Department of Applied Mathematics and Theoretical Physics and Centre for Quantum Information and Foundations, University of Cambridge, and University of Warwick, Computer Science Department, Warwick Quantum Centre, UK

 

Genomics is a transformational technology for biology, driving a massive improvement in our understanding of human biology and disease. Pangenomics is an important next step on this journey, as understanding variation across many genomes is key to unravelling how genetic traits can affect health outcomes. Building and analysing a pangenome is computationally intensive. Many essential tasks in genomic analysis are extremely difficult for classical computers due to problems inherently hard to solve efficiently with classical (empirical) algorithms. Quantum computing offers novel possibilities with algorithmic techniques capable of achieving speedups over existing classical exact algorithms in large-scale genomic analyses.

Funded by the Wellcome Leap Q4Bio program (https://wellcomeleap.org/q4bio/), we pursue two main research thrusts. 1. Algorithm Development: We design novel quantum algorithms for multiple sequence alignment subproblems and investigate heuristic methods (QAOA) for de novo assembly. 2. Data Encoding and State Preparation: We aim to develop efficient quantum circuits to encode genomic data and reduce the computational overhead with a variety of techniques, including tensor network representations. This facilitates data encoding into quantum states for a variety of machine-learning applications.

 

Back to Session VII

The Diamond Integrated Quantum Chip Revolution

 

Andrea Tabacchini

VP Quantum Brilliance Solutions, Australia and Germany

 

Nitrogen-vacancy (NV) centers in diamond have long stood out as a compelling platform for quantum technologies due to their exceptional properties - such as long coherence times at room temperature, high-fidelity operations, and high-speed gates. Despite these advantages, the field has historically regarded NV-based systems as limited by formidable engineering and scalability challenges.

At Quantum Brilliance, we are redefining those assumptions. With the founders’ 20+ years of scientific experience in the field, sustained R&D, and strategic collaborations, we have made significant progress in overcoming the key hurdles to practical NV-diamond-based quantum technologies. Central to this progress is our Integrated Quantum Chip (IQC), a compact, scalable architecture envisioned to support applications ranging from quantum sensing to communication to computing.

This talk presents a high-level look at our proprietary five-step process for engineering high-performance quantum diamond materials, as well as recent experimental breakthroughs that validate our approach. I will also outline our technology roadmap for the IQC, highlighting key challenges and recent progress.

 

Back to Session VIII

AI-Powered Machine Learning for Accelerating Scientific Grand Challenges

 

William Tang

Princeton University Dept. of Astrophysical Sciences, Princeton Plasma Physics Laboratory; Center for Statistics and Machine Learning (CSML) and Princeton Institute for Computational Science & Engineering (PICSciE), Princeton University, USA

 

This invited presentation represents an updated version of the Sidney Fernbach Memorial Award keynote talk at the International Supercomputing Conference (SC’24) in Atlanta, GA. It deals with “Artificial Intelligence-Powered Machine Learning for Accelerating Scientific Grand Challenges”, with highlights including the deployment of recurrent and convolutional neural networks in Princeton's Deep Learning Code -- “FRNN” -- which enabled the first adaptable predictive deep learning tool for carrying out efficient “transfer learning” between experimental facilities while delivering validated predictions of disruptive events across prominent tokamak devices. Moreover, the AI/DL capability can provide not only the “disruption score”, as an indicator of the probability of an imminent dangerous disruption, but also a “sensitivity score” in real time to indicate the underlying reasons for the predicted disruption. A real-time prediction and control capability has been significantly advanced with a novel surrogate model/HPC (high performance computing) simulator (“SGTC”) -- a first-principles-based prediction and control surrogate necessary for projections to future experimental devices (such as targeted Fusion Power Plants (FPPs) and indeed ITER) for which no “ground truth” observational data exist at present. The near future will feature findings from the deployment of real-time surrogates – fast HPC simulators supported by newly validated first-principles-based results enabled by the exciting exascale-class high-performance computing systems that include

• FRONTIER (ORNL) and Aurora (ANL) Exaflop computers in the US;

• ALPS with 5000 NVIDIA Grace-Hopper “Superchips” at Switzerland’s Supercomputing Center (CSCS); and the upcoming larger

• JUPITER Exascale System at the German Supercomputing Center (JSC).

 

References:

[1] Julian Kates-Harbeck, Alexey Svyatkovskiy, and William Tang, “Predicting Disruptive Instabilities in Controlled Fusion Plasmas Through Deep Learning,” Nature 568, 526 (2019).

[2] William Tang, et al., Special Issue on Machine Learning Methods in Plasma Physics, Contributions to Plasma Physics (CPP), Volume 63, Issue 5-6 (2023).

[3] Ge Dong, et al., “Deep Learning-based Surrogate Model for First-principles Global Simulations of Fusion Plasmas,” Nuclear Fusion 61, 126061 (2021).

 

Back to Session III

Photonic fault-tolerant quantum computing: Scaling, networking, and modularity

 

Zachary Vernon

Chief Technology Officer—Hardware, Xanadu, Canada

 

I will discuss Aurora, Xanadu's latest photonic quantum computer, showcasing the scalability of our architecture through modularity and networking. I will also discuss some more recent hardware developments as we advance towards fault-tolerance.

 

Back to Session V

Towards Quantum-Classical Supercomputers with Neutral Atoms

 

Aleksander Wennersteen

Pasqal, France

 

Quantum computing promises to accelerate select high-performance computing (HPC) workloads. Realizing this potential will require deep integration of quantum and classical resources, demanding close collaboration between the quantum and HPC communities to develop truly hybrid supercomputing systems.

In this talk, we focus on quantum processing units (QPUs) based on neutral atoms and discuss how they are being integrated into HPC centers, from the physical infrastructure to co-processing workflows and scheduling. We present the capabilities of Pasqal’s devices and platform, then outline our near-term hardware developments and explain how these advances shape and support the broader goal of building a hybrid quantum-classical computing platform.

 

Back to Session VIII