AI Events

Are you looking for events relevant to working with artificial intelligence methods on high-performance computers?
Here we offer a filtered calendar from the Gauss Alliance, covering all organizers, not only NHR:
(Calendar source: GA HPC calendar)

 

AI - From Laptop to Supercomputer
Next date is planned for early 2026

Event link: go-nhr.de/ai_on_hpc_vconf | Language: English
Contact: aionsupercomputer@nhr-verein.de

 


AI - Open Q&A Hour
Every Thursday 14:00 - 15:00

Event link: go-nhr.de/ai_on_hpc_vconf | Language: English
Contact: aionsupercomputer@nhr-verein.de

 


AI - Open Q&A Hour with a special focus
Every 2nd Thursday of the month, 14:00 - 15:00
Next Open Q&A Hour with a special focus: 13.11.2025

Event link: go-nhr.de/ai_on_hpc_vconf | Language: English
Contact: aionsupercomputer@nhr-verein.de

 

Topic: "Perspectives on LLM Inference Benchmarking"

Open-source large language models are increasingly used across diverse applications and deployed on HPC systems to leverage acceleration hardware. For both users and the HPC community, inference latency is a key performance factor affecting responsiveness and efficiency. However, comparing performance across different models, frameworks, and configurations remains challenging.
In this session, we present BALI [1], a Benchmark for Accelerated Language Model Inference. BALI lets users run LLMs with a fixed configuration on different inference frameworks and compare their text-generation speed under user-defined, application-dependent settings. We aim for a discussion with the HPC and LLM community to identify which perspectives, metrics, and measurements of LLM inference are most needed, now and in the future, to assess its efficiency.
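To make the kind of measurement discussed here concrete, the sketch below times a text-generation callable and reports two common inference metrics: time to first token (TTFT) and tokens per second. This is not BALI's implementation; the `generate` callable and the dummy generator are hypothetical stand-ins for a real inference-framework call.

```python
import time

def benchmark_generation(generate, prompt, n_runs=3):
    """Time a token-yielding callable; return mean TTFT (s) and tokens/s.

    `generate` is any callable that takes a prompt and yields tokens;
    it stands in for a real inference-framework API (assumption).
    """
    ttfts, rates = [], []
    for _ in range(n_runs):
        start = time.perf_counter()
        first, count = None, 0
        for _token in generate(prompt):
            if first is None:
                # latency until the first token arrives
                first = time.perf_counter() - start
            count += 1
        total = time.perf_counter() - start
        ttfts.append(first if first is not None else total)
        rates.append(count / total if total > 0 else 0.0)
    return sum(ttfts) / len(ttfts), sum(rates) / len(rates)

def dummy_generate(prompt):
    """Hypothetical model stub emitting 20 tokens with fixed per-token cost."""
    for i in range(20):
        time.sleep(0.001)  # simulated per-token compute
        yield f"tok{i}"

ttft, tps = benchmark_generation(dummy_generate, "Hello")
print(f"mean TTFT: {ttft * 1000:.1f} ms, throughput: {tps:.0f} tokens/s")
```

Real benchmarks additionally control batch size, sequence lengths, warm-up runs, and hardware placement, which is exactly where comparisons across frameworks become difficult.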

[1] - L. Jurkschat, P. Gattogi, S. Vahdati and J. Lehmann, "BALI—A Benchmark for Accelerated Language Model Inference," in IEEE Access, vol. 13, pp. 98976-98989, 2025, doi: 10.1109/ACCESS.2025.3576898.


You can find more information about our AI services here.