HPC Users@UiO Newsletter #2, 2025

News on HPC systems @ UiO and NRIS, application deadline for CPU time through Sigma2, interesting conferences and external courses.



Summer is nearing its end and we're heading into an exciting fall semester. Olivia, the new national supercomputer, will open for all users and go into full production by October. Additionally, the new Colossus, the supercomputer for sensitive data in TSD, will be installed. Both systems will provide more of the sorely needed GPU resources for researchers in Norway and at UiO. Please read on for more details, and happy computing!

New Colossus

Over the summer a new Colossus was procured; it will soon be delivered and is expected to be operational this fall. Colossus remains an AMD-based system with NVIDIA GPUs:

  • 2880 AMD Turin cores
  • 192 cores per node
  • 1.5 TiB memory per node
  • 7.68 TB local scratch storage per node
  • 4 NVIDIA H200 141 GB accelerators
  • 8 NVIDIA RTX PRO 6000 Server Edition 96 GB accelerators
  • 200 Gbps NDR InfiniBand interconnect

It will be a fully Sigma2-owned system with a simpler contribution model. We will provide new documentation and best practices as we gather hands-on experience with the system.
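To give an early idea of what using the GPUs might look like, here is a rough sketch of a Slurm batch script requesting a single accelerator. This is an illustration only: the account, partition and resource names are placeholders, and the real values will be described in the upcoming documentation.

#!/bin/bash
# Illustration only - the account, partition and resource names below are
# placeholders until the official documentation for the new Colossus is out.
#SBATCH --job-name=gpu-test
#SBATCH --account=p000            # placeholder TSD project account
#SBATCH --partition=accel         # placeholder GPU partition name
#SBATCH --gres=gpu:1              # request one accelerator, e.g. an H200
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G
#SBATCH --time=01:00:00

module purge
# load your toolchain/modules here, then run your GPU application
nvidia-smi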

Olivia inauguration

Solveig Kristensen, Gard Thomassen and Dagfinn Bergsager at the Olivia inauguration.

On June 17, 2025, Olivia was officially inaugurated by Sigrun Aasland, Minister of Research and Higher Education. The opening ceremony was attended by several prominent guests, including Solveig Kristensen, Gard Thomassen and Dagfinn Bergsager.

Olivia is currently in its pilot phase, with several user groups testing their applications on the supercomputer to ensure it can handle the expected load in production. Production is expected to start by October 1, 2025.

New Sigma2 e-Infrastructure allocation period 2025.2, application deadline 25 August 2025

The Sigma2 e-Infrastructure period 2025.2 (01.10.2025 - 31.03.2026) is approaching, and the deadline for applications for HPC CPU hours and storage (for both regular and sensitive data) is 25 August. This also includes access to the Sigma2 part of TSD (Colossus and storage), as well as LUMI-C and LUMI-G.

Please note that although applications for allocations can span multiple allocation periods, they require verification from the applicants prior to each application deadline to be processed by the Resource Allocation Committee for a subsequent period. Hence any existing multi-period application must be verified before the deadline to be evaluated and receive an allocation before the new period starts. This does not apply to LARGE projects.

Kind reminder: if you have many CPU hours remaining in the current period, you should of course try to use them as soon as possible, but since many users will be doing the same, there is likely to be a resource squeeze and potentially long queue times. The quotas are allocated according to several criteria, of which publications registered in Cristin is an important one (in addition to historical usage), and they assume even use throughout the allocation period. If you think you will be unable to spend all your allocated CPU hours, it is highly appreciated if you notify sigma@uninett.no so that the hours can be released for someone else; you may get extra hours later if you need more. For those of you who have already run out of hours, or are about to, take a look at the Sigma2 extra allocation page to see how to ask for more. No guarantees, of course.

Run

projects

to list project accounts you are able to use.

Run

cost -p nn0815k

to check your allocation (replace nn0815k with your project's account name).

Run

cost -p nn0815k --detail

to check your allocation and print consumption for all users of that allocation.

AI news: Local LLMs on Educloud OnDemand

We are happy to offer fully local Large Language Models (LLMs) through Educloud OnDemand. With the UiO Local LLM app, you can start a model that runs privately for you. Please keep in mind that since the app is designed for single-user use, the models are smaller than those accessible through GPT UiO.

Since the models run on UiO hardware, your data remains securely within Educloud, offering enhanced privacy compared to public services such as Azure or Google, or the current version of GPT UiO. However, please note that Educloud OnDemand only supports data classified up to the Yellow level.
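If you prefer scripted access to a local model, and if the app exposes an OpenAI-compatible endpoint (an assumption on our part - the endpoint URL, port and model name below are made up, so check the app's own documentation inside Educloud OnDemand), a query could look roughly like this:

# Illustration only: the endpoint, port and model name are assumptions,
# not confirmed details of the UiO Local LLM app.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Summarize what Educloud OnDemand offers."}]
      }'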

This is our first offering of local models to UiO users, but more will come in the fall. Stay tuned!

User contribution and costs

As mentioned in previous newsletters, UiO users - like all other users of Sigma2 systems - will pay the actual operational costs, but UiO pays up-front as a guarantee rather than per use.

It is of utmost importance that all researchers, and especially those at UiO, are aware of the actual operational cost connected to their work, and that you try to apply for external funding to cover your part of it. Whether, and how much, you will have to pay directly from your project depends on your faculty and how they choose to handle the invoice.

Centrally at UiO, the administration will cover a significant part of the invoice; this functions as a “centrally covered discount” before the remainder of the invoice is sent to the faculties and museums based on their usage.

Do note that UiO pays up-front for a very large part of the resource usage, and so far actual usage has never surpassed the annual payment. Unless you plan on spending several hundred million CPU hours or large amounts of storage, the likelihood that your project will receive a direct invoice is very small. And if your project is in danger of triggering an invoice, you will be contacted already when you apply for the resources. Regardless, it is always a good idea to get in touch with your institute management before applying to check whether there are any local regulations or recommendations.

Also note that when applying for EU projects involving Sigma2 resources, it is very important for planning purposes to contact Sigma2 before submitting the application.

HPC Course week/training


NRIS will host a hands-on workshop titled “Fine-Tuning LLMs with Multi-GPU Training” on Olivia, Norway's next supercomputer.

It is designed to give participants practical experience in fine-tuning large language models (LLMs) on high-performance computing (HPC) systems, with a special focus on LLaMA. The workshop is ideal for researchers, developers, and students who are familiar with Python and want to gain practical insights into scalable LLM training with LLaMA.
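To give a flavour of what multi-GPU training involves in practice, here is a generic sketch of a Slurm batch script that launches a distributed fine-tuning run with torchrun. This is not the workshop material: the account, partition and script names (for example finetune.py) are placeholders.

#!/bin/bash
# Generic sketch, not the workshop material - account, partition and script
# names are placeholders.
#SBATCH --job-name=llm-finetune
#SBATCH --account=nn0815k         # placeholder project account
#SBATCH --partition=accel         # placeholder GPU partition
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4
#SBATCH --ntasks-per-node=1
#SBATCH --time=04:00:00

module purge
# activate a Python environment with PyTorch and your LLM toolkit here

# torchrun starts one worker per GPU; finetune.py is a hypothetical training
# script that sets up torch.distributed / DistributedDataParallel internally.
torchrun --nproc_per_node=4 finetune.py --model llama --epochs 1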

Would you like to teach a workshop at Digital Scholarship Days 2026?

Digital Scholarship Days is an annual event designed for students and researchers who want to expand their knowledge of digital methods, generative AI, digital tools, open research, research data management, and much more!

The 2026 edition of Digital Scholarship Days will take place from January 6 to January 9, 2026.

We invite contributions in the form of workshops, discussion groups, or other interactive formats. Proposals are welcome from researchers (at any career stage), engineers, librarians, administrative staff, and anyone interested in sharing their knowledge and skills with colleagues. Sessions should be interactive and hands-on.

For more information and submission details, see the DS Days 2026 Call for Ideas.

Course - Applied Machine Learning for Biological Data

The "Applied Machine Learning for Biological Data" course, co-organized by the BioNT consortium (Bio-Network for Training) and Scientific Computing Services at the Division for Research, Dissemination and Education (UiO), took place May 27th-28th and June 2nd-6th, 2025. This course featured a pre-workshop industry meetup event, followed by the main course.

The "AI in Biomedical Data - Industry Seminar: Bridging Innovation & Opportunity" meetup, attended by 75 participants in person or via streaming, targeted for industry experts and researchers in the field of AI and biomedical sciences. It highlighted the course - "Applied Machine Learning for Biological Data" and facilitated discussions that led to adaptations in the main course. The main course was divided into two modules: an optional Module 1 and a mandatory Module 2. Module 1 (two half-day sessions) focused on essential data handling techniques using NumPy and Pandas. Module 2 was scheduled for 5 full days and covered a broad range of machine learning (ML) topics. These included core ML concepts with short exercises, extended hands-on sessions applying ML to biological use cases, as well as theoretical knowledge and practical sessions on GPU-powered genomics workflows.

Given the significant interest in the course topics and extensive advertising, the course was oversubscribed, exceeding the allocated funding for high-end computing resources like GPUs. As a result, 40 learners were chosen based on criteria consistent with project goals, prioritizing individuals residing in Europe who were either seeking employment or involved with Small and Medium Enterprises (SMEs).

Course materials were developed from scratch, tailored to biological datasets and genomics use cases, and made available under the MIT License. The workshop utilized Zoom for delivery, HedgeDoc for collaborative documents, and virtual machines with GPUs for hands-on sessions. 

Insights on the course were gathered through pre- and post-course surveys. Learners consistently reported a comfortable and supportive learning environment, high satisfaction with the knowledgeable and enthusiastic instructors, and the immediate applicability of the course material. The workshop's success in meeting participant needs and expectations is further corroborated by a high recommendation rate, feedback received during and after the event, and positive LinkedIn posts.

Fox Supercomputer - get access

The Fox cluster is the 'general use' HPC system within Educloud, open to researchers and students at UiO and their external collaborators. Access to Fox requires an Educloud user account; see the registration instructions.

For instructions and guidelines on how to use Fox, see Foxdocs, the Fox User Manual.

Software request form

If you need additional software or want us to upgrade an existing software package, we are happy to do this for you (or help you install it yourself if you prefer). To get all the relevant information and take care of the installation as quickly as possible, we have created a software request form. When you submit the form, a ticket is created in RT and we will get back to you on the installation progress.

To request software, go to the software request form.

Support request form

We have had great experience with software installation requests since we introduced the software request form; we now usually get all the information we need at first contact. To further improve our support and get to the root cause of an issue faster, we now encourage you to fill in a form when you need help with other types of issues as well. When the support form is submitted, it is sent to our hpc-drift queue in RT and handled as usual. The difference from emailing us directly is that we immediately get the information we need, and your ticket is labelled according to which system you are on and which issue you are facing.

The link to the new support form is shown when you log in to our servers, and it has also been added to the relevant documentation pages. You can have a look at it here:
https://nettskjema.no/a/hpc-support

We encourage users of all our HPC resources to use this form, whether it concerns Fox, LightHPC, the ML-nodes, Educloud OnDemand, Galaxy-Fox or our individual appnodes.

Other hardware needs

For the last few years, the AI landscape has been evolving at a breathtaking pace. At the time of writing, the most recent announcements and releases are GPT-5 and rumors of NVIDIA's new B30A accelerator, but by the time this newsletter is published, this might already be outdated. We are keeping an eye on news from science and industry and will continue working on telling viable products apart from hype and PR.

If you are in need of particular types of hardware (fancy accelerators, GPUs, ARM, Kunluns, Graphcore, NVIDIA DIGITS etc.) not provided through our local infrastructure, please contact us (hpc-drift@usit.uio.no), and we'll try to help you as best we can.

Also, if you have a computational challenge where your laptop is too small but a full-blown HPC solution is overkill, it might be worth checking out NREC. This service can provide you with your own dedicated server, with a range of operating systems to choose from.

With the ongoing turmoil around computing architectures, we are also looking into RISC-V. The European Processor Initiative is aiming for ARM and RISC-V, and UiO needs to stay on top of things.

With the advent of integrated accelerators (formerly known as GPUs) with cache-coherent memory shared among all execution units, including the accelerators (as in AMD MI300 and NVIDIA Grace Hopper), these might be of interest for early adopters. Reach out if this sounds interesting.

Publication tracker

The Division for Research, Dissemination and Education (RDE) is interested in keeping track of publications in which computation on RDE services is involved. We greatly appreciate an email to:

hpc-publications@usit.uio.no

about any publications (including in the general media). If you would like to cite use of our services, please follow this information.

Fox van Gogh

Slightly meta van Gogh impression of Fox - courtesy of Stable Diffusion run on Fox through ondemand.educloud.no.
Published Aug. 21, 2025 11:26 AM - Last modified Aug. 26, 2025 11:26 AM