
TSD Operational Log - Page 10

Published Oct. 1, 2019 1:32 PM

UPDATE, 09:30: We are starting to unmount the NFS shares from /cluster on all machines.

We have now solved the problems we encountered on Monday and are ready to replace the NFS-exporter.

The work will start on Thursday 3rd October at 09:00 CET. We expect to be finished by the end of the day, possibly earlier.

During the maintenance, we have to unmount /cluster on all virtual machines (VMs) that mount it. This means that the /cluster/projects/pXX areas will be unavailable on the VMs, and it will not be possible to use the module load system for software on the VMs. Some VMs might also require a reboot.
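If you are unsure whether a given Linux VM currently has /cluster mounted (for example, to check when it comes back after the maintenance), a minimal check along these lines can be used. This is only an illustrative sketch, not a TSD-provided tool:

    import os

    # Check whether the /cluster NFS share is currently mounted on this VM,
    # e.g. to verify that it has come back after the maintenance.
    if os.path.ismount("/cluster"):
        print("/cluster is mounted - project areas and software modules are available")
    else:
        print("/cluster is not mounted - wait until the maintenance is finished")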

Jobs on Colossus will continue to run as normal, but it will not be possible to submit new jobs during the stop.

Do not run jobs on VMs that need data from /cluster or software modules. If you do so, we will have to kill them to unmount the /cluster area. Also, if the VM needs to be rebooted, all ru...

Published Sep. 27, 2019 7:31 AM

We are currently performing maintenance on the self service portal and the data portal.

Published Sep. 26, 2019 9:41 AM

UPDATE: Unfortunately, we encountered some unforeseen problems and were not able to switch to the new NFS-exporter today. The system is now back in normal production using the old exporter, and you can continue to work as normal. We hope to solve the problems quickly and will announce a new date for replacing the NFS-exporter soon.

We are sorry for the inconvenience.

 

We will replace the existing NFS-exporter on Colossus starting on Monday, 30th September, at 09:00 CET, and continue working throughout the day.

We will stop the NFS export by unmounting it on all virtual machines, and some may also require a reboot.

You will not be able to run jobs on VMs that need data from /cluster or software modules. If we have to reboot the VM to unmount /cluster, the running jobs will also be killed.

Please save your data before the maintenance window, and follow our Operational Log for updates.

The...

Published Sep. 17, 2019 12:54 PM

We are experiencing issues with some services, which may lead to users being unable to log in to TSD through the VMware Horizon Client with the error "all available desktop sources are currently busy". We are investigating the cause of this and working on a fix.

Published Sep. 16, 2019 11:35 AM

Dear TSD User

Due to issues with part of the login infrastructure, which are preventing some projects from logging in, we need to perform unplanned maintenance on the view-ous login gateway. This means that login sessions for p22, p149, p191, p192, p321, and p410 will be suspended while we reboot. Apologies for the inconvenience.

Published Aug. 27, 2019 8:41 AM

We are experiencing issues with some services, which may lead to users being unable to log in to TSD through the VMware Horizon Client with the error "all available desktop sources are currently busy". We are investigating the cause of this and working on a fix.

Published Aug. 25, 2019 10:54 AM

Dear TSD User

We are experiencing issues with Windows login and are working to fix it.

Published Aug. 22, 2019 3:18 PM

Dear TSD User

We are experiencing issues with Colossus, which is delaying jobs from being run. We are working to fix the problem.

Published Aug. 19, 2019 9:04 AM

As previously announced, we are starting today at 09:00 and will continue working throughout the day. Colossus will not be available during this period. The maintenance includes an upgrade of both the network and the NFS export.

Please note that this means the /cluster file system will be unavailable during the maintenance stop, and some of the VMs mounting /cluster might need to be rebooted.

No currently running jobs will be canceled due to the stop, but jobs that will not be able to finish before 09:00 on Monday will be held in the queue until after the maintenance.

Update:

10:56

We have partially completed the upgrade, and Colossus is ready to use again. Due to a hardware error, we were unable to replace the NFS-export machine; we will address this later. We have also run a command that should prevent crashes similar to the one that happened yesterday.

Published Aug. 18, 2019 8:40 AM

We are having issues with Windows and Linux login, and are working to fix them.

Published Aug. 17, 2019 7:36 PM

Dear TSD User

The NFS export of /cluster to project VMs is currently down. We are diagnosing the issue and working to fix it.

Published Aug. 14, 2019 11:24 AM

Dear TSD users,

The /cluster file system was down between 10:15 and 11:10 due to a crash of one of the file system daemons. The file system is now up again, but many jobs on Colossus have likely crashed in the meantime, so please check your jobs. The VMs mounting /cluster will also have experienced problems.
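If you want to list jobs that may have been affected, a sketch like the following can be run on Colossus; it simply wraps the standard Slurm sacct command, and the time window and job states shown are assumptions you should adjust:

    import subprocess

    # List jobs that ended abnormally around the outage window (roughly 10:15-11:10).
    result = subprocess.run(
        [
            "sacct",
            "--starttime=2019-08-14T10:00",
            "--endtime=2019-08-14T12:00",
            "--state=FAILED,NODE_FAIL,CANCELLED",
            "--format=JobID,JobName,State,ExitCode",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)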

Things should be back to normal again now, but please don't hesitate to contact us if you're still experiencing problems.

Our apologies for the inconvenience.

-- 
The TSD team.

Published Aug. 12, 2019 8:25 AM

We are experiencing issues with some services, which may lead to users being unable to log in to TSD through the VMware Horizon Client. We are investigating the cause of this and working on a fix.

Update:

- https://view.tsd.usit.no/ is up again.

Published Aug. 6, 2019 12:09 PM

Dear TSD User

We discovered that, due to infrastructure issues, the self service portal's QR code generation did not work as intended from Monday until today at 12:00. If you tried to reset your QR code during this period, we kindly ask you to do so again.

Published July 31, 2019 10:09 AM

We are experiencing issues with some services, which may lead to some users being unable to log in to TSD through ThinLinc. We are investigating the cause of this and working on a fix.

Published July 4, 2019 9:03 AM

TSD's self service portal will be unavailable for a short period at 09:15 on 2019-07-04. We will update this notice with more information and more precise time frames shortly.

Our apologies for any inconvenience this might cause.
 

Published June 27, 2019 3:16 PM

TSD's self service portal will be unavailable for a short period at 10:00 on 2019-06-28. We will update this notice with more information and more precise time frames shortly.

Our apologies for any inconvenience this might cause.

-- 
Best regards,
TSD

Published June 25, 2019 9:17 AM

The self service portal will be unavailable for a short period while the database group performs an upgrade.

Published June 24, 2019 11:39 AM

Dear TSD User

As planned and announced, we have shut down sftp data transfers to and from TSD. For data import and export, please use https://data.tsd.usit.no - the new data transfer service works in all major browsers as long as JavaScript and cookies are enabled. If you prefer to use the command line, or need further assistance, please contact our user support.

Published June 20, 2019 9:48 AM

There will be a scheduled minor upgrade of PostgreSQL on the 25th of June from 08:00 to 09:30.

During this downtime, applications using PostgreSQL will not work, as we will restart the database in your project. Other services inside TSD will continue working as normal.
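Applications in your project that connect to PostgreSQL may want to retry their connection until the restart has completed. Below is a minimal sketch of such a retry loop; the connection parameters are placeholders, not actual TSD settings:

    import time
    import psycopg2

    def connect_with_retry(retries=20, delay=30, **conn_params):
        # Keep retrying the connection while the database is being restarted.
        for _ in range(retries):
            try:
                return psycopg2.connect(**conn_params)
            except psycopg2.OperationalError:
                time.sleep(delay)
        raise RuntimeError("Database still unavailable after the maintenance window")

    # Placeholder parameters - replace with your project's own settings.
    conn = connect_with_retry(dbname="mydb", user="myuser", host="localhost")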

 

Published June 18, 2019 1:28 PM

Dear TSD users,

selfservice.tsd.usit.no is currently unavailable. We are working on getting it back up again as quickly as possible.

--
Best regards,
TSD

Published June 17, 2019 8:38 AM

We are experiencing issues with some services, which may lead to some users being unable to log in to TSD. We are investigating the cause of this and working on a fix.

Published June 7, 2019 10:49 AM

The DRAGEN node is now accessible on Colossus and can take Slurm workloads. Please read the updated docs:

/english/services/it/research/sensitive-data/use-tsd/hpc/dragen.html

Abdulrahman @ TSD

Published June 6, 2019 3:57 PM

We are doing maintenance on TSD login from 16:00 until 18:00 today. During the maintenance, new login sessions will not be possible, but active sessions will continue working.

Published June 5, 2019 8:50 PM

The Colossus file system is having issues at the moment, making the cluster unusable. We are working on fixing it.

Update: The file system is up again. The problems started around 16:15 today and lasted until 21:00. During that time, it is likely that jobs on Colossus crashed, so please check your results. It is also likely that the problems caused NFS hangs on the Linux VMs that mount /cluster.