Messages
In this post you will find the detailed program for the WNNLP 2022 workshop, which will be held on June 9th, 12:00-14:00.
12:00 - 12:10 Welcome and awards
12:10 - 12:15 Task overview: Neural Machine Translation
12:15 - 12:20 Anastasiia Grishina
12:20 - 12:25 William Ho
12:25 - 12:30 Lucas Wagner and Tim Sigl
12:30 - 12:35 Jan Šamánek, Anna Palatkina and Elina Sigdel
12:35 - 12:40 Henrik Syversen Johansen
12:40 - 12:45 Task overview: Semantic Parsing
12:45 - 12:50 Matias Jentoft
12:50 - 12:55 Huiling You
12:55 - 13:00 -- Break --
13:00 - 13:05 Task overview: Word Sense Induction
13:05 - 13:10 Thomas Zemp
13:10 - 13:15 Aksel Tjønn
13:15 - 13:20 Task overview: Targeted Sentiment Analysis
13:20 - 13:25 Ece Cetinoglu, Rohullah Akbari and Liang ...
All those registered for a track in the IN5550 home exam should now have received an e-mail invitation to join the WNNLP Program Committee in Easychair. Note that you will need to accept the invitation in order to complete the next *obligatory* part of the home exam (following submission of your papers): reviewing of other submitted papers.
If you have not received an invitation, please get in touch with General Chair Lilja Øvrelid (liljao@ifi.uio.no) as soon as possible.
Hi, here's the leaderboard for the third obligatory assignment. As promised, the first 3 teams will get a bonus point, congratulations!
The same pattern holds again: this year's scores are higher than last year's, when the best performance was only 91.16%. Good job, everyone!
Team | F1 |
---|---|
Herman | 92.74% |
William | 91.45% |
Sigrid, Helene & Matias (teamed up) | 91.26% |
We are now entering the home exam phase of the IN5550 course.
No later than April 29, everyone should announce their team composition and choice of track by emailing the course contact address in5550-help@ifi.uio.no, or via Mattermost.
Please find the slides and video with the overview of the tracks at the course schedule page. Detailed information about each track (with data, papers and code) is available at the GitHub repository.
Good luck with finishing your Obligatory 3 and with the exam as well!
Hi, here's the leaderboard for the second obligatory assignment. As promised, the first 3 teams will get a bonus point, congratulations!
Again, this year's scores are higher than last year's: the best accuracy was only 82.5% back then. So don't be sad if you're not among the first 3 teams; we saw a lot of great work when grading your assignments!
Team | Accuracy |
---|---|
Alexandra, Annika & Marie (teamed up) | 85.23% |
Tellef | 84.77% |
William / Sigrid, Helene & Matias Jentoft (teamed up) | 84.26% |
The third obligatory assignment is published in the Git repository. We will delve deep into the current state-of-the-art approach for most NLP tasks -- fine-tuning of large pre-trained contextualized language models.
The assignment is due April 22, make sure to start working on it early. We will cover contextualized language models in the upcoming lectures and group sessions.
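For those who want a head start, here is a minimal, hypothetical sketch of what fine-tuning a pre-trained contextualized model for classification can look like. It assumes the Hugging Face `transformers` library; the model name, hyperparameters and data are illustrative placeholders, not the assignment setup:

```python
# A minimal fine-tuning sketch (not the assignment solution), assuming the
# Hugging Face "transformers" library is installed. Model name, learning rate
# and the toy data below are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-cased"  # placeholder; the assignment may use a different model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["an example document", "another example document"]  # toy data
labels = torch.tensor([0, 1])

# Tokenize the (toy) dataset in one go; real data would be batched properly.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs are usually enough when fine-tuning
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()  # gradients flow through all transformer weights
    optimizer.step()
```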
Good luck!
The pre-recorded lectures for this week are now available. Note, however, that Vortex (the UiO CMS) seems to have some hiccups at the moment: if the video links do not appear to be active from the schedule, you can find them by directly accessing the video directory (see the files for the 10th session, `in5550_10_part*.mp4`): /studier/emner/matnat/ifi/IN5550/v22/videos/
The second obligatory assignment is now online, on the Git repository. It moves into the realm of document classification using pre-trained word embeddings and recurrent neural networks (RNNs).
The assignment is due March 21, but feel free to start working on it. We have already covered using pre-trained word embeddings at the group sessions and discussed RNNs at the Q&A session this week.
We will provide extensive practice with implementing RNNs in PyTorch at the group sessions next week and the week after.
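As a rough preview (not the assignment solution), here is a minimal sketch of a document classifier that feeds pre-trained word embeddings into an RNN in PyTorch; the embedding matrix, dimensions and toy data below are illustrative placeholders:

```python
# A minimal sketch of an RNN document classifier over pre-trained embeddings.
# The "pre-trained" vectors here are random placeholders for illustration only.
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    def __init__(self, pretrained_vectors, hidden_dim, num_classes):
        super().__init__()
        # Initialize the embedding layer from pre-trained vectors and freeze it.
        self.embedding = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        self.rnn = nn.LSTM(pretrained_vectors.size(1), hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, emb_dim)
        _, (hidden, _) = self.rnn(embedded)    # final hidden state of the LSTM
        return self.out(hidden[-1])            # class logits

# Toy usage: 1000-word vocabulary, 100-dimensional vectors, batch of 8 documents.
vectors = torch.randn(1000, 100)
model = RNNClassifier(vectors, hidden_dim=64, num_classes=2)
logits = model(torch.randint(0, 1000, (8, 20)))
```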
Good luck!
Hi, here's the leaderboard for the first obligatory assignment. As promised, the first 3 teams will get a bonus point, congratulations! The competition was fierce this year: the winner of last year's round of IN5550 scored "only" 47% F1. So don't be sad if you're not among the first 3 teams; we saw a lot of great work when grading your assignments!
Team | Macro F1 |
---|---|
Tellef | 53% |
Anna & Elina | 49% |
Herman | 48% |
The first obligatory assignment is now online, on the Git repository. It deals with building a simple neural document classifier using bag-of-words features.
The assignment is due February 14, but feel free to start working on it. We have already covered linear classifiers this week, and will cover training feed-forward neural networks next week.
We will provide extensive practice with building neural classifiers with PyTorch at the group sessions.
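To give a flavour of what such a classifier looks like in PyTorch, here is a minimal, illustrative sketch (not the assignment solution) of a feed-forward network over bag-of-words count vectors, with placeholder sizes and toy data:

```python
# A minimal feed-forward classifier over bag-of-words features in PyTorch.
# Vocabulary size, hidden size, class count and the toy batch are placeholders.
import torch
import torch.nn as nn

vocab_size, hidden_size, num_classes = 10_000, 128, 2

model = nn.Sequential(
    nn.Linear(vocab_size, hidden_size),   # bag-of-words counts -> hidden layer
    nn.ReLU(),
    nn.Linear(hidden_size, num_classes),  # hidden layer -> class logits
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a toy batch of 4 documents represented as count vectors.
bow = torch.zeros(4, vocab_size)
bow[torch.arange(4), torch.randint(0, vocab_size, (4,))] = 1.0
labels = torch.randint(0, num_classes, (4,))

optimizer.zero_grad()
loss = loss_fn(model(bow), labels)
loss.backward()
optimizer.step()
```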
The first set of pre-recorded lectures is out now, under the common title of "Supervised Machine Learning: From Linear Models to Neural Networks".
It consists of four shorter sub-lectures, and you can find the videos together with the slides at the course schedule page.
Please make sure to get yourself acquainted with the videos before our next Q&A session which will take place this Thursday, January 27, in Zoom.
Welcome to our IN5550 course which will guide you through deep learning applications to natural language processing!
The first introductory lecture this term will be held on Thursday, January 20, at 12:15. We will go through course logistics (including routines for assignments and the final project-based exam) and motivate the now dominant use of neural architectures in Natural Language Processing (and most other sub-fields of Artificial Intelligence). The first lecture will be virtual: you can find the room link in the course schedule and in our UiO GitHub repository. Look for further details and updates on the course web page (this page). You can also ask questions via our collective mailbox or Mattermost chat channel.
...