Registering for BSNLP 2021
https://2021.eacl.org/registration/
Keynote Speaker - Abstract
Marianna Apidianaki — Lexical Semantic Knowledge in Contextualised Representations
Given the high performance of pre-trained language models on natural language understanding tasks, an important strand of work has focused on investigating the linguistic knowledge encoded inside these models. Early interpretation work addressed structure-related aspects, while more recently the semantic knowledge encoded in language model representations has also started to be explored. In this talk, I will present two semantic tasks where traditional interpretation methodology succeeds to different degrees. I will show that semantic notions such as adjective intensity are encoded, and quite straightforward to detect, in contextualised representations. I will then describe a setting where it is harder to retrieve knowledge about noun properties by directly querying frozen representations, although this knowledge can still be leveraged by the models when they are exposed to data curated for this task.
Time Schedule
9:00-9:15 | Introduction
9:15-11:00 | Regular papers I
- HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish (Robert Mroczkowski, Piotr Rybak, Alina Wróblewska and Ireneusz Gawlik)
- BERTić - The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian (Nikola Ljubešić and Davor Lauc)
- Russian Paraphrasers: Paraphrase with Transformers (Alena Fenogenova)
- Creating an Aligned Russian Text Simplification Dataset from Language Learner Data (Anna Dmitrieva and Jörg Tiedemann)
11:00-11:30 | Coffee break
11:30-13:30 | Regular papers II
- Abusive Language Recognition in Russian (Kamil Saitov and Leon Derczynski)
- Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company’s Reputation (Nikolay Babakov, Varvara Logacheva, Olga Kozlova, Nikita Semenov and Alexander Panchenko)
- RuSentEval: Linguistic Source, Encoder Force! (Vladislav Mikhailov, Ekaterina Taktasheva, Elina Sigdel and Ekaterina Artemova)
- Exploratory Analysis of News Sentiment Using Subgroup Discovery (Anita Valmarska, Luis Cabrera-Diego, Elvys Linhares Pontes and Senja Pollak)
13:30-15:00 | Lunch break
15:00-16:00 | Keynote Speaker: Marianna Apidianaki — Lexical Semantic Knowledge in Contextualised Representations
16:00-16:10 | Break
16:10-16:30 | Shared Task overview
- Slav-NER: the 3rd Cross-lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across Slavic Languages (Jakub Piskorski, Bogdan Babych, Zara Kancheva, Olga Kanishcheva, Maria Lebedeva, Michał Marcińczuk, Preslav Nakov, Petya Osenova, Lidia Pivovarova, Senja Pollak, Pavel Přibáň, Ivaylo Radev, Marko Robnik-Sikonja, Vasyl Starko, Josef Steinberger and Roman Yangarber)
16:30-17:30 | Shared Task papers — Poster session at EACL Virtual Chair
- Multilingual Named Entity Recognition and Matching Using BERT and Dedupe for Slavic Languages (Marko Prelevikj and Slavko Zitnik)
- Priberam Labs at the 3rd Shared Task on SlavNER (Pedro Ferreira, Ruben Cardoso and Afonso Mendes)
- Multilingual Slavic Named Entity Recognition (Rinalds Vīksna and Inguna Skadina)
- Using a Frustratingly Easy Domain and Tagset Adaptation for Creating Slavic Named Entity Recognition Systems (Luis Adrián Cabrera-Diego, Jose G. Moreno and Antoine Doucet)
- Benchmarking Pre-trained Language Models for Multilingual NER: TraSpaS at the BSNLP2021 Shared Task (Marek Suppa and Ondrej Jariabka)
- Named Entity Recognition and Linking Augmented with Large-Scale Structured Data (Paweł Rychlikowski, Adrian Lancucki, Adam Kaczmarek and Bartłomiej Najdecki)
17:30-17:40 | Workshop End
Shared Task Description Paper
- Slav-NER: the 3rd Cross-lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across Slavic Languages (Jakub Piskorski, Bogdan Babych, Zara Kancheva, Olga Kanishcheva, Maria Lebedeva, Michał Marcińczuk, Preslav Nakov, Petya Osenova, Lidia Pivovarova, Senja Pollak, Pavel Přibáň, Ivaylo Radev, Marko Robnik-Sikonja, Vasyl Starko, Josef Steinberger and Roman Yangarber)
Accepted Papers for BSNLP 2021
- HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish (Robert Mroczkowski, Piotr Rybak, Alina Wróblewska and Ireneusz Gawlik)
- Russian Paraphrasers: Paraphrase with Transformers (Alena Fenogenova)
- Abusive Language Recognition in Russian (Kamil Saitov and Leon Derczynski)
- Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company’s Reputation (Nikolay Babakov, Varvara Logacheva, Olga Kozlova, Nikita Semenov and Alexander Panchenko)
- BERTić - The Transformer Language Model for Bosnian, Croatian, Montenegrin and Serbian (Nikola Ljubešić and Davor Lauc)
- RuSentEval: Linguistic Source, Encoder Force! (Vladislav Mikhailov, Ekaterina Taktasheva, Elina Sigdel and Ekaterina Artemova)
- Exploratory Analysis of News Sentiment Using Subgroup Discovery (Anita Valmarska, Luis Cabrera-Diego, Elvys Linhares Pontes and Senja Pollak)
- Creating an Aligned Russian Text Simplification Dataset from Language Learner Data (Anna Dmitrieva and Jörg Tiedemann)
Shared Task Papers for BSNLP 2021
- Multilingual Named Entity Recognition and Matching Using BERT and Dedupe for Slavic Languages (Marko Prelevikj and Slavko Zitnik)
- Priberam Labs at the 3rd Shared Task on SlavNER (Pedro Ferreira, Ruben Cardoso and Afonso Mendes)
- Multilingual Slavic Named Entity Recognition (Rinalds Vīksna and Inguna Skadina)
- Using a Frustratingly Easy Domain and Tagset Adaptation for Creating Slavic Named Entity Recognition Systems (Luis Adrián Cabrera-Diego, Jose G. Moreno and Antoine Doucet)
- Benchmarking Pre-trained Language Models for Multilingual NER: TraSpaS at the BSNLP2021 Shared Task (Marek Suppa and Ondrej Jariabka)
- Named Entity Recognition and Linking Augmented with Large-Scale Structured Data (Paweł Rychlikowski, Adrian Lancucki, Adam Kaczmarek and Bartłomiej Najdecki)