diff --git a/bin/ingest.py b/bin/ingest.py
index e75bfdaa3d..aebe601b50 100755
--- a/bin/ingest.py
+++ b/bin/ingest.py
@@ -294,6 +294,7 @@ def find_book():
             meta["path"],
             "cdrom",
             f"{year}-{venue_name.lower()}-{volume_name}.pdf",
+            f"{venue_name.lower()}-{year}.{volume_name}.pdf",
         ),
         os.path.join(meta["path"], "cdrom", f"{venue_name.upper()}-{year}.pdf"),
     ]
diff --git a/data/xml/2023.nejlt.xml b/data/xml/2023.nejlt.xml
new file mode 100644
index 0000000000..c4cc5d8cc7
--- /dev/null
+++ b/data/xml/2023.nejlt.xml
@@ -0,0 +1,245 @@
+
+
+
+ Northern European Journal of Language Technology, Volume 9
+ LeonDerczynski
+ Linköping University Electronic Press
+ Linköping, Sweden
+ https://doi.org/10.3384/nejlt.2000-1533.9.1
+ 2023
+ 2023.nejlt-1
+ nejlt
+
+
+ Resource papers as registered reports: a proposal
+ Emielvan Miltenburg
+ This is a proposal for publishing resource papers as registered reports in the Northern European Journal of Language Technology. The idea is that authors write a data collection plan with a full data statement, to the extent that it can be written before data collection starts. Once the proposal is approved, publication of the final resource paper is guaranteed, as long as the data collection plan is followed (modulo reasonable changes due to unforeseen circumstances). This proposal changes the reviewing process from an antagonistic to a collaborative enterprise, and hopefully encourages NLP researchers to develop and publish more high-quality datasets. The key advantage of this proposal is that it helps to promote responsible resource development (through constructive peer review) and to avoid research waste.
+ 2023.nejlt-1.1
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4884
+ van-miltenburg-2023-resource
+
+
+ <fixed-case>PARSEME</fixed-case> Meets <fixed-case>U</fixed-case>niversal <fixed-case>D</fixed-case>ependencies: Getting on the Same Page in Representing Multiword Expressions
+ AgataSavary
+ SaraStymne
+ Verginica BarbuMititelu
+ NathanSchneider
+ CarlosRamisch
+ JoakimNivre
+ Multiword expressions (MWEs) are challenging and pervasive phenomena whose idiosyncratic properties show notably at the levels of lexicon, morphology, and syntax. Thus, they should best be annotated jointly with morphosyntax. We discuss two multilingual initiatives, Universal Dependencies and PARSEME, addressing these annotation layers in cross-lingually unified ways. We compare the annotation principles of these initiatives with respect to MWEs, and we put forward a roadmap towards their gradual unification. The expected outcomes are more consistent treebanking and higher universality in modeling idiosyncrasy.
+ 2023.nejlt-1.2
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4453
+ savary-etal-2023-parseme-meets
+
+
+ Barriers and enabling factors for error analysis in <fixed-case>NLG</fixed-case> research
+ Emielvan Miltenburg
+ MirunaClinciu
+ OndřejDušek
+ DimitraGkatzia
+ StephanieInglis
+ LeoLeppänen
+ SaadMahamood
+ StephanieSchoch
+ CraigThomson
+ LuouWen
+ Earlier research has shown that few studies in Natural Language Generation (NLG) evaluate their system outputs using an error analysis, despite known limitations of automatic evaluation metrics and human ratings. This position paper takes the stance that error analyses should be encouraged, and discusses several ways to do so. This paper is based on our shared experience as authors as well as a survey we distributed as a means of public consultation. We provide an overview of existing barriers to carrying out error analyses, and propose changes to improve error reporting in the NLG literature.
+ 2023.nejlt-1.3
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4529
+ van-miltenburg-etal-2023-barriers
+
+
+ Benchmark for Evaluation of <fixed-case>D</fixed-case>anish Clinical Word Embeddings
+ Martin SundahlLaursen
+ Jannik SkyttegaardPedersen
+ Pernille JustVinholt
+ Rasmus SøgaardHansen
+ Thiusius RajeethSavarimuthu
+ In natural language processing, benchmarks are used to track progress and identify useful models. Currently, no benchmark for Danish clinical word embeddings exists. This paper describes the development of a Danish benchmark for clinical word embeddings.
+ The clinical benchmark consists of ten datasets: eight intrinsic and two extrinsic. Moreover, we evaluate word embeddings trained on text from the clinical domain, general practitioner domain and general domain on the established benchmark. All the intrinsic tasks of the benchmark are publicly available.
+ 2023.nejlt-1.4
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4132
+ laursen-etal-2023-benchmark
+
+
+ <fixed-case>NL</fixed-case>-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
+ KaustubhDhole
+ VarunGangal
+ SebastianGehrmann
+ AadeshGupta
+ ZhenhaoLi
+ SaadMahamood
+ AbinayaMahadiran
+ SimonMille
+ AshishShrivastava
+ SamsonTan
+ TongshangWu
+ JaschaSohl-Dickstein
+ JinhoChoi
+ EduardHovy
+ OndřejDušek
+ SebastianRuder
+ SajantAnand
+ NagenderAneja
+ RabinBanjade
+ LisaBarthe
+ HannaBehnke
+ IanBerlot-Attwell
+ ConnorBoyle
+ CarolineBrun
+ Marco Antonio SobrevillaCabezudo
+ SamuelCahyawijaya
+ EmileChapuis
+ WanxiangChe
+ MukundChoudhary
+ ChristianClauss
+ PierreColombo
+ FilipCornell
+ GautierDagan
+ MayukhDas
+ TanayDixit
+ ThomasDopierre
+ Paul-AlexisDray
+ SuchitraDubey
+ TatianaEkeinhor
+ Marco DiGiovanni
+ TanyaGoyal
+ RishabhGupta
+ LouanesHamla
+ SangHan
+ FabriceHarel-Canada
+ AntoineHonoré
+ IshanJindal
+ PrzemysławJoniak
+ DenisKleyko
+ VenelinKovatchev
+ KalpeshKrishna
+ AshutoshKumar
+ StefanLanger
+ Seungjae RyanLee
+ Corey JamesLevinson
+ HualouLiang
+ KaizhaoLiang
+ ZhexiongLiu
+ AndreyLukyanenko
+ VukosiMarivate
+ Gerardde Melo
+ SimonMeoni
+ MaxineMeyer
+ AfnanMir
+ Nafise SadatMoosavi
+ NiklasMeunnighoff
+ Timothy Sum HonMun
+ KentonMurray
+ MarcinNamysl
+ MariaObedkova
+ PritiOli
+ NivranshuPasricha
+ JanPfister
+ RichardPlant
+ VinayPrabhu
+ VasilePais
+ LiboQin
+ ShahabRaji
+ Pawan KumarRajpoot
+ VikasRaunak
+ RoyRinberg
+ NicholasRoberts
+ Juan DiegoRodriguez
+ ClaudeRoux
+ VasconcellosSamus
+ AnanyaSai
+ RobinSchmidt
+ ThomasScialom
+ TshephishoSefara
+ SaqibShamsi
+ XudongShen
+ YiwenShi
+ HaoyueShi
+ AnnaShvets
+ NickSiegel
+ DamienSileo
+ JamieSimon
+ ChandanSingh
+ RomanSitelew
+ PriyankSoni
+ TaylorSorensen
+ WilliamSoto
+ AmanSrivastava
+ AdityaSrivatsa
+ TonySun
+ MukundVarma
+ ATabassum
+ FionaTan
+ RyanTeehan
+ MoTiwari
+ MarieTolkiehn
+ AthenaWang
+ ZijianWang
+ ZijieWang
+ GloriaWang
+ FuxuanWei
+ BryanWilie
+ Genta IndraWinata
+ XinyuWu
+ WitoldWydmanski
+ TianbaoXie
+ UsamaYaseen
+ MichaelYee
+ JingZhang
+ YueZhang
+ Data augmentation is an important method for evaluating the robustness of and enhancing the diversity of training data for natural language processing (NLP) models. In this paper, we present NL-Augmenter, a new participatory Python-based natural language (NL) augmentation framework which supports the creation of transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of NL tasks annotated with noisy descriptive tags. The transformations incorporate noise, intentional and accidental human mistakes, socio-linguistic variation, semantically-valid style, syntax changes, as well as artificial constructs that are unambiguous to humans. We demonstrate the efficacy of NL-Augmenter by using its transformations to analyze the robustness of popular language models. We find different models to be differently challenged on different tasks, with quasi-systematic score decreases.
+ The infrastructure, datacards, and robustness evaluation results are publicly available on GitHub for the benefit of researchers working on paraphrase generation, robustness analysis, and low-resource NLP.
+ 2023.nejlt-1.5
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4725
+ dhole-etal-2023-nl
+
+
+ On the Relationship between Frames and Emotionality in Text
+ EnricaTroiano
+ RomanKlinger
+ SebastianPadó
+ Emotions, which are responses to salient events, can be realized in text implicitly, for instance with mere references to facts (e.g., “That was the beginning of a long war”). Interpreting affective meanings thus relies on the reader's background knowledge, but that is hardly modeled in computational emotion analysis. Much work in the field is focused on the word level and treats individual lexical units as the fundamental emotion cues in written communication. We shift our attention to word relations. We leverage Frame Semantics, a prominent theory for the description of predicate-argument structures, which matches the study of emotions: frames build on a “semantics of understanding” whose assumptions rely precisely on people's world knowledge. Our overarching question is whether and to what extent the events that are represented by frames possess an emotion meaning. To carry out a large corpus-based correspondence analysis, we automatically annotate texts with emotions as well as with FrameNet frames and roles, and we analyze the correlations between them. Our main finding is that substantial groups of frames have an emotional import. With an extensive qualitative analysis, we show that they capture several properties of emotions that are purported by theories from psychology. These observations boost insights on the two strands of research that we bring together: emotion analysis can profit from the event-based perspective of frame semantics; in return, frame semantics gains a better grip of its position vis-à-vis emotions, an integral part of word meanings.
+ 2023.nejlt-1.6
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4361
+ troiano-etal-2023-relationship
+
+
+ An Empirical Configuration Study of a Common Document Clustering Pipeline
+ AntonEklund
+ MonaForsman
+ FrankDrewes
+ Document clustering is frequently used in applications of natural language processing, e.g., to classify news articles or to create topic models. In this paper, we study document clustering with the common clustering pipeline that includes vectorization with BERT or Doc2Vec, dimension reduction with PCA or UMAP, and clustering with K-Means or HDBSCAN. We discuss the interactions of the different components in the pipeline, parameter settings, and how to determine an appropriate number of dimensions. The results suggest that BERT embeddings combined with UMAP dimension reduction to no less than 15 dimensions provide a good basis for clustering, regardless of the specific clustering algorithm used. Moreover, while UMAP performed better than PCA in our experiments, tuning the UMAP settings showed little impact on the overall performance. Hence, we recommend configuring UMAP so as to optimize its time efficiency. According to our topic model evaluation, the combination of BERT and UMAP, also used in BERTopic, performs best. A topic model based on this pipeline typically benefits from a large number of clusters.
+ 2023.nejlt-1.7
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4396
+ eklund-etal-2023-empirical
+
+
+ Prevention or Promotion?
+ Predicting Author’s Regulatory Focus
+ AswathyVelutharambath
+ KaiSassenberg
+ RomanKlinger
+ People differ fundamentally in what motivates them to pursue a goal and how they approach it. For instance, some people seek growth and show eagerness, whereas others prefer security and are vigilant. The concept of regulatory focus is employed in psychology to explain and predict this goal-directed behavior of humans, underpinned by two unique motivational systems – the promotion and the prevention system. Traditionally, text analysis methods using closed vocabularies are employed to assess the distinctive linguistic patterns associated with the two systems. From an NLP perspective, automatically detecting the regulatory focus of individuals from text provides valuable insights into the behavioral inclinations of the author, finding its applications in areas like marketing or health communication. However, the concept never made an impactful debut in computational linguistics research. To bridge this gap, we introduce the novel task of regulatory focus classification from text and present two complementary German datasets – (1) experimentally generated event descriptions and (2) manually annotated short social media texts used for evaluating the generalizability of models on real-world data. First, we conduct a correlation analysis to verify if the linguistic footprints of regulatory focus reported in psychology studies are observable and to what extent in our datasets. For automatic classification, we compare closed-vocabulary-based analyses with a state-of-the-art BERT-based text classification model and observe that the latter outperforms lexicon-based approaches on experimental data and is notably better on out-of-domain Twitter data.
+ 2023.nejlt-1.8
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4561
+ velutharambath-etal-2023-prevention
+
+
+ Unsupervised Text Embedding Space Generation Using Generative Adversarial Networks for Text Synthesis
+ Jun-MinLee
+ Tae-BinHa
+ Generative Adversarial Networks (GAN) is a model for data synthesis, which creates plausible data through the competition of generator and discriminator. Although GAN application to image synthesis is extensively studied, it has inherent limitations to natural language generation. Because natural language is composed of discrete tokens, a generator has difficulty updating its gradient through backpropagation; therefore, most text-GAN studies generate sentences starting with a random token based on a reward system. Thus, the generators of previous studies are pre-trained in an autoregressive way before adversarial training, causing data memorization, in which synthesized sentences reproduce the training data. In this paper, we synthesize sentences using a framework similar to the original GAN. More specifically, we propose Text Embedding Space Generative Adversarial Networks (TESGAN) which generate continuous text embedding spaces instead of discrete tokens to solve the gradient backpropagation problem. Furthermore, TESGAN conducts unsupervised learning which does not directly refer to the text of the training data to overcome the data memorization issue. By adopting this novel method, TESGAN can synthesize new sentences, showing the potential of unsupervised learning for text synthesis. We expect to see extended research combining Large Language Models with a new perspective of viewing text as a continuous space.
+ 2023.nejlt-1.9
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4855
+ lee-ha-2023-unsupervised
+
+
+ <fixed-case>QUA</fixed-case>-<fixed-case>RC</fixed-case>: the semi-synthetic dataset of multiple choice questions for assessing reading comprehension in <fixed-case>U</fixed-case>krainian
+ MariiaZyrianova
+ DmytroKalpakchi
+ In this article we present the first dataset of multiple choice questions for assessing reading comprehension in Ukrainian. The dataset is based on the texts from the Ukrainian national tests for reading comprehension, and the MCQs themselves are created semi-automatically in three stages. The first stage was to use GPT-3 to generate the MCQs zero-shot, the second stage was to select MCQs of sufficient quality and revise the ones with minor errors, whereas the final stage was to expand the dataset with MCQs written manually. The dataset is created by native speakers of Ukrainian, one of whom is also a language teacher. The resulting corpus has slightly more than 900 MCQs, of which only 43 MCQs could be kept exactly as generated by GPT-3.
+ 2023.nejlt-1.10
+ https://doi.org/10.3384/nejlt.2000-1533.2023.4939
+ zyrianova-kalpakchi-2023-qua
+
+
+
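Note on the bin/ingest.py hunk above: the added f-string registers one more file-name pattern, e.g. "nejlt-2023.1.pdf" for this volume, among the locations checked when resolving a volume's book PDF. The sketch below is a minimal, hypothetical illustration of that candidate-path lookup, assuming each pattern is tried as a separate candidate and the first path that exists on disk wins; candidate_book_pdfs and first_existing are illustrative names, not functions defined in ingest.py.

# Hypothetical sketch, not the actual find_book() implementation in ingest.py.
import os


def candidate_book_pdfs(meta, venue_name, volume_name, year):
    # Candidate locations for a volume's book PDF, mirroring the patterns
    # visible in the diff; meta["path"] points at the ingest directory.
    cdrom = os.path.join(meta["path"], "cdrom")
    return [
        os.path.join(cdrom, f"{year}-{venue_name.lower()}-{volume_name}.pdf"),
        # Pattern added by this diff, e.g. "nejlt-2023.1.pdf"
        os.path.join(cdrom, f"{venue_name.lower()}-{year}.{volume_name}.pdf"),
        os.path.join(cdrom, f"{venue_name.upper()}-{year}.pdf"),
    ]


def first_existing(paths):
    # Return the first candidate that exists on disk, or None if none do.
    return next((p for p in paths if os.path.exists(p)), None)


# Example: first_existing(candidate_book_pdfs({"path": "nejlt"}, "nejlt", "1", "2023"))
# probes nejlt/cdrom/2023-nejlt-1.pdf, then nejlt/cdrom/nejlt-2023.1.pdf, and so on.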