Natural language processing (NLP) has played an important role in several areas of computer science, and requirements engineering (RE) is no exception. In recent years, the advent of massive and very heterogeneous natural language (NL) RE-relevant sources, such as tweets and app reviews, has attracted even more interest from the RE community.
The main goal of the NLP4RE workshop is to serve as a regular meeting point for researchers working on NLP technologies in RE. NLP4RE aims to promote the timely communication of advances, challenges and barriers that researchers encounter, and the workshop wishes to provide a friendly venue where collaborations may emerge naturally.
The NLP4RE Workshop is co-located with REFSQ'23. Check out the REFSQ'23 Conference here: https://2023.refsq.org
The workshop welcomes contributions regarding both theory and application of NLP technologies in RE. We also encourage contributions that highlight challenges faced by industrial practitioners when dealing with requirements expressed in NL, and the experiences of academics in technology transfer.
We are interested in Tool Papers (see the Call for Papers), in which the authors provide a brief description of an NLP tool for RE, and a plan for a tool demo at the workshop.
We are also interested in Report Papers (see the Call for Papers), in which the authors provide an overview of the current and past research of their teams. These contributions do not require novelty with respect to previous work, because the main goal of the workshop is to foster discussion and networking.
Within the area of NLP for RE, the topics of interest of the workshop include but are not limited to:
8 pages (plus 1 page for references), describing novel technical solutions for the application of NLP technologies to RE-relevant artifacts. The papers in this category include preliminary solutions to RE problems, with an early validation.
8 pages (plus 1 page for references), describing practical experiences in the application of NLP technologies to RE-relevant artifacts. The papers in this category include experience reports, industrial case studies, controlled experiments, and other types of empirical studies conducted to practically assess existing technical solutions.
4 pages (plus 1 page for references), in which the authors provide an overview of the current and past research of their team, describing what they have been doing on the workshop’s topics, and/or what they are doing, and/or what they plan to do. These contributions do not require novelty with respect to previous work, and are oriented to foster discussion and networking. A non-mandatory template for Report Papers can be downloaded here. Please note that we do not allow submission of Report Papers from teams who submitted this type of paper to NLP4RE’19 and NLP4RE’20.
4 pages (plus 1 page for references), outlining roadmaps for research in the workshop’s topics, including industrial and research challenges based on currently available knowledge. Specifically, we encourage contributions that highlight challenges faced by industrial practitioners when dealing with requirements expressed in NL, and faced by academics in technology transfer studies.
4 pages (including screenshots and references), in which the authors provide a short description of an NLP-based tool for RE with screenshots, together with a clear plan for a demo at the workshop. These contributions do not require novelty with respect to previous work, and the authors can also showcase tools that have been presented in past conferences and workshops. These papers will be evaluated based on the potential interest raised by the tool, and based on the clarity of the plan for the demo.
Submissions should be written in English and submitted in PDF format (page size A4, single column) formatted according to the CEUR Proceedings Style:
All papers will be peer-reviewed by three members of the Program Committee, and will appear in the CEUR Proceedings, with an ISBN.
Automated classifiers are increasingly used in software engineering (SE) for labeling previously unseen SE data. In the NLP4RE community, researchers have proposed automated classifiers that predict whether a requirement pertains to a certain quality characteristic, whether an app review represents a bug report or a feature request, and more. We are accustomed to reading papers that present impressive results, with precision and recall often above 90%. But are those results a credible representation of the classifier's performance in operational environments? In this talk, I will describe the ECSER (Evaluating Classifiers in Software Engineering Research) pipeline, which assists researchers by suggesting a series of steps for the rigorous execution and evaluation of automated classification research in SE. Through excerpts from conducted replication studies, I will show how ECSER allows researchers to draw more nuanced conclusions, sometimes contradicting the claims of the original studies. ECSER can be seen as a call to action for increasing the maturity of the NLP4RE community. The presentation slides are available here.
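As background for the keynote's discussion of precision and recall, here is a minimal, illustrative sketch (not part of the ECSER pipeline itself, and with hypothetical labels) of how these two metrics are computed for a binary classifier, such as one predicting whether an app review is a bug report:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # how many predicted positives are correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # how many true positives were found
    return precision, recall

# Hypothetical example: gold labels vs. classifier output (1 = bug report, 0 = other)
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```

Note that, as the keynote argues, high values of these metrics on a held-out test set do not by themselves guarantee comparable performance in operational environments.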
Fabiano Dalpiaz is an associate professor in the Department of Information and Computing Sciences at Utrecht University in the Netherlands. He is principal investigator in Utrecht University's Requirements Engineering lab. In his research, Fabiano blends artificial intelligence with information visualization to increase the quality of the requirements engineering process and artifacts, with the aim of delivering higher-quality software. His research is often validated in vivo through collaborations with the software industry. He serves or has served as program co-chair of RE 2023, REFSQ 2021, RCIS 2020, and the RE@Next! track of RE 2021. He was the organization chair of the REFSQ 2018 conference, and he is an associate editor for the Requirements Engineering Journal and the Business & Information Systems Engineering Journal. He often serves on the program committee of conferences such as RE, CAiSE, REFSQ, ICSE, and AAMAS. You can find more about Fabiano at his website.
9:00 - 9:10 - Introduction to NLP4RE
9:10 - 9:40 - Comparing general purpose pre-trained Word and Sentence embeddings for Requirements Classification by Federico Cruciani, Samuel Moore, Chris Nugent and Heiko Struebing.
9:40 - 10:00 - Chatbots4Mobile: Feature-oriented Knowledge Base Generation Using Natural Language by Quim Motger, Xavier Franch and Jordi Marco.
10:00 - 10:30 - Rule-based NLP vs ChatGPT in ambiguity detection, a preliminary study by Alessandro Fantechi, Stefania Gnesi and Laura Semini.
11:00 - 11:30 - Let’s Stop Building at the Feet of Giants: Recovering unavailable Requirements Quality Artifacts by Julian Frattini, Lloyd Montgomery, Davide Fucci, Jannik Fischbach, Michael Unterkalmsteiner and Daniel Mendez. Preprint available.
11:30 - 12:00 - From US to Domain Models: Recommending Relationships between Entities by Maxim Bragilovski, Fabiano Dalpiaz and Arnon Sturm.
12:00 - 12:30 - Understanding Developers Privacy Concerns Through Reddit Thread Analysis by Jonathan Parsons, Michael Schrider, Oyebanjo Ogunlela and Sepideh Ghanavati.
14:00 - 15:00 - Keynote: On the quest for more credible results in ML4SE research by Dr. Fabiano Dalpiaz
15:00 - 15:30 - Extended Introduction to NLP4RE and discussion by Workshop Participants (YOU and US!)
~16:30 - 17:15 - Fun activity: "Guess what?!"
17:15 - 17:30 - Wrap Up
20:00 - Organized workshop dinner at Restaurant Karakala
(for those staying at the conference hotel, we meet at 19:30 in the lobby)
For questions about the workshop, reach us via e-mail.