6th Workshop on Natural Language Processing for Requirements Engineering

REFSQ'23 Workshop, April 17th, 2023

Important Dates

  • Paper Submission: February 17th, 2023
  • Author notification: March 3rd, 2023
  • Camera Ready: March 10th, 2023
  • Workshop: April 17th, 2023

Workshop Overview

Natural language processing (NLP) has played an important role in several areas of computer science, and requirements engineering (RE) is no exception. In recent years, the advent of massive and highly heterogeneous natural language (NL) RE-relevant sources, such as tweets and app reviews, has attracted even more interest from the RE community.

The main goal of the NLP4RE workshop is to serve as a regular meeting point for researchers working on NLP technologies in RE. NLP4RE aims to promote the timely communication of advances, challenges, and barriers that researchers encounter, and to provide a friendly venue where collaborations can emerge naturally.

The NLP4RE Workshop is co-located with REFSQ'23. Check out the REFSQ'23 Conference here: https://2023.refsq.org

Contributions

The workshop welcomes contributions regarding both theory and application of NLP technologies in RE. We also encourage contributions that highlight challenges faced by industrial practitioners when dealing with requirements expressed in NL, and the experiences of academics in technology transfer.

We are interested in Tool Papers (see the Call for Papers), in which the authors provide a brief description of an NLP tool for RE, and a plan for a tool demo at the workshop.

We are also interested in Report Papers (see the Call for Papers), in which the authors provide an overview of their team's current and past research. These contributions do not require novelty with respect to previous work, because the main goal of the workshop is to foster discussion and networking.

Within the area of NLP for RE, the topics of interest of the workshop include but are not limited to:

  • Requirements quality assurance
  • App Review analysis and classification
  • Social media mining and analysis for RE
  • Bug report mining and analysis for RE
  • Requirements tracing
  • Requirements retrieval
  • Model generation
  • Test generation
  • Information extraction from legal documents
  • Information extraction from requirements
  • Dependency and relation extraction
  • Low-resource languages and RE
  • Domain-specific NLP for RE
  • Automated requirements management
  • Multi-modal requirements analysis 
  • Functional and non-functional requirements categorization
  • Formalisation of informal requirements
  • Question-answering for RE
  • Summarisation of requirements documents
  • Speech-to-text and speech analysis in requirements elicitation
  • Requirements datasets
  • Sustainability in RE
  • Ethics in RE
  • RE education
  • NLP for requirements other than “shall” requirements (e.g., user stories)

Call for Papers

Technical Design Papers

8 pages (plus 1 page for references), describing novel technical solutions for the application of NLP technologies to RE-relevant artifacts. The papers in this category include preliminary solutions to RE problems, with an early validation.

Experience Papers

8 pages (plus 1 page for references), describing practical experiences in the application of NLP technologies to RE-relevant artifacts. The papers in this category include experience reports, industrial case studies, controlled experiments, and other types of empirical studies conducted to practically assess existing technical solutions.

Report Papers

4 pages (plus 1 page for references), in which the authors provide an overview of their team's current and past research, describing what they have been doing on the workshop’s topics, and/or what they are doing, and/or what they plan to do. These contributions do not require novelty with respect to previous work, and are oriented to foster discussion and networking. A non-mandatory template for Report Papers can be downloaded here. Please note that we do not allow submission of Report Papers from teams who submitted these types of papers to NLP4RE’19 and NLP4RE’20.

Vision Papers

4 pages (plus 1 page for references), outlining roadmaps for research in the workshop’s topics, including industrial and research challenges based on currently available knowledge. Specifically, we encourage contributions that highlight challenges faced by industrial practitioners when dealing with requirements expressed in NL, and faced by academics in technology transfer studies.

Tool Papers

4 pages (including screenshots and references), in which the authors provide a short description of an NLP-based tool for RE with screenshots, together with a clear plan for a demo at the workshop. These contributions do not require novelty with respect to previous work, and the authors can also showcase tools that have been presented in past conferences and workshops. These papers will be evaluated based on the potential interest raised by the tool, and based on the clarity of the plan for the demo.

Submissions should be written in English and submitted in PDF format (page size A4, single column), formatted according to the CEUR Proceedings Style.

All papers will be peer-reviewed by three members of the Program Committee and will appear in the CEUR Proceedings with an ISBN.

Keynote: Dr. Fabiano Dalpiaz

On the quest for more credible results in ML4SE research

Automated classifiers are increasingly used in software engineering for labeling previously unseen SE data. In the NLP4RE community, researchers have proposed automated classifiers that predict whether a requirement pertains to a certain quality characteristic, whether an app review represents a bug or a feature request, and more. We are accustomed to reading papers that present impressive results, with precision and recall often above 90%. But are those results a credible representation of the classifier's performance in operational environments? In this talk, I will describe the ECSER (Evaluating Classifiers in Software Engineering Research) pipeline, which assists researchers by suggesting a series of steps for the rigorous execution and evaluation of automated classification research in SE. Through excerpts from conducted replication studies, I will show how ECSER allows researchers to draw more nuanced conclusions, sometimes contradicting the claims of the original studies. ECSER can be seen as a call to action for increasing the maturity of the NLP4RE community. The presentation slides are available here.

About the Speaker

Fabiano Dalpiaz is an associate professor in the Department of Information and Computing Sciences at Utrecht University in the Netherlands. He is the principal investigator of Utrecht University's Requirements Engineering lab. In his research, Fabiano blends artificial intelligence with information visualization to increase the quality of the requirements engineering process and artifacts, with the aim of delivering higher-quality software. His research is often validated in vivo through collaborations with the software industry. He serves or has served as program co-chair of RE 2023, REFSQ 2021, RCIS 2020, and the RE@Next! track of RE 2021. He was the organization chair of the REFSQ 2018 conference, and he is an associate editor for the Requirements Engineering Journal and the Business & Information Systems Engineering Journal. He regularly serves on the program committees of conferences such as RE, CAiSE, REFSQ, ICSE, and AAMAS. You can find more about Fabiano at his website.

Program: Monday, April 17th

Session 1 (9:00 - 10:30)

  • 9:00 - 9:10 - Introduction to NLP4RE

  • 9:10 - 9:40 - Comparing general purpose pre-trained Word and Sentence embeddings for Requirements Classification by Federico Cruciani, Samuel Moore, Chris Nugent and Heiko Struebing.

  • 9:40 - 10:00 - Chatbots4Mobile: Feature-oriented Knowledge Base Generation Using Natural Language by Quim Motger, Xavier Franch and Jordi Marco.

  • 10:00 - 10:30 - Rule-based NLP vs ChatGPT in ambiguity detection, a preliminary study by Alessandro Fantechi, Stefania Gnesi and Laura Semini.

Break (10:30 - 11:00)

Session 2 (11:00 - 12:30)

  • 11:00 - 11:30 - Let’s Stop Building at the Feet of Giants: Recovering unavailable Requirements Quality Artifacts by Julian Frattini, Lloyd Montgomery, Davide Fucci, Jannik Fischbach, Michael Unterkalmsteiner and Daniel Mendez. Preprint available.

  • 11:30 - 12:00 - From US to Domain Models: Recommending Relationships between Entities by Maxim Bragilovski, Fabiano Dalpiaz and Arnon Sturm.

  • 12:00 - 12:30 - Understanding Developers Privacy Concerns Through Reddit Thread Analysis by Jonathan Parsons, Michael Schrider, Oyebanjo Ogunlela and Sepideh Ghanavati.

Lunch (12:30 - 14:00)

Session 3 (14:00 - 15:30)

  • 14:00 - 15:00 - Keynote: On the quest for more credible results in ML4SE research by Dr. Fabiano Dalpiaz

  • 15:00 - 15:30 - Extended Introduction to NLP4RE and discussion by Workshop Participants (YOU and US!)

Session 4 (16:00 - 17:30)

  • 16:00 - ~16:30 - Surprise activity for the social good: (1) Fill in the ID Card for a recent publication, and (2) send it to sallam.abualhaija@uni.lu.

  • ~16:30 - 17:15 - Fun activity: "Guess what?!"

  • 17:15 - 17:30 - Wrap Up

Dinner (20:00 - open)

  • 20:00 - Organized workshop dinner at Restaurant Karakala
    (for those staying at the conference hotel, we meet at 19:30 in the lobby)

Organizing Committee

For questions about the workshop, reach us via e-mail.

  • Sallam Abualhaija, University of Luxembourg, Luxembourg
  • Andreas Vogelsang, University of Cologne, Germany
  • Gouri Deshpande, University of Calgary, Canada

Program Committee

  • Muhammad Abbas, RISE Research Institute, Sweden
  • Chetan Arora, Deakin University, Australia
  • Fatma Başak Aydemir, Boğaziçi University, Turkey
  • Dan Berry, University of Waterloo, Canada
  • Fabiano Dalpiaz, Utrecht University, The Netherlands
  • Davide Dell’Anna, TU Delft, The Netherlands
  • Henning Femmer, Fachhochschule Südwestfalen, Germany
  • Xavier Franch, Universitat Politècnica de Catalunya, Spain
  • Julian Frattini, Blekinge Institute of Technology, Sweden
  • Davide Fucci, Blekinge Institute of Technology, Sweden
  • Smita Ghaisas, TCS, India
  • Sepideh Ghanavati, University of Maine, USA
  • Eduard Groen, Fraunhofer IESE, Germany
  • Emitzá Guzmán, Vrije Universiteit Amsterdam, The Netherlands
  • Frank Houdek, Daimler AG, Germany
  • Clara Lüders, University of Hamburg, Germany
  • Lloyd Montgomery, University of Hamburg, Germany
  • Mohammad Moshirpour, University of Calgary, Canada
  • Nan Niu, University of Cincinnati, USA
  • Barbara Paech, Universität Heidelberg, Germany
  • Mehrdad Sabetzadeh, University of Ottawa, Canada
  • Michael Unterkalmsteiner, Blekinge Institute of Technology, Sweden
  • Liping Zhao, University of Manchester, UK
  • Jannik Fischbach, Netlight Consulting and fortiss, Germany
  • Han van der Aa, University of Mannheim, Germany
  • Sylwia Kopczyńska, Poznan University of Technology, Poland
  • Luisa Mich, University of Trento, Italy
  • Nicolas Sannier, University of Luxembourg, Luxembourg

Past Years