In scientific research, “limitations” are the shortcomings, constraints, or weaknesses of a study. Transparent reporting of such limitations can enhance the quality and reproducibility of research and improve public trust in science. However, authors often (a) underreport limitations in the paper’s text and (b) use hedging strategies to satisfy editorial requirements at the cost of readers’ clarity and confidence. This underreporting, combined with the explosion in the number of publications, has created a pressing need to automatically extract or generate limitations from scholarly papers. To that end, we present a complete architecture for the computational analysis of research limitations. Specifically, we (a) create a dataset of limitations in ACL, NeurIPS, and PeerJ papers by extracting them from the papers’ text and integrating them with external reviews; (b) propose methods to generate limitations automatically using a novel Retrieval-Augmented Generation (RAG) technique; and (c) develop a fine-grained evaluation framework for generated limitations, together with a meta-evaluation of the proposed evaluation techniques.
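The abstract does not detail the RAG technique itself, but the general retrieve-then-generate pattern it builds on can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual method: the chunking, the term-overlap retrieval score, and the prompt wording are all placeholders, and a real system would use dense embeddings and send the assembled prompt to an LLM.

```python
# Hedged sketch of a retrieve-then-generate loop for drafting limitations.
# Assumption: the paper is pre-split into text chunks; retrieval here uses a
# toy term-overlap score standing in for embedding similarity.
from collections import Counter
import math


def tokenize(text):
    # Lowercase and strip trailing punctuation (illustrative tokenizer).
    return [w.lower().strip(".,?:;") for w in text.split()]


def score(query_tokens, chunk_tokens):
    # Count shared terms, normalized by chunk length so long chunks
    # do not dominate.
    overlap = sum((Counter(query_tokens) & Counter(chunk_tokens)).values())
    return overlap / math.sqrt(len(chunk_tokens) or 1)


def retrieve(query, chunks, k=2):
    # Rank chunks by overlap with the query and keep the top k.
    q = tokenize(query)
    ranked = sorted(chunks, key=lambda c: score(q, tokenize(c)), reverse=True)
    return ranked[:k]


def build_prompt(chunks, query="List the limitations of the study."):
    # Assemble retrieved context plus the task into a single prompt;
    # in a full pipeline this string would be passed to an LLM.
    context = retrieve(query, chunks)
    return (
        "Context from the paper:\n"
        + "\n".join(f"- {c}" for c in context)
        + f"\n\nTask: {query}"
    )


# Hypothetical paper chunks for illustration only.
chunks = [
    "We evaluate only on English-language ACL papers.",
    "The model architecture uses a transformer encoder.",
    "Limitations of our dataset include excluding papers without public reviews.",
]
prompt = build_prompt(chunks)
```

The design choice worth noting is the separation of retrieval from generation: the retriever narrows a full paper down to the passages most likely to discuss weaknesses, so the generation step conditions on focused evidence rather than the entire document.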