Event argument extraction aims to identify event arguments and classify their roles within events, whereas relation extraction classifies semantic relationships between entities. Existing methods struggle to effectively leverage the semantic interactions between these two tasks. To address this issue, we propose REAR, a reinforced optimization framework. REAR first initializes a Large Language Model (LLM) with supervised reasoning explanations, then enhances it by dynamically exploring reasoning trajectories via reinforcement learning. Experimental results show that REAR substantially outperforms previous decoder-only LLM methods, with F1-score gains of at least 0.9% and 2.2% on the ACE-E and ACE-E benchmark datasets, respectively. All code will be made publicly available.
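The two-stage recipe described above (supervised initialization followed by reinforcement learning over sampled reasoning trajectories) can be illustrated with a deliberately tiny sketch. This is not REAR's actual implementation: the real framework trains an LLM, whereas here the "policy" is a single softmax over candidate argument roles, supervised initialization is plain cross-entropy toward a gold label, and the RL stage is vanilla REINFORCE with a 0/1 reward for matching the gold role. All names (`ROLES`, `GOLD`, reward definition) are illustrative assumptions.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy setting: classify one argument into one of three roles (assumed labels).
ROLES = ["Attacker", "Target", "Place"]
GOLD = 0  # index of the gold role in this toy example

def supervised_init(logits, lr=0.5, steps=20):
    """Stage 1: supervised initialization.
    Gradient of cross-entropy w.r.t. logits nudges mass toward the gold label,
    mimicking fine-tuning on gold reasoning explanations."""
    for _ in range(steps):
        probs = softmax(logits)
        for i in range(len(logits)):
            grad = (1.0 if i == GOLD else 0.0) - probs[i]
            logits[i] += lr * grad
    return logits

def reinforce(logits, lr=0.5, steps=200, seed=0):
    """Stage 2: REINFORCE.
    Sample a role (a one-step 'trajectory'), score it with a 0/1 reward,
    and take a policy-gradient step; exploration comes from sampling."""
    rng = random.Random(seed)
    for _ in range(steps):
        probs = softmax(logits)
        action = rng.choices(range(len(logits)), weights=probs)[0]
        reward = 1.0 if action == GOLD else 0.0
        for i in range(len(logits)):
            grad = ((1.0 if i == action else 0.0) - probs[i]) * reward
            logits[i] += lr * grad
    return logits

logits = supervised_init([0.0, 0.0, 0.0])
logits = reinforce(logits)
best = max(range(len(logits)), key=logits.__getitem__)
print(ROLES[best])  # the policy converges on the gold role
```

In REAR the reward would instead come from comparing the LLM's extracted arguments and relations against gold annotations (e.g., an F1-based signal), and the trajectories are full reasoning chains rather than single label picks.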