poster
When Phrases Meet Probabilities: Enabling Open Relation Extraction with Cooperating Large Language Models
Keywords: large language model, open relation extraction, clustering
Current clustering-based open relation extraction (OpenRE) methods usually apply clustering algorithms on top of pre-trained language models. However, this practice has three drawbacks. First, embeddings from language models are high-dimensional and anisotropic, so simple distance metrics over these embeddings may not accurately reflect relational similarity. Second, there is a gap between the pre-training objectives of language models and the downstream clustering objective. Third, clustering over embeddings deviates from the primary aim of relation extraction, as it does not directly yield relations. In this work, we propose a new idea for OpenRE in the era of LLMs: extract relational phrases and directly exploit the knowledge in LLMs to assess the semantic similarity between phrases, without relying on any additional distance metrics. Based on this idea, we develop a framework, \textsc{ore}LLM, in which two LLMs work collaboratively to perform clustering and address the above issues. Experimental results on different datasets show that \textsc{ore}LLM outperforms current baselines by $1.4\%\sim 3.13\%$ in terms of clustering accuracy.
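To make the cooperative two-LLM idea concrete, the sketch below illustrates one possible reading of the pipeline described in the abstract: one LLM extracts a relational phrase from each sentence, a second LLM judges whether two phrases express the same relation, and those judgments drive a simple greedy clustering. This is not the authors' implementation; the functions `extract_llm` and `judge_llm`, the prompts, and the greedy clustering rule are all hypothetical stand-ins for illustration only.

```python
from typing import Callable, List


def extract_phrase(sentence: str, extract_llm: Callable[[str], str]) -> str:
    """Ask the extraction LLM for the phrase linking the two marked entities."""
    prompt = f"Extract the relational phrase between the marked entities:\n{sentence}"
    return extract_llm(prompt).strip().lower()


def same_relation(p1: str, p2: str, judge_llm: Callable[[str], str]) -> bool:
    """Ask the judging LLM whether two relational phrases express the same relation."""
    prompt = f'Do "{p1}" and "{p2}" express the same relation? Answer yes or no.'
    return judge_llm(prompt).strip().lower().startswith("yes")


def cluster_phrases(phrases: List[str], judge_llm: Callable[[str], str]) -> List[List[str]]:
    """Greedy clustering: a phrase joins the first cluster whose representative
    the judge LLM deems equivalent; otherwise it starts a new cluster."""
    clusters: List[List[str]] = []
    for p in phrases:
        for cluster in clusters:
            if same_relation(p, cluster[0], judge_llm):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without an actual LLM backend.
    def extract_llm(prompt: str) -> str:
        return "founded by"

    def judge_llm(prompt: str) -> str:
        return "yes"

    sentences = [
        "[Apple] was founded by [Steve Jobs].",
        "[SpaceX] was started by [Elon Musk].",
    ]
    phrases = [extract_phrase(s, extract_llm) for s in sentences]
    print(cluster_phrases(phrases, judge_llm))
```

Note that clustering on phrases judged directly by an LLM, rather than on embedding distances, is what lets the approach sidestep the anisotropy and objective-gap issues raised in the abstract; the greedy merge rule above is only one of several ways such judgments could be aggregated.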