Text generated by Large Language Models (LLMs) now rivals human writing, raising concerns about its misuse. However, mainstream AI-generated text detection (AGTD) methods primarily target long, document-level texts and struggle to generalize to short, sentence-level texts. Moreover, current sentence-level AGTD (S-AGTD) research faces two significant limitations: (1) the lack of a comprehensive evaluation on complex human-AI hybrid content, where human-written text (HWT) and AI-generated text (AGT) alternate irregularly, and (2) the failure to incorporate contextual information, which serves as a crucial supplementary feature for identifying the origin of a detected sentence. Therefore, in our work, we propose \textbf{AutoFill-Refine}, a high-quality synthesis strategy for human-AI hybrid texts, and construct a dedicated S-AGTD benchmark dataset. In addition, we introduce \textbf{SenDetEX}, a novel framework for \underline{s}entence-level AI-g\underline{en}erated text \underline{det}ection via styl\underline{e} and conte\underline{x}t fusion. Extensive experiments demonstrate that SenDetEX significantly outperforms all baseline models in detection accuracy while exhibiting remarkable transferability and robustness.