AAAI 2026

January 23, 2026

Singapore, Singapore


As large language models (LLMs) become increasingly capable, concerns over the unauthorized use of copyrighted and licensed content in their training data have grown, especially in the context of code. Open-source code, often protected by open-source licenses (e.g., GPL), poses legal and ethical challenges when used in pretraining. Detecting whether specific code samples were included in an LLM's training data is thus critical for transparency, accountability, and copyright compliance.

We propose SynPrune, a syntax-pruned membership inference attack (MIA) method tailored for code. Unlike prior MIA approaches that treat code as plain text, SynPrune leverages the structured and rule-governed nature of programming languages. Specifically, it identifies tokens that are syntactically required consequences of the preceding context, and therefore not reflective of authorship, and excludes them from attribution when computing membership scores. Experimental results show that SynPrune consistently outperforms the state of the art, and that it remains robust across varying function lengths and syntax categories.
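To make the pruning idea concrete, here is a minimal sketch of a syntax-pruned membership score. It assumes per-token log-probabilities from some language model are already available, and it uses a hypothetical hard-coded set of "syntax-required" tokens; the abstract does not specify SynPrune's actual token categories or scoring function, so this is illustrative only, not the authors' implementation.

```python
# Hypothetical set of syntax-governed tokens to prune before scoring.
# SynPrune's real categories are not given in the abstract; this list
# is an assumption for illustration.
SYNTAX_TOKENS = {"(", ")", "{", "}", "[", "]", ":", ";", ",", "def", "return"}


def syntax_pruned_score(token_logprobs):
    """Membership score: mean log-likelihood over tokens that are NOT
    syntactically required (a sketch of the pruning idea only).

    token_logprobs: list of (token, log_prob) pairs from some LM.
    Higher (less negative) scores suggest the sample is more likely
    to have been seen in training.
    """
    kept = [lp for tok, lp in token_logprobs if tok not in SYNTAX_TOKENS]
    if not kept:
        return float("-inf")  # no authorial tokens left to score
    return sum(kept) / len(kept)


# Toy example: tokens of a short function with made-up per-token log-probs.
sample = [
    ("def", -0.1), ("add", -2.5), ("(", -0.05), ("a", -1.2),
    (",", -0.02), ("b", -1.1), (")", -0.01), (":", -0.01),
    ("return", -0.3), ("a", -0.8), ("+", -0.9), ("b", -0.7),
]

score = syntax_pruned_score(sample)  # averages only the six non-syntax tokens
```

Note how the near-certain tokens (`(`, `:`, `return`), which any model predicts well regardless of training membership, are excluded so they cannot inflate the score; a plain-text MIA baseline would average over all twelve tokens instead.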

