Large language models (LLMs) are reshaping the norms of human communication, at times decoupling words from genuine human thought. This transformation runs deep: it undermines the trust and interpretive norms historically tied to authorship. Drawing on linguistic philosophy and AI ethics, we detail how large-scale text generation can induce semantic drift, erode accountability, and obscure intent and authorship. We introduce conceptual frameworks including hybrid authorship graphs (modeling humans, LLMs, and texts in a provenance network), epistemic doppelgängers (LLM-generated texts that are indistinguishable from human-authored ones), and authorship entropy. We explore mechanisms such as “proof-of-interaction” authorship verification and educational reforms to restore confidence in language. While the benefits of LLMs are undeniable (broader access, increased fluency, automation), the upheavals they introduce into the linguistic landscape demand a reckoning. This paper provides a conceptual lens for charting these changes.
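The abstract names “authorship entropy” without defining it. One plausible formalization — an assumption for illustration only, not the paper’s stated definition — treats it as the Shannon entropy of a text’s authorship-attribution distribution, where each probability is an agent’s (human or LLM) share of contribution to the final text:

```python
import math

def authorship_entropy(shares):
    """Shannon entropy (in bits) of a distribution of contribution
    shares over authors (human and LLM), assumed to sum to 1.

    Hypothetical formalization: the paper introduces the term
    "authorship entropy" but this specific measure is an assumption.
    """
    return -sum(p * math.log2(p) for p in shares if p > 0)

# A purely human-authored text has zero entropy (no uncertainty
# about authorship); an even human/LLM split maximizes it.
solo = authorship_entropy([1.0])
hybrid = authorship_entropy([0.5, 0.5])
print(solo, hybrid)
```

Under this reading, higher entropy corresponds to greater uncertainty about who “authored” the words, which connects naturally to the provenance-network view of hybrid authorship graphs.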