We investigate whether large language models encode latent knowledge of frame semantics, focusing on frame identification, a core challenge in Frame Semantic Parsing that requires selecting the appropriate semantic frame for a target word in context. Using the FrameNet lexical resource, we evaluate models under prompt-based inference and observe that they can disambiguate frames effectively even without explicit supervision. To assess the impact of task-specific training, we fine-tune the model on FrameNet data; fine-tuning substantially improves in-domain accuracy while generalizing well to out-of-domain benchmarks. Further analysis reveals that the model can generate semantically coherent frame definitions and remains partially robust to corrupted frame labels, suggesting an internalized understanding of frame semantics that goes beyond surface cues.
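As an illustration, the prompt-based frame identification setup described above can be sketched as a multiple-choice prompt over candidate frames. This is a minimal, hypothetical sketch: the function name, the prompt wording, and the two candidate frames shown here are illustrative stand-ins, not the authors' actual prompt or the full FrameNet candidate set.

```python
# Hypothetical sketch of prompt-based frame identification.
# The frame names below follow FrameNet naming conventions, but the
# candidate list and definitions are illustrative, not drawn from the paper.

def build_frame_prompt(sentence: str, target: str, candidates: dict) -> str:
    """Build a multiple-choice prompt asking an LLM which frame
    the target word evokes in the given sentence."""
    options = "\n".join(f"- {name}: {defn}" for name, defn in candidates.items())
    return (
        f"Sentence: {sentence}\n"
        f"Target word: {target}\n"
        f"Candidate frames:\n{options}\n"
        "Which frame does the target word evoke? Answer with the frame name."
    )

# Example: disambiguating "runs" between two FrameNet-style frames.
candidates = {
    "Self_motion": "A living being moves under its own power.",
    "Operating_a_system": "An operator causes a system to function.",
}
prompt = build_frame_prompt("She runs the factory.", "runs", candidates)
print(prompt)
```

In a full pipeline, the candidate frames would be looked up from the FrameNet lexicon for the target lemma, and the model's answer would be matched against the gold frame label.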