What if the next generation of human-computer interaction is not a screen... but a conversation? Large Language Models (LLMs) offer a new paradigm for interacting with computers through text, but they lack the ability to reason about 3D shape. We introduce Textual Anatomy Encoding (TAE), a workflow that connects LLMs with 3D anatomical models. TAE employs clinician-validated semantic annotations and rule-based prompts to achieve deterministic and interpretable landmark localization. The results indicate that TAE enables LLMs to move beyond textual knowledge and achieve accurate understanding of anatomical shape. This framework opens opportunities for diagnosis, surgical planning, and scalable medical annotation, positioning LLMs as a foundation for next-generation human-computer interaction in healthcare.
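To make the core idea concrete, the TAE workflow described above can be sketched as follows: serialize 3D landmark annotations into plain text and combine them with a fixed rule-based prompt. This is a minimal illustrative sketch only; the landmark names, coordinates, and prompt template are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of Textual Anatomy Encoding (TAE):
# turn 3D landmark annotations into text an LLM can reason over.
# All names and the prompt template are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Landmark:
    name: str                       # clinician-validated semantic label
    xyz: Tuple[float, float, float]  # 3D coordinate on the anatomy mesh


def encode_anatomy(landmarks: List[Landmark]) -> str:
    """Serialize landmarks into a plain-text block for an LLM prompt."""
    return "\n".join(
        f"{lm.name}: ({lm.xyz[0]:.1f}, {lm.xyz[1]:.1f}, {lm.xyz[2]:.1f})"
        for lm in landmarks
    )


def build_prompt(encoded: str, query: str) -> str:
    """Rule-based prompt: fixed instructions + encoded shape + question."""
    return (
        "You are given anatomical landmarks as name: (x, y, z) triples.\n"
        "Answer using only these coordinates.\n\n"
        f"{encoded}\n\nQuestion: {query}"
    )


# Example with two hypothetical femur landmarks.
femur = [
    Landmark("femoral_head_center", (12.0, 4.5, 88.2)),
    Landmark("lateral_epicondyle", (35.1, -2.0, 1.4)),
]
prompt = build_prompt(encode_anatomy(femur), "Which landmark is most superior?")
print(prompt)
```

Because the encoding and prompt are fixed text rather than free-form generation, the same input anatomy always produces the same prompt, which is one way a pipeline like this can stay deterministic and interpretable.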
