We introduce a novel dual Graph Neural Network architecture that explicitly separates temporal dynamics from static handshape configurations in sign language processing. Handshapes serve as fundamental phonological units in sign languages, with American Sign Language employing 40–50 distinct handshapes, yet computational approaches rarely model them explicitly, limiting both recognition accuracy and linguistic analysis. Our approach combines anatomically informed graph structures with contrastive learning to address key challenges in handshape recognition, including subtle inter-class distinctions and temporal variations. Our model achieves 46.07% accuracy across 37 handshape classes, a significant improvement over baseline methods (25.40%), establishing the first benchmark for structured handshape recognition in signing sequences. This work advances sign language processing by bridging computational models with linguistic structure, providing a framework for more accurate phonological modeling.
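To make the "anatomically-informed graph structure" idea concrete, here is a minimal illustrative sketch, not the authors' implementation: a hand skeleton encoded as a graph whose edges follow finger kinematics (a hypothetical 21-joint, MediaPipe-style layout is assumed), passed through a single symmetrically normalized graph-convolution layer. All names and dimensions are illustrative assumptions.

```python
import numpy as np

# Hypothetical 21-joint hand skeleton (MediaPipe-style ordering is an
# assumption, not the paper's): node 0 is the wrist; each finger is a
# chain of four joints branching off the wrist.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4),         # thumb
         (0, 5), (5, 6), (6, 7), (7, 8),         # index
         (0, 9), (9, 10), (10, 11), (11, 12),    # middle
         (0, 13), (13, 14), (14, 15), (15, 16),  # ring
         (0, 17), (17, 18), (18, 19), (19, 20)]  # pinky

def normalized_adjacency(n_nodes, edges):
    """Self-loops plus symmetric normalization: D^-1/2 (A + I) D^-1/2."""
    a = np.eye(n_nodes)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(a_hat, x, w):
    """One graph-convolution layer: ReLU(A_hat @ X @ W)."""
    return np.maximum(a_hat @ x @ w, 0.0)

rng = np.random.default_rng(0)
a_hat = normalized_adjacency(21, EDGES)
x = rng.standard_normal((21, 3))   # per-joint 3-D coordinates for one frame
w = rng.standard_normal((3, 16))   # projection weights (random stand-in)
h = gcn_layer(a_hat, x, w)         # per-joint 16-D handshape features
print(h.shape)                     # (21, 16)
```

In a dual architecture as described in the abstract, one such graph stream could model the static handshape per frame while a second stream aggregates features across frames to capture temporal dynamics; the contrastive objective would then pull together embeddings of the same handshape class.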