This research proposes an extension to the Program Lattice Transformer (PLT), a neuro-symbolic framework for program induction that embeds programs into a structured latent space. The current PLT model, which uses a flat lattice, is computationally inefficient when modeling invariant programs, i.e., operations that return to an initial state after a fixed number of applications (e.g., a 360° rotation). To address this, we propose embedding the program space onto a cylindrical manifold instead of a plane. This approach is grounded in the principle that only isometric transformations preserve the lattice's compositional structure, which limits the valid manifolds to developable surfaces such as cylinders. A cylindrical geometry naturally represents invariant programs as closed loops, improving efficiency. The proposed method will be evaluated on synthetic tasks such as Rubik's Cube and the Abstraction and Reasoning Corpus (ARC) to demonstrate gains in performance and efficiency. This work is a step toward models that can autonomously configure their own geometric latent spaces, connecting to future research in geometric deep learning and meta-learning.
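To make the closed-loop intuition concrete, the sketch below illustrates the core geometric idea under stated assumptions: it is not the PLT implementation (which is not reproduced here), and all names (`cylinder_embed`, `apply_program_step`, `PERIOD`) are hypothetical. One latent coordinate is treated as an angle on a cylinder, so an invariant program that advances the angle by a fixed fraction of a full turn returns to its starting embedding after `PERIOD` applications, while the remaining coordinates live along the cylinder's axis.

```python
import numpy as np

# Hypothetical illustration of the cylindrical-latent idea; not the PLT API.
PERIOD = 4  # e.g. four 90-degree rotations compose to the identity

def cylinder_embed(theta: float, z: np.ndarray) -> np.ndarray:
    """Map an angular coordinate and axial coordinates onto the cylinder
    (cos theta, sin theta, z), an isometric (developable) embedding of a
    planar strip, so distances along the lattice are preserved."""
    return np.concatenate([[np.cos(theta), np.sin(theta)], z])

def apply_program_step(theta: float) -> float:
    """One application of the invariant program advances the angle by
    2*pi / PERIOD, wrapping around the cylinder."""
    return (theta + 2 * np.pi / PERIOD) % (2 * np.pi)

# Demo: after PERIOD applications, the embedding returns to its start point,
# so the invariant program traces a closed loop on the manifold.
z = np.zeros(3)              # axial (non-periodic) latent coordinates
theta = 0.0
start = cylinder_embed(theta, z)
for _ in range(PERIOD):
    theta = apply_program_step(theta)
end = cylinder_embed(theta, z)
print(np.allclose(start, end))  # True
```

On a flat lattice, the same program would drift indefinitely along one axis and its periodicity would have to be learned; the cylindrical embedding encodes that return-to-start behavior directly in the geometry.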