
Workshop paper
Tuning Language Models with Spatial Logic for Complex Reasoning
Keywords:
constraint-based learning
neuro-symbolic training
spatial question answering
spatial reasoning
Recent research shows that more data and larger models can provide more accurate solutions to natural language problems that require reasoning. However, models can easily fail at unobserved levels of compositional complexity because they may not acquire the level of abstraction needed for generalization. To alleviate this issue, we propose training models with neuro-symbolic techniques that exploit the logical rules of reasoning as constraints, providing an additional source of supervision. Training models to adhere to the rules of reasoning pushes them to form the more effective abstractions needed for generalization and transfer learning. We focus on the challenging problem of spatial reasoning over text, and our results on multiple benchmarks confirm our hypothesis of effective domain transfer through neuro-symbolic training. We apply our symbolic training approach to several commonly used language models.
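
Constraint-based supervision of this kind is typically realized as a soft penalty added to the standard task loss: predictions that violate a logical rule of spatial reasoning incur an extra cost, with or without gold labels for the violated queries. The following is a minimal PyTorch sketch of that general idea, not the paper's actual implementation; the function names, the transitivity rule for "left of", and the weight lam are illustrative assumptions.

import torch
import torch.nn.functional as F

# Hypothetical soft relaxation of the transitivity rule
# left(a, b) AND left(b, c) -> left(a, c).
# p_ab, p_bc, p_ac are the model's probabilities for the three queries.
def transitivity_violation(p_ab, p_bc, p_ac):
    # Product t-norm: the implication is violated to the degree that
    # p_ab * p_bc exceeds p_ac; relu keeps only positive violations.
    return F.relu(p_ab * p_bc - p_ac)

def neuro_symbolic_loss(logits, labels, p_ab, p_bc, p_ac, lam=0.5):
    # Standard supervised objective on the question-answering labels ...
    task_loss = F.cross_entropy(logits, labels)
    # ... plus a penalty whenever predictions break the logical rule.
    constraint_loss = transitivity_violation(p_ab, p_bc, p_ac).mean()
    return task_loss + lam * constraint_loss

# Dummy usage: the first query triple strongly violates the rule,
# so it contributes most of the constraint penalty.
logits = torch.randn(4, 2)                # 4 yes/no relation queries
labels = torch.tensor([1, 0, 1, 1])
p_ab = torch.tensor([0.9, 0.8, 0.7, 0.6])
p_bc = torch.tensor([0.9, 0.9, 0.2, 0.5])
p_ac = torch.tensor([0.1, 0.9, 0.8, 0.4])
loss = neuro_symbolic_loss(logits, labels, p_ab, p_bc, p_ac)

Under this formulation the constraint term supplies a training signal even for relation queries without annotations, which is what pushes the model toward abstractions that generalize across levels of compositional complexity.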