AAAI 2026

January 25, 2026

Singapore, Singapore


Supervised learning with tabular data presents unique challenges, including small dataset sizes, the absence of structural cues, and heterogeneous features spanning both categorical and continuous domains. Unlike vision and language tasks, where CNNs, ViTs, and sequential attention-based models can exploit structural inductive biases, tabular data lacks inherent positional structure, hindering the effectiveness of self-attention mechanisms. While recent transformer-based models such as TabTransformer, SAINT, and FT-Transformer (which we refer to collectively as 3T) have shown promise on tabular data, they typically operate without structural cues such as positional encodings (PEs), since no prior structural information is usually available. In this work, we explore whether structural cues, specifically PEs derived from feature associations, can be harnessed to enrich transformer-based models for tabular data. Building on this idea, we propose Tab-PET (PEs for Transformers), a graph-based framework that estimates PEs to inject structure into tabular representations for transformer-based architectures. Inspired by approaches that derive PEs from graph topology, we explore two paradigms for graph estimation: association-based and causality-based. We provide a theoretical analysis of the effect of PEs on the effective rank of embeddings and empirically demonstrate that graph-derived PEs significantly improve the performance of 3T across 50 classification and regression datasets. Notably, association-based graphs consistently yield more stable and pronounced gains than causality-driven ones. We conclude with a deeper look into the role of PEs in adapting self-attention architectures to tabular learning.
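To make the association-based paradigm concrete, the pipeline can be sketched as: build a graph over features from a pairwise association measure, then take Laplacian eigenvectors of that graph as per-feature positional encodings, to be added to the feature token embeddings of a tabular transformer. The sketch below is illustrative only: the absolute-Pearson-correlation graph, the 0.3 edge threshold, and the unnormalized-Laplacian eigenvector recipe are our assumptions standing in for the paper's exact construction.

```python
import numpy as np

def association_graph(X, threshold=0.3):
    """Feature-association graph from absolute Pearson correlations.

    Connects feature pairs whose |correlation| exceeds `threshold`
    (both the measure and the threshold are illustrative choices).
    Returns a symmetric adjacency matrix with zero diagonal.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    A = (corr > threshold).astype(float)
    np.fill_diagonal(A, 0.0)  # no self-loops
    return A

def laplacian_pe(A, k):
    """k-dimensional Laplacian-eigenvector PEs, one row per feature.

    Uses eigenvectors of the unnormalized Laplacian L = D - A for the
    k smallest nonzero eigenvalues, a standard graph-PE recipe from
    graph transformers.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)
    idx = np.argsort(eigvals)
    # skip the trivial constant eigenvector at eigenvalue ~0
    return eigvecs[:, idx[1:k + 1]]  # shape: (n_features, k)

# Toy usage: 200 samples, 5 features with two strongly correlated pairs.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
X = np.column_stack([
    base[:, 0], base[:, 0] + 0.1 * rng.normal(size=200),
    base[:, 1], base[:, 1] + 0.1 * rng.normal(size=200),
    base[:, 2],
])
A = association_graph(X)
pe = laplacian_pe(A, k=2)
print(pe.shape)  # (5, 2): a 2-dim PE per feature token
```

In a Tab-PET-style model, each row of `pe` would be projected to the embedding width and added to the corresponding feature's token embedding before self-attention, giving the transformer a structural prior that plain tabular tokens lack.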
