Linear models are widely used in high-stakes decision-making due to their interpretability, but fairness constraints such as Demographic Parity (DP) affect model coefficients and the distribution of predictive bias in opaque ways. We propose a post-processing framework, applicable on top of any linear model, that decomposes bias into a direct component (from the sensitive attribute itself) and an indirect component (from features correlated with it). Our method analytically characterizes how DP reshapes each coefficient, enabling transparent feature-level interpretation.
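The direct/indirect split mentioned above can be illustrated with a simple identity: for a linear score $w^\top x$, the demographic-parity gap in average scores decomposes exactly into per-feature contributions $w_j \,(\mathbb{E}[x_j \mid s{=}1] - \mathbb{E}[x_j \mid s{=}0])$. The sketch below is only an illustration of that idea, not the paper's actual post-processing method; the function name and synthetic data are hypothetical.

```python
import numpy as np

def dp_gap_decomposition(w, X, s):
    """Decompose the demographic-parity gap of a linear score X @ w
    into per-feature contributions.

    Each feature j contributes w[j] * (E[x_j | s=1] - E[x_j | s=0]);
    by linearity, the contributions sum to the overall gap in mean scores.
    A feature that encodes the sensitive attribute carries the direct bias;
    features merely correlated with it carry the indirect bias.
    """
    mean_diff = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
    contributions = w * mean_diff
    return contributions, contributions.sum()

# Hypothetical synthetic example: the last feature is a proxy for s.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500)
X = rng.normal(size=(500, 3))
X[:, 2] = s + 0.1 * rng.normal(size=500)  # strongly correlated with s
w = np.array([2.0, -1.0, 0.5])

contribs, gap = dp_gap_decomposition(w, X, s)
preds = X @ w
# The summed contributions match the gap in group-mean scores exactly.
assert np.isclose(gap, preds[s == 1].mean() - preds[s == 0].mean())
```

Because the identity is exact for linear models, the per-feature contributions give a transparent, feature-level view of where the DP gap originates, which is the kind of interpretability the abstract emphasizes.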
