AI models are often evaluated on their ability to predict an outcome of interest. In many AI for social impact applications, however, an intervention that affects the outcome can bias the evaluation. Randomized controlled trials (RCTs) assign interventions at random, so data from the control group can be used to produce unbiased estimates of model performance. This approach is inefficient, though, because it discards data from the treatment group; given the complexity and cost typically associated with RCTs, making full use of the data is essential. We therefore investigate model evaluation strategies that leverage all data from an RCT. First, we theoretically quantify the estimation bias incurred by naïvely aggregating performance estimates over the treatment and control groups, and derive the condition under which this bias leads to incorrect model selection. Building on these theoretical insights, we propose an unbiased evaluation strategy that reweights data from the treatment group to mimic the distributions of samples that would or would not experience the outcome in the absence of intervention. On synthetic and real-world datasets, we show that, across a range of intervention-effect and sample-size settings, our approach consistently yields better model selection than the standard approach that ignores treatment-group data. Our contribution is a meaningful step towards more efficient model evaluation in real-world contexts.
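As an illustrative sketch (not the paper's method), the bias from naïvely pooling treatment and control data can be demonstrated on a small synthetic RCT: a model predicting the untreated outcome is scored correctly on the control group, but its pooled accuracy is distorted because the intervention changes outcomes for some treated samples. The data-generating process, effect size, and variable names below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)

# Untreated ("counterfactual") binary outcome, driven by the feature x.
p0 = 1.0 / (1.0 + np.exp(-2.0 * x))
y0 = rng.binomial(1, p0)

# Randomized treatment assignment, as in an RCT (50/50 split).
t = rng.binomial(1, 0.5, size=n)

# Assumed intervention effect: the treatment averts the outcome
# for a (randomly chosen) 60% of treated samples that would have
# experienced it under no intervention.
averted = rng.binomial(1, 0.6, size=n)
y_obs = np.where((t == 1) & (averted == 1), 0, y0)

# A simple model that tries to predict the *untreated* outcome.
y_hat = (x > 0).astype(int)

# Oracle accuracy against the counterfactual outcome (unobservable in practice).
true_acc = (y_hat == y0).mean()
# Unbiased estimate: control group only (standard approach, discards half the data).
control_acc = (y_hat[t == 0] == y_obs[t == 0]).mean()
# Naive estimate: pool both groups without adjustment (biased by the intervention).
pooled_acc = (y_hat == y_obs).mean()

print(f"true: {true_acc:.3f}  control-only: {control_acc:.3f}  pooled: {pooled_acc:.3f}")
```

Under this setup, the control-only estimate lands close to the oracle accuracy while the pooled estimate is pulled noticeably away from it, which is the bias the abstract's reweighting strategy is designed to remove while still using the treatment-group samples.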
