keywords:
computer science
robotics
human-computer interaction
artificial intelligence
machine learning
To investigate the effects of human feedback strategies on machine learning (ML), we collected data from participants (N=36) as they provided numeric feedback to a robot during a card game. We found that participants employed different partial-credit feedback strategies for robot failures during the task (i.e., participants varied in how they scored the same robot failure actions). We then used each participant's feedback to generate an extrapolated feedback strategy. In simulations, we found that training a supervised ML model with these different extrapolated feedback strategies influenced how well the model learned the task: models trained with labels from some reasonable strategies significantly outperformed models trained with labels from other reasonable strategies. Participants' familiarity with ML, artificial intelligence, and the task did not significantly affect how well their extrapolated feedback strategy trained the model. These findings have implications for transferring learning algorithms into the real world.
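The core idea — that two reasonable partial-credit labeling strategies can lead a supervised learner to different models — can be illustrated with a toy sketch. Everything below is a hypothetical construction for intuition, not the paper's actual task, strategies, or learner: a robot action is reduced to one feature `x` in [0, 1], actions with `x >= 0.7` count as successes, and a one-parameter threshold "model" stands in for the supervised learner.

```python
import random

random.seed(0)

def label(x, strategy):
    """Score an action under one of two hypothetical partial-credit strategies.

    Successes (x >= 0.7) get full credit under both strategies; failures
    get graded partial credit under "graded" and zero under "all-or-nothing".
    """
    if x >= 0.7:
        return 1.0
    return x if strategy == "graded" else 0.0

def fit_threshold(data):
    """Fit a 1-parameter step predictor (predict 1.0 iff x >= t) by
    scanning thresholds and minimising squared error on the labels.
    A stand-in for a supervised learner."""
    best_t, best_err = 0.0, float("inf")
    for t in (i / 100 for i in range(101)):
        err = sum((y - (1.0 if x >= t else 0.0)) ** 2 for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [random.random() for _ in range(500)]
for strategy in ("graded", "all-or-nothing"):
    data = [(x, label(x, strategy)) for x in xs]
    t = fit_threshold(data)
    # Task accuracy: the learned predictor should call x >= 0.7 a success.
    acc = sum((x >= t) == (x >= 0.7) for x in xs) / len(xs)
    print(f"{strategy}: learned threshold {t:.2f}, task accuracy {acc:.2f}")
```

Under these assumptions the two strategies pull the learned threshold to different places, even though both are defensible ways to score the same failures — the toy analogue of the abstract's finding that the choice of partial-credit strategy affects how well the model learns the task.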