It is well understood that mental modeling forms the foundation of many everyday interactions between humans, both collaborative and deceptive. Indeed, one could argue that the modeling and manipulation of mental states lies at the heart of effective deception. In this paper, we examine the security problem of insider threat attacks, in which an adversary has already infiltrated an organization and must avoid suspicion until their true goal can be achieved. We show how existing model-based explanatory methods can be leveraged to generate lies that explain away potentially suspicious activities, and we propose a novel planning formulation that generates plans that appear to achieve an assigned goal while getting close enough to achieve an alternative, covert goal. We evaluate the computational effectiveness of our formulation on multiple planning benchmarks.
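The core idea of the planning formulation can be illustrated with a toy sketch: among plans that optimally achieve the assigned goal, prefer the one whose trajectory passes closest to the covert goal. The grid world, goal coordinates, and scoring function below are hypothetical illustrations, not the paper's actual formulation.

```python
from itertools import combinations

def monotone_paths(start, goal):
    """All shortest axis-aligned paths from start to goal (moves: +x or +y)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    paths = []
    for x_steps in combinations(range(dx + dy), dx):
        pos, path = list(start), [tuple(start)]
        for i in range(dx + dy):
            if i in x_steps:
                pos[0] += 1
            else:
                pos[1] += 1
            path.append(tuple(pos))
        paths.append(path)
    return paths

def covert_score(path, covert_goal):
    """Closest approach (Manhattan distance) of a path to the covert goal."""
    return min(abs(x - covert_goal[0]) + abs(y - covert_goal[1])
               for x, y in path)

def best_deceptive_plan(start, assigned_goal, covert_goal):
    """Among optimal plans to the assigned goal, pick the one that passes
    nearest the covert goal: plausible to an observer, useful to the insider."""
    return min(monotone_paths(start, assigned_goal),
               key=lambda p: covert_score(p, covert_goal))
```

For example, with the assigned goal at (3, 3) and a covert goal at (3, 0), the selected plan still ends at the assigned goal but routes through the covert goal along the way, so every step remains consistent with the cover story of an optimal plan.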
