Although many qualitative logical frameworks have been proposed to evaluate and model trust in multi-agent settings, these approaches generally ignore reasoning about quantitative aspects such as degrees of trust. In this paper, we address this limitation from both the modelling and the verification perspectives. We start by constructing TCTLG, a logical language that represents the quantitative aspect of trust, and present a set of its reasoning postulates. Moreover, we develop and implement a new symbolic model checking algorithm and an open-source tool for quantifying the trust relationships among interacting agents. Finally, we investigate the complexity of our approach and evaluate it using a case study in the health care domain.