The response behaviors observed in online user-generated content (UGC) frequently exhibit non-linear characteristics, such as conditional branching and selective avoidance. These patterns pose additional challenges for ensuring the trustworthiness of Large Language Model (LLM) reasoning, particularly because the models' unidirectional, left-to-right inference mechanism may not adequately capture such complex reasoning dynamics. To address this, we propose the Forest of Thought Explanation (FoTE), a novel prompting strategy that models selective avoidance in UGC while ensuring explanation consensus through reasoning paths across all decision sub-trees. FoTE employs an Iterative Chain of Thought (ICoT) to generate diverse reasoning thoughts. The thoughts are then assessed by a cooperative contribution evaluator that allocates contribution fairly. The top-$k$ highest-contribution thoughts are retained for subsequent reasoning iterations, while subsets are randomly sampled to simulate selective avoidance—thereby constructing the FoTE. Through extensive evaluations across three open-source LLMs and two established social science problems (spanning four benchmark datasets), FoTE achieves superior success rates compared to competing prompting strategies. Notably, its performance gains grow with the strength of selective avoidance in social problems. The trustworthiness of FoTE is enhanced by the incorporation of (1) a cooperative game theory-based thought evaluator and (2) a transparent reasoning path that converges toward consensus.
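The abstract does not spell out the evaluator or the selection loop, but the pipeline it describes — fair contribution scoring of thoughts via cooperative game theory, top-$k$ retention, and random subset sampling to mimic selective avoidance — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes Shapley values as the "fair contribution" allocation (a standard cooperative-game choice), and the names `value_fn`, `fote_step`, and `avoid_prob` are hypothetical.

```python
import itertools
import random

def shapley_contributions(thoughts, value_fn):
    """Exact Shapley value of each thought (hypothetical stand-in for the
    paper's fair contribution evaluator).

    value_fn maps a frozenset of thoughts to a scalar "explanation quality";
    in the real system this would be an LLM-based scorer.
    """
    n = len(thoughts)
    contrib = {t: 0.0 for t in thoughts}
    fact = [1]
    for i in range(1, n + 1):
        fact.append(fact[-1] * i)
    for t in thoughts:
        others = [p for p in thoughts if p != t]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = fact[r] * fact[n - r - 1] / fact[n]
                contrib[t] += weight * (value_fn(s | {t}) - value_fn(s))
    return contrib

def fote_step(thoughts, value_fn, k=3, avoid_prob=0.3, rng=None):
    """One hypothetical FoTE iteration: keep the top-k thoughts by
    contribution, then randomly drop some of them to simulate the
    selective avoidance observed in UGC."""
    rng = rng or random.Random()
    contrib = shapley_contributions(thoughts, value_fn)
    top_k = sorted(thoughts, key=lambda t: contrib[t], reverse=True)[:k]
    kept = [t for t in top_k if rng.random() > avoid_prob]
    return kept or top_k[:1]  # never return an empty frontier

# Toy additive quality function: each thought carries a fixed weight,
# so its Shapley value equals its own weight.
weights = {"a": 3.0, "b": 1.0, "c": 2.0}
value_fn = lambda s: sum(weights[t] for t in s)
print(fote_step(["a", "b", "c"], value_fn, k=2, rng=random.Random(0)))
```

For the additive toy game above the Shapley values recover each thought's individual weight, so the top-2 frontier is `["a", "c"]` before avoidance sampling. In the full FoTE loop, the surviving thoughts would seed the next ICoT round, growing one decision sub-tree of the forest per iteration.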
