

Poster
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
Keywords: process reward models, mathematical reasoning, large language models
In this paper, we present Math-Shepherd, an innovative process reward model for mathematical reasoning that assigns a reward score to each step of a math problem solution. Math-Shepherd is trained on automatically constructed process-wise supervision data, breaking the bottleneck of heavy reliance on manual annotation in existing work. We explore the effectiveness of Math-Shepherd in two scenarios: 1) $\textit{Verification}$: Math-Shepherd is used to rerank multiple outputs generated by Large Language Models (LLMs); 2) $\textit{Reinforcement Learning (RL)}$: Math-Shepherd is used to reinforce LLMs. With Math-Shepherd, a series of open-source LLMs demonstrates exceptional performance. For instance, process RL with Math-Shepherd significantly enhances Mistral-7B (77.9\%$\to$84.1\% on GSM8K and 28.6\%$\to$33.0\% on MATH). The accuracy can be further improved to 89.1\% and 43.5\% on the two benchmarks with Math-Shepherd's verification. We believe that automatic process supervision holds significant potential for the future evolution of LLMs.
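The abstract credits Math-Shepherd's training to automatically constructed process-wise supervision but does not spell out the construction. Below is a minimal sketch of one Monte Carlo style way such step labels could be built: from each step prefix, sample several completions and mark the step correct if any completion reaches the gold answer. Everything here, including `sample_completions`, `k`, and the hard 0/1 labeling, is an illustrative assumption rather than the paper's specified procedure.

```python
from typing import Callable, List

# Hypothetical interface; the name and signature are illustrative assumptions:
# (problem, step prefix, k) -> the final answers of k sampled completions.
SampleCompletions = Callable[[str, List[str], int], List[str]]


def label_steps(
    problem: str,
    steps: List[str],
    gold_answer: str,
    sample_completions: SampleCompletions,
    k: int = 8,
) -> List[float]:
    """Assign an automatic correctness label to every step of a solution.

    For each prefix steps[:i+1], sample k completions and check whether any
    of them reaches the gold answer. This 'hard' labeling marks the step 1.0
    if any completion succeeds; a 'soft' variant would use the success rate.
    """
    labels: List[float] = []
    for i in range(len(steps)):
        answers = sample_completions(problem, steps[: i + 1], k)
        labels.append(1.0 if any(a == gold_answer for a in answers) else 0.0)
    return labels
```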
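For the verification scenario, the per-step reward scores must be aggregated into a single score per candidate so that sampled solutions can be reranked. The sketch below assumes a hypothetical `PRMScorer` callable and aggregates with the minimum step score; both the interface and the aggregation choice are assumptions, not details given in the abstract.

```python
from typing import Callable, List

# Hypothetical PRM interface: given the problem and the solution steps so
# far, return a score in [0, 1] for the latest step. The name and signature
# are illustrative assumptions, not the paper's API.
PRMScorer = Callable[[str, List[str]], float]


def solution_score(problem: str, steps: List[str], prm: PRMScorer) -> float:
    """Aggregate per-step PRM scores into a single solution-level score.

    The minimum step score is used here; the product of step scores is a
    common alternative. The abstract does not state which Math-Shepherd uses.
    """
    return min(prm(problem, steps[: i + 1]) for i in range(len(steps)))


def rerank(problem: str, candidates: List[List[str]], prm: PRMScorer) -> List[str]:
    """Pick the candidate solution (a list of steps) with the highest
    aggregated process reward. This is the 'verification' use: the LLM
    samples many solutions and the PRM selects the most trustworthy one."""
    return max(candidates, key=lambda s: solution_score(problem, s, prm))
```

In practice the candidates would come from sampling the LLM many times per problem and keeping the top-scoring solution; how many samples to draw and how to aggregate step scores are design knobs left open by the abstract.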