

Poster
Evaluating the Performance of ChatGPT-4o Vision Capabilities on Image-Based USMLE Step 1, Step 2, and Step 3 Examination Questions
Artificial intelligence (AI) has significant potential in medicine, especially in diagnostics and education. ChatGPT has achieved scores comparable to those of medical students on text-based USMLE questions, yet a gap remains in its evaluation on image-based questions. This study evaluated ChatGPT-4's performance on image-based questions from USMLE Step 1, Step 2, and Step 3. Overall performance was assessed using 376 questions, 54 of which included images. Overall accuracy was 85.7% for Step 1, 92.5% for Step 2, and 86.9% for Step 3. On image-based questions, accuracy was 70.8% for Step 1, 92.9% for Step 2, and 62.5% for Step 3; text-based questions showed higher accuracy: 89.5% for Step 1, 92.5% for Step 2, and 90.1% for Step 3. Performance dropped significantly on difficult image-based questions in Steps 1 and 3 (p=0.0196 and p=0.0020, respectively) but not in Step 2 (p=0.9574). Despite these challenges, accuracy on image-based questions exceeded the passing threshold for all three exams. ChatGPT-4 can answer image-based USMLE questions above the passing rate, showing promise for its use in medical education and diagnostics. Further development is needed to improve its direct image-processing capabilities and overall performance.