Most multilingual question-answering benchmarks, while covering a diverse pool of languages, do not factor in regional diversity in the information they capture and tend to be Western-centric. This introduces a significant gap in fairly evaluating multilingual models' comprehension of factual information from diverse geographical locations. To address this, we introduce XNationQA to investigate the cultural literacy of multilingual LLMs. XNationQA encompasses questions on the geography, culture, and history of nine countries, comprising a total of 49,280 questions in seven languages. We benchmark eight standard multilingual LLMs on XNationQA. Our analyses uncover a considerable discrepancy in the models' access to culturally specific facts across languages. Notably, we often find that a model demonstrates greater knowledge of cultural information in English than in the dominant language of the respective culture. Counter-intuitively, although the models perform better in Western languages, this does not translate into greater literacy about Western countries. Furthermore, we observe that the models have a very limited ability to transfer knowledge across languages, a limitation that is particularly evident in open-source models.