Automated news summarization using large language models (LLMs) offers great potential to enhance information accessibility. However, critical challenges, such as hallucinations, bias, and toxicity, threaten their reliability and societal acceptance. In this paper, we present NewsLensAI, a novel summarization framework explicitly designed to address these trustworthiness concerns through Named Entity Recognition (NER)-guided prompting. By anchoring summaries in key factual entities extracted from source articles, our method significantly reduces factual inaccuracies without altering model weights or architectures. We evaluate NewsLensAI on a dataset of 1,500 real-world news articles using both open-source (LLaMA 3) and proprietary (Gemini 1.5) LLMs. Our analysis encompasses factual consistency, political bias shifts, sentiment preservation, and toxicity moderation. Our results indicate substantial improvements in factual alignment, demonstrated by an average BERTScore increase from 0.80 (baseline) to 0.88 (NER-enhanced), and a marked 70% relative reduction in hallucinated entities. Furthermore, we identify and characterize a notable “centrist drift,” wherein summaries tend to moderate extreme biases present in source articles, along with a measurable reduction in toxic or emotionally charged language. Complementing our empirical findings, we introduce a real-time NewsLensAI demo that summarizes live news feeds from the Guardian API, providing dynamic bias and sentiment analysis. This practical implementation underscores the real-world applicability and potential societal benefit of our approach. Finally, we discuss critical ethical implications, including potential impacts on media literacy and information diversity. Our interdisciplinary approach, linking NLP, journalism, and ethical analysis, positions NewsLensAI as a meaningful step toward safer, fairer, and more trustworthy AI-generated news consumption.
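The core idea of NER-guided prompting can be sketched as follows. This is a minimal illustration, not the paper's implementation: the regex-based entity extractor below is a hypothetical stand-in for a real NER model (e.g., a spaCy or transformer-based tagger), and the prompt wording is an assumption.

```python
import re

def extract_entities(text):
    # Crude stand-in for a real NER model: treat runs of
    # capitalized words as candidate named entities.
    pattern = r"[A-Z][a-zA-Z]+(?:\s+[A-Z][a-zA-Z]+)*"
    seen, entities = set(), []
    for match in re.findall(pattern, text):
        if match not in seen:
            seen.add(match)
            entities.append(match)
    return entities

def build_ner_guided_prompt(article):
    # Anchor the summarization instruction in the extracted
    # entities so the LLM is steered toward grounded facts.
    entities = extract_entities(article)
    return (
        "Summarize the news article below. Mention only facts "
        "grounded in the source, and anchor the summary in these "
        "key entities: " + ", ".join(entities)
        + "\n\nArticle:\n" + article
    )

article = "Angela Merkel met Emmanuel Macron in Berlin on Tuesday."
prompt = build_ner_guided_prompt(article)
```

The resulting prompt would then be sent to the chosen LLM; in a production system, the extracted entity list can also be reused afterwards to check the generated summary for hallucinated entities.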
