Mitigating Risks in Large Language Model Usage Through Critical Thinking

Publication type
Book part
The use of Large Language Models (LLMs) in information retrieval can lead to inaccurate, inconsistent, incomplete, irrelevant, or biased outputs. To address these risks, we argue that critical thinking serves as a powerful antidote, equipping users with the skills to navigate and mitigate them effectively. This paper examines leading conceptualizations of critical thinking and contextualizes its application in LLM-based information retrieval. We review state-of-the-art approaches for minimizing these risks and highlight their limitations. Building on this, we propose five novel Critical Thinking Support Functions aimed at fostering user criticality during LLM interactions. We report on workshops conducted with subject matter experts and potential users to evaluate the support functions, highlight the function that appeared most promising, and provide a first draft of an interface design. By emphasizing the need for user-centered solutions to complement technical advancements, we hope to contribute to safer and more effective use of LLMs. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
TNO Identifier
1025434
ISBN
978-3-032-13184-3
Publisher
Springer
Source title
HCI International 2025 – Late Breaking Papers. HCII 2025, Proceedings of the 27th International Conference on Human-Computer Interaction, HCII 2025, Gothenburg, Sweden, June 22–27, 2025
Editor(s)
Degen, H.
Ntoa, S.
Place of publication
Cham
Pages
450-463
Files
To receive the publication files, please send an e-mail request to TNO Repository.