CogSci 2025

August 01, 2025

San Francisco, United States


Keywords: interactive behavior, UX, problem solving, human-computer interaction, artificial intelligence, natural language processing

Large language models (LLMs) such as ChatGPT have replaced conventional interface designs with prompt-based natural language interactions. LLMs exhibit dynamic capabilities to fulfill a broad range of tasks and ad-hoc functionalities (e.g., “rewrite these appliance installation instructions for a five-year-old”). However, their open-ended interface replaces Norman’s gulf of execution with a new cognitive challenge for end-users; namely, the gulf of envisioning clear intentions and task descriptions in prompts to obtain a desired LLM response. To address this gap, we propose a cognitive model of the Envisioning process based on protocols of generative AI prompt-based interactions. The model highlights three cognitive challenges people face when requesting help from LLMs: (1) what the task should be (intentionality gap), (2) how to give instructions to do the task (instruction gap), and (3) what to expect in the LLM’s output (capability gap). We make recommendations to narrow the gulf of envisioning in human-LLM interactions.


