In this work, we investigate the role of generative AI agents as reflective partners in engineering design. While such models are increasingly used to generate design solutions, concerns remain about their potential to diminish designers' critical thinking and reasoning skills. To address this, we develop a mixed-initiative conversational framework that positions large language and vision–language models as reflective thinking partners rather than solution providers. The framework is structured around five contextual information channels (task–role context, design representations, historical context, evaluation signals, and target references) that enable AI agents to ask reflective questions and provide explanations and suggestions. To study this framework within a concrete design context, we develop an interactive tool that embodies the notion of contextual fidelity for 2D structure design tasks. We implement varying levels of contextual fidelity, defined by the extent of contextual information available to the agent. We evaluate these levels through a between-subjects study with forty-six participants, comparing a high-fidelity and a low-fidelity agent against a control group without AI support. We examine the impact of the agents on how users think, talk, and act, using a comprehensive set of metrics, including coarse-level design objectives (deformation and material usage), solution quality metrics (structural and geometric analysis), process-oriented measures (design space exploration patterns and trajectories, design strategy shifts), conversational dynamics (thematic and temporal analysis), and subjective surveys (NASA-TLX, Cognitive Load Theory, Trust in AI). Our analyses show that while conversational agents do not immediately help improve coarse-level design objectives, they significantly shape nuanced aspects of design processes and outcomes.
Interaction with the agents critically influences how users explore the design space, with agent-supported groups exhibiting more focused exploration patterns compared to the control group's broader trial-and-error approaches. Furthermore, interactions with the high-fidelity agent led to solutions with higher symmetry and topological alignment with optimal designs, fostered deeper reflection, reduced mental demand, and supported more deliberate design decisions. Building on these findings, we discuss broader implications of AI agents for problem-solving processes and outline guidelines for designing adaptive and generalizable frameworks for different domains.
