Research Article
Open Access

Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools

Authors:
Frederic Gmeiner, Humphrey Yang, Lining Yao, Kenneth Holstein, and Nikolas Martelaro
Human-Computer Interaction Institute, Carnegie Mellon University, United States

Published: 19 April 2023

Abstract

AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as “co-creators.” Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation.

1 Introduction

Modern manufacturing processes allow designers to produce complex parts optimizing strength-to-weight or leveraging new materials such as shape-changing plastics, yet their creation often surpasses the designers’ cognitive capabilities. Recently, computer-aided design (CAD) tools have begun incorporating AI-based features to generate part designs based on a designer’s myriad optimization goals [54]. For example, Japanese electric vehicle manufacturer WHILL used Autodesk Fusion 360 Generative Design to optimize material economy, strength, and sustainability for an electric wheelchair component [27]. AI tools are also helping designers generate parts using emerging manufacturing processes such as shape-changing smart material structures [80]. In both examples, working with AI allows designers to create designs that would be extremely tedious or infeasible without AI support.
These generative AI tools take more agency and autonomy in parts of the human-AI design process and are often referred to as “co-creators” [18]. However, effective and practical co-creation presents a significant learning curve for designers, as they are required to work and think collaboratively with AI agents that operate differently than human collaborators or complex CAD tools.
Studies have shown that effectively working with professional feature-rich non-AI design software already requires substantial and continual learning as such software becomes more capable [38, 49]. Consequently, instead of simplifying the software’s interfaces, an active field within HCI studies how the learning of such complex software systems can be better supported through interactive interfaces [24, 44, 51]. However, working with AI "co-creators" is different from working with conventional CAD tools. Designers do not directly manipulate 3D geometry but rather formulate design goals for the AI system to build from. Yet, little is known about how to support designers in learning to work with AI tools that take on this more active and collaborative role.
In this study, we ask how designers can be better supported in learning to co-create with AI design tools. We guide our investigations by a recent wave of HCI literature that looks to inform the design of human-AI collaboration based on the mechanisms that make human-human collaborations effective—such as grounding in communication or shared mental models [5, 6, 10, 31, 40, 77, 85]. Furthermore, we seek inspiration from team learning [79] which models what actions help people learn to collaborate effectively with each other.
To gain insights on improving human-AI design collaboration, we study how engineering and industrial designers without prior AI co-creation experience learn to work with AI CAD tools in the context of advanced manufacturing design tasks. We chose this domain because of its increasing design task complexity, for which designers often require AI assistance. Through our studies, we aim to generate insights to inform future support interfaces of AI design tools. In particular, we investigate the following research questions:
RQ1a
What challenges do designers face when learning to co-create with computational AI tools?
RQ1b
How do designers overcome these challenges?
To answer these questions, we conducted a series of think-aloud studies observing how trained engineering and architectural designers (tried to) learn to co-create with two different computational AI tools on complex manufacturing design tasks (Study 1). Based on analyses of their interactions with the systems and retrospective interviews, we found that they generally valued the AI’s assistance but faced challenges in learning to effectively co-create with the tools and interpret the design outputs. Those who were able to produce feasible and satisfying designs learned to co-create with the tool by systematically testing the boundaries of its capabilities early on, by self-explaining AI behaviors they observed, and by sketching and reflecting on design issues.
After learning about these challenges, we then explored how designers could be supported to better co-create by asking:
RQ2
What are effective strategies to support designers in learning to co-create with computational AI tools?
To answer this question, we took inspiration from prior work on human-human collaboration [9, 53, 68]. We conducted a human-human collaboration study to see how human guides would assist new users of AI tools in learning to co-create and how the new users learned with human assistance (Study 2). The observed effective support strategies included providing step-by-step instructions, prompting design reflection, and suggesting alternative strategies and goals for the design task. We also observed that the human guides relied heavily on multi-modal communication (e.g., screen annotations and mouse gesturing) to communicate more effectively with designers.
Lastly, to inform design opportunities for new support tools we asked:
RQ3
What are designers’ needs and expectations for human-AI co-creation?
Synthesizing the results from both studies, we learned that many participants felt unable to communicate their design goals with the AI and wished for more conversational interactions and contextual awareness from the tool. We discuss potential support implications and future work from these needs and expectations.
In short, this study makes three main contributions:
(1)
providing a set of observed challenges that engineering and architectural designers face when learning to collaborate with AI on complex co-creation tasks in the context of advanced manufacturing design;
(2)
advancing our understanding of designers’ needs and expectations for human-AI co-creative tools;
(3)
highlighting design opportunities to better support designers in learning to co-create.

2 Related Work

2.1 AI-based design tools for manufacturing

AI-based design support tools use various computational methods for generating 2D and 3D design options based on constraints and objectives set by designers [1, 72]. In 3D architectural, industrial, and mechanical design, new generative design tools have helped designers create consumer goods [43], building layouts [60], and lightweight automotive and airplane components [59, 61]. In the context of emerging advanced materials, AI design tools assist designers in creating structures out of shape-changing or elasticity-changing materials [29, 80]. Such AI tools use multiple techniques to generate designs from a set of goals and requirements, including constraint-based solvers [73], style transfer [2, 28], simulation and optimization [80], and genetic algorithms [59]. Such techniques are becoming commercially available in 3D CAD design tools such as Siemens NX, Solidworks, and Autodesk Fusion360 [3, 17, 69]. Many of these tools operate as black boxes where designers first set objectives and then review generated designs. However, this interaction can make it hard for designers to quickly develop a mental model of how the tool works, limiting their creative use.
Recent research has developed generative design interfaces for interactively exploring multiple design options [39, 52, 83] or more iterative engagement between the designer and the tool through real-time design generation and assessment [14, 19, 37]. However, few empirical studies exist that evaluate how engineers and designers learn to work with AI design tools on realistic tasks. Some existing work has measured the performance impact of AI agents on human engineering teams [84] while other work has investigated what role professional makers expect for involving AI in digital fabrication workflows [81]. This study provides empirical observations on how engineering, industrial and architectural designers learn to work and co-create with computational AI-based design tools.

2.2 Learning complex software

Prior HCI research has looked to evaluate and improve the learnability of complex software systems. Past studies explore how people of different ages learn a feature-rich notetaking tool [49], how professional engineers learn 3D design software [38], or how casual designers learn professional motion graphics software [33]. Often, people learn by searching web forums or asking knowledgeable colleagues for help [38]. Research on interfaces to support people in learning complex software has proposed dynamic feedforward tool tips [44], guided tutorial systems [24], and widgets that support self-directed trial and error learning [51].
While prior work has explored lenses such as self-directed learning for working with complex software [13], it remains an open question of how best to support self-directed learning for co-creation with AI systems that take an active role in the design process. For example, prior studies in Human-AI collaboration show that the black box nature of AI systems introduces new challenges where users grapple with non-transparent and non-intuitive system behavior, hindering coordination and communication when completing “collaborative” tasks [10]. To address these issues, various strategies like explainable AI or intelligibility are aimed at helping users refine their mental models of AI systems [55, 70].
However, even as AI models become more intuitive for users, we expect that there will always remain a need for learning to work effectively with AI systems to, for example, develop shared mental models [35, 40]. Consequently, as we discuss in the next section, supporting humans in learning to effectively co-create with AI requires bringing in additional theoretical lenses.

2.3 Human-human collaboration as a lens for studying co-creative systems

To design effective human–AI collaboration, researchers have suggested drawing lessons from studying what makes human–human collaboration effective [5, 6, 10, 31, 40, 77, 78, 85]. While it remains an open question to what extent scaffolds for human-AI collaboration should mirror the designs of supports for human–human collaboration [77, 85], human-AI interaction researchers suggest that theories and findings from psychology, education, and the learning sciences are currently underutilized. For instance, Koch and Oulasvirta [40] note that group cognition—the study of how agents relate to other agents’ decisions, abilities, beliefs, common goals, and understandings—provides powerful concepts for work on human–AI collaboration, yet is rarely referenced within this literature. Group cognition comprises phenomena such as grounding in communication [16] (creating mutual sense through verbal and non-verbal communication) and theory of mind [22] (the ability of agents to be aware of their own and the other’s beliefs, intentions, knowledge, or perspectives). Similarly, Kaur et al. [35] argue that like human-human collaboration, effective collaborations between humans and AI may require shared mental models between people and the AI to enable mechanisms such as adaptive coordination of actions among team members [16, 56]. These may include shared representations of the task to be accomplished, of each other’s abilities and limitations, or of each other’s goals and strategies [20, 25, 66, 74]. A line of work addressing these opportunities has begun to explore how humans might be supported in developing and maintaining more accurate mental models of an AI collaborator’s capabilities and limitations [5, 6, 41]. However, compared to concepts of human-human collaboration, honing only users’ mental models is not sufficient enough for effective collaboration, which requires shared mental models between the user and system [35].
To date, little work has explored how best to support humans in learning to collaborate with AI on authentic tasks, such as design tasks, despite growing recognition of the need for such supports [10, 45, 50, 85]. Design tasks represent compelling challenges for human–AI collaboration, given that design problems are often ill-defined and require teams to navigate and negotiate both the problem and solution space [21] through an iterative process of generating ideas, building prototypes, and testing [32].
In this study, we investigate human-AI collaboration for emerging manufacturing design tasks—an area where successful task performance sometimes requires human–AI collaboration, yet where effective collaboration may be challenging to achieve without strong supports [45, 85].

2.4 Team learning

While phenomena such as grounding in communication, theory of mind and shared mental models provide useful concepts to explain which cognitive and social phenomena enable collaboration among a group of agents [35, 40], these theories do not explain how groups of individual agents learn to effectively collaborate. To address this gap, team learning emerged to study what actions and conditions contribute to how human groups learn to effectively collaborate together [79]. For example, team learning studies suggest that the development of effective shared mental models is supported through an active process of negotiation between team members, involving “constructive” forms of conflict, argumentation, and resolution [30, 34, 74]. However, to date, team learning has been under-utilized as a lens for studying human-AI co-creation. In this work, we draw upon concepts from team learning, such as active processes of communication, joint information processing, and coordination of actions [53], to study what actions and support strategies can help designers learn to co-create with AI-based design tools.
Table 1:

ID      Group      Age  Gender  Domain                     Occupation     CAD Exp. (Years)  Industry Exp. (Years)
P-F01   Fusion360  27   M       Civil & Environ. Engin.    Student / MA   >5                2 – 5
P-F02   Fusion360  27   M       Mechanical Engineering     Student / PhD  >5                2 – 5
P-F03   Fusion360  25   M       Mechanical Engineering     Student / PhD  >5                2 – 5
P-F04   Fusion360  26   M       Mechanical Engineering     Student / MA   >5                1 – 2
P-F05   Fusion360  19   M       Architecture, Mathematics  Student / BA   2 – 4             0
P-F11*  Fusion360  64   M       Mechanical Engineering     Contractor     >10               >30
P-F12   Fusion360  59   M       Mechanical Engineering     Designer       >10               >30
P-S01   SimuLearn  21   F       Architecture               Student / BA   2 – 4             0
P-S03   SimuLearn  23   F       Computational Design       Student / MA   2 – 4             0
P-S04   SimuLearn  21   M       Architecture               Student / BA   >5                1 – 2
P-S06   SimuLearn  23   F       Architecture               Student / MA   >5                <1
P-S07   SimuLearn  23   M       Architecture               Student / MA   >5                0
P-S10   SimuLearn  33   F       Industrial Design          Researcher     >5                6 – 10
P-S11*  SimuLearn  64   M       Mechanical Engineering     Contractor     >10               >30

Table 1: Overview of study 1 participants. P-F11* and P-S11* are the same participant who had experience in using both Fusion360 and SimuLearn. Explanation for gaps in participant IDs: Some participants had dropped out after the first design session, or participants were assigned to Study 2 (see Section 6).

3 Study 1: Think-aloud Design Sessions

We conducted a series of think-aloud studies [75] with trained designers new to working with AI, where they worked with an AI design tool to complete a realistic advanced manufacturing design challenge. Think-aloud studies have people verbalize their thoughts while performing a task so that researchers can understand their cognitive processes (e.g., forming mental models [15], learning [82]). In our study, we use the think-aloud method to see how AI novices, who encounter a real learning challenge and are less biased than experts, learn to co-create with the AI tools. Participants first completed a 30-minute moderated think-aloud session where a member of the research team observed them working and listened to what they said they were thinking and doing while working. Half of the participants had a mechanical engineering background and designed a light and strong mounting bracket for a ship engine while considering the optimal manufacturing method and material combination using Autodesk’s Fusion360 Generative Design (based on topology optimization that generates multiple options) [52]. The other half of the participants with a background in architecture or industrial design designed a bike bottle holder made with shape-changing materials—a complex design task that is challenging to complete without computational support. Designers working on the bottle holder task worked with SimuLearn [80], a machine learning-based research tool built on top of Rhino3d that helps designers create structures from shape-changing materials.
After completing the design task, participants submitted their designs and joined a semi-structured interview to reflect on their experience of working with the design tools. Across the study, we collect the following data:
Video and audio recordings and machine-generated transcripts of the open-ended think-aloud design sessions
Audio recordings and machine-generated transcripts of the post-task interviews
3D designs created during the think-aloud sessions

3.1 Participants

We recruited 14 designers (4 female / 10 male, aged 19 to 64 (M = 32.5, SD = 16.6)) with backgrounds in Architecture, Industrial Design, or Mechanical Engineering (Table 1). Most participants were recruited from our institution’s student body, but we also recruited three professional designers via an online forum for designers who work with Fusion360 [64]. Participants had a minimum of two years of experience using CAD (Fusion360 or Rhino 3D) but no experience working with the studied AI design tools, determined via a screening questionnaire. Participants included mostly undergraduate and Ph.D. students and three engineers with > 30 years of industry experience (Table 1). We recruited participants familiar with either Fusion360 or Rhino3d so that they could focus on learning to work with the AI co-creation features rather than on learning the CAD tool’s user interface. Before the study, all participants signed a consent form approved by our institution’s IRB (STUDY2021_00000202). Participants were paid 20 USD per hour.
Figure 1: The Fusion 360 design task and workflow. The task involves (A) designing an engine bracket that connects the engine to a damper. (B) A starter file containing connection holes and bounding dimensions is provided to the users to initiate the design in (C) Fusions 360. The user is prompted to create (D) a viable design while minimizing weight. (E) The workflow involves five steps, and based on the AI system’s solutions, the user may iterate the design by adjusting the design constraints and criteria to produce new solutions. (Image A: © Rolls-Royce Solutions America Inc.)

3.2 Study context: AI-based design tools and tasks

To gather generalizable insights into designers’ challenges, needs, and expectations around designer-AI co-creation, we observed how designers tried to learn to co-create with two different computational AI tools for digital manufacturing tasks. Participants were given a non-trivial, realistic design task to work on during the study. We selected a mechanical engineering design task concerning multi-dimensional optimization and an industrial design task exploring the use case of shape-changing materials. Both tasks are too complex to accomplish without AI and also have functional AI tools already developed. We collaborated with engineers and advanced manufacturing experts to identify and pilot the tasks to ensure they were adequately complex but not overwhelming for our target population. The tasks required participants to generate design solutions within a few hours over multiple sessions.

3.2.1 Mechanical design support tool.

Mechanical designers worked with the "Generative Design" feature of Autodesk Fusion360[3], which helps designers to create lightweight and strong parts through topology optimization and genetic algorithms. In this task (Figure 1 A-D), the designer is asked to design a material-efficient and structurally-sound engine mounting bracket by considering the optimal manufacturing and material combination from a large pool of possibilities. While designing mounting brackets is common for mechanical engineers, optimizing designs for different manufacturing methods and materials is difficult without simulation and AI support. Traditionally, engineers would first build a part and then gradually remove or add material based on structural analysis to derive a weight-optimized part. Exploring different manufacturing options would be necessary for every material and manufacturing constellation—which is time-consuming and tedious. In contrast, Generative Design can automatically generate many different design options based on specified high-level requirements, which the designer can explore and choose from.
Participants were provided a starter file containing the geometric constraints and needed to specify the mechanical design criteria (e.g., loads, bolt connection clearance, boundary condition). Participants then ran the solver and evaluated the AI-generated solutions to identify three viable designs for submission (Figure 1 E). If none of the outcomes were deemed satisfactory, the user might choose to iterate the design by adjusting the input criteria.
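To make this setup-solve-review loop concrete, the sketch below gives a rough Python rendering of it. This is purely illustrative: Fusion360 exposes these settings through its graphical interface, and every class, field, and function name here is hypothetical rather than part of any real API.

    # Illustrative sketch only: Fusion360 configures these settings through its GUI,
    # and every name below is hypothetical rather than part of any real API.
    from dataclasses import dataclass

    @dataclass
    class GenerativeDesignSetup:
        preserve_geometry: list[str]   # regions that must remain, e.g. bolt-hole bosses
        obstacle_geometry: list[str]   # regions the solver must keep free of material
        load_cases_N: dict[str, tuple[float, float, float]]  # named force vectors (newtons)
        fixed_constraints: list[str]   # boundary conditions, e.g. bolted faces
        materials: list[str]           # candidate materials to explore
        processes: list[str]           # candidate manufacturing methods
        objective: str = "minimize mass"
        min_factor_of_safety: float = 2.0

    def design_iteration(setup, solve, acceptable, max_rounds=5):
        """Generate-evaluate-iterate loop: run the solver, keep viable options,
        otherwise adjust the setup and try again."""
        for _ in range(max_rounds):
            outcomes = solve(setup)                 # long-running solve in the real tool
            viable = [o for o in outcomes if acceptable(o)]
            if viable:
                return viable                       # designer reviews and picks from these
            setup = adjust(setup)                   # e.g. add obstacles, revise loads
        return []

    def adjust(setup):
        # Placeholder for the designer revising constraints between iterations.
        return setup

The key point is that the designer supplies a declarative problem specification (loads, obstacles, materials, objectives) and only sees geometry once the solver returns candidate designs.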

3.2.2 Industrial design support tool.

Industrial and architectural designers worked with SimuLearn [80], a research system built on Rhino3D that uses ML-driven simulation and optimization to enable designers to rapidly create objects out of shape-changing materials. This manufacturing process creates 3D-printed flat grids out of PLA plastic that can transform into a volumetric shape when heated. The transformation and the resulting shape can be controlled by tuning the grid geometry and the portion of the active transformation element (i.e., actuator ratio) within the beams. This technique is envisioned to reduce production waste and shipping costs (e.g., flat packaging, reduced support material).
While new materials are being developed to manufacture shape-changing structures, designing shape-changing components poses unique challenges that designers are ill-equipped to handle. Unlike 2D and 3D design, designing with shape-changing materials involves a non-intuitive mismatch between the final target (3D shapes) and the design input (often 2D). Effectively designing for shape-changing materials requires an understanding of (often complex) spatiotemporal, self-assembling material behaviors that may push against the limits of what humans can mentally simulate. At the low level, designing such materials requires modifying volumes voxel-by-voxel, which is infeasible for complex structures if done manually. AI-driven tools allow designers to create complex artifacts that would otherwise be impossible to create by hand. SimuLearn aids the design process by providing real-time simulation and optimization to iterate designs toward the desired morphing behavior.
Figure 2: The SimuLearn design task. (A) The user is prompted to design a bike bottle holder using the provided (B) starter file and (C) the SimuLearn tool to create (D) morphing grid structures. (E) The transformed grid should assemble into the holder and fit around the bottle and bike frame. (F) The SimuLearn tool provides three types of design workflows. After initializing a morphing grid design, the user can choose between AI-assisted optimization functions or rapid simulations to iterate the design.
In this task (Figure 2 A-E), participants design a bike bottle holder using morphing grids. A starter file containing the bottle and bike frame geometry was provided to contextualize the design. This task was more open-ended than the mechanical engineering task since each designer may assemble the morphed grids in different ways to create the holder. To use the tool (Figure 2 F), the designer models the grid geometry and assigns bending actuator ratios. Next, participants simulate the design, observe the predicted transformation, and iterate the design by changing the grid model and actuator assignment. Alternatively, participants may opt to use functions to optimize the grids toward a targeted transformed shape. The optimization process can be either autonomous or interactive (i.e., the tool suggests edits for the user to choose from). To effectively work with the tool, users need to learn to work with the different levels of AI assistance to produce a satisfactory design iteratively.
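A rough sketch of this workflow and its three levels of assistance is given below. It is illustrative only: SimuLearn runs inside Rhino3D, and the data structure, simulator stub, and optimizer here are hypothetical stand-ins rather than the tool's actual interfaces.

    # Stand-in sketch of a morphing-grid design and the three assistance levels;
    # none of this is SimuLearn's actual code or API.
    import random
    from dataclasses import dataclass

    @dataclass
    class MorphingGrid:
        nodes: list[tuple[float, float]]   # flat, as-printed 2D grid geometry
        beams: list[tuple[int, int]]       # node-index pairs forming the grid
        actuator_ratios: list[float]       # per-beam share of active material (0..1)

    def simulate(grid):
        """Placeholder for the ML-driven simulator predicting the heated 3D shape."""
        lift = sum(grid.actuator_ratios) / max(len(grid.actuator_ratios), 1)
        return [(x, y, lift) for x, y in grid.nodes]   # toy prediction, not real physics

    def shape_error(predicted, target):
        """Distance between predicted and target node positions (sum of squares)."""
        return sum((a - b) ** 2 for p, t in zip(predicted, target) for a, b in zip(p, t))

    # Level 1 (manual): the designer edits the grid and calls simulate() to preview.
    # Level 2 (hybrid): the tool proposes several edits; the designer picks one.
    # Level 3 (autonomous): the tool iterates toward a target shape on its own:
    def auto_optimize(grid, target, steps=100):
        best, best_err = grid, shape_error(simulate(grid), target)
        for _ in range(steps):
            ratios = [min(1.0, max(0.0, r + random.uniform(-0.05, 0.05)))
                      for r in best.actuator_ratios]
            candidate = MorphingGrid(best.nodes, best.beams, ratios)
            err = shape_error(simulate(candidate), target)
            if err < best_err:
                best, best_err = candidate, err
        return best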

3.3 Choosing design tools

We specifically study these systems for two reasons. First, while both tools support advanced manufacturing tasks, they represent computational systems with distinct purposes and interaction paradigms. Fusion360’s Generative Design module assists engineering designers with the generally familiar task of creating light and structural solid parts. The AI system helps designers to navigate a large design space and explore opportunities while adhering to specified requirements and constraints. SimuLearn, on the other hand, supports designers in working with an emerging material and manufacturing process unfamiliar to most designers. SimuLearn’s AI tool provides rapid simulations of the shape-changing material and offers different levels of design assistance—from manual, over interactive, to autonomous optimization/iteration.
Second, each tool represents a different interaction style and synchronicity. In Fusion360, users follow a structured sequence of steps to set up the parameters and acquire solutions. It may take a few hours to generate new solutions, and the designer may export the generated models at any time or iterate the design by adjusting the parameters and rerunning the solver. By contrast, SimuLearn’s solver runs two to three orders of magnitude faster (5-180 seconds), and users interact with the system without a predefined workflow. Participants may also freely switch between the three levels of AI support at any point.

4 Study Design

Figure 3: Overview of Study 1 think-aloud design sessions procedure. Participants were first introduced to the design tool and task, then worked while thinking aloud across multiple sessions. They completed an interview after turning in their design.

4.1 Study procedure

The study was structured into four phases (see Figure 3):
1) On-Boarding: Before the first session, participants received instructions by email on how to access the design tools running on a dedicated remote machine. They also watched a 25-minute video tutorial demonstrating the tools’ core functionalities with a step-by-step example.
2) Intro Design Session: At the beginning of the first session, a research team member introduced the participants to the study, design brief, task, and starter file. Sessions were conducted over video conference (Zoom) with audio and video recording. Participants then worked while sharing their screens and thinking aloud. Participants were allowed to use all available support resources, such as internal help files, external video tutorials, or user forums. The researcher quietly observed the participants setting the parameters of the computational design system and took notes. Due to the complex and open-ended nature of the tasks, the research team interrupted the task after 30 minutes and asked participants to continue working on their own as a compensated "homework assignment" without any time limit.
3) Homework Sessions: Following the initial session, participants continued working independently for as long as needed to iterate and produce the final design submission. Participants used the same remote machine and joined a personalized video conference meeting with automatic recording to document their work while thinking aloud. We also deployed a simple web application to prompt the users to verbalize their thoughts. The application analyzed the speech input from the microphone and reminded the user to "keep talking" after twenty seconds of silence. Once the designer felt satisfied with the result, they submitted the 3D files to our team.
4) Post-Task Interview: Within two days of submitting their result, participants completed a one-hour semi-structured interview with a research team member. Participants were asked to reflect on their experience working with the AI tool. The interviews were conducted remotely over video conferencing. The interviewer took notes, and the interview audio and video were recorded. The interview protocol contained 36 questions clustered into three topics: collaboration with the tool, design process, and learning process (see the Appendix for interview protocol). The topics and questions clustered under collaboration with the tool were inspired by measures from team learning literature on assessing collaboration quality and effectiveness of human teams [9, 34, 53, 74]. These include perceived team roles and coordination, communication between user and tool, conflict resolution, timing, and (shared) mental models.
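The "keep talking" reminder used during the homework sessions (phase 3) can be sketched as follows. The study describes it only as a simple web application that prompts users after twenty seconds of silence, so this minimal Python approximation fills in assumed details: the silence threshold, polling rate, and function names are all hypothetical.

    # Minimal sketch of the "keep talking" reminder; threshold and polling
    # interval are assumed, only the 20-second rule comes from the study.
    import time

    SILENCE_LEVEL = 0.01       # assumed RMS level below which input counts as silence
    REMIND_AFTER_S = 20        # per the study: remind after 20 seconds of silence

    def monitor(read_rms_level, remind):
        """Poll the microphone level and prompt the user after sustained silence."""
        last_speech = time.monotonic()
        while True:
            if read_rms_level() > SILENCE_LEVEL:
                last_speech = time.monotonic()
            elif time.monotonic() - last_speech >= REMIND_AFTER_S:
                remind("Please keep talking.")   # on-screen prompt in the real tool
                last_speech = time.monotonic()   # reset so the prompt is not repeated every poll
            time.sleep(0.25)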

4.2 Analysis

To gain insight into research questions RQ1a What challenges do designers face when learning to co-create with computational AI tools? and RQ1b How do designers overcome these challenges? we (1) evaluated the design outcomes and analyzed more than 40 hours of think-aloud videos and 17 hours of interview recordings using (2) video interaction analysis of think-aloud videos, and (3) reflexive thematic analysis of think-aloud sessions (videos, transcripts) and interview transcripts.

4.2.1 Evaluation of design outcomes.

We evaluated the effectiveness of designer-AI collaboration by measuring the time required to complete the task and designer satisfaction with the results, rated on a three-point Likert scale (satisfied, neutral, unsatisfied) during the post-task interview. We also measured product feasibility for the mechanical engineering task by checking the designed engine brackets against the requirements in the design brief. Structural soundness was validated using finite element analysis (FEA), and material usage was checked by measuring part volume. We also checked the models for shape requirements (i.e., clear bolt holes, body within the bounding box). Since the bottle holder was a more free-form and aesthetic design task, we only checked whether the user submitted their design and primarily relied on their self-reported satisfaction with the outcome.
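For the engine bracket task, these checks amount to a per-design pass/fail rubric; a sketch of that rubric is shown below. It is illustrative only: the numeric thresholds are placeholders, not the design brief's actual values.

    # Illustrative feasibility rubric mirroring the checks described above;
    # numeric thresholds are placeholders, not the design brief's real values.
    from dataclasses import dataclass

    @dataclass
    class BracketEvaluation:
        volume_cm3: float             # measured part volume (material usage)
        min_factor_of_safety: float   # minimum factor of safety from the FEA run
        bolt_holes_clear: bool        # bolts fit with sufficient clearance
        within_bounding_box: bool     # body stays inside the specified dimensions

    def meets_requirements(e, max_volume_cm3=500.0, min_fos=2.0):
        """Return pass/fail for each requirement check described above."""
        return {
            "material (volume)": e.volume_cm3 <= max_volume_cm3,
            "structural (FEA)": e.min_factor_of_safety >= min_fos,
            "shape (bolt clearance)": e.bolt_holes_clear,
            "shape (bounding box)": e.within_bounding_box,
        }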

4.2.2 Video interaction analysis.

We used video interaction analysis [7] of the think-aloud recordings to understand participants’ learning process while working with the AI features. To understand how well participants learned over time to use the AI features effectively, we tracked their interactions with the AI features relevant to the design task and documented whether the actions would produce satisfactory outcomes. For Fusion360, we tracked how participants specified structural loads, mechanical constraints, and the obstacle geometry feature to control the bracket’s bolt clearance and overall size. For SimuLearn, we tracked how participants used different AI-assisted features (hybrid and automated optimization) throughout the think-aloud sessions.

4.2.3 Reflexive thematic analysis.

To understand participant’s challenges, needs, and expectations when learning to co-create with the AI system, we performed a reflexive thematic analysis [8] of the interview data (transcripts) and the think-aloud sessions (video, transcripts). We followed an iterative inductive coding process and generated themes through affinity diagramming. We used ATLAS.ti to analyze transcripts, audio, and video.
In the initial coding, the think-aloud and interview transcript data were divided equally between two researchers, who generated preliminary codes using both a semantic (what people said) and a latent (our interpretations of the data) coding strategy. Next, the research team collectively identified initial codes and themes. We generated themes in a bottom-up manner. However, we looked at the data with a mindset of collaboration between the designer and the tool—inspired by previous studies on human-human collaboration, co-creation, and team learning [9, 53]. We also tried our best to identify and separate usability issues from the codes and themes to avoid confounding the analysis.
The two researchers then coded the think-aloud recordings to document where designers specified system parameters or evaluated design outcomes. These moments allowed us to find many of the problems that designers faced. We also coded non-verbal expressions like mouse gesturing or screen annotations that showed how designers attempted to communicate.
Finally, we created summary videos highlighting specific situations related to co-creation with the tool (e.g., designers confused by AI-generated outcomes). The video clips were annotated with a time code, participant ID, and a contextual description of the situation to share and discuss with the entire research team (for an example, please see the video figure in the supplementary material). The research team collectively analyzed the think-aloud summary videos in a half-day session and discussed the themes. We completed the qualitative analysis by iteratively reviewing and revising codes and themes until we identified a stable network of coherent and rich themes.
Table 2: Evaluation of design outcomes for the engine bracket design task (left) and bottle holder design task (right). The designer’s satisfaction with the outcome is rated with green=satisfied, yellow=neutral, red=unsatisfied. For the engine bracket task, meeting structural and shape requirement checks are rated as X=fail, check mark=pass.
Figure 4: Overview of engine bracket designs created by participants. P-F05 met both the shape and structural requirements. P-F01 only met the shape requirements (the bracket has holes for bolts with sufficient clearance and respects the specified dimensions but is structurally too weak). All other brackets missed both requirements and were either too heavy, weak, larger than the specified dimensions, or had not enough bolt clearance. For simplicity, we only show one design option per participant. Please see the Appendix for all submitted design options.
Figure 5: Overview of bottle holder designs created by participants. Participants came up with different designs using the shape-changing grid material in various ways. Participant P-S01 was not able to control the shape as intended and improvised by stacking grids together. While some designs (P-S06, P-S07) would not be able to hold a bottle, some designers (P-S04, P-S10) utilized the shape-changing grid structure to create promising bottle holder designs. Participant P-S11 stopped working on the task out of frustration and did not submit a design.

5 Results

Overall, participants expressed seeing potential value in the AI systems to support their design process—especially that the tools would enable them (at least in theory) to create and explore more complex designs in a shorter time than without AI. However, most faced unresolvable challenges in learning to effectively co-create with the tools. In the following subsections, we provide a brief overview of participants’ performance on the design tasks and then explore what challenges they faced in co-creating with them (RQ1a). We then highlight what some successful learning strategies looked like (RQ1b).

5.1 Design Tasks Outcome Summary

In the engine bracket task, all participants were familiar with designing similar mechanical components by considering forces and constraints. Generally, such a task is a standard exercise in engineering education, and our task was comparable to the example provided by Autodesk in the introduction video that the participants watched. Participants required between 54 and 160 minutes (M=104, SD=39.7) to complete the task (see Table 2 left). No participant mentioned in the post-interview that the task itself was too difficult for them. Yet, only participant P-F05 was able to produce a self-satisfactory design that met both shape and structural requirements (see Figure 4 and Appendix for additional designs). P-F02 was also satisfied with their design but opted to manually refine the generated geometry that did not meet the requirements (i.e., using excessive materials and blocked bolt holes). We were surprised to find that few engineering participants produced satisfying results, even though they were familiar with the type of design task. Designers struggled to perform this otherwise familiar design task when they attempted to do so with AI assistance.
For our industrial or architectural designers, designing a bottle holder in 3D was not perceived as difficult. However, working with shape-changing material structures was new and everyone expressed in the post-interview that working with the shape-changing material was "unintuitive" and "challenging." Participants worked on the task between 63 and 255 minutes in total (M=154, SD=56.6). All but one participant submitted a bottle holder design (Figure 5). This participant stopped working on the project after 151 minutes because he felt he could not control the material well, even with the AI. In the end, almost all designers were either dissatisfied with their final design or had a neutral opinion (see Table 2).
Table 3: Schematic overview of each participant’s learning process of design task-relevant features of the AI system. Left: Engine Bracket Task. Right: Bottle Holder Task. Correct use of input parameters or AI tools is shown with check marks. Increasing numbers of check marks from the first iteration to the last iteration suggest participants learning to work with the AI system.
Table 3 shows that only a few designers understood how to set relevant parameters for the design task to produce satisfying results on the first attempt in both tasks. In Fusion360, most designers learned over time to successfully specify the structural constraints and obstacle geometry for bolt clearances. However, many designers failed to correctly specify structural loads and the boundary box. As a result, some submitted designs were too heavy, weak, or larger than the specified dimensions. In particular, one designer (P-F02) decided to manually refine the design after the first iteration because they felt more in control this way. For SimuLearn, all participants quickly learned how to control the grid shape using manual adjustments and rapid simulations (Table 3 right). However, many avoided using the other two AI-assisted features after an unsuccessful first attempt and continued to work manually (see Section 5.2.2 for further explanation).

5.2 Challenges designers faced when learning to co-create with AI (RQ1a)

We clustered challenges designers faced when learning to co-create into three themes: Understanding and fixing AI outputs, working “collaboratively” with the tools, and communicating their design goals to the AI.

5.2.1 Challenges in understanding and adjusting AI outputs.

Designers were often confused about the generated results and had difficulty understanding the cause and remedy of “weird” outcomes. This often occurred when generated solutions contained minor aesthetic flaws such as surface bumps, holes, or slightly twisted geometry. Designers wondered about the AI’s underlying design rationale and were unable to determine whether such design features were intended or caused by algorithmic glitches.
Furthermore, designers were unsure how to correct the generated designs (e.g., identifying the parameters that led to the problem). Some designers were hesitant to manually refine the generated geometry because they felt uncomfortable changing the optimized structure:
"...I also realized that if I was making any change in the mesh maybe I was changing the stress that that area will have. So I didn’t feel very comfortable changing stuff because I knew that was optimized for my loads and my material." P-F04
Designers were also unable to make sense of apparent structural issues in the generated designs, like when the generated parts were unreasonably thin or thick, provided little clearance, or produced confusing grid transformations. As verbalized by this designer while evaluating a generated engine bracket:
“it just doesn’t make sense that, like this region here, which is obviously pretty thick... well I guess it’s not. I just don’t think that it can hold up the weight. It says the factor of safety is five. That just seems ridiculous to me.” P-F01 Think-aloud
At other moments, designers were uncertain whether the AI or the user was responsible for fixing the problem. In some cases, designers accepted imperfect results and attributed the flaws to the system. On one occasion, a designer verbalized their concerns when evaluating a generated engine bracket with insufficient bolt hole clearance:
"It’s a goofy-looking bracket. I would never design it like that, but this thing thinks it can do it like that […] It just seems [that there needs] to be bolt heads and stuff, so I just don’t see that bolt fitting in this area. But this is what it gave me, so I don’t have a problem with that.” P-F13 Think-aloud

5.2.2 Challenges in working “collaboratively”.

Designers frequently expressed challenges in sharing control with AI-based design tools, noting that it often felt like these tools were dominating the design process. In those situations, designers either gave up and accepted unsatisfying results, improvised ’hacky’ strategies to work around the AI or abandoned the AI assistance altogether and proceeded to work manually. For instance, one designer using SimuLearn expressed frustration about having insufficient control over the design process and decided to accept imperfect results:
"I feel like the collaborative process [...] it seemed a little difficult to control. I felt that SimuLearn had more control over it than I did." P-S01 Interview
Similarly, another participant pointed out the lack of participation opportunities and conversation throughout the design process:
"I would say no, that it’s not co-creative. The whole program creates the thing but based on my limits. [...] I don’t feel like I interact in that creation of the shape. I just worked at the beginning and then I let the program do the rest. I missed that once there is one option, I cannot change anything, I cannot interact with the solutions that the program is giving me." P-F04 Interview
In other cases, designers developed their own ’hacky’ workarounds to attain feasible results. In Fusion360, this happened when designers tried to use simple boxes as obstacle geometries to limit material growth to the specified perimeter. However, the algorithm often found ways to “squeeze” additional, unwanted material through the gaps between obstacle geometries. One Fusion360 user said they felt like they were tricking the system when they could not block this unwanted material generation:
“I think the software did kind of dominate the design process a little bit in that I was making things to satisfy the software instead of it kind of adapting to my needs [...] I think for something that’s like supposed to be so smart and easy to use I wish it gave me more options instead of [me] trying to find these like little tricks. So I don’t want to use little tricks. I don’t want to use a hidden kind of ‘I’ll trick the software into thinking that it’s correct.’" P-F05 Interview
Finally, designers often avoided AI assistance when they could not learn to co-create with it effectively. This issue was particularly frequent when designers used SimuLearn, given that this tool explicitly allows designers to switch between different levels of AI assistance. When comparing which SimuLearn features designers used to complete the tasks, all designers learned over time to manually adjust, simulate, and iterate the design (see Table 3, right). However, only two designers (P-S04, P-S10) were able to work with the AI-assistance features (i.e., the auto and hybrid optimizers) on the first try. All others expressed that they could not make sense of the AI output, even when they used the features correctly. These participants subsequently avoided the AI features and continued to work manually with rapid simulation.

5.2.3 Challenges in communicating design goals to the AI.

Designers often had difficulty communicating design goals to the AI system. For example, designers were unsure about the use and implications of certain parameters. Furthermore, many designers recognized gaps in their knowledge of the parameters that define manufacturing processes or materials. For instance, one designer was unfamiliar with the specifics of a manufacturing method and kept using the default parameter values:
"I’ll have to see if they have five-axis milling... minimum tool diameter 10 millimeters... uh I don’t know, I’m not really a tooling guy... tool shoulder length... [laughs] minimum tool diameter... all right I guess I don’t know... I just hit okay on that." P-F12 Think-aloud
When designers were unsure about the parameters’ meanings, they often relied on the AI system’s default settings or made assumptions about their effect. Instead of seeking clarification from support resources, they often tried to determine a parameter’s effect on the final result independently. However, it was often hard or impossible to notice a parameter’s influence and trace it back from the final results.
In the think-aloud sessions, we also observed several occasions where designers had different interpretations of the parameters. For example, in the Fusion360 task, half of the users made mistakes when converting the loads into the correct unit expected by the system. Similarly, when applying loads to multiple targets (i.e., bolt holes), the system applied the full load to each of the targets instead of distributing it equally across the targets, as the designers had anticipated (see Footnote 1). This mismatch led to higher total load assignments and unnecessarily strong and bulky bracket designs. Interestingly, most designers verbalized their uncertainty about the load distribution when specifying parameters, as exemplified by this think-aloud comment:
“All the loads... I remember being a little wonky... so... I said three... let’s see...what would be the case here? I don’t know if all of these three forces are the same... that’s the issue. I don’t know if this is applying to each one in particular... like if it’s 12,000 here, here, and here... or if it’s split evenly? I hope it’s being split evenly... that’s what I’m assuming.” P-F03 Think-aloud
Although participants were aware that the system might interpret the load assignments differently, only a few were actually able to figure out and correct the mistake.
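To make this mismatch concrete, the following minimal sketch (illustrative Python, not Fusion360 functionality; the 12,000 figure and the three bolt holes follow the task and the quote above) contrasts the interpretation designers expected with the one the system applied:

```python
# Hypothetical illustration of the two load interpretations (not Fusion360 code).
total_load = 12000          # load the designer wants the bracket to carry (N)
bolt_holes = 3              # number of load targets selected at once

# What designers expected: the load is split evenly across the selected targets.
expected_per_hole = total_load / bolt_holes      # 4,000 N per hole, 12,000 N in total

# What the tool did: the full load is applied to *each* selected target.
applied_per_hole = total_load                    # 12,000 N per hole
applied_total = applied_per_hole * bolt_holes    # 36,000 N in total

print(expected_per_hole, applied_total)          # 4000.0  36000
```

Under the per-target interpretation, the bracket is effectively sized for three times the intended load, which is consistent with the unnecessarily strong and bulky designs we observed.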

5.3 Learning strategies among successful designers (RQ1b)

Here we present findings related to how designers overcame the previously reported learning challenges (RQ1b). We observed that all designers (after watching the introductory tutorial video) tried to learn to work with the tools through an iterative trial-and-error process. We also observed that participants sporadically consulted different support resources, including software tooltips and help files, as well as external resources like video tutorials, online user forums, and, in some cases, colleagues. Designers sought help from these support resources primarily after encountering interface or usability issues, which they often resolved. However, despite the available support resources, most designers struggled to learn to co-create with the tools effectively. Nonetheless, some designers employed successful strategies that helped them learn to work better with the AI systems:
ID    | Group     | Age | Gender | Domain                 | Occupation    | CAD Exp. (Years) | Ind. Exp. (Years) | Guided by
P-F07 | Fusion360 | 27  | M      | Mechanical Engineering | Student / PhD | >5               | 1–2               | P-F05
P-F08 | Fusion360 | 27  | M      | Mechanical Engineering | Student / PhD | >5               | 1–2               | P-F05
P-F13 | Fusion360 | 19  | M      | Mechanical Engineering | Student / BA  | 2–4              | 0                 | P-F11
P-S05 | SimuLearn | 26  | F      | Architecture           | Student / MA  | >5               | 2–5               | P-S04
P-S08 | SimuLearn | 20  | F      | Architecture           | Student / BA  | 2–4              | 0                 | P-S04
P-S09 | SimuLearn | 23  | F      | Architecture           | Student / BA  | 2–4              | 2–5               | P-S04
Table 4: Overview of Study 2: Learning with a Peer Guide participants. Six additional participants (3xFusion360, 3xSimuLearn) were paired with peer guides who had worked successfully with the AI design tools in Study 1.

5.3.1 Systematically exploring AI’s limitations and capabilities.

We observed that, early on in their interactions, two designers (P-F11, P-S10) deliberately and systematically experimented with the AI tools to develop a better intuition of the AI’s behavior, capabilities, and limitations. These designers conducted tests to understand what effects different parameter values would have on the final result and documented the value-result correspondence to create a mental model. For example, P-S10 realized that their initial design sketches were not feasible with SimuLearn and the shape-changing materials, and thus decided to systematically test different extreme grid shapes to hone their mental model of the AI’s behavior:
"Even though I tried sketching some stuff, I think it just didn’t work. So I thought it’s better if I just go into the tool and see if I will be able to do this. I tried stuff like folding one corner upwards and one corner downwards or stuff like that. I took lots of screenshots and those really helped me to understand like ’if I do this, then it’s gonna behave like that’ so I think initially it was a lot of trying to form a mental model and like what’s the capability of this tool." P-S10 Interview

5.3.2 Sketching, explaining and reflecting on design issues.

Another strategy that helped designers overcome flawed outcomes was to actively abstract and explain the problem. In the think-aloud sessions, we observed that, when facing fundamental challenges like over-constraining the engine bracket (such that the loads had no effect), some designers were able to understand and overcome the issue by sketching out the bracket’s free-body diagram and explaining the acting forces and constraints to themselves. We observed similar strategies in SimuLearn, where participants came to understand the significant influence of gravity during the transformation process by explaining the process to themselves:
"Whoa, I was not expecting that at all... uh... is that just because of gravity? And there’s something crazy going on here... there’s a lot of gravity... oh is it because I made the thing so big? Yikes, that is not at all what I expected." P-S11 Think-aloud

6 Study 2: Learning with a peer guide (RQ2)

To gain insights into how designers can be better supported in learning to co-create with computational AI tools (RQ2), we conducted additional think-aloud sessions in which designers were paired with experienced peers who supported them during the task (i.e., guided sessions). The guides had prior experience using the AI tools and provided support as needed to help participants co-create more effectively with the AI-driven features. Motivated by human-human collaboration, we aimed to derive insights into when and how to effectively support users in learning to co-create with AI by observing the support strategies, pedagogical moves, and communication patterns of the human guides.

6.1 Method

We recruited six additional participants (Table 4) following the same criteria as described in Section 3.1. The guides were recruited from the pool of participants who had completed Study 1 and demonstrated a thorough understanding of the domain, tool, and task (P-F05, P-F11, P-S04). We asked these guides to support the designers in learning to co-create with the tool. No script was provided to the guides because we intended to find possible support strategies from their natural interactions.
All guided sessions followed the same procedure as that of the unguided sessions, except that the homework sessions were limited to 50 minutes and the guides were present to help the designers. The designers and the guides communicated with each other through audio, screen sharing, and screen annotations. A researcher quietly observed and took notes. After the design session, we conducted separate 15-minute semi-structured interviews with the guide and designer. All sessions and interviews were recorded (video and audio) and automatically transcribed.
We conducted a reflexive thematic analysis to identify situations and themes describing how peer guides supported designers in overcoming challenges in learning to work with AI tools. We specifically focused on aspects of collaboration and knowledge transfer such as communication, joint information processing, and coordination of actions [53]. We coded the videos, think-aloud transcripts, and interview transcripts, and then generated themes with a focus on communication and learning by analyzing the guides’ actions and the support strategies that helped designers overcome the challenges previously observed in Study 1.

6.2 Results – Guided Sessions

The guides supported the designers in using the tools and in understanding the AI’s behaviors, capabilities, and limitations, and sometimes suggested and discussed alternative design goals or strategies. Guides primarily reacted to designers’ verbalizations and actions when designers asked for help or expressed uncertainty, or when guides observed common mistakes. To get an impression of the conversational dynamics, please see the video figure in the supplementary material. We describe five of the most common peer support strategies that helped designers learn to co-create with the AI systems:

6.2.1 Guide providing step-by-step walk-through instructions.

Guides often provided designers with step-by-step instructions for setting specific parameters. Such instructions were provided in response to designers’ actions, such as showing confusion or struggle, but sometimes designers also specifically requested such assistance.

6.2.2 Guide reacting to designers’ expressions of uncertainty.

We observed that guides were especially sensitive to moments when designers signaled uncertainty or verbalized knowledge gaps with hedging expressions such as "I don’t know", "maybe", or "I assume." In these situations, the guides often intervened and offered support or suggested alternative design strategies. In the following exchange, in response to a designer wondering how to keep the part within the specified dimensions, the guide suggested creating an obstacle geometry in Fusion360 to prevent material build-up at the bottom of the part.
DESIGNER: "...but because the bolt is not here, I don’t know where it would be..." GUIDE: "I mean maybe on the bottom face, right? That’s what’s resting on the body of the ship effectively." DESIGNER: Yeah, let’s go ahead and choose that bottom face." (Designer P-F13, Guide P-F11)
设计师:“...但是因为螺栓不在这里,我不知道它会在哪里...” 导游:“我的意思是也许在底部,对吧?那就是有效地靠在船体上的部分。” 设计师:“是的,让我们选择那个底部。”(设计师 P-F13,导游 P-F11)

6.2.3 Guide prompting designer reflection on generated designs.

We often observed situations where the guides prompted feedback from designers each time SimuLearn’s simulation or Fusion360’s solver had finished. In those moments, peer guides often asked the designer “Is that what you envisioned?” or “Is that what you wanted?” or even “Yeah, there you go! Is that how you want it to bend?”. These prompts triggered designers to reflect on the generated designs, which helped the guides better understand how to provide support.

6.2.4 Guide suggesting alternative means and goals.

Beyond supporting tool operation and technical troubleshooting, the guides frequently suggested alternative means and goals to the designer, as in this dialogue:
DESIGNER: "It’s going for the green part, but it’s not able to figure out like a perfect way to get there without like touching this obstacle geometry." GUIDE: "Right, well I mean just make your obstacle geometry really long, say, all the way back past the connection to the ship, and then you’re saying ’no material is allowed to go here’ and that would make sure that you can always get a bolt in. See what I’m saying?" DESIGNER: "Yeah, I do" (Designer P-F13, Guide P-F11)
Here the guide suggested an alternative way to achieve the designer’s goal by enlarging the existing obstacle geometry. This strategy helped designers to better communicate design goals to the AI system and develop an intuition for harnessing the AI’s capabilities.

6.2.5 Guide and designer making use of screen annotations and mouse gesturing to discuss design strategy.

An essential part of building understanding between the designer and the peer guide was nonverbal communication, such as screen annotations, sketches, or mouse gestures. Guides frequently used the screen annotation feature built into Zoom to highlight elements they spoke about by circling them or drawing arrows. We also observed that all designers naturally used the mouse cursor to emphasize design features through circling or pointing gestures when explaining something to the guide. Both behaviors are illustrated in this exchange, in which the guide and the designer discussed a strategy for achieving a specific bottle holder shape:
GUIDE: "So in this case, maybe I suggest that you move these two points in particular more towards the center." [Guide draws arrows from points towards the center of the grid] DESIGNER: "And this one seems to have dropped downwards. Even this point here... The beam seems to be going downward." [Designer points with mouse at different beams] GUIDE: "oh this beam right here?"[Guide draws an arrow pointing at beam] DESIGNER: "Yeah." (Designer: P-S08, Guide: P-S04)
指南:“在这种情况下,也许我建议您将这两个特定点更靠近中心移动。”【指南从点画箭头指向网格中心】设计师:“而这个似乎向下移动了。甚至这个点在这里…梁似乎向下走。”【设计师用鼠标指向不同的梁】指南:“哦,这个梁对吧?”【指南画箭头指向梁】设计师:“对。”(设计师:P-S08,指南:P-S04)
Many designers and guides also annotated generated designs and sketched to clarify or illustrate their ideas. In summary, different forms of nonverbal communication helped designers and peer guides develop better shared mental models of the task and collectively overcome design issues.

7 Designers’ needs and expectations for co-creating with AI-based design tools (RQ3)

Based on our observations and interviews from both the unguided and guided sessions, we highlight four themes that capture the needs and expectations that designers expressed around co-creating with AI-based design tools (RQ3).

7.1 Designers expect the AI system to have more contextual awareness about the design problem at hand

In both systems, participants missed the kind of contextual awareness that a human collaborator might have about a design task, such as a part’s function or how the part interfaces with other elements in the environment. Such a lack of contextual awareness was one of the main reasons participants felt that working with the tool was not collaborative. This lack of context also meant that the tool could not support designers proactively in the way a human partner might, as described by this participant:
"Certainly it would have saved me some time if at the beginning the software would have said ’oh I see that these are your connection points. Can you actually get a bolt in there?’ […] Things like that would have felt really much more collaborative and helpful." P-F11 Interview
Others expected the tool to offer more intelligent manufacturing and material suggestions or help them anticipate real-world design issues, as this participant expressed:
"I would like to see a design tool that would show me simulations of the water bottle in action, like ’oh is there enough friction’ or ’will it actually stay in place while a cyclist is on the bike’ and then provide suggestions of how to alleviate those problems." P-S04 Interview

7.2 Designers desire a more conversational form of interaction with the tool

Most participants complained about the lack of reciprocal interaction between themselves and the tool. Participants compared designing with the AI systems to "programming" or "working with a skilled teammate who is not listening to you." Participants wished for a more conversational interaction with the tools, as this designer described:
“More like a tool that I can have a conversation with while I’m always sure that everything that I’m making is fulfilling the expectation of the piece and the loads and materials and everything.” P-F04 Interview

7.3 Designers require support in thinking through design problems

Across all guided sessions, designers appreciated that the peer guide helped them not only learn to operate the tool but also think through design problems—a kind of support they would eventually also expect from a co-creative AI tool. As summarized by this designer, reflecting on working with their peer guide:
"I think working together with the peer guide was actually really helpful because he had a lot of insight. I felt like a couple of things that I didn’t really think about just from a fundamental engineering standpoint of how the thing can actually be made. I think that was really beneficial." P-F13 Interview

7.4 Designers expect an AI tool to provide project-relevant work examples

Designers frequently suggested that a co-creative tool should surface similar task- and project-related examples created by other users. Such examples could help designers learn the system’s capabilities and provide creative inspiration, as this SimuLearn user described:
"Let’s say I’m modeling some form and in real-time, the tool is searching some sort of database to show me some possibilities that other people have previously done, just by the similarity of the shape, and then I’m like ‘yeah this shape can go there and go there.’ So that’s actually a creative input that can help—a bit more like you’re designing with someone. It doesn’t necessarily have to generate, it can pull up from other people and tell you ’here is how some other designers work with this.’” P-S10 Interview
Table 5: Overview of design opportunities and example applications in relation to group cognition and team learning.

8 Discussion

Co-creative AI design tools have the potential to amplify the abilities of engineering and industrial designers. However, we found that designers face major challenges in (learning to) effectively co-create through understanding and adjusting system outputs, communicating design goals, and working "collaboratively" with the AI. Designers who overcame challenges did so by systematically exploring AI’s limitations and capabilities, and by explaining and reflecting on their design issues. The observed support strategies of peer guides comprised step-by-step walk-through instructions, reacting to designers’ expressions of uncertainty, prompting designer reflection on generated designs, suggesting alternative means and goals, and making use of screen annotations and mouse gestures to discuss design strategies. Overall, designers expected the AI system to have more contextual awareness about the design problem at hand, desired a more conversational form of interaction with the tool, asked for more support in thinking through design problems, and expected project-relevant work examples from the tool.
We discuss our findings in the context of prior research studying how users learn to work with complex software tools and relate our findings to concepts and empirical findings from research on human collaboration. We highlight design opportunities (Table 5) to better support designer-AI co-creation by scaffolding designers in actively exploring the boundaries of AI capabilities and limitations, prompting designers to actively reflect on design problems and observed AI behaviors, enhancing AI systems’ contextual awareness of designers’ tasks and objectives, and supporting more conversational forms of multi-modal communication between designers and AI systems.

8.1 New learning challenges for human–AI co-creation

Based on our findings, we believe that the challenges designers faced when learning to co-create with the tools go beyond learning the tools’ interfaces. Firstly, all participants were experienced designers familiar with the CAD software’s interface and had watched a step-by-step instructional video demonstrating the operation of the AI features. Furthermore, designers used different support resources, including tooltips, help files, video tutorials, online user forums, and colleagues, to overcome the interface issues they faced. However, even with the help of these resources, most designers could not produce feasible and satisfying outcomes with AI assistance. While this result may have been due, in part, to the limitations of the AI tools themselves, it is clear that this was not the full story. Indeed, a few of our participants were able to overcome challenges and learned to co-create effectively with the AI features. Hence, we believe that the challenges participants faced when working with the AI systems (such as communicating design goals or understanding AI outputs) were partly new learning challenges arising from the tool’s more active role in the design process, challenges that go beyond learning the tool’s interface. For example, a major challenge we identified relates to how designers specify all required parameters upfront instead of modeling and testing a part step by step. This workflow requires designers to think through the design problem in advance, which is challenging and different from the usual iterative design process. Our findings suggest that these new learning challenges require new support strategies, for example, prompting designers’ reflection in response to expressions of uncertainty or suggesting alternative design goals that align with the AI’s capabilities.

8.2 Toward models of human collaboration as lenses for studying and designing co-creative systems

Participants had trouble learning to predict how the AI might behave in response to the specified parameters. They struggled to make sense of the AI system’s reasoning and to correct unwanted design issues. Prior literature on group cognition suggests that, to achieve effective collaboration, group members should be able to interpret each other’s reasoning and predict roughly how their partner might behave in response to their own actions [40]. Similarly, from a team learning perspective, our findings suggest that designers who systematically explored the AI’s limitations and capabilities early on were better at predicting the tool’s actions in response to their own and produced more satisfactory results. This result is in line with studies on human-AI collaboration in decision-making, which suggest that users learn to predict the machine’s behavior better through inductive mechanisms (i.e., via concrete examples and hands-on testing) than through general, declarative information about internal processes [11]. While explainable AI research focuses primarily on directly communicating information about the AI system to the user, recent research has suggested that more engaging and longer forms of learning and deliberate practice might improve human-AI collaboration [36]. However, in addition to helping users hone their mental model of the AI’s capabilities and limitations, it is equally important for the AI system to have an understanding of the user’s capabilities, limitations, and task context to enable more effective human-AI collaboration. Hence, this would require the AI system to have better contextual awareness of the user and the current task at hand. We further discuss the resulting design opportunities in Section 8.3.3.
Most designers felt the tools were uncollaborative and that the tools had more control over the design process than the designers would have preferred. As a consequence, designers accepted imperfect results, developed improvised workarounds, or avoided AI assistance altogether. Previous studies on group cognition and team learning suggest that group members feel more ownership of the mutual activity when the group learns to coordinate cognitive capabilities among participants—united by their interpretations of each other’s mental states [57]. Groups can reach this level of collaboration through an active process of communication, joint information processing, and coordination of actions [53].
Our findings from the guided sessions show that the guides’ active communicative support strategies, such as reacting to designers’ uncertainties and providing step-by-step instructions, helped designers learn to work confidently with the AI. Furthermore, when peer guides prompted designers’ feedback and reflection on generated designs, the designers were required to articulate their intentions explicitly. As a result, both the guides and designers were able to discuss and better coordinate further actions to improve the outcome.

8.3 Design Opportunities and Future Work

The following subsections highlight several design opportunities we identified in our findings to support designers in learning to co-create with computational AI tools.

8.3.1 Scaffolding inductive learning of AI’s capabilities and limitations.

As discussed, some designers learned the AI system’s capabilities and limitations by testing and documenting the correspondence between parameter values and generated results (see 5.3.1). In the guided sessions, peer guides also frequently provided step-by-step instructions for setting up parameters or walked designers through the sequence of steps (see 6.2.2). These inductive learning strategies helped designers better predict the AI’s behavior and understand its capabilities. Previous work on novice-AI music co-creation has also found that users systematically tested AI limitations to hone their mental model of the system’s behavior [48]. Going further, to better support users in this learning activity, future co-creative systems may offer designers a set of hands-on mini-guided ‘experiments’ to better understand the system’s responses to specific (extreme) parameter inputs [42]. Systems may also offer designers opportunities to view sets of examples of input-output pairs to help designers develop useful mental models of an AI tool’s generative capabilities and limitations (cf. [58]). A co-creative tool may also proactively recommend similar tasks and project-related examples created by other human-AI teams (see 7.4) to help designers learn the system’s capabilities and provide creative inspiration.
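As a rough illustration of what such scaffolded ‘mini experiments’ could look like, the sketch below varies one parameter at a time and records the value-result pairs; `Experiment`, `generate_design`, and the parameter names are hypothetical assumptions, not features of Fusion360 or SimuLearn:

```python
# Minimal sketch of a "mini experiment" scaffold for probing a generative tool's
# behavior; `generate_design` stands in for whatever solver the tool exposes.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    parameter: str                 # e.g., "safety_factor" or "load_per_bolt_N"
    values: list                   # a few deliberately extreme settings to try
    observations: dict = field(default_factory=dict)

    def run(self, generate_design, base_params):
        """Vary one parameter at a time and record the value-result correspondence."""
        for value in self.values:
            params = {**base_params, self.parameter: value}
            self.observations[value] = generate_design(**params)  # hypothetical call
        return self.observations

# Usage sketch: probe how the safety factor alone changes the generated bracket.
# experiment = Experiment("safety_factor", [1, 2, 5, 10])
# experiment.run(generate_design, base_params={"material": "Al 6061", "load_per_bolt_N": 4000})
```

Recording the observations explicitly, much as P-S10 did with screenshots, gives designers a concrete set of input-output pairs from which to build their mental model.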

8.3.2 Prompting designers’ planning and reflection.

Participants who were more successful at co-creating with the AI tools did so by abstracting and explaining their problems—either to themselves during think-aloud sessions or to the peer guides (see 5.3.2 and 6.2.3). Literature from the learning sciences shows that self-explanation positively affects understanding and problem-solving [76]. In addition, participants from the guided sessions appreciated the guides’ prompts for reflection on AI behaviors and their suggestions for alternative design goals or strategies (see 6.2.4 and 7.3). Such actions helped designers think through the tasks and plan actions with AI tools. Conversely, designers who did not reflect on the design problems were unable to learn to understand the system’s behavior, capabilities, and limitations well enough and failed to produce satisfactory outcomes. This observation was more prevalent in Fusion360 than in SimuLearn, since Fusion360’s long simulation time does not support the kind of rapid, interactive adjustments that SimuLearn does, thus requiring the user to strategize their actions in advance. Hence, supporting users in thinking through the design problem and specifying parameters ahead of time would be especially beneficial for AI tools with longer processing times.
One additional explanation for why participants failed to produce satisfactory outcomes might be that AI systems can lead designers to over-rely on their support, creating an “illusion of success” that reduces their effort in solving the design problem [84], something we saw when designers accepted results even when they appeared unfeasible (see Section 5.2.1). To compensate for this tendency, a co-creative system may help designers reframe the problem or further explore the solution space by suggesting alternative goals or asking generative design questions [23] like “What could other alternatives look like?”. Moreover, actively reflecting on the design process is an essential part of professional design practices [67]. Much like the guides, a co-creative system could prompt feedback and active reflection on observed AI behaviors or generated designs by asking deep reasoning questions about the results, such as “Is this generated design what you envisioned?” (cf. [12]). Based on the feedback, the system may then offer support and help to coordinate further actions. This strategy would be complementary to the inductive learning support described in the previous section (8.3.1).
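One hedged sketch of such a reflection prompt, triggered whenever a solve finishes, might look as follows; the question list, `on_solve_finished`, and `ask_user` are illustrative assumptions rather than existing tool hooks (the first and third questions echo the prompts quoted above):

```python
# Minimal sketch of a reflection prompt a co-creative tool could raise after each solve.
REFLECTION_QUESTIONS = [
    "Is this generated design what you envisioned?",
    "Which constraint or load do you think shaped this region the most?",  # assumed wording
    "What could other alternatives look like?",
]

reflection_log = []

def on_solve_finished(design_id, ask_user):
    """Prompt the designer to reflect on the result and keep the answers for later support."""
    for question in REFLECTION_QUESTIONS:
        answer = ask_user(question)            # e.g., a dialog box inside the CAD tool
        reflection_log.append((design_id, question, answer))
    # Answers marked negative or uncertain could trigger reactive help,
    # such as re-surfacing the parameters the designer was unsure about.
```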

8.3.3 Improving co-creative AI tools’ contextual awareness of designers’ tasks and objectives.

Our results show that designers felt the tools lacked awareness of the design context and therefore missed the kind of proactive support a human partner might provide (see 7.1). While building contextual awareness into AI systems has long been a tradition in HCI research, it also presents many technical challenges. However, in the context of co-creative design tools, promising directions are being explored. For example, the system could derive its user model through explicit and implicit mechanisms to develop a shared mental model of the context, for instance by asking the user for information about the specific design task (e.g., parsing a written design brief for context and goals [71]) or by inferring design goals from user behaviors [46].
From a group cognition and team learning perspective, contextual awareness also includes an understanding of the other team member’s existing or missing knowledge about the design task. Based on our findings (see 6.2.2), a co-creative tool might learn about designers’ knowledge by responding to verbalized knowledge gaps such as "I don’t know" or to hedging expressions such as "maybe" or "I assume." This observation is in line with research showing that student learning is positively affected by human tutors’ responses to their expressions of uncertainty [26]. Based on this phenomenon, literature has explored how intelligent tutoring systems can detect and respond to student hedging [26, 62]. A co-creative system may also allow designers to communicate uncertainties about parameters and their own knowledge gaps. For example, designers might tick a "???" checkbox next to a parameter’s input field to signal uncertainty. The AI system could then track designers’ "level of certainty" for each parameter and provide reactive help, or re-surface those parameters later in the design process to identify possible reasons for unexpected outcomes.
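A minimal sketch of what tracking this "level of certainty" could look like is shown below; the "???" flag follows the idea above, while the function and parameter names are hypothetical:

```python
# Minimal sketch of per-parameter certainty tracking behind a "???" checkbox;
# the parameter names and the binary certainty scale are illustrative assumptions.
uncertainty_flags = {}   # parameter name -> True if the designer ticked "???"

def set_parameter(name, value, unsure=False):
    """Record a parameter value along with the designer's self-reported uncertainty."""
    uncertainty_flags[name] = unsure
    return {"name": name, "value": value}

def parameters_to_resurface():
    """Parameters the tool should revisit when a result looks unexpected."""
    return [name for name, unsure in uncertainty_flags.items() if unsure]

# Usage sketch:
# set_parameter("minimum_tool_diameter_mm", 10, unsure=True)   # "I'm not really a tooling guy"
# set_parameter("load_per_bolt_N", 12000, unsure=True)         # "I hope it's being split evenly"
# parameters_to_resurface()  # -> ["minimum_tool_diameter_mm", "load_per_bolt_N"]
```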

8.3.4 Multi-modal interactive communication for human–AI co-creation.

Our findings show that designers felt communicating with the AI was like giving instructions without receiving much feedback (see 7.2). Results from the guided sessions show that peer guides used conversational strategies like confirming the reception of each other’s goals or asking for clarifications when they were unsure about the other’s intentions. As conversation is widely seen as a vital mode of designing [67] and empirical work suggests that much of design work lies within conversations between collaborators and stakeholders [47], co-creative systems should consider using more back-and-forth conversation as an interaction interface. Furthermore, studies on team learning show that the forming of effective, shared mental models is strengthened through an active process of iterative negotiation between team members, involving “constructive” forms of conflict, argumentation, and resolution [30, 34, 74]. Such a strategy may also prove useful for negotiating design goals.
Our findings from the guided sessions also suggest that non-verbal communication may support design partners in developing better shared mental models of design goals (see 6.2.5). Designers and guides discussed goals and strategies by pointing at features with the mouse cursor or sketching with the screen annotation feature, similar to the ’spatial-action language’ described by Schön [67], which treats gesturing and drawing alongside verbal expressions as typical forms of communication in traditional design critique sessions. Such non-verbal interaction is still an underutilized medium in human-AI co-creation. Allowing designers to use sketching, annotation, or gesturing atop generated results may help them communicate design goals to an AI system.
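As a rough sketch of how such sketch- and gesture-based input could be captured alongside verbal comments, consider the following illustrative data structure; the schema and example annotations are assumptions, not an existing tool API:

```python
# Minimal sketch of representing sketch/gesture feedback on a generated result so
# it can be handed to the AI together with the verbal half of the interaction.
from dataclasses import dataclass

@dataclass
class DesignAnnotation:
    design_id: str          # which generated option the designer is reacting to
    kind: str               # "circle", "arrow", "freehand_sketch", or "point"
    region: tuple           # e.g., a bounding box or a face/beam identifier
    comment: str            # the accompanying verbal expression

feedback = [
    DesignAnnotation("bracket_option_2", "circle", (120, 80, 40, 40),
                     "This strut looks too thin to hold the engine load."),
    DesignAnnotation("bracket_option_2", "arrow", (60, 150, 0, 0),
                     "Keep this area clear so a bolt head can fit."),
]
# A co-creative system could translate such annotations into constraints for the
# next iteration, e.g., a minimum thickness here or an obstacle geometry there.
```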

8.4 Human–AI co-creation beyond manufacturing

Although we identify opportunities to support learning to co-create with AI systems in the context of manufacturing, many of our findings could also apply to other human-AI co-creation domains such as image, music, or text generation. Given recent advancements in generative image AI models (such as DALL-E [63] or Stable Diffusion [65]) with fast release cycles of new tools and capabilities, supporting creative professionals in learning to effectively co-create with such tools might become increasingly important. Furthermore, many prompt-based AI models like DALL-E expect users to express their goals through text prompts, which is an unfamiliar modality for most of today’s visual designers. Consequently, our findings suggest that AI tool users across many domains could be supported in learning to better co-create with AI systems by scaffolding inductive learning of the AI’s capabilities, prompting users’ planning and reflection, improving the tools’ contextual awareness of tasks and objectives, and facilitating multi-modal interactive communication between tool and user. Future work might further explore interfaces for supporting learning to co-create with computational AI tools in other domains beyond manufacturing.

8.5 Limitations of the study

We highlight three limitations of this study. First, our participants represent only a subset of engineering, architectural, and industrial designers. Although all participants had relevant training in their design fields and worked with 3D CAD software, most had minimal industry experience. We also included three professionals with substantial industry experience to compensate for this imbalance; however, even these participants struggled. Further, the self-selected participants in our study were presumably interested in and open to the idea of co-creating with an AI system. Thus, some of our findings may be reflective of this openness to co-creative work. Second, both AI tools are still early in development, and we noticed user experience issues that could benefit from improved UIs. To the best of our ability, we isolated UI issues from the more fundamental challenges in learning to co-create with AI systems. Although the tools are relatively new, we believe our findings provide value in exploring new opportunities for co-creative design systems. Finally, despite our efforts to make sure the design tasks were realistic, designers knew that they were in a research study and that the designs would not be manufactured. In a professional context, participants might have spent longer learning the tool to produce feasible designs.

9 Conclusion

In this paper, we presented an empirical study to understand how engineering and architectural designers learn to work with AI-based manufacturing design tools, documenting their challenges in working with an AI and probing their needs and expectations for co-creating with AI systems. We identified several support opportunities with an eye toward learning from effective human-human teams to improve future designer-AI co-creation. Overall, we aim to inspire others to explore untapped support opportunities and to work toward future co-creative tools that combine the strength of both human and AI systems to achieve complex designs that neither could achieve alone.

Acknowledgments

We want to thank all study participants, peer guides, and the research assistants Anita Sun, Linda Xue, and Sophia Timko for supporting this work.
This material is based upon work supported by the National Science Foundation under Grant No. 2118924 Supporting Designers in Learning to Co-create with AI for Complex Computational Design Tasks. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

A Additional Materials

Figure 6: Overview of all engine mounting brackets created in the Fusion360 task. Every participant submitted three design options. Since the AI system generated the three options based on the same parameter values, each trio either met or missed the same criteria.
Table 6: Interview protocol with questions of the semi-structured post-task interview.

Footnote

1. This is a known issue in Fusion360 that many users have discussed in the user forum. See [4].

Supplementary Material

MP4 File (3544548.3580999-video-figure.mp4)
Video Figure
MP4 File (3544548.3580999-video-preview.mp4)
Video Preview
MP4 File (3544548.3580999-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
Grégoire Allaire, François Jouve, and Anca-Maria Toader. 2002. A level-set method for shape optimization. Comptes Rendus Mathematique 334, 12 (Jan. 2002), 1125–1130. https://doi.org/10.1016/S1631-073X(02)02412-3
[2]
Gantugs Atarsaikhan, Brian Kenji Iwana, and Seiichi Uchida. 2020. Guided neural style transfer for shape stylization. PLOS ONE 15, 6 (June 2020), e0233489. https://doi.org/10.1371/journal.pone.0233489
[3]
Autodesk. 2020. Fusion 360 Generative Design Extension. https://www.autodesk.com/products/fusion-360/generative-design-extension
[4]
Autodesk. 2021. Generative Design - Structural Load - Force per Entity?https://forums.autodesk.com/t5/fusion-360-design-validate/generative-design-structural-load-force-per-entity/td-p/10380547 Section: Fusion 360 Design, Validate & Document.
[5]
Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, and Eric Horvitz. 2019. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7, 1 (Oct. 2019), 2–11. https://ojs.aaai.org/index.php/HCOMP/article/view/5285
[6]
Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki, and Eric Horvitz. 2019. Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff. Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 2019), 2429–2437. https://doi.org/10.1609/aaai.v33i01.33012429
[7]
Eric P.S. Baumer and Bill Tomlinson. 2011. Comparing Activity Theory with Distributed Cognition for Video Analysis: Beyond "Kicking the Tires". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’11). Association for Computing Machinery, New York, NY, USA, 133–142. https://doi.org/10.1145/1978942.1978962 event-place: Vancouver, BC, Canada.
[8]
Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health 11, 4 (Aug. 2019), 589–597. https://doi.org/10.1080/2159676X.2019.1628806
[9]
Jean-Marie Burkhardt, Françoise Détienne, Anne-Marie Hébert, Laurence Perron, Stéphane safin, and Pierre Leclercq. 2009. An Approach to Assess the Quality of Collaboration in Technology-Mediated Design Situations. In European Conference on Cognitive Ergonomics: Designing beyond the Product — Understanding Activity and User Experience in Ubiquitous Environments(ECCE ’09). VTT Technical Research Centre of Finland, FI-02044 VTT, FIN. event-place: Helsinki, Finland.
[10]
Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael Terry. 2019. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (Nov. 2019), 1–24. https://doi.org/10.1145/3359206
[11]
Arjun Chandrasekaran, Deshraj Yadav, Prithvijit Chattopadhyay, Viraj Prabhu, and Devi Parikh. 2017. It Takes Two to Tango: Towards Theory of AI’s Mind. http://arxiv.org/abs/1704.00717 arXiv:1704.00717 [cs].
[12]
Catherine C. Chase, Jenna Marks, Laura J. Malkiewich, and Helena Connolly. 2019. How teacher talk guidance during Invention activities shapes students’ cognitive engagement and transfer. International Journal of STEM Education 6, 1 (Dec. 2019), 14. https://doi.org/10.1186/s40594-019-0170-7
[13]
Rimika Chaudhury, Philip J. Guo, and Parmit K. Chilana. 2022. "There’s no way to keep up!": Diverse Motivations and Challenges Faced by Informal Learners of ML. In 2022 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). 1–11. https://doi.org/10.1109/VL/HCC53370.2022.9833100 ISSN: 1943-6106.
[14]
Xiang ’Anthony’ Chen, Ye Tao, Guanyun Wang, Runchang Kang, Tovi Grossman, Stelian Coros, and Scott E. Hudson. 2018. Forte: User-Driven Generative Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal QC Canada, 1–12. https://doi.org/10.1145/3173574.3174070
[15]
Michelene T. H. Chi. 2006. Laboratory Methods for Assessing Experts’ and Novices’ Knowledge. In The Cambridge Handbook of Expertise and Expert Performance, K. Anders Ericsson, Neil Charness, Paul J. Feltovich, and Robert R. Hoffman (Eds.). Cambridge University Press, Cambridge, 167–184. https://doi.org/10.1017/CBO9780511816796.010
[16]
Herbert H. Clark and Susan E. Brennan. 1991. Grounding in communication. In Perspectives on socially shared cognition., Lauren B. Resnick, John M. Levine, and Stephanie D. Teasley (Eds.). American Psychological Association, Washington, 127–149. https://doi.org/10.1037/10096-006
[17]
Dassault Systèmes. 2021. Solidworks | Enhance Your Design Workflow with Generative Design. https://www.solidworks.com/media/enhance-your-design-workflow-generative-design
[18]
Nicholas Davis, Chih-Pin Hsiao, Yanna Popova, and Brian Magerko. 2015. An Enactive Model of Creativity for Computational Collaboration and Co-creation. In Creativity in the Digital Age, Nelson Zagalo and Pedro Branco (Eds.). Springer London, London, 109–133. https://doi.org/10.1007/978-1-4471-6681-8_7 Series Title: Springer Series on Cultural Computing.
[19]
Nicholas Davis, Chih-PIn Hsiao, Kunwar Yashraj Singh, Lisa Li, Sanat Moningi, and Brian Magerko. 2015. Drawing Apprentice: An Enactive Co-Creative Agent for Artistic Collaboration. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition. ACM, Glasgow United Kingdom, 185–186. https://doi.org/10.1145/2757226.2764555
[20]
Leslie A. DeChurch and Jessica R. Mesmer-Magnus. 2010. Measuring shared team mental models: A meta-analysis.Group Dynamics: Theory, Research, and Practice 14, 1 (2010), 1–14. https://doi.org/10.1037/a0017455
[21]
Kees Dorst and Nigel Cross. 2001. Creativity in the design process: co-evolution of problem–solution. Design Studies 22, 5 (Sept. 2001), 425–437. https://doi.org/10.1016/S0142-694X(01)00009-6
[22]
David Engel, Anita Woolley, Lisa Jing, Christopher Chabris, and Thomas Malone. 2014. Reading the Mind in the Eyes or Reading between the Lines? Theory of Mind Predicts Collective Intelligence Equally Well Online and Face-To-Face. PloS one 9 (Dec. 2014), e115212. https://doi.org/10.1371/journal.pone.0115212
[23]
Ozgur Eris. 2003. Asking generative design questions: a fundamental cognitive mechanism in design thinking. In Proceedings of the International Conference on Engineering Design.
[24]
Jennifer Fernquist, Tovi Grossman, and George Fitzmaurice. 2011. Sketch-Sketch Revolution: An Engaging Tutorial System for Guided Sketching and Application Learning. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11). Association for Computing Machinery, New York, NY, USA, 373–382. https://doi.org/10.1145/2047196.2047245
[25]
Stephen M. Fiore and Travis J. Wiltshire. 2016. Technology as Teammate: Examining the Role of External Cognition in Support of Team Cognitive Processes. Frontiers in Psychology 7 (Oct. 2016). https://doi.org/10.3389/fpsyg.2016.01531
[26]
Kate Forbes-Riley and Diane J. Litman. 2009. Adapting to Student Uncertainty Improves Tutoring Dialogues. (July 2009), 33–40.
[27]
formlabs. 2020. Generative Design 101. https://formlabs.com/blog/generative-design/
[28]
Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image Style Transfer Using Convolutional Neural Networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Las Vegas, NV, USA, 2414–2423. https://doi.org/10.1109/CVPR.2016.265
[29]
Aldair E. Gongora, Siddharth Mysore, Beichen Li, Wan Shou, Wojciech Matusik, Elise F. Morgan, Keith A. Brown, and Emily Whiting. 2021. Designing Composites with Target Effective Young’s Modulus using Reinforcement Learning. In Symposium on Computational Fabrication. ACM, Virtual Event USA, 1–11. https://doi.org/10.1145/3485114.3485123
[30]
George Head. 2003. Effective collaboration: Deep collaboration as an essential element of the learning process. The Journal of Educational Enquiry 4, 2 (2003).
[31]
Kenneth Holstein, Vincent Aleven, and Nikol Rummel. 2020. A Conceptual Framework for Human–AI Hybrid Adaptivity in Education. In Artificial Intelligence in Education, Ig Ibert Bittencourt, Mutlu Cukurova, Kasia Muldner, Rose Luckin, and Eva Millán (Eds.). Vol. 12163. Springer International Publishing, Cham, 240–254. https://doi.org/10.1007/978-3-030-52237-7_20 Series Title: Lecture Notes in Computer Science.
[32]
Ivan Hybs and John S. Gero. 1992. An evolutionary process model of design. Design Studies 13, 3 (July 1992), 273–290. https://doi.org/10.1016/0142-694X(92)90216-W
[33]
Amir Jahanlou, William Odom, and Parmit Chilana. 2021. Challenges in Getting Started in Motion Graphic Design: Perspectives from Casual and Professional Motion Designers. In Proceedings of Graphics Interface 2021 (GI 2021). Canadian Information Processing Society, 35–45. https://doi.org/10.20380/GI2021.06
[34]
Heisawn Jeong and Michelene T. H. Chi. 2007. Knowledge convergence and collaborative learning. Instructional Science 35, 4 (May 2007), 287–315. https://doi.org/10.1007/s11251-006-9008-z
[35]
Harmanpreet Kaur, Alex C Williams, and Walter S Lasecki. 2019. Building Shared Mental Models between Humans and AI for Effective Collaboration. (2019), 7.
[36]
Anna Kawakami, Nikos Arechiga, Luke Guerdan, Matthew Lee, Yang Cheng, Scott Carter, Anita Sun, Haiyi Zhu, and Alison Hu. 2022. Towards a Learner-Centered Explainable AI. Workshop on Human-Centered Explainable AI (HCXAI) at the ACM Conference on Human Factors in Computing Systems, CHI 2022 (2022), 7.
[37]
Rubaiat Habib Kazi, Tovi Grossman, Hyunmin Cheong, Ali Hashemi, and George Fitzmaurice. 2017. DreamSketch: Early Stage 3D Design Explorations with Sketching and Generative Design. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, Québec City QC Canada, 401–414. https://doi.org/10.1145/3126594.3126662
[38]
Kimia Kiani, Parmit K. Chilana, Andrea Bunt, Tovi Grossman, and George Fitzmaurice. 2020. “I Would Just Ask Someone”: Learning Feature-Rich Design Software in the Modern Workplace. In 2020 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). 1–10. https://doi.org/10.1109/VL/HCC50065.2020.9127288 ISSN: 1943-6106.
[39]
Hannah Kim, Jaegul Choo, Haesun Park, and Alex Endert. 2016. InterAxis: Steering Scatterplot Axes via Observation-Level Interaction. IEEE Transactions on Visualization and Computer Graphics 22, 1 (Jan. 2016), 131–140. https://doi.org/10.1109/TVCG.2015.2467615
[40]
Janin Koch and Antti Oulasvirta. 2018. Group Cognition and Collaborative AI. In Human and Machine Learning, Jianlong Zhou and Fang Chen (Eds.). Springer International Publishing, Cham, 293–312. https://doi.org/10.1007/978-3-319-90403-0_15 Series Title: Human–Computer Interaction Series.
[41]
Rafal Kocielnik, Saleema Amershi, and Paul N. Bennett. 2019. Will You Accept an Imperfect AI?: Exploring Designs for Adjusting End-user Expectations of AI Systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow, Scotland, UK, 1–14. https://doi.org/10.1145/3290605.3300641
[42]
Kenneth R. Koedinger, Albert T. Corbett, and Charles Perfetti. 2012. The Knowledge-Learning-Instruction Framework: Bridging the Science-Practice Chasm to Enhance Robust Student Learning. Cognitive Science 36, 5 (2012), 757–798. https://doi.org/10.1111/j.1551-6709.2012.01245.x
[43]
Sivam Krish. 2011. A practical generative design method. Computer-Aided Design 43, 1 (Jan. 2011), 88–100. https://doi.org/10.1016/j.cad.2010.09.009
[44]
Benjamin Lafreniere, Parmit K. Chilana, Adam Fourney, and Michael A. Terry. 2015. These Aren’t the Commands You’re Looking For: Addressing False Feedforward in Feature-Rich Software. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology. ACM, Charlotte NC USA, 619–628. https://doi.org/10.1145/2807442.2807482
[45]
Matthew V. Law, Nikhil Dhawan, Hyunseung Bang, So-Yeon Yoon, Daniel Selva, and Guy Hoffman. 2019. Side-by-Side Human–Computer Design Using a Tangible User Interface. In Design Computing and Cognition ’18, John S. Gero (Ed.). Springer International Publishing, Cham, 155–173. https://doi.org/10.1007/978-3-030-05363-5_9
[46]
Matthew V. Law, Amritansh Kwatra, Nikhil Dhawan, Matthew Einhorn, Amit Rajesh, and Guy Hoffman. 2020. Design Intention Inference for Virtual Co-Design Agents. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents. ACM, Virtual Event Scotland UK, 1–8. https://doi.org/10.1145/3383652.3423861
[47]
Bryan Lawson and Shee Ming Loke. 1997. Computers, words and pictures. Design Studies 18, 2 (April 1997), 171–183. https://doi.org/10.1016/S0142-694X(97)85459-2
[48]
Ryan Louie, Andy Coenen, Cheng Zhi Huang, Michael Terry, and Carrie J. Cai. 2020. Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Models. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376739
[49]
Shareen Mahmud, Jessalyn Alvina, Parmit K. Chilana, Andrea Bunt, and Joanna McGrenere. 2020. Learning Through Exploration: How Children, Adults, and Older Adults Interact with a New Feature-Rich Application. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu HI USA, 1–14. https://doi.org/10.1145/3313831.3376414
[50]
Mary Lou Maher, Brian Magerko, Dan Ventura, Douglas Fisher, Rogelio E. Cardona-Rivera, Nancy Fulda, John Gero, Minwoo Lee, David Wilson, James C. Kaufman, Maithilee Kunda, Michael Muller, Rachel K.E. Bellamy, Maya Ackerman, and Evangelia Chrysikou. 2022. A Research Plan for Integrating Generative and Cognitive AI for Human Centered, Explainable Co-Creative AI. In CHI 2022 Workshop on Generative AI and HCI. https://generativeaiandhci.github.io/
[51]
Damien Masson, Jo Vermeulen, George Fitzmaurice, and Justin Matejka. 2022. Supercharging Trial-and-Error for Learning Complex Software Applications. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3491102.3501895
[52]
Justin Matejka, Michael Glueck, Erin Bradner, Ali Hashemi, Tovi Grossman, and George Fitzmaurice. 2018. Dream Lens: Exploration and Visualization of Large-Scale Generative Design Datasets. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal QC Canada, 1–12. https://doi.org/10.1145/3173574.3173943
[53]
Anne Meier, Hans Spada, and Nikol Rummel. 2007. A rating scheme for assessing the quality of computer-supported collaboration processes. International Journal of Computer-Supported Collaborative Learning 2, 1 (April 2007), 63–86. https://doi.org/10.1007/s11412-006-9005-x
[54]
Achim Menges and Sean Ahlquist (Eds.). 2011. Computational design thinking (1st ed.). Wiley, Chichester.
[55]
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (Feb. 2019), 1–38. https://doi.org/10.1016/j.artint.2018.07.007
[56]
Susan Mohammed and Brad C. Dumville. 2001. Team mental models in a team knowledge framework: expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior 22, 2 (March 2001), 89–106. https://doi.org/10.1002/job.86
[57]
Susan Mohammed and Erika Ringseis. 2001. Cognitive Diversity and Consensus in Group Decision Making: The Role of Inputs, Processes, and Outcomes. Organizational Behavior and Human Decision Processes 85, 2 (July 2001), 310–335. https://doi.org/10.1006/obhd.2000.2943
[58]
Hussein Mozannar, Arvind Satyanarayan, and David Sontag. 2021. Teaching Humans When To Defer to a Classifier via Exemplars. arXiv:2111.11297 [cs] (Dec. 2021). http://arxiv.org/abs/2111.11297 arXiv:2111.11297.
[59]
Danil Nagy, Dale Zhao, and David Benjamin. 2018. Nature-Based Hybrid Computational Geometry System for Optimizing Component Structure. In Humanizing Digital Reality, Klaas De Rycke, Christoph Gengnagel, Olivier Baverel, Jane Burry, Caitlin Mueller, Minh Man Nguyen, Philippe Rahm, and Mette Ramsgaard Thomsen (Eds.). Springer Singapore, Singapore, 167–176. https://doi.org/10.1007/978-981-10-6611-5_15
[60]
Nelson Nauata, Kai-Hung Chang, Chin-Yi Cheng, Greg Mori, and Yasutaka Furukawa. 2020. House-GAN: Relational Generative Adversarial Networks for Graph-Constrained House Layout Generation. In Computer Vision – ECCV 2020, Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (Eds.). Vol. 12346. Springer International Publishing, Cham, 162–177. https://doi.org/10.1007/978-3-030-58452-8_10 Series Title: Lecture Notes in Computer Science.
[61]
Ahmed K. Noor. 2017. AI and the Future of the Machine Design. Mechanical Engineering 139, 10 (Oct. 2017), 38–43. https://doi.org/10.1115/1.2017-Oct-2
[62]
Heather Pon-Barry, Karl Schultz, Elizabeth Owen Bratt, Brady Clark, and Stanley Peters. 2006. Responding to Student Uncertainty in Spoken Tutorial Dialogue Systems. International Journal of Artificial Intelligence in Education 16 (2006), 171–194.
[63]
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-Shot Text-to-Image Generation. http://arxiv.org/abs/2102.12092 arXiv:2102.12092 [cs].
[64]
reddit.com. 2022. r/Fusion360. https://www.reddit.com/r/Fusion360/
[65]
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis with Latent Diffusion Models. https://doi.org/10.48550/arXiv.2112.10752 arXiv:2112.10752 [cs].
[66]
Matthias Scheutz, Scott A. DeLoach, and Julie A. Adams. 2017. A Framework for Developing and Using Shared Mental Models in Human-Agent Teams. Journal of Cognitive Engineering and Decision Making 11, 3 (Sept. 2017), 203–224. https://doi.org/10.1177/1555343416682891 Publisher: SAGE Publications.
[67]
Donald A. Schön. 1983. The reflective practitioner: how professionals think in action. Basic Books, New York.
[68]
Donghoon Shin, Sachin Grover, Kenneth Holstein, and Adam Perer. 2021. Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment. https://doi.org/10.48550/arXiv.2111.02626 arXiv:2111.02626 [cs].
[69]
Siemens Software. 2021. Generative Engineering: Instantly create multivariations of your designs that meet or exceed your requirements. https://www.plm.automation.siemens.com/global/en/products/mechanical-design/generative-design.html
[70]
Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, and Justin D. Weisz. 2022. Investigating Explainability of Generative AI for Code through Scenario-based Design. In 27th International Conference on Intelligent User Interfaces. ACM, Helsinki Finland, 212–228. https://doi.org/10.1145/3490099.3511119
[71]
Fuwen Tan, Song Feng, and Vicente Ordonez. 2019. Text2scene: Generating compositional scenes from textual descriptions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 6710–6719.
[72]
Nurcan Gecer Ulu and Levent Burak Kara. 2015. DMS2015-33: Generative interface structure design for supporting existing objects. Journal of Visual Languages & Computing 31 (Dec. 2015), 171–183. https://doi.org/10.1016/j.jvlc.2015.10.016
[73]
Nobuyuki Umetani, Takeo Igarashi, and Niloy J. Mitra. 2012. Guided exploration of physically valid shapes for furniture design. ACM Transactions on Graphics 31, 4 (Aug. 2012), 1–11. https://doi.org/10.1145/2185520.2185582
[74]
Piet Van den Bossche, Wim Gijselaers, Mien Segers, Geert Woltjer, and Paul Kirschner. 2011. Team learning: building shared mental models. Instructional Science 39, 3 (May 2011), 283–301. https://doi.org/10.1007/s11251-010-9128-3
[75]
Maarten van Someren. 1995. The think aloud method: A practical guide to modelling cognitive processes. Information Processing & Management 31, 6 (Nov. 1995), 906–907. https://doi.org/10.1016/0306-4573(95)90031-4
[76]
Kurt VanLehn, Randolph M. Jones, and Michelene T.H. Chi. 1992. A Model of the Self-Explanation Effect. Journal of the Learning Sciences 2, 1 (Jan. 1992), 1–59. https://doi.org/10.1207/s15327809jls0201_1
[77]
Dakuo Wang, Elizabeth Churchill, Pattie Maes, Xiangmin Fan, Ben Shneiderman, Yuanchun Shi, and Qianying Wang. 2020. From Human-Human Collaboration to Human-AI Collaboration: Designing AI Systems That Can Work Together with People. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu HI USA, 1–6. https://doi.org/10.1145/3334480.3381069
[78]
Qiaosi Wang, Koustuv Saha, Eric Gregori, David Joyner, and Ashok Goel. 2021. Towards Mutual Theory of Mind in Human-AI Interaction: How Language Reflects What Students Perceive About a Virtual Teaching Assistant. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, Yokohama Japan, 1–14. https://doi.org/10.1145/3411764.3445645
[79]
Christopher W. Wiese and C. Shawn Burke. 2019. Understanding Team Learning Dynamics Over Time. Frontiers in Psychology 10 (June 2019), 1417. https://doi.org/10.3389/fpsyg.2019.01417
[80]
Humphrey Yang, Kuanren Qian, Haolin Liu, Yuxuan Yu, Jianzhe Gu, Matthew McGehee, Yongjie Jessica Zhang, and Lining Yao. 2020. SimuLearn: Fast and Accurate Simulator to Support Morphing Materials Design and Workflows. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, Virtual Event USA, 71–84. https://doi.org/10.1145/3379337.3415867
[81]
Nur Yildirim, James McCann, and John Zimmerman. 2020. Digital Fabrication Tools at Work: Probing Professionals’ Current Needs and Desired Futures. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376621
[82]
Kirsty Young. 2009. Direct from the source: the value of ’think-aloud’ data in understanding learning. The Journal of Educational Enquiry 6 (2009).
[83]
Loutfouz Zaman, Wolfgang Stuerzlinger, Christian Neugebauer, Rob Woodbury, Maher Elkhaldi, Naghmi Shireen, and Michael Terry. 2015. GEM-NI: A System for Creating and Managing Alternatives In Generative Design. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, Seoul Republic of Korea, 1201–1210. https://doi.org/10.1145/2702123.2702398
[84]
Guanglu Zhang, Ayush Raina, Jonathan Cagan, and Christopher McComb. 2021. A cautionary tale about the impact of AI on human design teams. Design Studies 72 (Jan. 2021), 100990. https://doi.org/10.1016/j.destud.2021.100990
[85]
Rui Zhang, Nathan J. McNeese, Guo Freeman, and Geoff Musick. 2021. "An Ideal Human": Expectations of AI Teammates in Human-AI Teaming. Proceedings of the ACM on Human-Computer Interaction 4, CSCW3 (Jan. 2021), 1–25. https://doi.org/10.1145/3432945

