{"title":"Comparing and evaluating human and computationally derived representations of non-semantic design information","authors":"Elisa Kwon, Kosa Goucher-Lambert","doi":"10.1115/1.4063567","DOIUrl":null,"url":null,"abstract":"Abstract Design artifacts provide a mechanism for illustrating design information and concepts, but their effectiveness relies on alignment across design agents in what these artifacts represent. This work investigates the agreement between multi-modal representations of design artifacts by humans and artificial intelligence (AI). Design artifacts are considered to constitute stimuli designers interact with to become inspired (i.e., inspirational stimuli), for which retrieval often relies on computational methods using AI. To facilitate this process for multi-modal stimuli, a better understanding of human perspectives of non-semantic representations of design information, e.g., by form or function-based features, is motivated. This work compares and evaluates human and AI-based representations of 3D-model parts by visual and functional features. Humans and AI were found to share consistent representations of visual and functional similarities, which aligned well to coarse, but not more granular, levels of similarity. Human-AI alignment was higher for identifying low compared to high similarity parts, suggesting mutual representation of features underlying more obvious than nuanced differences. Human evaluation of part relationships in terms of belonging to same or different categories revealed that human and AI-derived relationships similarly reflect concepts of “near” and “far”. However, levels of similarity corresponding to “near” and “far” differed depending on the criteria evaluated, where “far” was associated with nearer visually than functionally related stimuli. These findings contribute to a fundamental understanding of human evaluation of information conveyed by AI-represented design artifacts needed for successful human-AI collaboration in design.","PeriodicalId":50137,"journal":{"name":"Journal of Mechanical Design","volume":"133 1","pages":"0"},"PeriodicalIF":2.9000,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Mechanical Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1115/1.4063567","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, MECHANICAL","Score":null,"Total":0}
Citations: 0
Abstract
Design artifacts provide a mechanism for illustrating design information and concepts, but their effectiveness relies on alignment among design agents in what these artifacts represent. This work investigates the agreement between multi-modal representations of design artifacts by humans and artificial intelligence (AI). Design artifacts are considered here as stimuli that designers interact with to become inspired (i.e., inspirational stimuli), and whose retrieval often relies on computational methods using AI. To facilitate this process for multi-modal stimuli, a better understanding is needed of how humans perceive non-semantic representations of design information, e.g., by form- or function-based features. This work compares and evaluates human and AI-based representations of 3D-model parts by visual and functional features. Humans and AI were found to share consistent representations of visual and functional similarities, which aligned well at coarse, but not more granular, levels of similarity. Human-AI alignment was higher for identifying low-similarity than high-similarity parts, suggesting shared representation of the features underlying obvious rather than nuanced differences. Human evaluation of part relationships in terms of belonging to the same or different categories revealed that human- and AI-derived relationships similarly reflect the concepts of “near” and “far”. However, the levels of similarity corresponding to “near” and “far” differed depending on the criterion evaluated: “far” was associated with nearer stimuli when parts were related visually than when they were related functionally. These findings contribute to a fundamental understanding of how humans evaluate information conveyed by AI-represented design artifacts, which is needed for successful human-AI collaboration in design.
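The abstract does not specify how human-AI alignment of similarity judgments was computed. As an illustration only, the minimal sketch below shows one common way such a comparison could be quantified: cosine similarity between AI-derived feature vectors for part pairs, correlated (Spearman) against averaged human similarity ratings for the same pairs. The function names, data shapes, and toy data are assumptions for demonstration, not the authors' method or dataset.

```python
# Hypothetical sketch (not the authors' code): quantifying human-AI alignment
# on pairwise part similarity, assuming AI-derived feature vectors (visual or
# functional embeddings) and human similarity ratings for the same part pairs.
import numpy as np
from scipy.stats import spearmanr


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def human_ai_alignment(embeddings: dict, human_ratings: dict) -> float:
    """Spearman correlation between AI similarity scores and human ratings.

    embeddings: part_id -> AI-derived feature vector
    human_ratings: (part_id_a, part_id_b) -> averaged human similarity rating
    """
    ai_scores, human_scores = [], []
    for (a, b), rating in human_ratings.items():
        ai_scores.append(cosine_similarity(embeddings[a], embeddings[b]))
        human_scores.append(rating)
    rho, _ = spearmanr(ai_scores, human_scores)
    return rho


if __name__ == "__main__":
    # Toy data: random 128-d embeddings and made-up human ratings (1-5 scale).
    rng = np.random.default_rng(0)
    embeddings = {p: rng.normal(size=128) for p in ["gear", "bracket", "shaft"]}
    ratings = {("gear", "bracket"): 2.0, ("gear", "shaft"): 4.5, ("bracket", "shaft"): 3.0}
    print(f"Spearman rho: {human_ai_alignment(embeddings, ratings):.3f}")
```

A rank correlation is used here because human similarity ratings are ordinal; a different alignment metric (e.g., agreement on categorical “near”/“far” labels, as the abstract discusses) would require a correspondingly different comparison.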
Journal description:
The Journal of Mechanical Design (JMD) serves the broad design community as the venue for scholarly, archival research in all aspects of the design activity with emphasis on design synthesis. JMD has traditionally served the ASME Design Engineering Division and its technical committees, but it welcomes contributions from all areas of design with emphasis on synthesis. JMD communicates original contributions, primarily in the form of research articles of considerable depth, but also technical briefs, design innovation papers, book reviews, and editorials.