{"title":"人工智能架构中的偏见层次:集体、计算和认知","authors":"Andrew Kudless","doi":"10.1177/14780771231170272","DOIUrl":null,"url":null,"abstract":"This paper examines the prevalence of bias in artificial intelligence text-to-image models utilized in the architecture and design disciplines. The rapid pace of advancements in machine learning technologies, particularly in text-to-image generators, has significantly increased over the past year, making these tools more accessible to the design community. Accordingly, this paper aims to critically document and analyze the collective, computational, and cognitive biases that designers may encounter when working with these tools at this time. The paper delves into three hierarchical levels of operation and investigates the possible biases present at each level. Starting with the training data for large language models (LLM), the paper explores how these models may create biases privileging English-language users and perspectives. The paper subsequently investigates the digital materiality of models and how their weights generate specific aesthetic results. Finally, the report concludes by examining user biases through their prompt and image selections and the potential for platforms to perpetuate these biases through the application of user data during training. Graphical Abstract","PeriodicalId":45139,"journal":{"name":"International Journal of Architectural Computing","volume":"21 1","pages":"256 - 279"},"PeriodicalIF":1.6000,"publicationDate":"2023-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hierarchies of bias in artificial intelligence architecture: Collective, computational, and cognitive\",\"authors\":\"Andrew Kudless\",\"doi\":\"10.1177/14780771231170272\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper examines the prevalence of bias in artificial intelligence text-to-image models utilized in the architecture and design disciplines. The rapid pace of advancements in machine learning technologies, particularly in text-to-image generators, has significantly increased over the past year, making these tools more accessible to the design community. Accordingly, this paper aims to critically document and analyze the collective, computational, and cognitive biases that designers may encounter when working with these tools at this time. The paper delves into three hierarchical levels of operation and investigates the possible biases present at each level. Starting with the training data for large language models (LLM), the paper explores how these models may create biases privileging English-language users and perspectives. The paper subsequently investigates the digital materiality of models and how their weights generate specific aesthetic results. Finally, the report concludes by examining user biases through their prompt and image selections and the potential for platforms to perpetuate these biases through the application of user data during training. 
Graphical Abstract\",\"PeriodicalId\":45139,\"journal\":{\"name\":\"International Journal of Architectural Computing\",\"volume\":\"21 1\",\"pages\":\"256 - 279\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2023-05-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Architectural Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/14780771231170272\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Architectural Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/14780771231170272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"ARCHITECTURE","Score":null,"Total":0}
Hierarchies of bias in artificial intelligence architecture: Collective, computational, and cognitive
This paper examines the prevalence of bias in the artificial intelligence text-to-image models used in the architecture and design disciplines. Machine learning technologies, particularly text-to-image generators, have advanced rapidly over the past year, making these tools far more accessible to the design community. Accordingly, this paper critically documents and analyzes the collective, computational, and cognitive biases that designers may encounter when working with these tools today. The paper delves into three hierarchical levels of operation and investigates the possible biases present at each. Starting with the training data for large language models (LLMs), it explores how these models may encode biases that privilege English-language users and perspectives. It then investigates the digital materiality of the models and how their weights produce specific aesthetic results. Finally, the paper examines user biases, expressed through prompt and image selections, and the potential for platforms to perpetuate these biases by applying user data during training.
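The second and third levels of analysis (model weights and user prompting) can be made concrete with a small experiment: generating several images from the same deliberately underspecified prompt and observing which defaults the model's weights privilege. Below is a minimal sketch, assuming the open-source Stable Diffusion v1.5 checkpoint via the Hugging Face diffusers library; the model and prompt are illustrative assumptions, not the specific systems examined in the paper.

```python
# A minimal bias-probing sketch: render the same underspecified prompt
# under several random seeds and inspect which defaults (style, era,
# geography, materials) the model's weights privilege.
# Assumes the Hugging Face diffusers library and the open-source
# Stable Diffusion v1.5 checkpoint; these are illustrative choices,
# not the specific models analyzed in the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a house"  # deliberately underspecified: the model fills in the gaps

for seed in range(8):
    # Fix the seed so each output is reproducible and comparable
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
    image.save(f"house_seed{seed}.png")
```

Repeating the loop with the same brief phrased in another language would probe the first, collective level: whether non-English prompts for the same subject yield systematically different defaults.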