Pub Date: 2025-03-05 | DOI: 10.1016/j.jretconser.2025.104281
Qian Qian Chen , Li Min Lin , Youjae Yi
Explainability is crucial for building trust in traditional recommendation systems, yet its role in conversational settings is underexplored. Across three experimental studies (N = 1,429), we used between-subjects designs featuring diverse product categories (cameras, smartwatches, headphones) to examine the interactive effects of post hoc explanations (expert validation-based vs. consensus validation-based) and decision-making domains (hedonic vs. utilitarian) on consumer responses to conversational recommendations. We further examined how consumer decision-making styles (intuitive vs. rational) and user interfaces (text-based vs. voice-based) moderated these effects. Results show that post hoc explanations enhance perceived transparency and interpretability, thereby increasing consumer trust in conversational recommendations. In text-based interfaces, consumers making hedonic decisions preferred consensus-based explanations, whereas no clear preference emerged for utilitarian decision-makers. In voice-based interfaces, utilitarian consumers favored consensus-based explanations, while no significant preference was observed for hedonic decisions. Furthermore, intuitive consumers preferred consensus-based explanations for hedonic decisions and expert-based explanations for utilitarian decisions. Rational consumers consistently favored consensus-based explanations across both decision-making domains. These findings provide valuable insights for designing conversational recommendation systems on e-commerce platforms. By tailoring explanations to decision domains, user interfaces, and consumer decision-making styles, businesses can foster greater trust and engagement, driving more favorable purchasing behaviors and improving business outcomes.
Title: Tailoring explanations in conversational recommendations: The impact of decision contexts and user interfaces
Journal of Retailing and Consumer Services, vol. 85, Article 104281
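The study above reports interaction effects from a between-subjects design crossing explanation type with decision domain. As a toy illustration of how such an interaction contrast is computed, the sketch below simulates trust ratings with invented cell means, loosely shaped like the text-based-interface finding (consensus preferred for hedonic decisions, no clear preference for utilitarian ones); all numbers are hypothetical, not the paper's data:

```python
import random
import statistics

random.seed(0)

# Invented cell means on a 7-point trust scale; NOT the paper's data.
true_means = {
    ("expert", "hedonic"): 4.2,
    ("consensus", "hedonic"): 5.1,
    ("expert", "utilitarian"): 4.6,
    ("consensus", "utilitarian"): 4.7,
}

def simulate_cell(mean, n=100, sd=1.0):
    """Draw n ratings around a cell mean, clipped to the 1-7 scale."""
    return [min(7.0, max(1.0, random.gauss(mean, sd))) for _ in range(n)]

data = {cell: simulate_cell(m) for cell, m in true_means.items()}
cell_means = {cell: statistics.mean(x) for cell, x in data.items()}

# Interaction contrast: the consensus advantage under hedonic decisions
# minus the consensus advantage under utilitarian decisions.
adv_hedonic = cell_means[("consensus", "hedonic")] - cell_means[("expert", "hedonic")]
adv_utilitarian = cell_means[("consensus", "utilitarian")] - cell_means[("expert", "utilitarian")]
interaction = adv_hedonic - adv_utilitarian
```

A positive `interaction` means the consensus-based advantage is larger for hedonic than for utilitarian decisions; a real analysis would test this with a two-way ANOVA rather than eyeballing the contrast.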
Pub Date: 2025-03-05 | DOI: 10.1007/s10257-025-00701-w
Markus Hafner, Miguel Mira da Silva, Henderik Alex Proper
In our data-centric society, the imperative to determine the value of data has risen. Therefore, this paper presents a taxonomy for a data valuation business capability. Utilizing an initial taxonomy version, which originated from a systematic literature review, this paper validates and extends the taxonomy, culminating in four layers, twelve dimensions, and 59 characteristics. The taxonomy validation was accomplished by conducting semi-structured expert interviews with eleven subject matter experts, followed by a cluster analysis of the interviews, leading to a taxonomy heatmap including practical extensions. This paper's implications are manifold. Firstly, the taxonomy promotes a common understanding of data valuation within an enterprise. Secondly, the taxonomy aids in categorizing, assessing, and optimizing data valuation endeavors. Thirdly, it lays the groundwork for potential data valuation standards and toolkits. Lastly, it strengthens theoretical assumptions by grounding them in practical insights and offers an interdisciplinary research agenda following the taxonomy dimensions and characteristics.
Title: Data valuation as a business capability: from research to practice
Information Systems and e-Business Management, vol. 25, no. 1
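The taxonomy described above nests characteristics inside dimensions inside layers (four layers, twelve dimensions, 59 characteristics in the paper). A minimal sketch of such a nested structure follows; the layer, dimension, and characteristic names are invented placeholders, not the paper's actual taxonomy:

```python
from collections import Counter

# Placeholder layers/dimensions/characteristics, invented for illustration.
taxonomy = {
    "context layer": {
        "valuation purpose": ["compliance", "monetization", "prioritization"],
        "data scope": ["dataset", "data product", "data domain"],
    },
    "method layer": {
        "valuation approach": ["cost-based", "market-based", "income-based"],
        "measurement frequency": ["ad hoc", "periodic", "continuous"],
    },
}

def characteristic_counts(tax):
    """Count characteristics per dimension across all layers."""
    return Counter({dim: len(chars)
                    for layer in tax.values()
                    for dim, chars in layer.items()})

counts = characteristic_counts(taxonomy)
total_characteristics = sum(counts.values())
```

Keeping the taxonomy as plain nested mappings makes it easy to derive the kind of per-dimension summaries the paper's heatmap visualizes.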
Pub Date: 2025-03-05 | DOI: 10.1016/j.technovation.2025.103209
Wenyi Chu, David Baxter, Yang Liu
Generative AI (GenAI), a major recent technological shift, is now used across many forms of computer-based knowledge work through various modes of human–AI collaboration. However, micro-level research on GenAI's impacts is rare. Moreover, whilst the creative industries are early adopters and heavy users of GenAI, research in this domain is lacking. To bridge these gaps, this study took an inductive approach to evaluate the application of GenAI in artistic innovation through a detailed case study of a show production firm, drawing on company documents, interviews, and observations. The theoretical lens of routine dynamics reveals the nature of these impacts. As both a working tool and a communication facilitator, the collective application of GenAI as the working medium led to an ostensive change in the sequence of routines: the simultaneous exploration of problems and solutions for creativity and innovation. We offer two main theoretical implications. First, individual and collective application of GenAI as both a digital working tool and a medium in artistic creation can improve the productivity of creation and iteration. Second, such human–AI collaboration adapts the ostensive aspect of routines by changing the path and interface of routine clusters and by mixing the sequential routines of creation with local events, rather than systematically transforming routines.
Title: Exploring the impacts of generative AI on artistic innovation routines
Technovation, vol. 143, Article 103209 (open access)
Pub Date: 2025-03-05 | DOI: 10.1016/j.techfore.2025.124076
Olusegun Agbabiaka , Adegboyega Ojo , Niall Connolly
With AI adoption for decision-making in the public sector projected to rise, carrying profound socio-ethical impacts, the need to ensure its trustworthy use continues to attract research attention. We analyze the existing body of evidence and establish trustworthiness requirements for AI-enabled automated decision-making (ADM) in the public sector, identifying eighteen aggregate facets. We link these facets to dimensions of trust in automation and institution-based trust to develop a theory-oriented research framework. We further map them to the OECD AI system lifecycle, creating a practice-focused framework. Our study has theoretical, practical and policy implications. First, we extend the theory on technological trust. We also contribute to the trustworthy AI literature, shedding light on relatively well-known requirements like accountability and transparency and revealing novel ones like context sensitivity, feedback and policy learning. Second, we provide a roadmap for public managers and developers to improve ADM governance practices along the AI lifecycle. Third, we offer policymakers a basis for evaluating possible gaps in current AI policies. Overall, our findings present opportunities for further research and offer some guidance on how to navigate the multi-dimensional challenges of designing, developing and implementing ADM for improved trustworthiness and greater public trust.
Title: Requirements for trustworthy AI-enabled automated decision-making in the public sector: A systematic review
Technological Forecasting and Social Change, vol. 215, Article 124076 (open access)
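The review above maps eighteen trustworthiness facets onto the OECD AI system lifecycle. A minimal sketch of such a facet-to-stage mapping, and its inversion into a practice-focused stage view, might look as follows; the stage names are a simplified rendering of the OECD lifecycle and the facet-to-stage assignments are invented for illustration (only the facet names come from the abstract):

```python
from collections import defaultdict

# Illustrative facet-to-stage assignments; NOT the paper's actual mapping.
facet_to_stages = {
    "transparency": ["design", "verification", "operation"],
    "accountability": ["design", "operation"],
    "context sensitivity": ["design"],
    "feedback": ["operation"],
    "policy learning": ["operation"],
}

def invert(mapping):
    """Group facets by lifecycle stage for a stage-centered view."""
    by_stage = defaultdict(list)
    for facet, stages in mapping.items():
        for stage in stages:
            by_stage[stage].append(facet)
    return dict(by_stage)

by_stage = invert(facet_to_stages)
```

Inverting the mapping answers the practitioner's question directly: at a given lifecycle stage, which trustworthiness requirements apply?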