Pub Date: 2025-10-30 | DOI: 10.1016/j.dss.2025.114562
Wei Wang , Yao Tong , Jian Mou
Although Artificial Intelligence (AI) agents are increasingly deployed on crowdfunding platforms to address labor shortages, knowledge about their scope and limits remains scarce. Across a secondary data analysis and three experiments (total N = 1027), we reveal that AI (vs. human) agents are more effective in reward-based (vs. donation-based) crowdfunding. This effect is mediated in parallel by perceptions of warmth and competence, with AI agents evoking stronger competence but weaker warmth perceptions. Importantly, anthropomorphic AI agents serve as an effective intervention that alleviates AI's negative impact on donation-based crowdfunding by enhancing warmth perceptions. Finally, we show that human agents outperform AI agents in boosting donation-based funding performance only for backers with an interdependent (vs. independent) self-construal. Overall, these findings expand the theoretical framework on AI applications in crowdfunding and offer actionable insights for fundraisers and platform operators seeking to optimize agent deployment.
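The parallel-mediation logic described (agent type → warmth and competence → funding performance) can be sketched with two-stage OLS. This is a minimal illustrative sketch with hypothetical variable names, not the paper's actual estimation procedure, which is not specified in the abstract.

```python
import numpy as np

def parallel_mediation(agent, warmth, competence, funding):
    """Estimate indirect effects of agent type (0 = human, 1 = AI) on funding
    through two parallel mediators, as products of path coefficients (a * b)."""
    # Stage 1: agent type -> each mediator (slope = 'a' path)
    X1 = np.column_stack([np.ones_like(agent), agent])
    a_warmth = np.linalg.lstsq(X1, warmth, rcond=None)[0][1]
    a_competence = np.linalg.lstsq(X1, competence, rcond=None)[0][1]
    # Stage 2: mediators -> outcome, controlling for agent type ('b' paths)
    X2 = np.column_stack([np.ones_like(agent), agent, warmth, competence])
    b = np.linalg.lstsq(X2, funding, rcond=None)[0]
    return {"via_warmth": a_warmth * b[2], "via_competence": a_competence * b[3]}
```

On data matching the abstract's pattern (AI lowers warmth, raises competence, both raise funding), the warmth-path indirect effect comes out negative and the competence-path effect positive.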
"Artificial intelligence agents or human agents? Impact of online customer service agents on crowdfunding performance" (Decision Support Systems, vol. 200, Article 114562)
Pub Date: 2025-10-27 | DOI: 10.1016/j.dss.2025.114561
Jun Yang , Hongchen Duan , Demei Kong
Multi-dimensional (MD) rating systems are increasingly adopted by online platforms to capture product evaluations across multiple attributes. While this structured format enriches product information, it also makes intra-review inconsistencies salient, raising new questions about how such inconsistencies shape review helpfulness—a topic largely overlooked in prior research dominated by single-dimensional (SD) reviews. This study examines the effects of cross-dimensional inconsistencies (in ratings, sentiment, and informativeness) and a cross-modal inconsistency (rating–sentiment misalignment within a dimension) on the perceived helpfulness of MD reviews, drawing on cognitive dissonance theory. Using a large dataset from a leading Chinese automobile review platform, we find that cross-dimensional rating inconsistency can enhance review helpfulness by signaling realistic product trade-offs, whereas sentiment, informativeness, and cross-modal inconsistencies reduce helpfulness by triggering unresolved dissonance. We further uncover interactive effects among cross-dimensional inconsistencies: the positive effect of rating inconsistency diminishes in the presence of high sentiment or informativeness inconsistencies. Conversely, the negative effects of sentiment and informativeness inconsistencies are mitigated when they co-occur. Additionally, the impact of these inconsistencies varies depending on reviewer characteristics, product characteristics, and review order. These findings advance the literature on review helpfulness and MD rating systems by introducing cross-dimensional and cross-modal inconsistencies as key determinants and clarifying when inconsistency serves as a credibility signal versus a cognitive burden.
"Consistency matters: Impacts of dimension-level characteristics on the helpfulness of multi-dimensional reviews" (Decision Support Systems, vol. 199, Article 114561)
Pub Date: 2025-10-27 | DOI: 10.1016/j.dss.2025.114560
Qianqian Wang , Qiang Chen , Sai-Ho Chung , Junmei Rong
Within platform ecosystems, data protection transparency remains insufficient, and research on the dynamic interaction mechanisms governing user data authorization and utilization remains limited. This study develops a stylized analytical model to investigate three interrelated dimensions: platforms' optimal data protection capability (DPC) disclosure strategies, their capacity to enhance user experience, and complementors' levels of user data utilization for product improvement. Key findings are as follows. Platforms voluntarily disclose DPC when their DPC exceeds a critical threshold and disclosure costs are sufficiently low. Platform reputation diminishes disclosure propensity, whereas government reward mechanisms enhance it. Complementors' utilization of reasonably priced user data achieves Pareto improvements by boosting profits for both platforms and complementors. Lower user privacy sensitivity elevates the user data authorization ratio, which in turn increases both the platform's capability to enhance user experience and complementors' data utilization for product improvement, creating a self-reinforcing cycle of enhanced user utility. While user subsidy and cost-sharing strategies effectively increase user demand and utility, they concurrently reduce platforms' propensity for active DPC disclosure.
"Data protection capability disclosure strategies and data utilization decisions in platform ecosystems" (Decision Support Systems, vol. 199, Article 114560)
Pub Date: 2025-10-27 | DOI: 10.1016/j.dss.2025.114559
Mi Chang , Eun Hye Jang , Woojin Kim, Daesub Yoon, Do Wook Kang
In Level 3 autonomous driving, drivers must quickly regain manual control when the vehicle exceeds its operational limits. Assessing driver readiness in real time is crucial, especially under cognitive distraction, as delayed reactions can compromise safety. However, most vehicle systems rely on simple behavioral indicators, such as head movements caused by visual distraction, and struggle to predict driver readiness under complex cognitive distraction. Moreover, existing studies on cognitive distraction are largely confined to laboratory settings or surveys, which limits their applicability to real-world driving conditions that require real-time decision making. To address these limitations, this study proposes an in-vehicle decision support system that analyzes cognitive distraction before take-over and predicts driver readiness in real time. Phase 1 involved experiments with varying levels of cognitive distraction to collect data on driver behavior as well as psychological and physiological states, and to examine their relationship with driver readiness. Phase 2 used these findings to evaluate and compare deep learning models for predicting driver readiness. The results indicate that driver readiness can be predicted from eye-tracking data, with a model combining a transformer and a Random Forest Regressor achieving the best performance. This study enhances the understanding of the relationship between cognitive distraction and driver readiness, and it applies these insights to an in-vehicle decision support system, improving the safety and reliability of autonomous vehicles. Furthermore, it provides a crucial foundation for advancing autonomous system design and driver monitoring technologies.
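A readiness predictor of the kind described would start from per-window eye-tracking features. The sketch below shows hypothetical features (gaze dispersion, pupil statistics) that could feed a downstream regressor such as the transformer-plus-Random-Forest combination the study found best; the feature names and formulas are illustrative assumptions, not the paper's actual feature set.

```python
import math

def gaze_features(xs, ys, pupil):
    """Hypothetical per-window eye-tracking features for a driver-readiness
    regressor: spatial spread of gaze points plus pupil-based load proxies."""
    n = len(xs)
    cx, cy = sum(xs) / n, sum(ys) / n
    # Mean distance of gaze samples from their centroid: cognitive
    # distraction is often associated with reduced gaze dispersion.
    dispersion = sum(math.hypot(x - cx, y - cy) for x, y in zip(xs, ys)) / n
    return {
        "gaze_dispersion": dispersion,
        "mean_pupil": sum(pupil) / n,          # proxy for cognitive load
        "pupil_range": max(pupil) - min(pupil),
    }
```

Each driving window yields one feature dictionary; a sequence of such windows would form the input to the sequence model.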
"Driver readiness prediction: Bridging cognitive distraction monitoring and in-vehicle decision support systems" (Decision Support Systems, vol. 199, Article 114559)
Pub Date: 2025-10-25 | DOI: 10.1016/j.dss.2025.114558
Xulei Jin , Lihua Huang , Tan Cheng , Shuaiyong Xiao , Chenghong Zhang , Yajing Wang
In the era of artificial intelligence-generated content, accurate user intent detection and effective response generation have become critical capabilities for LLM-based service agents. However, because users have limited familiarity with domain-specific knowledge, their underspecified queries often introduce intent uncertainty, impeding the generation of responses that are both contextually relevant and operationally executable. To address this challenge, we propose uncertainty-aware augmented generation (UAG), a novel deep learning method that jointly detects user intents and quantifies their associated uncertainty, thereby bridging the gap between user queries and enterprise-executable actions. UAG enhances intent detection along a predefined intent tree by incorporating two hierarchical consistency losses, and improves the quality of generated responses by leveraging salient intent paths, extracted with a proposed uncertainty-aware intent (UI) score, as an augmented prompt. Experimental results on two datasets showed that UAG outperformed state-of-the-art benchmarks, and explanatory analysis provided insight into the role of uncertainty in user intent detection and response generation.
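An uncertainty-aware score over an intent-tree path might combine path confidence with per-node uncertainty. The abstract does not define the UI score, so the version below is purely a hedged sketch: normalized entropy of the classifier's distribution at each node, averaged along the path and used to discount the path's joint confidence.

```python
import math

def ui_score(path_probs):
    """Hypothetical uncertainty-aware intent (UI) score for one intent-tree
    path. path_probs is a list of (distribution, chosen_index) pairs, one
    per tree level; the score discounts joint path confidence by the mean
    normalized entropy of the per-node distributions."""
    def norm_entropy(p):
        h = -sum(q * math.log(q) for q in p if q > 0)
        return h / math.log(len(p))  # 0 = fully certain, 1 = uniform

    confidence, uncertainty = 1.0, 0.0
    for dist, chosen in path_probs:
        confidence *= dist[chosen]
        uncertainty += norm_entropy(dist)
    return confidence * (1 - uncertainty / len(path_probs))
```

A path of fully confident nodes scores 1.0; a node with a uniform distribution drives the score to 0, flagging that intent path as too uncertain to include in the augmented prompt.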
"Uncertainty-aware augmented generation (UAG): A novel deep learning method for enriching in-conversation user intent toward improved LLM generation" (Decision Support Systems, vol. 199, Article 114558)
Pub Date: 2025-10-15 | DOI: 10.1016/j.dss.2025.114557
Jobin Strunk, Anika Nissen, Stefan Smolnik
Artificial intelligence-based decision support systems (AI-DSSs) transform decision-making across diverse contexts, including healthcare, finance, and personalized product and service recommendations. Each context exposes users to a dominant risk facet, such as physical risk when using a health-related AI-DSS, financial risk when using a robo-advisor, or psychosocial risk when interacting with an AI-DSS integrated into a social app. Drawing on risk theory, we systematically analyze how different risk facets and severities influence trust in, and advice taking from, AI-DSSs. We conduct a between-subjects online experiment with 958 participants who interact with AI-DSSs, covering three major risk facets and two risk severities for each. Our results reveal that risk facets and severities partially jointly influence advice taking. Additionally, while advice taking in physical risk scenarios remains relatively stable across severity levels, financial and psychosocial contexts show significantly greater sensitivity to changes in risk severity. This highlights an interaction effect, demonstrating that the impact of risk severity on advice taking is partially shaped by the risk facet. Furthermore, we found that trust mediates the effect of risk facets and risk severities on advice taking. Our insights enhance the theoretical understanding of the interplay between risk, trust, and advice taking in human-AI-DSS interaction. We contribute by bridging critical gaps in the current literature, enriching the discourse on AI-DSS trust and advice taking in risk-laden environments. This helps developers of AI-DSSs understand the influence of the risk facets related to their service and adapt their digital offerings accordingly.
"All risks ain't the same – A risk facets perspective on AI-based decision support systems" (Decision Support Systems, vol. 199, Article 114557)
Pub Date: 2025-10-14 | DOI: 10.1016/j.dss.2025.114556
Xiya Guo , Jiahua Jin , Le Wang , Xiangbin Yan
Effective support for victims of online incivility is crucial for maintaining healthy digital communities and improving individual well-being. However, the decision-making processes underlying bystander intervention in social media environments remain insufficiently understood. Drawing on signaling theory, this study investigates how different forms of victim self-disclosure—specifically, the type of negative emotion expressed (introverted versus extraverted) and the degree of collective tendency—affect bystander empathy, moral judgment, and the intention to provide social support. Through an online experiment with Chinese social media users, we found that victim disclosures characterized by introverted negative emotions and high collective tendencies elicit greater bystander empathy and stronger intentions to provide both informational and emotional support. Our findings elucidate the decision mechanisms through which bystanders interpret signals and decide to intervene, offering actionable insights for the design of decision support systems that can facilitate effective bystander responses and improve comment section management on social media platforms. These results have significant implications for the development of intelligent, context-aware DSS interfaces and algorithms aimed at fostering pro-social behavior and mitigating the escalation of online deviance.
"Enhancing decision support for bystander interventions: The role of victim emotional disclosure and collective signals in social media incivility" (Decision Support Systems, vol. 199, Article 114556)
Pub Date: 2025-10-09 | DOI: 10.1016/j.dss.2025.114555
Wei Du , Qianhui Huang , Ruiyun Xu
Blockchain phishing fraud has caused significant financial losses and eroded trust in blockchain platforms. While existing detection methods increasingly rely on mining transaction networks to identify fraudsters, they often fail to fully exploit transaction patterns or to sufficiently model label dependencies, whether between victims and fraudsters or among fraudsters themselves. Informed by criminology theories, we develop a deep learning framework, DeepPhishDetect, that integrates effective node representation learning with label dependency modeling across transaction networks. DeepPhishDetect models the joint distribution of object labels with a conditional random field (CRF), which can be trained effectively within the variational expectation maximization (EM) framework. Specifically, we design a novel Deep Multi-faceted Detector (DMFD) module to learn complex transactional features in the E-step and adopt a Graph Attention Network (GAT) model to profile the label dependencies between fraudsters and victims, or among fraudsters, in the M-step. Experimental results show that DeepPhishDetect significantly outperforms state-of-the-art blockchain phishing detection methods. An ablation study further validates the key design choices of our model. Intriguingly, a case study demonstrates that our model not only improves accuracy in detecting known phishing accounts but also identifies highly suspicious actors previously overlooked by existing labels. This work contributes to the cybersecurity literature by offering an innovative and more accurate blockchain phishing detection method, and it enhances business practice in blockchain platform regulation through proactive risk management.
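The alternating E-step/M-step idea can be illustrated on a toy scale: node evidence yields fraud posteriors, and label dependencies are propagated over transaction edges to refine them. This is only a conceptual sketch of the EM-style interplay; the stand-ins here (sigmoid posteriors, mean-neighbor coupling) replace the paper's actual DMFD, CRF, and GAT components.

```python
import numpy as np

def em_label_refinement(node_scores, adj, n_iter=10, coupling=0.8):
    """Toy EM-style loop: an E-step turns per-node evidence into fraud
    posteriors; an M-step mixes in neighbors' posteriors over transaction
    edges, standing in for learned label-dependency modeling."""
    p = 1 / (1 + np.exp(-node_scores))          # initial posteriors from evidence
    deg = adj.sum(axis=1).clip(min=1)           # avoid divide-by-zero for isolates
    for _ in range(n_iter):
        neighbor_p = adj @ p / deg              # M-step: mean neighbor fraud mass
        logits = node_scores + coupling * (2 * neighbor_p - 1)
        p = 1 / (1 + np.exp(-logits))           # E-step: refreshed posteriors
    return p
```

With this coupling, a neutral account transacting with a strongly suspicious one is pulled above 0.5, mirroring the "follow the vine" intuition, while isolated benign accounts stay low.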
"Follow the vine to get the melon: A deep framework for blockchain phishing fraud detection" (Decision Support Systems, vol. 199, Article 114555)
Identifying the most helpful online customer reviews (OCRs) is crucial for online shopping sites aiming to support consumer purchase decisions. Equally important is understanding how OCR helpfulness varies across different types of goods. By focusing on the most informative part of OCRs – the OCR text – and applying a novel methodological approach, we provide this knowledge without relying on potentially biased, yet widely utilized, helpfulness votes. Grounded in the Elaboration Likelihood Model (ELM) of persuasion, we hypothesize that only selected thematic categories of OCR text are helpful, and that the type of goods moderates this helpfulness. Our findings reveal that product-related content (e.g., functionality or quality) is less helpful for experience goods than for search goods. Conversely, customer-related content (e.g., emotional attitudes or recommendations) is more helpful for experience goods than for search goods. Our contribution is threefold. First, we present an approach that allows the investigation of OCR helpfulness independent of potentially biased helpfulness votes in a generalizable, domain-independent setting. Second, using this approach, we provide insights into the helpfulness of OCR texts across thematic categories and types of goods. Third, we extend the application of the ELM by providing theoretically grounded explanations for the observed effects. From a practical perspective, our findings inform the design of OCR systems for online shopping sites that aim to provide consumers with the most helpful OCRs.
{"title":"Beyond helpfulness votes: Examining the helpfulness of content in online customer review text","authors":"Stefanie Erlebach , Kilian Züllig , Alexander Kupfer , Leonie Embacher , Steffen Zimmermann","doi":"10.1016/j.dss.2025.114546","DOIUrl":"10.1016/j.dss.2025.114546","url":null,"abstract":"<div><div>Identifying the most helpful online customer reviews (OCRs) is crucial for online shopping sites aiming to support consumer purchase decisions. Equally important is understanding how OCR helpfulness varies across different types of goods. By focusing on the most informative part of OCRs – the OCR text – and applying a novel methodological approach, we provide this knowledge without relying on potentially biased, yet widely utilized, helpfulness votes. Grounded in the Elaboration Likelihood Model (ELM) of persuasion, we hypothesize that only selected thematic categories of OCR text are helpful, and that the type of goods moderates this helpfulness. Our findings reveal that product-related content (e.g., functionality or quality) is less helpful for experience goods than for search goods. Conversely, customer-related content (e.g., emotional attitudes or recommendations) is more helpful for experience goods than for search goods. Our contribution is threefold. First, we present an approach that allows the investigation of OCR helpfulness independent of potentially biased helpfulness votes in a generalizable, domain-independent setting. Second, using this approach, we provide insights into the helpfulness of OCR texts across thematic categories and types of goods. Third, we extend the application of the ELM by providing theoretically grounded explanations for the observed effects. From a practical perspective, our findings inform the design of OCR systems for online shopping sites that aim to provide consumers with the most helpful OCRs.</div></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"200 ","pages":"Article 114546"},"PeriodicalIF":6.8,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-06DOI: 10.1016/j.dss.2025.114553
Yuwei Wan , Zheyuan Chen , Ying Liu , Chong Chen , Michael Packianather
Translating natural language inquiries into executable Cypher queries (text-to-Cypher) is a persistent bottleneck for non-technical teams relying on knowledge graphs (KGs) in fast-changing industrial settings. Rule and template converters need frequent updates as schemas evolve, while supervised and fine-tuned parsers require recurring training. This study proposes a schema-guided prompting approach, namely text-to-Cypher with semantic schema (T2CSS), to align large language models (LLMs) with domain knowledge for producing accurate Cypher. T2CSS distils a domain ontology into a lightweight semantic schema and uses adaptive filtering to inject the relevant subgraph and essential Cypher rules into the prompt for constraining generation and reducing schema-agnostic errors. This design keeps the prompt focused and within context length limits while providing the necessary domain grounding. Comparative experiments demonstrate that T2CSS with GPT-4 outperformed baseline models and achieved 86 % accuracy in producing correct Cypher queries. In practice, this study reduces retraining and maintenance effort, shortens turnaround times, and broadens KG access for non-experts.
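The pipeline the abstract outlines — distil the ontology into a compact schema, adaptively filter it against the question, and inject the surviving subgraph plus Cypher rules into the prompt — can be sketched minimally. This is an assumption-laden illustration, not the paper's T2CSS code: the keyword-overlap filter and all names (`filter_schema`, `build_prompt`, the `terms` field) are hypothetical stand-ins for the actual semantic-schema matching.

```python
def filter_schema(question, schema):
    """Crude stand-in for adaptive filtering: keep only relationship entries
    whose associated terms appear in the question."""
    q = question.lower()
    return [entry for entry in schema
            if any(term in q for term in entry["terms"])]

def build_prompt(question, schema, rules):
    """Assemble a focused prompt: relevant subgraph + essential Cypher rules.

    Keeping only the matching schema entries is what holds the prompt within
    context-length limits while still grounding generation in the domain."""
    relevant = filter_schema(question, schema)
    lines = ["Generate a Cypher query answering the question below.",
             "Schema (relevant subgraph only):"]
    lines += [f"  ({e['source']})-[:{e['rel']}]->({e['target']})" for e in relevant]
    lines += ["Rules:"] + [f"  - {r}" for r in rules]
    lines.append(f"Question: {question}")
    return "\n".join(lines)

# Hypothetical manufacturing-style schema entries and rules.
schema = [
    {"source": "Machine", "rel": "HAS_FAULT", "target": "Fault",
     "terms": ["machine", "fault"]},
    {"source": "Fault", "rel": "FIXED_BY", "target": "Procedure",
     "terms": ["repair", "procedure"]},
]
rules = ["Use MATCH ... RETURN only", "Return node properties, not whole nodes"]
prompt = build_prompt("Which faults does machine M3 have?", schema, rules)
```

Here the fault question keeps the `HAS_FAULT` entry but drops `FIXED_BY`, so the LLM never sees schema elements irrelevant to the query; the resulting string would then be sent to the model of choice.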
{"title":"Prompting large language models based on semantic schema for text-to-Cypher transformation towards domain Q&A","authors":"Yuwei Wan , Zheyuan Chen , Ying Liu , Chong Chen , Michael Packianather","doi":"10.1016/j.dss.2025.114553","DOIUrl":"10.1016/j.dss.2025.114553","url":null,"abstract":"<div><div>Translating natural language inquiries into executable Cypher queries (text-to-Cypher) is a persistent bottleneck for non-technical teams relying on knowledge graphs (KGs) in fast-changing industrial settings. Rule and template converters need frequent updates as schemas evolve, while supervised and fine-tuned parsers require recurring training. This study proposes a schema-guided prompting approach, namely text-to-Cypher with semantic schema (T2CSS), to align large language models (LLMs) with domain knowledge for producing accurate Cypher. T2CSS distils a domain ontology into a lightweight semantic schema and uses adaptive filtering to inject the relevant subgraph and essential Cypher rules into the prompt for constraining generation and reducing schema-agnostic errors. This design keeps the prompt focused and within context length limits while providing the necessary domain grounding. Comparative experiments demonstrate that T2CSS with GPT-4 outperformed baseline models and achieved 86 % accuracy in producing correct Cypher queries. In practice, this study reduces retraining and maintenance effort, shortens turnaround times, and broadens KG access for non-experts.</div></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"199 ","pages":"Article 114553"},"PeriodicalIF":6.8,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145271378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}