
Computer Standards & Interfaces: Latest Publications

Human factors in phishing: Understanding susceptibility and resilience
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-04-22. DOI: 10.1016/j.csi.2025.104014
Ufuk Oner, Orcun Cetin, Erkay Savas
This study examines the demographic and organizational factors that influence phishing susceptibility and incident reporting behaviors among employees of a large European financial organization, using realistic phishing simulations. In a simulation campaign with 8,102 participants, unannounced monthly phishing emails with different templates are sent during regular work hours over two years, and reactions (clicking the link and reporting the phishing email) are collected. The results are combined with demographic and organizational data such as age, gender, level of education, department type, tenure, and job level, and multivariate logistic regression models are developed to analyze the relationship between these variables and phishing behaviors.
The analysis reveals significant differences in susceptibility to and resilience against phishing attacks across demographic and organizational groups. Older employees are more susceptible to phishing, while male employees show lower vulnerability to phishing attacks. Additionally, higher-level employees often underreport phishing emails. These findings highlight the need for anti-phishing training tailored to different demographics and departments within the organization, and the importance of fostering a culture of incident reporting. Recommendations include customized cyber-awareness training programs, regular awareness sessions, and incentivizing reporting.
Future research should prioritize investigating the root causes of phishing behaviors and evaluating the effectiveness of training programs.
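As a minimal illustration of the modeling step, the sketch below fits a multivariate logistic regression to synthetic data with hypothetical covariates (age, gender, tenure). It is not the authors' code or data, only the standard technique the abstract names.

```python
# Minimal sketch (synthetic data, not the study's dataset): multivariate
# logistic regression relating demographic covariates to click outcomes.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(22, 65, n),
    "male": rng.integers(0, 2, n),
    "tenure_years": rng.integers(0, 30, n),
})
# Synthetic outcome: click probability rises with age (illustration only).
logit = -3.0 + 0.04 * df["age"] - 0.3 * df["male"]
df["clicked"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["age", "male", "tenure_years"]])
model = sm.Logit(df["clicked"].astype(int), X).fit(disp=0)
print(model.summary())       # coefficients are log-odds per unit covariate
print(np.exp(model.params))  # odds ratios, the usual effect-size reading
```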
{"title":"Human factors in phishing: Understanding susceptibility and resilience","authors":"Ufuk Oner,&nbsp;Orcun Cetin,&nbsp;Erkay Savas","doi":"10.1016/j.csi.2025.104014","DOIUrl":"10.1016/j.csi.2025.104014","url":null,"abstract":"<div><div>This study examines the demographic and organizational factors influencing phishing susceptibility and incident reporting behaviors among employees in a large European financial organization following realistic phishing simulations and how these factors correlate with susceptibility to phishing attacks. In the phishing simulations campaign with 8,102 participants, unannounced, monthly phishing emails with different templates are sent during regular work hours over a duration of 2 years, and the reactions (clicking the link and reporting the phishing email) are collected. The results are combined with demographic and relevant organizational data such as age, gender, level of education, department type, tenure, and job level. Multivariate logistic regression models are developed to analyze the relationship between these variables and phishing behaviors.</div><div>The analysis reveals significant differences in susceptibility to and resilience against phishing attacks across various demographic and organizational groups. Older employees are more susceptible to phishing, while males show lower vulnerability to phishing attacks. Additionally, our results revealed that higher-level employees often under report phishing emails. These findings highlight the necessity for targeted anti-phishing training tailored to different demographics and departments within the organization and the importance of fostering a culture of incident reporting. Recommendations include customized cyber awareness training programs, regular awareness sessions, and incentivizing reporting.</div><div>Future research is encouraged to prioritize investigating the root causes of phishing behaviors and evaluating the effectiveness of training programs.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104014"},"PeriodicalIF":4.1,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Harnessing generative AI for personalized E-commerce product descriptions: A framework and practical insights
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-04-17. DOI: 10.1016/j.csi.2025.104012
Adam Wasilewski
The role of electronic commerce (e-commerce) in the global economy has been steadily increasing, highlighting the benefits of business digitization for flexibility and resilience in response to environmental changes. Among emerging trends, the integration of artificial intelligence (AI) and machine learning is particularly notable, especially the application of large language models to personalize user interactions throughout the customer journey. A promising future direction is the use of generative AI to create customized e-commerce product descriptions for personalized, multivariant user interfaces. To validate this approach, a framework and metrics are proposed to assess the impact of segment-specific information on generated text. This led to the positioning of AI-generated content within a multivariant user interface architecture and the adaptation of a cosine similarity measure to evaluate text differentiation. The findings confirmed that the specific characteristics of e-commerce customer clusters enable generative AI to produce significantly distinct product descriptions. While differences were not statistically significant in 26.7% of cases, full differentiation was achieved for descriptions of sufficient length.
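The sketch below illustrates the kind of cosine similarity measurement the abstract describes, using TF-IDF vectors over two invented product descriptions; the paper's actual text representation and pipeline may differ.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): cosine similarity
# between two generated product descriptions; lower similarity indicates
# stronger segment-specific differentiation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

desc_a = "Lightweight trail shoe with aggressive grip for muddy terrain."
desc_b = "Elegant everyday sneaker with cushioned sole for urban comfort."

vectors = TfidfVectorizer().fit_transform([desc_a, desc_b])
sim = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"cosine similarity: {sim:.3f}")  # near 0 => well-differentiated texts
```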
{"title":"Harnessing generative AI for personalized E-commerce product descriptions: A framework and practical insights","authors":"Adam Wasilewski","doi":"10.1016/j.csi.2025.104012","DOIUrl":"10.1016/j.csi.2025.104012","url":null,"abstract":"<div><div>The role of electronic commerce (e-commerce) in the global economy has been steadily increasing, highlighting the benefits of business digitization for flexibility and resilience in response to environmental changes. Among emerging trends, the integration of artificial intelligence (AI) and machine learning is particularly notable, especially the application of large language models to personalize user interactions throughout the customer journey. A promising future direction is the use of generative AI to create customized e-commerce product descriptions for personalized, multivariant user interfaces. To validate this approach, a framework and metrics are proposed to assess the impact of segment-specific information on generated text. This led to the positioning of AI-generated content within a multivariant user interface architecture and the adaptation of a cosine similarity measure to evaluate text differentiation. The findings confirmed that the specific characteristics of e-commerce customer clusters enable generative AI to produce significantly distinct product descriptions. While differences were not statistically significant in 26.7% of cases, full differentiation was achieved for descriptions of sufficient length.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104012"},"PeriodicalIF":4.1,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging activation and optimisation layers as dynamic strategies in the multi-task fuzzing scheme
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-04-12. DOI: 10.1016/j.csi.2025.104011
Sadegh Bamohabbat Chafjiri, Phil Legg, Michail-Antisthenis Tsompanas, Jun Hong
Fuzzing is a common technique for identifying vulnerabilities in software. Recent approaches, like She et al.’s Multi-Task Fuzzing (MTFuzz), use neural networks to improve fuzzing efficiency. However, key elements like network architecture and hyperparameter tuning are still not well-explored. Factors like activation layers, optimisation function design, and vanishing gradient strategies can significantly impact fuzzing results by improving test case selection. This paper delves into these aspects to improve neural network-driven fuzz testing.
We focus on three neural network design choices to improve fuzz testing: the Leaky Rectified Linear Unit (LReLU) activation, Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimisation, and sensitivity analysis. LReLU adds non-linearity, aiding feature extraction, while Nadam improves weight updates by considering both current and future gradient directions. Sensitivity analysis optimises layer selection for gradient calculation, enhancing fuzzing efficiency.
Based on these insights, we propose LMTFuzz, a novel fuzzing scheme optimised for these Machine Learning (ML) strategies. We explore the individual and combined effects of LReLU, Nadam, and sensitivity analysis, as well as their hybrid configurations, across six different software targets. Experimental results demonstrate that LReLU, individually or when paired with sensitivity analysis, significantly enhances fuzz testing performance. However, when combined with Nadam, LReLU shows improvement on some targets, though less pronounced than its combination with sensitivity analysis. This combination improves accuracy, reduces loss, and increases edge coverage, with improvements of up to 23.8%. Furthermore, it leads to a significant increase in unique bug detection, with some targets detecting up to 2.66 times more bugs than baseline methods.
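As a minimal sketch of the two ML strategies named above (LReLU activations and Nadam optimisation), the following PyTorch snippet trains a small classifier on random stand-in data; it is not the LMTFuzz architecture.

```python
# Minimal sketch (toy shapes and data, not LMTFuzz): a small network with
# LeakyReLU activations trained using the NAdam optimizer.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.LeakyReLU(negative_slope=0.01),  # keeps a small gradient for x < 0
    nn.Linear(64, 32),
    nn.LeakyReLU(negative_slope=0.01),
    nn.Linear(32, 1),
)
opt = torch.optim.NAdam(model.parameters(), lr=1e-3)  # Nesterov-accelerated Adam
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(256, 128)                   # stand-in test-case feature vectors
y = torch.randint(0, 2, (256, 1)).float()   # stand-in edge-coverage labels
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```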
{"title":"Leveraging activation and optimisation layers as dynamic strategies in the multi-task fuzzing scheme","authors":"Sadegh Bamohabbat Chafjiri,&nbsp;Phil Legg,&nbsp;Michail-Antisthenis Tsompanas,&nbsp;Jun Hong","doi":"10.1016/j.csi.2025.104011","DOIUrl":"10.1016/j.csi.2025.104011","url":null,"abstract":"<div><div>Fuzzing is a common technique for identifying vulnerabilities in software. Recent approaches, like She et al.’s Multi-Task Fuzzing (MTFuzz), use neural networks to improve fuzzing efficiency. However, key elements like network architecture and hyperparameter tuning are still not well-explored. Factors like activation layers, optimisation function design, and vanishing gradient strategies can significantly impact fuzzing results by improving test case selection. This paper delves into these aspects to improve neural network-driven fuzz testing.</div><div>We focus on three key neural network parameters to improve fuzz testing: the Leaky Rectified Linear Unit (LReLU) activation, Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimisation, and sensitivity analysis. LReLU adds non-linearity, aiding feature extraction, while Nadam helps to improve weight updates by considering both current and future gradient directions. Sensitivity analysis optimises layer selection for gradient calculation, enhancing fuzzing efficiency.</div><div>Based on these insights, we propose LMTFuzz, a novel fuzzing scheme optimised for these Machine Learning (ML) strategies. We explore the individual and combined effects of LReLU, Nadam, and sensitivity analysis, as well as their hybrid configurations, across six different software targets. Experimental results demonstrate that LReLU, individually or when paired with sensitivity analysis, significantly enhances fuzz testing performance. However, when combined with Nadam, LReLU shows improvement on some targets, though less pronounced than its combination with sensitivity analysis. This combination improves accuracy, reduces loss, and increases edge coverage, with improvements of up to 23.8%. Furthermore, it leads to a significant increase in unique bug detection, with some targets detecting up to 2.66 times more bugs than baseline methods.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104011"},"PeriodicalIF":4.1,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143828761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Validation of inter-parameter dependencies in API gateways
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-04-12. DOI: 10.1016/j.csi.2025.104010
Saman Barakat, Sergio Segura
Web APIs usually include inter-parameter dependencies that constrain how input parameters can be combined to form valid calls to the services. API calls often violate these dependencies, resulting in unnecessary message exchanges, wasted time, and quota usage. Additionally, services may fail to adequately validate whether input requests meet these dependencies, causing critical failures or generating uninformative error messages. In this article, we propose extending API gateways to detect and explain inter-parameter dependency violations. We leverage the Inter-parameter Dependency Language (IDL) for specifying dependencies between input parameters in web APIs, and IDLReasoner, a constraint-based IDL analysis engine. We implemented our approach into a prototype tool, IDLFilter, on top of Spring Cloud Gateway. Evaluation results with 12 industrial API operations and about 30K automatically and manually generated API calls show that our approach effectively blocks invalid calls due to dependency violations, providing informative error messages and minimizing potential input validation failures. IDLFilter introduces a small 7% overhead when processing valid API calls, while reducing the response time of requests violating dependencies by 59%.
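The sketch below illustrates the gateway-side idea with a hand-written rule; the rule and all names are hypothetical, and the real system uses the IDL language and the IDLReasoner engine rather than ad hoc predicates.

```python
# Minimal sketch (hypothetical dependency and names, not the IDL/IDLReasoner
# API): a gateway-style pre-filter that rejects calls violating an
# inter-parameter dependency before they reach the backend service.
from typing import Callable

# Example IDL-style rule: IF 'radius' is present THEN 'lat' AND 'lng' are required.
def radius_needs_coords(params: dict) -> str | None:
    if "radius" in params and not {"lat", "lng"} <= params.keys():
        return "Invalid request: 'radius' requires both 'lat' and 'lng'."
    return None

RULES: list[Callable[[dict], str | None]] = [radius_needs_coords]

def gateway_filter(params: dict) -> tuple[int, str]:
    for rule in RULES:
        if (error := rule(params)) is not None:
            return 400, error          # block early: no backend round-trip wasted
    return 200, "forwarded to backend"

print(gateway_filter({"radius": 5}))                          # (400, explanation)
print(gateway_filter({"radius": 5, "lat": 1.0, "lng": 2.0}))  # (200, forwarded)
```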
{"title":"Validation of inter-parameter dependencies in API gateways","authors":"Saman Barakat,&nbsp;Sergio Segura","doi":"10.1016/j.csi.2025.104010","DOIUrl":"10.1016/j.csi.2025.104010","url":null,"abstract":"<div><div>Web APIs usually include inter-parameter dependencies that constrain how input parameters can be combined to form valid calls to the services. API calls often violate these dependencies, resulting in unnecessary message exchanges, wasted time, and quota usage. Additionally, services may fail to adequately validate whether input requests meet these dependencies, causing critical failures or generating uninformative error messages. In this article, we propose extending API gateways to detect and explain inter-parameter dependency violations. We leverage the Inter-parameter Dependency Language (IDL) for specifying dependencies between input parameters in web APIs, and IDLReasoner, a constraint-based IDL analysis engine. We implemented our approach into a prototype tool, IDLFilter, on top of Spring Cloud Gateway. Evaluation results with 12 industrial API operations and about 30K automatically and manually generated API calls show that our approach effectively blocks invalid calls due to dependency violations, providing informative error messages and minimizing potential input validation failures. IDLFilter introduces a small 7% overhead when processing valid API calls, while reducing the response time of requests violating dependencies by 59%.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104010"},"PeriodicalIF":4.1,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-preserving authentication protocol for user personal device security in Brain–Computer Interface
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-04-10. DOI: 10.1016/j.csi.2025.104009
Sunil Prajapat, Aryan Rana, Pankaj Kumar, Ashok Kumar Das, Willy Susilo
Brain–Computer Interface (BCI) technology has emerged as a transformative tool, particularly for individuals with severe motor disabilities. Non-invasive BCI systems, leveraging Electroencephalography (EEG), offer a direct interface between users and external devices, bypassing the need for muscular control. However, ensuring the security and privacy of users’ neural data remains a critical challenge. In this paper, we propose a novel privacy-preserving authentication scheme for EEG-based BCI systems, utilizing elliptic curve cryptography (ECC). Our scheme balances robust security with computational efficiency, making it suitable for resource-constrained environments. Since we address security in a resource-constrained setting, such as EEG in BCI, we construct a lightweight authentication algorithm to meet the stringent requirements of minimal computational resources and energy consumption. The security analysis and performance evaluation of the authentication protocol show that our scheme is resistant to various attacks, such as replay, offline password guessing, privileged insider, user impersonation, and stolen smart card attacks. It offers mutual authentication and key agreement, requiring only 1632 bits of communication cost and 15.67139 ms of computational cost for the entire login, authentication, and key agreement phase. Our study lays a solid foundation for future investigation of innovative solutions for BCI security.
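As background for the primitive involved, the sketch below shows generic ECC key agreement (ECDH plus HKDF) with the widely used cryptography library; it is not the paper's protocol, which additionally provides mutual authentication and resistance to the attacks listed above.

```python
# Minimal sketch (generic ECC key agreement, not the paper's full scheme):
# two parties derive a shared session key over an elliptic curve, the
# primitive underlying the proposed lightweight authentication protocol.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

device_priv = ec.generate_private_key(ec.SECP256R1())  # BCI user device
server_priv = ec.generate_private_key(ec.SECP256R1())  # authentication server

# Each side combines its private key with the peer's public key.
shared_dev = device_priv.exchange(ec.ECDH(), server_priv.public_key())
shared_srv = server_priv.exchange(ec.ECDH(), device_priv.public_key())
assert shared_dev == shared_srv

# Derive a fixed-length session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"bci-session").derive(shared_dev)
print(session_key.hex())
```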
{"title":"Privacy-preserving authentication protocol for user personal device security in Brain–Computer Interface","authors":"Sunil Prajapat ,&nbsp;Aryan Rana ,&nbsp;Pankaj Kumar ,&nbsp;Ashok Kumar Das ,&nbsp;Willy Susilo","doi":"10.1016/j.csi.2025.104009","DOIUrl":"10.1016/j.csi.2025.104009","url":null,"abstract":"<div><div>Brain–Computer Interface (BCI) technology has emerged as a transformative tool, particularly for individuals with severe motor disabilities. Non-invasive BCI systems, leveraging Electroencephalography (EEG), offer a direct interface between users and external devices, bypassing the need for muscular control. However, ensuring the security and privacy of users’ neural data remains a critical challenge. In this paper, we propose a novel privacy-preserving authentication scheme for EEG-based BCI systems, utilizing elliptic curve cryptography (ECC). Our scheme balances robust security with computational efficiency, making it suitable for resource-constrained environments. Since we are addressing security in a resource-constrained environment, such as EEG in BCI, we have constructed a lightweight authentication algorithm to meet the stringent requirements of minimal computational resources and energy consumption. The security analysis and performance evaluation of the authentication protocol show that our scheme is resistant to various attacks, such as replay, offline password guessing, privilege insider, user impersonation, and smart card stolen attacks. It offers mutual authentication and key agreement, requiring only 1632 bits of communication cost and 15.67139 ms of computational cost for the entire login authentication and key agreement phase. Our study lays a solid foundation for future investigation of innovative solutions for BCI security.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104009"},"PeriodicalIF":4.1,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143816344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An efficient rejection-free threshold ring signature from lattices and its application in receipt-free cloud e-voting
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-04-05. DOI: 10.1016/j.csi.2025.104008
Chunhui Wu, Youkang Zhou, Fangguo Zhang, Yusong Du, Qiping Lin
The threshold ring signature is a generalization of the ring signature. It confirms that t-out-of-N members are signing anonymously in a set of N users, and thus it can be well applied to e-voting. In this paper, we present a more efficient lattice-based threshold ring signature, using Lagrange polynomial interpolation to express the threshold. The scheme eliminates the dependence on Stern-like identification protocols with large soundness error, and achieves much shorter signature sizes. It also uses the technique of Gaussian convolution (G+G) proposed by Devevey et al. in Asiacrypt 2023 to remove the rejection sampling in BLISS signature. Compared with previous distributed FSwA (Fiat-Shamir with Aborts) signatures where the number of repetitions increases exponentially with that of signers, our scheme has much higher computation efficiency. We prove the unforgeability and strong anonymity, i.e., fellow-signer anonymity, unclaimability and anonymity against the untrusted leader of our proposed threshold ring signature scheme. Leveraging the security and efficiency advantages of our signature scheme, we propose a post-quantum receipt-free and verifiable e-voting protocol for large-scale elections with untrusted cloud servers.
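To illustrate how Lagrange interpolation expresses a t-out-of-N threshold, the sketch below reconstructs a secret from any t Shamir-style shares over a toy prime field; the paper's lattice-based construction is far more involved.

```python
# Minimal sketch (Shamir-style sharing over a toy prime field, not the
# lattice construction): Lagrange interpolation at x = 0 recovers a secret
# from any t of N shares, since t points determine a degree-(t-1) polynomial.
P = 2**31 - 1  # small Mersenne prime; real schemes use far larger moduli

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def lagrange_at_zero(shares):
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # numerator of basis ell_i(0)
                den = den * (xi - xj) % P      # denominator of basis ell_i(0)
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret, t, N = 123456789, 3, 5
coeffs = [secret, 111, 222]  # degree t-1 = 2; constant term is the secret
shares = [(x, eval_poly(coeffs, x)) for x in range(1, N + 1)]
assert lagrange_at_zero(shares[:t]) == secret  # any t shares reconstruct it
```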
{"title":"An efficient rejection-free threshold ring signature from lattices and its application in receipt-free cloud e-voting","authors":"Chunhui Wu ,&nbsp;Youkang Zhou ,&nbsp;Fangguo Zhang ,&nbsp;Yusong Du ,&nbsp;Qiping Lin","doi":"10.1016/j.csi.2025.104008","DOIUrl":"10.1016/j.csi.2025.104008","url":null,"abstract":"<div><div>The threshold ring signature is a generalization of the ring signature. It confirms that <span><math><mi>t</mi></math></span>-out-of-<span><math><mi>N</mi></math></span> members are signing anonymously in a set of <span><math><mi>N</mi></math></span> users, and thus it can be well applied to e-voting. In this paper, we present a more efficient lattice-based threshold ring signature, using Lagrange polynomial interpolation to express the threshold. The scheme eliminates the dependence on Stern-like identification protocols with large soundness error, and achieves much shorter signature sizes. It also uses the technique of Gaussian convolution (<span>G</span>+<span>G</span>) proposed by Devevey et al.<!--> <!-->in Asiacrypt 2023 to remove the rejection sampling in BLISS signature. Compared with previous distributed FSwA (Fiat-Shamir with Aborts) signatures where the number of repetitions increases exponentially with that of signers, our scheme has much higher computation efficiency. We prove the unforgeability and strong anonymity, i.e., fellow-signer anonymity, unclaimability and anonymity against the untrusted leader of our proposed threshold ring signature scheme. Leveraging the security and efficiency advantages of our signature scheme, we propose a post-quantum receipt-free and verifiable e-voting protocol for large-scale elections with untrusted cloud servers.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104008"},"PeriodicalIF":4.1,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143806843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A new deep learning based electricity theft detection framework for smart grids in cloud computing
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-03-31. DOI: 10.1016/j.csi.2025.104007
Zhen Si, Zhaoqing Liu, Changchun Mu, Meng Wang, Tongxin Gong, Xiaofang Xia, Qing Hu, Yang Xiao
Electricity theft is a widespread problem in smart grids with significant economic and security implications. Although users’ electricity consumption patterns usually show obvious periodicity, they also exhibit considerable stochasticity and uncertainty. Mainstream electricity theft detection methods are deep learning-based, but they struggle to capture reliable long-term dependencies from complex consumption data, leading to suboptimal identification of abnormal patterns. Moreover, the massive data generated by smart grids demands a scalable and robust computational infrastructure that traditional systems cannot provide. To address these limitations, we propose a new deep learning-based electricity theft detection framework in cloud computing. At the cloud server, we deploy an electricity theft detector based on the auto-correlation mechanism, called the ETD-SAC detector, which progressively decomposes intricate consumption patterns throughout the detection process and aggregates dependencies at the subsequence level to discover reliable long-term dependencies in users’ electricity consumption data. Experimental results show that the proposed ETD-SAC detector outperforms state-of-the-art detectors in terms of accuracy, false negative rate, and false positive rate.
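As a minimal illustration of the auto-correlation mechanism the detector builds on, the sketch below computes the autocorrelation of a synthetic daily consumption series and shows the peak at the weekly lag; it is not the ETD-SAC model.

```python
# Minimal sketch (synthetic data, not the ETD-SAC detector): autocorrelation
# of a daily consumption series peaks at the weekly lag, the kind of
# period-based dependency the detector aggregates at the subsequence level.
import numpy as np

rng = np.random.default_rng(0)
days = 365
t = np.arange(days)
consumption = 10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.5, days)

x = consumption - consumption.mean()
acf = np.correlate(x, x, mode="full")[days - 1:]  # lags 0..days-1
acf /= acf[0]                                     # normalize so acf[0] = 1

print(f"lag 7 autocorrelation: {acf[7]:.2f}")     # strong weekly periodicity
print(f"lag 3 autocorrelation: {acf[3]:.2f}")     # off-period lag is weaker
```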
{"title":"A new deep learning based electricity theft detection framework for smart grids in cloud computing","authors":"Zhen Si ,&nbsp;Zhaoqing Liu ,&nbsp;Changchun Mu ,&nbsp;Meng Wang ,&nbsp;Tongxin Gong ,&nbsp;Xiaofang Xia ,&nbsp;Qing Hu ,&nbsp;Yang Xiao","doi":"10.1016/j.csi.2025.104007","DOIUrl":"10.1016/j.csi.2025.104007","url":null,"abstract":"<div><div>Electricity theft is a widespread problem in smart grids with significant economic and security implications. Although users’ electricity consumption patterns usually show obvious periodicity, they also exhibit considerable stochasticity and uncertainty. Existing mainstream electricity theft detection methods are the deep learning-based ones, which struggle to capture reliable long-term dependencies from the complex consumption data, leading to suboptimal identification of abnormal patterns. Moreover, the massive data generated by smart grids demands a scalable and robust computational infrastructure that traditional systems cannot provide. To solve these limitations, we propose a new deep learning-based electricity theft detection framework in cloud computing. At the cloud server, we deploy an electricity theft detector based on the auto-correlation mechanism, called the ETD-SAC detector, which progressively decomposes intricate consumption patterns throughout the detection process and aggregates the dependencies at the subsequence level to effectively discover reliable long-term dependencies from users’ electricity consumption data. Experimental results show that the proposed ETD-SAC detector outperforms state-of-the-art detectors in terms of accuracy, false negative rate, and false positive rate.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104007"},"PeriodicalIF":4.1,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143746803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fields of the future: Digital transformation in smart agriculture with large language models and generative AI
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-03-28. DOI: 10.1016/j.csi.2025.104005
Tawseef Ayoub Shaikh, Tabasum Rasool, Waseem Ahmad Mir
Large language models (LLMs) have proven very useful in many fields, such as healthcare and finance, as natural language comprehension and generation have advanced. The capacity of LLMs to participate in textual discussion has been the subject of much research, and the findings have been encouraging across several domains. The inability of conventional image classification networks to comprehend the causes and etiology of crop diseases further impedes precise diagnosis. Large-scale agricultural diagnostic models will be based on generative pre-trained transformers (GPT) adapted to agrarian settings. By examining the efficacy of agriculture-related text corpora for pretraining transformer-based language (TBL) models, this research delves into agricultural natural language processing (ANLP). We examined several important aspects, including prompt building, response parsing, and several ChatGPT versions. Despite their proven effectiveness and huge potential, there has been little exploration of LLMs and generative AI for agricultural artificial intelligence (AI). Therefore, this study explores the potential of LLMs and generative AI in smart agriculture. In particular, we present conceptual tools and technical background to facilitate understanding of the problem space and uncover new research directions in this field. The paper presents an overview of the evolution of generative adversarial network (GAN) architectures, followed by a first systematic review of applications in smart agriculture and precision farming systems, involving a diversity of visual recognition tasks for smart farming and livestock, precision agriculture, agricultural language processing (ALP), agricultural robots (AR), plant phenotyping (PP), and postharvest quality assessment. We outline the possibilities, difficulties, constraints, and shortcomings. The study lays out a roadmap of areas in agriculture where LLM integration is likely to happen soon, and suggests promising directions for further study that could lead to better agricultural NLP applications.
{"title":"Fields of the future: Digital transformation in smart agriculture with large language models and generative AI","authors":"Tawseef Ayoub Shaikh ,&nbsp;Tabasum Rasool ,&nbsp;Waseem Ahmad Mir","doi":"10.1016/j.csi.2025.104005","DOIUrl":"10.1016/j.csi.2025.104005","url":null,"abstract":"<div><div>Language models (LLMs) have shown to be very useful in many fields like healthcare and finance, as natural language comprehension and generation have advanced. The capacity of LLM to participate in textual discussion has been the subject of much research, and the findings have proved encouraging across several domains. The inability of conventional image classification networks to comprehend the causes of crop diseases and etiology further impedes precise diagnosis. Agricultural diagnostic models on a grand scale will be based on generative pre-trained transformers (GPT) assisted with agrarian settings. By examining the efficacy of text corpora linked to agriculture for pretraining transformer-based language (TBL) models, this research delves into agricultural natural language processing (ANLP). To make the most of it, we looked at several important aspects, including prompt building, response parsing, and several ChatGPT versions. Despite the proven effectiveness and huge potential, there has been little exploration of LLM and Generative AI to agriculture artificial intelligence (AI). Therefore, this study aims to explore the possibility of LLM and Generative AI in smart agriculture. In particular, we present conceptual tools and technical background to facilitate understanding the problem space and uncover new research directions in this field. The paper presents an overview of the evolution of generative adversarial network (GAN) architectures followed by a first systematic review of various applications in smart agriculture and precision farming systems, involving a diversity of visual recognition tasks for smart farming and livestock, precision agriculture, agricultural language processing (ALP), agricultural robots (AR), plant phenotyping (PP), and postharvest quality assessment. We outline the possibilities, difficulties, constraints, and shortcomings. The study lays forth a road map of accessible areas in agriculture where LLM integration is likely to happen shortly. The research suggests exciting directions for further study in this area, which could lead to better agricultural NLP applications.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104005"},"PeriodicalIF":4.1,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143786282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WeAIR: Wearable Swarm Sensors for Air Quality Monitoring to Foster Citizens’ Awareness of Climate Change
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-03-28. DOI: 10.1016/j.csi.2025.104004
Giovanna Maria Dimitri, Lorenzo Parri, Eleonora Vitanza, Alessandro Pozzebon, Ada Fort, Chiara Mocenni
The present study proposes an air quality measurement tool based on wearable devices, named WeAIR, consisting of wearable sensors for measuring NOx, CO2, CO, temperature, humidity, barometric pressure, and PM10. Using our novel sensor prototype, we performed a measurement collection campaign, acquiring an extensive set of geo-localized air quality data in the city of Siena (Italy). We further implemented an AI neural-network-based model capable of predicting the location of an observation from the air-monitoring parameters as input, trained on the newly collected spatio-temporal datasets. The promising performance of the AI prediction approach underscores the value of such spatio-temporal air quality monitoring datasets, suggesting a crucial role both for raising citizen awareness of climate change and for supporting policymakers’ decisions, for instance on the positioning of new fixed monitoring stations.
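As a rough illustration of the prediction task, the sketch below trains a small neural network to predict a hypothetical city zone from synthetic air-monitoring features; the WeAIR model, data, and labels are not reproduced here.

```python
# Minimal sketch (synthetic readings, hypothetical zones, not the WeAIR
# model): a small neural network predicting the city zone of an observation
# from its air-monitoring parameters.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Features: [NOx, CO2, CO, temperature, humidity, pressure, PM10]
X = rng.normal(0, 1, (n, 7))
zone = (X[:, 0] + 0.5 * X[:, 6] > 0).astype(int)  # toy rule: traffic-heavy zone

X_tr, X_te, y_tr, y_te = train_test_split(X, zone, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```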
{"title":"WeAIR: Wearable Swarm Sensors for Air Quality Monitoring to Foster Citizens’ Awareness of Climate Change","authors":"Giovanna Maria Dimitri ,&nbsp;Lorenzo Parri ,&nbsp;Eleonora Vitanza ,&nbsp;Alessandro Pozzebon ,&nbsp;Ada Fort ,&nbsp;Chiara Mocenni","doi":"10.1016/j.csi.2025.104004","DOIUrl":"10.1016/j.csi.2025.104004","url":null,"abstract":"<div><div>The present study proposes the implementation of an air quality measurement tool through the use of wearable devices, named WeAIR, consisting of wearable sensors for measuring NO<span><math><msub><mrow></mrow><mrow><mi>x</mi></mrow></msub></math></span>, CO<span><math><msub><mrow></mrow><mrow><mn>2</mn></mrow></msub></math></span>, CO, temperature, humidity, barometric pressure and PM10. In particular through the use of our novel sensor prototype, we performed a measurement collection campaign, acquiring an extensive set of geo-localized air quality data in the city of Siena (Italy). We further implemented and applied an AI neural network based model, capable of predicting the localization of an observation, having as input the air monitoring parameters and using the new spatio-temporal collected datasets. The promising performances obtained with the AI prediction approach enhanced the importance and possibilities of using such spatio-temporal air quality monitoring datasets, suggesting their crucial role both for raising citizen awareness on climate change and supporting policymakers’ decisions, as for instance the ones related to the positioning of new fixed monitoring stations.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104004"},"PeriodicalIF":4.1,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143746802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Survey of reversible data hiding: Statistics, current trends, and future outlook
IF 4.1, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-03-25. DOI: 10.1016/j.csi.2025.104003
Sonal Gandhi, Rajeev Kumar
In the era of increasing digital media storage and transmission over networks, reversible data hiding (RDH) has emerged as a prominent research area for mitigating information security risks. To study the evolution of the field, highlight its achievements over the years, and outline future prospects, this paper presents an extensive review of RDH based on a dataset extracted from one of the most popular and exhaustive databases, Web of Science. The study first performs quantitative analysis, including trend analysis, citation analysis, prominent authors and organizations, and geographical coverage, alongside qualitative analysis focusing on key research areas and future prospects within RDH. It further provides a structured view of the sub-technologies within RDH, along with the key contributors and the proposed techniques that have driven the evolution of RDH over the years. Next, we provide a comprehensive review of prominent works in each RDH sub-technology. Finally, we discuss several key research directions identified from current research trends and early-stage problems and motivations. Overall, the study provides valuable insights into the evolution, key milestones, current state, and future prospects of RDH, serving as a guide for the research community.
{"title":"Survey of reversible data hiding: Statistics, current trends, and future outlook","authors":"Sonal Gandhi,&nbsp;Rajeev Kumar","doi":"10.1016/j.csi.2025.104003","DOIUrl":"10.1016/j.csi.2025.104003","url":null,"abstract":"<div><div>In the era of increasing digital media storage and transmission over networks, reversible data hiding (RDH) has evolved as a prominent area of research mitigating information security risk. To study the evolution of research, highlight its achievements over the years, and provide future prospects, this paper presents an extensive review of RDH utilizing the dataset extracted from one of the most popular and exhaustive databases, Web of Science. The study aims to first perform quantitative analysis that includes trend analysis, citation analysis, prominent authors and organizations, and geographical coverage, along with qualitative analysis focusing on key research areas and future prospects within RDH. The study further provides a structured view of sub-technologies within RDH, along with the key contributors and their proposed techniques that have led to the evolution of RDH over the years. Next, we provide a comprehensive review of some of the prominent works in each of the sub-technologies of RDH. Finally, several key research directions, identified based on current research trends and early-stage problems and motivations, are discussed. Overall, the proposed study provides valuable insights into the evolution, key milestones, current state, and future prospects of RDH, serving as a guide for the research community.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"94 ","pages":"Article 104003"},"PeriodicalIF":4.1,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143725098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0