Context
The complexity of automotive systems continues to grow, making software quality assessment crucial for vehicle performance, safety, and cybersecurity.
Objectives
This study explores Quality Assessment (QA) in this context, focusing on its key characteristics, practical implications, and expected deliverables.
Method
We performed a systematic literature review (SLR) by selecting 60 studies from digital libraries.
Results
This SLR highlighted essential QA characteristics that should be incorporated into the software validation phase. Our insights encourage the exploration of advanced techniques, such as Artificial Intelligence (AI) and Machine Learning (ML), to support safety-critical software quality assessment in the automotive domain.
Conclusion
The QA of software data validation requires a holistic approach that combines safety, security, and customer expectations, aligned with industry standards, requirements, and specifications. The findings underscore the relevance of AI and ML for managing complex technologies and show that traditional dependence on real-world validation introduces risks for the validation of safety-critical systems.
"Quality assessment for software data validation in automotive industry: A systematic literature review", Gilmar Pagoto, Luiz Eduardo Galvão Martins, Jefferson Seide Molléri. Computer Standards & Interfaces, vol. 97, Article 104110. DOI: 10.1016/j.csi.2025.104110
Pub Date: 2026-04-01. Epub Date: 2025-12-31. DOI: 10.1016/j.csi.2025.104126
B. Muthusenthil, K. Devi
As Cloud Computing (CC) expands and builds on information technology (IT) infrastructure, conventional operating systems, and applications, it is increasingly susceptible to IT threats. Therefore, a Deep Convolutional Spiking Neural Network and blockchain-based Dynamic Random Byzantine Fault Tolerance Consensus Algorithm fostered Intrusion Detection System is proposed in this paper for improving privacy and safety in the cloud computing environment (DCSNN-BC-DRBFT-IDS-CC). The data are gathered from the NSL-KDD and CICIDS-2017 benchmark datasets. The first-level privacy procedure is performed by the blockchain-dependent Dynamic Random Byzantine Fault Tolerance consensus algorithm (DRBFT), and the second-level privacy procedure is performed during pre-processing, where a Markov Chain Random Field (MCRF) removes unwanted content and filters relevant data. The pre-processed output is passed to feature selection, where optimal features are chosen using Dynamic Recursive Feature Selection (DRFS). A Deep Convolutional Spiking Neural Network (DCSNN) then classifies data as normal or abnormal. The proposed DCSNN-BC-DRBFT-IDS-CC method is evaluated using standard performance metrics: it achieves 39.185%, 14.37%, 31.8%, and 27.06% better accuracy; 25.13%, 21.75%, 27.54%, and 23.08% less computation time; and 8.15%, 2.57%, 3.64%, and 5.85% higher AUC when compared to other existing models.
"Deep convolutional spiking neural network and block chain based intrusion detection framework for enhancing privacy and security in cloud computing environment", B. Muthusenthil, K. Devi. Computer Standards & Interfaces, vol. 97, Article 104126. DOI: 10.1016/j.csi.2025.104126
Pub Date: 2026-04-01. Epub Date: 2026-02-02. DOI: 10.1016/j.csi.2026.104139
Hui Zhang, Junjun Fu, Xiaojuan Liao, Guangzhu Chen, Haichuan Ma, Rang Zhou
Proxy Re-Encryption with Keyword Search (PREKS) enables the secure delegation of search authority over encrypted data, which is highly valuable in scenarios such as electronic health systems: patients (data owners) upload encrypted Electronic Health Data (EHD) and keywords to the cloud, granting initial search permissions to attending doctors (delegators); delegators can share these permissions with consulting doctors (delegatees) by re-encrypting keywords via a proxy server, without the need to consult data owners again. Existing PREKS schemes require a trusted proxy to avoid collusion risks between the proxy and delegatees, resulting in high communication overhead. To address this issue, this paper proposes a Time-controlled public key Proxy Searchable Re-Encryption scheme against Collusion Attacks (TcPSRE-CA), which innovatively integrates proxy functionality with cloud servers to reduce overhead. The scheme supports time-limited authorization and conjunctive keyword search, with its security proven under the random oracle model. Experimental results demonstrate that the proposed scheme effectively reduces communication and storage overhead while maintaining high computational efficiency.
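The access-control logic described above (delegatee identity, validity window, conjunctive keywords) can be sketched as a plain predicate. This is a toy illustration only; the actual TcPSRE-CA construction enforces these conditions cryptographically, and the names `ReEncKey` and `is_authorized` are hypothetical, not from the paper.

```python
# Toy sketch of time-limited, conjunctive-keyword search authorization.
# Illustrative only: the real scheme enforces this via re-encryption keys,
# not a plaintext check on the proxy.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReEncKey:
    delegatee: str
    keywords: frozenset   # keywords the delegatee may search conjunctively
    not_before: int       # start of validity window (epoch seconds)
    not_after: int        # end of validity window

def is_authorized(rk: ReEncKey, user: str, query: set, now: int) -> bool:
    """Proxy-side decision: re-encrypt a trapdoor only if the delegatee,
    time window, and conjunctive keyword set all match."""
    return (
        rk.delegatee == user
        and rk.not_before <= now <= rk.not_after
        and query <= rk.keywords   # conjunctive: every queried keyword granted
    )

rk = ReEncKey("dr_consult", frozenset({"diabetes", "2025"}), 100, 200)
print(is_authorized(rk, "dr_consult", {"diabetes"}, 150))  # True
print(is_authorized(rk, "dr_consult", {"diabetes"}, 250))  # False: expired
```

The time window is what makes the delegation revocable by default: once `not_after` passes, the proxy simply refuses to re-encrypt.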
"Time-controlled proxy searchable re-encryption against collusion attacks", Hui Zhang, Junjun Fu, Xiaojuan Liao, Guangzhu Chen, Haichuan Ma, Rang Zhou. Computer Standards & Interfaces, vol. 97, Article 104139. DOI: 10.1016/j.csi.2026.104139
Pub Date: 2026-04-01. Epub Date: 2025-12-23. DOI: 10.1016/j.csi.2025.104121
Jiasheng Chen, Zhenfu Cao, Liangliang Wang, Jiachen Shen, Xiaolei Dong
A secure sharing mechanism in the cloud environment must both support efficient ciphertext storage for resource-constrained clients and establish a trusted data-sharing system. To address the limitations of existing schemes in user identity privacy, access-control granularity, and data-sharing security, we propose a fuzzy certificateless proxy re-encryption (FCL-PRE) scheme. To achieve fine-grained delegation and effective conditional privacy, our scheme treats the conditions as an attribute set associated with pseudo-identities, and re-encryption can be performed if and only if the overlap between the sender’s and receiver’s attribute sets meets a specified threshold. Moreover, the FCL-PRE scheme ensures anonymity, preventing the exposure of users’ real identities through ciphertexts containing identity information during transmission. In the random oracle model, FCL-PRE not only guarantees confidentiality, anonymity, and collusion resistance but also leverages the fuzziness of re-encryption to provide a certain level of error tolerance in the cloud-sharing architecture. Experimental results indicate that, compared to other existing schemes, FCL-PRE offers up to a 44.6% increase in decryption efficiency while maintaining the lowest overall computational overhead.
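The "fuzzy" delegation condition, re-encrypt only when the sender's and receiver's attribute sets overlap in at least d elements, can be sketched as a simple set predicate. The function name `can_reencrypt` and the sample attributes are illustrative assumptions; the real scheme evaluates this threshold inside the cryptographic re-encryption procedure, not in the clear.

```python
# Hedged sketch of the fuzzy threshold condition in FCL-PRE-style schemes:
# delegation succeeds iff |sender_attrs ∩ receiver_attrs| >= d.
def can_reencrypt(sender_attrs: set, receiver_attrs: set, d: int) -> bool:
    """True when the attribute sets tied to the two pseudo-identities
    are 'close enough' (overlap of at least d attributes)."""
    return len(sender_attrs & receiver_attrs) >= d

alice = {"cardiology", "ward-3", "senior"}
bob   = {"cardiology", "ward-3", "junior"}
print(can_reencrypt(alice, bob, d=2))  # True: overlap is 2
print(can_reencrypt(alice, bob, d=3))  # False: overlap 2 < 3
```

The threshold d is what provides the error tolerance mentioned in the abstract: receivers need not match the sender's attributes exactly.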
"Sharing as You Desire: A fuzzy certificateless proxy re-encryption scheme for efficient and privacy-preserving cloud data sharing", Jiasheng Chen, Zhenfu Cao, Liangliang Wang, Jiachen Shen, Xiaolei Dong. Computer Standards & Interfaces, vol. 97, Article 104121. DOI: 10.1016/j.csi.2025.104121
Pub Date: 2026-04-01. Epub Date: 2026-01-12. DOI: 10.1016/j.csi.2026.104128
Shangping Wang, Haotong Cao, Ruoxin Yan
The sharding technique has significantly increased transaction processing capacity and scalability in blockchain systems, but it compromises security, particularly through increased vulnerability to “1/3 attacks” as the number of shards rises. Clearly, more shards are not always better; determining the optimal number of shards is essential, yet this question is often overlooked in existing research. To address this issue, we propose a broadly applicable algorithm for determining the optimal number of shards in a sharded blockchain based on security and performance (SPSN). First, this work proposes a novel optimization model for sharded blockchains that considers the impact of sharding on system security and performance. The idea is to find an optimal shard number that balances system efficiency (the time required to process transactions) and security (the system failure probability), keeping the failure probability within acceptable limits while maximizing efficiency. Second, we propose a widely applicable algorithm to determine the optimal number of shards, which can be executed independently before sharding operations to ascertain the shard count that maximizes performance while ensuring system security. Lastly, experiments are conducted under four different system settings to demonstrate the method. The results show that the proposed algorithm can effectively calculate the optimal shard number for most systems, demonstrating broad applicability and effectiveness, and helping to achieve a high-security, high-performance sharded blockchain system.
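The security/performance trade-off described above can be sketched numerically: splitting N validators into more shards raises throughput but also the chance that some shard ends up with more than 1/3 malicious members. The sketch below uses a standard hypergeometric model and a union bound; the model, the names, and the parameters are our illustrative assumptions, not the paper's exact SPSN formulation.

```python
# Illustrative model: probability a random shard exceeds the 1/3-malicious
# BFT threshold, and the largest shard count keeping system failure below eps.
from math import comb

def shard_failure_prob(n_nodes: int, n_bad: int, shard_size: int) -> float:
    """P(a uniformly drawn shard of `shard_size` nodes contains strictly
    more than shard_size/3 of the n_bad malicious nodes)."""
    threshold = shard_size // 3 + 1  # > 1/3 malicious breaks BFT consensus
    total = comb(n_nodes, shard_size)
    return sum(
        comb(n_bad, k) * comb(n_nodes - n_bad, shard_size - k)
        for k in range(threshold, min(n_bad, shard_size) + 1)
    ) / total

def optimal_shards(n_nodes: int, n_bad: int, eps: float) -> int:
    """Largest shard count m whose union-bound failure probability
    m * p_shard stays below eps (throughput grows with m)."""
    best = 1
    for m in range(1, n_nodes + 1):
        size = n_nodes // m
        if size < 4:  # shards this small cannot tolerate any fault
            break
        if m * shard_failure_prob(n_nodes, n_bad, size) <= eps:
            best = m
    return best

# 600 validators, 20% malicious, target failure probability 1e-6
print(optimal_shards(n_nodes=600, n_bad=120, eps=1e-6))
```

With one shard the failure probability is zero here (120 < 201 needed to break a 600-node committee); as shards shrink, the tail probability grows, which is exactly why "more shards are not always better".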
"Optimal shard number determination algorithm based on security and performance in sharded blockchain", Shangping Wang, Haotong Cao, Ruoxin Yan. Computer Standards & Interfaces, vol. 97, Article 104128. DOI: 10.1016/j.csi.2026.104128
Pub Date: 2026-04-01. Epub Date: 2025-12-07. DOI: 10.1016/j.csi.2025.104106
Karishma, Harendra Kumar
Heterogeneous computing systems are widely adopted for their capacity to optimize performance and energy efficiency across diverse computational environments. However, most existing task scheduling techniques address either energy reduction or reliability enhancement, rarely achieving both simultaneously. This study proposes a novel hybrid whale optimization algorithm–grey wolf optimizer (WOA–GWO) integrated with dynamic voltage and frequency scaling (DVFS) and an insert-reversed block operation to overcome this dual challenge. The proposed Hybrid WOA–GWO (HWWO) framework enhances task prioritization using the dynamic variant rank heterogeneous earliest-finish-time (DVR-HEFT) approach to ensure efficient processor allocation and reduced computation time. The algorithm’s performance was evaluated on real-world constrained optimization problems from CEC 2020, as well as on Fast Fourier Transform (FFT) and Gaussian Elimination (GE) applications. Experimental results demonstrate that HWWO achieves substantial gains in both energy efficiency and reliability, reducing total energy consumption by 55% (from 170.52 to 75.67 units) while increasing system reliability from 0.8804 to 0.9785 compared to state-of-the-art methods such as SASS, EnMODE, sCMAgES, and COLSHADE. Further experiments across varying task and processor counts demonstrate that the proposed approach outperforms existing state-of-the-art and metaheuristic algorithms by delivering superior energy efficiency, maximizing reliability, minimizing computation time, reducing the schedule length ratio (SLR), optimizing the communication-to-computation ratio (CCR), enhancing resource utilization, and exhibiting low sensitivity under sensitivity analysis. These findings confirm that the proposed model effectively bridges the existing research gap by providing a robust, energy-aware, and reliability-optimized scheduling framework for heterogeneous computing environments.
"A novel hybrid WOA–GWO algorithm for multi-objective optimization of energy efficiency and reliability in heterogeneous computing", Karishma, Harendra Kumar. Computer Standards & Interfaces, vol. 97, Article 104106. DOI: 10.1016/j.csi.2025.104106
Pub Date: 2026-04-01. Epub Date: 2026-01-16. DOI: 10.1016/j.csi.2026.104130
Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García
Understanding emotions in conversations is a fundamental challenge in affective computing. Emotional expressions evolve dynamically across dialogue turns and depend on multimodal cues such as speech, text, and facial behavior. However, existing multimodal models often rely on global attention mechanisms that overlook causal constraints, allowing information to leak from future turns and neglecting the speaker’s emotional evolution. To address these limitations, we propose the Dynamic Multimodal Causal Graph Emotion System (DMCGES). DMCGES integrates a restricted dynamic causal graph to ensure temporal coherence, as well as a speaker-specific memory module to capture affective trajectories and enhance multimodal alignment and robustness. The framework aligns with the IEEE 7010-2020 standard, which emphasizes integrating human well-being as a fundamental design principle in autonomous and intelligent systems. Experiments on the IEMOCAP and MELD benchmark datasets demonstrate that DMCGES outperforms state-of-the-art approaches in terms of accuracy and F1 score. On the IEMOCAP dataset, DMCGES achieved an accuracy of 69.36% and an F1 score of 69.49%, representing relative improvements of 1.95% and 2.39%, respectively.
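The causal restriction described above, no attention to future turns, can be illustrated with a plain mask over dialogue turns. This sketch mirrors the idea only; the paper's dynamic causal graph is richer (speaker-specific edges, multimodal nodes), and the function name is our own.

```python
# Minimal sketch of a causal attention restriction over dialogue turns:
# turn t may attend only to turns s <= t (no leakage from future turns),
# optionally limited to a sliding window of recent turns.
def causal_mask(n_turns, window=None):
    """mask[t][s] == 1 iff turn t may attend to turn s."""
    return [
        [
            1 if s <= t and (window is None or s > t - window) else 0
            for s in range(n_turns)
        ]
        for t in range(n_turns)
    ]

for row in causal_mask(4, window=2):
    print(row)
# [1, 0, 0, 0]
# [1, 1, 0, 0]
# [0, 1, 1, 0]
# [0, 0, 1, 1]
```

A global (unmasked) attention matrix would be all ones, which is precisely how future-turn information leaks into the representation of earlier turns.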
"A Dynamic Multimodal Causal Graph framework for standardized Emotion Recognition in Conversations", Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. Computer Standards & Interfaces, vol. 97, Article 104130. DOI: 10.1016/j.csi.2026.104130
Pub Date: 2026-04-01. Epub Date: 2026-01-16. DOI: 10.1016/j.csi.2026.104134
Zhicheng Li, Jian Xu, Nan Zhang, Teng Lu, Peijun Li, Nian Wang, Qiuyue Wang
Cloud and edge platforms increasingly host machine learning workloads, yet outsourcing k-Nearest Neighbors (kNN) raises acute risks to data privacy and computation integrity. Existing perturbation and partially homomorphic approaches either leak information or fail to scale, while conventional FHE comparisons and interactive MPC protocols impose heavy costs. We propose two complementary schemes for privacy-preserving kNN tailored to cross-platform EV analytics. First, a single-server FHE design performs encrypted inner products and exact squared Euclidean distances using matrix-optimized packing to parallelize feature-wise operations and batch queries, reducing ciphertext count and multiplicative depth. Second, a two-server MPC design executes distance evaluation, fixed-network Top-k, and majority voting entirely on additive shares, with re-sharing to refresh randomness and hide access patterns. We formalize semi-honest threat models and prove input privacy and correctness. Additive-share MPC demonstrates plaintext-level accuracy, with FHE achieving non-interactive cloud processing and MPC delivering near real-time online latency at practical communication cost. The combined results show that strong privacy and practical efficiency for kNN can be achieved without exposing training data, queries, or intermediate computations.
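The additive-share distance evaluation rests on multiplying secrets held as shares; the standard building block is Beaver-triple multiplication. The toy below computes a secret inner product (the core of a squared-distance evaluation) from additive shares, with a trusted dealer handing out triples for simplicity. All names are illustrative; the paper's two-server protocol additionally covers Top-k selection, voting, and re-sharing.

```python
# Toy two-party additive-share inner product via Beaver triples, the
# classic MPC primitive behind share-based distance evaluation.
import random

P = 2**61 - 1  # prime modulus for arithmetic shares

def share(x):
    """Split x into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (x - r) % P

def dealer_triple():
    """Trusted dealer: random a, b and c = a*b, all additively shared."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def beaver_mul(x_sh, y_sh, triple):
    """Shares of x*y from shares of x, y, using one triple.
    Parties open d = x - a and e = y - b (which reveal nothing about x, y)."""
    (x0, x1), (y0, y1) = x_sh, y_sh
    (a0, a1), (b0, b1), (c0, c1) = triple
    d = (x0 - a0 + x1 - a1) % P
    e = (y0 - b0 + y1 - b1) % P
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0 adds public d*e term
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

# Secret inner product <x, q>, one triple per coordinate.
x, q = [3, 1, 4], [2, 7, 1]
acc0 = acc1 = 0
for xi, qi in zip(x, q):
    z0, z1 = beaver_mul(share(xi), share(qi), dealer_triple())
    acc0, acc1 = (acc0 + z0) % P, (acc1 + z1) % P
print((acc0 + acc1) % P)  # 3*2 + 1*7 + 4*1 = 17
```

Since ||x - q||^2 = ||x||^2 - 2<x,q> + ||q||^2, shared inner products are enough to rank candidate neighbors without either server seeing x or q in the clear.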
"Privacy-preserving kNN classification for cross-platform electric vehicle data analytics", Zhicheng Li, Jian Xu, Nan Zhang, Teng Lu, Peijun Li, Nian Wang, Qiuyue Wang. Computer Standards & Interfaces, vol. 97, Article 104134. DOI: 10.1016/j.csi.2026.104134
Pub Date: 2026-04-01. Epub Date: 2025-12-03. DOI: 10.1016/j.csi.2025.104107
Sipeng Shen , Qiang Wang , Fucai Zhou, Jian Xu, Mingxing Jin
Modular exponentiation is a fundamental cryptographic operation extensively applied in the Internet of Vehicles (IoV). However, its computational intensity imposes significant resource and time demands on intelligent vehicles. Offloading such computations to Mobile Edge Computing (MEC) servers has emerged as a promising approach. Nonetheless, existing schemes are generally impractical, as they either fail to ensure fairness between intelligent vehicles and MEC servers, lack privacy protection for the bases and exponents, or cannot guarantee the correctness of results with overwhelming probability due to potential misbehavior by MEC servers. To address these limitations, we propose MExpm, a fair and efficient computation offloading scheme for batch modular exponentiation under a single untrusted server model. Our scheme leverages blockchain technology to ensure fairness through publicly verifiable results. Furthermore, MExpm achieves near-perfect checkability, detecting incorrect server results with overwhelming probability. To enhance privacy, we introduce secure obfuscation and logical split techniques, effectively protecting both the bases and the exponents. Extensive theoretical analysis and experimental results demonstrate that our scheme is not only efficient in terms of computation, communication, and storage overheads but also significantly improves privacy protection and checkability.
{"title":"MExpm: Fair computation offloading for batch modular exponentiation with improved privacy and checkability in IoV","authors":"Sipeng Shen , Qiang Wang , Fucai Zhou, Jian Xu, Mingxing Jin","doi":"10.1016/j.csi.2025.104107","DOIUrl":"10.1016/j.csi.2025.104107","url":null,"abstract":"<div><div>Modular exponentiation is a fundamental cryptographic operation extensively applied in the Internet of Vehicles (IoV). However, its computational intensity imposes significant resource and time demands on intelligent vehicles. Offloading such computations to Mobile Edge Computing (MEC) servers has emerged as a promising approach. Nonetheless, existing schemes are generally impractical, as they either fail to ensure fairness between intelligent vehicles and MEC servers, lack privacy protection for the bases and exponents, or cannot guarantee the correctness of results with overwhelming probability due to potential misbehavior by MEC servers. To address these limitations, we propose MExpm, a fair and efficient computation offloading scheme for batch modular exponentiation under a single untrusted server model. Our scheme leverages blockchain technology to ensure fairness through publicly verifiable results. Furthermore, MExpm achieves near-perfect checkability, detecting incorrect server results with overwhelming probability. To enhance privacy, we introduce secure obfuscation and logical split techniques, effectively protecting both the bases and the exponents. 
Extensive theoretical analysis and experimental results demonstrate that our scheme is not only efficient in terms of computation, communication, and storage overheads but also significantly improves privacy protection and checkability.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104107"},"PeriodicalIF":3.1,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145737265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
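The abstract does not spell out MExpm's construction, but the offloading pattern it targets can be illustrated with a toy client/server pair. The sketch below shows decoy-based result checking in general, not MExpm itself: the blockchain fairness layer and the secure obfuscation and logical-split techniques are omitted, and all names are our own. The client hides one query with a known answer inside a shuffled batch of real modular exponentiations; a server that tampers with one uniformly chosen result is caught with probability about 1/(n+1) per decoy, which more decoys amplify.

```python
import random

p = 18446744073709551557  # largest prime below 2**64, for illustration
g = 5

def server(queries):
    """Untrusted server: computes b^e mod p for each (b, e) query."""
    return [pow(b, e, p) for b, e in queries]

def client_offload(real_queries):
    """Append a decoy with a known answer, shuffle, and verify it on return."""
    decoy = (g, random.randrange(1, p - 1))
    expected = pow(decoy[0], decoy[1], p)  # precomputed offline in a real scheme
    batch = real_queries + [decoy]
    order = list(range(len(batch)))
    random.shuffle(order)                    # hide which query is the decoy
    results = server([batch[i] for i in order])
    answers = [None] * len(batch)
    for pos, i in enumerate(order):          # undo the shuffle
        answers[i] = results[pos]
    if answers[-1] != expected:
        raise RuntimeError("server cheated on the decoy query")
    return answers[:-1]                      # real answers, original order

reals = [(2, 10**6 + 3), (7, 123456789)]
print(client_offload(reals) == [pow(2, 10**6 + 3, p), pow(7, 123456789, p)])  # True
```

Real schemes avoid the client computing the decoy itself (which would defeat the purpose) by generating decoys offline or algebraically; the point here is only the checkability mechanism.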
Process mining analyzes business processes using event logs. Existing tools generate models to facilitate this task and improve the original business process, but the results are often unsatisfactory due to the complexity of the obtained models. Among the challenges faced in this context, we identify the misalignment with specific business requirements, preventing managers from accessing key data and making effective decisions. In this paper, we propose a requirement-driven approach centered on meta-modeling, which can help the development of process mining tools specially tailored to organizational needs. Thus, we introduce a requirement-driven method to address the critical challenge of model misalignment with required information. The method employs Model-Driven Engineering to simplify how process mining results are formulated, analyzed, and interpreted. The proposed method is iterative and involves several steps. First, a service manager defines a specific business question. Second, service managers and developers collaboratively establish a meta-model representing the target data. Third, developers extract relevant data using appropriate analysis techniques and visualize it. Finally, service managers and developers jointly interpret these visualizations to inform strategic decisions. This requirement-driven methodology empowers developers to concentrate on relevant information. Unlike general-purpose frameworks (e.g., ProM, Disco), this method emphasizes specificity, iterative refinement, and close stakeholder collaboration. By reducing cognitive overload through focused modeling and filtering of irrelevant data, organizations adopting this approach can achieve faster response times to business questions and develop specialized in-house analytical tools. This requirement-driven methodology, therefore, improves decision-making capabilities within process mining and across related analytical domains. 
We illustrate our methodology through a real business process of the Volvo Group, taken from the literature. We use several process mining examples to illustrate the benefits of the proposed methodology compared to existing tools, which are unable to provide the required information.
{"title":"A requirement-driven method for process mining based on model-driven engineering","authors":"Selsabil Ines Bouhidel , Mohammed Mounir Bouhamed , Gregorio Diaz , Nabil Belala","doi":"10.1016/j.csi.2025.104108","DOIUrl":"10.1016/j.csi.2025.104108","url":null,"abstract":"<div><div>Process mining analyzes business processes using event logs. Existing tools generate models to facilitate this task and improve the original business process, but the results are often unsatisfactory due to the complexity of the obtained models. Among the challenges faced in this context, we identify the misalignment with specific business requirements, preventing managers from accessing key data and making effective decisions. In this paper, we propose a requirement-driven approach centered on meta-modeling, which can help the development of process mining tools specially tailored to organizational needs. Thus, we introduce a requirement-driven method to address the critical challenge of model misalignment with required information. The method employs Model-Driven Engineering to simplify how process mining results are formulated, analyzed, and interpreted. The proposed method is iterative and involves several steps. First, a service manager defines a specific business question. Second, service managers and developers collaboratively establish a meta-model representing the target data. Third, developers extract relevant data using appropriate analysis techniques and visualize it. Finally, service managers and developers jointly interpret these visualizations to inform strategic decisions. This requirement-driven methodology empowers developers to concentrate on relevant information. Unlike general-purpose frameworks (e.g., ProM, Disco), this method emphasizes specificity, iterative refinement, and close stakeholder collaboration. 
By reducing cognitive overload through focused modeling and filtering of irrelevant data, organizations adopting this approach can achieve faster response times to business questions and develop specialized in-house analytical tools. This requirement-driven methodology, therefore, improves decision-making capabilities within process mining and across related analytical domains. We illustrate our methodology through a real business process of the Volvo Group, taken from the literature. We use several process mining examples to illustrate the benefits of the proposed methodology compared to existing tools, which are unable to provide the required information.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104108"},"PeriodicalIF":3.1,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145737267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
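The four-step loop described in the abstract (define a business question, agree on a meta-model of the target data, extract and visualize, interpret) can be made concrete with a toy meta-model. The sketch below is hypothetical and not the paper's MDE tooling: it defines a minimal `Event` meta-model for one business question, "which activity takes longest on average?", and answers it from an in-memory event log.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class Event:
    """Minimal meta-model element: one activity execution in one case."""
    case_id: str
    activity: str
    start: datetime
    end: datetime

def mean_duration_by_activity(log):
    """Extraction step: pull only the data the business question needs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ev in log:
        totals[ev.activity] += (ev.end - ev.start).total_seconds()
        counts[ev.activity] += 1
    return {a: totals[a] / counts[a] for a in totals}

log = [
    Event("c1", "Register", datetime(2025, 1, 1, 9), datetime(2025, 1, 1, 9, 10)),
    Event("c1", "Approve",  datetime(2025, 1, 1, 10), datetime(2025, 1, 1, 11)),
    Event("c2", "Register", datetime(2025, 1, 2, 9), datetime(2025, 1, 2, 9, 20)),
]
stats = mean_duration_by_activity(log)
print(max(stats, key=stats.get))  # → Approve
```

Tailoring the meta-model to the question is the point: nothing about resources, costs, or control flow is modeled here because the question does not need it, which is exactly the cognitive-overload reduction the method argues for.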