Pub Date: 2025-03-17 | DOI: 10.1109/TETC.2025.3546602
Chengjie Wang;Yuejun Zhang;Ziyu Zhou
Porcelain, as a significant cultural heritage, embodies the wisdom of human civilization. However, existing anti-counterfeiting and authentication technologies for porcelain are often unreliable and costly. This paper proposes a physical unclonable function (PUF) design based on crack physical feature extraction for the anti-counterfeiting and authentication of Gold-Wire porcelain. The proposed method generates PUF information by extracting inherent physical deviations in the surface cracks of Gold-Wire porcelain. First, a standard crack extraction process is established using digital image processing to obtain crack information from the porcelain surface. Then, a physical feature extraction model based on the chain code encoding technique and the Delaunay triangulation technique is used to derive the physical feature values from the cracks. Subsequently, a PUF encoding algorithm is designed to convert these physical feature values into a PUF response. Finally, the security and reliability of the designed PUF are evaluated, and a PUF-based porcelain authentication protocol is developed. Experimental results show that the proposed PUF exhibits 50.16% uniqueness and 98.85% reliability, and the PUF data successfully passed the NIST randomness test, demonstrating that the proposed technology can effectively achieve low-cost, high-reliability anti-counterfeiting for commercial porcelain.
{"title":"A Novel Porcelain Fingerprinting Technique","authors":"Chengjie Wang;Yuejun Zhang;Ziyu Zhou","doi":"10.1109/TETC.2025.3546602","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546602","url":null,"abstract":"Porcelain, as a significant cultural heritage, embodies the wisdom of human civilization. However, existing anti-counterfeiting and authentication technologies for porcelain are often unreliable and costly. This paper proposes a physical unclonable functions (PUF) design based on crack physical feature extraction for the anti-counterfeiting and authentication of Gold-Wire porcelain. The proposed method generates PUF information by extracting inherent physical deviations in the surface cracks of Gold-Wire porcelain. First, a standard crack extraction process is established using digital image processing to obtain crack information from the porcelain surface. Then, a physical feature extraction model based on the chain code encoding technique and the Delaunay triangulation technique is used to derive the physical feature values from the cracks. Subsequently, a PUF encoding algorithm is designed to convert these physical feature values into a PUF response. Finally, the security and reliability of the designed PUF are evaluated, and a PUF-based porcelain authentication protocol is developed. Experimental results show that the proposed PUF exhibits 50.16% uniqueness and 98.85% reliability, and the PUF data successfully passed the NIST randomness test, demonstrating that the proposed technology can effectively achieve low-cost, high-reliability anti-counterfeiting for commercial porcelain.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"964-976"},"PeriodicalIF":5.4,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-13 | DOI: 10.1109/TETC.2025.3548672
Riya Samanta;Soumya K. Ghosh;Sajal K. Das
Traditional task assignment approaches in crowdsourcing platforms have focused on optimizing utility for workers or tasks, often neglecting the overall utility of the platform and the influence of mutual preferences under skill-availability and budget constraints. This oversight can destabilize task allocation outcomes, diminishing user experience and, ultimately, the platform's long-term utility, and gives rise to the Worker Task Stable Matching (WTSM) problem. To solve WTSM, we propose the Skill-oriented Stable Task Assignment with a Bi-directional Preference (SoSTA) method, based on the deferred-acceptance strategy. SoSTA generates stable allocations between tasks and workers that account for their mutual preferences, optimizing overall utility while respecting skill and budget constraints. Our study redefines the general utility of the platform as an amalgamation of utilities on both the workers' and tasks' sides, building each worker's or task's preference list from its utility scores for the other party. SoSTA incorporates the Multi Skill-oriented Stable Worker Task Mapping (Multi-SoS-WTM) algorithm to handle workers who contribute multiple skills. SoSTA is rational, non-wasteful, and fair, and hence stable. In simulations on the MeetUp dataset, SoSTA outperformed other approaches, improving execution speed by 80%, task completion rate by 60%, and user happiness by 8%.
{"title":"SoSTA: Skill-Oriented Stable Task Assignment With Bidirectional Preferences in Crowdsourcing","authors":"Riya Samanta;Soumya K. Ghosh;Sajal K. Das","doi":"10.1109/TETC.2025.3548672","DOIUrl":"https://doi.org/10.1109/TETC.2025.3548672","url":null,"abstract":"Traditional task assignment approaches in crowdsourcing platforms have focused on optimizing utility for workers or tasks, often neglecting the general utility of the platform and the influence of mutual preference considering skill availability and budget restrictions. This oversight can destabilize task allocation outcomes, diminishing user experience, and, ultimately, the platform’s long-term utility and gives rise to the Worker Task Stable Matching (WTSM) problem. To solve WTSM, we propose the Skill-oriented Stable Task Assignment with a Bi-directional Preference (SoSTA) method based on deferred acceptance strategy. SoSTA aims to generate stable allocations between tasks and workers considering mutually their preferences, optimizing overall utility while following skill and budget constraints. Our study redefines the general utility of the platform as an amalgamation of utilities on both the workers’ and tasks’ sides, incorporating the preference lists of each worker or task based on their respective utility scores for the other party. SoSTA incorporates Multi Skill-oriented Stable Worker Task Mapping (Multi-SoS-WTM) algorithm for contributions with multiple skills per worker. SoSTA is rational, non-wasteful, fair, and hence stable. SoSTA outperformed other approaches in the simulations of the MeetUp dataset. SoSTA improves execution speed by 80%, task completion rate by 60%, and user happiness by 8%.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"947-963"},"PeriodicalIF":5.4,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10925570","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-11 | DOI: 10.1109/TETC.2025.3547612
Alberto Avritzer;Andrea Janes;Andrea Marin;Catia Trubiani;Andre van Hoorn;Matteo Camilli;Daniel S. Menasché;André B. Bondi
In this article, we report on resiliency enforcement strategies applied to a microservices system running on a real-world deployment of a large cluster of heterogeneous Virtual Machines (VMs). We present the evaluation results obtained from measurement and modeling implementations. The measurement infrastructure was composed of 15 large and 15 extra-large VMs. The modeling approach used Markov Decision Processes (MDP). On the measurement testbed, we implemented three different levels of software rejuvenation granularity to achieve software resiliency. We discovered two threats to resiliency in this environment. The first was a memory leak in the open-source infrastructure underlying each VM. The second was contention for resources on the physical host, which depends on the number and size of VMs deployed to it. In the MDP modeling approach, we evaluated four strategies for assigning tasks to VMs with different configurations and different levels of parallelism. Using the large cluster under study, we compared our approach of software aging detection and rejuvenation with the state-of-the-art approach of deploying a network of VMs to a private cloud without aging detection and rejuvenation. In summary, we show that in a private cloud with non-elastic resource allocation on the physical hosts, careful performance engineering is needed to optimize the trade-off between the number of VMs allocated and the total memory allocated to each VM.
{"title":"Software Aging Detection and Rejuvenation Assessment in Heterogeneous Virtual Networks","authors":"Alberto Avritzer;Andrea Janes;Andrea Marin;Catia Trubiani;Andre van Hoorn;Matteo Camilli;Daniel S. Menasché;André B. Bondi","doi":"10.1109/TETC.2025.3547612","DOIUrl":"https://doi.org/10.1109/TETC.2025.3547612","url":null,"abstract":"In this article, we report on the application of resiliency enforcement strategies that were applied to a microservices system running on a real-world deployment of a large cluster of heterogeneous Virtual Machines (VMs). We present the evaluation results obtained from measurement and modeling implementations. The measurement infrastructure was composed of 15 large and 15 extra-large VMs. The modeling approach used Markov Decision Processes (MDP). On the measurement testbed, we implemented three different levels of software rejuvenation granularity to achieve software resiliency. We have discovered two threats to resiliency in this environment. The first threat to resiliency was a memory leak that was part of the underlying open-source infrastructure in each VM. The second threat to resiliency was the result of the contention for resources in the physical host, which is dependent on the number and size of VMs deployed to the physical host. In the MDP modeling approach, we evaluated four strategies for assigning tasks to VMs with different configurations and different levels of parallelism. Using the large cluster under study, we compared our approach of using software aging and rejuvenation with the state-of-the-art approach of using a network of VMs deployed to a private cloud without software aging detection and rejuvenation. In summary, we show that in a private cloud with non-elastic resource allocation in the physical hosts, careful performance engineering needs to be performed to optimize the trade-offs between the number of VMs allocated and the total memory allocated to each VM.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 2","pages":"299-313"},"PeriodicalIF":5.1,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10923615","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-10 | DOI: 10.1109/TETC.2025.3546366
Wenming Cao;Jiewen Zeng;Qifan Liu
Few-shot classification is the task of recognizing unseen classes using a limited number of samples. In this paper, we propose a new contrastive learning method called Feature-Level Contrastive Learning (FLCL). FLCL conducts contrastive learning at the feature level and leverages the subtle relationships between positive and negative samples to achieve more effective classification. Additionally, we address the challenges of requiring a large number of negative samples and the difficulty of selecting high-quality negative samples in traditional contrastive learning methods. For feature learning, we design a Feature Enhancement Coding (FEC) module to analyze the interactions and correlations between nonlinear features, enhancing the quality of feature representations. In the metric stage, we propose a centered hypersphere projection metric to map feature vectors onto the hypersphere, improving the comparison between the support and query sets. Experimental results on four few-shot classification benchmark datasets demonstrate that our method, while simple in design, outperforms previous methods and achieves state-of-the-art performance. A detailed ablation study further confirms the effectiveness of each component of our model.
{"title":"FLCL: Feature-Level Contrastive Learning for Few-Shot Image Classification","authors":"Wenming Cao;Jiewen Zeng;Qifan Liu","doi":"10.1109/TETC.2025.3546366","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546366","url":null,"abstract":"Few-shot classification is the task of recognizing unseen classes using a limited number of samples. In this paper, we propose a new contrastive learning method called Feature-Level Contrastive Learning (FLCL). FLCL conducts contrastive learning at the feature level and leverages the subtle relationships between positive and negative samples to achieve more effective classification. Additionally, we address the challenges of requiring a large number of negative samples and the difficulty of selecting high-quality negative samples in traditional contrastive learning methods. For feature learning, we design a Feature Enhancement Coding (FEC) module to analyze the interactions and correlations between nonlinear features, enhancing the quality of feature representations. In the metric stage, we propose a centered hypersphere projection metric to map feature vectors onto the hypersphere, improving the comparison between the support and query sets. Experimental results on four few-shot classification benchmark datasets demonstrate that our method, while simple in design, outperforms previous methods and achieves state-of-the-art performance. A detailed ablation study further confirms the effectiveness of each component of our model.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"935-946"},"PeriodicalIF":5.4,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-08 | DOI: 10.1109/TETC.2025.3566370
Sidharth Anand;Barsha Mitra;Soumyadeep Dey;Abhinav Rao;Rupsha Dhar;Jaideep Vaidya
Today, malware is one of the primary cyber threats to organizations, pervading all types of computing devices, including resource-constrained devices such as mobile phones, tablets, and embedded devices like Internet-of-Things (IoT) devices. In recent years, researchers have leveraged machine learning based strategies for malware detection and classification. However, malware analysis approaches can only be employed in resource-constrained environments if the methods are lightweight in nature. In this paper, we present MALITE, a lightweight malware analysis system that can distinguish between benign and malicious binaries and classify various malware families. MALITE converts a binary into a grayscale or an RGB image, requiring low memory and battery power consumption, and uses computationally inexpensive malware analysis strategies. We have designed MALITE-MN, a lightweight neural network based architecture, and MALITE-HRF, an ultra-lightweight random forest based method that uses histogram features extracted by a sliding window. An extensive empirical evaluation is conducted on seven publicly available datasets (Malimg, Microsoft BIG, Dumpware10, MOTIF, Drebin, CICAndMal2017 and MalNet), and performance is compared to four state-of-the-art baselines. The results show that MALITE-MN and MALITE-HRF not only accurately identify and classify malware but also respectively consume several orders of magnitude lower resources (in terms of both memory and computation), making them much more suitable for resource-constrained environments.
{"title":"MALITE: Lightweight Malware Detection and Classification for Constrained Devices","authors":"Sidharth Anand;Barsha Mitra;Soumyadeep Dey;Abhinav Rao;Rupsha Dhar;Jaideep Vaidya","doi":"10.1109/TETC.2025.3566370","DOIUrl":"https://doi.org/10.1109/TETC.2025.3566370","url":null,"abstract":"Today, malware is one of the primary cyber threats to organizations, pervading all types of computing devices, including resource constrained devices such as mobile phones, tablets and embedded devices like Internet-of-Things (IoT) devices. In recent years, researchers have leveraged machine learning based strategies for malware detection and classification. However, malware analysis approaches can only be employed in resource constrained environments if the methods are lightweight in nature. In this paper, we present MALITE, a lightweight malware analysis system, that can distinguish between benign and malicious binaries and classify various malware families. MALITE converts a binary into a grayscale or an RGB image requiring low memory and battery power consumption and uses computationally inexpensive malware analysis strategies. We have designed MALITE-MN, a lightweight neural network based architecture and MALITE-HRF, an ultra lightweight random forest based method that uses histogram features extracted by a sliding window. An extensive empirical evaluation is conducted on seven publicly available datasets (Malimg, Microsoft BIG, Dumpware10, MOTIF, Drebin, CICAndMal2017 and MalNet), and performance is compared to four state-of-the-art baselines. The results show that MALITE-MN and MALITE-HRF not only accurately identify and classify malware but also respectively consume several orders of magnitude lower resources (in terms of both memory as well as computation capabilities), making them much more suitable for resource constrained environments.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"1099-1112"},"PeriodicalIF":5.4,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-07 | DOI: 10.1109/TETC.2025.3546648
Kyungbae Jang;Sejin Lim;Yujin Oh;Hyunjun Kim;Anubhab Baksi;Sumanta Chakraborty;Hwajeong Seo
Quantum computers have the potential to solve a number of hard problems that are believed to be almost impossible to solve by classical computers. This observation has sparked a surge of research applying quantum algorithms to cryptographic systems in order to evaluate their quantum resistance. In assessing the security strength of cryptographic algorithms against upcoming quantum threats, it is crucial to precisely estimate the quantum resource requirements (generally in terms of circuit depth and qubit count). The U.S. National Institute of Standards and Technology (NIST) specified five quantum security levels so that the relative quantum strength of a given cipher can be compared to the standard ones. There has been some progress on the NIST-specified quantum security levels for the odd levels (i.e., 1, 3, and 5), following the work of Jaques et al. (Eurocrypt'20). However, for levels 2 and 4, which correspond to quantum collision-finding attacks on the SHA-2 and SHA-3 hash functions, the quantum attack complexities are arguably not well studied. This is where our article fits in. In this article, we present novel techniques for optimizing quantum circuit implementations of the SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of the target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at both the component level and the architecture level.
{"title":"Quantum Implementation and Analysis of SHA-2 and SHA-3","authors":"Kyungbae Jang;Sejin Lim;Yujin Oh;Hyunjun Kim;Anubhab Baksi;Sumanta Chakraborty;Hwajeong Seo","doi":"10.1109/TETC.2025.3546648","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546648","url":null,"abstract":"Quantum computers have the potential to solve a number of hard problems that are believed to be almost impossible to solve by classical computers. This observation has sparked a surge of research to apply quantum algorithms against the cryptographic systems to evaluate its quantum resistance. In assessing the security strength of the cryptographic algorithms against the upcoming quantum threats, it is crucial to precisely estimate the quantum resource requirement (generally in terms of circuit depth and quantum bit count). The National Institute of Standards and Technology by the US government specified five quantum security levels so that the relative quantum strength of a given cipher can be compared to the standard ones. There have been some progress in the NIST-specified quantum security levels for the odd levels (i.e., 1, 3 and 5), following the work of Jaques et al. (Eurocrypt’20). However, levels 2 and 4, which correspond to the quantum collision finding attacks for the SHA-2 and SHA-3 hash functions, quantum attack complexities are arguably not well-studied. This is where our article fits in. In this article, we present novel techniques for optimizing the quantum circuit implementations for SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at the component level and the architecture level.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"919-934"},"PeriodicalIF":5.4,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-07 | DOI: 10.1109/TETC.2025.3543119
{"title":"IEEE Transactions on Emerging Topics in Computing Publication Information","authors":"","doi":"10.1109/TETC.2025.3543119","DOIUrl":"https://doi.org/10.1109/TETC.2025.3543119","url":null,"abstract":"","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 1","pages":"C2-C2"},"PeriodicalIF":5.1,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10918568","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-07 | DOI: 10.1109/TETC.2025.3530016
We thank the following reviewers for the time and energy they have given to TETC:
{"title":"2024 Reviewers List*","authors":"","doi":"10.1109/TETC.2025.3530016","DOIUrl":"https://doi.org/10.1109/TETC.2025.3530016","url":null,"abstract":"We thank the following reviewers for the time and energy they have given to <italic>TETC</i>:","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 1","pages":"276-278"},"PeriodicalIF":5.1,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10918565","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-07 | DOI: 10.1109/TETC.2024.3472428
Radu Marculescu;Jorge Sá Silva
Edge Artificial Intelligence (AI) enables us to deploy distributed AI models, optimize computational and energy resources, minimize communication demands, and, most importantly, meet privacy requirements for Internet of Things (IoT) applications. Since data remains on the end-devices and only model parameters are shared with the server, it becomes possible to leverage the vast amount of data collected from smartphones and IoT devices without compromising the user's privacy. However, Federated Learning (FL) solutions also have well-known limitations. In particular, as systems that account for human behaviour become increasingly vital, future technologies need to become attuned to human behaviours. Indeed, we are already witnessing unparalleled advancements in technology that empower our tools and devices with intelligence, sensory abilities, and communication features. At the same time, continued advances in the miniaturization of computational capabilities enable us to go far beyond simple tagging and identification, towards integrating computational resources directly into these objects, thus making our tools “intelligent”. Yet, there is limited scientific work that considers humans as an integral part of these IoT-powered cyber-physical systems.
{"title":"Editorial Special Section on Emerging Edge AI for Human-in-the-Loop Cyber Physical Systems","authors":"Radu Marculescu;Jorge Sá Silva","doi":"10.1109/TETC.2024.3472428","DOIUrl":"https://doi.org/10.1109/TETC.2024.3472428","url":null,"abstract":"Edge Artificial Intelligence (AI) enables us to deploy distributed AI models, optimize computational and energy resources, minimize communication demands, and, most importantly, meet privacy requirements for Internet of Things (IoT) applications. Since data remains on the end-devices and only model parameters are shared with the server, it becomes possible to leverage the vast amount of data collected from smartphones and IoT devices without compromising the user's privacy. However, Federated Learning (FL) solutions also have well-known limitations. In particular, as systems that account for human behaviour become increasingly vital, future technologies need to become attuned to human behaviours. Indeed, we are already witnessing unparalleled advancements in technology that empower our tools and devices with intelligence, sensory abilities, and communication features. At the same time, continued advances in the miniaturization of computational capabilities can enable us to go far beyond the simple tagging and identification, towards integrating computational resources directly into these objects, thus making our tools “intelligent”. Yet, there is limited scientific work that considers humans as an integral part of these IoT-powered cyber-physical systems.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 1","pages":"3-4"},"PeriodicalIF":5.1,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10918564","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-06 | DOI: 10.1109/TETC.2025.3546549
Dongdong Zhao;Zhihui Liu;Fengji Zhang;Lei Liu;Jacky Wai Keung;Xiao Yu
The emergence of Aging-Related Bugs (ARBs) poses a significant challenge to software systems, resulting in performance degradation and increased error rates in resource-intensive systems. Consequently, numerous ARB prediction methods have been developed to mitigate these issues. However, in scenarios where training data is limited, the effectiveness of ARB prediction is often suboptimal. To address this problem, Cross-Project Aging-Related Bug Prediction (CPARBP) is proposed, which utilizes data from other projects (i.e., source projects) to train a model aimed at predicting potential ARBs in a target project. However, the use of source-project data raises privacy concerns and discourages companies from sharing their data. Therefore, we propose a method called Cross-Project Aging-Related Bug Prediction based on Negative Database (NegCPARBP) for privacy protection. NegCPARBP first converts the feature vector of a software file into a binary string. Second, the corresponding Negative DataBase (NDB) is generated from this binary string, containing data whose representation differs significantly from the original feature vector. Furthermore, to ensure more accurate prediction of ARB-prone and ARB-free files based on privacy-protected data (i.e., to maintain data utility), we propose a novel negative database generation algorithm that captures more information about important features, using information gain as a measure. Finally, NegCPARBP extracts a new feature vector from the NDB to represent the original feature vector, facilitating data sharing and ARB prediction objectives. Experimental results on Linux, MySQL, and NetBSD datasets demonstrate that NegCPARBP achieves a high defense against attacks (privacy protection performance reaching 0.97) and better data utility compared to existing privacy protection methods.
{"title":"NegCPARBP: Enhancing Privacy Protection for Cross-Project Aging-Related Bug Prediction Based on Negative Database","authors":"Dongdong Zhao;Zhihui Liu;Fengji Zhang;Lei Liu;Jacky Wai Keung;Xiao Yu","doi":"10.1109/TETC.2025.3546549","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546549","url":null,"abstract":"The emergence of <underline>A</u>ging-<underline>R</u>elated <underline>B</u>ug<underline>s</u> (ARBs) poses a significant challenge to software systems, resulting in performance degradation and increased error rates in resource-intensive systems. Consequently, numerous ARB prediction methods have been developed to mitigate these issues. However, in scenarios where training data is limited, the effectiveness of ARB prediction is often suboptimal. To address this problem, <underline>C</u>ross-<underline>P</u>roject <underline>A</u>ging-<underline>R</u>elated <underline>B</u>ug <underline>P</u>rediction (CPARBP) is proposed, which utilizes data from other projects (i.e., source projects) to train a model aimed at predicting potential ARBs in a target project. However, the use of source-project data raises privacy concerns and discourages companies from sharing their data. Therefore, we propose a method called <underline>C</u>ross-<underline>P</u>roject <underline>A</u>ging-<underline>R</u>elated <underline>B</u>ug <underline>P</u>rediction based on <underline>Neg</u>ative Database (NegCPARBP) for privacy protection. NegCPARBP first converts the feature vector of a software file into a binary string. Second, the corresponding <underline>N</u>egative <underline>D</u>ata<underline>B</u>ase (<italic>NDB</i>) is generated based on this binary string, containing data that is significantly more expressive from the original feature vector. Furthermore, to ensure more accurate prediction of ARB-prone and ARB-free files based on privacy-protected data (i.e., maintain the data utility), we propose a novel negative database generation algorithm that captures more information about important features, using information gain as a measure. Finally, NegCPARBP extracts a new feature vector from the <italic>NDB</i> to represent the original feature vector, facilitating data sharing and ARB prediction objectives. Experimental results on Linux, MySQL, and NetBSD datasets demonstrate that NegCPARBP achieves a high defense against attacks (privacy protection performance reaching 0.97) and better data utility compared to existing privacy protection methods.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 2","pages":"283-298"},"PeriodicalIF":5.1,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}