Wei Zhao, Jie Kong, Baogang Li, Qihao Yang, Yaru Ding
This paper addresses the trade-off between timeliness and reliability in a joint communication and over-the-air computation offloading (JCACO) system under short-packet communications (SPCs). The inevitable decoding errors introduced by SPC lead to errors in the data aggregation process of over-the-air computation (AirComp). Because resources are limited, pursuing high reliability may prevent the JCACO system from meeting delay requirements, creating a trade-off between reliability and timeliness. To address this issue, this paper investigates the timeliness and reliability of the JCACO system. Specifically, the moment generating function method is used to derive the delay outage probability (DOP) of the JCACO system, and the outage probability of AirComp is calculated from the errors that occur during its data aggregation process. The paper establishes an asymptotic relationship between blocklength, DOP, and AirComp outage probability (AOP). To balance timeliness and reliability, an AOP minimization problem is formulated under constraints of delay, queue stability, and limited resources, based on computation offloading strategies and beamformer design. To overcome the slow convergence and susceptibility to local optima of traditional algorithms, this paper proposes a stochastic successive mean-field game (SS-MFG) algorithm, which uses stochastic successive convex approximation to exploit Nash equilibria among users and converge faster toward the global optimum. Numerical results indicate that SS-MFG reduces AOP by up to 60%, offering up to a 20% improvement in optimization performance over competing algorithms while also converging faster.
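The blocklength–reliability tension described above comes from finite-blocklength coding theory: shorter packets lower delay but raise the decoding error probability. As a hedged illustration (the standard normal approximation for the AWGN channel, ignoring the log-n correction term, not the paper's own derivation), the SPC error probability can be sketched as:

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def spc_error_prob(snr: float, n: int, k: int) -> float:
    """Normal approximation of the block error probability for a short
    packet of blocklength n carrying k information bits over an AWGN
    channel at linear SNR `snr` (finite-blocklength regime)."""
    c = math.log2(1.0 + snr)                                 # capacity, bits/use
    v = (1.0 - (1.0 + snr) ** -2) * math.log2(math.e) ** 2   # dispersion, bits^2
    return q_func((n * c - k) / math.sqrt(n * v))
```

Sweeping `n` for a fixed payload `k` exposes exactly the trade-off the paper optimizes: a larger blocklength drives the error probability down but consumes more channel uses, lengthening the delay.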
"Reliable and Timely Short-Packet Communications in Joint Communication and Over-the-Air Computation Offloading Systems: Analysis and Optimization." International Journal of Intelligent Systems, vol. 2024, no. 1. Published 2024-12-09. DOI: 10.1155/2024/1168004.
This literature review examines the expanding importance of technology, particularly mobile banking, in the financial industry, as well as the crucial role cybersecurity knowledge plays in protecting online transactions. The advent of mobile banking gives users the flexibility to conduct payments whenever and wherever they wish, but difficulties with its acceptance call for further study of consumer behavior. Given the hazards involved in online and mobile banking, cybersecurity emerges as a critical component: users' own actions can constitute security risks and lead to financial losses. The review therefore emphasizes the need to increase users' cybersecurity knowledge and comprehension. Despite the growing accessibility of the technology, mobile banking is still in its early phases and requires more research on consumer acceptance and behavior. The study also highlights the socio-technical difficulties governments encounter in tackling cybersecurity, stresses the urgent need for better readiness against cyberwarfare threats, and reviews how user behavior in mobile banking in particular regions, such as Thailand, relates to cyberspace knowledge and awareness. To conduct the research, standardized questionnaires were administered using convenience sampling. The sample comprised 500 respondents, male and female, of all ages, from diverse income groups and varied professional backgrounds. The survey finds that even though these services are becoming widespread in the UAE, customers' awareness and understanding of cybersecurity remain insufficient. Users frequently underestimate the security dangers involved in online transactions, which can create openings for attackers. The study underlines the need for more effective training programs and initiatives to raise mobile banking consumers' awareness of cybersecurity issues, and it emphasizes the importance of building cybersecurity precautions into the design and operation of mobile banking services. The study also examined the demographic characteristics of consumers and companies that use mobile banking apps for online transactions. Satisfaction with mobile banking applications was found to be influenced by several factors, including age, employment, income, marital status, and educational attainment: younger consumers, such as students and recent graduates, appeared more satisfied than customers of other ages and vocations, and men appeared more satisfied than women.
"The Effect of User Behavior in Online Banking on Cybersecurity Knowledge," by Hamza Alrababah, Hena Iqbal, Muhammad Adnan Khan. International Journal of Intelligent Systems, vol. 2024, no. 1. Published 2024-12-03. DOI: 10.1155/int/9949510.
Jia Wu, Yuxia Niu, Ziqiang Ling, Jun Zhu, Fangfang Gou
Medical images play a significant part in biomedical diagnosis, but they share a problematic characteristic: influenced by factors such as imaging equipment limitations and the local volume effect, they inevitably exhibit noise, blurred edges, and inconsistent signal strength. These imperfections pose significant challenges for doctors during diagnosis. To address these issues, we present a pathology image segmentation technique based on a multiscale dual attention mechanism (MSDAUnet), which consists of three primary components. First, an image denoising and enhancement module is constructed using dynamic residual attention and color histograms to remove image noise and improve clarity. Then, we propose a dual attention module (DAM) that extracts information from both the channel and spatial dimensions, obtains key features, and sharpens the edges of the lesion area. Finally, capturing multiscale information during segmentation mitigates the issue of uneven signal strength. The modules are combined for automatic pathological image segmentation. Compared with the traditional and widely used U-Net model, MSDAUnet delivers better segmentation performance. On the dataset provided by the Research Center for Artificial Intelligence of Monash University, the IoU reaches 72.7%, nearly 7% higher than U-Net, and the DSC reaches 84.9%, also about 7% higher than U-Net.
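The dual attention idea described above weights a feature map along both the channel and spatial dimensions. A toy sketch of that mechanism (illustrative only; the paper's DAM uses learned convolutional projections, which are omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat: np.ndarray) -> np.ndarray:
    """Toy dual attention over a (C, H, W) feature map: channel weights
    come from global average pooling, spatial weights from the
    cross-channel mean. Learned parameters are deliberately omitted."""
    c, h, w = feat.shape
    # channel attention: one weight per channel from its global average
    chan = sigmoid(feat.mean(axis=(1, 2))).reshape(c, 1, 1)
    # spatial attention: one weight per pixel from the cross-channel mean
    spat = sigmoid(feat.mean(axis=0)).reshape(1, h, w)
    return feat * chan * spat
```

Because both weight maps lie in (0, 1), the module can only attenuate features, re-emphasizing informative channels and pixels relative to the rest; a trained version would learn which ones to keep.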
"Pathological Image Segmentation Method Based on Multiscale and Dual Attention." International Journal of Intelligent Systems, vol. 2024, no. 1. Published 2024-11-29. DOI: 10.1155/int/9987190.
Breast cancer is ranked as the second most common cancer among women globally, highlighting the critical need for precise and early detection methods. Our research introduces a novel approach for classifying benign and malignant breast ultrasound images. We leverage advanced deep learning methodologies, mainly focusing on the vision transformer (ViT) model. Our method distinctively features progressive fine-tuning, a tailored process that incrementally adapts the model to the nuances of breast tissue classification. Ultrasound imaging was chosen for its distinct benefits in medical diagnostics. This modality is noninvasive and cost-effective and demonstrates enhanced specificity, especially in dense breast tissues where traditional methods may struggle. Such characteristics make it an ideal choice for the sensitive task of breast cancer detection. Our extensive experiments utilized the breast ultrasound images dataset, comprising 780 images of both benign and malignant breast tissues. The dataset underwent a comprehensive analysis using several pretrained deep learning models, including VGG16, VGG19, DenseNet121, Inception, ResNet152V2, DenseNet169, DenseNet201, and the ViT. The results presented were achieved without employing data augmentation techniques. The ViT model demonstrated robust accuracy and generalization capabilities with the original dataset size, which consisted of 637 images. Each model’s performance was meticulously evaluated through a robust 10-fold cross-validation technique, ensuring a thorough and unbiased comparison. Our findings are significant, demonstrating that the progressive fine-tuning substantially enhances the ViT model’s capability. This resulted in a remarkable accuracy of 94.49% and an AUC score of 0.921, significantly higher than models without fine-tuning. 
These results affirm the efficacy of the ViT model and highlight the transformative potential of integrating progressive fine-tuning with transformer models in medical image classification tasks. The study solidifies the role of such advanced methodologies in improving early breast cancer detection and diagnosis, especially when coupled with the unique advantages of ultrasound imaging.
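Progressive fine-tuning as described can be sketched as a schedule that unfreezes ever-larger suffixes of the network, starting from the classification head alone and ending with the full model trainable. This is a schematic under assumed layer names, not the authors' exact schedule:

```python
def progressive_unfreeze(layers, stages):
    """Return, per stage, the list of layers to train: stage 1 trains only
    the last layer (the head); each later stage unfreezes a deeper slice,
    until the final stage trains every layer."""
    plan = []
    for s in range(stages):
        frac = s / (stages - 1) if stages > 1 else 1.0
        k = 1 + round((len(layers) - 1) * frac)  # size of trainable suffix
        plan.append(layers[-k:])
    return plan

# hypothetical ViT layer names, for illustration only
plan = progressive_unfreeze(
    ["patch_embed", "block1", "block2", "block3", "head"], stages=3
)
```

Each stage would typically be trained to convergence (often with a reduced learning rate for newly unfrozen layers) before moving to the next, so the pretrained representations are adapted gradually rather than disrupted all at once.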
"Enhancing Breast Cancer Detection in Ultrasound Images: An Innovative Approach Using Progressive Fine-Tuning of Vision Transformer Models," by Meshrif Alruily, Alshimaa Abdelraof Mahmoud, Hisham Allahem, Ayman Mohamed Mostafa, Hosameldeen Shabana, Mohamed Ezz. International Journal of Intelligent Systems, vol. 2024, no. 1. Published 2024-11-28. DOI: 10.1155/int/6528752.
Shiyuan Wang, Rugui Yao, Xiaoya Zuo, Ye Fan, Xiongfei Li, Qingyan Guo, Xudong Li
The unique fingerprints of radio frequency (RF) devices play a critical role in enhancing wireless security, optimizing spectrum management, and facilitating device authentication through accurate identification. However, high-accuracy radio frequency fingerprint (RFF) identification models often carry a large number of parameters and high complexity, making them less practical for real-world deployment. To address this challenge, our research presents a deep convolutional neural network (CNN)–based architecture, the separation and fusion convolutional neural network (SFCNN), which improves the identification accuracy of RF devices at limited complexity. The SFCNN incorporates two customizable modules: a separation layer, which partitions the data into groups along the channel dimension to keep complexity low, and a fusion layer, which performs deep channel fusion to enhance feature representation. The proposed SFCNN achieves improved accuracy and enhanced robustness with fewer parameters than state-of-the-art techniques, including the baseline CNN, Inception, ResNet, TCN, MSCNN, STFT-CNN, and ResNet-50-1D. Experiments on public datasets demonstrate an average identification accuracy of 97.78% across 21 USRP transmitters. The parameter count is reduced by at least 8% compared with all other models, and the identification accuracy exceeds that of all other models under every considered scenario. The trade-off between complexity and accuracy suggests that the SFCNN is an effective architecture with considerable development potential.
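The parameter saving from channel-wise separation can be seen with a back-of-envelope count for a 1×1 convolution: splitting the channels into groups divides the weight count by the number of groups, after which a pointwise fusion layer can restore cross-group mixing. This is generic grouped-convolution arithmetic, not the SFCNN's exact layer definitions:

```python
def conv1x1_params(c_in: int, c_out: int, groups: int = 1) -> int:
    """Weight count of a 1x1 convolution (biases ignored). Each of the
    `groups` groups connects c_in/groups inputs to c_out/groups outputs,
    so the total is c_in * c_out / groups."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out

full = conv1x1_params(64, 64)           # dense channel mixing
separated = conv1x1_params(64, 64, 4)   # separation layer: 4x fewer weights
```

The design question a separation/fusion architecture answers is how often to pay for a full (ungrouped) fusion layer so that features from different groups still interact, while most layers stay cheap.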
"SFCNN: Separation and Fusion Convolutional Neural Network for Radio Frequency Fingerprint Identification." International Journal of Intelligent Systems, vol. 2024, no. 1. Published 2024-11-28. DOI: 10.1155/int/4366040.
Jun Wen, Xiusheng Li, Yupeng Chen, Xiaoli Li, Hang Mao
Federated learning (FL) is a novel approach to privacy-preserving machine learning that enables remote devices to collaborate on model training without exchanging data among clients. However, it faces several challenges, including limited client-side processing capabilities and non-IID data distributions. To address these challenges, we propose a partitioned FL architecture in which a large CNN is divided into smaller subnetworks that train concurrently across clients. Within a cluster, multiple clients concurrently train the ensemble model, and the Jensen–Shannon divergence quantifies the similarity of predictions across submodels. To address discrepancies between local and global model parameters caused by skewed data distributions, we propose an ensemble learning method that adds a penalty term to the local model's loss, keeping local training synchronized with the global model. This method combines predictions and losses across multiple submodels, effectively mitigating accuracy loss during integration. Extensive experiments with various Dirichlet parameters demonstrate that our system achieves accelerated convergence and enhanced performance on the CIFAR-10 and CIFAR-100 image classification tasks while remaining robust to partial participation, diverse datasets, and large numbers of clients. On CIFAR-10, our method outperforms FedAvg, FedProx, and SplitFed by 6%–8%; on CIFAR-100, it outperforms them by 12%–18%.
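Two ingredients named above, the Jensen–Shannon divergence between submodel predictions and a penalty term in the local loss, can be sketched as follows. The penalty shown is a FedProx-style proximal term pulling local weights toward the global model; the paper's exact formulation may differ:

```python
import numpy as np

def kl(p, q):
    """KL divergence with a small clip to avoid log(0)."""
    p = np.clip(np.asarray(p, float), 1e-12, 1.0)
    q = np.clip(np.asarray(q, float), 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Jensen-Shannon divergence between two prediction distributions:
    symmetric, bounded, and zero iff the predictions agree."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def penalized_loss(task_loss, local_w, global_w, mu=0.01):
    """Local task loss plus a proximal penalty (mu/2)*||w_local - w_global||^2
    that keeps client updates synchronized with the global model."""
    local_w, global_w = np.asarray(local_w, float), np.asarray(global_w, float)
    return task_loss + 0.5 * mu * float(np.sum((local_w - global_w) ** 2))
```

In the architecture described, the divergence would score how similar two submodels' predictions are, while the penalty bounds how far a client's partition can drift from the aggregated parameters under non-IID data.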
"An Effective Approach for Resource-Constrained Edge Devices in Federated Learning." International Journal of Intelligent Systems, vol. 2024, no. 1. Published 2024-11-27. DOI: 10.1155/2024/8860376.
Chen Qiu, Xuan Wang, Tianzi Ma, Yaojun Wen, Jiajia Zhang
Counterfactual regret minimization (CFR) is an effective algorithm for solving extensive-form games with imperfect information (IIEGs). However, CFR can only be applied in known environments, where the transition function of the chance player and the reward function at terminal nodes are known. In uncertain settings, such as reinforcement learning (RL) problems, CFR is not applicable, so extending CFR to unknown environments is a significant challenge with clear real-world relevance. Current state-of-the-art solutions require many interactions with the environment and suffer from large single-sample variance when narrowing the gap with the real environment. In this paper, we propose a method that combines CFR with information gain to compute the Nash equilibrium (NE) of IIEGs in unknown environments. We use a curiosity-driven approach to explore unknown environments and minimize the discrepancy between the uncertain and real environments. In addition, by incorporating information into the reward, the average strategy computed by CFR can be used directly as the interaction policy with the environment, improving the exploration efficiency of our method in uncertain environments. In experiments on standard testbeds such as Kuhn poker and Leduc poker, our method significantly reduces the number of interactions with the environment compared to various baselines and computes a more accurate approximate NE within the same number of interaction rounds.
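The strategy-update step at the heart of CFR is regret matching, which converts accumulated counterfactual regrets at an information set into a behavioral strategy. A minimal sketch of that standard step (not the paper's information-gain extension):

```python
def regret_matching(cum_regrets):
    """Turn cumulative counterfactual regrets into a strategy: play each
    action in proportion to its positive regret; if no action has
    positive regret, fall back to the uniform strategy."""
    positive = [max(r, 0.0) for r in cum_regrets]
    total = sum(positive)
    n = len(cum_regrets)
    if total <= 0.0:
        return [1.0 / n] * n
    return [p / total for p in positive]
```

CFR iterates this update at every information set and averages the resulting strategies over time; it is that average strategy which converges to a Nash equilibrium in two-player zero-sum games, and which the paper repurposes as the interaction policy with the unknown environment.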
{"title":"Combining Counterfactual Regret Minimization With Information Gain to Solve Extensive Games With Unknown Environments","authors":"Chen Qiu, Xuan Wang, Tianzi Ma, Yaojun Wen, Jiajia Zhang","doi":"10.1155/int/9482323","DOIUrl":"https://doi.org/10.1155/int/9482323","url":null,"abstract":"<div>\u0000 <p>Counterfactual regret minimization (CFR) is an effective algorithm for solving extensive-form games with imperfect information (IIEGs). However, CFR is only allowed to be applied in known environments, where the transition function of the chance player and the reward function of the terminal node in IIEGs are known. In uncertain situations, such as reinforcement learning (RL) problems, CFR is not applicable. Thus, applying CFR in unknown environments is a significant challenge that can also address some difficulties in the real world. Currently, advanced solutions require more interactions with the environment and are limited by large single-sampling variances to narrow the gap with the real environment. In this paper, we propose a method that combines CFR with information gain to compute the Nash equilibrium (NE) of IIEGs with unknown environments. We use a curiosity-driven approach to explore unknown environments and minimize the discrepancy between uncertain and real environments. In addition, by incorporating information into the reward, the average strategy calculated by CFR can be directly implemented as the interaction policy with the environment, thereby improving the exploration efficiency of our method in uncertain environments. 
Through experiments on standard testbeds such as Kuhn poker and Leduc poker, our method significantly reduces the number of interactions with the environment compared to the different baselines and computes a more accurate approximate NE within the same number of interaction rounds.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/9482323","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142724236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
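The building block that CFR iterates at every information set is regret matching: play each action in proportion to its positive cumulative regret, and report the *average* strategy as the equilibrium approximation. As a minimal, hedged sketch (our own illustration, not the paper's method), the loop below runs regret matching in self-play on rock-paper-scissors, a common stand-in for Kuhn/Leduc poker, where the average strategies converge toward the uniform NE:

```python
def regret_matching(regrets):
    """Turn cumulative regrets into a strategy: play actions in
    proportion to their positive regret (uniform if none is positive)."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    n = len(regrets)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

# Row player's payoff matrix for rock-paper-scissors (zero-sum, symmetric).
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def expected_values(opponent):
    """Expected payoff of each pure action against a mixed strategy.
    Because RPS is symmetric, the same formula serves both players."""
    return [sum(opponent[b] * U[a][b] for b in range(3)) for a in range(3)]

def self_play(iters):
    # Slightly asymmetric starting regrets so the dynamics are non-trivial.
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    sums = [[0.0] * 3, [0.0] * 3]
    for _ in range(iters):
        strats = [regret_matching(r) for r in regrets]
        for p in range(2):
            ev = expected_values(strats[1 - p])
            u = sum(strats[p][a] * ev[a] for a in range(3))
            for a in range(3):
                regrets[p][a] += ev[a] - u  # accumulate counterfactual regret
                sums[p][a] += strats[p][a]  # accumulate for the average strategy
    # The average strategy, not the final one, approximates the NE.
    return [[s / iters for s in sums[p]] for p in range(2)]
```

Note that the update uses exact expected values, which is what the abstract's "single-sampling variance" refers to: a sampling-based variant replaces `expected_values` with one sampled opponent action, trading accuracy per iteration for applicability in unknown environments.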
Gancheng Zhu, Yongkai Li, Shuai Zhang, Xiaoting Duan, Zehao Huang, Zhaomin Yao, Rong Wang, Zhiguo Wang
Eye tracking has emerged as a valuable tool for both research and clinical applications. However, traditional eye-tracking systems are often bulky and expensive, limiting their widespread adoption. Smartphone eye tracking has become feasible with advances in deep learning and edge computing, but the field still faces practical challenges related to large-scale datasets, model inference speed, and gaze estimation accuracy. The present study creates a new dataset containing over 3.2 million face images collected with recent phone models and presents a comprehensive smartphone eye-tracking pipeline comprising a deep neural network framework (MGazeNet), a personalized model calibration method, and a heuristic gaze signal filter. The MGazeNet model introduces a linear adaptive batch normalization module to efficiently combine eye and face features, achieving state-of-the-art gaze estimation accuracy of 1.59 cm on the GazeCapture dataset and 1.48 cm on our custom dataset. In addition, we propose an algorithm that uses multiverse optimization to tune the hyperparameters of support vector regression (MVO–SVR), improving calibration accuracy with 13 or fewer ground-truth gaze points and further reducing gaze estimation error to 0.89 cm. This integrated approach allows for eye tracking with accuracy comparable to that of research-grade eye trackers, offering new application possibilities for smartphone eye tracking.
{"title":"Neural Networks With Linear Adaptive Batch Normalization and Swarm Intelligence Calibration for Real-Time Gaze Estimation on Smartphones","authors":"Gancheng Zhu, Yongkai Li, Shuai Zhang, Xiaoting Duan, Zehao Huang, Zhaomin Yao, Rong Wang, Zhiguo Wang","doi":"10.1155/2024/2644725","DOIUrl":"https://doi.org/10.1155/2024/2644725","url":null,"abstract":"<div>\u0000 <p>Eye tracking has emerged as a valuable tool for both research and clinical applications. However, traditional eye-tracking systems are often bulky and expensive, limiting their widespread adoption in various fields. Smartphone eye tracking has become feasible with advanced deep learning and edge computing technologies. However, the field still faces practical challenges related to large-scale datasets, model inference speed, and gaze estimation accuracy. The present study created a new dataset that contains over 3.2 million face images collected with recent phone models and presents a comprehensive smartphone eye-tracking pipeline comprising a deep neural network framework (MGazeNet), a personalized model calibration method, and a heuristic gaze signal filter. The MGazeNet model introduced a linear adaptive batch normalization module to efficiently combine eye and face features, achieving the state-of-the-art gaze estimation accuracy of 1.59 cm on the GazeCapture dataset and 1.48 cm on our custom dataset. In addition, an algorithm that utilizes multiverse optimization to optimize the hyperparameters of support vector regression (MVO–SVR) was proposed to improve eye-tracking calibration accuracy with 13 or fewer ground-truth gaze points, further improving gaze estimation accuracy to 0.89 cm. 
This integrated approach allows for eye tracking with accuracy comparable to that of research-grade eye trackers, offering new application possibilities for smartphone eye tracking.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/2644725","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142708140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
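The abstract does not spell out the MVO–SVR calibration, so as a hedged illustration of the simpler idea personalized calibration builds on, the sketch below fits a per-user 2D affine correction from a handful of (predicted, true) gaze points by least squares; all function names and the affine model are our assumptions, not the paper's:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with
    partial pivoting (enough for the tiny normal equations below)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, 4):
                    M[r][k] -= f * M[c][k]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(pred, truth):
    """Least-squares affine map [x, y, 1] -> (x', y') from a small set
    of calibration pairs (e.g. the paper's <= 13 gaze points)."""
    X = [[px, py, 1.0] for px, py in pred]
    coeffs = []
    for axis in range(2):
        y = [t[axis] for t in truth]
        # Normal equations: (X^T X) w = X^T y, solved per output axis.
        XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)]
               for r in range(3)]
        Xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(3)]
        coeffs.append(solve3(XtX, Xty))
    return coeffs

def apply_affine(coeffs, point):
    """Correct one predicted gaze point with the fitted map."""
    x, y = point
    return tuple(c[0] * x + c[1] * y + c[2] for c in coeffs)
```

The MVO–SVR step replaces this linear map with support vector regression whose hyperparameters are tuned by multiverse optimization, which can capture the nonlinear residuals an affine fit leaves behind.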
Collaborative edge and cloud computing is a promising computing paradigm for reducing the task response delay and energy consumption of devices. In this paper, we jointly optimize the task offloading strategy, power control for devices, and resource allocation for edge servers within a collaborative device-edge-cloud computing system. We formulate the problem as a constrained multiobjective optimization problem and propose a joint optimization algorithm (JO-DEC) based on a multiobjective evolutionary algorithm to solve it. To address the tight coupling among variables and the high-dimensional decision space, we propose a decoupling encoding strategy (DES) and a boundary point sampling strategy (BPS) to improve the performance of the algorithm: DES decouples the correlations among decision variables, while BPS enhances the convergence speed and population diversity of the algorithm. Simulation results demonstrate that JO-DEC outperforms three state-of-the-art algorithms in terms of convergence and diversity, enabling it to achieve a smaller task response delay and lower energy consumption.
{"title":"Joint Power Control and Resource Allocation With Task Offloading for Collaborative Device-Edge-Cloud Computing Systems","authors":"Shumin Xie, Kangshun Li, Wenxiang Wang, Hui Wang, Hassan Jalil","doi":"10.1155/2024/6852701","DOIUrl":"https://doi.org/10.1155/2024/6852701","url":null,"abstract":"<div>\u0000 <p>Collaborative edge and cloud computing is a promising computing paradigm for reducing the task response delay and energy consumption of devices. In this paper, we aim to jointly optimize task offloading strategy, power control for devices, and resource allocation for edge servers within a collaborative device-edge-cloud computing system. We formulate this problem as a constrained multiobjective optimization problem and propose a joint optimization algorithm (JO-DEC) based on a multiobjective evolutionary algorithm to solve it. To address the tight coupling of the variables and the high-dimensional decision space, we propose a decoupling encoding strategy (DES) and a boundary point sampling strategy (BPS) to improve the performance of the algorithm. The DES is utilized to decouple the correlations among decision variables, and BPS is employed to enhance the convergence speed and population diversity of the algorithm. 
Simulation results demonstrate that JO-DEC outperforms three state-of-the-art algorithms in terms of convergence and diversity, enabling it to achieve a smaller task response delay and lower energy consumption.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/6852701","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
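JO-DEC itself is evolutionary, but the decision space it searches can be illustrated with a tiny brute-force baseline: pick, per task, where to execute (device, edge, or cloud) so as to minimize a weighted sum of response delay and device energy. The cost model and every constant below are illustrative assumptions of ours, not the paper's system model:

```python
from itertools import product

def task_cost(location, cycles, data):
    """Illustrative (delay_seconds, device_energy_joules) for one task of
    `cycles` CPU cycles and `data` bits of input."""
    if location == "device":
        f, k = 1e9, 1e-27                       # local CPU freq, energy coeff
        return cycles / f, k * f ** 2 * cycles  # dynamic-power energy model
    rate = 20e6 if location == "edge" else 5e6  # uplink bit rate (bit/s)
    f = 5e9 if location == "edge" else 20e9     # remote CPU freq
    tx_power = 0.5                              # W spent while transmitting
    tx = data / rate
    return tx + cycles / f, tx_power * tx       # device pays only for transmit

def best_offloading(tasks, w_delay=1.0, w_energy=1.0):
    """Exhaustive search over the joint offloading decision. This is
    exponential in the task count; an evolutionary algorithm such as
    JO-DEC searches the same space without enumerating it."""
    best, best_cost = None, float("inf")
    for choice in product(["device", "edge", "cloud"], repeat=len(tasks)):
        delay = energy = 0.0
        for loc, (cycles, data) in zip(choice, tasks):
            d, e = task_cost(loc, cycles, data)
            delay += d
            energy += e
        cost = w_delay * delay + w_energy * energy
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost
```

Under these constants, a compute-heavy task with little input data is best shipped to the cloud, while a data-heavy but light task stays on the device, which is exactly the kind of coupling between offloading, power, and allocation that the paper's joint formulation captures.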
Application programming interface (API) misuse refers to misconceptions or carelessness in the anticipated usage of APIs, threatening a software system’s security. Moreover, API misuse is well concealed and challenging to uncover. Recent advancements have explored large language models (LLMs) in a variety of software engineering (SE) activities, such as code repair. Nonetheless, the security implications of using LLMs for these purposes remain underexplored, particularly concerning API misuse. In this paper, we present an empirical study of the bug-fixing capabilities of LLMs on API misuse related to monitoring resource management (MRM API misuse). We first propose APImisRepair, a real-world benchmark for repairing MRM API misuse, comprising buggy programs, corresponding fixed programs, and descriptions of the API misuse. We then assess the performance of several LLMs on the APImisRepair benchmark. The findings reveal the weaknesses of LLMs in repairing MRM API misuse and identify several causes, including poor fault localization and a lack of awareness of API misuse. Additionally, we offer insights into improving the ability of LLMs to fix MRM API misuse and introduce a tailored approach, APImisAP.
{"title":"Security Analysis of Large Language Models on API Misuse Programming Repair","authors":"Rui Zhang, Ziyue Qiao, Yong Yu","doi":"10.1155/2024/7135765","DOIUrl":"https://doi.org/10.1155/2024/7135765","url":null,"abstract":"<div>\u0000 <p>Application programming interface (API) misuse refers to misconceptions or carelessness in the anticipated usage of APIs, threatening the software system’s security. Moreover, API misuses demonstrate significant concealment and are challenging to uncover. Recent advancements have explored enhanced LLMs in a variety of software engineering (SE) activities, such as code repair. Nonetheless, the security implications of using LLMs for these purposes remain underexplored, particularly concerning the issue of API misuse. In this paper, we present an empirical study to observe the bug-fixing capabilities of LLMs in addressing API misuse related to monitoring resource management (MRM API misuse). Initially, we propose APImisRepair, a real-world benchmark for repairing MRM API misuse, including buggy programs, corresponding fixed programs, and descriptions of API misuse. Subsequently, we assess the performance of several LLMs using the APImisRepair benchmark. Findings reveal the vulnerabilities of LLMs in repairing MRM API misuse and find several reasons, encompassing factors such as fault localization and a lack of awareness regarding API misuse. Additionally, we have insights on improving LLMs in terms of their ability to fix MRM API misuse and introduce a crafted approach, APImisAP. 
Experimental results demonstrate that APImisAP exhibits a certain degree of improvement in the security of LLMs.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/7135765","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
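The benchmark targets resource-management misuse; a canonical instance of the buggy/fixed program pairs it describes (our own illustration, not an item drawn from APImisRepair) is a file handle leaked on the error path, and its repair:

```python
def read_config_buggy(path):
    # MRM API misuse: if read() or strip() raises, close() is never
    # reached and the file handle leaks.
    f = open(path)
    data = f.read().strip()
    f.close()  # skipped on any exception above
    return data

def read_config_fixed(path):
    # Fix: a context manager releases the resource on every exit path,
    # including exceptions.
    with open(path) as f:
        return f.read().strip()
```

Both functions behave identically on the happy path, which is precisely why such misuse is concealed: the leak only manifests under errors or at scale, and a repair model must reason about the error path rather than observed behavior.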