Pub Date : 2023-05-02DOI: 10.2174/2666255816666230502100729
Masood Ahmad, M. Nadeem, Monamy Islam, Saquib Ali, A. Agrawal, Raees Ahmad Khan
The watermarking technique is a security algorithm for medical images and the patient information they carry. Watermarking techniques are evaluated against the robustness, integrity, confidentiality, authentication, and complexity of medical images. Selecting a medical image watermarking technique is a multi-criteria decision-making problem that calls for an automated way of choosing an algorithm for security and privacy; traditional selection techniques make it difficult to identify the better watermarking technique. To deal with this problem, a multi-criteria fuzzy analytic hierarchy process (FAHP) was proposed and applied to algorithm selection for the security of medical images in healthcare. In this method, we first determined the list of criteria and alternatives that directly affect the choice of medical image security algorithm, then applied the proposed method to those criteria and alternatives and ranked the algorithms according to the obtained weights. The overall results and ranking of the algorithms are also presented in this article. Integrity was found to have the highest weight (0.509) compared to the other criteria; the weights of the remaining criteria were authentication (0.165), robustness (0.151), confidentiality (0.135), and complexity (0.038). Thus, in terms of ranking, integrity was reported to be of the highest priority among the five criteria attributes.
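The final step the abstract describes, ranking the five criteria by their FAHP-derived weights, can be sketched in a few lines. The weight values come from the abstract itself, but the helper code is a generic illustration, not the authors' implementation.

```python
# Rank security criteria by their FAHP weights (values taken from the abstract).
# The ranking logic below is an illustrative sketch, not the paper's code.
weights = {
    "integrity": 0.509,
    "authentication": 0.165,
    "robustness": 0.151,
    "confidentiality": 0.135,
    "complexity": 0.038,
}

# Sort criteria from highest to lowest weight to obtain the priority ranking.
ranking = sorted(weights, key=weights.get, reverse=True)

for rank, criterion in enumerate(ranking, start=1):
    print(f"{rank}. {criterion}: {weights[criterion]:.3f}")
```

As expected from the abstract, integrity comes out first and complexity last, and the weights sum to roughly 1, consistent with normalized AHP weights.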
{"title":"Selection of Digital Watermarking Techniques for Medical Image Security by using the Fuzzy Analytical Hierarchy Process","authors":"Masood Ahmad, M. Nadeem, Monamy Islam, Saquib Ali, A. Agrawal, Raees Ahmad Khan","doi":"10.2174/2666255816666230502100729","DOIUrl":"https://doi.org/10.2174/2666255816666230502100729","url":null,"abstract":"\u0000\u0000The watermarking technique is a security algorithm for medical images and the patient's information. Watermarking is used for maintaining the robustness, integrity, confidentiality, authentication, and complexity of medical images.\u0000\u0000\u0000\u0000The selection of medical image watermarking technique is multi-criteria decision-making and an automatic way of algorithm selection for security and privacy. However, it is difficult to select a better watermarking technique through traditional selection techniques.\u0000\u0000\u0000\u0000To deal with this problem, a multicriteria-based fuzzy analytic hierarchy process (FAHP) was proposed. This method is applied for algorithm selection for the security of medical images in healthcare. In this method, we first determined the list of criteria and alternatives, which directly affect the decision of the medical image security algorithm. Then, the proposed method was applied to the criteria and alternatives.\u0000\u0000\u0000\u0000We provided the rank according to the obtained weights of the algorithm. Furthermore, the overall results and ranking of the algorithms were also presented in this article.\u0000\u0000\u0000\u0000Integrity was found to have the highest weight (0.509) compared to the other criteria. The weight of the other criteria, namely authentication, was 0.165, robustness was 0.151, confidentiality was 0.135, and complexity was 0.038. 
Thus, in terms of ranking, integrity was reported to be of the highest priority among the five criteria attributes.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47425021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-05-01DOI: 10.2174/266625581604230119152239
Ihsan Ali
{"title":"Meet the Editorial Board Members","authors":"Ihsan Ali","doi":"10.2174/266625581604230119152239","DOIUrl":"https://doi.org/10.2174/266625581604230119152239","url":null,"abstract":"","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":"434 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135753422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-05-01DOI: 10.2174/266625581604230119153144
{"title":"Patent Selections","authors":"","doi":"10.2174/266625581604230119153144","DOIUrl":"https://doi.org/10.2174/266625581604230119153144","url":null,"abstract":"","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135753423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Spectrum is the backbone of wireless communications, including internet services. Nowadays, the business of industries providing wired communication is stagnant, while that of industries dealing with wireless communications is growing very fast, creating a large demand for radio spectrum for new wireless multimedia services. Although present fixed spectrum allotment schemes do not cause interference between users, such fixed allocation cannot accommodate the spectrum required for new wireless services. Cognitive radio (CR) relies on spectrum sensing to discover available frequency bands so that the spectrum can be used to its full potential while avoiding interference with primary users (PUs). Objectives: The purpose of this work is to present an in-depth overview of traditional as well as advanced artificial intelligence and machine learning based cooperative spectrum sensing (CSS) in cognitive radio networks. Method: Using the principles of artificial intelligence (AI), systems can solve problems by mimicking the function of the human brain. Moreover, since its inception, machine learning (ML) has demonstrated that it can solve a wide range of computational problems. Recent advancements in AI and ML techniques have made them an emergent technology in spectrum sensing. Result: The results show that more than 80% of the surveyed papers cover traditional spectrum sensing, while fewer than 20% deal with AI and ML approaches; more than 75% of the papers address the limitations of local spectrum sensing. The study presents the various methods implemented in spectrum sensing along with their merits and challenges. Conclusion: Spectrum sensing techniques are hampered by a variety of issues, including fading, shadowing, and receiver uncertainty. The challenges, benefits, drawbacks, and scope of cooperative sensing are examined and summarized. With this survey article, academics can clearly see the numerous conventional, AI, and ML methodologies in use and can connect interested audiences to contemporary research underway in cognitive radio networks.
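The most basic of the traditional sensing methods the survey covers is energy detection: compare the received signal's average energy against a noise-dependent threshold to decide whether the primary user is transmitting. The sketch below is the textbook detector, not any specific method from the survey; the signal model and threshold are illustrative assumptions.

```python
import random

def energy_detect(samples, threshold):
    """Classic energy detector: declare the band occupied when the
    average signal energy exceeds a noise-dependent threshold."""
    energy = sum(x * x for x in samples) / len(samples)
    return energy > threshold, energy

random.seed(42)
n = 1000
noise_only = [random.gauss(0.0, 1.0) for _ in range(n)]   # PU absent: unit-power noise
with_signal = [x + 2.0 for x in noise_only]               # PU present: illustrative offset

threshold = 1.5  # chosen above the unit noise power for this sketch
busy_a, _ = energy_detect(noise_only, threshold)
busy_b, _ = energy_detect(with_signal, threshold)
print("PU absent  ->", busy_a)
print("PU present ->", busy_b)
```

Energy detection is attractive because it needs no knowledge of the primary user's waveform, but, as the conclusion notes, fading and shadowing can push a real signal's energy below the threshold, which is what motivates cooperative sensing across multiple CRs.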
{"title":"Cooperative Spectrum Sensing in Cognitive Radio Networks: A Systematic Review","authors":"Vaishali Yadav, Sharad Jain, Ashwani Kumar Yadav, Raj Kumar","doi":"10.2174/2666255816666221005095538","DOIUrl":"https://doi.org/10.2174/2666255816666221005095538","url":null,"abstract":"Background: Spectrum is the backbone for wireless communications including internet services. Now days, the business of industries providing wired communication is constant while the business of industries dealing with wireless communications is growing very fast. There is large demand of radio spectrum for new wireless multimedia services. Although the present fixed spectrum allotment schemes do not cause any interference between users, but this fixed scheme of spectrum allocation do not allow accommodating the spectrum required for new wireless services. Cognitive radio (CR) relies on spectrum sensing to discover available frequency bands so that the spectrum can be used to its full potential, thus avoiding interference to the primary users (PU). Objectives: The purpose of this work is to present an in-depth overview of traditional as well as advanced artificial intelligence and machine learning based cooperative spectrum sensing (CSS) in cognitive radio networks. Method: Using the principles of artificial intelligence (AI), systems are able to solve issues by mimicking the function of human brains. Moreover, since its inception, machine learning has demonstrated that it is capable of solving a wide range of computational issues. Recent advancements in artificial intelligence techniques and machine learning (ML) have made it an emergent technology in spectrum sensing. Result: The result shows that more than 80% papers are on traditional spectrum sensing while less than 20% deals with artificial intelligence and machine learning approaches. More than 75% papers address the limitation of local spectrum sensing. 
The study presents the various methods implemented in the spectrum sensing along with merits and challenges. Conclusion: Spectrum sensing techniques are hampered by a variety of issues, including fading, shadowing, and receiver unpredictability. Challenges, benefits, drawbacks, and scope of cooperative sensing are examined and summarized. With this survey article, academics may clearly know the numerous conventional artificial intelligence and machine learning methodologies used and can connect sharp audiences to contemporary research done in cognitive radio networks, which is now underway.","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135702915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-30DOI: 10.2174/2666255816666230330100005
Shweta Taneja, Bhawna Suri, Aman Roy, Ashish Chowdhry, H. kumar, Kautuk Dwivedi
Magnetic resonance imaging (MRI) and computed tomography (CT) each have their areas of specialty in the medical imaging world. MRI is considered the safer modality, as it exploits the magnetic properties of the hydrogen nucleus, whereas a CT scan uses multiple X-rays, which are known to contribute to carcinogenesis and can affect the patient's health. In scenarios such as radiation therapy, where both MRI and CT are required for medical treatment, a unique approach to getting both scans is to obtain the MRI and generate a CT scan from it. Current deep learning methods for MRI-to-CT synthesis use either paired data or unpaired data exclusively. Models trained with paired data suffer from the lack of availability of well-aligned data, while training with unpaired data may generate visually realistic images but still does not guarantee good accuracy. To overcome this, we propose a new model called PUPC-GANs (Paired Unpaired CycleGANs), based on CycleGANs (Cycle-Consistent Adversarial Networks). This model is capable of learning transformations utilizing both paired and unpaired data; to support this, a paired loss is introduced. Comparing MAE, MSE, NRMSE, PSNR, and SSIM metrics, PUPC-GANs outperforms CycleGANs. Despite MRI and CT having different areas of application, there are use cases, such as radiation therapy, where both are required, and a feasible approach to obtaining these images is to synthesize CT from MRI scans. Current methods fail to use paired data along with abundantly available unpaired data. The proposed model (PUPC-GANs) utilizes the available paired data during the training phase, and this ability, combined with the conventional CycleGAN model, produces a significant improvement over training with unpaired data alone. When comparing the two models on loss metrics including MAE, MSE, NRMSE, and PSNR, the proposed model outperforms CycleGANs, achieving an SSIM of 0.8, superior to that obtained by CycleGANs, and it produces comparable results on visual examination.
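The core idea, adding a supervised paired term on top of CycleGAN's unpaired cycle-consistency objective whenever an aligned MRI/CT pair is available, can be illustrated on toy arrays. The exact loss formulation, the L1 choice, and the lambda weights below are assumptions for illustration; the abstract does not specify them.

```python
def l1(a, b):
    """Mean absolute error between two equally sized sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def total_loss(fake_ct, real_ct, reconstructed_mri, real_mri,
               paired=True, lambda_cyc=10.0, lambda_pair=5.0):
    # The cycle-consistency term is always available (unpaired training signal).
    loss = lambda_cyc * l1(reconstructed_mri, real_mri)
    # The paired L1 term is added only when an aligned MRI/CT pair exists.
    if paired:
        loss += lambda_pair * l1(fake_ct, real_ct)
    return loss

fake_ct, real_ct = [0.2, 0.4], [0.1, 0.5]
rec_mri, real_mri = [0.3, 0.3], [0.3, 0.2]

unpaired = total_loss(fake_ct, real_ct, rec_mri, real_mri, paired=False)
combined = total_loss(fake_ct, real_ct, rec_mri, real_mri, paired=True)
```

The design point is that the same network trains on both kinds of minibatches: unpaired batches contribute only the cycle term, while paired batches add direct supervision toward the ground-truth CT.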
{"title":"PUPC-GANs: A Novel Image Conversion Model using Modified CycleGANs in Healthcare","authors":"Shweta Taneja, Bhawna Suri, Aman Roy, Ashish Chowdhry, H. kumar, Kautuk Dwivedi","doi":"10.2174/2666255816666230330100005","DOIUrl":"https://doi.org/10.2174/2666255816666230330100005","url":null,"abstract":"\u0000\u0000Magnetic resonance imaging (MRI) and computed tomography (CT) both have their areas of specialty in the medical imaging world. MRI is considered to be a safer modality as it exploits the magnetic properties of the hydrogen nucleus. Whereas a CT scan uses multiple X-rays, which is known to contribute to carcinogenesis and is associated with affecting the patient's health.\u0000\u0000\u0000\u0000In scenarios such as Radiation Therapy, where both MRI and CT are required for medical treatment, a unique approach to getting both scans would be to obtain MRI and generate a CT scan from it.\u0000\u0000\u0000\u0000In scenarios, such as radiation therapy, where both MRI and CT are required for medical treatment, a unique approach to getting both scans would be to obtain MRI and generate a CT scan from it. Current deep learning methods for MRI to CT synthesis purely use either paired data or unpaired data. Models trained with paired data suffer due to a lack of availability of well-aligned data.\u0000\u0000\u0000\u0000Training with unpaired data might generate visually realistic images, although it still does not guarantee good accuracy. To overcome this, we proposed a new model called PUPC-GANs (Paired Unpaired CycleGANs), based on CycleGANs (Cycle-Consistent Adversarial Networks).\u0000\u0000\u0000\u0000Training with unpaired data might generate visually realistic images, although it still does not guarantee good accuracy. 
To overcome this, we propose a new model called PUPC-GANs (Paired Unpaired CycleGANs), based on CycleGANs (Cycle-Consistent Adversarial Networks).\u0000\u0000\u0000\u0000This model is capable of learning transformations utilizing both paired and unpaired data. To support this, a paired loss is introduced. Comparing MAE, MSE, NRMSE, PSNR, and SSIM metrics, PUPC-GANs outperform CycleGANs.\u0000\u0000\u0000\u0000Despite MRI and CT having different areas of application, there are use cases like Radiation Therapy, where both of them are required. A feasible approach to obtaining these images is to synthesize CT from MRI scans. Current methods fail to use paired data along with abundantly available unpaired data. The proposed model (PUPC-GANs) is able to utilize the presence of paired data during the training phase. This ability in combination with the conventional model of CycleGANs produces significant improvement in results as compared to training only with unpaired data. When comparing the two models using loss metrics, which include MAE, MSE, NRMSE, and PSNR, the proposed model outperforms CycleGANs. An SSIM of 0.8 is achieved, which is superior to the one obtained by CycleGANs. The proposed model produces comparable results on visual examination.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45928699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-30DOI: 10.2174/2666255816666230330153428
Zhang Huanjun
Image generation models based on generative adversarial networks (GANs) have achieved remarkable results. However, the traditional GAN suffers from unstable training, which affects the quality of the generated images. This work aims to solve the GAN image generation problems of poor image quality, limited image categories, and slow model convergence. An improved image generation method based on GAN is proposed. Firstly, an attention mechanism is introduced into the convolution layers of the generator and discriminator, and a batch normalization layer is added after each convolution layer. Secondly, ReLU and leaky ReLU are used as the activation layers of the generator and discriminator, respectively. Thirdly, transposed convolution is used in the generator, while strided (small-step) convolution is used in the discriminator. Fourthly, a new discarding method is applied in the dropout layer. Experiments are carried out on the Caltech 101 dataset. The results show that the images generated by the proposed method are of higher quality than those generated by GAN with an attention mechanism (AM-GAN) and GAN with a stable training strategy (STS-GAN), and training stability is improved. The proposed method is effective for generating high-quality images.
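The abstract's second design choice, ReLU in the generator versus leaky ReLU in the discriminator, is the standard DCGAN-style pairing. Minimal scalar definitions make the difference concrete; the 0.2 negative slope is a common default and an assumption here, as the abstract gives no value.

```python
def relu(x):
    """Generator activation: zero for negative inputs."""
    return x if x > 0 else 0.0

def leaky_relu(x, slope=0.2):
    """Discriminator activation: a small slope for negative inputs keeps
    gradients flowing even when a unit is 'off', which helps stabilize
    adversarial training (slope=0.2 is an illustrative default)."""
    return x if x > 0 else slope * x

print(relu(-1.0), relu(2.0))          # 0.0 2.0
print(leaky_relu(-1.0), leaky_relu(3.0))  # -0.2 3.0
```

The asymmetry matters because a discriminator with dead ReLU units passes no gradient back to the generator, whereas the generator itself benefits from the sparser, cleaner ReLU output.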
{"title":"Image Generation Method Based on Improved Generative Adversarial Network","authors":"Zhang Huanjun","doi":"10.2174/2666255816666230330153428","DOIUrl":"https://doi.org/10.2174/2666255816666230330153428","url":null,"abstract":"\u0000\u0000The image generation model based on generative adversarial network (GAN) has achieved remarkable achievements. However, traditional GAN has the disadvantage of unstable training, which affects the quality of the generated image.\u0000\u0000\u0000\u0000This method is to solve the GAN image generation problems of poor image quality, single image category, and slow model convergence.\u0000\u0000\u0000\u0000An improved image generation method is proposed based on (GAN). Firstly, the attention mechanism is introduced into the convolution layer of the generator and discriminator. And a batch normalization layer is added after each convolution layer. Secondly, the ReLU and leaky ReLU are used as the active layer of the generator and discriminator, respectively. Thirdly, the transposed convolution is used in the generator while the small step convolution is used in the discriminator, respectively. Fourthly, a new discarding method is applied in the dropout layer.\u0000\u0000\u0000\u0000The experiments are carried out on Caltech 101 dataset. The experimental results show that the image quality generated by the proposed method is better than that generated by GAN with attention mechanism (AM-GAN) and GAN with stable training strategy (STS-GAN). 
And the stability is improved.\u0000\u0000\u0000\u0000The proposed method is effectiveness for image generation with high quality.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49558665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-21DOI: 10.2174/2666255816666230321120653
Rajesh Kumar Tyagi, Ramander Singh, Anil Kumar Mishra, U. Choudhury
In typical Internet of Things (IoT) networks, data is sent from sensors to fog devices and then on to a central cloud server. A single point of failure, slowdowns in the flow of data, identification, security, and connection issues, privacy concerns caused by a third party managing cloud servers, and the difficulty of frequently updating the firmware on millions of smart devices, from both a maintenance and a security point of view, are just some of the problems that can occur. The evolution of ubiquitous computing and blockchain technology has inspired researchers worldwide in recent years. Key features of blockchain technology, such as its immutability and its decentralized, distributed approach to data security, have made it a popular choice for developing diverse applications. With the practically significant applicability of blockchain concepts (specifically consensus methods), modern applications in ubiquitous computing and related areas have benefited significantly. In addition, we have taken advantage of the widely available blockchain platforms and looked into potential new fields of study. As a result, this review paper elaborates on novel privacy-preservation options while focusing on the ubiquitous computing domain as a starting point for blockchain technology applications. We also discuss obstacles, research gaps, and solutions. This review can assist early-stage researchers who are beginning to investigate the applicability of blockchain technology in ubiquitous computing, and it can serve as a reference for quickly locating the appropriate markers for ongoing research topics of interest.
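The immutability property the review highlights follows from each block committing to the hash of its predecessor, so editing any block invalidates every later link. A minimal hash-chain sketch using the standard library shows the mechanism; it omits consensus, signatures, and networking entirely, and all payloads are made up for illustration.

```python
import hashlib

def block_hash(prev_hash, payload):
    """Each block commits to its payload and to the previous block's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64  # genesis block uses an all-zero parent hash
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; an edited block breaks all links after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["payload"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["sensor-reading-1", "sensor-reading-2", "sensor-reading-3"])
assert verify(chain)
chain[1]["payload"] = "tampered"   # mutate one block in place...
assert not verify(chain)           # ...and chain verification fails
```

In a real deployment, a consensus method decides which honest copy of the chain wins, which is why the review singles out consensus algorithms as the practically significant piece for ubiquitous computing.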
{"title":"Review on Applicability and Utilization of Blockchain Technology in Ubiquitous Computing","authors":"Rajesh Kumar Tyagi, Ramander Singh, Anil Kumar Mishra, U. Choudhury","doi":"10.2174/2666255816666230321120653","DOIUrl":"https://doi.org/10.2174/2666255816666230321120653","url":null,"abstract":"\u0000\u0000In typical Internet of Things (IoT) networks, data is sent from sensors to fog devices and then onto a central cloud server. One single point of failure, a slowdown in the flow of data, identification, security, connection, privacy concerns caused by a third party managing cloud servers, and the difficulty of frequently updating the firmware on millions of smart devices from both a maintenance and a security point of view are just some of the problems that can occur. The evolution of ubiquitous computing and blockchain technology has inspired researchers worldwide in recent years. Key features of blockchain technology, such as the fact that it can't be changed and a decentralised and distributed approach to data security, have made it a popular choice for developing diverse applications. With the practically significant applicability of blockchain concepts (specifically consensus methods), modern-day applications in ubiquitous computing and other related areas have significantly benefited. In addition, we have taken advantage of the widely available blockchain platforms and looked into potential new study fields. As a result, this review paper elaborates the novel alternative privacy preservation options while simultaneously focusing on the universal domain as a starting point for blockchain technology applications. We also discuss obstacles, research gaps, and solutions. This review can assist early researchers who are beginning to investigate the applicability of blockchain technology in ubiquitous computing. 
It is also possible to use it as a reference in order to speed up the process of finding the appropriate markers for ongoing research subjects that are of interest.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44090635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-03-01DOI: 10.2174/2666255816666230301091725
Hua Ma, Xiao Feng, Yijie Sun
As GAN-based deepfakes have become increasingly mature and realistic, effective deepfake detectors have become essential. We are inspired by the fact that the normal pulse rhythms present in a real-face video can be weakened or even completely disrupted in a deepfake video; thus, we introduce a new deepfake detection approach based on remote heart rate estimation using a 3D Central Difference Convolution Attention Network (CDCAN). Our proposed fake detector is mainly composed of a 3D CDCAN with an inverse attention mechanism and an LSTM architecture. It utilizes 3D central difference convolution to enhance the spatiotemporal representation, capturing rich physiology-related temporal context by gathering time-difference information. The soft attention mechanism focuses on the skin region of interest, while the inverse attention mechanism further denoises the rPPG signals. Results: The performance of our approach is evaluated on the two latest datasets, Celeb-DF and DFDC, on which the experimental results show that our proposed approach achieves accuracies of 99.5% and 97.4%, respectively. Our approach outperforms state-of-the-art methods and proves the effectiveness of our deepfake detector.
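Central difference convolution blends a vanilla convolution with a term built from differences against the window's center pixel, which is what lets the network emphasize the subtle temporal gradients that carry pulse information. A 1D sketch of the operator follows; the paper's network is 3D, and the theta mixing factor here is an illustrative assumption.

```python
def cdc_1d(x, w, theta=0.7):
    """1D central difference convolution (valid padding, stride 1).
    Vanilla term:      sum_k w[k] * x[i+k]
    Difference term:   sum_k w[k] * (x[i+k] - x[center])
    Blending with theta collapses to: vanilla - theta * x[center] * sum(w)."""
    k = len(w)
    c = k // 2
    out = []
    for i in range(len(x) - k + 1):
        vanilla = sum(w[j] * x[i + j] for j in range(k))
        out.append(vanilla - theta * x[i + c] * sum(w))
    return out

y = cdc_1d([1.0, 2.0, 3.0, 4.0, 5.0], [1.0, 1.0, 1.0])
```

With theta = 0 the operator reduces to ordinary convolution; larger theta weights the local-difference information more heavily, which is the property the detector exploits to pick up frame-to-frame pulse variation.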
{"title":"DeepFake Detection with Remote Heart Rate Estimation Using 3D Central Difference Convolution Attention Network","authors":"Hua Ma, Xiao Feng, Yijie Sun","doi":"10.2174/2666255816666230301091725","DOIUrl":"https://doi.org/10.2174/2666255816666230301091725","url":null,"abstract":"\u0000\u0000As GAN-based deepfakes have become increasingly mature and real-istic, the demand for effective deepfake detectors has become essential. We are inspired by the fact that normal pulse rhythms present in real-face video can be decreased or even completely interrupted in a deepfake video; thus, we have in-troduced a new deepfake detection approach based on remote heart rate estima-tion using the 3D Cental Difference Convolution Attention Network (CDCAN).\u0000\u0000\u0000\u0000Our proposed fake detector is mainly composed of a 3D CDCAN with an inverse attention mechanism and LSTM architecture. It utilizes 3D central difference convolution to enhance the spatiotemporal representation, which can capture rich physiological-related temporal context by gathering the time differ-ence information. The soft attention mechanism is to focus on the skin region of interest, while the inverse attention mechanism is to further denoise rPPG signals.\u0000\u0000\u0000\u0000Results: The performance of our approach is evaluated on the two latest Ce-leb-DF and DFDC datasets, for which the experiment results show that our pro-posed approach achieves an accuracy of 99.5% and 97.4%, respectively.\u0000\u0000\u0000\u0000It utilizes 3D central difference convolution to enhance the spatiotemporal representation which can capture rich physiological related temporal context by gathering time difference information. 
The soft attention mechanism is to focus on the skin region of interest, while the inverse attention mechanism is to further denoise rPPG signals.\u0000\u0000\u0000\u0000Our approach outperforms the state-of-art methods and proves the effectiveness of our DeepFake detector.\u0000\u0000\u0000\u0000None\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41939338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-02-22DOI: 10.2174/2666255816666230222112313
E. Boonchieng, Aziz Nanthaamornphong
Open-source software (OSS) has become an important choice for developing software applications, and its usage has increased exponentially in recent years. Although many OSS systems have shown high reliability in terms of their functionality, they often exhibit several quality issues. Since most developers focus primarily on meeting clients' functional requirements within the appropriate deadlines, the outcome suffers from poor design and implementation practices. This issue can also manifest as software code smells, resulting in a variety of quality problems affecting software maintainability, comprehensibility, and extensibility. Generally speaking, OSS developers use code reviews during software development to discover flaws or bugs in updated code before it is merged into the code base. Nevertheless, despite the harmful impact of code smells on software projects, the extent to which developers actually consider them in the code review process is unclear in practice. To better understand the code review process in OSS projects, we gathered the comments of code reviewers who specified where developers should fix code smells in two OSS projects, OpenStack and WikiMedia, between 2011 and 2015. Our findings indicate that most code reviewers do not pay much attention to code smells, and only a few have attempted to motivate developers to improve their source code quality in general. The results also show an increasing tendency over time to provide advice concerning code smell corrections. We believe that this study's findings will encourage developers to use software engineering practices, such as refactoring, to reduce code smells when developing OSS.
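As a toy illustration of the kind of smell a reviewer might flag, a "long method" detector can be written in a few lines with the standard library's `ast` module. The 15-line threshold is an arbitrary illustrative choice and has no connection to the study's methodology.

```python
import ast

def long_functions(source, max_lines=15):
    """Return names of functions whose definitions span more than max_lines
    source lines -- a simple stand-in for the 'long method' code smell."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                smells.append(node.name)
    return smells

# A short function and a deliberately padded long one.
sample = "def short():\n    return 1\n\ndef long_one():\n" + "    x = 0\n" * 20
print(long_functions(sample))  # → ['long_one']
```

Tools of this kind automate exactly the feedback the study found reviewers rarely give by hand, which is one practical way to act on its conclusion about refactoring.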
{"title":"An Exploratory Study on Code Smells During Code Review in OSS Projects: A Case Study on OpenStack and WikiMedia","authors":"E. Boonchieng, Aziz Nanthaamornphong","doi":"10.2174/2666255816666230222112313","DOIUrl":"https://doi.org/10.2174/2666255816666230222112313","url":null,"abstract":"\u0000\u0000Open-source software (OSS) has become an important choice for developing software applications, and its usage has exponentially increased in recent years. Although many OSS systems have shown high reliability in terms of their functionality, they often exhibit several quality issues. Since most developers focus primarily on meeting clients’ functional requirements within the appropriate deadlines, the outcome suffers from poor design and implementation practices. This issue can also manifest as software code smells, resulting in a variety of quality issues such as software maintainability, comprehensibility, and extensibility. Generally speaking, OSS developers use code reviews during their software development to discover flaws or bugs in the updated code before it is merged with the code base. Nevertheless, despite the harmful impacts of code smells on software projects, the extent to which developers do consider them in the code review process is unclear in practice.\u0000\u0000\u0000\u0000To better understand the code review process in OSS projects, we gathered the comments of code reviewers who specified where developers should fix code smells in two OSS projects, OpenStack and WikiMedia, between 2011 and 2015.\u0000\u0000\u0000\u0000Our findings indicate that most code reviewers do not pay much attention to code smells. Only a few code reviewers have attempted to motivate developers to improve their source code quality in general. The results also show that there is an increasing tendency to provide advice concerning code smells corrections over time.\u0000\u0000\u0000\u0000We believe that this study's findings will encourage developers to use new software engineering practices, such as refactoring, to reduce code smells when developing OSS.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43047655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-01-12DOI: 10.2174/2666255816666230112165555
Zhixing Lv, Hui Yu, Kai Kang, Teng Chang Li, Guo Li Du
As an innovative information technology, blockchain combines the advantages of decentralization, immutability, data provenance, and automatic contract execution, which can be used to address single points of failure, high trading costs, low efficiency, and potential data risks in power trading. However, in traditional power blockchains, the design of functional components, such as the data structure of the block, does not take the actual characteristics of power trading into account, leading to performance bottlenecks in practical applications. Motivated by the business characteristics of power trading, a user-centric data model (UCDM) for consortium blockchains is proposed to achieve efficient data storage and quick data retrieval. The proposed UCDM is designed around the requirements of transaction retrieval and analysis, thereby supporting concurrent data requests and mass data storage. The ID of each user independently forms its own chain over the blockchain. Compared with the traditional data model, extensive experimental results demonstrate that the proposed UCDM achieves shorter processing delay, higher throughput, and shorter response latency, and thus has practical value. Furthermore, each participant in the blockchain network has a globally unique identity, which ensures high security during trading.
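The abstract's core idea, each user's ID independently forming its own chain, can be sketched as a minimal per-user hash chain. This is an illustrative reconstruction, not the paper's actual implementation: the class names (`Block`, `UserCentricLedger`), field names, and trading payloads are all assumptions made for the sake of the example.

```python
import hashlib
import json
import time


class Block:
    """One trading record, linked to the previous block of the SAME user."""

    def __init__(self, user_id, payload, prev_hash):
        self.user_id = user_id
        self.payload = payload          # e.g. one power-trading transaction
        self.prev_hash = prev_hash      # hash of this user's previous block
        self.timestamp = time.time()
        self.hash = self._digest()

    def _digest(self):
        body = json.dumps(
            {"user": self.user_id, "payload": self.payload,
             "prev": self.prev_hash, "ts": self.timestamp},
            sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()


class UserCentricLedger:
    """Maintains an independent hash chain per user ID, so one user's
    transaction history is retrieved by walking a single short chain
    instead of scanning the whole ledger."""

    def __init__(self):
        self.heads = {}    # user_id -> latest block of that user
        self.chains = {}   # user_id -> list of blocks, oldest first

    def append(self, user_id, payload):
        prev = self.heads.get(user_id)
        block = Block(user_id, payload,
                      prev.hash if prev else "0" * 64)  # genesis marker
        self.heads[user_id] = block
        self.chains.setdefault(user_id, []).append(block)
        return block

    def history(self, user_id):
        """All of one user's transactions, in order, without touching
        any other user's data."""
        return [b.payload for b in self.chains.get(user_id, [])]


ledger = UserCentricLedger()
ledger.append("user-A", {"kwh": 50, "price": 12.0})
ledger.append("user-B", {"kwh": 20, "price": 11.5})
ledger.append("user-A", {"kwh": 30, "price": 12.4})
print(ledger.history("user-A"))  # user-A's two trades, in order
```

Under this sketch, retrieval cost scales with the length of one user's chain rather than the full ledger, which is consistent with the abstract's claimed gains in retrieval speed and response latency.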
{"title":"UCDM: A User-Centric Data Model in Power Blockchain","authors":"Zhixing Lv, Hui Yu, Kai Kang, Teng Chang Li, Guo Li Du","doi":"10.2174/2666255816666230112165555","DOIUrl":"https://doi.org/10.2174/2666255816666230112165555","url":null,"abstract":"\u0000\u0000As innovative information technology, blockchain has combined the advantages of decentralization, immutability, data provenance, and contract operation automatically, which can be used to solve the issues of single point failure, high trading cost, low effectiveness, and data potential risk in power trading. However, in the traditional power blockchain, the design of functional components in blockchain, such as the data structure of the block, does not take the actual features of power into account, thus leading to a performance bottleneck in practical application. Motivated by business characteristics of power trading, a user-centric data model UCDM in consortium blockchain is proposed to achieve efficient data storage and quick data retrieval.\u0000\u0000\u0000\u0000The proposed UCDM is designed by considering the requirements of transaction retrieval and analysis, thus supporting the requirements of concurrent data requests and mass data storage. The ID of each user will independently form its own chain over the blockchain.\u0000\u0000\u0000\u0000Compared with the traditional data model, the extensive experimental results demonstrate that the proposed UCDM has shorter processing delay, higher throughput, and shorter response latency, thus having practical value.\u0000\u0000\u0000\u0000Furthermore, the participant of the blockchain network has a unique identity over the world, which ensures high security during trading.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46856162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}