Pub Date: 2023-08-03 | DOI: 10.3390/computers12080158
Nahla Nurelmadina, R. Saeed, E. Saeid, E. Ali, Maha S. Abdelhaq, R. Alsaqour, N. Alharbe
This paper focuses on downlink power allocation for a cognitive radio-based non-orthogonal multiple access (CR-NOMA) system in a femtocell environment involving device-to-device (D2D) communication. The proposed power allocation scheme employs the greedy asynchronous distributed interference avoidance (GADIA) algorithm. This research aims to optimize the power allocation in the downlink transmission, considering the unique characteristics of the CR-NOMA-based femtocell D2D system. The GADIA algorithm is utilized to mitigate interference and effectively optimize power allocation across the network. This research uses a fairness index to present a novel fairness-constrained power allocation algorithm for a downlink non-orthogonal multiple access (NOMA) system. Through extensive simulations, the maximum rate under fairness (MRF) algorithm is shown to effectively optimize system performance while maintaining fairness among users. The fairness index is demonstrated to be adaptable to various user counts, offering a specified range with excellent responsiveness. The implementation of the GADIA algorithm exhibits promising results for sub-optimal frequency band distribution within the network. Mathematical models evaluated in MATLAB further confirm the superiority of CR-NOMA over optimum power allocation NOMA (OPA) and fixed power allocation NOMA (FPA) techniques.
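The abstract does not reproduce the fairness formula or the GADIA update rule. As a minimal illustrative sketch (not the authors' implementation), Jain's fairness index together with a toy greedy asynchronous band-selection loop in the spirit of GADIA could look like:

```python
import random

def jain_fairness(rates):
    """Jain's fairness index: 1.0 for perfectly equal rates, 1/n in the worst case."""
    n = len(rates)
    s = sum(rates)
    return s * s / (n * sum(r * r for r in rates)) if s else 0.0

def gadia_band_assignment(gains, n_bands, iters=100, seed=0):
    """Toy greedy asynchronous band selection: one randomly chosen node at a
    time moves to the band where it currently sees the least interference.
    gains[i][j] is the interference coupling from node j onto node i."""
    rng = random.Random(seed)
    n = len(gains)
    band = [rng.randrange(n_bands) for _ in range(n)]
    for _ in range(iters):
        i = rng.randrange(n)  # asynchronous: a single node updates per step
        interference = [sum(gains[i][j] for j in range(n)
                            if j != i and band[j] == b) for b in range(n_bands)]
        band[i] = interference.index(min(interference))
    return band
```

Each greedy move can only lower the moving node's interference, which is why such schemes settle into the sub-optimal frequency band distribution the abstract describes.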
Title: "Downlink Power Allocation for CR-NOMA-Based Femtocell D2D Using Greedy Asynchronous Distributed Interference Avoidance Algorithm" (Comput., art. no. 158)
Pub Date: 2023-08-02 | DOI: 10.3390/computers12080156
Ron S. Hirschprung
The digital era introduces significant challenges for privacy protection, which grow constantly as technology advances. Privacy is a personal trait, and individuals may desire a different level of privacy, which is known as their “privacy concern”. To achieve privacy, the individual has to act in the digital world, taking steps that define their “privacy behavior”. It has been found that there is a gap between people’s privacy concern and their privacy behavior, a phenomenon that is called the “privacy paradox”. In this research, we investigated whether the privacy paradox is domain-specific; in other words, does it vary for an individual when that person moves between different domains, for example, when using e-Health services vs. online social networks? A unique metric was developed to estimate the paradox in a way that enables comparisons, and an empirical study was conducted in which n = 437 validated participants acted in eight domains. It was found that the domain does indeed affect the magnitude of the privacy paradox. This finding has profound significance both for understanding the privacy paradox phenomenon and for the process of developing effective means to protect privacy.
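The paper's metric is not specified in the abstract. A deliberately simplified stand-in — the per-domain gap between normalized concern and behavior scores — illustrates how domain comparisons of the paradox could be made (the scoring scale and domain names here are hypothetical):

```python
def privacy_paradox_gap(concern, behavior):
    """Hypothetical paradox score: shortfall of protective behavior relative
    to stated concern. Both inputs are assumed normalized to [0, 1]; larger
    positive values indicate a stronger concern/behavior gap."""
    return concern - behavior

def rank_domains(responses):
    """responses: {domain: (mean_concern, mean_behavior)} -> domains sorted
    by paradox magnitude, strongest first."""
    scores = {d: privacy_paradox_gap(c, b) for d, (c, b) in responses.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A domain-specific paradox would show up here as materially different gap scores across domains for the same respondents.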
Title: "Is the Privacy Paradox a Domain-Specific Phenomenon" (Comput., art. no. 156)
Pub Date: 2023-08-02 | DOI: 10.3390/computers12080155
W. Villegas-Ch., J. Garcia-Ortiz, Jaime Govea
Protecting the integrity of images has become a growing concern due to the ease of manipulation and unauthorized dissemination of visual content. This article presents a comprehensive approach to safeguarding the authenticity and reliability of images through watermarking techniques. The main goal is to develop effective strategies that preserve the visual quality of images and are resistant to various attacks. The work focuses on developing a watermarking algorithm in Python, implemented with embedding in the spatial domain, transformation in the frequency domain, and pixel modification techniques. A thorough evaluation of efficiency, accuracy, and robustness is performed using numerical metrics and visual assessment to validate the embedded watermarks. The results demonstrate the algorithm’s effectiveness in protecting the integrity of the images, although some attacks may cause visible degradation. A comparison with related works is also made to highlight the relevance and effectiveness of the proposed techniques. It is concluded that watermarks provide an additional layer of protection in applications where the authenticity and integrity of the image are essential. In addition, the importance of future research is highlighted, addressing avenues for improvement and new applications that strengthen the protection of images and other digital media.
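The abstract mentions spatial-domain embedding among the techniques used. A minimal least-significant-bit (LSB) sketch — illustrative only, not the authors' algorithm — shows the basic idea:

```python
def embed_lsb(pixels, bits):
    """Spatial-domain embedding: write each watermark bit into the least
    significant bit of one 8-bit pixel (changes pixel values by at most 1)."""
    out = list(pixels)
    for k, bit in enumerate(bits):
        out[k] = (out[k] & ~1) | bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits watermark bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]
```

LSB embedding is visually imperceptible but fragile (recompression or noise destroys it), which is why robust schemes also work in the frequency domain, as the article's combination of techniques suggests.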
Title: "A Comprehensive Approach to Image Protection in Digital Environments" (Comput., art. no. 155)
Pub Date: 2023-08-02 | DOI: 10.3390/computation11080151
Ferdinand Lauren F. Carpena, L. Tayo
Knee osteoarthritis is a musculoskeletal defect specific to the soft tissues in the knee joint and is a degenerative disease that affects millions of people. Although drug intake can slow down progression, total knee arthroplasty has been the gold standard for the treatment of this disease. This surgical procedure involves replacing the tibiofemoral joint with an implant. The most common implants used for this require the removal of either the anterior cruciate ligament (ACL) alone or both cruciate ligaments, which alters the native knee joint mechanics. Bi-cruciate-retaining implants have been developed but are not frequently used due to the complexity of the procedure and the occurrence of intraoperative failures such as ACL and tibial eminence rupture. In this study, a knee joint implant was modified to include a bone graft intended to aid in ACL reconstruction. The mechanical behavior of the bone graft was studied through finite element analysis (FEA). The results show that the peak Christensen safety factor for the cortical bone is 0.021, while the maximum shear stress in the cancellous bone is 3 MPa. This indicates that the cancellous bone could fail when subjected to ACL loads, depending on the graft shear strength (which varies with the graft source), while the cortical bone could withstand the walking load. It would be necessary to optimize the bone graft geometry for stress distribution, as well as to evaluate the effectiveness of bone healing, prior to implementation.
Title: "Finite Element Analysis of ACL Reconstruction-Compatible Knee Implant Design with Bone Graft Component" (Comput., art. no. 151)
Pub Date: 2023-08-01 | DOI: 10.3390/computation11080150
O. Vatyukova, A. Klikunova, Anna A. Vasilchenko, A. Voronin, A. Khoperskov, M. Kharitonov
Extreme flooding of the floodplains of large lowland rivers poses a danger to the population due to the vastness of the flooded areas. This requires the organization of safe evacuation under a shortage of time and transport resources, since different parts of the territory are flooded at different times. We consider the case of a shortage of evacuation vehicles, in which the safe evacuation of the entire population to permanent evacuation points is impossible. Therefore, the evacuation is divided into two stages, with temporary evacuation points organized along the evacuation routes. Our goal is to develop a method for analyzing the minimum resource requirement for the safe evacuation of the population of floodplain territories, based on a mathematical model of flood dynamics and minimization of the number of vehicles over a set of safe evacuation schedules. The core of the approach is a numerical hydrodynamic model in the shallow-water approximation. Modeling the hydrological regime of a real water body requires a multi-layer geoinformation model of the territory with layers of relief, channel structure, and social infrastructure. High-performance computing is performed on GPUs using CUDA. The optimization problem is a variant of the resource investment problem from scheduling theory with deadlines for completing work and is solved on the basis of a heuristic algorithm. We use the results of numerical simulation of floods for the northern part of the Volga-Akhtuba floodplain to plot the dependence of the minimum number of vehicles that ensures the safe evacuation of the population. The minimum transport resources depend on the water discharge in the Volga River, the start time of the evacuation, and the localization of temporary evacuation points. The developed algorithm constructs a set of safe evacuation schedules for the minimum allowable number of vehicles in various flood scenarios.
The population evacuation schedules constructed for the Volga-Akhtuba floodplain can be used in practice for various vast river valleys.
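The paper's two-stage model with temporary evacuation points is richer than anything shown here. As a toy, single-stage illustration of the underlying question — the smallest fleet that meets per-zone flooding deadlines — one could binary-search the fleet size against an earliest-deadline-first feasibility check:

```python
def feasible(zones, m):
    """zones: list of (trips_needed, deadline_in_rounds). With m vehicles and
    one trip per vehicle per round, earliest-deadline-first scheduling works
    iff cumulative demand never exceeds capacity at any deadline."""
    done = 0
    for trips, deadline in sorted(zones, key=lambda z: z[1]):
        done += trips
        if done > m * deadline:
            return False
    return True

def min_vehicles(zones):
    """Binary search for the smallest fleet that evacuates every zone in time."""
    lo, hi = 1, sum(t for t, _ in zones)  # hi: everything done in one round
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(zones, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The real problem adds travel times, two-stage routing, and flood-dynamics-driven deadlines, which is why the authors resort to a heuristic rather than a closed-form check like this one.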
Title: "The Problem of Effective Evacuation of the Population from Floodplains under Threat of Flooding: Algorithmic and Software Support with Shortage of Resources" (Comput., art. no. 150)
Pub Date: 2023-08-01 | DOI: 10.3390/computation11080148
G. Plusch, S. Arsenyev-Obraztsov, O. Kochueva
We present a new regularization method called Weights Reset, which consists of periodically resetting a random portion of layer weights during the training process using predefined probability distributions. This technique was applied and tested on several popular classification datasets: Caltech-101, CIFAR-100, and Imagenette. We compare these results with other traditional regularization methods. The test results demonstrate that the Weights Reset method is competitive, achieving the best performance on the Imagenette dataset and the challenging and unbalanced Caltech-101 dataset. This method also shows potential to prevent vanishing and exploding gradients. However, this analysis is preliminary, and further comprehensive studies are needed to gain a deep understanding of the computing potential and limitations of the Weights Reset method. The observed results show that the Weights Reset method can be regarded as an effective extension of traditional regularization methods and can help to improve model performance and generalization.
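The abstract fully specifies the core operation, so a minimal sketch is straightforward (the zero-mean Gaussian reset distribution and its scale are assumptions; the paper says only "predefined probability distributions"):

```python
import random

def weights_reset(weights, reset_prob, sigma=0.05, rng=None):
    """One Weights Reset step: each weight is independently re-drawn from a
    predefined distribution (a zero-mean Gaussian here) with probability
    reset_prob, and kept unchanged otherwise."""
    rng = rng or random.Random()
    return [rng.gauss(0.0, sigma) if rng.random() < reset_prob else w
            for w in weights]
```

During training, such a step would typically be applied every fixed number of iterations to the flattened weights of selected layers.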
Title: "The Weights Reset Technique for Deep Neural Networks Implicit Regularization" (Comput., art. no. 148)
Pub Date: 2023-08-01 | DOI: 10.3390/computation11080149
A. Morozov, D. Reviznikov
Problems with interval uncertainties arise in many applied fields. The authors have earlier developed, tested, and validated an adaptive interpolation algorithm for solving this class of problems. The algorithm’s idea consists of constructing a piecewise polynomial function that interpolates the dependence of the problem solution on point values of interval parameters. The classical version of the algorithm uses polynomial interpolation on full grids and, with a large number of uncertainties, becomes difficult to apply due to the exponential growth of computational costs. Sparse grid interpolation requires significantly fewer computational resources than interpolation on full grids, so its use seems promising. A representative set of examples has previously confirmed the effectiveness of using adaptive sparse grids with a linear basis in the adaptive interpolation algorithm. The purpose of this paper is to apply adaptive sparse grids with a nonlinear basis to the modeling of dynamical systems with interval parameters. The corresponding interpolation polynomials on the quadratic basis and the fourth-degree basis are constructed.
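The quadratic and fourth-degree bases of the paper are not detailed in the abstract, but the classical linear-basis hierarchical construction that the paper extends can be sketched in one dimension (assuming, as the hat basis implies, that the interpolated function vanishes at the interval boundaries):

```python
def hat(level, index, x):
    """Hierarchical hat function on [0, 1]: centered at (2*index - 1)/2**level
    with support width 2**(1 - level); vanishes at the interval boundaries."""
    h = 2.0 ** (-level)
    center = (2 * index - 1) * h
    return max(0.0, 1.0 - abs(x - center) / h)

def hierarchical_interpolant(f, max_level):
    """Interpolate f on [0, 1] (with f(0) = f(1) = 0) by hierarchical
    surpluses: each new grid point stores the mismatch between f and the
    interpolant built from the coarser levels."""
    surpluses = {}  # (level, index) -> surplus coefficient

    def interp(x):
        return sum(a * hat(l, i, x) for (l, i), a in surpluses.items())

    for l in range(1, max_level + 1):
        for i in range(1, 2 ** (l - 1) + 1):
            x = (2 * i - 1) * 2.0 ** (-l)
            surpluses[(l, i)] = f(x) - interp(x)
    return interp
```

An adaptive variant refines only where the surpluses remain large; the paper replaces the hat functions with quadratic and fourth-degree basis functions to capture smooth dependencies with fewer points.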
Title: "Adaptive Sparse Grids with Nonlinear Basis in Interval Problems for Dynamical Systems" (Comput., art. no. 149)
Pub Date: 2023-07-30 | DOI: 10.3390/computers12080154
J. Thunberg, Taqwa Saeed, G. Sidorenko, Felipe Valle, A. Vinel
Connected and automated vehicles (CAVs) will be a key component of future cooperative intelligent transportation systems (C-ITS). Since the adoption of C-ITS is not foreseen to happen instantly, not all of its elements are going to be connected at the early deployment stages. We consider a scenario where vehicles approaching a traffic light are connected to each other, but the traffic light itself is not cooperative. Information about intended trajectories, such as decisions on how and when to accelerate, decelerate, and stop, is communicated among the vehicles involved. We provide an optimization-based procedure for efficient and safe passing of traffic lights (or other temporary road blockages) using vehicle-to-vehicle (V2V) communication. We locally optimize objectives that promote efficiency, such as less deceleration and a larger minimum velocity, while maintaining safety in terms of no collisions. The procedure is computationally efficient, as it mainly involves a gradient descent algorithm for one single parameter.
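The objective function is not given in the abstract, but the "gradient descent algorithm for one single parameter" can be illustrated with a numerical-gradient scalar minimizer and a hypothetical smooth trade-off between deviation from a desired passing speed and dropping below a comfortable minimum velocity (both the objective and its constants are invented for illustration):

```python
def minimize_scalar(cost, x0, lr=0.1, steps=200, eps=1e-6):
    """Plain gradient descent on a single parameter, using a
    central-difference numerical gradient."""
    x = x0
    for _ in range(steps):
        grad = (cost(x + eps) - cost(x - eps)) / (2.0 * eps)
        x -= lr * grad
    return x

# Hypothetical objective (not from the paper): penalize deviation from a
# desired passing speed of 12 m/s plus dropping below 8 m/s.
cost = lambda v: (v - 12.0) ** 2 + 0.1 * max(0.0, 8.0 - v) ** 2
```

Because only one parameter is optimized, each descent step is a constant-time operation, which is consistent with the abstract's claim of computational efficiency.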
Title: "Cooperative Vehicles versus Non-Cooperative Traffic Light: Safe and Efficient Passing" (Comput., art. no. 154)
Pub Date: 2023-07-29 | DOI: 10.3390/computers12080153
Marta Montenegro Rueda, José Fernández-Cerero, J. Fernández-Batanero, Eloy López-Meneses
The aim of this study is to present, based on a systematic review of the literature, an analysis of the impact of the application of the ChatGPT tool in education. The data were obtained by reviewing the results of studies published since the launch of this application (November 2022) in three leading scientific databases in the world of education (Web of Science, Scopus, and Google Scholar). The sample consisted of 12 studies. Using a descriptive and quantitative methodology, the most significant data are presented. The results show that the implementation of ChatGPT in the educational environment has a positive impact on the teaching–learning process; however, the results also highlight the importance of teachers being trained to use the tool properly. Although ChatGPT can enhance the educational experience, its successful implementation requires teachers to be familiar with its operation. These findings provide a solid basis for future research and decision-making regarding the use of ChatGPT in the educational context.
Title: "Impact of the Implementation of ChatGPT in Education: A Systematic Review" (Comput., art. no. 153)
Pub Date : 2023-07-28 DOI: 10.3390/computers12080152
A. .. Gavade, R. Nerli, Neel Kanwal, Priyanka A. Gavade, Shridhar Sunilkumar Pol, Syed Sajjad Hussain Rizvi
Prostate cancer (PCa) is a significant health concern for men worldwide, where early detection and effective diagnosis can be crucial for successful treatment. Multiparametric magnetic resonance imaging (mpMRI) has evolved into a significant imaging modality in this regard, which provides detailed images of the anatomy and tissue characteristics of the prostate gland. However, interpreting mpMRI images can be challenging for humans due to the wide range of appearances and features of PCa, which can be subtle and difficult to distinguish from normal prostate tissue. Deep learning (DL) approaches can be beneficial in this regard by automatically differentiating relevant features and providing an automated diagnosis of PCa. DL models can assist the existing clinical decision support system by saving a physician’s time in localizing regions of interest (ROIs) and help in providing better patient care. In this paper, contemporary DL models are used to create a pipeline for the segmentation and classification of mpMRI images. Our DL approach follows two steps: a U-Net architecture for segmenting ROI in the first stage and a long short-term memory (LSTM) network for classifying the ROI as either cancerous or non-cancerous. We trained our DL models on the I2CVB (Initiative for Collaborative Computer Vision Benchmarking) dataset and conducted a thorough comparison with our experimental setup. Our proposed DL approach, with simpler architectures and training strategy using a single dataset, outperforms existing techniques in the literature. Results demonstrate that the proposed approach can detect PCa disease with high precision and also has a high potential to improve clinical assessment.
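The two-step pipeline the abstract describes (segment an ROI first, then classify it as cancerous or non-cancerous) can be sketched as follows. This is a minimal illustration of the data flow only: `segment_roi` and `classify_roi` are hypothetical NumPy stand-ins for the paper's actual U-Net and LSTM models, and the threshold values are invented for the example.

```python
# Hypothetical sketch of a two-stage segment-then-classify pipeline.
# The real paper uses a U-Net (stage 1) and an LSTM (stage 2); the
# threshold "segmenter" and mean-intensity "classifier" below are
# placeholders that only illustrate how the stages connect.
import numpy as np

def segment_roi(mpmri_slice: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Stage 1 (U-Net stand-in): binary mask of candidate tissue."""
    return (mpmri_slice > threshold).astype(np.uint8)

def crop_to_roi(mpmri_slice: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop the slice to the bounding box of the segmented mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:  # nothing segmented
        return np.zeros((0, 0))
    return mpmri_slice[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def classify_roi(roi: np.ndarray, cutoff: float = 0.7) -> str:
    """Stage 2 (LSTM stand-in): label the cropped ROI."""
    if roi.size == 0:
        return "non-cancerous"
    return "cancerous" if roi.mean() > cutoff else "non-cancerous"

# Synthetic 8x8 "slice" with one bright 3x3 lesion-like region.
slice_ = np.zeros((8, 8))
slice_[2:5, 2:5] = 0.9
mask = segment_roi(slice_)
roi = crop_to_roi(slice_, mask)
print(roi.shape, classify_roi(roi))  # -> (3, 3) cancerous
```

In the actual approach, stage 1 would emit a learned per-pixel mask and stage 2 would consume the cropped ROI features sequentially; the point of the sketch is that classification operates on the segmenter's output rather than the whole image, which is what saves the physician's ROI-localization time mentioned above.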
{"title":"Automated Diagnosis of Prostate Cancer Using mpMRI Images: A Deep Learning Approach for Clinical Decision Support","authors":"A. .. Gavade, R. Nerli, Neel Kanwal, Priyanka A. Gavade, Shridhar Sunilkumar Pol, Syed Sajjad Hussain Rizvi","doi":"10.3390/computers12080152","DOIUrl":"https://doi.org/10.3390/computers12080152","url":null,"abstract":"Prostate cancer (PCa) is a significant health concern for men worldwide, where early detection and effective diagnosis can be crucial for successful treatment. Multiparametric magnetic resonance imaging (mpMRI) has evolved into a significant imaging modality in this regard, which provides detailed images of the anatomy and tissue characteristics of the prostate gland. However, interpreting mpMRI images can be challenging for humans due to the wide range of appearances and features of PCa, which can be subtle and difficult to distinguish from normal prostate tissue. Deep learning (DL) approaches can be beneficial in this regard by automatically differentiating relevant features and providing an automated diagnosis of PCa. DL models can assist the existing clinical decision support system by saving a physician’s time in localizing regions of interest (ROIs) and help in providing better patient care. In this paper, contemporary DL models are used to create a pipeline for the segmentation and classification of mpMRI images. Our DL approach follows two steps: a U-Net architecture for segmenting ROI in the first stage and a long short-term memory (LSTM) network for classifying the ROI as either cancerous or non-cancerous. We trained our DL models on the I2CVB (Initiative for Collaborative Computer Vision Benchmarking) dataset and conducted a thorough comparison with our experimental setup. Our proposed DL approach, with simpler architectures and training strategy using a single dataset, outperforms existing techniques in the literature. Results demonstrate that the proposed approach can detect PCa disease with high precision and also has a high potential to improve clinical assessment.","PeriodicalId":10526,"journal":{"name":"Comput.","volume":"78 1","pages":"152"},"PeriodicalIF":0.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89607173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}