"A Potent View on the Effects of E-Learning" (Sherin Eliyas, P. Ranjana). International Journal of Grid and High Performance Computing, 2023-12-18. DOI: 10.4018/ijghpc.335035

The pandemic drove a drastic expansion of online learning platforms. This article examines the reasons behind the rise and fall in their use. In the survey conducted, nearly half of the students (48.4%) had not completed the course they enrolled in, 14.5% had reached at least the halfway point, and the remaining 37.1% had completed the course. Among the students who did not finish, the most frequently cited barrier was lack of interaction (36.7%).
"Pre-Cutoff Value Calculation Method for Accelerating Metric Space Outlier Detection" (Honglong Xu, Zhonghao Liang, Kaide Huang, Guoshun Huang, Yan He). International Journal of Grid and High Performance Computing, 2023-11-28. DOI: 10.4018/ijghpc.334125

Outlier detection is an important data mining technique. In this article, the triangle inequality of distances is leveraged to design a pre-cutoff value (PCV) algorithm that calculates an outlier-degree pre-threshold without additional distance computations. The algorithm can accelerate a variety of metric-space outlier detection algorithms. Experimental results on multiple real datasets show that PCV reduces the runtime and the number of distance computations of the iORCA algorithm by 14.59% and 15.73%, respectively; even against the newer high-performance ADPOD algorithm, it achieves reductions of 1.41% and 0.45%. Notably, non-outlier exclusion for the first data block of the dataset improves significantly, with an exclusion rate of up to 36.5% and a corresponding 23.54% reduction in detection time for that block. While achieving these results, PCV retains the data-type generality of metric-space algorithms.
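The pruning idea underlying metric-space cutoffs can be illustrated with the triangle inequality; the sketch below is not the authors' PCV algorithm, only the standard pivot-distance bound such methods build on (the pivot, points, and cutoff radius are illustrative):

```python
import math

def dist(a, b):
    """Euclidean distance; any metric satisfying the triangle inequality works."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def lower_bound(d_xp, d_yp):
    """Triangle inequality gives |d(x,p) - d(y,p)| <= d(x,y)."""
    return abs(d_xp - d_yp)

# Precompute every point's distance to a pivot p once.
pivot = (0.0, 0.0)
points = [(1.0, 1.0), (5.0, 5.0), (0.5, 0.2)]
d_to_pivot = [dist(p, pivot) for p in points]

# When scanning candidates for x, a point y can be skipped whenever the
# cheap lower bound already exceeds the current cutoff radius, avoiding
# the full distance computation d(x, y).
cutoff = 1.0
x_idx = 0
pruned = [y_idx for y_idx in range(1, len(points))
          if lower_bound(d_to_pivot[x_idx], d_to_pivot[y_idx]) > cutoff]
```

The saving comes from replacing a full metric evaluation with a subtraction, which is exactly why such bounds preserve the data-type generality of metric-space algorithms.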
"A Security Method for Cloud Storage Using Data Classification" (Oussama Arki, Abdelhafid Zitouni, M. Djoudi). International Journal of Grid and High Performance Computing, 2023-09-01. DOI: 10.4018/ijghpc.329602

Cloud computing is an information technology model that provides computing and storage resources as a service. Data storage security remains the main challenge in adopting this model, and the common solution is data encryption. However, handling all data under the same security policy is poor practice, because not all data have the same sensitivity for the data owner. This research proposes a new method to improve the security of data in cloud storage. It combines machine learning with multi-criteria decision-making to provide a classification method that assigns data to a suitable encryption system according to its category. A CloudSim simulation demonstrates the effectiveness of the proposed method: the results show that it is more efficient and accurate and takes less processing time, while ensuring data confidentiality and integrity.
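As a rough illustration of classify-then-encrypt routing: the paper's actual ML and multi-criteria classifier is not reproduced here, so the rule-based classifier, class names, and cipher choices below are all hypothetical stand-ins for the idea of matching encryption strength to data sensitivity:

```python
# Hypothetical sensitivity classes mapped to increasingly strong ciphers.
# The real system would replace `classify` with a trained ML/MCDM model.
CIPHER_BY_CLASS = {
    "public": None,            # stored in plaintext
    "internal": "AES-128",
    "confidential": "AES-256",
}

def classify(record):
    """Toy stand-in classifier based on metadata flags."""
    if record.get("contains_pii") or record.get("financial"):
        return "confidential"
    if record.get("owner_only"):
        return "internal"
    return "public"

def route(record):
    """Pick the encryption scheme for a record before it is stored."""
    return CIPHER_BY_CLASS[classify(record)]
```

The design point is that only the most sensitive category pays the cost of the strongest cipher, which is where the reported processing-time savings would come from.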
"An Energy-Efficient Multi-Channel Design for Distributed Wireless Sensor Networks" (Sunil Kumar). International Journal of Grid and High Performance Computing, 2023-09-01. DOI: 10.4018/ijghpc.329601

This article discusses the importance of designing an efficient medium access control (MAC) protocol for wireless sensor networks (WSNs) to optimize energy consumption at the data link layer when transmitting high-traffic applications. The proposed protocol, EE-MMAC, is an energy-efficient multichannel MAC that reduces energy consumption by minimizing idle listening, collisions, overhearing, and control-packet overhead. EE-MMAC combines a directional antenna with a periodic sleep technique in a multi-channel environment. Nodes exchange control packets on the control channel to choose a data channel and to decide the beam direction of the flow. Simulation results show that EE-MMAC achieves significant energy gains (30% to 45% less than comparable IEEE 802.11 and MMAC) in terms of energy efficiency, packet delivery ratio, and throughput.
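A back-of-the-envelope duty-cycle model shows why periodic sleep cuts idle-listening energy; the power figures below are illustrative assumptions for a typical sensor radio, not EE-MMAC parameters:

```python
def radio_energy(t_total, duty_cycle, p_active=60e-3, p_sleep=3e-6):
    """Energy (J) over t_total seconds for a radio that is awake a
    fraction `duty_cycle` of the time and asleep otherwise.
    Power draws are illustrative, not measured EE-MMAC values."""
    return t_total * (duty_cycle * p_active + (1 - duty_cycle) * p_sleep)

# An always-on radio vs. one that sleeps 90% of the time over 100 s:
always_on = radio_energy(100, 1.0)
duty_cycled = radio_energy(100, 0.1)
```

Because sleep power is orders of magnitude below active power, even a modest duty cycle dominates the energy budget, which is the intuition behind the 30% to 45% gains reported.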
"On Allocation Algorithms for Manycore Systems With Network on Chip" (Abeer Shdefat, S. Bani-Mohammad, I. Ababneh). International Journal of Grid and High Performance Computing, 2023-03-31, pp. 1-22. DOI: 10.4018/ijghpc.320789

Single-chip multicore processors and their network-on-chip interconnection mechanisms have received extensive interest since the early 2000s. The mesh topology is popular in networks on chip, but a common issue is that it can result in high energy consumption and chip temperatures. It has recently been shown that mapping communicating tasks to neighboring cores can reduce communication delays and the associated power consumption, and improve throughput. This paper evaluates the first-fit contiguous allocation strategy and non-contiguous allocation strategies that attempt to achieve a degree of contiguity among the cores allocated to a job. One of the non-contiguous strategies is new: referred to as the neighbor allocation strategy, it decomposes the job request so that it can be accommodated by free core submeshes and individual cores that retain a degree of contiguity. The results show that the relative merits of the policies depend on the job's communication pattern.
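First fit on a 2D mesh can be sketched as a row-major scan for the first free submesh of the requested shape; this is a generic textbook version, not the paper's exact implementation:

```python
def first_fit(mesh_w, mesh_h, busy, req_w, req_h):
    """Return the base (x, y) of the first free req_w x req_h submesh in a
    mesh_w x mesh_h mesh, scanning base positions in row-major order.
    `busy` is the set of (x, y) coordinates of allocated cores."""
    for y in range(mesh_h - req_h + 1):
        for x in range(mesh_w - req_w + 1):
            if all((x + i, y + j) not in busy
                   for i in range(req_w) for j in range(req_h)):
                return (x, y)
    return None  # no contiguous submesh available
```

Contiguous schemes like this keep communicating tasks adjacent at the cost of possible fragmentation, which is the trade-off the non-contiguous strategies in the paper try to soften.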
"A Parallel Hybrid Feature Selection Approach Based on Multi-Correlation and Evolutionary Multitasking" (Mohamed Amine Azaiz, Djamel Amar Bensaber). International Journal of Grid and High Performance Computing, 2023-03-24, pp. 1-23. DOI: 10.4018/ijghpc.320475

Particle swarm optimization (PSO) has been successfully applied to feature selection (FS) due to its efficiency and ease of implementation. Like most evolutionary algorithms, however, it still suffers from a high computational burden and poor generalization ability. Multifactorial optimization (MFO), an effective evolutionary multitasking paradigm, has been widely used for solving complex problems through implicit knowledge transfer between related tasks. Based on MFO, this study proposes a PSO-based FS method for high-dimensional classification that shares information between two related tasks generated from a dataset using two different measures of correlation. Specifically, two subsets of relevant features are generated using the symmetric uncertainty measure and the Pearson correlation coefficient, and each subset is assigned to one task. To improve runtime, the authors propose a parallel fitness evaluation of particles under Apache Spark. The results show that the proposed FS method can achieve higher classification accuracy with a smaller feature subset in a reasonable time.
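The two relevance measures used to build the tasks can be computed directly; this is a generic sketch (symmetric uncertainty for discrete features, Pearson for numeric ones), not the paper's Spark implementation:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * (H(X) + H(Y) - H(X, Y)) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    hxy = entropy(list(zip(x, y)))  # joint entropy via value pairs
    return 2 * (hx + hy - hxy) / (hx + hy)

def pearson(x, y):
    """Pearson correlation coefficient of two numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Ranking features against the class label with each measure yields two (generally different) relevant subsets, one per task, which is the multi-correlation setup the method exploits.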
"Duck Pack Optimization With Deep Transfer Learning-Enabled Oral Squamous Cell Carcinoma Classification on Histopathological Images" (Savita Shetty, A. Patil). International Journal of Grid and High Performance Computing, 2023-03-22, pp. 1-21. DOI: 10.4018/ijghpc.320474

Early detection and classification of oral squamous cell carcinoma (OSCC) is a widespread challenge for efficient treatment, enhancing the survival rate, and reducing the death rate. It is therefore necessary to design effective diagnosis models to assist pathologists in the OSCC examination process. In recent times, deep learning (DL) models have shown considerable improvement in the design of effective computer-aided diagnosis models for OSCC using histopathological images. In this view, this paper develops a novel duck pack optimization with deep transfer learning-enabled oral squamous cell carcinoma classification (DPODTL-OSC3) model for histopathological images. The goal of the DPODTL-OSC3 model is to improve classifier outcomes by sorting histopathological images into normal and cancerous class labels. Finally, a variational autoencoder (VAE) model is utilized for the detection and classification of OSCC. The performance validation and comparative result analysis of the DPODTL-OSC3 model are conducted on a histopathological imaging database.
"Copyright Protection of Music Multimedia Works Fused With Digital Audio Watermarking Algorithm" (Wanxing Huang). International Journal of Grid and High Performance Computing, 2023-03-09, pp. 1-17. DOI: 10.4018/ijghpc.318406

Copyright law is important in the media sector: the owner of an original creative work has the exclusive right to consent to its publication, broadcast, translation, or modification. Behind the widespread use of multimedia technologies lies a growing number of digital copyright issues, and digital watermarking approaches to infringement prevention need immediate improvement. Zero-watermarking has recently gained popularity as one alternative under consideration. A novel sparse representation persistent-based digital audio watermarking algorithm (SRP-DAWA) is presented to increase the resilience of zero-watermarking. In the proposed method, an optimal over-complete dictionary is generated from the background audio signal using an improved singular value decomposition (iSVD) technique. The sparse coefficients of the segmented sample data are then calculated with the orthogonal matching pursuit (OMP) method, and the corresponding sparse matrix is generated.
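OMP itself can be sketched in a few lines: greedily pick the dictionary atom most correlated with the residual, then re-fit by least squares over the selected atoms. The identity dictionary in the usage below is a toy illustration, not the iSVD-learned dictionary from the paper:

```python
import numpy as np

def omp(D, s, k):
    """Orthogonal matching pursuit: approximate signal s with at most k
    atoms (columns, assumed unit-norm) of dictionary D. Returns the
    sparse coefficient vector x with D @ x ~= s."""
    residual = s.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x
```

With an over-complete dictionary learned from the host audio, the resulting sparse matrix is what the zero-watermarking scheme binds to the copyright signature.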
"Lossless Decoding Method of Compressed Coded Video Based on Inter-Frame Differential Background Model: Multi-Algorithm Joint Lossless Decoding" (Lehua Hu). International Journal of Grid and High Performance Computing, 2023-02-16, pp. 1-13. DOI: 10.4018/ijghpc.318407

To address the low decoding accuracy, long decoding time, and poor decoded-image quality of traditional lossless decoding methods for compressed coded video, a lossless decoding method based on an inter-frame difference background model is proposed. At the encoding end, the inter-frame difference background model is used to extract single-frame video images, and a mixed coding method compresses the video losslessly. At the decoding end, a CS-SOMP joint reconstruction algorithm, combining compressive sensing with the synchronous orthogonal matching pursuit (SOMP) algorithm and the K-SVD dictionary learning algorithm, losslessly decodes the compressed video. Simulation results show that the proposed method achieves higher accuracy and shorter decoding time while preserving the quality of the decoded video image.
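The inter-frame difference step at the heart of the background model can be sketched as a thresholded absolute difference between consecutive frames; the threshold value is an illustrative assumption:

```python
import numpy as np

def foreground_mask(prev_frame, cur_frame, threshold=25):
    """Inter-frame difference: flag pixels whose absolute change between
    consecutive frames exceeds `threshold` as moving foreground.
    Casting uint8 frames to int16 avoids wraparound on subtraction."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```

Pixels outside the mask belong to the (largely static) background, which is what lets the encoder treat background and moving regions differently.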
"Coding and Decoding Optimization of Remote Video Surveillance Systems: Consider Local Area Network" (Lehua Hu). International Journal of Grid and High Performance Computing, 2023-02-16, pp. 1-15. DOI: 10.4018/ijghpc.318405

Current coding and decoding methods for remote video surveillance systems produce decoded video with a high distortion rate and low decoding efficiency. Considering the local area network (LAN) setting, an optimization method for the coding and decoding of remote video surveillance systems is proposed. The LAN is used to collect, process, and output image information. After preprocessing, the low-frame-rate surveillance video is decoded in parallel, and the motion information of lost frames is estimated to achieve fast coding and decoding. Experimental results show that the proposed method has a low distortion rate and high decoding efficiency, and is of practical value.