"A study on the big data scientific research model and the key mechanism based on blockchain" by Shen Wen. Open Computer Science 12(1): 357–363 (2022). doi:10.1515/comp-2022-0258
Abstract: In an era of open data sharing, scientific research has an urgent need to unlock the value of big data. However, research data still sit in isolated "data islands," which seriously hampers both the quality and the progress of scientific research. To address this, the article proposes and implements a blockchain-based big data model for scientific research together with its key mechanisms. The K-means algorithm is used to cluster scientific research data, and the decentralization, smart-contract, and tamper-resistance properties of the blockchain are exploited to design a distributed data model. The article proposes a BIZi network (blockchain, IPFS, and Zigzag code) that combines a blockchain, the InterPlanetary File System (IPFS), and Zigzag erasure codes to achieve reliable data connection, and uses a set of data access control and data service customization mechanisms to effectively serve the data requirements of scientific research. Experiments show that the transmission speed of the IPFS network meets the needs of scientific research; the larger the number of file blocks, the higher the fault tolerance of the scheme and the better the storage efficiency. In a completely open data-sharing scenario, a high Byzantine-node fault tolerance is required to keep the blockchain stable, and the best consensus algorithm evaluated reaches a fault tolerance of 49%.
"Design of a web laboratory interface for ECG signal analysis using MATLAB builder NE" by Hussain A. Jaber, Hadeel K. Aljobouri, and Ilyas Çankaya. Open Computer Science 12(1): 227–237 (2022). doi:10.1515/comp-2022-0244
Abstract: An electrocardiogram (ECG) is a noninvasive test; detecting defects in heart rate or rhythm, or changes in the shape of the QRS complex, is essential for identifying cardiac arrhythmia. In this study, novel web-ECG simulation tools are proposed using MATLAB Builder NE with WebFigure and the ASP.NET platform. The proposed web-ECG simulation tools consist of two components: the first analyzes normal real ECG signals by calculating the P, Q, R, S, and T values and detecting the heart rate, while the second extracts the features of several types of abnormal real ECG signals. Simple new mathematical equations, implemented in MATLAB, are proposed for calculating the PQRST values. The web ECG can plot normal ECG signals and five arrhythmia cases, so users can easily calculate the PQRST values with the proposed method. The ECG simulation tools have been tested for validity and educational contribution with 62 undergraduate and graduate students at the Biomedical Engineering Department of Al-Nahrain University, Iraq. The tools are designed for academic learning and can be run by a student in any web browser without installing MATLAB or any extra programs. They could provide a laboratory course for ECG signal analysis using a few buttons, as well as help develop the educational skills of students and researchers.
{"title":"Design of a web laboratory interface for ECG signal analysis using MATLAB builder NE","authors":"Hussain A. Jaber, Hadeel K. Aljobouri, Ilyas Çankaya","doi":"10.1515/comp-2022-0244","DOIUrl":"https://doi.org/10.1515/comp-2022-0244","url":null,"abstract":"Abstract An electrocardiogram (ECG) is a noninvasive test, determining any defect in the heart rate or rhythm or changes in the shape of the QRS complex is very significant to detect cardiac arrhythmia. In this study, novel web-ECG simulation tools were proposed using MATLAB Builder NE with WebFigure and ASP.NET platform. The proposed web-ECG simulation tools consisted of two components. First, involved the analyses of normal real ECG signals by calculating the P, Q, R, S, and T values and detecting heart rate, while the second part related to extracting the futures of several types of abnormality real ECG. For calculating the PQRST values, simple and new mathematical equations are proposed in the current study using MATLAB. The Web ECG is capable to plot normal ECG signals and five arrhythmia cases, so the users are able to calculate PQRST easily using the proposed simple method. ECG simulation tools have been tested for validity and educational contributions with 62 undergraduate and graduate students at the Al-Nahrain University-Biomedical Engineering Department, Iraq. The proposed ECG simulation tools have been designed for academic learning to be run easily by a student using only any web browsers without the need for installing MATLAB or any extra programs. The proposed tools could provide a laboratory course for ECG signal analysis using a few buttons, as well as increase and develop the educational skills of students and researchers.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"227 - 237"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45678639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Construction of a gas condensate field development model" by A. Skiba. Open Computer Science 12(1): 103–111 (2022). doi:10.1515/comp-2020-0226
Abstract: This article develops and verifies an aggregated approximate mathematical model of gas condensate field development using a cyclic process. The essence of the cyclic process is to pump the drained gas back into the productive formation to reduce the pressure drop in the deposit, which allows for increased condensate recovery in the future. The model is a continuous dynamic system with control parameters. It is a modification of the dynamic aggregated model of a purely gas field, designed for planning over a sufficiently long period with limited information about the state of the reservoir (the initial flow rate of wells, the initial recoverable gas reserves, the initial reservoir pressure, and the dependence of the potential condensate content per unit volume of wet gas on reservoir pressure). The model construction rests on a non-standard approach: logical simplifications and a priori assumptions about the processes occurring in the field during its development. The control instruments in the model are the growth in the number of production and injection wells and the proportion of injection wells involved in production. The purpose of the article is to compute various scenarios for the dynamics of the fundamental development indicators of a gas condensate field over a sufficiently long planning horizon at the preliminary design stage.
"Deep learning-based ensemble model for brain tumor segmentation using multi-parametric MR scans" by Suchismita Das, S. Bose, G. K. Nayak, and Sanjay Saxena. Open Computer Science 12(1): 211–226 (2022). doi:10.1515/comp-2022-0242
Abstract: Glioma is a type of fast-growing brain tumor in which the shape, size, and location of the tumor vary from patient to patient. Manual extraction of the region of interest (tumor) with the help of a radiologist is a difficult and time-consuming task. To overcome this problem, we propose a fully automated, deep learning-based ensemble method for brain tumor segmentation on four different 3D multimodal magnetic resonance imaging (MRI) scans. Segmentation is performed by three of the most efficient encoder–decoder deep models, and their results are measured with well-known segmentation metrics. A statistical analysis of the models is then performed, and an ensemble model is designed by selecting the highest Matthews correlation coefficient achieved on a particular MRI modality. The article makes two main contributions: a detailed comparison of the three models, and an ensemble model that combines the three models based on their segmentation accuracy. The model is evaluated on the brain tumor segmentation (BraTS) 2017 dataset, and the F1 scores of the final combined model are 0.92, 0.95, 0.93, and 0.84 for the whole tumor, core, enhancing tumor, and edema sub-tumor, respectively. Experimental results show that the model outperforms the state of the art.
{"title":"Deep learning-based ensemble model for brain tumor segmentation using multi-parametric MR scans","authors":"Suchismita Das, S. Bose, G. K. Nayak, Sanjay Saxena","doi":"10.1515/comp-2022-0242","DOIUrl":"https://doi.org/10.1515/comp-2022-0242","url":null,"abstract":"Abstract Glioma is a type of fast-growing brain tumor in which the shape, size, and location of the tumor vary from patient to patient. Manual extraction of a region of interest (tumor) with the help of a radiologist is a very difficult and time-consuming task. To overcome this problem, we proposed a fully automated deep learning-based ensemble method of brain tumor segmentation on four different 3D multimodal magnetic resonance imaging (MRI) scans. The segmentation is performed by three most efficient encoder–decoder deep models for segmentation and their results are measured through the well-known segmentation metrics. Then, a statistical analysis of the models was performed and an ensemble model is designed by considering the highest Matthews correlation coefficient using a particular MRI modality. There are two main contributions of the article: first the detailed comparison of the three models, and second proposing an ensemble model by combining the three models based on their segmentation accuracy. The model is evaluated using the brain tumor segmentation (BraTS) 2017 dataset and the F1 score of the final combined model is found to be 0.92, 0.95, 0.93, and 0.84 for whole tumor, core, enhancing tumor, and edema sub-tumor, respectively. Experimental results show that the model outperforms the state of the art.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"211 - 226"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47559941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Rainfall prediction system for Bangladesh using long short-term memory" by M. Billah, Md. Nasim Adnan, Mostafijur Rahman Akhond, Romana Rahman Ema, Md. Alam Hossain, and S. Galib. Open Computer Science 12(1): 323–331 (2022). doi:10.1515/comp-2022-0254
Abstract: Rainfall prediction is a challenging task of great significance in weather forecasting. Accurate rainfall prediction can play a great role in agriculture, aviation, the study of natural phenomena, flood management, construction, transport, and other areas. Weather, or climate, is considered one of the most complex systems, and chaos, the so-called "butterfly effect," limits how predictable the weather can be. Rainfall is therefore not easy to predict with conventional machine learning approaches, although several studies have addressed rainfall prediction with different computational methods. To build a rainfall prediction system for Bangladesh that copes with this chaotic behavior, this study uses a long short-term memory (LSTM) network driven by a historical data set, which overcomes the complexity and chaos-related problems faced by other approaches. The proposed method has three principal phases: (i) the 10 most useful features are chosen from 20 data attributes; (ii) a two-layer LSTM model is designed; and (iii) the LSTM model is compared with both conventional machine learning approaches and recent works. The approach achieves 97.14% accuracy in predicting rainfall (in millimeters), outperforming state-of-the-art solutions, and it is a pioneering rainfall prediction system for Bangladesh.
{"title":"Rainfall prediction system for Bangladesh using long short-term memory","authors":"M. Billah, Md. Nasim Adnan, Mostafijur Rahman Akhond, Romana Rahman Ema, Md. Alam Hossain, S. Galib","doi":"10.1515/comp-2022-0254","DOIUrl":"https://doi.org/10.1515/comp-2022-0254","url":null,"abstract":"Abstract Rainfall prediction is a challenging task and has extreme significance in weather forecasting. Accurate rainfall prediction can play a great role in agricultural, aviation, natural phenomenon, flood, construction, transport, etc. Weather or climate is assumed to be one of the most complex systems. Again, chaos, also called as “butterfly effect,” limits our ability to make weather predictable. So, it is not easy to predict rainfall by conventional machine learning approaches. However, several kinds of research have been proposed to predict rainfall by using different computational methods. To accomplish chaotic rainfall prediction system for Bangladesh, in this study, historical data set-driven long short term memory (LSTM) networks method has been used, which overcomes the complexities and chaos-related problems faced by other approaches. The proposed method has three principal phases: (i) The most useful 10 features are chosen from 20 data attributes. (ii) After that, a two-layer LSTM model is designed. (iii) Both conventional machine learning approaches and recent works are compared with the LSTM model. This approach has gained 97.14% accuracy in predicting rainfall (in millimeters), which outperforms the state-of-the-art solutions. Also, this work is a pioneer work to the rainfall prediction system for Bangladesh.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"323 - 331"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46565814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"An alternative C++-based HPC system for Hadoop MapReduce" by Vignesh Srinivasakumar, Muthumanikandan Vanamoorthy, Siddarth Sairaj, and S. Ganesh. Open Computer Science 12(1): 238–247 (2022). doi:10.1515/comp-2022-0246
Abstract: MapReduce (MR) is a technique that vastly improves distributed data processing and can massively speed up computation. Hadoop and MR, however, rely on the memory-intensive JVM and Java. An MR framework based on High-Performance Computing (HPC) could be used instead, one that is both more memory-efficient and faster than standard MR. This article explores a C++-based approach to MR and its feasibility along multiple factors such as developer friendliness, deployment interface, efficiency, and scalability. It also introduces Eager Reduction and Delayed Reduction techniques to speed up MR.
{"title":"An alternative C++-based HPC system for Hadoop MapReduce","authors":"Vignesh Srinivasakumar, Muthumanikandan Vanamoorthy, Siddarth Sairaj, S. Ganesh","doi":"10.1515/comp-2022-0246","DOIUrl":"https://doi.org/10.1515/comp-2022-0246","url":null,"abstract":"Abstract MapReduce (MR) is a technique used to improve distributed data processing vastly and can massively speed up computation. Hadoop and MR rely on memory-intensive JVM and Java. A MR framework based on High-Performance Computing (HPC) could be used, which is both memory-efficient and faster than standard MR. This article explores a C++-based approach to MR and its feasibility on multiple factors like developer friendliness, deployment interface, efficiency, and scalability. This article also introduces Eager Reduction and Delayed Reduction techniques to speed up MR.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"238 - 247"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48560209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Multisource data acquisition based on single-chip microcomputer and sensor technology" by Yahui Huang and Daozhong Lei. Open Computer Science 12(1): 416–426 (2022). doi:10.1515/comp-2022-0261
Abstract: Today we are flooded with data and information every day. Data are a reliable basis for scientific research: they not only expose real problems in various fields but also guide people toward the key factors that cause those problems. Big data is a response to this era of information explosion, and it is precisely the accumulation of quantity that reveals underlying patterns more clearly. Political, economic, cultural, and other fields are all closely tied to data. Microcontroller and sensor technology can help open up new branches of multisource data, yet the collection and analysis of multisource data have so far remained confined to computer and communication technology. In view of these problems, this article carries out scientific collection and analysis of multisource data based on single-chip microcontroller and sensor technology. The results show that, building on the random early detection and weighted fair queuing algorithms, the analysis algorithm based on the genetic algorithm achieved a higher successful conversion rate. The power consumption of a node with better antenna performance was 9–10% lower than that of a node with poor antenna performance, which provides a basis for multisource data collection and analysis.
"Wormhole attack detection techniques in ad-hoc network: A systematic review" by C. Gupta, Laxman Singh, and Rajdev Tiwari. Open Computer Science 12(1): 260–288 (2022). doi:10.1515/comp-2022-0245
Abstract: Mobile ad hoc networks (MANETs) are decentralized networks that can communicate without pre-existing infrastructure. Owing to their open medium access and dynamically changing topology, MANETs are vulnerable to many types of attacks, such as the blackhole, gray hole, Sybil, rushing, jellyfish, wormhole (WHA), Byzantine, selfishness, and network partition attacks. Of these, the wormhole attack is the most common and severe: it substantially undermines network performance and disrupts most routing protocols. Over the past two decades, numerous researchers have explored techniques to detect and mitigate WHAs and ensure the safe operation of wireless networks. In this article, we focus on WHAs and survey the state-of-the-art methods employed in previous years to detect them in wireless networks. Existing WHA detection techniques suffer from the need for additional hardware, higher delay, and higher energy consumption. Round-trip time (RTT)-based detection methods show better results because they do not require additional hardware. Machine learning (ML) techniques can also be applied to ad hoc networks for anomaly detection and will be influential in the future; therefore, ML techniques for WHA detection are also analyzed, with support vector machines (SVMs) being the technique most often used by researchers and giving outstanding results. The analysis shows that hybrid approaches, which combine traditional detection techniques with ML, give better results for WHA detection. Finally, we identify areas where further research can be focused so that WHA detection methods can be applied to larger topologies with more flexibility and more accurate results.
"3D chaotic map-cosine transformation based approach to video encryption and decryption" by M. Dua, Drishti Makhija, Pilla Yamini Lakshmi Manasa, and Prashant Mishra. Open Computer Science 12(1): 37–56 (2022). doi:10.1515/comp-2020-0225
Abstract: Data security is vital for multimedia communication. A number of cryptographic algorithms have been developed for the secure transmission of text and image data, but very few contributions have been made in the area of video encryption because of the large input data size and time constraints. However, due to the massive increase in digital media transfer within networks, the security of video data has become one of the most important features of network reliability. Block encryption techniques and 1D chaotic maps have previously been used for video encryption. Although the results obtained with 1D chaotic maps were quite satisfactory, the approach had many limitations, as these maps exhibit limited dynamic behavior. To overcome these drawbacks, this article proposes an Intertwining Logistic Map (ILM)-cosine transformation-based video encryption technique. The first step segments the input video into multiple frames based on the frames per second (FPS) value and the length of the video. Next, each frame is selected, and the correlation among its pixels is reduced by a permutation/scrambling process. In addition, each frame is rotated by 90° in the anticlockwise direction to induce more randomness into the encryption process. Furthermore, a random-order substitution technique is applied to modify each image row-wise and column-wise. Finally, all the encrypted frames are jumbled according to a frame selection key and joined to generate the encrypted video delivered to the user. The efficiency of the method was tested with parameters such as entropy, Unified Average Change in Intensity (UACI), and the correlation coefficient (CC). The presented approach also decrypts the encrypted video, and the decryption quality was checked using the mean square error (MSE) and peak signal-to-noise ratio (PSNR).
{"title":"3D chaotic map-cosine transformation based approach to video encryption and decryption","authors":"M. Dua, Drishti Makhija, Pilla Yamini Lakshmi Manasa, Prashant Mishra","doi":"10.1515/comp-2020-0225","DOIUrl":"https://doi.org/10.1515/comp-2020-0225","url":null,"abstract":"Abstract Data security is vital for multimedia communication. A number of cryptographic algorithms have been developed for the secure transmission of text and image data. Very few contributions have been made in the area of video encryption because of the large input data size and time constraints. However, due to the massive increase in digital media transfer within networks, the security of video data has become one of the most important features of network reliability. Block encryption techniques and 1D-chaotic maps have been previously used for the process of video encryption. Although the results obtained by using 1D-chaotic maps were quite satisfactory, the approach had many limitations as these maps have less dynamic behavior. To overcome these drawbacks, this article proposes an Intertwining Logistic Map (ILM)-Cosine transformation-based video encryption technique. The first step involved segmenting the input video into multiple frames based on the frames per second (FPS) value and the length of the video. Next, each frame was selected, and the correlation among the pixels was reduced by a process called permutation/scrambling. In addition, each frame was rotated by 90° in the anticlockwise direction to induce more randomness into the encryption process. Furthermore, by using an approach called the random order substitution technique, changes were made in each of the images, row-wise and column-wise. Finally, all the encrypted frames were jumbled according to a frame selection key and were joined to generate an encrypted video, which was the output delivered to the user. The efficiency of this method was tested based on the state of various parameters like Entropy, Unified Average Change in Intensity (UACI), and correlation coefficient (CC). The presented approach also decrypts the encrypted video, and the decryption quality was checked using parameters such as mean square error (MSE) and peak signal-to-noise ratio (PSNR).","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"37 - 56"},"PeriodicalIF":1.5,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44121357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"BiSHM: Evidence detection and preservation model for cloud forensics" by Prasad Purnaye and Vrushali Kulkarni. Open Computer Science 12(1): 154–170 (2022). doi:10.1515/comp-2022-0241
Abstract: The cloud market is growing every day, and so are cloud crimes. Crimes that happen in a cloud environment are investigated in adherence to the court of law, and such forensic investigations require evidence from the cloud. Evidence acquisition in the cloud demands formidable effort because of physical inaccessibility and the lack of cloud forensics tools. Time is crucial in any forensic investigation: if evidence is preserved before the cloud forensic investigation begins, it gives the investigators a head start. To identify and preserve such potential evidence in the cloud, we propose a system with an artificial intelligence (AI)-based agent, equipped for binary classification, that monitors and profiles virtual machines (VMs) from hypervisor-level activities. The proposed system classifies and preserves evidence data generated in the cloud. The evidence repository module uses a novel blockchain-based model to maintain data provenance. The system works at the hypervisor level, which makes it robust against anti-forensics techniques in the cloud, and it identifies potential evidence in a way that reduces the effective storage requirement of the evidence repository. The data provenance incorporated in the system reduces trust dependencies on the cloud service provider (CSP).