
Latest publications in Open Computer Science

A study on the big data scientific research model and the key mechanism based on blockchain
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0258
Shen Wen
Abstract In an era of open data sharing, the scientific research field has an urgent need for the value of big data. However, big data still form “data islands,” which seriously affects the level and progress of scientific research. In this regard, this article proposes the research and realization of a big data scientific research model and key mechanism based on blockchain. The article uses the K-means algorithm to cluster scientific research data and reasonably utilizes the decentralization, smart-contract, and non-tampering characteristics of the blockchain to design a distributed, blockchain-based data model. It proposes a BIZi network built from blockchain, the Interplanetary File System (IPFS), and Zigzag code (blockchain, IPFS, and Zigzag code, BIZi for short) to achieve reliable data connection, and it serves the data requirements of scientific research through a set of data access control and data service customization mechanisms. Finally, the transmission speed of the IPFS network can better meet the needs of scientific research. The larger the number of file blocks, the higher the fault tolerance rate of the scheme and the better the storage efficiency. In a completely open data-sharing scenario, the fault tolerance rate against Byzantine nodes is extremely high, which ensures the stability of the blockchain; the current optimal consensus algorithm reaches a fault tolerance rate of 49%.
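The abstract does not give the clustering details, so the snippet below is only a minimal NumPy sketch of the K-means step it mentions; the two synthetic feature dimensions (say, file size and access frequency of a research data item) and the choice of k = 3 are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means: pick k initial centers, then alternate assignment and update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign every sample to its nearest center (squared Euclidean distance).
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned samples.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Synthetic descriptors of research data items, e.g. (size in MB, weekly accesses).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.5, size=(50, 2)) for loc in ([1, 1], [5, 5], [9, 1])])
labels, centers = kmeans(X, k=3)
print(centers.round(2))
```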
Citations: 0
Design of a web laboratory interface for ECG signal analysis using MATLAB builder NE
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0244
Hussain A. Jaber, Hadeel K. Aljobouri, Ilyas Çankaya
Abstract An electrocardiogram (ECG) is a noninvasive test; detecting any defect in the heart rate or rhythm, or changes in the shape of the QRS complex, is very significant for identifying cardiac arrhythmia. In this study, novel web-ECG simulation tools were proposed using MATLAB Builder NE with WebFigure and the ASP.NET platform. The proposed web-ECG simulation tools consist of two components. The first involved the analysis of normal real ECG signals by calculating the P, Q, R, S, and T values and detecting the heart rate, while the second part related to extracting the features of several types of abnormal real ECG signals. For calculating the PQRST values, simple new mathematical equations are proposed in the current study using MATLAB. The web ECG is capable of plotting normal ECG signals and five arrhythmia cases, so users are able to calculate PQRST easily using the proposed simple method. The ECG simulation tools have been tested for validity and educational contribution with 62 undergraduate and graduate students at the Biomedical Engineering Department of Al-Nahrain University, Iraq. The proposed ECG simulation tools have been designed for academic learning and can be run easily by a student using any web browser, without the need to install MATLAB or any extra programs. The proposed tools could provide a laboratory course for ECG signal analysis using a few buttons, as well as increase and develop the educational skills of students and researchers.
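The paper's own PQRST equations are not reproduced in the abstract; purely as an illustration of the kind of computation such a tool performs, the sketch below picks R peaks from a synthetic spike train and derives the heart rate from the R-R intervals. The sampling rate, threshold, and signal are all assumptions.

```python
import numpy as np

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic "ECG": a spike roughly every 0.83 s (about 72 bpm) plus mild noise.
ecg = np.zeros_like(t)
ecg[(np.arange(len(t)) % int(fs / 1.2)) == 0] = 1.0
ecg += 0.05 * np.random.default_rng(0).standard_normal(len(t))

# R peaks: local maxima above an amplitude threshold.
thr = 0.5
peaks = [i for i in range(1, len(ecg) - 1)
         if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]

rr = np.diff(peaks) / fs                   # R-R intervals in seconds
heart_rate = 60.0 / rr.mean()              # beats per minute
print(f"Detected {len(peaks)} R peaks, heart rate ~ {heart_rate:.1f} bpm")
```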
Citations: 1
Construction of a gas condensate field development model
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2020-0226
A. Skiba
Abstract This article has developed and verified a mathematical aggregated approximate model of developing a gas condensate field using a cyclic process. The essence of the cyclic process is to pump the drained gas into the productive formation to reduce the pressure drop into the deposit. This process allows for increased condensate recovery in the future. The model discussed in this article is a continuous dynamic system with control parameters. It is a modification of the dynamic aggregated model of a purely gas field, designed for planning for a sufficiently long period with limited information about the state of the reservoir (the initial flow rate of wells, the initial recoverable gas reserves, the initial reservoir pressure, the dependence of potential condensate content per unit volume of fatty gas on the reservoir pressure). A non-standard approach underlies the model construction. Logical simplifications and a priori assumptions about the processes occurring in the field during its development are at its core. The instruments in the model are the increase in production and injection wells and the proportion of injection wells involved in the production. The purpose of the article is to calculate various variants of the dynamics of the fundamental indicators of the development of a gas condensate field for a sufficiently long-term period at the stage of preliminary design.
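The abstract states no formulas, so the following is only a toy illustration of the general shape such an aggregated planning model can take: reservoir pressure falls with net gas withdrawal, the condensate yield depends on pressure, and the share of wells assigned to injection is the control variable. Every relation and number here is an assumption, not the author's actual model.

```python
def simulate(years=20, wells=100, inj_fraction=0.3,
             q_well=10.0, reserves=20_000.0, p0=30.0, c0=0.4):
    """Cumulative condensate recovered for a given share of injection wells."""
    net_withdrawn = 0.0          # gas taken from the reservoir minus gas re-injected
    condensate = 0.0
    for _ in range(years):
        producers = wells * (1.0 - inj_fraction)
        injectors = wells * inj_fraction
        produced = q_well * producers                 # annual gas production
        injected = 0.9 * q_well * injectors           # assumed injection capacity
        net_withdrawn += max(produced - injected, 0.0)
        # Toy material balance: pressure tracks the remaining net reserves.
        pressure = p0 * max(1.0 - net_withdrawn / reserves, 0.0)
        # Assumed yield curve: condensate content per unit of gas rises with pressure.
        condensate += produced * c0 * pressure / p0
    return condensate

# Compare two simple policies: no re-injection vs. 30% of wells injecting dry gas.
print(round(simulate(inj_fraction=0.0), 1), round(simulate(inj_fraction=0.3), 1))
```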
Citations: 1
Deep learning-based ensemble model for brain tumor segmentation using multi-parametric MR scans
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0242
Suchismita Das, S. Bose, G. K. Nayak, Sanjay Saxena
Abstract Glioma is a type of fast-growing brain tumor in which the shape, size, and location of the tumor vary from patient to patient. Manual extraction of a region of interest (tumor) with the help of a radiologist is a very difficult and time-consuming task. To overcome this problem, we proposed a fully automated deep learning-based ensemble method for brain tumor segmentation on four different 3D multimodal magnetic resonance imaging (MRI) scans. The segmentation is performed by the three most efficient encoder–decoder deep models, and their results are measured with well-known segmentation metrics. Then, a statistical analysis of the models was performed, and an ensemble model was designed by considering the highest Matthews correlation coefficient for a particular MRI modality. The article makes two main contributions: first, a detailed comparison of the three models, and second, an ensemble model that combines the three models based on their segmentation accuracy. The model is evaluated using the brain tumor segmentation (BraTS) 2017 dataset, and the F1 score of the final combined model is found to be 0.92, 0.95, 0.93, and 0.84 for the whole tumor, core, enhancing tumor, and edema sub-tumor, respectively. Experimental results show that the model outperforms the state of the art.
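As a rough sketch of the selection rule described here, where each candidate segmentation is scored with the Matthews correlation coefficient (MCC) and the best model is kept for a given modality, the snippet below compares three tiny synthetic masks against a reference. The model names and masks are placeholders, not the paper's networks or BraTS data.

```python
import numpy as np

def mcc(pred, truth):
    """Matthews correlation coefficient between two binary masks."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1                                       # reference tumor region
candidates = {                                            # placeholder model outputs
    "model_a": np.pad(np.ones((4, 4), dtype=int), ((2, 2), (2, 2))),  # exact match
    "model_b": np.pad(np.ones((3, 3), dtype=int), ((2, 3), (2, 3))),  # under-segments
    "model_c": np.pad(np.ones((5, 5), dtype=int), ((1, 2), (1, 2))),  # over-segments
}
scores = {name: mcc(mask, truth) for name, mask in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> keep", best, "for this modality")
```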
Citations: 7
Rainfall prediction system for Bangladesh using long short-term memory
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0254
M. Billah, Md. Nasim Adnan, Mostafijur Rahman Akhond, Romana Rahman Ema, Md. Alam Hossain, S. Galib
Abstract Rainfall prediction is a challenging task of extreme significance in weather forecasting. Accurate rainfall prediction can play a great role in agriculture, aviation, natural phenomena, floods, construction, transport, etc. Weather, or climate, is assumed to be one of the most complex systems. Moreover, chaos, also called the “butterfly effect,” limits our ability to make weather predictable, so it is not easy to predict rainfall with conventional machine learning approaches. However, several kinds of research have been proposed to predict rainfall using different computational methods. To build a chaotic rainfall prediction system for Bangladesh, this study uses a historical data-set-driven long short-term memory (LSTM) network method, which overcomes the complexities and chaos-related problems faced by other approaches. The proposed method has three principal phases: (i) the most useful 10 features are chosen from 20 data attributes; (ii) a two-layer LSTM model is then designed; (iii) both conventional machine learning approaches and recent works are compared with the LSTM model. This approach achieves 97.14% accuracy in predicting rainfall (in millimeters), which outperforms the state-of-the-art solutions. This work is also a pioneering work on rainfall prediction for Bangladesh.
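A minimal Keras sketch of the two-layer LSTM in step (ii) might look as follows; the window length, layer widths, optimizer settings, and the synthetic arrays standing in for the 10 selected features and the rainfall targets are all assumptions rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf

timesteps, n_features = 30, 10            # 30-step window of 10 selected features (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first LSTM layer
    tf.keras.layers.LSTM(32),                           # second LSTM layer
    tf.keras.layers.Dense(1),                           # predicted rainfall in mm
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Synthetic stand-in data just to show the training and prediction calls.
X = np.random.rand(256, timesteps, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```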
Citations: 2
An alternative C++-based HPC system for Hadoop MapReduce
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0246
Vignesh Srinivasakumar, Muthumanikandan Vanamoorthy, Siddarth Sairaj, S. Ganesh
Abstract MapReduce (MR) is a technique used to vastly improve distributed data processing and can massively speed up computation. Hadoop and MR rely on the memory-intensive JVM and Java. An MR framework based on High-Performance Computing (HPC) could be used instead, which is both memory-efficient and faster than standard MR. This article explores a C++-based approach to MR and its feasibility along multiple factors such as developer friendliness, deployment interface, efficiency, and scalability. It also introduces Eager Reduction and Delayed Reduction techniques to speed up MR.
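The article's framework itself is C++ based; purely to illustrate the map, shuffle, and reduce phases that any such framework implements, here is a small Python word-count sketch in which a process pool stands in for distributed workers. The input splits and pool size are arbitrary.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_phase(chunk):
    # Emit (word, 1) pairs for one input split.
    return [(word.lower(), 1) for word in chunk.split()]

def reduce_phase(item):
    # Sum all counts collected for one key.
    key, values = item
    return key, sum(values)

if __name__ == "__main__":
    splits = ["the quick brown fox", "jumps over the lazy dog", "the end"]
    with Pool(processes=2) as pool:
        mapped = pool.map(map_phase, splits)
        # Shuffle: group intermediate values by key.
        groups = defaultdict(list)
        for pairs in mapped:
            for key, value in pairs:
                groups[key].append(value)
        counts = dict(pool.map(reduce_phase, list(groups.items())))
    print(counts)  # e.g. {'the': 3, 'quick': 1, ...}
```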
Citations: 0
Multisource data acquisition based on single-chip microcomputer and sensor technology
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0261
Yahui Huang, Daozhong Lei
Abstract Today, we are flooded with data and information every day. Data are a reliable basis for scientific research. Their function is not only to clearly show real problems in various fields, but also to guide people to find the key factors that cause those problems. The emergence of big data responds to this era of information explosion, and it is precisely by virtue of the accumulation of quantity that it presents the rules more clearly. Political, economic, cultural, and other fields are all closely related to data. The application of microcontroller and sensor technology can help explore new branches of multisource data. However, the collection and analysis of multisource data has so far stayed within the scope of computer and communication technology. In view of these problems, this article carried out scientific collection and analysis of multisource data based on single-chip microcomputer and sensor technology. The research results showed that, built on the two algorithms of random early detection and weighted fair queuing, the Genetic Algorithm-based analysis algorithm had a higher successful conversion rate. The power consumption of a node with better antenna performance was 9–10% lower than that of a node with poor antenna performance, which provides a basis for multisource data collection and analysis.
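Random early detection, one of the two queueing algorithms named above, can be summarized in a few lines: the probability of dropping an arriving packet rises linearly once the exponentially averaged queue length passes a lower threshold. The sketch below is a generic, simplified RED; the thresholds, weight, and traffic pattern are assumptions, not the paper's implementation.

```python
import random

MIN_TH, MAX_TH, MAX_P, W_Q = 5, 15, 0.1, 0.2   # assumed RED parameters
avg_queue = 0.0

def red_drop(current_queue_len):
    """Return True if the arriving packet should be dropped."""
    global avg_queue
    avg_queue = (1 - W_Q) * avg_queue + W_Q * current_queue_len  # EWMA of queue length
    if avg_queue < MIN_TH:
        return False
    if avg_queue >= MAX_TH:
        return True
    drop_p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_p

random.seed(0)
drops = sum(red_drop(q) for q in range(30))    # queue length ramping from 0 to 29
print(f"dropped {drops} of 30 arriving packets")
```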
Citations: 1
Wormhole attack detection techniques in ad-hoc network: A systematic review
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0245
C. Gupta, Laxman Singh, Rajdev Tiwari
Abstract Mobile ad hoc networks (MANETs) are considered decentralized networks, which can communicate without pre-existing infrastructure. Owing to their use of open medium access and a dynamically changing network topology, MANETs are vulnerable to different types of attacks such as the blackhole attack, gray hole attack, Sybil attack, rushing attack, jellyfish attack, wormhole attack (WHA), byzantine attack, selfishness attack, and network partition attack. Of these, the wormhole attack is the most common and severe attack; it substantially undermines the performance of the network and disrupts most routing protocols. In the past two decades, numerous researchers have explored a number of techniques to detect and mitigate the effect of WHAs to ensure the safe operation of wireless networks. Hence, in this article, we mainly focus on WHAs and present the different state-of-the-art methods that have been employed in previous years to discern WHAs in wireless networks. The existing WHA detection techniques are lacking due to the use of additional hardware, higher delay, and higher energy consumption. Round trip time (RTT)-based detection methods show better results as they do not require additional hardware. Machine learning (ML) techniques can also be applied to ad-hoc networks for anomaly detection and will have a great influence in the future; therefore, ML techniques are also analyzed for WHA detection in this article. The SVM technique is mostly used by researchers owing to its outstanding results. It has been observed that hybrid approaches, which combine traditional detection techniques with ML techniques, show better results for WHA detection. Finally, we have identified the areas where further research can be focused so that WHA detection methods can be applied over a larger topological area for more flexibility and more accurate results.
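As an illustration of the RTT-based idea the review favors, the sketch below flags neighbour links whose measured round-trip time is far above the rest, since a wormhole tunnel relays packets over a long out-of-band path and inflates RTT. The synthetic measurements and the twice-the-median threshold are assumptions, not a technique from any particular surveyed paper.

```python
import statistics

rtts_ms = {            # measured RTT per claimed one-hop neighbour (synthetic values)
    "node_a": 2.1,
    "node_b": 2.4,
    "node_c": 1.9,
    "node_d": 2.2,
    "node_e": 11.8,    # suspicious: traffic likely tunnelled through a wormhole
}

# Flag any link whose RTT exceeds twice the median of all neighbour RTTs.
threshold = 2 * statistics.median(rtts_ms.values())
suspected = [node for node, rtt in rtts_ms.items() if rtt > threshold]
print(f"threshold = {threshold:.1f} ms, suspected wormhole links: {suspected}")
```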
Citations: 3
3D chaotic map-cosine transformation based approach to video encryption and decryption
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2020-0225
M. Dua, Drishti Makhija, Pilla Yamini Lakshmi Manasa, Prashant Mishra
Abstract Data security is vital for multimedia communication. A number of cryptographic algorithms have been developed for the secure transmission of text and image data. Very few contributions have been made in the area of video encryption because of the large input data size and time constraints. However, due to the massive increase in digital media transfer within networks, the security of video data has become one of the most important features of network reliability. Block encryption techniques and 1D-chaotic maps have been previously used for the process of video encryption. Although the results obtained by using 1D-chaotic maps were quite satisfactory, the approach had many limitations as these maps have less dynamic behavior. To overcome these drawbacks, this article proposes an Intertwining Logistic Map (ILM)-Cosine transformation-based video encryption technique. The first step involved segmenting the input video into multiple frames based on the frames per second (FPS) value and the length of the video. Next, each frame was selected, and the correlation among the pixels was reduced by a process called permutation/scrambling. In addition, each frame was rotated by 90° in the anticlockwise direction to induce more randomness into the encryption process. Furthermore, by using an approach called the random order substitution technique, changes were made in each of the images, row-wise and column-wise. Finally, all the encrypted frames were jumbled according to a frame selection key and were joined to generate an encrypted video, which was the output delivered to the user. The efficiency of this method was tested based on the state of various parameters like Entropy, Unified Average Change in Intensity (UACI), and correlation coefficient (CC). The presented approach also decrypts the encrypted video, and the decryption quality was checked using parameters such as mean square error (MSE) and peak signal-to-noise ratio (PSNR).
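To make the per-frame pipeline concrete (chaos-driven permutation, a 90° anticlockwise rotation, then key-stream substitution), here is a compressed sketch on an 8x8 synthetic frame. It uses the plain logistic map rather than the authors' Intertwining Logistic Map and omits the final frame-jumbling step, so the keys, map, and parameters are assumptions.

```python
import numpy as np

def logistic_stream(x0, r, n):
    """Generate n chaotic values in (0, 1) from the logistic map."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)      # stand-in for a video frame
key_perm, key_sub = 0.3456, 0.7891                       # assumed secret keys

# 1) Permutation: reorder pixels by the ranking of a chaotic sequence.
perm = np.argsort(logistic_stream(key_perm, 3.99, frame.size))
scrambled = frame.ravel()[perm].reshape(frame.shape)

# 2) Rotation: 90 degrees anticlockwise, as in the described scheme.
rotated = np.rot90(scrambled, k=1)

# 3) Substitution: XOR with a chaos-derived byte stream.
keystream = (logistic_stream(key_sub, 3.99, frame.size) * 256).astype(np.uint8)
cipher = rotated.ravel() ^ keystream

# Decryption reverses the three steps with the same keys.
recovered = np.rot90((cipher ^ keystream).reshape(frame.shape), k=-1)
inverse = np.empty_like(perm)
inverse[perm] = np.arange(frame.size)
assert np.array_equal(recovered.ravel()[inverse].reshape(frame.shape), frame)
```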
Citations: 9
BiSHM: Evidence detection and preservation model for cloud forensics
IF 1.5 Q3 COMPUTER SCIENCE, THEORY & METHODS Pub Date : 2022-01-01 DOI: 10.1515/comp-2022-0241
Prasad Purnaye, Vrushali Kulkarni
Abstract The cloud market is growing every day. So are cloud crimes. To investigate crimes that happen in a cloud environment, an investigation is carried out in accordance with the court of law. Forensic investigations require evidence from the cloud. Evidence acquisition in the cloud requires formidable effort because of physical inaccessibility and the lack of cloud forensics tools. Time is crucial in any forensic investigation. If the evidence is preserved before the cloud forensic investigation, it can give the investigators a head start. To identify and preserve such potential evidence in the cloud, we propose a system with an artificial intelligence (AI)-based agent, equipped for binary classification, that monitors and profiles the virtual machine (VM) from hypervisor-level activities. The proposed system classifies and preserves evidence data generated in the cloud. The evidence repository module of the system uses a novel blockchain model approach to maintain data provenance. The proposed system works at the hypervisor level, which makes it robust against anti-forensics techniques in the cloud. The proposed system identifies potential evidence, reducing the effective storage space requirement of the evidence repository. The data provenance incorporated in the proposed system reduces trust dependencies on the cloud service provider (CSP).
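The provenance idea can be illustrated with a minimal hash chain: each stored evidence record commits to the hash of the previous one, so altering any record breaks verification of every later link. This is a simplification for illustration only, not the authors' blockchain design, and the record fields are made up.

```python
import hashlib
import json
import time

def add_record(chain, evidence):
    """Append an evidence record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"evidence": evidence, "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash and check each link to the previous record."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != record["hash"]:
            return False
        if i and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_record(chain, {"vm": "vm-42", "artifact": "memory_dump.sha256", "flag": "suspicious"})
add_record(chain, {"vm": "vm-42", "artifact": "disk_snapshot.sha256", "flag": "benign"})
print(verify(chain))                       # True
chain[0]["evidence"]["flag"] = "benign"    # tamper with a stored record
print(verify(chain))                       # False
```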
Citations: 1