Pub Date: 2019-06-14. DOI: 10.7287/PEERJ.PREPRINTS.27800V1
Yaohua Xie
It is important to acquire data with a signal-to-noise ratio (SNR) as high as possible. Compared to other techniques, filtering methods are fast, but they do not make full use of the characteristics of the sample structure that are reflected in related high-SNR images. In this study, we propose a technique termed "TransFiltering". It transplants the characteristics of a high-SNR image into the frequency spectrum of a low-SNR image by filtering. The high-SNR and low-SNR images should usually share a similar structural pattern; for example, they may both come from the same image sequence. In the proposed method, a Fourier transform is first applied to both images. Then, the frequency spectrum of the low-SNR image is filtered according to that of the high-SNR image. Finally, an inverse Fourier transform is applied to obtain the image with improved SNR. Experimental results show that the proposed method is both effective and efficient.
Title: Improving the quality of low SNR images using high SNR images (PeerJ Preprints, e27800)
Pub Date: 2019-06-10. DOI: 10.7287/peerj.preprints.27790v1
Nadheesh Jihan, Malith Jayasinghe, S. Perera
Online learning is an essential tool for predictive analysis over continuous, endless data streams. Adopting Bayesian inference in online settings allows hierarchical modeling while representing the uncertainty of model parameters. Existing online inference techniques are motivated either by traditional Bayesian updating or by stochastic optimization. However, traditional Bayesian updating suffers from overconfident posteriors, where the posterior variance becomes too small to adapt to new changes in the posterior. On the other hand, stochastic optimization of a variational objective demands exhaustive additional analysis to tune a hyperparameter that controls the posterior variance. In this paper, we present "Streaming Stochastic Variational Bayes" (SSVB), a novel online approximate inference framework for data streams that addresses the aforementioned shortcomings of the current state of the art. SSVB adjusts its posterior variance duly without any user-specified hyperparameters while efficiently accommodating drifting patterns in the posteriors. Moreover, SSVB can be easily adopted by practitioners for a wide range of models (from simple regression models to complex hierarchical models) with little additional analysis. We appraised the performance of SSVB against Population Variational Inference (PVI), Stochastic Variational Inference (SVI), and Black-box Streaming Variational Bayes (BB-SVB) using two non-conjugate probabilistic models: multinomial logistic regression and a linear mixed-effects model. Furthermore, we discuss the significant accuracy gains of SSVB-based inference over conventional online learning models for each task.
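The overconfidence problem attributed above to traditional Bayesian updating can be made concrete with a conjugate normal-normal model: under naive sequential updating the posterior variance shrinks monotonically, so the posterior cannot track a drifting mean. This sketch illustrates the failure mode only, not SSVB itself:

```python
import numpy as np

def sequential_update(mu0, var0, data, obs_var=1.0):
    # Conjugate normal-normal updating of a Gaussian mean, one
    # observation at a time; the posterior variance only ever shrinks.
    mu, var = mu0, var0
    for y in data:
        precision = 1.0 / var + 1.0 / obs_var
        mu = (mu / var + y / obs_var) / precision
        var = 1.0 / precision
    return mu, var
```

After a few hundred observations the posterior variance has collapsed, so when the data-generating mean drifts, the posterior mean barely moves toward the new value.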
Title: Streaming stochastic variational Bayes: an improved approach for Bayesian inference with data streams (PeerJ Preprints, e27790)
Pub Date: 2019-06-03. DOI: 10.7287/PEERJ.PREPRINTS.27777V1
Farhaan Noor Hamdani, Farheen Siddiqui
With the advent of the internet, there is growing concern about the increasing number of attacks, in which an attacker can remotely target any computing or network resource. The exponential shift toward smart end devices also raises various security concerns, including the detection of anomalous data traffic on the internet. Separating legitimate traffic from malicious traffic is itself a complex task, and many attacks consume system resources, degrading computing performance. In this paper, we propose a supervised-learning framework, implemented using machine learning algorithms, that can enhance or aid existing intrusion detection systems in detecting a variety of attacks. The KDD (Knowledge Discovery in Databases) dataset is used as a benchmark. We analyze the algorithms' detection abilities in terms of performance, accuracy, and alert logs, and compute their overall detection rates. The algorithms are validated and tested in terms of accuracy, precision, and true/false positives and negatives. Experimental results show that these methods are effective, generate few false positives, and can serve in building a line of defense against network intrusions. Further, we compare the algorithms in terms of various functional parameters.
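The evaluation criteria named above (accuracy, precision, true/false positives and negatives, detection rate) reduce to the confusion matrix. A helper might look like this; the function name and the label convention (attack = 1, normal = 0) are assumptions:

```python
def detection_metrics(y_true, y_pred):
    # Confusion-matrix counts: attack = 1 (positive), normal = 0 (negative).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        # Detection rate = recall on the attack class.
        "detection_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

A low false-positive rate at a high detection rate is the target trade-off for an intrusion detector.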
Title: Machine learning approach for automated defense against network intrusions (PeerJ Preprints, e27777)
Pub Date: 2019-06-02. DOI: 10.7287/PEERJ.PREPRINTS.27771V1
Mohammad Rezaei, Naser Zohorian, Nemat Soltani, P. Mohajeri
This paper presents a new approach to band detection and pattern recognition for molecule types. Although a few studies have examined band detection, there is still no automatic method that performs well under high noise. The band detection algorithm was designed in two parts: band localization and lane pattern recognition. To improve band detection and remove undesirable bands, the shape and light intensity of the bands were used as features. One hundred lane images were randomly selected for the training stage and 350 lane images for the testing stage to evaluate the proposed algorithm. All images were prepared using a BIORAD PFGE system at the Microbiology Laboratory of Kermanshah University of Medical Sciences. An adaptive median filter with a 5×5 filter size was selected as the optimal filter for noise removal. The results show that the proposed algorithm achieves 98.45% accuracy and produces fewer errors than other methods. The proposed algorithm has good accuracy for band detection in pulsed-field gel electrophoresis images. By considering the shape of the peaks produced by the bands in the vertical projection profile of the signal, this method can reduce band detection errors. To improve accuracy, we recommend that the designed algorithm also be examined for other types of molecules.
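The two key steps reported above, median filtering followed by peak detection on the vertical projection profile, can be sketched as below. A plain (non-adaptive) 5×5 median filter and a simple local-maximum rule stand in for the paper's full algorithm, so this is a simplified illustration only:

```python
import numpy as np

def median_filter5(img):
    # Plain 5x5 median filter (simplified stand-in for the paper's
    # adaptive median filter); edge padding replicates border pixels.
    pad = np.pad(img, 2, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(pad[i:i + 5, j:j + 5])
    return out

def band_positions(lane, min_height=0.5):
    # Bands show up as peaks in the vertical projection profile:
    # average each row, normalize, then take local maxima above a threshold.
    profile = lane.mean(axis=1)
    profile = (profile - profile.min()) / (profile.max() - profile.min() + 1e-12)
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1]
            and profile[i] >= profile[i + 1]
            and profile[i] >= min_height]
```

On a synthetic lane with two bright bands plus impulse noise, the filter removes the isolated noise and the profile peaks land at the band rows.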
Title: A new algorithm for band detection and pattern extraction on pulsed-field gel electrophoresis images (PeerJ Preprints, e27771)
Pub Date: 2019-05-28. DOI: 10.17608/K6.AUCKLAND.9789461.V1
T. Etherington, B. Jolly, Jan Zörner, Nicholas K. Spencer
Reproducible science is greatly aided by the open publishing of scientific computer code. Encouraging the publication of scientific code brings many institutional benefits, but it also raises institutional considerations around intellectual property and risk. We discuss questions around scientific code publishing from the perspective of a research organisation: who will be involved, how should code be licensed, where should code be published, how can contributors get credit, what standards apply, and what are the costs? In reviewing advice and evidence relevant to these questions, we propose a research-institution framework for publishing open scientific code to enable reproducible science.
Title: A research institution framework for publishing open code to enable reproducible science (PeerJ Preprints, e27762)
Pub Date: 2019-05-17. DOI: 10.7287/peerj.preprints.27740v1
Davide Nardone, A. Ciaramella, A. Staiano
In this work, we propose a novel feature selection framework, called the Sparse-Modeling Based Approach for Class-Specific Feature Selection (SMBA-CSFS), that simultaneously exploits the ideas of sparse modeling and class-specific feature selection. Feature selection plays a key role in several fields (e.g., computational biology): it makes it possible to treat models with fewer variables, which in turn are easier to explain, provides valuable insights into the importance of each variable's role, and may speed up experimental validation. Unfortunately, as the no-free-lunch theorems also suggest, no approach in the literature is best suited to detect the optimal feature subset for building a final model, so feature selection remains a challenge. The proposed feature selection procedure is a two-step approach: (a) a sparse-modeling-based learning technique is first used to find the best subset of features for each class of a training set; (b) the discovered feature subsets are then fed to a class-specific feature selection scheme, in order to assess the effectiveness of the selected features in classification tasks. To this end, an ensemble of classifiers is built, where each classifier is trained on its own feature subset discovered in the previous phase, and a proper decision rule is adopted to compute the ensemble responses. To evaluate the performance of the proposed method, extensive experiments were performed on publicly available datasets, in particular from computational biology, where feature selection is indispensable: acute lymphoblastic and acute myeloid leukemia, human carcinomas, human lung carcinomas, diffuse large B-cell lymphoma, and malignant glioma. SMBA-CSFS is able to identify and retrieve the most representative features that maximize classification accuracy. With the top 20 and 80 features, SMBA-CSFS exhibits promising performance compared to its competitors from the literature on all considered datasets, especially those with larger numbers of features. Experiments show that the proposed approach can outperform state-of-the-art methods when the number of features is high. For this reason, the introduced approach lends itself to the selection and classification of data with large numbers of features and classes.
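The two-step scheme (per-class subset discovery, then a per-class ensemble with a decision rule) can be sketched as follows. The mean-difference subset criterion and nearest-centroid scorers below are simple stand-ins for the paper's sparse-modeling step and classifier choice, so this is a structural sketch only:

```python
import numpy as np

def class_specific_subsets(X, y, k):
    # Step (a), simplified: for each class keep the k features whose class
    # mean deviates most from the rest-of-data mean (a stand-in for the
    # paper's sparse-modeling criterion).
    subsets = {}
    for c in np.unique(y):
        gap = np.abs(X[y == c].mean(axis=0) - X[y != c].mean(axis=0))
        subsets[int(c)] = np.argsort(-gap)[:k]
    return subsets

def ensemble_predict(X_train, y_train, X_test, subsets):
    # Step (b): one nearest-centroid scorer per class, each restricted to
    # that class's own feature subset; the decision rule picks the class
    # whose scorer reports the smallest distance.
    classes = sorted(subsets)
    dists = []
    for c in classes:
        idx = subsets[c]
        centroid = X_train[y_train == c][:, idx].mean(axis=0)
        dists.append(np.linalg.norm(X_test[:, idx] - centroid, axis=1))
    return np.array(classes)[np.argmin(np.vstack(dists), axis=0)]
```

On synthetic data where each class is separable on its own feature pair, the per-class subsets recover exactly those pairs and the ensemble classifies accurately.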
Title: A Sparse-Modeling based approach for Class-Specific feature selection (PeerJ Preprints, e27740)
Pub Date: 2019-05-08. DOI: 10.7287/peerj.preprints.27712v1
Suleka Helmini, Nadheesh Jihan, Malith Jayasinghe, S. Perera
In the retail domain, estimating sales before the actual sales become known plays a key role in maintaining a successful business, because most crucial decisions are bound to be based on these forecasts. Statistical sales forecasting models such as ARIMA (Auto-Regressive Integrated Moving Average) are among the most traditional and commonly used forecasting methodologies. Even though these models are capable of producing satisfactory forecasts for linear time series data, they are not suitable for analyzing non-linear data. Therefore, machine learning models (such as random forest regression and XGBoost) have been employed frequently, as they achieve better results on non-linear data. Recent research shows that deep learning models (e.g., recurrent neural networks) can provide higher prediction accuracy than machine learning models due to their ability to persist information and identify temporal relationships. In this paper, we adopt a special variant of the Long Short-Term Memory (LSTM) network, the LSTM with peephole connections, for sales prediction. We first build our model using historical features for sales forecasting. We compare the results of this initial LSTM model with multiple machine learning models, namely the Extreme Gradient Boosting model (XGB) and the Random Forest Regressor model (RFR). We further improve the prediction accuracy of the initial model by incorporating features that describe the future that is already known at the current moment, an approach that has not been explored in previous state-of-the-art LSTM-based forecasting models. The initial LSTM model we develop outperforms the machine learning models, achieving a 12%-14% improvement, whereas the improved LSTM model achieves an 11%-13% improvement compared to the improved machine learning models. Furthermore, we show that our improved LSTM model obtains a 20%-21% improvement compared to the initial LSTM model, a significant gain.
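The feature construction described above, past observations combined with covariates that are already known for the forecast step, can be sketched as a windowing function. The function name and the nature of the future-known covariates (e.g. planned promotions or holidays) are assumptions, since the abstract does not list the actual features:

```python
import numpy as np

def make_windows(sales, future_known, lookback, horizon=1):
    # Build supervised samples: each input concatenates `lookback` past
    # sales values with the covariates already known for the step being
    # predicted (the paper's "future that is known at the current moment").
    X, y = [], []
    for t in range(lookback, len(sales) - horizon + 1):
        hist = sales[t - lookback:t]                # past observations
        future = future_known[t + horizon - 1]      # known at forecast time
        X.append(np.concatenate([hist, np.atleast_1d(future)]))
        y.append(sales[t + horizon - 1])
    return np.array(X), np.array(y)
```

The resulting (X, y) pairs can then feed any sequence model, including a peephole LSTM.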
Title: Sales forecasting using multivariate long short term memory network models (PeerJ Preprints, e27712)
Pub Date: 2019-05-07. DOI: 10.7287/peerj.preprints.3165v1
Y. Zhai, Shenglong Chen, Qianwen Ouyang
Seismic hazard prediction is of great significance for mitigating the damage caused by earthquakes in urban areas. In this study, a geographic information system (GIS)-based seismic hazard prediction system for urban earthquake disaster prevention planning is developed, combining structural vulnerability analysis, program development, and GIS. The system integrates proven building vulnerability analysis models with data search, spatial analysis, and plotting functions. It realizes batched, automated seismic hazard prediction and interactive visualization of the predicted results. Finally, the system is applied to a test area and the results are compared with those of previous studies; precision is improved because the construction time of each building is taken into consideration. Moreover, the system is highly automated and requires minimal manual intervention. It meets the operating requirements of non-professionals and provides a feasible technique and operating procedure for large-scale urban seismic hazard prediction. Above all, the system can provide data support and aid decision-making for the establishment and implementation of urban earthquake disaster prevention planning.
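The batched vulnerability step could, for instance, look up damage-state probabilities per building from its structure type and construction period (the attribute credited above for the improved precision). All names and probability values below are hypothetical placeholders, not data from the study:

```python
# Illustrative damage-state probabilities keyed by structure type and
# construction period; real systems derive these from calibrated
# vulnerability models, not a hand-written table.
DAMAGE_MATRIX = {
    ("masonry", "pre-1980"):   {"slight": 0.30, "moderate": 0.40, "severe": 0.30},
    ("masonry", "post-1980"):  {"slight": 0.50, "moderate": 0.35, "severe": 0.15},
    ("rc_frame", "pre-1980"):  {"slight": 0.45, "moderate": 0.35, "severe": 0.20},
    ("rc_frame", "post-1980"): {"slight": 0.65, "moderate": 0.25, "severe": 0.10},
}

def predict_damage(buildings):
    # Batch over a building inventory, as the GIS system automates it.
    return [DAMAGE_MATRIX[(b["type"], b["period"])] for b in buildings]
```

In the GIS, each building record would also carry a geometry, so the results can be mapped directly.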
Title: GIS-based seismic hazard prediction system for urban earthquake disaster prevention planning (PeerJ Preprints, e3165)
Pub Date: 2019-05-04. DOI: 10.7287/peerj.preprints.27702v1
Ester Giallonardo, Francesco Poggi, D. Rossi, E. Zimeo
In recent years, new classes of highly dynamic, complex systems are gaining momentum. These systems are characterized by the need to express behaviors driven by external and/or internal changes, i.e. they are reactive and context-aware. These classes include, but are not limited to IoT, smart cities, cyber-physical systems and sensor networks. An important design feature of these systems should be the ability of adapting their behavior to environment changes. This requires handling a runtime representation of the context enriched with variation points that relate different behaviors to possible changes of the representation. In this paper, we present a reference architecture for reactive, context-aware systems able to handle contextual knowledge (that defines what the system perceives) by means of virtual sensors and able to react to environment changes by means of virtual actuators, both represented in a declarative manner through semantic web technologies. To improve the ability to react with a proper behavior to context changes (e.g. faults) that may influence the ability of the system to observe the environment, we allow the definition of logical sensors and actuators through an extension of the SSN ontology (a W3C standard). In our reference architecture a knowledge base of sensors and actuators (hosted by an RDF triple store) is bound to real world by grounding semantic elements to physical devices via REST APIs. The proposed architecture along with the defined ontology try to address the main problems of dynamically reconfigurable systems by exploiting a declarative, queryable approach to enable runtime reconfiguration with the help of (a) semantics to support discovery in heterogeneous environment, (b) composition logic to define alternative behaviors for variation points, (c) bi-causal connection life-cycle to avoid dangling links with the external environment. 
The proposal is validated in a case study aimed at designing an edge node for smart buildings dedicated to cultural heritage preservation.
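The reactive core described in the abstract — a declarative context store whose changes trigger alternative behaviors at variation points — can be illustrated with a minimal, self-contained Python sketch. All names here (`ContextStore`, the pattern/action API, the example sensor facts) are hypothetical illustrations, not the authors' API; a real deployment would use an RDF triple store queried via SPARQL and the SSN ontology rather than in-memory tuples:

```python
# Minimal sketch of a reactive, context-aware node: a context store
# holds (subject, predicate, object) facts, and subscriptions act as
# variation points mapping context changes (e.g. a sensor fault) to
# alternative behaviors (e.g. switching to a logical sensor).

class ContextStore:
    """In-memory stand-in for an RDF triple store."""

    def __init__(self):
        self.triples = set()
        self.subscribers = []  # list of (pattern, action) pairs

    def subscribe(self, pattern, action):
        """Register a variation point: pattern uses None as a wildcard."""
        self.subscribers.append((pattern, action))

    def add(self, triple):
        """Assert a fact and fire any matching reactions."""
        self.triples.add(triple)
        for pattern, action in self.subscribers:
            if self._matches(pattern, triple):
                action(triple)

    @staticmethod
    def _matches(pattern, triple):
        return all(p is None or p == t for p, t in zip(pattern, triple))


log = []
store = ContextStore()

# Variation point: on any fault, react by activating a logical sensor
# (here just logged; a real system would rebind the observation).
def on_fault(triple):
    sensor, _, _ = triple
    log.append(f"{sensor} faulty: activating logical sensor")

store.subscribe((None, "hasStatus", "Fault"), on_fault)

store.add(("tempSensor1", "observes", "roomTemperature"))  # no reaction
store.add(("tempSensor1", "hasStatus", "Fault"))           # triggers on_fault
```

The design choice mirrored here is the paper's declarative stance: behaviors are not hard-wired to devices but attached to queryable context patterns, so reconfiguration amounts to changing facts and subscriptions rather than code.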
{"title":"An architecture for context-aware reactive systems based on run-time semantic models","authors":"Ester Giallonardo, Francesco Poggi, D. Rossi, E. Zimeo","doi":"10.7287/peerj.preprints.27702v1","DOIUrl":"https://doi.org/10.7287/peerj.preprints.27702v1","url":null,"abstract":"In recent years, new classes of highly dynamic, complex systems are gaining momentum. These systems are characterized by the need to express behaviors driven by external and/or internal changes, i.e. they are reactive and context-aware. These classes include, but are not limited to IoT, smart cities, cyber-physical systems and sensor networks. An important design feature of these systems should be the ability of adapting their behavior to environment changes. This requires handling a runtime representation of the context enriched with variation points that relate different behaviors to possible changes of the representation. In this paper, we present a reference architecture for reactive, context-aware systems able to handle contextual knowledge (that defines what the system perceives) by means of virtual sensors and able to react to environment changes by means of virtual actuators, both represented in a declarative manner through semantic web technologies. To improve the ability to react with a proper behavior to context changes (e.g. faults) that may influence the ability of the system to observe the environment, we allow the definition of logical sensors and actuators through an extension of the SSN ontology (a W3C standard). In our reference architecture a knowledge base of sensors and actuators (hosted by an RDF triple store) is bound to real world by grounding semantic elements to physical devices via REST APIs. The proposed architecture along with the defined ontology try to address the main problems of dynamically reconfigurable systems by exploiting a declarative, queryable approach to enable runtime reconfiguration with the help of (a) semantics to support discovery in heterogeneous environment, (b) composition logic to define alternative behaviors for variation points, (c) bi-causal connection life-cycle to avoid dangling links with the external environment. The proposal is validated in a case study aimed at designing an edge node for smart buildings dedicated to cultural heritage preservation.","PeriodicalId":93040,"journal":{"name":"PeerJ preprints","volume":"1 1","pages":"e27702"},"PeriodicalIF":0.0,"publicationDate":"2019-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87760347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-04-30DOI: 10.7287/peerj.preprints.27694v1
M. Alberti
GIS techniques enable the quantitative analysis of geological structures. In particular, the topographic traces of geological lineaments can be compared with the theoretical traces of candidate geological planes in order to determine the best-fitting planes. qgSurf, a Python plugin for QGIS, implements this kind of processing, in addition to determining the best-fit plane through a set of topographic points, calculating the distances between topographic traces and geological planes, and basic stereonet plotting. By applying these tools to a case study of a Cenozoic thrust lineament in the Southern Apennines (Calabria, Southern Italy), we deduce the approximate orientations of the lineament in different fault-delimited sectors and calculate the misfits between the theoretical orientations and the actual topographic traces.
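The best-fit-plane step mentioned above can be sketched as an ordinary least-squares fit of z = a·x + b·y + c over a set of (x, y, z) topographic points, solved via the 3×3 normal equations. This is a generic illustration of the underlying geometry, not the plugin's actual code (which works through the QGIS APIs), and the sample points are invented for the example:

```python
import math

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) points,
    solved via the 3x3 normal equations using Cramer's rule."""
    n = len(points)
    sx  = sum(x for x, _, _ in points)
    sy  = sum(y for _, y, _ in points)
    sz  = sum(z for _, _, z in points)
    sxx = sum(x * x for x, _, _ in points)
    syy = sum(y * y for _, y, _ in points)
    sxy = sum(x * y for x, y, _ in points)
    sxz = sum(x * z for x, _, z in points)
    syz = sum(y * z for _, y, z in points)

    A   = [[sxx, sxy, sx],
           [sxy, syy, sy],
           [sx,  sy,  n ]]
    rhs = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for i in range(3):               # solve for a, b, c in turn
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        coeffs.append(det3(Ai) / d)
    return coeffs                    # [a, b, c]

# Invented sample points lying exactly on z = 2x + 3y + 1:
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
a, b, c = fit_plane(pts)

# The plane's dip angle follows from the gradient magnitude sqrt(a^2 + b^2):
dip_deg = math.degrees(math.atan(math.hypot(a, b)))
```

Note that an explicit-function fit like this cannot represent vertical planes; robust tools typically fit the plane in implicit form (e.g. via an eigen-decomposition of the point covariance), but the least-squares idea is the same.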
{"title":"GIS analysis of geological surfaces orientations: the qgSurf plugin for QGIS","authors":"M. Alberti","doi":"10.7287/peerj.preprints.27694v1","DOIUrl":"https://doi.org/10.7287/peerj.preprints.27694v1","url":null,"abstract":"GIS techniques enable the quantitative analysis of geological structures. In particular, topographic traces of geological lineaments can be compared with the theoretical ones for geological planes, to determine the best fitting theoretical planes. qgSurf, a Python plugin for QGIS, implements this kind of processing, in addition to the determination of the best-fit plane to a set of topographic points, the calculation of the distances between topographic traces and geological planes and also basic stereonet plottings. By applying these tools to a case study of a Cenozoic thrust lineament in the Southern Apennines (Calabria, Southern Italy), we deduce the approximate orientations of the lineament in different fault-delimited sectors and calculate the misfits between the theoretical orientations and the actual topographic traces.","PeriodicalId":93040,"journal":{"name":"PeerJ preprints","volume":"4 1","pages":"e27694"},"PeriodicalIF":0.0,"publicationDate":"2019-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84631488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}