Pub Date: 2020-05-15. DOI: 10.7287/peerj.preprints.3502v2
S. Banerjee
Intelligence and consciousness have fascinated humanity for a long time, and we have long sought to replicate them in machines. In this work, we present design principles for a compassionate and conscious artificial intelligence, together with a computational framework for engineering intelligence, empathy, and consciousness in machines. We hope that this framework will allow us to better understand consciousness and to design machines that are conscious and empathetic. Our hope is also that it will shift the discussion from fear of artificial intelligence towards designing machines that embed our cherished values. Consciousness, intelligence, and empathy are worthy design goals that can be engineered in machines.
Title: A framework for designing compassionate and ethical artificial intelligence and artificial consciousness. PeerJ preprints, e3502.
Pub Date: 2019-09-12. DOI: 10.7287/peerj.preprints.27959v1
Srishti Mishra, Zohair Shafi, S. Pathak
Data-driven decision making is becoming an increasingly important part of successful business execution. More and more organizations are moving towards making informed decisions based on the data they generate. Most of these data are in temporal format, i.e. time series data. Analysing time series data sets effectively, efficiently, and quickly is a challenge. The most interesting and valuable part of such analysis is generating insights on correlation and causation across multiple time series data sets. This paper looks at methods that can be used to analyse such data sets and gain useful insights from them, primarily in the form of correlation and causation analysis. It focuses on two methods, a Two Sample Test with Dynamic Time Warping and Hierarchical Clustering, and looks at how the results returned by both can be used to gain a better understanding of the data. Moreover, the methods are meant to work with any data set, regardless of the subject domain and the idiosyncrasies of the data set; in other words, a data-agnostic approach.
Title: Time series event correlation with DTW and Hierarchical Clustering methods. PeerJ preprints, e27959.
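The abstract does not spell out the two-sample-test procedure, but both methods rest on a pairwise DTW distance between series. As a minimal illustration, here is a pure-Python sketch of the classic dynamic-programming DTW distance; the function name and toy series are ours, not the paper's:

```python
def dtw(a, b):
    # Classic O(len(a)*len(b)) dynamic-programming DTW distance:
    # D[i][j] = local cost + cheapest of the three predecessor cells.
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Toy usage: pairwise distances between three short series.
series = {"a": [0, 1, 2, 1, 0], "b": [0, 0, 1, 2, 1, 0], "c": [5, 5, 5, 5]}
dist = {(i, j): dtw(series[i], series[j])
        for i in series for j in series if i < j}
```

Feeding such a pairwise DTW distance matrix into an off-the-shelf hierarchical-clustering routine (e.g. `scipy.cluster.hierarchy.linkage`) then groups series that warp onto each other cheaply, which is the gist of the correlation analysis described above.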
A mobile ad hoc network (MANET) is a collection of nodes operating without any fixed physical infrastructure such as access points or routers. MANETs are exposed to the same kinds of threats as other wireless mobile communication systems. In an ad hoc network, nodes act both as communication end-points and as routers, which makes ad hoc routing protocols especially prone to security attacks. The Black Hole attack is a common security issue in MANET routing protocols: a malicious node poses as the node with the shortest hop count to the destination node (DS) during packet transmission. Such a node pretends to have the minimum-hop-count route to the destination, responds positively to all route request (RREQ) messages, and thereby attracts all transmissions to itself. The source node (SN), unaware of the node's malicious nature, transmits its important data, and the Black Hole node then discards all the data packets. In this paper, a comparatively effective, efficient, and easily implemented way of identifying, and therefore eluding, Black Hole attacks in mobile ad hoc networks is presented. The Network Simulator (NS-2) has been used to implement the proposed solution and assess it in terms of network routing load, end-to-end delay, and packet delivery ratio. The results show a considerable improvement in these performance metrics.
Title: Securing ad hoc on-demand distance vector routing protocol against the black hole DoS attack in MANETs. Authors: Rohi Tariq, Sheeraz Ahmed, Raees Shah Sani, Zeeshan Najam, Shahryar Shafique. Pub Date: 2019-08-17. DOI: 10.7287/peerj.preprints.27905v1. PeerJ preprints, e27905.
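The abstract does not give the paper's detection rule, so the sketch below illustrates one widely used heuristic against black hole nodes in AODV-style routing: distrusting RREPs whose advertised destination sequence number jumps implausibly far ahead of the last value the source has seen, since black hole nodes fabricate "fresh" routes this way. The function name, field names, and threshold are hypothetical:

```python
def filter_rreps(rreps, last_known_seq, max_jump=30):
    # Split incoming RREPs into trusted and suspicious lists.
    # A destination sequence number far beyond anything the source
    # has previously observed is a typical black-hole signature.
    trusted, suspicious = [], []
    for r in rreps:
        if r["dest_seq"] - last_known_seq > max_jump:
            suspicious.append(r)
        else:
            trusted.append(r)
    return trusted, suspicious
```

In a real protocol stack this check would sit in the RREP-handling path before the route table is updated; the threshold would need tuning per network, which is one reason schemes like the paper's are evaluated in NS-2 rather than analytically.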
Pub Date: 2019-08-06. DOI: 10.7287/PEERJ.PREPRINTS.27885V1
David Lähnemann, Johannes Köster, E. Szczurek, Davis J. McCarthy, S. Hicks, M. Robinson, C. Vallejos, N. Beerenwinkel, Kieran R. Campbell, A. Mahfouz, Luca Pinello, P. Skums, A. Stamatakis, Camille Stephan-Otto Attolini, Samuel Aparicio, J. Baaijens, M. Balvert, B. D. Barbanson, A. Cappuccio, G. Corleone, B. Dutilh, M. Florescu, V. Guryev, Rens Holmer, Katharina Jahn, Thamar Jessurun Lobo, Emma M. Keizer, Indu Khatri, S. Kiełbasa, J. Korbel, Alexey M. Kozlov, Tzu-Hao Kuo, B. Lelieveldt, I. Măndoiu, J. Marioni, T. Marschall, Felix Mölder, A. Niknejad, Lukasz Raczkowski, M. Reinders, J. Ridder, A. Saliba, A. Somarakis, O. Stegle, Fabian J Theis, Huan Yang, A. Zelikovsky, A. Mchardy, Benjamin J. Raphael, Sohrab P. Shah, A. Schönhuth
The recent upswing of microfluidics and combinatorial indexing strategies, further enhanced by very low sequencing costs, has turned single-cell sequencing into an empowering technology; analyzing thousands, or even millions, of cells per experimental run is becoming a routine assignment in laboratories worldwide. As a consequence, we are witnessing a data revolution in single-cell biology. Although some issues are similar in spirit to those experienced in bulk sequencing, many of the emerging data science problems are unique to single-cell analysis; together, they give rise to the new realm of 'Single-Cell Data Science'. Here, we outline twelve challenges that will be central in bringing this new field forward. For each challenge, the current state of the art in terms of prior work is reviewed, and open problems are formulated, with an emphasis on the research goals that motivate them. This compendium is meant to serve as a guideline for established researchers, newcomers, and students alike, highlighting interesting and rewarding problems in 'Single-Cell Data Science' for the coming years.
Title: 12 Grand Challenges in Single-Cell Data Science. PeerJ preprints, e27885.
Pub Date: 2019-08-01. DOI: 10.7287/PEERJ.PREPRINTS.27880V1
H. M. Peixoto, R. Menezes, John Victor Alves Luiz, A. M. Henriques-Alves, Rossana Moreno Santa Cruz
The computational tool developed in this study is based on convolutional neural networks and the You Only Look Once (YOLO) algorithm, and detects and tracks mice in videos recorded during behavioral neuroscience experiments. We analyzed a data set of 13,622 images drawn from the behavioral videos of three important studies in this area. The training set used 50% of the images, with 25% for validation and 25% for testing. The results show that the mean Average Precision (mAP) reached by the developed system was 90.79% and 90.75% for the Full and Tiny versions of YOLO, respectively. Given the high accuracy of these results, the developed tool allows experimentalists to track mice in a reliable and non-invasive way.
Title: Mice tracking using the YOLO algorithm. PeerJ preprints, e27880.
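The 50/25/25 split described above is straightforward to reproduce. A minimal sketch (the function name and seed are ours); on the paper's 13,622 images, integer rounding yields 6,811 training, 3,405 validation, and 3,406 test images:

```python
import random

def split_dataset(items, train=0.5, val=0.25, seed=0):
    # Deterministic shuffle-and-slice split; the test share is
    # whatever remains after the train and validation slices.
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Fixing the seed makes the split reproducible across runs, which matters when comparing the Full and Tiny YOLO variants on identical data.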
Pub Date: 2019-08-01. DOI: 10.7287/PEERJ.PREPRINTS.27881V1
G. Gabriel, Afonso Teberga Campos, Aline de Lima Magacho, Lucas Cavallieri Segismondi, F. F. Vilela, José Antonio de Queiroz, J. A. B. Montevechi
Background. Discrete Event Simulation (DES) and Lean Healthcare are efficient management tools that assist in the quality and efficiency of health services. In this vein, the purpose of this study is to use Lean principles jointly with DES to plan the expansion of a Canadian emergency department and to meet the demand arriving from small care centres that have been closed. Methods. For this, we used the simulation and modeling method. We simulated the emergency department in FlexSim Healthcare® software and, with Design of Experiments (DoE), defined the optimal number of locations and resources for each shift. Results. The results show that the ED cannot meet the expected demand in its current state. Only 17.2% of the patients were completely treated, and the Length of Stay (LOS) was, on average, 2213.7 minutes, with a confidence interval of (2131.8 - 2295.6) minutes. However, after changing the decision variables, the proportion of treated patients increased to 95.7% (approximately 600%). Average LOS decreased to 461.2 minutes, with a confidence interval of (453.7 - 468.7) minutes, a reduction of about 79.0%. In addition, the study shows that the emergency department staff are balanced, according to Lean principles.
Title: Lean healthcare integrated with discrete event simulation and design of experiments: an emergency department expansion. PeerJ preprints, e27881.
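The paper models the department in FlexSim Healthcare®; as a language-neutral illustration of the same discrete-event idea, here is a minimal stdlib-only sketch of a multi-server queue that tracks length of stay. All parameters, distributions, and names are invented for illustration, not taken from the paper:

```python
import heapq
import random

def simulate_ed(n_patients=1000, arrival_mean=5.0, service_mean=4.0,
                servers=2, seed=1):
    # Minimal M/M/c-style emergency-department sketch: exponential
    # interarrival and service times, FIFO discipline, c servers.
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / arrival_mean)
        arrivals.append(t)
    free_at = [0.0] * servers          # time each server next becomes idle
    heapq.heapify(free_at)
    los = []
    for arr in arrivals:
        start = max(arr, heapq.heappop(free_at))   # wait for earliest server
        service = rng.expovariate(1.0 / service_mean)
        heapq.heappush(free_at, start + service)
        los.append(start + service - arr)          # LOS = waiting + service
    return sum(los) / len(los)                     # mean length of stay
```

Running such a model across a grid of `servers` and staffing values is, in miniature, what the paper's Design of Experiments does to find the configuration that brings LOS down.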
Pub Date: 2019-07-15. DOI: 10.7287/PEERJ.PREPRINTS.27858V3
Loïc Fürhoff
Although the notion of 'too many markers' has been mentioned in several studies, in practice displaying hundreds of Points of Interest (POIs) on a two-dimensional web map with acceptable usability remains a real challenge. Web practitioners often make excessive use of clustering aggregation to overcome performance bottlenecks, without successfully resolving issues of perceived performance. This paper tries to raise broad awareness by identifying sample issues that describe the general reality of clustering, and provides a pragmatic survey of potential technology optimisations. Finally, we discuss the usage of these technologies and the lack of documented client-server workflows, along with the need to broaden our view of the various clutter-reduction methods.
Title: Rethinking the usage and experience of clustering markers in web mapping. PeerJ preprints, e27858.
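Most client-side marker clusterers of the kind the paper critiques reduce to bucketing markers into fixed-size grid cells at the current zoom level. A minimal sketch of that idea (cell size, coordinates, and function name are ours):

```python
def grid_cluster(points, cell=0.5):
    # Bucket (lon, lat) points into fixed-size grid cells; each bucket
    # becomes one cluster marker. Shrinking `cell` as the user zooms in
    # is the usual way clusterers break apart at higher zoom levels.
    clusters = {}
    for lon, lat in points:
        key = (int(lon // cell), int(lat // cell))
        clusters.setdefault(key, []).append((lon, lat))
    return clusters
```

The perceived-performance issues the paper describes arise precisely because this bucketing is often recomputed on the client for every pan and zoom instead of being prepared server-side.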
Pub Date: 2019-07-06. DOI: 10.7287/peerj.preprints.27849/supp-1
Yaohua Xie
Super-resolution microscopes (such as STED) illuminate samples with a tiny spot and achieve very high resolution, but structures smaller than the spot cannot be resolved this way. We therefore propose a technique, termed "Deconvolution after Dense Scan (DDS)", to solve this problem. First, a preprocessing stage eliminates the optical uncertainty of the peripheral areas around the sample's region of interest (ROI). Then, the ROI is scanned densely together with its peripheral areas. Finally, the high-resolution image is recovered by deconvolution. The proposed technique requires little modification of the apparatus and is performed mainly in software. Simulation experiments show that the technique can further improve the resolution of super-resolution microscopes.
Title: Improving the resolution of microscope by deconvolution after dense scan. PeerJ preprints, e27849.
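The abstract does not specify which deconvolution algorithm DDS uses; a common choice for microscope images is Richardson-Lucy iteration, sketched here in 1-D with a toy point-spread function. The PSF, iteration count, and function names are our assumptions:

```python
def convolve(signal, kernel):
    # 'Same'-size 1-D convolution with zero padding at the borders.
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iters=50):
    # Classic multiplicative Richardson-Lucy update:
    # estimate <- estimate * (psf^T conv (observed / (psf conv estimate)))
    est = [1.0] * len(observed)
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf[::-1])   # correlate with flipped PSF
        est = [e * c for e, c in zip(est, correction)]
    return est
```

On a noiseless blurred spike the iteration progressively re-concentrates the energy at the original location, which is the sense in which deconvolution "recovers" structure below the spot size, given a well-characterised PSF.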
Pub Date: 2019-06-25. DOI: 10.7287/PEERJ.PREPRINTS.27822V1
A. Kmoch, E. Uuemaa, H. Klug
Geographical Information Science (GIScience), also called Geographical Information Science and Systems, is a multi-faceted research discipline comprising a wide variety of topics. Investigation into data management and the interoperability of geographical and environmental data sets for scientific analysis, visualisation, and modelling is an important driver of the Information Science aspect of GIScience, which underpins comprehensive Geographical Information Systems (GIS) and Spatial Data Infrastructure (SDI) research and development. In this article we present the 'Grounded Design' method, a fusion of Design Science Research (DSR) and Grounded Theory (GT), and show how these can act as guiding principles to link GIScience, Computer Science, and Earth Sciences into a converging GI systems development framework. We explain how this bottom-up research framework can yield holistic and integrated perspectives when designing GIS and SDI systems and software. This would allow GIScience academics and GIS and SDI practitioners alike to reliably draw on interdisciplinary knowledge to consistently design and innovate GI systems.
Title: Grounded Design and GIScience - A framework for informing the design of geographical information systems and spatial data infrastructures. PeerJ preprints, e27822.
Pub Date: 2019-06-24. DOI: 10.7287/PEERJ.PREPRINTS.27820V1
M. El-Dosuky, G. El-adl
There is no doubt that the blockchain has become an important technology that is establishing itself in practice. With the increasing demand for this technology, it is necessary to develop and update techniques for combining it with other technologies, especially in the field of cyber-security, which is a vital and important area. This paper discusses the integration of Recurrence Qualitative Analysis (RQA) with the blockchain, as well as the technical details of how RQA operates to increase blockchain security. The paper finds significant improvements compared to previous methods.
Title: Data security analysis based on Blockchain Recurrence Qualitative Analysis (BRQA). PeerJ preprints, e27820.
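Recurrence analysis of any flavour builds on the recurrence plot; the abstract gives no formulas, so as a minimal grounding, here is the basic recurrence matrix and recurrence rate that such methods start from. The threshold, series, and function names are ours, not the paper's BRQA scheme:

```python
def recurrence_matrix(series, eps):
    # R[i][j] = 1 when |x_i - x_j| < eps: the basic recurrence plot
    # of a scalar series. Diagonal entries are always 1.
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0
             for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    # Fraction of recurrent points: the simplest RQA measure.
    n = len(R)
    return sum(map(sum, R)) / float(n * n)
```

Applied to a stream of blockchain metrics (e.g. transaction rates), structured patterns in the recurrence matrix would be the raw material any recurrence-based anomaly or security analysis works from.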