In the last decade, web-based applications have grown exponentially, especially with the wave of smartphones that has made both mobile and web applications accessible everywhere. This internet revolution has played a major role in the growing complexity of today's software applications. Consequently, quality assurance has become indispensable for a web application to survive in such a highly competitive industry. Different methodologies have been proposed in the literature to test the quality of a system. Evaluating the test suite and improving the quality of the test sets is one of the best-known approaches in the field; this technique is known as mutation testing. The purpose of this paper is to demonstrate the application of mutation testing to JavaScript, the language most widely used in the development of web-based applications. Additionally, the paper introduces a new approach to reduce the cost of mutation operators through a novel taxonomy based on the blocks encapsulated within predicates.
{"title":"Reducing the cost of mutation operators through a novel taxonomy: application on scripting languages","authors":"Issar Arab, Safae Bourhnane","doi":"10.1145/3220228.3220264","DOIUrl":"https://doi.org/10.1145/3220228.3220264","url":null,"abstract":"In the last decade, web-based applications have grown exponentially, especially with the wave of smart phones which has made both mobile and web applications accessible everywhere. This new revolution of internet played a major role in the growing complexity of today's software applications. Consequently, the quality assurance of such applications has become an inevitable feature for a web application to survive in such a highly competitive industry. Different methodologies have been proposed by the literature to test the quality of a system. Evaluating the test suite and improving the quality of the test sets is one of most known approaches used in the field. This technique is known as mutation testing. The purpose of this paper is to demonstrate the application of mutation testing technique on the widely used language in the development of web-based applications, JavaScript. Additionally, the paper will introduce a new approach to reduce the cost of mutation operators through the implementation of a novel taxonomy based on the blocks encapsulated within predicates.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125925619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. Belyakov, A. Bozhenyuk, M. Belyakova, S. Zubkov
This paper is devoted to the problem of representing and using experience in planning and implementing logistics projects in the field of e-business using geoinformation models. A geoinformation model can be presented as a description of the part of the territory on which the logistics project is implemented. The description consists of maps, charts and plans, as well as links to non-cartographic sources of information. The reliability of the generated solutions is taken as the quality criterion. The reliability of a project means the practical possibility of its implementation with minimal losses. Factors determining reliability in this sense are considered. The figurative representation of knowledge about precedents of logistics projects is analyzed. A distinctive feature of the figurative representation is the mapping of permissible transformations of the components of a logistics project onto cartographic objects; most often these are polygonal objects representing the locations of logistics centers and transportation trajectories. The possibility of transferring an allowable transformation into another space-time domain while observing topological constraints is considered as a way of preserving the meaning of the project. A method of transforming images into a given region of the territory is described. An example of a completed project with an estimate of its reliability is given.
{"title":"Models experience presentation for logistics projects implementation based on geoinformation models","authors":"S. Belyakov, A. Bozhenyuk, M. Belyakova, S. Zubkov","doi":"10.1145/3220228.3220239","DOIUrl":"https://doi.org/10.1145/3220228.3220239","url":null,"abstract":"This paper is devoted to the study of representation problem and use of experience in planning and implementing logistics projects in the field of e-business using geoinformation models. Geoinformation models can be presented as a description of the site of the territory on which the logistics project is implemented. The description consists of maps, charts and plans, as well as links to non-cartographic sources of information. The role of the quality criterion is considered as the reliability of the generated solutions. The reliability of the project is means the practical possibility of its implementation with minimal losses. Factors determining reliability in this sense are considered. The figurative representation of knowledge about the precedents of logistics projects is analyzed. Mapping of permissible transformations of the components of a logistics project in the form of cartographic objects is a distinctive feature of a figurative representation. Most often they are polygonal objects of location of logistics centers and transportation trajectories. The possibility of transferring an allowable transformation into another space-time domain with observance of topological constraints is considered as a way of preserving the meaning of the project. A method of transforming images into a given region of the territory is described. An example of a completed project with an estimate of reliability is given.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"2003 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128287168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Currently, there is a tendency toward the collective use of Computer Aided Design (CAD) systems for electronic circuits. The information basis of such systems is a database server containing all the necessary background and design information, accessible from the workstations. Therefore, a relevant task is the development of integrated databases of electronic components for both analog and digital circuits. Modern CAD systems for electronic equipment support the so-called library design method. Its essence is that the designed object is decomposed into elementary pieces called structural primitives.
{"title":"Research and development of integrated data model of circuit components for CAD of electronic circuits","authors":"M. Abu-Dawas","doi":"10.1145/3220228.3220256","DOIUrl":"https://doi.org/10.1145/3220228.3220256","url":null,"abstract":"Currently, there is a tendency for collective use of Computer Aided Design of electronic circuits. As an information of similar systems that uses a database server that contains all the necessary background and design information that is accessible from the workstations. Therefore, the actual task is the development of integrated databases, electronic components, for both analog and digital circuits. Modern CAD systems support the electronic equipment, so called, the library design method. The essence of it lies in the fact that the development of an object is detailed to some elementary pieces, called structural primitives.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124073830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joel Tanzouak, Ndiouma Bame, B. Yenke, Idrissa Sarr
Forecast data provided by meteorological stations (MS) are crucial for Flood Forecasting Systems (FFS). These data are mainly related to temperature and precipitation. However, having enough MS to produce a sufficient amount of such data is challenging due to the high cost of their setup and maintenance. As a consequence, it is almost impossible to get flood predictions in some regions because of the lack of meteorological forecast data. One solution to overcome this drawback is to extend the validity of the data of a given area to another one. That is, we aim to use the MS of region A to estimate the data region B would provide if it had its own MS. In this respect, we propose an extension of MS forecast capacity by introducing a data analysis system based on a linear correlation technique. The system uses data collected from sensor networks installed in a given area not covered by an MS together with data from a reference area that has an MS. Afterwards, it checks whether there is a linear correlation between the data of the two zones. In the affirmative case, a correlation function between the two areas is deduced and used for estimating the data of the area without an MS. The results obtained from empirical experiments show the feasibility of our approach and its benefits.
{"title":"A data analysis system to extend the coverage capacity of meteorological stations for flood forecasting","authors":"Joel Tanzouak, Ndiouma Bame, B. Yenke, Idrissa Sarr","doi":"10.1145/3220228.3220235","DOIUrl":"https://doi.org/10.1145/3220228.3220235","url":null,"abstract":"Forecast Data provided by meteorological stations (MS) are crucial for Flood Forecasting Systems (FFS). These data are mainly related to temperature and precipitation. However, having enough MS to produce paramount of such data is challenging due to the high cost of their set up as well as their maintenance. As a consequence, it is almost impossible to get flood predictions in some regions due to the lack of meteorological forecast data. One solution to overcome such a drawback is to envision extending the data validity of a given area to another one. That is, we aim at using a MS of region A for estimating data we may have in region B if ever it had its own MS. In this respect, we propose an extension of MS forecast capacity by introducing a data analysis system based on a linear correlation technique. The system uses data collected from sensors networks installed on a given area not covered by a MS with data from a reference area that has a MS. Afterwards, it checks whether there is a linear correlation between the data of the two zones. In the affirmative case, a correlation function is deduced between the two areas and will be used for estimating data of the area without a MS. The results obtained from empiric experiments show the feasibility of our approach and its benefits.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114862817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years there has been considerable growth in the use of mathematical modeling to analyze and predict temperature distribution in process engineering. In this work, the analysis of temperature distribution during the sterilization of two model liquids in two-dimensional cans is presented using computational fluid dynamics (CFD). The partial differential equations of continuity, energy, and momentum were solved numerically using the CFD software package PHOENICS version 3.5, which is based on the finite volume method (FVM). All the physical properties of the liquid food used in this study were assumed constant except viscosity and density. The results of the simulations are presented in the form of transient temperature profiles. The simulations clearly show the temperature distribution during the whole process, the shape and movement of the Slowest Heating Area (SHA), and the formation of the secondary flow.
{"title":"Using computer simulation and computational fluid dynamics in analysis of temperature distribution in thermal sterilization process","authors":"A. Shawaqfeh, Ghani Albaali","doi":"10.1145/3220228.3220258","DOIUrl":"https://doi.org/10.1145/3220228.3220258","url":null,"abstract":"There is a large growing in the last years in using of mathematical modeling to analyze and predict the temperature distribution in the processing engineering. In this work, the analysis of temperature distribution during the sterilization of two liquid models in two dimensional cans was presented using computational fluid dynamics (CFD). The partial differential equations of continuity, energy, and momentum were solved numerically using a CFD software package (PHOENICS) version 3.5, which is based on finite volume method of analysis (FVM). All the physical properties of the liquid food used in this study were assumed constant except those for viscosity and density. The results of the simulations were presented in the form of transient temperature. The simulations show clearly the profile of temperature distribution during the whole process, the shapes and movement of the Slowest Heating Area (SHA), and the formation of the secondary flow..","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129030352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Run-time collaboration among autonomous agents is a fundamental part of multi-agent oriented adaptive collaboration systems (ACSs) when an agent cannot accomplish a task by itself in a changing environment. Such collaborations are usually regulated by commitment protocols, which are typically defined at design-time. However, the agent commitments among participants that meld collaborative patterns are typically defined for static environments, which may not fit the needs of ACSs. To address this, we propose a Goal-Capability-Commitment (GCC) based mediation for run-time agent collaboration that combines an invalid diagnoser, capability reconciliation and commitment compensation. We elaborate on each of the GCC elements, focusing in particular on the feasibility afforded by (1) the ability to diagnose failures of capabilities and commitments caused by the changing environment, and (2) the ability of the mediation to select an appropriate capability or generate a new commitment in order to satisfy invalid goals. We apply the GCC model to a hospital medical-waste AGV transportation case study and illustrate our approach through experiments.
{"title":"GCC based agents collaboration mediation: a case study of medical waste AGV transportation","authors":"Wei Liu, Jing Wang, Deng Chen","doi":"10.1145/3220228.3220250","DOIUrl":"https://doi.org/10.1145/3220228.3220250","url":null,"abstract":"Autonomous agents collaboration at run-time is a fundamental part of multi-agent oriented adaptive collaboration systems (ACSs) when an agent cannot accomplish a task by itself under changing environment. Such collaborations are usually regulated by commitment protocols, which are typically defined at design-time. However, agent commitments among the participants to meld collaborative patterns typically defined in static environments, which may not fit the need of ACSs. To address this, we propose a Goal-Capability-Commitment (GCC) based mediation for agents collaboration at run-time that combines invalid diagnoser, capability reconciliation and commitment compensation. We elaborate on each of GCC elements, focusing in particular on the feasibility afforded by (1) the ability to diagnose the failures of capability and commitment leaded by the changing environments, and (2) the ability of mediation to select appropriate capability or generate new commitment in order to satisfy the invalid goals. We represent the GCC model in a hospital medical waste AGV transportation case study and illustrate our approach through experiments.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122321608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marco Antonio García Carrión, Medardo Delgado Paredes, Percy Huertas Niquen, Álvaro Fernández del Carpio
Requirements traceability is an important quality factor in the software life cycle, and many software standards demand its implementation, for example ISO/IEC 29110, a standard for very small entities; however, projects in small organizations often do not apply traceability adequately. The proposals found in the literature have a high level of abstraction; realizing a traceability information model for small software development organizations provides a concrete alternative, reducing the gap between theory and the reality of software development. We present a traceability information model for small organizations within the framework of ISO/IEC 29110; in this way organizations gain access to an illustrative and reusable traceability solution that allows them to improve the quality of software development. The article covers the definition of the models.
{"title":"Traceability information model for very small entities with ISO/IEC 29110","authors":"Marco Antonio García Carrión, Medardo Delgado Paredes, Percy Huertas Niquen, Álvaro Fernández del Carpio","doi":"10.1145/3220228.3220259","DOIUrl":"https://doi.org/10.1145/3220228.3220259","url":null,"abstract":"Requirements traceability is an important quality factor in the life cycle of the software, there is a lot of software standards that demands his implementation, for example, ISO/IEC 29110, a standard for very small entities; however, projects of small organizations do not perform an adequate application of traceability. In the literature there are proposals with a high level of abstraction; the realization of a traceability information model for small software development organizations provides a narrow alternative, reducing the gap between the theory and the reality in software development. We present a traceability information model for small organizations within the framework of ISO/IEC 29110, in this way organizations access an illustrative and reusable traceability solution that allows them to improve the quality of software development. The article contemplates the definition of the models.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121802942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A use case is a model delivered by the requirements engineering phase and is considered an input to the forthcoming design and test phases. A use case model is the simplest representation of an actor's interactions with the system in which the user is involved. Developing a use case model requires finding out the use cases themselves and the actors that use them to interact with the system. These two tasks are performed manually through the analyst's experience, starting from different sources of data. A user requirements document is a common source of data from which to develop a use case model. The extraction of actors and their actions (use cases) depends on the linguistic properties of each one. The aim of this paper is to define a new algorithmic approach for extracting actors and their use cases using the thematic role technique. This algorithmic approach has been manually tested on known examples and shown to be valid. The success of this technique will lead to the development of an Intelligent Computer Aided Software Engineering (I-CASE) tool that automatically extracts the actions and actors of a use case model from functional requirements using the Semantic Role Labelling (SRL) approach from Natural Language Processing (NLP).
{"title":"An algorithmic approach to extract actions and actors (AAEAA)","authors":"Eyad M. Jebril, A. Imam, Mohammad Al-Fayuomi","doi":"10.1145/3220228.3220247","DOIUrl":"https://doi.org/10.1145/3220228.3220247","url":null,"abstract":"Use case is a model delivered by requirements engineering phase, which is considered as an input to the forthcoming design phase and test phase. A use case model is a simplest representation of an actor's interactions with the system in which the user is involved. The development of a use case model requires the finding out the use case itself and the actor that uses this use case to interact with the system. These two tasks are achieved manually via analyst's experience, who starts with different sources of data to develop use case model. User requirements document is a common source of data that may be started with to develop use case model. The extracting of actors and their actions (use cases) is subjected to the linguistic properties of each on. The aim of this paper is to define a new algorithmic approach for extracting actors and their use cases by using thematic role technique. This algorithmic approach had been manually tested using known examples, and shown its validity. The success of this technique will lead to develop an Intelligent Computer Aided Software Engineering (I-CASE) tool that automatically extracts actions and actors of use case model from functional requirements by using Semantic Role Labelling (SRL) of Natural Language Processing (NLP) approach.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125199231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geospatial big data analytics is changing the way that businesses operate in many industries. Although a good number of research works on geospatial data analytics and real-time processing of large spatial data streams have been reported in the literature, only a few address the full geospatial big data analytics project lifecycle and the geospatial data science project lifecycle. Big data analysis differs from traditional data analysis primarily because of the volume, velocity and variety of the data being processed; one motivation for introducing a new framework is to address these challenges. Geospatial data science projects differ from most traditional data analysis projects because they can be complex and require more advanced technologies. For this reason, it is essential to have a process to govern the project and ensure that the project participants are competent enough to carry out the process. To this end, this paper presents a new geospatial big data mining and machine learning framework covering geospatial data acquisition, fusion, storage, management, processing, analysis, visualisation, modelling and evaluation. Having a good process for data analysis and clear guidelines for comprehensive analysis is always a plus for any data science project; it also helps to predict the required time and resources early in the process and to get a clear idea of the business problem to be solved.
{"title":"A new data science framework for analysing and mining geospatial big data","authors":"M. Saraee, Charith Silva","doi":"10.1145/3220228.3220236","DOIUrl":"https://doi.org/10.1145/3220228.3220236","url":null,"abstract":"Geospatial Big Data analytics are changing the way that businesses operate in many industries. Although a good number of research works have reported in the literature on geospatial data analytics and real-time data processing of large spatial data streams, only a few have addressed the full geospatial big data analytics project lifecycle and geospatial data science project lifecycle. Big data analysis differs from traditional data analysis primarily due to the volume, velocity and variety characteristics of the data being processed. One of a motivation of introducing new framework is to address these big data analysis challenges. Geospatial data science projects differ from most traditional data analysis projects because they could be complex and in need of advanced technologies in comparison to the traditional data analysis projects. For this reason, it is essential to have a process to govern the project and ensure that the project participants are competent enough to carry on the process. To this end, this paper presents, new geospatial big data mining and machine learning framework for geospatial data acquisition, data fusion, data storing, managing, processing, analysing, visualising and modelling and evaluation. Having a good process for data analysis and clear guidelines for comprehensive analysis is always a plus point for any data science project. It also helps to predict required time and resources early in the process to get a clear idea of the business problem to be solved.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127670361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to improve the Moroccan Health Emergency Services (HESs) at the regional level, we have established the HESs-RN (Regional Network) Geographic Information System (GIS) mapping on the basis of 1) proposing the structuring of HESs in the form of the HESs-RN modeled as a graph, 2) modeling the HESs-RN by considering a medical emergency event as an Occurrence of a Health Emergency event (OHE), and 3) proposing a Decision Support Model (DSM) to manage and control the different OHEs of the HESs-RN as well as possible, and in particular to determine, in real time, the fastest path for the patient's transfer. This DSM assumes that the HESs-RN nodes cooperate autonomously using best-practice HES protocols and that they are regulated by the Health Emergency Assistance node.
{"title":"GIS platform for the optimized management of the health emergency services regional network (HES-RN) in Morocco","authors":"Ibtissam Khalfaoui, A. Hammouche","doi":"10.1145/3220228.3220240","DOIUrl":"https://doi.org/10.1145/3220228.3220240","url":null,"abstract":"In order to improve the Moroccan Health Emergency Services (HESs) at the regional level, we have established the HESs-RN (Regional Network) Geographic Information System (GIS) mapping on the basis of 1) proposing the structuring of HESs in the form of the HESs-RN modeled as a graph 2) modeling the HESs-RN by considering a medical emergency event as the Occurrence of events of Health Emergency (OHE), 3) proposing a Decision Support Model (DSM) to manage and control at best the different HESs-RN's OHE, and in particular to determine, in real time, the fastest path for the patient's transfer. This DSM supposes that the HES-RN nodes cooperate autonomously using best practices HES protocols and that they are regulated by the Health Emergency Assistance node.","PeriodicalId":169105,"journal":{"name":"Proceedings of the International Conference on Geoinformatics and Data Analysis","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122420287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}