Many maturity models have been used to evaluate or classify e-government portals. When evaluating the electronic services provided to citizens, an appropriate e-government maturity model must be selected. Ecuadorian municipal portals have been developed to provide e-services to citizens. This paper presents a website analysis intended to encourage municipal leaders to improve their websites. A case study is presented to determine the level of development of small cities belonging to a small province of Ecuador, in terms of both population and area. The results show that the e-services offered by municipalities of different sizes differ, and that their municipal services need improvement.
{"title":"Maturity Model for Local E-Government: A Case Study","authors":"Sussy Bayona Oré, Vicente Morales Lozada","doi":"10.1145/3036331.3050419","DOIUrl":"https://doi.org/10.1145/3036331.3050419","url":null,"abstract":"Many maturity models have been used to evaluate or classify e-government portals. In the case of evaluating electronic services provided to citizens, an appropriate e-government maturity model should be selected. Ecuadorian municipalities' portals have been developed in order to provide citizens the e-services. This paper provides a website analysis in order to incentive to municipalities leaders to improve their websites. A case study is presented to determine the level of development of small cities that belong to a small province of Ecuador, as population and as area. The results show that there is a difference between the e-services that offer the municipalities with different sizes and it is necessary to improve their municipal services.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"3 1","pages":"78-83"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88974005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
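The classification idea behind staged maturity models can be sketched in a few lines. This is a hypothetical illustration, not the paper's model: the stage names follow common e-government maturity literature, and the score thresholds are invented.

```python
# Hypothetical sketch: classify a portal into a staged maturity level from a
# normalized feature-checklist score. Stage names and thresholds are
# illustrative, not taken from the paper under discussion.
STAGES = ["emerging", "enhanced", "transactional", "connected"]

def maturity_level(score: float) -> str:
    """Map a checklist score in [0, 1] to a maturity stage."""
    thresholds = [0.25, 0.5, 0.75]
    for stage, upper in zip(STAGES, thresholds):
        if score < upper:
            return stage
    return STAGES[-1]
```

In practice the score would come from auditing each portal against a feature checklist (static information, downloadable forms, online transactions, and so on), which is how such website analyses are typically operationalized.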
Cloud storage is used to expand the storage available on mobile devices and can be accessed through an internet connection. On mobile devices, network data can be expensive in terms of both tariff and power consumption. Deduplication can be used to reduce the amount of data to be transferred. This paper proposes a fast deduplication system for mobile devices named rapid deduplication for mobile cloud storage (RDM). RDM is based on smart deduplication for mobile cloud storage (SDM) and the rapid asymmetric maximum (RAM) algorithm. RDM uses RAM as the chunking algorithm instead of the Rabin algorithm used in SDM. Our experimental results show that RDM is 39.5% to 44.7% faster at the cost of a 4.2% to 9.1% larger post-deduplication size.
{"title":"RDM: Rapid Deduplication for Mobile Cloud Storage","authors":"R. N. S. Widodo, Hyotaek Lim","doi":"10.1145/3036331.3036357","DOIUrl":"https://doi.org/10.1145/3036331.3036357","url":null,"abstract":"Cloud storages are used to expand the storage on mobile devices. It can be accessed through an internet connection. On mobile devices, network data can be expensive in terms of tariff and power consumption. Deduplication can be used to reduce the amount of data to be transferred. This paper proposes a fast deduplication system for mobile devices named rapid deduplication for mobile cloud storage (RDM). RDM is based on smart deduplication for mobile cloud storage (SDM) and rapid asymmetric extremum (RAM). RDM used RAM as the chunking algorithm instead of Rabin which is used in SDM. Our experimental results show RDM is at least 39.5% to 44.7% faster at the cost of 4.2% to 9.1% higher size after deduplication.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"7 1","pages":"14-18"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83522081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
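The chunking step the abstract contrasts (RAM vs. Rabin) is a form of content-defined chunking. The following is a generic sketch in the asymmetric-extremum spirit, not the authors' implementation: cut a chunk once no byte larger than the current local maximum has been seen for a fixed window, then deduplicate by chunk hash.

```python
import hashlib

def ae_chunk(data: bytes, window: int = 16):
    """Content-defined chunking in the asymmetric-extremum spirit:
    cut once no byte exceeding the running local maximum has been seen
    for `window` positions. Illustrative only."""
    chunks, start = [], 0
    max_val, max_pos = -1, 0
    i = 0
    while i < len(data):
        if data[i] > max_val:
            max_val, max_pos = data[i], i
        elif i - max_pos >= window:
            chunks.append(data[start:i + 1])   # cut point found
            start = i + 1
            max_val, max_pos = -1, start       # reset for the next chunk
        i += 1
    if start < len(data):
        chunks.append(data[start:])            # trailing chunk
    return chunks

def dedup_size(chunks):
    """Bytes that must actually be transferred: one copy per unique chunk."""
    unique = {hashlib.sha256(c).digest(): c for c in chunks}
    return sum(len(c) for c in unique.values())
```

Because cut points depend only on content, repeated data produces identical chunks, which is what lets deduplication shrink the transfer before it hits the mobile network.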
Petri nets (PNs) are a central formalism for distributed systems; however, they turn out to be inadequate for modeling changes in a real system's layout. To cope with this major issue, a framework based on Reisig's algebraic PNs has recently been proposed. It consists of an algebraic net emulating a P/T system encoded in the net's inscriptions, together with a set of rewriting rules defined at the net level. This approach permits the reuse of classical techniques and comes with a sound initial semantics. In this paper the framework is extended to stochastic PNs, making performance analysis possible. The ability to set up state-dependent transition rates is retained.
{"title":"Rewritable Stochastic Petri Nets","authors":"L. Capra","doi":"10.1145/3036331.3036346","DOIUrl":"https://doi.org/10.1145/3036331.3036346","url":null,"abstract":"Petri nets (PNs) are a central formalism for distributed systems, though, PNs turn out to be inadequate to model changes in real systems' layout. To cope with such a major issue, a framework based on Reisig's algebraic PNs has been recently proposed. It consists of an algebraic net emulating a P/T system encoded in net's inscriptions, and a set of rewriting rules defined at the net level. This approach permits the reuse of classical techniques and is provided with a sound initial semantics. In this paper the framework extends to Stochastic PNs, to make performance analysis possible. The ability to set up state-dependent transition rates is retained.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"1 1","pages":"155-159"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83600767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
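The state-dependent rates mentioned above can be illustrated with a minimal stochastic Petri net simulator: each enabled transition samples an exponential delay from a rate that is a function of the current marking, and the fastest one fires. This is a generic textbook sketch, not the paper's algebraic framework.

```python
import random

def step(marking, transitions, rng):
    """Fire one transition of a stochastic Petri net via an exponential race.
    transitions: list of (pre, post, rate_fn), where pre/post map places to
    token counts and rate_fn(marking) gives a state-dependent firing rate."""
    enabled = [(pre, post, rate(marking))
               for pre, post, rate in transitions
               if all(marking.get(p, 0) >= k for p, k in pre.items())]
    if not enabled:
        return marking, None                       # dead marking
    samples = [(rng.expovariate(r), pre, post) for pre, post, r in enabled]
    dt, pre, post = min(samples, key=lambda s: s[0])  # race: earliest fires
    new = dict(marking)
    for p, k in pre.items():
        new[p] -= k                                # consume input tokens
    for p, k in post.items():
        new[p] = new.get(p, 0) + k                 # produce output tokens
    return new, dt
```

Passing a `rate_fn` closure rather than a constant is exactly what allows marking-dependent rates: for instance `lambda m: 0.5 * m["a"]` scales the rate with the tokens in place `a`.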
Today's data-intensive applications require processing large amounts of data on cluster systems consisting of commodity computing nodes with high-performance network interconnects. The individual computing nodes in a cluster system have identical resource specifications, although applications vary greatly in their resource needs. Given the resulting imbalance in memory usage across computing nodes, remote idle memory can be used to alleviate the memory pressure on individual nodes. We propose a remote memory block device (RMBD), a remote swapping technique that uses the remote memory access capabilities provided by both InfiniBand and RoCE. We evaluate the I/O performance of RMBD against that of a block device using TCP/IP protocols. Our performance experiments show that RMBD using RDMA improves throughput by up to 20 times for read operations and 6 times for write operations compared to TCP/IP. We demonstrate that RMBD used as a swap device can provide up to a 2x speedup over an HDD for quicksort.
{"title":"Performance Evaluation of a Remote Block Device with High-Speed Cluster Interconnects","authors":"Hyun-Hwa Choi, Kangho Kim, Dong-Jae Kang","doi":"10.1145/3036331.3050420","DOIUrl":"https://doi.org/10.1145/3036331.3050420","url":null,"abstract":"Today's data intensive applications require to process large amount of data on cluster systems that consist of commodity computing nodes with high performance network interconnects. Individual computing nodes in the cluster system have indentical resource specification although applications vary greatly in their resource needs. From imbalance of the amount of memory used different computing nodes, we can consider the use of remote idle memory to alleviate the memory pressure on individual computing nodes. We propose a remote memory block device (RMBD) using remote memory access capabilities provided by both Infiniband and RoCE as one of remote swapping techniques. We evaluates the I/O performance of RMBD with it of the block device using TCP/IP protocols. From performance experiments, we find that RMBD using RDMA is able to improve througput by up to 20 times in read operation and 6 times in write operation compared to it using TCP/IP. We demonstrate that RMBD as swap device can provide performance benefits that is up to 2 times speed up compared to HDD for quick-sorting.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"527 1","pages":"84-88"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77124406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social media sites have become a platform for interaction during disaster and emergency situations. Twitter, for instance, is found to be one of the top preferred platforms for sharing and seeking information. To reveal more about how users interact around a specific event or topic, Social Network Analysis (SNA) is utilized. Content analysis and SNA have been applied to tweets about different situations to understand and visually represent the interaction between users. In this study, SNA was employed to reveal the interaction of the Filipino community during two major typhoons that hit the Philippines. The derived social networks revealed both similarities and differences between the two events. Results also revealed that users tend to seek and share information from reliable sources such as news websites and verified Twitter users. Determining how Twitter users interact in an online community plays a vital role in information dissemination and enables an appropriate response during disaster and emergency situations.
{"title":"Social Network Analysis of Tweets on Typhoon during Haiyan and Hagupit","authors":"Ryan Rey M. Daga","doi":"10.1145/3036331.3036345","DOIUrl":"https://doi.org/10.1145/3036331.3036345","url":null,"abstract":"Social media sites have become a platform for interaction during disaster and emergency situations. Twitter, for instance, is found to be one of the top preferred for sharing and seeking information. To reveal more information about the interaction of users regarding a specific event or topic, Social Network Analysis (SNA) is utilized. Content analysis and SNA have been implemented on the tweets regarding different situations to be able to understand and depict a visual representation of interaction between users. In this study, SNA was employed to reveal the user interaction of the Filipino community between two major typhoons that hit the Philippines. Similarities and differences were revealed by the social networks that were derived. Results also revealed that users tend to seek and share information from reliable sources such as news websites and verified Twitter users. Determining the interaction of Twitter users in an online community plays a vital role in information dissemination and allows appropriate response during disaster and emergency situations.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"16 1","pages":"151-154"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73921331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
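The finding that news accounts act as information hubs falls out of simple centrality measures on the mention/retweet graph. A toy sketch with invented handles (not the study's data): in-degree counts how often an account is mentioned or retweeted, a basic proxy for hub status.

```python
from collections import Counter

# Invented retweet/mention edges (source -> target); handles are hypothetical.
edges = [
    ("resident1", "news_ph"), ("resident2", "news_ph"),
    ("resident3", "news_ph"), ("resident1", "weather_gov"),
    ("resident2", "weather_gov"), ("resident3", "resident1"),
]

# In-degree centrality: accounts mentioned/retweeted most often act as hubs.
indegree = Counter(target for _, target in edges)
hub, count = indegree.most_common(1)[0]
```

On real tweet collections the same computation (usually via a graph library) surfaces the news outlets and verified accounts the abstract describes as preferred information sources.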
Guoyan Cao, T. Luspay, B. Ebrahimi, K. Grigoriadis, M. Franchek, N. Marques, Jordan Wolf, G. Kramer
In this paper, we introduce a model-based simulator of hypotensive patients' blood pressure response to delivery of the vasopressor drug phenylephrine (PHP). The simulator is built on a model of the mean arterial pressure (MAP) response to PHP infusion. The model is a data-driven learning model shown to adequately describe inter- and intra-patient responses to PHP. Besides open-loop operation, such as manual PHP bolus injection and continuous infusion, the simulator provides a closed-loop control module, comprising an anti-windup PI controller, an adaptive controller, and an empirical controller, to regulate blood pressure at a target level and maintain hemodynamic stability in hypotensive patients. In addition, three scenarios that frequently occur in clinical treatment are modeled in a challenge module: sodium nitroprusside (SNP) treatment, baseline pressure drop, and hemorrhage. The simulator offers two interfaces: a MAP trend response interface and a real-time monitoring interface. The latter presents synchronized blood pressure waveforms, heart rate, and EtCO2 waveforms under open- and closed-loop treatment. The simulator can be used to train physicians on PHP dosing for hypotensive patients facing different challenges during treatment.
{"title":"Simulator for Simulating and Monitoring the Hypotensive Patients Blood Pressure Response of Phenylephrine Bolus Injection and Infusion with Open-loop and Closed-loop Treatment","authors":"Guoyan Cao, T. Luspay, B. Ebrahimi, K. Grigoriadis, M. Franchek, N. Marques, Jordan Wolf, G. Kramer","doi":"10.1145/3036331.3036341","DOIUrl":"https://doi.org/10.1145/3036331.3036341","url":null,"abstract":"In this paper, we introduce a model-based simulator for hypotensive patients' blood pressure response to vasopressor drug phenylephrine (PHP) delivery. The simulator is designed based on a model of the mean arterial pressure (MAP) response to PHP infusion. The model is data-driven learning model which is illustrated to be adequately describing inter - and intra patients' response to PHP. In the simulator, besides open loop operation, such as manual PHP bolus injection and continuous infusion, a closed-loop control module is also designed, including an anti-windup PI controller, an adaptive controller and an empirical controller, to regulate the blood pressure at target level and maintain hemodynamic stability in hypotensive patients. In addition, three frequent scenarios happened in clinical treatment are modeled in challenge module. They are sodium nitroprusside (SNP) treatment, baseline pressure drop and hemorrhage. The simulator can be operated with two different interfaces; one is the MPA trend response interface and the other is real-time monitoring interface. The real-time monitoring is real-time synchronization presenting blood pressure waves, heart rate and EtCO2 waves under open and closed-loop treatment. 
The simulator is capable to train the doctors on the dose of PHP usage for the hypotensive patients with different challenges during the treatment.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"24 1","pages":"175-181"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74752954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
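The anti-windup PI control mentioned in the abstract can be sketched on a minimal first-order response model. This is not the paper's patient model: the plant, gains, and dose limit below are all invented for illustration, and the anti-windup scheme shown is conditional integration (freeze the integrator while the actuator saturates).

```python
def simulate_map(setpoint=80.0, dt=0.1, t_end=60.0):
    """Minimal sketch: anti-windup PI regulating a first-order MAP response,
    d(delta)/dt = (-delta + K*u) / tau, with MAP = baseline + delta.
    All patient parameters and controller gains are invented."""
    K, tau = 4.0, 10.0            # drug gain (mmHg per dose unit), time constant (s)
    kp, ki, u_max = 0.5, 0.05, 5.0
    baseline, delta, integ = 65.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        error = setpoint - (baseline + delta)
        u = kp * error + ki * integ
        if 0.0 <= u <= u_max:          # conditional integration: only integrate
            integ += error * dt        # while the dose command is unsaturated
        u = min(max(u, 0.0), u_max)    # enforce dose limits
        delta += (-delta + K * u) * dt / tau   # Euler step of the plant
        t += dt
    return baseline + delta
```

Without the saturation guard, the integrator would keep accumulating during the initial saturated phase and the pressure would overshoot the target once the dose command came off the limit.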
Texture feature extraction, combined with texture feature detection and feature matching, solves many typical problems of image processing and computer vision, such as texture classification, pattern recognition, object detection, and image segmentation. This paper presents a new method for texture feature extraction that uses a Log-Gabor filter and the Singular Value Decomposition (SVD) algorithm. In the proposed model, sample images are first converted to gray-level images. Then, to elicit suitably distinctive texture orientations, a 2D Log-Gabor filter with various frequencies and orientations is applied, and the SVD is employed on each filtered gray-level image. Finally, the singular values from the SVD are used as the feature vector of this texture feature extraction model. A Naive Bayes classifier has been used for training and testing on the experimental datasets. The Log-Gabor and SVD based feature extraction shows improved performance, exhibiting higher classification accuracy on our test dataset compared to the conventional Gabor and SVD feature extraction method. Furthermore, to reduce the computational and time complexity, an NVIDIA GeForce GTX 780 GPU is used to implement the proposed model in parallel. The GPU implementation showed an average 3x per-image speedup over the conventional CPU implementation.
{"title":"A Novel Parallel Texture Feature Extraction Method using Log-Gabor Filter and Singular Value Decomposition (SVD)","authors":"Md. Aminur Rab Ratul, S. Raja, J. Uddin","doi":"10.1145/3036331.3036352","DOIUrl":"https://doi.org/10.1145/3036331.3036352","url":null,"abstract":"Texture feature extraction consolidated with texture feature detection and feature matching solves many typical problems of image processing and computer vision; such as, texture classification, pattern recognition, object detection, and image segmentation. Through this paper, a new method for texture feature extraction is presented which uses Log-Gabor Filter and Singular Value Decomposition (SVD) algorithm. In the proposed model, sample images are converted to gray level images. And then, to elicit suitable distinctive texture orientation, a 2D Log-Gabor filter with various frequencies and different edges disintegrated with the SVD employ on each converted gray level images. Finally, singular values of SVD used as feature vector for this texture feature extraction model. For training and testing of experimental datasets, Naive Bayes classifier has been used. The Log-Gabor and SVD based feature extraction shows improved performance by exhibiting higher classification accuracy for our tested dataset compare to conventional Gabor and SVD feature extraction method. Furthermore, in order to decrease the computational and time complexity, an NVIDIA GeForce GTX780 GPU is used to implement our proposed model in parallel. 
The GPU implementation of proposed model showed average 3X speedup for per image than conventional CPU implementation.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"16 1","pages":"187-191"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79191449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
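For background, the radial (frequency-domain) transfer function commonly used for log-Gabor filters, following Field's original formulation, is

```latex
G(f) \;=\; \exp\!\left(-\,\frac{\bigl(\log(f/f_{0})\bigr)^{2}}{2\bigl(\log(\sigma/f_{0})\bigr)^{2}}\right)
```

where $f_{0}$ is the filter's center frequency and $\sigma/f_{0}$ sets the bandwidth. Unlike an ordinary Gabor filter, $G(0)=0$, so the filter has no DC component; a bank over several $f_{0}$ values and orientations yields the multi-frequency, multi-orientation responses the method decomposes with the SVD.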
B. Suprapto, M. A. Heryanto, H. Suprijono, B. Kusumoputro
This paper proposes the use of Direct Inverse Control (DIC) with an Elman Recurrent Neural Network (ERNN) learning algorithm for the altitude control of a heavy-lift hexacopter. The study was conducted analytically using real flight data obtained from experiments on the real plant. The results showed that the ERNN can successfully control the altitude of the heavy-lift hexacopter, with the response generated by the DIC system in good agreement with the test data at low error. Furthermore, the proposed DIC system can also control the attitude, i.e., the roll, pitch, and yaw of the hexacopter, which is also crucial for controlling the hexacopter's movement.
{"title":"Altitude Control of Heavy-Lift Hexacopter using Direct Inverse Control Based on Elman Recurrent Neural Network","authors":"B. Suprapto, M. A. Heryanto, H. Suprijono, B. Kusumoputro","doi":"10.1145/3036331.3036354","DOIUrl":"https://doi.org/10.1145/3036331.3036354","url":null,"abstract":"This paper proposes the use of Direct Inverse Control (DIC) with Elman Recurrent Neural Network (ERNN) learning algorithm for the altitude control of a heavy-lift hexacopter. The study was conducted analytically using the real flight data obtained from real plant experiment. The results showed that the ERNN can successfully control the altitude of the heavy-lift hexacopter, where the response generated by the DIC system was in good agreement with the test data with low error. Furthermore, the proposed DIC system can also control the attitude, e.g. roll, pitch and yaw of the hexacopter which are also crucial for the hexacopter movement control.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"44 1","pages":"135-140"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75927701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
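What distinguishes an Elman network from a feedforward one is the context loop: the previous hidden state feeds back as an extra input at each step. A minimal scalar sketch (all weights and the input sequence are invented, and this is far smaller than any network a real controller would use):

```python
import math

def elman_step(x, h_prev, w_in, w_rec, b):
    """One step of a scalar Elman recurrent unit: the previous hidden state
    is fed back as context alongside the new input."""
    return math.tanh(w_in * x + w_rec * h_prev + b)

# Unroll the unit over a short, invented input sequence.
h = 0.0
for x in [0.1, 0.5, -0.2]:
    h = elman_step(x, h, w_in=1.0, w_rec=0.5, b=0.0)
```

In direct inverse control, a network like this (with many units) is trained to map desired outputs back to plant inputs, so the recurrence lets the controller account for the hexacopter's dynamics rather than only its instantaneous state.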
Software development is economically attractive, not merely for the software industry but for the whole economy, and many companies are interested in software economics. This paper introduces an approach to building a framework for simulation software that analyses the economics of software development processes. We seek to clarify the general characteristics of a simulation software design. Our framework was implemented in the software PAS (Process Analysis Studio). Next to the operation model, we illustrate a valid application model. Special emphasis is put on the hooks that enable further modifications. The presented framework facilitates the discussion of how simulation models are constructed: it divides the monolithic task into distinct model elements that have to be implemented in order to obtain a valid framework.
{"title":"The way of designing a simulation software in order to evaluate the economic performance in software development","authors":"David Kuhlen, A. Speck","doi":"10.1145/3036331.3036347","DOIUrl":"https://doi.org/10.1145/3036331.3036347","url":null,"abstract":"Software development is economically attractive, not merely in software industries but rather in the whole economy. Multiple companies are interested in software economics. This paper introduces an approach to build a framework of a simulation software to analyse the economics of software development processes. We seek to clarify the general characteristics of a simulation software design.\u0000 Our framework was implemented in the software PAS - Process analysis studio. Next to the operation model, we illustrated a valid application model. Special emphasis is put on the hooks which enable further modifications.\u0000 The presented framework facilitates the discussion on the construction of simulation models. It divides the monolithic phrase into different model elements which have to be implemented in order to obtain a valid framework.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"28 1","pages":"109-113"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80330261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability is a qualitative feature of software that has been the focus of numerous researchers. There are several measures for the quantitative evaluation of reliability, such as mean time to failure, mean time to repair, availability, security, and the probability of failure of any component in the system or of the whole system. In approaches that exploit use-case diagrams to predict reliability, the probability of failure of each use-case in the program is calculated, and from this information the reliability of the whole system is obtained. However, to our knowledge, those approaches assume the same probability of failure for every execution path within a use-case. The current study aims to improve the accuracy of past approaches by obtaining distinct failure probabilities for the various execution paths and thus calculating the reliability more precisely. A web-based transaction processing system is examined to predict and evaluate its reliability. As a result, the confidence interval of the probability of failure for the entire system is 0.0008, which is smaller than that of the baseline approach.
{"title":"A New Approach to Reliability Prediction in Component-based Systems","authors":"T. Hayat, Habib Seifzadeh","doi":"10.1145/3036331.3050421","DOIUrl":"https://doi.org/10.1145/3036331.3050421","url":null,"abstract":"Reliability is a qualitative feature of software that has been the focus of numerous researchers. There are several measures for quantitative evaluation of reliability, such as mean time to failure, mean time to repair, high availability, security and probability of failure of any component in the system or the whole system. In the approaches that exploit Use-case diagrams to predict reliability, the probability of failure of each use-case in the program is calculated, and using this information the reliability of whole system is achieved. However, to our knowledge those approaches consider the probability of failure of executive paths in each use-case the same. The current study intends toimprove the accuracy of pastapproach and obtain the probability of different failures for various execution paths and calculate the reliability more carefully. A web-based transaction processing system is checked to predict and evaluate the reliability. As a result, the confidence interval of the probability of failure for the entire system is .0008 which is smaller than that of the base approach.","PeriodicalId":22356,"journal":{"name":"Tenth International Conference on Computer Modeling and Simulation (uksim 2008)","volume":"98 1","pages":"89-94"},"PeriodicalIF":0.0,"publicationDate":"2017-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81064805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
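The core idea of weighting distinct per-path failure probabilities by an execution profile can be stated in a few lines. This is a generic illustration of that idea, not the paper's model, and the profile numbers below are invented.

```python
def path_reliability(fail_probs):
    """An execution path succeeds only if every step on it succeeds."""
    r = 1.0
    for p in fail_probs:
        r *= 1.0 - p
    return r

def system_reliability(paths):
    """paths: list of (execution_probability, [per-step failure probs]).
    Weight each path's reliability by how often that path is exercised."""
    return sum(w * path_reliability(ps) for w, ps in paths)

# Invented usage profile: two execution paths of one use-case,
# exercised 70% and 30% of the time respectively.
profile = [(0.7, [0.001, 0.002]), (0.3, [0.01])]
r = system_reliability(profile)
```

Collapsing both paths to a single shared failure probability, as the earlier approaches do, loses exactly the weighting that makes `r` sensitive to which path actually runs more often.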