Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397204
T. Schrader, K. Loewe, Lutz Pelchen, Eberhard Beck
5%-10% of all medical procedures are erroneous. It is estimated that about 10,000 people in Germany die due to errors in medical interventions. Clinical risk management analyses the reasons for errors and avoidable events, in most cases retrospectively. Approaches such as the Hazard and Operability Study (HAZOP) are applied routinely in technical environments. In a small number of medicine-specific procedures, especially in laboratories, risk management demands HAZOP. In the clinical domain this approach is not used because of the difficulty of describing medical processes and tasks with all their different kinds of properties.
{"title":"Prospective, knowledge based clinical risk analysis: The OPT-model","authors":"T. Schrader, K. Loewe, Lutz Pelchen, Eberhard Beck","doi":"10.1109/INTELCIS.2015.7397204","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397204","url":null,"abstract":"5%-10% of all medical procedures are erroneous. It is estimated that about 10.000 persons in Germany die due to errors in medical interventions. The clinical risk management analyses the reasons for errors and avoidable events, in most cases retrospectively. Approaches such as Hazards & Operability Study (HAZOP) are applied routinely in technical environments. In a quite small number of rather medical specific procedures especially in laboratories the risk management demands HAZOP. In the clinical domain this approach is not used due to the difficulties to describe medical processes and task with all different kinds of properties.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"89 3","pages":"94-99"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72617552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397245
Amira Ali, N. Badr
Software testing is a vital activity undertaken during the software engineering life cycle to ensure software quality and reliability. Performance testing is a type of software testing that shows how a web application behaves under a given workload. Cloud computing, as an emerging technology, can be used in software engineering to provide cloud testing, overcoming the deficiencies of conventional testing by leveraging cloud computing resources. As a result, testing-as-a-service (TaaS) has been introduced as a service model that performs all testing activities in a fully automated manner, on demand, on a pay-per-use basis. Moreover, TaaS increases testing efficiency and reduces the time and cost required for testing. In this paper, a performance TaaS framework for web applications is introduced that provides all performance testing activities, including automatic test case generation and test execution. In addition, the proposed framework addresses issues such as maximizing resource utilization and continuous monitoring to ensure system reliability.
{"title":"Performance testing as a service for web applications","authors":"Amira Ali, N. Badr","doi":"10.1109/INTELCIS.2015.7397245","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397245","url":null,"abstract":"Software testing is a vital activity that is undertaken during software engineering life cycle to ensure software quality and reliability. Performance testing is a type of software testing that is done to shows how web application behaves under a certain workload. Cloud computing as an emerging technology can be used in the field of software engineering to provide cloud testing in order to overcome all deficiencies of conventional testing by leveraging cloud computing resources. As a result, testing-as-a-service (TaaS) is introduced as a service model that performs all testing activities in fully automated manner, on demand with a pay-for use basis. Moreover, TaaS increases testing efficiency and reduces time and cost required for testing. In this paper, performance TaaS framework for web applications is introduced which provides all performance testing activities including automatic test case generation and test execution. In addition, the proposed framework addresses many issues as: maximize resource utilization and continuous monitoring to ensure system reliability.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"57 1","pages":"356-361"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84272092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397235
D. Khattab, H. M. Ebeid, M. Tolba, A. S. Hussein
Automatic Multi-label GrabCut extends the standard GrabCut technique to segment a given image automatically into its natural segments without any user intervention. The Normalized Probabilistic Rand (NPR) index gives meaningful comparisons across different images and across different segmentations of the same image. In this paper, further analysis is conducted to evaluate the efficiency of the developed automatic multi-label GrabCut using the NPR index. Using more than one human ground truth, segmentations are conducted on a large portion of the Berkeley benchmark of natural images. The NPR, PR and GCE metrics produced acceptable accuracy measures, emphasizing the scalability of the proposed technique for large-scale datasets. Comparisons are made across different images, and experiments show that the NPR is the most effective score for determining good segmentation compared to the other metrics.
{"title":"Analysis of Automatic Multi-label GrabCut using NPR for natural image segmentation","authors":"D. Khattab, H. M. Ebeid, M. Tolba, A. S. Hussein","doi":"10.1109/INTELCIS.2015.7397235","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397235","url":null,"abstract":"Automatic Multi-label GrabCut is an extension of the standard GrabCut technique to segment a given image automatically into its natural segments without any user intervention. The Normalized Probabilistic Rand (NPR) index is able to give meaningful comparisons by comparing different images and different segmentations of the same image. In this paper, more analysis is conducted to evaluate the efficiency of the developed automatic multi-label GrabCut using the NPR index. Based on using more than one human ground truth, segmentations are conducted on a large scale of the Berkeley's benchmark of natural images. The NPR, PR and GCE metrics produced acceptable accuracy measures emphasizing the scalability of the proposed technique for large scale datasets. Comparisons are applied for different images and experiments show that the NPR is the most efficient score to determine good segmentation compared to other metrics.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"14 1","pages":"288-292"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78807880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397198
Gamal Abd El-Nasser A. Said, El-Sayed M. El-Horbaty
The storage space allocation problem at container terminals is an NP-hard combinatorial optimization problem. This paper proposes a new approach based on a genetic algorithm to optimize the solution of the storage space allocation problem at seaport container terminals. A new mathematical model is formulated to avoid bottlenecks in container yard operations and to minimize vessel service time in port. A simulation model built with a discrete event simulation tool to optimize the solution of the storage space allocation problem is also presented in this study. The proposed approach is applied to real case study data from the container terminal at Damietta port. The computational results show the effectiveness of the proposed approach.
{"title":"An intelligent optimization approach for storage space allocation at seaports: A case study","authors":"Gamal Abd El-Nasser A. Said, El-Sayed M. El-Horbaty","doi":"10.1109/INTELCIS.2015.7397198","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397198","url":null,"abstract":"Storage space allocation problem at container terminals is NP-hard combinatorial optimization problem. This paper proposes a new approach based on genetic algorithm to optimize the solution for storage space allocation problem in container terminal at seaports. A new mathematical model is formulated to avoid bottlenecks in container yard operations, and to minimize vessels service time in port. Also, a simulation model using discrete event simulation tool is conducted to optimize the solution for storage space allocation problem is presented in this study. The proposed approach is applied on a real case study data of container terminal at Damietta port. The computational results show the effectiveness of the proposed approach.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"1 1","pages":"66-72"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75295620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397239
E. Elsayed, Eman M. Elgamal, K. Eldahshan
Reading the opinion behind a text is a big challenge; put another way, we need to read opinions and moods automatically from natural language. Ontology-based approaches play a main role in solving the problems in this field, since one of their features is covering the semantics of concepts. In this paper, we therefore propose a flexible ontology-based opinion mining and classification tool. The proposed method uses NLTK (the Natural Language Toolkit) with Python to obtain more representative word occurrences in the corpus. We not only use the WordNet and SentiWordNet ontologies to tag each word with its part of speech (POS), but also create a specific-purpose ontology with an OWL editor such as Protégé. We then build a more general opinion mining tool in which the specific-purpose ontology file is selected and used to classify the text. We apply the proposed method to collections of long texts by different writers, and can then classify these writers according to their writings.
{"title":"Deep analysis of knowledge in one's writings","authors":"E. Elsayed, Eman M. Elgamal, K. Eldahshan","doi":"10.1109/INTELCIS.2015.7397239","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397239","url":null,"abstract":"Reading the opinion behind the text is a big challenge. In another way, we need to automatically read opinions and moods as a natural language. Ontology -based plays a main role to solve the problems in this field. That is from the features of the ontology based as covering the semantics of the concepts. So, in this paper, we propose a flexible classification opinion mining tool. This proposed method based on ontology- based. The proposed method uses NLTK (Natural Language Processing Toolkit) with Python as a useful knowledge to get more representative word occurrences in the corpus. Also, we not only use a WordNet and SentiWordNet ontologies to assign the word as POS (part of speech), but we also create a specific purpose ontology by OWL editor as Protégé. Then we create a more general opinion mining tool where the specific purpose ontology file was selected to use for classification the text. We apply our proposed method on lists of long texts for different writers, and then we can classify these writers depending on their writings.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"174 1","pages":"306-312"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88442436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397283
Hussein Mazaar, E. Emary, H. Onsi
This paper presents an approach to feature selection in human activity recognition. Features are extracted based on spatiotemporal orientation energy and an activity template, while feature reduction is studied thoroughly using various techniques. Because the extraction phase yields high-dimensional data, a model with fewer, important and significant features can be attractive, interpretable and accurate. Finally, activity classification is performed using an SVM. In experiments classifying six activities of the KTH dataset, significant feature reductions were reported, with the best embedded selection recorded for the Gradient Boosting and R-Square techniques. The results show a reduction in time and an improvement in accuracy. A comparison to related work is also given.
{"title":"Evaluation of feature selection on human activity recognition","authors":"Hussein Mazaar, E. Emary, H. Onsi","doi":"10.1109/INTELCIS.2015.7397283","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397283","url":null,"abstract":"The paper presents an approach for feature selection in human activity recognition. Features are extracted based on spatiotemporal orientation energy and activity template, while feature reduction has been studied thoroughly using various techniques. Due to high dimensional data from extraction phase, a model with less features which are important and significant can build attractive, interpretative and accurate model. Finally, activity classification is done using SVM. With experiments to classify six activities of the KTH Dataset, significant feature reductions were reported with optimal embedded selection recorded for Gradient Boosting and R-Square techniques. The results show a reduction in time and improvement in accuracy. The Comparison to related work were given.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"213 1","pages":"591-599"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89086057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397217
Zainab Abdel Wahed El-Waily
The enormous acceleration in the use of information technology, the changing and increasingly complex tasks of beneficiaries, and the limited time researchers have, on one side, and the emergence of the Internet as a strong competitor to libraries, on the other, have all created the need for our libraries to keep pace with the current reality by opening digital libraries, either as part of their traditional libraries or independently of them. Unfortunately, most attempts to establish digital libraries have failed to do their job, for many reasons; among the most important is the lack of the strategy needed to establish the digital library. Hence, researchers have begun to focus on how important the establishment strategy is, and on the need to go through each stage of the strategy in order to have a successful project that achieves the tasks for which it was built.
{"title":"Strategy to establish an e-library for university theses in the central library of the University of Mustansiriya (experimental study)","authors":"Zainab Abdel Wahed El-Waily","doi":"10.1109/INTELCIS.2015.7397217","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397217","url":null,"abstract":"The enormous acceleration in using information technology beside the Changing and complexity of the Beneficiaries tasks although that the researchers have no much time, for one side, From another side The emergence of the Internet network the strong contender for digital libraries, all of that led to the need to make our libraries keep pace with the current reality through opening the digital libraries as a part of its traditional libraries or independently from it. But unfortunately most of the digital libraries establishment experiments have failed in doing its job for many reasons, from the most important reasons is that there is no the needed strategy for establish the digital library, and from that the researchers start focus on how important is the establishment strategy and it is needed to go throw each stage in the Strategy in order to have a successful experiment achieving the tasks which it has been built for.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"38 1","pages":"172-178"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78141073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397227
Merrihan B. Monir, M. H. AbdElaziz, Abdelaziz A. Abdelhamid, El-Sayed M. Ei-Horbaty
Cloud computing is a new computing model that involves outsourcing computer technologies that are not available in certain locations. However, when there is no previous experience between cloud service providers and their consumers, consumers often hold a degree of uncertainty about the reliability, quality and performance of the services being offered. This paper presents a survey of current trust management techniques regarding the performance of cloud service providers, taking into consideration other aspects such as privacy, security, credibility and user feedback.
{"title":"Trust management in cloud computing: A survey","authors":"Merrihan B. Monir, M. H. AbdElaziz, Abdelaziz A. Abdelhamid, El-Sayed M. Ei-Horbaty","doi":"10.1109/INTELCIS.2015.7397227","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397227","url":null,"abstract":"Cloud computing is a new computing model that involves outsourcing of computer technologies due to the lack of their availability in certain locations. However, when there is no previous experience between cloud service providers and their consumers, consumers often hold a degree of uncertainty about the reliability, quality and performance of the services being offered. This paper presents a survey about the current trust management techniques regarding to the performance of cloud service providers taking into consideration other aspects like privacy, security, credibility, user feedback, etc.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"16 1","pages":"231-242"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82155591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397209
S. Soliman, Safia Abbas, A. M. Salem
Recently, collagen diseases have become more widespread due to many factors such as pressure and pollution. Thrombosis is one of the best-known collagen diseases; it obstructs blood flow, causing vital complications in crucial parts of the circulatory system. Such diseases place a heavy burden on doctors because of the huge number of laboratory examinations and the effort required for diagnosis. Accordingly, this paper applies the C4.5 algorithm, one of the best-known data mining techniques, to a real thrombosis dataset. The dataset was collected from Chiba University as a challenging dataset for thrombosis diagnosis. The results show that C4.5 could diagnose the degree of thrombosis with an accuracy of 98.4%.
{"title":"Classification of thrombosis collagen diseases based on C4.5 algorithm","authors":"S. Soliman, Safia Abbas, A. M. Salem","doi":"10.1109/INTELCIS.2015.7397209","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397209","url":null,"abstract":"Recently, collagen diseases propagated due to many factors such as pressure and pollution. Thrombosis is one of the most famous collagen diseases that obstruct the blood flow causing vital complications for crucial parts of the circulatory system. Such diseases cause a high risk for the doctors due to the huge number of the laboratory examinations and the efforts to diagnosis. Accordingly, this paper implements C4.5 algorithm, as one of the most famous data mining techniques, on real thrombosis dataset. The dataset was collected from Chiba University as a challenging dataset for thrombosis diagnosis. The results show that the C4.5 could diagnose the thrombosis degree with accuracy 98.4%.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"40 1","pages":"131-136"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80418066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-12-01 | DOI: 10.1109/INTELCIS.2015.7397215
M. Bassiouni, W. Khalifa, EL-Sayed A. El Dahshan, Abdel-Badeeh M. Salam
The phonocardiogram (PCG) is an emerging biometric modality that has seen about fifteen years of development. This paper reviews some of the best-known techniques that have been applied to PCG-based biometric recognition. The paper also presents the datasets used in PCG research, as well as the devices needed to capture them. The main modules of a PCG system are feature extraction, feature reduction and classification. The paper concludes with a comparative analysis of the authentication performance of PCG biometric systems.
{"title":"A study on PCG as a biometric approach","authors":"M. Bassiouni, W. Khalifa, EL-Sayed A. El Dahshan, Abdel-Badeeh M. Salam","doi":"10.1109/INTELCIS.2015.7397215","DOIUrl":"https://doi.org/10.1109/INTELCIS.2015.7397215","url":null,"abstract":"Phonocardiogram (PCG) is an emerging biometric modality that has seen about fifteen years of development. This paper provides a review of some of the most famous techniques that have been applied to the use of the PCG for biometric recognition. The paper also presents datasets used in PCG as well as the devices needed to capture them. PCG main modules are features extraction, reduction and classification schemes. The paper is concluded by a comparative analysis of the authentication performance of PCG biometric systems.","PeriodicalId":6478,"journal":{"name":"2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS)","volume":"55 1","pages":"161-166"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81413307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}