Building OWL ontology for obesity related cancer
M. A. Elhefny, Mohammed M Elmogy, A. A. Elfetouh
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030953
Cancer is a term used for a disease in which abnormal cells divide without control and are able to invade other tissues. Obesity is an overnutrition disease associated with increased risk of many types of cancer. The knowledge of this medical domain needs to be represented, with its concepts, properties, and types of association, using ontologies, in order to provide the biomedical community with consistent, reusable, and sustainable descriptions of human obesity-related cancer terms. In this paper, we propose building an Obesity Related Cancer (ORC) Ontology covering diseases, symptoms, diagnosis, and treatment, using the latest standard Web Ontology Language (OWL 2). The disease hierarchy and terms are defined upon the standard Disease Ontology (DO). By developing the ORC Ontology, both intelligent systems and physicians can benefit from it for knowledge sharing, reasoning, and reuse in different ways.
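The abstract contains no OWL constructs; as a rough illustration of the kind of OWL 2 modeling it describes, the minimal owlready2 (Python) sketch below declares a few disease classes and associations. The ontology IRI and all class and property names are hypothetical stand-ins, not the actual ORC terms.

```python
# Minimal sketch of OWL 2 ontology construction with owlready2.
# All names below are illustrative placeholders, not the real ORC terms.
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/orc.owl")  # hypothetical IRI

with onto:
    class Disease(Thing): pass          # would be aligned to DO terms
    class Cancer(Disease): pass
    class Obesity(Disease): pass
    class Symptom(Thing): pass
    class Treatment(Thing): pass

    class has_symptom(ObjectProperty):  # Disease -> Symptom association
        domain = [Disease]
        range = [Symptom]

    class treated_by(ObjectProperty):   # Disease -> Treatment association
        domain = [Disease]
        range = [Treatment]

onto.save(file="orc.owl", format="rdfxml")
```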
{"title":"Building OWL ontology for obesity related cancer","authors":"M. A. Elhefny, Mohammed M Elmogy, A. A. Elfetouh","doi":"10.1109/ICCES.2014.7030953","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030953","url":null,"abstract":"Cancer is a term used for a disease in which abnormal cells divide without control and are able to invade other tissues. Obesity is an overnutrition disease that is associated with increased risks of many types of cancers. The knowledge of this medical domain is highly required to be represented with its concepts, properties and types of association using ontologies to provide the biomedical community with consistent, reusable and sustainable descriptions of human obesity related cancer terms. In this paper, we propose building Obesity Related Cancer (ORC) Ontology involving diseases, symptoms, diagnosis, and treatment, using the latest standard Web Ontology language (OWL 2). The diseases hierarchy and terms are defined upon the standard Disease Ontology (DO). By developing (ORC) Ontology, both intelligent systems and physicians can benefit from it in knowledge sharing, reasoning and reusing in different ways.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124774575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A tool suite for estimation and prediction of software dynamic defect models
A. Yousef
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030975
As a common software engineering practice, software dynamic defect models are used to estimate and predict the progress and effectiveness of the software testing process and the number of defects expected over the coming weeks. Practitioners use these dynamic defect models to ensure that delivering the software to customers is possible from a quality point of view and to predict the release date. Earlier literature proposed several classic defect models, including Putnam, Exponential, Rayleigh, and Weibull. Recent literature claims that modern projects follow linear combinations of Rayleigh curves due to project complexity, but this claim has not been verified in general because the project sample sizes were very small. This paper proposes a tool suite for dynamic defect models. The tool suite consists of an open repository of empirical dynamic defect data and several supporting tools. Defect data are collected from several software projects and products, including open source and commercial ones, and added to the open repository. The proposed tools are designed, implemented, and made publicly available on the web. They are used to view the dynamic defects, find the dynamic defect model that best fits the data according to several performance criteria, and predict the future number of defects. Applying these tools to the empirical data showed that linear combinations of Rayleigh and Weibull models outperform the classic models in both curve fitting and predictability for commercial software.
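For illustration only (this is not the authors' tool suite), the standard cumulative Rayleigh and Weibull defect-model forms can be fit to weekly cumulative defect counts with scipy's curve_fit; the data below are synthetic.

```python
# Sketch: fitting cumulative Rayleigh/Weibull defect models to weekly
# cumulative defect counts (synthetic data, not from the paper).
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_cum(t, K, c):
    # K: total expected defects, c: scale parameter
    return K * (1.0 - np.exp(-(t / c) ** 2))

def weibull_cum(t, K, c, m):
    # Weibull generalizes Rayleigh (m = 2 recovers it)
    return K * (1.0 - np.exp(-(t / c) ** m))

weeks = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(0)
defects = 120 * (1 - np.exp(-(weeks / 7.0) ** 2)) + rng.normal(0, 2, weeks.size)

# Compare models by mean squared error, one simple fitness criterion
for model, p0 in ((rayleigh_cum, [120, 7]), (weibull_cum, [120, 7, 2])):
    popt, _ = curve_fit(model, weeks, defects, p0=p0)
    mse = np.mean((model(weeks, *popt) - defects) ** 2)
    print(model.__name__, popt.round(2), f"MSE={mse:.2f}")
```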
{"title":"A tool suite for estimation and prediction of software dynamic defect models","authors":"A. Yousef","doi":"10.1109/ICCES.2014.7030975","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030975","url":null,"abstract":"As a common software engineering practice, software dynamic defect models are used to estimate and predict the software testing process progress, effectiveness, and the number of future defects over the next weeks. Practitioners use these dynamic defect models to ensure that the delivery of software to customers is possible from the quality point of view and to predict the release date. Old literature suggested several classic defect models including Putnam, Exponential, Rayleigh and Weibull. Recent literature claimed that modern projects follow linear combinations of Rayleigh due to projects complexity. This claim verification has not been generalized because the project samples size was very small. This paper proposes a tool suite for dynamic defect models. The tool suite consists of an open repository of dynamic defects empirical data and many supporting tools. Data concerning defects are collected from several software projects and products and added to the open repository. This includes open source software and commercial software projects. The proposed tools are designed and implemented and made publicly available on the web. They are used to view the dynamic defects, find the best dynamic defect model that fits the data according to several performance criteria and predict future number of defects. The application of these tools on the empirical data showed that linear combinations of Rayleigh and Weibull has better performance than classic models in both curve fitting and predictability of commercial software.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131380881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GPU implementation for Arabic Sign Language real time recognition using Multi-level Multiplicative Neural Networks
A. S. Elons
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030986
Sign Language (SL) recognition has been explored for a long time. Successful SL recognition systems require two main qualities: high recognition accuracy and real-time response. This paper contributes to both issues. The first contribution is real-time recognition for Arabic Sign Language (ArSL) based on a Graphics Processing Unit (GPU) implementation. The second contribution exploits Multi-level Multiplicative Neural Networks (MMNNs) for hand gesture classification. The system architecture depends mainly on two consecutive MMNN layers: the first layer determines whether the signer uses one hand or two, and the second determines the final class. The experiment was conducted on 200 signs, and the result reaches 83% recognition accuracy on test data, confirming the dataset's offline extendibility. The recognition system is accelerated using an NVIDIA GPU programmed in CUDA.
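The MMNN internals are not described in the abstract; the sketch below only mirrors the two-stage routing (hand count first, sign class second) using generic scikit-learn MLPs as stand-ins and random placeholder features.

```python
# Sketch of the two-stage routing described above, with generic MLPs
# standing in for the paper's MMNNs; features and labels are random
# placeholders, not real gesture data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 32))       # hypothetical gesture feature vectors
hands = rng.integers(0, 2, 400)      # stage-1 label: one hand vs two hands
sign = rng.integers(0, 10, 400)      # stage-2 label: final sign class

stage1 = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, hands)
# One expert classifier per stage-1 branch
experts = {h: MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
              .fit(X[hands == h], sign[hands == h]) for h in (0, 1)}

def predict(x):
    h = stage1.predict(x.reshape(1, -1))[0]     # route by hand count
    return experts[h].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```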
{"title":"GPU implementation for Arabic Sign Language real time recognition using Multi-level Multiplicative Neural Networks","authors":"A. S. Elons","doi":"10.1109/ICCES.2014.7030986","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030986","url":null,"abstract":"Sign Language (SL) recognition has been explored for a long time now. Two main aspects of successful SL recognition systems are required: High recognition accuracy and real-time response. This paper shows a contribution in these issues, the first contribution describes a real-time response recognition for Arabic Sign Language (ArSL) based on a Graphics Processing Unit (GPU) implantation. The second contribution exploits Multi-level Multiplicative Neural Network(MMNN) for hand gesture classification. The system architecture mainly depends on two consequent layers of (MMNN), the first layer determines if the signer uses one hand or two hands and the second determines the final class. The experiment was conducted on 200signs and the resultreaches83% recognition accuracy for test data confirming objects dataset offline extendibility. The recognition system is being accelerated using NVIDIA GPU and programming in CUDA.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Named Entity Recognition - Application to Arabic Language
Mohamed A. Meselhi, Hitham M. Abo Bakr, I. Ziedan, K. Shaalan
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030933
Most Named Entity Recognition (NER) systems follow either a rule-based approach or a machine learning approach. In this paper, we introduce our attempt at developing a hybrid NER system, which combines the rule-based approach with a machine learning approach in order to obtain the advantages of both approaches and overcome their problems [1]. The system is able to recognize eight types of named entities: Location, Person, Organization, Date, Time, Price, Measurement, and Percent. Experimental results on the ANERcorp dataset indicate that our hybrid approach outperforms both the rule-based approach and the machine learning approach when each is applied separately. Moreover, our hybrid approach outperforms the state of the art in Arabic NER.
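As a minimal sketch of the rule-plus-ML combination (not the paper's actual system), a gazetteer and a date regex can take precedence, with a statistical tagger as fallback; the gazetteer entries and the ml_tag stub are hypothetical.

```python
# Sketch of a rule-plus-ML hybrid tagger: gazetteer/regex rules fire
# first, and a statistical tagger (stubbed here) covers what they miss.
import re

GAZETTEER = {"Cairo": "Location", "Nile": "Location"}  # toy rule resources
DATE_RE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

def ml_tag(token):
    # Placeholder for a trained statistical model (e.g., an SVM or CRF)
    return "O"

def hybrid_tag(tokens):
    tags = []
    for tok in tokens:
        if tok in GAZETTEER:
            tags.append(GAZETTEER[tok])   # rule decision wins
        elif DATE_RE.match(tok):
            tags.append("Date")
        else:
            tags.append(ml_tag(tok))      # fall back to the ML tagger
    return tags

print(hybrid_tag("Ahmed visited Cairo on 01/12/2014".split()))
```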
{"title":"Hybrid Named Entity Recognition - Application to Arabic Language","authors":"Mohamed A. Meselhi, Hitham M. Abo Bakr, I. Ziedan, K. Shaalan","doi":"10.1109/ICCES.2014.7030933","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030933","url":null,"abstract":"Most Named Entity Recognition (NER) systems follow either a rule-based approach or machine learning approach. In this paper, we introduce out attempt at developing a hybrid NER system, which combines the rule-based approach with a machine learning approach in order to obtain the advantages of both approaches and overcomes their problems [1]. The system is able to recognize eight types of named entities including Location, Person, Organization, Date, Time, Price, Measurement and Percent. Experimental results on ANERcorp dataset indicated that our hybrid approach outperforms the rule-based approach and the machine learning approach when they are processed separately. Moreover, our hybrid approach outperforms the state-of-the-art of Arabic NER.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"65 41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132380446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulations and performance evaluation of Real-Time Multi-core Systems
Mridula Sharma, H. Elmiligi, F. Gebali
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030960
The development of a broad range of multi-core processors in desktop and server systems has led to a definite need for an overall performance evaluation tool. This paper presents a new tool for analyzing the performance of multi-core systems at early design phases. The proposed tool helps developers test different design options and choose the best solution for multi-core applications. Different design factors can be considered and evaluated to obtain the best core utilization of multi-core systems while achieving the best response time for real-time applications. The paper explores the implementation of different algorithms at four design stages: dependability analysis, task execution sequencing, real-time scheduling, and core mapping. As a proof of concept, a case study shows the significance of changing one design parameter on overall system performance. Experimental results show an increase in CPU utilization of 31.25% when changing the number of cores from 3 to 2.
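The abstract does not specify which scheduling test the real-time scheduling stage uses; one standard check such a stage might apply is the Liu and Layland rate-monotonic utilization bound, sketched below with a hypothetical task set.

```python
# Sketch: Liu & Layland rate-monotonic schedulability test, the kind of
# check a real-time scheduling stage can apply (task set is hypothetical).
def rm_schedulable(tasks):
    # tasks: list of (C_i, T_i) = (worst-case execution time, period)
    n = len(tasks)
    U = sum(c / t for c, t in tasks)      # total CPU utilization
    bound = n * (2 ** (1.0 / n) - 1)      # approaches ln(2) ~ 0.693
    return U, bound, U <= bound

U, bound, ok = rm_schedulable([(1, 4), (1, 5), (2, 10)])
print(f"U={U:.3f}, bound={bound:.3f}, schedulable={ok}")
```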
{"title":"Simulations and performance evaluation of Real-Time Multi-core Systems","authors":"Mridula Sharma, H. Elmiligi, F. Gebali","doi":"10.1109/ICCES.2014.7030960","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030960","url":null,"abstract":"The development of broad range of multi-core processors in desktop and server systems has lead to a definite need of an overall performance evaluation tool. This paper presents a new tool to analyze the performance of multi-core systems at early design phases. The proposed tool helps developers test different design options and choose the best solution for multi-core applications. Different design factors can be considered and evaluated to get the best core utilization of multi-core systems while achieving the best response time for the real-time applications. The paper explores the implementation of different algorithms at four different design stages: dependability analysis, task execution sequence, real-time scheduling and core mapping. As a proof of concept, a case study is presented to show the significance of changing one design parameter on the overall system performance. Experimental results show an increase of the CPU utilization by 31.25% when changing the number of cores from 3 to 2.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114218418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning based Approach for Water pollution detection via fish liver microscopic images analysis
Asmaa Hashem Sweidan, Nashwa El-Bendary, A. Hassanien, O. Hegazy, A. Mohamed
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030968
This article presents an automatic classification approach for assessing water quality based on fish liver histopathology. As the fish liver is a good bioindicator of chemical water pollution, the proposed approach analyzes fish liver microscopic images in order to detect water pollution. The approach consists of three phases: pre-processing, feature extraction, and classification. Since color and texture are the most important characteristics of microscopic fish liver images, the proposed system uses color histograms and the Gabor wavelet transform for classifying the water quality degree. It also implements Principal Component Analysis (PCA) and Support Vector Machines (SVMs) for feature extraction and water quality degree classification, respectively. The collected dataset contains colored JPEG images: 125 images for training and 45 images for testing. The training dataset is divided into 4 classes representing the different histopathological changes and their corresponding water quality degrees. Experimental results showed that the proposed classification approach obtained a water quality classification accuracy of 93.3%, using an SVM linear kernel function with 37 images per class for training.
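A minimal sketch of the PCA-plus-linear-SVM stage described above, using scikit-learn; the class count follows the abstract, but the features are random stand-ins for the color-histogram and Gabor descriptors, and the feature dimension is arbitrary.

```python
# Sketch of the feature-reduction + classification stage: PCA feeding a
# linear SVM, as in the approach above (data here is synthetic).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(125, 256))   # stand-in for color/Gabor features
y = rng.integers(0, 4, 125)       # 4 water-quality classes

clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```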
{"title":"Machine Learning based Approach for Water pollution detection via fish liver microscopic images analysis","authors":"Asmaa Hashem Sweidan, Nashwa El-Bendary, A. Hassanien, O. Hegazy, A. Mohamed","doi":"10.1109/ICCES.2014.7030968","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030968","url":null,"abstract":"This article presents an automatic classification approach for assessing water quality based on fish liver histopathology. As fish liver is a good bioindicator for detecting water chemical pollution, the proposed approach utilizes fish liver microscopic images in order to detect water pollution. The proposed approach consists of three phases; namely pre-processing, feature extraction, and classification phases. Since color and texture are the most important characteristics of microscopic fish liver images, the proposed system uses colored histogram and Gabor wavelet transform for classifying water quality degree. Also, it implemented Principal Components Analysis (PCA) along with Support Vector Machines (SVMs) algorithms for feature extraction and water quality degree classification, respectively. Collected datasets contain colored JPEG images of 125 images as training dataset and 45 images as testing dataset, respectively. Training dataset is divided into 4 classes representing the different histopathlogical changes and their corresponding water quality degrees. Experimental results showed that the proposed classification approach has obtained water quality classification accuracy of 93.3%, using SVMs linear kernel function with 37 images per class for training.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125881478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aquatic weeds prediction: A comparative study
E. Emary, Rania E. Elesawy, Salwa M. Abou El Ella, A. Hassanien
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030969
Aquatic weeds are the greatest generator of biomass in aquatic environments, which motivates using intelligent methods to predict and estimate the indicators that affect the growth of such weeds. In this study, a set of interpolation methods is applied and assessed over the study area for predicting a set of chemical indicators that can predict and affect weed growth. The methods used are bi-harmonic, regularized spline with tension, Barnes, tri-scatter, and kriging. The different interpolants are used to create thematic maps representing the chemical indicators sensed at discrete positions, to support decision making. The performance of each interpolant is assessed using the mean square error over a set of test sites. Results show that the tri-scatter interpolant performs best for all the sensed indicators, while the regularized spline performs well when the number of interpolation points is large enough.
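As a hedged illustration of comparing scattered-data interpolants (not the study's implementation), scipy's triangulation-based griddata can stand in for the tri-scatter method, and a thin-plate RBF loosely approximates a biharmonic spline; the station positions and indicator values below are synthetic.

```python
# Sketch: two scattered-data interpolants evaluated on a regular grid.
# griddata (triangulation-based) approximates the tri-scatter method;
# a thin-plate RBF loosely stands in for a biharmonic spline.
import numpy as np
from scipy.interpolate import Rbf, griddata

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(40, 2))        # sensed positions
vals = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]    # hypothetical indicator

grid_x, grid_y = np.mgrid[0:10:50j, 0:10:50j]
tri_map = griddata(pts, vals, (grid_x, grid_y), method="linear")
rbf = Rbf(pts[:, 0], pts[:, 1], vals, function="thin_plate")
rbf_map = rbf(grid_x, grid_y)
# Each map can then be rendered as a thematic layer and scored by MSE
# against held-out test sites.
```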
{"title":"Aquatic weeds prediction: A comparative study","authors":"E. Emary, Rania E. Elesawy, Salwa M. Abou El Ella, A. Hassanien","doi":"10.1109/ICCES.2014.7030969","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030969","url":null,"abstract":"Aquatic weeds are the greatest generator of biomass in aquatic environment which motivates using intelligent methods for the prediction and estimation of indicators that affect the growth of such weeds. In this study a set of new interpolation methods are used and assessed over the study area for predicting a set of chemical indicators that can predict and affect the growth of weeds. The used methods are bi-harmonic, regularized spline with tension, Barnes, tri-scatter, and kriging. The different interpolants are used to create thematic maps representing the different chemical indicators that are sensed at discrete positions for supporting decision making. The performance of individual interpolants is assessed using mean square error over a set of test sites. Results prove that the Tri-scatter interpolant is the one with best performance for all the sensed indicators while the regularized spline performs well when the number of points for interpolation is large enough.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125088495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time image mosaicing system based on feature extraction techniques
Ebtsam Adel, Mohammed M Elmogy, Hazem Elbakry
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030983
Image mosaicing/stitching is an active research area in computer vision and computer graphics. Image mosaicing combines two or more images of the same scene into one high-resolution panoramic image. There are two main types of techniques for image stitching: direct methods and feature-based methods. The greatest advantages of feature-based methods over direct methods are their speed, robustness, and ability to create a panoramic image of a non-planar scene with unrestricted camera motion. In this paper, we propose a real-time image stitching system based on the ORB feature-based technique. We compared the performance of our proposed system with SIFT and SURF feature-based techniques. The experimental results show that the ORB algorithm is the fastest, has the highest performance, and has very low memory requirements. In addition, we compare different feature-based detectors. The experimental results show that SIFT is a robust algorithm but takes more time to compute, while MSER and FAST have better performance with respect to speed and accuracy.
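A minimal OpenCV sketch of the ORB-based pipeline described above: detect and match binary features, estimate a homography with RANSAC, and warp one image into the other's frame. The input filenames are placeholders.

```python
# Sketch of ORB-based stitching with OpenCV: detect/match features,
# estimate a homography, and warp one image onto the other.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")   # hypothetical input frames
img2 = cv2.imread("right.jpg")

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:100]

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img2 into img1's frame, then paste the reference image on top
w = img1.shape[1] + img2.shape[1]
pano = cv2.warpPerspective(img2, H, (w, img1.shape[0]))
pano[:img1.shape[0], :img1.shape[1]] = img1
```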
{"title":"Real time image mosaicing system based on feature extraction techniques","authors":"Ebtsam Adel, Mohammed M Elmogy, Hazem Elbakry","doi":"10.1109/ICCES.2014.7030983","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030983","url":null,"abstract":"Image mosaicing/stitching is considered as an active research area in computer vision and computer graphics. Image mosaicing is concerned with combining two or more images of the same scene into one panoramic image with high resolution. There are two main types of techniques used for creating image stitching: direct methods and feature-based methods. The greatest advantages of feature-based methods over the other methods are their speed, robustness, and the availability of creating panoramic image of a non-planar scene with unrestricted camera motion. In this paper, we propose a real time image stitching system based on ORB feature-based technique. We compared the performance of our proposed system with SIFT and SURF feature-based techniques. The experiment results show that the ORB algorithm is the fastest, the highest performance, and it needs very low memory requirements. In addition, we make a comparison between different feature-based detectors. The experimental result shows that SIFT is a robust algorithm but it takes more time for computations. MSER and FAST techniques have better performance with respect to speed and accuracy.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124236393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An overview on non-parametric spectrum sensing in cognitive radio
Ahmed O. Abdul Salam, R. Sheriff, S. Al-Araji, K. Mezher, Q. Nasir
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030919
The scarcity of the frequency spectrum used for wireless communication systems has attracted considerable attention in recent years. The cognitive radio (CR) paradigm has been widely accepted as a smart platform aimed mainly at the efficient interrogation and utilization of the permitted spectrum. Non-parametric spectrum sensing, or estimation, is one of the prominent tools that can be employed when a CR operates in an undetermined environment. The periodogram, filter bank, and multi-taper methods are well studied in this setting because they do not rely on the transmission channel's characteristics. This paper presents a unified view of these non-parametric spectrum sensing techniques, with analytical and simulation-based performance comparisons. Results show that the multi-taper method outperforms the others.
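As an illustration of two of the estimators compared (a sketch, not the paper's code), scipy can compute a plain periodogram and an averaged DPSS-tapered estimate in the spirit of the multi-taper method; the signal parameters are arbitrary, and proper multi-taper scaling constants are omitted for brevity.

```python
# Sketch contrasting a plain periodogram with a DPSS multi-taper PSD
# estimate for a tone in noise (scaling constants omitted for brevity).
import numpy as np
from scipy.signal import periodogram
from scipy.signal.windows import dpss

fs, N = 1000.0, 1024
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 120 * t) + np.random.default_rng(4).normal(0, 1, N)

f, pxx = periodogram(x, fs)        # single-taper (periodogram) estimate

tapers = dpss(N, NW=4, Kmax=7)     # Slepian (DPSS) tapers
psd_mt = np.zeros(N // 2 + 1)
for w in tapers:
    _, p = periodogram(x * w, fs, window="boxcar")
    psd_mt += p
psd_mt /= len(tapers)              # average the tapered periodograms
```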
{"title":"An overview on non-parametric spectrum sensing in cognitive radio","authors":"Ahmed O. Abdul Salam, R. Sheriff, S. Al-Araji, K. Mezher, Q. Nasir","doi":"10.1109/ICCES.2014.7030919","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030919","url":null,"abstract":"The scarcity of frequency spectrum used for wireless communication systems has attracted a considerable amount of attention in recent years. The cognitive radio (CR) terminology has been widely accepted as a smart platform mainly aimed at the efficient interrogation and utilization of permitted spectrum. Non-parametric spectrum sensing, or estimation, represents one of the prominent tools that can be proposed when CR works under an undetermined environment. As such, the periodogram, filter bank, and multi-taper methods are well considered in many studies without relying on the transmission channel's characteristics. A unified approach to all these non-parametric spectrum sensing techniques is presented in this paper with analytical and performance comparison using simulation methods. Results show that the multi-taper method outperforms the others.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124460013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-demand distributed on-card bytecode verification
A. Mamdouh, Ayman M. Bahaa-Eldin, M. Sobh
Pub Date: 2014-12-01 | DOI: 10.1109/ICCES.2014.7030964
With the evolution of Java-based smart cards, security issues arise concerning Java applets: they must not be vulnerable to modifications or malicious attacks that may threaten the applications they support. Bytecode verification fills this gap. The Java sandbox security model and the Common Criteria standard suggest on-board bytecode verification to maximize security. This paper proposes an on-card bytecode verification scheme whose execution is distributed across the Java applet's lifecycle; part of the verification runs on demand during the run-time execution phase of the Java applets. The proposed solution targets a real Java-based card operating system.
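Real Java Card verification covers types, control flow, object ownership, and more; as a toy sketch of the flavor of check involved (entirely illustrative, not the paper's scheme), the snippet below simulates stack depth per instruction and rejects underflow and overflow.

```python
# Toy sketch of one bytecode-verifier check: simulate stack depth per
# instruction and reject underflow/overflow (real JCVM verification
# also tracks operand types, control flow, and object ownership).
MAX_STACK = 4
EFFECTS = {"push": +1, "pop": -1, "add": -1, "dup": +1}  # net stack effect

def verify(code):
    depth = 0
    for op in code:
        if op not in EFFECTS:
            return False                 # unknown opcode
        depth += EFFECTS[op]
        if depth < 0 or depth > MAX_STACK:
            return False                 # stack underflow/overflow
    return depth == 0                    # stack must be balanced on exit

print(verify(["push", "push", "add", "pop"]))  # True
print(verify(["pop"]))                         # False: underflow
```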
{"title":"On-demand distributed on-card bytecode verification","authors":"A. Mamdouh, Ayman M. Bahaa-Eldin, M. Sobh","doi":"10.1109/ICCES.2014.7030964","DOIUrl":"https://doi.org/10.1109/ICCES.2014.7030964","url":null,"abstract":"After the evolution of Java-based smart cards, security issues arises concerning Java applets not to be vulnerable to modifications or malicious attacks that may threaten applications supported by these applets. Bytecode verification fills the latter gap. Java Sandbox Security model and Common Criteria standard suggest on-board bytecode verification to maximize security. This paper suggests an on-card bytecode verification whose execution is distributed within Java applet's lifecycle. Part of the verification runs on-demand at the run-time execution phase of the Java applets. The proposed solution targets a real Java-based card operating system.","PeriodicalId":339697,"journal":{"name":"2014 9th International Conference on Computer Engineering & Systems (ICCES)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125752553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}