Classification of the metric in ISO model by oriented object properties
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457816
Z. Bougroun, A. Zeaaraoui, M. Belkasmi, T. Bouchentouf
In this paper we present a new classification of object-oriented metrics intended to facilitate software evaluation. The classification is based on the standardized ISO 9126 model and focuses on design and quality properties. The crux of this work is to establish the relationship between the sub-characteristics of the ISO model and the properties of object-oriented design and quality. Our purpose is thus to find a way to evaluate software without neglecting object-oriented design properties such as abstraction, inheritance, and encapsulation.
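As a rough illustration of what such a classification could look like in practice, the sketch below links ISO 9126 sub-characteristics to object-oriented design properties and candidate metrics. The specific assignments (drawn from the well-known Chidamber & Kemerer suite) are illustrative assumptions, not the mapping proposed in the paper.

```python
# Illustrative sketch: a classification that links ISO 9126 sub-characteristics
# to object-oriented design properties and candidate metrics.  The assignments
# below are examples, not the paper's mapping.

CLASSIFICATION = {
    # sub-characteristic: {design property: [metrics]}
    "Analysability":  {"abstraction":   ["DIT", "NOC"]},
    "Changeability":  {"coupling":      ["CBO", "RFC"]},
    "Stability":      {"encapsulation": ["LCOM"]},
    "Testability":    {"complexity":    ["WMC"]},
}

def metrics_for(sub_characteristic):
    """Return every metric attached to an ISO 9126 sub-characteristic."""
    props = CLASSIFICATION.get(sub_characteristic, {})
    return [m for metric_list in props.values() for m in metric_list]

if __name__ == "__main__":
    print(metrics_for("Changeability"))   # ['CBO', 'RFC']
```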
{"title":"Classification of the metric in ISO model by oriented object properties","authors":"Z. Bougroun, A. Zeaaraoui, M. Belkasmi, T. Bouchentouf","doi":"10.1109/INTECH.2012.6457816","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457816","url":null,"abstract":"In this paper we present a new classification of object-oriented metrics, in order to facilitate software evaluation. This classification is based on standardized model “ISO 9126” and focuses on design and quality properties. The crux of this work is to make the relationship between sub-characteristics of ISO model and properties of object-oriented design and quality. Thus our purpose is to find a way to evaluate software without neglecting the properties of the object-oriented design such as: abstraction, inheritance, encapsulation etc.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133271545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel fully automated Computer Aided-Detection of suspicious regions within mammograms
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457756
S. Hamissi, H. Merouani
In this paper we present a novel, fully automated scheme for detecting abnormal masses by anatomical segmentation of the breast region and classification of regions of interest (ROIs). The system consists of three main processing steps. First, we perform essential pre-processing to remove noise, suppress artifacts and labels, enhance the breast region, extract the breast region by segmentation, and remove unwanted parts such as the pectoral muscle. After segregating the breast region, we apply an adaptive segmentation procedure based on K-means clustering followed by a region-merging step. From the resulting regions of interest, statistical and textural features are extracted using gray-level co-occurrence matrices (GLCM), and a decision-tree classification is performed to separate normal from abnormal regions in the breast tissue. Any suspicious regions are accurately highlighted by the algorithm, helping radiologists to investigate them further. A set of Mini-MIAS mammograms is used to validate the effectiveness of the method. The precision of the method has been verified against the ground truth provided with the database, achieving a sensitivity as high as 90%. The proposed CAD system is fully autonomous, is able to isolate different types of abnormalities, and shows promising results.
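The feature-extraction and classification stage described above can be made concrete with a minimal sketch: a gray-level co-occurrence matrix computed directly in NumPy, a few textural features derived from it, and a decision tree separating normal from suspicious ROIs. The window size, offset, quantisation levels and the toy training data are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch of GLCM texture features plus a decision-tree classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def glcm(roi, levels=8, dx=1, dy=0):
    """Symmetric, normalised GLCM for one pixel offset (dx, dy)."""
    q = (roi.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    mat = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[q[y, x], q[y + dy, x + dx]] += 1
    mat += mat.T                      # make the matrix symmetric
    return mat / mat.sum()

def texture_features(roi):
    p = glcm(roi)
    i, j = np.indices(p.shape)
    contrast    = np.sum(p * (i - j) ** 2)
    energy      = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return [contrast, energy, homogeneity]

# Toy example: fake ROIs stand in for patches cut from Mini-MIAS mammograms.
rng = np.random.default_rng(0)
X = [texture_features(rng.integers(0, 256, (32, 32))) for _ in range(4)]
y = [0, 0, 1, 1]                      # 0 = normal, 1 = suspicious
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([texture_features(rng.integers(0, 256, (32, 32)))]))
```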
{"title":"Novel fully automated Computer Aided-Detection of suspicious regions within mammograms","authors":"S. Hamissi, H. Merouani","doi":"10.1109/INTECH.2012.6457756","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457756","url":null,"abstract":"In this paper we present a novel fully automated scheme for detection of abnormal masses by anatomical segmentation of Breast Region and classification of regions of Interest (ROI). The system consists of three main processing steps, we perform essential pre-processing to remove noise, suppress artifacts and labels, enhance the breast region, extract breast region by the process of segmentation and remove unwanted parts as Pectoral Muscle. After segregating the breast region, we use an Adaptive Segmentation Procedure based on Kmeans Clustering followed by a Merging Regions method. With the obtained Regions of Interest, the extraction of Statistical and Textural Features is done by using gray level co-occurrence matrices (GLCM) and a Decision Tree Classification is performed to isolate normal and abnormal regions in the breast tissue. If any suspicious regions are present, they get accurately highlighted by this algorithm thus helping the radiologists to further investigate these regions. A set of Mini-MIAS mammograms is used to validate the effectiveness of the method. The precision of the method has been verified with the ground truth given in database and has obtained sensitivity as high as 90%. The CAD system proposed is fully autonomous and is able to isolate different types of abnormalities and it shows promising results.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133914375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid method for improving the SQD-PageRank algorithm
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457747
A. S. Djaanfar, B. Frikh, B. Ouhbi
The PageRank algorithm is used in the Google search engine to compute a single list of popularity scores, one for each page on the Web. These popularity scores are used to rank query results when they are presented to the user. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. Several algorithms have since been introduced to improve on it. In this paper, we introduce a more intelligent surfer model that combines an ontology, web content, and PageRank. First, we propose a relevance measure of a web page with respect to a multiple-term query. We then develop our improved intelligent surfer model and execute the algorithm efficiently on a local database. Results show that our algorithm significantly outperforms existing algorithms in the quality of the pages returned, while remaining efficient enough to be used in today's large search engines.
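To make the "intelligent surfer" idea concrete, the sketch below runs a query-dependent PageRank in which both the random jump and the choice of outlink are biased by a per-page relevance score for the query. The tiny graph, the relevance scores and the damping factor are illustrative assumptions, not the paper's relevance measure or data.

```python
# Minimal sketch of a query-dependent ("intelligent surfer") PageRank.
import numpy as np

def query_dependent_pagerank(links, relevance, d=0.85, iters=100):
    """links[i] = list of pages i points to; relevance[i] >= 0."""
    n = len(links)
    r = np.asarray(relevance, dtype=float)
    jump = r / r.sum()                       # relevance-biased teleport vector
    pr = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = (1 - d) * jump
        for i, outs in enumerate(links):
            weights = np.array([r[j] for j in outs])
            if not outs or weights.sum() == 0:
                new += d * pr[i] * jump      # dangling page or irrelevant outlinks
            else:
                for j, w in zip(outs, weights / weights.sum()):
                    new[j] += d * pr[i] * w  # follow outlinks proportionally to relevance
        pr = new
    return pr

print(query_dependent_pagerank(
    links=[[1, 2], [2], [0]],                # 0 -> 1,2   1 -> 2   2 -> 0
    relevance=[0.1, 1.0, 0.5]))
```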
{"title":"A hybrid method for improving the SQD-PageRank algorithm","authors":"A. S. Djaanfar, B. Frikh, B. Ouhbi","doi":"10.1109/INTECH.2012.6457747","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457747","url":null,"abstract":"The PageRank algorithm is used in the Google search engine to calculate a single list of popularity scores for each page in the Web. These popularity scores are used to rank query results when presented to the user. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. Thereupon, several algorithms are introduced to improve the last one. In this paper, we introduce a more intelligent surfer model based on combining ontology, web contents and PageRank. Firstly, we propose a relevance measure of a web page relative to a multiple-term query. Then, we develop our performed intelligent surfer model. Efficient execution of our algorithm in a local database is performed. Results show that our algorithm significantly outperforms the existing algorithms in the quality of the pages returned, while remaining efficient enough to be used in today's large search engines.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129196306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An easy framework for an accurate estimation of the Structure From Motion
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457772
A. El-Attar, H. Tairi, Mohammed Karim
In this paper we introduce a simple framework for solving the Structure From Motion problem using its classical, general formulation. We propose a two-step algorithm that imposes no constraints on the scene. The purpose of the first step is to extract a suitable projective reconstruction using a simple method: it first initializes the projective calibration of the scene and then refines it using non-linear optimization. The resulting reconstruction feeds the second step, which upgrades it to a metric reconstruction; this is done by estimating the camera intrinsic parameters required for the upgrade. A multi-stage auto-calibration algorithm is proposed that iteratively estimates each parameter separately and finally refines all of them at once. Once these parameters are obtained, the 3D Euclidean reconstruction of the scene is extracted using publicly available Multi-View Stereo software.
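A small sketch of the kind of non-linear refinement mentioned for the projective step: minimising the reprojection error between observed image points and points projected through a 3x4 camera matrix, here with SciPy's least_squares. The synthetic points and the single-camera setup are assumptions for illustration only, not the paper's optimisation.

```python
# Hedged sketch: non-linear refinement of a projective camera by reprojection error.
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project homogeneous 3D points X (4xN) with a 3x4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]                      # inhomogeneous 2D points

def residuals(p_flat, X, observed):
    P = p_flat.reshape(3, 4)
    return (project(P, X) - observed).ravel()

# Synthetic data: a known camera, noisy observations, a perturbed initial P.
rng = np.random.default_rng(1)
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [1.0]])])
X = np.vstack([rng.normal(size=(2, 20)),
               rng.normal(loc=5.0, size=(1, 20)),    # keep points in front of the camera
               np.ones((1, 20))])
obs = project(P_true, X) + rng.normal(scale=1e-3, size=(2, 20))

P0 = P_true + rng.normal(scale=0.05, size=P_true.shape)   # rough initialization
fit = least_squares(residuals, P0.ravel(), args=(X, obs))
print("refined reprojection RMS:", np.sqrt(np.mean(fit.fun ** 2)))
```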
{"title":"An easy framework for an accurate estimation of the Structure From Motion","authors":"A. El-Attar, H. Tairi, Mohammed Karim","doi":"10.1109/INTECH.2012.6457772","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457772","url":null,"abstract":"In this paper we introduce a simple framework for the Structure From Motion problem resolution using the classical and general formulation of the problem. We propose a two steps algorithm that does not impose any constraints on the scene. The purpose of the first step is the extraction of a convenient “Projective Reconstruction” using a simple method. Indeed, it firstly initializes the projective calibration of the scene and then refines it using non-linear optimization. The extracted reconstruction is then integrated in the second step consisting on the upgrade of the reconstruction to a metric one; this is done through the research of the camera intrinsic parameters necessary for the upgrade stage. A multi-stage auto-calibration algorithm is proposed to iteratively estimate each parameter alone and finally refines all of them at once. Once these parameters are obtained, the 3D Euclidean reconstruction of the scene is extracted using a publicly available Multi-View Stereo software.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115849846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A decision algorithm for efficient hybrid burst retransmission and burst cloning scheme over star OBS networks
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457794
S. Riadi, A. Maach
In optical burst-switched networks, contention is the main source of burst loss. Burst retransmission is a reactive loss-recovery mechanism that attempts to resolve contention by retransmitting the contended burst at the optical burst switching layer. Burst cloning is a proactive loss-recovery mechanism that attempts to prevent burst loss by sending two copies of the same burst: if the first copy is lost, the second may still reach the destination. Burst retransmission is better suited to low loads, whereas burst cloning is better suited to high loads. In this paper, we propose a hybrid scheme for star OBS networks that combines the advantages of burst retransmission at low load with the benefits of burst cloning at high load, through a decision algorithm that controls the extra load introduced by the two loss-recovery mechanisms. Simulation results confirm that our hybrid scheme, via the decision algorithm, achieves better overall network performance than either the burst retransmission scheme or the burst cloning scheme alone.
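A hedged sketch of the kind of load-driven decision rule such a hybrid scheme could apply at an ingress node: retransmission while the measured offered load is low, cloning once it is high, and a mix in between. The thresholds and the exponentially weighted load estimate are illustrative assumptions, not the decision algorithm defined in the paper.

```python
# Hypothetical load-driven selector between retransmission and cloning.

def smoothed_load(previous, sample, alpha=0.2):
    """Exponentially weighted moving average of the measured offered load."""
    return (1 - alpha) * previous + alpha * sample

def recovery_mode(offered_load, low=0.4, high=0.8):
    """Select the loss-recovery mechanism from the current offered load."""
    if offered_load < low:
        return "retransmission"   # retries add little extra traffic at low load
    if offered_load < high:
        return "hybrid"           # clone a subset of bursts, retransmit the rest
    return "cloning"

load = 0.0
for sample in [0.2, 0.3, 0.6, 0.9, 0.95]:   # per-interval load measurements
    load = smoothed_load(load, sample)
    print(f"load={load:.2f} -> {recovery_mode(load)}")
```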
{"title":"A decision algorithm for efficient hybrid burst retransmission and burst cloning scheme over star OBS networks","authors":"S. Riadi, A. Maach","doi":"10.1109/INTECH.2012.6457794","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457794","url":null,"abstract":"In optical burst switched networks contention is the main source of burst loss. Burst retransmission is a reactive loss recovery mechanism that attempts to resolve contention by retransmitting the contended burst at the optical burst switching layer. Burst cloning is a proactive loss recovery mechanism that attempts to prevent burst loss by sending two copies of the same burst, if the first copy is lost, the second copy may still be able to reach the destination. Burst retransmission is better suited when the load is low, however burst cloning is better suited when the load is high. In this paper, we propose a hybrid scheme for star OBS networks that combines the advantages of burst retransmission at low load with the benefits of burst cloning at high load through a decision algorithm which aims to control the extra load due to the both loss recovery mechanisms. The results obtained from simulation confirm that our hybrid scheme through the decision algorithm can achieve better overall network performance than both burst retransmission scheme and burst cloning scheme.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116657287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biometric cryptosystems based Fuzzy Vault approach: Security analysis
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457803
M. Lafkih, M. Mikram, S. Ghouzali, M. Haziti, D. Aboutajdine
Unlike traditional password-based authentication, biometric authentication is seen as an alternative solution because it offers convenience, comfort, and greater security to users. However, biometric systems that store a user model in a database are vulnerable to attacks, because the stored model can be stolen or used illegitimately by an attacker to impersonate the user. To protect user templates from such attacks, two broad categories of methods have been proposed in the literature: transformation of biometric characteristics, and biometric cryptosystems. Although biometric cryptosystems are used in several applications (e.g. smart cards), their major challenge is the lack of security analysis and the limited work on attacks against this type of method. The aim of this paper is therefore to present criteria for security analysis and to demonstrate the vulnerability of the Fuzzy Vault method, a well-known approach among biometric cryptosystems.
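To make the analysed construction concrete, here is a deliberately simplified Fuzzy Vault sketch: the secret is the coefficient vector of a polynomial, genuine biometric features are bound to it by evaluation, and random chaff points hide them; unlocking needs enough matching genuine features to re-interpolate the polynomial. Real vaults work over a finite field with error-correcting codes; the plain integer arithmetic, parameters and feature encoding below are illustrative assumptions only.

```python
# Simplified Fuzzy Vault locking/unlocking over small integers.
import random
import numpy as np

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def lock(secret_coeffs, features, n_chaff=20):
    vault = {x: poly_eval(secret_coeffs, x) for x in features}   # genuine points
    while len(vault) < len(features) + n_chaff:
        x = random.randrange(100, 1000)                          # chaff abscissas away from features
        y = random.randrange(-1000, 1000)
        if x not in vault and y != poly_eval(secret_coeffs, x):
            vault[x] = y                                         # chaff point
    items = list(vault.items())
    random.shuffle(items)                                        # hide which points are genuine
    return items

def unlock(vault, query_features, degree):
    pts = [(x, y) for x, y in vault if x in set(query_features)]
    if len(pts) < degree + 1:
        return None                          # too few matching features: reject
    xs, ys = zip(*pts[: degree + 1])
    coeffs = np.polyfit(xs, ys, degree)[::-1]   # low-order first, as in lock()
    return [round(c) for c in coeffs]

secret = [7, 3, 2]                            # polynomial 7 + 3x + 2x^2
vault = lock(secret, features=[11, 25, 38, 52])
print(unlock(vault, query_features=[11, 25, 52, 60], degree=2))  # -> [7, 3, 2]
```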
{"title":"Biometric cryptosystems based Fuzzy Vault approach: Security analysis","authors":"M. Lafkih, M. Mikram, S. Ghouzali, M. Haziti, D. Aboutajdine","doi":"10.1109/INTECH.2012.6457803","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457803","url":null,"abstract":"Unlike traditional authentication based on passwords, biometric authentication is seen as an alternative solution because it offers facility, comfort and more security to users. However, biometric systems based on the storage of user model in a database are vulnerable to attacks because the stored model can be stolen or illegitimately used by an attacker to impersonate the user. To protect the users templates from attacks, two broad categories of methods are proposed in the literature: Transformation Biometric Characteristics, and Biometric Cryptosystems. Although biometric cryptosystems are used in several applications (e.g. smart cards), their major challenge is the lack of a security analysis and the limitation of the work on attacks against this type of methods. Hence the aim of this paper is to present criteria for security analysis and to demonstrate the vulnerability of Fuzzy Vault method; a famous approach in biometric cryptosystems.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127206886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-oriented analysis and design approach for requirements engineering
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457759
A. Zeaaraoui, Z. Bougroun, M. Belkasmi, T. Bouchentouf
In the software development process, developers feel a gap when moving from the requirements engineering (RE) phase, which uses a scenario-based approach, to the construction phase; this is because the models produced in RE cannot be easily mapped to the models used in construction. This paper discusses this problem and offers a new approach that addresses it, so that developers experience no break across all software development activities, from RE through coding and testing.
{"title":"Object-oriented analysis and design approach for requirements engineering","authors":"A. Zeaaraoui, Z. Bougroun, M. Belkasmi, T. Bouchentouf","doi":"10.1109/INTECH.2012.6457759","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457759","url":null,"abstract":"In software development process, developers feel the gap when moving from requirement engineering phase (using scenario-based approach) to construction phase; this is due to that models resulted in RE cannot be easily mapped to models in construction phase. This paper discusses this problem, and offers a new approach that handles this issue so that developers feel no break during all software development activities from RE to coding and testing.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124437863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low rank MMSE channel estimation in MIMO-OFDM systems
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457790
H. Krouma, M. Benslama, Farouk Othmani-Marabout
In this paper, we propose a low-rank minimum mean-square error (OLR-MMSE) channel estimator for multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. We evaluate the performance of the proposed low-rank channel estimator in slowly fading channel environments in terms of the symbol error rate (SER) by computer simulation. The results show that the proposed channel estimator gives the best trade-off between performance and complexity.
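For orientation, the sketch below implements the standard rank-r LMMSE form h_lr = U_r diag(d_k / (d_k + beta/SNR)) U_r^H h_ls, where U and d come from the eigendecomposition of the channel correlation matrix and h_ls is a least-squares pilot estimate. The correlation model, SNR, rank and beta are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of a low-rank (reduced-eigenspace) LMMSE channel estimator.
import numpy as np

def low_rank_mmse(h_ls, R_hh, snr, rank, beta=1.0):
    """h_ls: least-squares estimate (N,); R_hh: NxN channel correlation matrix."""
    d, U = np.linalg.eigh(R_hh)          # eigendecomposition, ascending eigenvalues
    d, U = d[::-1], U[:, ::-1]           # strongest modes first
    gains = d[:rank] / (d[:rank] + beta / snr)   # Wiener-like per-mode shrinkage
    Ur = U[:, :rank]
    return Ur @ (gains * (Ur.conj().T @ h_ls))

# Toy example: exponentially decaying correlation and a noisy LS estimate.
N, snr = 64, 10.0
R = np.array([[0.9 ** abs(i - j) for j in range(N)] for i in range(N)])
rng = np.random.default_rng(0)
h_true = np.linalg.cholesky(R + 1e-9 * np.eye(N)) @ rng.standard_normal(N)
h_ls = h_true + rng.standard_normal(N) / np.sqrt(snr)
h_hat = low_rank_mmse(h_ls, R, snr, rank=8)
print("LS MSE      :", np.mean(np.abs(h_ls - h_true) ** 2))
print("LR-MMSE MSE :", np.mean(np.abs(h_hat - h_true) ** 2))
```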
{"title":"Low rank MMSE channel estimation in MIMO-OFDM systems","authors":"H. Krouma, M. Benslama, Farouk Othmani-Marabout","doi":"10.1109/INTECH.2012.6457790","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457790","url":null,"abstract":"In this paper, we propose a low-rank minimum mean-square error (OLR-MMSE) channel estimator for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. We evaluate the performance of the proposed low-rank channel estimator for slowly fading channel environments in terms of the symbol error rate (SER) by computer simulations. It is shown that the proposed channel estimator gives the best tradeoff between performance and complexity.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123248273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A constraint programming approach for coverage optimization problem in WSN
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457791
Abderrazak Daoudi, Youssef Kerfi, Imade Benelallam, E. Bouyakhf
Wireless Sensor Networks (WSNs) constitute the platform for a broad range of surveillance-related applications such as military and environmental monitoring. In these fields, deploying a WSN that provides optimal coverage is a tedious task, especially when multiple coverage is required. In this paper, we present a multi-objective constraint optimization approach for solving this problem. We investigate two main ideas: first, a pre-processing step with field analysis in a three-dimensional (3D) environment; second, a robust constraint-based modeling approach with a multi-objective function. Experimental results are presented to evaluate the effectiveness of our approach.
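To illustrate the underlying k-coverage problem (not the paper's constraint model or solver), the sketch below greedily picks candidate sensor sites in 3D so that every target point is covered by at least k sensors. The candidate sites, targets, sensing radius and k are illustrative assumptions.

```python
# Toy 3D k-coverage selection with a greedy heuristic.
import itertools
import math

def covers(site, target, radius=4.0):
    return math.dist(site, target) <= radius

def greedy_k_coverage(sites, targets, k=2, radius=4.0):
    deficit = {t: k for t in targets}    # how many more covering sensors each target needs
    chosen, remaining = [], list(sites)
    while any(d > 0 for d in deficit.values()) and remaining:
        # pick the site that reduces the total remaining deficit the most
        best = max(remaining,
                   key=lambda s: sum(1 for t in targets
                                     if deficit[t] > 0 and covers(s, t, radius)))
        remaining.remove(best)
        chosen.append(best)
        for t in targets:
            if covers(best, t, radius) and deficit[t] > 0:
                deficit[t] -= 1
    return chosen, all(d == 0 for d in deficit.values())

sites   = [(x, y, z) for x, y, z in itertools.product([0, 4, 8], [0, 4, 8], [0, 3])]
targets = [(2, 2, 1), (6, 2, 1), (2, 6, 2), (6, 6, 2)]
solution, feasible = greedy_k_coverage(sites, targets, k=2)
print(len(solution), "sensors selected, 2-coverage feasible:", feasible)
```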
{"title":"A constraint programming approach for coverage optimization problem in WSN","authors":"Abderrazak Daoudi, Youssef Kerfi, Imade Benelallam, E. Bouyakhf","doi":"10.1109/INTECH.2012.6457791","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457791","url":null,"abstract":"Wireless Sensor Networks (WSN) constitutes the platform of a broad range of applications related to surveillance such as military, and environmental monitoring. In these fields, the deployment of WSN to support optimal coverage is obviously a fastidious task, especially when a multiple coverage is needed. In this paper, we present a multi-objective constraints optimization approach for solving the above issue. Therefore, we investigate two main ideas: First, a pre-processing step with field analysis in three dimensions (3D) environment. Second, a robust based-constraint modeling approach with multi-objective function. Experimental results are presented to evaluate the effectiveness of our approach.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125449933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum cryptography based on Grover's algorithm
Pub Date: 2012-09-01 | DOI: 10.1109/INTECH.2012.6457788
Z. Sakhi, R. Kabil, A. Tragha, M. Bennai
Some cryptographic applications of quantum algorithms on multi-qubit systems are presented. We analyze the basic concept of Grover's algorithm and its implementation in the case of a four-qubit system. In particular, we show that Grover's algorithm allows us to obtain a maximal probability of finding the correct result. Some features of quantum cryptography and of a quantum secret-sharing protocol based on Grover's algorithm are also presented.
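A state-vector sketch of Grover search on four qubits (N = 16 basis states) illustrates the amplification step the paper builds on: the oracle flips the sign of the marked state and the diffusion operator reflects the amplitudes about their mean. The choice of marked state is an arbitrary assumption, and the cryptographic protocol layered on top is not sketched here.

```python
# Grover iteration on a 4-qubit state vector with NumPy.
import numpy as np

n_qubits, marked = 4, 0b1011
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))               # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # ~3 iterations for N = 16
for _ in range(iterations):
    state[marked] *= -1                          # oracle: phase-flip the marked state
    mean = state.mean()
    state = 2 * mean - state                     # diffusion: inversion about the mean

print(f"P(marked) after {iterations} iterations: {state[marked] ** 2:.3f}")  # ~0.96
```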
{"title":"Quantum cryptography based on Grover's algorithm","authors":"Z. Sakhi, R. Kabil, A. Tragha, M. Bennai","doi":"10.1109/INTECH.2012.6457788","DOIUrl":"https://doi.org/10.1109/INTECH.2012.6457788","url":null,"abstract":"Some cryptographic applications of quantum algorithm on many qubits system are presented. We analyze a basic concept of Grover algorithm and it's implementation in the case of four qubits system. We show specially that Grover algorithm allows as obtaining a maximal probability to get the result. Some features of quantum cryptography and Quantum Secret-Sharing protocol based on Grover's algorithm are also presented.","PeriodicalId":369113,"journal":{"name":"Second International Conference on the Innovative Computing Technology (INTECH 2012)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130917750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}