Title: A biometric crypto-system for authentication
Authors: Ferhaoui Chafia, Chitroub Salim, Benhammadi Farid
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648101
Abstract: One of the main practical challenges of crypto-systems is maintaining the confidentiality of the cryptographic key. A hybridization of biometrics and cryptography has been proposed for authentication, resting on two methods: the fuzzy commitment [1] and the fuzzy vault [2]. However, to our knowledge, no previous work has addressed the problem that the abscissas of the genuine minutiae appear in clear in the vault. To overcome this problem, we propose an approach that stores the genuine minutiae in encoded form, yielding a new scheme that uses the strengths of the fuzzy commitment to compensate for the weaknesses of the fuzzy vault.

Title: Stereoscopic video coding based on the H.264/AVC standard
Authors: Sid Ahmed Fezza, K. Faraoun
Pub Date: 2010-11-29 | DOI: 10.1007/978-3-642-21984-9_63

Title: Drowsy driver detection system using eye blink patterns
Authors: T. Danisman, Ian Marius Bilasco, C. Djeraba, Nacim Ihaddadene
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648121
Abstract: This paper presents an automatic drowsy-driver monitoring and accident-prevention system based on monitoring changes in eye-blink duration. The proposed method detects visual changes at the eye locations using a horizontal-symmetry feature of the eyes. It detects eye blinks with a standard webcam in real time at 110 fps at a 320×240 resolution. Experimental results on the JZU [3] eye-blink database show that the proposed system detects eye blinks with 94% accuracy and a 1% false-positive rate.

Title: Weighted matrix distance metric for face images classification
Authors: C. Rouabhia, Kheira Hamdaoui, H. Tebbikh
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648020
Abstract: This paper proposes a novel weighted distance metric for face image classification and recognition, based on 2D matrices rather than 1D vectors and on eigenvalues. The distance is measured between two feature matrices obtained by two-dimensional principal component analysis (2DPCA) and two-dimensional linear discriminant analysis (2DLDA). The weights are the inverses of the eigenvalues of the total scatter matrix of the face matrices, sorted in decreasing order, and the classification strategy is the nearest-neighbour algorithm. To evaluate the efficiency of the proposed metric, experiments were carried out on the international ORL face database. The experimental results show that the weighted matrix distance metric outperforms the Yang and Frobenius distances.

Title: Schema matching for integrating multimedia metadata
Authors: Samir Amir, Ioan Marius Bilasco, T. Danisman, T. Urruty, Ismail Elsayad, C. Djeraba
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647884
Abstract: The recent growth of multimedia in our lives requires extensive use of metadata for multimedia management, and many metadata standards have consequently appeared. Using these standards has become complicated because they have been developed by independent communities: content and context are usually described with several metadata standards, so a multimedia user must be able to interpret all of them. Several metadata integration techniques have been proposed to address this challenge, but the integration is performed by domain experts, which is costly and time-consuming. This paper presents a new system for semi-automatic integration of multimedia metadata. The system automatically maps between the metadata needed by the user and metadata encoded in different formats. The integration process exploits several sources of information: XML Schema entity names, their corresponding comments, and the hierarchical features of XML Schema. Our experimental results demonstrate the integration benefits of the proposed system.

Title: Exploring semantic roles of Web interface components
Authors: Kang Zhang, Jun Kong
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647848
Abstract: The adaptability of Web interfaces in response to changes in the interaction context, display environments (e.g., mobile screens), and users' personal preferences is becoming increasingly desirable because of the pervasive use of Web information. One of the major challenges in Web interface adaptation is discovering the semantic structure underlying a Web interface. This paper presents a robust and formal approach to recovering interface semantics with a graph grammar. Owing to its distinct support for spatial specification in the abstract syntax, the Spatial Graph Grammar (SGG) is used to perform semantic grouping and interpretation of segmented screen objects. Well-established image processing techniques recognize the atomic interface objects in an interface image. The output is a spatial graph, which records significant spatial relations among the recognized objects. Based on the spatial graph, the SGG parser recovers the hierarchical relations among interface objects and thus provides a semantic interpretation suitable for adaptation.

Title: A Multicast Routing Protocol adapted to the characteristics of Wireless Mesh Networks
Authors: Soumaya Fellah, Mejdi Kaddour
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647955
Abstract: Wireless mesh networks are multi-hop wireless networks whose nodes are characterized by their stability. In this kind of network, the minimum-cost multicast tree connects sources and receivers while involving a minimum number of forwarding nodes. This paper introduces an adaptation of the popular multicast protocol ODMRP (On-Demand Multicast Routing Protocol), named OODMRP (Optimized On-Demand Multicast Routing Protocol). The main effect of OODMRP is to minimize the number of forwarding nodes, which reduces resource consumption and network congestion. Before selecting itself as a forwarding node, each node checks its neighborhood for an existing forwarding node; if one exists, it selects that node as its ascendant in the multicast tree. Our simulation results show that OODMRP dramatically reduces the number of forwarding nodes and forwarded packets compared with the original ODMRP.

Title: Click fraud prevention in pay-per-click model: Learning through multi-model evidence fusion
Authors: M. Kantardzic, C. Walgampaya, Wael Emara
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647854
Abstract: Multi-sensor data fusion has been an area of intense research and development activity; the concept has been applied to numerous fields, and new applications are being explored constantly. The multi-sensor-based Collaborative Click Fraud Detection and Prevention (CCFDP) system can be viewed as an evidence fusion problem. In this paper we detail the multi-level data fusion mechanism used in CCFDP for real-time click fraud detection and prevention. Prevention is based on blocking suspicious traffic by IP, referrer, city, country, ISP, etc., and the system maintains an online database of these suspicious parameters. We have tested the system with real-world data from an actual ad campaign; the results show that multi-level data fusion improves the quality of click fraud analysis.

Title: FLC-archive to solve multiobjective reentrant hybride flowshop scheduling problem
Authors: Frédéric Dugardin, L. Amodeo, F. Yalaoui
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648191
Abstract: This article deals with the multiobjective reentrant hybrid flowshop scheduling problem, in which several tasks have to be processed in a system and, as a special feature, must pass through each machine several times. The system is composed of multiple stages, each containing identical parallel machines. Since tasks re-enter the system at the end of the normal process, they conflict with the following tasks. The problem is NP-hard, and we have developed a metaheuristic to solve it: an evolutionary algorithm based on the well-known SPEA2 mechanism. The algorithm uses a fuzzy logic controller (FLC) to adapt the mutation and crossover probabilities of generation t according to the structure of the population in the previous generations (t-1) and (t-2). The two objectives are the minimization of the makespan and of the total tardiness. In this work we compare the classic SPEA2 with the FLC-improved version (called FLC-archive). The two algorithms are tested on multiple instances adapted from the literature, and their results are compared using two different multiobjective measures.

Title: Buffers sizing in assembly lines using a Lorenz multiobjective ant colony optimization algorithm
Authors: H. Chehade, F. Yalaoui, L. Amodeo, Frédéric Dugardin
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647916
Abstract: In this paper, a new multiobjective resolution approach is proposed for solving buffer sizing problems in assembly lines. The problem consists of sizing the buffers between the different stations of a line, the size of each buffer being bounded by a lower and an upper value. Two objectives are considered: maximizing the throughput rate and minimizing the total size of the buffers. The resolution method is based on a multiobjective ant colony algorithm that uses Lorenz dominance instead of the well-known Pareto dominance relationship. Lorenz dominance provides a better domination area by rejecting the solutions located at the extreme ends of the Pareto front. The obtained results are compared with those of a classical multiobjective ant colony optimization algorithm using three different measuring criteria. The numerical results show the advantages and the efficiency of Lorenz dominance.