Analysis of modified Triple-A steganography technique using Fisher Yates algorithm
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086199
S. Alam, S. Zakariya, N. Akhtar
Steganography is the science of embedding private, confidential, or sensitive data within a given cover medium without making any visible changes to it. In this paper, we present a modified Triple-A method for RGB image-based steganography. The method introduces the concept of storing a variable number of bits in each channel (R, G, or B) of a pixel. We extend the randomized-pixel steganography algorithm so that it places no restrictions on the type of images used. Our analysis exploits properties of the human visual system to increase the amount of data that can be hidden in an image in practice. The data are hidden in pixels selected at random using the Fisher-Yates algorithm; security is enhanced by embedding the data carefully together with the random choice of pixel positions. The result is a high level of security for the messages hidden in images.
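As a rough illustration of the pixel-selection step, the sketch below (an assumption-laden sketch, not the paper's implementation) seeds a Fisher-Yates shuffle with a shared key to derive a pseudo-random pixel order and then writes message bits into the least significant bit of one channel of each selected pixel. The key string, the flat list-of-pixels layout, and the single bit per channel are illustrative assumptions; the modified Triple-A method stores a variable number of bits per channel.

```python
import random

def fisher_yates_order(n_pixels, key):
    """Return a keyed pseudo-random visiting order over pixel indices."""
    order = list(range(n_pixels))
    rng = random.Random(key)          # shared secret key seeds the shuffle
    for i in range(n_pixels - 1, 0, -1):
        j = rng.randint(0, i)         # Fisher-Yates: swap with a random earlier slot
        order[i], order[j] = order[j], order[i]
    return order

def embed_bits(pixels, bits, key, channel=2):
    """Hide `bits` in the LSB of one channel of key-selected pixels.

    `pixels` is a list of [R, G, B] lists; it is modified in place.
    """
    if len(bits) > len(pixels):
        raise ValueError("message too long for this cover image")
    order = fisher_yates_order(len(pixels), key)
    for bit, idx in zip(bits, order):
        pixels[idx][channel] = (pixels[idx][channel] & ~1) | bit
    return pixels

# toy usage: a 4-pixel "image" and a 3-bit message
cover = [[120, 64, 33], [10, 200, 90], [77, 77, 77], [5, 6, 7]]
stego = embed_bits(cover, [1, 0, 1], key="shared-secret")
print(stego)
```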
{"title":"Analysis of modified triple — A steganography technique using Fisher Yates algorithm","authors":"S. Alam, S. Zakariya, N. Akhtar","doi":"10.1109/HIS.2014.7086199","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086199","url":null,"abstract":"Steganography is a science of embedding private, confidential, sensitive data or information within the given cover media without making any visible changes to it. In this paper, we present a modified Triple-A method for RGB image based steganography. This method introduces the concept of storing variable number of bits in each channel (R, G or B) of pixel. We come out with extended Randomize pixel Steganography algorithm without any limitations on the type of images being used. In this analysis, we focus on the property of human vision system that helps to increase the amount of data hiding in the images practically. In this work, we hide the data in pixel which is selected by randomly using Fisher Yates algorithm. The security can be enhanced by cleverly embedding the data, along with a random choice of pixel position. It offers very high security of messages that hidden in images.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116022205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drusen exudate lesion discrimination in colour fundus images
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086193
Saima Waseem, M. Akram, Bilal Ashfaq Ahmed
Automatic screening and diagnosis of ocular disease from fundus images are in use and studied worldwide. Age-related macular degeneration (AMD), one of the leading causes of sight loss, has many proposed automatic screening systems. These systems detect bright yellow lesions and grade the disease as early or advanced based on the number and size of the lesions. Such systems find it difficult to differentiate drusen from exudates, another bright lesion associated with diabetic retinopathy, because the two lesions look similar on the retinal surface; separating them reliably can therefore improve the performance of any automatic system. In this paper we propose a novel approach to discriminate between these lesions. The approach is a two-stage procedure: after pre-processing, the first stage detects all bright pixels in the image and removes suspicious pixels from the detected region; in the second stage, bright regions are classified as drusen or exudates using a Support Vector Machine (SVM). The proposed method was evaluated on the publicly available STARE dataset and achieves 92% accuracy.
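A minimal sketch of the second-stage classifier, assuming each candidate bright region has already been summarised as a small feature vector; mean intensity, area, and edge contrast are placeholder features, since the paper's actual feature set is not listed in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# toy feature vectors per bright region: [mean_intensity, area_px, edge_contrast]
# labels: 0 = drusen, 1 = exudate (illustrative data, not from the paper)
X = np.array([[0.61, 140, 0.20], [0.58, 110, 0.18], [0.83, 95, 0.55],
              [0.80, 300, 0.60], [0.64, 180, 0.22], [0.86, 70, 0.58]])
y = np.array([0, 0, 1, 1, 0, 1])

# scale features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)

# classify a new candidate region
print(clf.predict([[0.79, 120, 0.50]]))   # -> [1], i.e. exudate
```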
{"title":"Drusen exudate lesion discrimination in colour fundus images","authors":"Saima Waseem, M. Akram, Bilal Ashfaq Ahmed","doi":"10.1109/HIS.2014.7086193","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086193","url":null,"abstract":"Automatic screening and diagnosis of ocular disease through fundus images are in place and considered worldwide. One of the leading sight loosing disease known as age related macular degeneration (AMD) has many proposed automatic screening systems. These systems detect yellow bright lesion and through the number of lesion and their size the disease is graded as advance and earlier stage. It becomes difficult for these systems to differentiate drusens from exudates another bright lesion associated with Diabetic retinopathy. These two lesions look similar on retinal surface. Differentiating these two lesions can improve the performance of any automatic system. In this paper we proposed a novel approach to discriminate these lesions. The approach consists of two stage procedure. The first stage after pre-processing detects all bright pixels from the image. The suspicious pixels are removed from the detected region. On the second stage bright regions are classified as drusen and exudates through Support Vector Machine (SVM). Proposed method was evaluated on publically available dataset STARE. The system achieve 92% accuracy.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122994567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECG signals analysis for biometric recognition
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086192
M. Tantawi, A. Salem, M. Tolba
The electrocardiogram (ECG), as a relatively new biometric trait, has the advantage of being a liveness indicator that is difficult to spoof or falsify. According to the features they use, existing ECG-based biometric systems can be classified as fiducial or non-fiducial. Computing fiducial features requires the accurate detection of 11 fiducial points, which is a very challenging task; non-fiducial approaches relax the detection requirement but usually produce a high-dimensional feature space. This paper presents a systematic study of ECG-based individual identification. A fiducial approach that uses a feature set selected by the information gain (IG) criterion is introduced first. A non-fiducial, wavelet-based approach is then proposed; to avoid the high dimensionality of the resulting wavelet coefficient structure, the structure is analysed and reduced, also using the IG criterion. The proposed feature sets were evaluated and compared using a radial basis function (RBF) neural network classifier. Experiments on PhysioNet databases show the superiority of the proposed non-fiducial approach.
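As a hedged sketch of the IG-based reduction step: the information gain of a feature with respect to the subject label can be estimated as mutual information, and only the top-ranked wavelet coefficients are kept. The use of scikit-learn's mutual_info_classif and the number of retained features are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def reduce_by_information_gain(X, y, n_keep):
    """Rank features (e.g. wavelet coefficients) by estimated information
    gain w.r.t. the subject label and keep the n_keep best ones."""
    ig = mutual_info_classif(X, y, random_state=0)   # one score per column
    keep = np.argsort(ig)[::-1][:n_keep]             # indices of top features
    return X[:, keep], keep

# toy data: 8 heartbeats x 20 wavelet coefficients, 2 subjects
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 20))
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X[y == 1, 3] += 3.0                                  # make coefficient 3 informative

X_reduced, kept = reduce_by_information_gain(X, y, n_keep=5)
print(kept)                                          # coefficient 3 should rank highly
```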
{"title":"ECG signals analysis for biometric recognition","authors":"M. Tantawi, A. Salem, M. Tolba","doi":"10.1109/HIS.2014.7086192","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086192","url":null,"abstract":"Electrocardiogram (ECG) as a new biometric trait has the advantage of being a liveliness indicator and difficult to be spoofed or falsified. According to the utilized features, the existing ECG based biometric systems can be classified to fiducial and non-fiducial systems. The computation of fiducial features requires the accurate detection of 11 fiducial points which is a very challenging task. On the other hand, non-fiducial approaches relax the detection process but usually result in high dimension feature space. This paper presents a systematic study for ECG based individual identification. A fiducial based approach that utilizes a feature set selected by information gain IG criterion is first introduced. Furthermore, a non-fiducial wavelet based approach is proposed. To avoid the high dimensionality of the resultant wavelet coefficient structure, the structure has been investigated and reduced using also IG criterion. The proposed feature sets were examined and compared using radial basis functions (RBF) neural network classifier. The conducted experiments using Physionet databases revealed the superiority of our suggested non-fiducial approach.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127231102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effectiveness of modified iterative decoding algorithm for Cubic Product Codes
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086209
Atta-ur-Rahman, I. Qureshi
In this paper, a modified iterative decoding algorithm (MIDA) is proposed for decoding Cubic Product Codes (CPC), also called three-dimensional product block codes. MIDA is a hard-decision decoder that was initially proposed by the same authors for decoding simple product codes, where it significantly reduced the decoding complexity of the basic iterative algorithm with negligible performance degradation. Two versions of the proposed algorithm are investigated, with and without complexity reduction, and the resulting complexity versus performance trade-off is highlighted. Simulations demonstrate the bit error rate (BER) performance of the proposed algorithm over a Rayleigh flat-fading channel.
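For context on how such BER curves are typically produced, the sketch below simulates uncoded BPSK over a Rayleigh flat-fading channel with coherent detection and estimates the bit error rate at several SNR points. The modulation and channel details are simplifying assumptions; the paper's MIDA decoder would sit between demodulation and the error count.

```python
import numpy as np

def ber_rayleigh_bpsk(ebn0_db, n_bits=200_000, seed=0):
    """Monte-Carlo BER of uncoded BPSK over a Rayleigh flat-fading channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                              # BPSK: 0 -> +1, 1 -> -1
    # flat Rayleigh fading: unit-power complex Gaussian gain per symbol
    h = (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits)) / np.sqrt(2)
    noise_std = np.sqrt(1 / (2 * 10 ** (ebn0_db / 10)))
    noise = noise_std * (rng.normal(size=n_bits) + 1j * rng.normal(size=n_bits))
    received = h * symbols + noise
    # coherent detection: equalise by the known channel, then hard decision
    detected = (np.real(received * np.conj(h)) < 0).astype(int)
    return np.mean(detected != bits)

for snr in (0, 5, 10, 15, 20):
    print(f"Eb/N0 = {snr:2d} dB  BER = {ber_rayleigh_bpsk(snr):.4f}")
```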
{"title":"Effectiveness of modified iterative decoding algorithm for Cubic Product Codes","authors":"Atta-ur-Rahman, I. Qureshi","doi":"10.1109/HIS.2014.7086209","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086209","url":null,"abstract":"In this paper a modified iterative decoding algorithm (MIDA) is proposed for decoding the Cubic Product Codes (CPC), also called three dimensional product block codes. It is a hard decision decoder that was initially proposed by the same authors for decoding simple product codes, where the decoding complexity of the basic iterative algorithm was significantly reduced with negligible performance degradation. Two versions of the proposed algorithm are investigated that are with and without complexity reduction. A complexity and performance trade-off is also highlighted. Bit error rate (BER) performance of the proposed algorithm over a Rayleigh flat fading channel, is demonstrated by the simulations.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125995114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fuzzy logic-based emotional intelligence framework for evaluating and orienting new students at HCT Dubai colleges
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086177
F. Bouslama, Michelle Housley, Andrew Steele
Academic institutions across the world face the challenge of providing new intakes of students with appropriate orientation and counseling services to help them cope with the changes and challenges of life at university or college. These sessions are often based on measurements of the students' technical skills or Intelligence Quotient (IQ), such as mathematical computation and communication abilities. However, Emotional Intelligence (EI) tests, which have become an essential tool and an integral part of the recruiting, orientation, and counseling strategies of many individuals and organizations, are rarely part of these evaluation schemes. Some academic institutions conduct a partial test of these skills, but it may not provide a holistic view of each individual's emotional intelligence. In this paper, a set of EI tests covering four general areas of EI is proposed to evaluate the emotional intelligence of the new intakes at the HCT Dubai Colleges. These tests help identify students who lack experience with non-cognitive capabilities, including competencies and skills that may influence their ability to cope with the demands and pressures of the educational environment. A fuzzy logic-based emotional intelligence modeling and processing framework is also proposed to better capture the uncertainties in surveys of new intakes and to deal with the complexities of the classification task. The new system is expected to help the HCT Dubai Colleges design and prepare orientation and counseling interventions that develop students' abilities to perceive, access, and generate emotions, promoting their emotional and intellectual growth.
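To make the fuzzy-modelling idea concrete, here is a minimal sketch that maps a raw EI survey score onto overlapping linguistic categories; the membership shapes, labels, and score range are illustrative assumptions, not the paper's actual rule base.

```python
def triangular(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_ei_score(score):
    """Map a 0-100 EI survey score onto overlapping linguistic categories."""
    return {
        "low":    triangular(score, 0, 0, 50),
        "medium": triangular(score, 25, 50, 75),
        "high":   triangular(score, 50, 100, 100),
    }

print(fuzzify_ei_score(62))   # partly 'medium', partly 'high'
```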
{"title":"A fuzzy logic-based emotional intelligence framework for evaluating and orienting new students at HCT Dubai colleges","authors":"F. Bouslama, Michelle Housley, Andrew Steele","doi":"10.1109/HIS.2014.7086177","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086177","url":null,"abstract":"Academic institutions across the world face the challenge of providing new intakes of students with appropriate orientation and counseling services to better help students cope with the changes and challenges of life at university or college. These sessions are often based on measurements of the students' technical skills or Intelligence Quotient (IQ) levels such as mathematical computation and communications abilities. However, Emotional Intelligence (E.I.) tests, which have become an essential tool and an integral part of the recruiting, orientation, and counseling strategies of many individuals and organizations, are not often part of these evaluation schemes. At some academic institutions, a partial test of those skills is conducted but may not provide a holistic view of the emotional intelligence of each individual. In this paper, a set of EI tests covering four general areas of EI is proposed to evaluate the emotional intelligence of the new intakes at the HCT Dubai Colleges. These tests will help identify students who lack experience with non-cognitive capabilities including competencies and skills that may influence their abilities to succeed in coping with educational environmental demands and pressures. A fuzzy-based emotional intelligence modeling and processing framework is also proposed to better model and capture uncertainties in surveys of new intakes, and which will deal well with the complexities of the classification system. This new system is expected to help the HCT Dubai Colleges better design and prepare orientation and counseling interventions which will help students develop their abilities to perceive, to access and generate emotions to promote their emotional and intellectual growth.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130251385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Laser marks detection from fundus images
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086188
Faraz Tahir, M. Akram, M. Abbass, Albab Ahmad Khan
Eye diseases such as diabetic retinopathy may cause blindness. At the advanced stages of diabetic retinopathy, further disease progression is stopped using laser treatment. Laser treatment leaves marks on the retinal surface that confuse automated retinal diagnostic systems. Because these laser marks hinder further analysis of the retinal images, it is desirable to detect and remove them to avoid unnecessary processing. This paper presents a method to automatically detect laser marks in retinal images and reports results from its performance evaluation.
{"title":"Laser marks detection from fundus images","authors":"Faraz Tahir, M. Akram, M. Abbass, Albab Ahmad Khan","doi":"10.1109/HIS.2014.7086188","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086188","url":null,"abstract":"Eye diseases such as diabetic retinopathy may cause blindness. At the advanced stages of diabetic retinopathy further disease progression is stopped using laser treatment. Laser treatment leaves behind marks on the retinal surface that causes misbehaviors in automated retinal diagnostic system. These laser marks hinders the further analysis of the retinal images so it is desirable to detect laser marks and remove them to avoid any unnecessary processing. This paper presents a method to automatically detect laser marks from the retinal images and present some results based on the performance evaluation.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121700993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid model for information filtering in location based social networks using text mining
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086206
Rodrigo Miranda Feitosa, S. Labidi, André Luis Silva dos Santos
This research aims to create an application that uses machine learning techniques to extract and collate geolocated data collected from a social network, with the goal of providing social recommendations to users. Existing research in the field of social recommendation still has deficiencies regarding the effectiveness of the filtered data. This paper presents a study and an implementation that use text mining techniques as a proposal to resolve the problems found in social recommendation and to obtain more effective results.
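As a rough sketch of the kind of text-mining step such a filter could use (the abstract does not name a specific technique, so TF-IDF similarity between a user's interest text and geolocated posts is an assumption chosen for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# illustrative geolocated posts and a user interest profile (invented data)
posts = [
    "great sushi place near the harbour, open late",
    "road closed for maintenance on main avenue",
    "live jazz tonight at the old town cafe",
]
user_profile = "sushi and other food places near the harbour"

vectorizer = TfidfVectorizer(stop_words="english")
post_vectors = vectorizer.fit_transform(posts)
profile_vector = vectorizer.transform([user_profile])

# rank posts by textual similarity to the user's interests
scores = cosine_similarity(profile_vector, post_vectors).ravel()
for score, post in sorted(zip(scores, posts), reverse=True):
    print(f"{score:.2f}  {post}")
```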
{"title":"Hybrid model for information filtering in location based social networks using text mining","authors":"Rodrigo Miranda Feitosa, S. Labidi, André Luis Silva dos Santos","doi":"10.1109/HIS.2014.7086206","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086206","url":null,"abstract":"The research aims to create an application that uses techniques from Machine Learning to extract and collate data geolocated - collected a Social Network, aiming to promote the Social Recommendation users. Existing research in the field of social recommendation deficiencies remain regarding the effectiveness of the filtered data. This paper presents a study and implementation using Text Mining techniques as a proposal for resolution of problems found in social recommendation and more effective results.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113939373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extraction of association rules used for assessing web sites' quality from a set of criteria
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086164
Rim Rekik, I. Kallel, A. Alimi
The amount of data circulating on the internet has increased considerably over the last decades, and web sites are the main source for meeting users' needs. However, some existing web sites do not meet users' expectations. Many studies have addressed the problem of assessing the quality of web sites in different categories such as e-commerce, education, entertainment, and health. The problem calls for multiple criteria decision making (MCDM) because the assessment criteria are numerous and conflicting. Existing methods are mainly based on building a hierarchy of high-level criteria, sub-level criteria, and alternatives, and there is as yet no standard that defines the important evaluation criteria. This paper therefore presents a process for collecting and extracting data from a list of studies following the Systematic Literature Review (SLR) method, since knowing which criteria are used frequently in the literature is necessary for the assessment task. The paper also derives a set of association rules from the extracted criteria by applying the Apriori method.
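A minimal self-contained sketch of the Apriori step, assuming each "transaction" is the set of quality criteria mentioned by one reviewed study; the criteria names and thresholds below are invented for illustration.

```python
from itertools import combinations

def apriori_frequent_itemsets(transactions, min_support):
    """Return frequent itemsets (as frozensets) mapped to their support."""
    n = len(transactions)
    current = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent

def association_rules(frequent, min_confidence):
    """Derive rules A -> B with confidence = support(A u B) / support(A)."""
    rules = []
    for itemset, supp in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = supp / frequent[antecedent]
                if conf >= min_confidence:
                    rules.append((set(antecedent), set(itemset - antecedent), conf))
    return rules

# toy "studies x criteria" transactions (invented criteria names)
studies = [
    {"usability", "content", "security"},
    {"usability", "content"},
    {"usability", "security"},
    {"content", "security"},
]
freq = apriori_frequent_itemsets(studies, min_support=0.5)
for lhs, rhs, conf in association_rules(freq, min_confidence=0.6):
    print(f"{sorted(lhs)} -> {sorted(rhs)}  (conf {conf:.2f})")
```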
{"title":"Extraction of association rules used for assessing web sites' quality from a set of criteria","authors":"Rim Rekik, I. Kallel, A. Alimi","doi":"10.1109/HIS.2014.7086164","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086164","url":null,"abstract":"The amount of circulating data on the internet has witnessed a considerable increase during the last decades. A web site is the main source that provides users' needs. However, some of the existing web sites are not well intentioned by users. Many studies have treated the problem of assessing the web sites' quality of different categories such as ecommerce, education, entertainment, health, etc. The problematic implies a multiple criteria decision making (MCDM) due to the multiple conflicting criteria for assessment. Existing methods are mainly based on making a hierarchy to divide high level criteria, sub-level criteria and alternatives. There is no standard until now that defines important criteria for evaluation. Indeed, this paper presents a process of collecting and extracting data from a list of studies according to a Systematic Literature Review (SLR) method. In fact, it is necessary to know frequent criteria used in the literature for establishing the task of assessment. This paper proposes also a determination of an association rules' set extracted from a set of criteria by applying an Apriori method.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123046397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonorthogonal DCT implementation for JPEG forensics
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086195
G. Fahmy
Detecting prior JPEG compression has become an essential task in image forensics for exposing forgery. In this paper we propose a novel DCT implementation technique that can be used to detect any hacking or tampering of JPEG/DCT-compressed images. The approach builds on recent ideas in the literature of recompressing JPEG image blocks and detecting whether a block has been compressed before, and how many times. The proposed DCT implementation leaves a one-time signature on the processed coefficients or pixels and can therefore be used to detect whether a block has previously been compressed with this implementation; any further processing can then be easily detected and identified. The proposed DCT transform is nonorthogonal and introduces a small error because of this nonorthogonality, but it maintains an excellent trade-off between compression performance and transform error. Illustrative examples on several processed images are presented, together with a complexity analysis.
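To make the recompression idea concrete, here is a hedged sketch using the standard orthonormal 2-D DCT rather than the paper's nonorthogonal variant (which is not specified here): a block whose coefficients already lie on a quantisation grid survives a further compression round trip almost unchanged, and that stability is the detection cue.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_round_trip(block, q):
    """One compression cycle: 2-D DCT, uniform quantisation with step q, inverse."""
    coeffs = dctn(block, norm="ortho")
    quantised = np.round(coeffs / q) * q
    return idctn(quantised, norm="ortho")

def looks_precompressed(block, q, tol=0.5):
    """Heuristic: a block that survives a round trip almost unchanged was
    very likely already quantised with (roughly) the same step q."""
    return np.max(np.abs(jpeg_like_round_trip(block, q) - block)) < tol

rng = np.random.default_rng(1)
fresh = rng.uniform(0, 255, (8, 8))          # never-compressed block
once = jpeg_like_round_trip(fresh, q=16)     # block compressed once

print(looks_precompressed(fresh, q=16))      # False: a round trip changes it a lot
print(looks_precompressed(once, q=16))       # True: already on the quantisation grid
```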
{"title":"Nonorthogonal DCT implementation for JPEG forensics","authors":"G. Fahmy","doi":"10.1109/HIS.2014.7086195","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086195","url":null,"abstract":"The detection of JPEG prior compression has become an essential task in the detection of forgery in image forensics. In this paper we propose a novel DCT implementation technique that can be utilized in the detection of any hacking or tampering of JPEG/DCT compressed images. The proposed approach is based upon recent literature ideas of recompressing JPEG image blocks and detecting if this block has been compressed before or not and how many times. In this paper we proposed a DCT implementation that has a onetime signature on processed coefficients or pixels and can be used as a tool to detect if this block has been compressed before using the proposed implementation or not. Any further processing can be easily detected and identified. The proposed DCT transformation is nonorthogonal and results in a minor amount of error due to this nonorthogonality, however it maintains an excellent tradeoff between compression performance, and transform error. Illustrative examples on several processed images are presented with complexity analysis.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127921111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ranking model adaptation for domain specific mining using binary classifier for sponsored ads
Pub Date: 2014-12-01 | DOI: 10.1109/HIS.2014.7086171
M. Krishnamurthy, N. Jaishree, A. S. Pillai, A. Kannan
Domain-specific search focuses on one area of knowledge, and applying broad-based ranking algorithms to vertical search domains is not desirable, since a broad-based ranking model is built from data spanning many domains on the web. Vertical search engines instead use a focused crawler that indexes only web pages relevant to a predefined topic. With a Ranking Adaptation Model, an existing ranking model can be adapted to a new domain. Binary classifiers divide a given set of objects into two groups according to whether or not they have some property; when that property is relevance, the relevant objects are returned for the search query of the particular domain vertical. Sponsored ads are then placed alongside the organic search results and are ranked using bid, budget, and quality score: the ad with the highest bid is initially placed first in the listings, and the ad with the highest quality score, determined from click-through logs, is later promoted to the first position. Thus, both organic results and sponsored ads are returned for the specific domain, making it easy for users to access real-time ads, connect directly with advertisers, and obtain information on their search query.
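A minimal sketch of the ad-ordering rule described above; the field names and the two-step re-ranking are a plain reading of the abstract, not the paper's exact formulation. Ads are first ordered by bid, then the ad with the best click-through-derived quality score is promoted to the top slot.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    bid: float            # advertiser bid
    quality_score: float  # derived from click-through logs

def rank_ads(ads):
    """Order by bid, then promote the highest-quality ad to the first slot."""
    ranked = sorted(ads, key=lambda a: a.bid, reverse=True)
    best_quality = max(ranked, key=lambda a: a.quality_score)
    ranked.remove(best_quality)
    ranked.insert(0, best_quality)
    return ranked

ads = [Ad("A", bid=2.5, quality_score=0.4),
       Ad("B", bid=1.8, quality_score=0.9),
       Ad("C", bid=2.0, quality_score=0.6)]
print([a.name for a in rank_ads(ads)])   # ['B', 'A', 'C']
```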
{"title":"Ranking model adaptation for domain specific mining using binary classifier for sponsored ads","authors":"M. Krishnamurthy, N. Jaishree, A. S. Pillai, A. Kannan","doi":"10.1109/HIS.2014.7086171","DOIUrl":"https://doi.org/10.1109/HIS.2014.7086171","url":null,"abstract":"Domain - specific search focuses on one area of knowledge. Applying broad based ranking algorithms to vertical search domains is not desirable. The broad based ranking model builds upon the data from multiple domains existing on the web. Vertical search engines attempt to use a focused crawler that index only relevant web pages to a predefined topic. With Ranking Adaptation Model, one can adapt an existing ranking model of a unique new domain. The binary classifiers classify the members of a given set of objects into two groups on the basis of whether they have some property or not. If it is property of relevancy, it is returned to the search query of that particular domain vertical. Sponsored ads are then placed alongside the organic search results and they are ranked with the help of bid, budget and quality score. The ad with the highest bid is placed first in the ad listings. Later, the ad with a maximum quality score is found by click through logs which is replaced in first position. Thus, both organic search and sponsored ads are returned for the specific domain, making it easy for the users to get access to real time ads and connect directly with advertisers as well as to get information on the search query.","PeriodicalId":161103,"journal":{"name":"2014 14th International Conference on Hybrid Intelligent Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124465144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}