Automatic articular cartilage segmentation with multiple models
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122143
P. S. Satapure, A. Rajurkar, V. G. Kottawar
In this paper, a method for segmenting human knee cartilage from MRI images using multiple models is presented. Initially, a model is trained on three types of knee MRI scans using a large existing data set, the training set, which contains pixel features and their class labels (background or cartilage). Multiple k-NN models, selected by MRI scan type and slice number, are used to segment cartilage from a knee MRI scan. Multiple models are required because different types of MRI scans have different intensity levels. Each MRI scan has around 20 slices, of which a few middle slices contain more cartilage pixels than the others. The performance of the proposed method is evaluated on knee MRI scans and compared with manual segmentation by a radiologist. The results show that the proposed technique improves accuracy and reduces processing time during cartilage segmentation.
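As an illustration of the multi-model idea described above, the following sketch selects a k-NN classifier by scan type and slice band and classifies each pixel as background or cartilage. It is a minimal sketch, not the authors' implementation; the feature extraction, the slice-band grouping, and all function names are assumptions.

```python
# Illustrative sketch: one k-NN model per (scan type, slice band), applied pixel-wise.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_models(training_sets, k=5):
    """training_sets: {(scan_type, slice_band): (features, labels)} -> fitted models."""
    models = {}
    for key, (X, y) in training_sets.items():
        model = KNeighborsClassifier(n_neighbors=k)
        model.fit(X, y)          # y: 0 = background, 1 = cartilage
        models[key] = model
    return models

def segment_slice(models, scan_type, slice_band, pixel_features, shape):
    """Classify every pixel of one slice and return a binary cartilage mask."""
    model = models[(scan_type, slice_band)]   # pick the model matching this scan/slice
    labels = model.predict(pixel_features)    # pixel_features: (n_pixels, n_features)
    return labels.reshape(shape)
```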
{"title":"Automatic articular cartilage segmentation with multiple models","authors":"P. S. Satapure, A. Rajurkar, V. G. Kottawar","doi":"10.1109/ICISIM.2017.8122143","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122143","url":null,"abstract":"In this paper a method for cartilage segmentation of human knee from MRI images using multiple models is presented. Initially we trained a model with three types of knee MRI scans using existing set of large data called as training set. This training set includes features of pixels and their classes such as background and cartilage. Multiple k-NN models based on MRI scan type and slice number are used to segment cartilage from knee MRI scan. Multiple models are required for different types of MRI scans which have different levels of intensities. Each MRI scan has around 20 slices in which few slices in middle have more cartilage pixels than other slices. The performance of proposed method is evaluated on knee MRI scan and comparison is carried out with manual segmentation by a radiologist. It is revealed that proposed technique improves accuracy and processing time during segmentation of cartilage.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127152382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel approach to personalize the healthcare video search
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122175
Tanvir Ambekar, V. Musande
Due to the rapid growth of the web, the Internet is now widely used to fulfill diverse information needs. However, precise information in specialized domains such as healthcare is often not available online in a form that satisfies the user's need. A specific category of users, such as doctors, is particularly interested in videos related to disease diagnosis and treatment. When doctors cannot find the root cause of a disease, they may want to review previous treatments given to the same patient or to patients with similar diseases in order to provide better care. Making such videos available through a dedicated video search engine is therefore important, as these videos can help in handling critical situations during diagnosis and treatment. The proposed system aims to return the most relevant videos for a user's query through a video search engine for healthcare data, which is easily available or can be recorded at low cost. The proposed method retrieves relevant videos by keyword-based label matching: it collects video data, performs speech-to-text conversion to create transcription snippets, and then labels the videos with keywords derived from these snippets and from prescription reports, producing more precise and relevant search results. These keywords are also used to re-rank the video search output. The proposed system is effective for disease prescription analysis and is helpful to new practitioners.
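A rough sketch of the keyword-based label matching and re-ranking step is given below; the label extraction from transcription snippets and prescription reports is assumed to have already happened, and the video names and scoring rule are illustrative, not the paper's exact method.

```python
# Minimal sketch of keyword-based label matching for re-ranking labeled videos.
def keyword_score(query, labels):
    """Fraction of query keywords that appear among a video's labels."""
    q = {w.lower() for w in query.split()}
    l = {w.lower() for w in labels}
    return len(q & l) / len(q) if q else 0.0

def rerank(query, videos):
    """videos: list of (video_id, labels). Return ids ordered by keyword match."""
    scored = [(keyword_score(query, labels), vid) for vid, labels in videos]
    return [vid for score, vid in sorted(scored, reverse=True) if score > 0]

# Example: a diagnosis-related query against two labeled videos.
videos = [("v1", ["diabetes", "insulin", "treatment"]),
          ("v2", ["fracture", "cast"])]
print(rerank("diabetes treatment", videos))   # -> ['v1']
```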
{"title":"A novel approach to personalize the healthcare video search","authors":"Tanvir Ambekar, V. Musande","doi":"10.1109/ICISIM.2017.8122175","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122175","url":null,"abstract":"Due to the increasing growth of the web, these days Internet is broadly utilized by users to fulfill different data needs. Sometimes, more precise information related to specific streams such as Healthcare is not available on the internet that satisfies the user's information need. There is a specific category of users such as doctors who really interested in the videos related to disease diagnosis and its treatment. Sometimes, doctors are not able to find the root cause of disease, so they are interested in the previous treatment given to that patient or similar disease patients in order to give better disease treatment. So making such videos available through a specific video search engine is very important, as these videos are useful to handle the very critical situations while diagnosis and treatment. The proposed system intends to show the most relevant videos for a specific users query with the help of video search engine for healthcare data. Healthcare data is easily available or can be recorded at low cost. The proposed method is used to show various relevant videos for a given user's need by keyword based label matching. The proposed method performs video data collection and speech to text conversion to create the transcription snippets. Finally, keyword based labeling is done with the help of that transcription snippets and prescription reports in order to show more precise and relevant video search results for a given users query. Then these keywords can be used to rearrange the video search outputs. This proposed system is very effective for disease prescription analysis as well as it helps practitioners who are new.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"50 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125832738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Healthcare data modeling in R
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122178
Diva Pant, Vishal Kumar, J. Kishore, Ritu Pal
The unprecedented interest in big data has paved the way for augmented technologies, and one of its major applications is healthcare analytics. Healthcare data come from varied sources; EHR data in particular provide a comprehensive view of a patient's health. People are paying more attention to their health and want the best possible care, especially as new technologies keep evolving. Analyzing this vast patient information can reveal patterns that give a better understanding of the data. In this study, a neurological dataset of a thousand patients was collected from a hospital. From this data, cases of head injury are considered, and specific attributes such as pulse rate, blood pressure, Glasgow Coma Scale, respiratory rate, and CNS status are studied and analyzed. The analysis is performed with respect to two factors: the duration of the patient's stay in the hospital and the seriousness of the injury. A classification model is built on the data and implemented in R, using its statistical packages and graphical capabilities.
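The paper builds its classification model in R; purely for illustration, and to keep all examples here in one language, the sketch below shows an equivalent workflow in Python. The file name, column names, and the severity target are hypothetical placeholders, not the study's actual schema.

```python
# Hedged sketch of a classification model over head-injury attributes
# (hypothetical columns; the paper's own implementation is in R).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("head_injury.csv")            # assumed file with one row per patient
features = ["pulse_rate", "blood_pressure", "gcs", "respiratory_rate", "cns"]
X, y = df[features], df["severity"]            # target: seriousness level of the injury

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```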
{"title":"Healthcare data modeling in R","authors":"Diva Pant, Vishal Kumar, J. Kishore, Ritu Pal","doi":"10.1109/ICISIM.2017.8122178","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122178","url":null,"abstract":"The unprecedented interest in big data has paved way for augmented technologies. One of the major usefulness of big data is found in the field of healthcare analytics. The healthcare data come from varied sources. Specifically EHR data provide a comprehensive view of patient's health. People are paying more attention to their health and want the best possible healthcare especially with new technologies evolving every now and then. We can analyze this astronomical patient's information and try to study certain patterns, which can give us the better understanding of the data present. In this study a neurological dataset of thousand patients has been collected from a hospital. Out of this data the particular cases of head injury are taken into account and specific attributes like pulse rate, blood pressure, Glasgow coma scale, respiratory rate, CNS are studied and analyzed. The analysis is performed on the basis of two factors: duration of patient's stay in the hospital and seriousness level of the injury. A classification model is prepared on the data and the implementation is carried out in R Programming, using its statistical packages and graphical abilities.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114896514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of quick response system for road network in emergency services
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122194
Pooja R. Katre, A. Thakare
The aim of this paper is to find the shortest navigation route to the nearest required location. With increasing development, road network structures are becoming more complicated, and finding the shortest path in such a network is difficult. Some situations require a quick response and a shorter path to the destination; in an emergency, choosing a wrong path may increase travel time. In this paper, we provide an algorithm that computes the shortest path for a Quick Response System. The system covers three types of services: police station, hospital, and fire brigade. Using the Global Positioning System (GPS), it obtains the current location of the incident, and the algorithm then determines the shortest path to that location, helping the rescue team reach the destination on time. This paper presents a new approach for computing the shortest route using an improved A* algorithm.
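A generic A* sketch over a small road graph is shown below to make the routing step concrete; it uses straight-line distance between node coordinates as the heuristic and is a plain textbook version, not the paper's improved variant.

```python
# Illustrative A* shortest-path sketch over a road graph.
import heapq, math

def a_star(graph, coords, start, goal):
    """graph: {node: [(neighbor, edge_cost), ...]}, coords: {node: (x, y)}."""
    h = lambda n: math.dist(coords[n], coords[goal])   # straight-line heuristic
    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None, float("inf")
```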
{"title":"Design of quick response system for road network in emergency services","authors":"Pooja R. Katre, A. Thakare","doi":"10.1109/ICISIM.2017.8122194","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122194","url":null,"abstract":"Aim of this paper is to find out shortest path navigation route to reach nearest required location. With increasing development in society the structure of the road networks are more complicated and finding the shortest path in such network is difficult one. Some situations where we need quick response and shorter path to reach to the destination. In emergency situation selecting a wrong path may increase the travel time. In this paper we provide an algorithm which calculates the shortest path for Quick Response System. This system provides three types of services like police station, hospital and fire bridged services. With the help of Global positioning system (GPS), it provides current location of the incident place and algorithm then determines the shortest path for that location. It helps rescue team to reach at destination on time. This paper presents new approach for calculating shortest navigation using improved A∗ algorithm.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132057173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Denial-of-service attack detection system
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122186
Supriya S. Thakare, P. Kaur
The use of online applications in day-to-day life is increasing, and in parallel the threat to their security is also increasing. The security of these applications is breached by various cyber attacks, and Denial-of-Service (DoS) is one such attack: it makes an online application or the server's resources unavailable to the intended users. To detect DoS attacks, a detection system is proposed that can handle both known and unknown attacks. The proposed system uses the multivariate correlation analysis (MCA) technique, which extracts the geometrical correlations between network traffic features; these correlations are used to detect DoS attacks. A triangle-area-based technique is used to enhance and speed up the MCA process. The KDD Cup 99 dataset is used to examine the effectiveness of the proposed system. To increase the detection rate and reduce the system's complexity, a subset of the record's features is used throughout the detection process.
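The sketch below illustrates the triangle-area idea behind the MCA step: each traffic record is turned into a triangle area map, and records that deviate strongly from a normal-traffic profile are flagged. The profile construction and threshold here are simplified assumptions, not the paper's exact procedure.

```python
# Simplified triangle-area-map sketch for anomaly-style DoS detection.
import numpy as np

def triangle_area_map(x):
    """Flattened upper triangle of triangle areas between every feature pair."""
    x = np.asarray(x, dtype=float)
    tam = np.abs(np.outer(x, x)) / 2.0          # area spanned by feature pair (i, j)
    return tam[np.triu_indices(len(x), k=1)]    # keep each pair once

def build_profile(normal_records):
    """Mean/std of triangle area maps over known-normal traffic records."""
    maps = np.array([triangle_area_map(r) for r in normal_records])
    return maps.mean(axis=0), maps.std(axis=0) + 1e-9

def is_attack(record, mean, std, threshold=3.0):
    """Flag a record whose map deviates strongly from the normal profile."""
    dev = np.abs(triangle_area_map(record) - mean) / std
    return dev.mean() > threshold
```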
Development of biometric palm vein trait based person recognition system: Palm vein biometrics system
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122140
S. D. Raut, V. Humbe, Arjun V. Mane
Biometric authentication is a mainstream area that attracts researchers to develop algorithms for data security. Palm vein biometrics is emerging as one of the most promising physiological characteristics for building efficient recognition systems. This paper discusses a new direction for generating a biometric trait key, a template-free key extracted by means of rigorous pattern recognition and information security techniques. Key generation is carried out by combining digital image processing operations, distance metric computation, and information security policies. The proposed recognition model includes feature extraction and detection phases, followed by a recognition technique based on unique and distinct detected palm vein feature characteristics. The proposed work gives a novel and robust algorithm for recognizing the subject, and the experimental work achieves a high recognition accuracy of 99.47%.
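To make the distance-metric matching step concrete, the sketch below identifies a subject by comparing an extracted palm vein feature vector against enrolled templates. The feature extraction itself, the gallery structure, and the threshold value are assumptions for illustration only.

```python
# Minimal distance-based identification sketch over pre-extracted feature vectors.
import numpy as np

def identify(probe, gallery, threshold=0.35):
    """gallery: {subject_id: template_vector}. Return best match or None."""
    best_id, best_dist = None, float("inf")
    for subject_id, template in gallery.items():
        dist = np.linalg.norm(probe - template)   # Euclidean distance between vectors
        if dist < best_dist:
            best_id, best_dist = subject_id, dist
    return best_id if best_dist <= threshold else None
```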
{"title":"Development of biometrie palm vein trait based person recognition system: Palm vein biometrics system","authors":"S. D. Raut, V. Humbe, Arjun V. Mane","doi":"10.1109/ICISIM.2017.8122140","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122140","url":null,"abstract":"Biometrie Authentication is the main stream to attract attention of researcher to develop algorithm for data and security concern. The palm vein biometric is emerging as the most promising physiological characteristic to develop efficient recognition system. This paper discuss about the new dimension to generate biometric trait key rather a template free key generation extracted by means of rigorous pattern recognition and information security tactics. The generation of key is exercised through mapping of certain digital image processing operation, distance metric computation and information security policies. The model of recognition system proposed that includes phases such as feature extraction and detection followed by the development of recognition technique based on unique and distinct detected palm vein feature characteristics. The proposed work gives novel and robust algorithm for the recognition of the subject. The experimental work gives result with 99.47% high rate of accuracy for the recognition of the subject.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115735215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical document clustering based on cosine similarity measure
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122166
Shraddha K. Popat, Pramod B. Deshmukh, Vishakha A. Metre
Clustering is one of the prime topics in data mining: it partitions data and classifies it into meaningful subgroups. Document clustering groups documents such that different groups show different characteristics with respect to similarity. In this paper, an experimental exploration of a similarity-based method, HSC, for measuring the similarity between data objects, particularly text documents, is introduced. An incremental algorithm is also provided that evaluates cluster similarity between documents and leads to much improved results over other traditional methods. The paper also focuses on selecting an appropriate similarity measure for analyzing the similarity between documents.
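The following sketch shows a generic hierarchical clustering of TF-IDF document vectors under cosine distance, to illustrate the kind of pipeline involved; it is a standard SciPy/scikit-learn pipeline, not the HSC algorithm itself, and the sample documents are made up.

```python
# Generic hierarchical document clustering with cosine distance (illustration only).
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

docs = ["knee cartilage mri segmentation",
        "dos attack detection in network traffic",
        "cartilage segmentation with knn models",
        "denial of service traffic analysis"]

X = TfidfVectorizer().fit_transform(docs).toarray()
Z = linkage(pdist(X, metric="cosine"), method="average")   # agglomerative merge tree
labels = fcluster(Z, t=2, criterion="maxclust")            # cut the tree into two clusters
print(labels)   # documents on the same topic share a cluster label
```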
{"title":"Hierarchical document clustering based on cosine similarity measure","authors":"Shraddha K. Popat, Pramod B. Deshmukh, Vishakha A. Metre","doi":"10.1109/ICISIM.2017.8122166","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122166","url":null,"abstract":"Clustering is one of the prime topics in data mining. Clustering partitions the data and classifies the data into meaningful subgroups. Document clustering is a set of the document into groups such that two groups show different characteristics with respect to likeness. In this paper, an experimental exploration of similarity based method, HSC for measuring the similarity between data objects particularly text documents is introduced. It also provides an algorithm which has an incremental approach and evaluates cluster likeness between documents that leads to much improved results over other traditional methods. It also focuses on the selection of appropriate similarity measure for analyzing similarity between the documents.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114636901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Document management system: A notion towards paperless office
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122176
Mahendra K. Ugale, Shweta J. Patil, V. Musande
A paperless Document Management System (DMS) is used to eliminate the losses that businesses suffer because of physical paper files and filing systems. This paper addresses some of the technologies that are helping professionals shift toward a paperless business world. A DMS is based on organizing digital documents so that they can be stored and searched, reducing paper use. Most workplaces contain a variety of documents with a mixture of handwritten and printed text, and detecting such documents is a crucial task for OCR developers. This paper describes the steps for processing different documents, including scanning, tagging, and indexing, for effective data retrieval using OCR and indexing techniques.
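A minimal inverted-index sketch for the indexing and retrieval step is shown below; it assumes the document text has already been obtained (for example from an OCR engine) and uses hypothetical document names.

```python
# Simple inverted index over document text, for keyword retrieval after OCR.
from collections import defaultdict

def build_index(documents):
    """documents: {doc_id: text}. Returns {term: set of doc_ids containing it}."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return documents containing every query term."""
    hits = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*hits) if hits else set()

docs = {"invoice_01": "invoice march printed copy", "memo_02": "handwritten memo march"}
idx = build_index(docs)
print(search(idx, "march invoice"))   # -> {'invoice_01'}
```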
{"title":"Document management system: A notion towards paperless office","authors":"Mahendra K. Ugale, Shweta J. Patil, V. Musande","doi":"10.1109/ICISIM.2017.8122176","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122176","url":null,"abstract":"Paperless Document Management System is used to eliminate the losses that businesses suffer because of physical paper files and filing systems. This Paper addresses some of the technologies that are helping professionals shift toward a paperless business world. A DMS based on organizing digital documents to search and store documents and to reduce paper. Most of the workplace consists a variety of documents having a mixture of handwritten and printed text. The detection of such documents is a crucial task for OCR developers. This paper describes different steps for processing different documents using scanning, tagging, and indexing for effective data retrieval with OCR and Indexing techniques.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122059095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Red edge point detection for mulberry leaf
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122160
K. Bhosle, V. Musande
The red edge point (REP) is closely related to foliar chlorophyll concentration and content. Strong absorption by chlorophyll a and chlorophyll b causes the sharp change in the green-vegetation reflectance spectrum between 680 nm and 800 nm, and the greenness of the observed area can be recognized from the red edge point. Vegetation spectra obtained by remote sensing contain a red edge point, and the REP can also be observed in laboratory or field experiments, in which canopy spectral reflectance is obtained with an ASD FieldSpec Pro spectroradiometer that measures the range from 350 nm to 2500 nm with 3 nm spectral resolution and a 1 nm sampling step. These experimental results can be used to identify different crops, and unhealthy crops can be found from remote sensing data. The spectroradiometer gives the same reflectance information as remote sensing data, which makes it possible to locate the red edge point, and plant dryness (stress) can be detected using this technique. The work in this paper consists of estimating the stress of mulberry, cotton, and sugarcane plants by locating the REP with the peak-derivative, linear interpolation, and linear extrapolation methods, and finally comparing the results of all these methods.
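To make the linear interpolation method concrete, the sketch below estimates the REP with the commonly used four-band interpolation formula (bands near 670, 700, 740, and 780 nm); the exact band choices and the reflectance values in the example are assumptions, not the paper's data.

```python
# Four-band linear-interpolation estimate of the red edge point (REP), in nm.
def rep_linear_interpolation(reflectance):
    """reflectance: {wavelength_nm: reflectance value}."""
    r670, r700 = reflectance[670], reflectance[700]
    r740, r780 = reflectance[740], reflectance[780]
    r_edge = (r670 + r780) / 2.0              # reflectance at the inflection point
    return 700 + 40.0 * (r_edge - r700) / (r740 - r700)

# Example with made-up reflectance values of a healthy leaf:
spectrum = {670: 0.05, 700: 0.10, 740: 0.40, 780: 0.50}
print(round(rep_linear_interpolation(spectrum), 1))   # -> 723.3 (nm)
```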
Hashing based re-ranking of web images using query-specific semantic signatures
Pub Date: 2017-10-01 | DOI: 10.1109/ICISIM.2017.8122168
B. Dange, D. B. Kshirsagar
Nowadays online image search has become essential. In this paper, we extend an existing system for image re-ranking. The existing system is divided into offline and online parts. In the offline part, various semantic spaces are automatically learned for different query keywords, and semantic signatures are generated by mapping the image's visual features into the semantic space related to the image context. In the online stage, the semantic signatures of the retrieved images, computed in the semantic space of the query keyword, are compared with the semantic signature of the query image for re-ranking. We extend this framework by adding a hashing technique: because semantic signatures are low-dimensional, they can be compressed further, and hashing can improve their matching efficiency. We use locality-sensitive hashing (LSH), a family of nearest-neighbor algorithms that has already been applied in many practical scenarios to find similar items in d-dimensional space. We implement a recently proposed hashing-based algorithm to improve the online matching efficiency of the image re-ranking system, for the case where images are represented as points in a d-dimensional Euclidean space. The LSH algorithm produces output that is near-optimal within the class of nearest-neighbor algorithms. Online matching efficiency is improved compared with existing search methods; with the hashing technique, system performance improves by 38%.
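The sketch below shows a generic random-projection (sign-based) LSH scheme for bucketing semantic signatures so that, at query time, only signatures falling into the same bucket need to be compared; it illustrates the general idea, not the specific hashing algorithm adopted in the paper, and the signature dimension is a placeholder.

```python
# Generic sign-based LSH sketch for bucketing low-dimensional semantic signatures.
import numpy as np

class SignLSH:
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))   # random hyperplanes

    def hash(self, signature):
        """Map a signature to an n_bits binary code; similar vectors collide often."""
        bits = (self.planes @ signature) > 0
        return bits.astype(np.uint8).tobytes()

# Usage: bucket database signatures offline, then compare the query only against
# signatures in its own bucket online.
lsh = SignLSH(dim=25)
database = {i: np.random.rand(25) for i in range(1000)}
buckets = {}
for img_id, sig in database.items():
    buckets.setdefault(lsh.hash(sig), []).append(img_id)

query_signature = np.random.rand(25)
candidates = buckets.get(lsh.hash(query_signature), [])   # shortlist for exact matching
```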
{"title":"Hashing based re-ranking of web images using query-specific semantic signatures","authors":"B. Dange, D. B. Kshirsagar","doi":"10.1109/ICISIM.2017.8122168","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122168","url":null,"abstract":"Nowadays online image search become more essential. In this paper, we have extended existing system for image re-ranking is explained. The existing system is divided into offline and online parts. In offline part various semantic spaces are automatically learns for different query keywords. Image Semantic content as signatures are generated by mapping the image features i.e. visual features into its semantic spaces related to image context. In online stage, semantic signatures computed from the different semantic space mentioned by the query keyword are equated with semantic signatures of query image for image re-ranking. We are extended the current frame work by adding new technique of hashing. Semantic signatures are small in dimensions, it is possible to make it more compressed and with use of hashing technologies it further enhance their matching efficiency. In this we use locality sensitive hashing concept based on nearest neighbor algorithms. To find more similar item in d-dimensional space, these algorithms are already been applied in different practical scenarios. In this, we implemented a recently discovered hashing-based algorithm to improve the online matching effectiveness of image re-ranking system, for the case the images are represented as objects as points in the rf-dimensional Euclidean space. The locality sensitive hashing algorithm produces the output which is optimal near in the class of nearest neighbor algorithms. The online matching efficiency is improved by using the hashing technique as compare to existing search methods. With the use of hashing technique the system performance is improved by 38%.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131933398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}