
Latest publications — 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)

Exploring the emergence of a new smart city model: Case analysis of the Moroccan urbanization
El M'Hadi Hajar, Cherkaoui Abdelghani
More than half the world's population lives in cities. In Morocco, urbanization has more than doubled during the last fifty years, reaching 59.2% today. More than ever, this sociodemographic situation shapes the challenges the country must meet to ensure an optimal quality of life for Moroccan citizens. The idea of smart cities is examined with respect to this intent, including current urbanization models, development issues, and city planning in Morocco; the case of the proposed smart city of Casablanca, a flagship of proposals and current realities, is examined. An indigenous alternative following a proposed smart-village model is assessed for appropriateness.
{"title":"Exploring the emergence of a new smart city model: Case analysis of the Moroccan urbanization","authors":"El M'Hadi Hajar, Cherkaoui Abdelghani","doi":"10.1109/ICISIM.2017.8122188","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122188","url":null,"abstract":"More than half world population lives in cities. In Morocco, the urbanization has more than doubled during the last fifty years to reach 59.2 % today. More than ever, this sociodemographic situation conditions the challenges which the country has to raise to assure an optimal quality of life for the Moroccan citizens. The idea of smart-cities is examined with respect to the intent, including current urbanization models, development issues and city planning in Morocco; the case of the proposed smart-city of Casablanca, a flagship of proposals and current realities is looked at. An indigenous alternative following the model proposed of smart-villages instead is examined for appropriateness.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128995052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Hybrid technique for splice site prediction
Srabanti Maji, M. L. Garg
For eukaryotic organisms, the gene structure consists of introns, exons, promoters, start codons, stop codons, etc. The boundary between an intron and an exon is a splice site. Accurate algorithms are needed for splice-site identification, and the problem has received growing attention in recent years. The proposed system, Splice Hybrid, has a three-layer architecture: a 2nd-order Markov model is used in the initial stage for feature extraction; principal feature analysis is used in the intermediate stage for feature selection; and an SVM with an RBF kernel is used in the final layer. In comparison, the Splice Hybrid tool gives better performance.
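The first-stage feature extractor can be illustrated with a minimal 2nd-order Markov model over the DNA alphabet. This is a hedged sketch: the training corpus, the add-one smoothing, and the use of average log-likelihood as the feature are assumptions for illustration, and the paper's PCA and SVM stages are omitted.

```python
from collections import defaultdict
import math

def train_second_order_mm(sequences, alphabet="ACGT", pseudo=1.0):
    """Estimate P(x_t | x_{t-2} x_{t-1}) with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for i in range(2, len(seq)):
            counts[seq[i - 2:i]][seq[i]] += 1.0
    probs = {}
    for ctx in [a + b for a in alphabet for b in alphabet]:
        total = sum(counts[ctx].values()) + pseudo * len(alphabet)
        probs[ctx] = {s: (counts[ctx][s] + pseudo) / total for s in alphabet}
    return probs

def log_likelihood(seq, probs):
    """Average per-position log-probability of seq under the model."""
    ll = sum(math.log(probs[seq[i - 2:i]][seq[i]]) for i in range(2, len(seq)))
    return ll / max(len(seq) - 2, 1)
```

A model trained on sequences resembling true splice-site neighborhoods scores similar sequences higher than unrelated ones, and those scores can feed a downstream classifier.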
{"title":"Hybrid technique for splice site prediction","authors":"Srabanti Maji, M. L. Garg","doi":"10.1109/ICISIM.2017.8122137","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122137","url":null,"abstract":"The gene structure is consist of intron, exons, promoter, start codon, stop codon, etc. for the eukaryotic organism. The boundary between intron and exon is splice site. There is the need for accurate algorithms to be used in the splice sites identification and more attention was paid during past few years. This proposed system, Splice Hybrid have three layered architecture — in this layer2nd orderMM is used in the initial stage, i.e. feature extraction; intermediate stage for feature selection principal feature analysis is used; and in the final layer a SVM with RBF kernel is used. In comparison Splice Hybrid tool gives better performance.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"6 21","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114046289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimation of hemoglobin using AI technique
Suhas B. Dhoke, Anil R. Karwankar, V. Ratnaparkhe
Anemia is a condition in which the hemoglobin (Hb) content is lower than the normal value. In this project, the hemoglobin value is estimated using an ANN (Artificial Neural Network). A database of blood-sample images and their actual Hb values was collected from a local laboratory. Normalized red, green, and blue values of the image samples are fed to the ANN as input; Hb values calculated in the laboratory by the cyanmethemoglobin method are given as the target output. Comparing the ANN model's outputs with the actual Hb values, the accuracy of the network is calculated. This paper covers a comparison of the performance of different types of neural networks for the stipulated task. A strong relation is observed between the red, green, and blue color components of the image and the hemoglobin content of the blood.
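The input-feature step, normalized RGB, can be sketched as below. The sample data is synthetic and the linear least-squares fit is an illustrative placeholder for the paper's trained ANN and laboratory database, not its method.

```python
import numpy as np

def normalized_rgb(rgb):
    """Chromaticity coordinates: each channel divided by the channel sum."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.where(s == 0, 1, s)

# synthetic stand-ins for mean RGB per blood-sample image and lab Hb values
X = normalized_rgb([[180, 40, 50], [160, 60, 70], [140, 80, 90], [120, 100, 110]])
y = np.array([15.0, 13.0, 11.0, 9.0])  # g/dL, invented for illustration

# linear least-squares fit as a placeholder for the paper's ANN regressor
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
```

The normalization makes the features invariant to overall brightness, which is likely why the color ratios, rather than raw intensities, track hemoglobin content.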
{"title":"Estimation of hemoglobin using AI technique","authors":"Suhas B. Dhoke, Anil R. Karwankar, V. Ratnaparkhe","doi":"10.1109/ICISIM.2017.8122147","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122147","url":null,"abstract":"Anemia is a condition in which the hemoglobin (Hb) content becomes less than that of the normal value. In this project, hemoglobin value is estimated using ANN (Artificial Neural Network). Database of blood sample images and their actual Hb values is collected from a local laboratory. Red, green and blue normalized values of images' samples are fed to the ANN as input. Cyanemethemoglobin method based calculated values of Hb obtained from the laboratory are given as output. Comparing the outputs of ANN model results with actual Hb values, accuracy of the network is calculated. This paper covers comparison of performance of different types of Neural Networks for carrying out the stipulated task. It is observed that there is a strong relation between red, green and blue color components of the image with the hemoglobin content of the blood.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129014553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An empirical study of important keyword extraction techniques from documents
H. Hasan, Falguni Sanyal, Dipankar Chaki, Md. Haider Ali
Keyword extraction is an automated process that collects a set of terms illustrating an overview of a document; keywords identify the core information of a particular document. When analyzing huge numbers of documents to find relevant information, keyword extraction is a key approach, helping us grasp a document's substance even before reading it. In this paper, we give an overview of different approaches and algorithms that have been used for keyword extraction and compare them to identify the most promising approach for future work. We have studied various algorithms, such as support vector machines (SVM), conditional random fields (CRF), NP-chunking, n-grams, multiple linear regression, and logistic regression, for finding important keywords in a document. We found that SVM and CRF give better results, with CRF accuracy exceeding SVM on F1 score (the balance between precision and recall). On precision, SVM shows a better result than CRF; on recall, logistic regression shows the best result. We also identified two further families of approaches used in keyword extraction: statistical approaches and machine learning approaches. Statistical approaches show good results on statistical data; machine learning approaches, using training data, provide better results than the statistical ones. Examples of statistical approaches are Expectation-Maximization, K-Nearest Neighbor, and Bayesian methods; Extractor and GenEx are examples of machine learning approaches in the keyword extraction field. Apart from these two families, the semantic relation between words is another key feature in keyword extraction techniques.
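A minimal statistical baseline of the kind surveyed above, tf-idf term scoring, can be sketched as follows. The documents, tokenization, and top-k choice are all illustrative, not from the paper.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Score terms of one document by tf-idf against the corpus; return top_k."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                       # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    tf = Counter(tokenized[doc_index])   # term frequency within the document
    n = len(docs)
    scores = {t: (c / len(tokenized[doc_index])) * math.log(n / df[t])
              for t, c in tf.items()}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
```

Terms that appear across many documents get a low idf weight, so corpus-wide vocabulary is suppressed in favor of document-specific terms.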
{"title":"An empirical study of important keyword extraction techniques from documents","authors":"H. Hasan, Falguni Sanyal, Dipankar Chaki, Md. Haider Ali","doi":"10.1109/ICISIM.2017.8122154","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122154","url":null,"abstract":"Keyword extraction is an automated process that collects a set of terms, illustrating an overview of the document. The term is defined how the keyword identifies the core information of a particular document. Analyzing huge number of documents to find out the relevant information, keyword extraction will be the key approach. This approach will help us to understand the depth of it even before we read it. In this paper, we have given an overview of different approaches and algorithms that have been used in keyword extraction technique and compare them to find out the better approach to work in the future. We have studied various algorithms like support vector machine (SVM), conditional random fields (CRF), NP-chunk, n-grams, multiple linear regression, and logistic regression to find out important keywords in a document. We have figured out that SVM and CRF give better results where CRF accuracy is greater than SVM based on F1 score (The balance between precision and recall). According to precision, SVM shows a better result than CRF. But, in case of the recall, logit shows the greater result. Also, we have found out that, there are two more approaches that have been used in keyword extraction technique. One is statistical approach and another is machine learning approach. Statistical approaches show good result with statistical data. Machine learning approaches provide better result than the statistical approaches using training data. Some specimens of statistical approaches are Expectation-Maximization, K-Nearest Neighbor and Bayesian. Extractor and GenEx are the example of machine learning approaches in keyword extraction fields. 
Apart from these two approaches, semantic relation between words is another key feature in keyword extraction techniques.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129265647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Contractive autoencoder and SVM for recognition of handwritten Devanagari numerals
R. Kabra
Representation of data is very important in machine learning: the better the representation, the better the classifier's results. Contractive autoencoders are used to learn representations of data that are robust to small changes in the input. This paper uses a contractive autoencoder and an SVM classifier for handwritten Devanagari numeral recognition. The accuracy obtained using CAE+SVM is 96%.
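The contractive penalty that distinguishes a contractive autoencoder from a plain autoencoder has a closed form for a sigmoid encoder: the squared Frobenius norm of the Jacobian of the hidden layer with respect to the input. This sketch shows only that penalty term; the weights are random placeholders, and the reconstruction loss and training loop of an actual CAE are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))  # encoder weights: 8 inputs -> 4 hidden units
b = np.zeros(4)

def encode(x):
    """Sigmoid hidden representation h(x)."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

def contractive_penalty(x):
    """||dh/dx||_F^2 for a sigmoid encoder: sum_j (h_j(1-h_j))^2 * sum_i W_ij^2."""
    h = encode(x)
    dh = (h * (1.0 - h)) ** 2
    return float(dh @ (W ** 2).sum(axis=0))
```

Adding this term to the reconstruction loss pushes the encoder's Jacobian toward zero, so small input perturbations barely move the learned representation, which is the robustness property the abstract relies on.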
{"title":"Contractive autoencoder and SVM for recognition of handwritten Devanagari numerals","authors":"R. Kabra","doi":"10.1109/ICISIM.2017.8122142","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122142","url":null,"abstract":"Representation of data is very important in case of machine learning. Better the representation, the classifiers will give better results. Contractive autoencoders are used to learn the representation of data which are robust to small changes in the input. This paper uses contractive autoencoder and SVM classifier for handwritten Devanagari numerals recognition. The accuracy obtained using CAE+SVM is 96 %.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115549161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Embedded home surveillance system with pyroelectric infrared sensor using GSM
R. R. Ragade
Surveillance is one of the most important security systems in today's life, as it protects the home from theft, burglary, and murder, which have become routine in big cities. Such a system is useful in many places, such as offices, industrial sites, storehouses, bank locker rooms, ATMs, etc. This embedded home security system is designed using smart sensors, namely a pyroelectric infrared (PIR) sensor and an ultrasonic sensor, to detect an intruder in the home. The ultrasonic sensor detects the movement of objects, and the PIR sensor detects changes in human body temperature via infrared radiation. These sensors are built around a microcontroller. When the system detects an unauthorized person or intruder, it triggers a buzzer and sends an SMS. The MCU (microcontroller unit) then sends the sensor signal to the embedded system to capture an image with a web camera.
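The trigger logic described above (buzzer, SMS, then camera capture when both sensors agree) can be sketched as a pure decision function. The AND-style fusion rule and the 150 cm range threshold are assumptions for illustration, not values from the paper.

```python
def alarm_decision(pir_triggered, ultrasonic_cm, armed, threshold_cm=150):
    """Return the actions to fire for one sensor reading.

    Hypothetical rule: only when the system is armed AND the PIR sensor
    sees a heat signature AND the ultrasonic range confirms a nearby
    object do we sound the buzzer, send the SMS, and capture an image.
    """
    if not armed:
        return []
    if pir_triggered and ultrasonic_cm < threshold_cm:
        return ["buzzer", "sms", "capture_image"]
    return []
```

Requiring both sensors to agree is one way to cut false alarms from pets or curtains moving in front of a single sensor.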
{"title":"Embedded home surveillance system with pyroelectric infrared sensor using GSM","authors":"R. R. Ragade","doi":"10.1109/ICISIM.2017.8122192","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122192","url":null,"abstract":"Surveillance is one of the most important security system in today's life as it protects home from theft, burglaries and murders, as become routine in big cities. This system is very useful as it is used in many places like offices, industrial, storehouse or bank locker room, ATM etc. This embedded based home security system designed by use of smart sensors like pyroelectric infrared sensor (PIR), ultrasonic sensor to detect an intruder in home. The ultrasonic sensor is used to detect movement of objects and PIR function is to detect changes in temperature of human in infrared radiation. These sensors are built around microcontroller. When the system detects is there any unauthorized person or intruder is present, System triggers a buzzer and sends SMS. After this MCU (microcontroller unit) sends sensor signal to embedded system, to capture an image by web camera.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114775939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Development of an efficient secure biometric system by using iris, fingerprint, face
R. Telgad, Almas M. N. Siddiqui, Savita A. Lothe, P. Deshmukh, Gajanan Jadhao
In this research paper, three biometric characteristics are used, i.e., fingerprint, face, and iris, fused at the score level. For fingerprint images, two methods are used: minutiae extraction and a Gabor filter approach. For the iris recognition system, Gabor wavelets are used for feature selection. For the face biometric system, PCA is used for feature selection. The match score of every trait is calculated, and the match and non-match results are combined by sum-score-level fusion, from which the recognition decision is made. The system is tested on standard datasets and the KVK dataset. On the KVK dataset it achieves 99.7% accuracy with a FAR of 0.02% and an FRR of 0.1%; on the FVC 2004 and MMU datasets it achieves 99.8% with a FAR of 0.11% and an FRR of 0.09%.
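The sum-score-level fusion step can be sketched as min-max normalization of each matcher's score followed by a thresholded sum. The score ranges and the 1.5 threshold here are hypothetical, not the paper's tuned values.

```python
def normalize(score, lo, hi):
    """Min-max normalize a raw matcher score to [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse_and_decide(scores, ranges, threshold=1.5):
    """Sum-rule fusion: add the normalized per-modality scores, then threshold.

    scores: raw (fingerprint, face, iris) matcher scores
    ranges: (lo, hi) bounds of each matcher's score scale
    """
    fused = sum(normalize(s, lo, hi) for s, (lo, hi) in zip(scores, ranges))
    return fused, ("accept" if fused >= threshold else "reject")
```

Normalizing first matters because the three matchers produce scores on different scales; without it, the modality with the largest numeric range would dominate the sum.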
{"title":"Development of an efficient secure biometric system by using iris, fingerprint, face","authors":"R. Telgad, Almas M. N. Siddiqui, Savita A. Lothe, P. Deshmukh, Gajanan Jadhao","doi":"10.1109/ICISIM.2017.8122156","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122156","url":null,"abstract":"In this research paper three biometric characteristics are used i.e. Fingerprint, Face, Iris at score level of Fusion. For finger print images two methods are used i.e. Minutiae Extraction and Gabor filter approach. For Iris recognition system Gabor wavelet is used for feature selection. For Face biometric system P.C.A. is used for feature selection. The match count of every trait is calculated. Then the generated result of match and non match is utilized for the sum score level fusion. Then decision is find out for persons recognition. The system is tested on std. Dataset and KVK data set. On KVK dataset it generates an the results as 99.7 % with FAR of 0.02% and FRR of 0.1% and for FVC 2004 dataset and MMU dataset it gives the result as 99.8 % with FAR of 0.11% and FRR of 0.09%","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134003548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Twitter sentiment classification using Stanford NLP
Shital Anil Phand, Jeevan Anil Phand
Twitter is a microblogging site where users review or tweet their opinions toward a service provider's Twitter page in words, and it is useful to analyze the sentiment in them. Analysis means finding whether a user's or customer's stance is positive, negative, neutral, or in between (positive-neutral or negative-neutral), and representing it. In such a system or tool, tweets are fetched from Twitter regarding shopping websites or any other Twitter pages, such as businesses, mobile brands, clothing brands, or live events like sports matches and elections, and their polarity is obtained. These results help the service provider learn the customers' view of their products.
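Stanford CoreNLP's sentiment annotator is a Java tool, but the five-way polarity scale the abstract describes can be illustrated with a toy lexicon scorer in Python. The word lists and score cut-offs below are invented for illustration and are not the paper's method.

```python
POS = {"good", "great", "love", "excellent", "fast"}
NEG = {"bad", "slow", "hate", "terrible", "broken"}

def classify_tweet(text):
    """Map lexicon hits to the five-way polarity scale from the abstract."""
    toks = text.lower().split()
    score = sum(t in POS for t in toks) - sum(t in NEG for t in toks)
    if score >= 2:
        return "positive"
    if score == 1:
        return "positive-neutral"
    if score == 0:
        return "neutral"
    if score == -1:
        return "negative-neutral"
    return "negative"
```

A real pipeline would replace the lexicon lookup with a trained model such as CoreNLP's recursive sentiment classifier, but the five-bucket mapping from a scalar score is the same shape.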
{"title":"Twitter sentiment classification using stanford NLP","authors":"Shital Anil Phand, Jeevan Anil Phand","doi":"10.1109/ICISIM.2017.8122138","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122138","url":null,"abstract":"Twitter is a micro blogging site where users review or tweet their approach i.e., opinion towards the service providers twitter page in words and it is useful to analyze the sentiments from it. Analyze means finding approach of users or customers where it is positive, negative, neutral, or in between positive-neutral or in between negative-neutral and represent it. In such a system or tool tweets are fetch from twitter regarding shopping websites, or any other twitter pages like some business, mobile brands, cloth brands, live events like sport match, election etc. get the polarity of it. These results will help the service provider to find out about the customers view toward their products.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126490605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Land use land cover change detection by different supervised classifiers on LISS-III temporal datasets
Ajay D. Nagne, Rajesh K. Dhumal, Amol D. Vibhute, S. Gaikwad, K. Kale, S. Mehrotra
The objective of this paper is to report a study assessing and evaluating changes in Land Use Land Cover (LULC) in the region of Aurangabad Municipal Corporation (AMC) between 2009 and 2015, using multispectral images acquired from the remotely sensed Linear Imaging Self-Scanning Sensor-III (LISS-III). The area was categorized into six types, viz. Residential (R), Vegetation (V), Water Body (W), Rock (Ro), Barren Land (B), and Fallow Land (F). Four different supervised classifiers were used, and the Maximum Likelihood classifier was found to provide satisfactory and reliable results. The overall accuracy of the classifier was 83% and 93%, with Kappa coefficients of 0.78 and 0.90, for 2009 and 2015 respectively. The residential area increased by 1.35%, whereas the areas of Water Body, Vegetation, and Fallow Land decreased by 0.83%, 2.59%, and 18.43%, respectively. The Rock area remained the same, as it is reserved. The area covered by Barren Land increased by 20.44%. The results are significant for the planning and management of AMC.
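The Kappa coefficient reported above is computed from a classifier's confusion matrix as the observed agreement corrected for chance agreement. A minimal sketch (the matrix values in the test are synthetic, not the paper's):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: classified)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                            # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)
```

Kappa of 1 means perfect agreement with the reference data; 0 means no better than chance, which is why it complements the raw overall-accuracy figure.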
{"title":"Land use land cover change detection by different supervised classifiers on LISS-III temporal datasets","authors":"Ajay D. Nagne, Rajesh K. Dhumal, Amol D. Vibhute, S. Gaikwad, K. Kale, S. Mehrotra","doi":"10.1109/ICISIM.2017.8122150","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122150","url":null,"abstract":"The objective of this paper is to report the study carried out to assess and evaluate changes in Land-Use Land-Cover(LULC) at the region of Aurangabad Municipal Corporation (AMC) for the year 2009 and 2015 using multispectral images acquired from remotely sensed Linear-Imaging-Self-Scanning-Sensor-HI(LISS-HI). The area was categorized into six types, viz. Residential(R), Vegetation(V), Water_Body(W), Rock(Ro), Barren Land(B) and Fallow_Land(F). Four different types of supervised classifiers have been used and it was found the Maximum Likelihood classifier has provided satisfactory and reliable results. The overall accuracy with the classifier was found to be 83% and 93% with Kappa Coefficient 0.78 and 0.90 for the year 2009 and 2015, respectively. The residential area was found to be increased by 1.35% whereas area related to Water Body, Vegetation and Fallow Land have decreased by 0.83%, 2.59% and 18.43% respectively. The areas for Rock remain same, as it was reserved. The area covered by Barren Land increased by 20.44%. 
The results are of significant for planning and management of AMC.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115613671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Adaptive thresholding to robust image binarization for degraded document images
Prashant Devidas Ingle, Parminder Kaur
Owing to the high intra/inter variation between the foreground and background text of various document images, text segmentation from poorly degraded document images is a difficult job. This paper presents a document image binarization method based on adaptive image contrast, an integration of the local image gradient and the local image contrast that is tolerant to the background and text variation generated by various document degradations. First, an adaptive contrast map is constructed for the input degraded document image. This contrast map is then binarized and combined with Canny's edge map to determine the text-stroke edge pixels. Finally, the document text is segmented by a local threshold defined by the intensities of the detected text-stroke edge pixels within a local window. The proposed method is straightforward and robust, and it requires minimal parameter tuning. Experiments on the DIBCO 2011 dataset show that the proposed method achieves higher performance than state-of-the-art methods.
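The local-contrast idea behind the adaptive contrast map can be sketched on a grayscale array. The 3x3 window, the global-mean thresholds, and the simple AND rule below are simplifications for illustration; the paper's actual method also integrates the local gradient, Canny edges, and a local (not global) threshold.

```python
import numpy as np

def local_contrast(img, eps=1e-6):
    """Per-pixel contrast (max-min)/(max+min) over a 3x3 neighbourhood."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            out[i, j] = (win.max() - win.min()) / (win.max() + win.min() + eps)
    return out

def binarize(img):
    """Mark dark pixels inside high-contrast regions as text strokes."""
    c = local_contrast(img)
    return (c > c.mean()) & (img < img.mean())
```

The contrast term fires only near stroke edges, so uniform stains and gradual background shading produce low values and survive binarization as background, which is the degradation-tolerance the abstract claims.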
{"title":"Adaptive thresholding to robust image binarization for degraded document images","authors":"Prashant Devidas Ingle, Parminder Kaur","doi":"10.1109/ICISIM.2017.8122172","DOIUrl":"https://doi.org/10.1109/ICISIM.2017.8122172","url":null,"abstract":"Owing to the elevated intra/inter variation among the foreground and background text of various document images, the text segmentation from the poorly degraded document images is the difficult job. This paper presents the document image binarization method by adaptive image contrast which is the integration of the local image gradient and the local image contrast which is lenient to background and text variation generated by various document degradations. Initially, an adaptive contrast map is constructed for the input degraded document image by the proposed document image binarization method. Then, the binarization is performed on this adaptive contrast map and the binarized contrast map is integrated with the Canny's edge map for determining the text stroke edge pixels. After that, depends on the local threshold which is defined by the identified text stroke edge pixels' intensities in the local window, the document text is divided. The proposed method is straight forward, vigorous, and it requires least amount of parameter tuning. 
The experimentation is performed on DIBCO 2011 dataset and the results of the experimentation show that the proposed method achieved high performance than the state-of-the-art methods.","PeriodicalId":139000,"journal":{"name":"2017 1st International Conference on Intelligent Systems and Information Management (ICISIM)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131312521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10