
Latest Publications from Koomesh

A Comparative Analysis on Image Fusion Algorithms based on Compressive Sensing
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653701
M. Gayathri Devi, S. Manjula
This paper presents a comparative analysis of spatial- and transform-domain fusion techniques under the Compressive Sensing (Compressive Sampling) principle. Compressive measurements of two source images are obtained using a star-shaped sampling pattern, and the measurements are fused. The output image is reconstructed from 25% of the samples using the Minimum Total Variation method with equality constraints and reduced computational time. Different fusion techniques under Compressive Sensing are then performed and compared. Multi-focus and multi-modal images are used for simulation, and no prior knowledge of the source images is required for reconstruction. Based on fusion evaluation metrics with and without a reference image, the study concludes that simple averaging and principal component analysis perform well in the spatial domain, while DCTav and the Laplacian Pyramid perform well in the transform domain.
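The measurement-domain fusion step can be illustrated with a small sketch. The snippet below uses a dense random Gaussian measurement matrix instead of the paper's star-shaped sampling pattern, fuses the two measurement vectors by simple averaging, and substitutes a least-squares solve for the Minimum Total Variation reconstruction; every name and constant is illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy 16x16 "source images" standing in for the multi-focus inputs.
img_a = rng.random((16, 16))
img_b = rng.random((16, 16))

n = img_a.size                      # 256 pixels
m = n // 4                          # keep 25% of samples, as in the abstract

# Shared random Gaussian measurement matrix (a stand-in for the
# star-shaped sampling pattern used in the paper).
phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Compressive measurements of each source image.
y_a = phi @ img_a.ravel()
y_b = phi @ img_b.ravel()

# Fuse in the measurement domain (simple averaging, one of the
# fusion rules compared in the paper).
y_fused = 0.5 * (y_a + y_b)

# Least-squares proxy for reconstruction; the paper instead solves a
# Minimum Total Variation problem with equality constraints.
x_hat, *_ = np.linalg.lstsq(phi, y_fused, rcond=None)
fused_image = x_hat.reshape(img_a.shape)

print(y_fused.shape, fused_image.shape)   # (64,) (16, 16)
```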
Citations: 0
Extraction of Visual Features from Video Sequences for Better Visual Analysis
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653686
P. Rajarapollu, V. Mankar
Video has basic and non-basic features: basic features include, for example, color, shape, and size, while non-basic features include the orientation of an image. A video sequence is a series of shots/frames on a subject that are edited together to tell a story. Visual features describe the details of image content and are used in various applications such as visual search, object recognition, image registration, and object tracking. Many visual analysis tasks require the features to be transmitted, which calls for different coding algorithms to attain a target level of efficiency. Here, an effort has been made to implement a coding algorithm for local feature extraction, namely SIFT (Scale Invariant Feature Transform). The first stage uses the SIFT algorithm to find the 'points of interest' in an image. A Kalman Filter is then applied for motion-based single or multiple object detection and tracking.
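A minimal sketch of the tracking stage mentioned at the end of the abstract: a constant-velocity Kalman filter smoothing noisy 2D detections of a single object. It uses plain NumPy and synthetic detections rather than an actual SIFT detector, and all matrices and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0

# State: [x, y, vx, vy]; constant-velocity motion model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we only observe position
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.5 * np.eye(2)                         # measurement noise (assumed)

x = np.zeros(4)           # initial state estimate
P = np.eye(4)             # initial covariance

# Synthetic noisy detections of an object moving diagonally; in the paper
# these would come from SIFT-based points of interest.
truth = np.array([[t, 0.5 * t] for t in range(20)], dtype=float)
detections = truth + rng.normal(scale=0.7, size=truth.shape)

for z in detections:
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("final estimated position:", x[:2], "true:", truth[-1])
```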
Citations: 0
Facial Emotion Recognition using DWT based Similarity and Difference features
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653742
S. Poorna, S. Anjana, P. Varma, Anjana Sajeev, K. Arya, S. Renjith, G. Nair
Recognizing emotions from facial images has become one of the major fields in affective computing, since it has widespread applications in robotics, medicine, surveillance, defense, e-learning, gaming, customer service, etc. The study used the Ekman model with seven basic emotions (anger, happiness, disgust, sadness, fear, surprise and neutral) acquired from subjects of Indian ethnicity. The acquired database, Amritaemo, consists of 700 still images of Indian male and female subjects in the seven emotions. The images were cropped manually to obtain the region of analysis, i.e. the face, and converted to grayscale for further processing. After resizing, the preprocessing steps of histogram equalization and median filtering were applied. The Discrete Wavelet Transform (DWT) was then applied to the preprocessed images, and the 2D Haar wavelet coefficients (WC) were used to obtain the feature parameters. The maximum 2D correlation of the mean value of one specific emotion against all others was taken as the similarity feature, and the squared difference between the emotional and neutral images in the transformed domain was taken as the difference feature. The supervised learning methods K-Nearest Neighbor (KNN) and Artificial Neural Networks (ANN) were used to classify these features separately as well as together. Performance was evaluated using accuracy, sensitivity and specificity.
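As a rough sketch of the two features described above, the snippet below computes Haar DWT approximation coefficients with PyWavelets, then derives a correlation-based similarity feature against a class-mean coefficient map and a squared-difference feature against a neutral image. The random arrays stand in for the cropped, resized grayscale face images, and the exact feature definitions are simplified assumptions rather than the paper's formulas.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)

def haar_coeffs(image):
    """Level-1 2D Haar DWT; return the approximation sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    return cA

# Stand-ins for preprocessed 64x64 grayscale face images.
test_face    = rng.random((64, 64))
neutral_face = rng.random((64, 64))
# Mean Haar coefficients of one emotion class (e.g. "happy").
class_mean   = haar_coeffs(rng.random((64, 64)))

wc_test    = haar_coeffs(test_face)
wc_neutral = haar_coeffs(neutral_face)

# Similarity feature: 2D correlation of the test coefficients with the
# class-mean coefficients (the paper takes the maximum over classes).
similarity = np.corrcoef(wc_test.ravel(), class_mean.ravel())[0, 1]

# Difference feature: squared difference between emotional and neutral
# images in the transform domain.
difference = np.sum((wc_test - wc_neutral) ** 2)

print(f"similarity={similarity:.3f}, difference={difference:.1f}")
```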
Citations: 4
MIPS reduction using ORD-MIN function in COBOL
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653600
Niranjan R Pandeshwar, Preeti Jagadev
Splitting data with the DFSORT utility contributes heavily to MIPS, an acronym for "Million Instructions Per Second", which has come to measure processing power and CPU resource consumption. This study investigates and provides an optimized way of performing a balanced split of data, aiming to reduce CPU consumption by decreasing the number of instructions needed to perform the same task. Three methods are compared for this purpose: the SUM FIELDS method, the EASYTRIEVE method, and the ORD-MIN function method. The three data-split methods were compared on a large amount of input data, and the results show that the ORD-MIN function aids MIPS reduction.
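The balanced-splitting idea can be illustrated outside the mainframe context. The Python sketch below does a round-robin split of records into N parts in a single pass, which is the spirit of splitting without an extra per-record pass; it is not DFSORT or ORD-MIN syntax, and the record layout is an assumption made for the example.

```python
def balanced_split(records, n_parts):
    """Distribute records over n_parts outputs in round-robin order,
    so each part ends up with an (almost) equal number of records."""
    parts = [[] for _ in range(n_parts)]
    for i, rec in enumerate(records):
        parts[i % n_parts].append(rec)
    return parts

# Stand-in for an input dataset of fixed-length records.
records = [f"REC{i:05d}" for i in range(10)]

for k, part in enumerate(balanced_split(records, 3), start=1):
    print(f"OUTPUT{k}: {part}")
# OUTPUT1 gets 4 records, OUTPUT2 and OUTPUT3 get 3 each.
```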
Citations: 0
Secure Data sharing in Distributed Cloud Environment
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653722
P. M. Salunke, Vishal. V. Mahale
In this paper we propose a novel technique for storing data authentically in the cloud. Because of its elastic storage capacity, the cloud is the most popular place to store data, but the cloud provider may be curious about the data stored on it. Security is therefore a must, both to protect stored data from malicious attackers and to prevent leakage. It is often also necessary to hide the identity of the user or data owner from the cloud. The technique used in this paper provides anonymous authentication for data stored in the cloud: the system conceals users' identities before their data is stored. The main aim is that only a valid user is able to encrypt stored data, with access governed by its access-control features. The underlying idea is to protect stored cloud data as well as its creation, modification, and so on. In this approach, the cloud knows the access policy of every stored item, and several KDCs are used for key management. Expensive operations are carried out more easily in a decentralized cloud system than with a centralized approach. The proposed system is implemented and tested in Java with a MySQL database.
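The central idea above, keeping stored cloud data unreadable to anyone without the right key, can be sketched at its simplest as client-side symmetric encryption before upload. The snippet below uses the `cryptography` package's Fernet primitive; it does not implement the paper's anonymous authentication, access policies, or multi-KDC key management, and the upload function is a hypothetical placeholder rather than any real cloud API.

```python
from cryptography.fernet import Fernet

# Key generated and held on the client side (in the paper, key material
# is managed by several KDCs rather than a single party).
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"patient record 42: confidential"

# Encrypt locally, so the cloud only ever stores ciphertext.
ciphertext = fernet.encrypt(plaintext)

def upload_to_cloud(object_name, blob):
    """Hypothetical stand-in for a real cloud storage call."""
    print(f"uploading {object_name}: {len(blob)} bytes of ciphertext")

upload_to_cloud("record-42", ciphertext)

# Only a user holding the key can recover the plaintext.
assert fernet.decrypt(ciphertext) == plaintext
```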
Citations: 3
A High Speed Flash Analog to Digital Converter
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653782
K. Kumar, K. Krishna, K. Raghavendra, K. Harish
This paper presents the design and implementation of a 4-bit Flash Analog to Digital Converter (ADC) in 180 nm digital CMOS technology. The proposed flash ADC uses a resistive ladder network, high-speed comparators, and encoder logic to convert the continuous input signal into an output binary code. It employs a novel encoder realized with pseudo-dynamic CMOS logic, implemented with fewer transistors than previous techniques. Without requiring time interleaving, the proposed ADC can operate at its full sampling rate. The designed flash ADC consumes 0.686 mW from a 1.8 V supply. The operating speed of the circuit is 10 GHz, and the simulated integral non-linearity (INL) and differential non-linearity (DNL) are within 0.1/-0.02 LSB and 0.33/-0.12 LSB, respectively. It occupies an effective area of 0.32 mm².
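A flash ADC's signal path (a reference ladder, a bank of comparators producing a thermometer code, and an encoder producing the binary output) can be modeled behaviorally. The sketch below is a Python model of an ideal 4-bit flash converter; it says nothing about the pseudo-dynamic CMOS encoder or the transistor-level design, and the 1.8 V full-scale range is an assumption borrowed from the supply voltage.

```python
import numpy as np

VFS = 1.8          # full-scale voltage (assumed equal to the supply)
BITS = 4
LEVELS = 2 ** BITS

# Resistive-ladder reference voltages: 15 thresholds for a 4-bit flash ADC.
thresholds = VFS * np.arange(1, LEVELS) / LEVELS

def flash_adc(vin):
    """Ideal 4-bit flash conversion: comparators -> thermometer -> binary."""
    thermometer = (vin > thresholds).astype(int)   # one comparator per threshold
    code = int(thermometer.sum())                  # number of comparators tripped
    return code, format(code, "04b")

for v in (0.05, 0.45, 0.90, 1.35, 1.79):
    code, binary = flash_adc(v)
    print(f"vin={v:4.2f} V -> code {code:2d} -> {binary}")
```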
Citations: 1
Effective data lookup scheme for Cluster based Data Sharing in MANET
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653763
A. Misra, D. S. Yadav
In this paper we propose a novel cluster-based data sharing scheme that combines the advantages of clustering and proactive routing. Our clustering algorithm divides the MANET into clusters, designating cluster heads, cluster gateways, and cluster members during the cluster-formation phase. We restrict the use of the DSDV routing protocol to within the cluster, thereby reducing control overhead and routing-table size. Cluster maintenance and file-index distribution within the cluster are performed by adding extra fields to the DSDV control messages, and routing is done using the DSDV routing table. Unlike existing data lookup schemes, in which a lookup query is flooded across the entire MANET or the cluster backbone, here it is forwarded to the nearest gateway node, completely eliminating flooding.
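A toy illustration of the lookup rule described above: forward a file query to the nearest gateway node instead of flooding the whole network. The topology, node names, and file index below are made up for illustration; in the actual scheme the index is distributed via DSDV control messages and routing uses the DSDV routing table.

```python
# Per-cluster view kept by an ordinary member node (illustrative data).
routing_table = {          # next hop and hop count toward each known node
    "gateway-1": ("node-B", 2),
    "gateway-2": ("node-C", 4),
    "cluster-head": ("node-B", 3),
}
gateway_file_index = {     # file index that gateways learn from DSDV updates
    "gateway-1": {"song.mp3": "node-7"},
    "gateway-2": {"map.pdf": "node-12"},
}

def lookup(filename):
    """Send the query to the nearest gateway only (no flooding)."""
    gateways = [(hops, gw) for gw, (_, hops) in routing_table.items()
                if gw.startswith("gateway")]
    _, nearest = min(gateways)
    holder = gateway_file_index[nearest].get(filename)
    return nearest, holder

print(lookup("song.mp3"))   # ('gateway-1', 'node-7')
print(lookup("map.pdf"))    # ('gateway-1', None) -> query would be forwarded on
```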
Citations: 0
Tuning the Parameters of Weighted ELM using IWO and BAT Algorithm to Improve the Classification Performance
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653748
S. Priya, Dr. R. Manavalan
Weighted Extreme Learning Machine (WELM) is a machine learning algorithm with extremely fast learning and good generalization capabilities. WELM handles imbalanced datasets efficiently by assigning less weight to the majority class and more weight to the minority class. In general, the classification performance of WELM depends strongly on parameters such as the input weight matrix, the bias values, the number of hidden neurons, and the weights associated with the majority and minority classes. With arbitrary selection of the hidden biases and input weights, WELM produces inconsistent results. In this paper, hybridization of WELM with Invasive Weed Optimization and of WELM with the BAT algorithm is proposed to tune WELM parameters such as the initial weights and hidden bias values. The proposed methodologies are called WELM-IWO and WELM-BAT. They are evaluated on three real-world medical diagnosis problems: hepatitis, diabetes, and thyroid disease. The experimental results show that one of the proposed methods, WELM-IWO, performs well on all three datasets.
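The core of a Weighted ELM can be written in a few lines of NumPy: random input weights and biases, a sigmoid hidden layer, and a class-weighted regularized least-squares solution for the output weights. The sketch below shows that core on synthetic imbalanced data; the IWO/BAT tuning of the random parameters proposed in the paper is replaced by a fixed seed, and the inverse-class-frequency weighting is one common choice, not necessarily the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic imbalanced binary problem: 90 majority vs 10 minority samples.
X = np.vstack([rng.normal(0, 1, (90, 5)), rng.normal(1.5, 1, (10, 5))])
y = np.hstack([np.zeros(90), np.ones(10)])
T = np.column_stack([1 - y, y])          # one-hot targets

L = 40                                   # number of hidden neurons
W = rng.standard_normal((X.shape[1], L)) # random input weights (tuned by IWO/BAT in the paper)
b = rng.standard_normal(L)               # random hidden biases

H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output

# Per-sample weights: inverse class frequency, so the minority class counts more.
counts = np.bincount(y.astype(int))
w = 1.0 / counts[y.astype(int)]
Wd = np.diag(w)

C = 1.0                                  # regularization constant (assumed)
# Weighted regularized least squares for the output weights beta.
beta = np.linalg.solve(H.T @ Wd @ H + np.eye(L) / C, H.T @ Wd @ T)

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```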
Citations: 5
A Hybrid Approach for Detection and Recognition of Traffic Text Sign using MSER and OCR
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653761
Richa Jain, Prof. Deepa Gianchandani
Detection and recognition of text in traffic sign images or natural images has applications in computer vision systems such as registration number plate identification, automatic traffic sign localization, image retrieval, and assistance for visually impaired people. This paper presents a hybrid approach based on MSER and OCR that employs a noise removal step, namely the Lucy-Richardson algorithm. After noise removal, the text region detection stage begins: a contrast-enhanced, edge-enhanced MSER region detection scheme is used, after which morphological segmentation is applied to segment the text regions in the image. The recognition stage then follows, in which text candidates are filtered geometrically using properties such as aspect ratio, eccentricity, solidity, and so on. A bounding box method is then used to identify character candidates and form words from them. Finally, an Optical Character Recognition (OCR) tool is used to extract the text from the image. The presented framework beats state-of-the-art techniques on the traffic text sign dataset obtained from Jaguar Land Rover Research.
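A skeletal version of the pipeline, MSER region detection followed by OCR, might look like the snippet below. It relies on OpenCV's MSER implementation and the `pytesseract` wrapper (which needs a local Tesseract install), skips the Lucy-Richardson deblurring, geometric filtering, and bounding-box word grouping described above, and builds its own synthetic test image, so it is an assumption-laden sketch rather than the authors' pipeline.

```python
import cv2
import numpy as np
import pytesseract

# Synthetic "traffic text sign": white background with dark text.
sign = np.full((120, 420), 255, dtype=np.uint8)
cv2.putText(sign, "REDUCE SPEED NOW", (10, 70),
            cv2.FONT_HERSHEY_SIMPLEX, 1.2, 0, 3)

# Stage 1: MSER regions as coarse text-candidate locations.
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(sign)
print(f"MSER found {len(regions)} candidate regions")

# (The paper would now filter candidates on aspect ratio, eccentricity,
# solidity, etc., and group characters into words via bounding boxes.)

# Stage 2: OCR on the image; Tesseract must be installed on the system.
text = pytesseract.image_to_string(sign)
print("OCR output:", text.strip())
```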
Citations: 7
INSTANTANEOUS FEEDBACK PEDOMETER WITH EMERGENCY GPS TRACKER
Q3 Medicine Pub Date : 2018-08-01 DOI: 10.1109/I-SMAC.2018.8653718
Manish Nair, Samineni Rohith Kumar, N. A, Nihal Mohan, Anudev J
Physical activity is closely related to one's health status. People have long maintained good health and improved blood circulation, and often come across their best ideas, while walking; movement, meditation, the rhythm of the heartbeat and of footsteps have been a primeval way of connecting with one's deeper self. Yet this seldom conveys the importance of walking and of tracking our health parameters. This paper describes the development of a prototype wearable fitness band with advanced pedometer features. Pedometers are usually designed only to count the user's steps or estimate the calories burnt; the prototype presented here extends that. Along with the features mentioned above, it also calculates the user's walking speed and gives vibration feedback if the current speed falls below a threshold. The system further includes a heart-rate monitoring system along with GPS and Bluetooth modules. An Android app was developed using MIT App Inventor, and the Bluetooth module is paired with the user's smartphone. If the user experiences a sudden cardiac emergency, the app sends an SMS alert and places a call to relatives; the text message contains the latitude and longitude of the user's location. This band is not just a fitness tracker but also an effort to contribute towards a humanitarian cause.
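The speed-feedback logic of the band can be sketched independently of the hardware: count steps as peaks in the accelerometer magnitude, estimate walking speed from step count and an assumed stride length, and trigger a vibration alert when the speed drops below a threshold. All the constants below (sampling rate, stride length, threshold) are illustrative assumptions, not the prototype's values.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 50.0              # accelerometer sampling rate in Hz (assumed)
STRIDE_M = 0.7         # assumed stride length in metres
SPEED_THRESHOLD = 1.0  # m/s below which the band vibrates (assumed)

# Synthetic 10 s accelerometer magnitude: ~1.5 steps per second plus noise.
t = np.arange(0, 10, 1 / FS)
accel_mag = 1.0 + 0.5 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)

# Step detection: peaks above a magnitude threshold, at most ~3 steps/s.
peaks, _ = find_peaks(accel_mag, height=1.2, distance=FS / 3)
steps = len(peaks)

speed = steps * STRIDE_M / t[-1]          # metres per second over the window
print(f"steps={steps}, estimated speed={speed:.2f} m/s")

if speed < SPEED_THRESHOLD:
    print("VIBRATE: walking speed below threshold")   # haptic feedback cue
```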
Citations: 5