
2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC): Latest Publications

Agent Technology for Data Analytics of Gene Expression Data: A Literature Review
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-000189
K. Santhosh, S. Ajitha
Analytics of gene expression data is a long-standing research area. Analyzing such data requires an enormous amount of work and a large set of algorithms. Agent computing offers a way to deal with these complex systems and opens up many opportunities for building data mining systems in different ways. To create predictive models, there is therefore a strong need for intelligent, autonomous software agents that can extract useful information from large datasets of raw data. Predictive analytics models built from these datasets can then be used for various applications such as security and forecasting. This paper gives an overall account of multi-agent systems in the analytics of gene expression data, in terms of the characteristics, adaptability, reliability and robotics of agents. Analytics of gene expression data is an emerging research field: a large body of methodology and algorithms already exists, but the application of agent technology to gene expression data is still at an infant stage. The aim of this review is therefore to integrate agent technology into gene expression data analytics.
Citations: 0
Simulation Study of Cascade H-bridge Multilevel Inverter 7-Level Inverter by SHE Technique
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-000124
P. Bute, S. K. Mittal
In this paper, a fuzzy logic approach is implemented as a switching technique, eliminating the traditional logic-gate design. A pulse generator designed with fuzzy logic acts both as a pulse generator and as a look-up table. The input is the modulation index; with well-tuned membership functions and the associated fuzzy controller rules, pulses can be produced directly without any extra block. The technique is implemented on a symmetric cascaded multilevel H-bridge inverter using pulse width modulation with selective harmonic elimination (SHE-PWM). The membership functions (MFs) are designed from the firing angles calculated for various modulation indices.
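For a 7-level (three-cell) cascaded H-bridge, SHE reduces to a small nonlinear system in the three firing angles: the fundamental is set by the modulation index while the 5th and 7th harmonics are driven to zero. The abstract does not give the authors' equations or fuzzy rule base, so the sketch below only illustrates this standard firing-angle calculation, with a hypothetical modulation index, using SciPy.

```python
# Sketch of the standard SHE firing-angle equations for a 7-level cascaded H-bridge;
# this is not the paper's fuzzy controller, and M = 0.8 is a hypothetical input.
import numpy as np
from scipy.optimize import fsolve

def she_equations(theta, M):
    t1, t2, t3 = theta
    return [
        np.cos(t1) + np.cos(t2) + np.cos(t3) - 3 * M,      # fix the fundamental
        np.cos(5 * t1) + np.cos(5 * t2) + np.cos(5 * t3),   # eliminate the 5th harmonic
        np.cos(7 * t1) + np.cos(7 * t2) + np.cos(7 * t3),   # eliminate the 7th harmonic
    ]

M = 0.8
guess = np.radians([15.0, 35.0, 60.0])   # starting angles inside the first quadrant
theta = np.sort(fsolve(she_equations, guess, args=(M,)))
print("firing angles (deg):", np.degrees(theta))
```

A fuzzy controller or look-up table, as described in the abstract, would then map each modulation index to its precomputed angle set instead of solving these equations online.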
Citations: 6
A Review of IoT Based Smart Industrial System for Controlling and Monitoring
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-00012
Prajakta Karemore, P. Jagtap
In recent years, Industrial Control (IC) has emerged as a fundamental means of gathering all the pertinent data, insights and information associated with the various industrial processes, motors, machines and devices used on industrial premises, with the goal of delivering controlled access, better efficiency and better outcomes for the products being manufactured. Remote control and monitoring through communication technologies such as ZigBee, RF and infrared have been widely used in industry, but these wireless methods are generally confined to basic applications because of their moderate communication speeds, limited range and weak information security. The framework reviewed here is evolving into a fundamental industrial need for the monitoring, control, security and safety of various activities. The monitoring framework incorporates sensors such as fire, smoke, ultrasonic, moisture and temperature, current and voltage sensors, together with a Wi-Fi module for control operations; when abnormal activity is detected, appropriate actions are triggered. The system can also be controlled through a remote server with an application on a PC or laptop. It additionally incorporates facial recognition, using OpenCV to recognize authorized personnel for sign-in and sign-out, with the details stored in a database sheet or updated to the cloud. In the event of an invalid entry or break-in attempt, an alarm email is immediately sent to the designated authority, individual or group.
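The alerting path is described only at a high level (sensor readings checked against limits, an email sent to the approved contact). A minimal sketch of that pattern is given below; the sensor names, limits, SMTP host and addresses are hypothetical placeholders, not details from the reviewed system.

```python
# Minimal threshold-based alerting sketch; every name and limit here is a placeholder.
import smtplib
from email.message import EmailMessage

LIMITS = {"temperature_c": 60.0, "smoke_ppm": 300.0, "current_a": 15.0}

def check_and_alert(readings, smtp_host="smtp.example.com",
                    sender="plant@example.com", recipient="supervisor@example.com"):
    """Compare sensor readings against limits and e-mail any violations."""
    violations = {k: v for k, v in readings.items() if k in LIMITS and v > LIMITS[k]}
    if not violations:
        return False
    msg = EmailMessage()
    msg["Subject"] = "Industrial monitor alert"
    msg["From"], msg["To"] = sender, recipient
    msg.set_content("Abnormal readings: " + str(violations))
    with smtplib.SMTP(smtp_host) as server:   # credentials/TLS omitted in this sketch
        server.send_message(msg)
    return True

# Readings within limits: returns False without sending anything.
print(check_and_alert({"temperature_c": 45.0, "smoke_ppm": 120.0, "current_a": 9.8}))
```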
Citations: 2
Detection of Rice Leaf Diseases Using Image Processing
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-00080
Minu Eliz Pothen, Dr. Maya L Pai
Diseases infecting plant leaves, particularly rice leaves, are one of the significant problems faced by farmers, making it extremely hard to produce the quantity of food needed for the growing human population. Rice diseases cause production and economic losses in the agricultural sector; they likewise affect the earnings of farmers who depend on agriculture, and crop failure is a factor in farmer suicides. Detecting the specific disease infecting a plant helps in planning disease control procedures. The proposed method describes the strategies used for rice leaf disease classification. Images of bacterial leaf blight, leaf smut and brown spot are segmented using Otsu's method. From the segmented area, features are extracted using Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG). The features are then classified with a Support Vector Machine (SVM), achieving 94.6% accuracy with a polynomial-kernel SVM and HOG features.
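The pipeline named in the abstract (Otsu segmentation, LBP and HOG features, polynomial-kernel SVM) maps directly onto scikit-image and scikit-learn calls. The sketch below follows that pipeline with assumed image size and LBP/HOG parameters; it illustrates the general approach rather than the authors' exact configuration.

```python
# Sketch of the Otsu -> LBP/HOG -> polynomial-SVM pipeline; image size and
# feature parameters are assumptions, not values reported in the paper.
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.filters import threshold_otsu
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def extract_features(image_rgb):
    gray = resize(rgb2gray(image_rgb), (128, 128))        # assumed working size
    mask = gray < threshold_otsu(gray)                    # Otsu split of lesion vs. background
    segmented = (gray * mask * 255).astype(np.uint8)      # keep only the segmented region
    lbp = local_binary_pattern(segmented, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(segmented, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

demo = np.random.rand(200, 200, 3)          # stand-in for a leaf photograph
print(extract_features(demo).shape)
clf = SVC(kernel="poly", degree=3)          # clf.fit(features, labels) on the real dataset
```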
Citations: 47
Efficient Handling of Incomplete basic Partitions by Spectral Greedy K-Means Consensus Clustering
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-00056
M. Vasuki, S. Revathy
Cluster ensemble approaches combine different clustering results into a single partition. To enhance the quality of that single partition, this paper presents a comparative study of different methods with their advantages and drawbacks. Spectral ensemble clustering (SEC) via weighted k-means is not efficient at handling incomplete basic partitions and big-data problems. To overcome these problems, greedy k-means consensus clustering is combined with SEC, yielding the proposed spectral greedy k-means consensus clustering (SGKCC). SGKCC handles incomplete basic partitions in big data efficiently and enhances the quality of the single partition. Extensive evaluation using NMI and RI is used to measure performance against existing approaches and validate the proposed algorithm.
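SGKCC itself is the paper's contribution and its steps are not spelled out in the abstract. For orientation, the sketch below shows the generic consensus-clustering setup it builds on: base partitions with missing (incomplete) labels are fused through a co-association matrix and a final spectral step. This is a generic baseline, not the proposed algorithm.

```python
# Generic consensus clustering over incomplete base partitions (illustration only,
# not the SGKCC algorithm). Missing assignments are marked with -1.
import numpy as np
from sklearn.cluster import SpectralClustering

def co_association(partitions):
    """Fraction of base partitions in which each pair of points shares a cluster."""
    n = partitions.shape[1]
    co, counts = np.zeros((n, n)), np.zeros((n, n))
    for labels in partitions:
        valid = labels >= 0                      # -1 marks a missing assignment
        both = valid[:, None] & valid[None, :]
        same = (labels[:, None] == labels[None, :]) & both
        co += same
        counts += both
    return np.divide(co, counts, out=np.zeros_like(co), where=counts > 0)

# Three incomplete base partitions of eight points (hypothetical toy data).
partitions = np.array([
    [0, 0, 0, 1, 1, -1, 2, 2],
    [0, 0, 0, 1, 1, 2, 2, 2],
    [1, 1, -1, 0, 0, 0, 2, 2],
])
affinity = co_association(partitions)
consensus = SpectralClustering(n_clusters=3, affinity="precomputed",
                               random_state=0).fit_predict(affinity)
print(consensus)
```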
Citations: 2
Survey of Iris Image Segmentation and Localization
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-000100
S. S. Rao, R. Shreyas, G. Maske, A. Choudhury
Iris recognition is one of the best methods in the field of biometric identification because the iris has features that are not only unique but also remain stable throughout a person's lifetime. Iris recognition has multiple phases, namely image acquisition, iris segmentation, iris localization, feature extraction and matching. Image acquisition is the capture of the iris image at an optimal distance. Iris segmentation is the process of obtaining the different segments of the eye. Iris localization finds the inner and outer boundaries of the iris, distinguishing it from the sclera and pupil and focusing on the iris alone. Feature extraction derives the biometric template from the iris, giving the unique data required for the next step. Matching finds the best match in the database for the extracted biometric template. The future implementation discussed in this paper focuses only on image acquisition, iris segmentation and iris localization. The paper aims to optimise these processes in terms of image capture distance, computation time and memory requirement, using Dynamic Reconfigurable Processor (DRP) technology together with suitable segmentation and localization algorithms, as described in Sections 2.2 and 2.3 respectively.
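A common baseline for localizing the inner (pupil) and outer (limbus) iris boundaries, though not necessarily the algorithm favoured by this survey, is the circular Hough transform on a smoothed grayscale image. The OpenCV sketch below shows that baseline; the blur size, radius ranges and thresholds are assumed values.

```python
# Baseline iris localization via circular Hough transform; parameter values are
# assumptions for illustration, not settings from the surveyed methods.
import cv2
import numpy as np

def localize_iris(gray):
    """Return (pupil, iris) circles as (x, y, r) tuples, or None if either is missing."""
    blurred = cv2.medianBlur(gray, 7)
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=100, param2=30, minRadius=15, maxRadius=60)
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                            param1=100, param2=30, minRadius=60, maxRadius=140)
    if pupil is None or iris is None:
        return None
    return tuple(pupil[0][0].astype(int)), tuple(iris[0][0].astype(int))

# Synthetic eye-like test image: a dark pupil inside a mid-gray iris disc.
canvas = np.full((300, 300), 200, np.uint8)
cv2.circle(canvas, (150, 150), 100, 120, -1)   # iris region
cv2.circle(canvas, (150, 150), 35, 20, -1)     # pupil
print(localize_iris(canvas))
```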
Citations: 8
Evaluation of Dimensionality Reduction Techniques for Big data
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-00043
R. Ramachandran, Gopika Ravichandran, Aswathi Raveendran
In this digital era, big data has very high dimensionality and requires a large amount of storage space, and lossless interpretation becomes difficult when the data has many dimensions. However, not all of those dimensions are relevant, and some may be interrelated, so redundancy can exist in the attribute set. Dimensionality reduction is a technique that focuses on reducing the number of attributes and the complexity of high-dimensional data. This paper presents a detailed study of different dimensionality reduction techniques, namely principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), singular value decomposition (SVD) and independent component analysis (ICA). Furthermore, it provides a comparative analysis based on various parameters.
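All five surveyed techniques are available in scikit-learn, which makes a side-by-side reduction to a common target dimension straightforward. The sketch below compares them on a small built-in dataset; the dataset and the two-component target are illustrative assumptions, not the paper's experimental setup.

```python
# Side-by-side application of the five surveyed reduction techniques
# (dataset and n_components=2 are illustrative choices).
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

reducers = {
    "PCA":  PCA(n_components=2),
    "LDA":  LinearDiscriminantAnalysis(n_components=2),   # supervised: uses the labels y
    "KPCA": KernelPCA(n_components=2, kernel="rbf"),
    "SVD":  TruncatedSVD(n_components=2),
    "ICA":  FastICA(n_components=2, random_state=0),
}

for name, reducer in reducers.items():
    Z = reducer.fit_transform(X, y) if name == "LDA" else reducer.fit_transform(X)
    print(f"{name}: reduced shape {Z.shape}")
```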
Citations: 6
Enhanced Computer Aided Bone Fracture Detection Employing X-Ray Images by Harris Corner Technique
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-000184
C. Z. Basha, M. Reddy, K. Nikhil, P. M. Venkatesh, A. Asish
Innovative technologies are emerging rapidly in various fields, particularly in healthcare. Bone fracture is one of the most common human injuries and happens when high pressure is applied to a bone, or simply because of accidents. High-precision diagnosis of bone fracture is important in the medical profession, yet remote hospitals with few physicians may lack the equipment to diagnose fractures. X-ray scans, one of the less expensive imaging techniques, are used to assess fractures. A Harris corner based detection algorithm is proposed to extract features from the image; the extracted features can identify edges, fractures and corners present in the image. 300 different X-ray images were collected from Osmania hospital, Hyderabad. The proposed method gives an accuracy of 92%, which is better at recognizing fractures than existing methods.
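The core operation the abstract relies on, the Harris corner response over an X-ray image, is available directly in OpenCV. The sketch below shows that step with assumed block size, aperture and threshold; how the resulting corner map is turned into a fracture decision is the paper's contribution and is not reproduced here.

```python
# Harris corner response on a grayscale image; parameter values are assumptions.
import cv2
import numpy as np

def harris_corner_points(gray, thresh_ratio=0.01):
    """Return (row, col) coordinates whose Harris response exceeds a fraction of the maximum."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    response = cv2.dilate(response, None)      # spread maxima so thresholding picks them up
    return np.argwhere(response > thresh_ratio * response.max())

# Synthetic stand-in for an X-ray: a bright bar with a displaced break in the middle.
xray = np.zeros((200, 200), np.uint8)
cv2.rectangle(xray, (90, 10), (110, 95), 255, -1)
cv2.rectangle(xray, (95, 105), (115, 190), 255, -1)
print("corner candidates:", len(harris_corner_points(xray)))
```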
Citations: 25
Application of RLSA for Skew Detection and Correction in Kannada Text Images
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-000146
R. Salagar, Pushpa B. Patil
The presence of skew in a document image captured by a photographic camera, mobile camera or scanner is inevitable. Detecting and correcting skew in a document image is a challenging phase before further processing such as segmentation and analysis. In this paper, the Run Length Smoothing Algorithm (RLSA) is applied to the detection and correction of skew in handwritten Kannada document images. The work has two main parts. The first is preprocessing of the document using methods such as thresholding and maximum gradient to extract the text and text-line area without losing any data. The second is skew detection and correction. RLSA is applied row-wise and column-wise over the document image; the result is used to determine the skew (slant) angle, and the document is then rotated anticlockwise by that angle, removing the skew introduced while the document was photocopied. The performance of the proposed method is evaluated on handwritten Kannada documents, and the experimental outcomes are significantly better.
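Row-wise RLSA itself is simple: in a binarized image, background runs shorter than a smoothing threshold between text pixels are flipped to foreground, so characters merge into text-line blobs whose orientation then gives the skew angle. The sketch below implements that smoothing step; the threshold value and the binarization convention (1 = text) are assumptions.

```python
# Row-wise RLSA: background gaps shorter than `threshold` between text pixels are filled.
# Binary convention assumed here: 1 = text, 0 = background.
import numpy as np

def rlsa_horizontal(binary, threshold=20):
    smoothed = binary.copy()
    for row in smoothed:
        text_cols = np.flatnonzero(row)              # columns containing text pixels
        if text_cols.size < 2:
            continue
        for start, end in zip(text_cols[:-1], text_cols[1:]):
            if 0 < end - start - 1 <= threshold:     # short background gap: fill it
                row[start + 1:end] = 1
    return smoothed

# Toy example: two words on one text line separated by a small gap.
line = np.zeros((1, 30), dtype=np.uint8)
line[0, 2:8] = 1
line[0, 12:18] = 1
print(rlsa_horizontal(line, threshold=5)[0])
```

Column-wise smoothing is the same operation applied to the transposed image; the skew angle can then be estimated from the orientation of the merged blobs (for example via a projection profile or a minimum-area bounding rectangle) before the rotation step.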
Citations: 2
Diagnosis of Crime Rate against Women using k-fold Cross Validation through Machine Learning
Pub Date : 2020-03-01 DOI: 10.1109/ICCMC48092.2020.ICCMC-000193
P. Tamilarasi, R. Rani
Crime against women has become a serious problem for our nation. Many countries are continuously trying to control this offence, and its prevention is an essential task. In recent years crimes against women have increased significantly, and the Indian government is now showing interest in addressing the problem and in developing our society. A huge amount of data is generated every year from crime reporting. This data can be very useful for assessing and predicting crime, and can to some degree help stop it. Data analysis is the process of examining, cleansing, transforming and modelling data with the goal of establishing useful information, reporting conclusions and supporting decision-making. Feature scaling is one of the most important techniques for standardizing the independent features so that the data lie in a fixed range; it is performed during data pre-processing. K-fold cross-validation is a resampling method used for evaluating machine learning models on a small sample of data. It is a common strategy because it is easy to understand and usually results in a model-skill estimate that is less biased or less optimistic than other approaches, such as a simple train/test split. Machine learning plays a large part in data processing. This paper applies six types of machine learning algorithms, namely KNN, decision trees, Naïve Bayes, linear regression, CART (Classification and Regression Tree) and SVM, using the same characteristics of the crime data, and tests them for accuracy. The main objective of this research is to evaluate the efficacy and application of machine learning algorithms in data analytics.
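The evaluation protocol described here, feature scaling followed by k-fold cross-validation over several classifiers, is standard in scikit-learn. The sketch below shows it on a stand-in dataset; the dataset, the value k = 10 and the accuracy metric are assumptions rather than details from the paper.

```python
# K-fold cross-validation over several classifiers, with feature scaling in a pipeline.
# The dataset, k and metric are illustrative stand-ins for the paper's crime data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "KNN":           KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes":   GaussianNB(),
    "SVM":           SVC(),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    pipeline = make_pipeline(StandardScaler(), model)   # scaling is refit inside each fold
    scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

scikit-learn's DecisionTreeClassifier is an optimised CART implementation, and linear or logistic regression models can be added to the same dictionary in the same way.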
Citations: 23