
Latest publications from the 2013 International Conference on Recent Trends in Information Technology (ICRTIT)

Multiresolution feature extraction (MRFE) based speech recognition system
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844197
M. Priyanka, V. S. Solomi, P. Vijayalakshmi, Tushar Nagarajan
A speech recognition system converts uttered speech into text. The accuracy of the recognition system depends on the models generated. Models are trained on features extracted from the available training data and are then used to recognise the spoken text. In the conventional feature extraction method, features are extracted using a single window size (say, 20 ms). Instead of this fixed window size, we propose to extract features from the same speech signal using multiple window sizes. When multiple window sizes are used, multiple sets of feature vectors are derived for the same word, thereby increasing the number of examples. Experiments show that when features are extracted with multiple window sizes, the variation among the feature vectors increases considerably, which leads to better acoustic models. This multiresolution feature extraction technique is successfully used to build a speech recogniser. To analyse the performance of multiresolution feature extraction, an isolated-word speech recognition system is developed on the TIMIT speech corpus. Results reveal an improvement in recognition accuracy of around 8% over the conventional single-resolution feature extraction based method.
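As a minimal sketch of the multi-window idea, the following frames the same signal at several window sizes, yielding one feature set per resolution. Log-energy stands in for the paper's (unspecified) per-frame features, and the 10 ms hop and the particular window sizes are illustrative assumptions:

```python
import numpy as np

def frame_signal(signal, fs, win_ms, hop_ms=10):
    """Slice a 1-D signal into overlapping frames of win_ms milliseconds."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - win) // hop)
    return np.stack([signal[i * hop : i * hop + win] for i in range(n_frames)])

def multiresolution_features(signal, fs, win_sizes_ms=(10, 20, 30)):
    """One feature set per window size; log frame energy is a stand-in for
    whatever front end (e.g. MFCCs) the recogniser actually uses."""
    feats = {}
    for w in win_sizes_ms:
        frames = frame_signal(signal, fs, w)
        feats[w] = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    return feats

fs = 16000
sig = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s test tone
feats = multiresolution_features(sig, fs)           # 3 feature sets, one word
```

Each entry of `feats` is a separate sequence of feature vectors for the same utterance, which is exactly what multiplies the training examples per word.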
Citations: 2
ANN-based predictive analytics of forecasting with sparse data: Applications in data mining contexts
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844181
Mohammad A. Dabbas, P. Neelakanta, D. DeGroff
The technoeconomics of a business structure exhibits evolving performance attributes determined by various exogenous and endogenous causative variables. This paper proposes a predictive model to elucidate forecast performance on such evolving traits in large business structures (such as electric power utility companies). The method uses artificial neural network (ANN) based predictive analytics viewed in data mining contexts. Specifically, should the available data be sparse, a method of scarcity removal in the knowledge domain is proposed for subsequent use in the ANN-based data mining exercise. Forecast projections on the growth/decay profile across the ex ante regime are then determined. Further, for each forecast projection, a cone-of-forecast is suggested toward the corresponding limits (error bounds) on the accuracy of rule extraction in data mining. Example simulations on real-world data of wind-power generation versus wind speed demonstrate the efficacy of the forecasting strategy pursued. Possible shortcomings of the proposals are identified.
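A minimal sketch of the pipeline described above, with stand-ins for every unspecified detail: linear interpolation plays the role of the scarcity-removal step, a one-hidden-layer NumPy network plays the ANN, and the cone-of-forecast half-width is taken as twice the residual standard deviation. All of these are assumptions for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse series: monthly wind-power output (arbitrary units).
t = np.arange(24, dtype=float)
y = 0.5 * t + 3 * np.sin(t / 3) + rng.normal(0, 0.3, t.size)

# "Scarcity removal": densify the sparse record onto a finer grid.
t_fine = np.linspace(0, 23, 96)
y_fine = np.interp(t_fine, t, y)

# Minimal one-hidden-layer ANN trained by gradient descent on MSE.
X = (t_fine / 23.0).reshape(-1, 1)
Y = y_fine.reshape(-1, 1)
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.01
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    P = H @ W2 + b2                     # network prediction
    G = 2 * (P - Y) / len(X)            # dL/dP for MSE loss
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)      # backprop through tanh
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

# Symmetric "cone-of-forecast" half-width from training residuals.
resid = (Y - (np.tanh(X @ W1 + b1) @ W2 + b2)).ravel()
cone = 2 * resid.std()
```

A forecast at a new time point would then be reported as prediction ± `cone`, widening as the error bound grows.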
Citations: 0
Multiresolution analysis for computer-aided mass detection in mammogram using pixel based segmentation method
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844207
J. Pragathi, H. Patil
Mammography is an X-ray imaging technique for diagnosing breast tumours. Segmentation of tumours in mammogram images is a difficult task because the images are poor in contrast and the lesions are surrounded by tissue with similar characteristics. In this paper, an automatic detection algorithm is proposed to segment suspicious masses or lesions. Mammogram images are analysed using wavelets, and the algorithm combines region-based and pixel-based segmentation to detect the masses. The performance of the system is evaluated on a dataset containing 60 images. The experimental results show a relative error below 15% for each image and a sensitivity of 90%.
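A toy version of the wavelet-plus-pixel-based stage might look as follows. The one-level Haar transform and the mean + k·std intensity threshold are illustrative choices, not the paper's exact combined region/pixel criterion, and the image is synthetic:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation (low-low)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def segment(img, k=1.5):
    """Pixel-wise rule on the approximation band: a pixel joins the 'mass'
    class when it exceeds mean + k*std of the band."""
    ll, *_ = haar2d(img)
    thr = ll.mean() + k * ll.std()
    return ll > thr

# Synthetic 64x64 "mammogram": dim background with a bright 10x10 lesion.
img = np.full((64, 64), 0.2)
img[20:30, 20:30] = 0.9
mask = segment(img)          # 32x32 boolean mass mask at half resolution
```

On real mammograms the low contrast described above is exactly why a fixed global threshold is insufficient, hence the paper's combination with region-based segmentation.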
Citations: 1
SK-IR: Secured keyword based retrieval of sensor data in cloud
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844227
M. Sumalatha, K. Praveenraj, C. Selvakumar
Sensor data are collected at regular intervals and securely stored in the cloud. To provide security, this work proposes SK-IR: secured keyword-based retrieval. In the security mechanism, the symmetric-key encryption scheme AES is used to secure the sensor data. Data are retrieved from the cloud based on the keywords, scores and file locations available in the posting list. A hash function is applied to the posting list to enhance security. The posting lists are stored at an undisclosed cloud server location, which protects the data from being hacked. We study the performance of the encryption methodology over an unstructured database, choosing the HBase/Hadoop platform since HBase can handle the huge volume, variety and complexity of data used on the Hadoop platform.
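The posting-list side can be sketched as below. An HMAC-SHA256 keyed hash stands in for the paper's unspecified hash/AES combination, and all keywords, scores and HDFS-style locations are hypothetical:

```python
import hashlib
import hmac

SECRET = b"demo-key"  # shared secret, standing in for the AES key material

def trapdoor(keyword):
    """Deterministic keyed hash so the server can match a query against the
    posting list without learning the keyword itself."""
    return hmac.new(SECRET, keyword.encode(), hashlib.sha256).hexdigest()

# Posting list: hashed keyword -> [(score, file_location), ...]
index = {}

def add_posting(keyword, score, location):
    index.setdefault(trapdoor(keyword), []).append((score, location))

def search(keyword):
    """Return file locations for a keyword, best score first."""
    postings = index.get(trapdoor(keyword), [])
    return [loc for _, loc in sorted(postings, reverse=True)]

add_posting("temperature", 0.9, "hdfs://node1/blk_001")
add_posting("temperature", 0.4, "hdfs://node2/blk_007")
add_posting("humidity", 0.7, "hdfs://node1/blk_003")
```

In the full scheme the file contents at those locations would additionally be AES-encrypted, so a compromised index reveals neither keywords nor data.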
Citations: 1
Online franchise capturing using IPv6 through Automated Teller Machines
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844264
Kausal Malladi, S. Sridharan
This paper aims at the realization and implementation of online voting through Automated Teller Machines (ATMs) using IPv6. All ATMs are currently in a private network of their respective bank servers. If they can be migrated to IPv6, they enter a new domain of public networking. Many security-related threats also become possible when these transformations are deployed across all ATM terminals. A basic solution is to distribute the digital certificates of both the bank and election commission servers prior to the configuration of the ATM terminals. This would make it almost impossible for others to intrude on and disturb the transactions established between the ATM terminals and the bank or election commission servers. The number of transactions required between the election commission server and the ATM terminals would reduce by at least half compared with the current scenario of routing all transactions through the national financial switch (NFS). Additional encryption mechanisms further ensure the secure state of these transactions. The two phases of transactions, voter registration and online voting [1], aim at making the entire process of franchise capturing automated.
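A minimal IPv6 exchange, standing in for an ATM terminal contacting the election-commission server, can be demonstrated with a loopback socket. The payload and ACK protocol here are invented for illustration, and the certificate and encryption layers described above are omitted:

```python
import socket
import threading

def server(sock):
    """Accept one connection and echo the request back with an ACK prefix."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(b"ACK:" + conn.recv(64))

# "Election commission server" listening on the IPv6 loopback address.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.bind(("::1", 0))                 # ephemeral port on ::1
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=server, args=(srv,), daemon=True).start()

# "ATM terminal" connecting over IPv6 and submitting a (toy) ballot record.
cli = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
cli.connect(("::1", port))
cli.sendall(b"VOTE:ballot-42")
reply = cli.recv(128)
cli.close()
```

A production design would wrap this socket in TLS, verifying the pre-distributed server certificate before any ballot data is sent.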
Citations: 1
Modified Max-Log-MAP turbo decoding algorithm using optimized scaling factor
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844174
R. Krishnamoorthy, N. Pradeep
Max-Log-MAP is a Soft Input Soft Output (SISO) algorithm that determines the probability of the most likely path through the trellis, and hence gives sub-optimal performance compared with the Log-MAP algorithm. A simple but effective technique to improve the performance of the Max-Log-MAP (MLMAP) algorithm is to scale the extrinsic information exchanged between the two decoders by an appropriate Scaling Factor (SF). The Modified Max-Log-MAP (M-MLMAP) algorithm is obtained by fixing an arbitrary SF for the inner decoder S2 and an optimized SF for the outer decoder S1. This paper presents the performance of the M-MLMAP decoding algorithm, which reduces the over-estimation of reliability values to achieve a low Bit Error Rate (BER). An appropriate mathematical relationship between the SF and Eb/N0 is also proposed. Numerical results show that the M-MLMAP algorithm improves turbo decoding performance over Additive White Gaussian Noise (AWGN) and Rayleigh fading channels, with a gain of 0.75 dB over the MLMAP algorithm at a BER of 2×10-5 on the Rayleigh fading channel.
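The core operations can be sketched as follows: the Jacobian logarithm that Log-MAP computes exactly, the max-only approximation whose dropped correction term causes the over-estimated reliabilities, and the extrinsic scaling step. The 0.7 default below is a commonly quoted value for illustration, whereas the paper derives its SF from Eb/N0:

```python
import math

def maxstar(a, b):
    """Exact Jacobian logarithm, log(e^a + e^b), as used by Log-MAP."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def maxstar_approx(a, b):
    """Max-Log-MAP drops the correction term, keeping only max(a, b)."""
    return max(a, b)

def scale_extrinsic(llrs, sf=0.7):
    """Damp extrinsic LLRs by a scaling factor before handing them to the
    other constituent decoder, compensating the over-estimation."""
    return [sf * x for x in llrs]
```

Because `maxstar` is always at least `maxstar_approx` (the correction term is non-negative), Max-Log-MAP systematically inflates path metrics; scaling the exchanged extrinsic information is a cheap way to pull the reliabilities back toward their true values.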
Citations: 1
Performance evaluation of full adders in ASIC using logical effort calculation
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844271
R. Uma, P. Dhavachelvan
Device scaling has been a relatively straightforward issue in terms of power, speed and noise. For submicron CMOS technology, topology selection, power dissipation and speed are imperative aspects, especially when designing Clocked Storage Elements (CSEs), adder circuits and MAC units for high-speed, low-energy applications such as portable battery-powered devices and microprocessors. This paper presents a logical-effort-based delay model for different adder topologies, used to obtain the minimum delay and minimum number of stages while minimizing the transistor count and power consumption of the circuit. In this work a full adder is designed with 10 carry and 6 sum logic constructions; its delay is observed over a wide spectrum of electrical effort, and its performance is examined in terms of the number of stages and transistor sizes. From this mathematical analysis, the optimized circuits are implemented in Tanner EDA with TSMC MOSIS 250 nm technology, and their performance is analysed in terms of transistor count, delay and power dissipation against the mathematical model. All the logical constructions (carry logic and sum logic) used for designing the full adder are realized in CMOS logic.
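The delay model referred to above is the standard logical-effort formula D = N·(GBH)^(1/N) + Σp_i (in units of the technology delay τ). The sketch below evaluates it for a hypothetical 3-stage path (NAND2, NOR2, inverter with textbook logical efforts); the numbers are not taken from the paper:

```python
def path_delay(G, B, H, stage_parasitics):
    """Logical-effort path delay: D = N * (G*B*H)**(1/N) + sum of parasitics,
    where G is path logical effort, B branching effort, H electrical effort,
    and N the number of stages."""
    N = len(stage_parasitics)
    F = G * B * H                        # total path effort
    return N * F ** (1.0 / N) + sum(stage_parasitics)

# Hypothetical 3-stage path: NAND2 (g=4/3), NOR2 (g=5/3), inverter (g=1),
# no branching (B=1), electrical effort H=8, textbook parasitic delays.
G = (4 / 3) * (5 / 3) * 1.0
d = path_delay(G, 1.0, 8.0, [2.0, 2.0, 1.0])
```

Sweeping H (the "wide spectrum of electrical effort" above) and N with such a function is how the minimum-delay stage count is located before committing to transistor-level sizing.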
Citations: 2
Clustering of lung cancer data using Foggy K-means
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844173
A. Yadav, Divya Tomar, Sonali Agarwal
In the medical field huge volumes of data are available, which creates the need for a powerful data analysis tool to extract useful information. Several studies have been carried out in the data mining field to improve the capability of data analysis on huge datasets. Cancer is one of the most fatal diseases in the world, and lung cancer, with its high rate of occurrence, is one of the most serious problems and the biggest killing disease in India. Predicting the occurrence of lung cancer is very difficult because it depends upon multiple attributes that cannot be analyzed easily. In this paper a real-time lung cancer dataset is taken from SGPGI (Sanjay Gandhi Post Graduate Institute of Medical Sciences), Lucknow. A real-time dataset always brings obvious challenges such as missing values, high dimensionality, noise and outliers, which hinder efficient classification. A clustering approach is an alternative solution that analyzes the data in an unsupervised manner. The main focus of the current research work is to develop a novel approach, called Foggy K-means clustering, that creates accurate clusters from the desired real-time datasets. The experimental results indicate that the Foggy K-means clustering algorithm gives better results on real datasets than the simple k-means clustering algorithm and provides a better solution to the real-world problem.
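Since the abstract does not reproduce the Foggy-specific steps, the sketch below shows the plain k-means (Lloyd's algorithm) baseline that the proposed variant is compared against, run on synthetic two-blob data standing in for patient features:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm): the simple baseline that the
    paper's Foggy variant modifies."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Two well-separated synthetic "patient feature" blobs, 20 points each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels, centers = kmeans(X, 2)
```

The challenges listed above (missing values, noise, outliers) are precisely where this baseline degrades, which motivates the modified assignment scheme.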
Citations: 37
Detection of dropped non protruding objects in video surveillance using clustered data stream
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844232
P. Jayasuganthi, V. Jeyaprabha, P. S. A. Kumar, Dr.V. Vaidehi
As more and more surveillance cameras are deployed in a facility or area, the demand for automatic detection of suspicious objects is increasing. Most of the recent literature concentrates on protruding-object detection in video sequences. This paper proposes a novel approach to detect protruding as well as non-protruding objects in sequences of walking pedestrians based on the texture of the foreground objects. Initially, the static background is modeled with a mixture-of-Gaussians algorithm and the foreground objects are segmented. Objects are then detected frame by frame, followed by the calculation of statistical parameters, such as the mean and standard deviation of every blob, to form data streams. These parameters are clustered online using the k-means methodology over the data streams in order to find the outliers (dropped objects); here k is based on the number of objects present in the video. Finally, the approach is implemented on a standard dataset from the Video Surveillance Online Repository [15] as well as on our own dataset. The experimental results show that our system performs reasonably well and can accurately detect dropped objects in video data streams.
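A simplified version of the per-blob statistics and outlier step might look as follows. The single-cluster 3-sigma rule here is a stand-in for the paper's k-means clustering over data streams, and all feature values are synthetic:

```python
import numpy as np

# Per-blob (mean, std) intensity features streamed from successive frames.
# Synthetic stand-ins: pedestrians share a texture; the dropped bag differs.
rng = np.random.default_rng(2)
pedestrians = np.column_stack([rng.normal(120, 2, 30),   # blob mean intensity
                               rng.normal(15, 1, 30)])   # blob intensity std
bag = np.array([[60.0, 40.0]])                           # texture outlier
blobs = np.vstack([pedestrians, bag])

center = pedestrians.mean(0)                  # dominant-cluster centroid
dist = np.linalg.norm(blobs - center, axis=1)
thr = dist[:30].mean() + 3 * dist[:30].std()  # 3-sigma outlier threshold
dropped = np.where(dist > thr)[0]             # indices flagged as dropped
```

In the full system the centroid would come from online k-means over the stream rather than a single known cluster, but the flagging logic (distance far beyond the cluster's spread) is the same.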
Citations: 2
Cake cutting of CPU resources among multiple HPC agents on a cloud 云上多个HPC代理之间的CPU资源分割
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844172
Kausal Malladi, Debargha Ganguly
“You cut, I choose” is a classical algorithm for the fair sharing of resources between two agents that guarantees “envy-freeness”. For the multi-agent scenario, several algorithms have been proposed for sharing resources fairly on a Cloud. However, no algorithm has so far been proposed for High Performance Computing (HPC) agents, which are computationally intensive and whose resources must not only be fair-shared but also used to the utmost. This paper proposes an algorithm that considers a specific number of HPC agents that can be run on a host machine and attempts a fair share of resources among them. The proposed algorithm assumes that the agents demanding resources take a game-theoretic approach, and it gives a decent proportion of each demand as the allocation value. The algorithm works for a real-world scenario in which agents keep getting added dynamically to a host machine, and it assumes that agents do not depart after they are allocated.
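The allocation idea can be illustrated with a small proportional-share sketch. The class below is a toy under stated assumptions, not the paper's algorithm: when total demand exceeds host capacity, every agent receives the same fraction of its demand, which keeps shares proportional to demands while still letting agents be added dynamically.

```python
class HostAllocator:
    """Proportionally fair-shares CPU capacity among dynamically added agents.

    A sketch of proportional sharing, not the paper's exact allocation rule:
    if total demand fits, everyone gets their full demand; otherwise each
    agent gets the same fraction of its demand, so the host is fully used.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.demands = {}

    def add_agent(self, name, demand):
        """Register a newly arrived agent and return the updated allocation."""
        self.demands[name] = demand
        return self.allocations()

    def allocations(self):
        total = sum(self.demands.values())
        if total <= self.capacity:
            return dict(self.demands)
        scale = self.capacity / total
        return {agent: d * scale for agent, d in self.demands.items()}
```

For example, on an 8-core host, agents demanding 4, 4, and 8 cores would receive 2, 2, and 4 cores under this rule; agents are never evicted once admitted, matching the no-departure assumption in the abstract.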
Citations: 0
Journal
2013 International Conference on Recent Trends in Information Technology (ICRTIT)