
2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS): latest publications

An efficient implementation of rotational radix-4 CORDIC based FFT processor
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745443
A. Yasodai, A. Ramprasad
A new technique for implementing low-power FFTs based on a memoryless, Z-path-eliminated CORDIC is proposed in this paper. The vector rotation in the x/y plane can be realized by rotating a vector through a series of elementary angles. These elementary angles are chosen such that the vector rotation through each of them may be approximated easily with a simple shift-and-add operation, and their algebraic sum approaches the required rotation angle. This can be performed by the CORDIC (COordinate Rotation DIgital Computer) algorithm in rotation mode. A pipelined architecture based on pre-computation of the micro-rotation directions, radix-4 number representation, and an angle generator is assessed in terms of hardware complexity, iteration delay and memory reduction. The proposed algorithm also employs an addressing scheme and the associated angle-generator logic in order to eliminate the ROM usage for storing the twiddle factors. It incorporates parallelism and pipeline processing. The latency of the system is n/2 clock cycles, and the throughput rate is one valid result per eight clock cycles. The proposed architecture, for radix-4, 16-bit precision and a 16-point FFT, was implemented on a Virtex-5 FPGA platform and simulated to validate the results. This reduces the dynamic power consumption of the proposed system to 28.52 mW at 100 MHz and 5.70 mW at 20 MHz, with a maximum operating frequency of 450.564 MHz.
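A minimal Python sketch of the underlying rotation-mode CORDIC idea: a series of shift-and-add micro-rotations whose arctangent angles sum to the requested rotation. It illustrates only the basic radix-2 recurrence; the paper's radix-4, Z-path-eliminated pipeline and its FPGA implementation are not reproduced here.

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate vector (x, y) by `angle` (radians) using radix-2 CORDIC
    rotation mode: only shifts, adds and a table of arctangents."""
    # Pre-computed elementary angles atan(2^-i) and the CORDIC gain K.
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # direction of the micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]               # residual angle approaches zero
    return x * K, y * K                      # undo the accumulated gain

# Example: rotate the unit vector (1, 0) by 30 degrees.
print(cordic_rotate(1.0, 0.0, math.radians(30)))  # ~ (0.866, 0.5)
```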
Citations: 4
Relaxed context-aware machine learning middleware (RCAMM) for Android
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745453
Jitesh Punjabi, Shekhar Parkhi, Gaurav Taneja, N. Giri
Context-aware computing is a promising approach for developing mobile applications that provide experiences and services fine-tuned to the user's preferences. Applications such as Google Now and Apple Siri learn the user's activities from context-related information and subsequently provide suggestions to the users in real time. However, in almost all cases, application developers have to build the same set of mechanisms to collect the context information and store it in an appropriate form, rather than focusing on the parts of the application that consume the context information. This results in repetition of the same task and multiple copies of data. This paper presents our work on the development of a middleware that handles context-information collection and storage. The work provides a framework that allows developers to easily implement context-aware applications that consume the services provided by the middleware. Applications only have to react to context data (past and present) while the middleware takes care of everything else, such as the background service for context-information collection and storage, thus reducing redundancy, increasing adaptability and flexibility, and supporting developers in rapid prototyping of context-aware applications. The paper thus presents our work towards building a sustainable Android framework that follows the principles of Reformat, Reduce, Regenerate, Reuse and Repurpose.
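A minimal Python sketch of the publish/subscribe idea the abstract describes: the middleware collects and stores context centrally, and applications only register callbacks that react to it. Class and method names here are hypothetical illustrations, not the RCAMM API.

```python
from collections import defaultdict
from datetime import datetime

class ContextMiddleware:
    """Hypothetical context-collection middleware: providers publish values,
    the middleware stores them with timestamps, and applications only
    register callbacks that react to the data."""

    def __init__(self):
        self._history = defaultdict(list)      # past context per key
        self._listeners = defaultdict(list)    # callbacks per key

    def publish(self, key, value):
        record = (datetime.now(), value)
        self._history[key].append(record)      # storage handled centrally
        for callback in self._listeners[key]:
            callback(value, self._history[key])

    def subscribe(self, key, callback):
        self._listeners[key].append(callback)

# Example: an app reacts to location updates without managing storage itself.
mw = ContextMiddleware()
mw.subscribe("location", lambda v, hist: print("now at", v, "after", len(hist), "fixes"))
mw.publish("location", (19.07, 72.88))
```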
Citations: 5
Automatic domain ontology construction mechanism
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745492
Aarti Singh, P. Anand
Ontologies play a vital role in agent communication in the present semantic web. Manual construction of an ontology is a very complex and time-consuming process, so it is essential to automate this task. A number of tools have been created to help generate ontologies in a semi-automatic or manual way, but there is no standard framework for complete automation of this process. Since ontologies are highly reusable, other ontologies for specific application domains may be built on top of a basic ontology. This work presents a mechanism for automatic construction of ontologies using intelligent agents.
Citations: 4
Survey of test strategies for System-on-Chip and its embedded memories
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745473
G. P. Acharya, M. A. Rani
Today's submicron VLSI technology has evolved towards the integration of many VLSI ICs into a single silicon chip called a System-on-Chip (SoC). An SoC architecture normally contains multiple processors along with either separate or centralized memory blocks as its core elements, as well as many non-core elements, e.g., cache/DRAM controllers and I/O controllers. Due to the increased demand for data storage, the integration of on-chip memories ranging from gigabytes to terabytes is becoming essential for the latest SoC technology. To maintain the reliability and performance of SoCs under technology miniaturization and increased memory density, there is a need to incorporate an on-chip self-testing unit for these memory units. Further, to improve the yield and fault tolerance of on-chip memories without degrading their performance, a self-repair mechanism may be integrated on chip. Apart from memory self-test and repair, another major challenge in SoC testing is the testing of logic blocks (core elements) as well as the non-core elements specified earlier. This paper reviews BIST strategies from the literature that are applied to the testing of embedded memories and IP cores along with the associated non-core elements.
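As an illustration of the memory self-test strategies such surveys cover, the following Python sketch models one widely used march test (March C-) over a software memory model; it is a hedged stand-in, not any specific BIST architecture from the surveyed literature.

```python
def march_c_minus(memory):
    """Run a software model of the March C- test over a list-backed memory.
    Returns True if no stuck-at or coupling fault is observed."""
    n = len(memory)
    up, down = range(n), range(n - 1, -1, -1)

    def element(order, expect, write):
        for addr in order:
            if memory[addr] != expect:      # read and verify the expected value
                return False
            memory[addr] = write            # then write the complement
        return True

    for addr in up:                          # M0: write 0 everywhere
        memory[addr] = 0
    return (element(up, 0, 1) and            # M1: ascending  (r0, w1)
            element(up, 1, 0) and            # M2: ascending  (r1, w0)
            element(down, 0, 1) and          # M3: descending (r0, w1)
            element(down, 1, 0) and          # M4: descending (r1, w0)
            all(memory[a] == 0 for a in up)) # M5: final read of 0s

# Example on a fault-free 16-cell memory model.
print(march_c_minus([0] * 16))  # True
```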
Citations: 9
Optimal design of a symmetric monopole antenna with spiral geometry
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745484
Burni George, S. S. Kumar
Conventional monopole antennas have narrow bandwidth, and this bandwidth can be slightly improved by using dual arms for the monopole structure. Folding both ends of the dual monopole antenna results in a further improvement of the bandwidth of the dual monopole structure. Spiral antennas have large bandwidth. This paper proposes an optimal symmetric antenna combining a spiral geometry with a dual monopole structure. The proposed antenna is modeled using the number of turns, the segment lengths and the spiral geometrical parameters. The antenna parameters, gain and input impedance, are calculated using NEC2, and the proposed structure is optimized using Particle Swarm Optimization (PSO) to maximize gain subject to the prescribed input impedance. Simulated results of the proposed antenna show better gain, impedance and bandwidth characteristics compared to an L-shaped folded dual monopole.
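A minimal Python sketch of the PSO loop used for this kind of antenna optimization; the fitness function below is a hypothetical stand-in for the NEC2-computed gain and input impedance, not the paper's actual model or parameterization.

```python
import random

def pso_maximize(fitness, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (maximization over a box)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical fitness: reward gain, penalize deviation from a 50-ohm target;
# in the paper this role is played by the NEC2-computed gain and impedance.
def antenna_fitness(params):
    gain = sum(params)                              # stand-in for simulated gain
    impedance = 50 + 10 * (params[0] - params[1])   # stand-in for input impedance
    return gain - abs(impedance - 50)

print(pso_maximize(antenna_fitness, dim=3, bounds=(0.0, 1.0)))
```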
Citations: 1
Respiratory sound classification using cepstral features and support vector machine
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745460
R. Palaniappan, K. Sundaraj
Respiratory sound analysis provides vital information about the present condition of the lungs and can be used to assist medical professionals in differential diagnosis. In this paper, we distinguish between normal (without any pathological condition), airway obstruction pathology and parenchymal pathology using respiratory sound recordings taken from the RALE database. The proposed method uses Mel-frequency cepstral coefficients (MFCC) as features extracted from the respiratory sounds, and the extracted features are classified using a support vector machine (SVM) classifier. The classifier performance is analysed using the confusion matrix technique. A mean classification accuracy of 90.77% was obtained with the proposed method. The confusion-matrix analysis of the SVM classifier shows that normal, airway obstruction and parenchymal pathology are classified with 94.11%, 92.31% and 88.00% accuracy respectively. The analysis reveals that the proposed method is promising for distinguishing between normal, airway obstruction and parenchymal pathology.
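A minimal sketch of an MFCC + SVM pipeline of this kind, assuming librosa for feature extraction and scikit-learn for the classifier; the file names and labels are placeholders, not the RALE recordings, and the feature averaging is a simplification.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, confusion_matrix

def mfcc_features(path, n_mfcc=13):
    """Mean MFCC vector for one recording."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Placeholder file lists and labels; replace with the real recordings.
train_files = ["normal_01.wav", "obstruction_01.wav", "parenchymal_01.wav"]
train_labels = ["normal", "obstruction", "parenchymal"]
test_files = ["normal_02.wav", "obstruction_02.wav", "parenchymal_02.wav"]
test_labels = ["normal", "obstruction", "parenchymal"]

X_train = np.array([mfcc_features(f) for f in train_files])
X_test = np.array([mfcc_features(f) for f in test_files])

clf = SVC(kernel="rbf").fit(X_train, train_labels)
pred = clf.predict(X_test)
print(accuracy_score(test_labels, pred))
print(confusion_matrix(test_labels, pred,
                       labels=["normal", "obstruction", "parenchymal"]))
```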
Citations: 21
Templated marching cubes — A low computation approach to surface rendering
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745485
C. K. Manikandtan, S. Resmi, S. Sibi, R. Kumar, G. S. Harikumaran Nair
Surface generation from high-resolution datasets using triangulation algorithms like marching cubes requires large amounts of computational time for the generation and interpolation of vertices. Here we propose a templated method of generating triangles which involves far less computation and saves CPU cycles and memory. Each cube orientation corresponding to the boundary cases in the original algorithm is listed in a prebuilt table of templated triangles. The template created using binary input may be further smoothed using cost functions related to the input image data.
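A minimal Python sketch of the table-lookup idea: the eight corner samples are classified against the iso-value to form a case index, which selects a prebuilt triangle template. Only two of the 256 table entries are shown, and the paper's actual template table and smoothing step are not reproduced.

```python
# Prebuilt templates keyed by the 8-bit cube case index; values are tuples of
# cube-edge ids forming triangles.  Only two illustrative entries are listed.
TRIANGLE_TEMPLATES = {
    0b00000000: [],              # cube entirely outside: no triangles
    0b00000001: [(0, 8, 3)],     # only corner 0 inside: one triangle
}

def cube_case_index(corner_values, iso):
    """Build the case index from the 8 corner samples of one cube."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value < iso:                  # corner lies inside the surface
            index |= 1 << bit
    return index

def triangles_for_cube(corner_values, iso):
    """Look up the prebuilt template instead of interpolating per vertex."""
    case = cube_case_index(corner_values, iso)
    return TRIANGLE_TEMPLATES.get(case, [])

# Example: only corner 0 is below the iso-value.
print(triangles_for_cube([0.2, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9], iso=0.5))
```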
Citations: 0
Dynamic gesture recognition of Indian sign language considering local motion of hand using spatial location of Key Maximum Curvature Points
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745452
M. Geetha, P. Aswathi
Sign language is the most natural way of expression for the deaf community. Indian Sign Language (ISL) is a visual-spatial language which provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a new method for vision-based recognition of dynamic signs corresponding to Indian Sign Language words. A new method is proposed for key frame extraction which is more accurate than existing methods: the frames corresponding to the Maximum Curvature Points (MCPs) of the global trajectory are taken as the key frames. The method accommodates the spatio-temporal variability that may occur when different persons perform the same gesture. We also propose a new method for shape feature extraction of the key frames, based on the spatial location of the key Maximum Curvature Points of the boundary. Our method, when compared with three other existing methods, gives better performance. The method considers both local and global trajectory information for recognition, and the feature extraction method is shown to be scale and translation invariant.
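A minimal Python sketch of picking key frames at maximum-curvature points of a 2-D hand-centroid trajectory; the discrete curvature formula is standard, while the thresholds and the ISL-specific shape features of the paper are not reproduced.

```python
import numpy as np

def curvature(trajectory):
    """Discrete curvature kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    along a 2-D trajectory of hand-centroid positions (one row per frame)."""
    x, y = trajectory[:, 0], trajectory[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = (dx ** 2 + dy ** 2) ** 1.5 + 1e-9   # avoid division by zero
    return np.abs(dx * ddy - dy * ddx) / denom

def key_frames(trajectory):
    """Indices of local curvature maxima, used as candidate key frames."""
    k = curvature(trajectory)
    return [i for i in range(1, len(k) - 1)
            if k[i] > k[i - 1] and k[i] >= k[i + 1]]

# Example: a trajectory with a sharp 90-degree turn in the middle.
traj = np.array([[t, 0.0] for t in range(5)] + [[4.0, t] for t in range(1, 6)])
print(key_frames(traj))
```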
Citations: 13
Single image super-resolution based on compressive sensing and TV minimization sparse recovery for remote sensing images
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745476
Sreeja S, M Wilscy
In this paper we address the problem of super-resolution for remote sensing images from a single low-resolution image without using an external database. The method uses the techniques of Compressive Sensing (CS), structural self-similarity and Total Variation (TV) minimization. The approach is based on sparse and redundant representations over trained dictionaries and involves identifying a dictionary that represents high-resolution patches in a sparse manner. Extra information from similar structures that exist in remote sensing images can be introduced into the dictionary in the CS framework. The K-SVD method is used for learning the dictionary and the TV minimization method is used for finding the sparse representation coefficients. Instead of using HR patches from an external database, the proposed method uses patches from the interpolated version of the LR image for training the dictionary. The method is compared with other single-image super-resolution algorithms that use sparse recovery methods such as the Orthogonal Matching Pursuit algorithm, and is tested with satellite images from the USC_SIPI database. The method gives better results than other methods both visually and quantitatively. Performance is evaluated using the metrics PSNR, MSSIM, FSIM and a blur metric.
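A hedged Python sketch of the self-trained dictionary idea: patches from the interpolated LR image train a dictionary, which then sparse-codes and reconstructs the image. For brevity it substitutes scikit-learn's dictionary learner and OMP coding for the paper's K-SVD and TV-minimization recovery.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def dictionary_sr_sketch(lr_upscaled, patch_size=(8, 8), n_atoms=64):
    """Learn a patch dictionary from the interpolated LR image itself,
    sparse-code every patch, and rebuild the image from the coded patches."""
    patches = extract_patches_2d(lr_upscaled, patch_size)
    data = patches.reshape(patches.shape[0], -1)
    mean = data.mean(axis=1, keepdims=True)       # remove the DC level per patch
    data = data - mean

    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0).fit(data)
    codes = dico.transform(data)                  # sparse coefficients per patch
    recon = codes @ dico.components_ + mean
    return reconstruct_from_patches_2d(recon.reshape(patches.shape),
                                       lr_upscaled.shape)

# Example on a random stand-in for the bicubic-interpolated LR image.
img = np.random.rand(64, 64)
print(dictionary_sr_sketch(img).shape)            # (64, 64)
```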
Citations: 4
Emotion detection from “the SMS of the internet”
Pub Date : 2013-12-01 DOI: 10.1109/RAICS.2013.6745494
U. Nagarsekar, A. Mhapsekar, P. Kulkarni, D. Kalbande
Due to the sudden eruption of activity in the social networking domain, analysts, social media and the general public are drawn to the sentiment analysis domain to gain invaluable information. In this paper, we go beyond basic sentiment classification (positive, negative and neutral) and target deeper emotion classification of Twitter data. We focus on identifying Ekman's six basic emotions, i.e. JOY, SURPRISE, ANGER, DISGUST, FEAR and SADNESS. We employ two diverse machine learning algorithms with three varied datasets and analyse their outcomes. We show how an equal distribution of emotions in the training tweets results in better learning accuracy and hence better performance in the classification task.
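A minimal text-classification sketch in Python using scikit-learn; the abstract does not name its two algorithms, so TF-IDF with multinomial Naive Bayes is used purely as a stand-in, and the tweets below are toy examples rather than the paper's datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy, hand-made tweets; the real labeled datasets are not reproduced here.
tweets = ["I just got the job, best day ever!",
          "Can't believe they cancelled it, so annoyed.",
          "There is a spider in my shoe, I am terrified.",
          "Whoa, did not see that plot twist coming!"]
labels = ["JOY", "ANGER", "FEAR", "SURPRISE"]

# Bag-of-words TF-IDF features feeding a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(tweets, labels)

print(model.predict(["what a wonderful surprise party, so happy"]))
```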
Citations: 13