
Latest publications from the 2013 International Conference on Recent Trends in Information Technology (ICRTIT)

Augmenting learning-experiences in the real world with digital technologies
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844210
H. Ogata
One of the challenges of CSUL (Computer Supported Ubiquitous Learning) research is capturing what learners have learned, together with the contextual data, and reminding the learners of it at the right place and time. This paper proposes a ubiquitous learning log system called SCROLL (System for Capturing and Reminding Of Learning Log). A Ubiquitous Learning Log (ULL) is defined as a digital record of what learners have learned in daily life using ubiquitous technologies. SCROLL allows learners to log their learning experiences with photos, audio, video, location, QR codes, RFID tags, and sensor data, and to share and reuse ULLs with others. Using SCROLL, learners can receive personalized quizzes and answers to their questions. They can also navigate and become aware of their past ULLs through an augmented reality view. The paper also describes how SCROLL can be used in different contexts, such as learning analytics for Japanese language learning, seamless language learning, and museum learning.
Citations: 0
Cost aware task scheduling and core mapping on Network-on-Chip topology using Firefly algorithm
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844278
S. Umamaheswari, K. I. Kirthiga, B. Abinaya, D. Ashwin
An optimal Network on Chip topology is generated with reduced area and power consumption. The Firefly algorithm is used for the optimal mapping of each Intellectual Property core in a specific application. The method incorporates multiple objectives, subject to constraints based on the information available in the Communication Task Graph. The work proceeds in two phases: in the first phase the tasks are mapped onto the processors, and in the second phase the processors are mapped onto the network tiles.
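The abstract does not reproduce the cost model or the Firefly parameters, so the following is only a minimal sketch of the idea: candidate core-to-tile mappings are encoded as random-key vectors, higher-cost fireflies move toward brighter (lower-cost) ones with an attractiveness that decays with distance, and `mapping_cost` is a hypothetical stand-in for the paper's multi-objective function over the Communication Task Graph.

```python
import numpy as np

def mapping_cost(perm, comm):
    """Hypothetical cost: communication volume weighted by hop distance on a square mesh."""
    side = int(np.ceil(np.sqrt(len(perm))))
    pos = {core: divmod(tile, side) for tile, core in enumerate(perm)}
    cost = 0.0
    for a in range(len(perm)):
        for b in range(len(perm)):
            (xa, ya), (xb, yb) = pos[a], pos[b]
            cost += comm[a, b] * (abs(xa - xb) + abs(ya - yb))  # Manhattan hops
    return cost

def firefly_mapping(comm, n_fireflies=20, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = comm.shape[0]
    X = rng.random((n_fireflies, n))          # random-key encoding of a mapping
    decode = lambda x: np.argsort(x)          # keys -> permutation of cores onto tiles
    fitness = np.array([mapping_cost(decode(x), comm) for x in X])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] < fitness[i]:   # firefly j is "brighter" (lower cost)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(n) - 0.5)
                    fitness[i] = mapping_cost(decode(X[i]), comm)
    best = np.argmin(fitness)
    return decode(X[best]), fitness[best]

if __name__ == "__main__":
    comm = np.random.default_rng(1).integers(0, 10, size=(9, 9)).astype(float)
    perm, cost = firefly_mapping(comm)
    print("tile -> core mapping:", perm, "cost:", cost)
```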
Citations: 2
Selecting best spectrum using multispectral palm texture
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844204
M. Maheswari, S. Ancy, G. Suresh
Multispectral palm print is one of the most reliable and distinctive biometrics. Multispectral imaging (MSI) offers faster acquisition times and better image quality than normal imaging. The advantages of the proposed method include better hygiene and higher verification performance. In this work we propose a Local Binary Pattern (LBP) based histogram representation of multispectral palm prints and use it to choose the best spectrum for authentication. The central part of the palm print image is resized to 180 × 180 and divided into non-overlapping sub-images, whose size varies from 2×2 pixels to 90×90 pixels. A histogram is obtained for each block and its values are used for comparison. In total, 36 images per person are taken from an available standard database. The training set is prepared using 2 images from each spectrum, and results are checked against the remaining images in authentication mode. Results are reported in terms of genuine acceptance rate (%). Most palm print recognition systems use white light to acquire images; this study analyzes palm print recognition performance under six different illuminations, including white light. Experimental results on a large database show that white light is not the optimal illumination: 700 nm light achieves higher palm print recognition accuracy than white light, and in authentication mode a 98% recognition rate is obtained for the 700 nm spectrum. The experiment was conducted for six spectra: 460, 630, 700, 850, and 940 nm, and white light. We use the CASIA-MS-Palmprint V1 database of 7,200 images collected by the Chinese Academy of Sciences' Institute of Automation (CASIA).
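As a rough illustration of the block-wise LBP histogram representation described above, the sketch below computes an 8-neighbour LBP map, splits it into non-overlapping blocks, and compares two palm images by a chi-square distance between their concatenated block histograms. The 180×180 size comes from the abstract; the 30×30 block size, the random stand-in images and the chi-square matcher are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: compare each pixel's 8 neighbours to the centre pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)
    return codes  # LBP codes in [0, 255]

def block_histograms(lbp, block=30):
    """Split the LBP map into non-overlapping blocks and histogram each block."""
    feats = []
    for y in range(0, lbp.shape[0] - block + 1, block):
        for x in range(0, lbp.shape[1] - block + 1, block):
            h, _ = np.histogram(lbp[y:y + block, x:x + block], bins=256, range=(0, 256))
            feats.append(h / h.sum())  # normalised per-block histogram
    return np.concatenate(feats)

def chi_square(h1, h2, eps=1e-9):
    """Chi-square distance, a common choice for comparing LBP histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

if __name__ == "__main__":
    # Hypothetical palm-print ROIs, already cropped and resized to 180x180.
    probe = np.random.randint(0, 256, (180, 180))
    gallery = np.random.randint(0, 256, (180, 180))
    d = chi_square(block_histograms(lbp_image(probe)),
                   block_histograms(lbp_image(gallery)))
    print("chi-square distance:", d)
```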
Citations: 1
Optimizing fuzzy search in XML using efficient trie indexing structure
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844253
S. Chandragandhi, L. Nithya
In a traditional keyword-based search system over XML data, a user submits a keyword query to the system and retrieves relevant answers. Keyword search is appropriate when the user has limited knowledge about the data. Our proposed method provides the following features: 1) Search-as-you-type: it extends autocomplete by supporting queries with multiple keywords over XML data. 2) Fuzzy: it provides high-quality answers whose keywords match the query keywords approximately. 3) Efficient: an index structure reduces the search time, and an effective ranking technique identifies high-quality results. The method achieves high search efficiency and result quality.
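The paper's trie index is not detailed in the abstract; the sketch below shows the standard idea such fuzzy search-as-you-type systems build on: each typed query is matched against a trie of keywords while maintaining a Levenshtein DP row per trie edge, pruning branches that can no longer match within the edit-distance threshold. The keyword vocabulary and the threshold are illustrative.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None  # set at the node that ends a complete keyword

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root

def fuzzy_search(root, query, max_dist=1):
    """Return (keyword, distance) pairs within max_dist edits of the query."""
    results = []
    first_row = list(range(len(query) + 1))
    for ch, child in root.children.items():
        _search(child, ch, query, first_row, results, max_dist)
    return results

def _search(node, ch, query, prev_row, results, max_dist):
    # Compute the next row of the Levenshtein DP table for this trie edge.
    row = [prev_row[0] + 1]
    for i in range(1, len(query) + 1):
        insert_cost = row[i - 1] + 1
        delete_cost = prev_row[i] + 1
        replace_cost = prev_row[i - 1] + (0 if query[i - 1] == ch else 1)
        row.append(min(insert_cost, delete_cost, replace_cost))
    if node.word is not None and row[-1] <= max_dist:
        results.append((node.word, row[-1]))
    if min(row) <= max_dist:  # prune branches that can no longer match
        for nxt_ch, child in node.children.items():
            _search(child, nxt_ch, query, row, results, max_dist)

if __name__ == "__main__":
    # Illustrative element/keyword vocabulary extracted from an XML corpus.
    trie = build_trie(["author", "article", "authors", "abstract", "title"])
    print(fuzzy_search(trie, "auther", max_dist=1))  # -> [('author', 1)]
```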
Citations: 3
A secured cloud storage technique to improve security in cloud infrastructure
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844187
M. S. Kumar, M. Kumar
Security in cloud computing is considered to be the most challenging research domain, where most of the solutions remain vulnerable. Because data in the cloud is available to every other component of the cloud, it is difficult to protect it from external sources. The risks of malicious insiders in the cloud and of failing cloud services have received strong attention from many companies. Security ensures confidentiality, integrity and availability, and has the character of a complement to reliability. This work proposes a new system in which data can be stored with high security using the Storage Efficient Secret Sharing Algorithm (SESS), implemented using Shamir's Secret Sharing (SSS) approach. Security is achieved by splitting the data into K parts, and the fact that every partition is needed to reconstruct the original data makes the algorithm computationally strong against attack. Implementing the SSS approach for application processing is computationally inefficient. This paper presents a detailed implementation of SESS using the SSS approach with a Dynamic Software Module (DSM) to secure data stored in a cloud environment and improve security in the cloud infrastructure.
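The SESS algorithm itself is not given in the abstract, so the sketch below shows only the underlying Shamir's Secret Sharing scheme it builds on: the secret is the constant term of a random polynomial over a prime field, shares are evaluations of that polynomial, and Lagrange interpolation at x = 0 reconstructs it. Setting the threshold equal to the number of shares mirrors the abstract's requirement that every partition is needed; the prime and the sample secret are illustrative.

```python
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime large enough for small secrets (illustrative)

def split_secret(secret, n_shares, threshold, prime=PRIME):
    """Shamir (threshold, n_shares) split: f(0) = secret, shares are (x, f(x))."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod prime
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 using the collected shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

if __name__ == "__main__":
    secret = int.from_bytes(b"cloud-data-key", "big")
    # threshold == n_shares mirrors the abstract's "every partition is needed" setting.
    shares = split_secret(secret, n_shares=5, threshold=5)
    assert reconstruct(shares) == secret
    print(reconstruct(shares).to_bytes(14, "big"))
```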
Citations: 2
Evaluation of semantic role labeling based on lexical features using conditional random fields and support vector machine
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844179
K. Ravidhaa, S. Meena, R. S. Milton
The main objective of this paper is to identify the semantic roles of arguments in a sentence based on lexicalized features, even when little semantic information is available. The semantic role labeling (SRL) task involves identifying which groups of words act as arguments to a given predicate. These arguments must be labeled with their role with respect to the predicate, indicating how the proposition should be semantically interpreted. The approach focuses on improving SRL by adding similar words and selectional preferences to the existing lexical features, thereby avoiding the data sparsity problem. Adding richer lexical information can improve the SRL task even when very little syntactic knowledge is available in the input sentence. We analyze the performance of SRL using a probabilistic graphical model (Conditional Random Field) and a machine learning model (Support Vector Machines). The statistical models are trained on the CONLL-2004 Shared Task training data.
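As a minimal sketch of the SVM side of the comparison (assuming scikit-learn), each argument candidate can be represented by a dictionary of lexicalized features and classified into a role; the `similar_word` feature below stands in for the similar-word/selectional-preference features the paper adds against data sparsity. The toy examples and feature names are illustrative, not the CONLL-2004 feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy argument candidates: lexicalized features -> semantic role label.
train_X = [
    {"predicate": "eat",  "head": "pasta", "pos": "NN",  "similar_word": "food"},
    {"predicate": "eat",  "head": "John",  "pos": "NNP", "similar_word": "person"},
    {"predicate": "give", "head": "Mary",  "pos": "NNP", "similar_word": "person"},
    {"predicate": "give", "head": "book",  "pos": "NN",  "similar_word": "object"},
]
train_y = ["A1", "A0", "A2", "A1"]   # PropBank-style role labels

# One-hot encode the feature dictionaries, then train a linear SVM classifier.
model = make_pipeline(DictVectorizer(sparse=True), LinearSVC())
model.fit(train_X, train_y)

test = {"predicate": "eat", "head": "apple", "pos": "NN", "similar_word": "food"}
print(model.predict([test]))  # the shared similar_word feature pulls this toward A1
```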
Citations: 1
Generating relevant paths using keyword search on compact XML
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844222
S. Meenakshi, R. Senthilkumar
The management of XML data has always been a popular research issue, and a simple yet effective way to search an XML database is keyword search. In existing methods, the user has to compose a query with which the relevant answers can be retrieved, which requires prior knowledge about the data. Several approaches have been proposed to overcome the issues arising from these methods. This paper addresses two challenges in keyword search over XML documents: 1) how to retrieve the top-k matches with high answer semantics for a keyword query, and 2) how to identify the relevant path for the keyword query. To identify relevant answers over XML data streams, Compact Lowest Common Ancestors (CLCAs) are used. We use a compact storage structure, the QUICX system, which is efficient in both compression and storage and provides indexing features for efficient querying. Experiments were carried out on benchmark datasets such as a geographical dataset (Mondial) and a bibliographic dataset (DBLP). To demonstrate its effectiveness, the proposed system is compared against the existing system with respect to retrieval time, and it achieves about a 63.3% improvement over keyword search in XML documents.
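The QUICX storage structure and the streaming CLCA algorithm are not reproduced here; the sketch below only illustrates the lowest-common-ancestor notion behind such answers using xml.etree.ElementTree: nodes matching each keyword are located and the deepest node that is an ancestor of one match per keyword is returned as the answer root. The sample document and the simple text-containment match are illustrative.

```python
import xml.etree.ElementTree as ET

DOC = """
<bib>
  <article><title>XML keyword search</title><author>Meenakshi</author></article>
  <article><title>Graph databases</title><author>Senthilkumar</author></article>
</bib>
"""

def parent_map(root):
    return {child: parent for parent in root.iter() for child in parent}

def path_to_root(node, parents):
    path = [node]
    while node in parents:
        node = parents[node]
        path.append(node)
    return path[::-1]                     # root ... node

def lowest_common_ancestor(nodes, parents):
    paths = [path_to_root(n, parents) for n in nodes]
    lca = None
    for ancestors in zip(*paths):         # walk down while all paths still agree
        if all(a is ancestors[0] for a in ancestors):
            lca = ancestors[0]
        else:
            break
    return lca

def keyword_matches(root, keyword):
    return [n for n in root.iter() if n.text and keyword.lower() in n.text.lower()]

if __name__ == "__main__":
    root = ET.fromstring(DOC)
    parents = parent_map(root)
    m1 = keyword_matches(root, "keyword")[0]    # a <title> node
    m2 = keyword_matches(root, "Meenakshi")[0]  # an <author> node
    lca = lowest_common_ancestor([m1, m2], parents)
    print(lca.tag)                              # -> article (the relevant answer root)
```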
Citations: 1
Cognitive inspired optimal routing of OLSR in VANET
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844217
Thompson Stephan, K. Karuppanan
Vehicular Ad Hoc Networks (VANETs) evolved as a result of recent advances in wireless technologies. In such networks, limited signal coverage and high node mobility cause frequent changes in topology. The scope of reactive routing protocols in VANET is limited by this topology instability, while proactive routing protocols such as OLSR, designed for MANET, are also unable to meet the broad range of data services envisioned for VANET, because the existing OLSR protocol cannot sense channel conditions or predict channel overload. To improve routing efficiency, the network needs some cognitive capacity to choose an optimal path that accounts for both link-state and channel information, thereby overcoming the problem of channel incapacity. This paper attempts to enhance OLSR routing with a cognitive process that obtains and stores knowledge about routing strategies in order to select the most suitable route and an appropriate channel for transmission.
Citations: 6
Software components prioritization using OCL formal specification for effective testing
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844288
A. Jalila, D. Mala
In soft real-time system development, minimizing testing effort is a challenging task. Earlier research has shown that a small percentage of components is often responsible for most of the faults reported at the later stages of software development. Due to time and other resource constraints, fault-prone components may be ignored during testing, which leads to compromises in software quality. Thus there is a need to identify the fault-prone components of the system based on data collected at the early stages of software development. The major focus of the proposed methodology is to identify and prioritize fault-prone components of the system using its OCL formal specifications. This approach enables testers to spend more effort on fault-prone components than on non-fault-prone components. The proposed methodology is illustrated with three case-study applications.
Citations: 3
Modified BPS algorithm based on shearlet transform for noisy images
Pub Date : 2013-07-25 DOI: 10.1109/ICRTIT.2013.6844237
M. Nishanthi, J. J. Nayahi
Most medical and nuclear images contain visual noise, and in its presence the images cannot be examined properly. The noise therefore has to be removed in order to provide better-quality images along with better compression efficiency for efficient storage and transmission. Block-Based Pass-Parallel SPIHT (BPS) is widely used for compression because of its high processing speed: the wavelet-transformed image is decomposed into 4×4 bit-blocks and the decomposed blocks are encoded simultaneously in all bit planes, so the speed is very high. However, the major drawback is a slight degradation in the PSNR value and visual quality. To overcome this drawback, a modified BPS algorithm is proposed that replaces the wavelet with the shearlet, because the shearlet provides multi-directional information and can also detect geometrical features such as edges. The LLSURE technique is applied before the transform to remove the noise in the image; it is preferred because of its high edge-preserving capability. Experimental results demonstrate the effectiveness of the LLSURE filter and the shearlet transform in the BPS algorithm, and show that the PSNR is particularly good for images corrupted with Gaussian noise compared to other noisy images.
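As a rough sketch of the block/bit-plane decomposition that makes the pass-parallel encoding possible, the code below (assuming NumPy and PyWavelets) transforms an image, quantises one sub-band to integers, splits it into 4×4 blocks and extracts the bit planes that BPS would encode in parallel. A wavelet is used here as a stand-in because no standard shearlet library is assumed; the actual SPIHT passes and the LLSURE denoising step are omitted.

```python
import numpy as np
import pywt

def to_bitplane_blocks(img, wavelet="haar", block=4, n_bits=8):
    """Transform, quantise to integers, split into 4x4 blocks, extract bit planes."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), wavelet)
    band = cA                                   # one sub-band, for illustration
    q = np.clip(np.round(np.abs(band)), 0, 2 ** n_bits - 1).astype(np.uint16)
    h, w = (q.shape[0] // block) * block, (q.shape[1] // block) * block
    q = q[:h, :w]
    blocks = (q.reshape(h // block, block, w // block, block)
                .swapaxes(1, 2)                 # grid of 4x4 blocks
                .reshape(-1, block, block))
    # bitplanes[b, k] is the b-th bit plane (MSB first) of the k-th 4x4 block;
    # in BPS every block's planes can be processed by separate passes in parallel.
    bitplanes = np.stack([(blocks >> b) & 1 for b in range(n_bits - 1, -1, -1)])
    return blocks, bitplanes

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64))   # stand-in for a denoised medical image
    blocks, planes = to_bitplane_blocks(img)
    print(blocks.shape, planes.shape)           # (64, 4, 4) (8, 64, 4, 4)
```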
Citations: 1