Augmenting learning-experiences in the real world with digital technologies
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844210
H. Ogata
One of the challenges of CSUL (Computer Supported Ubiquitous Learning) research is capturing what learners have learned together with the contextual data, and reminding learners of it at the right place and time. This paper proposes a ubiquitous learning log system called SCROLL (System for Capturing and Reminding Of Learning Log). A Ubiquitous Learning Log (ULL) is defined as a digital record of what learners have learned in daily life using ubiquitous technologies. SCROLL allows learners to log their learning experiences with photos, audio, video, location, QR codes, RFID tags, and sensor data, and to share and reuse ULLs with others. Using SCROLL, they can receive personalized quizzes and answers to their questions. They can also navigate and become aware of their past ULLs through an augmented reality view. This paper also describes how SCROLL can be used in different contexts, such as learning analytics for Japanese language learning, seamless language learning, and museum learning.
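As an illustration of the kind of record SCROLL manages, here is a minimal Python sketch of a ULL entry and a location-based reminder check. The class and function names are hypothetical; the paper's actual data model and reminder logic are not reproduced.

```python
from dataclasses import dataclass, field
from datetime import datetime
from math import radians, sin, cos, asin, sqrt
from typing import List, Optional

@dataclass
class LearningLog:
    """One Ubiquitous Learning Log (ULL) entry: what was learned, plus context."""
    text: str                      # what the learner noted, e.g. a new word
    latitude: float
    longitude: float
    created_at: datetime
    photo_path: Optional[str] = None
    tags: List[str] = field(default_factory=list)   # e.g. QR-code / RFID identifiers

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres between two coordinates."""
    r = 6_371_000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def logs_to_remind(logs, lat, lon, radius_m=100):
    """Return past logs recorded near the learner's current position."""
    return [log for log in logs if distance_m(log.latitude, log.longitude, lat, lon) <= radius_m]
```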
{"title":"Augmenting learning-experiences in the real world with digital technologies","authors":"H. Ogata","doi":"10.1109/ICRTIT.2013.6844210","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844210","url":null,"abstract":"One of the challenges of CSUL (Computer Supported Ubiquitous Learning) research is capturing what learners have learned with the contextual data, and reminding the learners of it in the right place and the right time. This paper proposes a ubiquitous learning log system called SCROLL (System for Capturing and Reminding Of Learning Log). Ubiquitous Learning Log (ULL) is defined as a digital record of what learners have learned in the daily life using ubiquitous technologies. It allows the learners to log their learning experiences with photos, audios, videos, location, QR-code, RFID tag, and sensor data, and to share and to reuse ULL with others. Using SCROLL, they can receive personalized quizzes and answers for their questions. Also, they can navigate and be aware of their past ULLs supported by augmented reality view. This paper also describes how SCROLL can be used in different contexts such as learning analytics for Japanese language learning, seamless language learning, and museum.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130460933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cost aware task scheduling and core mapping on Network-on-Chip topology using Firefly algorithm
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844278
S. Umamaheswari, K. I. Kirthiga, B. Abinaya, D. Ashwin
An optimal Network-on-Chip topology is generated with reduced area and power consumption. The Firefly algorithm is used for the optimal mapping of every Intellectual Property core in a specific application. The method incorporates multiple objectives subject to constraints based on the information available in the Communication Task Graph. The paper proceeds in two phases: in the first phase the tasks are mapped onto the processors, and in the second phase the processors are mapped onto the network tiles.
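For intuition, the sketch below shows the kind of communication-cost objective such a mapping minimizes on a 2D mesh, with a simple swap-based search standing in for the firefly moves. The function names, the XY-hop-count cost and the single-objective simplification are assumptions, not the paper's formulation.

```python
import random

def mesh_distance(tile_a, tile_b, mesh_width):
    """Hop count between two tiles of a 2D mesh (XY routing)."""
    ax, ay = tile_a % mesh_width, tile_a // mesh_width
    bx, by = tile_b % mesh_width, tile_b // mesh_width
    return abs(ax - bx) + abs(ay - by)

def communication_cost(mapping, ctg_edges, mesh_width):
    """Cost of a core-to-tile mapping: sum of (traffic volume x hop count)
    over the edges (src_core, dst_core, volume) of the Communication Task Graph."""
    return sum(vol * mesh_distance(mapping[src], mapping[dst], mesh_width)
               for src, dst, vol in ctg_edges)

def swap_search(ctg_edges, n_cores, mesh_width, iters=1000, seed=0):
    """Stand-in for the firefly moves: keep one mapping and accept tile swaps
    that do not increase the cost (the firefly algorithm instead moves a whole
    population of mappings toward brighter, i.e. lower-cost, ones).
    Assumes the mesh has exactly n_cores tiles."""
    rng = random.Random(seed)
    mapping = list(range(n_cores))
    rng.shuffle(mapping)
    best = communication_cost(mapping, ctg_edges, mesh_width)
    for _ in range(iters):
        i, j = rng.sample(range(n_cores), 2)
        mapping[i], mapping[j] = mapping[j], mapping[i]
        cost = communication_cost(mapping, ctg_edges, mesh_width)
        if cost <= best:
            best = cost
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # undo the swap
    return mapping, best
```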
{"title":"Cost aware task scheduling and core mapping on Network-on-Chip topology using Firefly algorithm","authors":"S. Umamaheswari, K. I. Kirthiga, B. Abinaya, D. Ashwin","doi":"10.1109/ICRTIT.2013.6844278","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844278","url":null,"abstract":"An optimal Network on Chip topology is generated with reduced area and power consumption. The Firefly algorithm is used for the optimal mapping of each and every Intellectual Property core in a specific application. This method incorporates multiple objectives subject to some constraints based on the information available in the Communication Task Graph. The paper proceeds with two phases. In the first phase the tasks are mapped on the processors and in the second phase the processors are mapped on the network tiles.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134054241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selecting best spectrum using multispectral palm texture
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844204
M. Maheswari, S. Ancy, G. Suresh
Multispectral palm print is one of the most reliable and distinctive biometrics. Multispectral imaging (MSI) offers faster acquisition times and better image quality than conventional imaging. The advantages of the proposed method include better hygiene and higher verification performance. We propose a Local Binary Pattern (LBP) based histogram representation for multispectral palm prints and use it to choose the best spectrum for authentication. The central part of the palm print image is resized to 180 × 180 pixels and divided into non-overlapping sub-images, whose size varies from 2×2 to 90×90 pixels. A histogram is obtained for each block, and its values are used for comparison. A total of 36 images per person are taken from a standard database. The training set is prepared from 2 images per spectrum; results are checked against the remaining images in authentication mode and reported as the genuine acceptance rate (%). Most palm print recognition systems use white light to acquire images. This study analyzes palm print recognition performance under six different illuminations, including white light. Experimental results on a large database show that white light is not the optimal illumination: 700 nm light achieves higher palm print recognition accuracy than white light, with a 98% recognition rate in authentication mode. The experiment was conducted for six spectra: 460, 630, 700, 850, and 940 nm, and white light. We use the CASIA-MS-Palmprint V1 database of 7200 images collected by the Chinese Academy of Sciences' Institute of Automation (CASIA).
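A minimal sketch of the generic LBP block-histogram feature the abstract describes, assuming a basic 8-neighbour LBP and one 256-bin histogram per non-overlapping block; the paper's exact LBP variant, block sizes and matching scheme are not reproduced.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each pixel becomes an 8-bit code comparing its
    neighbours against the centre pixel (border pixels are skipped)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # neighbours in a fixed clockwise order starting at the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def block_histograms(codes, block):
    """Split the LBP code image into non-overlapping block x block tiles and
    return one normalised 256-bin histogram per tile, concatenated."""
    h, w = codes.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = codes[y:y + block, x:x + block]
            hist, _ = np.histogram(tile, bins=256, range=(0, 256))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)
```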
{"title":"Selecting best spectrum using multispectral palm texture","authors":"M. Maheswari, S. Ancy, G. Suresh","doi":"10.1109/ICRTIT.2013.6844204","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844204","url":null,"abstract":"Multispectral palm print is one of the most reliable and unique Biometric. MSI have faster acquisition time and better quality images than normal images. The advantages of the proposed method include better hygiene and higher verification performance. In this we proposed Local binary Pattern (LBP) based histogram for multispectral palm print representation and to choose the best spectrum for authentication. Here the central part of the palm print image is resized to the size of 180 × 180 and divided into non overlapping sub-images. The size of the sub-image various from 2×2 pixels to 90×90 pixels. The histogram is obtained for each block and the values are used for comparison. Totally 36 images per person are taken from standard database available. Training set is prepared with the help of 2 images from each spectrum. Results are checked against remaining images in authentication mode. Results are represented in terms of Genuine acceptance rate(%). Most of the palm print recognition systems use white light to acquire Images. This study analyzes the palm print recognition performance under six different illuminations, including the white light. The experimental results with a large database show that white light is not the optimal illumination, while 700nm light could achieve higher palm print recognition accuracy than the white light. In authentication mode 98% recognition rate is obtained for the spectrum 700nm. The experiment was conducted for six spectrums like 460,630,700,850,940nm, White Light. We use the CASIA-MS-Palmprint V1 database of size 7200 images collected by the Chinese Academy of Sciences' Institute of Automation (CASIA).","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134324481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing fuzzy search in XML using efficient trie indexing structure
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844253
S. Chandragandhi, L. Nithya
In a traditional keyword-based search system over XML data, a user submits a keyword query to the system and retrieves relevant answers, even when the user has only limited knowledge about the data. Our proposed method provides the following features: 1) Search as you type: it extends autocomplete by supporting queries with multiple keywords over XML data. 2) Fuzzy: it provides high-quality answers whose keywords approximately match the query keywords. 3) Efficient: index structures reduce the search time, and an effective ranking technique identifies high-quality results. The method achieves high search efficiency and result quality.
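For illustration, a small sketch of a keyword trie with prefix completion plus the kind of edit-distance check a fuzzy, search-as-you-type index relies on. This is a generic implementation under those assumptions, not the paper's trie indexing structure.

```python
class TrieNode:
    __slots__ = ("children", "is_word")
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    """Keyword trie supporting search-as-you-type prefix completion."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """Return all indexed keywords starting with the typed prefix."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        out, stack = [], [(node, prefix)]
        while stack:
            cur, word = stack.pop()
            if cur.is_word:
                out.append(word)
            for ch, child in cur.children.items():
                stack.append((child, word + ch))
        return out

def within_edit_distance(a, b, k=1):
    """Dynamic-programming edit distance check used for the fuzzy match step."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return prev[n] <= k
```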
{"title":"Optimizing fuzzy search in XML using efficient trie indexing structure","authors":"S. Chandragandhi, L. Nithya","doi":"10.1109/ICRTIT.2013.6844253","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844253","url":null,"abstract":"In a traditional keyword based search system over XML data, a user submit keyword query to the system and retrieves relevant answers. In keyword search system where the user has limited knowledge about the data. Our proposed method provides the following features: 1) Search as you type: It extends Auto complete by supporting queries with multiple keywords in XML data. 2) Fuzzy: It provides high-quality answers that have keywords matching query keywords approximately. 3) Efficient: An index structures can reduce the searching time. An effective ranking technique can identifies high quality results. This method achieves high search efficiency and result quality.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"146 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128919661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A secured cloud storage technique to improve security in cloud infrastructure
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844187
M. S. Kumar, M. Kumar
Security in cloud computing is considered to be the most challenging research domain, where most solutions remain vulnerable. Because data in the cloud is available to every other cloud component, it is difficult to protect from external sources. The risk of malicious insiders in the cloud and the failure of cloud services have received strong attention from many companies. Security ensures confidentiality, integrity and availability, and complements reliability. This work proposes a new system in which data can be stored with high security using a Storage Efficient Secret Sharing (SESS) algorithm implemented using Shamir's Secret Sharing (SSS) approach. Data is split into K parts, and because every partition is needed to reconstruct the original data, the algorithm is computationally strong against attack. Implementing the SSS approach for application processing is computationally inefficient. This paper presents a detailed implementation of SESS using the SSS approach with a Dynamic Software Module (DSM) to secure data stored in a cloud environment and improve security in cloud infrastructure.
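Since the scheme builds on Shamir's Secret Sharing, a compact textbook sketch of (k, n) sharing over a prime field may help. It shows only the split and the Lagrange reconstruction, not the storage-efficient SESS variant or the Dynamic Software Module; the prime and the integer-secret assumption are illustrative.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for small integer secrets

def split_secret(secret, n_shares, threshold, prime=PRIME, seed=None):
    """Shamir (k, n) sharing: embed the secret as the constant term of a
    random degree (k-1) polynomial and hand out n points of that polynomial."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(prime) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

# Any `threshold` of the shares recover the secret; fewer reveal nothing.
shares = split_secret(123456789, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == 123456789
```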
{"title":"A secured cloud storage technique to improve security in cloud infrastructure","authors":"M. S. Kumar, M. Kumar","doi":"10.1109/ICRTIT.2013.6844187","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844187","url":null,"abstract":"Security in cloud computing is considered to be the most challenging research domain where most of the solutions remain vulnerable. Availability of data in cloud to every other components of cloud made it difficult to protect it from external sources. The risk of malicious insiders in cloud and failing of cloud services have received a strong attention from many companies. Security ensures confidentiality, integrity and availability. Security has the characteristics of a complement to reliability. This work proposes a new system where data can be stored with high security using Storage Efficient Secret Sharing Algorithm (SESS) implemented using Shamir's Secret Sharing (SSS) approach. This approach ensures that security for data can be achieved by splitting data into K number of parts and need of every partition of data to construct original data proves that this algorithm is computationally strong against any attack. Implementing SSS approach for application processing, is computationally inefficient. This paper presents a detailed implementation of SESS using SSS approach with Dynamic Software Module (DSM) to secure data stored in cloud environment to improve security in cloud infrastructure.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130761077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of semantic role labeling based on lexical features using conditional random fields and support vector machine
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844179
K. Ravidhaa, S. Meena, R. S. Milton
The main objective of this paper is to identify the semantic roles of arguments in a sentence based on lexicalized features, even when little semantic information is available. The semantic role labeling (SRL) task involves identifying which groups of words act as arguments to a given predicate. These arguments must be labeled with their role with respect to the predicate, indicating how the proposition should be semantically interpreted. The approach focuses on improving SRL by adding similar words and selectional preferences to the existing lexical features, thereby avoiding the data sparsity problem. Adding richer lexical information can improve the SRL task even when very little syntactic knowledge is available in the input sentence. We analyze the performance of SRL systems that use a probabilistic graphical model (Conditional Random Fields) and a machine learning model (Support Vector Machines). The statistical models are trained on the CoNLL-2004 Shared Task training data.
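As an illustration of lexicalized features augmented with similar words, a toy feature extractor is sketched below. The feature templates and the `similar_words` resource are hypothetical, not the paper's feature set.

```python
def srl_features(tokens, pos_tags, pred_idx, arg_idx, similar_words=None):
    """Lexicalised features for one (predicate, candidate-argument) pair.
    `similar_words` maps a word to distributionally similar words, the kind of
    extra lexical evidence used here to ease data sparsity."""
    word, pred = tokens[arg_idx], tokens[pred_idx]
    feats = {
        "word=" + word.lower(): 1,
        "pos=" + pos_tags[arg_idx]: 1,
        "pred=" + pred.lower(): 1,
        "pred_pos=" + pos_tags[pred_idx]: 1,
        "position=" + ("before" if arg_idx < pred_idx else "after"): 1,
        "distance=%d" % min(abs(arg_idx - pred_idx), 5): 1,
    }
    for sim in (similar_words or {}).get(word.lower(), [])[:3]:
        feats["sim_word=" + sim] = 1  # back-off feature for rare or unseen words
    return feats

# e.g. srl_features(["He", "opened", "the", "door"], ["PRP", "VBD", "DT", "NN"], 1, 3,
#                   similar_words={"door": ["gate", "window"]})
```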
{"title":"Evaluation of semantic role labeling based on lexical features using conditional random fields and support vector machine","authors":"K. Ravidhaa, S. Meena, R. S. Milton","doi":"10.1109/ICRTIT.2013.6844179","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844179","url":null,"abstract":"The main objective of this paper is to identify the semantic roles of arguments in a sentence based on lexicalized features even if less semantic information is available. The semantic role labeling task (SRL) involves identifying which groups of words act as arguments to a given predicate. These arguments must be labeled with their role with respect to the predicate, indicating how the proposition should be semantically interpreted. The approach mainly focuses on improving the task of SRL by adding the similar words and selectional preferences to the existing lexical features, thereby avoiding data sparsity problem. Addition of richer lexical information can improve SRL task even when very little syntactic knowledge is available in the input sentence. We analyze the performance of SRL which use a probabilistic graphical model (Conditional Random Field) and a machine learning model (Support Vector Machines). The statistical modelling is trained by CONLL-2004 Shared Task training data.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133190201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating relevant paths using keyword search on compact XML
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844222
S. Meenakshi, R. Senthilkumar
The management of XML data has always been a popular research issue. A simple yet effective way to search an XML database is keyword search. In existing methods, the user has to compose a query with which the relevant answers can be retrieved, which requires prior knowledge about the data. Several approaches have been proposed to overcome these issues. In this paper, two challenges of keyword search over XML documents are addressed: 1) how to retrieve the top-k answers that best match the semantics of the keyword query, and 2) how to identify the relevant paths for the keyword query. To identify relevant answers over XML data streams, Compact Lowest Common Ancestors (CLCAs) are used. We use a compact storage structure (QUICX), which is efficient in both compression and storage and provides indexing features for efficient querying. Experiments were carried out using benchmark datasets, namely a geographical dataset (Mondial) and a bibliographic dataset (DBLP). To demonstrate the effectiveness of the proposed system, it is compared against the existing system with respect to retrieval time, and it achieves about a 63.3% improvement over keyword search on XML documents in terms of retrieval time.
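A small sketch of the lowest-common-ancestor computation that CLCA-style keyword search builds on, assuming nodes are identified by Dewey labels; the compactness filter and the QUICX storage layer are not reproduced, and the index layout is illustrative.

```python
from itertools import product

def dewey_lca(labels):
    """Lowest common ancestor of XML nodes given as Dewey labels,
    e.g. (1, 2, 3) is the 3rd child of the 2nd child of the root."""
    prefix = []
    for level in zip(*labels):
        if all(x == level[0] for x in level):
            prefix.append(level[0])
        else:
            break
    return tuple(prefix)

def keyword_lcas(inverted_index, keywords):
    """For each combination of one matching node per keyword, compute the LCA;
    a compact-LCA style filter would then keep only the tightest of these."""
    node_lists = [inverted_index[k] for k in keywords]
    return {dewey_lca(combo) for combo in product(*node_lists)}

# index = {"xml": [(1, 2, 1)], "search": [(1, 2, 3, 1), (1, 5)]}
# keyword_lcas(index, ["xml", "search"]) -> {(1, 2), (1,)}
```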
{"title":"Generating relevant paths using keyword search on compact XML","authors":"S. Meenakshi, R. Senthilkumar","doi":"10.1109/ICRTIT.2013.6844222","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844222","url":null,"abstract":"The management of XML data has always been a popular research issue. A simple yet effective way to search in XML database is keyword search. In existing methods, the user has to compose query with which the relevant answers can be retrieved. These methods require the user to have prior knowledge about the data. To overcome the issues arising out of these methods, several approaches have been proposed. In this paper, Two challenges for searching the keyword in XML document has been proposed; 1) how to retrieve high answer semantics matches of the keyword queries (Top-k) 2) how to identify the relevant path for the keyword queries. To identify relevant answers over XML data streams, the Compact Lowest Common Ancestors (CLCAs) are used. We use a compact storage structure (QUICX) system which is efficient both in compression and storage with indexing features for efficient querying. Experiments were carried out using benchmark datasets such as geographical dataset (mondial) and bibliographic dataset (DBLP). In order to prove the effectiveness of the proposed system, it is compared against the existing system with respect to time taken for retrieval and the proposed system achieves about 63.3% of improvement over the keyword search in XML document in terms of time taken for retrieval.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124599855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognitive inspired optimal routing of OLSR in VANET
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844217
Thompson Stephan, K. Karuppanan
Vehicular Ad Hoc Networks (VANETs) evolved as a result of recent advances in wireless technologies. In such networks, limited signal coverage and the high mobility of the nodes cause frequent topology changes. The scope of reactive routing protocols in VANETs is limited by this topology instability, whereas proactive routing protocols such as OLSR, designed for MANETs, are also unable to meet the broad range of data services envisioned for VANETs. This is due to the inability of the existing OLSR protocol to sense channel conditions and predict channel overload. To improve routing efficiency, the network needs some cognitive capacity to choose an optimal path accounting for both link-state and channel information, thereby overcoming the problem of channel incapacity. This paper attempts to enhance OLSR routing with the help of a cognitive process that obtains and stores knowledge about routing strategies in order to select the most suitable route and an appropriate channel for transmission.
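For intuition, a toy scoring function combining per-hop link state (ETX) with observed channel load is sketched below. The weighting, the metric names and the scaling are assumptions for illustration, not the cognitive OLSR design of the paper.

```python
def link_score(etx, channel_load, alpha=0.6):
    """Combine link quality (ETX, lower is better) with observed channel load
    in [0, 1]; alpha weights link state against channel information."""
    return alpha * etx + (1 - alpha) * channel_load * 10  # scale load into the ETX range

def best_route(routes):
    """Pick the candidate route with the lowest cumulative score.
    Each route is a list of (etx, channel_load) tuples, one per hop."""
    def cost(route):
        return sum(link_score(etx, load) for etx, load in route)
    return min(routes, key=cost)

# routes = [[(1.2, 0.8), (1.1, 0.7)], [(1.5, 0.2), (1.4, 0.1)]]
# best_route(routes) -> the second route: slightly worse ETX but far less loaded channels
```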
{"title":"Cognitive inspired optimal routing of OLSR in VANET","authors":"Thompson Stephan, K. Karuppanan","doi":"10.1109/ICRTIT.2013.6844217","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844217","url":null,"abstract":"Vehicular Ad Hoc Networks (VANETs) evolved as a result of recent advances in wireless technologies. In such networks, the limitation of signal coverage and the high mobility of the nodes generate frequent changes in topology. The scope of reactive routing protocol in VANET is limited due to the topology instability of VANET whereas proactive routing protocol such as OLSR, designed for MANET is also unable to meet the broad range of data services envisioned for VANET. It is due to the inability of existing OLSR protocol to sense channel conditions and predict channel overload. In order to improve the routing efficiency, the network needs to possess some cognitive capacity to choose an optimal path accounting both link state and channel information and thereby overcoming the problem of channel incapacity. This paper attempts to enhance OLSR routing with help of cognitive process that involve in obtaining and storing knowledge on routing strategies to opt for the most suitable route and also appropriate channel for transmission.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124827525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software components prioritization using OCL formal specification for effective testing
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844288
A. Jalila, D. Mala
In soft real-time system development, minimizing testing effort is a challenging task. Earlier research has shown that a small percentage of components is often responsible for most of the faults reported at the later stages of software development. Due to time and other resource constraints, fault-prone components may be neglected during testing, which compromises software quality. There is therefore a need to identify fault-prone components of the system based on data collected at the early stages of software development. The major focus of the proposed methodology is to identify and prioritize fault-prone components of the system using its OCL formal specifications. This approach enables testers to devote more effort to fault-prone components than to non-fault-prone components. The proposed methodology is illustrated on three case study applications.
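As a rough illustration of specification-driven prioritization, the sketch below ranks components by a toy risk score computed from counts of OCL invariants and pre/post-conditions. The metric names, weights and example values are hypothetical, not the paper's model.

```python
def fault_proneness_score(component):
    """Toy risk score from specification-level metrics: components whose
    operations carry many pre/post-conditions and invariants are assumed to
    concentrate more faults and therefore get tested first."""
    return (2 * component["invariants"]
            + component["preconditions"]
            + component["postconditions"]
            + 0.5 * component["operations"])

def prioritise(components):
    """Return component names ordered by descending risk score."""
    return sorted(components, key=lambda name: fault_proneness_score(components[name]), reverse=True)

specs = {
    "Account": {"invariants": 4, "preconditions": 6, "postconditions": 5, "operations": 8},
    "Report":  {"invariants": 1, "preconditions": 2, "postconditions": 2, "operations": 5},
}
# prioritise(specs) -> ["Account", "Report"]: spend more testing effort on Account first
```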
{"title":"Software components prioritization using OCL formal specification for effective testing","authors":"A. Jalila, D. Mala","doi":"10.1109/ICRTIT.2013.6844288","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844288","url":null,"abstract":"In soft real time system development, testing effort minimization is a challenging task. Earlier research has shown that often a small percentage of components are responsible for most of the faults reported at the later stages of software development. Due to the time and other resource constraints, fault-prone components are ignored during testing activity which leads to compromises on software quality. Thus there is a need to identify fault-prone components of the system based on the data collected at the early stages of software development. The major focus of the proposed methodology is to identify and prioritize fault-prone components of the system using its OCL formal specifications. This approach enables testers to distribute more effort on fault-prone components than non fault-prone components of the system. The proposed methodology is illustrated based on three case study applications.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122792252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modified BPS algorithm based on shearlet transform for noisy images
Pub Date: 2013-07-25, DOI: 10.1109/ICRTIT.2013.6844237
M. Nishanthi, J. J. Nayahi
Most medical and nuclear images contain visual noise, which prevents the images from being examined properly. There is therefore a need to remove the noise in order to provide better-quality images along with better compression efficiency for efficient storage and transmission. Block-Based Pass-Parallel SPIHT (BPS) is widely used for compression because of its high processing speed: the wavelet-transformed image is decomposed into 4×4 bit-blocks and the decomposed blocks are encoded simultaneously across all bit planes, so the speed is very high. However, its major drawback is a slight degradation in PSNR and visual quality. To overcome this drawback, a modified BPS algorithm is proposed which replaces the wavelet with the shearlet, because the shearlet provides multi-directional information and can also detect geometrical features such as edges. The LLSURE technique is applied before the transform to remove the noise induced in the image; it is preferred because of its strong edge-preserving capability. Experimental results demonstrate the effectiveness of the LLSURE filter and the shearlet transform in the BPS algorithm, and show that the PSNR is particularly favourable for images corrupted with Gaussian noise compared to other noisy images.
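Since the comparison hinges on PSNR, here is a short sketch of the standard PSNR computation used to score a denoised and decompressed image against the original; the function name and 8-bit peak value are illustrative assumptions.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original image and the
    denoised / decompressed result; higher means less distortion."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# psnr(clean, denoised_then_decoded) is the figure of merit used to compare the
# wavelet-based BPS coder against the shearlet + LLSURE variant.
```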
{"title":"Modified BPS algorithm based on shearlet transform for noisy images","authors":"M. Nishanthi, J. J. Nayahi","doi":"10.1109/ICRTIT.2013.6844237","DOIUrl":"https://doi.org/10.1109/ICRTIT.2013.6844237","url":null,"abstract":"Most of the medical and nuclear images contain visual noise. Due to the presence of noise, the images can't be examined properly. So, there is a necessity to avoid the noise in order to provide better quality images along with the better compression efficiency for efficient storage and transmission. Block Based Pass Parallel SPIHT (BPS) is widely used for compression because of its high processing speed. It is possible by decomposing the wavelet transformed image into 4×4 bit-blocks and the decomposed blocks are encoded simultaneously in all the bit planes hence the speed is very high. However, the major drawback is the slight degradation in the PSNR value and visual quality. To overcome this drawback, a modified BPS algorithm is proposed which replaces wavelet with Shearlet because shearlet provides multi directional information and it also used to detect the geometrical features like edges. LLSURE technique is appliedbefore transformation to remove the noise induced in the image. It is preferred because of its high edge preserving capability. Experimental results demonstrate the effectiveness of the LLSURE filter and the shearlet transform in the BPS algorithm. It shows that the PSNRvalue is very effective for images corrupted with Gaussian noise when compared to other noisy images.","PeriodicalId":113531,"journal":{"name":"2013 International Conference on Recent Trends in Information Technology (ICRTIT)","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133769479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}