
Latest publications: 2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)

Automatic diabetic retinopathy detection using Gabor filter with local entropy thresholding
S. K. Kuri
Diabetic retinopathy (DR) is a complication of diabetes and a leading cause of vision loss. In DR detection, poor-quality retinal images make the analysis more difficult for the ophthalmologist. Automatic segmentation of retinal blood vessels helps ophthalmologists screen larger populations. This paper presents a new automatic method for extracting blood vessels with high accuracy. The algorithm combines a Gabor filter with local entropy thresholding to extract vessels under various normal and abnormal conditions. The frequency and orientation of the Gabor filter are tuned to match those of the blood vessels to be enhanced in the green-channel image. Vessel pixels are then classified by a local entropy thresholding technique. The performance of the proposed algorithm is analysed in MATLAB on the DRIVE database.
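The two ingredients of this pipeline are easy to sketch: a tuned Gabor kernel and an entropy-based threshold. The sketch below (plain NumPy, illustrative parameters) uses a global Kapur-style maximum-entropy threshold as a stand-in for the paper's local entropy variant, applied to a toy "green channel" image with a vessel-like stripe:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    # Real part of a Gabor kernel tuned to spatial frequency `freq`
    # (cycles/pixel) and orientation `theta` (radians); the parameter
    # values here are illustrative, not the paper's.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def entropy_threshold(img, bins=64):
    # Global Kapur-style maximum-entropy threshold for an image in [0, 1]
    # (a stand-in for the paper's *local* entropy thresholding).
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        lo, hi = p[:t][p[:t] > 0] / p0, p[t:][p[t:] > 0] / p1
        h = -np.sum(lo * np.log(lo)) - np.sum(hi * np.log(hi))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

# Toy "green channel": dark background plus a brighter vessel-like stripe.
rng = np.random.default_rng(0)
img = 0.2 + 0.02 * rng.standard_normal((64, 64))
img[:, 30:34] += 0.5
img = np.clip(img, 0.0, 1.0)
t = entropy_threshold(img)
vessels = img > t
```

A full implementation would typically apply a bank of kernels at several orientations and keep the maximum response before thresholding.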
DOI: 10.1109/ReTIS.2015.7232914
Citations: 9
Improving graph based multidocument text summarization using an enhanced sentence similarity measure
K. Sarkar, Khushbu Saraf, Avishikta Ghosh
Multi-document summarization is the process of producing a single summary from a set of related documents collected from heterogeneous sources. Since the documents may contain redundant information, the performance of a multi-document summarization system depends heavily on the sentence similarity measure used to remove redundant sentences from the summary. For graph-based multi-document summarization, where the existence of an edge between a pair of sentences is determined by how similar the two sentences are, the sentence similarity measure also plays an important role. This paper presents an enhanced method for computing sentence similarity, aimed at improving multi-document summarization performance. Experiments on two different datasets show the effectiveness of the proposed sentence similarity measure in improving the performance of a graph-based multi-document summarization system.
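The abstract does not spell out the enhanced similarity measure, but the graph construction it feeds is simple to sketch: sentences are nodes, edges carry a similarity weight, and central sentences are selected. A minimal version with plain bag-of-words cosine similarity (a stand-in for the enhanced measure) and degree-centrality scoring:

```python
from collections import Counter

def cosine_sim(s1, s2):
    # Bag-of-words cosine similarity between two tokenized sentences
    # (a plain stand-in for the paper's enhanced similarity measure).
    c1, c2 = Counter(s1), Counter(s2)
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    n1 = sum(v * v for v in c1.values()) ** 0.5
    n2 = sum(v * v for v in c2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

def rank_sentences(sentences, sim_threshold=0.1):
    # Nodes are sentences; an edge exists when the similarity clears the
    # threshold; each sentence is scored by the sum of its edge weights
    # (a degree-centrality stand-in for PageRank-style ranking).
    toks = [s.lower().split() for s in sentences]
    n = len(toks)
    scores = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                w = cosine_sim(toks[i], toks[j])
                if w >= sim_threshold:
                    scores[i] += w
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

sentences = ["the cat sat on the mat",
             "a cat was sitting on the mat",
             "stock prices fell sharply today"]
order = rank_sentences(sentences)  # most central sentence index first
```

Redundancy removal would then skip any candidate sentence too similar to one already in the summary, using the same measure.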
DOI: 10.1109/ReTIS.2015.7232905
Citations: 17
Detection of macula and fovea for disease analysis in color fundus images
Dharitri Deka, J. Medhi, S. Nirmala
For the diagnosis of several retinal diseases, such as diabetic retinopathy (DR), diabetic macular edema (DME) and age-related macular degeneration (AMD), detection of the macula and fovea is considered an important prerequisite. If an abnormality such as haemorrhages or exudates falls over the macula, vision is severely affected, and at an advanced stage the patient may become blind. This paper presents a new approach for detecting the macula and fovea: the macula is localized by investigating the structure of the blood vessels (BV) in the macular region. The proposed method is tested on both normal and diseased images from the DRIVE, MESSIDOR, DIARETDB1, HRF and STARE databases.
DOI: 10.1109/ReTIS.2015.7232883
Citations: 22
An efficient DCT based image watermarking using RGB color space
Priyanka, Sushila Maheshkar
With the advent of various image processing tools, images can easily be counterfeited or corrupted. Digital image watermarking has emerged as an important tool for protecting and authenticating digital multimedia content. This paper presents a robust DCT-based blind digital watermarking scheme for still color images. The RGB color space is used to decompose the color cover image into three channels, each of which can be treated as a grayscale image. Experimental results show that the scheme maintains good visual quality even after attacks. The proposed technique is better than available counterparts in terms of payload and imperceptibility.
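The abstract does not state which channel carries the mark or the exact embedding rule; the sketch below treats one channel (assumed blue here, a common choice) as a grayscale plane and uses quantization-index modulation of a single mid-band DCT coefficient of an 8x8 block as an illustrative blind embedding:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix: D @ block @ D.T gives the 2-D DCT.
    k, i = np.mgrid[0:n, 0:n]
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    D[0, :] = np.sqrt(1 / n)
    return D

def embed_bit(block, bit, pos=(3, 2), delta=16.0):
    # Quantization-index modulation on one mid-band DCT coefficient.
    # The paper's exact embedding rule is not given in the abstract;
    # this QIM rule is an illustrative stand-in.
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T
    q = np.round(C[pos] / delta)
    if int(q) % 2 != bit:
        q += 1                      # force coefficient parity to match bit
    C[pos] = q * delta
    return D.T @ C @ D              # inverse DCT back to pixel domain

def extract_bit(block, pos=(3, 2), delta=16.0):
    # Blind extraction: re-quantize the coefficient and read its parity.
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T
    return int(np.round(C[pos] / delta)) % 2

rng = np.random.default_rng(1)
blue = rng.uniform(0, 255, (8, 8))   # one 8x8 block of the blue channel
marked = embed_bit(blue, 1)
```

A mid-band position is chosen because low-band changes are visible and high-band changes do not survive compression; the quantization step `delta` trades robustness against imperceptibility.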
DOI: 10.1109/ReTIS.2015.7232881
Citations: 6
A new adaptive Cuckoo search algorithm
N. M. Kumar, Maheshwari Rashmita Nath, Aneesh Wunnava, Siddharth Sahany, Rutuparna Panda
This paper presents a new adaptive Cuckoo search (ACS) algorithm based on Cuckoo search (CS) for optimization. The main idea is to decide the step size adaptively from the fitness value, without using the Lévy distribution. A further goal is to improve performance in terms of both runtime and attainment of the global minimum. Evaluation of ACS on standard benchmark functions shows that the proposed algorithm converges to the best solution in less time than Cuckoo search.
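The abstract says the step size is derived from the fitness value instead of a Lévy flight, but does not give the formula. The skeleton below uses one plausible rule (a Gaussian step whose scale grows with a nest's distance from the current best fitness) on the sphere benchmark; the adaptation rule and all constants are illustrative, not the paper's:

```python
import numpy as np

def sphere(x):
    # Standard benchmark: global minimum 0 at the origin.
    return float(np.sum(x * x))

def adaptive_cuckoo_search(f, dim=5, n_nests=15, iters=400, pa=0.25, seed=0):
    # Cuckoo-search skeleton with a fitness-adaptive Gaussian step in
    # place of the Levy flight. Nests near the best fitness take small
    # refinement steps; poor nests take large exploratory steps.
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best, worst = fit.min(), fit.max()
        for i in range(n_nests):
            scale = 0.01 + (fit[i] - best) / (worst - best + 1e-12)
            trial = nests[i] + scale * rng.standard_normal(dim)
            j = rng.integers(n_nests)        # random nest to compare against
            if f(trial) < fit[j]:
                nests[j], fit[j] = trial, f(trial)
        n_bad = int(pa * n_nests)            # abandon a fraction pa of the
        worst_idx = np.argsort(fit)[::-1][:n_bad]   # worst nests
        nests[worst_idx] = rng.uniform(-5, 5, (n_bad, dim))
        fit[worst_idx] = [f(x) for x in nests[worst_idx]]
    return nests[np.argmin(fit)], float(fit.min())

x_best, f_best = adaptive_cuckoo_search(sphere)
```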
DOI: 10.1109/ReTIS.2015.7232842
Citations: 53
Hand gesture recognition of English alphabets using artificial neural network
Sourav Bhowmick, Sushant Kumar, Anurag Kumar
Human computer interaction (HCI) and sign language recognition (SLR), aimed at creating virtual reality and 3D gaming environments, helping deaf-and-mute people, and so on, make extensive use of hand gestures. Segmenting the hand from the other body parts and the background is the primary need of any hand-gesture-based application system, but gesture recognition systems are often plagued by segmentation problems and by issues such as co-articulation, movement epenthesis, and recognition of similar gestures. The principal objective of this paper is to address a few of these problems. We propose a method for recognizing isolated as well as continuous English alphabet gestures, a step towards helping and educating hearing- and speech-impaired people. The gestures are classified with an artificial neural network. The recognition rate (RR) for isolated gestures is 92.50%, while for continuous gestures it is 89.05% with a multilayer perceptron and 87.14% with a focused time-delay neural network. These results, when compared with other such systems in the literature, show the effectiveness of the system.
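As a stand-in for real gesture features, the sketch below trains a one-hidden-layer perceptron from scratch in plain NumPy on synthetic 10-D feature vectors for two gesture classes; the paper's features, layer sizes and training setup are not given in the abstract, so everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for gesture feature vectors: two well-separated
# 10-D classes (a real system would use hand-shape features).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 10)),
               rng.normal(+1.0, 0.5, (50, 10))])
y = np.array([0] * 50 + [1] * 50)

# One-hidden-layer perceptron trained with full-batch gradient descent.
W1 = rng.normal(0, 0.1, (10, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));  b2 = np.zeros(1)
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    err = p - y[:, None]                     # d(cross-entropy)/d(logit)
    W2g = h.T @ err / len(X); b2g = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # backprop through tanh
    W1g = X.T @ dh / len(X); b1g = dh.mean(0)
    W2 -= lr * W2g; b2 -= lr * b2g
    W1 -= lr * W1g; b1 -= lr * b1g

# Evaluate training accuracy with the final weights.
h = np.tanh(X @ W1 + b1)
p = 1 / (1 + np.exp(-(h @ W2 + b2)))
acc = float(((p[:, 0] > 0.5).astype(int) == y).mean())
```

For continuous gestures the paper additionally uses a focused time-delay network, which would replace the plain feature vector with a sliding window over the feature sequence.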
DOI: 10.1109/ReTIS.2015.7232913
Citations: 9
An efficient periodic web content recommendation based on web usage mining
Ravi Khatri, D. Gupta
Nowadays the use of the internet has increased tremendously, so providing information relevant to a user at a particular time is a very important task. Periodic web personalization is the process of recommending the most relevant information to users at the right time. In this paper we propose an improved personalized web recommender model that considers not only user-specific activities but also other website-related factors, such as the total number of visitors, the number of unique visitors, the number of users downloading data, the amount of data downloaded, the amount of data uploaded, and the number of advertisements for a particular URL, to provide a better result. The model uses the user's web access activities to extract usage behaviour and build a knowledge base; the knowledge base, together with the factors above, is then used to predict user-specific content. This advance computation of resources helps users access the required information more efficiently and effectively.
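The abstract lists the site-level factors but not how they are combined; one plausible realization is per-factor normalization followed by a weighted sum. All factor names and weights below are illustrative, not taken from the paper:

```python
# Hypothetical site-level factors drawn from the abstract; the weights
# are illustrative placeholders, not the paper's.
FACTORS = ("visitors", "unique_visitors", "downloaders",
           "bytes_downloaded", "bytes_uploaded", "ads")
WEIGHTS = {"visitors": 0.25, "unique_visitors": 0.25, "downloaders": 0.2,
           "bytes_downloaded": 0.15, "bytes_uploaded": 0.1, "ads": 0.05}

def rank_urls(url_stats):
    # Normalize each factor by its maximum across URLs, then combine
    # linearly; return URLs sorted from most to least recommendable.
    scores = {}
    for factor in FACTORS:
        hi = max(s[factor] for s in url_stats.values()) or 1
        for url, s in url_stats.items():
            scores[url] = scores.get(url, 0.0) + WEIGHTS[factor] * s[factor] / hi
    return sorted(scores, key=scores.get, reverse=True)

stats = {
    "/docs": dict(visitors=900, unique_visitors=700, downloaders=50,
                  bytes_downloaded=4e6, bytes_uploaded=1e5, ads=2),
    "/blog": dict(visitors=300, unique_visitors=250, downloaders=5,
                  bytes_downloaded=2e5, bytes_uploaded=1e4, ads=8),
}
ranking = rank_urls(stats)
```

In the full model these site-level scores would be blended with the per-user knowledge base mined from access logs.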
DOI: 10.1109/ReTIS.2015.7232866
Citations: 3
Fast and efficient compressive sensing for wideband Cognitive Radio systems
Naveen Kumar, Neetu Sood
This paper presents a Compressive Spectrum Sensing (CSS) technique for wideband Cognitive Radio (CR) systems that shortens the spectrum sensing interval. Fast and efficient CSS is used to detect the wideband spectrum: samples are taken at a sub-Nyquist rate, and signal acquisition terminates automatically once the samples are sufficient for the best spectral recovery. To improve sensing performance, we propose a new approach for choosing the sparsifying basis in CSS, based on the Empirical Wavelet Transform (EWT), which adapts to the processed signal spectrum. Simulation results show that the proposed fast and efficient EWT CSS scheme outperforms conventional Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) based schemes in terms of sensing time, detection probability, system throughput and robustness to noise.
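The sub-Nyquist acquisition-and-recovery loop behind any CSS scheme can be sketched with a generic sparse-recovery solver. The sketch below uses a random Gaussian sensing matrix and Orthogonal Matching Pursuit, a common compressive-sensing baseline, not the paper's EWT-based method, and assumes a signal that is sparse in the canonical basis:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily pick the k columns of A most
    # correlated with the residual, then least-squares fit on them.
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)  # k-sparse
A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # sub-Nyquist sensing matrix
y = A @ x_true                               # m << n linear measurements
x_hat = omp(A, y, k)                         # sparse recovery
```

The paper's contribution sits in the sensing matrix / sparsifying-basis pair (EWT in place of DFT or DCT) and in stopping acquisition as soon as recovery quality stabilizes.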
DOI: 10.1109/ReTIS.2015.7232858
Citations: 3
Multifractal detrended fluctuation analysis used to analyse EEG signals originating from different lobes of brain
S. Dey, Rushab Bhimani, A. Mazumder, Poulami Ghosh, D. Tibarewala
The electrical activity of the human brain subjected to puzzle and musical stimuli is recorded as EEG. The non-linear analysis method MFDFA was used to analyze the long-lasting effect of music on the resting state of the brain. The multifractal width and the Hurst exponent are used as indicators of multifractality in our analysis. The EEG data were also shuffled to find the origin of the multifractality (i.e. whether it is due to long-range correlation, a broad probability density, or both).
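MFDFA extends ordinary detrended fluctuation analysis with a q-order moment; its q = 2 (monofractal) core, whose log-log slope gives the Hurst exponent used in the abstract, can be sketched as follows (scales and signal length are illustrative):

```python
import numpy as np

def dfa_hurst(x, scales=(16, 32, 64, 128)):
    # DFA-1: integrate the signal, detrend it linearly in windows of size
    # s, and fit the slope of log F(s) vs log s; the slope estimates the
    # Hurst exponent (MFDFA generalizes this with a q-order moment).
    profile = np.cumsum(x - np.mean(x))
    log_s, log_f = [], []
    for s in scales:
        n_seg = len(profile) // s
        msq = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            msq.append(np.mean((seg - trend) ** 2))
        log_s.append(np.log(s))
        log_f.append(0.5 * np.log(np.mean(msq)))          # log RMS fluctuation
    slope, _ = np.polyfit(log_s, log_f, 1)
    return float(slope)

rng = np.random.default_rng(3)
h_white = dfa_hurst(rng.standard_normal(4096))   # uncorrelated noise: H near 0.5
```

Shuffling the data, as in the paper, destroys long-range correlations while preserving the amplitude distribution, so comparing the exponent before and after shuffling separates the two sources of multifractality.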
DOI: 10.1109/ReTIS.2015.7232916
Citations: 2
Discovering association rules partially devoid of dissociation by weighted confidence
Subrata Datta, Subrata Bose
Forming useful rules from frequent itemsets in a large database is a crucial task in association rule mining. Traditionally, association rules refer to associations between two frequent sets and are measured by their confidence. This approach concentrates on positive associations and therefore does not study the effect of dissociation and null transactions. Though the effect of dissociation has been studied, the impact of null transactions has generally been ignored. Some scholars have identified both positive and negative rules and thus studied the impact of null transactions, but there is no uniform treatment for including null transactions in either the positive or the negative category. We have tried to bridge these gaps. We establish a uniform approach to mining association rules that combines the effect of all kinds of transactions without categorizing them as positive or negative. We propose to identify the frequent sets by weighted support (in lieu of support) and to measure rules by weighted confidence (in lieu of confidence) for useful positive rule generation, taking care of the negativity through dissociation and a Null Transaction Impact Factor. We show that the weighted support and weighted confidence approach increases the chance of discovering rules that are less dissociated than those found with the traditional support-confidence framework, provided the same levels of minsupp and minconf are maintained in both cases.
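The quantities the abstract builds on are easy to count on a toy transaction set; the sketch below computes support, confidence, dissociation (X present but Y absent) and the null-transaction fraction (neither X nor Y present) for a rule X → Y. The paper's exact weighting formula combining these into weighted support and weighted confidence is not reproduced here:

```python
def rule_measures(transactions, X, Y):
    # Count the ingredients of the rule X -> Y over a list of
    # transactions, where each transaction is a set of items.
    n = len(transactions)
    both = sum(1 for t in transactions if X <= t and Y <= t)
    x_only = sum(1 for t in transactions if X <= t and not (Y <= t))
    null = sum(1 for t in transactions if not (X <= t) and not (Y <= t))
    supp_x = sum(1 for t in transactions if X <= t)
    return {
        "support": both / n,                       # P(X and Y)
        "confidence": both / supp_x if supp_x else 0.0,   # P(Y | X)
        "dissociation": x_only / n,                # P(X and not Y)
        "null_fraction": null / n,                 # P(not X and not Y)
    }

T = [{"bread", "milk"}, {"bread", "milk", "eggs"}, {"bread"},
     {"eggs"}, {"tea"}]
m = rule_measures(T, {"bread"}, {"milk"})
```

The weighted measures would discount confidence using the dissociation and null-transaction counts, so two rules with equal classical confidence can be ranked apart.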
DOI: 10.1109/ReTIS.2015.7232867
Citations: 12