
Latest publications from the 2015 Eighth International Conference on Contemporary Computing (IC3)

Leveraging probabilistic segmentation to document clustering
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346657
Arko Banerjee
This paper introduces a novel approach to document clustering by defining a representative-based document similarity model that performs probabilistic segmentation of documents into chunks. Frequently occurring chunks, considered representatives of the document set, may correspond to phrases or stems of true words. The representative-based document similarity model, containing a term-document matrix over the representatives, is a compact representation of the vector space model that improves the quality of document clustering over traditional methods.
Citations: 0
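The representative-based model described in the abstract can be sketched roughly as follows; the fixed-length character chunks (standing in for probabilistic segmentation), the chunk length, the number of representatives, and the toy corpus are all illustrative assumptions:

```python
from collections import Counter
from math import sqrt

def chunks(text, n=4):
    """Split a document into overlapping character n-gram chunks
    (a crude stand-in for the paper's probabilistic segmentation)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def representatives(docs, k=8, n=4):
    """Pick the k chunks that occur in the most documents as representatives."""
    counts = Counter(c for d in docs for c in set(chunks(d, n)))
    return [c for c, _ in counts.most_common(k)]

def doc_vector(doc, reps, n=4):
    """Term-document column: frequency of each representative in the doc."""
    c = Counter(chunks(doc, n))
    return [c[r] for r in reps]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["clustering documents by chunks",
        "document clustering with chunks",
        "wireless sensor networks"]
reps = representatives(docs)
vecs = [doc_vector(d, reps) for d in docs]
```

Documents would then be clustered on the cosine similarities between their representative vectors; the matrix over representatives is far smaller than one over all terms.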
Congestion control for self similar traffic in wireless sensor network
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346702
Arpan Kumar Dubey, Adwitiya Sinha
Wireless sensor networks (WSNs) have inspired many research domains in recent years. Congestion is a major issue in such networks, causing heavy losses in data transmission. It arises for several reasons, such as heavy traffic, link failure, and node failure. Various techniques have been developed to combat network congestion. In this paper, we propose a technique for predicting congestion before it happens and controlling the situation before it worsens. Congestion in the network is controlled by adjusting the traffic rate of the sources: source nodes change their transmission rate as soon as they receive the control signal. Our algorithm is developed especially for managing congestion created by self-similar traffic; the self-similarity in network traffic is simulated using a Pareto distribution. Congestion is detected by analyzing the buffer occupancy ratio of nodes. Simulation results show that our algorithm outperforms other existing techniques in terms of packet delivery ratio and average number of packets dropped.
Citations: 6
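A minimal sketch of the ingredients the abstract describes, assuming a simple threshold on buffer occupancy and a multiplicative back-off at the source (both illustrative choices, not the paper's actual control law):

```python
import random

def pareto_on_periods(alpha=1.5, xm=1.0, n=200, seed=42):
    """Heavy-tailed Pareto burst lengths via inverse-transform sampling;
    aggregating many on/off sources with such periods is a standard way
    to approximate self-similar traffic."""
    rng = random.Random(seed)
    return [xm / (1.0 - rng.random()) ** (1.0 / alpha) for _ in range(n)]

def congested(queue_len, buffer_size, threshold=0.8):
    """Flag congestion when the buffer occupancy ratio crosses a threshold
    (the 0.8 threshold is an illustrative assumption)."""
    return queue_len / buffer_size >= threshold

def adjusted_rate(rate, congestion_flag, backoff=0.5):
    """Source halves its sending rate on receiving a congestion signal."""
    return rate * backoff if congestion_flag else rate
```

In a full simulation the congestion flag would travel back to the sources as the control signal mentioned in the abstract.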
Hand written digit recognition system for South Indian languages using artificial neural networks
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346665
Leo Pauly, Rahul D. Raj, B. Paul
This paper presents a novel approach to recognizing handwritten digits of South Indian languages using artificial neural networks (ANN) and Histogram of Oriented Gradients (HOG) features. Documents containing the handwritten digits are optically scanned and segmented into individual images of isolated digits. HOG features are then extracted from these images and fed to the ANN for recognition. The system recognises the digits with an overall accuracy of 83.4%.
Citations: 6
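The feature-extraction half of the pipeline can be illustrated with a single-cell orientation histogram; real HOG adds a grid of cells and block normalisation, and the 8x8 synthetic edge image below is a stand-in for a scanned digit:

```python
import numpy as np

def hog_like_histogram(img, bins=9):
    """Magnitude-weighted histogram of gradient orientations over the whole
    image -- a single-cell simplification of HOG."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

# A vertical edge produces purely horizontal gradients (orientation ~0 deg),
# so all the histogram mass lands in the first bin.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = hog_like_histogram(img)
```

Vectors like `h` (one per cell, concatenated) would form the input layer of the ANN classifier.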
Reduction of congestion and signal waiting time
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346698
Palki Gupta, Lasit Pratap Singh, A. Khandelwal, Kavita Pandey
In the current scenario, vehicular ad-hoc networks (VANETs) are expected to support a large range of distributed applications, from traffic management and dynamic route planning to location-based services. Traffic jams on the roads are a serious issue that needs immediate attention. Various algorithms and solutions have been suggested in the field of VANETs to address traffic congestion and waiting time. Since it is not feasible to implement the proposed solution in the real world at an initial stage, a small area of Noida was chosen for real-time simulations. The traffic simulation was created and observed using SUMO and NS2 in order to record the behavior of traffic lights and congestion at the junctions, and the results were further calculated and verified using the AODV and GPSR protocols.
Citations: 3
DNA compression using referential compression algorithm
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346654
Kanika Mehta, S. P. Ghrera
With rapid technological development and the growth of sequencing efforts, an enormous volume of biological data has been generated. Data compression is employed to reduce the size of this data. In this direction, this paper proposes a new reference-based compression approach. First, a reference is constructed from the common substrings of randomly selected input sequences. The reference set is a collection of key-value pairs, where the key is a fingerprint (a unique id) and the value is a sequence of characters. Next, the given sequences are compressed using a referential compression algorithm: the input is matched against the reference, and each match found in the input is replaced by its fingerprint in the reference, thereby achieving better compression. Experimental results show that the proposed approach outperforms existing approaches and methodologies.
Citations: 6
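The match-and-replace step of referential compression can be sketched as follows; using reference offsets as fingerprints and a minimum match length of 8 are illustrative simplifications of the paper's key/value reference set:

```python
def compress(seq, reference, k=8):
    """Greedy referential compression: emit (offset, length) pairs for
    substrings of length >= k found in the reference, and raw characters
    otherwise. Offsets play the role of fingerprints here."""
    out, i = [], 0
    while i < len(seq):
        best = None
        # longest match of a prefix of seq[i:] in the reference (naive search)
        for length in range(len(seq) - i, k - 1, -1):
            pos = reference.find(seq[i:i + length])
            if pos != -1:
                best = (pos, length)
                break
        if best:
            out.append(best)
            i += best[1]
        else:
            out.append(seq[i])
            i += 1
    return out

def decompress(tokens, reference):
    """Rebuild the sequence by expanding fingerprints from the reference."""
    return "".join(t if isinstance(t, str) else reference[t[0]:t[0] + t[1]]
                   for t in tokens)

ref = "ACGTACGTTTAGGCCATGACGT"   # toy reference built from common substrings
seq = "TTAGGCCATGXXACGTACGT"
enc = compress(seq, ref)
```

A production implementation would index the reference (e.g. with hashed k-mers) instead of scanning it for every position.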
Logging method for high execution frequency paths of Linux kernel
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346728
K. K. Jha
Understanding operating system behavior is critical for any embedded designer making informed design decisions. We present a new logging method that can capture fine-grained details of kernel activity. It reduces logging latency by 95-97% and logging memory usage by 70% compared to the conventional printk. We utilize the string literal pool of the Linux kernel to reconstruct the log offline, storing only the parameter values passed to a printk call instead of the current practice of storing the log as a string after printk formatting.
Citations: 0
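The idea of deferring formatting to an offline step can be sketched in userspace; the format-id table stands in for addresses in the kernel's string literal pool, and the record layout and names are illustrative assumptions:

```python
import struct

# Offline table of format strings, indexed by a small id -- a stand-in for
# the literal-pool addresses the paper uses (entry is hypothetical).
FORMATS = {1: "irq %d handled in %d us"}

def log_fast(buf, fmt_id, *args):
    """Hot path: append only the format id and the raw integer arguments,
    skipping printf-style formatting entirely."""
    buf += struct.pack("<II", fmt_id, len(args))
    buf += struct.pack("<%di" % len(args), *args)
    return buf

def decode(buf):
    """Offline: rebuild human-readable lines from ids plus packed arguments."""
    lines, off = [], 0
    while off < len(buf):
        fmt_id, n = struct.unpack_from("<II", buf, off)
        off += 8
        args = struct.unpack_from("<%di" % n, buf, off)
        off += 4 * n
        lines.append(FORMATS[fmt_id] % args)
    return lines

buf = log_fast(bytearray(), 1, 17, 250)
```

The hot-path record here is 16 bytes regardless of how long the formatted message would be, which is where the latency and memory savings come from.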
Heterogeneous feature space for Android malware detection
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346711
V. VarshaM., P. Vinod, A. DhanyaK.
In this paper, a broad static analysis system for classifying Android malware applications is proposed. Features such as hardware components, permissions, application components, filtered intents, opcodes, and the number of smali files per application are used to generate a vector space model. Significant features are selected using the Entropy-based Category Coverage Difference criterion. The performance of the system was evaluated using classifiers such as SVM, Rotation Forest, and Random Forest. An accuracy of 98.14% with an F-measure of 0.976 was obtained for the meta feature space model containing malware features using the Random Forest classifier. An overall analysis concluded that the malware model outperforms the benign model.
Citations: 5
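A simplified sketch of an entropy-weighted coverage-difference score in the spirit of the selection criterion named above (the paper's exact formulation may differ; the counts below are made up):

```python
from math import log2

def entropy(p):
    """Binary entropy of the class distribution among documents containing
    a feature; low entropy means the feature concentrates in one class."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def eccd_score(docs_with_f_mal, docs_with_f_ben, n_mal, n_ben):
    """Coverage gap between the malware and benign classes, damped by the
    entropy of the feature's class distribution -- a sketch of an
    entropy-based category coverage difference."""
    cov_mal = docs_with_f_mal / n_mal
    cov_ben = docs_with_f_ben / n_ben
    total = docs_with_f_mal + docs_with_f_ben
    p = docs_with_f_mal / total if total else 0.0
    return (cov_mal - cov_ben) * (1.0 - entropy(p))

# A permission present in most malware and little benignware scores high;
# one spread evenly across classes scores zero.
discriminative = eccd_score(90, 5, 100, 100)
uninformative = eccd_score(50, 50, 100, 100)
```

Features would be ranked by this score and only the top-ranked ones kept in the vector space model.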
Robust language identification using Power Normalized Cepstral Coefficients
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346688
A. Dutta, K. S. Rao
The present work investigates the robustness of Power Normalized Cepstral Coefficients (PNCC) for language identification (LID) from noisy speech. Though state-of-the-art vocal tract features like mel frequency cepstral coefficients (MFCC) give good recognition accuracy in clean environments, performance degrades drastically as the signal-to-noise ratio decreases. In this work, experiments have been carried out on the IITKGP-MLILSC speech database. A Gaussian mixture model (GMM) is used to build the language models. We have used the NOISEX-92 database to add synthetic noise at different SNR levels and compared the recognition accuracy of two systems, one developed using MFCCs and the other using PNCCs. Finally, we show that PNCC features are more robust to noise.
Citations: 3
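Mixing noise into clean speech at a controlled SNR, as is done with the NOISEX-92 samples, can be sketched as follows; the sine wave stands in for speech and the Gaussian noise for a NOISEX-92 recording:

```python
import numpy as np

def add_noise(signal, noise, snr_db, seed=0):
    """Scale a random segment of the noise so that the mixture has the
    requested signal-to-noise ratio (in dB), then add it to the signal."""
    rng = np.random.default_rng(seed)
    start = rng.integers(0, len(noise) - len(signal) + 1)
    seg = noise[start:start + len(signal)]
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(seg ** 2)
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * seg

t = np.linspace(0, 1, 8000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)                 # stand-in for speech
noise = np.random.default_rng(1).normal(size=16000)  # stand-in for NOISEX-92
noisy = add_noise(clean, noise, snr_db=10)
```

Feature extraction (MFCC or PNCC) would then run on `noisy` at each SNR level to produce the comparison the abstract reports.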
A system for compound adverbs MWEs extraction in Hindi
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346703
Rakhi Joon, A. Singhal
Adverbs are one of the main aspects of grammar in almost all languages, as they play a vital role in the formation of a sentence. The identification and extraction of Multi Word Expressions (MWEs) in Hindi has been addressed by various researchers, but this category of adverbial MWEs has not been studied so far. Much research on adverbs exists for other languages, but adverbs have not gained a proper place in Hindi MWE research. Various combinations of adverbs can function as multi-word expressions. The main focus of this paper is to extract those adverb combinations, or compound adverbs, that act as MWEs in Hindi text. A further classification of these adverbs is also proposed on the basis of adverb type. The system is developed and tested with a dataset obtained from the CFILT Hindi corpus. Results are evaluated using the measures precision, recall, and F-measure.
Citations: 3
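The evaluation measures can be computed as below; the Hindi reduplicated adverbs in the example sets are illustrative, not taken from the paper's dataset:

```python
def prf(predicted, gold):
    """Precision, recall, and F-measure of extracted MWEs against a
    gold-standard set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Hypothetical extractions vs. a hypothetical gold standard.
gold = {"dhire dhire", "baar baar", "achanak se"}
predicted = {"dhire dhire", "baar baar", "kal subah"}
p, r, f = prf(predicted, gold)
```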
Collaborative teaching in large classes of computer science courses
Pub Date : 2015-08-20 DOI: 10.1109/IC3.2015.7346714
S. Goel, Suma Dawn, G. Dhanalekshmi, N. Hema, S. Singh, Sanchika Gupta, Taj Alam, Prashant Kaushik, Kashav Ajmera
Collaborative teaching was applied by eight teachers to nearly 700 students across four sections of three different computer science courses, with section strength varying from 120 to 240. Different forms of collaborative teaching were tried. Collaborative teaching at JIIT, Noida has proved successful for large classes of 100 students and above.
Citations: 1