
Compute: Latest Publications

CRETAL: A Personalized Learning Environment in Conventional Setup
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140130
G. M. Shivanagowda, R. Goudar, U. Kulkarni
The increased variety of learning resources, such as e-books with modern collaborative tools, video lectures by teachers from across the world, and lively discussion boards, has substantially affected students' learning styles. Having accepted such forms of learning material, conventional academic setups urgently need to make learning activities efficient, effective and meaningful. In this paper we describe CRETAL (Compiler of Resources in Engineering & Technology to Aid Learning), a diverse resource-hosting system with a rich set of unique annotation tools, which addresses the challenges modern students face in querying, accessing, extracting, connecting, processing and sharing important concepts across the different forms of learning material created and adapted by the teacher. In addition to the system description, we present a usability study, along with the impact on student learning and teachers' decisions when CRETAL was adopted in a few undergraduate courses, assessed using educational data mining and learning analytics.
Citations: 2
3D Ear Based Human Recognition Using Gauss Map Clustering
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140112
I. I. Ganapathi, S. Prakash
This paper addresses the problem of human recognition using 3D ear biometrics. Existing feature extraction and description techniques for 3D shape recognition work well across different classes of shapes, but not for highly similar objects such as human 3D ears. This work proposes an effective method that uses Gauss mapping for feature keypoint detection and shape context to describe the detected keypoints. The proposed technique is as follows. For every point p, a triangle is computed using two other points among its k nearest neighbors within a sphere of radius r. A normal is computed for the obtained triangle and mapped to a unit sphere. This mapping of normals is done for every possible triangle of point p. We observe that the mapped normals form a different number of clusters depending on the type of surface to which point p belongs. A point is considered a keypoint if its projected normals form more than two clusters. Further, we project all the detected keypoints onto a plane and use them to compute feature descriptor vectors. The descriptor vector of a keypoint is computed by placing it at the center and defining its shape context with all other keypoints as its neighbors. To match a probe ear image with a gallery image for recognition, we compute correspondences between all feature keypoints of the probe image and those of the gallery image. Final matching is performed by aligning the gallery image with the probe image and taking the registration error as the matching score. Experimental analysis on the University of Notre Dame (UND) Collection J2 achieves a verification accuracy of 98.20% with an equal error rate (EER) of 1.84%.
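The cluster-counting criterion in this abstract can be sketched in a few lines. The greedy angular clustering and the 30° threshold below are illustrative assumptions, not the paper's exact clustering procedure:

```python
import numpy as np

def cluster_normals(normals, angle_thresh_deg=30.0):
    """Greedy angular clustering of unit normals on the Gauss sphere.
    A simplified stand-in for the paper's Gauss-map clustering step:
    a normal joins the first cluster whose centre lies within
    angle_thresh_deg of it; otherwise it seeds a new cluster."""
    cos_t = np.cos(np.radians(angle_thresh_deg))
    centres = []
    for n in normals:
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        for i, c in enumerate(centres):
            if np.dot(n, c) >= cos_t:
                merged = c + n  # fold into the cluster, renormalise centre
                centres[i] = merged / np.linalg.norm(merged)
                break
        else:
            centres.append(n)
    return centres

def is_keypoint(normals, angle_thresh_deg=30.0):
    # Paper's criterion: a surface point is a keypoint when its mapped
    # normals form more than two clusters.
    return len(cluster_normals(normals, angle_thresh_deg)) > 2
```

Normals spread over three orthogonal directions (a corner-like surface) yield three clusters and hence a keypoint, while normals of a flat patch collapse into one cluster.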
Citations: 4
Breast Tissue Density Classification in Mammograms Based on Supervised Machine Learning Technique
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140131
K. Kashyap, M. Bajpai, P. Khanna
Breast tissue density is one of the indicators used in breast cancer detection. Fully automatic breast tissue density classification is presented in this work. The present work consists of four steps: breast region extraction and enhancement of mammograms, segmentation, feature extraction, and breast tissue density classification. Enhancement of the mammogram is done by applying a fractional-order-differential-based filter. Segmentation of breast tissue is done using a clustering-based fast fuzzy c-means technique. Further, texture-based local binary pattern (LBP) and dominant rotated local binary pattern (DRLBP) features are computed from the extracted breast tissue to characterize its texture. A support vector machine with a linear kernel function is used to classify breast tissue density. The proposed algorithm is validated on the publicly available 322 mammograms of the Mini-Mammographic Image Analysis Society (MIAS) database.
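The texture features named above are standard; a minimal sketch of a plain 3x3 LBP histogram (the paper's DRLBP variant additionally handles rotation, which is omitted here) might look like:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 local binary pattern histogram: each interior pixel is
    encoded as an 8-bit code from thresholding its eight neighbours
    against it, and the codes are pooled into a normalised histogram."""
    img = np.asarray(img, dtype=np.int32)
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre)
    # Eight neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= centre).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```

The resulting 256-bin vector (per image or per region) is the kind of texture descriptor that is then fed to the linear-kernel SVM.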
Citations: 5
Neural Machine Translation of Indian Languages
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140111
Karthik Revanuru, Kaushik Turlapaty, Shrisha Rao
Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements over rule-based and statistical machine translation (SMT) techniques by overcoming many weaknesses of the conventional approaches. We study and apply NMT techniques to create a system with multiple models, which we then apply to six Indian language pairs. We compare the performance of our NMT models using automatic evaluation metrics such as UNK count, METEOR, F-measure, and BLEU. We find that NMT techniques are very effective for machine translation of Indian language pairs. We further demonstrate that good accuracy can be achieved even with a shallow network; comparing against Google Translate on our test dataset, our best model outperformed it by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.
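As a reminder of how the reported metric works, here is a minimal unigram BLEU with brevity penalty; real evaluations (including this paper's) use higher-order n-grams and corpus-level statistics, so this is only illustrative:

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU: clipped unigram precision times a brevity penalty
    that punishes candidates shorter than the reference."""
    cand, ref = candidate.split(), reference.split()
    # Clipped overlap: each reference word can be matched at most as
    # often as it occurs in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision
```

A 17-point BLEU margin, as reported for Urdu-Hindi, corresponds to a 0.17 gap on this 0-to-1 scale when scores are quoted as percentages.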
Citations: 46
Webcam Based Eye Gaze Tracking Using a Landmark Detector
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140117
Atul Sahay, P. Biswas
This paper proposes a real-time algorithm to detect a user's gaze point in a video sequence from a standard web camera. We show that landmarks constructed for both eyes can reliably estimate the eyelid opening, which in turn can be used to tell where the user is looking at that particular moment. Further, knowledge of the eye opening can be combined with the iris displacement from a reference point to predict the user's gaze point. We report a user study involving 8 users, in which we can track one of nine positions on screen within a radius of 11° of visual angle.
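One common way to estimate eyelid opening from eye landmarks, in the spirit of what the abstract describes, is the eye aspect ratio over the usual 6-point eye layout produced by facial landmark detectors; the landmark ordering here is an assumption, not taken from the paper:

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six eye landmarks p1..p6: the mean of the
    two vertical eyelid distances over the horizontal eye width. The
    ratio shrinks toward zero as the eyelid closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

Thresholding this ratio per frame gives a simple eyelid-opening signal that can be fused with iris displacement, as the abstract suggests.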
Citations: 4
Fingerprint Shell Construction with Prominent Minutiae Points
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140113
Syed Sadaf Ali, S. Prakash
Fingerprint-based authentication is one of the most extensively used authentication techniques. Biometric systems relying on fingerprints usually use the minutiae-point information of a fingerprint directly and store it as a user template. Many recent works show that a user's original fingerprint can be regenerated from minutiae-point data. Traditional password-based authentication systems allow the password to be changed; a user's biometric data, however, cannot be changed, as it is permanently associated with the human body. If any information related to a user's biometric features is stolen or compromised, the compromised information cannot be replaced. It is therefore essential to ensure that biometric data is secure. Our goal is to generate a biometric template that fulfills the requirements of performance, security, revocability and diversity. Moujahdi et al. proposed a technique called the fingerprint shell as a secure representation of fingerprint data using a user key. In this technique, a spiral curve is generated as a secured user template from the distances between the singular point and the minutiae points. In this paper, we propose a technique that additionally incorporates the quality of minutiae points in the construction of the spiral curve. We use a pair of unique user keys and the information provided by the minutiae points to generate a non-invertible user template. If the user template is compromised by an adversary, the user is free to generate a new template using different user keys; the new template and the compromised one are non-linkable. We tested our technique on the FVC2002 DB1, FVC2002 DB2 and IIT Kanpur fingerprint databases using the FVC protocol. The experimental results obtained are encouraging and demonstrate the viability of our technique.
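The spiral-curve construction attributed to Moujahdi et al. can be sketched as follows; the right-angle-turn geometry and the single additive key are simplifying assumptions, and the paper's own contributions (minutiae-quality weighting, a pair of keys) are not reproduced:

```python
import math

def fingerprint_shell(distances, user_key):
    """Sketch of a fingerprint-shell style transform: singular-point-to-
    minutiae distances, offset by a secret user key and sorted, become
    successive leg lengths of a spiral that turns 90 degrees per step.
    The vertex list, not the raw minutiae, is what would be stored."""
    radii = sorted(d + user_key for d in distances)
    x, y, angle = 0.0, 0.0, 0.0
    points = [(x, y)]
    for r in radii:
        x += r * math.cos(angle)
        y += r * math.sin(angle)
        points.append((round(x, 6), round(y, 6)))
        angle += math.pi / 2  # turn for the next leg of the spiral
    return points
```

Changing `user_key` yields a completely different curve from the same minutiae, which is the revocability property the abstract emphasizes.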
Citations: 15
Prediction Intervals via Support Vector-quantile Regression Random Forest Hybrid
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140122
V. Ravi, Vadali Tejasviram, Anurag Sharma, R. R. Khansama
This paper presents a new method for determining prediction intervals via a hybrid of the support vector machine and the quantile regression random forest introduced elsewhere. Its effectiveness is tested on 5 benchmark regression problems. From the experiments, we infer that the difference in performance between the prediction intervals from the proposed method and those from quantile regression and the quantile regression random forest is statistically significant, as shown by the Wilcoxon test at the 5% level of significance. This is an important achievement of the paper.
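The quantile idea behind interval construction can be illustrated on raw ensemble outputs; this conceptual sketch takes empirical quantiles of per-tree predictions rather than implementing the paper's SVM-QRRF hybrid (quantile regression forests proper aggregate leaf-level target distributions, a refinement omitted here):

```python
import numpy as np

def prediction_interval(per_tree_preds, alpha=0.1):
    """Form a (1 - alpha) prediction interval from an ensemble's
    per-tree predictions for one test point by taking the empirical
    alpha/2 and 1 - alpha/2 quantiles."""
    lo = np.quantile(per_tree_preds, alpha / 2)
    hi = np.quantile(per_tree_preds, 1 - alpha / 2)
    return float(lo), float(hi)
```

With alpha = 0.1 this yields a nominal 90% interval, the kind of object whose coverage and width the benchmark comparison evaluates.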
Citations: 2
Incorporating Formal Methods and Measures Obtained through Analysis, Simulation Testing for Dependable Self-Adaptive Software in Avionics Systems
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140128
Rajanikanth N. Kashi, Meenakshi D'Souza, Koyalkar Raman Kishore
An area rich with challenges is that of self-adaptive avionics software, which receives considerable thrust in both the American and European airspace modernization programs. Verifying functional requirements for such a system, ultimately leading to certification, poses a unique set of problems, since these systems are required to be dependable. Also inherent is the subject of eliciting measures of adaptability, which help evaluate the system in the context of non-functional requirements qualified by self-properties. We illustrate our approach to such a verification and evaluation exercise by proposing a combination of formal-methods verification techniques and simulation-based testing. The test bed is a representative self-adaptive software system for a small UAS (Unmanned Aircraft System) avionics suite, modeled as a multi-agent BDI (Belief-Desire-Intention) system with evolutionary and reactive behaviours, illustrating important aspects of verification.
Citations: 5
LDA Based Feature Selection for Document Clustering
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140129
B. S. Kumar, V. Ravi
In this paper, we propose a novel model for text document clustering. Text documents usually contain a vast number of features. We selected important features through four methods: term variance, document frequency, Latent Dirichlet Allocation (LDA), and significance. We demonstrated the effectiveness of the proposed model on the publicly available 20NG and WebKB datasets. We evaluated the model with the F-score value. Results indicate that LDA performed best in capturing the discriminative features.
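Of the four selectors compared, document frequency is the simplest to sketch; this toy version (whitespace tokenization is an assumption) keeps the k terms that occur in the most documents:

```python
from collections import Counter

def select_by_document_frequency(docs, k):
    """Document-frequency feature selection: count, for each term, the
    number of documents it appears in, and keep the top-k terms."""
    df = Counter()
    for doc in docs:
        # A set per document so a term counts at most once per doc.
        df.update(set(doc.lower().split()))
    return [term for term, _ in df.most_common(k)]
```

LDA-based selection, which the paper finds strongest, instead scores terms by their weights in the learned topic-word distributions; the selection interface is the same.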
Citations: 3
Optimal Algorithms for Min-Closed, Max-Closed and Arc Consistency over Connected Row Convex Constraints
Pub Date : 2017-11-16 DOI: 10.1145/3140107.3140110
Shubhadip Mitra, P. Dutta, Arnab Bhattacharya
A key research interest in the area of Constraint Satisfaction Problems (CSPs) is to identify tractable classes of constraints and develop efficient algorithms for solving them. In this paper, we propose an optimal algorithm for solving r-ary min-closed and max-closed constraints. Assuming r = O(1), our algorithm has an optimal running time of O(ct), where c and t are the number of constraints and the maximum size of any constraint, respectively. This significantly improves on the existing pairwise-consistency-based algorithm that takes O(c^2 t^2) time. Moreover, for (binary) connected row convex (CRC) constraints, we design an optimal algorithm for arc consistency that runs in O(cd) time, where d is the largest size of any domain. This again improves upon the existing O(cd^2) algorithms and, in turn, leads to a faster algorithm for solving CRC constraints. We also show how our solutions can be applied to problems in large distributed IT systems. The experimental evaluation shows that the proposed algorithms are several orders of magnitude faster than the state-of-the-art algorithms.
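The arc-consistency operation the paper optimizes is the textbook "revise" step, shown here naively in O(d^2) time per arc; the paper's O(cd) result exploits connected row convexity, which this sketch does not:

```python
def revise(dom_x, dom_y, constraint):
    """One arc-consistency revise step for a binary constraint: drop
    every value of x's domain that has no supporting value in y's
    domain. Returns the pruned domain and whether anything changed."""
    pruned = {a for a in dom_x if not any(constraint(a, b) for b in dom_y)}
    return dom_x - pruned, bool(pruned)
```

Running revise over all arcs until a fixpoint (AC-3 style) yields arc consistency; the cost of each revise call is exactly where the CRC structure buys the factor-of-d speedup.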
Citations: 0