The increasing variety of learning resources, such as e-books with modern collaborative tools, video lectures from teachers across the world, and lively discussion boards, has substantially affected students' learning styles. Having accepted such learning materials, conventional academic setups urgently need to make learning activities efficient, effective, and meaningful. This paper describes CRETAL (Compiler of Resources in Engineering & Technology to Aid Learning), a system that hosts diverse resources and provides a rich set of unique annotation tools, addressing the challenges modern students face in querying, accessing, extracting, connecting, processing, and sharing important concepts from the different forms of learning material created and adapted by the teacher. In addition to the system description, we present a usability study and, using educational data mining and learning analytics, report the impact on student learning and on the teacher's decisions when CRETAL was adopted in a few undergraduate courses.
{"title":"CRETAL: A Personalized Learning Environment in Conventional Setup","authors":"G. M. Shivanagowda, R. Goudar, U. Kulkarni","doi":"10.1145/3140107.3140130","DOIUrl":"https://doi.org/10.1145/3140107.3140130","url":null,"abstract":"The increased variety of learning resources, like e-books with modern collaborative tools, video lectures of different teachers across the world, lively discussion boards etc. have substantially affected learning styles of students. Having accepted such forms of learning materials, conventional academic setups are in desperate need to make learning activities efficient, effective and meaningful. In this paper a diverse resources hosting system CRETAL (Compiler of Resources in Engineering &Technology to Aid Learning) with rich set of unique annotation tools are described to address the challenges of the modern students to query, access, extract, connect, process and share the important concepts from the different form of the learning materials created and adapted by the teacher. In addition to the system description, we also present the usability study along with the impact on students learning and teacher's decision when CRETAL was adapted in few undergraduate course using Educational Data mining and learning analytics.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"43 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120927944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the problem of human recognition using 3D ear biometrics. Existing feature extraction and description techniques for 3D shape recognition work well across different classes of shapes, but not for highly similar objects such as human 3D ears. This work proposes an effective method that uses Gauss mapping to detect feature keypoints and shape context to describe them. The proposed technique is as follows. For every point p, a triangle is formed with two other points from its k nearest neighbors within a sphere of radius r. The normal of each such triangle is computed and mapped to the unit sphere, and this is repeated for every possible triangle of point p. The mapped normals form a different number of clusters depending on the type of surface to which p belongs; a point is considered a keypoint if its mapped normals form more than two clusters. We then project all detected keypoints onto a plane and use them to compute feature descriptor vectors. The descriptor of a keypoint is computed by placing it at the center and defining its shape context with all other keypoints as its neighbors. To match a probe ear image with a gallery image for recognition, we compute correspondences between the feature keypoints of the probe image and those of the gallery image. Final matching is performed by aligning the gallery image with the probe image and using the registration error as the matching score. Experimental analysis on the University of Notre Dame (UND) Collection J2 achieves a verification accuracy of 98.20% with an equal error rate (EER) of 1.84%.
{"title":"3D Ear Based Human Recognition Using Gauss Map Clustering","authors":"I. I. Ganapathi, S. Prakash","doi":"10.1145/3140107.3140112","DOIUrl":"https://doi.org/10.1145/3140107.3140112","url":null,"abstract":"This paper addresses the problem of human recognition using 3D ear biometrics. Existing feature extraction and description techniques in the literature for 3D shape recognition works well with the different class of shapes, however, not for profoundly comparable objects like human 3D ears. This work proposes an effective method utilizing Gauss mapping for feature keypoints detection and shape context to describe the detected keypoints. The proposed technique is as follows. A triangle for every point p is computed using two other points of the k-nearest neighbors within a sphere of radius r. A normal is computed for the obtained triangle and is mapped to a unit sphere. This mapping of normals is done for every conceivable triangle of point p. It is observed that mapped normals form a different number of clusters depending upon the type of surface point p belongs to. A point is considered as a keypoint if its projected normals form more than two clusters. Further, we project all the detected keypoints onto a plane and use them in the computation of feature descriptor vectors. Descriptor vector of a keypoint is computed by keeping it at the center and defining its shape context considering all other keypoints as its neighbors. To match a probe ear image with a gallery image for recognition, we compute correspondence for all the feature keypoints of the probe image to the feature keypoints of the gallery image. Final matching is performed by aligning the gallery image with the probe image and considering the registration error as the matching score. The experimental analysis conducted on University of Notre Dame (UND)-Collection J2 has achieved a verification accuracy of 98.20% with an equal error rate (EER) of 1.84%.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"219 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134178568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast tissue density is one of the indicators used in breast cancer detection. This work presents fully automatic breast tissue density classification in four steps: breast region extraction and enhancement of mammograms, segmentation, feature extraction, and density classification. Mammograms are enhanced with a fractional-order differential based filter. Breast tissue is segmented using a clustering-based fast fuzzy c-means technique. Texture-based local binary pattern (LBP) and dominant rotated local binary pattern (DRLBP) features are then computed from the extracted breast tissue to characterize its texture. A support vector machine with a linear kernel is used to classify breast tissue density. The proposed algorithm is validated on the 322 publicly available mammograms of the Mini-Mammographic Image Analysis Society (MIAS) database.
{"title":"Breast Tissue Density Classification in Mammograms Based on Supervised Machine Learning Technique","authors":"K. Kashyap, M. Bajpai, P. Khanna","doi":"10.1145/3140107.3140131","DOIUrl":"https://doi.org/10.1145/3140107.3140131","url":null,"abstract":"Breast tissue density is one of the symptoms for breast cancer detection. Fully automatic breast tissue density classification is presented in this work. Present work consists of four steps which include breast region extraction and enhancement of mammograms, segmentation, feature extraction, and breast tissue density classification. Enhancement of mammogram is done by applying fractional order differential based filter. Segmentation of breast tissue segmentation has been done by using clustering based fast fuzzy c-means technique. Further, texture based local binary pattern (LBP) and dominant rotated local binary pattern (DRLBP) features have been computed from the extracted breast tissues to characterize its texture property. Support vector machine with linear kernel functions are used to classify the breast tissue density. Proposed algorithm is validated on the publicly available 322 mammograms of Mini-Mammographic Image Analysis Society (MIAS).","PeriodicalId":435920,"journal":{"name":"Compute","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114120023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements over rule-based and statistical machine translation (SMT) by overcoming many of the weaknesses of the conventional techniques. We study and apply NMT techniques to build a system with multiple models, which we then apply to six Indian language pairs. We compare the performance of the NMT models in our system using automatic evaluation metrics such as UNK count, METEOR, F-measure, and BLEU. We find that NMT techniques are very effective for machine translation of Indian language pairs. We also demonstrate that good accuracy can be achieved even with a shallow network: on our test dataset, our best model outperformed Google Translate by 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi.
{"title":"Neural Machine Translation of Indian Languages","authors":"Karthik Revanuru, Kaushik Turlapaty, Shrisha Rao","doi":"10.1145/3140107.3140111","DOIUrl":"https://doi.org/10.1145/3140107.3140111","url":null,"abstract":"Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements compared to rule-based and statistical machine translation (SMT) techniques, by overcoming many of the weaknesses in the conventional techniques. We study and apply NMT techniques to create a system with multiple models which we then apply for six Indian language pairs. We compare the performances of our NMT models with our system using automatic evaluation metrics such as UNK Count, METEOR, F-Measure, and BLEU. We find that NMT techniques are very effective for machine translations of Indian language pairs. We then demonstrate that we can achieve good accuracy even using a shallow network; on comparing the performance of Google Translate on our test dataset, our best model outperformed Google Translate by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130886436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a real-time algorithm to detect a user's gaze point in a video sequence from a standard web camera. We show that landmarks constructed for both eyes can reliably estimate the eyelid opening, which in turn indicates where the user is looking at that moment. The knowledge of the eye opening is then combined with the iris displacement from a reference point to predict the user's gaze point. We report a user study involving 8 users, in which the system tracks one of nine on-screen positions within a radius of 11° of visual angle.
{"title":"Webcam Based Eye Gaze Tracking Using a Landmark Detector","authors":"Atul Sahay, P. Biswas","doi":"10.1145/3140107.3140117","DOIUrl":"https://doi.org/10.1145/3140107.3140117","url":null,"abstract":"This paper proposes a real-time algorithm to detect users' gaze point in a video sequence from a standard web camera. We have shown that landmarks constructed for both eyes can reliably estimate the eyelid opening, which in turn can be used to tell where the user is staring at that particular moment. Further, the knowledge of the eye opening can be combined with the iris displacement from the reference point to predict the user's gaze point. We have reported a user study involving 8 users and we can track one of nine positions on screen within a radius of 11° of visual angle.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126389338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint based authentication is one of the most extensively used authentication techniques. Biometric systems relying on fingerprints usually use minutiae point information directly and store it as a user template. Many recent works show that the original fingerprint of a user can be reconstructed from minutiae data. In traditional password-based authentication the password can be changed, but a user's biometric data cannot, as it is permanently associated with the body. If biometric information is stolen or compromised, it cannot be replaced; it is therefore essential to keep biometric data secure. Our goal is to generate a biometric template that satisfies the requirements of performance, security, revocability, and diversity. Moujahdi et al. proposed a technique called fingerprint shell, a secure representation of fingerprint data based on a user key, in which a spiral curve is generated as the secured user template from the distances between the singular point and the minutiae points. In this paper, we propose a technique that incorporates the quality of minutiae points in the construction of the spiral curve. We use a pair of unique user keys together with the information provided by minutiae points to generate a non-invertible user template. If the template is compromised by an adversary, the user can generate a new template with different keys, and the new and compromised templates are non-linkable. We tested our technique on the FVC2002 DB1, FVC2002 DB2, and IIT Kanpur fingerprint databases using the FVC protocol. The experimental results are encouraging and demonstrate the viability of the technique.
{"title":"Fingerprint Shell Construction with Prominent Minutiae Points","authors":"Syed Sadaf Ali, S. Prakash","doi":"10.1145/3140107.3140113","DOIUrl":"https://doi.org/10.1145/3140107.3140113","url":null,"abstract":"Fingerprint based authentication is one of the most extensively used authentication technique. Biometric systems relying on fingerprint usually directly uses minutiae points information of a fingerprint and store it as a user template. There are many recent works which show that original fingerprint of a user can be generated from the data of minutiae points. In case of traditional authentication systems based on password there is a liberty to change the password, however biometric data of a user cannot be changed as it is permanently associated with the human body. If any information related to biometric features of a user is stolen or compromised, then in that case we cannot change the compromised information. Therefore, it is essential to make sure that the biometric data is secure. Our motive is to generate a biometric template that will fulfill the necessities of performance, security, revocability and diversity. Moujahdi et al. proposed a technique called Fingerprint shell as a secure representation of fingerprint data using a user key. In this technique, a spiral curve is generated as a secured user template by using the distances between singular point and minutiae points. In this paper, we have proposed a technique in which we have included the quality of minutiae points for the construction of spiral curve. We have used a pair of unique user keys and utilized the information provided by minutiae points to generate a non-invertible user template. In case of compromising of user template by adversary, user has the liberty to generate new template by using different user keys, the new template and the compromised one are non-linkable. We tested our technique on FVC2002 DB1, FVC2002 DB2 and IIT Kanpur fingerprint databases using FVC protocol. Experimental results obtained are encouraging and demonstrate the viability of our technique.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132505317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new method of determining prediction intervals via a hybrid of the support vector machine and the quantile regression random forest introduced elsewhere. Its effectiveness is tested on 5 benchmark regression problems. From the experiments, we infer that the difference in performance between the prediction intervals from the proposed method and those from quantile regression and quantile regression random forest is statistically significant, as shown by the Wilcoxon test at the 5% level of significance. This is an important achievement of the paper.
{"title":"Prediction Intervals via Support Vector-quantile Regression Random Forest Hybrid","authors":"V. Ravi, Vadali Tejasviram, Anurag Sharma, R. R. Khansama","doi":"10.1145/3140107.3140122","DOIUrl":"https://doi.org/10.1145/3140107.3140122","url":null,"abstract":"This paper presents a new method of determining prediction intervals via the hybrid of support vector machine and quantile regression random forest introduced elsewhere. Its effectiveness is tested on 5 benchmark regression problems. Fromthe experiments, we infer that the difference in performance of the prediction intervals from the proposed method and those from quantile regression and quantile regression random forest is statistically significant as shown by the Wilcoxon test at 5% level of significance. This is an important achievement of the paper.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126450495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An area rich with challenges is self-adaptive avionics software, which has considerable thrust in both the American and European airspace modernization programs. Verifying functional requirements for such a system, ultimately leading to certification, poses a unique set of problems, since these systems are required to be dependable. Also inherent is the problem of eliciting measures of adaptability that help evaluate the system against non-functional requirements qualified by self-properties. We illustrate our approach to such a verification and evaluation exercise by proposing a combination of formal-methods verification techniques and simulation-based testing. The test bed is representative self-adaptive software for a small UAS (Unmanned Aircraft System) avionics suite, modeled as a multiagent BDI (Belief Desire Intention) system with evolutionary and reactive behaviours, illustrating important aspects of verification.
{"title":"Incorporating Formal Methods and Measures Obtained through Analysis, Simulation Testing for Dependable Self-Adaptive Software in Avionics Systems","authors":"Rajanikanth N. Kashi, Meenakshi D'Souza, Koyalkar Raman Kishore","doi":"10.1145/3140107.3140128","DOIUrl":"https://doi.org/10.1145/3140107.3140128","url":null,"abstract":"An Area 1rich with challenges is that of self-adaptive avionics software, with considerable thrust in both the American and European airspace modernization programs. Verifying functional requirements for such a system, ultimately leading to certification poses a unique set of problems, since these systems are required to be dependable. Also inherent is the subject of eliciting measures of adaptability which help evaluate the system in the context of non-functional requirements qualified by self-properties. We illustrate our approach for such a verification and evaluation exercise by proposing a combination of formal methods verification techniques and simulation based testing. The test bed is a representative self-adaptive software of a small UAS (Unmanned Aircraft System) avionics modeled as a multiagent BDI (Belief Desire Intention) system with evolutionary and reactive behaviours, illustrating important aspects of verification.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132659059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a novel model for text document clustering. Text documents usually contain a vast number of features. We select important features through four methods: term variance, document frequency, Latent Dirichlet Allocation (LDA), and significance. We demonstrate the effectiveness of the proposed model on the publicly available 20NG and WebKB datasets and evaluate it using the F-Score. Results indicate that LDA performs best in capturing discriminative features.
{"title":"LDA Based Feature Selection for Document Clustering","authors":"B. S. Kumar, V. Ravi","doi":"10.1145/3140107.3140129","DOIUrl":"https://doi.org/10.1145/3140107.3140129","url":null,"abstract":"In this paper, we propose a novel model for text document clustering. Usually, Text documents consist of a vast number of features. We selected important features through four methods term variance, document frequency, Latent Dirichlet Allocation, and Significance methods. We demonstrated the effectiveness of proposed model on 20NG and WebKB datasets which are publicly available. We evaluated the model with F-Score value. Results indicate that LDA performed best in capturing the discriminate features.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116630988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A key research interest in the area of Constraint Satisfaction Problems (CSP) is to identify tractable classes of constraints and develop efficient algorithms for solving them. In this paper, we propose an optimal algorithm for solving r-ary min-closed and max-closed constraints. Assuming r = O(1), our algorithm has an optimal running time of O(ct), where c and t are the number of constraints and the maximum size of any constraint, respectively. This significantly improves the existing pairwise-consistency based algorithm that takes O(c²t²) time. Moreover, for (binary) connected row convex (CRC) constraints, we design an optimal algorithm for arc consistency that runs in O(cd) time, where d is the largest size of any domain. This again improves upon the existing O(cd²) algorithms and, in turn, leads to a faster algorithm for solving CRC constraints. We also show how our solutions can be applied to problem determination in large distributed IT systems. The experimental evaluation shows that the proposed algorithms are several orders of magnitude faster than the state-of-the-art algorithms.
{"title":"Optimal Algorithms for Min-Closed, Max-Closed and Arc Consistency over Connected Row Convex Constraints","authors":"Shubhadip Mitra, P. Dutta, Arnab Bhattacharya","doi":"10.1145/3140107.3140110","DOIUrl":"https://doi.org/10.1145/3140107.3140110","url":null,"abstract":"A key research interest in the area of Constraint Satisfaction Problems (CSP) is to identify tractable classes of constraints and develop efficient algorithms for solving them. In this paper, we propose an optimal algorithm for solving r-ary min-closed and max-closed constraints. Assuming r = O(1), our algorithm has an optimal running time of O(ct) where c and t are the number of constraints and the maximum size of any constraint, respectively. This significantly improves the existing pairwise consistency based algorithm that takes O(c2t2) time. Moreover, for (binary) connected row convex (CRC) constraints, we design an optimal algorithm for arc consistency that runs in O(cd) time where d is the largest size of any domain. This again improves upon the existing O(cd2) algorithms. This, in turn, leads to a faster algorithm for solving CRC constraints. We also show how our solutions can be applied to determine problems in large distributed IT systems. The experimental evaluation shows that the proposed algorithms are several orders of magnitudes faster than the state-of-the-art algorithms.","PeriodicalId":435920,"journal":{"name":"Compute","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129023397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}