Online Communities for Question Answering (CQA) such as Quora and Stack Overflow face the challenge of providing sufficient answers to the questions asked by users. The exponentially growing number of unanswered questions compromises the effectiveness of CQA frameworks as knowledge-sharing platforms. The main reason for this issue is the inefficient routing of questions to potential answerers, i.e., field experts and interested users. This paper proposes QR-DSSM, a deep-learning-based technique to increase the accuracy of the question routing process. The technique uses a deep semantic similarity model (DSSM) to extract semantic similarity features with deep neural networks and uses these features to rank user profiles. QR-DSSM maps asked questions and user profiles into a latent semantic space where the ability to answer is measured by the cosine similarity between a question and a user profile. In experiments, QR-DSSM outperformed baseline models such as LDA, SVM, and Rank-SVM and achieved an MRR score of 0.1737.
{"title":"Text-based question routing for question answering communities via deep learning","authors":"Amr Azzam, N. Tazi, A. Hossny","doi":"10.1145/3019612.3019762","DOIUrl":"https://doi.org/10.1145/3019612.3019762","url":null,"abstract":"Online Communities for Question Answering (CQA) such as Quora and Stack Overflow face the challenge of providing sufficient answers for the questions asked by users. The exponential growing rate of the unanswered questions compromises the effectiveness of the CQA frameworks as knowledge sharing platforms. The main reason for this issue is the inefficient routing of the questions to the potential answerers, who are the field experts and interested users. This paper proposes the deep-learning-based technique QR-DSSM to increase the accuracy of the question routing process. This technique uses deep semantic similarity model (DSSM) to extract semantic similarity features using deep neural networks and use the features to rank users' profiles. QR-DSSM maps the asked questions and the profiles of the users into a latent semantic space where the ability to answer is measured using the cosine similarity between the questions and the profiles of the users. QR-DSSM experiments outperformed the baseline models such as LDA, SVM, and Rank-SVM techniques and achieved an MRR score of 0.1737.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87879674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Mosqueira-Rey, David Alonso-Ríos, Diego Prado-Gesto, V. Moret-Bonillo
This paper describes a methodological approach for the usability evaluation of a second-screen application, that is, the use of the same application on two devices simultaneously, with one of them taking a more passive role and acting as a second screen. In our case, the devices are a tablet and a smart TV, and the application is a sports news application that provides news and multimedia content. We perform the usability analysis of the application following comprehensive taxonomies of usability and context-of-use attributes. We focus the analysis on the interaction between the two devices because it is the most challenging part and a form of use that is becoming increasingly common. From this analysis we generalize usability heuristics that can be useful for assessing second-screen applications.
{"title":"Usability evaluation and development of heuristics for second-screen applications","authors":"E. Mosqueira-Rey, David Alonso-Ríos, Diego Prado-Gesto, V. Moret-Bonillo","doi":"10.1145/3019612.3019883","DOIUrl":"https://doi.org/10.1145/3019612.3019883","url":null,"abstract":"This paper describes a methodological approach for the usability evaluation of a second-screen application, that is, the use of the same application in two devices simultaneously, one of them taking a more passive role acting as a second screen. In our case, the devices are a tablet and a smart TV, and the application is a sports news application that provides news and multimedia content. We perform the usability analysis of the application following some comprehensive taxonomies of usability and context-of-use attributes. We put the focus of the analysis on the interaction between the two devices because it is the most challenging part and is a new form of use that is becoming more common nowadays. After this analysis we generalize some usability heuristics that can be useful for assessing second-screen applications.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86667964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The modeling of temporal aspects in BPMN processes is poorly addressed to date, despite the crucial role played by time during process design and execution. In the clinical domain, temporal conditions often constrain medical decisions that drive healthcare process execution and organizational outcomes. However, both temporal constraints and their effects on process decisions are often hidden within process models. In this paper, we deal with modeling a set of selected time constraints that "decide" how process execution paths are taken and we address their enforcement in BPMN process diagrams. A formal semantics based on timed automata clarifies the behavior of the proposed processes.
{"title":"Driving time-dependent paths in clinical BPMN processes","authors":"Combi Carlo, P. Sala, Francesca Zerbato","doi":"10.1145/3019612.3019620","DOIUrl":"https://doi.org/10.1145/3019612.3019620","url":null,"abstract":"The modeling of temporal aspects in BPMN processes is poorly addressed to date, despite the crucial role played by time during process design and execution. In the clinical domain, temporal conditions often constrain medical decisions that drive healthcare process execution and organizational outcomes. However, both temporal constraints and their effects on process decisions are often hidden within process models. In this paper, we deal with modeling a set of selected time constraints that \"decide\" how process execution paths are taken and we address their enforcement in BPMN process diagrams. A formal semantics based on timed automata clarifies the behavior of the proposed processes.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"418 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86843120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperation in multi-agent teams opens up the possibility of solving large-scale and complex tasks. One of the important aspects of cooperation is the ability to share information. In this paper, we propose an on-the-fly synthesis of transformations that enables information sharing in a dynamic group of heterogeneous agents. The synthesis is done by a declarative logic program based on semantic descriptions of the representations. Our experiments show that this approach is useful for compensating for heterogeneity in dynamic multi-agent teams.
{"title":"On-the-fly transformation synthesis for information sharing in heterogeneous multi-agent systems","authors":"S. Niemczyk, Nugroho Fredivianus, K. Geihs","doi":"10.1145/3019612.3019879","DOIUrl":"https://doi.org/10.1145/3019612.3019879","url":null,"abstract":"Cooperation in multi-agent teams opens up the possibility of solving large-scale and complex tasks. One of the important aspects in performing cooperation is the ability to share information. In this paper, we propose an on-the-fly synthesis of transformations that enables information sharing in a dynamic group of heterogeneous agents. The synthesis is done by a declarative logic program, which is based on semantic descriptions of representations. Our experiments show that this approach is useful to compensate heterogeneity in dynamic multi-agent teams.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88698919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Donghwa Kang, Seoyeon Kim, Jinmang Jung, Bongjae Kim, Hong Min, Junyoung Heo
With the increasing number of online social networking site users, cyber attacks on online social networks are also increasing. Strong connectivity among users of a social networking site makes information of interest spread rapidly. If a worm disguised as attention-grabbing information spreads on online social networking sites, it can cause great damage. Therefore, a fast patch propagation scheme is needed to inhibit the activity of the worm. In this paper, we propose a fast patch propagation scheme for online social networks based on genetic algorithms.
{"title":"Genetic algorithm based patching scheme for worm containment on social network","authors":"Donghwa Kang, Seoyeon Kim, Jinmang Jung, Bongjae Kim, Hong Min, Junyoung Heo","doi":"10.1145/3019612.3019912","DOIUrl":"https://doi.org/10.1145/3019612.3019912","url":null,"abstract":"With the increasing number of online social networking site users, there is increasing cyber attacks on online social networks. Strong connectivity among users of the social networking site makes the information of interest rapidly spread. If the worm disguised as information to attract users' interest spreads on online social networking sites, it can cause great damage. Therefore, a fast patch propagation schemes need to be able to inhibit the activity of the worm. In this paper, we propose a fast patch propagation scheme based on genetic algorithms for online social networks.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78953376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. L. Paul, Raul Ceretta Nunes, Vanderlan D. Oliveira, Diogo Kunde
In distributed simulation, the purpose of multi-resolution methods is to allow the integration of simulations while keeping consistent views across different resolutions. These methods face challenges in the treatment of aggregation and disaggregation, such as the synchronization of events and objects and data reliability when a change of resolution occurs. Doctrines are the basis for operations in which different resources (human and material) with different roles must cooperate for the operation to succeed, and multi-resolution distributed simulation can take advantage of this to ensure the reliability of the aggregation and disaggregation processes. The aim of this paper is to propose a new strategy for handling multi-resolution that builds converting rules from dynamic doctrine rules. The multi-resolution converter uses dynamic rules to describe aspects of the doctrine in the aggregation and disaggregation processes, increasing the reliability of multi-resolution distributed simulations. The strategy was implemented on a High Level Architecture (HLA) federation with virtual and constructive Commercial Off-The-Shelf (COTS) simulators. The experiments showed the flexibility of the strategy in managing different doctrines.
{"title":"Doctrine based multi-resolution HLA distributed simulation","authors":"R. L. Paul, Raul Ceretta Nunes, Vanderlan D. Oliveira, Diogo Kunde","doi":"10.1145/3019612.3019727","DOIUrl":"https://doi.org/10.1145/3019612.3019727","url":null,"abstract":"In distributed simulation, the purpose of multi-resolution methods is to allow simulation integration getting consistent views of different resolutions. In these methods, challenges in the treatment of aggregation and disaggregation can be found, such as synchronization of events and objects, and data reliability when occurs a change of resolution. Doctrines are the basis for operations involving different resources (human and material) with different roles to succeed in operation, and multi-resolution in distributed simulation can take advantage of this factor to ensure the reliability of aggregation and disaggregation processes. The aim of this paper is to propose a new strategy for treatment of multi-resolution, which builds converting rules from doctrine dynamic rules. The multi-resolution converter uses dynamic rules to describe aspects of the doctrine in the aggregation and disaggregation processes, increasing the reliability of multi-resolution distributed simulations. This strategy was implemented on a High Level Architecture (HLA) federation with virtual and constructive Commercial Off- The-Shelf (COTS) simulators. The experiments showed the flexibility of this strategy on managing different doctrines.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78953565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-robot teams can play a crucial role in many applications such as exploration or search and rescue operations. One of the most important problems in the multi-robot context is path planning. It is particularly challenging because the team of robots must deal with additional constraints, e.g., inter-robot collision avoidance, while searching a much larger action space. Previous works have proposed solutions to this problem, but they present two major drawbacks: (i) the algorithms suffer from high computational complexity, or (ii) they require a communication link between any two robots within the system. This paper presents a method that is both computationally efficient and requires only local communication between neighboring agents. We formulate multi-robot path planning as a distributed constraint optimization problem. Specifically, our approach combines the asynchronous distributed constraint optimization algorithm (Adopt) [15] with sampling-based planners to obtain collision-free paths, which allows us to take into account both kinematic and kinodynamic constraints of the individual robots. The paper analyzes the performance and scalability of the approach in simulation and presents real experiments employing a team of several robots.
{"title":"An asynchronous distributed constraint optimization approach to multi-robot path planning with complex constraints","authors":"Alberto Viseras Ruiz, Valentina Karolj, L. Merino","doi":"10.1145/3019612.3019708","DOIUrl":"https://doi.org/10.1145/3019612.3019708","url":null,"abstract":"Multi-robot teams can play a crucial role in many applications such as exploration, or search and rescue operations. One of the most important problems within the multi-robot context is path planning. This has been shown to be particularly challenging, as the team of robots must deal with additional constraints, e.g. inter-robot collision avoidance, while searching in a much larger action space. Previous works have proposed solutions to this problem, but they present two major drawbacks: (i) algorithms suffer from a high computational complexity, or (ii) algorithms require a communication link between any two robots within the system. This paper presents a method to solve this problem, which is both computationally efficient and only requires local communication between neighboring agents. We formulate the multirobot path planning as a distributed constraint optimization problem. Specifically, in our approach the asynchronous distributed constraint optimization algorithm (Adopt) [15] is combined with sampling-based planners to obtain collision free paths, which allows us to take into account both kinematic and kinodynamic constraints of the individual robots. The paper analyzes the performance and scalability of the approach using simulations, and presents real experiments employing a team of several robots.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77449481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Ragone, Paolo Tomeo, Corrado Magarelli, T. D. Noia, M. Palmonari, A. Maurino, E. Sciascio
Recommender systems are emerging as an interesting application scenario for Linked Data (LD). By exploiting the knowledge encoded in LD datasets, a new generation of semantics-aware recommendation engines has been developed in recent years. As Linked Data is often very rich and contains much information that may be irrelevant or noisy for a recommendation task, an initial feature-selection step is required to select the most meaningful portion of the original dataset. Many feature-selection approaches that exploit different statistical dimensions of the original data have been proposed in the literature. In this paper we investigate the role of the semantics encoded in an ontological hierarchy when it is exploited to select the most relevant properties for a recommendation task. In particular, we compare an approach based on schema summarization with a "classical" one, i.e., Information Gain. We evaluate the performance of the two methods in terms of accuracy and aggregate diversity in an experimental testbed relying on the MovieLens dataset.
{"title":"Schema-summarization in linked-data-based feature selection for recommender systems","authors":"A. Ragone, Paolo Tomeo, Corrado Magarelli, T. D. Noia, M. Palmonari, A. Maurino, E. Sciascio","doi":"10.1145/3019612.3019837","DOIUrl":"https://doi.org/10.1145/3019612.3019837","url":null,"abstract":"Recommender systems are emerging as an interesting application scenario for Linked Data (LD). In fact, by exploiting the knowledge encoded in LD datasets, a new generation of semantics-aware recommendation engines have been developed in the last years. As Linked Data is often very rich and contains many information that may result irrelevant and noisy for a recommendation task, an initial step of feature selection is always required in order to select the most meaningful portion of the original dataset. Many approaches have been proposed in the literature for feature selection that exploit different statistical dimensions of the original data. In this paper we investigate the role of the semantics encoded in an ontological hierarchy when exploited to select the most relevant properties for a recommendation task. In particular, we compare an approach based on schema summarization with a \"classical\" one, i.e., Information Gain. We evaluated the performance of the two methods in terms of accuracy and aggregate diversity by setting up an experimental testbed relying on the Movielens dataset.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76452862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trust networks have been widely used to mitigate the data sparsity and cold-start problems of collaborative filtering. Recently, some approaches have been proposed that exploit explicit signed trust relationships, i.e., trust and distrust relationships. These approaches ignore the fact that users who trust or distrust each other in a trust network may still have different preferences in real life. Most of these approaches also treat distrust as transitive in the same way as trust, whereas other existing work has observed that trust is transitive while distrust is intransitive. Moreover, explicit signed trust relationships are fairly sparse and may not suffice to infer users' true preferences. In this paper, we propose to create implicit signed trust relationships and exploit them, along with explicit signed trust relationships, to alleviate the sparsity of trust relationships. We also confirm the similarity (resp. dissimilarity) of implicit and explicit trust (resp. distrust) relationships by using the similarity score between users, so that users' true preferences can be inferred. In addition to these strategies, we propose a matrix factorization model that simultaneously exploits implicit and explicit signed trust relationships along with rating information and also handles the transitivity of trust and the intransitivity of distrust. Extensive experiments on the Epinions dataset show that the proposed approach outperforms existing approaches in terms of accuracy.
{"title":"Exploiting implicit and explicit signed trust relationships for effective recommendations","authors":"Irfan Ali, Jiwon Hong, Sang-Wook Kim","doi":"10.1145/3019612.3019666","DOIUrl":"https://doi.org/10.1145/3019612.3019666","url":null,"abstract":"Trust networks have been widely used to mitigate the data sparsity and cold-start problems of collaborative filtering. Recently, some approaches have been proposed which exploit explicit signed trust relationships, i.e., trust and distrust relationships. These approaches ignore the fact that users despite trusting/distrusting each other in a trust network may have different preferences in real-life. Most of these approaches also handle the notion of the transitivity of distrust as well as trust. However, other existing work observed that trust is transitive while distrust is intransitive. Moreover, explicit signed trust relationships are fairly sparse and may not contribute to infer true preferences of users. In this paper, we propose to create implicit signed trust relationships and exploit them along with explicit signed trust relationship to solve sparsity problem of trust relationships. We also confirm the similarity (resp. dissimilarity) of implicit and explicit trust (resp. distrust) relationships by using the similarity score between users so that users' true preferences can be inferred. In addition to these strategies, we also propose a matrix factorization model that simultaneously exploits implicit and explicit signed trust relationships along with rating information and also handles transitivity of trust and intransitivity of distrust. Extensive experiments on Epinions dataset show that the proposed approach outperforms existing approaches in terms of accuracy.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83903528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Initialization of the background model, also known as the foreground-free image, against outliers or noise is a very important task for various computer vision applications. Tensor decomposition using Higher Order Robust Principal Component Analysis has been shown to be a very efficient framework for exact recovery of the low-rank component (which corresponds to the background model). A recent study shows that tensor decomposition based on online optimization into low-rank and sparse components addresses the memory and computational limitations of earlier approaches. However, it relies on iterative optimization of the nuclear norm, which is not always robust when large entries of the input observation tensor are corrupted by outliers. As a result, background modeling performs poorly in the presence of an increasing number of outliers. To address this issue, this paper presents an extension of an online tensor decomposition into low-rank and sparse components using a maximum-norm constraint. Since the maximum-norm regularizer is more robust than the nuclear norm against a large number of outliers, the proposed extended tensor-based decomposition framework with the maximum norm provides an accurate estimation of the background scene. Experimental evaluations on synthetic data as well as real datasets such as Scene Background Modeling Initialization (SBMI) show encouraging performance on the task of background modeling compared to state-of-the-art approaches.
{"title":"SBMI-LTD: stationary background model initialization based on low-rank tensor decomposition","authors":"S. Javed, T. Bouwmans, Soon Ki Jung","doi":"10.1145/3019612.3019687","DOIUrl":"https://doi.org/10.1145/3019612.3019687","url":null,"abstract":"Initialization of background model also known as foreground-free image against outliers or noise is a very important task for various computer vision applications. Tensor deomposition using Higher Order Robust Principal Component Analysis has been shown to be a very efficient framework for exact recovery of low-rank (corresponds to the background model) component. Recent study shows that tensor decomposition based on online optimization into low- rank and sparse component addressed the limitations of memory and computational issues as compared to the earlier approaches. However, it is based on the iterative optimization of nuclear norm which is not always robust when the large entries of an input observation tensor are corrupted against outliers. Therefore, the task of background modeling shows a weak performance in the presence of an increasing number of outliers. To address this issue, this paper presents an extension of an online tensor decomposition into low-rank and sparse components using a maximum norm constraint. Since, maximum norm regularizer is more robust than nuclear norm against large number of outliers, therefore the proposed extended tensor based decomposition framework with maximum norm provides an accurate estimation of background scene. Experimental evaluations on synthetic data as well as real dataset such as Scene Background Modeling Initialization (SBMI) show encouraging performance for the task of background modeling as compared to the state of the art approaches.","PeriodicalId":20728,"journal":{"name":"Proceedings of the Symposium on Applied Computing","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79925224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}