"How to Build a Global Digital Mathematics Library" by S. Watt. SYNASC 2016, DOI: 10.1109/SYNASC.2016.019.
As with many other areas of study, mathematical knowledge has been produced for centuries and will continue to be produced for centuries to come. The records have taken many forms, from manuscripts to printed journals and now digital media. Unlike many other fields, however, much of mathematical knowledge has a high degree of precision and objectivity that both gives it permanent utility and makes it susceptible to mechanized treatment. We outline a path toward assembling the world's mathematical knowledge. While initially in the form of a comprehensive digital library of page images, we expect it to evolve toward a knowledge base supporting sophisticated queries and automated reasoning. It is the aim of the nascent International Mathematical Knowledge Trust to provide a framework and to foster a community to make progress in this direction. We foresee that such a knowledge base will enhance the capacity of individual mathematicians, accelerate discovery, and enable new kinds of collaboration.

"Analysis of OpenCL Work-Group Reduce for Intel GPUs" by Grigore Lupescu, E. Slusanschi, N. Tapus. SYNASC 2016, DOI: 10.1109/SYNASC.2016.070.
As hardware becomes more flexible in terms of programming, software APIs must expose hardware features in a portable way. Additions in the OpenCL 2.0 API expose thread communication through the newly defined work-group functions. In this paper we focus on two implementations of the work-group functions in the OpenCL compiler backend for Intel's GPUs. We first describe the particularities of Intel's GEN GPU architecture and the Beignet OpenCL open source project. Both work-group implementations are then detailed, one based on thread-to-thread message passing and the other on thread-to-shared-local-memory reads/writes. The focus is on choosing the optimal variant based on how each implementation maps to the hardware and on its impact on performance.

"Towards a Multi-Agent System for Medical Records Processing and Knowledge Discovery" by Todor Ivascu, Adriana Dinis, V. Negru. SYNASC 2016, DOI: 10.1109/SYNASC.2016.067.
Medical files and observation papers have always been an important source of knowledge. Unfortunately, most of the time they are still stored as physical documents, either printed or handwritten, which makes it difficult to transfer this precious information from one place to another, or to centralize it and extract new knowledge from it. Advances in computer science now make this problem tractable. In this paper we propose a patient-centered multi-agent system that extracts relevant information from patients' health records and stores that knowledge in a centralized data store, based on a predefined ontology scheme. The system's purpose is to standardize and enrich the knowledge by performing various mining tasks on the given text. The ultimate goal is to provide hospital departments with a tool they can query for useful information about patients' medication, treatments, and scheduled tests and exams, and which offers suggestions regarding their treatment intentions.

"Structural vs. Cyclic Induction: A Report on Some Experiments with Coq" by Sorin Stratulat. SYNASC 2016, DOI: 10.1109/SYNASC.2016.018.
Structural and (Noetherian) cyclic induction are two instances of the Noetherian induction principle adapted to reasoning in first-order logic. From a theoretical point of view, every structural proof can be converted to a cyclic proof, but the converse is only conjectured. From a practical point of view, i) structural induction principles are built in or automatically derived from the analysis of recursive data structures by many theorem provers, and ii) the implementation of cyclic induction reasoning may require additional resources such as functional schemas, libraries, and human interaction. In this paper, we first define a set of conjectures that can be proved by cyclic induction following a similar scenario. Next, we implement cyclic induction reasoning in the Coq proof assistant. Finally, we show that the scenarios for proving these conjectures with structural induction differ in the number of induction steps and lemmas, as well as in the proof scenario itself. We identified three conjectures from this set that are hard or impossible to prove by structural induction.

"Censoring Sensitive Data from Images" by Stefan Postavaru, Ionut-Mihaita Plesea. SYNASC 2016, DOI: 10.1109/SYNASC.2016.073.
In recent years, the vast volume of digital images available has enabled a wide range of learning methods to be applied, making human input obsolete for many tasks. In this paper, we address the problem of removing private information from images. When confronted with a relatively large number of pictures to be made public, one may find the task of manually editing out sensitive regions infeasible. Ideally, we would like to use a machine learning approach to automate this task. We implement and compare different architectures based on convolutional neural networks, with generative and discriminative models competing in an adversarial fashion.

"Detecting Malicious URLs: A Semi-Supervised Machine Learning System Approach" by A. Gabriel, Dragos Gavrilut, Baetu Ioan Alexandru, Adrian-Stefan Popescu. SYNASC 2016, DOI: 10.1109/SYNASC.2016.045.
As the malware industry grows, so do the means of infecting a computer or device. One of the most common infection vectors is to use the Internet as an entry point. Not only is this method easy to use, but because URLs come in many different forms and shapes, it is genuinely difficult to distinguish a malicious URL from a benign one. Furthermore, every system that tries to classify or detect URLs must work on a real-time stream and needs to provide a fast response for every URL submitted for analysis (in our context, a fast response means less than 300-400 milliseconds per URL). From a malware creator's point of view, it is easy to change such URLs multiple times a day. As a general observation, malicious URLs tend to have a short life: they appear, serve malicious content for several hours, and are then shut down, usually by the ISP where they reside. This paper presents a system that analyzes URLs in network traffic and is capable of adjusting its detection models to adapt to new malicious content. Every correctly classified URL is reused as part of a new dataset that acts as the backbone for new detection models. The system also uses different clustering techniques to identify the lack of features on malicious URLs, thus creating a way to improve detection for this kind of threat.

"Levenberg-Marquardt Learning Algorithm for Quaternion-Valued Neural Networks" by Călin-Adrian Popa. SYNASC 2016, DOI: 10.1109/SYNASC.2016.050.
In this paper, we present the derivation of the Levenberg-Marquardt algorithm for training quaternion-valued feedforward neural networks, using the framework of the HR calculus. Its performance in the real- and complex-valued cases motivates its extension to the quaternion domain as well. The proposed method is demonstrated on time series prediction applications, showing a significant improvement over the quaternion gradient descent algorithm.

"Resource Bounding for Non-Preemptive Task Scheduling on a Multiprocessor Platform" by V. Radulescu, S. Andrei, A. Cheng. SYNASC 2016, DOI: 10.1109/SYNASC.2016.035.
Task scheduling, a fundamental problem for real-time systems, has been approached from various points of view and for various classes of hardware/software configurations. Most of the results currently available have been obtained for preemptive scheduling. However, the non-preemptive case is also of great interest, and its higher complexity requires different solutions. This paper builds on previous results of the authors regarding the minimum number of processors necessary for a feasible schedule of a given task set to exist. Whereas previous work considered single-instance tasks, the focus now moves to periodic tasks, and the existing results are extended to cover the new requirements. In addition, an existing scheduling algorithm, which aims to combine the characteristics of the well-known EDF and LLF techniques, is adapted to deal with periodic tasks.

"The Quest for Perfect and Compact Symmetry Breaking for Graph Problems" by Marijn J. H. Heule. SYNASC 2016, DOI: 10.1109/SYNASC.2016.034.
Symmetry breaking is a crucial technique to solve many graph problems. However, current state-of-the-art techniques break graph symmetries only partially, causing search algorithms to unnecessarily explore many isomorphic parts of the search space. We study properties of perfect symmetry breaking for graph problems. One promising and surprising result on small-sized graphs (up to order five) is that perfect symmetry breaking can be achieved using a compact propositional formula in which each literal occurs at most twice. At least for small graphs, perfect symmetry breaking can be expressed more compactly than the existing (partial) symmetry-breaking methods. We present several techniques to compute and analyze perfect symmetry-breaking formulas.

"Continuation Semantics of a Language Inspired by Membrane Computing with Symport/Antiport Interactions" by Gabriel Ciobanu, E. Todoran. SYNASC 2016, DOI: 10.1109/SYNASC.2016.060.
We investigate the semantics of a language inspired by membrane computing in which computation proceeds in a maximally parallel way, involving multisets of objects distributed into hierarchical structures of regions delimited by membranes. The language provides primitives for parallel communication of objects across membranes in the form of rules that can be used to express symport/antiport interactions. It also provides a primitive for membrane creation. For the language under investigation we present a denotational semantics designed with metric spaces and continuations.
