Brain-Like Approximate Reasoning
A. Przybyszewski
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.2
Humans can easily recognize objects as complex as faces even if they have not previously seen them under such conditions. We would like to find the computational basis of this ability. As an example of our approach we use neurophysiological data from the visual system. The retina and thalamus classify simple light spots, V1 classifies oriented lines, and V4 simple shapes.

The feedforward (FF) pathways form hypotheses by extracting these attributes from the object. The feedback (FB) pathways play a different role: they form predictions. In each area, structure-related predictions are tested against hypotheses. We formulate a theory in which different visual stimuli are described through their condition attributes. Responses of LGN, V1 and V4 neurons to different stimuli are divided into several ranges and treated as decision attributes. Applying rough set theory (Pawlak, 1991 [1]), we divide the stimuli into equivalence classes in the different brain areas. We propose that the relationships between decision rules in each area are determined by the different logic of the FF and FB pathways: FF pathways gather a large number of possible object attributes using logical "AND" (drivers), whereas FB pathways select the right one mainly by logical "OR" (modulators).
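As a rough illustration of the proposed AND/OR asymmetry between the two pathways, the sketch below encodes a stimulus by invented condition attributes and contrasts a conjunctive feedforward rule with a disjunctive feedback rule; the attribute names and values are hypothetical and are not taken from the paper.

```python
# Hypothetical illustration of the AND/OR asymmetry between feedforward (FF)
# and feedback (FB) decision rules; attribute names and values are invented.

def ff_hypothesis(stimulus, required):
    """FF 'driver' rule: all condition attributes must match (logical AND)."""
    return all(stimulus.get(attr) == value for attr, value in required.items())

def fb_prediction(stimulus, alternatives):
    """FB 'modulator' rule: any one predicted attribute bundle suffices (logical OR)."""
    return any(ff_hypothesis(stimulus, alt) for alt in alternatives)

# A stimulus described by condition attributes (orientation in V1, shape in V4, ...).
stimulus = {"orientation": "vertical", "spatial_freq": "high", "shape": "arc"}

# FF pathway: a conjunction of extracted attributes forms a hypothesis.
print(ff_hypothesis(stimulus, {"orientation": "vertical", "shape": "arc"}))  # True

# FB pathway: a higher area predicts several plausible attribute bundles.
predictions = [{"shape": "corner"}, {"shape": "arc"}]
print(fb_prediction(stimulus, predictions))  # True (matches the 'arc' alternative)
```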
{"title":"Brain-Like Approximate Reasoning","authors":"A. Przybyszewski","doi":"10.14236/EWIC/VOCS2008.2","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.2","url":null,"abstract":"Humans can easily recognize objects as complex as faces even if they have not seen them in such conditions before. We would like to find out computational basis of this ability. As an example of our approach we use the neurophysiological data from the visual system. In the retina and thalamus simple light spots are classified, in V1 - oriented lines and in V4 - simple shapes. \u0000 \u0000The feedforward (FF) pathways by extracting above attributes from the object form hypotheses. The feedback (FB) pathways play different roles - they form predictions. In each area structure related predictions are tested against hypotheses. We formulate a theory in which different visual stimuli are described through their condition attributes. Responses in LGN, V1, and V4 neurons to different stimuli are divided into several ranges and are treated as decision attributes. Applying rough set theory (Pawlak, 1991 -[1]) we have divided our stimuli into equivalent classes in different brain areas. We propose that relationships between decision rules in each area are determined in two ways: by different logic of FF and FB pathways: FF pathways gather a huge number of possible objects attributes together using logical \"AND\" (drivers), and FB pathways choose the right one mainly by logical \"OR\" (modulators).","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114932234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automation of the Complete Sample Management in a Biotech Laboratory
Martin Wojtczyk, Michael Marszalek, A. Knoll, R. Heidemann, K. Joeris, Chun Zhang, M. Burnett, T. Monica
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.9
Both Robots and Personal Computers established new markets about 30 years ago and were enabling factors in Automation and Information Technology. However, while Personal Computers can nowadays be found in almost every home, the domain of Robots is still mostly restricted to industrial automation. Because of the physical impact of robots, a safe design is essential; most robots still lack one, which prevents their use for personal applications, although a slow change can be noticed with the introduction of dedicated robots for specific tasks, which can be classified as service robots.

Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. The result is the combination of a manipulator with a mobile platform, extended with the sensors necessary to carry out a complete sample management process in a mammalian cell culture plant. After the initial development in Germany, the mobile manipulator was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the biopharmaceutical sector located in the San Francisco Bay Area. The platform was installed and successfully tested there in a pilot plant. This project demonstrates the successful combination of two key technologies, Information Technology and Robotics, and their application in a Life Science pilot plant.
{"title":"Automation of the Complete Sample Management in a Biotech Laboratory","authors":"Martin Wojtczyk, Michael Marszalek, A. Knoll, R. Heidemann, K. Joeris, Chun Zhang, M. Burnett, T. Monica","doi":"10.14236/EWIC/VOCS2008.9","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.9","url":null,"abstract":"Both Robots and Personal Computers established new markets about 30 years ago and were enabling factors in Automation and Information Technology. However, while you can see Personal Computers in almost every home nowadays, the domain of Robots in general still is mostly restricted to industrial automation. Due to the physical impact of robots, a safe design is essential, which most robots still lack of and therefore prevent their application for personal use, although a slow change can be noticed by the introduction of dedicated robots for specific tasks, which can be classified as service robots. \u0000 \u0000Our approach to service robots was driven by the idea for supporting lab personnel in a biotechnology laboratory. That resulted in the combination of a manipulator with a mobile platform, extended with the necessary sensors to carry out a complete sample management process in a mammalian cell culture plant. After the initial development in Germany, the mobile manipulator was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products, located in the San Francisco bay area. The platform was installed and successfully tested there in a pilot plant. This project demonstrates the successful combination of both key technologies: Information Technology and Robotics - and its application in a Life Science pilot plant.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133335210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing Knowledge-Based Systems using the Semantic Web
D. Corsar, D. Sleeman
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.3
The benefits of reuse have long been recognized in the knowledge engineering community, where the dream of creating knowledge-based systems on-the-fly from libraries of reusable components is still to be fully realised. In this paper we present a two-stage methodology for creating knowledge-based systems: first, reusing domain knowledge by mapping it, where appropriate, to the requirements of a generic problem solver; and second, using this mapped knowledge and the requirements of the problem solver to "drive" the acquisition of the additional knowledge it needs. For example, suppose we have available a knowledge-based system composed of a propose-and-revise problem solver linked with an appropriate knowledge base/ontology from the elevator domain. To create a diagnostic knowledge-based system in the same domain, we need to map relevant information from the elevator knowledge base/ontology, such as component information, to a diagnostic problem solver, and then to extend it with diagnostic information such as malfunctions, symptoms and repairs for each component. We have developed MAKTab, a Protégé plug-in which supports both these steps and results in a composite knowledge-based system which is executable. In the final section of this paper we discuss the issues involved in extending MAKTab so that it can operate in the context of the (Semantic) Web, using the ideas of centralised mapping repositories and mapping composition. This work contributes to the vision of a Web containing components (both problem solvers and instantiated ontologies, i.e. knowledge bases) that tools like MAKTab can use to create knowledge-based systems, which can in turn enrich the Web by providing further knowledge-based Web services.
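A minimal sketch of the two-stage idea, with an invented miniature elevator knowledge base and invented role names (this is not MAKTab's actual data model or API): existing domain knowledge is mapped onto the roles a diagnostic problem solver expects, and the roles left unfilled become the agenda for the second, acquisition stage.

```python
# Hypothetical sketch of the two-stage methodology (not MAKTab's actual API):
# 1) map existing domain knowledge onto a problem solver's input roles,
# 2) let the unmet roles drive acquisition of the missing knowledge.

elevator_kb = {
    "components": ["motor", "door", "cable", "controller"],
    "connections": [("controller", "motor"), ("motor", "cable")],
}

# Roles the diagnostic problem solver needs filled, and where they map from.
diagnostic_roles = {
    "observables": "components",      # reusable from the elevator KB
    "structure":   "connections",     # reusable from the elevator KB
    "malfunctions": None,             # must be acquired from the expert
    "symptoms":     None,
    "repairs":      None,
}

mapped = {role: elevator_kb[src] for role, src in diagnostic_roles.items() if src}
to_acquire = [role for role, src in diagnostic_roles.items() if src is None]

print(mapped)       # knowledge reused by mapping
print(to_acquire)   # acquisition agenda: ['malfunctions', 'symptoms', 'repairs']
```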
{"title":"Developing Knowledge-Based Systems using the Semantic Web","authors":"D. Corsar, D. Sleeman","doi":"10.14236/EWIC/VOCS2008.3","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.3","url":null,"abstract":"The benefits of reuse have long been recognized in the knowledge engineering community where the dream of creating knowledge-based systems on-the-fly from libraries of reusable components is still to be fully realised. In this paper we present a two stage methodology for creating knowledge-based systems: first reusing domain knowledge by mapping it, where appropriate, to the requirements of a generic problem solver; and secondly using this mapped knowledge and the requirements of the problem solver to \"drive\" the acquisition of the additional knowledge it needs. For example, suppose we have available a knowledge-based systems which is composed of a propose-and-revise problem solver linked with an appropriate knowledge base/ontology from the elevator domain. Then to create a diagnostic knowledge-based systems in the same domain, we require to map relevant information from the elevator knowledge base/ontology, such as component information, to a diagnostic problem solver, and then to extend it with diagnostic information such as malfunctions, symptoms and repairs for each component. We have developed MAKTab, a Protege plug-in which supports both these steps and results in a composite knowledgebased systems which is executable. In the final section of this paper we discuss the issues involved in extending MAKTab so that it would be able to operate in the context of the (Semantic) Web. Here we use the idea of centralised mapping repositories and mapping composition. This work contributes to the vision of the Web, which contains components (both problem solvers and instantiated ontologies (knowledge bases)) that tools (like MAKTab) can use to create knowledge-based systems which subsequently can enhance the richness of the Web by providing yet further knowledge-based Web-services.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121641250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tools for Traceable Security Verification
J. Jürjens, Y. Yu, A. Bauer
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.31
Dependable systems evolution has been identified by the UK Computing Research Committee (UKCRC) as one of the current grand challenges for computer science. We present work towards addressing this challenge which focuses on one facet of dependability, namely data security. We give an overview of an approach to model-based security verification which provides a traceability link to the implementation. The approach uses a design model in the UML security extension UMLsec which can be formally verified against high-level security requirements such as secrecy and authenticity. An implementation of the specification can then be verified against the model by making use of run-time verification through the traceability link. The approach supports software evolution insofar as the traceability mapping is updated when refactoring operations are regressively performed using our tool-supported refactoring technique. The proposed method has been applied to an implementation of the Internet security protocol SSL.
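The following sketch (hypothetical names throughout, not the authors' tooling) illustrates the role of the traceability link: model-level elements are mapped to code-level identifiers, the mapping is updated when a refactoring renames code, and a run-time check for a secrecy requirement is resolved through the mapping rather than hard-coded against the implementation.

```python
# Hypothetical sketch of a traceability link between model elements and code
# identifiers, kept consistent under a rename refactoring (not the authors' tool).

trace = {"Model::sessionKey": "ssl.session_key", "Model::handshake": "ssl.do_handshake"}

def apply_rename(trace, old_name, new_name):
    """Update the traceability mapping when a refactoring renames a code element."""
    return {model: (new_name if code == old_name else code) for model, code in trace.items()}

trace = apply_rename(trace, "ssl.do_handshake", "ssl.perform_handshake")

# Run-time verification through the link: the monitored code element is looked up
# from the model-level secrecy requirement, so the check survives refactorings.
monitored = trace["Model::sessionKey"]
public_log = ["ssl.perform_handshake started", "cipher negotiated"]
assert all(monitored not in entry for entry in public_log), "secrecy requirement violated"
print(f"secrecy of {monitored} holds on this trace")
```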
{"title":"Tools for Traceable Security Verification","authors":"J. Jürjens, Y. Yu, A. Bauer","doi":"10.14236/EWIC/VOCS2008.31","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.31","url":null,"abstract":"Dependable systems evolution has been identified by the UK Computing Research Committee (UKCRC) as one of the current grand challenges for computer science. We present work towards addressing this challenge which focusses on one facet of dependability, namely data security: We give an overview on an approach for modelbased security verification which provides a traceability link to the implementation. The approach uses a design model in the UML security extension UMLsec which can be formally verified against high-level security requirements such as secrecy and authenticity. An implementation of the specification can then be verified against the model by making use of run-time verification through the traceability link. The approach supports software evolution in so far as the traceability mapping is updated when refactoring operations are regressively performed using our tool-supported refactoring technique. The proposed method has been applied to an implementation of the Internet security protocol SSL.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122460042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved SIFT-Features Matching for Object Recognition
Faraj Alhwarin, Chao Wang, Danijela Ristić-Durrant, A. Gräser
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.16
The SIFT (Scale Invariant Feature Transform) algorithm proposed by Lowe [1] is an approach for extracting distinctive invariant features from images. It has been successfully applied to a variety of computer vision problems based on feature matching, including object recognition, pose estimation, image retrieval and many others. However, in real-world applications there is still a need to improve the algorithm's robustness with respect to the correct matching of SIFT features. In this paper, an improvement of the original SIFT algorithm providing more reliable feature matching for the purpose of object recognition is proposed. The main idea is to divide the features extracted from both the test and the model object image into several sub-collections before they are matched, with the sub-collections determined by the octave, that is, the frequency domain, from which the features arise.

To evaluate the performance of the proposed approach, it was applied to real images acquired with the stereo camera system of the rehabilitation robotic system FRIEND II. The experimental results show an increase in the number of correctly matched features and, at the same time, a decrease in the number of outliers in comparison with the original SIFT algorithm. Compared with the original SIFT algorithm, a 40% reduction in processing time was achieved for the matching of the stereo images.
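A simplified sketch of octave-wise matching (not the authors' implementation): features are represented here as (octave, descriptor) pairs produced by some SIFT front end, and candidates are compared only within the same octave, using Lowe's ratio test; the data below is synthetic.

```python
import numpy as np

# Illustrative sketch only: features are (octave, descriptor) pairs; a real system
# would obtain them from a SIFT implementation. Matching uses Lowe's ratio test.

def match_by_octave(feats_a, feats_b, ratio=0.8):
    """Match descriptors octave by octave instead of across the whole collections."""
    matches = []
    octaves = {o for o, _ in feats_a} & {o for o, _ in feats_b}
    for o in octaves:
        da = [(i, d) for i, (oa, d) in enumerate(feats_a) if oa == o]
        db = [(j, d) for j, (ob, d) in enumerate(feats_b) if ob == o]
        for i, d in da:
            dists = sorted((np.linalg.norm(d - d2), j) for j, d2 in db)
            if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
                matches.append((i, dists[0][1]))
    return matches

rng = np.random.default_rng(0)
img_a = [(rng.integers(0, 3), rng.random(128)) for _ in range(20)]
img_b = [(o, d + 0.01 * rng.random(128)) for o, d in img_a]  # perturbed copies
print(len(match_by_octave(img_a, img_b)))  # number of recovered correspondences
```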
{"title":"Improved SIFT-Features Matching for Object Recognition","authors":"Faraj Alhwarin, Chao Wang, Danijela Ristić-Durrant, A. Gräser","doi":"10.14236/EWIC/VOCS2008.16","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.16","url":null,"abstract":"The SIFT algorithm (Scale Invariant Feature Transform) proposed by Lowe [1] is an approach for extracting distinctive invariant features from images. It has been successfully applied to a variety of computer vision problems based on feature matching including object recognition, pose estimation, image retrieval and many others. However, in real-world applications there is still a need for improvement of the algorithm's robustness with respect to the correct matching of SIFT features. In this paper, an improvement of the original SIFT algorithm providing more reliable feature matching for the purpose of object recognition is proposed. The main idea is to divide the features extracted from both the test and the model object image into several sub-collections before they are matched. The features are divided into several sub-collections considering the features arising from different octaves, that is from different frequency domains. \u0000 \u0000To evaluate the performance of the proposed approach, it was applied to real images acquired with the stereo camera system of the rehabilitation robotic system FRIEND II. The experimental results show an increase in the number of correct features matched and, at the same time, a decrease in the number of outliers in comparison with the original SIFT algorithm. Compared with the original SIFT algorithm, a 40% reduction in processing time was achieved for the matching of the stereo images.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128560380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Component-Based Description of Programming Languages
P. Mosses
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.23
Research in formal description of programming languages over the past four decades has led to some significant achievements. These include formal syntax and semantics for complete major programming languages, and theoretical foundations for novel features that might be included in future languages. Nevertheless, to give a completely formal, validated description of any significant programming language using the conventional frameworks remains an immense effort, disproportionate to its perceived benefits. Our diagnosis of the causes of this disappointing situation highlights two major deficiencies in the pragmatic aspects of formal language descriptions in conventional frameworks: lack of reusable components, and poor tool support.

Part of the proposed remedy is a radical shift to a novel component-based paradigm for the development of complete language descriptions, based on simple interfaces between descriptions of syntactic and semantic aspects, and employing frameworks that allow independent description of individual programming constructs. The introduction of a language-independent notation for common programming constructs maximises the reusability of components. Tool support for component-based language description is being developed using the ASF+SDF Meta-Environment; the aim is to provide an efficient component-based workbench for use in design and implementation of future programming languages, accompanied by an online repository for validated formal descriptions of programming constructs and languages.
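The sketch below illustrates the component-based idea in miniature, using Python rather than ASF+SDF or any notation from the paper: each programming construct is described by an independent component, and a language description is obtained by composing the components; the construct names and representation are invented.

```python
# Illustrative sketch (Python, not ASF+SDF): each programming construct is
# described by an independent, reusable component; a language is their composition.

constructs = {}

def construct(name):
    def register(fn):
        constructs[name] = fn
        return fn
    return register

@construct("num")
def eval_num(node, env):        # ("num", 3)
    return node[1]

@construct("var")
def eval_var(node, env):        # ("var", "x")
    return env[node[1]]

@construct("add")
def eval_add(node, env):        # ("add", e1, e2)
    return evaluate(node[1], env) + evaluate(node[2], env)

def evaluate(node, env):
    """The 'language' is just the set of construct components plugged together."""
    return constructs[node[0]](node, env)

print(evaluate(("add", ("num", 2), ("var", "x")), {"x": 40}))  # 42
```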
{"title":"Component-Based Description of Programming Languages","authors":"P. Mosses","doi":"10.14236/EWIC/VOCS2008.23","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.23","url":null,"abstract":"Research in formal description of programming languages over the past four decades has led to some significant achievements. These include formal syntax and semantics for complete major programming languages, and theoretical foundations for novel features that might be included in future languages. Nevertheless, to give a completely formal, validated description of any significant programming language using the conventional frameworks remains an immense effort, disproportionate to its perceived benefits. Our diagnosis of the causes of this disappointing situation highlights two major deficiencies in the pragmatic aspects of formal language descriptions in conventional frameworks: lack of reusable components, and poor tool support. \u0000 \u0000Part of the proposed remedy is a radical shift to a novel component-based paradigm for the development of complete language descriptions, based on simple interfaces between descriptions of syntactic and semantic aspects, and employing frameworks that allow independent description of individual programming constructs. The introduction of a language-independent notation for common programming constructs maximises the reusability of components. Tool support for component-based language description is being developed using the ASF+SDF Meta-Environment; the aim is to provide an efficient component-based workbench for use in design and implementation of future programming languages, accompanied by an online repository for validated formal descriptions of programming constructs and languages.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125598908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overcoming Software Fragility with Interacting Feedback Loops and Reversible Phase Transitions
P. V. Roy
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.24
Programs are fragile for many reasons, including software errors, partial failures, and network problems. One way to make software more robust is to design it from the start as a set of interacting feedback loops. Studying and using feedback loops is an old idea that dates back at least to Norbert Wiener's work on Cybernetics. Up to now almost all work in this area has focused on how to optimize single feedback loops. We show that it is important to design software with multiple interacting feedback loops. We present examples taken from both biology and software to substantiate this. We are realizing these ideas in the SELFMAN project: extending structured overlay networks (a generalization of peer-to-peer networks) for large-scale distributed applications. Structured overlay networks are a good example of systems designed with interacting feedback loops. Using ideas from physics, we postulate that these systems can potentially handle extremely hostile environments. If the system is properly designed, it will perform a reversible phase transition when the node failure rate increases beyond a critical point. The structured overlay network will make a transition from a single connected ring to a set of disjoint rings and back again when the failure rate decreases. We are exploring how to expose this phase transition to the application so that it can continue to provide a service. For validation we are building three realistic applications taken from industrial case studies, using a distributed transaction layer built on top of the overlay. Finally, we propose a research agenda to create a practical design methodology for building systems based on the use of interacting feedback loops and reversible phase transitions.
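As a toy illustration of interacting feedback loops (numbers and control law invented, unrelated to SELFMAN or structured overlay networks), the sketch below nests an inner loop that corrects a value toward a set point inside an outer loop that observes the inner loop's error and adapts the set point.

```python
# Toy illustration (invented numbers): an inner feedback loop tracks a set point,
# while an outer loop monitors the error and adapts the inner loop's set point.

value, set_point = 0.0, 10.0
for step in range(50):
    error = set_point - value
    value += 0.3 * error                 # inner loop: proportional correction
    if step % 10 == 9 and abs(error) < 0.5:
        set_point *= 0.9                 # outer loop: relax the goal once it is met
print(round(value, 2), round(set_point, 2))
```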
{"title":"Overcoming Software Fragility with Interacting Feedback Loops and Reversible Phase Transitions","authors":"P. V. Roy","doi":"10.14236/EWIC/VOCS2008.24","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.24","url":null,"abstract":"Programs are fragile for many reasons, including software errors, partial failures, and network problems. One way to make software more robust is to design it from the start as a set of interacting feedback loops. Studying and using feedback loops is an old idea that dates back at least to Norbert Wiener's work on Cybernetics. Up to now almost all work in this area has focused on how to optimize single feedback loops. We show that it is important to design software with multiple interacting feedback loops. We present examples taken from both biology and software to substantiate this. We are realizing these ideas in the SELFMAN project: extending structured overlay networks (a generalization of peer-to-peer networks) for large-scale distributed applications. Structured overlay networks are a good example of systems designed with interacting feedback loops. Using ideas from physics, we postulate that these systems can potentially handle extremely hostile environments. If the system is properly designed, it will perform a reversible phase transition when the node failure rate increases beyond a critical point. The structured overlay network will make a transition from a single connected ring to a set of disjoint rings and back again when the failure rate decreases. We are exploring how to expose this phase transition to the application so that it can continue to provide a service. For validation we are building three realistic applications taken from industrial case studies, using a distributed transaction layer built on top of the overlay. Finally, we propose a research agenda to create a practical design methodology for building systems based on the use of interacting feedback loops and reversible phase transitions.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129023905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object representatives: a uniform abstraction for pointer information
E. Bodden, Patrick Lam, L. Hendren
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.32
Pointer analyses enable many subsequent program analyses and transformations by statically disambiguating references to the heap. However, different client analyses may have different sets of pointer analysis needs, and each must pick some pointer analysis along the cost/precision spectrum to meet those needs. Some analysis clients employ combinations of pointer analyses to obtain better precision with reduced analysis times. Our goal is to ease the task of developing client analyses by enabling composition and substitutability for pointer analyses. We therefore propose object representatives, which statically represent runtime objects. A representative encapsulates the notion of object identity, as observed through the representative's aliasing relations with other representatives. Object representatives enable pointer analysis clients to disambiguate references to the heap in a uniform yet flexible way. Representatives can be generated from many combinations of pointer analyses, and pointer analyses can be freely exchanged and combined without changing client code. We believe that the use of object representatives brings many software engineering benefits to compiler implementations because, at compile time, object representatives are Java objects. We discuss our motivating case for object representatives, namely, the development of an abstract interpreter for tracematches, a language feature for runtime monitoring. We explain one particular algorithm for computing object representatives which combines flow-sensitive intraprocedural must-alias and must-not-alias analyses with a flow-insensitive, context-sensitive whole-program points-to analysis. In our experience, client analysis implementations can almost directly substitute object representatives for runtime objects, simplifying the design and implementation of such analyses.
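A conceptual sketch of the abstraction (in Python for uniformity with the other examples; the authors' implementation targets Java, and the class and method names here are invented): a client observes object identity only through must-alias and must-not-alias queries on representatives, while the alias facts backing the queries can come from any combination of pointer analyses.

```python
# Conceptual sketch (not the authors' Java implementation): a client sees runtime
# objects only through representatives whose identity is defined by alias queries;
# the backing analyses can be swapped without changing client code.

class ObjectRepresentative:
    def __init__(self, var, must_alias, must_not_alias):
        self.var = var
        self._must = must_alias          # analysis results, however they were computed
        self._must_not = must_not_alias

    def must_alias(self, other):
        return (self.var, other.var) in self._must or self.var == other.var

    def must_not_alias(self, other):
        return (self.var, other.var) in self._must_not

    def may_alias(self, other):
        return not self.must_not_alias(other)

# Facts produced by some combination of pointer analyses (invented here).
must = {("a", "b")}
must_not = {("a", "c"), ("c", "a")}
a, b, c = (ObjectRepresentative(v, must, must_not) for v in "abc")
print(a.must_alias(b), a.may_alias(c), b.may_alias(c))  # True False True
```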
{"title":"Object representatives: a uniform abstraction for pointer information","authors":"E. Bodden, Patrick Lam, L. Hendren","doi":"10.14236/EWIC/VOCS2008.32","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.32","url":null,"abstract":"Pointer analyses enable many subsequent program analyses and transformations by statically disambiguating references to the heap. However, different client analyses may have different sets of pointer analysis needs, and each must pick some pointer analysis along the cost/precision spectrum to meet those needs. Some analysis clients employ combinations of pointer analyses to obtain better precision with reduced analysis times. Our goal is to ease the task of developing client analyses by enabling composition and substitutability for pointer analyses. We therefore propose object representatives, which statically represent runtime objects. A representative encapsulates the notion of object identity, as observed through the representative's aliasing relations with other representatives. Object representatives enable pointer analysis clients to disambiguate references to the heap in a uniform yet flexible way. Representatives can be generated from many combinations of pointer analyses, and pointer analyses can be freely exchanged and combined without changing client code. We believe that the use of object representatives brings many software engineering benefits to compiler implementations because, at compile time, object representatives are Java objects. We discuss our motivating case for object representatives, namely, the development of an abstract interpreter for tracematches, a language feature for runtime monitoring. We explain one particular algorithm for computing object representatives which combines flow-sensitive intraprocedural must-alias and must-not-alias analyses with a flow-insensitive, context-sensitive whole-program points-to analysis. In our experience, client analysis implementations can almost directly substitute object representatives for runtime objects, simplifying the design and implementation of such analyses.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129271974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Complexity of Parity Games
A. Beckmann, F. Moller
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.20
Parity games underlie the model checking problem for the modal µ-calculus, the complexity of which remains unresolved after more than two decades of intensive research. The community is split into those who believe this problem - which is known to be both in NP and coNP - has a polynomial-time solution (without the assumption that P = NP) and those who believe that it does not. (A third, pessimistic, faction believes that the answer to this question will remain unknown in their lifetime.)

In this paper we explore the possibility of employing Bounded Arithmetic to resolve this question, motivated by the fact that problems which are both in NP and coNP, and where the equivalence between their NP and coNP descriptions can be formulated and proved within a certain fragment of Bounded Arithmetic, necessarily admit a polynomial-time solution. While the problem remains unresolved by this paper, we do propose another approach, and at the very least provide a modest refinement to the complexity of parity games (and in turn the µ-calculus model checking problem): that they lie in the class PLS of Polynomial Local Search problems. This result is based on a new proof of memoryless determinacy which can be formalised in Bounded Arithmetic.

The approach we propose may offer a route to a polynomial-time solution. Alternatively, there may be scope in devising a reduction between the problem and some other problem which is hard with respect to PLS, thus making the discovery of a polynomial-time solution unlikely according to current wisdom.
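For readers unfamiliar with the objects under discussion, the following is the standard textbook definition of a parity game and of the winning condition to which memoryless determinacy refers; the notation is conventional and not taken from the paper.

```latex
% Standard definition of a (max-)parity game; notation is conventional.
\[
G = (V_0, V_1, E, \Omega), \qquad
\Omega : V_0 \cup V_1 \to \{0, 1, \dots, d\}
\]
% Player 0 moves from vertices in $V_0$, Player 1 from $V_1$.  An infinite play
% $\pi = v_0 v_1 v_2 \dots$ is won by Player 0 iff the largest priority occurring
% infinitely often is even:
\[
\text{Player 0 wins } \pi
\iff
\max \{\, p \mid \Omega(v_i) = p \text{ for infinitely many } i \,\} \text{ is even.}
\]
% Memoryless determinacy: from every vertex exactly one of the players has a
% winning strategy, and a positional (memoryless) strategy suffices.
```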
{"title":"On the Complexity of Parity Games","authors":"A. Beckmann, F. Moller","doi":"10.14236/EWIC/VOCS2008.20","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.20","url":null,"abstract":"Parity games underlie the model checking problem for the modal µ-calculus, the complexity of which remains unresolved after more than two decades of intensive research. The community is split into those who believe this problem - which is known to be both in NP and coNP - has a polynomial-time solution (without the assumption that P = NP) and those who believe that it does not. (A third, pessimistic, faction believes that the answer to this question will remain unknown in their lifetime.) \u0000 \u0000In this paper we explore the possibility of employing Bounded Arithmetic to resolve this question, motivated by the fact that problems which are both NP and coNP, and where the equivalence between their NP and coNP description can be formulated and proved within a certain fragment of Bounded Arithmetic, necessarily admit a polynomial-time solution. While the problem remains unresolved by this paper, we do proposed another approach, and at the very least provide a modest refinement to the complexity of parity games (and in turn the µ-calculus model checking problem): that they lie in the class PLS of Polynomial Local Search problems. This result is based on a new proof of memoryless determinacy which can be formalised in Bounded Arithmetic. \u0000 \u0000The approach we propose may offer a route to a polynomial-time solution. Alternatively, there may be scope in devising a reduction between the problem and some other problem which is hard with respect to PLS, thus making the discovery of a polynomial-time solution unlikely according to current wisdom.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116719193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing a Longest Increasing Subsequence of Length k in Time O(n log log k)
M. Crochemore, E. Porat
Pub Date: 2008-09-22, DOI: 10.14236/EWIC/VOCS2008.7
We consider the complexity of computing a longest increasing subsequence, parameterised by the length of the output. Namely, we show that the maximal length k of an increasing subsequence of a permutation of the set of integers {1, 2, ..., n} can be computed in time O(n log log k) in the RAM model, improving the previous 30-year bound of O(n log log n). The optimality of the new bound is an open question.
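For orientation, here is the classical patience-sorting computation of the value k, which runs in O(n log k) time because each binary search is over a list of length at most k; it is shown only as a baseline and is not the paper's O(n log log k) algorithm, which relies on more specialised data structures.

```python
import bisect

# Classical O(n log k) baseline (patience sorting with binary search), shown only
# as a reference point; the paper's faster algorithm is not reproduced here.

def lis_length(seq):
    """Length k of a longest strictly increasing subsequence of seq."""
    tails = []  # tails[i] = smallest possible tail of an increasing subsequence of length i+1
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4  (e.g. 1, 4, 5, 9)
```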
{"title":"Computing a Longest Increasing Subsequence of Length k in Time O(n log log k)","authors":"M. Crochemore, E. Porat","doi":"10.14236/EWIC/VOCS2008.7","DOIUrl":"https://doi.org/10.14236/EWIC/VOCS2008.7","url":null,"abstract":"We consider the complexity of computing a longest increasing subsequence parameterised by the length of the output. Namely, we show that the maximal length k of an increasing subsequence of a permutation of the set of integers -1, 2,..., n} can be computed in time O(n log log k) in the RAM model, improving the previous 30-year bound of O(n log log k). The optimality of the new bound is an open question.","PeriodicalId":247606,"journal":{"name":"BCS International Academic Conference","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127397458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}