Digital geometry image analysis for medical diagnosis
Jiandong Fang, S. Fang, Jeffrey Huang, M. Tuceryan
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141327

This paper describes a new medical image analysis technique that operates on polygon mesh surfaces of human faces for medical diagnosis. The goal is to explore natural patterns and 3D facial features that provide diagnostic information for Fetal Alcohol Syndrome (FAS). Our approach is based on a digital geometry analysis framework that applies pattern recognition techniques to digital geometry (polygon mesh) data from 3D laser scanners and other sources. Novel 3D geometric features are extracted and analyzed to determine the most discriminatory features, i.e., those that best represent FAS characteristics. As part of the NIH Consortium for FASD, the techniques developed here are being applied and tested on real patient datasets collected by the Consortium both within and outside the US.
Hiding complexity and heterogeneity of the physical world in smart living environments
T. Bodhuin, G. Canfora, R. Preziosi, M. Tortorella
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141731

Continuous technological advances are computerizing electronic devices and connecting them in networks, to the point that, in the future, the physical and virtual worlds will be integrated and will interoperate: browsing reality will resemble browsing the Web. Heterogeneous networked devices, services that satisfy people's needs, and living environments equipped with devices and services will have to collaborate, rather than work independently, to offer end users a better quality of daily life. Consequently, developers of ubiquitous computing and communication software infrastructures should direct their efforts toward abstraction: they must abstract concepts away from direct, immediate human needs in specific smart environments, avoid undue assumptions about the available devices or services, and promote decoupling among the distinctive, physical, and functional features of devices and services. This paper briefly describes an extensible software architecture for smart environments that the authors designed and implemented, and presents the approach used to represent the physical world in a useful, comprehensible, and more abstract manner, facilitating connections with the virtual world.
A constraint logic programming approach to 3D structure determination of large protein complexes
A. D. Palù, Enrico Pontelli, Jing He, Y. Lu
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141309

The paper describes a novel framework, built on constraint logic programming and parallelism, for determining the association between parts of the primary sequence of a protein and α-helices extracted from three-dimensional, low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding the length, relative position, and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to exact determination of the tertiary structure of unknown proteins.
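The length constraint at the heart of the matching can be sketched with a brute-force enumeration standing in for the paper's CLP search. The 1.5 Å axial rise per residue is a standard approximation for α-helices; the tolerance, function names, and data below are purely illustrative:

```python
from itertools import permutations

RISE_PER_RESIDUE = 1.5   # approx. axial rise of an alpha-helix, in angstroms

def compatible(seg_len, helix_len_angstrom, tol=3.0):
    """Length constraint: a predicted helical segment of seg_len residues can
    map to an observed helix if the implied lengths agree within tol angstroms."""
    return abs(seg_len * RISE_PER_RESIDUE - helix_len_angstrom) <= tol

def assignments(segments, helices):
    """Enumerate every one-to-one mapping of predicted helical segments
    (residue counts) onto observed helices (lengths in angstroms) that
    satisfies the length constraint: a brute-force stand-in for the
    constraint solving the paper performs with CLP."""
    for perm in permutations(range(len(helices)), len(segments)):
        if all(compatible(s, helices[h]) for s, h in zip(segments, perm)):
            yield perm

segments = [10, 18]          # predicted helix lengths, in residues
helices = [27.0, 15.5]       # helix lengths measured from the density map
print(list(assignments(segments, helices)))  # [(1, 0)]
```

A real solver would add the paper's relative-position and connectivity constraints and prune with constraint propagation rather than enumerate.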
A deterministic technique for extracting keyword based grammar rules from programs
Alpana Dubey, P. Jalote, S. Aggarwal
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141659

This paper presents a technique for extracting grammar rules, given a set of programs and an approximate grammar. Grammars are important artifacts used in generating tools for program analysis, modification, etc. Current grammar extraction techniques are heuristic in nature. This work proposes a deterministic technique for extracting keyword-based grammar rules. The technique uses a CYK parser and an LR parser to build a set of candidate rules. For each rule, it checks whether the grammar extended with that rule can parse all the programs. Because this yields a large set of candidate rules, a set of optimizations is proposed to reduce the search space. The optimizations utilize knowledge from multiple programs and exploit the abundance of unit productions in programming-language grammars. The proposed approach and optimizations are evaluated experimentally on a set of input programs.
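The core check, whether the grammar extended with a candidate rule still parses every program, can be sketched with a plain CYK membership test. The grammar encoding and helper names below are illustrative, not the paper's implementation:

```python
def cyk_parses(grammar, start, tokens):
    """CYK membership test: does `grammar` (in Chomsky normal form) derive
    `tokens` from `start`?  grammar maps each nonterminal to a list of
    bodies, each body either (terminal,) or (B, C) for nonterminals B, C."""
    n = len(tokens)
    if n == 0:
        return False
    # T[(i, j)] = set of nonterminals deriving tokens[i:j]
    T = {}
    for i, tok in enumerate(tokens):
        T[(i, i + 1)] = {A for A, bodies in grammar.items() if (tok,) in bodies}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cell = set()
            for k in range(i + 1, j):          # split point
                for A, bodies in grammar.items():
                    for body in bodies:
                        if (len(body) == 2 and body[0] in T[(i, k)]
                                and body[1] in T[(k, j)]):
                            cell.add(A)
            T[(i, j)] = cell
    return start in T[(0, n)]

def rule_accepted(grammar, start, candidate, programs):
    """Keep a candidate rule only if the extended grammar parses every
    program (the acceptance test the deterministic technique relies on)."""
    lhs, body = candidate
    extended = {A: list(bs) for A, bs in grammar.items()}
    extended.setdefault(lhs, []).append(body)
    return all(cyk_parses(extended, start, p) for p in programs)

base = {'A': [('if',)], 'B': [('x',)]}
print(cyk_parses(base, 'S', ['if', 'x']))                        # False
print(rule_accepted(base, 'S', ('S', ('A', 'B')), [['if', 'x']]))  # True
```

The paper's optimizations then prune which candidate rules ever reach this (expensive) test; the sketch only shows the test itself.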
Design and implementation of a kernel resource protector for robustness of Linux module programming
Jongmoo Choi, Seungjae Baek, Sung Y. Shin
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141621

Loadable kernel modules in Linux provide many benefits, such as a small kernel, on-demand loading, and easy software upgrading. However, since modules execute in privileged mode, a trivial misuse in a module can cause critical system halts or deadlocks. This paper presents a kernel resource protector that shields the kernel from faults generated by modules. The protector models the system as two kinds of objects, module objects and resource objects. By observing the interrelations between the two, the protector can detect misuse by modules and take action to resolve erroneous situations. An implementation study has shown that the protector can find memory leaked by modules and reclaim it without degrading system performance. The proposed protector makes Linux more robust, which is indispensable in systems equipped with NVRAM (non-volatile RAM) such as FRAM and PRAM.
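The two-object bookkeeping can be sketched in miniature: a (purely illustrative) protector records which resources each module object currently holds and reclaims whatever is still held when the module unloads. The class and method names are hypothetical, not the paper's kernel API:

```python
class ResourceProtector:
    """Toy model of the module-object/resource-object scheme: track the
    resources each module acquires so leaks can be detected and reclaimed
    when the module is unloaded."""

    def __init__(self):
        self.held = {}                 # module name -> set of resource ids

    def load(self, module):
        self.held[module] = set()      # create the module object

    def acquire(self, module, resource):
        self.held[module].add(resource)

    def release(self, module, resource):
        self.held[module].discard(resource)

    def unload(self, module):
        # Anything still held at unload time is a leak.  In the real
        # protector these would be kernel resources to free; here we just
        # return the leaked resource ids as "reclaimed".
        leaks = self.held.pop(module)
        return sorted(leaks)

p = ResourceProtector()
p.load("mydrv")
p.acquire("mydrv", "page:0x1000")
p.acquire("mydrv", "page:0x2000")
p.release("mydrv", "page:0x1000")
print(p.unload("mydrv"))   # ['page:0x2000']  -- leaked, reclaimed at unload
```

The real protector additionally intercepts resource-related kernel calls to populate this bookkeeping automatically; the sketch only shows the accounting it maintains.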
Supporting transparent model update in distributed CASE tool integration
Prawee Sriplakich, Xavier Blanc, M. Gervais
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141692

Model Driven Architecture (MDA) is a software development approach that focuses on models. To support MDA, many CASE tools have emerged, each providing a different set of modeling services (operations for automating model manipulation). We have proposed an open environment called ModelBus, which enables the integration of heterogeneous and distributed CASE tools and lets tools invoke the modeling services provided by other tools. In this paper, we focus on supporting a particular kind of modeling service: services that update models (i.e., services with inout parameters). Our contribution is to enable one tool to update models owned by another tool. We propose a parameter-passing mechanism that hides the complexity of model update from tools. First, it lets a tool update models transparently with respect to heterogeneous model representations. Second, it lets a tool update models located in the memory of a remote tool transparently, as if the models were local. Third, it ensures integrity between the updated models and the tool that owns them.
Preliminary performance evaluation of an adaptive dynamic extensible processor for embedded applications
Hamid Noori, K. Murakami
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141496

In this research we investigate an approach to adaptive dynamic instruction set extension, tuning processors to specific applications after fabrication.
A precise schedulability test algorithm for scheduling periodic tasks in real-time systems
Wan-Chen Lu, Jen-Wei Hsieh, W. Shih
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141616

Rate monotonic analysis (RMA) has been shown to be effective in the schedulability analysis of various types of systems. This paper focuses on reducing the run time of each RMA test. Based on a new class of tasks, termed lift-utilization tasks, we propose a novel method to reduce the number of iterative calculations in the derivation of the worst-case response time of each task in its RMA test. The capability of the proposed method was evaluated and compared to related work, which revealed that our method produced savings of 26-33% in the number of RMA iterations.
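The iterative calculation the paper optimizes is the standard worst-case response-time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j. A minimal sketch of that baseline fixed-point test follows (this is the classical analysis, not the authors' lift-utilization optimization; the task parameters are illustrative):

```python
import math

def response_time(tasks, i):
    """Iteratively compute the worst-case response time of task i.

    tasks: list of (C, T) pairs (execution time, period), sorted by
    descending priority (rate monotonic: shorter period = higher priority).
    Returns the response time, or None if it exceeds the deadline (= period).
    """
    C_i, T_i = tasks[i]
    R = C_i                       # initial estimate: own execution time
    while True:
        # interference from all higher-priority tasks
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next > T_i:          # misses its deadline: unschedulable
            return None
        if R_next == R:           # fixed point reached
            return R
        R = R_next

# Example task set: (C, T) pairs in priority order
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(3)])  # [1, 3, 10]
```

Each loop iteration is one "RMA iteration" in the paper's accounting; their method reduces how many such iterations are needed before the fixed point is reached.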
An implicit segmentation-based method for recognition of handwritten strings of characters
P. Cavalin, A. Britto, Flávio Bortolozzi, R. Sabourin, Luiz Oliveira
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141468

This paper describes an implicit segmentation-based method for recognizing strings of characters (words or numerals). In the first stage of a two-stage HMM-based method, implicit segmentation is applied to segment either words or numeral strings; in the verification stage, foreground and background features are combined to compensate for the loss in recognition rate incurred when segmentation and recognition are performed in the same process. A rigorous experimental protocol demonstrates the performance of the proposed method on isolated characters, numeral strings, and words.
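An HMM-based recognizer of this kind ultimately decodes the most likely state sequence with the Viterbi algorithm. A minimal sketch of that decoding step, with toy states and probabilities that are not the paper's trained models:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding: the most probable hidden-state path
    for the observation sequence `obs` under the given HMM parameters."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # backtrack from the most probable final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy two-state HMM (illustrative numbers only)
states = ['H', 'F']
start_p = {'H': 0.6, 'F': 0.4}
trans_p = {'H': {'H': 0.7, 'F': 0.3}, 'F': {'H': 0.4, 'F': 0.6}}
emit_p = {'H': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
          'F': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}
print(viterbi(['normal', 'cold', 'dizzy'], states, start_p, trans_p, emit_p))
# ['H', 'H', 'F']
```

In an implicit-segmentation recognizer the hidden states correspond to character (or character-fragment) models, so the decoded path yields the segmentation and the recognition result in the same pass.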
From spontaneous total order to uniform total order: different degrees of optimistic delivery
L. Rodrigues, J. Mocito, N. Carvalho
Proceedings of the 2006 ACM Symposium on Applied Computing, April 23, 2006. doi:10.1145/1141277.1141441

A total order protocol is a fundamental building block in the construction of distributed fault-tolerant applications. Unfortunately, implementing such a primitive can be expensive both in communication steps and in the number of messages exchanged. The problem is exacerbated in large-scale systems, where the performance of the algorithm may be limited by high-latency links. Optimistic total order protocols have been proposed to alleviate this problem; however, different optimistic protocols offer quite distinct services. This paper surveys the different optimistic approaches and shows how they can be combined in a single adaptive protocol.
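The gap between spontaneous (optimistic) and uniform (final) delivery can be sketched in a few lines: messages are delivered optimistically as they arrive, while final delivery waits for the sequencer's total order. The class and its methods are hypothetical illustrations, not the interfaces of any protocol surveyed in the paper:

```python
class OptimisticReceiver:
    """Deliver optimistically in receive (spontaneous) order, then deliver
    definitively once the sequencer's total order confirms each message.
    The application can start speculative work on the optimistic stream and
    commit or undo it when the final stream catches up."""

    def __init__(self):
        self.opt_delivered = []     # optimistic deliveries (may be misordered)
        self.final_delivered = []   # uniform total order deliveries
        self.received = set()
        self.pending_order = []     # total order positions not yet delivered

    def on_receive(self, msg):
        self.received.add(msg)
        self.opt_delivered.append(msg)      # optimistic: spontaneous order
        self._try_final()

    def on_sequence(self, order):
        self.pending_order.extend(order)    # total order from the sequencer
        self._try_final()

    def _try_final(self):
        # final delivery proceeds only in sequencer order, and only once
        # the message itself has actually been received
        while self.pending_order and self.pending_order[0] in self.received:
            self.final_delivered.append(self.pending_order.pop(0))

r = OptimisticReceiver()
r.on_receive("m2")               # network reorders: m2 arrives first
r.on_sequence(["m1", "m2"])      # sequencer says m1 comes first
r.on_receive("m1")
print(r.opt_delivered)    # ['m2', 'm1']  (optimistic, may mismatch)
print(r.final_delivered)  # ['m1', 'm2']  (uniform total order)
```

When spontaneous order happens to match the sequencer's order (the common case on a LAN), the two streams agree and the speculative work pays off; the adaptive protocol in the paper exploits exactly that trade-off.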