Data structure visualization: the design and implementation of an animation tool
Christopher Smith, Jeffrey Strauss, Peter E. Maher
An understanding of the underlying mechanics of common data structures is of paramount importance to undergraduate computer science students. Developing such an understanding can be challenging for students, but it provides a firm platform for success in later software engineering courses. Likewise, conveying a clear explanation of how data structures evolve under standard operations is challenging for instructors. This paper gives an overview of a data structure visualization tool designed to animate standard manipulations of several common data structures. The application is intended for students who want to practice the algorithms covered in class, as well as instructors who wish to enrich their lectures with an animated interface. We describe the requirements-gathering process, detail the technologies involved in developing the tool, and demonstrate its main features.
{"title":"Data structure visualization: the design and implementation of an animation tool","authors":"Christopher Smith, Jeffrey Strauss, Peter E. Maher","doi":"10.1145/1900008.1900105","DOIUrl":"https://doi.org/10.1145/1900008.1900105","url":null,"abstract":"An understanding of the underlying mechanics of common data structures is of paramount importance to undergraduate computer science students. Developing such an understanding can be challenging for students, but provides a firm platform for success in later software engineering courses. Conversely, conveying a clear explanation of how data structures evolve under standard operations is challenging for instructors. This paper gives an overview of a data structure visualization tool designed to animate standards manipulations of several common data structures. The application is intended for use by students wanting to practice with algorithms being covered in class, as well as instructors wishing to embellish their lectures with an animated interface. We describe the requirements gathering process, detail the technologies involved in the development of the tool, and demonstrate the main features.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123904031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A time-predictable dual-core prototype on FPGA
Satya Mohan Raju Gudidevuni, Wei Zhang
This paper describes the design and implementation of a time-predictable dual-core architecture on a Xilinx FPGA. The emphasis is on observing the impact of various cache replacement algorithms on the time-predictability of a high-priority thread in a multi-core architecture. The design is implemented in Verilog and consists of two cores, each with a simple 5-stage in-order pipeline and a private L1 cache, connected to a shared L2 cache and a RAM. The design is synthesized in Xilinx ISE, and its performance will be tested on a Virtex-6 FPGA.
{"title":"A time-predictable dual-core prototype on FPGA","authors":"Satya Mohan Raju Gudidevuni, Wei Zhang","doi":"10.1145/1900008.1900020","DOIUrl":"https://doi.org/10.1145/1900008.1900020","url":null,"abstract":"This paper describes the design and implementation of time-predictable dual-core architecture on Xilinx FPGA. The emphasis is to observe the impact of various cache replacement algorithms on the time-predictability of a high priority thread, in a multi-core architecture. This design is done in verilog and consists of two cores, each with a simple 5-stage in-order pipeline and a private L1-cache. This is further connected to a shared L2 cache and a RAM. The design is synthesized in Xilinx ISE and its performance will be tested on Virtex-6 FPGA.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129707052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Query optimization in large databases using association rule mining
S. Bagui, Mohammad Islam
In this paper, we describe an architecture showing how data mining techniques such as association rule mining, based on the semantic knowledge of the database, can be used to partition data into views, which can then aid the query optimization process.
{"title":"Query optimization in large databases using association rule mining","authors":"S. Bagui, Mohammad Islam","doi":"10.1145/1900008.1900123","DOIUrl":"https://doi.org/10.1145/1900008.1900123","url":null,"abstract":"In this paper, we describe an architecture to show how data mining techniques like association rule mining, based on the semantic knowledge of the database, can be used to partition data into views, which can then aid in the query optimization process.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127904057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating incompatible hardware and software systems
Ian Burchett
Integrating existing physical and hardware-controlled systems, such as machinery, HVAC, door locks, power distribution, and pumps, with newly developed software designed to control them presents interesting problems. Many of these large systems cannot feasibly be replaced with newer systems that enable integrated software control, leading to the need to bridge the gap between the existing hardware system and the software that will control it. We present the problem, a paradigm for designing and developing such a bridging system, and a case study involving a robot arm and the SR4 robot.
{"title":"Integrating incompatible hardware and software systems","authors":"Ian Burchett","doi":"10.1145/1900008.1900038","DOIUrl":"https://doi.org/10.1145/1900008.1900038","url":null,"abstract":"Integration of existing physical systems and hardware controlled systems such as machinery, HVAC, door locks, power distribution, pumps, etc, with newly developed software designed to control these systems presents interesting problems. Many of these large systems cannot be replaced with newer systems to enable integrated software control, or such replacement is not feasible, leading to the need to bridge the gap between the existing hardware system and the software which will control it. Consideration of this problem, a paradigm for design and development of such a system to bridge the gap, and a case study involving a robot arm and the SR4 robot will be used to illustrate and present the problem.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125433651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic ontology version control
Dan Schrimpsher, Zhiqiang Wu, Anthony M. Orme, L. Etzkorn
Ontologies are used today in many application areas. With their use in bioinformatics as well as in semantic web technologies, ontology-based software has become widespread. This has led to a need to keep track of different ontology versions [8], since the behavior of software changes as the ontologies it uses change. However, existing approaches to ontology versioning work on static ontologies: the ontology version that a software package will use must be chosen before that package runs. This requires substantial human oversight and is therefore a major limitation. In this paper, we examine a dynamic approach to ontology versioning that automatically provides the correct ontology to a software package on the fly. We examine a methodology that stores different time-stamped ontologies in the same file, and we discuss how this methodology can be used on a real ontology.
{"title":"Dynamic ontology version control","authors":"Dan Schrimpsher, Zhiqiang Wu, Anthony M. Orme, L. Etzkorn","doi":"10.1145/1900008.1900044","DOIUrl":"https://doi.org/10.1145/1900008.1900044","url":null,"abstract":"Ontologies are used today in many application areas. With the use of ontologies in bioinformatics, as well as their use in semantic web technologies, ontology based software has become widely used. This has led to a need for keeping track of different ontology versions [8], as the operation of software will change as the ontologies it uses change. However, existing approaches to ontology versioning have worked on static ontologies. Thus, the ontology version that a software package will use must be chosen prior to running that package. This requires substantial human oversight, and is therefore a major limitation. In this paper, we examine a dynamic approach to ontology versioning that will automatically provide the correct ontology for a software package on-the-fly. We examine a methodology that employs storing different time stamped ontologies in the same file, and we discuss how this methodology can be used on a real ontology.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114183779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualization of the CreSIS Greenland data sets
Shaketia L. McCoy, M. Austin, F. Slaughter
The Center for Remote Sensing of Ice Sheets (CReSIS) has been compiling Greenland ice sheet thickness data since 1993. The airborne program utilizes a 150 MHz radar echo sounder to measure the ice thickness. The data is currently available on the CReSIS web site in various formats, including PDF, Matlab, and plain text files. These formats are not usable in the classroom environment as a visual representation of the ice depth for each expedition.

During the Undergraduate Research Experience in Ocean, Marine and Polar Science 2009 program, the Greenland Data Visualization Team took the CReSIS data and created a 4-D visualization consisting of depth, time, latitude, and longitude. The visualization was created using HTML, JavaScript, and PHP; Microsoft Excel was used to filter the raw data downloaded from the CReSIS site. The team then statistically analyzed the Greenland ice sheet thickness data for calculated, missing, and actual depth readings. The goal of this project was to present the CReSIS data via the web, in a visual format, to elementary, undergraduate, and graduate students for research and education. The visualization package and corresponding data will eventually be migrated to the Elizabeth City State University Polar Grid High Performance Computing System. The work described here involved converting plain text files to comma-separated values, which PHP and JavaScript then use to produce data visualizations in Google Maps and HTML pages.
{"title":"Visualization of the CreSIS Greenland data sets","authors":"Shaketia L. McCoy, M. Austin, F. Slaughter","doi":"10.1145/1900008.1900135","DOIUrl":"https://doi.org/10.1145/1900008.1900135","url":null,"abstract":"The Center for Remote Sensing of Ice Sheets (CReSIS) has been compiling Greenland ice sheet thickness data since 1993. The airborne program utilizes a 150 MHz radar echo sounder to measure the ice thickness. The data is currently available on the CReSIS web site in various formats including PDF, Matlab, and plain text files. These formats are not usable in the classroom environment as a visual representation of the ice depth for each expedition.\u0000 During the Undergraduate Research Experience in Ocean, Marine and Polar Science 2009 program, the Greenland Data Visualization Team took the CReSIS data and created a 4-D visualization consisting of depth, time, latitude, and longitude. This visualization was created utilizing HTML, JavaScript, and PHP. Microsoft Excel was used to filter the raw data downloaded from the CReSIS site. The team then statistically analyzed the Greenland ice sheet thickness data for calculated, missing, and actual depth readings. The goal of this project was to present the CReSIS data via the web in a visual format to elementary, undergraduate, and graduate students for research and education. This visualization package and corresponding data will eventually be migrated to the Elizabeth City State University Polar Grid High Performance Computing System. The research that follows involved converting plain text files to comma separated values to be used by PHP and JavaScript to produce data visualizations in Google Maps and HTML pages.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114243703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting license plate queries for first responders using the voiceLETS system
Deidra Morrison, J. Gilbert, Hanan Alnizami, Shaneé Dawkins, W. Eugene, Aqueasha M. Martin, W. Moses
Delivering quick and accurate information to first responders, such as law enforcement officers, is important for providing them with the resources they need to do their jobs safely and effectively. The common method of information exchange from officers to emergency dispatchers is problematic in that response time and communicative consistency can result in inaccurate or untimely information. Although information requests by officers currently require the use of defined alpha codes to ensure the accuracy of vehicle license plate sequences, their proper use is inconsistent. In this paper, we introduce an adaptation of VoiceLETS [1] that provides an algorithm to detect and predict license sequences without the use of alpha codes. Preliminary testing of this algorithm showed a 34.2% increase in the accuracy of tag query results. There was also a correction accuracy of 95.35% when the system attempted to correct misinterpreted characters within a query.
{"title":"Supporting license plate queries for first responders using the voiceLETS system","authors":"Deidra Morrison, J. Gilbert, Hanan Alnizami, Shaneé Dawkins, W. Eugene, Aqueasha M. Martin, W. Moses","doi":"10.1145/1900008.1900095","DOIUrl":"https://doi.org/10.1145/1900008.1900095","url":null,"abstract":"The need for delivering quick and accurate information to first responders, such as law enforcement officers, is important for providing them with the resources needed to do their jobs safely and effectively. The common method of information exchange from officers to emergency dispatchers is problematic in that response time and communicative consistency can result in inaccurate or untimely information. Although information requests by officers currently require the use of defined alpha codes to ensure the accuracy of vehicle license plate sequences, the proper use is inconsistent. We introduce in this paper an adaption of VoiceLETS, [1] which provides an algorithm to detect and predict license sequences without the use of alpha codes. Preliminary testing of this algorithm showed a 34.2% increase in the accuracy of tag query results. There was also a correction accuracy of 95.35% when the system attempted to correct misinterpreted characters within a query.\u0000 Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.\u0000 ACMSE '10, April 15--17, 2010, Oxford, MS, USA","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"125 40","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120929353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple dual-RAMP algorithm for resource constraint project scheduling
C. Riley, C. Rego, Haitao Li
A Relaxation Adaptive Memory Programming (RAMP) algorithm is developed to solve large-scale resource-constrained project scheduling problems (RCPSP). The RAMP algorithm presented here takes advantage of a cross-parametric relaxation and extends a recent approach that casts the relaxed problem as a minimum cut problem. Computational results on a classical set of benchmark problems show that even a relatively simple implementation of the RAMP algorithm can find optimal or near-optimal solutions for a large set of those instances.
{"title":"A simple dual-RAMP algorithm for resource constraint project scheduling","authors":"C. Riley, C. Rego, Haitao Li","doi":"10.1145/1900008.1900097","DOIUrl":"https://doi.org/10.1145/1900008.1900097","url":null,"abstract":"A Relaxation Adaptive Memory Programming (RAMP) algorithm is developed to solve large-scale resource constrained project scheduling problems (RCPSP). The RAMP algorithm presented here takes advantage of a cross-parametric relaxation and extends a recent approach that casts the relaxed problem as a minimum cut problem. Computational results on a classical set of benchmark problems show that even a relatively simple implementation of the RAMP algorithm can find optimal or near-optimal solutions for a large set of those instances.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122393697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion-enhanced, differential interference contrast video microscopy using a GPU and CUDA
M. Steen
Optical video microscopy is widely used to observe living cells and their moving parts. The smaller moving parts of the cells, such as vesicles, have low contrast and are often obscured by membranes and cell walls. Large images (1k x 1k) showing many cells are most helpful to the microscopist, but limited memory prohibits storing such images for the entire life of a cell. As a result, it is imperative that image enhancement calculations be performed in real time, so that the researcher can observe moving vesicles immediately rather than through post-processing.

The MEDIC algorithm uses background subtraction to remove, or at least minimize, the effects of the immobile parts of the cell, including the cell wall. With MEDIC, moving objects are visible to the naked eye. In this paper, we extend the MEDIC algorithm to take advantage of fast computing on GPUs.

Current mainstream CPUs are not fast enough to execute the MEDIC algorithm in real time with fast cameras. Dedicated image processing boards, made by companies like Matrox Imaging, are faster, but they are also expensive. GPUs, which are designed for rendering video game graphics, perform calculations in parallel and can be obtained for a few hundred dollars. While not as fast as dedicated boards, they are still well suited to executing the MEDIC algorithm in real time. The GPU can provide a significant speedup over CPU computations, making real-time imaging possible with fast cameras for a fraction of the price of dedicated image processing boards.
{"title":"Motion-enhanced, differential interference contrast video microscopy using a GPU and CUDA","authors":"M. Steen","doi":"10.1145/1900008.1900137","DOIUrl":"https://doi.org/10.1145/1900008.1900137","url":null,"abstract":"Optical video microscopy is widely used to observe living cells and their moving parts. The smaller moving parts of the cells, such as vesicles, have low contrast and are often obscured by membranes and cell walls. Large images (1k x 1k) showing many cells are most helpful to the microscopist; limited memory prohibits storing such images for the entire life of a cell. As a result, it is imperative that image enhancement calculations be performed in real time, so that the researcher can observe moving vesicles immediately, rather than by post-processing.\u0000 The MEDIC algorithm uses background subtraction to remove or at least minimize the effects of the immobile parts of the cell, including the cell wall. With MEDIC, moving objects are visible to the naked eye. In this paper, we extend the MEDIC algorithm to take advantage of fast computing on GPUs.\u0000 Current mainstream CPUs are not fast enough to execute the MEDIC algorithm in real time with fast cameras. Dedicated image processing boards, made by companies like Matrox Imaging, are faster, but they are also expensive. GPUs, which are designed for rendering video game graphics, are made to perform calculations in parallel, and they can be obtained for a few hundred dollars. While not as fast, they are still well suited to executing the MEDIC algorithm in real time. The GPU can provide a significant speedup over CPU computations, making real time imaging possible with fast cameras for a fraction of the price of dedicated image processing boards.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115217304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning leg movement patterns using neural oscillators
Patrick McDowell, T. Beaubouef
This paper discusses a learning approach for finding the joint movement patterns of a legged robot. In particular, we concentrate on a movement exploration technique based on patterns generated by a neural oscillator. The current stage of development and project status are presented, along with the underlying philosophy and an implementation plan.
{"title":"Learning leg movement patterns using neural oscillators","authors":"Patrick McDowell, T. Beaubouef","doi":"10.1145/1900008.1900023","DOIUrl":"https://doi.org/10.1145/1900008.1900023","url":null,"abstract":"This paper discusses an approach to learning in order to find the joint movement patterns of a legged robot. In particular, we concentrate on a movement exploration technique based on patterns generated by a neural oscillator. The current stage of development and project status are presented along with a philosophy and implementation plan.","PeriodicalId":333104,"journal":{"name":"ACM SE '10","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114999389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}