Jarred Light, Phil Pfeiffer, and Brian Bennett. "An evaluation of continuous integration and delivery frameworks for classroom use." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452085

Continuous integration and delivery (CI/CD) frameworks are a core element of DevOps-based software development. A PHP-based case study assessed the suitability of five such frameworks---JFrog Artifactory, Bitbucket Pipelines, Jenkins, Azure DevOps, and TeamCity---for instructional use. The five were found to be roughly equivalent in usability for simple configurations. The effort needed to implement CI/CD increased substantially for more realistic production scenarios, such as deployments to cloud-based and load-balanced platforms. These results suggest a need to limit CI/CD-based academic projects to simple infrastructure and technology stacks: e.g., a web application on a single-instance web server.
C. Palacio and Eric Gamess. "Toward a collision avoidance system based on the integration of technologies." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452084

Integrating emerging technologies into current systems is critical to enhancing quality of life. In transportation, the automobile is the predominant mode of travel. Even though new safety systems have been integrated into vehicles, road accidents remain one of the leading causes of death worldwide. In general, Vulnerable Road Users (VRUs) who share roads with vehicles receive second priority in Intelligent Transportation System safety systems, since those systems focus mostly on avoiding collisions between vehicles. However, VRUs represent a very significant share of road-accident victims. In this paper, we propose a solution in which the integration of current and future technologies into the vehicular safety system is a key factor, so that roads become a better place for all the actors (people, animals, and vehicles) that travel on them. To protect VRUs and animals on or near roads, a two-level collision-avoidance system is proposed. The idea is a flexible solution that can integrate any current technology, as well as new technologies as they appear. Warnings about people, other living beings, vehicles, and obstacles are delivered in real time when a possible collision is detected. At the lower level, on-board computers address imminent threats using lightweight, quality information consisting of data samples shared by the vehicles, people, and animals that join the common platform. At the upper level, pre-processed information delivered as a service from the cloud provides additional decision support. With this information, a vehicle can make instant safety decisions in real time without overloading its local computational resources.
Urvashi Desai, Vijayalakshmi Ramasamy, and J. Kiper. "Evaluation of student collaboration on canvas LMS using educational data mining techniques." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452042

Online discussion forums provide valuable information about students' learning and engagement in course activities. The hidden knowledge in these discussion posts can be examined by analyzing the social interactions between the participants. This research investigates students' learning and collaborative problem-solving by applying social network analysis (SNA) metrics and sophisticated computational techniques. The data were collected from online course discussion forums on Canvas, a Learning Management System (LMS), in a CS1 course at a medium-sized US university. The research demonstrates that efficient tools are needed to model and evaluate goal-oriented discussion forums constructed from active student collaborations. It aims to develop a systematic data collection and analysis instrument, incorporated into LMSs, that enables grading of discussions to improve instructional outcomes and to gain insight into and explain educational phenomena. The study also emphasizes SNA metrics that analyze students' social behavior, since a positive correlation was observed between the number of posts made by students and their academic performance in terms of final grade. The prototype developed, CODA (Canvas Online Discussion Analyzer), helps evaluate students' performance based on the useful knowledge they share while participating in course discussions. The experimental results provide evidence that analysis of structured discussion data offers insights about changes in student collaboration patterns over time and students' sense of belonging, for pedagogical benefit. As future work, further analysis will be done by extracting additional student data, such as demographics, majors, and performance in other courses, to study cognitive and behavioral aspects of the collaboration networks.
Yunshu Wang, Lee Easson, and Feng Wang. "Testbed development for a novel approach towards high accuracy indoor localization with smartphones." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452044

Due to its deep penetration into people's daily lives, the smartphone has been proposed as a practical platform for indoor localization. One major challenge is handling the non-negligible sensor errors that become problematic when accumulated over time. To this end, a series of approaches such as fingerprinting and pedestrian dead reckoning have been proposed, which, however, either need WiFi infrastructure or pre-installed beacons, or support only certain movement patterns or scenarios. In this paper, we take a step further toward tackling this challenge by carefully developing a testbed that enables deep investigation of the smartphone-based indoor localization problem and the design of promising practical solutions. In particular, our testbed accesses only the raw inertial measurement unit and orientation data from the smartphone, making it infrastructure-free, requiring no pre-installation, and providing an in-depth view of sensor errors and their impact on localization accuracy. The testbed also provides built-in localization functionality and supports real-time data processing and visualization, which are extremely valuable for solution development and practical use. We have conducted extensive experiments to evaluate the testbed and obtained observations that not only validate the effectiveness of its design but also open a future direction: developing more advanced mechanisms, such as deep learning based approaches, to better compensate for sensor errors and achieve high accuracy in practice.
J. Tomaselli, Austin Willoughby, Jorge Vargas Amezcua, Emma Delehanty, Katherine Floyd, Damien Wright, M. Lammers, and R. Vetter. "Verifying phishmon: a framework for dynamic webpage classification." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452082

Phishing attacks are the scourge of the network security manager's job. Looking for a solution to counter this trend, this paper examines and verifies the efficacy of Phishmon, a machine learning framework for scrutinizing webpages that relies on technical attributes of a webpage's structure for classification. More specifically, each of the four machine learning algorithms mentioned in the original paper is applied to a portion of the dataset used by Phishmon's creators in order to verify and confirm their results. This paper expands the original authors' work in two ways. First, the Phishmon framework is applied to two additional machine learning models for comparison with the first group. Second, dimension reduction and algorithm parameter optimization are explored to determine their effects on the Phishmon framework's accuracy. Our findings suggest improvements to the Phishmon framework's implementation. Namely, downsizing the dataset to include an equal number of phishing and benign webpages when the model is formed appears to balance the accuracy rates achieved for both classes. Furthermore, removing features with very low relative importance values may save time and processing power while preserving the vast majority of the model's information.
Dembe Koi Stephanos, G. Husari, Brian T. Bennett, and Emma Stephanos. "Machine learning predictive analytics for player movement prediction in NBA: applications, opportunities, and challenges." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452064

Recently, the strategies of National Basketball Association (NBA) teams have evolved with the skillsets of players and the emergence of advanced analytics. This has led to a more free-flowing game in which traditional positions and play calls have been replaced with player archetypes and read-and-react offenses that operate off a variety of isolated actions. The introduction of position-tracking technology by SportVU has aided the analysis of these patterns by offering a vast dataset of on-court behavior. There have been numerous attempts to identify and classify patterns by evaluating the outcomes of offensive and defensive strategies associated with actions within this dataset, a job currently done manually by reviewing game tape. Some of these classification attempts have used supervised techniques that begin with labeled sets of plays and feature sets to automate the detection of future cases. Increasingly, however, deep learning approaches such as convolutional neural networks have been used in conjunction with player-trajectory images generated from positional data. This enables classification to occur in a bottom-up manner, potentially discerning unexpected patterns. Others have shifted focus from classification, instead using positional data to evaluate the success of a given possession based on spatial factors such as defender proximity and player factors such as role or skillset. While play/action detection, classification, and analysis have each been addressed in the literature, a comprehensive approach that accounts for modern trends is still lacking. In this paper, we discuss various approaches to action detection and analysis and ultimately propose an outline for a deep learning approach to identification and analysis, resulting in a queryable dataset complete with shot evaluations, thus combining multiple contributions into a serviceable tool capable of assisting and automating much of the work currently done by NBA professionals.
Hazim Shatnawi and H. C. Cunningham. "Encoding feature models using mainstream JSON technologies." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452048

Feature modeling is a process for identifying the common and variable parts of a software product line and recording them in a tree-structured feature model. However, feature models can be difficult for mainstream developers to specify and maintain because most tools rely on specialized theories, notations, or technologies. To address this issue, we propose a design that uses mainstream JSON-related technologies to encode and manipulate feature models and then uses the models to generate Web forms for product configuration. This JSON-based design can form part of a comprehensive, interactive environment that enables mainstream developers to specify, store, update, and exchange feature models and use them to configure members of product families.
Katia P. Maxwell, Mikel D. Petty, C. D. Colvett, and W. A. Cantrell. "A bottom-up approach to creating a cyberattack model with fine grain components." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452070

In today's world, every system developer and administrator should be familiar with cyberattacks and the possible threats to their organization's systems. Petri Nets have been used to model and simulate cyberattacks, providing additional knowledge during the planning stages of defending a system. Petri Nets have been in use since the 1960s, and several extensions and variations of their design exist; in particular, Petri Nets with Players, Strategies, and Cost have recently been proposed to model individual cyberattacks on target systems. These models can also be broken down into smaller components that are used to build other models. This study introduces the concept of fine-grain components and a bottom-up approach to creating a cyberattack model.
Kate Sanborn. "AirFlute." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452087

AirFlute is an interactive and less expensive alternative to traditional music lessons. AirFlute uses the Leap Motion Controller within a web browser to track how a student moves their fingers in three-dimensional physical space. Each movement of the student (e.g., moving or bending a finger) is visualized on a virtual flute display within the web browser, and the corresponding note is played through the computer's speaker. By forming a correct fingering, a student can simulate playing a note on the flute without a physical instrument. AirFlute has three main features: 1) a free-play mode for users to play music, 2) a tutorial for users to learn note fingerings, and 3) the ability for users to practice exercises and receive feedback. This brief abstract summarizes the motivation and implementation of AirFlute as a research application in human-computer interaction. AirFlute has the potential to broaden participation in music performance for students who may not be able to afford a physical instrument.
Bryan Whitehurst. "BallCaller." Proceedings of the 2021 ACM Southeast Conference, 2021. https://doi.org/10.1145/3409334.3452086

BallCaller is a computer vision-based tool that lets amateur tennis players automate line calling using only a laptop and a camera. I made BallCaller using the Python programming language in conjunction with OpenCV. BallCaller tracks the position of a tennis ball in 2D space and captures the frame in which the ball touches the ground. If the ball lands outside the line, BallCaller reports that the ball is "out." If the ball lands inside the line, or if the ball touches the line, BallCaller reports that the ball is "in." This extended abstract introduces BallCaller and discusses the technical details of using OpenCV's color detection and image modification functionality to process images of sample ball locations.