Pub Date: 2022-04-28, DOI: 10.1109/SIEDS55548.2022.9799347
Restoration of Water Streams Utilizing Unmanned Aerial Vehicles
E. Emch, K. Hayes, Erin Janiga, T. Benzing, A. Salman
As the world progresses technologically, more efficient ways to track and manage streams, tributaries, and rivers can be developed. In this project, we are implementing wireless sensors and Unmanned Aerial Vehicle (UAV) technology to further the monitoring, and potentially the restoration, of a local stream. This new method of water testing is autonomous and is especially useful for water bodies that are difficult to access or very remote. With consistent, readily available water data, water bodies can be restored more efficiently. In this study, we focus on restoring Boone Run, a forested mountain stream on the South Fork of the Shenandoah River in Virginia that is managed by the Virginia Department of Forestry. The methodology includes using handheld meters each month to collect and analyze temperature, pH, dissolved oxygen, and conductivity. It also includes a more innovative form of data collection: two remote water sensors whose data are retrieved by UAV, forming a fully autonomous system. Each remote sensor uses wireless technology and consists of three probes (temperature, pH, and conductivity) along with a Raspberry Pi and circuit board to store and transfer the data. By collecting data wirelessly and remotely, readings can be taken more frequently and analyzed in more depth for the most accurate understanding of water quality. Both forms of data will then be analyzed to find the averages of the measured parameters and to see how the stream changes over time. A comparison of the manually and automatically collected data will also be made to assess differences in accuracy and to help explain the results. Overall, the autonomous, continuous system of sensor nodes and the UAV will ultimately reduce the labor, cost, and time associated with manual data collection. The ultimate goal of analyzing this data is to determine whether the stream's conditions can support brook trout, a keystone species of the stream. If the stream's conditions align with those that trout can inhabit, the stream is in good health, and restoration initiatives can begin to reintroduce trout.
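As a concrete illustration of the node side of such a system, here is a minimal, hypothetical sketch of a Raspberry Pi logging loop for the three probes. The driver calls, file name, and sampling interval are all assumptions for illustration; the paper does not specify them.

```python
# Hypothetical sensor-node logging loop; probe drivers are simulated.
import csv
import random
import time
from datetime import datetime, timezone

LOG_PATH = "boone_run_node1.csv"   # assumed filename, retrieved later by UAV
SAMPLE_INTERVAL_S = 15 * 60        # assumed 15-minute sampling interval

def read_probes():
    """Placeholder for the real temperature/pH/conductivity probe drivers."""
    return {
        "temperature_c": random.uniform(5, 20),
        "ph": random.uniform(6.0, 8.0),
        "conductivity_us_cm": random.uniform(20, 200),
    }

def log_once(path=LOG_PATH):
    """Append one timestamped reading to the node's CSV log."""
    r = read_probes()
    row = [datetime.now(timezone.utc).isoformat(),
           f"{r['temperature_c']:.2f}", f"{r['ph']:.2f}",
           f"{r['conductivity_us_cm']:.1f}"]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)

if __name__ == "__main__":
    for _ in range(3):      # short demo run instead of an infinite field loop
        log_once()
        time.sleep(1)       # would be SAMPLE_INTERVAL_S in deployment
```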
{"title":"Restoration of Water Streams Utilizing Unmanned Aerial Vehicles","authors":"E. Emch, K. Hayes, Erin Janiga, T. Benzing, A. Salman","doi":"10.1109/SIEDS55548.2022.9799347","DOIUrl":"https://doi.org/10.1109/SIEDS55548.2022.9799347","url":null,"abstract":"As the world progresses in technological advances, more efficient ways to track and manage streams, tributaries, and rivers can be developed. In this project, we are implementing wireless sensors and Unmanned Aerial Vehicle (UAV) technology to further progress monitoring and potentially restoring a local stream. This advanced technology and new method of water testing is autonomous and is especially useful for water bodies that are difficult to access or are very remote. By having consistent and available water data, water bodies can be restored more efficiently. In this study, we are specifically focusing on restoring Boone Run, a forested mountain stream on the South Fork of the Shenandoah River in Virginia, that is managed by the VA Department of Forestry. The methodology includes using handheld meters to collect data each month to collect and analyze temperature, pH, dissolved oxygen, and conductivity. The methodology also includes a more innovative collection of data using two remote water sensors, and further using UAV technology to retrieve the data, to make a fully autonomous system. The two remote water sensors utilize wireless technology and consist of three probes: temperature, pH, and conductivity and also a raspberry pi and circuit board to transfer and store the data. With the use of wireless and remotely collecting data, more frequent data will be found and can be analyzed in more depth to get the most accurate understanding of the water quality. The collection of both of these forms of data will then be further analyzed to find the averages of the different parameters being measured, and also to see how the stream changes overtime. A comparison of the manually collected data and automated collected data will also be made to see accuracy differences and will further help explain the results. Overall, the autonomous and continuous system of using the sensor nodes and the UAV will ultimately reduce labor, costs, and time associated with manually collecting data. The ultimate goal of analyzing this data is to recognize if the stream's conditions can support brook trout life, a keystone species of the stream. If the stream's conditions align with the conditions that trout can inhabit, it indicates the stream is in good health, and restoration initiatives can begin to reintroduce trout life.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130025837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799420
Design of a Low-Cost Autonomous Epipelagic Profiling System for Oceanic Research
Danyi Chen, Danette Martinez, T. H. Taylor
The epipelagic zone, the region from the ocean's surface to 200 meters in depth, is an area of interest for ecologists, biologists, oceanographers, and other researchers studying marine life and the environment. While autonomous underwater vehicles that allow researchers to collect data throughout the epipelagic zone exist, most are too expensive for smaller institutions and individual research labs to conduct their own explorations. This project aims to create an autonomous epipelagic zone profiling system that is relatively inexpensive (<$2000), requires minimal maintenance, and is accessible to smaller and/or landlocked institutions. By providing a more accessible means of data collection, researchers can more cost-efficiently conduct targeted studies of marine ecosystems to better understand topics such as the impacts of climate change on the oceans or changing phytoplankton population sizes. Through conversations with our stakeholders, Dr. Sheri Floge of the Wake Forest University (WFU) Department of Biology and electrical and computer engineer Dr. Kyle Luthy of the WFU Department of Engineering, we established that our system should be able to autonomously descend to a depth of 50 to 100 meters and collect data such as water temperature and pressure, as well as capture images. Using a systematic design process, the team conceptualized a design for a low-cost, modular, buoyancy-controlled capsule. The capsule will be attached to a buoy system to maintain its longitudinal and latitudinal position, from which it will traverse the epipelagic zone to collect data. The team is currently prototyping and testing the system, and although the prototype will have only a few sensors, the modularity of the design will enable future users to purchase and attach various sensors (such as a PlanktoScope) that suit their needs. Over the coming weeks, the team will complete assembly and conduct laboratory and field testing of the prototype.
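To make the buoyancy-control idea concrete, below is a toy simulation of a capsule descending to a target depth under a proportional controller. The hydrostatic depth conversion is standard physics, but the gains, kinematics, and target depth are assumptions, not the team's actual design.

```python
# Toy simulation of a buoyancy-controlled descent to a target depth.
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2
TARGET_DEPTH_M = 75.0   # within the stated 50-100 m operating range
KP = 0.05               # proportional gain (assumed)
MAX_RATE_M_S = 0.5      # assumed maximum ascent/descent rate

def pressure_to_depth(pressure_pa, surface_pa=101_325.0):
    """How a real pressure sensor reading would map to depth (hydrostatics)."""
    return max(0.0, (pressure_pa - surface_pa) / (RHO_SEAWATER * G))

def simulate_profile(steps=300, dt=1.0):
    """Crude kinematics: ballast command in [-1, 1] sets vertical rate."""
    depth, log = 0.0, []
    for t in range(steps):
        error = TARGET_DEPTH_M - depth                 # positive -> need to sink
        ballast = max(-1.0, min(1.0, KP * error))
        depth = max(0.0, depth + ballast * MAX_RATE_M_S * dt)
        log.append((t * dt, depth))                    # where sensors would sample
    return log

if __name__ == "__main__":
    print(simulate_profile()[-1])   # (time_s, depth_m) near the 75 m target
```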
{"title":"Design of a Low-Cost Autonomous Epipelagic Profiling System for Oceanic Research","authors":"Danyi Chen, Danette Martinez, T. H. Taylor","doi":"10.1109/sieds55548.2022.9799420","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799420","url":null,"abstract":"The epipelagic zone, the region from the surface of the ocean to 200 meters in depth, is an area of interest for ecologists, biologists, oceanographers, and other researchers interested in studying marine life and the environment. While autonomous underwater vehicles that can allow researchers to collect data throughout the epipelagic zone exist, most of these solutions are too expensive to enable smaller institutions and individual research labs to conduct their own explorations. This project aims to create an autonomous epipelagic zone profiling system that is relatively inexpensive (<$2000), requires minimal maintenance, and is accessible to smaller and/or landlocked institutions. By providing a more accessible means of data collection, researchers can more cost efficiently conduct targeted studies of marine ecosystems to better understand the environment and topics such as the impacts of climate change on the oceans, or the changing population size of phytoplankton in the environment, etc. Through conversations with our stakeholders, Dr. Sheri Floge of the Wake Forest University (WFU) Department of Biology and Electrical and Computer Engineer Dr. Kyle Luthy of the WFU Department of Engineering, we established that our system should be able to autonomously descend to a depth of 50 to 100 meters underwater and collect data such as water temperature, and pressure, as well as capture images. U sing a systematic design process, the team was able to conceptualize a design for a low-cost modular buoyancy-controlled capsule. The capsule will be attached to a buoy system, to maintain its longitudinal and latitudinal position, from which it will be able to traverse the epipelagic zone to collect data. The team is currently in the process of prototyping and testing the system, and although the prototype will only have a few sensors, the modularity of the design will enable future users to purchase and attach various sensors (such as a PlanktoScope) that suit their needs. Over the coming weeks, the team will be completing assembly and conducting laboratory and field testing of the prototype.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128013461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799407
Multi-Criteria Decision Analysis Tool for Capital Planning and Prioritization of WMATA Facilities and Assets
Latifa Al Jlayel, Kazi Asifa Ashrafi, Yumna Dahab, Diing Manyang
The Washington Metropolitan Area Transit Authority (WMATA), also known as Metro, spans three states and serves thousands of customers daily with transportation services. Due to the transit agency's influence on the area, as well as its size, WMATA has countless assets, making asset management and resource allocation challenging tasks. To develop a prioritization schema for efficient capital allocation across WMATA's facilities, the project applied an integrated approach combining Multi-Criteria Decision Analysis (MCDA) methods with a mixed knapsack optimization. MCDA methods such as the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) were implemented in our evaluation framework to rank alternatives, chosen for their relative computational simplicity. Additionally, the project applied a mixed knapsack formulation to build an optimization model that respects the budget while ensuring high-priority projects are not excluded. To evaluate the robustness of the criteria weights and asset scoring metrics, a sensitivity analysis was carried out within the MCDA process. The proposed deliverable, consisting of a user-friendly Power BI dashboard and Excel model, will assist WMATA's Facility Asset Management Office (FAMO) in its initiative to support the capital planning decision-making process within the transit agency. The prioritization tool will also provide transparency into the various elements that impact an asset's performance and its direct influence on quality-of-service delivery within WMATA.
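For readers unfamiliar with TOPSIS, the sketch below implements the standard algorithm (vector normalization, weighting, distances to the ideal and anti-ideal solutions, closeness coefficient) on toy numbers. The criteria, weights, and scores are illustrative placeholders, not WMATA data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: (n_alternatives, n_criteria); benefit[j] is True if higher is better."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    norm = m / np.linalg.norm(m, axis=0)           # vector-normalize each criterion
    v = norm * w                                   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness in [0, 1]; higher = better

# Toy example: three assets scored on priority, risk, and cost (illustrative).
scores = topsis([[3, 9, 200], [7, 5, 120], [5, 8, 300]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])       # cost is a "lower is better" criterion
print(scores.argsort()[::-1])                      # asset indices, best to worst
```

The resulting closeness scores could then feed a budget-constrained knapsack selection over the ranked projects, as the abstract describes.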
{"title":"Multi - Criteria Decision Analysis Tool for Capital Planning and Prioritization of WMATA Facilities and Assets","authors":"Latifa Al Jlayel, Kazi Asifa Ashrafi, Yumna Dahab, Diing Manyang","doi":"10.1109/sieds55548.2022.9799407","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799407","url":null,"abstract":"The Washington Metropolitan Area Transit Authority (WMATA), also known as Metro, spans three states and serves thousands of customers daily with transportation services. Due to the transit agency's influence on the area, as well as its size, WMATA has countless assets, thus making asset management and resource allocation challenging tasks. To develop a prioritization schema for efficient capital allocation within WMATA's facility, the project's intended goal was to perform an integrated approach with combined Multi - Criteria Decision Analysis (MCDA) methods and Mixed Knapsack. MCDA methods such as Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) were implemented in our evaluation framework to develop ranked alternatives, and for its relative simplicity in logical computations. Additionally, the project applied Mixed Knapsack to develop an optimized model to minimize budget and ensure high-priority projects are not excluded. To evaluate the robustness of the criteria's weights and scoring metrics of the assets, a sensitivity analysis was carried out within the MCDA process. The proposed deliverable, consisting of a user-friendly Power BI dashboard and Excel model, will assist WMATA's Facility Asset Management Office (FAMO) in their initiative to support the capital planning decision-making process within the transit agency. The prioritization tool will also allow transparency of various elements that impact an asset's performance and its direct influence on the Quality-of-service delivery within WMATA","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115624840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799375
Evaluating Chemical Supply Chain Criticality in the Water Treatment Industry: A Risk Analysis and Mitigation Model
Syrine Mefteh, Alexa L. Rosdahl, Kaitlin G. Fagan, Anirudh Kumar
The operability of surface water treatment facilities depends on many factors, but the one with the largest impact is the availability of the necessary chemicals. Facilities across the country vary in their processes and sources, but all require chemicals to produce potable water. The purpose of this project was to develop a risk assessment tool to determine the shortfalls and risks in the water treatment industry's chemical supply chain, which was used to produce a risk mitigation plan ensuring plant operability. To achieve this, a fault tree was built to address four main areas of concern: (i) market supply and demand, (ii) chemical substitutability, (iii) chemical transportation, and (iv) the chemical storage process. Expert elicitation was then conducted to formulate a Failure Modes and Effects Analysis (FMEA) and to develop radar charts regarding the operations and management of specific plants. These tools were then employed to develop a final risk mitigation plan comprising two parts: (i) a quantitative analysis comparing and contrasting the risks of the water treatment plants under study and (ii) a qualitative recommendation for each of the plants, both culminating in a mitigation model for how to control and monitor chemical-related risks.
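A minimal sketch of the FMEA scoring step follows: each failure mode receives severity, occurrence, and detection ratings (1-10), and modes are ranked by the Risk Priority Number (RPN), the product of the three. The failure modes and ratings below are invented placeholders, not elicited values from the study.

```python
# Illustrative FMEA ranking by Risk Priority Number (RPN = S * O * D).
failure_modes = {
    "coagulant supply disruption":  {"severity": 9, "occurrence": 4, "detection": 3},
    "chlorine transport delay":     {"severity": 8, "occurrence": 5, "detection": 4},
    "on-site storage degradation":  {"severity": 6, "occurrence": 3, "detection": 2},
}

def rpn(ratings):
    """Risk Priority Number: product of the three 1-10 ratings."""
    return ratings["severity"] * ratings["occurrence"] * ratings["detection"]

for mode, ratings in sorted(failure_modes.items(),
                            key=lambda kv: rpn(kv[1]), reverse=True):
    print(f"{mode:30s} RPN = {rpn(ratings)}")
```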
{"title":"Evaluating Chemical Supply Chain Criticality in the Water Treatment Industry: A Risk Analysis and Mitigation Model","authors":"Syrine Mefteh, Alexa L. Rosdahl, Kaitlin G. Fagan, Anirudh Kumar","doi":"10.1109/sieds55548.2022.9799375","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799375","url":null,"abstract":"The assurance of the operability of surface water treatment facilities lies in many factors, but the factor with the largest impact on said assurance is the availability of the necessary chemicals. Facilities across the country vary in their processes and sources, but all require chemicals to produce potable water. The purpose of this project was to develop a risk assessment tool to determine the shortfalls and risks in the water treatment industry's chemical supply chain, which was used to produce a risk mitigation plan ensuring plant operability. To achieve this, a Fault Tree was built to address four main areas of concern: (i) market supply and demand, (ii) chemical substitutability, (iii) chemical transportation, and (iv) chemical storage process. Expert elicitation was then conducted to formulate a Failure Modes and Effects Analysis (FMEA) and develop Radar Charts, regarding the operations and management of specific plants. These tools were then employed to develop a final risk mitigation plan comprising two parts: (i) a quantitative analysis comparing and contrasting the risks of the water treatment plants under study and (ii) a qualitative recommendation for each of the plants-both culminating in a mitigation model on how to control and monitor chemical-related risks.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122656118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799401
Safe and Sustainable Fleet Management with Data Analytics and Reinforcement Training
Ryan Ahmadiyar, J. Chun, Caroline Fuccella, Damir Hrnjez, Grace Parzych, Benjamin Weisel, Zeyu Mu, Michael E. Duffy, B. Park
The University of Virginia's Facilities Management (FM) Fleet consists of around 260 vehicles and is committed to safe and sustainable driving. The fleet vehicles contain telematic tracking systems that provide feedback on a multitude of driving behavioral measures, including speeding, harsh braking, hard acceleration, seat belt usage, harsh cornering, and idling time. In a previous study, data collected on these measures were used to develop relevant educational materials on mindful driving. This paper aims to further improve safe and eco-friendly FM driving behaviors by analyzing whether reinforcement training (additional scorecards and manager conversations) was effective when given proactively or reactively in response to increased violations of driving behavioral measures. This paper outlines the process we used in determining when and how to administer the two different training programs and which vehicle shops to involve. One group of shops received in-depth training before any notable violations were detected, which was deemed proactive training. A separate shop received reactive training after a significant increase in vehicle incidents was detected. These reinforcement training programs were largely based on the professional FM education modules and provided conversation templates for managers to use in re-educating their shop's drivers. The research showed that reactive reinforcement training had a statistically significant effect on speeding, while proactive reinforcement training did not; however, further expansion of both trainings may still be warranted.
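As an illustration of the kind of test behind the speeding result, here is a hedged sketch of a paired comparison of weekly speeding counts before and after reactive training. The numbers are invented, and the paper does not state which statistical test it used.

```python
# Illustrative paired t-test on weekly speeding counts (all values invented).
from scipy import stats

before = [14, 11, 17, 13, 15, 12, 16, 14]   # weekly speeding events, pre-training
after  = [ 9,  8, 12,  7, 10,  9, 11,  8]   # same shop, post-training

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```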
{"title":"Safe and Sustainable Fleet Management with Data Analytics and Reinforcement Training","authors":"Ryan Ahmadiyar, J. Chun, Caroline Fuccella, Damir Hrnjez, Grace Parzych, Benjamin Weisel, Zeyu Mu, Michael E. Duffy, B. Park","doi":"10.1109/sieds55548.2022.9799401","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799401","url":null,"abstract":"The University of Virginia's Facilities Management (FM) Fleet consists of around 260 total vehicles and is committed to safe and sustainable driving. The fleet vehicles contain telematic tracking systems which provide feedback on a multitude of driving behavioral measures, including speeding, harsh braking, hard acceleration, seat belt usage, harsh cornering, and idling time. In a previous study, data collected on these measures was used to develop relevant educational materials on mindful driving. This paper aims to further improve safe and eco-friendly FM driving behaviors by analyzing if reinforcement training, additional scorecards and manager conversations, proved to be effective when given proactively or reactively to increased violations of driving behavioral measures. This paper outlines the process we used in determining when and how to administer the two different training programs and which vehicle shops to involve. One group of shops received in-depth training before any notable violations were detected, which was deemed proactive training. A separate shop received the reactive training after any significant increase in vehicle incidents was detected. These reinforcement training programs were largely based on the professional FM education modules and provided conversation templates for managers to use in order to re-educate their shop's respective drivers. The research showed that reactive reinforcement training was statistically significant for speeding while proactive reinforcement training was not statistically significant; however, further expansion upon both trainings may still be warranted.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115332057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.48550/arXiv.2205.01180
Using Machine Learning to Evaluate Real Estate Prices Using Location Big Data
W. Coleman, Ben Johann, Nicholas Pasternak, Jaya Vellayan, N. Foutz, Heman Shakeri
With everyone trying to enter the real estate market nowadays, knowing the proper valuations for residential and commercial properties has become crucial. Past research has used static real estate data (e.g., number of beds, baths, square footage) or a combination of real estate and demographic information to predict property prices. In this investigation, we attempt to improve upon past research by exploring a different approach: determining whether mobile location data can improve the predictive power of popular regression and tree-based models. To prepare the data for our models, we attached the mobility data to individual properties from the real estate data, aggregating users within 500 meters of each property for each day of the week. We removed people who lived within 500 meters of each property, so each property's aggregated mobility data contained only non-resident census features. On top of these dynamic census features, we also included static census features, including the number of people in the area, the average proportion of people commuting, and the number of residents in the area. Finally, we tested multiple models to predict real estate prices. Our proposed model is two stacked random forest modules combined using a ridge regression that takes the random forest outputs as predictors. The first random forest model uses static features only and the second uses dynamic features only. Comparing models with and without the dynamic mobile location features shows that the model with them achieves 3% lower mean squared error.
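A sketch of this stacking architecture, assuming scikit-learn and placeholder hyperparameters (the paper does not publish its settings), might look like the following; out-of-fold predictions are used so the ridge meta-model is not fit on leaked in-sample forest outputs.

```python
# Sketch of the described stack: two random forests + ridge meta-model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def fit_stacked(X_static, X_dynamic, y):
    """Fit one forest per feature group, then a ridge over their predictions."""
    rf_static = RandomForestRegressor(n_estimators=300, random_state=0)
    rf_dynamic = RandomForestRegressor(n_estimators=300, random_state=0)
    # Out-of-fold predictions avoid training the meta-model on leaked fits.
    p_static = cross_val_predict(rf_static, X_static, y, cv=5)
    p_dynamic = cross_val_predict(rf_dynamic, X_dynamic, y, cv=5)
    meta = Ridge(alpha=1.0).fit(np.column_stack([p_static, p_dynamic]), y)
    rf_static.fit(X_static, y)      # refit both forests on the full data
    rf_dynamic.fit(X_dynamic, y)
    return rf_static, rf_dynamic, meta

def predict_stacked(models, X_static, X_dynamic):
    rf_s, rf_d, meta = models
    return meta.predict(np.column_stack([rf_s.predict(X_static),
                                         rf_d.predict(X_dynamic)]))
```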
{"title":"Using Machine Learning to Evaluate Real Estate Prices Using Location Big Data","authors":"W. Coleman, Ben Johann, Nicholas Pasternak, Jaya Vellayan, N. Foutz, Heman Shakeri","doi":"10.48550/arXiv.2205.01180","DOIUrl":"https://doi.org/10.48550/arXiv.2205.01180","url":null,"abstract":"With everyone trying to enter the real estate market nowadays, knowing the proper valuations for residential and commercial properties has become crucial. Past researchers have been known to utilize static real estate data (e.g, number of beds, baths, square footage) or even a combination of real estate and demographic information to predict property prices. In this investigation, we attempted to improve upon past research. So we decided to explore a unique approach - we wanted to determine if mobile location data could be used to improve the predictive power of popular regression and tree-based models. To prepare our data for our models, we processed the mobility data by attaching it to individual properties from the real estate data that aggregated users within 500 meters of the property for each day of the week. We removed people that lived within 500 meters of each property, so each property's aggregated mobility data only contained non-resident census features. On top of these dynamic census features, we also included static census features, including the number of people in the area, the average proportion of people commuting, and the number of residents in the area. Finally, we tested multiple models to predict real estate prices. Our proposed model is two stacked random forest modules combined using a ridge regression that uses the random forest outputs as predictors. The first random forest model used static features only and the second random forest model used dynamic features only. Comparing our models with and without the dynamic mobile location features concludes the model with dynamic mobile location features achieves 3 % lower mean squared error than the same model but without dynamic mobile location features.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131521214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799431
Convergence Across Behavioral and Self-report Measures Evaluating Individuals' Trust in an Autonomous Golf Cart
Jenna E. Cotter, Emily H. O’Hear, R. C. Smitherman, Addison B. Bright, N. Tenhundfeld, Jason Forsyth, N. Sprague, Samy El-Tawab
As automation becomes more prevalent across everything from military and health care settings to everyday household items, it is necessary to understand the nature of human interactions with these systems. One critically important element of these interactions is user trust, as it can predict an automated system's safe and effective use. Past research has evaluated individuals' trust in automation through a host of assessment techniques, such as self-report, physiological, and behavioral measures. However, to date, there has been little evaluation of the convergence across these measures in a real-world environment. Convergence across measures is a useful tool for understanding the mechanisms by which a cognitive construct is impacted, and it provides greater confidence that any single measure is evaluating what it purports to measure. The present study used an autonomous golf cart that drove participants to different locations around the campus of James Madison University while a camera recorded them. In addition, participants were given the AICP-R and TOAST to evaluate their complacency potential and trust, respectively. Researchers coded videos for verification/checking behaviors (i.e., participants looked at or interacted with the GUI used to control the cart) and nervous behaviors (i.e., bracing, fidgeting, etc.). Environmental 'obstacles' such as pedestrians, food-delivery robots, and construction were also coded by watching a front-facing camera. Results indicate a disconnect between the self-report and behavioral measures of trust. However, there was a relationship between the coded nervous and verification behaviors, and between those behaviors and the presence of obstacles. This lack of convergence across measures indicates a need for future research to understand whether it reflects shortcomings in the measures themselves or in the existing definition of trust as a construct, or whether some measures afford a nuance that others do not.
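A minimal sketch of the kind of convergence check described here, assuming a Pearson correlation between self-reported trust (e.g., TOAST scores) and coded verification counts; all values are invented for illustration, and the paper does not report its analysis.

```python
# Illustrative convergence check between a self-report and a behavioral measure.
from scipy import stats

toast_scores  = [5.1, 4.2, 6.0, 3.8, 5.5, 4.9, 3.2, 5.8]  # self-report (invented)
verifications = [  4,   7,   2,   9,   3,   5,  10,   2]  # coded checking counts

r, p = stats.pearsonr(toast_scores, verifications)
print(f"r = {r:.2f}, p = {p:.3f}")  # a near-zero r would indicate non-convergence
```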
{"title":"Convergence Across Behavioral and Self-report Measures Evaluating Individuals' Trust in an Autonomous Golf Cart","authors":"Jenna E. Cotter, Emily H. O’Hear, R. C. Smitherman, Addison B. Bright, N. Tenhundfeld, Jason Forsyth, N. Sprague, Samy El-Tawab","doi":"10.1109/sieds55548.2022.9799431","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799431","url":null,"abstract":"As automation is becoming more prevalent across everything from military and health care settings to everyday household items, it is necessary to understand the nature of human interactions with these systems. One critically important element of these interactions is user trust, as it can predict automated systems' safe and effective use. Past research has evaluated individuals' trust in automation through a host of different assessment techniques such as self-report, physiological, and behavioral measures. However, to date, there has been little evaluation of the convergence across these measures in a real-world environment. Convergence across measures is a useful tool in understanding the mechanisms by which a cognitive construct is impacted and providing greater confidence that any single measure is evaluating what it purports to measure. The present study used an autonomous golf cart that drove participants to different locations around the campus of James Madison University while a camera recorded them. In addition, participants were given the AICP-R and TOAST to evaluate their complacency potential and trust, respectively. Researchers coded videos for verification/checking behaviors (i.e., participants looked at or interacted with the GUI used to control the cart) and nervous behaviors (i.e., bracing, fidgeting, etc.). Additionally, environmental 'obstacles' such as pedestrians, food-delivery robots, and construction were also coded for by watching a front-facing camera. Results indicate a disconnect between the self-report and behavioral measures evaluating trust. However, there was a relationship between the coded nervous behaviors and verification behaviors and a relationship between those and the presence of obstacles. This lack of convergence across measures indicates a need for future research to understand whether this non-convergence represents shortcomings with the measures themselves, the existing definition of trust as a construct, or perhaps indicates that there is a nuance that can be afforded by some measures over another.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114158234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/SIEDS55548.2022.9799319
Use, Acceptance, and Adoption of Automated Systems with Intrinsic and Extrinsic Motivation Based Incentive Mechanisms
Hannah M. Barr, R. C. Smitherman, Bryan L. Mesmer, Kristin Weger, Douglas L. Van Bossuyt, Robert Semmens, N. Tenhundfeld
Incentive mechanisms are used to encourage a behavior. They can be reputation incentives (social standing risks and rewards), gamification incentives (game-based elements in non-gaming environments), or feedback incentives (verbal or text feedback). Previous research suggests that reputation and gamification incentives provide extrinsic motivation (EM), while feedback incentives provide intrinsic motivation (IM). Incentive mechanisms vary in effectiveness, but most studies indicate that IM-yielding incentives are most effective. Incentive mechanisms that promote the use, acceptance, and adoption of automated systems can prove useful to organizations that do not want to waste resources on unused systems. Incentivizing the use, acceptance, and adoption of automated systems can enhance productivity, overall safety, and work-life balance. Though there are many studies on these topics, the relative effectiveness of different IM and EM incentive mechanisms has not been directly compared. This study fills that gap by examining the effectiveness of incentive mechanisms that affect IM and EM. The current study utilized reputation incentives, gamification incentives, feedback incentives, and a control group to compare the use, acceptance, and adoption of an unmanned aerial vehicle (UAV) in a simulated hostage rescue task. Data were collected on how frequently participants used the system. Following the hostage rescue task, participants were given questionnaires measuring motivation, acceptance, and adoption. This study provides insight into the relative influence of IM- and EM-based incentive mechanisms in promoting automated technologies. These results will help elucidate the steps that organizations like the military can take to enhance warfighter buy-in and use of new technologies.
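As an illustration only, a four-group comparison of use frequency like the one this design implies could be analyzed with a one-way ANOVA; the counts below are invented, and the paper does not report its analysis or data.

```python
# Illustrative one-way ANOVA across the four incentive conditions.
from scipy import stats

reputation   = [12, 15, 11, 14, 13]   # UAV-use counts per participant (invented)
gamification = [16, 18, 15, 17, 19]
feedback     = [20, 22, 19, 21, 23]
control      = [10, 12,  9, 11, 10]

f_stat, p_value = stats.f_oneway(reputation, gamification, feedback, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significant F -> groups differ
```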
{"title":"Use, Acceptance, and Adoption of Automated Systems with Intrinsic and Extrinsic Motivation Based Incentive Mechanisms","authors":"Hannah M. Barr, R. C. Smitherman, Bryan L. Mesmer, Kristin Weger, Douglas L. Van Bossuyt, Robert Semmens, N. Tenhundfeld","doi":"10.1109/SIEDS55548.2022.9799319","DOIUrl":"https://doi.org/10.1109/SIEDS55548.2022.9799319","url":null,"abstract":"Incentive mechanisms are used to encourage a behavior. Incentive mechanisms can be reputation incentives (social standing risks and rewards), gamification incentives (game-based elements in non-gaming environments), and feedback incentives (verbal or text feedback). Previous research suggests that reputation and gamification incentives provide extrinsic motivation (EM), while feedback incentives provide intrinsic motivation (IM). Incentive mechanisms vary in effectiveness, but most studies indicate that IM yielding incentives are most effective. Incentive mechanisms used to promote the use, acceptance, and adoption of automated systems can prove useful to organizations that do not want to waste resources on unused systems. Incentivizing the use, acceptance, and adoption of automated systems can enhance productivity, overall safety, and work-life balance. Though there are many studies on these topics, the relative effectiveness of different IM and EM incentive mechanisms has not been studied. This study fills that gap by examining the effectiveness of incentive mechanisms that affect IM and EM. The current study utilized reputation incentives, gamification incentives, feedback incentives, and a control group to compare the use, acceptance, and adoption of an unmanned aerial vehicle (UAV) in a simulated hostage rescue task. Data were collected on how frequently participants used the system. Following the hostage rescue task, participants were given questionnaires measuring motivation, acceptance, and adoption. This study provides insight into the relative influence of IM and EM-based incentive mechanisms to promote automated technologies. These results will help elucidate the steps that organizations like the Military can take to enhance warfighter buy-in and use of new technologies.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123142215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799322
Threat Modeling for Enterprise Cybersecurity Architecture
Branko Bokan, Joost Santos
Traditional threat modeling methodologies work well on a small scale, when evaluating targets such as a data field, a software application, or a system component, but they do not allow for comprehensive evaluation of an entire enterprise architecture. Nor do they enumerate and consider a comprehensive set of actual threat actions observed in the wild. Because of the lack of adequate threat modeling methodologies for determining cybersecurity protection needs on an enterprise scale, cybersecurity executives and decision makers have traditionally relied upon marketing pressure as the main input into decisions about investments in cybersecurity capabilities (tools). A new methodology, originally developed by the Department of Defense and further expanded by the Department of Homeland Security, for the first time allows a threat-based, end-to-end evaluation of cybersecurity architectures and determination of gaps or areas in need of future investment. Although in the public domain, this methodology has not been used outside the federal government. This paper examines the new threat modeling approach, which allows organizations to look at their cybersecurity protections from the standpoint of an adversary. The methodology enumerates threat actions observed in the wild using a cyber threat framework and scores cybersecurity architectural capabilities on their ability to protect against, detect, and recover from each threat action. The results of the analysis form a matrix called a capability coverage map that visually represents the coverage, gaps, and overlaps against threat actions. The threat actions can be further prioritized using a threat heat map, a visual representation of the prevalence and maneuverability of threat actions that can be overlaid on top of a coverage map. The paper discusses the new threat modeling methodology and proposes future research with the goal of establishing a decision-making framework for selecting cybersecurity architectural capability portfolios that maximize protection against known cybersecurity threats.
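To make the coverage-map idea concrete, here is a toy construction of such a matrix with generic threat actions and capabilities; the scoring scale and all values are placeholders, not data from the DoD/DHS methodology.

```python
# Toy capability coverage map: threat actions x capabilities, scored 0-3.
import numpy as np

threat_actions = ["phishing", "credential theft", "lateral movement"]
capabilities   = ["email gateway", "EDR", "network segmentation"]

# coverage[i, j]: how well capability j covers threat action i (0 = none).
coverage = np.array([
    [3, 1, 0],
    [1, 2, 0],
    [0, 2, 3],
])

best = coverage.max(axis=1)     # best available coverage per threat action
for threat, score in zip(threat_actions, best):
    flag = "GAP" if score <= 1 else "ok"
    print(f"{threat:18s} best coverage = {score}  [{flag}]")
```

In the real methodology each threat action would also carry prevalence and maneuverability weights (the threat heat map) to prioritize which gaps matter most.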
{"title":"Threat Modeling for Enterprise Cybersecurity Architecture","authors":"Branko Bokan, Joost Santos","doi":"10.1109/sieds55548.2022.9799322","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799322","url":null,"abstract":"The traditional threat modeling methodologies work well on a small scale, when evaluating targets such as a data field, a software application, or a system component–but they do not allow for comprehensive evaluation of an entire enterprise architecture. They also do not enumerate and consider a comprehensive set of actual threat actions observed in the wild. Because of the lack of adequate threat modeling methodologies for determining cybersecurity protection needs on an enterprise scale, cybersecurity executives and decision makers have traditionally relied upon marketing pressure as the main input into decision making for investments in cybersecurity capabilities (tools). A new methodology, originally developed by the Department of Defense then further expanded by the Department of Homeland Security, for the first time allows for a threat-based, end-to-end evaluation of cybersecurity architectures and determination of gaps or areas in need of future investments. Although in the public domain, this methodology has not been used outside of the federal government. This paper examines the new threat modeling approach that allows organizations to look at their cybersecurity protections from the standpoint of an adversary. The methodology enumerates threat actions that have been observed in the wild using a cyber threat framework and scores cybersecurity architectural capabilities for their ability to protect, detect, and recover from each threat action. The results of the analysis form a matrix called capability coverage map that visually represents the coverage, gaps, and overlaps against threat actions. The threat actions can be further prioritized using a threat heat map – a visual representation of the prevalence and maneuverability of threat actions that can be overlaid on top of a coverage map. The paper discusses the new threat modeling methodology and proposes future research with a goal to establish a decision-making framework for selecting cybersecurity architectural capability portfolios that maximize protections against known cybersecurity threats.","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130114941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-04-28, DOI: 10.1109/sieds55548.2022.9799426
Handwritten Text and Digit Classification on Rwandan Perioperative Flowsheets via YOLOv5
Navya Annapareddy, Kara Fallin, Ryan Folks, W. Jarrard, Marcel Durieux, Nazanin Moradinasab, B. Naik, S. Sengupta, Christian Ndaribitse, Donald Brown
The African Surgical Outcomes Study, a seven-day, prospective, observational cohort study across 25 countries in Africa, reported a rate of serious postoperative complications of 18% and mortality of 2% [1]. Of these deaths, 95% occurred in the postoperative period and were considered preventable. Many factors contribute to postoperative outcomes, but a key approach to decreasing complications is the ability to predict patient outcome trajectories from perioperative parameters [2]. To efficiently predict these outcomes, electronic medical record systems are needed. Compared to handwritten paper records, these systems offer profound advantages, including automated transfer of medical information, dynamic search queries, and improved resilience through data backups. In this paper, we implement the digitization of the drug and physiological-indicator portions of 363 handwritten perioperative flowsheets sourced from the University Teaching Hospital of Kigali in Rwanda. In both sections, the detection of handwritten words and digits is accomplished using a YOLOv5 model trained on a single class. The bounding boxes are then cropped and classified by a convolutional neural network (CNN). Our experimental results suggest that the proposed method can successfully detect handwritten digits and words as evaluated by object mean average precision (mAP).
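A hedged sketch of this two-stage pipeline follows, assuming the ultralytics/yolov5 torch.hub interface; the weight files, input size, and classifier architecture are placeholders, since the authors' models are not published.

```python
# Sketch: single-class YOLOv5 detection, then per-crop CNN classification.
import torch
from PIL import Image
from torchvision import transforms

# Placeholder weight paths; the authors' trained weights are not published.
detector = torch.hub.load("ultralytics/yolov5", "custom", path="flowsheet_yolov5.pt")
classifier = torch.load("token_cnn.pt", map_location="cpu")  # assumed pickled nn.Module
classifier.eval()

# Assumed preprocessing: the classifier is taken to expect 1x32x32 inputs.
to_tensor = transforms.Compose([
    transforms.Grayscale(), transforms.Resize((32, 32)), transforms.ToTensor()])

def classify_tokens(image_path):
    """Detect handwritten tokens on a flowsheet page and classify each crop."""
    page = Image.open(image_path).convert("RGB")
    detections = detector(page).xyxy[0]          # rows: [x1, y1, x2, y2, conf, cls]
    labels = []
    with torch.no_grad():
        for x1, y1, x2, y2, conf, _ in detections.tolist():
            crop = page.crop((x1, y1, x2, y2))
            logits = classifier(to_tensor(crop).unsqueeze(0))
            labels.append((int(logits.argmax(dim=1)), conf))
    return labels
```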
{"title":"Handwritten Text and Digit Classification on Rwandan Perioperative Flowsheets via YOLOv5","authors":"Navya Annapareddy, Kara Fallin, Ryan Folks, W. Jarrard, Marcel Durieux, Nazanin Moradinasab, B. Naik, S. Sengupta, Christian Ndaribitse, Donald Brown","doi":"10.1109/sieds55548.2022.9799426","DOIUrl":"https://doi.org/10.1109/sieds55548.2022.9799426","url":null,"abstract":"The African Surgical Outcomes Study, a seven-day, prospective, observational cohort study across 25 countries in Africa reported a rate of serious postoperative complications of 18% and mortality of 2% [1]. 95% of these deaths occurred in the postoperative period and were considered preventable. There are many factors that contribute to postoperative outcomes, but a key approach to decreasing complications is the ability to predict patient outcome trajectories from perioperative parameters [2]. In order to efficiently predict these outcomes, electronic medical record systems are needed. As compared to handwritten paper records, these systems offer profound advantages, including automated transfer of medical information, dynamic search queries, and improved resilience for data backups. In this paper, we implement the digitization of the drug and physiological indicator portions of 363 handwritten perioperative flowsheets sourced from the University Teaching Hospital of Kigali in Rwanda. In both sections, the detection of handwritten words and digits is accomplished using a YOLOv5 model trained on a single class. The bounding boxes are then cropped and classified by a convolutional neural network (CNN). Our experimental results suggest that our proposed method can successfully detect handwritten digits and words as evaluated on object mean average precision (mAP).","PeriodicalId":286724,"journal":{"name":"2022 Systems and Information Engineering Design Symposium (SIEDS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130237099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}