Analysis of Policy Factors Impacting the Use of Low-Cost Air Monitoring Networks in Washington, D.C.
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106694
Simon Saliby, Alexander Tong, Selin Ciesielski, Patrick Lim, R. Francis
The Environmental Protection Agency (EPA) sets national ambient air quality standards (NAAQS) for six criteria pollutants and monitors attainment of these standards through a system of federally accepted analytical methods (FEMs or FRMs). Due to their cost and technical sophistication, FEM monitoring networks may be well suited to measuring air pollution at a regional spatial resolution but may not be appropriate for monitoring at neighborhood-level resolution. While FEM networks are effective at monitoring regional-scale pollution, the literature indicates that pollution varies considerably at sub-regional and even neighborhood- or block-level spatial scales. Therefore, while a region may be in attainment with the NAAQS, local communities’ air quality may not be represented by the data collected through the FEM network. In fact, public health trends, the uneven distribution of tree canopy, and socioeconomic data indicate that particular communities in Washington, DC may be at high risk for nonattainment despite the District being in compliance with federal measures of air quality. With the overarching goal of mitigating this potential environmental and public health issue, the George Washington University Sustainability living lab project, Fresh Air DC, is working to deploy a low-cost, high-density air monitoring system with at least one sensor in each Advisory Neighborhood Commission (ANC) throughout the District. In this paper, we formulate the elements of a policy framework that can support the establishment of this system. Our research will involve a comparative analysis of two existing low-cost, high-density systems: one deployed in California and brought forth by state legislation, and the Breathe London hyperlocal air quality monitoring network. This comparative analysis will demonstrate the role of effective policy systems in supporting the development and implementation of a low-cost, high-density system in DC. Moreover, this analysis demonstrates the role of the proposed policy framework in supporting the new DC Sustainability Plan.
{"title":"Analysis of Policy Factors Impacting the Use of Low-Cost Air Monitoring Networks in Washington, D.C.","authors":"Simon Saliby, Alexander Tong, Selin Ciesielski, Patrick Lim, R. Francis","doi":"10.1109/SIEDS49339.2020.9106694","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106694","url":null,"abstract":"The Environmental Protection Agency (EPA) sets national ambient air quality standards (NAAQS) for six criteria pollutants and monitors attainment to these standards through a system of federally accepted analytical methods (FEMs or FRMs). Due to their cost and technical sophistication, FEM monitoring networks may be good for measuring air pollution at a regional spatial resolution but may not be appropriate for monitoring neighborhood-level resolution. While FEM networks are effective at monitoring regional scale pollution, literature indicates that pollution varies considerably at sub-regional and even neighborhood-or block-level spatial scales. Therefore, while a region may be in attainment with the NAAQS, local communities’ air quality may not be represented by the data collected through the FEM network. In fact, public health trends, uneven distribution of tree canopy, and socioeconomic data indicate that particular communities in Washington, DC may have a high risk for nonattainment despite the District being in compliance with federal measures of air quality. With the overarching goal of mitigating this potential environmental and public health issue, the George Washington University Sustainability living lab project, Fresh Air DC, is working to deploy low-cost high-density air monitoring systems with at least one sensor in each advisory neighborhood council (ANC) throughout the District. In this paper, we formulate the elements of a policy framework that can support the establishment of this system. Our research will involve a comparative analysis between two existing low-cost high-density systems, one deployed in California and brought forth by state legislation, and the Breathe London hyperlocal air quality monitoring network. This comparative analysis will demonstrate the role of effective policy systems in supporting the development and implementation of a low-cost high-density system in DC. Moreover, this analysis demonstrates the role of the proposed policy framework in supporting the new DC Sustainability Plan.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115916457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulating Combat to Explore Motivations Behind Why Military Members Make Costly Sacrifices
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106644
Bianca Donadio, Á. Gómez, S. Atran, Jonathon Novak, Marshall Wheeler, Colin Marquez, E. D. Visser, Chad C. Tossell
Why are soldiers, sailors, airmen, and marines willing to make costly sacrifices? Previous research suggests that loyalty (e.g., duty) to teammates is important, among other reasons. More recently, studies conducted overseas have identified sacred values (i.e., values held so deeply that they are immune to material tradeoffs) and group identity fusion as primary factors. Importantly, however, these studies have been conducted using survey-based and other social science methods that assess attitudes and beliefs, but not behavior. For example, it is one thing for a respondent to say they would jump on a grenade to sacrifice for their group but another to actually do so in real life. Thus, we have developed a simulation to help bridge the gap between what people say and do in life-or-death scenarios. This high-fidelity simulation was developed to provide a more immersive means of testing realistic, “shoot or no shoot” hostage scenarios. Using feedback from individuals with military experience, the scenarios were designed to elicit more real-life stress than attitude-based surveys. This paper describes the systems engineering process we used to design the simulation as well as the proof-of-concept study developed to explore why people are willing to make costly sacrifices. Early pilot data have revealed that values and identities related to religion, risk to self, and the Air Force predicted the engagement decisions of Air Force cadets in a series of simulated hostage scenarios. Possibilities for future use of this simulation are also discussed. For example, while this experimental setup lacks high-stakes consequences, the simulation could be useful for selection and training, in addition to serving as a research tool for studying motivations in different simulated combat environments.
{"title":"Simulating Combat to Explore Motivations Behind Why Military Members Make Costly Sacrifices","authors":"Bianca Donadio, Á. Gómez, S. Atran, Jonathon Novak, Marshall Wheeler, Colin Marquez, E. D. Visser, Chad C. Tossell","doi":"10.1109/SIEDS49339.2020.9106644","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106644","url":null,"abstract":"Why are soldiers, sailors, airmen, and marines willing to make costly sacrifices? Previous research suggests loyalty (e.g., duty) to teammates is important among other reasons. More recently, studies conducted overseas have identified sacred values (i.e., values held so deeply they are immune to material tradeoffs) and group identity fusion as primary factors. Importantly, however, these studies have been conducted using survey-based and other social science methods which assess attitudes and beliefs, but not behavior. For example, it is one thing for a respondent to say they would jump on a grenade to sacrifice for their group but another to actually jump on a grenade in real life. Thus, we have developed a simulation to help bridge the gap between what people say and do in life-or-death scenarios. This high-fidelity simulation was developed to provide a more immersive means of testing realistic, “shoot or no shoot” hostage scenarios. Using feedback from individuals with military experience, the scenarios were designed to elicit more real-life stress than attitude-based surveys. This paper describes the systems engineering process we used to design the simulation as well as the proof-of-concept study developed to explore reasons behind why people are willing to make costly sacrifices. Early pilot data have revealed that values and identities related to religion, risk to self, and the Air Force predicted engagement decisions of Air Force cadets, in a series of simulated hostage scenarios. Possibilities for future use of this simulation will also be discussed. For example, while this experimental setup lacks high stakes consequences, this simulation could be useful for selection and training in addition to a research tool for studying motivations in different simulated combat environments.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123291416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low Power Wireless Networks in Vineyards
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106693
Allison Renehan, B. Rombach, Anna Haikl, Corey Nolan, W. Lupton, E. Timmons, R. Bailey
The focus of this work is developing an effective and cost-efficient monitoring system that collects spatially-granular data within a vineyard. Many vineyard managers currently rely on limited data paired with past experiences to make key decisions pertaining to frost prediction, pest and disease prediction, and irrigation optimization. Considering that soil conditions and microclimates vary significantly within a single vineyard, this lack of data prevents them from precisely managing their vines. By engaging stakeholders in iterative prototype development, we identified key design features of a low-cost, high-density sensor network for vineyards. Functionally, an ideal system 1) uses Long Range (LoRa) wireless communication technology; and 2) places temperature, humidity, soil moisture, and light intensity sensors in relevant areas throughout the vineyard. Additionally, by engaging with industry competitors, we learned that the market lacks low-cost, high-density sensor network implementations. Using LoRa allows for a high density of sensors to be placed in every microclimate throughout a vineyard without relying on cellular coverage. The focus on temperature, humidity, soil moisture, and light intensity targets a low cost, minimally-viable set of metrics that can provide the necessary information for key models and decisions. User input and site visits suggested that the system must endure harsh environmental conditions and relay timely, actionable data without disrupting fieldwork. To prevent damage and extend device lifetime, the sensor housing and connections need to be waterproof and durable. Further, vine growing methods are not standardized across the industry, meaning the product needs to be adaptable to different growing styles. Vineyard managers want a system that informs their decisions by providing data and the results of established prediction models. The research presented here shows that a system incorporating these features and minimizing costs will be valuable in vineyards while also being broadly applicable to a variety of other agricultural applications.
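Not from the paper itself, but as one concrete illustration of the kind of compact telemetry such a LoRa node might transmit, the sketch below packs the four measurements named above (temperature, humidity, soil moisture, light) into an 8-byte binary frame and decodes it on the receiving side. The field order, scaling factors, and function names are arbitrary choices for this example, not the team's actual payload format.

```python
# Illustrative packing/unpacking of one sensor report into an 8-byte frame,
# the kind of compact payload a LoRa node would transmit. Field order and
# scaling are arbitrary choices for this sketch, not the team's actual format.
import struct

def encode_reading(temp_c, humidity_pct, soil_moisture_pct, light_lux):
    """Pack readings little-endian: int16 temp*100, uint16 RH*100,
    uint16 soil*100, uint16 lux (clamped to 16 bits)."""
    return struct.pack(
        "<hHHH",
        int(temp_c * 100),
        int(humidity_pct * 100),
        int(soil_moisture_pct * 100),
        min(int(light_lux), 0xFFFF),
    )

def decode_reading(payload):
    """Reverse of encode_reading, as a LoRaWAN application server might run."""
    t, h, m, lux = struct.unpack("<hHHH", payload)
    return {"temp_c": t / 100, "humidity_pct": h / 100,
            "soil_moisture_pct": m / 100, "light_lux": lux}

frame = encode_reading(21.4, 63.2, 38.5, 12000)
print(len(frame), "bytes ->", decode_reading(frame))
```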
{"title":"Low Power Wireless Networks in Vineyards","authors":"Allison Renehan, B. Rombach, Anna Haikl, Corey Nolan, W. Lupton, E. Timmons, R. Bailey","doi":"10.1109/SIEDS49339.2020.9106693","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106693","url":null,"abstract":"The focus of this work is developing an effective and cost-efficient monitoring system that collects spatially-granular data within a vineyard. Many vineyard managers currently rely on limited data paired with past experiences to make key decisions pertaining to frost prediction, pest and disease prediction, and irrigation optimization. Considering that soil conditions and microclimates vary significantly within a single vineyard, this lack of data prevents them from precisely managing their vines.By engaging stakeholders in iterative prototype development, we identified key design features of a low-cost, high-density sensor network for vineyards. Functionally, an ideal system 1) uses Long Range (LoRa) wireless communication technology; and 2) places temperature, humidity, soil moisture, and light intensity sensors in relevant areas throughout the vineyard. Additionally, by engaging with industry competitors, we learned that the market lacks low-cost, high-density sensor network implementations.Using LoRa allows for a high density of sensors to be placed in every microclimate throughout a vineyard without relying on cellular coverage. The focus on temperature, humidity, soil moisture, and light intensity targets a low cost, minimally-viable set of metrics that can provide the necessary information for key models and decisions.User input and site visits suggested that the system must endure harsh environmental conditions and relay timely, actionable data without disrupting fieldwork. To prevent damage and extend device lifetime, the sensor housing and connections need to be waterproof and durable. Further, vine growing methods are not standardized across the industry, meaning the product needs to be adaptable to different growing styles. Vineyard managers want a system that informs their decisions by providing data and the results of established prediction models. The research presented here shows that a system incorporating these features and minimizing costs will be valuable in vineyards while also being broadly applicable to a variety of other agricultural applications.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128635976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Interdisciplinary Approach to Sports Analytics in a University Setting
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106647
Jacqueline Hoege, Maryanna C. Lansing, Sarah Nelson, Daniel Ungerleider, Rishab Iyer, C. Rhodes, Ben Metzger, Peter Worcester, Aniket Chandra, Jacob Leonard, Rachel Kreitzer, W. Scherer
As of 2019, sports analytics has grown to be a $780 million industry [1]. Many organizations and institutions contribute to the field through research in exercise science, optimization of in-game decision making, sports marketing, business performance, and sports compliance. We propose an open, interdisciplinary approach to sports analytics within institutions of higher education that works across many fields, provides opportunities to diverse members of the community, enables research and communication across disciplines, serves the surrounding community, and uses data ethically.
{"title":"An Interdisciplinary Approach to Sports Analytics in a University Setting","authors":"Jacqueline Hoege, Maryanna C. Lansing, Sarah Nelson, Daniel Ungerleider, Rishab Iyer, C. Rhodes, Ben Metzger, Peter Worcester, Aniket Chandra, Jacob Leonard, Rachel Kreitzer, W. Scherer","doi":"10.1109/SIEDS49339.2020.9106647","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106647","url":null,"abstract":"As of 2019, sports analytics has grown to be a $780 million industry [1]. Many organizations and institutions contribute to the field through research in exercise science, optimization of in-game decision making, sports marketing, business performance, and sports compliance fields. We propose an open, interdisciplinary approach to sports analytics within institutes of higher education to work across many fields and provide opportunities to diverse members within the community, enable research and communication across fields, serve the surrounding community, and ethically use data.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114644909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SIEDS 2020 Cover Page
Pub Date: 2020-04-01 | DOI: 10.1109/sieds49339.2020.9106684
{"title":"SIEDS 2020 Cover Page","authors":"","doi":"10.1109/sieds49339.2020.9106684","DOIUrl":"https://doi.org/10.1109/sieds49339.2020.9106684","url":null,"abstract":"","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132240881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lessons Learned: A Case Study in Creating a Data Pipeline using Twitter’s API
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106584
Jason Tiezzi, Rice Tyler, Suchetha Sharma
With over 300 million users, including frequent postings by elites across the political and entertainment worlds, Twitter has become a rich source of data for mining and analysis. Despite its prominence within social science research, relatively little attention has been paid to the process of data acquisition. To that end, our research uses a case study to illustrate the process of acquiring and storing tweets. To construct our data pipeline, we first applied for and created Twitter developer accounts and used the Tweepy library in Python to interact with Twitter’s API. We created a program that uses a producer-consumer multithreading model to request tweets from the API, then cleans the data and pushes it to a MySQL database with four tables: one for tweets, one for user information, one for retweets, and one for special entities (e.g., hashtags). With our pipeline operational, we explore how candidate gender affects Twitter discourse in the 2020 Democratic presidential primary. Specifically, we use unsupervised text analysis methods to examine differences in word frequencies, sentiment, and emotional dimensions. We find that gender is central to the discourse surrounding female candidates, but peripheral for male candidates. The discourse surrounding female candidates in our dataset is also more joyful and positive. Finally, with our case study concluded, we offer lessons to future researchers who wish to acquire and utilize Twitter data for social science research.
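As a hedged sketch of the pipeline the abstract describes, the snippet below pairs a producer thread that requests tweets through Tweepy's cursor interface with a consumer thread that cleans each record and inserts it into a MySQL table. The credentials, query, table schema, and cleaning step are placeholders, and a Tweepy 3.x-style `api.search` endpoint is assumed; the authors' actual program also populates separate user, retweet, and entity tables.

```python
# Producer-consumer sketch for harvesting tweets into MySQL. Assumes a
# Tweepy 3.x-style api.search endpoint and the mysql-connector-python driver;
# credentials, the query, and the table schema are placeholders.
import queue
import threading

import tweepy
import mysql.connector

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

work = queue.Queue(maxsize=1000)  # buffer between the two threads
SENTINEL = object()               # tells the consumer to stop

def produce(query, limit=500):
    """Request tweets from the API, do light cleaning, and queue rows."""
    for status in tweepy.Cursor(api.search, q=query, tweet_mode="extended").items(limit):
        text = status.full_text.replace("\x00", " ").strip()
        work.put((status.id, status.user.id, status.created_at, text))
    work.put(SENTINEL)

def consume():
    """Drain the queue and insert each row into the tweets table."""
    conn = mysql.connector.connect(host="localhost", user="user",
                                   password="pass", database="twitter")
    cur = conn.cursor()
    while True:
        row = work.get()
        if row is SENTINEL:
            break
        cur.execute("INSERT IGNORE INTO tweets (id, user_id, created_at, text) "
                    "VALUES (%s, %s, %s, %s)", row)
        conn.commit()
    conn.close()

producer = threading.Thread(target=produce, args=("#DemDebate",))
consumer = threading.Thread(target=consume)
producer.start(); consumer.start()
producer.join(); consumer.join()
```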
{"title":"Lessons Learned: A Case Study in Creating a Data Pipeline using Twitter’s API","authors":"Jason Tiezzi, Rice Tyler, Suchetha Sharma","doi":"10.1109/SIEDS49339.2020.9106584","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106584","url":null,"abstract":"With over 300 million users, including frequent postings by elites across the political and entertainment fields, Twitter has become a rich field for mining and analyzing data. Despite its prominence within social science research, relatively little attention has been paid to the process of data acquisition. To that end, our research uses a case study to illustrate the process of acquiring and storing tweets. To construct our data pipeline, we first applied for and created Twitter developer accounts and used the Tweepy app in Python to interact with Twitter’s API. We created a program that uses a producer-consumer multithreading model to request tweets from the API, then cleans the data and pushes it to a MySQL database with four tables: one for tweets, one for user information, one for retweets, and one for special entities (e.g., hashtags).With our pipeline operational, we explore how candidate gender affects Twitter discourse in the 2020 Democratic presidential primary. Specifically, we use unsupervised text analysis methods to examine differences in word frequencies, sentiment, and emotional dimensions. We find that gender is central to the discourse surrounding female candidates, but peripheral for male candidates. The discourse surrounding female candidates in our dataset is also more joyful and positive. Finally, with our case study concluded, we offer lessons to future researchers who wish to acquire and utilize Twitter data for social science research.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127182674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Dashboard Tool to Enhance Data Processing and Energy Efficiency Analysis in Modern Buildings
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106664
Abigail Sharp, D. Ojeda, Victoria Nilsen
In many buildings, energy tracking methods provide inadequate information regarding energy consumption, impeding the identification of economic and environmental waste in building operations and maintenance. M.C. Dean, an electrical design-build firm, recognized the importance of effective energy tracking methods in the large, complex buildings that it manages. Energy guidelines, such as Leadership in Energy and Environmental Design (LEED), are implemented to increase energy efficiency. Acquiring a LEED certification provides building owners with incentives, but it requires in-depth documentation and understanding of energy usage prior to certification. Previously, M.C. Dean manually calculated average energy usage and created control charts that summarized annual statistics of its buildings. This method is sufficient for a single site but is an inefficient practice when applied manually across multiple sites. This project performed a requirements elicitation to determine the critical criteria for analyzing energy usage for M.C. Dean’s buildings. The results were used to develop a standardized Excel-based dashboard tool that instructs the user on importing and modifying raw energy data. Once the data is imported, the dashboard tool automatically tests for normality via probability plots and generates 3-sigma control charts. By automating this process, the dashboard tool enables the user to gain a detailed understanding of the energy usage of their site. Areas of potential improvement were identified through the implementation of three additional methods: analytic hierarchy process, cost simulation, and cost-benefit analysis. The manager’s preferences and energy guidelines generated a ranking of building usage factors, which can guide decisions on repurposing certain building elements. A Monte Carlo simulation was performed using parametric distribution analysis to predict future costs. Additionally, the dashboard increases awareness of energy usage by linking energy performance to the LEED Operations and Maintenance version 4.1 guidelines to estimate the current certification level and highlight areas for improvement. This research produced an energy performance tool that can be standardized to other complex buildings. It provides efficient energy tracking using standardized methods, allowing building owners to objectively assess the potential adoption of economical and sustainable practices.
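To make the automated calculation concrete, here is a minimal sketch of the step the dashboard performs on imported energy data: a normality test followed by 3-sigma control limits and flagging of out-of-control readings. The file and column names are hypothetical, and the actual tool is Excel-based rather than Python.

```python
# Sketch of the step the dashboard automates: a normality check followed by
# 3-sigma control limits. The CSV file and column name are placeholders.
import pandas as pd
from scipy import stats

usage = pd.read_csv("monthly_energy.csv")["kwh"]  # one reading per period

# Normality check (Shapiro-Wilk here; a probability plot works similarly).
w_stat, p_value = stats.shapiro(usage)
print(f"Shapiro-Wilk p-value: {p_value:.3f} (p > 0.05 is consistent with normality)")

# 3-sigma control limits around the mean.
center = usage.mean()
sigma = usage.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

out_of_control = usage[(usage > ucl) | (usage < lcl)]
print(f"Center: {center:.1f} kWh, UCL: {ucl:.1f}, LCL: {lcl:.1f}")
print(f"{len(out_of_control)} reading(s) fall outside the control limits")
```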
{"title":"A New Dashboard Tool to Enhance Data Processing and Energy Efficiency Analysis in Modern Buildings","authors":"Abigail Sharp, D. Ojeda, Victoria Nilsen","doi":"10.1109/SIEDS49339.2020.9106664","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106664","url":null,"abstract":"In many buildings, energy tracking methods provide inadequate information regarding energy consumption, impeding the identification of economic and environmental waste in building operations and maintenance. M.C. Dean, an electrical design-build firm, recognized the importance of effective energy tracking methods in the large, complex buildings that it manages. Energy guidelines, such as Leadership in Energy and Environmental Design (LEED), are implemented to increase energy efficiency. Acquiring a LEED certification provides building owners with incentives, but it requires an in-depth documentation and understanding of energy usage prior to certification. Previously, M.C. Dean manually calculated average energy usage and created control charts that summarized annual statistics of their buildings. This method is sufficient for a single site but is an inefficient practice when applied manually across multiple sites. This project performed a requirements elicitation to determine the critical criteria for analyzing energy usage for M.C. Dean’s buildings. The results were used to develop a standardized Excel-based dashboard tool that instructs the user on importing and modifying raw energy data. Once the data is imported, the dashboard tool automatically tests for normality via probability plots and generates 3-sigma control charts. By automating this process, the dashboard tool enabled the user to gain detailed understanding of the energy usage of their site. Areas of potential improvement were identified through the implementation of three additional methods: analytic hierarchy process, cost simulation, and cost-benefit analysis. The manager’s preferences and energy guidelines generated a ranking of building usage factors, which can guide decisions on repurposing certain building elements. A Monte Carlo simulation was performed using parametric distribution analysis to predict future costs. Additionally, the dashboard increases awareness of energy usage by linking energy performance to the LEED Operations and Maintenance version 4.1 guidelines to estimate the current certification level and highlight areas for improvement. This research produced an energy performance tool that can be standardized to other complex buildings. It provides efficient energy tracking using standardized methods, allowing building owners to objectively assess the potential adoption of economical and sustainable practices.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126757760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Site Selection Decision Support Tool Using Geographic Information Systems and Multi-Expert Analytic Hierarchy Process
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106582
Aditya Singh, Justin W. Williams, J. Barba
Site selection, the process of locating alternatives for new facilities, is a complex and crucial decision faced by growing companies. Organizations often employ time-consuming and informal market research techniques, which may fail to capture institutional knowledge or consider all feasible alternatives. Advancements in geographic information systems (GIS) have allowed for analytical methods to be adopted, but current GIS-based methodologies may only be able to study a small area using expensive software, hardware, or data. The goal of this project is to create a decision support tool that can study a large area using open source GIS software and publicly available data, without the use of high-performance computing. The project client is a business that combines an urban winery, a multipurpose venue, and a restaurant into one facility. The company’s site selection problem focuses on finding locations where there is a high demand for their products and services. Requirements elicitation was performed with several experts, and group aggregation techniques were applied to the traditional analytic hierarchy process (AHP) to generate weights for various decision criteria. Data for each criterion was standardized into a consistent scale and then loaded into GIS map layers. A weighted overlay technique was implemented to rank feasible alternatives in map form. Inter-market analysis was conducted using variables that capture an area’s demand for weddings and corporate events, which are the company’s key sources of revenue. Variables that capture demand for the organization’s services include labor availability, existing event infrastructure, and wine consumption in the target region. Intra-market analysis is performed to provide granular recommendations by capturing factors such as crime statistics, accessibility, and proximity to complementary businesses. Recommendations were provided at a “census block group” level of granularity. Sensitivity analysis was performed to test model robustness, and model accuracy was validated through ex post analysis of the firm’s existing locations. Opportunities exist to apply the underlying methodology presented in this project for other companies in various industries to address site selection problems.
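As a brief, hedged illustration of the two quantitative steps named above, the sketch below derives criterion weights from a pairwise comparison matrix using the AHP principal-eigenvector method (with a consistency check) and then applies a weighted overlay to standardized layer values for candidate block groups. The criteria, judgments, and scores are invented for the example.

```python
# AHP weights from a pairwise comparison matrix, then a weighted overlay score
# per candidate block group. Criteria, judgments, and values are illustrative.
import numpy as np

criteria = ["event_demand", "labor_availability", "wine_consumption"]

# Saaty-style pairwise comparisons: A[i, j] = importance of criterion i vs. j.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

# Principal eigenvector of A gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", dict(zip(criteria, weights.round(3))), "| CR:", round(ci / 0.58, 3))

# Weighted overlay: rows are block groups with standardized (0-1) layer values.
block_groups = np.array([[0.9, 0.4, 0.7],
                         [0.5, 0.8, 0.6],
                         [0.2, 0.9, 0.3]])
print("overlay scores:", (block_groups @ weights).round(3))
```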
{"title":"Site Selection Decision Support Tool Using Geographic Information Systems and Multi-Expert Analytic Hierarchy Process","authors":"Aditya Singh, Justin W. Williams, J. Barba","doi":"10.1109/SIEDS49339.2020.9106582","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106582","url":null,"abstract":"Site selection, the process of locating alternatives for new facilities, is a complex and crucial decision faced by growing companies. Organizations often employ time consuming and informal market research techniques, which may fail to capture institutional knowledge or consider all feasible alternatives. Advancements in geographic information systems (GIS) have allowed for analytical methods to be adopted, but current GIS- based methodologies may only be able to study a small area using expensive software, hardware, or data. The goal of this project is to create a decision support tool that can study a large area using open source GIS software and publicly available data, without the use of high-performance computing. The project client is a business that combines an urban winery, a multipurpose venue, and a restaurant into one facility. The company’s site selection problem focuses on finding locations where there is a high demand for their products and services. Requirements elicitation was performed on several experts, and group aggregation techniques were applied to the traditional analytic hierarchy process (AHP) to generate weights for various decision criteria. Data for each criterion was standardized into a consistent scale and then loaded into GIS map layers. A weighted overlay technique was implemented to rank feasible alternatives in map form. Inter- market analysis was conducted using variables that capture an area’s demand for weddings and corporate events, which are the company’s key sources of revenue. Variables that capture demand for the organization’s services include labor availability, existing event infrastructure, and wine consumption in the target region. Intra-market analysis is performed to provide granular recommendations by capturing factors such as crime statistics, accessibility, and proximity to complementary businesses. Recommendations were provided at a “census block group” level of granularity. Sensitivity analysis was performed to test model robustness, and model accuracy was validated through ex post analysis of the firm’s existing locations. Opportunities exist to apply the underlying methodology presented in this project for other companies in various industries to address site selection problems.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125921725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing State-Based Recommendation Systems for Golf Training
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106646
Kelly Rohrer, Jacob Ziller, Alanna Flores, W. Scherer, Christopher Kaylor, Orlando Jimenez, Stephen Adams
The NBA, MLB, NFL, and other professional leagues utilize sports analytics, but the potential of professional golf analytics is largely untapped. Instead of using data-driven methods connecting practice to tournament performance, training regimens are often based on conventional wisdom. How can data be used to recommend training regimens that improve golfers’ performance? We partnered with the golf analytics company GameForge to develop tools and methods for golf analytics to capture these markets, including the development of a state-based training recommendation system. We used GameForge, PGA, and LPGA data to build Markov models using k-means clustering, as well as linear models. These two model types form the basis of our recommendation system. In the future, these methods can be used to inform training decisions, particularly as more data is collected.
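As a hedged sketch of the modeling approach described above, the snippet below discretizes per-round performance vectors into states with k-means and then estimates a Markov transition matrix from the resulting state sequence. The features and data are synthetic placeholders, not GameForge, PGA, or LPGA data.

```python
# Discretize per-round performance into states with k-means, then estimate a
# Markov transition matrix from the state sequence. The data here is synthetic,
# standing in for GameForge / PGA / LPGA feature vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
rounds = rng.normal(size=(200, 2))  # e.g., [strokes gained, greens in regulation]

n_states = 4
states = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(rounds)

# Count transitions between consecutive rounds and row-normalize.
T = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T = T / T.sum(axis=1, keepdims=True)

print("Estimated transition matrix:")
print(T.round(2))
```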
{"title":"Developing State-Based Recommendation Systems for Golf Training","authors":"Kelly Rohrer, Jacob Ziller, Alanna Flores, W. Scherer, Christopher Kaylor, Orlando Jimenez, Stephen Adams","doi":"10.1109/SIEDS49339.2020.9106646","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106646","url":null,"abstract":"The NBA, MLB, NFL and other professional leagues utilize sports analytics, but the potential of professional golf analytics is largely untapped. Instead of using data-driven methods connecting practice to tournament performance, training regimens are often based on conventional wisdom. How can data be used to recommend training regimens for golfers to improve performance? We partnered with golf analytics company, GameForge, to develop tools and methods for golf analytics to capture these markets, including the development of a state-based training recommendation system. We used Gameforge, PGA, and LPGA data to build markov models using k-means clustering, and linear models. These two model types form the basis of our recommendation system. In the future, these methods can be used to inform training decisions, particularly as more data is collected.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128850009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of real-time particulate matter (PM2.5) concentrations in Washington, DC, using generalized additive models (GAMs)
Pub Date: 2020-04-01 | DOI: 10.1109/SIEDS49339.2020.9106580
Jordan Frengut, Anwesha Tomar, Andrew Burwell, R. Francis
The objective of this paper is to report the results of a generalized additive model used to predict local particulate matter concentrations at a Washington, DC Department of Energy and Environment (DOEE) federal regulatory monitoring station. While the DOEE uses state-of-the-art federal equivalent method (FEM) equipment to demonstrate compliance with the Clean Air Act for regulatory purposes, these measurements reflect regional, not neighborhood, air quality. A GW student-led living lab project, Fresh Air DC, has been testing uRAD INDUSTRIAL low-cost air quality sensors that can be used to collect air quality data at the neighborhood level using LoRaWAN-based smart city technology. Because low-cost sensors often lack the accuracy and sensitivity of FEM equipment, research indicates that low-cost sensor (LCS) monitoring networks require post-processing and data modelling in order to apply findings to educational and policy goals. Although LCS data processing has been conducted using linear and nonlinear models, nonlinear models tend to have a greater ability to capture the nuanced relationships between air pollutants and meteorological influences. In this paper, we post-process uRAD PM2.5 sensor data using DOEE FEM equipment as a reference instrument in the development of three models to adjust uRAD data to the DOEE FEM data: ordinary least squares linear regression, generalized linear models (GLMs), and generalized additive models (GAMs). Our models include meteorological variables such as temperature, humidity, and wind speed. Our statistical models for post-processing are evaluated on the basis of deviance and the Akaike Information Criterion (AIC). We expect that the GLM and GAM will be useful for capturing nonlinear relationships between the PM2.5 measurements and meteorological variables.
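To illustrate the post-processing step, here is a minimal sketch, assuming the pyGAM library and placeholder column names, that fits an OLS baseline and a GAM with smooth terms for the uRAD PM2.5 reading, temperature, humidity, and wind speed against collocated DOEE FEM values, comparing them by AIC. The authors' actual model specifications and data layout may differ.

```python
# Sketch of the sensor-adjustment models: an OLS baseline and a GAM with a
# smooth term per predictor, compared by AIC. Uses pyGAM and statsmodels;
# the CSV file and column names below are placeholders.
import pandas as pd
import statsmodels.api as sm
from pygam import LinearGAM, s

df = pd.read_csv("collocated_hourly.csv")  # uRAD and DOEE FEM data merged by hour
X = df[["urad_pm25", "temperature", "humidity", "wind_speed"]].to_numpy()
y = df["fem_pm25"].to_numpy()              # reference (FEM) PM2.5

# Ordinary least squares baseline.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print("OLS AIC:", round(ols.aic, 1))

# GAM with a spline term for each predictor.
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)
gam.summary()              # reports AIC, deviance-based pseudo R^2, etc.
adjusted = gam.predict(X)  # corrected PM2.5 estimates for the low-cost sensor
```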
{"title":"Analysis of real-time particulate matter (PM2.5) concentrations in Washington, DC, using generalized additive models (GAMs)","authors":"Jordan Frengut, Anwesha Tomar, Andrew Burwell, R. Francis","doi":"10.1109/SIEDS49339.2020.9106580","DOIUrl":"https://doi.org/10.1109/SIEDS49339.2020.9106580","url":null,"abstract":"The objective of this paper is to report the results of a generalized additive model used to predict local particulate matter concentrations at a Washington, DC Department of Energy and Environment (DOEE) federal regulatory monitoring station. While the DOEE uses state-of-the-art federal equivalent method (FEM) equipment to demonstrate compliance with the clean air act for regulatory purposes, these measurements reflect regional, not neighborhood air quality. A GW student-led living lab project—Fresh Air DC—has been testing uRAD INDUSTRIAL low-cost air quality sensors that can be used to collect air quality data at the neighborhood level using LoRaWAN based smart city technology. Because low-cost sensors often lack the accuracy and sensitivity of FEM equipment, research indicates that low-cost sensor (LCS) monitoring networks require post- processing and data modelling in order to apply findings to educational and policy goals. Although LCS data processing has been conducted using linear and nonlinear models, nonlinear models tend to have a greater ability to capture the nuanced relationships between air pollutants and meteorological influences. In this paper, we post-process uRAD PM 2.5 sensor data using DOEE FEM equipment as a reference instrument in the development of three models to adjust uRAD data to the DOEE FEM data—ordinary least squares linear regression, generalized linear models (GLMs), and generalized additive models (GAMs). Our model includes meteorological variables such as temperature, humidity, and wind speed. Our statistical models for post-processing are evaluated on the basis of deviance and Akaike Information Criterion (AIC). We expect that the GLM and GAM will be useful for capturing nonlinear relationships between the PM2.5 measurements and meteorological variables.","PeriodicalId":331495,"journal":{"name":"2020 Systems and Information Engineering Design Symposium (SIEDS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124900028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}