COVID-19 has transformed face-to-face software development into distributed development (e.g., remote work). While the company the authors belong to studies microtask programming, an open source software (OSS)-like development style, as a solution for distributed development, a prior study reports a challenge: online communication in microtask programming takes longer, and such lengthy communication discourages developers and hinders the completion of their assigned tasks. OSS, however, is successfully developed through online communication such as issues. Hence, a question arises: how does OSS address the online communication challenge? In this experience report, we answer this question based on an empirical study of OSS communication. We found that (1) OSS prefers burst communication, similar to face-to-face development, and (2) attracting developers’ attention may be a possible solution. Based on these findings, we discuss directions for future studies toward better online communication in microtask programming in the company. The main contributions of this report are (1) empirically revealing actual communication times in OSS and (2) showing how an empirical approach helps industrial collaborators.
Title: "Towards Better Online Communication for Future Software Development in Industry"
Authors: Masanari Kondo, Shinobu Saito, Yukako Iimura, Eunjong Choi, O. Mizuno, Yasutaka Kamei, Naoyasu Ubayashi
Venue: 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC)
DOI: https://doi.org/10.1109/COMPSAC57700.2023.00250
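The "burst communication" finding above can be made concrete with a small sketch: given the timestamps of comments in an issue thread, group consecutive comments whose inter-comment gap stays below a threshold into bursts. This is an illustrative reconstruction, not the paper's analysis code; the one-hour threshold and the grouping rule are assumptions.

```python
from datetime import datetime, timedelta

def find_bursts(timestamps, max_gap=timedelta(hours=1)):
    """Group comment timestamps into bursts.

    A burst is a maximal run of comments in which each comment follows
    the previous one within `max_gap`. Returns a list of bursts, each a
    list of timestamps.
    """
    if not timestamps:
        return []
    ts = sorted(timestamps)
    bursts = [[ts[0]]]
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev <= max_gap:
            bursts[-1].append(cur)  # continue the current burst
        else:
            bursts.append([cur])    # gap too long: start a new burst
    return bursts

comments = [
    datetime(2023, 6, 1, 9, 0),
    datetime(2023, 6, 1, 9, 10),  # 10 min after previous: same burst
    datetime(2023, 6, 1, 9, 45),  # 35 min: still the same burst
    datetime(2023, 6, 2, 14, 0),  # next day: new burst
]
print([len(b) for b in find_bursts(comments)])  # -> [3, 1]
```

Comparing the number and spacing of such bursts in OSS issue threads against face-to-face exchanges is one way the report's measurement of "actual communication times" could be operationalized.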
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00288
A. Mulone, Sherine Awad, Davide Chiarugi, Marco Aldinucci
In recent years, we have come to understand the importance of analyzing and sequencing human genetic variation. A relevant aspect that emerged from the Covid-19 pandemic was the need to obtain results very quickly; this involved using High-Performance Computing (HPC) environments to execute the Next Generation Sequencing (NGS) pipeline. However, HPC is not always the most suitable environment for the entire execution of a pipeline, especially when it involves many heterogeneous tools. The ability to execute parts of the pipeline in different environments can lead to higher performance and cheaper executions. This work shows the design and optimization process that led us to a state-of-the-art Variant Calling hybrid workflow based on the StreamFlow Workflow Management System (WfMS). We also compare StreamFlow with Snakemake, an established WfMS targeting HPC facilities, observing comparable performance on single environments and satisfactory improvements with a hybrid cloud-HPC configuration.
Title: "Porting the Variant Calling Pipeline for NGS data in cloud-HPC environment"
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00255
Alessio Rugo, C. Ardagna
Transparency is a fundamental administrative principle for public institutions. One of its main implementations is the publication of goods and service acquisition tenders, as prescribed by EU and national legislation. This need for transparency can, however, undermine the security of public institutions, which are disseminating information that advanced threat actors could leverage to mount disruptive attacks. In this paper, we analyse how threat actors can extract useful information from this publicly available information, taking advantage of transparency. We introduce a new technique named transparency-based reconnaissance, which implements a passive reconnaissance process using transparency information published under legal requirements. To highlight the value of the gathered data, we evaluate its effectiveness by simulating a transparency-based reconnaissance run against an Italian public institution, obtaining complete technological and supply chain inventories. The collected inventories enabled the creation of unsophisticated malware that bypasses the defences in place, along with a weaponization and delivery strategy. Finally, we propose a list of potential countermeasure areas, both technical and organizational, to protect information while still safeguarding transparency through a graduated approach.
Title: "Transparency-based reconnaissance for APT attacks"
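The reconnaissance step of mining published tenders can be illustrated with a toy sketch: scan tender descriptions for known technology names to assemble a technology inventory. The vocabulary and matching rule below are purely illustrative assumptions; the paper's actual technique is not reproduced here.

```python
# Toy vendor/product vocabulary; a real reconnaissance run would use a
# far larger catalogue (this set is purely illustrative).
KNOWN_TECH = {"cisco", "vmware", "fortinet", "windows server", "oracle"}

def extract_inventory(tender_texts):
    """Scan published tender descriptions for known technology names
    and return the sorted set of matches (a crude inventory)."""
    found = set()
    for text in tender_texts:
        lowered = text.lower()
        for tech in KNOWN_TECH:
            if tech in lowered:
                found.add(tech)
    return sorted(found)

tenders = [
    "Supply of VMware licences and Cisco switches for the data centre",
    "Maintenance contract for Windows Server infrastructure",
]
print(extract_inventory(tenders))  # -> ['cisco', 'vmware', 'windows server']
```

Even this naive matcher shows why tender publications are valuable to an attacker: each match narrows down the defensive stack without a single packet being sent to the target.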
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00013
Martí Caro, Jordi Fornt, J. Abella
Automotive applications with safety requirements must adhere to specific regulations such as ISO 26262, which imposes the use of diverse redundancy for the highest integrity levels (i.e., ASIL D). While this has often been achieved by means of Dual-Core LockStep (DCLS) for microcontrollers, it remains an open challenge how to realize diverse redundancy efficiently (i.e., without full duplication and without sacrificing performance) for DNN-based safety-related tasks, such as object detection, which need accelerators for performance reasons. This paper proposes an architecture where the accelerator performing DNN inference is replicated, as in the case of DCLS for cores, but using a cheaper implementation for the replica. In particular, we build on the stochastic nature of DNN-based object detection to realize two redundant accelerators where the secondary accelerator uses smartly chosen lower-precision arithmetic (e.g., dropping some bits of the original data) so that it provides diverse redundancy, keeps up with the performance of the primary accelerator, does not cost as much as full-precision replication, and can build on the very same data stream from memory used by the primary accelerator. With a simple heuristic, we show that such a diverse redundancy scheme is able to cope with faults, restricting false positives and negatives to a few relatively small objects.
Title: "Efficient Diverse Redundant DNNs for Autonomous Driving"
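The idea of a reduced-precision replica can be sketched in a few lines: drop low-order mantissa bits of the replica's inputs, run the same computation, and flag a fault only when the two results diverge beyond the tolerance that truncation alone can explain. The truncation width and tolerance below are assumptions for illustration, not values from the paper.

```python
import struct

def drop_mantissa_bits(x, bits=12):
    """Zero the lowest `bits` mantissa bits of a float32 value,
    emulating a reduced-precision replica of the same input data."""
    (i,) = struct.unpack("!I", struct.pack("!f", x))  # float32 -> raw bits
    i &= ~((1 << bits) - 1)                           # clear low mantissa bits
    (y,) = struct.unpack("!f", struct.pack("!I", i))  # raw bits -> float32
    return y

def redundant_check(primary_out, replica_out, tol=1e-2):
    """Accept the result when full-precision and reduced-precision
    outputs agree within the tolerance expected from truncation;
    a larger divergence indicates a fault."""
    return abs(primary_out - replica_out) <= tol

x = 0.7071067
primary = x * x                        # full-precision computation
replica = drop_mantissa_bits(x) ** 2   # same computation on truncated input
print(redundant_check(primary, replica))  # -> True (no fault injected)
```

Because the replica consumes the same memory stream (merely truncated on the fly), it avoids a second full-precision datapath while still computing through physically diverse arithmetic.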
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00126
Elanor Jackson, Sahra Sedigh Sarvestani
The safety of autonomous vehicles relies on dependable and secure infrastructure for intelligent transportation. The doctoral research described in this paper aims to enable self-healing and survivability of the intelligent transportation systems required for autonomous vehicles (AV-ITS). The proposed approach comprises four major elements: qualitative and quantitative modeling of the AV-ITS, stochastic analysis to capture and quantify interdependencies, mitigation of disruptions, and validation of the efficacy of the self-healing process. This paper describes the overall methodology and presents preliminary results, including an agent-based model for detection of and recovery from disruptions to the AV-ITS.
Title: "Securing the Transportation of Tomorrow: Enabling Self-Healing Intelligent Transportation"
With the spread and development of 5G technology, network configurations with MEC (Multi-Access Edge Computing) servers in the vicinity of 5G base stations are becoming more common. We have been researching and developing Giocci, a resource-permeating distributed processing platform that offloads computation tasks on end devices to MEC servers and cloud servers. In this paper, we propose resource allocation methods to efficiently determine the server to which tasks are allocated in a network configuration that includes MEC servers. To construct the proposed methods, we first model the main functions of Giocci and define the resource allocation problem for this work. There are four proposed methods, one for each objective and priority: prioritized allocation by the average number of waiting tasks, by communication delay, by task response time, and by the cost of using computing resources. We initially implement these methods as task allocation functions in Giocci. Experimental evaluations demonstrate that they achieve appropriate resource allocation results for each objective. This research contributes to the smooth allocation of computational resources in 5G networks that include MEC servers.
Title: "Resource Allocation Methods among Server Clusters in a Resource Permeating Distributed Computing Platform for 5G Networks"
Authors: Daisuke Sasaki, Hiroki Kashiwazaki, Mitsuhiro Osaki, Kazuma Nishiuchi, Ikuo Nakagawa, Shunsuke Kikuchi, Yutaka Kikuchi, Shintaro Hosoai, Hideki Takase
DOI: https://doi.org/10.1109/COMPSAC57700.2023.00171
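The four prioritized allocation methods share one shape: pick the server cluster that minimizes the metric chosen as the priority. The sketch below captures that shape; the metric names and sample values are illustrative assumptions, not Giocci's actual interface.

```python
def allocate(servers, priority):
    """Pick the server whose value for `priority` is smallest.

    `servers` maps server name -> metrics dict; `priority` is one of
    'waiting_tasks', 'comm_delay', 'response_time', or 'cost',
    mirroring the four prioritized allocation methods.
    """
    return min(servers, key=lambda name: servers[name][priority])

# Illustrative metrics: the MEC server is close (low delay) but busier
# and pricier; the cloud server is distant but idle and cheap.
servers = {
    "mec-1":   {"waiting_tasks": 4, "comm_delay": 2,  "response_time": 30, "cost": 8},
    "cloud-1": {"waiting_tasks": 1, "comm_delay": 40, "response_time": 55, "cost": 3},
}
print(allocate(servers, "comm_delay"))  # -> mec-1  (latency-sensitive task)
print(allocate(servers, "cost"))        # -> cloud-1 (cost-sensitive task)
```

The point of having four methods rather than one combined score is that different tasks can select different priorities, so the same cluster inventory yields different placements per objective.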
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00058
Chiara Vercellino, G. Vitali, Paolo Viviani, A. Scionti, Andrea Scarabosio, O. Terzo, Edoardo Giusto, B. Montrucchio
Quantum machines are among the most promising technologies expected to provide significant improvements in the coming years. However, bridging the gap between real-world applications and their implementation on quantum hardware is still a complicated task. One of the main challenges is to represent the problems of interest through qubits (i.e., the basic units of quantum information). Depending on the specific technology underlying the quantum machine, it is necessary to implement a proper representation strategy, generally referred to as embedding. This paper introduces a neural-enhanced optimization framework to solve the constrained unit disk problem, which arises in the context of qubit positioning for neutral-atom-based quantum hardware. The proposed approach involves a modified autoencoder model, the Distances Encoder Network, and a custom loss, the Embedding Loss Function, to compute Euclidean distances and model the optimization constraints, respectively. The core idea behind this design relies on the capability of neural networks to approximate non-linear transformations, making the Distances Encoder Network learn the spatial transformation that maps initially infeasible solutions of the constrained unit disk problem into feasible ones. The proposed approach outperforms classical solvers given fixed, comparable computation times, and paves the way to addressing other optimization problems through a similar strategy.
Title: "Neural optimization for quantum architectures: graph embedding problems with Distance Encoder Networks"
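The abstract does not specify the Embedding Loss Function, but a unit-disk feasibility penalty of the following flavor conveys the idea: pairs that must be adjacent in the unit disk graph are penalized when farther apart than the disk radius, and non-adjacent pairs when closer. Everything below (function name, penalty form) is an assumption for illustration, not the paper's loss.

```python
import numpy as np

def unit_disk_penalty(points, edges, radius=1.0):
    """Toy feasibility penalty for a unit disk constraint: zero iff
    every required edge spans at most `radius` and every non-edge pair
    is separated by more than `radius`. (Illustrative only; the paper's
    Embedding Loss Function is not specified in the abstract.)"""
    n = len(points)
    edge_set = {tuple(sorted(e)) for e in edges}
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(points[i] - points[j])
            if (i, j) in edge_set:
                loss += max(0.0, d - radius)   # adjacent pair too far apart
            else:
                loss += max(0.0, radius - d)   # non-adjacent pair too close
    return loss

pts = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 0.0]])
print(unit_disk_penalty(pts, edges=[(0, 1)]))  # -> 0.0 (layout is feasible)
```

A differentiable penalty of this kind is what lets gradient-based training pull an infeasible qubit layout toward a feasible one, which matches the abstract's description of learning a transformation from infeasible to feasible solutions.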
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00023
Myke Morais de Oliveira, E. Barbosa
This paper presents a systematic review of the use of multilevel models for the analysis and prediction of school dropout. Several studies have been carried out on this theme, but challenges remain to be addressed. There are many different applications of multilevel modeling to school dropout, which makes it difficult to synthesize the main contributions and advances in the area. The lack of a holistic view makes it difficult to understand the main advances and research gaps. To shed some light on this scenario, this literature review covered the most investigated factors at the student and school levels, such as demographic, socioeconomic, family background, and academic performance variables; the main educational environments in which multilevel models were used for the analysis or prediction of school dropout, such as high school/secondary education and higher education; and the main multilevel models used in these studies, such as multilevel logistic regression and multilevel linear regression. In addition, we also investigated whether the authors used multivariate exploratory techniques or other artificial intelligence techniques to support the fitting and interpretation of the modeling process.
Title: "Multilevel modeling for the analysis and prediction of school dropout: a systematic review"
Pub Date: 2023-06-01
DOI: 10.1109/COMPSAC57700.2023.00081
Mohammad Yousef Alkhawaldeh, M. Subu, Nabeel Al-Yateem, S. Rahman, F. Ahmed, J. Dias, M. AbuRuz, A. Saifan, Amina Al-Marzouqi, H. Hijazi, Mohamad Qasim Alshabi, A. Hossain
INTRODUCTION: During the COVID-19 pandemic, telehealth was crucial to the delivery of healthcare services. It is expected that even after the pandemic is largely over, providers will probably continue using telehealth on a regular basis as we transition to the “new normal.” Therefore, it is crucial to identify and resolve any discrepancies in telehealth’s effectiveness and accessibility. OBJECTIVES: Examine disparities in telehealth access and describe how frequently OB-GYN patients used telehealth during the first four months of the COVID-19 pandemic according to race/ethnicity and insurance status. METHOD: A cross-sectional design was used, including a convenience sample of 9,370 women who received telehealth or in-person care. RESULTS: 15,362 encounters were completed in total. 81.34% of appointments were held in person throughout the study period, and 18.66% were managed by telehealth. The majority of patients had private health insurance (n = 975, 52.4%) and were Caucasian (n = 1202, 63.4%). Compared to patients of other races, patients of Hispanic and Asian descent were less likely to attend their telehealth appointments (p < 0.001). Patients with private health insurance were more likely than those with public health insurance to show up for their telehealth appointments (p < 0.001). CONCLUSION: This study demonstrates that underserved populations, including individuals of color, those with public insurance, and others, must have greater access to and utilization of telehealth services.
Title: "OB-GYN Telehealth Access and Utilization During COVID-19: Racial and Sociodemographic Disparities"