Pub Date : 2024-10-07DOI: 10.1109/TTS.2024.3462726
Matthew Ryan;Glenn Withers;Frank Den Hartog
This study delves into the escalating risk of a major disruption event involving Cloud Service Providers (CSPs) within the global financial system, amidst shifting supplier dynamics and mounting economic challenges. It focuses on the increasing dependence of financial institutions on three CSPs for critical business services, highlighting the emergent issue of “cloud concentration risks.” The paper explores various factors influencing technological decisions in financial institutions, including events and the regulatory environment. The advantages of cloud computing and the potential risks associated with CSPs transitioning their business models from growth-centric to value-oriented strategies are also discussed. Furthermore, CSPs are contending with rising operational costs and diminishing profit margins, compelling them to adopt cost-saving measures such as prolonging the lifecycles of hardware components. This analysis also considers the implications of potential increases in cloud computing costs and the financial burden of migrating services, underscoring significant challenges faced by financial institutions in this evolving landscape.
{"title":"The Cloud Conundrum: Are Financial Institutions Heading for a Catastrophic Disruption Event?","authors":"Matthew Ryan;Glenn Withers;Frank Den Hartog","doi":"10.1109/TTS.2024.3462726","DOIUrl":"https://doi.org/10.1109/TTS.2024.3462726","url":null,"abstract":"This study delves into the escalating risk of a major disruption event involving Cloud Service Providers (CSPs) within the global financial system, amidst shifting supplier dynamics and mounting economic challenges. It focuses on the increasing dependence of financial institutions on three CSPs for critical business services, highlighting the emergent issue of “cloud concentration risks.” The paper explores various factors influencing technological decisions in financial institutions, including events and the regulatory environment. The advantages of cloud computing and the potential risks associated with CSPs transitioning their business models from growth-centric to value-oriented strategies are also discussed. Furthermore, CSPs are contending with rising operational costs and diminishing profit margins, compelling them to adopt cost-saving measures such as prolonging the lifecycles of hardware components. This analysis also considers the implications of potential increases in cloud computing costs and the financial burden of migrating services, underscoring significant challenges faced by financial institutions in this evolving landscape.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"6 1","pages":"80-89"},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-01DOI: 10.1109/TTS.2024.3465935
Susan Helser;Mark Hwang
This research provides an in-depth exploration of the intersection of cybersecurity, artificial intelligence (AI), and big data (CAB) across six sectors in manufacturing and public service. It highlights the transformative potential of these technologies in reshaping industries and enhancing efficiency while also underscoring the challenges they present, particularly in data protection and privacy. To put these challenges in context, a security model consisting of three dimensions (security goal, security control, and data state) is developed and applied to six sectors. The resultant models represent a major step toward more effective risk assessment in practice. They should also inspire research efforts to further advance CAB more effectively and responsibly.
{"title":"AI and Big Data: Synergies and Cybersecurity Challenges in Key Sectors","authors":"Susan Helser;Mark Hwang","doi":"10.1109/TTS.2024.3465935","DOIUrl":"https://doi.org/10.1109/TTS.2024.3465935","url":null,"abstract":"This research provides an in-depth exploration of the intersection of cybersecurity, artificial intelligence (AI), and big data (CAB) across six sectors in manufacturing and public service. It highlights the transformative potential of these technologies in reshaping industries and enhancing efficiency while also underscoring the challenges they present, particularly in data protection and privacy. To put these challenges in context, a security model consisting of three dimensions (security goal, security control, and data state) is developed and applied to six sectors. The resultant models represent a major step toward more effective risk assessment in practice. They should also inspire research efforts to further advance CAB more effectively and responsibly.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"6 1","pages":"54-63"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
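The three-dimensional security model described in the abstract above (security goal × security control × data state) can be sketched as a simple enumeration of model cells to assess per sector. The dimension names come from the abstract; the concrete member values below (CIA goals, control types, data states) are standard examples assumed for illustration, not taken from the paper:

```python
from dataclasses import dataclass
from itertools import product

# Dimension names are from the paper's abstract; the member values are
# illustrative assumptions (common security taxonomy), not the authors' lists.
SECURITY_GOALS = ("confidentiality", "integrity", "availability")
SECURITY_CONTROLS = ("preventive", "detective", "corrective")
DATA_STATES = ("at-rest", "in-transit", "in-use")

@dataclass(frozen=True)
class SecurityCell:
    """One cell of the three-dimensional CAB security model."""
    goal: str
    control: str
    state: str

def build_model_cells() -> list[SecurityCell]:
    """Enumerate every (goal, control, state) combination of the model."""
    return [SecurityCell(g, c, s)
            for g, c, s in product(SECURITY_GOALS, SECURITY_CONTROLS, DATA_STATES)]

cells = build_model_cells()
print(len(cells))  # 3 x 3 x 3 = 27 cells to consider when assessing a sector
```

Enumerating the full cross-product makes the risk-assessment surface explicit: applying the model to a sector amounts to asking, for each cell, how that sector protects a given goal with a given control type for data in a given state.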
Pub Date : 2024-09-16DOI: 10.1109/TTS.2024.3453396
Katina Michael
Online digital services have changed the way that people interact. Companies provide apps for download, allowing users of any age to experience them through smartphones, tablets, and other devices. To date, company policies have acted as pseudo-guidelines for recommended use. But what happens when apps that were never designed for children are acquired and used by them? To mitigate potential risks, the IEEE 2089–2021 standard was developed: an age-appropriate digital services framework for children. The standard stipulates the need for a risk-based age-appropriate register by which developers can eliminate potentially intolerable harms to children during the design phase and keep track of unintended hazards, in order to counteract ongoing negative impacts on children, allowing them to thrive and flourish. Supplementing international law, state regulations, and company policies related to acceptable use, IEEE 2089–2021 provides a benchmark for how children’s apps should be designed based on the 5Rights Principles. Technical standards can be considered a type of soft law, supplementing hard law such as treaties or acts, and even non-legally binding instruments such as declarations and policies. Together, this panoply of safeguards can mitigate the potential for flaws in product development, including data privacy breaches, location-tracking default features, nudging toward in-game purchases and autoscrolling, child labor in data annotation, and adverse metaverse experiences. But given the rapidity of product development cycles, it is technical standards that can have the most immediate effect on the pacing problem, ensuring that child rights impact assessments (CRIA) are implemented in practice.
{"title":"Mitigating Risk and Ensuring Human Flourishing Using Design Standards: IEEE 2089–2021 an Age Appropriate Digital Services Framework for Children","authors":"Katina Michael","doi":"10.1109/TTS.2024.3453396","DOIUrl":"https://doi.org/10.1109/TTS.2024.3453396","url":null,"abstract":"Online digital services have changed the way that people interact. Companies provide apps for download, allowing users of any age to experience them through smartphones, tablets, and other devices. To date, company policies have acted as pseudo-guidelines for recommended use. But what happens when apps that were never designed for children are acquired and used by them? To mitigate potential risks, the IEEE 2089–2021 standard was developed: an age-appropriate digital services framework for children. The standard stipulates the need for a risk-based age-appropriate register by which developers can eliminate potentially intolerable harms to children during the design phase and keep track of unintended hazards, in order to counteract ongoing negative impacts on children, allowing them to thrive and flourish. Supplementing international law, state regulations, and company policies related to acceptable use, IEEE 2089–2021 provides a benchmark for how children’s apps should be designed based on the 5Rights Principles. Technical standards can be considered a type of soft law, supplementing hard law such as treaties or acts, and even non-legally binding instruments such as declarations and policies. Together, this panoply of safeguards can mitigate the potential for flaws in product development, including data privacy breaches, location-tracking default features, nudging toward in-game purchases and autoscrolling, child labor in data annotation, and adverse metaverse experiences. But given the rapidity of product development cycles, it is technical standards that can have the most immediate effect on the pacing problem, ensuring that child rights impact assessments (CRIA) are implemented in practice.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 4","pages":"342-354"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142397505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-16DOI: 10.1109/TTS.2024.3444248
Katina Michael
{"title":"In This Special Section: Algorithmic Bias—Australia’s Robodebt and Its Human Rights Aftermath","authors":"Katina Michael","doi":"10.1109/TTS.2024.3444248","DOIUrl":"https://doi.org/10.1109/TTS.2024.3444248","url":null,"abstract":"","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"254-263"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10680481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial Intelligence (AI) has witnessed remarkable advancements in recent years and has significantly impacted various domains, including cultural heritage. Indeed, AI technologies offer unprecedented capacities to analyze huge amounts of historical data, enabling researchers and art historians to uncover precious patterns, connections, and insights that might otherwise remain elusive. Moreover, the efficiency and accuracy of AI techniques play a pivotal role in many cultural heritage-related tasks, such as cataloging and organizing extensive cultural collections and streamlining the management of heritage resources for present and future generations. However, the integration of AI in cultural heritage also brings forth intricate ethical questions. These span the issues of authenticity, subjectivity, and interpretation biases of an AI-empowered, reproduced, and/or generated artwork, up to the legal concerns related to authorship. Yet such issues remain mostly undefined and unaddressed in the scholarship at the intersection of AI, ethics, and cultural heritage. This paper aims to pave the way toward filling this gap in context-sensitive ethical analysis of AI in cultural heritage. To this aim, the paper first analyzes the main opportunities and benefits raised by AI in cultural heritage. Then, drawing on benchmark, agreed-upon AI ethics principles elaborated in the AI ethics scholarship over the last decade and relevant to cultural heritage, it highlights specific ethical risks that ought to be considered for the development and deployment of trustworthy AI in and for cultural heritage. Finally, areas requiring further attention and work, and the actors called to intervene, are identified to facilitate the next steps for the ethics and governance of AI in cultural heritage.
{"title":"Ethics of Artificial Intelligence for Cultural Heritage: Opportunities and Challenges","authors":"Simona Tiribelli;Sofia Pansoni;Emanuele Frontoni;Benedetta Giovanola","doi":"10.1109/TTS.2024.3432407","DOIUrl":"https://doi.org/10.1109/TTS.2024.3432407","url":null,"abstract":"Artificial Intelligence (AI) has witnessed remarkable advancements in recent years and has significantly impacted various domains, including cultural heritage. Indeed, AI technologies offer unprecedented capacities to analyze huge amounts of historical data, enabling researchers and art historians to uncover precious patterns, connections, and insights that might otherwise remain elusive. Moreover, the efficiency and accuracy of AI techniques play a pivotal role in many cultural heritage-related tasks, such as cataloging and organizing extensive cultural collections and streamlining the management of heritage resources for present and future generations. However, the integration of AI in cultural heritage also brings forth intricate ethical questions. These span the issues of authenticity, subjectivity, and interpretation biases of an AI-empowered, reproduced, and/or generated artwork, up to the legal concerns related to authorship. Yet such issues remain mostly undefined and unaddressed in the scholarship at the intersection of AI, ethics, and cultural heritage. This paper aims to pave the way toward filling this gap in context-sensitive ethical analysis of AI in cultural heritage. To this aim, the paper first analyzes the main opportunities and benefits raised by AI in cultural heritage. Then, drawing on benchmark, agreed-upon AI ethics principles elaborated in the AI ethics scholarship over the last decade and relevant to cultural heritage, it highlights specific ethical risks that ought to be considered for the development and deployment of trustworthy AI in and for cultural heritage. Finally, areas requiring further attention and work, and the actors called to intervene, are identified to facilitate the next steps for the ethics and governance of AI in cultural heritage.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"293-305"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10680564","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142246576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-16DOI: 10.1109/TTS.2024.3437588
Joseph R. Herkert;Brent K. Jesiek;Justin Hess;Marc Cheong
{"title":"In This Special Issue: Ethics in the Global Innovation Helix","authors":"Joseph R. Herkert;Brent K. Jesiek;Justin Hess;Marc Cheong","doi":"10.1109/TTS.2024.3437588","DOIUrl":"https://doi.org/10.1109/TTS.2024.3437588","url":null,"abstract":"","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"289-292"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10680490","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-16DOI: 10.1109/TTS.2024.3447928
John Impagliazzo
{"title":"Call for EIC/Co-EICs of IEEE Transactions on Technology and Society","authors":"John Impagliazzo","doi":"10.1109/TTS.2024.3447928","DOIUrl":"https://doi.org/10.1109/TTS.2024.3447928","url":null,"abstract":"","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"253-253"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10680486","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-16DOI: 10.1109/TTS.2024.3455829
{"title":"IEEE Transactions on Technology and Society Publication Information","authors":"","doi":"10.1109/TTS.2024.3455829","DOIUrl":"https://doi.org/10.1109/TTS.2024.3455829","url":null,"abstract":"","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"C2-C3"},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10680484","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The spectrum of how much or how little organizational processes should be automated has long been debated. As the world undergoes a digital transformation in which contactless and frictionless interaction are promoted as qualities to be honored, many academics are questioning both the frenetic deployment of digital transformation in learning and teaching environments (e.g., face-to-face classrooms, library and academic office spaces, laboratories, and virtual/hybrid modalities) and its validity for students. Indeed, little consultation seems to have taken place with the necessary stakeholders, such as academics, students, instructional designers, and pedagogical experts, during and after the COVID-19 pandemic. Rather, discussions and decisions appear to have been reactive regarding which modalities of teaching delivery might be best in a given context, based on operational scenarios directly linked to financials, such as student recruitment trends and local legislative changes affecting international students. Furthermore, many academic faculty and a great number of auxiliary staff have found themselves in the unemployment queue. This paper seeks to present the possibilities that AI-based systems may bring to higher education, but in so doing, it points to the harmonization required to offer the most appropriate solutions to the needs of both students and teachers, as well as university administration. Education is not a commodity, although it has been treated as one. We are not advocating for an open market which offers “free education” for all, though we wish for everyone to have adequate access to education. But we are certainly advocating for a future in which students and teachers are central to the learning and teaching environment, neither relegated to a passive role nor exploited. This article uses Shiv Ramdas’ short science fiction story, “The Trolley Solution”, to work through the future possibilities of AI in higher education.
{"title":"Automating Higher Education Through Artificial Intelligence?","authors":"Katina Michael;Jeremy Pitt;Jason Sargent;Eusebio Scornavacca","doi":"10.1109/TTS.2024.3450694","DOIUrl":"https://doi.org/10.1109/TTS.2024.3450694","url":null,"abstract":"The spectrum of how much or how little organizational processes should be automated has long been debated. As the world undergoes a digital transformation in which contactless and frictionless interaction are promoted as qualities to be honored, many academics are questioning both the frenetic deployment of digital transformation in learning and teaching environments (e.g., face-to-face classrooms, library and academic office spaces, laboratories, and virtual/hybrid modalities) and its validity for students. Indeed, little consultation seems to have taken place with the necessary stakeholders, such as academics, students, instructional designers, and pedagogical experts, during and after the COVID-19 pandemic. Rather, discussions and decisions appear to have been reactive regarding which modalities of teaching delivery might be best in a given context, based on operational scenarios directly linked to financials, such as student recruitment trends and local legislative changes affecting international students. Furthermore, many academic faculty and a great number of auxiliary staff have found themselves in the unemployment queue. This paper seeks to present the possibilities that AI-based systems may bring to higher education, but in so doing, it points to the harmonization required to offer the most appropriate solutions to the needs of both students and teachers, as well as university administration. Education is not a commodity, although it has been treated as one. We are not advocating for an open market which offers “free education” for all, though we wish for everyone to have adequate access to education. But we are certainly advocating for a future in which students and teachers are central to the learning and teaching environment, neither relegated to a passive role nor exploited. This article uses Shiv Ramdas’ short science fiction story, “The Trolley Solution”, to work through the future possibilities of AI in higher education.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"264-271"},"PeriodicalIF":0.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10666816","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-04DOI: 10.1109/TTS.2024.3446183
Kevin R. McKee
In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, roughly 9% of publications at recent AAAI and NeurIPS conferences indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers confirm independent ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by examining the normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research presents several distinct considerations—namely, participatory design, crowdsourced dataset development, and an expansive role of corporations—that necessitate a contextual ethics framework. To address these concerns, this manuscript outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. Overall, this paper seeks to equip technical researchers with practical knowledge for their work, and to position them for further dialogue with social scientists, behavioral researchers, and ethicists.
{"title":"Human Participants in AI Research: Ethics and Transparency in Practice","authors":"Kevin R. McKee","doi":"10.1109/TTS.2024.3446183","DOIUrl":"https://doi.org/10.1109/TTS.2024.3446183","url":null,"abstract":"In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, roughly 9% of publications at recent AAAI and NeurIPS conferences indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers confirm independent ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by examining the normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research presents several distinct considerations—namely, participatory design, crowdsourced dataset development, and an expansive role of corporations—that necessitate a contextual ethics framework. To address these concerns, this manuscript outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. Overall, this paper seeks to equip technical researchers with practical knowledge for their work, and to position them for further dialogue with social scientists, behavioral researchers, and ethicists.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"279-288"},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10664609","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}