
Latest publications from the International Journal of Computing and Engineering

A Deep Reinforcement Learning Strategy for MEC Enabled Virtual Reality in Telecommunication Networks
Pub Date : 2024-04-20 DOI: 10.47941/ijce.1820
Kodanda Rami Reddy Manukonda
One of the most anticipated features of 5G and subsequent networks is wireless virtual reality (VR), which promises to transform human interaction through immersive experiences and game-changing capabilities. Wireless VR systems, and VR games in particular, are notoriously slow due to rendering issues, yet most existing work does not address data correlation or real-time rendering. Using mobile edge computing (MEC) and mmWave-enabled wireless networks, we propose an adaptive VR system that enables high-quality wireless VR. Under this architecture, VR rendering operations can be adaptively offloaded to MEC servers in real time, with caching providing further performance gains. The limited processing power of VR devices, the need for a high quality of experience (QoE), and the tight latency budget of VR interaction make it difficult to deliver high-quality VR content to wireless VR users in real time. To solve these problems, we present an MEC-enabled wireless VR network that uses recurrent neural networks (RNNs) to predict each user's field of view (FoV) in real time, so that rendering of VR content can simultaneously be offloaded to and cached in the memory of the MEC server. To improve VR users' long-term QoE while staying within the VR interaction latency constraint, we propose decoupled deep reinforcement learning (DRL) algorithms with both centralized and distributed execution, taking into account the correlation between requests' fields of view and their locations. Simulation results show that, compared with rendering on VR headsets, the proposed MEC rendering schemes and DRL algorithms considerably improve VR users' long-term QoE and reduce VR interaction latency.
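The FoV-prediction step described above can be illustrated with a minimal recurrent model. Everything below — the two-angle input, the hidden size, and the untrained random weights — is an illustrative assumption, not the paper's actual network.

```python
import numpy as np

# Illustrative sketch of RNN-based field-of-view (FoV) prediction:
# given a short history of head orientations, predict the next one.
# Dimensions and weights are toy assumptions; a real system trains them.

rng = np.random.default_rng(0)
HIDDEN = 16

# Randomly initialized Elman-RNN parameters.
W_xh = rng.normal(scale=0.1, size=(2, HIDDEN))    # input: (yaw, pitch)
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_hy = rng.normal(scale=0.1, size=(HIDDEN, 2))    # output: predicted (yaw, pitch)

def predict_next_fov(history):
    """Run the RNN over a sequence of (yaw, pitch) samples; return the prediction."""
    h = np.zeros(HIDDEN)
    for x in history:
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h @ W_hy

history = np.array([[0.10, 0.00], [0.12, 0.01], [0.14, 0.01]])
pred = predict_next_fov(history)
print(pred.shape)  # (2,)
```

In a deployment of this kind, the predicted FoV would decide which tiles of the VR scene the MEC server renders and caches ahead of time.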
Citations: 0
A Review of Artificial Intelligence Techniques for Quality Control in Semiconductor Production
Pub Date : 2024-04-19 DOI: 10.47941/ijce.1815
Rajat Suvra Das
Purpose: Exploring AI techniques to improve quality control in semiconductor production brings numerous advantages, such as enhanced precision, heightened efficiency, early detection of issues, cost reduction, continuous improvement, and a competitive edge. These benefits establish this area of research, and its practical application in the semiconductor industry, as valuable and worthwhile. Methodology: This review highlights the advancements, methodologies employed, and outcomes obtained thus far. By scrutinizing the current state of research, the primary objective of this paper is to identify the significant challenges associated with AI approaches in this domain. These challenges encompass data quality and availability, selecting appropriate algorithms, interpreting AI models, and integrating them with existing production systems. Researchers and industry professionals must understand these challenges to address them effectively and devise workable solutions. Moreover, the review lays the groundwork for future researchers, offering a theoretical framework for devising potential solutions that enhance quality control in semiconductor production, and aims to drive research on applying AI techniques to semiconductor manufacturing. Findings: AI-based approaches are more efficient and accurate than traditional manual methods, leading to improved product quality, reduced costs, and increased productivity. Armed with this knowledge, future researchers can design and implement innovative AI-driven solutions to enhance quality control in semiconductor production. Unique contribution to theory, policy and practice: Overall, the theoretical foundation presented in this paper will aid researchers in developing novel solutions to improve quality control in the semiconductor industry, ultimately leading to enhanced product reliability and customer satisfaction.
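As a baseline against which the surveyed AI methods are usually compared, a minimal data-driven quality-control check flags units whose measurements deviate sharply from the batch. The synthetic data and simple z-score rule below are assumptions for illustration only, not a method from the reviewed papers.

```python
import numpy as np

# Minimal sketch of statistical quality control on wafer measurements:
# flag units whose measurement lies more than k standard deviations from
# the batch mean. AI-based QC replaces this fixed rule with a learned
# model; the batch data here is synthetic.

def flag_outliers(measurements, k=2.0):
    m = np.asarray(measurements, dtype=float)
    z = (m - m.mean()) / m.std()
    return np.flatnonzero(np.abs(z) > k)

batch = [1.01, 0.99, 1.00, 1.02, 0.98, 1.00, 5.00]  # last unit is defective
print(flag_outliers(batch))  # [6]
```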
Citations: 0
A Systematic Literature Review on Graphics Processing Unit Accelerated Realm of High-Performance Computing
Pub Date : 2024-04-19 DOI: 10.47941/ijce.1813
Rajat Suvra Das, Vikas Gupta
GPUs (Graphics Processing Units) are widely used for their impressive computational power and parallel computing ability, and they have shown significant potential for improving the performance of HPC applications. This is due to their highly parallel architecture, which allows multiple tasks to execute simultaneously. GPU computing has long been nearly synonymous with CUDA, which offers mature development tools and comprehensive documentation, while AMD's ROCm platform features an application programming interface compatible with CUDA. The main objective of this systematic literature review is therefore to thoroughly analyze and compare the performance characteristics of two prominent GPU computing frameworks, NVIDIA's CUDA and AMD's ROCm (Radeon Open Compute). By meticulously examining the strengths, weaknesses, and overall performance capabilities of CUDA and ROCm, researchers gain a deeper understanding of both platforms. The purpose of this research on GPU-accelerated HPC is to provide a comprehensive and unbiased overview of the current state of research and development in the area. It can help researchers, practitioners, and policymakers understand the role of GPUs in HPC and facilitate evidence-based decision making. In addition, real-time applications of the CUDA and ROCm platforms are discussed to explore the potential performance benefits and trade-offs of leveraging these techniques. The insights provided by the study will support well-informed decisions when choosing between CUDA and ROCm for real-world software.
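Performance comparisons of the kind this review surveys typically rest on microbenchmarks that time the same workload under each backend. The sketch below shows only the harness pattern, with NumPy on the CPU standing in for a CUDA or ROCm backend; the workload and sizes are arbitrary assumptions.

```python
import time
import numpy as np

# Sketch of a backend-agnostic microbenchmark harness: run the same
# workload several times and keep the best wall-clock time, the usual
# way to reduce timing noise when comparing frameworks.

def benchmark(op, *args, repeats=5):
    """Return the best-of-N wall-clock time for op(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        op(*args)
        best = min(best, time.perf_counter() - start)
    return best

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
t = benchmark(np.matmul, a, b)
print(t > 0.0)  # True
```

With a GPU framework, the same harness would additionally need a device synchronization after `op(*args)` so that asynchronous kernel launches are not under-counted.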
Citations: 0
TensorFlow: Revolutionizing Large-Scale Machine Learning in Complex Semiconductor Design
Pub Date : 2024-04-19 DOI: 10.47941/ijce.1812
Rajat Suvra Das
Semiconductor manufacturing processes are becoming more intricate in order to meet the constantly growing need for affordable, fast computing devices with greater memory capacity. This calls for innovative manufacturing techniques, advanced hardware components, and intricate assemblies. TensorFlow emerges as a powerful technology that comprehensively addresses these aspects of ML systems. With its rapid growth, TensorFlow finds application in various domains, including the design of intricate semiconductors. While TensorFlow is primarily known for ML, it can also be used for numerical computation based on data flow graphs in semiconductor design tasks. Consequently, this SLR (Systematic Literature Review) assesses research papers at the intersection of ML, TensorFlow, and the design of complex semiconductors. The SLR sheds light on different methodologies for gathering relevant papers, emphasizing inclusion and exclusion criteria as key strategies, and provides an overview of the TensorFlow technology itself and its applications in semiconductor design. In future work, semiconductors may be designed for higher performance and increased scalability and size, and TensorFlow's compatibility may be broadened to fully leverage its potential in semiconductor technology.
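The data-flow-graph model mentioned above can be made concrete with a toy evaluator: nodes are operations, edges carry values, and a node runs once its inputs are available. This is a teaching sketch of the concept, not TensorFlow's API.

```python
# Toy illustration of the data-flow-graph model TensorFlow is built on.
# Each Node holds an operation and its input nodes; evaluating the output
# node pulls values through the graph.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        # Recursively evaluate inputs, then apply this node's operation.
        return self.op(*(n.evaluate() for n in self.inputs))

def const(v):
    """A leaf node that produces a constant value."""
    return Node(lambda: v)

# Graph for (2 + 3) * 4, evaluated lazily from the output node.
graph = Node(lambda x, y: x * y,
             Node(lambda a, b: a + b, const(2), const(3)),
             const(4))
print(graph.evaluate())  # 20
```

TensorFlow adds to this picture tensors as edge values, automatic differentiation, and placement of subgraphs on accelerators, but the pull-based graph evaluation is the same idea.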
Citations: 0
Enhancing Semiconductor Functional Verification with Deep Learning with Innovation and Challenges
Pub Date : 2024-04-19 DOI: 10.47941/ijce.1814
Rajat Suvra Das, Arjun Pal Chowdhury
Purpose: Universally, the semiconductor is the foundation of electronic technology used in an extensive range of applications such as computers, televisions, and smartphones. It is used to create ICs (Integrated Circuits), one of the vital components of electronic devices. Functional verification of semiconductors is significant for analyzing the correctness of an IC for its intended applications, and it supports manufacturers in areas such as quality assurance and performance optimization. Traditionally, semiconductor functional verification is carried out manually by domain experts. However, this is prone to human error, inaccurate, expensive, and time-consuming. To resolve the problem, DL (Deep Learning) based technologies have revolutionized the functional verification of semiconductor devices: DL algorithms automate functional verification to improve semiconductor quality and performance. Therefore, the focus of this study is to explore advancements in the functional verification process within the semiconductor industry. Methodology: The study begins by examining research techniques used to analyse existing studies on semiconductors, and highlights the limitations of manual functional verification and the need for DL-based solutions. Findings: The study identifies and discusses the challenges of integrating DL into semiconductor functional verification, and outlines future directions to improve its effectiveness and support research efforts in this area. The analysis reveals that research on deep-learning-based functional verification remains limited and needs further development to improve verification efficiency.
Unique contribution to theory, policy and practice: The presented review is intended to support research on enhancing the efficiency of semiconductor functional verification. Furthermore, it is envisioned to assist semiconductor manufacturers with more efficient verification, yield enhancement, and improved accuracy.
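One way learning can assist verification, sketched here under toy assumptions (the stimulus features, labels, and single-layer model are all hypothetical, not drawn from the reviewed work), is to learn from past test stimuli which new vectors are most likely to expose a bug, then run those first:

```python
import numpy as np

# Sketch of learning-assisted test prioritization: train a tiny
# classifier on features of past stimuli labelled by whether they
# exposed a bug, then rank new stimuli by predicted risk.

rng = np.random.default_rng(1)

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Classic perceptron update rule; returns the learned weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w > 0 else 0.0
            w += lr * (yi - pred) * xi
    return w

# Toy ground truth: stimuli with a high first feature tend to expose bugs.
X = rng.random((200, 3))
y = (X[:, 0] > 0.7).astype(float)
w = train_perceptron(X, y)

# Rank fresh candidate stimuli, riskiest first.
candidates = rng.random((10, 3))
ranked = candidates[np.argsort(-(candidates @ w))]
print(ranked.shape)  # (10, 3)
```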
Citations: 0
Community-Led Development and Participatory Design in Open Source: Empowering Collaboration for Sustainable Solutions
Pub Date : 2024-04-16 DOI: 10.47941/ijce.1803
Savitha Raghunathan
This whitepaper delves into the active role of community-led development (CLD) and participatory design (PD) in open source software, highlighting how these complementary approaches bring stakeholders from various backgrounds together to create a cooperative atmosphere for developing stable solutions. It emphasizes the importance of these methodologies in enabling communities to tackle real-world issues effectively and robustly, thus influencing the expansion of open-source development. Integrating CLD and PD within open-source projects fosters a more inclusive collaborative development environment, driving innovation and user-centric solutions. Through case studies like Kubernetes and Konveyor, it is evident that these methodologies significantly contribute to project success by enhancing adaptability, ensuring broad community engagement, and addressing diverse user needs. The findings underscore the vital role of these strategies in creating sustainable and resilient software solutions, highlighting their potential to transform the technology development landscape.
Citations: 0
Security in Machine Learning (ML) Workflows
Pub Date : 2024-03-02 DOI: 10.47941/ijce.1714
Dinesh Reddy Chittibala, Srujan Reddy Jabbireddy
Purpose: This paper addresses the comprehensive security challenges inherent in the lifecycle of machine learning (ML) systems, including data collection, processing, model training, evaluation, and deployment. As the field advances rapidly, robust security mechanisms within ML workflows have become paramount: the challenges encompass data privacy breaches, unauthorized access, model theft, adversarial attacks, and vulnerabilities within the computational infrastructure. Methodology: To counteract these threats, we propose a holistic suite of strategies designed to enhance the security of ML workflows. These strategies include advanced data-protection techniques such as anonymization and encryption, model-security enhancements through adversarial training and hardening, and the fortification of infrastructure security via secure computing environments and continuous monitoring. Findings: The multifaceted nature of security challenges in ML workflows poses significant risks to the confidentiality, integrity, and availability of ML systems, potentially leading to severe consequences such as financial loss, erosion of trust, and misuse of sensitive information. Unique Contribution to Theory, Policy and Practice: Additionally, this paper advocates integrating legal and ethical considerations into a proactive, layered security approach to mitigate the risks associated with ML workflows effectively. By implementing these comprehensive security measures, stakeholders can significantly reinforce the trustworthiness and efficacy of ML applications across sensitive and critical sectors, ensuring their resilience against an evolving landscape of threats.
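One of the data-protection techniques named above, anonymization, can be sketched as keyed pseudonymization of identifiers before they enter an ML pipeline: the mapping is stable for joins but not reversible without the secret. The key and record fields below are illustrative assumptions.

```python
import hashlib
import hmac

# Minimal sketch of pseudonymization for an ML data pipeline: replace a
# raw identifier with a keyed HMAC digest. The same input always maps to
# the same token, so records can still be joined, but recovering the
# original requires the secret key.

SECRET_KEY = b"rotate-me-outside-source-control"  # illustrative; store in a vault

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"user_id": "alice@example.com", "age_bucket": "30-39"}
safe = {**record, "user_id": pseudonymize(record["user_id"])}
print(safe["user_id"] != record["user_id"])  # True
print(len(safe["user_id"]))  # 16
```

Truncating the digest trades collision resistance for storage; a production pipeline would also rotate the key and keep it out of the training environment entirely.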
引用次数: 0
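Among the data-protection techniques this abstract names, anonymization can be sketched briefly. Below is a minimal, hedged illustration of salted-hash pseudonymization — the field names, salt, and helper functions are illustrative assumptions, not drawn from the paper:

```python
# Illustrative sketch (not from the paper): pseudonymize identifying fields
# with a keyed hash so records remain linkable across an ML pipeline
# without exposing raw identifiers.
import hashlib
import hmac

SECRET_SALT = b"replace-with-a-per-deployment-secret"  # assumed secret key

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable keyed hash (HMAC-SHA256, truncated)."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def anonymize_record(record: dict, sensitive_fields: set) -> dict:
    """Return a copy of the record with sensitive fields replaced by hashes."""
    return {k: pseudonymize(v) if k in sensitive_fields else v
            for k, v in record.items()}

record = {"user_id": "alice@example.com", "age": 34, "label": 1}
clean = anonymize_record(record, {"user_id"})
```

The keyed hash keeps the mapping deterministic (the same identifier always maps to the same token), preserving joins across datasets; stronger anonymity guarantees would additionally require techniques such as k-anonymity or differential privacy.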
Proactive Edge Computing for Smart City: A Novel Case for ML-Powered IoT
Pub Date : 2024-01-06 DOI: 10.47941/ijce.1605
Rohan Singh Rajput, Sarthik Shah, Shantanu Neema
Purpose: In response to the challenges posed by traditional cloud-centric IoT architectures, this research explores the integration of Proactive Edge Computing (PEC) in the context of smart cities. The purpose is to address privacy concerns, enhance system capabilities, and introduce machine-learning-powered anticipation to revolutionize urban management. Methodology: The research employs a comprehensive methodology that includes a thorough review of the existing literature on the use of IoT devices, edge computing, and machine learning in smart cities. It introduces the concept of PEC to advocate for a shift from cloud-centric to on-chip computing. The methodology is based on several case studies across domains of smart city management, focusing on the improvement of public life. Findings: The research reveals that integrating PEC into various smart city domains leads to significant improvements. Real-time data analysis and machine learning predictions contribute to reduced congestion, enhanced public safety, sustainable energy practices, efficient waste management, and personalized healthcare. Unique Contribution to Theory, Policy and Practice: The research makes a unique contribution to theory, policy, and practice by proposing a paradigm shift associated with IoT for smart cities. The suggested shift not only ensures data security but also offers a more efficient and proactive approach to urban challenges. The case studies provide actionable insights for policymakers and practitioners, fostering a holistic understanding of the complexities associated with deploying IoT devices in smart cities. The research lays the foundation for a more secure, efficient, and anticipatory ecosystem, aligning technological advancements with societal needs in the dynamic landscape of smart cities.
{"title":"Proactive Edge Computing for Smart City: A Novel Case for ML-Powered IoT","authors":"Rohan Singh Rajput, Sarthik Shah, Shantanu Neema","doi":"10.47941/ijce.1605","DOIUrl":"https://doi.org/10.47941/ijce.1605","url":null,"abstract":"Purpose: In response to the challenges posed by traditional cloud-centric IoT architectures, this research explores the integration of Proactive Edge Computing (PEC) in context of smart cities. The purpose addresses privacy concerns, enhance system capabilities, and introduce machine learning powered anticipation to revolutionize urban city management. \u0000Methodology: The research employs a comprehensive methodology that includes a thorough review of existing literature on use of IoT devices, edge computing and machine learning in context of smart cities. It introduces the concept of PEC to advocate for a shift from cloud-centric to on-chip computing. The methodology is based on several case studies in various domains of smart city management focusing on the improvement of public life. \u0000Findings: This research reveal that the integration of PEC in various smart city domains leads to a significant improvement. Real time data analysis, and machine learning predictions contributes to reduced congestion, enhance public safety, sustainable energy practices, efficient waste management, and personalized healthcare. \u0000Unique Contribution to Theory, Policy and Practice: The research makes a unique contribution to the field of theory, policy and practice by proposing a paradigm shift associated with IoT for smart cities. The suggested shift not only ensures data security but also offers a more efficient and proactive approach to urban challenges. The case studies provide actionable insights for policymakers and practitioners, fostering a holistic understanding of the complexities associated with deploying IoT devices in smart cities. 
The research lays the foundation for a more secure, efficient, and anticipatory ecosystem, aligning technological advancements with societal needs in the dynamic landscape of smart cities.","PeriodicalId":198033,"journal":{"name":"International Journal of Computing and Engineering","volume":"9 31","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139380289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced Network Reliability Following Emergency (E911) Calls
Pub Date : 2024-01-05 DOI: 10.47941/ijce.1600
Riteshkumar S. Patel, Jigarkumar Patel
Purpose: The purpose of this research is to explore E911 call reliability requirements, study real-world issues arising as telecommunication networks transition from LTE to 5G (NR) and WCDMA, and present network optimization solutions. The primary objective is to ensure the continuous provision of emergency services and improve the dependability of Enhanced 911 (E911) calls. Methodology: The research methodology involves an examination of the transition from LTE to 5G (NR) and WCDMA in telecommunication networks. The study delves into government-mandated E911 call reliability requirements and conducts a detailed analysis of two real-world issues affecting reliable connectivity for E911 calls. Additionally, the research proposes network optimization solutions to address these challenges and enhance the overall reliability of emergency services. Findings: The findings reveal insights into government-mandated E911 call reliability requirements and identify two practical issues affecting the continuity of emergency services during the transition from LTE to 5G (NR) and WCDMA. Unique Contribution to Theory, Policy and Practice: The study presents network optimization solutions aimed at overcoming these challenges, with the ultimate goal of improving the dependability of E911 calls and enhancing public safety.
{"title":"Enhanced Network Reliability Following Emergency (E911) Calls","authors":"Riteshkumar S. Patel, Jigarkumar Patel","doi":"10.47941/ijce.1600","DOIUrl":"https://doi.org/10.47941/ijce.1600","url":null,"abstract":"Purpose: In this research, the purpose is to explore E911 call reliability requirements, study real-world issues related to telecommunication networks transitioning from LTE to 5G (NR) and WCDMA, and present network optimization solutions. The primary objective is to ensure the continuous supply of emergency services and improve the dependability of Enhanced 911 (E911) calls. \u0000Methodology: The research methodology involves an examination of the transition from LTE to 5G (NR) and WCDMA in telecommunication networks. The study delves into government-mandated E911 call reliability requirements and conducts a detailed analysis of two real-world issues affecting tight connectivity for E911 calls. Additionally, the research proposes network optimization solutions to address these challenges and enhance the overall reliability of emergency services. \u0000Findings: The findings of this research reveal insights into government-mandated E911 call reliability requirements and identify two practical issues affecting the continuity of emergency services during the transition from LTE to 5G (NR) and WCDMA. 
\u0000Unique contributor to theory, policy and practice: The study presents network optimization solutions aimed at overcoming these challenges, with the ultimate goal of improving the dependability of E911 calls and enhancing public safety.","PeriodicalId":198033,"journal":{"name":"International Journal of Computing and Engineering","volume":"47 27","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139382224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Demystifying AI: Navigating the Balance between Precision and Comprehensibility with Explainable Artificial Intelligence
Pub Date : 2024-01-05 DOI: 10.47941/ijce.1603
Narayana Challa
Integrating Artificial Intelligence (AI) into daily life has brought transformative changes, ranging from personalized recommendations on streaming platforms to advancements in medical diagnostics. However, concerns about the transparency and interpretability of AI models, particularly deep neural networks, have become prominent. This paper explores the emerging paradigm of Explainable Artificial Intelligence (XAI) as a crucial response to these concerns. Delving into the multifaceted challenges posed by AI complexity, the study emphasizes the critical significance of interpretability. It examines how XAI is fundamentally reshaping the landscape of artificial intelligence, seeking to reconcile precision with the transparency necessary for widespread acceptance.
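One simple, model-agnostic technique in the spirit of the interpretability methods this abstract discusses is permutation feature importance; the toy model and data below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (not from the paper): permutation feature importance.
# Shuffle one feature column at a time and measure the drop in accuracy;
# a larger drop means the model relies more on that feature.
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    """Return per-feature importance as the accuracy drop after shuffling."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)                      # permute column j only
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Toy classifier: predicts 1 iff feature 0 is positive; feature 1 is ignored.
model = lambda x: int(x[0] > 0)
X = [[1, 5], [-1, 5], [2, -3], [-2, -3], [3, 0], [-3, 0]]
y = [1, 0, 1, 0, 1, 0]
imps = permutation_importance(model, X, y, n_features=2)
```

Shuffling a feature the model ignores leaves accuracy unchanged (importance 0.0), while shuffling a feature the model relies on degrades accuracy; the size of the drop is the importance score.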
{"title":"Demystifying AI: Navigating the Balance between Precision and Comprehensibility with Explainable Artificial Intelligence","authors":"Narayana Challa","doi":"10.47941/ijce.1603","DOIUrl":"https://doi.org/10.47941/ijce.1603","url":null,"abstract":"Integrating Artificial Intelligence (AI) into daily life has brought transformative changes, ranging from personalized recommendations on streaming platforms to advancements in medical diagnostics. However, concerns about the transparency and interpretability of AI models, intense neural networks, have become prominent. This paper explores the emerging paradigm of Explainable Artificial Intelligence (XAI) as a crucial response to address these concerns. Delving into the multifaceted challenges posed by AI complexity, the study emphasizes the critical significance of interpretability. It examines how XAI is fundamentally reshaping the landscape of artificial intelligence, seeking to reconcile precision with the transparency necessary for widespread acceptance.","PeriodicalId":198033,"journal":{"name":"International Journal of Computing and Engineering","volume":"119 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139383239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
International Journal of Computing and Engineering