
Latest Articles in Concurrency and Computation: Practice & Experience

From GPU to CPU (and Beyond): Extending Hardware Support in GPUSPH Through a SYCL-Inspired Interface
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-28 | DOI: 10.1002/cpe.8313
Giuseppe Bilotta

While most software is originally designed for serial or parallel execution on CPU, and porting to GPU comes later in its development, GPUSPH was designed from the ground up to run on GPUs using CUDA. Making it accessible to a wider audience by introducing support for other computational hardware, and in particular CPUs, poses challenges that are complementary to the ones normally faced when porting CPU code to GPU. We present the approach we have adopted to support CPUs as computational devices in GPUSPH with minimal code changes and low developer effort. Detailed benchmarks illustrating the performance of the implementation and its scalability across multiple cores in both single-socket and NUMA configurations show good strong and weak scaling.
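The abstract's central idea, a single kernel expressed once and dispatched to whichever backend is selected, can be illustrated with a minimal sketch. This is not GPUSPH's actual interface: the backend classes and the `parallel_for` entry point are hypothetical stand-ins for a SYCL-style dispatch layer in which a CPU backend replaces a one-thread-per-element GPU launch.

```python
# Minimal sketch (hypothetical, not GPUSPH's code) of a SYCL-inspired
# dispatch layer: the same kernel body runs unchanged on any backend.
from concurrent.futures import ThreadPoolExecutor

class SerialBackend:
    def parallel_for(self, n, kernel, *args):
        # Reference backend: one index at a time.
        for i in range(n):
            kernel(i, *args)

class CpuBackend:
    """Splits the index range across worker threads, mimicking how a
    CPU backend would replace a one-thread-per-element GPU launch."""
    def __init__(self, workers=4):
        self.workers = workers

    def parallel_for(self, n, kernel, *args):
        def run_chunk(lo, hi):
            for i in range(lo, hi):
                kernel(i, *args)
        chunk = (n + self.workers - 1) // self.workers
        with ThreadPoolExecutor(self.workers) as pool:
            futures = [pool.submit(run_chunk, lo, min(lo + chunk, n))
                       for lo in range(0, n, chunk)]
            for f in futures:
                f.result()  # propagate any kernel exception

def saxpy_kernel(i, a, x, y, out):
    # The "device kernel": written once, backend-agnostic.
    out[i] = a * x[i] + y[i]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
CpuBackend(workers=2).parallel_for(4, saxpy_kernel, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

Swapping `CpuBackend` for `SerialBackend` (or, in the real code base, a CUDA backend) leaves the kernel and the call site untouched, which is the low-developer-effort property the abstract describes.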

Citations: 0
Toward a Dynamic Allocation Strategy for Deadline-Oriented Resource and Job Management in HPC Systems
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-26 | DOI: 10.1002/cpe.8310
Barry Linnert, Cesar Augusto F. De Rose, Hans-Ulrich Heiss

As high-performance computing (HPC) becomes a tool used in many different workflows, quality of service (QoS) becomes increasingly important. In many cases, this includes the reliable execution of an HPC job and the generation of its results by a certain deadline. The resource and job management system (RJMS), or simply RMS, is responsible for receiving job requests and executing the jobs with a deadline-oriented policy that supports the workflows. In this article, we evaluate how well static resource management policies cope with deadline-constrained HPC jobs and explore two variations of a dynamic policy in this context. As the Hilbert curve-based approach used by the SLURM workload manager represents the state of the art in production environments, it was selected as one of the static allocation strategies. The Manhattan median approach, a research proposal that aims to minimize the communication overhead of parallel programs by providing more compact partitions than the Hilbert curve approach, was selected as the second static allocation strategy. In contrast to the static partitions provided by these two approaches, the leak approach focuses on supporting dynamic runtime behavior of the jobs and assigns nodes of the HPC system on demand at runtime. While the contiguous leak version also relies on a compact set of nodes, the noncontiguous leak can provide additional nodes at a greater distance from the nodes already used by the job. Our preliminary results clearly show that a dynamic policy is needed to meet the requirements of a modern deadline-oriented RMS scenario.
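The Hilbert curve-based allocation mentioned above can be sketched generically (this is an illustration of the idea, not SLURM's implementation): nodes of a 2-D grid are ordered along a Hilbert curve, and a job receives a contiguous run of that order, which tends to keep its nodes spatially compact.

```python
# Sketch of Hilbert-curve node ordering for compact allocation
# (illustrative only, not SLURM's code).

def hilbert_d2xy(order, d):
    """Map distance d along the curve to (x, y) on a 2**order grid
    (classic iterative bit-manipulation algorithm)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so the curve stays continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Allocating the first 4 curve positions of a 4x4 grid yields one
# compact 2x2 quadrant rather than a scattered set of nodes.
allocation = [hilbert_d2xy(2, d) for d in range(4)]
print(allocation)  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Because consecutive curve distances map to neighboring grid cells, a contiguous slice of the ordering approximates a compact partition, which is what keeps communication overhead low.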

Citations: 0
A Review on Network Covert Channel Construction and Attack Detection
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-26 | DOI: 10.1002/cpe.8316
Mrinal Ashish Khadse, Dhananjay Manohar Dakhane

A covert network channel is a communication channel in which a message is transmitted secretly to the recipient. Covert network channels can themselves be vulnerable to multiple attacks, so the message must be properly secured. In most cases, the covert channel is used to ensure data protection and allow users to freely access the Internet. This paper reviews recent studies on covert network channels, examining existing work from 2015 to 2024. It also discusses the undetectability and reliability of different types of covert network channels, and provides a detailed description of a covert channel's ability to hide in containers. Existing research on covert network channels explains a few techniques for detecting attacks in secret data communication; this article additionally discusses several machine learning and deep learning techniques and describes their detection accuracy through an overview of current technologies. Various countermeasures to prevent attacks in covert channels are also discussed in detail. Bandwidth limitations, data set limitations, and covert channel capacity are clearly defined, which will help future researchers build covert network channels and detect attacks. Finally, this work considers the challenges faced by covert network channels and the future scope of application.
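As a toy illustration of one channel family such reviews cover, the sketch below simulates an inter-packet-delay timing channel: bits are encoded as short versus long gaps and recovered with a threshold. The delay values are hypothetical, and a real channel would additionally have to survive network jitter, which is exactly the irregularity that detection techniques exploit.

```python
# Toy simulation of an inter-packet-delay timing covert channel.
# Delay constants are hypothetical; no actual traffic is sent.

SHORT, LONG = 0.01, 0.05            # seconds: gap for a 0 bit vs. a 1 bit
THRESHOLD = (SHORT + LONG) / 2      # decision boundary at the receiver

def encode(bits):
    """Turn a bit string into a sequence of inter-packet delays."""
    return [LONG if b else SHORT for b in bits]

def decode(delays):
    """Recover bits by thresholding the observed delays."""
    return [1 if d > THRESHOLD else 0 for d in delays]

msg = [1, 0, 1, 1, 0]
recovered = decode(encode(msg))
print(recovered)  # [1, 0, 1, 1, 0]
```

A detector looking for this channel would flag the suspiciously bimodal delay distribution, which is one reason the surveyed machine-learning detectors operate on timing statistics.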

Citations: 0
Discovering and Ranking Urban Social Clusters Out of Streaming Social Media Datasets
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1002/cpe.8314
Mete Celik, Ahmet Sakir Dokuz, Alper Ecemis, Emre Erdogmus

Urban social media mining is the process of discovering urban patterns from spatio-temporal social media datasets. Urban social clusters are the clusters formed by the social media posts of users living in cities at a certain time and place. Discovering and identifying urban social clusters is of great importance for urban and regional planning, target audience identification, a better understanding of city dynamics, and so forth. Discovering and ranking urban social clusters out of streaming social media datasets requires efficient filtering approaches and mining algorithms. In the literature, several studies address how to determine the importance of urban clusters. Most of these studies take into account spatial expansion over time and changes in the number of elements within clusters when identifying the significance of urban clusters. In contrast to these studies, we have also considered cluster temporal formation stability, spatial density variation, and the impact of meta-information on urban social clusters. In this study, the Temporal, Spatial, and Meta Important Urban Social Clusters Miner (TSMIUSC-Miner) algorithm is proposed. In the proposed algorithm, urban social clusters are discovered, and their importance relative to each other is compared and ranked. The temporal, spatial, and meta importance scores of the clusters are calculated, and the clusters that satisfy predefined score thresholds are reported. The performance of the proposed TSMIUSC-Miner algorithm is compared with that of a naive approach using a real-life streaming Twitter/X dataset. The results show that the proposed TSMIUSC-Miner algorithm outperforms the naive approach in terms of execution time.
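The final selection step the abstract describes, keeping only clusters whose three importance scores all exceed predefined thresholds, can be sketched as below. The score values and field names are hypothetical placeholders, not the paper's actual scoring functions.

```python
# Hedged sketch of threshold-based cluster selection; the clusters and
# their scores are made-up examples, not data from the paper.

clusters = [
    {"id": "c1", "temporal": 0.8, "spatial": 0.7, "meta": 0.9},
    {"id": "c2", "temporal": 0.4, "spatial": 0.9, "meta": 0.8},
    {"id": "c3", "temporal": 0.9, "spatial": 0.6, "meta": 0.7},
]

def important_clusters(clusters, t_min, s_min, m_min):
    """Keep clusters whose temporal, spatial, and meta scores all
    meet their respective thresholds."""
    return [c["id"] for c in clusters
            if c["temporal"] >= t_min
            and c["spatial"] >= s_min
            and c["meta"] >= m_min]

print(important_clusters(clusters, 0.5, 0.5, 0.5))  # ['c1', 'c3']
```

Ranking would then order the surviving clusters by some aggregate of the three scores; the paper's specific ranking rule is not reproduced here.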

Citations: 0
Request Deadline Split and Interference-Aware Request Migration in Edge Cloud
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-24 | DOI: 10.1002/cpe.8315
Jie Wang, Huiqun Yu, Guisheng Fan, Jiayin Zhang

Edge computing extends computing resources from the data center to the edge of the network to better handle latency-sensitive tasks. However, with the rise of the Internet of Things, edge devices with limited processing capabilities struggle to execute requests under fluctuating request peaks. In order to meet the deadline constraints of latency-sensitive tasks, a feasible solution is to offload some of them to other nearby edge devices. This article studies the problem of request migration in edge computing systems and minimizes the request deadline violation rate based on actual online arrival patterns, performance interference phenomena, and deadline constraints. Since a request contains multiple services, and request migration changes the resource-competition pressure on servers, we split the problem into three sub-problems: dividing the request deadline to determine the maximum response time of each service, determining the performance of a service under different resource pressures, and devising the request migration strategies. To this end, we propose two deadline splitting methods, a performance interference model under multi-resource pressure, and two heuristic request migration strategies. Since this article considers online edge scenarios, the number and type of requests are black boxes. We conduct simulation experiments and find that our method incurs only one-third as many request deadline violations as other methods.
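One natural deadline-splitting rule consistent with the description above is to divide a request's end-to-end deadline among its services in proportion to their estimated execution times. This is a hedged sketch of that idea, not necessarily either of the two splitting methods the paper proposes.

```python
# Illustrative proportional deadline split (assumed rule, not the
# paper's method): each service gets a share of the end-to-end
# deadline proportional to its estimated execution time.

def split_deadline(total_deadline, est_times):
    """Return one sub-deadline per service; shares sum to the total."""
    total = sum(est_times)
    return [total_deadline * t / total for t in est_times]

# A 100 ms request deadline over services estimated at 10, 30, 60 ms:
shares = split_deadline(100.0, [10.0, 30.0, 60.0])
print(shares)  # [10.0, 30.0, 60.0]
```

A service whose sub-deadline is at risk (for example because of interference on its server) then becomes a candidate for migration to a less loaded node.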

Citations: 0
A Multiobjective Approach for E-Commerce Website Structure Optimization
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-23 | DOI: 10.1002/cpe.8302
Shina Panicker, T. V. Vijay Kumar, Divakar Yadav

Complex websites comprise a variety of diverse web entities, which require constant restructuring in step with the latest trends, shifting consumer expectations, and market-driven changes. Therefore, designing suitable models to optimally restructure such websites is of paramount importance, and such models must take into consideration several attributes of the web entities, such as display size, download time, type, location in the page, sales likelihood, discounts, and the ongoing trend. A recent study took all these attributes into consideration and designed a model based on the Access Score, Interface Score, and Purchase Score. However, that model suffers from certain drawbacks: it did not address the underlying cohesiveness between these attributes, it provided a single optimal solution to the adaptive website structure optimization (AWSO) problem, and it relied on a priori knowledge of weights. The basis of the newly proposed model is that there can be more than one optimal solution to the AWSO problem in the real world. The novel tri-objective optimization model uses the NSGA-II algorithm to simultaneously optimize the attributes and finds advantageous trade-off solutions without requiring a priori knowledge of weights. The proposed MO-AWSO-NSGA-II model is shown to outperform the existing model, proving it better suited for the AWSO problem.
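The trade-off solutions NSGA-II returns are the nondominated (Pareto-optimal) points of the multi-objective space. Below is a minimal sketch of the underlying dominance test and front extraction for three maximized objectives; the sample score tuples are hypothetical, not results from the paper.

```python
# Pareto dominance and front extraction, the core notion behind
# NSGA-II's nondominated sorting (objectives are maximized here).

def dominates(a, b):
    """True if a is at least as good as b on every objective and
    strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (access, interface, purchase) score tuples:
solutions = [(3, 1, 2), (2, 2, 2), (1, 3, 2), (1, 1, 1)]
print(pareto_front(solutions))  # [(3, 1, 2), (2, 2, 2), (1, 3, 2)]
```

NSGA-II layers this test into repeated nondominated sorting plus crowding-distance selection; the point of the sketch is only why several incomparable solutions can all be "optimal" at once.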

Citations: 0
Workflow Scheduling in Cloud–Fog Computing Environments: A Systematic Literature Review
IF 1.5 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-22 | DOI: 10.1002/cpe.8304
Raouia Bouabdallah, Fairouz Fakhfakh

The Internet of Things (IoT) facilitates the connectivity of billions of physical devices for exchanging information and enabling a wide range of applications. These applications can be presented in the form of dependent tasks, as outlined in a workflow. These workflows face limitations due to constraints in IoT sensors. To address these limitations, cloud computing has emerged, offering large computing and storage capacity and a great capability to adjust resources according to need. However, owing to its centralized nature, cloud computing might not adequately meet the low-latency requirements of IoT workflows when scheduling a workflow composed of IoT tasks. Moreover, cloud computing is not ideal for delay-sensitive workflows and may increase communication costs. In response to these challenges, the use of fog computing as an extension of the cloud computing scheme is recommended. Fog computing aims to process workflow tasks close to IoT devices. While fog computing offers various advantages, integrating these systems into workflow scheduling remains one of the most formidable challenges in distributed environments. Indeed, significant issues arise from timely execution and resource limitations. In this survey paper, we present a Systematic Literature Review (SLR) on the current state of the art in this domain. We propose a taxonomy to compare and evaluate existing studies on workflow scheduling approaches in cloud–fog computing environments. This taxonomy encompasses various criteria, including scheduling techniques, performance metrics, workflow dependencies, scheduling policies, and evaluation tools. We highlight recommendations for open issues which require more investigation. Our aim is to provide valuable insights for researchers and developers interested in understanding the contributions and challenges of current workflow scheduling approaches in cloud–fog computing environments.

引用次数: 0
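Surveyed approaches differ in how they place a workflow's dependent tasks across the fog and cloud tiers. As a minimal illustration only (not any specific surveyed algorithm), a latency-aware placement over a task DAG might look like the sketch below; the tier latencies, task names, and latency bounds are all invented for the example:

```python
# Illustrative sketch: schedule a workflow DAG of IoT tasks across fog and
# cloud tiers, preferring the fog whenever it meets a task's latency bound.
from graphlib import TopologicalSorter

# Assumed round-trip network latencies per tier, in milliseconds.
TIERS_LATENCY_MS = {"fog": 5.0, "cloud": 60.0}

def schedule(workflow, latency_bound_ms):
    """workflow: {task: set(of predecessor tasks)}; returns {task: tier}."""
    placement = {}
    # Respect task dependencies by placing tasks in topological order.
    for task in TopologicalSorter(workflow).static_order():
        fits_fog = TIERS_LATENCY_MS["fog"] <= latency_bound_ms[task]
        placement[task] = "fog" if fits_fog else "cloud"
    return placement

dag = {"sense": set(), "filter": {"sense"}, "train": {"filter"}}
bounds = {"sense": 10.0, "filter": 10.0, "train": 2.0}
print(schedule(dag, bounds))  # → {'sense': 'fog', 'filter': 'fog', 'train': 'cloud'}
```

Real schedulers in the surveyed literature also weigh execution time, energy, and cost; this sketch only shows the structural skeleton of DAG-ordered, latency-bounded placement.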
WSC: A Crowd-Powered Framework for Mapping Decomposable Complex-Task With Worker-Set
IF 1.5 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-10-21 DOI: 10.1002/cpe.8305
Suneel Kumar, Sarvesh Pandey

The crowdsourcing platform serves as an intermediary managing the interaction between a requester, who posts a decomposable task, and a pool of workers who bid to solve it. Each worker intending to take up the task (partially or fully) decomposes it into multiple independent subtasks and submits them to the platform. Selecting a diverse set of workers (based on the bids received) to solve the decomposable task is challenging, as it requires balancing factors such as cost and quality while encouraging collaboration. We propose a Worker Set Computation (WSC) methodology to address these challenges by efficiently selecting a custom set of potential workers who can collaboratively complete the task at optimal cost. An aging technique dynamically updates the weight of each worker, giving more weight to feedback received in the recent past. This not only favors workers who were rated well in the immediate past but also ensures that a single odd feedback does not heavily influence the overall rating. We compare the performance of the proposed method against state-of-the-art methods, considering computational (and budget) requirements as well as the aging-based worker rating.
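The aging update described in the abstract can be sketched as an exponential moving average over worker feedback; the decay factor below is an assumption for illustration, since the paper's exact weighting scheme is not given here:

```python
# Minimal sketch of a recency-weighted ("aging") rating update: recent
# feedback carries more weight, but a single odd rating cannot fully
# overturn an established score. ALPHA is an assumed value, not the paper's.
ALPHA = 0.3  # weight given to the newest feedback

def update_rating(current, feedback, alpha=ALPHA):
    # Exponential moving average of feedback scores.
    return alpha * feedback + (1 - alpha) * current

rating = 4.5
for fb in [4.0, 5.0, 1.0]:   # one outlying rating at the end
    rating = update_rating(rating, fb)
print(rating)  # stays well above the outlier of 1.0
```

The same update favors recently well-rated workers, because older feedback decays geometrically with each new rating.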

Citations: 0
3D LVCN: A Lightweight Volumetric ConvNet
IF 1.5 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-10-20 DOI: 10.1002/cpe.8312
Xiaoyun Lu, Chunjie Zhou, Shengjie Liu, Jialong Li

In recent years, with the significant increase in the volume of three-dimensional medical image data, three-dimensional medical models have emerged. However, existing methods often require a large number of model parameters to handle complex medical datasets, leading to high model complexity and significant consumption of computational resources. To address these issues, this paper proposes a 3D Lightweight Volumetric Convolutional Neural Network (3D LVCN), aiming to achieve efficient and accurate volume segmentation. The architecture combines the design principles of convolutional neural network modules and hierarchical transformers, using large convolutional kernels as the basic framework for feature extraction while introducing 1 × 1 × 1 convolutional kernels for deep convolution. This improvement not only enhances the computational efficiency of the model but also improves its generalization ability. The proposed model is tested on three challenging public datasets, namely spleen, liver, and lung, from the Medical Segmentation Decathlon. Experimental results show that model performance increases from 0.8315 to 0.8673, with a reduction in parameters of approximately 5%. This indicates that, compared with currently advanced model structures, the proposed architecture exhibits significant advantages in segmentation performance.
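A back-of-the-envelope parameter count shows why pairing large kernels with 1 × 1 × 1 convolutions is lightweight. The sketch below assumes a depthwise decomposition (one large kernel per channel, then 1 × 1 × 1 channel mixing); the channel counts, kernel size, and the decomposition itself are illustrative assumptions, not details taken from the paper:

```python
# Compare parameter counts for one 3D convolution layer: a standard
# large-kernel conv versus a per-channel (depthwise) large kernel
# followed by a 1x1x1 pointwise conv.
def conv3d_params(c_in, c_out, k):
    return c_in * c_out * k**3 + c_out          # weights + bias

def depthwise_pointwise_params(c_in, c_out, k):
    depthwise = c_in * k**3 + c_in              # one k^3 filter per channel
    pointwise = conv3d_params(c_in, c_out, 1)   # 1x1x1 channel mixing
    return depthwise + pointwise

dense = conv3d_params(64, 64, 7)                # standard 7x7x7 conv
light = depthwise_pointwise_params(64, 64, 7)   # depthwise 7x7x7 + 1x1x1
print(dense, light, f"{light / dense:.1%}")     # → 1404992 26176 1.9%
```

For these (hypothetical) sizes the decomposed layer needs under 2% of the dense layer's parameters, which is the kind of saving that makes large 3D kernels affordable.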

Citations: 0
A Multi-Constrained Green Routing Protocol for IoT-Based Software-Defined WSN
IF 1.5 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2024-10-17 DOI: 10.1002/cpe.8306
Nitesh Kumar, Rohit Beniwal

In recent times, there has been a notable surge in the use of Internet of Things (IoT) network devices owing to their vast range of applications. This rapid growth has led to increased energy consumption, which in turn raises significant environmental concerns. Consequently, there is a growing demand for green computing techniques that can reduce the energy usage and carbon footprint of IoT devices. Clustering IoT networks is a useful strategy for extending their lifespan; however, clustering is a complex optimization problem that is NP-hard, making it a challenging issue. Meta-heuristic algorithms have greatly improved our ability to tackle such challenges. This study therefore introduces a clustering scheme called EQ-AHA, which combines equilibrium optimization and artificial hummingbird optimization to enhance the efficiency of IoT-based Software-Defined Wireless Sensor Networks (IoT-SDWSN). The primary goal of EQ-AHA is to select the Cluster Heads (CHs) and determine the optimal path between CHs and the Base Station (BS). EQ-AHA employs a fitness function that considers three important factors: the distance between CHs, the distance between nodes and their CHs, and the energy levels of the nodes. Overall, this strategy improves network performance by 31.6% compared with other state-of-the-art (SoA) algorithms.
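A fitness function of the kind the abstract describes, combining the three factors, might be sketched as follows. The equal weights, the inversion of maximized terms, and the toy coordinates are all assumptions for illustration, not the paper's actual EQ-AHA formulation:

```python
# Illustrative CH-selection fitness (lower is better) combining:
# (1) mean node-to-nearest-CH distance (minimize),
# (2) mean pairwise CH separation (maximize, so inverted),
# (3) mean residual energy of chosen CHs (maximize, so inverted).
import math

def fitness(chs, nodes, energy, w=(1 / 3, 1 / 3, 1 / 3)):
    f_node = sum(min(math.dist(n, c) for c in chs) for n in nodes) / len(nodes)
    pairs = [(a, b) for i, a in enumerate(chs) for b in chs[i + 1:]]
    f_ch = 1.0 / (sum(math.dist(a, b) for a, b in pairs) / len(pairs) + 1e-9)
    f_energy = 1.0 / (sum(energy[c] for c in chs) / len(chs) + 1e-9)
    return w[0] * f_node + w[1] * f_ch + w[2] * f_energy

nodes = [(0, 0), (1, 0), (9, 9), (10, 9)]      # toy sensor positions
energy = {(0, 0): 0.9, (9, 9): 0.8}            # residual energy of candidates
chs = [(0, 0), (9, 9)]                          # one CH per natural cluster
print(fitness(chs, nodes, energy))
```

A meta-heuristic such as the paper's equilibrium/hummingbird hybrid would then search over candidate CH sets to minimize a function of this shape.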

Citations: 0