
Proceedings. Advances in Parallel and Distributed Computing: Latest Publications

A multithreaded processor designed for distributed shared memory systems
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574034
Winfried Grünewald, T. Ungerer
The multithreaded processor, called Rhamma, uses a fast context switch to bridge latencies caused by memory accesses or by synchronization operations. Load/store, synchronization, and execution operations of different threads of control are executed simultaneously by appropriate functional units. A fast context switch is performed whenever a functional unit comes across an operation that is destined for another unit. The overall performance depends on the speed of the context switch. We present two techniques to reduce the context switch cost to at most one processor cycle: a context switch is explicitly coded in the opcode, and a context switch buffer is used. The load/store unit emerges as the principal bottleneck. We evaluate four implementation alternatives for the load/store unit to increase processor performance.
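The switch-on-remote-operation idea in this abstract is easy to picture in code. The sketch below is a hypothetical simulation, not the Rhamma microarchitecture: an execution unit runs a thread until it meets an operation destined for another functional unit (a load/store or a synchronization), at which point the thread is switched out and re-queued.

```cpp
// Minimal sketch of switch-on-remote-operation dispatch (hypothetical, not the
// actual Rhamma design): the execution unit hands off any load/store or sync
// operation it meets and immediately switches to the next ready thread context.
#include <cstdio>
#include <deque>
#include <vector>

enum class OpKind { Execute, LoadStore, Sync };

struct Op { OpKind kind; int payload; };

struct ThreadContext {
    int id;
    std::deque<Op> ops;   // remaining instruction stream of this thread
};

int main() {
    std::vector<ThreadContext> threads = {
        {0, {{OpKind::Execute, 1}, {OpKind::LoadStore, 100}, {OpKind::Execute, 2}}},
        {1, {{OpKind::Execute, 3}, {OpKind::Sync, 7}, {OpKind::Execute, 4}}},
    };

    std::deque<int> ready = {0, 1};   // ready queue of thread ids
    while (!ready.empty()) {
        int tid = ready.front();
        ready.pop_front();
        ThreadContext& t = threads[tid];

        // Run on the execution unit until an operation destined for another
        // functional unit (load/store or synchronization) is encountered.
        while (!t.ops.empty() && t.ops.front().kind == OpKind::Execute) {
            std::printf("thread %d: execute op %d\n", t.id, t.ops.front().payload);
            t.ops.pop_front();
        }
        if (!t.ops.empty()) {
            // Context switch: the pending op is handed to its unit; here we
            // simply treat it as completed and requeue the thread.
            std::printf("thread %d: context switch on %s\n", t.id,
                        t.ops.front().kind == OpKind::LoadStore ? "load/store" : "sync");
            t.ops.pop_front();
            ready.push_back(tid);
        }
    }
    return 0;
}
```

With a context switch this cheap (at most one cycle in the paper's design), the latency of the handed-off operation can overlap with useful work from the other threads.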
Citations: 28
Construction of multimedia server in a distributed multimedia system
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574040
Xiaoqiang Fei, P. Shi
The framework for constructing a distributed multimedia system based on the server/client architecture is described in this paper. We focus on realizing synchronized presentation of different media in a multimedia application, and a set of QoS (quality of service) parameters is given as a criterion for trading off the overall performance of the system against the synchronized presentation in each multimedia application.
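The abstract does not list the QoS parameters themselves, so the sketch below is purely illustrative: hypothetical per-stream parameters and a toy admission rule showing how such parameters can drive the trade-off between overall server load and per-application synchronization quality.

```cpp
// Hypothetical sketch of a QoS parameter set for media synchronization; the
// parameter names and the admission rule are illustrative assumptions, not
// the ones defined in the paper.
#include <cstdio>
#include <vector>

struct StreamQoS {
    double rate_hz;          // presentation rate (video frames or audio blocks per second)
    double max_skew_ms;      // tolerated inter-media skew (e.g. audio vs. video)
    double bandwidth_kbps;   // bandwidth reserved at the server
};

// Admit a new stream only if the server can still honour every stream's
// bandwidth reservation; otherwise the trade-off is to refuse (or degrade) it.
bool admit(std::vector<StreamQoS>& active, const StreamQoS& request,
           double server_capacity_kbps) {
    double used = 0.0;
    for (const auto& s : active) used += s.bandwidth_kbps;
    if (used + request.bandwidth_kbps > server_capacity_kbps) return false;
    active.push_back(request);
    return true;
}

int main() {
    std::vector<StreamQoS> active;
    StreamQoS video{25.0, 80.0, 1500.0}, audio{50.0, 80.0, 128.0};
    std::printf("video admitted: %d\n", admit(active, video, 2000.0));
    std::printf("audio admitted: %d\n", admit(active, audio, 2000.0));
    return 0;
}
```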
Citations: 1
An effective parallelizing scheme of MPEG-1 video encoding on Ethernet-connected workstations
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574007
J. Nang, Junwha Kim
Although MPEG-1 Video is a promising and the most widely used moving picture compression standard, it requires a lot of computational resources to encode moving pictures with a reasonable frame size and quality. In this paper we propose and implement an efficient parallelizing scheme for an MPEG-1 Video encoding algorithm on Ethernet-connected workstations, which is the most widely available computing environment nowadays. In this parallelizing scheme, slice-level, frame-level, and GOP (Group of Pictures)-level parallelisms are identified as the attractive parallelisms that can be exploited on Ethernet-connected workstations. Three efficient parallel implementation schemes that take the communication characteristics of Ethernet-connected workstations into account are also proposed and evaluated. A series of experiments using thirty workstations shows that the MPEG-1 Video encoding time can be reduced in proportion to the number of workstations used in the encoding computation, although there is a saturation point in the speedup graphs.
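Of the three parallelism levels named in the abstract, GOP-level parallelism is the easiest to picture, because each Group of Pictures can be encoded independently. The sketch below is illustrative only: the paper distributes GOPs to workstations over Ethernet, whereas here local threads stand in for the workers and encode_gop() is a placeholder for the real MPEG-1 encoder.

```cpp
// Illustrative sketch of GOP-level parallelism: each worker encodes whole
// Groups of Pictures independently of the others.
#include <cstdio>
#include <thread>
#include <vector>

void encode_gop(int gop_index) {
    // Placeholder for intra/inter coding of the frames in this GOP.
    std::printf("GOP %d encoded\n", gop_index);
}

int main() {
    const int num_gops = 12;
    const int num_workers = 4;
    std::vector<std::thread> workers;

    for (int w = 0; w < num_workers; ++w) {
        workers.emplace_back([w, num_gops, num_workers] {
            // Static round-robin assignment: worker w takes GOPs w, w+P, w+2P, ...
            for (int g = w; g < num_gops; g += num_workers) encode_gop(g);
        });
    }
    for (auto& t : workers) t.join();
    return 0;
}
```

A static round-robin assignment like this is only the simplest of many possible schemes; the paper evaluates three schemes tuned to the communication characteristics of Ethernet-connected workstations.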
Citations: 16
Precise dependence test for scalars within nested loops
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574055
Gao Nianshu, Zhaoqing Zhang, Ruliang Qiao
Exact direction and distance vectors are essential for detecting hierarchical parallelism and for examining the legality of loop transformations for a multi-level loop nest. Much of this work has concentrated on array references. Little has been done to address the problem of finding precise dependences between scalar references, except to use extended SSA form with factored use-def links. In this paper, we present a technique for calculating precise direction and distance vectors for scalar references within nested loops without using any form of SSA. To do this, we use conventional use-def links in combination with joint dominator and joint postdominator relationships, which extend the dominator and postdominator relationships of standard data flow analysis. The precision of the dependence information gathered by our algorithm cannot be achieved by traditional dominator or reaching-definitions analysis.
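As a concrete illustration of what such a test must report (this example is ours, not taken from the paper), consider a scalar that is written and read inside a two-level nest; the comments give the direction and distance vectors a precise scalar dependence test should produce.

```cpp
// Worked example of the direction/distance vectors for a scalar t that is
// written and read in every (i, j) iteration of a two-level loop nest.
void kernel(float a[100][100], float b[100][100]) {
    float t = 0.0f;
    for (int i = 1; i < 100; ++i) {
        for (int j = 1; j < 100; ++j) {
            t = a[i][j] * 2.0f;      // S1: definition of t
            b[i][j] = t + b[i][j];   // S2: use of t
            // Dependences on t:
            //   S1 -> S2: flow (true) dependence, direction (=, =), distance (0, 0)
            //   S2 -> S1: anti dependence carried by the j loop, direction (=, <)
            //   S1 -> S1: output dependence carried by the j loop, direction (=, <)
            //             (both are also carried by the i loop with direction (<, *))
            // A precise test reports exactly these vectors, which tells the
            // compiler that privatizing t removes every loop-carried dependence
            // and both loops can then run in parallel.
        }
    }
}
```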
Citations: 0
Adaptive hybrid scheduling of nonuniform loops on UMA models
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574059
Hua-ping Chen, Jing Li, Guoliang Chen
It is very difficult to maintain load balance among processors for a nonuniform loop at compile time, and dynamic methods come at the price of extra overhead. This paper proposes an adaptive hybrid scheduling scheme in which the distribution of the loop is divided into a few rounds, and the block size in each round is determined adaptively according to the average overhead incurred by dynamic scheduling. Several experimental results also expose the effect of the scheduling parameter, which can be selected by programmers according to the probability that a fetching processor will not perform an additional task fetch.
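The abstract does not give the block-size formula, so the sketch below only illustrates the general shape of such a scheme: iterations are handed out in rounds, the chunk fetched per round shrinks as the loop drains, and a tunable fraction plays the role of the scheduling parameter mentioned above. The rule used here is an illustrative assumption, not the paper's exact formula.

```cpp
// Hedged sketch of round-based adaptive chunking for a nonuniform loop: each
// fetch takes a fraction of the remaining iterations divided across the
// processors, so early chunks are large and later chunks shrink.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<long> next_iter{0};
const long total_iters = 100000;
const int num_procs = 4;
const double round_fraction = 0.5;   // scheduling parameter: share handed out per round

void work(long i) { (void)i; /* nonuniform iteration body goes here */ }

void worker() {
    for (;;) {
        long seen = next_iter.load();
        long remaining = total_iters - seen;
        if (remaining <= 0) break;
        // Chunk size for this fetch, recomputed as the loop drains.
        long chunk = std::max(1L, static_cast<long>(remaining * round_fraction) / num_procs);
        long start = next_iter.fetch_add(chunk);
        long end = std::min(start + chunk, total_iters);
        for (long i = start; i < end; ++i) work(i);
    }
}

int main() {
    std::vector<std::thread> procs;
    for (int p = 0; p < num_procs; ++p) procs.emplace_back(worker);
    for (auto& t : procs) t.join();
    std::printf("all %ld iterations done\n", total_iters);
    return 0;
}
```

Larger chunks mean fewer fetches and therefore less dynamic-scheduling overhead, but a higher risk of imbalance on a nonuniform loop; that is exactly the trade-off an adaptive rule is meant to manage.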
Citations: 0
Efficient implementation of portable C*-like data-parallel library in C++
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574061
Motohiko Matsuda, M. Sato, Y. Ishikawa
The C* language is a data-parallel extension of the C language which incorporates parallel data types. Since the C++ language provides operator overloading, a C++ library can implement the C* parallel extensions with a similar syntax. Although library implementations are highly portable, some overheads make them impractical. The two major overheads incurred are the temporaries created in each operator application and the inability to detect regular communication patterns. The C++ overloading mechanism forces a temporary for each operator application, and regular communications in C* are syntactically indistinguishable from general point-to-point communications. We tackle these problems extensively in a library. The template mechanism, a type parameterization facility in C++, is used to eliminate temporaries by delaying operator application and evaluating the entire expression at once. The polymorphic type dispatch mechanism is used to detect regular communications by assigning particular types to potentially regular communications. We have implemented the library on the CM-5 and compared its performance with the C* compiler using three simple examples. The techniques presented offer performance comparable to the C* compiler: close to it or at most 1.5 times slower in two examples, and even faster in one example.
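The "delay operator application and evaluate the entire expression at once" technique is what later became known as expression templates. The sketch below is a generic illustration of that technique under toy class names of our own choosing, not the library's actual design.

```cpp
// Minimal expression-template sketch: operator+ builds a lightweight node
// instead of computing a result, and the assignment evaluates the whole
// expression in a single loop, eliminating intermediate temporaries.
#include <cstddef>
#include <cstdio>
#include <type_traits>
#include <vector>

template <class L, class R>
struct AddExpr {                    // represents l + r without computing it yet
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct ParallelVec {                // stands in for a C*-style parallel value
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    template <class E>
    ParallelVec& operator=(const E& e) {   // one loop over the whole expression
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// Restrict the lazy operator+ to expression-like operands.
template <class T> struct is_expr : std::false_type {};
template <> struct is_expr<ParallelVec> : std::true_type {};
template <class L, class R> struct is_expr<AddExpr<L, R>> : std::true_type {};

template <class L, class R,
          class = std::enable_if_t<is_expr<L>::value && is_expr<R>::value>>
AddExpr<L, R> operator+(const L& l, const R& r) { return {l, r}; }

int main() {
    ParallelVec a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, d;
    d = a + b + c;   // builds nested AddExpr objects, then one evaluation pass
    std::printf("%g %g %g\n", d[0], d[1], d[2]);
    return 0;
}
```

Because `a + b + c` only builds lightweight AddExpr objects, the assignment runs a single elementwise loop instead of allocating a temporary vector per operator application.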
Citations: 2
ATOLL: a high-performance communication device for parallel systems
Pub Date: 1997-03-19 DOI: 10.1109/APDC.1997.574037
U. Bruening, Lambert Schaelicke
Fast and efficient communication is one of the major design goals not only for parallel systems but also for clusters of workstations. The proposed model of the high-performance communication device ATOLL features very low latency for the start of communication operations and reduces the software overhead of communication-specific functions. To close the gap between off-the-shelf microprocessors and the communication system, a highly sophisticated processor interface implements atomic start of communication, MMU support, and a flexible event scheduling scheme. The interconnectivity of ATOLL, provided by four independent network ports combined with cut-through routing, allows the configuration of a large variety of network topologies. A software-transparent error correction mechanism significantly reduces the required protocol overhead. The presented simulation results promise high-performance and low-latency communication.
Citations: 11