
ASPLOS X: Latest Publications

A comparative study of arbitration algorithms for the Alpha 21364 pipelined router
Pub Date: 2002-10-01 | DOI: 10.1145/605397.605421
Shubhendu S. Mukherjee, F. Silla, P. Bannon, J. Emer, S. Lang, David Webb
Interconnection networks usually consist of a fabric of interconnected routers, which receive packets arriving at their input ports and forward them to appropriate output ports. Unfortunately, network packets moving through these routers are often delayed due to conflicting demand for resources, such as output ports or buffer space. Hence, routers typically employ arbiters that resolve conflicting resource demands to maximize the number of matches between packets waiting at input ports and free output ports. Efficient design and implementation of the algorithm running on these arbiters is critical to maximizing network performance. This paper proposes a new arbitration algorithm called SPAA (Simple Pipelined Arbitration Algorithm), which is implemented in the Alpha 21364 processor's on-chip router pipeline. Simulation results show that SPAA significantly outperforms two earlier well-known arbitration algorithms: PIM (Parallel Iterative Matching) and WFA (Wave-Front Arbiter), the latter implemented in the SGI Spider switch. SPAA outperforms PIM and WFA because SPAA exhibits matching capabilities similar to PIM and WFA under realistic conditions when many output ports are busy, incurs fewer clock cycles to perform the arbitration, and can be pipelined effectively. Additionally, we propose a new prioritization policy called the Rotary Rule, which prevents adverse performance degradation due to network saturation at high loads by prioritizing packets already in the network over new packets generated by caches or memory.
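The abstract contrasts SPAA with PIM (Parallel Iterative Matching). As a rough, software-level illustration of the matching problem these arbiters solve in hardware, the sketch below runs a single PIM-style request/grant/accept round in Python. All names and the single-iteration, first-come accept simplification are ours, not the paper's; real arbiters do this in parallel hardware within a clock cycle or two.

```python
import random

def pim_round(requests, seed=0):
    """One request/grant/accept round of a PIM-style arbiter (sketch).

    requests: dict mapping input port -> set of requested output ports.
    Returns a partial matching {input port: output port}.
    """
    rng = random.Random(seed)
    # Grant phase: each requested output grants one requesting input,
    # chosen at random (as in PIM).
    grants = {}  # output -> list of requesting inputs
    for inp, outs in requests.items():
        for out in outs:
            grants.setdefault(out, []).append(inp)
    granted = {out: rng.choice(inps) for out, inps in grants.items()}
    # Accept phase: each input accepts the first grant it sees.
    # (Real PIM accepts randomly and iterates several rounds to
    # grow the matching; one round suffices to show the idea.)
    matching = {}
    for out, inp in granted.items():
        if inp not in matching:
            matching[inp] = out
    return matching
```

Each round produces a conflict-free partial matching: every input is paired with at most one output and vice versa, which is exactly the invariant the hardware arbiter must maintain.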
Citations: 44
Cool-Mem: combining statically speculative memory accessing with selective address translation for energy efficiency
Pub Date: 2002-10-01 | DOI: 10.1145/605397.605412
R. Ashok, Saurabh Chheda, C. A. Moritz
This paper presents Cool-Mem, a family of memory system architectures that integrate conventional memory system mechanisms, energy-aware address translation, and compiler-enabled cache disambiguation techniques to reduce energy consumption in general-purpose architectures. It combines statically speculative cache access modes, a dynamic CAM-based Tag-Cache used as a backup for statically mispredicted accesses, various conventional multi-level associative cache organizations, embedded protection checking along all cache access mechanisms, as well as architectural organizations to reduce the power consumed by address translation in virtual memory. Because it is based on speculative static information, the approach removes the burden of provable correctness in the compiler analysis passes that extract static information. This makes Cool-Mem applicable to large and complex applications, without limitations due to complexity issues in the compiler passes or the presence of precompiled static libraries. In an extensive evaluation of both SPEC2000 and Mediabench applications, total energy savings of 12% to 20% are obtained in the processor, with performance ranging from a 1.2% degradation to an 8% improvement for the applications studied.
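To make the access flow concrete, here is a toy Python model of a statically speculative access with a Tag-Cache fallback, in the spirit of Cool-Mem's design. The energy constants and function names are invented for illustration and are not the paper's actual cost model.

```python
# Invented relative energy costs (illustrative only): probing the one
# statically predicted line is cheap, the small CAM-based Tag-Cache is a
# bit more, and a full associative lookup is the expensive fallback.
COST_DIRECT, COST_TAG_CACHE, COST_FULL = 1, 2, 8

def speculative_access(tag, predicted_slot, cache, tag_cache):
    """Return (outcome, energy) for one statically speculative cache access.

    cache: dict mapping slot -> resident tag (toy direct-mapped view).
    tag_cache: set of tags held by the backup Tag-Cache.
    """
    energy = COST_DIRECT
    if cache.get(predicted_slot) == tag:       # static prediction correct
        return "hit-direct", energy
    energy += COST_TAG_CACHE
    if tag in tag_cache:                       # backup CAM-based Tag-Cache
        return "hit-tagcache", energy
    energy += COST_FULL                        # conventional associative lookup
    return "full-lookup", energy
```

The energy win comes from the first branch: when the compiler's prediction is usually right, most accesses pay only the cheap direct probe.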
Citations: 17
Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet
Pub Date: 2002-10-01 | DOI: 10.1145/605397.605408
Philo Juang, Hidekazu Oki, Yong Wang, M. Martonosi, L. Peh, D. Rubenstein
Over the past decade, mobile computing and wireless communication have become increasingly important drivers of many new computing applications. The field of wireless sensor networks particularly focuses on applications involving autonomous use of compute, sensing, and wireless communication devices for both scientific and commercial purposes. This paper examines the research decisions and design tradeoffs that arise when applying wireless peer-to-peer networking techniques in a mobile sensor network designed to support wildlife tracking for biology research. The ZebraNet system includes custom tracking collars (nodes) carried by animals under study across a large, wild area; the collars operate as a peer-to-peer network to deliver logged data back to researchers. The collars include a global positioning system (GPS), Flash memory, wireless transceivers, and a small CPU; essentially, each node is a small, wireless computing device. Since there is no cellular service or broadcast communication covering the region where the animals are studied, ad hoc, peer-to-peer routing is needed. Although numerous ad hoc protocols exist, additional challenges arise because the researchers themselves are mobile, so there is no fixed base station towards which to aim data. Overall, our goal is to use the least energy, storage, and other resources necessary to maintain a reliable system with a very high 'data homing' success rate. We plan to deploy a 30-node ZebraNet system at the Mpala Research Centre in central Kenya. More broadly, we believe that the domain-centric protocols and energy tradeoffs presented here for ZebraNet will have general applicability in other wireless and sensor applications.
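The core routing idea, carrying data peer-to-peer until some node reaches the mobile researchers, can be sketched as a simple flooding-style exchange. This is a minimal sketch of one such protocol family (the paper evaluates flooding among others); the class and method names are ours, and the real system must also budget energy and collar storage.

```python
class CollarNode:
    """Toy store-and-forward collar node (flooding-style sketch)."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.log = []          # locally sensed GPS fixes
        self.carried = set()   # records picked up from peers

    def sense(self, fix):
        """Record a GPS fix tagged with this collar's id."""
        self.log.append((self.node_id, fix))

    def meet(self, peer):
        # Peer-to-peer exchange when two animals come into radio range:
        # both sides end up carrying the union of all known records, so
        # whichever node later reaches the researchers can deliver
        # everyone's data.
        combined = set(self.log) | self.carried | set(peer.log) | peer.carried
        self.carried = combined - set(self.log)
        peer.carried = combined - set(peer.log)

    def deliver(self):
        # Called when in range of the mobile base station (the researchers).
        return set(self.log) | self.carried
```

Flooding maximizes the 'data homing' rate at the cost of storage and radio energy, which is exactly the tradeoff the paper explores against history-based alternatives.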
Citations: 2294
Bytecode fetch optimization for a Java interpreter
Pub Date: 2002-10-01 | DOI: 10.1145/605397.605404
Kazunori Ogata, H. Komatsu, T. Nakatani
Interpreters play an important role in many languages, and their performance is critical, particularly for the popular language Java. The performance of the interpreter is important even for high-performance virtual machines that employ just-in-time compiler technology, because there are advantages in delaying the start of compilation and in reducing the number of target methods to be compiled. Many techniques have been proposed to improve the performance of various interpreters, but none of them has fully addressed the issues of minimizing redundant memory accesses and the overhead of indirect branches inherent to interpreters running on superscalar processors. These issues are especially serious for Java because each bytecode is typically one or a few bytes long, and the execution routine for each bytecode is also short due to the low-level, stack-based semantics of Java bytecode. In this paper, we describe three novel techniques of our Java bytecode interpreter, write-through top-of-stack caching (WT), position-based handler customization (PHC), and position-based speculative decoding (PSD), which ameliorate these problems for the PowerPC processors. We show how each technique contributes to improving the overall performance of the interpreter for major Java benchmark programs on an IBM POWER3 processor. Among the three, PHC is the most effective. We also show that the main source of memory accesses is bytecode fetches and that PHC successfully eliminates the majority of them, while keeping the instruction cache miss ratios small.
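To illustrate the WT (write-through top-of-stack caching) idea in the simplest possible terms, the sketch below keeps the top of the operand stack in a local variable `tos` while also writing every update through to stack memory. This is our own toy stack machine, not the paper's interpreter; the real payoff comes from keeping `tos` in a machine register in compiled native code.

```python
def interpret(code):
    """Tiny stack interpreter with write-through top-of-stack caching.

    `tos` mirrors stack[-1] at all times: every update is written both to
    the cached copy and through to stack memory (the WT policy), so reads
    of the top of stack never touch memory.
    """
    stack = []     # operand stack memory
    tos = None     # cached top of stack ("register" in a real interpreter)
    pc = 0
    while pc < len(code):
        op = code[pc]
        pc += 1
        if op == "push":
            tos = code[pc]
            pc += 1
            stack.append(tos)      # write-through: memory kept in sync
        elif op == "add":
            b = stack.pop()        # b equals the cached tos
            a = stack.pop()
            tos = a + b
            stack.append(tos)      # write-through
        elif op == "ret":
            return tos             # served from the cached copy
    return tos
```

Because the write-through policy keeps memory consistent at every step, handlers never need a "spill" case, which is what makes the scheme simple enough to combine with handler customization.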
Citations: 10
Enhancing software reliability with speculative threads
Pub Date: 2002-10-01 | DOI: 10.1145/605397.605417
Jeffrey T. Oplinger, M. Lam
This paper advocates the use of a monitor-and-recover programming paradigm to enhance the reliability of software, and proposes an architectural design that allows software and hardware to cooperate in making this paradigm more efficient and easier to program. We propose that programmers write monitoring functions assuming simple sequential execution semantics. Our architecture speeds up the computation by executing the monitoring functions speculatively in parallel with the main computation. For recovery, programmers can define fine-grain transactions whose side effects, including all register modifications and memory writes, can either be committed or aborted under program control. Transactions are implemented efficiently by treating them as speculative threads. Our experimental results suggest that monitored execution is more amenable to parallelization than regular program execution. Code monitoring is sped up by a factor of 1.5 by exploiting single-thread instruction-level parallelism, and by an additional factor of 1.6 using thread-level speculation. This results in an overall speedup of 2.5 times and a sustained performance of 5.4 instructions per cycle. A monitored execution that was previously 2.5 times slower runs with only a 12% degradation compared to the baseline machine. We also show that the concept of fine-grain transactional programming is useful in catching buffer overrun errors through a number of real-life examples.
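The commit/abort semantics of the fine-grain transactions described above can be sketched in software. In the paper, side effects are buffered by speculative-thread hardware; here a shadow map stands in, purely to show how a monitoring function can discard a transaction's effects under program control. The class and method names are ours.

```python
class Transaction:
    """Fine-grain transaction sketch over a plain dict 'memory'."""

    def __init__(self, memory):
        self.memory = memory
        self.writes = {}            # buffered, uncommitted side effects

    def read(self, addr):
        # A transaction's reads see its own buffered writes first.
        return self.writes.get(addr, self.memory.get(addr))

    def write(self, addr, value):
        self.writes[addr] = value   # buffered until commit

    def commit(self):
        # Make all buffered side effects visible (hardware would
        # retire the speculative thread here).
        self.memory.update(self.writes)
        self.writes.clear()

    def abort(self):
        self.writes.clear()         # discard every side effect

```

In the monitor-and-recover pattern, a monitoring function that detects a violation (e.g., a buffer overrun) calls `abort()`, and main memory is left exactly as it was before the transaction began.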
Citations: 102