
Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering: Latest Publications

AI Based Performance Benchmarking & Analysis of Big Data and Cloud Powered Applications: An in Depth View
Jayanti Vemulapati, Anuruddha S. Khastgir, Chethana Savalgi
Big data analytics platforms on the cloud are becoming mainstream technology, enabling cost-effective, rapid deployment of customers' Big Data applications and delivering quicker insights from their data. It is therefore even more imperative to have high-performing platform infrastructure and applications at a reasonable cost. This is only possible if we move beyond the traditional approach to executing and measuring performance by adopting AI techniques such as Machine Learning (ML) and a predictive approach to performance benchmarking for every application domain. This paper proposes a high-level conceptual model for automated performance benchmarking that includes an execution engine designed to support a self-service model covering automated benchmarking in every application domain. The automated engine is supported by performance-scaling recommendations derived via prescriptive analytics from real performance data sets. We further extend the recommendation capabilities of our self-service automated engine by introducing predictive analytics, making it more flexible in addressing 'what-if' scenarios to predict the 'Right Scale' by measuring a "Performance Cost Ratio" (PCR). Finally, we present real-world industry examples whose applications have seen performance benefits from the recommendations given by our proposed model.
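The abstract leaves the exact PCR formula unspecified; below is a minimal sketch of how a "Right Scale" recommendation might rank candidate cluster sizes, assuming PCR is simply throughput per unit cost. All names and numbers are illustrative, not from the paper.

```python
# Hypothetical sketch: ranking candidate cluster sizes by a
# "Performance Cost Ratio" (PCR). The paper does not give the exact
# formula; here PCR is assumed to be throughput per unit cost.

candidates = [
    # (nodes, measured throughput in jobs/hour, cost in $/hour)
    (4, 1200.0, 8.0),
    (8, 2100.0, 16.0),
    (16, 2600.0, 32.0),
]

def pcr(throughput, cost):
    """Assumed definition: performance delivered per dollar spent."""
    return throughput / cost

# the "Right Scale" recommendation is the candidate with the best PCR
best = max(candidates, key=lambda c: pcr(c[1], c[2]))
for nodes, tput, cost in candidates:
    print(f"{nodes:>2} nodes: PCR = {pcr(tput, cost):.1f}")
print("right-scale recommendation:", best[0], "nodes")
```

With these illustrative numbers the smallest cluster wins: throughput grows sub-linearly with nodes while cost grows linearly, so PCR falls as the cluster scales up.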
DOI: https://doi.org/10.1145/3297663.3309676 | Published: 2019-04-04
Citations: 3
Simulation Based Job Scheduling Optimization for Batch Workloads
Dheeraj Chahal, Benny Mathew, M. Nambiar
We present a simulation-based approach for scheduling jobs that are part of a batch workflow. Our objective is to minimize the makespan, defined as the completion time of the last job to leave the system in a batch workflow with dependencies. Existing job schedulers make scheduling decisions based on available cores, memory size, priority, or the execution time of jobs. This does not guarantee a minimum makespan, since contention for resources among concurrently running jobs is ignored. In our approach, prior to scheduling batch jobs on physical servers, we simulate the execution of jobs using a discrete-event simulator. The simulator considers the available cores and available memory bandwidth on distributed systems to accurately simulate the execution of jobs in a concurrent run using resource-contention models. We also propose simulation-based job-scheduling algorithms that use the underlying contention models and minimize the makespan by optimally mapping jobs onto the available nodes. Our approach ensures that job dependencies are adhered to during the simulation. We assess the efficacy of our job-scheduling algorithms and contention models through experiments on a real cluster. Our experimental results show that the simulation-based approach improves the makespan by 15% to 35%, depending on the nature of the workload.
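As a rough illustration of the scheduling problem (not the paper's contention-aware simulator), the sketch below greedily list-schedules a dependent batch onto identical nodes and reports the makespan. The job names, durations, and two-node cluster are invented for the example.

```python
import heapq

# Minimal sketch, assuming identical nodes and no resource contention:
# greedy list scheduling of a dependent batch, reporting the makespan
# (finish time of the last job).

jobs = {"extract": 3, "clean": 2, "train": 5, "report": 1}   # name -> duration
deps = {"clean": {"extract"}, "train": {"clean"}, "report": {"train"}}

def makespan(jobs, deps, nodes=2):
    done_at = {}                      # job -> finish time
    free = [0.0] * nodes              # min-heap of times each node becomes free
    heapq.heapify(free)
    remaining = dict(jobs)
    while remaining:
        # pick a job whose dependencies have all finished (deterministic choice)
        ready = [j for j in remaining if deps.get(j, set()).issubset(done_at)]
        j = min(ready)
        node_free = heapq.heappop(free)
        # a job starts when both its node and its dependencies are ready
        start = max(node_free,
                    max((done_at[d] for d in deps.get(j, set())), default=0.0))
        done_at[j] = start + remaining.pop(j)
        heapq.heappush(free, done_at[j])
    return max(done_at.values())

print(makespan(jobs, deps))
```

Because this example DAG is a pure chain, the second node never helps and the makespan equals the sum of durations (11); a contention-aware simulator like the paper's would additionally stretch job durations when concurrent jobs compete for memory bandwidth.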
DOI: https://doi.org/10.1145/3297663.3310312 | Published: 2019-04-04
Citations: 2
Characterization of a Big Data Storage Workload in the Cloud
Sacheendra Talluri, Alicja Luszczak, Cristina L. Abad, A. Iosup
The proliferation of big data processing platforms has led to radically different system designs, such as MapReduce and the newer Spark. Understanding the workloads of such systems facilitates tuning and could foster new designs. However, whereas MapReduce workloads have been characterized extensively, relatively little public knowledge exists about the characteristics of Spark workloads in representative environments. To address this problem, in this work we collect and analyze a 6-month Spark workload from a major provider of big data processing services, Databricks. Our analysis focuses on a number of key features, such as the long-term trends of reads and modifications, the statistical properties of reads, and the popularity of clusters and of file formats. Overall, we present numerous findings that could form the basis of new systems studies and designs. Our quantitative evidence and its analysis suggest the existence of daily and weekly load imbalances, of heavy-tailed and bursty behaviour, of the relative rarity of modifications, and of a proliferation of big-data-specific formats.
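A small self-contained sketch of the kind of analysis the abstract mentions, checking heavy-tailed read popularity on a synthetic Zipf-like trace. The real study uses Databricks production data, which is not reproduced here; the trace below is generated for illustration.

```python
import random
import collections

# Illustrative sketch (synthetic data, not the Databricks workload):
# a read distribution is heavy-tailed when a small fraction of files
# receives a large share of all reads.

random.seed(7)
files = range(10_000)
# Zipf-like popularity: file i is read with weight proportional to 1/(i+1)
weights = [1.0 / (i + 1) for i in files]
trace = random.choices(list(files), weights=weights, k=100_000)

counts = collections.Counter(trace)
top = sorted(counts.values(), reverse=True)
top1pct = sum(top[: len(counts) // 100])      # reads to the hottest 1% of files
share = top1pct / len(trace)
print(f"top 1% of files receive {share:.0%} of reads")
```

On this synthetic trace roughly half of all reads hit the top 1% of files, the signature of a heavy tail; a uniform workload would put only about 1% of reads there.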
DOI: https://doi.org/10.1145/3297663.3310302 | Published: 2019-04-04
Citations: 11
How is Performance Addressed in DevOps?
C. Bezemer, Simon Eismann, Vincenzo Ferme, Johannes Grohmann, R. Heinrich, Pooyan Jamshidi, Weiyi Shang, A. Hoorn, M. Villavicencio, J. Walter, Felix Willnecker
DevOps is a modern software engineering paradigm that is gaining widespread adoption in industry. The goal of DevOps is to bring software changes into production with high frequency and fast feedback cycles. This conflicts with software quality assurance activities, particularly with respect to performance. For instance, performance evaluation activities, such as load testing, require a considerable amount of time to obtain statistically significant results. We conducted an industrial survey to gain insight into how performance is addressed in industrial DevOps settings. In particular, we were interested in the frequency of performance evaluations, the tools being used, the granularity of the obtained performance data, and the use of model-based techniques. The survey responses, which come from a wide variety of participants across industry sectors, indicate that the complexity of performance engineering approaches and tools is a barrier to widespread adoption of performance analysis in DevOps. The implication of our results is that performance analysis tools need a short learning curve and should be easy to integrate into the DevOps pipeline in order to be adopted by practitioners.
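A brief illustration, not taken from the paper, of why load testing conflicts with fast feedback cycles: the confidence interval on a mean response time narrows only with the square root of the sample count, so statistically significant results require many measurements. The synthetic latencies below are invented for the example.

```python
import math
import statistics

# Sketch under simplifying assumptions (normal approximation, z = 1.96):
# the half-width of a ~95% confidence interval on the mean shrinks
# proportionally to 1/sqrt(n), so tight results need long test runs.

def ci_halfwidth(samples, z=1.96):
    """Approximate 95% CI half-width for the sample mean."""
    return z * statistics.stdev(samples) / math.sqrt(len(samples))

# synthetic response times (ms) with identical spread, different run lengths
small = [100 + (i % 20) for i in range(50)]
large = [100 + (i % 20) for i in range(5000)]

print(f"n=50:   mean ± {ci_halfwidth(small):.2f} ms")
print(f"n=5000: mean ± {ci_halfwidth(large):.2f} ms")
```

Shrinking the interval by a factor of ten requires a hundred times as many samples, which is exactly the time pressure the survey identifies.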
DOI: https://doi.org/10.1145/3297663.3309672 | Published: 2018-08-21
Citations: 53
Simultaneous Solving of Batched Linear Programs on a GPU
Amit Gurung, Rajarshi Ray
Linear Programs (LPs) appear in a large number of applications, and offloading LP solving tasks to a GPU is a viable way to accelerate an application's performance. Existing work on offloading and solving an LP on a GPU shows that performance gains are possible only for large LPs (typically 500 constraints and 500 variables or more). This paper is motivated by applications that must solve not one large LP but many small ones; existing techniques fail to accelerate such applications on a GPU. We propose a batched LP solver in CUDA to accelerate such applications and demonstrate its utility in a use case: state-space exploration of models in control systems design. We also compare the performance of the batched LP solver against sequential solving on a CPU using the open-source GLPK (GNU Linear Programming Kit) solver and the CPLEX solver from IBM. The evaluation on selected LP benchmarks from the Netlib repository shows a maximum speed-up of 95x and 5x with respect to the CPLEX and GLPK solvers respectively, for a batch of 1e5 LPs.
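The paper's solver is CUDA-based; as a purely illustrative stand-in, the sketch below solves a batch of tiny two-variable LPs in plain Python by enumerating constraint-intersection vertices. It conveys the batching use case (many small, independent LPs) without any GPU code, and is not the paper's algorithm.

```python
import itertools

# Toy sketch (not the paper's CUDA solver): solve many small bounded LPs
#   max c.x  s.t.  A x <= b,  x >= 0,  x in R^2
# by brute-force enumeration of the vertices of each feasible region.

def solve_lp2(c, A, b):
    """Optimal objective value of a small, bounded 2-variable LP."""
    # fold the nonnegativity constraints x >= 0 into the constraint list
    rows = A + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = b + [0.0, 0.0]
    best = None
    for i, j in itertools.combinations(range(len(rows)), 2):
        (a1, b1), (a2, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                       # parallel constraints: no vertex
        # Cramer's rule: intersection of constraint lines i and j
        x = (rhs[i] * b2 - rhs[j] * b1) / det
        y = (a1 * rhs[j] - a2 * rhs[i]) / det
        if all(r[0] * x + r[1] * y <= rr + 1e-9 for r, rr in zip(rows, rhs)):
            val = c[0] * x + c[1] * y
            if best is None or val > best:
                best = val
    return best

# a "batch" of independent small LPs, solved in one pass
batch = [
    ([3.0, 2.0], [[1.0, 1.0]], [4.0]),                    # max 3x+2y, x+y<=4
    ([1.0, 1.0], [[2.0, 1.0], [1.0, 3.0]], [4.0, 6.0]),   # two constraints
]
results = [solve_lp2(*lp) for lp in batch]
print(results)
```

On a GPU the point of batching is that each such independent solve maps to its own thread block, amortizing kernel-launch overhead across the whole batch, which is what lets many small LPs beat sequential CPU solving.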
DOI: https://doi.org/10.1145/3297663.3310308 | Published: 2018-02-21
Citations: 6
Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering
Seetharami R. Seelam, P. Tůma, G. Casale, T. Field, J. N. Amaral
We are delighted to bring you an outstanding technical program for the 2013 International Conference on Performance Engineering (ICPE'13) in Prague. The main Research track of the conference attracted 42 submissions. Thanks to the diligent efforts of the members of the Program Committee, each paper received a minimum of four reviews. After extensive deliberation, the Program Committee decided to accept 20 submissions as regular papers and two as short papers. The Industry and Experience track focuses on the application of research results to industrial performance engineering problems and addresses in particular innovative implementations, novel applications of performance-related technologies, and the reporting of insightful performance results. This track received 22 submissions, of which 8 were selected for presentation at the conference. The papers accepted to the Research track and to the Industry and Experience track cover several topics, such as software development and various flavors of modeling, including performance, survivability, and scalability modeling. The development of representative workloads and benchmarks is also well represented. A number of papers focus on performance aspects of cloud-related systems and on more general aspects of scheduling and load balancing. The Vision/Work-in-Progress track is a feature of ICPE that allows researchers to present and discuss ideas they are still working on or planning to work on in the near future. It is a great forum for learning about the direction of research in the area. This year we received 18 submissions to this track and were able to accommodate 10 short presentations in the conference program. The topics covered by this track are similar to those in the main Research track, which suggests that they are likely to feature again at ICPE in the near future. In summary, there were 81 submissions in total across the three tracks, of which 38 were selected for presentation. We are now looking forward to several days of great presentations and stimulating discussions at ICPE 2013 in beautiful Prague. It has been a privilege and a pleasure for us to be involved.
DOI: https://doi.org/10.1145/3297663 | Published: 2013-04-21
Citations: 1