
2009 Fifth IEEE International Conference on e-Science: Latest Publications

An Architecture for Real Time Data Acquisition and Online Signal Processing for High Throughput Tandem Mass Spectrometry
Pub Date : 2009-12-09 DOI: 10.1109/E-SCIENCE.2009.21
A. Shah, N. Jaitly, Nino Zuljevic, M. Monroe, A. Liyu, A. Polpitiya, J. Adkins, M. Belov, G. Anderson, Richard D. Smith, I. Gorton
Independent, greedy collection of data events using simple heuristics results in massive over-sampling of the prominent data features in large-scale studies over what should be achievable through “intelligent,” online acquisition of such data. As a result, data generated are more aptly described as a collection of a large number of small experiments rather than a true large-scale experiment. Nevertheless, achieving “intelligent,” online control requires tight interplay between state-of-the-art, data-intensive computing infrastructure developments and analytical algorithms. In this paper, we propose a Software Architecture for Mass spectrometry-based Proteomics coupled with Liquid chromatography Experiments (SAMPLE) to develop an “intelligent” online control and analysis system to significantly enhance the information content from each sensor (in this case, a mass spectrometer). Using online analysis of data events as they are collected and decision theory to optimize the collection of events during an experiment, we aim to maximize the information content generated during an experiment by the use of pre-existing knowledge to optimize the dynamic collection of events.
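The decision-theoretic control loop described above can be illustrated with a toy selection rule. This is a hypothetical sketch, not the SAMPLE implementation: each detected precursor ion is scored by its intensity discounted by how often it has already been fragmented, capturing the stated goal of steering acquisition away from over-sampled prominent features.

```python
# Hypothetical acquisition heuristic (not the paper's algorithm): prefer the
# precursor ion with the highest intensity, discounted by how many times it
# has already been selected for fragmentation in this experiment.
def select_precursor(peaks, sample_counts):
    """peaks: list of (mz, intensity); sample_counts: {mz: times fragmented}."""
    def score(peak):
        mz, intensity = peak
        return intensity / (1 + sample_counts.get(mz, 0))
    return max(peaks, key=score)

peaks = [(445.12, 9000.0), (512.30, 12000.0), (610.44, 5000.0)]
best = select_precursor(peaks, {512.30: 3})  # picks 445.12 over the over-sampled 512.30
```

A production controller would replace this score with an expected-information-gain estimate informed by prior knowledge, which is the role decision theory plays in the proposed architecture.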
Citations: 3
Alfalab: Construction and Deconstruction of a Digital Humanities Experiment
Pub Date : 2009-12-09 DOI: 10.1109/E-SCIENCE.2009.8
J. Zundert, D. Zeldenrust, A. Beaulieu
This paper presents the 'Alfalab' project. Alfalab is a collaborative framework project of the Royal Netherlands Academy of Arts and Sciences (KNAW). It explores the success and failure factors for virtual research collaboration and supporting digital infrastructure in the Humanities. It does so by delivering a virtual research environment engineered through a virtual R&D collaborative and by drawing in use cases and feedback from Humanities researchers in two research fields: historical text research and historical GIS applications. The motivation for the project lies in a number of commonly cited factors that appear to inhibit the general application of virtualized research infrastructure in the Humanities. The paper outlines the project's motivation, key characteristics and implementation. One of the pilot applications is described in greater detail.
Citations: 6
An Ontology Based Framework for the Preservation of Interactive Multimedia Performances
Pub Date : 2009-12-09 DOI: 10.1109/e-Science.2009.14
K. Ng, Eleni Mikroyannidi, B. Ong, D. Giaretta
Interactive multimedia and human-computer interaction technologies are affecting and contributing to a wide range of developments in many different subject areas, including the contemporary performing arts. These technologies have facilitated the development and advancement of augmented and virtual instruments for interactive music performance, interactive installations, many aspects of technology-enhanced learning (TEL), and more. These systems typically involve several different digital objects, including software as well as data necessary for the performance and/or data captured or generated during the performance that may be invaluable for understanding it. Consequently, the preservation of interactive multimedia systems and performances is an important step towards ensuring possible future re-performances and preserving the artistic style and heritage of the art form. This paper presents the CASPAR framework (developed within the CASPAR EC IST project) for the preservation of Interactive Multimedia Performances (IMP) and introduces an IMP archival system developed on the basis of the CASPAR framework and components. This paper also discusses the main functionalities and validation of the IMP archival system.
Citations: 4
Scheduling Multiple Parameter Sweep Workflow Instances on the Grid
Pub Date : 2009-12-09 DOI: 10.1109/e-Science.2009.49
Sucha Smanchat, M. Indrawan, Sea Ling, C. Enticott, D. Abramson
Due to its ability to provide a high-performance computing environment, the grid has become an important infrastructure for supporting eScience. To utilise the grid for parameter sweep experiments, workflow technology combined with tools such as Nimrod/K is used to orchestrate and automate scientific services provided on the grid. As a parameter sweep over a workflow needs to be executed numerous times, it is more efficient to execute multiple instances of the workflow in parallel. However, this parallel execution can be delayed because every workflow instance requires the same set of resources, leading to a resource-competition problem. Although many algorithms exist for scheduling grid workflows, little effort has gone into considering multiple workflow instances and resource competition in the scheduling process. In this paper, we propose a scheduling algorithm for parameter sweep workflows based on resource competition. The proposed algorithm aims to support multiple workflow instances and to avoid allocating highly contended resources, minimising delay due to the blocking of tasks. The result is evaluated in simulation against an existing scheduling algorithm.
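The competition-avoidance idea can be reduced to a greedy sketch. The code below illustrates the general principle, not the authors' algorithm: each ready task, drawn from any of the parallel workflow instances, goes to the eligible resource with the lightest current load, so heavily contended resources are bypassed.

```python
# Illustrative competition-aware scheduler (not the paper's algorithm): assign
# each ready task to the resource with the fewest tasks already queued.
def schedule(tasks, resources):
    """tasks: list of task ids; resources: {resource: current queue length}."""
    load = dict(resources)
    assignment = {}
    for task in tasks:
        target = min(load, key=load.get)  # least-contended resource wins
        assignment[task] = target
        load[target] += 1                 # the new task adds to that queue
    return assignment

# Three tasks from parallel workflow instances over two initially idle resources:
plan = schedule(["t1", "t2", "t3"], {"r1": 0, "r2": 0})
# plan == {"t1": "r1", "t2": "r2", "t3": "r1"}
```

A real grid scheduler would also weigh task runtimes and data transfer costs; here the queue length stands in for the paper's notion of resource competition.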
Citations: 22
Integrating Full-Text Search and Linguistic Analyses on Disperse Data for Humanities and Social Sciences Research Projects
Pub Date : 2009-12-09 DOI: 10.1109/E-SCIENCE.2009.12
Marta Villegas, Carla Parra
The research reported in this paper is part of the activities carried out within the CLARIN (Common Language Resources and Technology Infrastructure) project, a large-scale pan-European project to create, coordinate and make Language Resources and Technologies (LRT) available and readily usable. CLARIN is devoted to the creation of a persistent and stable infrastructure serving the needs of the European Humanities and Social Sciences (HSS) research community. HSS researchers will be able to efficiently access distributed resources and apply analysis and exploitation tools relevant to their research. Here we present a real use case addressed as a CLARIN scenario and the implementation of a demonstrator that enables us to foresee potential problems and contributes to the planning of the implementation phase. It deals with how to support researchers interested in harvesting and analyzing data from historical press archives. We therefore address the integration and interoperability of distributed and heterogeneous research data and analysis tools.
Citations: 1
CHIC - Converting Hamburgers into Cows
Pub Date : 2009-12-09 DOI: 10.1109/e-Science.2009.54
J. Townsend, J. Downing, Peter Murray-Rust
We have developed a methodology and workflow (CHIC) for the automatic semantification and structuring of legacy textual scientific documents. CHIC imports common document formats (PDF, DOCX and (X)HTML) and uses a number of toolkits to extract components and convert them into SciXML. This is sectioned into text-rich and data-rich streams, and stand-off annotation (SAF) is created for each. Embedded domain-specific objects can be converted into CML (Chemical Markup Language). The different workflow streams can then be recombined and typically converted into RDF (Resource Description Framework).
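The split into text-rich and data-rich streams can be pictured with a toy router. The unit-matching regex and record layout below are invented for illustration; CHIC itself operates on SciXML with stand-off (SAF) offsets rather than raw strings.

```python
import re

# Toy version of the stream split: paragraphs go to a text-rich stream, and any
# recognizable data fragments (here, just numbers with a few units) are routed
# into a data-rich stream keyed back to their source paragraph.
def split_streams(paragraphs):
    text_stream, data_stream = [], []
    for i, para in enumerate(paragraphs):
        values = re.findall(r"\d+(?:\.\d+)?\s*(?:mg|mL|°C)", para)
        if values:
            data_stream.append({"para": i, "values": values})
        text_stream.append({"para": i, "text": para})
    return text_stream, data_stream

text, data = split_streams(["The sample was heated to 50 °C.", "Prior work is reviewed."])
# data == [{"para": 0, "values": ["50 °C"]}]
```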
Citations: 3
A Methodology for File Relationship Discovery
Pub Date : 2009-12-09 DOI: 10.1109/e-Science.2009.35
M. Ondrejcek, Jason Kastner, R. Kooper, P. Bajcsy
This paper addresses the problem of discovering temporal and contextual relationships across the document, data, and software categories of electronic records. We designed a methodology to discover unknown relationships by conducting file system and file content analyses. The work also investigates the automation of metadata extraction from engineering drawings and the storage requirements for metadata extraction. The methodology has been applied to extracting information from a test collection of electronic records about the Navy ship TWR 841 archived by the US National Archives and Records Administration (NARA). This test collection represents a problem of unknown relationships among files that include 784 2D image drawings and 22 CAD models.
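One simple contextual-relationship heuristic of the kind such a methodology might apply can be sketched as follows. The drawing-number pattern and file names are hypothetical, not drawn from the TWR 841 collection.

```python
import re
from collections import defaultdict

# Hypothetical heuristic: files sharing an embedded drawing number are related,
# regardless of category (2D image drawing vs. CAD model).
def group_by_drawing_number(filenames):
    groups = defaultdict(list)
    for name in filenames:
        m = re.search(r"\d{4,}", name)  # assumed drawing-number pattern
        if m:
            groups[m.group(0)].append(name)
    # Only groups with more than one member represent a discovered relationship.
    return {num: names for num, names in groups.items() if len(names) > 1}

files = ["drawing_10234.tif", "10234_hull.dwg", "notes.txt"]
related = group_by_drawing_number(files)
# related == {"10234": ["drawing_10234.tif", "10234_hull.dwg"]}
```

Temporal relationships would be discovered analogously from file-system timestamps rather than names.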
Citations: 0
A High-Performance Hybrid Computing Approach to Massive Contingency Analysis in the Power Grid
Pub Date : 2009-12-09 DOI: 10.1109/e-Science.2009.46
I. Gorton, Zhenyu Huang, Yousu Chen, Benson Kalahar, Shuangshuang Jin, D. Chavarría-Miranda, Douglas J. Baxter, J. Feo
Operating the electrical power grid to prevent blackouts is a complex task. An important aspect of this is contingency analysis, which involves understanding and mitigating potential failures in power grid elements such as transmission lines. When taking into account the potential for multiple simultaneous failures (known as the N-x contingency problem), contingency analysis becomes a massively computational task. In this paper we describe a novel hybrid computational approach to contingency analysis. This approach exploits the unique graph processing performance of the Cray XMT in conjunction with a conventional massively parallel compute cluster to identify likely simultaneous failures that could cause widespread cascading power failures with massive economic and social impact. The approach has the potential to provide the first practical and scalable solution to the N-x contingency problem. When deployed in power grid operations, it will increase grid operators' ability to deal effectively with outages and failures of power grid components while preserving stable and safe operation of the grid. The paper describes the architecture of our solution and presents preliminary performance results that validate the efficacy of our approach.
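The combinatorial growth that makes N-x analysis massively computational is easy to make concrete. This sketch only enumerates the failure cases; screening each case against the grid model is where the paper's hybrid Cray XMT/cluster approach comes in.

```python
from itertools import combinations

# Enumerate all size-x simultaneous-failure cases over a set of grid elements.
# For a realistic grid with thousands of elements, the N-2 case count alone
# runs into the millions, hence the need for massive parallelism.
def contingency_cases(elements, x):
    return list(combinations(elements, x))

lines = [f"line{i}" for i in range(10)]
n1 = contingency_cases(lines, 1)  # 10 single outages (N-1)
n2 = contingency_cases(lines, 2)  # 45 simultaneous pairs (N-2)
```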
Citations: 55
ICAT: Integrating Data Infrastructure for Facilities Based Science
Pub Date : 2009-12-09 DOI: 10.1109/E-SCIENCE.2009.36
D. Flannery, B. Matthews, T. Griffin, J. Bicarregui, M. Gleaves, L. Lerusse, Roger Downing, A. Ashton, Shoaib Sufi, G. Drinkwater, K. K. Dam
Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility-generated experimental data which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.
Citations: 32
Extracting and Ingesting DDI Metadata and Digital Objects from a Data Archive into the iRODS Extension of the NARA TPAP Using the OAI-PMH
Pub Date : 2009-12-09 DOI: 10.1109/e-Science.2009.34
J. Ward, A. D. Torcy, Mason Chua, Jon Crabtree
This prototype demonstrated that the migration of collections between digital libraries and preservation data archives is now possible using automated batch loading for both data and metadata. We used this capability to enable collection interoperability between the H.W. Odum Institute for Research in Social Science (Odum) Data Archive and the integrated Rule Oriented Data System (iRODS) extension of the National Archives and Records Administration's (NARA) Transcontinental Persistent Archive Prototype (TPAP). We extracted data and metadata from a Dataverse data archive and ingested them into the iRODS server and metadata catalog using the OAI-PMH, Java, XML/XSL, and iRODS rules and microservices. We validated the ingest of the files and retained the required Terms & Conditions for the social science data after ingest.
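The OAI-PMH harvesting step can be sketched with the standard library alone. The sample response below is fabricated for illustration (the identifier is not a real Odum record); only the OAI-PMH namespace and the header/identifier/datestamp element names come from the protocol.

```python
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"  # OAI-PMH 2.0 namespace

def parse_records(xml_text):
    """Pull identifier and datestamp from each record header in a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [
        {"identifier": h.findtext(OAI + "identifier"),
         "datestamp": h.findtext(OAI + "datestamp")}
        for h in root.iter(OAI + "header")
    ]

sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><header>
    <identifier>oai:example:study-001</identifier>
    <datestamp>2009-06-01</datestamp>
  </header></record></ListRecords>
</OAI-PMH>"""
records = parse_records(sample)
# records == [{"identifier": "oai:example:study-001", "datestamp": "2009-06-01"}]
```

A full harvester would also follow the protocol's resumptionToken to page through large result sets before mapping the metadata into iRODS.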
Citations: 9