
Linked Open Data - Applications, Trends and Future Developments: Latest Publications

Linked Open Data: State-of-the-Art Mechanisms and Conceptual Framework
Pub Date : 2020-10-30 DOI: 10.5772/intechopen.94504
Kingsley Okoye
Today, one of the state-of-the-art technologies that has shown its importance for data integration and analysis is linked open data (LOD) systems and applications. LOD consists of machine-readable resources and mechanisms that are useful for describing data properties. However, one of the issues with existing systems and data models is the need not only to represent the derived information (data) in formats that humans can easily understand, but also to create systems that are able to process the information they contain or support. Technically, the main mechanism for developing such data- or information-processing systems is the aggregation or computation of metadata descriptions for the various process elements. This is because there is, now more than ever, a need for a more generalized and standard definition of data (or information) in order to build systems capable of providing understandable formats for the different data types and sources. To this effect, this chapter proposes a semantic-based linked open data framework (SBLODF) that integrates the different elements (entities) within information systems or models with semantics (metadata descriptions) to produce explicit and implicit information in response to users' searches or queries. In essence, this work introduces a machine-readable and machine-understandable system that proves useful for encoding knowledge about different process domains, as well as for providing the discovered information (knowledge) at a more conceptual level.
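As a loose illustration of the kind of machine-readable metadata descriptions and query-driven retrieval the abstract describes, the Python sketch below encodes a few process elements as RDF triples with rdflib and answers a user query with SPARQL. The namespace, entity names, and query are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of machine-readable metadata descriptions for process
# elements, retrieved with a semantic query. The ex: namespace, entities,
# and properties are illustrative assumptions, not the chapter's model.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/sblodf/")

g = Graph()
g.bind("ex", EX)

# Metadata descriptions (semantics) attached to process elements.
g.add((EX.ReviewOrder, RDF.type, EX.ProcessActivity))
g.add((EX.ReviewOrder, RDFS.label, Literal("Review purchase order")))
g.add((EX.ReviewOrder, EX.performedBy, EX.Clerk))
g.add((EX.Clerk, RDF.type, EX.Role))
g.add((EX.ApproveOrder, RDF.type, EX.ProcessActivity))
g.add((EX.ApproveOrder, EX.follows, EX.ReviewOrder))

# A user query: which activities follow an activity performed by a Clerk?
results = g.query("""
    PREFIX ex: <http://example.org/sblodf/>
    SELECT ?activity WHERE {
        ?activity ex:follows ?prev .
        ?prev ex:performedBy ex:Clerk .
    }
""")
for row in results:
    print(row.activity)   # -> http://example.org/sblodf/ApproveOrder
```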
Citations: 2
Financial Time Series Analysis via Backtesting Approach
Pub Date : 2020-10-27 DOI: 10.5772/intechopen.94112
Monday Osagie Adenomon
This book chapter investigated the place of the backtesting approach in financial time series analysis when choosing a reliable Generalized Auto-Regressive Conditional Heteroscedastic (GARCH) model to analyze stock returns in Nigeria. To achieve this, the chapter used secondary data collected from www.cashcraft.com under stock trend and analysis. Daily Zenith Bank stock prices were collected from October 21st, 2004 to May 8th, 2017. The chapter used nine different GARCH models with a maximum lag of 2: standard GARCH (sGARCH), Glosten-Jagannathan-Runkle GARCH (gjrGARCH), exponential GARCH (eGARCH), integrated GARCH (iGARCH), asymmetric power ARCH (apARCH), threshold GARCH (TGARCH), non-linear GARCH (NGARCH), non-linear asymmetric GARCH (NAGARCH), and absolute value GARCH (AVGARCH). Most of the information criteria for the sGARCH model were not available due to lack of convergence. The lowest information criteria were associated with apARCH(2,2) with the Student's t-distribution, followed by NGARCH(2,1) with the skewed Student's t-distribution. The backtesting result for apARCH(2,2) was not available; eGARCH(1,1) with the skewed Student's t-distribution, NGARCH(1,1), NGARCH(2,1), and TGARCH(2,1) failed the backtesting, but eGARCH(1,1) with the Student's t-distribution passed. Therefore, under the backtesting approach, eGARCH(1,1) with the Student's t-distribution emerged as the superior model for modeling Zenith Bank stock returns in Nigeria. This chapter recommends the backtesting approach for selecting a reliable GARCH model.
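For readers who want to reproduce this kind of fit-and-backtest workflow, here is a minimal Python sketch using the arch package: it fits an eGARCH(1,1) with Student's t innovations and checks the in-sample 5% value-at-risk exceedance rate as a simple backtest. The file name, the zero-mean specification, and the 5% exceedance criterion are assumptions for illustration; the chapter's own backtesting procedure may differ.

```python
# Sketch only: fit eGARCH(1,1) with Student's t errors and run a simple
# in-sample 5% VaR exceedance check. File name and details are assumptions.
import numpy as np
import pandas as pd
from arch import arch_model
from scipy.stats import t

# Assumption: 'zenith.csv' has a date index and a 'Close' price column.
prices = pd.read_csv("zenith.csv", parse_dates=True, index_col=0)["Close"]
returns = 100 * np.log(prices).diff().dropna()   # daily log returns in %

# eGARCH(1,1) with Student's t distribution (zero mean kept for simplicity).
model = arch_model(returns, mean="Zero", vol="EGARCH", p=1, q=1, dist="t")
res = model.fit(disp="off")
print("AIC:", res.aic, "BIC:", res.bic)          # for model comparison

# Simple backtest: 5% VaR from the fitted conditional volatility and the
# standardized Student's t quantile; count how often returns breach it.
nu = res.params["nu"]                            # estimated degrees of freedom
q05 = t.ppf(0.05, nu) * np.sqrt((nu - 2) / nu)   # unit-variance t quantile
var_5 = res.conditional_volatility * q05
exceed_rate = (returns < var_5).mean()
print(f"Observed 5% VaR exceedance rate: {exceed_rate:.3f} (expected ~0.05)")
```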
Citations: 0
Study on IoT and Big Data Analysis of 12” 7 nm Advanced Furnace Process Exhaust Gas Leakage
Pub Date : 2020-07-23 DOI: 10.5772/intechopen.92849
Kuo-Chi Chang, Kai-Chun Chu, Hsiao-Chuan Wang, Yuh-Chung Lin, Tsui-Lien Hsu, Yu-Wen Zhou
Modern FABs use a large number of high-energy processes, including plasma, CVD, and ion implantation. Furnaces are one of the important tools for semiconductor manufacturing. According to the requirements of production management, the FAB installed an IoT-based monitoring setup for its 12″ 7 nm-level furnace chip process. Two furnace processing tool measurement points were set up in a 12-inch 7 nm-level factory in Hsinchu Science Park, Taiwan. This is a 24-hour continuous monitoring system in which the data obtained every second are sequentially sent to and stored in the cloud system. This study uses the cloud database for big data analysis and decision-making. The lower limits for TEOS, C2H4, and CO are 0.4, 1.5, and 1 ppm, respectively. Extending this setup across the semiconductor process, so that IoT integration and big data operations can be performed in all processes, is an important step toward intelligent FAB production and an important contribution of this research.
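The per-second limit check described above can be sketched as follows; the threshold values are taken from the abstract, while the reading format, the alert handling, and the interpretation of the limits as leakage alarm thresholds are assumptions made only for illustration.

```python
# Illustrative per-second exhaust gas check. Limits (ppm) come from the
# abstract; treating a reading at or above its limit as a leak alert, and
# the reading/alert format, are assumptions for this sketch.
from datetime import datetime, timezone

LIMITS_PPM = {"TEOS": 0.4, "C2H4": 1.5, "CO": 1.0}

def violated_gases(reading: dict) -> list:
    """Return gases whose measured concentration reaches its limit."""
    return [g for g, limit in LIMITS_PPM.items()
            if reading.get(g, 0.0) >= limit]

def process_second(reading: dict, cloud_buffer: list) -> None:
    """Timestamp and buffer every reading for the cloud; alert on violations."""
    reading["ts"] = datetime.now(timezone.utc).isoformat()
    cloud_buffer.append(reading)   # stands in for the upload to the cloud DB
    for gas in violated_gases(reading):
        print(f"ALERT {reading['ts']}: {gas} = {reading[gas]} ppm "
              f"(limit {LIMITS_PPM[gas]} ppm)")

# One second of data from one of the two measurement points.
buffer = []
process_second({"point": "furnace-A", "TEOS": 0.12, "C2H4": 1.8, "CO": 0.3}, buffer)
```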
Citations: 0
Analysis of Effective Load Balancing Techniques in Distributed Environment
Pub Date : 2020-07-02 DOI: 10.5772/intechopen.91460
A. Shukla, Shishir Kumar, Harikesh Singh
Computational approaches play a significant role in various fields, such as medical applications, astronomy, and weather science, by performing complex calculations in a speedy manner. Today, personal computers are very powerful but underutilized. Most computer resources are idle 75% of the time, and servers are often unproductive. This motivates distributed computing, in which the idea is to use geographically distributed resources to meet the demand for high-performance computing. The Internet allows users to access heterogeneous services and run applications over a distributed environment. Due to the open and heterogeneous nature of distributed computing, the developer must deal with several issues such as load balancing, interoperability, fault occurrence, resource selection, and task scheduling. Load balancing is the mechanism for distributing the load among resources optimally. The objective of this chapter is to discuss the need for load balancing and the issues that shape the research scope. Various load balancing algorithms and scheduling methods used for the performance optimization of web resources are analyzed. A systematic review of the literature, with solutions and limitations, is presented. The chapter provides a concise narrative of the problems encountered and the dimensions for future extension.
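As a concrete example of one classic strategy in the space the chapter surveys, the sketch below implements a simple least-loaded dispatcher in Python; the server names and task handling are invented for illustration, and this is just one of the many algorithms such surveys analyze, not the chapter's own method.

```python
# Minimal least-loaded (least-connections) dispatcher, one classic load
# balancing strategy. Server names and task handling are illustrative only;
# a real balancer would also decrement a server's load when a task finishes.
import heapq
import itertools

class LeastLoadedBalancer:
    def __init__(self, servers):
        self._tie = itertools.count()                 # heap tie-breaker
        self._heap = [(0, next(self._tie), s) for s in servers]
        heapq.heapify(self._heap)

    def assign(self, task):
        """Dispatch the task to the server with the fewest active tasks."""
        load, _, server = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, next(self._tie), server))
        print(f"task {task!r} -> {server} (active tasks: {load + 1})")
        return server

balancer = LeastLoadedBalancer(["node-1", "node-2", "node-3"])
for job in ["render", "index", "backup", "query", "report"]:
    balancer.assign(job)
```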
Citations: 2
TULIP: A Five-Star Table and List - From Machine-Readable to Machine-Understandable Systems
Pub Date : 2020-05-06 DOI: 10.5772/intechopen.91406
Julthep Nandakwang, P. Chongstitvatana
Currently, Linked Data is increasing at a rapid rate with the growth of the Web. Aside from new information created as Semantic Web-ready from the start, part of it comes from transforming existing structured data into five-star open data. However, a great deal of legacy data still exists in structured and semi-structured forms, for example, tables and lists, which are the principal human-readable formats, waiting for transformation. In this chapter, we discuss attempts in the research area to transform table and list data to make them machine-readable in various formats. Furthermore, our research proposes a novel method for transforming tables and lists into RDF format while thoroughly maintaining their essential configuration, so that their original form can be recreated informatively. We introduce a system named TULIP, which embodies this conversion method as a tool for the future development of the Semantic Web. Our method is more flexible than other approaches. The TULIP data model contains the complete information of the source; hence, it can be projected into different views. This tool can be used to create a tremendous amount of machine-usable data at a broader scale.
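The general idea of lifting a table into RDF while preserving its row/column configuration, so that the original layout can be re-projected, can be sketched as follows. This is not the TULIP data model itself; the ex: vocabulary and the sample table are assumptions made only for illustration.

```python
# Minimal illustration of converting a table into RDF triples while keeping
# row/column positions, so the original layout could be rebuilt. This is NOT
# the TULIP model; the ex: vocabulary and sample table are assumptions.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/table/")

header = ["Country", "Capital"]
rows = [["France", "Paris"], ["Japan", "Tokyo"]]

g = Graph()
g.bind("ex", EX)
table = EX["table1"]
g.add((table, RDF.type, EX.Table))

for r, row in enumerate(rows):
    for c, value in enumerate(row):
        cell = EX[f"table1_r{r}_c{c}"]
        g.add((cell, RDF.type, EX.Cell))
        g.add((cell, EX.ofTable, table))
        g.add((cell, EX.row, Literal(r, datatype=XSD.integer)))
        g.add((cell, EX.column, Literal(c, datatype=XSD.integer)))
        g.add((cell, EX.columnHeader, Literal(header[c])))
        g.add((cell, EX.value, Literal(value)))

print(g.serialize(format="turtle"))
```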
Citations: 0
BIBFRAME Linked Data: A Conceptual Study on the Prevailing Content Standards and Data Model
Pub Date : 2020-04-22 DOI: 10.5772/intechopen.91849
Jung-ran Park, A. Brenza, Lori Richards
The BIBFRAME model is designed with a high degree of flexibility in that it can accommodate any number of existing models, as well as models yet to be developed within the Web environment. The model's flexibility is intended to foster extensibility. This study discusses the relationship of BIBFRAME to the prevailing content standards and models employed, or in the process of being adopted, by cultural heritage institutions across museums, archives, libraries, historical societies, and community centers. This is to determine the degree to which BIBFRAME, as it is currently understood, can be a viable and extensible framework for bibliographic description and exchange in the Web environment. We highlight the areas of compatibility as well as the areas of incompatibility. BIBFRAME holds the promise of freeing library data from the silos of online catalogs, permitting library data to interact with data both within and outside the library community. We discuss some of the challenges that need to be addressed in order to realize the potential capabilities that the BIBFRAME model holds.
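To give a flavor of the kind of description the BIBFRAME data model supports, here is a hedged rdflib sketch of a minimal Work/Instance pair; the resource URIs and literal values are invented, and the property choices reflect a common reading of the BIBFRAME 2.0 vocabulary rather than examples from the chapter.

```python
# Hedged sketch of a minimal BIBFRAME-style Work/Instance description.
# URIs and literals are invented; property choices follow a common reading
# of the BIBFRAME 2.0 vocabulary, not the chapter's own examples.
from rdflib import BNode, Graph, Literal, Namespace, RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
EX = Namespace("http://example.org/bib/")

g = Graph()
g.bind("bf", BF)

work, instance, title = EX.work1, EX.instance1, BNode()

g.add((work, RDF.type, BF.Work))
g.add((title, RDF.type, BF.Title))
g.add((title, BF.mainTitle, Literal("Linked Open Data in Libraries")))
g.add((work, BF.title, title))

g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))        # links the Instance to its Work
g.add((instance, BF.provisionActivityStatement,
       Literal("Example Press, 2020")))

print(g.serialize(format="turtle"))
```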
Citations: 3