
Proceedings of the 1st Workshop on User-centric Narrative Summarization of Long Videos: Latest Publications

Learning, Understanding and Interaction in Videos
Manmohan Chandraker
Advances in mobile phone camera technologies and internet connectivity have made videos one of the most intuitive ways to communicate and share experiences. Millions of cameras deployed in our homes, offices and public spaces record videos for purposes ranging across safety, assistance, entertainment and many others. This talk describes some of our recent progress in learning, understanding and interaction with such digital media. It will introduce methods in unsupervised and self-supervised representation learning that allow video solutions to be efficiently deployed with minimal data curation. It will discuss how physical priors or human knowledge are leveraged to derive insights from videos, ranging from three-dimensional scene properties to language-based descriptions. It will also illustrate how these insights allow us to augment or interact with digital media with unprecedented photorealism and ease.
DOI: 10.1145/3552463.3555837 · Published: 2022-10-10 · Citations: 0
Panel Discussion: Emerging Topics on Video Summarization
Mohan S. Kankanhalli, Jianquan Liu, Yongkang Wong, Karen Stephen
With video capture devices becoming widely popular, the amount of video data generated per day has seen a rapid increase over the past few years. Browsing through hours of video data to retrieve useful information is a tedious and boring task. Video summarization technology has played a crucial role in addressing this issue and is a well-researched topic in the multimedia community. This panel aims to bring together researchers from relevant backgrounds to discuss emerging topics on video summarization, including recent developments, future directions, challenges, solutions, potential applications and other open problems.
DOI: 10.1145/3552463.3558051 · Published: 2022-10-10 · Citations: 0
Video Summarization in the Deep Learning Era: Current Landscape and Future Directions
I. Patras
In this talk we will provide an overview of the field of video summarization with a focus on the developments, the trends and the open challenges in the era of Deep Learning and Big Data. After a brief introduction to the problem, we will provide a broad taxonomy of the works in the area and the recent trends from multiple perspectives, including types of methodologies/architectures; supervision signals; and modalities. We will then present current datasets and evaluation protocols, highlighting their limitations and the challenges they pose. Finally, we will close by giving our perspective on the challenges in the field and on interesting future directions.
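As a concrete illustration of the kind of classic baseline such an overview builds on (not a method from the talk itself), keyframe selection can be sketched with grayscale-histogram differences; the 16-bin histogram and the `threshold` value below are arbitrary choices for the example:

```python
import numpy as np

def _hist(frame, bins=16):
    # Normalized grayscale histogram of one frame.
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def select_keyframes(frames, threshold=0.3):
    """Pick frames whose histogram L1-distance to the previously
    selected keyframe exceeds `threshold`."""
    keyframes = [0]                 # always keep the first frame
    ref_hist = _hist(frames[0])
    for i in range(1, len(frames)):
        h = _hist(frames[i])
        if np.abs(h - ref_hist).sum() > threshold:
            keyframes.append(i)     # content changed enough: new keyframe
            ref_hist = h
    return keyframes

# Synthetic "video": 10 dark frames followed by 10 bright frames.
frames = [np.full((32, 32), 30, dtype=np.uint8)] * 10 \
       + [np.full((32, 32), 220, dtype=np.uint8)] * 10
print(select_keyframes(frames))     # → [0, 10]: first frame plus the cut
```

This shot-boundary heuristic is the sort of pre-deep-learning baseline that taxonomies of the field contrast with learned importance models.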
DOI: 10.1145/3552463.3554166 · Published: 2022-10-10 · Citations: 0
Narrative Dataset: Towards Goal-Driven Narrative Generation
Karen Stephen, Rishabh Sheoran, Satoshi Yamazaki
In this paper, we propose a new dataset, the Narrative dataset (a work in progress), towards generating video and text narratives of complex daily events from long videos captured by multiple cameras. As most existing datasets are collected from publicly available videos such as YouTube videos, no dataset targets the task of narrative summarization of complex videos containing multiple narratives. Hence, we create story plots and shoot video with hired actors to build complex video sets in which 3 to 4 narratives occur in each video. In the story plot, a narrative is composed of multiple events corresponding to video clips of key human activities. On top of the shot video sets and the story plot, the Narrative dataset contains dense annotations of actors, objects, and their relationships for each frame as the facts of the narratives. The dataset thus richly captures the holistic and hierarchical structure of facts, events, and narratives. Moreover, we introduce the Narrative Graph, a collection of scene graphs of narrative events with their causal relationships, to bridge the gap between the collection of facts and the generation of the summary sentences of a narrative. Beyond related subtasks such as scene graph generation, the Narrative dataset potentially poses challenging subtasks for bridging human event clips to narratives.
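To make the hierarchy of facts, events, and causal links concrete, here is a hypothetical sketch of how such a narrative graph could be represented in code; the class names, the `(start_frame, end_frame)` clip convention, and the sample events are invented for illustration and are not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    # Per-frame annotation: a (subject, relation, object) triple.
    subject: str
    relation: str
    object: str

@dataclass
class Event:
    # A video clip of one key human activity, grounded by its facts.
    name: str
    clip: tuple                     # (start_frame, end_frame), assumed convention
    facts: list = field(default_factory=list)

@dataclass
class NarrativeGraph:
    # Events as nodes, causal links as directed edges.
    events: dict = field(default_factory=dict)
    causes: list = field(default_factory=list)  # (cause_name, effect_name) pairs

    def add_event(self, e):
        self.events[e.name] = e

    def add_causal_link(self, cause, effect):
        self.causes.append((cause, effect))

    def effects_of(self, name):
        return [eff for c, eff in self.causes if c == name]

g = NarrativeGraph()
g.add_event(Event("pick_up_phone", (120, 180), [Fact("person_1", "holds", "phone")]))
g.add_event(Event("leave_room", (200, 260), [Fact("person_1", "exits", "room")]))
g.add_causal_link("pick_up_phone", "leave_room")
print(g.effects_of("pick_up_phone"))  # ['leave_room']
```

A summary-sentence generator would then traverse such causal chains rather than the raw per-frame facts.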
DOI: 10.1145/3552463.3557021 · Published: 2022-10-10 · Citations: 1
Contrastive Representation Learning for Expression Recognition from Masked Face Images
Fanxing Luo, Long Zhao, Yu Wang, Jien Kato
With the worldwide spread of COVID-19, people are trying different ways to prevent the spread of the virus. One of the most useful and popular ways is wearing a face mask. Most people wear a face mask when they go out, which makes facial expression recognition harder. Thus, how to improve the performance of facial expression recognition models on masked faces is becoming an important issue. However, there is no public dataset that includes facial expressions with masks. Thus, we built two datasets: a real-world masked facial expression database (VIP-DB) and a man-made masked facial expression database (M-RAF-DB). To reduce the influence of masks, we utilize contrastive representation learning and propose a two-branch network. We study the influence of contrastive learning on our two datasets. Results show that using contrastive representation learning improves the performance of expression recognition from masked face images.
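Contrastive objectives of this kind are typically InfoNCE-style losses over positive pairs (e.g., masked and unmasked views of the same face). The paper does not specify its exact loss, so the following is a generic NumPy sketch in which each row pair `(z1[i], z2[i])` is a positive and the other rows of the batch act as negatives:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of positive embedding pairs (z1[i], z2[i])."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize rows
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # cosine-similarity logits
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # near-identical pairs
shuffled = info_nce(z, rng.normal(size=z.shape))            # unrelated pairs
print(aligned < shuffled)  # aligned pairs yield a lower loss on these synthetic data
```

Minimizing such a loss pulls the two views of the same expression together while pushing apart views of different samples, which is what lets the representation downweight the mask region.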
DOI: 10.1145/3552463.3557020 · Published: 2022-10-10 · Citations: 0
Soccer Game Summarization using Audio Commentary, Metadata, and Captions
Sushant Gautam, Cise Midoglu, Saeed Shafiee Sabet, Dinesh Baniya Kshatri, P. Halvorsen
Soccer is one of the most popular sports globally, and the amount of soccer-related content worldwide, including video footage, audio commentary, team/player statistics, scores, and rankings, is enormous and rapidly growing. Consequently, the generation of multimodal summaries is of tremendous interest for broadcasters and fans alike, as a large percentage of audiences prefer to follow only the main highlights of a game. However, annotating important events and producing summaries often requires expensive equipment and a lot of tedious, cumbersome, manual labour. In this context, recent developments in Artificial Intelligence (AI) have shown great potential. The goal of this work is to create an automated soccer game summarization pipeline using AI. In particular, our focus is on the generation of complete game summaries in continuous text format with length constraints, based on raw game multimedia, as well as readily available game metadata and captions where applicable, using Natural Language Processing (NLP) tools along with heuristics. We curate and extend a number of soccer datasets, implement an end-to-end pipeline for the automatic generation of text summaries, present preliminary results from a comparative analysis of various summarization methods within this pipeline using different input modalities, and discuss open challenges in the field of automated game summarization.
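A length-constrained extractive summarizer of the kind described can be sketched as a greedy selection over scored, captioned events; the scores, captions, and word budget below are invented placeholders, not the authors' pipeline:

```python
def summarize(events, max_words):
    """Greedy sketch: rank captioned game events by an importance score and
    concatenate their captions until the word budget is exhausted.
    `events` is a list of (importance, caption) pairs (hypothetical input)."""
    summary, used = [], 0
    for score, caption in sorted(events, reverse=True):  # most important first
        n = len(caption.split())
        if used + n <= max_words:                        # respect the length constraint
            summary.append(caption)
            used += n
    return " ".join(summary)

events = [
    (0.9, "The home striker scores a header in the 63rd minute."),
    (0.4, "A corner kick is cleared by the defence."),
    (0.8, "A penalty is awarded after a VAR review."),
]
# Budget of 18 words keeps the two highest-scored events and drops the corner.
print(summarize(events, max_words=18))
```

In a real pipeline the importance scores would come from the audio commentary and metadata, and the captions from the available caption stream, with an NLP model smoothing the concatenation into continuous text.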
DOI: 10.1145/3552463.3557019 · Published: 2022-10-10 · Citations: 2
Proceedings of the 1st Workshop on User-centric Narrative Summarization of Long Videos
DOI: 10.1145/3552463 · Citations: 0