
Latest publications from Proceedings. IEEE International Conference on Computer Vision

Intrinsic 3D Dynamic Surface Tracking based on Dynamic Ricci Flow and Teichmüller Map.
Pub Date : 2017-10-01 Epub Date: 2017-12-25 DOI: 10.1109/ICCV.2017.576
Xiaokang Yu, Na Lei, Yalin Wang, Xianfeng Gu

3D dynamic surface tracking is an important research problem and plays a vital role in many computer vision and medical imaging applications. However, it is still challenging to efficiently register surface sequences that have large deformations and strong noise. In this paper, we propose a novel automatic method for non-rigid 3D dynamic surface tracking based on surface Ricci flow and Teichmüller map methods. According to quasi-conformal Teichmüller theory, the Teichmüller map minimizes the maximal dilation, so our method is able to automatically register surfaces with large deformations. In addition, the adoption of Delaunay triangulation and quadrilateral meshes makes our method applicable to low-quality meshes. In our work, the 3D dynamic surfaces are acquired by a high-speed 3D scanner. We first identify sparse surface features in the texture space using machine learning methods. We then assign different curvature settings to the landmark features, and the Riemannian metric of the surface is computed by the dynamic Ricci flow method such that all the curvature is concentrated at the feature points and the surface is flat everywhere else. The registration between frames is computed by Teichmüller mappings, which align the feature points with the least angle distortion. We apply our new method to multiple sequences of 3D facial surfaces with large expression deformations and compare it with two other state-of-the-art tracking methods. The clearly improved accuracy and efficiency demonstrate the effectiveness of our method.
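
For readers less familiar with quasi-conformal terminology, the quantity that a Teichmüller map minimizes can be stated compactly. The following display is standard quasi-conformal theory (not reproduced from the paper itself): the Beltrami coefficient of a map f and its maximal dilation are

\mu_f \;=\; \frac{\partial f / \partial \bar{z}}{\partial f / \partial z},
\qquad
K(f) \;=\; \frac{1 + \|\mu_f\|_\infty}{1 - \|\mu_f\|_\infty}.

A map is conformal exactly when \mu_f = 0 (so K = 1); among all quasi-conformal maps consistent with the prescribed landmark correspondences, the Teichmüller map attains the smallest K, which is what makes this choice robust to the large deformations discussed above.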

Citations: 8
A geometric framework for statistical analysis of trajectories with distinct temporal spans.
Pub Date : 2017-10-01 Epub Date: 2017-12-25 DOI: 10.1109/iccv.2017.28
Rudrasis Chakraborty, Vikas Singh, Nagesh Adluru, Baba C Vemuri

Analyzing data representing multifarious trajectories is central to many fields in science and engineering; for example, trajectories representing a tennis serve, a gymnast's parallel bar routine, progression/remission of disease, and so on. We present a novel geometric algorithm for performing statistical analysis of trajectories with a distinct number of samples representing longitudinal (or temporal) data. A key feature of our proposal is that, unlike existing schemes, our model is deployable in regimes where each participant provides a different number of acquisitions (trajectories have a different number of sample points or a different temporal span). To achieve this, we develop a novel method that parallel transports the tangent vectors along each given trajectory to that trajectory's starting point, and then uses the span of the matrix whose columns consist of these vectors to construct a linear subspace of R^m. We then map these linear subspaces (possibly of distinct dimensions) of R^m onto a single high-dimensional hypersphere. This enables computing group statistics over trajectories by instead performing statistics on the hypersphere (which is equipped with a simpler geometry). Given a point on the hypersphere representing a trajectory, we also provide a "reverse mapping" algorithm that uniquely (under certain assumptions) reconstructs the subspace corresponding to this point. Finally, by using existing algorithms for the recursive Fréchet mean and exact principal geodesic analysis on the hypersphere, we present several experiments on synthetic and real (vision and medical) data sets showing how group testing on such diversely sampled longitudinal data is possible by analyzing the reconstructed data in the subspace spanned by the first few principal components.
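
As a rough illustration of the subspace-construction step described above, the sketch below (Python with NumPy; all names are hypothetical, and the paper's hypersphere embedding is not reproduced) builds an orthonormal basis for the span of a trajectory's transported tangent vectors, so that trajectories with different numbers of samples still yield comparable linear subspaces.

import numpy as np

def subspace_basis(transported_vectors, tol=1e-10):
    # transported_vectors: (k, m) array; each row is a tangent vector assumed to have
    # already been parallel-transported to the trajectory's starting point.
    A = np.asarray(transported_vectors, dtype=float).T   # columns are the vectors, shape (m, k)
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s.max()))                   # numerical rank of the span
    return U[:, :r]                                      # orthonormal basis, shape (m, r)

rng = np.random.default_rng(0)
B1 = subspace_basis(rng.normal(size=(5, 20)))            # trajectory with 5 samples in R^20
B2 = subspace_basis(rng.normal(size=(9, 20)))            # trajectory with 9 samples in R^20
# One generic way (not the paper's) to compare subspaces of different dimensions:
# the singular values of B1^T B2 are the cosines of the principal angles between them.
cosines = np.linalg.svd(B1.T @ B2, compute_uv=False)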

Citations: 7
An Optimal Transportation based Univariate Neuroimaging Index.
Liang Mi, Wen Zhang, Junwei Zhang, Yonghui Fan, Dhruman Goradia, Kewei Chen, Eric M Reiman, Xianfeng Gu, Yalin Wang

Alterations of brain structure and function are considered to be closely correlated with changes in cognitive performance caused by neurodegenerative diseases such as Alzheimer's disease. In this paper, we introduce a variational framework to compute the optimal transportation (OT) in 3D space and propose a univariate neuroimaging index based on OT to measure such alterations. We compute the OT from each image to a template and measure the Wasserstein distance between them. By comparing the distances from all the images to the common template, we obtain a concise and informative index for each image. Our framework makes use of Newton's method, which reduces the computational cost and makes it applicable to large-scale datasets. The proposed work is a generic approach and thus may be applicable to various volumetric brain images, including structural magnetic resonance (sMR) and fluorodeoxyglucose positron emission tomography (FDG-PET) images. In the classification between Alzheimer's disease patients and healthy controls, our method achieves an accuracy of 82.30% on the Alzheimer's Disease Neuroimaging Initiative (ADNI) baseline sMRI dataset and outperforms several other indices. On the FDG-PET dataset, we boost the accuracy to 88.37% by leveraging pairwise Wasserstein distances. In a longitudinal study, we obtain 5% significance with p-value = 1.13×10^-5 in a t-test on FDG-PET. The results demonstrate the great potential of the proposed index for neuroimaging analysis and precision medicine research.
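
The idea of the index is easy to prototype in one dimension. The sketch below (Python with SciPy; the function name and binning are our choices, and it replaces the paper's 3D variational OT solver with a simple 1D Wasserstein distance between intensity histograms) only illustrates summarizing each scan by its transport distance to a common template.

import numpy as np
from scipy.stats import wasserstein_distance

def univariate_ot_index(image, template, n_bins=64):
    # Crude 1-D surrogate: Wasserstein distance between the intensity distributions
    # of an image and a common template, used as a single scalar index per scan.
    lo = min(image.min(), template.min())
    hi = max(image.max(), template.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    p, _ = np.histogram(image, bins=bins, density=True)
    q, _ = np.histogram(template, bins=bins, density=True)
    return wasserstein_distance(centers, centers, u_weights=p, v_weights=q)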

Citations: 0
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Pub Date : 2015-12-01 DOI: 10.1109/ICCV.2015.214
Seong Jae Hwang, Maxwell D Collins, Sathya N Ravi, Vamsi K Ithapu, Nagesh Adluru, Sterling C Johnson, Vikas Singh

Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human-in-the-loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasible set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
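
For context, the unregularized problem that the paper generalizes is routinely solved as a black box. The sketch below (Python with SciPy; the data are synthetic) shows that baseline and, in comments, the trace-maximization view over the generalized Stiefel manifold that the abstract's formulation starts from; the nonsmooth-regularized solver itself is the paper's contribution and is not reproduced here.

import numpy as np
from scipy.linalg import eigh

# Plain generalized eigenvalue problem A v = lambda B v, solved as a black box; the
# paper addresses exactly what this call cannot do, namely adding a nonsmooth penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
A = np.cov(X, rowvar=False)            # symmetric matrix
B = A + 10.0 * np.eye(10)              # symmetric positive-definite matrix
evals, evecs = eigh(A, B)              # generalized eigenpairs, ascending eigenvalues
# Equivalent variational view (top-k case): maximize trace(V^T A V) subject to
# V^T B V = I, i.e. optimize over a generalized Stiefel manifold, the feasible set
# the abstract refers to.
V = evecs[:, -3:]                      # top-3 generalized eigenvectors
assert np.allclose(V.T @ B @ V, np.eye(3), atol=1e-8)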

Citations: 0
Unsupervised Synchrony Discovery in Human Interaction.
Pub Date : 2015-12-01 DOI: 10.1109/ICCV.2015.360
Wen-Sheng Chu, Jiabei Zeng, Fernando De la Torre, Jeffrey F Cohn, Daniel S Messinger

People are inherently social. Social interaction plays an important and natural role in human behavior. Most computational methods focus on individuals alone rather than in social context. They also require labelled training data. We present an unsupervised approach to discover interpersonal synchrony, referred to here as two or more persons performing common actions in overlapping video frames or segments. For computational efficiency, we develop a branch-and-bound (B&B) approach that affords exhaustive search while guaranteeing a globally optimal solution. The proposed method is entirely general. It takes, from two or more videos, any multi-dimensional signal that can be represented as a histogram. We derive three novel bounding functions and provide efficient extensions, including multi-synchrony detection and accelerated search, using a warm-start strategy and parallelism. We evaluate the effectiveness of our approach on multiple databases, including human actions using the CMU Mocap dataset [1], spontaneous facial behaviors using the group-formation task dataset [37], and the parent-infant interaction dataset [28].
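
To make the search space concrete, here is a brute-force baseline (Python with NumPy; the names and the histogram-intersection score are our choices) that exhaustively scores pairs of equal-length windows from two per-frame histogram sequences; the paper's branch-and-bound procedure is a way to prune this quadratic search while still returning the globally optimal pair.

import numpy as np

def histogram_intersection(p, q):
    # Similarity between two (unnormalized) histograms.
    return np.minimum(p, q).sum()

def naive_synchrony_search(H1, H2, win=30):
    # H1, H2: arrays of shape (T1, d) and (T2, d), one d-bin histogram per frame.
    # Returns the best-matching pair of window start frames and its score.
    best, best_pair = -np.inf, None
    for i in range(H1.shape[0] - win + 1):
        a = H1[i:i + win].sum(axis=0)              # pooled histogram of window in video 1
        for j in range(H2.shape[0] - win + 1):
            b = H2[j:j + win].sum(axis=0)          # pooled histogram of window in video 2
            score = histogram_intersection(a, b)
            if score > best:
                best, best_pair = score, (i, j)
    return best_pair, best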

Citations: 0
Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions.
Pub Date : 2015-12-01 Epub Date: 2016-02-18 DOI: 10.1109/ICCV.2015.226
Sven Bambach, Stefan Lee, David J Crandall, Chen Yu

Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.
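
As a toy illustration of the proposal-plus-CNN pipeline (not the network or the candidate-region generator used in the paper; every layer size here is arbitrary), the following PyTorch sketch scores cropped candidate regions as hand versus background.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHandClassifier(nn.Module):
    # Minimal stand-in for an appearance model that scores a cropped candidate region.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):                              # x: (N, 3, 32, 32) cropped proposals
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # -> (N, 16, 16, 16)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # -> (N, 32, 8, 8)
        return self.fc(x.flatten(1))                   # hand vs. background logits

# Scoring a batch of (hypothetical) candidate crops resized to 32x32:
model = TinyHandClassifier().eval()
crops = torch.rand(10, 3, 32, 32)
with torch.no_grad():
    hand_prob = F.softmax(model(crops), dim=1)[:, 1]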

Citations: 359
Volumetric Semantic Segmentation using Pyramid Context Features.
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.428
Jonathan T Barron, Pablo Arbeláez, Soile V E Keränen, Mark D Biggin, David W Knowles, Jitendra Malik

We present an algorithm for the per-voxel semantic segmentation of a three-dimensional volume. At the core of our algorithm is a novel "pyramid context" feature, a descriptive representation designed such that exact per-voxel linear classification can be made extremely efficient. This feature not only allows for efficient semantic segmentation but enables other aspects of our algorithm, such as novel learned features and a stacked architecture that can reason about self-consistency. We demonstrate our technique on 3D fluorescence microscopy data of Drosophila embryos for which we are able to produce extremely accurate semantic segmentations in a matter of minutes, and for which other algorithms fail due to the size and high-dimensionality of the data, or due to the difficulty of the task.
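
A very rough analogue of a multi-resolution per-voxel descriptor can be built with box filters. The sketch below (Python with SciPy; the scale choices are ours, and the paper's pyramid context feature additionally encodes spatial offsets and is engineered so that exact linear classification is extremely cheap) conveys only the multi-scale flavor.

import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_context_features(volume, scales=(1, 3, 9, 27)):
    # For every voxel, stack the mean intensity of cubic neighborhoods of growing size.
    feats = [uniform_filter(volume.astype(float), size=s) for s in scales]
    return np.stack(feats, axis=-1)        # shape: volume.shape + (len(scales),)

# Per-voxel linear classification on the stacked features (illustrative only):
#   X = multiscale_context_features(vol).reshape(-1, 4)
#   clf = sklearn.linear_model.LogisticRegression().fit(X, labels.ravel())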

Citations: 15
Learning a Dictionary of Shape Epitomes with Applications to Image Labeling.
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.49
Liang-Chieh Chen, George Papandreou, Alan L Yuille

The first main contribution of this paper is a novel method for representing images based on a dictionary of shape epitomes. These shape epitomes represent the local edge structure of the image and include hidden variables to encode shifts and rotations. They are learnt in an unsupervised manner from groundtruth edges. This dictionary is compact but is also able to capture the typical shapes of edges in natural images. In this paper, we illustrate the shape epitomes by applying them to the image labeling task. In other work, described in the supplementary material, we apply them to edge detection and image modeling. We apply shape epitomes to image labeling by using Conditional Random Field (CRF) models. They are alternatives to the superpixel or pixel representations used in most CRFs. In our approach, the shape of an image patch is encoded by a shape epitome from the dictionary. Unlike the superpixel representation, our method avoids making early decisions which cannot be reversed. Our resulting hierarchical CRFs efficiently capture both local and global class co-occurrence properties. We demonstrate the quantitative and qualitative properties of our approach with image labeling experiments on two standard datasets: MSRC-21 and Stanford Background.
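
A heavily simplified stand-in for learning a dictionary from edge patches is sketched below (Python with scikit-learn; the patch size, random sampling, and the use of plain k-means are our simplifications, and real shape epitomes additionally carry hidden shift and rotation variables that k-means does not model).

import numpy as np
from sklearn.cluster import KMeans

def learn_edge_patch_dictionary(edge_maps, patch=8, n_atoms=32, seed=0):
    # Cluster randomly sampled patches of binary ground-truth edge maps into a
    # small dictionary of typical local edge shapes.
    rng = np.random.default_rng(seed)
    patches = []
    for E in edge_maps:                                   # each E: (H, W) binary edge map
        H, W = E.shape
        for _ in range(200):                              # sample random patch locations
            y = rng.integers(0, H - patch)
            x = rng.integers(0, W - patch)
            p = E[y:y + patch, x:x + patch]
            if p.any():                                   # keep patches containing edges
                patches.append(p.ravel().astype(float))
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=seed).fit(np.array(patches))
    return km.cluster_centers_.reshape(n_atoms, patch, patch)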

Citations: 17
Recursive Estimation of the Stein Center of SPD Matrices & its Applications.
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.225
Hesamoddin Salehian, Guang Cheng, Baba C Vemuri, Jeffrey Ho

Symmetric positive-definite (SPD) matrices are ubiquitous in computer vision, machine learning, and medical image analysis. Finding the center/average of a population of such matrices is a common theme in many algorithms such as clustering, segmentation, principal geodesic analysis, etc. The center of a population of such matrices can be defined using a variety of distance/divergence measures as the minimizer of the sum of squared distances/divergences from the unknown center to the members of the population. It is well known that computing the Karcher mean on the space of SPD matrices, which is a negatively curved Riemannian manifold, is computationally expensive. Recently, the LogDet divergence-based center was shown to be a computationally attractive alternative. However, the LogDet-based mean of more than two matrices cannot be computed in closed form, which makes it computationally less attractive for large populations. In this paper, we present a novel recursive estimator for the center based on the Stein distance - which is the square root of the LogDet divergence - that is significantly faster than the batch-mode computation of this center. The key theoretical contribution is a closed-form solution for the weighted Stein center of two SPD matrices, which is used in the recursive computation of the Stein center for a population of SPD matrices. Additionally, we show experimental evidence of the convergence of our recursive Stein center estimator to the batch-mode Stein center. We present applications of our recursive estimator to K-means clustering and image indexing, showing significant time gains over corresponding algorithms that use batch-mode computations. For the latter application, we develop novel hashing functions using the Stein distance and apply them to publicly available data sets; experimental results show favorable comparisons to other competing methods.
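
The distance underlying the estimator is simple to compute. The sketch below (Python with NumPy) implements the Stein distance as described above, i.e. the square root of the LogDet (Jensen-Bregman) divergence; the paper's closed-form weighted Stein center of two matrices and the recursive update built on it are not reproduced here.

import numpy as np

def stein_distance(A, B):
    # sqrt of the LogDet divergence:  log det((A + B)/2) - 0.5 * log det(A B)
    _, ld_mid = np.linalg.slogdet(0.5 * (A + B))
    _, ld_a = np.linalg.slogdet(A)
    _, ld_b = np.linalg.slogdet(B)
    return np.sqrt(ld_mid - 0.5 * (ld_a + ld_b))

# Example on two random SPD matrices:
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)); A = X @ X.T + np.eye(4)
Y = rng.normal(size=(4, 4)); B = Y @ Y.T + np.eye(4)
d = stein_distance(A, B)     # zero iff A == B, by strict concavity of log det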

Citations: 19
Image Segmentation with Cascaded Hierarchical Models and Logistic Disjunctive Normal Networks.
Pub Date : 2013-12-01 DOI: 10.1109/ICCV.2013.269
Mojtaba Seyedhosseini, Mehdi Sajjadi, Tolga Tasdizen

Contextual information plays an important role in solving vision problems such as image segmentation. However, extracting contextual information and using it in an effective way remains a difficult problem. To address this challenge, we propose a multi-resolution contextual framework, called the cascaded hierarchical model (CHM), which learns contextual information in a hierarchical framework for image segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. We repeat this procedure by cascading the hierarchical framework to improve the segmentation accuracy. Multiple classifiers are learned in the CHM; therefore, a fast and accurate classifier is required to make the training tractable. The classifier also needs to be robust against overfitting due to the large number of parameters learned during training. We introduce a novel classification scheme, called logistic disjunctive normal networks (LDNN), which consists of one adaptive layer of feature detectors implemented by logistic sigmoid functions followed by two fixed layers of logical units that compute conjunctions and disjunctions, respectively. We demonstrate that LDNN outperforms state-of-the-art classifiers and can be used in the CHM to improve object segmentation performance.
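
One natural reading of the LDNN description above (an adaptive sigmoid layer, a fixed conjunction layer, and a fixed disjunction layer) is the soft AND/OR forward pass sketched below in NumPy; the group sizes and parameters are hypothetical, and training is omitted.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ldnn_forward(x, W, b):
    # x: (d,) feature vector; W: (n_groups, n_per_group, d); b: (n_groups, n_per_group).
    g = sigmoid(np.einsum('gkd,d->gk', W, x) + b)   # adaptive sigmoid feature detectors
    conj = np.prod(g, axis=1)                       # soft AND within each group (conjunctions)
    return 1.0 - np.prod(1.0 - conj)                # soft OR across groups (disjunction)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 16))                     # 4 disjuncts, 3 conjuncts each, 16-d input
b = rng.normal(size=(4, 3))
y = ldnn_forward(rng.normal(size=16), W, b)         # scalar output in (0, 1)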

Citations: 80