
Latest publications in Computational Visual Media

MusicFace: Music-driven expressive singing face synthesis
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-023-0343-7
Pengfei Liu, Wenjin Deng, Hengda Li, Jintai Wang, Yinglin Zheng, Yiwei Ding, Xiaohu Guo, Ming Zeng

It remains an interesting and challenging problem to synthesize a vivid and realistic singing face driven by music. In this paper, we present a method for this task with natural motions for the lips, facial expression, head pose, and eyes. Because the human voice and the backing music are coupled in common music audio signals, we design a decouple-and-fuse strategy to tackle the challenge. We first decompose the input music audio into a human voice stream and a backing music stream. Because of the implicit and complicated correlation between the two-stream input signals and the dynamics of the facial expressions, head motions, and eye states, we model their relationship with an attention scheme, in which the effects of the two streams are fused seamlessly. Furthermore, to improve the expressiveness of the generated results, we decompose head movement generation into speed and direction, and eye state generation into short-term blinking and long-term eye closing, modeling them separately. We have also built a novel dataset, SingingFace, to support training and evaluation of models for this task, including future work on this topic. Extensive experiments and a user study show that our proposed method is capable of synthesizing vivid singing faces, qualitatively and quantitatively better than the prior state of the art.
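
To make the fusion step concrete, here is a minimal PyTorch sketch of attention-based fusion of two decoupled audio streams. It assumes pre-extracted per-frame voice and backing-music features; the module name, dimensions, and output head are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Fuse a vocal stream and a backing-music stream with cross-attention."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # The vocal features attend to the backing-music features, so the
        # fused representation can pick up rhythm cues from the accompaniment.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 64)  # hypothetical per-frame expression coefficients

    def forward(self, voice_feat, music_feat):
        # voice_feat, music_feat: (batch, frames, dim)
        fused, _ = self.cross_attn(voice_feat, music_feat, music_feat)
        fused = self.norm(voice_feat + fused)  # residual fusion
        return self.head(fused)                # (batch, frames, 64)

out = TwoStreamFusion()(torch.randn(2, 100, 256), torch.randn(2, 100, 256))
print(out.shape)  # torch.Size([2, 100, 64])
```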

Citations: 0
3D hand pose and shape estimation from monocular RGB via efficient 2D cues
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-023-0346-4
Fenghao Zhang, Lin Zhao, Shengling Li, Wanjuan Su, Liman Liu, Wenbing Tao

Estimating 3D hand shape from a single-view RGB image is important for many applications. However, the diversity of hand shapes and postures, depth ambiguity, and occlusion may result in pose errors and noisy hand meshes. Making full use of 2D cues such as 2D pose can effectively improve the quality of 3D hand shape estimation. In this paper, we use 2D joint heatmaps to obtain spatial details for robust pose estimation. We also introduce a depth-independent 2D mesh to avoid depth ambiguity in mesh regression, for efficient hand-image alignment. Our method has four cascaded stages: 2D cue extraction, pose feature encoding, initial reconstruction, and reconstruction refinement. Specifically, we first encode the image to extract semantic features during 2D cue extraction; these features are also used to predict hand joints and for segmentation. Then, during the pose feature encoding stage, a hand joints encoder learns spatial information from the joint heatmaps. Next, a coarse 3D hand mesh and a 2D mesh are obtained in the initial reconstruction step; a mesh squeeze-and-excitation block fuses different hand features to enhance perception of 3D hand structures. Finally, a global mesh refinement stage learns non-local relations between vertices of the hand mesh from the predicted 2D mesh, and predicts an offset hand mesh to fine-tune the reconstruction results. Quantitative and qualitative results on the FreiHAND benchmark dataset demonstrate that our approach achieves state-of-the-art performance.
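
As a concrete illustration of the mesh squeeze-and-excitation idea, the sketch below applies a generic SE gate to per-vertex features. The vertex count follows the MANO hand model; this is a plausible reading of such a block, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MeshSEBlock(nn.Module):
    """Squeeze-and-excitation over per-vertex mesh features."""
    def __init__(self, channels=128, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, vert_feat):
        # vert_feat: (batch, num_vertices, channels)
        squeeze = vert_feat.mean(dim=1)          # "squeeze": pool over vertices
        scale = self.gate(squeeze).unsqueeze(1)  # "excite": per-channel weights
        return vert_feat * scale                 # re-weighted vertex features

feats = torch.randn(2, 778, 128)  # 778 vertices, as in the MANO hand mesh
print(MeshSEBlock()(feats).shape)  # torch.Size([2, 778, 128])
```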

Citations: 0
A causal convolutional neural network for multi-subject motion modeling and generation
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-022-0307-3
Shuaiying Hou, Congyi Wang, Wenlin Zhuang, Yu Chen, Yangang Wang, Hujun Bao, Jinxiang Chai, Weiwei Xu

Inspired by the success of WaveNet in multi-subject speech synthesis, we propose a novel neural network based on causal convolutions for multi-subject motion modeling and generation. The network can capture the intrinsic characteristics of the motion of different subjects, such as the influence of skeleton scale variation on motion style. Moreover, after the network is fine-tuned on a small motion dataset for a novel skeleton not included in the training data, it can synthesize high-quality motions with a personalized style for that skeleton. Experimental results demonstrate that our network models the intrinsic characteristics of motions well and can be applied to various motion modeling and synthesis tasks.
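
The key property of a causal convolution is that the output at frame t depends only on frames up to t. A minimal PyTorch sketch of the building block (dimensions are illustrative; a WaveNet-style network stacks such layers with growing dilations):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution whose output at time t sees only inputs <= t."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad by (k - 1) * d so the kernel never reaches into the future.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        # x: (batch, channels, frames)
        return self.conv(F.pad(x, (self.pad, 0)))

x = torch.randn(1, 63, 120)  # e.g., 63 pose channels over 120 motion frames
print(CausalConv1d(63, 128, dilation=2)(x).shape)  # torch.Size([1, 128, 120])
```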

Citations: 0
6DOF pose estimation of a 3D rigid object based on edge-enhanced point pair features
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-022-0308-2
Chenyi Liu, Fei Chen, Lu Deng, Renjiao Yi, Lintao Zheng, Chenyang Zhu, Jia Wang, Kai Xu

The point pair feature (PPF) is widely used for 6D pose estimation. In this paper, we propose an efficient 6D pose estimation method based on the PPF framework. We introduce a well-targeted down-sampling strategy that focuses on edge areas for efficient feature extraction for complex geometry. A pose hypothesis validation approach is proposed to resolve ambiguity due to symmetry by calculating the edge matching degree. We perform evaluations on two challenging datasets and one real-world collected dataset, demonstrating the superiority of our method for pose estimation for geometrically complex, occluded, symmetrical objects. We further validate our method by applying it to simulated punctures.
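
For reference, the classic point pair feature of Drost et al. encodes an oriented point pair (p1, n1), (p2, n2) as one distance and three angles, F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)) with d = p2 - p1. A minimal NumPy sketch of this base descriptor (the paper's edge-focused sampling and hypothesis validation are not shown):

```python
import numpy as np

def angle(a, b):
    """Angle between two 3D vectors, in [0, pi]."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """Classic PPF: (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))."""
    d = p2 - p1
    return np.array([np.linalg.norm(d),
                     angle(n1, d), angle(n2, d), angle(n1, n2)])

p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([0.1, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(point_pair_feature(p1, n1, p2, n2))  # [0.1, 1.5708, 1.5708, 1.5708]
```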

Citations: 0
A survey on facial image deblurring
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-023-0336-6
Bingnan Wang, Fanjiang Xu, Quan Zheng

When a facial image is blurred, high-level vision tasks such as face recognition are significantly affected. The purpose of facial image deblurring is to recover a clear image from a blurry input, which in turn improves the accuracy of such downstream tasks. However, general deblurring methods do not perform well on facial images. Therefore, face deblurring methods have been proposed that improve performance by adding semantic or structural information as specific priors, according to the characteristics of facial images. In this paper, we survey and summarize recently published methods for facial image deblurring, most of which are based on deep learning. First, we provide a brief introduction to the modeling of image blurring. Next, we divide face deblurring methods into two categories: model-based methods and deep learning-based methods. Furthermore, we summarize the datasets, loss functions, and performance evaluation metrics commonly used in training neural networks. We show the performance of classical methods on these datasets and metrics, and briefly discuss the differences between model-based and learning-based methods. Finally, we discuss current challenges and possible future research directions.
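
The image blurring model such surveys introduce is conventionally written as y = k * x + n: a latent sharp image x convolved with a blur kernel k, plus noise n. A minimal NumPy/SciPy sketch of this degradation model (the box kernel and noise level are illustrative stand-ins):

```python
import numpy as np
from scipy.signal import convolve2d

def blur(image, kernel, noise_sigma=0.01, seed=0):
    """Degradation model y = k * x + n (convolution plus additive noise)."""
    rng = np.random.default_rng(seed)
    y = convolve2d(image, kernel, mode="same", boundary="symm")
    return y + rng.normal(0.0, noise_sigma, image.shape)

kernel = np.full((5, 5), 1.0 / 25.0)  # crude stand-in for a motion/defocus kernel
sharp = np.random.rand(64, 64)        # grayscale image in [0, 1]
blurry = blur(sharp, kernel)
print(blurry.shape)  # (64, 64)
```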

Citations: 0
A visual modeling method for spatiotemporal and multidimensional features in epidemiological analysis: Applied COVID-19 aggregated datasets
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-023-0353-5
Yu Dong, Christy Jie Liang, Yi Chen, Jie Hua

The visual modeling method enables flexible interactions with rich graphical depictions of data and supports exploration of the complexities of epidemiological analysis. However, most epidemiology visualizations do not support combined analysis of the objective factors that might influence the transmission situation, resulting in a lack of quantitative and qualitative evidence. To address this issue, we developed a portrait-based visual modeling method called +msRNAer. This method considers the spatiotemporal features of virus transmission patterns and the multidimensional features of objective community risk factors, enabling portrait-based exploration and comparison in epidemiological analysis. We applied +msRNAer to aggregated COVID-19-related datasets in New South Wales, Australia, combining COVID-19 case number trends, geo-information, intervention events, and expert-supervised risk factors extracted from local-government-area-based censuses. We refined the +msRNAer workflow with collaborative views and evaluated its feasibility, effectiveness, and usefulness through one user study and three subject-driven case studies. Positive feedback from experts indicates that +msRNAer supports a general analytical understanding: portraits not only relate time-varying case trends to risk factors, but also support navigation across geographical, timeline, and other factor comparisons. Through these interactions, experts discovered functional and practical implications of long-standing community factors for the vulnerability exposed by the pandemic. Experts confirmed that +msRNAer is expected to deliver the same visual modeling benefits for spatiotemporal and multidimensional features in other epidemiological analysis scenarios.

Citations: 0
Towards robustness and generalization of point cloud representation: A geometry coding method and a large-scale object-level dataset
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-11-30 | DOI: 10.1007/s41095-022-0305-5
Mingye Xu, Zhipeng Zhou, Yali Wang, Yu Qiao

Robustness and generalization are two challenging problems for learning point cloud representation. To tackle these problems, we first design a novel geometry coding model, which can effectively use an invariant eigengraph to group points with similar geometric information, even when such points are far from each other. We also introduce a large-scale point cloud dataset, PCNet184. It consists of 184 categories and 51,915 synthetic objects, which brings new challenges for point cloud classification, and provides a new benchmark to assess point cloud cross-domain generalization. Finally, we perform extensive experiments on point cloud classification, using ModelNet40, ScanObjectNN, and our PCNet184, and segmentation, using ShapeNetPart and S3DIS. Our method achieves comparable performance to state-of-the-art methods on these datasets, for both supervised and unsupervised learning. Code and our dataset are available at https://github.com/MingyeXu/PCNet184.
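
A standard way to give far-apart points comparable, rotation-invariant geometric signatures is to describe each point by the eigenvalues of its local neighborhood covariance; points on similar local structures (planes, edges, corners) then receive similar descriptors. The sketch below illustrates only that general idea, not the paper's eigengraph construction:

```python
import numpy as np

def local_eigen_features(points, k=16):
    """Per-point eigenvalues of the k-nearest-neighbor covariance matrix."""
    feats = np.zeros((len(points), 3))
    for i in range(len(points)):
        # Brute-force k-NN; fine for a sketch, use a KD-tree at scale.
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)                      # 3x3 local covariance
        feats[i] = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending
    return feats

pts = np.random.rand(256, 3)
print(local_eigen_features(pts).shape)  # (256, 3)
```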

Citations: 0
A survey of urban visual analytics: Advances and future directions
IF 17.3 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2023-01-01 | Epub Date: 2022-10-18 | DOI: 10.1007/s41095-022-0275-7
Zikun Deng, Di Weng, Shuhan Liu, Yuan Tian, Mingliang Xu, Yingcai Wu

Developing effective visual analytics systems demands care in characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualization, analyze 7 types of computational method, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.

Citations: 0
Message from the Best Paper Award Committee
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2022-04-20 | DOI: 10.1007/s41095-022-0285-5
Ming C. Lin, Xin Tong, Wenping Wang
{"title":"Message from the Best Paper Award Committee","authors":"Ming C. Lin,Xin Tong,Wenping Wang","doi":"10.1007/s41095-022-0285-5","DOIUrl":"https://doi.org/10.1007/s41095-022-0285-5","url":null,"abstract":"","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"92 4","pages":"329-329"},"PeriodicalIF":6.9,"publicationDate":"2022-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138494565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised random forest for affinity estimation
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 (Computer Science, Software Engineering) | Pub Date: 2022-01-01 | Epub Date: 2021-12-06 | DOI: 10.1007/s41095-021-0241-9
Yunai Yi, Diya Sun, Peixin Li, Tae-Kyun Kim, Tianmin Xu, Yuru Pei

This paper presents an unsupervised clustering random-forest-based metric for affinity estimation in large and high-dimensional data. The criterion used for node splitting during forest construction can handle rank-deficiency when measuring cluster compactness. The binary forest-based metric is extended to continuous metrics by exploiting both the common traversal path and the smallest shared parent node. The proposed forest-based metric efficiently estimates affinity by passing down data pairs in the forest using a limited number of decision trees. A pseudo-leaf-splitting (PLS) algorithm is introduced to account for spatial relationships, which regularizes affinity measures and overcomes inconsistent leaf assignments. The random-forest-based metric with PLS facilitates the establishment of consistent and point-wise correspondences. The proposed method has been applied to automatic phrase recognition using color and depth videos and point-wise correspondence. Extensive experiments demonstrate the effectiveness of the proposed method in affinity estimation in a comparison with the state-of-the-art.
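
The binary forest affinity that such metrics build on is simply the fraction of trees in which two samples land in the same leaf. A minimal scikit-learn sketch of that baseline (the continuous extension via traversal paths and the PLS regularization described above are not shown):

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

# An unsupervised forest: each tree is a random partition of feature space.
X = np.random.rand(100, 10)
forest = RandomTreesEmbedding(n_estimators=50, max_depth=5, random_state=0).fit(X)
leaves = forest.apply(X)  # (n_samples, n_trees): leaf index of each sample per tree

def forest_affinity(leaves, i, j):
    """Binary affinity: fraction of trees where samples i and j share a leaf."""
    return float(np.mean(leaves[i] == leaves[j]))

print(forest_affinity(leaves, 0, 1))
```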

Citations: 0