
Latest Publications from 2019 Digital Image Computing: Techniques and Applications (DICTA)

AB-PointNet for 3D Point Cloud Recognition
Pub Date: 2019-04-08 | DOI: 10.1109/DICTA47822.2019.8945926
J. Komori, K. Hotta
Semantic segmentation of 3D point clouds is a difficult task due to their unordered representation. PointNet is a pioneering work that uses 3D point clouds directly to predict per-point semantic labels. However, it predicts labels without using local structure in metric space. Recent research has tackled this problem and achieved better performance. In addition to that problem, we consider that treating all channels with the same weight is an obstacle to improving accuracy. Therefore, we propose AB-PointNet, a modification that predicts 3D point semantic labels by considering the importance of channels. To emphasize the important channels, we use an attention module that emphasizes channels useful for prediction and suppresses unimportant ones, which makes it possible to learn more effective features. In experiments, we evaluate our method on a large-scale indoor 3D point cloud dataset with 13 semantic labels. The proposed AB-PointNet improves mean IoU by 3.2% over the conventional PointNet.
Citations: 3
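The attention module described in this abstract learns one importance weight per feature channel and rescales the channels before prediction. As a rough illustration of that idea only, the sketch below applies a squeeze-and-excitation style gate to PointNet-style per-point features; the layer widths, reduction ratio, and tensor shapes are assumptions rather than the authors' published architecture.

```python
# Illustrative channel-attention block for per-point features, in the spirit of
# the module described in the abstract. Layer sizes and the reduction ratio are
# assumptions, not the authors' published AB-PointNet architecture.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Two small fully connected layers produce one gating weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, num_points) features from a PointNet-style shared MLP.
        squeezed = x.mean(dim=2)         # average over points -> (batch, channels)
        weights = self.fc(squeezed)      # per-channel importance in [0, 1]
        return x * weights.unsqueeze(2)  # emphasize useful channels, suppress the rest


if __name__ == "__main__":
    feats = torch.randn(8, 64, 1024)     # 8 clouds, 64 channels, 1024 points
    reweighted = ChannelAttention(64)(feats)
    print(reweighted.shape)              # torch.Size([8, 64, 1024])
```

Averaging over points gives one summary value per channel, and the sigmoid output in [0, 1] controls how strongly each channel passes through to the prediction head.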
Conference Chairs and Committees
Pub Date: 2018-12-01 | DOI: 10.1109/E-SCIENCE.2005.25
{"title":"Conference Chairs and Committees","authors":"","doi":"10.1109/E-SCIENCE.2005.25","DOIUrl":"https://doi.org/10.1109/E-SCIENCE.2005.25","url":null,"abstract":"","PeriodicalId":6696,"journal":{"name":"2019 Digital Image Computing: Techniques and Applications (DICTA)","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82438042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Visual Localization under Appearance Change: A Filtering Approach
Pub Date: 2018-11-20 | DOI: 10.1109/DICTA47822.2019.8945810
Anh-Dzung Doan, Y. Latif, Tat-Jun Chin, Yu Liu, Shin-Fang Ch'ng, Thanh-Toan Do, I. Reid
A major focus of current research on place recognition is visual localization for autonomous driving. In this scenario, since cameras operate continuously, it is realistic to expect videos as input to visual localization algorithms, as opposed to the single-image querying used in other place recognition works. In this paper, we show that exploiting temporal continuity in the test sequence significantly improves visual localization, both qualitatively and quantitatively. Although intuitive, this idea has not been fully explored in recent works. Our main contribution is a novel Monte Carlo-based visual localization technique that can efficiently reason over the image sequence. We also propose an image retrieval pipeline that relies on local features and an encoding technique to represent an image as a single vector. Experimental results show that the proposed method outperforms state-of-the-art approaches on the task of visual localization under significant appearance change. Our synthetic dataset is available at: http://tiny.cc/jd73bz
Citations: 4
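To make the filtering idea concrete, the following self-contained sketch runs one particle-filter step for sequence-based localization: pose particles are propagated with noisy odometry, reweighted by the descriptor similarity of each particle's nearest database image, and then resampled. The toy database, noise scales, similarity-to-weight mapping, and all helper names are illustrative assumptions; the paper's actual Monte Carlo formulation and image-encoding pipeline are described in the publication itself.

```python
# Minimal particle-filter sketch of sequence-based visual localization, assuming
# a database of (pose, global descriptor) pairs and one query descriptor per frame.
# All quantities below are synthetic placeholders, not the paper's method or data.
import numpy as np

rng = np.random.default_rng(0)

# Toy database: N reference poses (x, y, heading) with unit-norm global descriptors.
N, D = 500, 128
db_poses = rng.uniform(-50, 50, size=(N, 3))
db_desc = rng.normal(size=(N, D))
db_desc /= np.linalg.norm(db_desc, axis=1, keepdims=True)

P = 1000                                  # number of pose particles
particles = rng.uniform(-50, 50, size=(P, 3))
weights = np.full(P, 1.0 / P)

def predict(particles, odometry, noise=(0.5, 0.5, 0.05)):
    """Propagate particles with odometry plus Gaussian noise (motion model)."""
    return particles + odometry + rng.normal(scale=noise, size=particles.shape)

def update(particles, weights, query_desc, temperature=10.0):
    """Reweight particles by descriptor similarity of their nearest database pose."""
    nearest = np.argmin(
        np.linalg.norm(particles[:, None, :2] - db_poses[None, :, :2], axis=2), axis=1
    )
    sims = db_desc[nearest] @ query_desc  # cosine similarity (unit-norm descriptors)
    weights = weights * np.exp(temperature * sims)
    return weights / weights.sum()

def resample(particles, weights):
    """Resample proportionally to weight to avoid particle degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filtering step for a single frame of the test sequence.
query = rng.normal(size=D)
query /= np.linalg.norm(query)
particles = predict(particles, odometry=np.array([1.0, 0.0, 0.0]))
weights = update(particles, weights, query)
particles, weights = resample(particles, weights)
print("pose estimate:", np.average(particles, axis=0, weights=weights))
```

Because consecutive frames share a motion model, each retrieval result only needs to refine the belief rather than localize from scratch, which is the benefit of temporal continuity that the abstract highlights.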