Context Driven Geometry Consistent Document Reconstruction from Photographs

Yusuf Coşkuner, Yakup Genç
{"title":"Context Driven Geometry Consistent Document Reconstruction from Photographs","authors":"Yusuf Coşkuner, Yakup Genç","doi":"10.1109/SIU49456.2020.9302484","DOIUrl":null,"url":null,"abstract":"It is very practical to photograph and store documents using mobile phones. However, it is difficult to obtain a quality document image due to creases on the paper and limitations of the camera pose. These produce geometric distortions and irregular shadows on the document image. The rectification of geometric distortions requires an estimate of the 3D shape of the photographed document. In this study, we introduce a new approach that can estimate the 3D shape of the document using artificial neural networks. Neural network models extract geometric information from the context of the image to create a 3D shape. In addition, an adaptive thresholding algorithm was used to correct lighting-related distortions. Data reflecting actual document conditions were used to train the neural networks. Therefore, in addition to previous studies, the method can be applied to photograph samples which creased in many different ways and photographed from varying perspectives. Comparative experiments show that the method works well.","PeriodicalId":312627,"journal":{"name":"2020 28th Signal Processing and Communications Applications Conference (SIU)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 28th Signal Processing and Communications Applications Conference (SIU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIU49456.2020.9302484","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

It is very practical to photograph and store documents using mobile phones. However, it is difficult to obtain a high-quality document image due to creases in the paper and limitations on the camera pose. These produce geometric distortions and irregular shadows in the document image. Rectifying the geometric distortions requires an estimate of the 3D shape of the photographed document. In this study, we introduce a new approach that estimates the 3D shape of the document using artificial neural networks. The neural network models extract geometric information from the context of the image to reconstruct the 3D shape. In addition, an adaptive thresholding algorithm is used to correct lighting-related distortions. Data reflecting actual document conditions were used to train the neural networks. Therefore, going beyond previous studies, the method can be applied to documents that are creased in many different ways and photographed from varying perspectives. Comparative experiments show that the method works well.
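The abstract describes a two-stage pipeline: geometric rectification driven by an estimated 3D shape, followed by adaptive thresholding for lighting correction. The paper does not publish its network architecture or the exact thresholding variant, so the sketch below is only illustrative: the backward sampling map is a hypothetical placeholder for the output of the neural 3D shape model, OpenCV's adaptiveThreshold stands in for the lighting-correction step, and the block size and offset are tuning guesses.

```python
# Minimal sketch of the two-stage idea from the abstract (assumes OpenCV and
# NumPy). `backward_map` is a hypothetical placeholder for the geometry that
# the paper predicts with neural networks; it is NOT the authors' model.
import cv2
import numpy as np

def rectify_and_binarize(image_bgr, backward_map):
    """Unwarp a document photo, then suppress uneven illumination.

    image_bgr:    HxWx3 uint8 photo of the creased document.
    backward_map: HxWx2 float32 array; for each pixel of the flattened output
                  it gives the (x, y) source coordinate in the input photo.
    """
    map_x = backward_map[..., 0].astype(np.float32)
    map_y = backward_map[..., 1].astype(np.float32)

    # Stage 1 - geometric rectification: sample the distorted photo at the
    # predicted source coordinates to obtain a flattened document image.
    flattened = cv2.remap(image_bgr, map_x, map_y, cv2.INTER_LINEAR)

    # Stage 2 - illumination correction: adaptive thresholding computes a
    # per-neighborhood threshold, so soft shadows do not darken the output.
    gray = cv2.cvtColor(flattened, cv2.COLOR_BGR2GRAY)
    binarized = cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # Gaussian-weighted local mean
        cv2.THRESH_BINARY,
        31,   # blockSize: odd neighborhood size (tuning guess)
        15)   # C: offset subtracted from the local mean (tuning guess)
    return flattened, binarized
```

As a sanity check, an identity map (map_x[i, j] = j, map_y[i, j] = i) makes the geometric stage a no-op, so only the thresholding acts on the image.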