D-Extract: Extracting Dimensional Attributes From Product Images

Pushpendu Ghosh, N. Wang, Promod Yenigalla
DOI: 10.1109/WACV56688.2023.00363
Published in: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023-01-01
Citations: 1

Abstract

Product dimensions are a crucial piece of information that enables customers to make better buying decisions. E-commerce websites extract dimension attributes so that customers can filter search results according to their requirements. Existing methods extract dimension attributes from textual data such as the title and product description. However, this textual information is often ambiguous and disorganised. In comparison, images can be used to extract reliable and consistent dimensional information. With this motivation, we propose two novel architectures to extract dimensional information from product images. The first, the Single-Box Classification Network, classifies each text token in the image one at a time, whereas the second, the Multi-Box Classification Network, uses a transformer network to classify all detected text tokens simultaneously. To attain better performance, the proposed architectures are also fused with statistical inferences derived from the product category, which further increased the F1-score of the Single-Box Classification Network by 3.78% and of the Multi-Box Classification Network by ≈0.9%. We use a distant-supervision technique to create a large-scale automated dataset for pretraining and observe considerable improvement when the models are pretrained on this data before finetuning. The proposed model achieves a desirable precision of 91.54% at 89.75% recall and outperforms other state-of-the-art approaches by ≈4.76% in F1-score.
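The category-fusion step described above can be sketched as follows. This is a minimal illustration, assuming a simple convex combination of per-token classifier probabilities with a category-level label prior; the label set, function names, and fusion rule are hypothetical and not the paper's exact formulation.

```python
# Hypothetical sketch: fusing a token classifier's output with a
# statistical prior derived from the product category. The label set
# and the convex-combination rule are illustrative assumptions.

LABELS = ["length", "width", "height", "other"]

def fuse_with_category_prior(token_probs, category_prior, alpha=0.5):
    """Blend classifier probabilities with a category prior, then renormalize.

    alpha controls how much weight the category statistics receive.
    """
    fused = {lab: (1 - alpha) * token_probs[lab] + alpha * category_prior[lab]
             for lab in LABELS}
    total = sum(fused.values())
    return {lab: v / total for lab, v in fused.items()}

def classify_token(token_probs, category_prior, alpha=0.5):
    """Return the label with the highest fused probability."""
    fused = fuse_with_category_prior(token_probs, category_prior, alpha)
    return max(fused, key=fused.get)

# An ambiguous token: the classifier slightly prefers "length".
token_probs = {"length": 0.40, "width": 0.05, "height": 0.35, "other": 0.20}
# A category (e.g. mattresses) where height annotations dominate.
category_prior = {"length": 0.20, "width": 0.20, "height": 0.50, "other": 0.10}

print(classify_token(token_probs, category_prior, alpha=0.0))  # classifier alone
print(classify_token(token_probs, category_prior, alpha=0.5))  # with prior
```

In this sketch the prior flips an ambiguous decision: with `alpha=0` the classifier's own top label wins, while `alpha=0.5` lets the category statistics tip the token toward the label that is common for that product type.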