MDCKE: Multimodal deep-context knowledge extractor that integrates contextual information

Alexandria Engineering Journal · IF 6.8 · CAS Tier 2 (Engineering & Technology) · JCR Q1 (ENGINEERING, MULTIDISCIPLINARY) · Pub Date: 2025-02-08 · DOI: 10.1016/j.aej.2025.01.119
Hyojin Ko, Joon Yoo, Ok-Ran Jeong
{"title":"MDCKE: Multimodal deep-context knowledge extractor that integrates contextual information","authors":"Hyojin Ko,&nbsp;Joon Yoo,&nbsp;Ok-Ran Jeong","doi":"10.1016/j.aej.2025.01.119","DOIUrl":null,"url":null,"abstract":"<div><div>Extraction of comprehensive information from diverse data sources remains a significant challenge in contemporary research. Although multimodal Named Entity Recognition (NER) and Relation Extraction (RE) tasks have garnered significant attention, existing methods often focus on surface-level information, underutilizing the potential depth of the available data. To address this issue, this study introduces a Multimodal Deep-Context Knowledge Extractor (MDCKE) that generates hierarchical multi-scale images and captions from original images. These connectors between image and text enhance information extraction by integrating more complex data relationships and contexts to build a multimodal knowledge graph. Captioning precedes feature extraction, leveraging semantic descriptions to align global and local image features and enhance inter- and intramodality alignment. Experimental validation on the Twitter2015 and Multimodal Neural Relation Extraction (MNRE) datasets demonstrated the novelty and accuracy of MDCKE, resulting in an improvement in the F1-score by up to 5.83% and 26.26%, respectively, compared to State-Of-The-Art (SOTA) models. MDCKE was compared with top models, case studies, and simulations in low-resource settings, proving its flexibility and efficacy. An ablation study further corroborated the contribution of each component, resulting in an approximately 6% enhancement in the F1-score across the datasets.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"119 ","pages":"Pages 478-492"},"PeriodicalIF":6.8000,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825001474","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Extraction of comprehensive information from diverse data sources remains a significant challenge in contemporary research. Although multimodal Named Entity Recognition (NER) and Relation Extraction (RE) tasks have garnered significant attention, existing methods often focus on surface-level information, underutilizing the potential depth of the available data. To address this issue, this study introduces a Multimodal Deep-Context Knowledge Extractor (MDCKE) that generates hierarchical multi-scale images and captions from original images. These connectors between image and text enhance information extraction by integrating more complex data relationships and contexts to build a multimodal knowledge graph. Captioning precedes feature extraction, leveraging semantic descriptions to align global and local image features and enhance inter- and intramodality alignment. Experimental validation on the Twitter2015 and Multimodal Neural Relation Extraction (MNRE) datasets demonstrated the novelty and accuracy of MDCKE, yielding improvements in the F1-score of up to 5.83% and 26.26%, respectively, over State-Of-The-Art (SOTA) models. MDCKE was further evaluated against top-performing models and through case studies and simulations in low-resource settings, demonstrating its flexibility and efficacy. An ablation study corroborated the contribution of each component, with an approximately 6% enhancement in the F1-score across the datasets.
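To make the captioning-then-alignment pipeline described in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch: an image is expanded into hierarchical multi-scale views, caption/text features (stubbed here with random tensors) act as queries, and cross-modal attention aligns them with visual features before a downstream NER/RE head would consume the fused representation. All class names, dimensions, and the attention-based alignment are illustrative assumptions, not the authors' released implementation; in the actual system, captions for each view would come from a pretrained captioning model and be encoded by a language model.

```python
# Hypothetical sketch of a multi-scale captioning + cross-modal alignment pipeline.
import torch
import torch.nn as nn


class MultiScaleCropper(nn.Module):
    """Produce hierarchical views of an image tensor (global view + center crops)."""

    def __init__(self, scales=(1.0, 0.5)):
        super().__init__()
        self.scales = scales

    def forward(self, image):                      # image: (B, C, H, W)
        _, _, h, w = image.shape
        views = []
        for s in self.scales:
            ch, cw = int(h * s), int(w * s)
            top, left = (h - ch) // 2, (w - cw) // 2
            crop = image[:, :, top:top + ch, left:left + cw]
            # Resize every crop back to a common resolution.
            views.append(nn.functional.interpolate(crop, size=(h, w)))
        return views                               # list of (B, C, H, W) views


class CrossModalAligner(nn.Module):
    """Align text/caption features (queries) with visual features (keys/values)."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_feats, image_feats):
        fused, _ = self.attn(text_feats, image_feats, image_feats)
        return fused                               # fused features for an NER/RE head


if __name__ == "__main__":
    batch = torch.randn(2, 3, 224, 224)            # dummy images
    views = MultiScaleCropper()(batch)             # hierarchical multi-scale views

    # Placeholder encodings; a real system would caption each view and encode
    # the captions together with the sentence text using pretrained models.
    text_feats = torch.randn(2, 32, 256)                 # encoded captions + sentence tokens
    image_feats = torch.randn(2, len(views) * 49, 256)   # patch features pooled per view

    fused = CrossModalAligner()(text_feats, image_feats)
    print(fused.shape)                             # torch.Size([2, 32, 256])
```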
Source journal: Alexandria Engineering Journal (Engineering - General Engineering)
CiteScore: 11.20
Self-citation rate: 4.40%
Articles per year: 1015
Review time: 43 days
Journal description: Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. Alexandria Engineering Journal is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering
Latest articles in this journal:
• PUF-based lightweight authentication protocol for vehicle-to-grid communication with three-factor secrecy
• Feature embedded attention based hybrid approach for athletic injury risk prediction
• Masked deep networks based on self-supervised learning for folk art image recognition and optimization of digital strategies for intangible cultural heritage preservation
• Editorial Board
• An edge-available defect detection And Localization Flow Model