RelationLMM: Large Multimodal Model as Open and Versatile Visual Relationship Generalist

Chi Xie, Shuang Liang, Jie Li, Zhao Zhang, Feng Zhu, Rui Zhao, Yichen Wei
{"title":"RelationLMM: Large Multimodal Model as Open and Versatile Visual Relationship Generalist","authors":"Chi Xie;Shuang Liang;Jie Li;Zhao Zhang;Feng Zhu;Rui Zhao;Yichen Wei","doi":"10.1109/TPAMI.2025.3531452","DOIUrl":null,"url":null,"abstract":"Visual relationships are crucial for visual perception and reasoning, and cover tasks like Scene Graph Generation, Human-Object Interaction, and object affordance. Despite significant efforts, this field still suffers from the following limitations: specialists for a specific task without considering similar ones, strict and complex task formulations with limited flexibility, and underexploited reasoning with language and knowledge. To solve these limitations, we seek to build a new framework, one model for all tasks, over Large Multimodal Models (LMMs). LMMs offer the potential of unifying tasks, flexible forms, and reasoning with language. However, they fail to handle visual relationship tasks well. We find the obstacles include the conflicts between different tasks and insufficient instance-level information. We solve these problems by reforming the data for LMMs, rather than architectures, considering their strong language-in language-out capability. We propose to disassemble tasks into simple and common sub-tasks, verbally estimate instance confidence, and augment instance diversity, all without additional modules. These strategies help us build a visual relationship generalist, <italic>RelationLMM</i>, with a simple architecture. Exhaustive experiments demonstrate <italic>RelationLMM</i> is strong, generalizable and flexible to different tasks, with one model and one suite of weight.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 5","pages":"3515-3529"},"PeriodicalIF":18.6000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10845195/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Visual relationships are crucial for visual perception and reasoning, covering tasks such as Scene Graph Generation, Human-Object Interaction, and object affordance. Despite significant efforts, this field still suffers from the following limitations: specialist models built for a single task without considering similar ones, strict and complex task formulations with limited flexibility, and underexploited reasoning with language and knowledge. To address these limitations, we seek to build a new framework, one model for all tasks, on top of Large Multimodal Models (LMMs). LMMs offer the potential of unifying tasks, flexible forms, and reasoning with language. However, they fail to handle visual relationship tasks well. We find the obstacles include conflicts between different tasks and insufficient instance-level information. We solve these problems by reforming the data for LMMs, rather than their architectures, leveraging their strong language-in, language-out capability. We propose to disassemble tasks into simple and common sub-tasks, verbally estimate instance confidence, and augment instance diversity, all without additional modules. These strategies help us build a visual relationship generalist, RelationLMM, with a simple architecture. Extensive experiments demonstrate that RelationLMM is strong, generalizable, and flexible across different tasks, with one model and one set of weights.
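To make the data-reforming idea concrete, below is a minimal, hypothetical sketch of how a relationship task might be decomposed into simple sub-task prompts for an LMM, with instance confidence expressed verbally in the answer rather than through an added scoring head. The prompt wording and the `query_lmm` helper are illustrative assumptions, not RelationLMM's actual interface, which the abstract does not specify.

```python
# Hypothetical sketch: decomposing a visual-relationship task into
# sub-task prompts for a single LMM, as the abstract describes at a
# high level. All prompt formats here are assumptions for illustration.

from typing import List, Tuple


def query_lmm(image_path: str, prompt: str) -> str:
    """Placeholder for a call to a large multimodal model."""
    raise NotImplementedError("plug in your LMM inference here")


def list_instances(image_path: str) -> str:
    # Sub-task 1: enumerate candidate instances, serialized as text.
    return query_lmm(
        image_path,
        "List every person and object in the image with its bounding box.",
    )


def classify_relation(image_path: str, subj: str, obj: str) -> str:
    # Sub-task 2: predict the relation for one subject-object pair,
    # asking the model to state its confidence verbally (no extra module).
    prompt = (
        f"What is the relationship between {subj} and {obj}? "
        "Answer as '<predicate> (confidence: high/medium/low)'."
    )
    return query_lmm(image_path, prompt)


def scene_graph(image_path: str, pairs: List[Tuple[str, str]]) -> List[str]:
    # Compose the sub-tasks: one model, one set of weights, with the
    # task selected purely by the instruction text.
    return [classify_relation(image_path, s, o) for s, o in pairs]
```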