Semantic and Visual Cues for Humanitarian Computing of Natural Disaster Damage Images

H. Jomaa, Yara Rizk, M. Awad
2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)
DOI: 10.1109/SITIS.2016.70
Citations: 4

Abstract

Identifying different types of damage is essential in times of natural disasters, when first responders flood the internet with often-annotated images and texts, and rescue teams are overwhelmed trying to prioritize scarce resources. While most efforts in such humanitarian situations rely heavily on human labor and input, we propose in this paper a novel hybrid approach to help automate more humanitarian computing. Our framework merges low-level visual features capturing color, shape, and texture with a semantic attribute obtained by comparing the picture annotation to a bag of words. These visual and textual features were trained and tested on a dataset gathered from the SUN database and Google Images. The best accuracy obtained using low-level features alone is 91.3%, while appending the semantic attribute raised the accuracy to 95.5% using a linear SVM and 5-fold cross-validation, which motivates an updated folk statement: "an ANNOTATED image is worth a thousand words".
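The fusion step described above can be sketched in a few lines: compute a semantic attribute by matching annotation tokens against a damage-related bag of words, then append it to the low-level visual descriptor before classification. This is a minimal illustration, not the paper's implementation; the bag of words, the feature values, and the function names here are assumptions, and the linear SVM with 5-fold cross-validation that would consume the fused vectors is omitted.

```python
# Hedged sketch of visual-semantic feature fusion; vocabulary and feature
# values are illustrative placeholders, not taken from the paper.

# Hypothetical bag of damage-related words for the semantic attribute.
DAMAGE_BAG_OF_WORDS = {"collapsed", "flood", "debris", "destroyed", "rubble", "fire"}


def semantic_attribute(annotation: str) -> float:
    """Fraction of annotation tokens that appear in the damage bag of words."""
    tokens = annotation.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in DAMAGE_BAG_OF_WORDS)
    return hits / len(tokens)


def fused_feature_vector(visual_features: list, annotation: str) -> list:
    """Append the semantic attribute to the low-level visual descriptor."""
    return list(visual_features) + [semantic_attribute(annotation)]


# Example: a placeholder color/shape/texture descriptor plus an annotation.
visual = [0.12, 0.55, 0.33]
fused = fused_feature_vector(visual, "collapsed building with debris")
# "collapsed" and "debris" match 2 of 4 tokens, so the appended attribute is 0.5.
```

In the paper's pipeline, vectors like `fused` would then be fed to a linear SVM evaluated with 5-fold cross-validation.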