
Proceedings of the 2017 Workshop on Wearable MultiMedia: Latest Publications

Session details: Paper Session
Pub Date: 2017-06-06 DOI: 10.1145/3252801
S. Alletto
{"title":"Session details: Paper Session","authors":"S. Alletto","doi":"10.1145/3252801","DOIUrl":"https://doi.org/10.1145/3252801","url":null,"abstract":"","PeriodicalId":126678,"journal":{"name":"Proceedings of the 2017 Workshop on Wearable MultiMedia","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114904332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Proceedings of the 2017 Workshop on Wearable MultiMedia
Pub Date: 2017-06-06 DOI: 10.1145/3080538
S. Alletto, F. Pernici, Yoichi Sato
We are delighted to welcome you to the Workshop on Wearable Multimedia (WearMMe 2017), held in conjunction with the ACM International Conference on Multimedia Retrieval (ICMR) in Bucharest, Romania.

There has been substantial progress to date in developing computing devices and sensors that can be easily carried on the body. The last few years have also been marked by notable achievements in learning from sensory data. This unique combination poses research challenges and opportunities for the future of wearable computing. We believe wearable computing will be a very prominent research field for the multimedia and other communities. As such, there is a compelling need for science and technology that enable devices, algorithms, and humans to interact and reciprocally achieve humanistic intelligence. The range of real-world applications of wearables is broad, spanning from web and social applications (e.g., egocentric search engines, recommendation systems, and personalization) to medical robotics (e.g., assistive devices, bionic limbs, and exoskeletons).

The aim of this workshop is to bring together experts from various research communities, including multimedia, computer vision, human-computer interaction, robotics, and machine learning, to share recent advances and explore future research. Toward this end, we are proud to have organized an exciting program for this half-day event. We are pleased to have Associate Professor Yusuke Sugano of Osaka University, Japan, give a keynote speech on appearance-based gaze estimation from ubiquitous cameras. We are also fortunate to have Dr. Kyriaki Kalimeri of the ISI Foundation in Italy share her recent work on identifying urban mobility challenges for the visually impaired through mobile monitoring of multimodal bio-signals. Last but not least, we are pleased to have three stimulating presentations selected from papers submitted to the workshop.

Finally, we wish all the attendees a highly stimulating, informative, and enjoyable workshop.
Citations: 0
Wearable for Wearable: A Social Signal Processing Perspective for Clothing Analysis using Wearable Devices
Pub Date: 2017-06-06 DOI: 10.1145/3080538.3080540
Marco Godi, Maedeh Aghaei, Mariella Dimiccoli, M. Cristani
Clothing conveys a strong communicative message in terms of social signals, influencing the impression and behaviour of others towards a person; unfortunately, the nature of this message is not completely clear, and social signal processing approaches are only starting to consider this problem. Wearable computing devices offer a unique perspective in this scenario, capturing fine details of clothing items in the same way we do during a social interaction, through ego-centered points of view. These clothing characteristics can then be employed to unveil statistical relations with personal impressions. This position paper investigates this novel research direction, identifying the main objectives, possible problems, viable research strategies, techniques, and expected results. This analysis gives rise to brand-new concepts such as clothing saliency, that is, those parts of garments most relevant for triggering personal impressions.
Citations: 0
On the Exploitation of Hidden Markov Models to Improve Location-Based Temporal Segmentation of Egocentric Videos
Pub Date: 2017-06-06 DOI: 10.1145/3080538.3080539
Antonino Furnari, S. Battiato, G. Farinella
Wearable cameras make it easy to acquire long, unstructured egocentric videos. In this context, temporal video segmentation methods can be useful for improving the indexing, retrieval, and summarization of such content. While past research has investigated methods for temporal segmentation of egocentric videos according to different criteria (e.g., motion, location, or appearance), many of them do not explicitly enforce any form of temporal coherence. Moreover, evaluations have generally been performed using frame-based measures, which only account for the overall correctness of predicted frames, overlooking the structure of the produced segmentation. In this paper, we investigate how a Hidden Markov Model based on an ad-hoc transition matrix can be exploited to obtain a more accurate segmentation from frame-based predictions in the context of location-based segmentation of egocentric videos. We introduce a segment-based evaluation measure which strongly penalizes over-segmented and under-segmented results. Experiments show that exploiting a Hidden Markov Model for temporal smoothing greatly improves temporal segmentation results and outperforms current video segmentation methods designed for both third-person and first-person videos.
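To make the temporal-smoothing idea concrete, here is a minimal sketch of HMM-based smoothing via Viterbi decoding over per-frame location probabilities, assuming a hand-built transition matrix with a high self-transition probability to favor temporally coherent segments. The function name, the 0.95 default, and the uniform off-diagonal transitions are illustrative assumptions, not the paper's exact ad-hoc matrix.

```python
import numpy as np

def viterbi_smooth(frame_probs, self_transition=0.95):
    """Smooth per-frame location probabilities with an HMM (sketch).

    frame_probs: (T, K) array of per-frame class probabilities from any
    frame-based classifier. The ad-hoc transition matrix below simply
    favors staying in the current location class, which is what enforces
    temporal coherence; the paper's actual matrix may differ.
    """
    T, K = frame_probs.shape
    # Ad-hoc transition matrix: high probability of staying in the same
    # state, uniform probability of switching (assumed for illustration).
    trans = np.full((K, K), (1.0 - self_transition) / (K - 1))
    np.fill_diagonal(trans, self_transition)

    log_emit = np.log(frame_probs + 1e-12)
    log_trans = np.log(trans)

    # Forward pass: best log-score ending in each state at each frame.
    delta = np.empty((T, K))
    backptr = np.zeros((T, K), dtype=int)
    delta[0] = log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (prev K, cur K)
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]

    # Backtrack the most likely (temporally coherent) state sequence.
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path
```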
Citations: 2
Semi-Automatic Annotation with Predicted Visual Saliency Maps for Object Recognition in Wearable Video
Pub Date: 2017-06-06 DOI: 10.1145/3080538.3080541
J. Benois-Pineau, M. García-Vázquez, L. Moralez, A. A. Ramírez-Acosta
Recognition of objects of a given category in visual content is one of the key problems in computer vision and multimedia. It is strongly needed in wearable video for a wide range of important applications in society. Supervised learning approaches have proved to be the most effective for this task. They require ground truth for training models. This is especially true for Deep Convolutional Networks, but it also holds for other popular models such as SVMs on visual signatures. Annotating ground truth by drawing bounding boxes (BBs) is a very tedious task requiring significant human resources. Research on predicting visual attention in images and videos has attained maturity, particularly with regard to bottom-up visual attention modeling. Hence, instead of annotating the ground truth manually with BBs, we propose to use automatically predicted salient areas as object locators for annotation. Such saliency predictions are not perfect, however; hence, active contour models are applied to the saliency maps to isolate the most prominent areas covering the objects. The approach is tested within the framework of a well-studied supervised learning model: an SVM with psycho-visually weighted Bag-of-Words features. The egocentric GTEA dataset was used in the experiments. The difference in mAP (mean average precision) is less than 10 percent, while the mean annotation time is 36% lower.
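As a rough sketch of the annotation pipeline summarized above (predict a saliency map, isolate the most prominent region, and use its bounding box as a candidate annotation), the snippet below uses OpenCV's spectral-residual saliency from opencv-contrib-python and a simple threshold-plus-largest-contour step. Both are stand-ins chosen for illustration; the paper relies on its own saliency predictor and on active contour models rather than plain thresholding.

```python
import cv2
import numpy as np

def saliency_bounding_box(image_bgr, thresh_quantile=0.9):
    """Propose a candidate bounding box from a predicted saliency map.

    Sketch under stated assumptions: spectral-residual saliency and a
    threshold + largest-contour step stand in for the paper's saliency
    predictor and active contour models.
    """
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image_bgr)
    if not ok:
        return None
    sal_map = (sal_map * 255).astype(np.uint8)

    # Keep only the most salient pixels (top decile by default).
    t = int(np.quantile(sal_map, thresh_quantile))
    _, mask = cv2.threshold(sal_map, t, 255, cv2.THRESH_BINARY)

    # The largest connected salient blob approximates the prominent object.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return (x, y, w, h)  # candidate BB for semi-automatic annotation
```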
Citations: 2