
Proceedings of the 4th Annual on Lifelog Search Challenge: Latest Publications

Myscéal 2.0: A Revised Experimental Interactive Lifelog Retrieval System for LSC'21
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469064
Ly-Duyen Tran, Manh-Duy Nguyen, N. Binh, Hyowon Lee, C. Gurrin
Building an interactive retrieval system for lifelogging poses many challenges: beyond handling massive multi-modal personal data, such a tool must be both accurate and fast to respond. The Lifelog Search Challenge (LSC) is an international lifelog retrieval competition that inspires researchers to develop systems to cope with these challenges and evaluates the effectiveness of their solutions. In this paper, we upgrade our previous Myscéal system and present Myscéal 2.0 for LSC'21, with improved features inspired by experiments with novice users. The experiments show that a novice user achieved, on average, more than half of an expert's score. To narrow this gap, some potential enhancements were identified and integrated into the enhanced version.
Citations: 29
Exploring Intuitive Lifelog Retrieval and Interaction Modes in Virtual Reality with vitrivr-VR
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469061
Florian Spiess, Ralph Gasser, Silvan Heller, Luca Rossetto, Loris Sauter, Milan van Zanten, H. Schuldt
The multimodal nature of lifelog data collections poses unique challenges for multimedia management and retrieval systems. The Lifelog Search Challenge (LSC) offers an annual evaluation platform for such interactive retrieval systems, which compete against one another in finding items of interest within a set time frame. In this paper, we present the multimedia retrieval system vitrivr-VR, the latest addition to the vitrivr stack, which has participated in the LSC in recent years. vitrivr-VR leverages the 3D space of virtual reality (VR) to offer novel retrieval and user interaction models, which we describe with a special focus on design decisions taken for participation in the LSC.
Citations: 17
Enhanced SOMHunter for Known-item Search in Lifelog Data
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469074
Jakub Lokoč, František Mejzlík, Patrik Veselý, Tomás Soucek
SOMHunter is a modern, light-weight framework for known-item search in datasets of visual data such as images or videos. The framework combines an effective W2VV++ text-to-image search approach, a traditional Bayesian-like model for maintaining relevance scores influenced by positive examples, and several types of exploration and exploitation displays. With this initial configuration in 2020, the first prototype of the system already proved highly competitive against other state-of-the-art systems at the Video Browser Showdown and Lifelog Search Challenge competitions. In this paper, we present a new version of the system that further extends its visual data search capabilities. The new version combines localized text queries with the collage queries our team tested at VBS 2021 in two separate systems. Furthermore, the new version of SOMHunter will also integrate the CLIP text search model recently released by OpenAI. We believe that all these extensions will improve the chances of effectively initialising a search that can then continue with the already supported browsing capabilities.
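The Bayesian-like maintenance of relevance scores under positive examples can be illustrated with a minimal sketch: each item's score is multiplied by a likelihood term derived from its similarity to a user-selected positive example and then renormalised. The feature vectors, item names, and the Gaussian kernel below are illustrative assumptions, not SOMHunter's actual implementation.

```python
import math

def update_scores(scores, features, positive, sigma=0.5):
    """Multiply each item's relevance score by a Gaussian likelihood of its
    distance to a user-selected positive example, then renormalise."""
    pos_feat = features[positive]
    updated = {}
    for item, score in scores.items():
        dist = math.dist(features[item], pos_feat)  # Euclidean distance (Python 3.8+)
        updated[item] = score * math.exp(-dist ** 2 / (2 * sigma ** 2))
    total = sum(updated.values())
    return {item: s / total for item, s in updated.items()}

# toy 2-D feature vectors for three lifelog images (hypothetical)
features = {"a": (0.0, 0.0), "b": (0.1, 0.0), "c": (1.0, 1.0)}
scores = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}  # uniform prior
scores = update_scores(scores, features, positive="a")
# items close to the positive example now carry most of the probability mass
```

Repeating the update with further positive examples keeps sharpening the distribution, which is the general idea behind this style of relevance feedback.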
Citations: 15
Interactive Multimodal Lifelog Retrieval with vitrivr at LSC 2021
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469062
Silvan Heller, Ralph Gasser, Mahnaz Parian-Scherb, Sanja Popovic, Luca Rossetto, Loris Sauter, Florian Spiess, H. Schuldt
The Lifelog Search Challenge (LSC) is an annual benchmarking competition for interactive multimedia retrieval systems, where participating systems compete in finding events based on textual descriptions containing hints about structured, semi-structured, and/or unstructured data. In this paper, we present the multimedia retrieval system vitrivr, a long-time participant in the LSC, with a focus on new functionality. Specifically, we introduce an image stabilisation module, added prior to feature extraction to reduce the image degradation caused by lifelogger movements, and discuss how geodata is used during query formulation, query execution, and result presentation.
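As a rough illustration of the kind of geodata use mentioned above, query execution can restrict results to images captured near a query location using the haversine great-circle distance. The image names and coordinates below are hypothetical, and this is only a sketch of a distance filter, not vitrivr's actual geodata handling.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius ~6371 km

# keep only lifelog images captured within 1 km of the query location
images = {"img_a": (53.3853, -6.2564), "img_b": (47.5596, 7.5886)}  # hypothetical coords
query = (53.3861, -6.2557)
nearby = [k for k, (lat, lon) in images.items() if haversine_km(lat, lon, *query) <= 1.0]
print(nearby)  # only img_a is within range
```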
Citations: 21
LifeSeeker 3.0: An Interactive Lifelog Search Engine for LSC'21
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469065
Thao-Nhu Nguyen, Tu-Khiem Le, Van-Tu Ninh, M. Tran, Nguyen Thanh Binh, G. Healy, A. Caputo, C. Gurrin
In this paper, we present the interactive lifelog retrieval engine developed for the LSC'21 comparative benchmarking challenge. The LifeSeeker 3.0 interactive lifelog retrieval engine is an enhanced version of our previous system, LifeSeeker 2.0, which participated in LSC'20. The system is developed jointly by Dublin City University and the Ho Chi Minh City University of Science. The implementation of LifeSeeker 3.0 focuses on searching and filtering by text query using a weighted Bag-of-Words model with visual concept augmentation and three weighted vocabularies. Visual similarity search is improved using a bag of local convolutional features, while the previous version's performance is enhanced in terms of query processing time, result display, and browsing support.
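A Bag-of-Words scorer with several weighted vocabularies, of the general kind described above, can be sketched as follows. The vocabulary contents and weights are invented for illustration and are not LifeSeeker's actual vocabularies or weighting scheme.

```python
from collections import Counter

def weighted_bow_score(query_terms, doc_terms, vocabularies):
    """Score a document against a query by summing term-frequency overlaps,
    with each matching term weighted by the vocabulary it belongs to."""
    doc_counts = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        for vocab_terms, weight in vocabularies:
            if term in vocab_terms:
                score += weight * doc_counts[term]
    return score

# three hypothetical vocabularies with descending weights
vocabularies = [
    ({"car", "coffee", "laptop"}, 2.0),   # e.g. visual concepts
    ({"office", "kitchen"}, 1.5),         # e.g. locations
    ({"meeting", "invoice"}, 1.0),        # e.g. OCR / text terms
]
doc = ["coffee", "office", "coffee"]  # terms annotated on one lifelog image
print(weighted_bow_score(["coffee", "office"], doc, vocabularies))  # 5.5
```

Ranking the collection by this score, then filtering by metadata, mirrors the search-then-filter flow that text-query lifelog systems typically use.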
Citations: 24
Lifelogging as a Memory Prosthetic
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469271
A. Smeaton
Since computers were first used to address the challenge of managing information rather than computing arithmetic values, or even before that, since MEMEX was designed by Vannevar Bush in the 1940s, we have been building systems that help people like us find information accurately and quickly. These systems have grown to be technological marvels, discovering and indexing information almost as soon as it appears online and making it available to billions of people for search and delivery within fractions of a second, across a range of devices. Yet it is well known that half the time people are actually searching for things they once knew but have since forgotten, or cannot remember where they first found that information, and need to re-find it. As our science of information seeking and information discovery has progressed, we rarely ask why people forgot those things in the first place. If we were allowed to jump back in time, say 50 years, and re-start the development of information retrieval as a technology, then perhaps we would build systems that help us to remember and to learn, rather than trying to plug the gap and find information for us when we forget. In separate but parallel and sometimes overlapping developments, the analysis and indexing of visual information (images and video) has also made spectacular progress, mostly within the last decade. Using automated processes we can detect and track objects, we can describe visual content as tags or even as text captions, we can now generate realistic, high-quality visual content using machine learning, and we can compute high-level abstract features of visual content such as salience, aesthetics, and even memorability. One of the areas where information management/retrieval, with its 50 years of technological progress, meets computer vision, with its recent decade of spectacular development, is lifelogging.
At this intersection we can apply computer vision techniques to analyse and index visual lifelogs generated from wearable cameras, for example, in order to support lifelog search and browsing tasks. But we should ask ourselves whether this really is the right way for us to use our lifelogs. Memory is one of the core features that make us what we are, yet it is fragile and only partly understood. We have no real control over what we remember and what we forget, and when we really do need to remember something important, we make ham-fisted efforts to consciously over-ride our natural tendency to forget. We do this, for example, by rehearsing and replaying information, building on the Ebbinghaus principle of repeated conscious reviewing to overcome transience, the general deterioration of memory over time. In this presentation I will probe deeper into memory, recall, recognition, memorability, and memory triggers, and how our lifelogs could really act as memory prosthetics, visual triggers for our own natural memories. This will allow us to ask whether the lifelog challenges we build and run at events such as the annual Lifelog Search Challenge are framed appropriately, and whether they take us in a direction where lifelogs are genuinely useful to a broad population rather than to a small niche. Finally, I will touch on the daunting scenario in which everything about us could be remembered, and ask whether we would really want that to happen.
Citations: 1
Exploring Graph-querying approaches in LifeGraph
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469068
Luca Rossetto, Matthias Baumgartner, Ralph Gasser, Lucien Heitz, Ruijie Wang, A. Bernstein
The multi-modal and interrelated nature of lifelog data makes it well suited for graph-based representations. In this paper, we present the second iteration of LifeGraph, a Knowledge Graph for Lifelog Data, initially introduced during the 3rd Lifelog Search Challenge in 2020. This second iteration incorporates several lessons learned from the previous version. While the actual graph has undergone only small changes, the mechanisms by which it is traversed during querying as well as the underlying storage system which performs the traversal have been changed. The means for query formulation have also been slightly extended in capability and made more efficient and intuitive. All these changes have the aim of improving result quality and reducing query time.
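A minimal sketch of the conjunctive triple-pattern matching that underlies knowledge-graph querying of this kind; the triples, predicates, and image identifiers below are hypothetical and do not reflect LifeGraph's actual schema or storage system.

```python
# (subject, predicate, object) triples of a toy lifelog knowledge graph
triples = [
    ("img_001", "depicts", "coffee"),
    ("img_001", "takenAt", "office"),
    ("img_002", "depicts", "coffee"),
    ("img_002", "takenAt", "kitchen"),
]

def match(triples, predicate_object_pairs):
    """Return all subjects satisfying every (predicate, object) constraint,
    i.e. a conjunctive triple-pattern query evaluated by set intersection."""
    result = None
    for pred, obj in predicate_object_pairs:
        subjects = {s for s, p, o in triples if p == pred and o == obj}
        result = subjects if result is None else result & subjects
    return result or set()

print(match(triples, [("depicts", "coffee"), ("takenAt", "office")]))
# {'img_001'}
```

Real graph stores evaluate such patterns with indexes and join ordering, which is where the traversal-mechanism changes described in the abstract would matter.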
Citations: 16
Flexible Interactive Retrieval SysTem 2.0 for Visual Lifelog Exploration at LSC 2021
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469072
Hoang-Phuc Trang-Trung, Thanh-Cong Le, Mai-Khiem Tran, Van-Tu Ninh, Tu-Khiem Le, C. Gurrin, Minh-Triet Tran
With a huge collection of photos and video clips, it is essential to provide an efficient and easy-to-use system for users to retrieve moments of interest with a wide variation of query types. This motivates us to develop and upgrade our flexible interactive retrieval system for visual lifelog exploration. In this paper, we briefly introduce version 2 of our system with the following main features. Our system supports multiple modalities for interaction and query processing, including visual query by meta-data, text query and visual information matching based on a joint embedding model, scene clustering based on visual and location information, flexible temporal event navigation, and query expansion with visual examples. With the flexibility in system architecture, we expect our system can easily integrate new modules to enhance its functionalities.
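Text-and-image matching with a joint embedding model, as mentioned above, reduces at query time to ranking images by similarity to the embedded text query. A toy sketch with hypothetical 3-dimensional embeddings (real systems use learned embeddings of hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# in a joint embedding space, a text query and the images it describes are
# trained to lie close together, so retrieval is nearest-neighbour ranking
query_vec = [0.9, 0.1, 0.0]  # hypothetical embedding of a text query
image_vecs = {"img_a": [0.8, 0.2, 0.1], "img_b": [0.0, 0.1, 0.9]}
ranked = sorted(image_vecs, key=lambda k: cosine(query_vec, image_vecs[k]), reverse=True)
print(ranked)  # img_a ranks first
```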
Citations: 7
ViRMA: Virtual Reality Multimedia Analytics at LSC 2021
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469067
Aaron Duane, Björn þór Jónsson
In this paper we describe the first iteration of the ViRMA prototype system, a novel approach to multimedia analysis in virtual reality and inspired by the M3 data model. We intend to evaluate our approach via the Lifelog Search Challenge (LSC) to serve as a benchmark against other multimedia analytics systems.
Citations: 14
LifeConcept: An Interactive Approach for Multimodal Lifelog Retrieval through Concept Recommendation
Pub Date : 2021-08-21 DOI: 10.1145/3463948.3469070
Wei-Hong Ang, An-Zi Yen, Tai-Te Chu, Hen-Hsen Huang, Hsin-Hsi Chen
The major challenge in visual lifelog retrieval is the semantic gap between textual queries and visual concepts. This paper presents our work for the Lifelog Search Challenge 2021 (LSC'21), an annual benchmarking activity for comparing approaches to interactive retrieval from multimodal lifelogs. We propose LifeConcept, an interactive lifelog search system aimed at accelerating the retrieval process and retrieving more precise results. In this work, we introduce several new features, such as the number of people, location clusters, and objects with colour. Moreover, we obtain visual concepts from the images with computer vision models and propose a concept recommendation method to reduce the semantic gap. In this way, users can efficiently set up conditions matching their requirements and, based on the suggestions, search for the desired images with appropriate query terms.
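A simple co-occurrence baseline conveys the flavour of concept recommendation: given concepts the user has already selected, suggest those that most often co-occur with them in the annotated collection. This is an illustrative sketch under invented annotations, not LifeConcept's actual recommendation method.

```python
from collections import Counter

def recommend_concepts(annotations, selected, top_k=2):
    """Suggest concepts that most frequently co-occur with the user's
    already-selected concepts across the annotated image collection."""
    cooc = Counter()
    for concepts in annotations:
        if selected <= set(concepts):  # image matches the current selection
            for c in concepts:
                if c not in selected:
                    cooc[c] += 1
    return [c for c, _ in cooc.most_common(top_k)]

# hypothetical per-image concept annotations
annotations = [
    ["coffee", "laptop", "office"],
    ["coffee", "laptop", "kitchen"],
    ["coffee", "office"],
]
print(recommend_concepts(annotations, {"coffee"}))
# laptop and office co-occur most often with "coffee"
```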
{"title":"LifeConcept: An Interactive Approach for Multimodal Lifelog Retrieval through Concept Recommendation","authors":"Wei-Hong Ang, An-Zi Yen, Tai-Te Chu, Hen-Hsen Huang, Hsin-Hsi Chen","doi":"10.1145/3463948.3469070","DOIUrl":"https://doi.org/10.1145/3463948.3469070","url":null,"abstract":"The major challenge in visual lifelog retrieval is the semantic gap between textual queries and visual concepts. This paper presents our work on the Lifelog Search Challenge 2021 (LSC'21), an annual comparative benchmarking activity for comparing approaches to interactive retrieval from multimodal lifelogs. We propose LifeConcept, an interactive lifelog search system that is aimed at accelerating the retrieval process and retrieving more precise results. In this work, we introduce several new features such as the number of people, location cluster, and object with color. Moreover, we obtain visual concepts from the images with computer vision models and propose a concept recommendation method to reduce the semantic gap. In this way, users can efficiently set up the related conditions for their requirements and search the desired images with appropriate query terms based on the suggestion.","PeriodicalId":150532,"journal":{"name":"Proceedings of the 4th Annual on Lifelog Search Challenge","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131197290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Journal: Proceedings of the 4th Annual on Lifelog Search Challenge