
Latest publications from the 2015 IEEE International Symposium on Multimedia (ISM)

An Adaptive H.265/HEVC Encoding Control for 8K UHDTV Movies Based on Motion Complexity Estimation
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.74
Shota Orihashi, Rintaro Harada, Y. Matsuo, J. Katto
In this paper, we propose a method to control H.265/HEVC encoding for 8K UHDTV moving pictures by detecting the amount and complexity of object motion. In 8K video, which has very high spatial resolution, motion has a large influence on encoding efficiency and processing time. The proposed method estimates motion features in an external process that matches local feature points between two frames, then selects an optimal prediction mode and determines the search range of motion vectors. Experiments show that local feature matching between frames can detect the motion complexity of 8K movies and that optimal encoding configurations can be selected accordingly. With our method, we achieve highly efficient, low-computation encoding.
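As a rough illustration of the kind of pipeline the abstract describes, the sketch below estimates motion complexity from local feature-point matches between two frames and maps it to encoder settings. It is not the authors' implementation: the ORB detector, the thresholds, and the `choose_encoder_config` mapping are illustrative assumptions.

```python
import cv2
import numpy as np

def motion_complexity(frame_a, frame_b, max_features=500):
    """Mean keypoint displacement (in pixels) between two grayscale frames."""
    orb = cv2.ORB_create(max_features)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if not matches:
        return 0.0
    disp = [np.hypot(kp_a[m.queryIdx].pt[0] - kp_b[m.trainIdx].pt[0],
                     kp_a[m.queryIdx].pt[1] - kp_b[m.trainIdx].pt[1])
            for m in matches]
    return float(np.mean(disp))

def choose_encoder_config(complexity):
    """Map estimated motion complexity to illustrative encoder settings."""
    if complexity < 4:            # near-static scene: small search range suffices
        return {"motion_search_range": 16, "prefer_merge_mode": True}
    if complexity < 16:           # moderate motion
        return {"motion_search_range": 32, "prefer_merge_mode": False}
    return {"motion_search_range": 64, "prefer_merge_mode": False}   # fast motion
```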
{"title":"An Adaptive H.265/HEVC Encoding Control for 8K UHDTV Movies Based on Motion Complexity Estimation","authors":"Shota Orihashi, Rintaro Harada, Y. Matsuo, J. Katto","doi":"10.1109/ISM.2015.74","DOIUrl":"https://doi.org/10.1109/ISM.2015.74","url":null,"abstract":"In this paper, we propose a method to control H.265/HEVC encoding for 8K UHDTV moving pictures by detecting amount or complexity of object motions. In 8K video, which has very high spatial resolution, motion has a big influence on encoding efficiency and processing time. The proposed method estimates motion features by external process which uses local feature points matching between two frames, selects an optimal prediction mode and determines search ranges of motion vectors. Experiments show we can detect motion complexity of 8K movies by using local feature matching between frames and we can select optimal configurations of encoding. By our method, we achieved highly efficient and low computation encoding.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126371744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Single-View Food Portion Estimation Based on Geometric Models
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.67
S. Fang, Chang Liu, F. Zhu, E. Delp, C. Boushey
In this paper we present a food portion estimation technique based on a single-view food image, used to estimate the amount of energy (in kilocalories) consumed at a meal. Unlike previous methods we have developed, the new technique is capable of estimating food portions without manual tuning of parameters. Although single-view 3D scene reconstruction is in general an ill-posed problem, the use of geometric models such as the shape of a container can help to partially recover the 3D parameters of food items in the scene. Based on the estimated 3D parameters of each food item and a reference object in the scene, the volume of each food item in the image can be determined. The weight of each food can then be estimated using the density of the food item. Assuming accurate segmentation and food classification, we were able to achieve an error of less than 6% for the energy estimate of a meal image.
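The volume-to-energy chain in the abstract can be illustrated with a small sketch: a container approximated by a geometric model (here a cylinder), a reference object of known size to recover the pixel-to-centimetre scale, and density and energy factors to convert volume to kilocalories. The cylinder assumption, function names, and numeric values below are hypothetical, not the paper's models or data.

```python
import math

def pixels_to_cm(length_px, ref_px, ref_cm):
    """Convert a pixel measurement using a reference object of known physical size."""
    return length_px * (ref_cm / ref_px)

def cylinder_volume_cm3(radius_px, height_px, ref_px, ref_cm):
    """Volume of a container modelled as a cylinder, measured in the same image."""
    r = pixels_to_cm(radius_px, ref_px, ref_cm)
    h = pixels_to_cm(height_px, ref_px, ref_cm)
    return math.pi * r * r * h

def energy_kcal(volume_cm3, density_g_per_cm3, kcal_per_gram):
    """Volume -> weight via density, weight -> energy via a food-specific factor."""
    return volume_cm3 * density_g_per_cm3 * kcal_per_gram

# Toy example: a bowl measuring 60 px radius and 40 px height, with a 2.4 cm coin
# spanning 30 px in the same image (all numbers hypothetical).
vol = cylinder_volume_cm3(60, 40, ref_px=30, ref_cm=2.4)
print(round(energy_kcal(vol, density_g_per_cm3=0.73, kcal_per_gram=1.3)))
```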
{"title":"Single-View Food Portion Estimation Based on Geometric Models","authors":"S. Fang, Chang Liu, F. Zhu, E. Delp, C. Boushey","doi":"10.1109/ISM.2015.67","DOIUrl":"https://doi.org/10.1109/ISM.2015.67","url":null,"abstract":"In this paper we present a food portion estimation technique based on a single-view food image used for the estimation of the amount of energy (in kilocalories) consumed at a meal. Unlike previous methods we have developed, the new technique is capable of estimating food portion without manual tuning of parameters. Although single-view 3D scene reconstruction is in general an ill-posed problem, the use of geometric models such as the shape of a container can help to partially recover 3D parameters of food items in the scene. Based on the estimated 3D parameters of each food item and a reference object in the scene, the volume of each food item in the image can be determined. The weight of each food can then be estimated using the density of the food item. We were able to achieve an error of less than 6% for energy estimation of an image of a meal assuming accurate segmentation and food classification.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126511355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
Scene Classification Using External Knowledge Source
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.85
Esfandiar Zolghadr, B. Furht
In this paper, we introduce a model for scene category recognition using metadata from a labeled training dataset. We define a measure of object-scene relevance and apply it to scene category classification to increase the coherence of objects in classification and annotation tasks. We show how our context-based extension of the supervised Latent Dirichlet Allocation (LDA) model increases recognition accuracy when the feature mix is influenced by our relevancy score. We demonstrate that the proposed approach performs well on the LabelMe dataset. A comparison between our proposed approach and state-of-the-art semi-supervised clustering algorithms using labeled data shows the effectiveness of our approach in interpreting scenes.
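A minimal sketch of what an object-scene relevance measure derived from labeled metadata could look like is given below; it uses a Laplace-smoothed co-occurrence frequency and a naive-Bayes-style scene score. This illustrates the general idea only, not the paper's measure or its supervised-LDA extension, and the toy training data are hypothetical.

```python
import math
from collections import defaultdict

def build_relevance(training_items, alpha=1.0):
    """training_items: list of (scene_label, [object_labels]) pairs from labeled metadata."""
    counts = defaultdict(lambda: defaultdict(int))   # counts[scene][object]
    scene_totals = defaultdict(int)
    vocab = set()
    for scene, objects in training_items:
        for obj in objects:
            counts[scene][obj] += 1
            scene_totals[scene] += 1
            vocab.add(obj)

    def relevance(obj, scene):
        # Laplace-smoothed co-occurrence frequency, read as P(object | scene).
        return (counts[scene][obj] + alpha) / (scene_totals[scene] + alpha * len(vocab))

    return relevance, list(counts)

def classify(detected_objects, relevance, scenes):
    """Pick the scene whose detected objects are jointly most relevant (naive-Bayes style)."""
    return max(scenes, key=lambda s: sum(math.log(relevance(o, s)) for o in detected_objects))

train = [("kitchen", ["stove", "sink", "mug"]),
         ("office", ["monitor", "keyboard", "mug"])]
rel, scenes = build_relevance(train)
print(classify(["mug", "keyboard"], rel, scenes))   # -> office
```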
{"title":"Scene Classification Using External Knowledge Source","authors":"Esfandiar Zolghadr, B. Furht","doi":"10.1109/ISM.2015.85","DOIUrl":"https://doi.org/10.1109/ISM.2015.85","url":null,"abstract":"In this paper, we introduce a model for scene category recognition using metadata of labeled training dataset. We define a measurement of object-scene relevance and apply it to scene category classification to increase coherence of objects in classification and annotation tasks. We show how our context-based extension of supervised Latent Dirichlet Allocation (LDA) model increases recognition accuracy when feature mix is influenced by our relevancy score. We demonstrate that the proposed approach performs well on LabelMe dataset. Comparison between our purposed approach and state of art semi-supervised clustering algorithms using labeled data shows effectiveness of our approach in interpretation of scenes.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123334925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feature Extraction from 2D Images for Body Composition Analysis
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.117
Ligaj Pradhan, Song Gao, Chengcui Zhang, B. Gower, S. Heymsfield, D. Allison, O. Affuso
Body volume and body shape have been used to estimate body composition in clinical research. However, determining body volume typically requires sophisticated and expensive equipment. Similarly, the use of body shape to predict body composition is limited by rater biases as well as reproducibility. In this paper, we introduce simple yet relatively accurate techniques for representing body volume and body shape that reduce the limitations of traditional approaches. We propose an automated method to construct a 3D model of the body by accumulating ellipse-like slices formed from the length and width features sampled from the back and side profile images. Body volume is represented in pixels by adding up the areas of the slices. Apart from representing body volume in pixels, we also extract shape features from the 2D images and cluster individuals according to their body shape. The body volume representation and the proposed shape features, together with other meta-information including age, sex, race, height, and weight, can be effectively used in body composition prediction. Our study results indicate that the body volume calculated by the proposed method is reasonably accurate and that the extracted shape clusters provide important information when estimating body composition.
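The slice-accumulation step lends itself to a short worked example: if row i of the back-profile silhouette has width w_i and the same row of the side profile has depth d_i, an elliptical cross-section has area (pi/4)*w_i*d_i, and summing over rows yields a volume in pixel units. The sketch below assumes the two silhouettes are already segmented and sampled at the same rows; the toy numbers are hypothetical.

```python
import math

def body_volume_pixels(widths, depths):
    """widths[i]/depths[i]: body extent in pixels at row i of the back/side silhouettes."""
    if len(widths) != len(depths):
        raise ValueError("profiles must be sampled at the same rows")
    # Elliptical cross-section per row: area = pi/4 * width * depth; sum over rows.
    return sum(math.pi / 4.0 * w * d for w, d in zip(widths, depths))

# Toy example with three rows sampled from the two profile images.
print(round(body_volume_pixels([120, 140, 130], [60, 70, 65])))
```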
{"title":"Feature Extraction from 2D Images for Body Composition Analysis","authors":"Ligaj Pradhan, Song Gao, Chengcui Zhang, B. Gower, S. Heymsfield, D. Allison, O. Affuso","doi":"10.1109/ISM.2015.117","DOIUrl":"https://doi.org/10.1109/ISM.2015.117","url":null,"abstract":"Body volume and body shape have been used in the estimation of body composition in clinical research. However, the determination of body volume typically requires sophisticated and expensive equipment. Similarly, the use of body shape to predict body composition is limited by rater biases as well as reproducibility. In this paper, we aim to introduce simple yet relatively accurate techniques for body volume and body shape representation that reduce limitations of traditional approaches. We propose an automated method to construct a 3D model of the body by accumulating ellipse-like slices formed by using the length and width features sampled from the back and side profile images. Body volume is represented in pixels by adding up the areas of the slices. Apart from representing body volume in pixels, we also aim to extract shape features from the 2D images and to create clusters of individuals according to their body shape. The body volume representation and the proposed shape features together with other meta-information including age, sex, race, height, and weight, could be effectively used in body composition prediction. Our study results indicate that the body volume calculated by the proposed method is reasonably accurate and the extracted shape clusters provide important information when estimating body composition.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126323197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Combining Diversity Queries and Visual Mining to Improve Content-Based Image Retrieval Systems: The DiVI Method
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.115
Lúcio F. D. Santos, Rafael L. Dias, M. X. Ribeiro, A. Traina, C. Traina
This paper proposes a new approach to improve similarity queries with diversity, the Diversity and Visually-Interactive method (DiVI), which employs Visual Data Mining techniques in Content-Based Image Retrieval (CBIR) systems. DiVI empowers the user to understand how the measures of similarity and diversity affect their queries, and increases the relevance of CBIR results according to the user's judgment. An overview of the image distribution in the database is shown to the user through multidimensional projection. The user interacts with the visual representation, changing the projected space or the query parameters according to his or her needs and prior knowledge. DiVI takes advantage of the user's activity to transparently reduce the semantic gap faced by CBIR systems. Empirical evaluation shows that DiVI increases the precision of querying by content and also increases the applicability and acceptance of similarity with diversity in CBIR systems.
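To make the similarity-with-diversity trade-off concrete, the sketch below greedily re-ranks candidates by a weighted combination of closeness to the query and distance from already selected results. This is a generic illustration of the trade-off, not the DiVI method or its visual-mining interaction; the feature vectors and the `lambda_` weighting are assumptions.

```python
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(a - b))

def diverse_top_k(query_vec, features, k=5, lambda_=0.7):
    """Greedy re-ranking: lambda_ weighs similarity to the query against diversity."""
    candidates = dict(features)
    selected = []
    while candidates and len(selected) < k:
        def score(item):
            _, vec = item
            sim = -dist(query_vec, vec)                               # closer is better
            div = min((dist(vec, features[s]) for s in selected), default=0.0)
            return lambda_ * sim + (1.0 - lambda_) * div
        best_id, _ = max(candidates.items(), key=score)
        selected.append(best_id)
        del candidates[best_id]
    return selected

feats = {"img1": np.array([0.0, 0.0]),
         "img2": np.array([0.1, 0.0]),
         "img3": np.array([5.0, 5.0])}
print(diverse_top_k(np.array([0.0, 0.0]), feats, k=2, lambda_=0.2))  # -> ['img1', 'img3']
```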
{"title":"Combining Diversity Queries and Visual Mining to Improve Content-Based Image Retrieval Systems: The DiVI Method","authors":"Lúcio F. D. Santos, Rafael L. Dias, M. X. Ribeiro, A. Traina, C. Traina","doi":"10.1109/ISM.2015.115","DOIUrl":"https://doi.org/10.1109/ISM.2015.115","url":null,"abstract":"This paper proposes a new approach to improve similarity queries with diversity, the Diversity and Visually-Interactive method (DiVI), which employs Visual Data Mining techniques in Content-Based Image Retrieval (CBIR) systems. DiVI empowers the user to understand how the measures of similarity and diversity affect their queries, as well as increases the relevance of CBIR results according to the user judgment. An overview of the image distribution in the database is shown to the user through multidimensional projection. The user interacts with the visual representation changing the projected space or the query parameters, according to his/her needs and previous knowledge. DiVI takes advantage of the users' activity to transparently reduce the semantic gap faced by CBIR systems. Empirical evaluation show that DiVI increases the precision for querying by content and also increases the applicability and acceptance of similarity with diversity in CBIR systems.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122265256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Automatic Fight Detection Based on Motion Analysis
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.98
E. Fu, H. Leong, G. Ngai, S. Chan
Social signal processing is becoming an important topic in affective computing. In this paper, we focus on an important social interaction in real life, namely fighting. Fight detection is useful in public transportation, prisons, bars, and even sports. A robust mechanism for detecting fights in video would be extremely useful, especially in applications related to surveillance systems. Recent research has focused on extracting visual features from high-resolution video, leading to computationally expensive systems. In this paper, we propose an approach based on motion analysis that detects fights in a natural way and is both intuitive and robust. Experimental results show that we can accurately detect fight activities in different video surveillance settings.
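As an illustration of motion-analysis-based detection (not the authors' algorithm), the sketch below computes dense optical flow between consecutive frames and flags clips whose motion magnitude is both large and changes abruptly. The Farneback parameters, thresholds, and video path are illustrative assumptions.

```python
import cv2
import numpy as np

def motion_profile(video_path):
    """Per-frame mean optical-flow magnitude for a video file."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
    magnitudes = []
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(float(np.mean(np.linalg.norm(flow, axis=2))))
        prev = gray
    cap.release()
    return magnitudes

def looks_like_fight(magnitudes, mean_thresh=3.0, jerk_thresh=1.5):
    """Heuristic: sustained large motion with abrupt frame-to-frame changes."""
    if len(magnitudes) < 2:
        return False
    jerk = np.mean(np.abs(np.diff(magnitudes)))
    return np.mean(magnitudes) > mean_thresh and jerk > jerk_thresh

# print(looks_like_fight(motion_profile("surveillance_clip.mp4")))  # hypothetical input
```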
{"title":"Automatic Fight Detection Based on Motion Analysis","authors":"E. Fu, H. Leong, G. Ngai, S. Chan","doi":"10.1109/ISM.2015.98","DOIUrl":"https://doi.org/10.1109/ISM.2015.98","url":null,"abstract":"Social signal processing is becoming an important topic in affective computing. In this paper, we focus on an important social interaction in real life, namely, fighting. Fight detection will be useful in public transportation, prisons, bars, or even sport. A robust mechanism in detecting fights from a video will be extremely useful, especially in applications relevant to surveillance systems. Recent research works focus on extracting visual features from high resolution video, leading to computationally expensive systems. In this paper, we propose an approach to detect fights in a natural and robust way based on motion analysis, which is not only intuitive, but also robust. Experimental results show that we can accurately detect fight activities in different video surveillance settings.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"46 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120933324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
An Illumination-Robust Approach for Feature-Based Road Detection
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.46
Zhenqiang Ying, Ge Li, Guozhen Tan
Road detection algorithms constitute a basis for intelligent vehicle systems, which are designed to improve safety and efficiency for human drivers. In this paper, a novel road detection approach intended to tackle illumination-related effects is proposed. First, a grayscale image of modified saturation is derived from the input color image during preprocessing, effectively diminishing cast shadows. Second, the road boundary lines are detected, which provides an adaptive region of interest for the subsequent lane-marking detection. Finally, an improved feature-based method is employed to identify lane markings from the shadows. The experimental results show that the proposed approach is robust against illumination-related effects.
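The preprocessing idea can be illustrated with a short sketch: cast shadows mostly alter intensity, so a grayscale image derived from the saturation channel is less affected by them than plain luminance. The exact "modified saturation" transform used in the paper is not reproduced here; the code below only shows the kind of channel manipulation involved, and the follow-on edge step is an illustrative assumption.

```python
import cv2

def saturation_grayscale(bgr_image):
    """Grayscale derived from the HSV saturation channel rather than luminance."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]                 # 8-bit saturation channel
    return cv2.equalizeHist(saturation)       # stretch contrast for later edge/line steps

# Hypothetical usage feeding a boundary-line detector:
# frame = cv2.imread("road_frame.png")
# gray = saturation_grayscale(frame)
# edges = cv2.Canny(gray, 50, 150)            # input to e.g. a Hough-based line search
```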
{"title":"An Illumination-Robust Approach for Feature-Based Road Detection","authors":"Zhenqiang Ying, Ge Li, Guozhen Tan","doi":"10.1109/ISM.2015.46","DOIUrl":"https://doi.org/10.1109/ISM.2015.46","url":null,"abstract":"Road detection algorithms constitute a basis for intelligent vehicle systems which are designed to improve safety and efficiency for human drivers. In this paper, a novel road detection approach intended for tackling illumination-related effects is proposed. First, a grayscale image of modified saturation is derived from the input color image during preprocessing, effectively diminishing cast shadows. Second, the road boundary lines are detected, which provides an adaptive region of interest for the following lane-marking detection. Finally, an improved feature-based method is employed to identify lane-markings from the shadows. The experimental results show that the proposed approach is robust against illumination-related effects.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123373631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Earth Mover's Distance vs. Quadratic form Distance: An Analytical and Empirical Comparison
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.76
C. Beecks, M. S. Uysal, T. Seidl
More than a decade has passed since the Earth Mover's Distance and the Quadratic Form Distance were proposed as distance-based similarity measures for color-based image similarity. Through their use in various domains, they have since developed into major general-purpose distance functions. In this paper, we subject both dissimilarity measures to a fundamental analytical and empirical analysis in order to reveal their strengths and weaknesses.
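For readers unfamiliar with the two measures, the sketch below computes both on one-dimensional histograms using common textbook definitions: a bin-similarity matrix a_ij = 1 - |i-j|/(n-1) for the Quadratic Form Distance and the ground distance |i-j| for the Earth Mover's Distance (via SciPy's 1-D Wasserstein distance). This is a generic illustration, not the paper's experimental setup.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def quadratic_form_distance(h1, h2):
    """QFD with the triangular bin-similarity matrix a_ij = 1 - |i-j|/(n-1)."""
    n = len(h1)
    idx = np.arange(n)
    A = 1.0 - np.abs(idx[:, None] - idx[None, :]) / (n - 1)
    d = np.asarray(h1, dtype=float) - np.asarray(h2, dtype=float)
    return float(np.sqrt(d @ A @ d))

def emd_1d(h1, h2):
    """EMD between 1-D histograms with ground distance |i - j| between bins."""
    bins = np.arange(len(h1))
    return wasserstein_distance(bins, bins, u_weights=h1, v_weights=h2)

h1 = [0.7, 0.2, 0.1, 0.0]
h2 = [0.0, 0.1, 0.2, 0.7]
print(quadratic_form_distance(h1, h2), emd_1d(h1, h2))
```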
{"title":"Earth Mover's Distance vs. Quadratic form Distance: An Analytical and Empirical Comparison","authors":"C. Beecks, M. S. Uysal, T. Seidl","doi":"10.1109/ISM.2015.76","DOIUrl":"https://doi.org/10.1109/ISM.2015.76","url":null,"abstract":"It has been past more than a decade since the Earth Mover's Distance and the Quadratic Form Distance have been proposed as distance-based similarity measures for color-based image similarity. Ever since their utilization in various domains, they have developed into major general-purpose distance functions. In this paper, we subject both dissimilarity measures to a fundamental analytical and empirical analysis in order to reveal their strengths and weaknesses.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133212667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Collaborative Rehabilitation Support System: A Comprehensive Solution for Everyday Rehab
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.62
Ziying Tang, Sonia Lawson, David Messing, Jin Guo, Ted Smith, Jinjuan Feng
Regular, repetitive rehabilitation exercises are crucial for enhancing and restoring functional ability and quality of life for those with cognitive and physical disabilities, such as those caused by stroke. However, a good proportion of patients are often non-compliant with the repetitive exercise regimen prescribed by their therapists. This may be due to a variety of factors, including lack of motivation. Although interactive gaming systems such as Nintendo's Wii and haptic devices have been introduced to make repetitive actions more fun and engaging for typically functioning individuals, challenges still remain for those who have some impairment. Expensive and complicated gaming systems are still not widely used by people with disabilities. In addition, therapists and caregivers, who play an important role in rehabilitation, have not been sufficiently involved. To address these problems as they pertain to people with disabilities, we propose a collaborative rehabilitation support system (CRSS) that includes mobile-based interactive games. Through our approach, we expand in-home rehabilitation to self-rehabilitation, allowing users to perform rehab anywhere and at any time. A pilot study involving stroke survivors, caregivers, therapists, and physicians was conducted to evaluate our system, and user feedback is highly positive.
{"title":"Collaborative Rehabilitation Support System: A Comprehensive Solution for Everyday Rehab","authors":"Ziying Tang, Sonia Lawson, David Messing, Jin Guo, Ted Smith, Jinjuan Feng","doi":"10.1109/ISM.2015.62","DOIUrl":"https://doi.org/10.1109/ISM.2015.62","url":null,"abstract":"Repetitive rehabilitation exercises on a regular basis are crucial for enhancing and restoring functional ability and quality of life to those with cognitive and physical disabilities such as stroke. However, a good proportion of patients are often non-compliant to the repetitive exercise regimen prescribed by their therapists. This may be due to a variety of factors including lack of motivation. Although interactive gaming systems such as the Wii by Nintendo and haptic devices have been introduced to make repetitive actions more fun and engaging for typically functioning individuals, challenges still remain for those who have some impairment. Expensive and complicated gaming systems are still not widely used by those with disabilities. In addition, therapists and caregivers, who play an important role in rehabilitation, have not been sufficiently involved. To address these problems as they pertain to people with disabilities, we propose a collaborative rehabilitation support system (CRSS) where mobile-based interactive games are included. Through our approach, we expand in-home rehabilitation to self-rehabilitation which allows users to perform rehab anywhere and anytime. A pilot study involving stroke survivors, caregivers, therapists and physicians was conducted to evaluate our system, and users' feedback is highly positive.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114352661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Betweenness Centrality Approaches for Image Retrieval
Pub Date : 2015-12-01 DOI: 10.1109/ISM.2015.83
B. Marshall, Anuya Ghanekar, John A. Springer, E. Matson
To quantify the relatedness of social tags in an image collection, we examine the betweenness centrality measure. We depict the image collection as a multi-graph, where nodes are the social tags and edges bind the social tags of each image. We present our weighted betweenness centrality algorithm and compare it to the unweighted version on sparse and dense graphs. The MIRFLICKR and ImageCLEF benchmark image collections are used in our experimental evaluation. We observe an 11% increase in computation runtime when weighted edges are used to determine shortest paths within our image collections. We discuss the intended impact of our approach, in conjunction with a node importance evaluation via the k-path centrality algorithm, for situation-aware path planning applications.
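A minimal sketch of the tag-graph construction is given below: every image contributes edges between all pairs of its tags, co-occurrence counts weight the edges, and betweenness centrality is computed over the resulting graph, with the inverse count used as edge length for the weighted variant. The weighting scheme and the toy tag lists are assumptions, not the paper's algorithm or datasets.

```python
import itertools
import networkx as nx

def tag_betweenness(images_tags):
    """images_tags: one list of social tags per image."""
    g = nx.Graph()
    for tags in images_tags:
        for a, b in itertools.combinations(sorted(set(tags)), 2):
            count = g[a][b]["cooccur"] + 1 if g.has_edge(a, b) else 1
            # Frequently co-occurring tags get a shorter 'distance' for weighted paths.
            g.add_edge(a, b, cooccur=count, distance=1.0 / count)
    return nx.betweenness_centrality(g, weight="distance")

print(tag_betweenness([["beach", "sea", "sunset"],
                       ["sea", "boat"],
                       ["sunset", "city", "sea"]]))
```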
{"title":"Betweenness Centrality Approaches for Image Retrieval","authors":"B. Marshall, Anuya Ghanekar, John A. Springer, E. Matson","doi":"10.1109/ISM.2015.83","DOIUrl":"https://doi.org/10.1109/ISM.2015.83","url":null,"abstract":"To quantify social tags' relatedness in an image collection, we examine the betweenness centrality measure. We depict the image collection as a multi-graph representation, where nodes are the social tags and edges bind an image's social tags. We present our weighted betweenness centrality algorithm and compare it to the unweighted version on sparse and dense graphs. The MIRFLICKR and ImageCLEF benchmark image collections are used in our experimental evaluation. We notice an 11% increase in the computation runtime with weighted edges in determining shortest paths within our image collections. We discuss the intended impact of our approach in conjunction with a node importance evaluation, via the k-path centrality algorithm, for determining situation-aware path planning applications.","PeriodicalId":250353,"journal":{"name":"2015 IEEE International Symposium on Multimedia (ISM)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114863058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1