
MULTIMEDIA '04: Latest Publications

Context data in geo-referenced digital photo collections
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027573
Mor Naaman, Susumu Harada, Qianying Wang, H. Garcia-Molina, A. Paepcke
Given time and location information about digital photographs, we can automatically generate an abundance of related contextual metadata, using off-the-shelf and Web-based data sources. Among these are the local daylight status and weather conditions at the time and place a photo was taken. This metadata has the potential to serve as memory cues and filters when browsing photo collections, especially as these collections grow into the tens of thousands and span dozens of years. We describe the contextual metadata that we automatically assemble for a photograph, given time and location, as well as a browser interface that utilizes that metadata. We then present the results of a user study and a survey that together expose which categories of contextual metadata are most useful for recalling and finding photographs. Among metadata categories that are still unavailable, we identify those that are most promising to develop next.
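The kind of context assembly described above can be illustrated with a short sketch. The following Python snippet is an illustration only, not the authors' implementation: the helper functions, field names, and the crude hour-based daylight rule are hypothetical placeholders for the off-the-shelf and Web-based sources the abstract mentions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PhotoContext:
    daylight: str      # e.g. "daytime", "after dusk"
    weather: str       # e.g. "clear", "rainy"
    season: str
    day_of_week: str

def lookup_weather(lat: float, lon: float, when: datetime) -> str:
    """Placeholder for a Web-based historical-weather lookup (hypothetical)."""
    return "unknown"

def daylight_status(lat: float, lon: float, when: datetime) -> str:
    """Placeholder for a proper sunrise/sunset computation (hypothetical);
    a real system would use an astronomical almanac keyed on lat/lon."""
    return "daytime" if 6 <= when.hour < 18 else "after dusk"  # crude stand-in

def assemble_context(lat: float, lon: float, when: datetime) -> PhotoContext:
    """Derive contextual metadata for one photo from time and location alone."""
    seasons = ["winter", "winter", "spring", "spring", "spring", "summer",
               "summer", "summer", "autumn", "autumn", "autumn", "winter"]
    return PhotoContext(
        daylight=daylight_status(lat, lon, when),
        weather=lookup_weather(lat, lon, when),
        season=seasons[when.month - 1],        # northern-hemisphere naming
        day_of_week=when.strftime("%A"),
    )

print(assemble_context(37.42, -122.17, datetime(2004, 10, 10, 20, 30)))
```

The point of the sketch is that every field is derived from nothing more than a timestamp and a latitude/longitude pair, which is exactly the information cameras and phones can record automatically.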
Citations: 210
An approach to interactive media system for mobile devices
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027557
Eun‐Seok Ryu, C. Yoo
Interactive systems that let humans and computers interact have long been recognized as one direction of computing development. For example, in a cinema, a person can retrieve the information he wants or play media data while moving, using a mobile device. To this end, we designed and implemented a system that interacts with users on a small terminal. Our study has three parts. The first is the development of a new interactive media markup language (IML) for writing interactive media data. The second is the IML translator, which translates IML into the form best suited for playback on a mobile device. The third is the IM player, which plays the transferred media data and interacts with the user. IML was designed for detailed control of vector graphics and general media objects, and for supporting synchronization. It was also designed to run on small mobile devices as well as on desktop PCs or set-top boxes with high CPU performance. The player, finally implemented, runs on a PDA (HP iPAQ) and plays multimedia data consisting of vector graphics (OpenGL), H.264, AAC, etc., according to the user's choice. The system can be used for interactive cinema and interactive games, and can substitute new interactive web services for existing web services.
Citations: 6
ChucK: a programming language for on-the-fly, real-time audio synthesis and multimedia
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027716
Ge Wang, P. Cook
In this paper, we describe ChucK - a programming language and programming model for writing precisely timed, concurrent audio synthesis and multimedia programs. Precise concurrent audio programming has been an unsolved (and ill-defined) problem. ChucK provides a concurrent programming model that solves this problem and significantly enhances designing, developing, and reasoning about programs with complex audio timing. ChucK employs a novel data-driven timing mechanism and a related time-based synchronization model, both implemented in a virtual machine. We show how these features enable precise, concurrent audio programming and provide a high degree of programmability in writing real-time audio and multimedia programs. As an extension, programmers can use this model to write code on-the-fly -- while the program is running. These features provide a powerful programming tool for building and experimenting with complex audio synthesis and multimedia programs.
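The timing model can be illustrated in spirit outside of ChucK. The sketch below is a toy Python scheduler, not ChucK syntax and not the paper's virtual machine; it only shows the idea of concurrent tasks that explicitly advance their own virtual time, measured in samples, which is the essence of the data-driven, time-based synchronization the abstract describes.

```python
import heapq
from typing import Generator

# A "task" is a generator that yields how many virtual samples it wants to
# wait before it runs again; the scheduler interleaves tasks in time order.
Task = Generator[int, None, None]

def metronome(name: str, period_samples: int) -> Task:
    while True:
        print(f"{name} tick")
        yield period_samples          # advance this task's time by one period

def run(tasks: list[Task], duration_samples: int) -> None:
    # priority queue of (wake-up time in samples, tie-breaker, task)
    queue = [(0, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(queue)
    while queue:
        wake, i, task = heapq.heappop(queue)
        if wake >= duration_samples:
            break                     # past the end of the virtual timeline
        try:
            wait = next(task)         # run the task until it yields a wait time
        except StopIteration:
            continue
        heapq.heappush(queue, (wake + wait, i, task))

# Two concurrent tasks at different rates, 3 seconds of virtual time at 44.1 kHz.
run([metronome("A", 44100), metronome("B", 22050)], duration_samples=3 * 44100)
```

Because virtual time only advances when a task yields an amount of it, the interleaving is deterministic and sample-accurate regardless of wall-clock scheduling; this is the property that lets a language like ChucK reason precisely about concurrent audio timing.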
Citations: 54
Finding the right shots: assessing usability and performance of a digital video library interface
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027691
Michael G. Christel, N. Moraveji
The authors developed a system in which visually dense displays of thumbnail imagery in storyboard views are used for shot-based video retrieval. The views allow for effective retrieval, as evidenced by the success achieved by expert users with the system in interactive query for NIST TRECVID 2002 and 2003. This paper demonstrates that novice users also achieve comparatively high retrieval performance with these views using the TRECVID 2003 benchmarks. Through an analysis of the user interaction logs, heuristic evaluation, and think-aloud protocol, the usability of the video information retrieval system is appraised with respect to shot-based retrieval. Design implications are presented based on these TRECVID usability evaluations regarding efficient, effective information retrieval interfaces to locate visual information from video corpora.
Citations: 41
Do not zero-pute: an efficient homespun MPEG-audio layer II decoding and optimization strategy
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027615
P. Smet, F. Rooms, H. Luong, W. Philips
In this paper we point out that the general principle "do not compute what you do not need to compute" can be applied easily and successfully within an MPEG audio decoding strategy. More specifically, we discuss the problem of eliminating costly computation cycles wasted on processing useless zero-valued data. Hence the title: "do not zero-pute". At first, this may all sound somewhat obvious or trivial. Indeed, this can be true in many cases, but experience gathered in various teaching-related projects over several academic years has also led us to believe the opposite. Moreover, a survey of the existing literature quickly reveals that the approach discussed below has not been investigated and documented properly. Although we only illustrate our optimization approach by discussing the MPEG-audio layer II decoding process in detail, we hope the reader will be able to apply, extend, and implement the basic principles presented here within many other applications.
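The idea is easiest to see in the layer II synthesis ("matrixing") step, where at typical bit rates many subband samples are quantized to exactly zero. The Python sketch below is a schematic illustration of the principle rather than the authors' optimized decoder: it compares a naive matrixing loop with one that simply skips zero-valued subband samples.

```python
import math

NUM_SUBBANDS = 32

# Synthesis ("matrixing") coefficients of the MPEG-1 polyphase filterbank,
# N[i][k] = cos((16 + i) * (2k + 1) * pi / 64), i = 0..63, k = 0..31.
N = [[math.cos((16 + i) * (2 * k + 1) * math.pi / 64) for k in range(NUM_SUBBANDS)]
     for i in range(64)]

def matrix_naive(samples: list[float]) -> list[float]:
    """Straightforward matrixing: always touches all 32 subbands."""
    return [sum(N[i][k] * samples[k] for k in range(NUM_SUBBANDS)) for i in range(64)]

def matrix_skip_zeros(samples: list[float]) -> list[float]:
    """'Do not zero-pute': only accumulate contributions of non-zero subbands."""
    nonzero = [k for k, s in enumerate(samples) if s != 0.0]
    out = [0.0] * 64
    for i in range(64):
        acc = 0.0
        for k in nonzero:
            acc += N[i][k] * samples[k]
        out[i] = acc
    return out

# At low bit rates most subbands are zero, so the second routine performs a
# small fraction of the multiply-accumulate work while producing identical output.
samples = [0.0] * NUM_SUBBANDS
samples[0], samples[3] = 0.7, -0.2
assert all(abs(a - b) < 1e-9 for a, b in
           zip(matrix_naive(samples), matrix_skip_zeros(samples)))
```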
Citations: 3
P-Karaoke: personalized karaoke system
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027563
Xiansheng Hua, Lie Lu, HongJiang Zhang
In this demonstration, a personalized Karaoke system, P-Karaoke, is proposed. In the P-Karaoke system, personal home videos and photographs, automatically selected from the user's multimedia database according to their content, the user's preferences, or the music, are used as the background videos of the Karaoke. The selected video clips, photographs, music and lyrics are aligned to compose a Karaoke video, connected by specific content-based transitions.
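As a very rough illustration of the alignment step only: the Python sketch below assumes one background item per lyric line, which is a simplification; the dataclass fields, the cycling policy, and the omission of content-based selection and transitions are all assumptions for illustration, not the demo's actual design.

```python
from dataclasses import dataclass

@dataclass
class LyricLine:
    text: str
    start: float   # seconds into the song
    end: float

@dataclass
class BackgroundSegment:
    media: str     # path to a selected photo or home-video clip (hypothetical)
    start: float
    end: float

def align_background(lyrics: list[LyricLine], media: list[str]) -> list[BackgroundSegment]:
    """Assign one background item per lyric line, cycling through the user's
    selected photos/clips so that the whole song is covered."""
    return [BackgroundSegment(media[i % len(media)], line.start, line.end)
            for i, line in enumerate(lyrics)]

lyrics = [LyricLine("line one", 0.0, 4.5), LyricLine("line two", 4.5, 9.0)]
print(align_background(lyrics, ["beach.jpg", "party.mov"]))
```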
Citations: 14
TRECVID: evaluating the effectiveness of information retrieval tasks on digital video
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027678
A. Smeaton, P. Over, Wessel Kraaij
TRECVID is an annual exercise which encourages research in information retrieval from digital video by providing a large video test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. TRECVID benchmarking covers both interactive and manual searching by end users, as well as the benchmarking of some supporting technologies including shot boundary detection, extraction of some semantic features, and the automatic segmentation of TV news broadcasts into non-overlapping news stories. TRECVID has a broad range of over 40 participating groups from across the world, and as it is now (2004) in its 4th annual cycle, it is opportune to stand back and look at the lessons we have learned from the cumulative activity. In this paper we present a brief, high-level overview of the TRECVID activity covering the data, the benchmarked tasks, the overall results obtained by groups to date, and an overview of the approaches taken by selected groups in some tasks. While progress from one year to the next cannot be measured directly because of the changing nature of the video data we have been using, we present a summary of the lessons we have learned from TRECVID and include some pointers on what we feel are the most important of these lessons.
Citations: 100
A personal projected display
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027739
M. Ashdown, P. Robinson
User interfaces using windows, keyboard and mouse have been in use for over 30 years, but only offer limited facilities to the user. Conventional displays are small, at least compared with a physical desk; conventional input devices restrict both manual expression and cognitive flexibility; remote collaboration is a poor shadow of sitting in the same room. We show how recent technological advances in large display devices and input devices can address these problems. The Escritoire is a desk-based interface using overlapping projectors to create a large display with a high resolution region in the centre for detailed work. Two pens provide bimanual input over the entire area, and an interface like physical paper addresses some of the affordances not provided by the conventional user interface. Multiple desks can be connected to allow remote collaboration. The system has been tested with single users and collaborating pairs.
Citations: 18
Where are the brave new mobile multimedia applications?
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027750
Susanne CJ Boll, S. Ahuja, Dirk Friebel, B. Horowitz, N. Raman, S. Nandagopalan
PANEL OVERVIEW: With the availability of new and powerful mobile devices, network infrastructures evolving from point-to-point networks towards 3G and wireless networks, the broadcasting of digital TV to set-top boxes, and the availability of a tremendous amount of media such as camera-phone pictures and digital music, multimedia has started its triumphal procession to inform, entertain, and educate users everywhere.
Citations: 0
Index-frame audio transmission
Pub Date : 2004-10-10 DOI: 10.1145/1027527.1027618
J. Parker, Keith Chung
Sending audio data over a computer network consumes a large amount of bandwidth, and so compression strategies are regularly built into audio file formats and transmission software. In some environments, the basic nature of the sound does not change significantly; for example, phone lines deal frequently with voice transmission. By matching input audio blocks against those in a table, we can transmit the table indices only, and audio can be synthesized at the receiving end by simple table look-up. This has a number of potentially interesting applications.
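A minimal sketch of the idea, assuming a fixed 64-sample block size and a table built offline from training audio (neither detail is given in the abstract, and this is not the authors' implementation), might look like this:

```python
import numpy as np

BLOCK = 64  # samples per block (assumed for illustration)

def build_table(training_audio: np.ndarray, table_size: int = 256) -> np.ndarray:
    """Build a table of representative blocks; here simply by sampling the
    training signal (a real system might cluster, e.g. with vector quantization)."""
    blocks = training_audio[: (len(training_audio) // BLOCK) * BLOCK].reshape(-1, BLOCK)
    idx = np.linspace(0, len(blocks) - 1, table_size).astype(int)
    return blocks[idx]

def encode(audio: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Sender: replace each input block by the index of its closest table entry."""
    blocks = audio[: (len(audio) // BLOCK) * BLOCK].reshape(-1, BLOCK)
    dists = ((blocks[:, None, :] - table[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1).astype(np.uint8)   # one byte per block

def decode(indices: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Receiver: synthesise audio by simple table look-up."""
    return table[indices].reshape(-1)

rng = np.random.default_rng(0)
table = build_table(rng.standard_normal(48000))
sent = encode(rng.standard_normal(8000), table)
recon = decode(sent, table)
print(len(sent), "indices transmitted for", len(recon), "reconstructed samples")
```

With a 256-entry table each 64-sample block is replaced by a single byte, so the transmitted volume shrinks by roughly a factor of 64 for 8-bit audio; reconstruction quality then depends entirely on how well the table covers the sounds being sent, which is why the scheme suits settings such as voice lines where the basic nature of the sound changes little.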
Citations: 1