"Escaping the World: High and Low Resolution in Gaming"
Ben Serviss. IEEE MultiMedia, October 2005. doi:10.1109/MMUL.2005.70
Abstract: Advances in graphics have done more than revolutionize the computer game industry: graphics have come to define the industry itself, surpassing even gameplay as the singular identifying characteristic of games across genres. These developments are all the more remarkable given the ruthless constraints of an industry that must deliver high performance to a demanding, computer-savvy audience. A core issue is low- versus high-resolution graphics, a debate that grows more urgent as multimedia technologies mature. What is the most effective way to convey an emotion or information: pure realism, or an artistic device? Always searching for the most immersive experience, game developers still have no consensus on what rendered graphics should really look like. This article offers insight into the game industry and discusses the question in the context of other visual media, such as art and film.
"Lurking: An Underestimated Human-Computer Phenomenon"
Martin Ebner, Andreas Holzinger. IEEE MultiMedia, October 2005. doi:10.1109/MMUL.2005.74
Abstract: The authors offer their perspective on online learning. Online learning develops through interaction: it is a collaborative process in which students actively write and read messages among themselves and with the instructor. Yet in any online community not all users are equally active, and some never take an active part at all: the so-called lurkers. This article focuses on lurkers. The authors ran extensive experiments to determine whether there is a relationship between the writing and reading behavior of online students, and whether active participation influences learning efficiency. A notable related finding is that the instructor's effort, in terms of reading and writing posts, exceeds that of the learners themselves.
"VERL: An Ontology Framework for Representing and Annotating Video Events"
A. François, R. Nevatia, Jerry R. Hobbs, R. Bolles. IEEE MultiMedia, October 2005. doi:10.1109/MMUL.2005.87
Abstract: The notion of events is central to characterizing the contents of video. An event is typically triggered by some change of state captured in the video, such as an object starting to move, and the ability to reason about events is a critical step toward video understanding. This article describes the findings of a recent workshop series that produced an ontology framework for representing video events, the Video Event Representation Language (VERL), and a companion annotation framework, the Video Event Markup Language (VEML). A key concept in this work is modeling events as composable: complex events are constructed from simpler events by operations such as sequencing, iteration, and alternation. The article presents an extensible event and object ontology expressed in VERL and works through a detailed example of applying VERL and VEML to the description of a "tailgating" event in surveillance video.
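The composability idea in the VERL abstract can be made concrete with a small sketch. The following Python model is purely illustrative: it is not VERL syntax, and the event names in the tailgating example are hypothetical, loosely inspired by the surveillance scenario the article mentions. It shows how complex events can be built from primitives via the three operations the abstract names: sequencing, iteration, and alternation.

```python
from dataclasses import dataclass

# Illustrative stand-ins for VERL-style composable events (not actual VERL).
@dataclass
class Primitive:
    name: str            # an atomic event, e.g. an observed change of state

@dataclass
class Sequence:
    steps: list          # sub-events that must occur in order

@dataclass
class Alternation:
    options: list        # any one of these sub-events suffices

@dataclass
class Iteration:
    body: object         # a sub-event repeated one or more times

def describe(event) -> str:
    """Render a composed event as a readable expression:
    ';' for sequencing, '|' for alternation, '+' for iteration."""
    if isinstance(event, Primitive):
        return event.name
    if isinstance(event, Sequence):
        return "(" + " ; ".join(describe(s) for s in event.steps) + ")"
    if isinstance(event, Alternation):
        return "(" + " | ".join(describe(o) for o in event.options) + ")"
    if isinstance(event, Iteration):
        return describe(event.body) + "+"
    raise TypeError(f"unknown event type: {type(event)!r}")

# Hypothetical composite loosely following the article's tailgating example:
# one vehicle passes a gate, another follows closely and slips through.
tailgating = Sequence([
    Primitive("vehicle_A_passes_gate"),
    Iteration(Primitive("vehicle_B_follows_closely")),
    Primitive("vehicle_B_passes_gate_without_authorization"),
])
print(describe(tailgating))
```

The point of the sketch is only the algebra: because each combinator takes events and yields an event, arbitrarily complex behaviors can be described, annotated, and matched from a small vocabulary of primitives.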
"EIC's Message: The Electronic World and Societal Trends"
F. Golshani. IEEE MultiMedia, October 2005. doi:10.1109/MMUL.2005.69
"The Fictionalization of Lessons Learned"
A. Gordon. IEEE MultiMedia, October 2005. doi:10.1109/MMUL.2005.84
Abstract: The author describes his work as part of an innovative research effort to create prototype leadership-development applications for US Army officers, in partnership with the Hollywood film industry and researchers at the University of Southern California's Institute for Creative Technologies (ICT). In short, the project is creating a new genre of Army training films and exploring their role in interactive technologies for case-method teaching, the formal use of stories in classroom instruction.
"Guest Editor's Introduction: What's New with MPEG?"
John R. Smith. IEEE MultiMedia, October 2005. doi:10.1109/MMUL.2005.72
"EIC's Message: Putting Ability to Work"
F. Golshani. IEEE MultiMedia, July 2005. doi:10.1109/MMUL.2005.44
Abstract: The unemployment rate for people with disabilities between the ages of 18 and 65 in the US is estimated at 60 to 70 percent. This astounding figure makes people with disabilities the largest underemployed sector in the US. One major contributing factor is that many people in this group don't pursue more marketable professions such as engineering, computing, and information technology.
"Refocusing Multimedia Research on Short Clips"
P. Hart, K. Pierson, J. Hull. IEEE MultiMedia, July 2005. doi:10.1109/MMUL.2005.55
Abstract: The multimedia authoring research agenda today is a search for the proverbial killer application. We believe that multimedia's killer app might already be at hand, and that it centers on audio and video clips. But researchers aren't addressing critical open research issues because of the current focus on commercially produced, feature-length videos as an experimental corpus. This article focuses on the notion of multimedia clips, and especially their retrieval, as a key enabler for the wide adoption of multimedia.