A method for evaluating multimedia learning software
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779287
Stéphane Crozat, O. Hû, P. Trigano
We propose a method (EMPI: Evaluation of Multimedia, Pedagogical and Interactive software) for evaluating multimedia software used in an educational context. Our purpose is to help users (teachers or students) choose among the wide range of software currently available. We structured a list of evaluation criteria, grouped into six modules: general feeling, technical quality, usability, scenario, multimedia documents, and didactical aspects. A global questionnaire brings all these modules together. We are also designing software to make the method easier to use and more powerful. We present the list of criteria we selected and organised, along with example questions and a brief description of the method and the accompanying software.
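To make the aggregation step concrete, here is a minimal Python sketch of how questionnaire answers might roll up into module and global scores. The six module names come from the abstract; the 0-4 answer scale, the dummy answers and the plain averaging are illustrative assumptions, not the authors' actual scoring scheme.

```python
from statistics import mean

# Module names taken from the abstract; everything else is assumed.
MODULES = [
    "general feeling", "technical quality", "usability",
    "scenario", "multimedia documents", "didactical aspects",
]

def module_score(answers):
    """Average the answers given for one module (0-4 scale assumed)."""
    return mean(answers)

def global_score(answers_by_module):
    """Combine the six module scores into a single global indicator."""
    return mean(module_score(a) for a in answers_by_module.values())

# Dummy questionnaire: three answers per module.
answers = {m: [3, 4, 2] for m in MODULES}
print(f"global EMPI score: {global_score(answers):.2f}")   # 3.00
```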
ECHOES: educational hypermedia on-line system
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778674
A. Pasquarelli, F. D. Stefani, Gregory M. P. O'Hare, Aidan Murphy
The paper presents the ECHOES (EduCational Hypermedia On-linE System) training environment, its architecture and the services it offers to users. The main objective of the ECHOES project is to build a distributed, dynamic environment for educating and supporting technicians in using and repairing complex industrial artefacts. To this end, Web-based training, virtual reality and multi-agent systems are integrated and synthesised in the ECHOES environment. These technologies serve users at different levels of expertise, from the novice, who wants to quickly develop a global functional view of complex systems, up to the technician, who needs a strong conceptual understanding of complex equipment. User interaction with the system is agent-based, and the chosen interface follows the visit metaphor within a 2D or 3D environment, leaving the trainee or technician free to explore.
Dynamic frequency and resource allocation with adaptive error control based on RTP for multimedia QoS guarantees in wireless networks
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778423
A. Pajares, J. C. Guerri, M. Esteve, C. Palau, A. Leon, N. Cardona
Providing QoS guarantees in wireless networks is a much more complex problem than in fixed networks. Conventional fixed allocation schemes are not suitable for wireless traffic patterns, so new dynamic resource allocation schemes must be designed. This paper addresses three of the main problems wireless networks face in providing multimedia QoS guarantees: frequency assignment, bandwidth requirements and error rate. For the first, several frequency assignment algorithms are evaluated. For the second, a dynamic resource allocation algorithm is proposed; its goal is to share the available wireless bandwidth among the largest possible number of connections while offering each the maximum QoS its connection state allows. For the third, error-rate effects are corrected by an adaptive error control algorithm based on RTP, which balances the channel quality of ongoing calls to obtain a fair error quality for all of them.
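The bandwidth-sharing goal can be illustrated with a short sketch. This is not the paper's algorithm: the admission rule, the min/max rates and the state-dependent weights are assumptions chosen to show the general shape of a dynamic allocation scheme.

```python
def allocate(capacity_kbps, connections):
    """connections: list of dicts with 'min', 'max' (kbps) and a
    state-dependent 'weight'. Admit connections while their minimum
    rates fit, then split the spare capacity by weight."""
    admitted, used = [], 0
    for c in connections:
        if used + c["min"] <= capacity_kbps:
            admitted.append(c)
            used += c["min"]
    spare = capacity_kbps - used
    total_w = sum(c["weight"] for c in admitted) or 1
    # Cap each share at the connection's maximum useful rate; a real
    # scheme would also redistribute the surplus freed by the cap.
    return [
        min(c["max"], c["min"] + spare * c["weight"] / total_w)
        for c in admitted
    ]

conns = [{"min": 64, "max": 384, "weight": 2},
         {"min": 32, "max": 128, "weight": 1}]
print(allocate(512, conns))   # [341.33..., 128]
```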
Classification-driven object-based image retrieval
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779271
Linhui Jia, L. Kitchen
This paper describes an approach to object-based image retrieval driven by the classes of objects appearing in images. Contours of objects are extracted from the images and represented in a scheme that is invariant to scale, rotation and translation. Classifier learning techniques assign the objects to classes, and image similarity is then computed from the class information of the objects. Experimental results show that the method is effective and efficient.
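A toy sketch of the final similarity step, assuming each image has been reduced to the set of object classes found in it; the Jaccard overlap used here is an illustrative stand-in for the paper's actual class-based similarity function.

```python
def similarity(classes_a: set, classes_b: set) -> float:
    """Jaccard overlap between the object-class sets of two images."""
    if not classes_a and not classes_b:
        return 1.0
    return len(classes_a & classes_b) / len(classes_a | classes_b)

# Rank database images against a query by shared object classes.
query = {"car", "person"}
database = {"img1": {"car", "tree"}, "img2": {"person", "car", "dog"}}
ranked = sorted(database, key=lambda k: similarity(query, database[k]),
                reverse=True)
print(ranked)   # ['img2', 'img1']
```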
Experiment of virtual space distance education system using the objects of cultural heritage
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778211
N. Terashima, J. Tiffin, Lalita Rajasingham
Lecture exchanges are increasing year by year. To support this, distance education systems have been developed and put into practical use. As a further step, a virtual-space distance education platform called HyperClass has been proposed. HyperClass is a class in which a teacher and students at different locations, represented by their avatars, are brought together over a communication network, where they can hold classes and do cooperative work as if they were in the same classroom. Any multimedia educational material can be introduced into HyperClass. HyperClass is based on HyperReality (HR), the concept of combining virtual reality with physical reality. A prototype HyperClass system has been developed, into which 3D (three-dimensional) objects of Japanese cultural heritage were introduced. To evaluate the efficiency and effectiveness of HyperClass, an experiment was carried out by interconnecting Waseda University and Victoria University of Wellington over the Internet, with good results.
Visual gesture recognition for real-time editing system
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778624
Byung-Woo Min, H. Yoon, Jung Soh, Young-Kyu Yang
This research aims to recognize one-stroke pictorial gestures from visual images and to develop a graphic/text editing system that runs in real time. The task is performed in three steps: moving-hand tracking and trajectory generation, key-gesture segmentation, and gesture recognition by analysis of dynamic features. The gesture vocabulary consists of forty-eight gestures of three types: (1) six editing commands, (2) six graphic primitives, and (3) thirty-six alphanumeric characters (twenty-six letters and ten digits). Dynamic features are extracted from the spatio-temporal trajectories and quantized by the K-means algorithm, and the quantized vectors are trained and tested using hidden Markov models (HMMs).
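The back end of such a pipeline can be sketched as follows: trajectory features are quantized into discrete symbols with K-means, and each gesture class would be scored by a discrete HMM (here via a scaled forward algorithm). The codebook size, feature dimension and toy HMM parameters are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize(features, codebook):
    """Map each trajectory feature vector to its nearest codebook symbol."""
    return codebook.predict(features)

def log_likelihood(symbols, pi, A, B):
    """Scaled forward algorithm: log P(symbols | discrete HMM) with
    start distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, symbols[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for s in symbols[1:]:
        alpha = (alpha @ A) * B[:, s]
        c = alpha.sum()
        log_p += np.log(c)   # accumulate scaling factors
        alpha = alpha / c
    return log_p

# Stand-in trajectory features (e.g. position/velocity per frame).
rng = np.random.default_rng(0)
codebook = KMeans(n_clusters=16, n_init=10).fit(rng.random((500, 4)))
symbols = quantize(rng.random((30, 4)), codebook)

# Toy 2-state HMM over the 16 symbols; one such model per gesture class,
# with recognition picking the class whose HMM scores highest.
pi = np.full(2, 0.5)
A = np.full((2, 2), 0.5)
B = np.full((2, 16), 1 / 16)
print(log_likelihood(symbols, pi, A, B))
```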
Virtual view synthesis from uncalibrated stereo cameras
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778564
M. Akhloufi, V. Polotski, P. Cohen
This paper presents a new approach to synthesizing a novel view from two images captured by an uncalibrated stereo system. View synthesis employs the epipolar constraints associated with a two-camera configuration: a fundamental matrix is used to obtain features in the synthesized view by reprojecting corresponding features from the source images. Unlike classical methods, which infer the three-dimensional structure of the scene or use dense correspondence between the source images to produce the synthesized image, this method requires only sparse correspondence between source image features; perspective image-warping techniques then render the remaining image points via interpolation. The approach permits interactive view synthesis in immersive telepresence systems, in realistic virtual worlds, and in augmented reality displays that overlay objects at different positions on live video of dynamic scenes. The method's efficiency is illustrated with examples of synthetic and real scenes.
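A minimal sketch of the epipolar machinery this relies on, using OpenCV: estimate the fundamental matrix from sparse, uncalibrated correspondences and derive the epipolar line that constrains where a transferred feature may lie. The point data is synthetic, and the paper's reprojection and warping steps are not reproduced here.

```python
import numpy as np
import cv2

# Synthetic sparse correspondences (a real run would use matched
# interest points from the two source images).
rng = np.random.default_rng(0)
pts1 = (rng.random((20, 2)) * 640).astype(np.float32)
pts2 = (pts1 + [8.0, 2.0] + rng.random((20, 2)) * 4).astype(np.float32)

# Estimate the fundamental matrix from the sparse matches.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

if F is not None:
    # Epipolar line a*x + b*y + c = 0 in image 2 for each feature of
    # image 1: any transferred/synthesized match must lie on this line.
    lines = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    print("epipolar line for feature 0:", lines[0].ravel())
```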
Detecting hunts in wildlife videos
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779323
N. Haering, R. J. Qian, M. Sezan
We propose a three-level algorithm to detect animal hunt events in wildlife documentaries. The first level extracts texture, color and motion features, and detects motion blobs. The mid-level employs a neural network to verify the relevance of the detected motion blobs using the extracted color and texture features; this level also generates shot summaries in terms of intermediate-level descriptors, which combine low-level features from the first level with the results of mid-level, domain-specific inferences made on the basis of shot features. The shot summaries are then used by a domain-specific inference process at the third level to detect the video segments that contain hunts.
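The third-level inference can be illustrated with a toy rule over shot summaries. The descriptor fields and the "several consecutive shots with a fast-moving, verified animal blob" rule are assumptions sketching the general idea, not the paper's actual inference process.

```python
from dataclasses import dataclass

@dataclass
class ShotSummary:
    shot_id: int
    animal_present: bool   # mid-level neural-net verification result
    fast_motion: bool      # low-level motion-blob speed above threshold

def detect_hunts(shots, min_run=3):
    """Flag runs of >= min_run consecutive shots whose summaries show a
    fast-moving, verified animal blob as candidate hunt segments."""
    hunts, run = [], []
    for s in shots:
        if s.animal_present and s.fast_motion:
            run.append(s.shot_id)
        else:
            if len(run) >= min_run:
                hunts.append((run[0], run[-1]))
            run = []
    if len(run) >= min_run:
        hunts.append((run[0], run[-1]))
    return hunts

shots = [ShotSummary(i, i in range(2, 6), i in range(2, 6)) for i in range(8)]
print(detect_hunts(shots))   # [(2, 5)]
```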
Video indexing using MPEG motion compensation vectors
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778574
E. Ardizzone, M. Cascia, A. Avanzato, A. Bruna
In recent years much work has been done on color, textural, structural and semantic indexing of content-based video databases. Motion-based video indexing has been less explored, with approaches generally based on the analysis of optical flow. For compressed videos this requires decompressing the sequences and computing optical flow, two computationally heavy steps. In this paper we propose fully automatic methods to index videos by motion features (mainly related to camera motion) and by motion-based spatial segmentation of frames. Our idea is to use MPEG motion vectors as an alternative to optical flow. Their extraction is very simple and fast: it requires neither a full decompression of the stream nor the computation of optical flow. Additional computational economy comes from having one motion vector per 16×16 sub-image, which makes the algorithms faster than working with dense optical flow. Experimental results reported at the end of this paper show that MPEG motion compensation vectors are suitable for this kind of application.
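As a sketch of the idea, a global camera-motion label can be computed directly from the per-macroblock vector field (one vector per 16×16 block). Real MPEG parsing is omitted, the field is a dummy array, and the median-based model and thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

def camera_motion_label(mv, thresh=2.0):
    """mv: (rows, cols, 2) array with one (dx, dy) motion vector per
    16x16 macroblock, in pixels. The median over the field is a cheap,
    robust estimate of the dominant global translation."""
    pan, tilt = np.median(mv.reshape(-1, 2), axis=0)
    if abs(pan) < thresh and abs(tilt) < thresh:
        return "static"
    if abs(pan) >= abs(tilt):
        return "pan right" if pan > 0 else "pan left"
    # Sign conventions depend on the MPEG prediction direction.
    return "tilt down" if tilt > 0 else "tilt up"

# A 352x288 frame has a 22x18 grid of macroblocks; fake a uniform field.
field = np.full((18, 22, 2), (4.0, 0.5))
print(camera_motion_label(field))   # pan right
```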
A new procedure to analyze random multiaccess protocols for multimedia applications
Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778439
L. Gutierrez, S. Sallent
Multimedia traffic is carried over multi-access channels, where users' equipment accesses the common channel through random multi-access protocols (RMAPs). The main goal of any RMAP is that the network be shared fairly by all users; an RMAP must also be able to cope with the QoS that different kinds of multimedia traffic require. Current protocols proposed for multimedia traffic are sophisticated and difficult to analyze because they use distributed queues. The main goal of this paper is to present a common methodology for analyzing multimedia RMAPs. The results obtained by this procedure include not only the expected values but also the distributions of the final interdeparture time and the departure burst size.
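The kind of output the procedure delivers can be illustrated by simulation: the sketch below measures the interdeparture-time distribution on a slotted-ALOHA channel, used here as a simple stand-in for the distributed-queue protocols the paper targets. Station count, transmission probability and slot budget are illustrative assumptions.

```python
import random
from collections import Counter

def simulate(n_stations=10, p_tx=0.1, slots=100_000, seed=0):
    """Count interdeparture times (in slots) on a slotted-ALOHA channel."""
    rng = random.Random(seed)
    last_departure, gaps = 0, Counter()
    for t in range(1, slots + 1):
        # A slot carries a successful departure iff exactly one station
        # transmits; two or more simultaneous transmissions collide.
        tx = sum(rng.random() < p_tx for _ in range(n_stations))
        if tx == 1:
            gaps[t - last_departure] += 1
            last_departure = t
    return gaps

gaps = simulate()
total = sum(gaps.values())
mean_gap = sum(g * n for g, n in gaps.items()) / total
print(f"mean interdeparture time: {mean_gap:.2f} slots")
print("P(gap=k):", {k: round(n / total, 3) for k, n in sorted(gaps.items())[:5]})
```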