Package-Segment Model for movie retrieval system and adaptable applications
T. Kunieda, Y. Wakita
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778616

We verify the Package-Segment Model, which we propose as a method for representing the logical structure of multimedia content, and describe the effectiveness of that logical structure. The model consists of various objects that are defined as structural components and are involved in the construction process. We developed an experimental system for retrieving content indexed with the Package-Segment Model. We confirm that the model offers representational flexibility and that its object framework and retrieval mechanism are highly adaptable, so it can be integrated into various multimedia content management systems. A prototype application for automatic indexing and retrieval is also presented and evaluated.
VHS to VRML: 3D graphical models from video sequences
Andrew Zisserman, A. Fitzgibbon, G. Cross
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779119

We describe a method to completely automatically recover 3D scene structure, together with a camera for each frame, from a sequence of images acquired by an unknown camera undergoing unknown movement. Previous approaches have used calibration objects or landmarks to recover this information, and are therefore often limited to a particular scale. The approach of this paper is far more general, since the "landmarks" are derived directly from the imaged scene texture. The method can be applied to a large class of scenes and motions, and is demonstrated for sequences of interior and exterior scenes using both controlled-motion and hand-held cameras. We demonstrate two applications of this technology. The first is the construction of 3D graphical models of the scene; the second is the insertion of virtual objects into the original image sequence. Other applications include image compression and frame interpolation.
Content-based hierarchical classification of vacation images
Aditya Vailaya, Mário A. T. Figueiredo, Anil K. Jain, HongJiang Zhang
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779255

Grouping images into semantically meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. Using binary Bayesian classifiers, we attempt to capture high-level concepts from low-level image features, under the constraint that the test image belongs to one of the classes of interest. Specifically, we consider the hierarchical classification of vacation images: at the highest level, images are classified as indoor or outdoor; outdoor images are further classified as city or landscape; and finally, a subset of landscape images is classified into sunset, forest, and mountain classes. We demonstrate that a small codebook extracted from a vector quantizer (with the optimal codebook size selected using a modified MDL criterion) can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. On a database of 6931 vacation photographs, our system achieved accuracies of 90.5% for indoor vs. outdoor classification, 95.3% for city vs. landscape classification, 96.6% for sunset vs. forest-and-mountain classification, and 95.5% for forest vs. mountain classification. We further develop a learning paradigm to incrementally train the classifiers as additional training samples become available, and we show preliminary results on feature-size reduction using clustering techniques.
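The abstract above describes a cascade of binary decisions: indoor/outdoor, then city/landscape for outdoor images, then sunset vs. forest-and-mountain, then forest vs. mountain. The control flow of that hierarchy can be sketched as below; the predicate functions are hypothetical stand-ins (the paper uses Bayesian classifiers over vector-quantized features, which are not reproduced here).

```python
def classify_vacation_image(image, is_indoor, is_city, is_sunset, is_forest):
    """Route an image through the indoor/outdoor -> city/landscape ->
    sunset -> forest/mountain hierarchy and return a leaf label.
    Each predicate is a binary classifier supplied by the caller."""
    if is_indoor(image):
        return "indoor"
    if is_city(image):
        return "city"
    if is_sunset(image):
        return "sunset"
    return "forest" if is_forest(image) else "mountain"

# Usage with trivial stand-in predicates (an outdoor landscape forest image):
label = classify_vacation_image(
    image=None,
    is_indoor=lambda img: False,
    is_city=lambda img: False,
    is_sunset=lambda img: False,
    is_forest=lambda img: True,
)
```

One property of such a cascade is that errors compound: a leaf label is only correct if every classifier on its root-to-leaf path is correct, which is why the per-stage accuracies are reported separately.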
Realizing throughput guarantees in a differentiated services network
I. Yeom, A. Reddy
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778442

This paper discusses techniques for achieving desired throughput guarantees in an Internet that supports a differentiated-services framework. The diff-serv framework proposes the use of different drop precedences to achieve service guarantees over the Internet. However, it has been observed that drop precedences by themselves cannot achieve the desired target rates, because of the strong interaction between the transport protocol and packet drops in the network. This paper proposes and evaluates a number of techniques for better achieving throughput guarantees in such networks: modifying the transport protocol at the sender, modifying the marking strategies at the marker, and modifying the dropping policies at the router. It is shown that these techniques improve the likelihood of achieving the desired throughput guarantees and also improve service differentiation.
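For context on the "marking" stage mentioned above: in diff-serv, a marker at the network edge tags packets within a customer's contracted rate with a low drop precedence and out-of-profile packets with a high one. The following is a minimal token-bucket marker sketch, not the specific marking strategies evaluated in the paper; the rates and names are illustrative.

```python
def make_marker(target_rate, bucket_depth):
    """Two-color token-bucket marker. Tokens (in bytes) accrue at
    target_rate (bytes/s) up to bucket_depth; a packet that fits in the
    bucket is marked IN (low drop precedence), otherwise OUT (high)."""
    state = {"tokens": bucket_depth, "last": 0.0}

    def mark(packet_size, now):
        # Refill tokens for the time elapsed since the last packet.
        elapsed = now - state["last"]
        state["tokens"] = min(bucket_depth,
                              state["tokens"] + elapsed * target_rate)
        state["last"] = now
        if state["tokens"] >= packet_size:
            state["tokens"] -= packet_size
            return "IN"
        return "OUT"

    return mark

# Usage: 1000 B/s contract, 1500 B bucket.
marker = make_marker(target_rate=1000.0, bucket_depth=1500.0)
tags = [marker(1000, 0.0), marker(1000, 0.0), marker(1000, 1.0)]
```

The paper's observation is that a marker like this interacts badly with TCP: a single OUT-packet drop halves the sender's window, so the achieved rate can fall below the contracted rate, motivating the sender-, marker-, and router-side modifications it proposes.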
Performance and reliability study for distributed video servers: mirroring or parity?
Jamel Gafsi, E. Biersack
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778557

We study video server performance and reliability. We classify several reliability schemes based on the redundancy technique used (mirroring vs. parity) and on the distribution granularity of the redundant data, and we propose an appropriate data layout for each scheme. To calculate server reliability, we apply discrete modeling based on Markov chains. We then focus on the trade-off between achieving high reliability and a low per-stream cost. Our results show that, contrary to intuition, for the same degree of reliability mirroring-based schemes always outperform parity-based schemes in terms of per-stream cost and also in restart latency after a disk failure. Our results also show that a mirroring scheme that copies the original data of a single disk onto a subset of all disks significantly improves server reliability while only slightly increasing the per-stream cost compared with the classical interleaved mirroring scheme.
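To illustrate the kind of Markov-chain reliability modeling the abstract refers to (not the paper's actual multi-disk model): for a single mirrored pair with per-disk failure rate lam and repair rate mu, the continuous-time Markov chain has states both-up -> one-up -> data-lost, and first-step analysis gives a closed-form mean time to data loss of (3*lam + mu) / (2*lam**2).

```python
def mirrored_pair_mttf(fail_rate, repair_rate):
    """Mean time to data loss for one mirrored disk pair.
    CTMC: both-up --(2*lam)--> one-up --(lam)--> lost (absorbing),
    with repair one-up --(mu)--> both-up. First-step analysis:
      T1 = 1/(lam+mu) + (mu/(lam+mu)) * T2
      T2 = 1/(2*lam) + T1
    which solves to T2 = (3*lam + mu) / (2*lam**2)."""
    lam, mu = fail_rate, repair_rate
    return (3 * lam + mu) / (2 * lam ** 2)

# With lam = 1 failure/year and no repair, MTTF is 1.5 years;
# adding repair at mu = 100 repairs/year raises it to 51.5 years.
no_repair = mirrored_pair_mttf(1.0, 0.0)
with_repair = mirrored_pair_mttf(1.0, 100.0)
```

The same absorbing-chain technique scales to whole-server models (many disks, parity groups, spare disks), which is where the paper's mirroring-vs-parity comparison comes from.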
Adaptive QoS resource management in dynamic environments
S. Chatterjee, Michael Brown
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778631

As greater numbers of end users run both multimedia applications and traditional desktop applications (word processing, spreadsheets, etc.) on the same distributed system (e.g., an office intranet or the Internet), the issue of how to provide adaptive quality of service (QoS) in a highly dynamic, shared, and heterogeneous resource environment becomes very important. While Java is an elegant solution to the heterogeneity problem, it lacks adaptive QoS support, which is critical to multimedia and other real-time applications. Our ERDoS (End-to-End Resource Management of Distributed Systems) project presents solutions to the adaptation problem. We will demonstrate our content-based adaptation algorithm, embedded within the Java Virtual Machine (JVM), using a set of multimedia applications on a laptop computer running Linux. If the underlying infrastructure provides real-time support, it becomes much easier to extend that support to the application layers. However, the underlying infrastructure of most common systems (whether an intranet, an extranet, or the Internet) is non-real-time. The problem is exacerbated because these newer multimedia applications must co-reside within the same distributed infrastructure as current desktop applications, which are non-real-time and have unpredictable resource-usage patterns. Therefore, our objective is to provide adaptive QoS support for these new multimedia applications within a best-effort infrastructure. We are not trying to extend Java to support real-time guarantees for applications, because of the inherent non-real-time properties of Java (e.g., its dynamic loading and linking, and garbage collection). Instead, our goal is to insert sophisticated multiapplication, multidimensional QoS adaptation algorithms inside the JVM, enabling it to gracefully adapt multimedia applications as system state changes, in a manner that minimizes the adverse visual effect on these applications' users.
Architecture and implementation of a network-based educational hypermedia system
C. M. Papaterpos, Georgios D. Styliaras, G. Tsolis, T. Papatheodorou
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779267

ISTOPOLIS is a network-based hypermedia educational system, which addresses the needs of end-users (teachers and students) and content providers. ISTOPOLIS operates over the classroom LAN, over a remote connection to a server, or as a standalone application. Students locate information placed within specific context categories and reuse it within their projects. This is supported through a set of Web-based navigation services (Access Tools) and a simple authoring environment integrated within a specialized client. In the back-end, original content authoring is supported through a classification-based methodology and tools. The system is suitable for a variety of educational areas. Its current implementation addresses History and is undergoing evaluation in high schools.
Morphological tools for indexing video documents
C. Demarty, S. Beucher
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778628

This paper proposes a local algorithm using morphological operators, which leads to several useful tools for indexing video documents. It consists of a local computation of a similarity criterion between two successive frames of a sequence, followed by the study of the temporal evolution curve of this criterion for the whole sequence. From this curve, shot transitions are extracted by means of a powerful morphological filter, the inf top-hat. At this point, we have built a cut detection tool. The local computation together with strong morphological filtering leads to a very simple, fast and computationally efficient algorithm, with a high detection rate in the case of cuts for a small level of false detections. This algorithm also gives access to a spatial model of the transition and to a selection of key frames for each shot. By applying the local similarity measure to the key frames, two other tools are built to detect inner shot changes and syntactically related shots. Finally, the relation detection is used in a newscaster detection tool.
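The core idea above, extracting shot transitions as narrow peaks in the frame-to-frame dissimilarity curve with a morphological top-hat, can be sketched in one dimension as follows. This uses a plain opening top-hat with a flat structuring element as a stand-in; the paper's "inf top-hat" filter and its similarity criterion are not reproduced here, and the window size and threshold are illustrative.

```python
def _erode(signal, k):
    # Flat-structuring-element erosion: min over a window of radius k.
    return [min(signal[max(0, i - k): i + k + 1]) for i in range(len(signal))]

def _dilate(signal, k):
    # Flat-structuring-element dilation: max over a window of radius k.
    return [max(signal[max(0, i - k): i + k + 1]) for i in range(len(signal))]

def tophat_cut_detect(dissim, k=2, thresh=0.5):
    """Top-hat cut detection on a frame-dissimilarity curve.
    Opening (erosion then dilation) removes peaks narrower than the
    structuring element; f - opening(f) therefore isolates sharp, isolated
    peaks (cuts) while suppressing slow trends such as camera motion."""
    opened = _dilate(_erode(dissim, k), k)
    tophat = [f - o for f, o in zip(dissim, opened)]
    return [i for i, v in enumerate(tophat) if v > thresh]

# Usage: a single abrupt cut at frame 3 in an otherwise quiet sequence.
cuts = tophat_cut_detect([0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1])
```

The appeal of this filtering step, as the abstract notes, is that a fixed global threshold on the raw curve would fire on gradual motion, whereas thresholding the top-hat residue keeps only transitions narrower than the structuring element.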
Searching for multimedia on the World Wide Web
M. Swain
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.779116

The proliferation of multimedia on the World Wide Web has led to the introduction of Web search engines for images, video and audio. On the Web, multimedia is typically embedded within documents that provide a wealth of indexing information. Harsh computational constraints imposed by the economics of advertising-supported searches restrict the complexity of analysis that can be performed at query time, and users may be unwilling to do much more than type a keyword or two to input a query. Therefore, the primary sources of information for indexing multimedia documents are text cues extracted from HTML pages and multimedia document headers. Off-line analysis of the content of multimedia documents can be successfully employed in Web search engines when combined with these other information sources. Content analysis can be used to categorize and summarize multimedia, in addition to providing cues for finding similar documents.
On the use of histograms for image retrieval
R. Brunelli, O. Mich
Proceedings IEEE International Conference on Multimedia Computing and Systems | Pub Date: 1999-06-07 | DOI: 10.1109/MMCS.1999.778207

This paper analyzes the use of histograms of low-level image features, such as color and luminance, as descriptors for image retrieval purposes. The discrimination ability of several descriptors, and the issues of histogram size and comparison, are considered in a common statistical framework.
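As a concrete example of the descriptor-and-comparison setup studied above: a normalized feature histogram compared by histogram intersection, one of the standard similarity measures in this line of work. The bin count and value range below are illustrative; the paper's statistical framework for choosing them is not reproduced here.

```python
def histogram(values, bins=8, lo=0.0, hi=1.0):
    """Normalized histogram of scalar feature values (e.g. per-pixel
    luminance in [lo, hi)); returns bin frequencies summing to 1."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    total = len(values)
    return [c / total for c in counts]

def intersection(h1, h2):
    """Histogram intersection similarity: sum of bin-wise minima.
    For normalized histograms the result lies in [0, 1]; 1 = identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Usage: compare two small luminance samples.
h_query = histogram([0.1, 0.2, 0.8, 0.9], bins=4)
h_match = histogram([0.1, 0.2, 0.3, 0.4], bins=4)
score = intersection(h_query, h_match)
```

The histogram-size question the abstract raises shows up directly here: too few bins and distinct images collide in the same cells; too many and sampling noise dominates each bin's estimate.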