Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521701
Mediated Meeting Interaction for Teleconferencing
Kazumasa Murai, Don Kimber, J. Foote, Qiong Liu, John Doherty
A common problem in teleconferences is awkward turn-taking, particularly 'collisions,' in which multiple parties inadvertently speak over each other because of communication delays. We propose a model of teleconference discussions that includes the effects of delays, and describe tools that can improve the quality of those interactions. We describe an interface that gently provides latency awareness and gives advance notice of 'incoming speech' to help participants avoid collisions. This is possible when codec latencies are significant, or when a low-bandwidth side channel or out-of-band signaling is available with lower latency than the primary video channel. We report results of simulations, and of experiments carried out with transpacific meetings, which demonstrate that these tools can improve the quality of teleconference discussions.
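The abstract's delay model suggests a simple back-of-the-envelope simulation: if neither party can hear the other until one one-way latency has elapsed, two talk attempts that start closer together than that latency collide. The sketch below is our illustration, not the authors' model; the turn-taking window and uniform start times are assumptions.

```python
# Hypothetical sketch (not the authors' code): Monte Carlo estimate of how
# often two teleconference parties "collide" -- both start speaking before
# either can hear that the other has started -- as a function of one-way delay.
import random

def collision_probability(one_way_delay_s, trials=100_000, window_s=2.0):
    """Both sides pick a random start time in a turn-taking window; a
    collision occurs when the start times differ by less than the delay."""
    collisions = 0
    for _ in range(trials):
        a = random.uniform(0.0, window_s)
        b = random.uniform(0.0, window_s)
        if abs(a - b) < one_way_delay_s:
            collisions += 1
    return collisions / trials

if __name__ == "__main__":
    for delay in (0.05, 0.25, 0.5, 1.0):
        print(f"delay {delay:.2f}s -> P(collision) ~ {collision_probability(delay):.3f}")
```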
{"title":"Mediated Meeting Interaction for Teleconferencing","authors":"Kazumasa Murai, Don Kimber, J. Foote, Qiong Liu, John Doherty","doi":"10.1109/ICME.2005.1521701","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521701","url":null,"abstract":"A common problem with teleconferences is awkward turn-taking-particularly 'collisions,' whereby multiple parties inadvertently speak over each other due to communication delays. We propose a model for teleconference discussions including the effects of delays, and describe tools that can improve the quality of those interactions. We describe an interface to gently provide latency awareness, and to give advanced notice of 'incoming speech' to help participants avoid collisions. This is possible when codec latencies are significant, or when a low bandwidth side channel or out-of-band signaling is available with lower latency than the primary video channel. We report on results of simulations, and of experiments carried out with transpacific meetings, that demonstrate these tools can improve the quality of teleconference discussions","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127120530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521729
Real-Time and Distributed AV Content Analysis System for Consumer Electronics Networks
J. Nesvadba, P. Fonseca, A. Sinitsyn, F. D. Lange, Martijn Thijssen, P. Kaam, Hong Liu, Rien van Leeuwen, J. Lukkien, A. Korostelev, Jan Ypma, B. Kroon, H. Celik, A. Hanjalic, S. U. Naci, J. Benois-Pineau, P. D. With, Jungong Han
The ever-increasing complexity of generic multimedia-content-analysis (MCA) solutions, their demanding processing requirements, and the need to prototype and assess solutions quickly and cheaply motivated the development of the Cassandra framework. The combination of state-of-the-art network and grid-computing solutions with recently standardized interfaces facilitated the setup of this framework, forming the basis for multiple cross-domain and cross-organizational collaborations. It enables simulations of distributed-computing scenarios, e.g. distributed content analysis (DCA) across consumer-electronics (CE) in-home networks, as well as the rapid development and assessment of complex applications and system solutions built from multiple MCA algorithms. Furthermore, the framework's modular nature (logical MCA units are wrapped into so-called service units, SUs) eases the split between system-architecture and algorithmic work and additionally facilitates the reusability, extensibility, and upgradeability of those SUs.
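As a rough illustration of the service-unit idea (the abstract does not specify Cassandra's actual interfaces, so all names below are hypothetical), an SU can be modeled as a uniform wrapper around one analysis step, so that units can be chained, swapped, or distributed independently of their internals:

```python
# Illustrative sketch only -- the real Cassandra framework's interfaces are
# not shown in the abstract. This models the idea of wrapping a content-
# analysis algorithm in a "service unit" (SU) with a uniform interface.
from abc import ABC, abstractmethod
from typing import Any, Dict

class ServiceUnit(ABC):
    """Uniform wrapper around one multimedia-content-analysis step."""
    @abstractmethod
    def process(self, frame: Dict[str, Any]) -> Dict[str, Any]:
        ...

class ShotBoundarySU(ServiceUnit):
    def __init__(self, threshold: float = 0.4):
        self.threshold = threshold
        self._prev_histogram = None

    def process(self, frame):
        hist = frame["histogram"]  # assume an upstream SU computed this
        if self._prev_histogram is not None:
            diff = sum(abs(a - b) for a, b in zip(hist, self._prev_histogram))
            frame["shot_boundary"] = diff > self.threshold
        self._prev_histogram = hist
        return frame

def run_pipeline(units, frames):
    """Chain SUs; in a distributed deployment each unit could run on a
    different networked device."""
    for frame in frames:
        for unit in units:
            frame = unit.process(frame)
        yield frame
```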
{"title":"Real-Time and Distributed AV Content Analysis System for Consumer Electronics Networks","authors":"J. Nesvadba, P. Fonseca, A. Sinitsyn, F. D. Lange, Martijn Thijssen, P. Kaam, Hong Liu, Rien van Leeuwen, J. Lukkien, A. Korostelev, Jan Ypma, B. Kroon, H. Celik, A. Hanjalic, S. U. Naci, J. Benois-Pineau, P. D. With, Jungong Han","doi":"10.1109/ICME.2005.1521729","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521729","url":null,"abstract":"The ever-increasing complexity of generic multimedia-content-analysis-based (MCA) solutions, their processing power demanding nature and the need to prototype and assess solutions in a fast and cost-saving manner motivated the development of the Cassandra framework. The combination of state-of-the-art network and grid-computing solutions and recently standardized interfaces facilitated the set-up of this framework, forming the basis for multiple cross-domain and cross-organizational collaborations. It enables distributed computing scenario simulations for e.g. distributed content analysis (DCA) across consumer electronics (CE) in-home networks, but also the rapid development and assessment of complex multi-MCA-algorithm-based applications and system solutions. Furthermore, the framework's modular nature-logical MCA units are wrapped into so-called service units (SU)-ease the split between system-architecture- and algorithmic-related work and additionally facilitate reusability, extensibility and upgrade ability of those SUs","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"165 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127379462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521585
Streaming layered encoded video using peers
Yanming Shen, Zhengye Liu, S. Panwar, K. Ross, Yao Wang
Peer-to-peer video streaming has emerged as an important means of transporting stored video. Peers are less costly and more scalable than an infrastructure-based video streaming network, which deploys a dedicated set of servers to store and distribute videos to clients. In this paper, we investigate streaming layered encoded video using peers. Each video is encoded into hierarchical layers, which are stored on different peers. The system serves a client request by streaming multiple layers of the requested video from separate peers. The system provides unequal error protection for different layers by varying the number of copies stored for each layer according to its importance. We evaluate the performance of our proposed system with different copy-number allocation schemes through extensive simulations. Finally, we compare the performance of layered coding with multiple description coding.
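A minimal sketch of the copy-number allocation idea, assuming a greedy importance-per-byte heuristic of our own choosing (the paper's actual allocation schemes are not given in the abstract):

```python
# Hedged sketch (parameters invented for illustration): decide how many
# replicas of each video layer to store across peers, giving more copies to
# more important (lower) layers -- the "unequal error protection" idea --
# under a fixed total-storage budget.
def allocate_copies(layer_sizes, importance, budget):
    """Greedy allocation: repeatedly add a copy of the layer with the best
    importance-per-byte gain that still fits in the remaining budget."""
    copies = [1] * len(layer_sizes)  # store at least one copy of each layer
    remaining = budget - sum(layer_sizes)
    if remaining < 0:
        raise ValueError("budget too small to store one copy of every layer")
    while True:
        # diminishing returns: weight importance by 1/(current copy count)
        candidates = [
            (importance[i] / copies[i] / layer_sizes[i], i)
            for i in range(len(layer_sizes))
            if layer_sizes[i] <= remaining
        ]
        if not candidates:
            return copies
        _, best = max(candidates)
        copies[best] += 1
        remaining -= layer_sizes[best]

# Example: the base layer (most important) ends up with the most replicas.
print(allocate_copies(layer_sizes=[10, 10, 10], importance=[5, 3, 1], budget=100))
```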
{"title":"Streaming layered encoded video using peers","authors":"Yanming Shen, Zhengye Liu, S. Panwar, K. Ross, Yao Wang","doi":"10.1109/ICME.2005.1521585","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521585","url":null,"abstract":"Peer-to-peer video streaming has emerged as an important means to transport stored video. The peers are less costly and more scalable than an infrastructure-based video streaming network which deploys a dedicated set of servers to store and distribute videos to clients. In this paper, we investigate streaming layered encoded video using peers. Each video is encoded into hierarchical layers which are stored on different peers. The system serves a client request by streaming multiple layers of the requested video from separate peers. The system provides unequal error protection for different layers by varying the number of copies stored for each layer according to its importance. We evaluate the performance of our proposed system with different copy number allocation schemes through extensive simulations. Finally, we compare the performance of layered coding with multiple description coding.","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127482512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521734
Night Scene Live – A Multimedia Application for Mobile Revellers on the Basis of a Hybrid Network, Using DVB-H and IP Datacast
J. Baldzer, S. Thieme, Susanne CJ Boll, Hans-Jürgen Appelrath, Niels Rosenhager
The combination of the emerging digital video broadcasting-handheld (DVB-H) standard with cellular communication such as UMTS produces a hybrid network with enormous potential for mobile multimedia applications. To optimize the performance of hybrid networks, the characteristics of the individual networks have to be considered. Our prototypical hybrid network infrastructure employs smart access management for optimal usage of both the broadcast and the point-to-point network. Our demonstrator, "Night Scene Live", a multimedia event portal, is an excellent example of an application that exploits the potential of future hybrid networks.
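One way to picture the access-management decision (the threshold policy below is an invented stand-in, not the paper's algorithm) is a dispatcher that moves popular content onto the one-to-many DVB-H channel and serves niche requests over UMTS:

```python
# Illustrative sketch: route a content request over the DVB-H broadcast
# channel when many users want the same stream, and over the cellular
# point-to-point link when demand is individual. The heuristic is ours.
from dataclasses import dataclass

@dataclass
class Request:
    content_id: str
    user_id: str

class AccessManager:
    def __init__(self, broadcast_threshold: int = 20):
        self.broadcast_threshold = broadcast_threshold
        self.demand = {}  # content_id -> set of interested user_ids

    def route(self, req: Request) -> str:
        audience = self.demand.setdefault(req.content_id, set())
        audience.add(req.user_id)
        # Popular content amortizes well over the one-to-many DVB-H carousel;
        # niche content is cheaper on a UMTS point-to-point session.
        if len(audience) >= self.broadcast_threshold:
            return "DVB-H broadcast"
        return "UMTS unicast"
```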
{"title":"Night Scene Live – A Multimedia Application for Mobile Revellers on the Basis of a Hybrid Network, Using DVB-H and IP Datacast","authors":"J. Baldzer, S. Thieme, Susanne CJ Boll, Hans-Jürgen Appelrath, Niels Rosenhager","doi":"10.1109/ICME.2005.1521734","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521734","url":null,"abstract":"The combination of the emerging digital video broadcasting-handheld (DVB-H) standard with cellular communication like UMTS produces a hybrid network with enormous potential for mobile multimedia applications. In order to optimize the performance of hybrid networks, the characteristics of different individual networks have to be considered. Our prototypical hybrid network infrastructure employs smart access management for an optimal usage of both broadcast and point-to-point network. Our demonstrator-\"night scene live\", a multimedia event portal-is an excellent example of an application exploiting the potential of future hybrid networks","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128897373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521475
A Player-Possession Acquisition System for Broadcast Soccer Video
Xinguo Yu, Tze Sen Hay, Xin Yan, Chng Eng Siong
A semi-automatic system is developed to acquire player possession from broadcast soccer video, with the objective of minimizing manual work. This research is important because acquiring player possession purely by hand is very time-consuming. For completeness, the system integrates ball detection-and-tracking, view-classification, and play/break-analysis algorithms. First, it produces the ball locations, the play/break structure, and the view classes of frames. Then it finds the touching points based on ball locations and player detection. Next, it estimates the touching place in the field for each touching point based on the view class of the touching frame. Last, for each touching point it acquires the touching-player candidates based on the touching place and the roles of players. The system provides graphical user interfaces to verify touching points and finalize the touching player for each touching point. Experimental results show that the proposed system obtains good results in touching-point detection and touching-player candidate inference, which saves considerable time compared with a purely manual approach.
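The pipeline can be sketched as follows; the distance criterion and role filter are placeholders for the paper's actual detection and inference algorithms:

```python
# Hypothetical pipeline skeleton mirroring the stages the abstract describes;
# the concrete criteria are stubs, since the real algorithms are the paper's
# contribution.
def detect_touching_points(ball_track, player_boxes, min_dist=15.0):
    """A touching point is a frame where the ball comes within min_dist
    pixels of some player (a stand-in criterion for this sketch)."""
    touches = []
    for frame_idx, ball in enumerate(ball_track):
        if ball is None:  # ball not detected in this frame
            continue
        for player in player_boxes.get(frame_idx, []):
            cx = (player["x1"] + player["x2"]) / 2
            cy = (player["y1"] + player["y2"]) / 2
            if ((ball[0] - cx) ** 2 + (ball[1] - cy) ** 2) ** 0.5 < min_dist:
                touches.append({"frame": frame_idx, "player_box": player})
                break
    return touches

def candidate_players(touch, view_class, team_roster):
    """Map a touching place to likely players using field position and role;
    reduced to a trivial role filter for illustration."""
    if view_class == "goal_area":
        return [p for p in team_roster if p["role"] in ("goalkeeper", "defender")]
    return team_roster
```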
{"title":"A Player-Possession Acquisition System for Broadcast Soccer Video","authors":"Xinguo Yu, Tze Sen Hay, Xin Yan, Chng Eng Siong","doi":"10.1109/ICME.2005.1521475","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521475","url":null,"abstract":"A semi-auto system is developed to acquire player possession for broadcast soccer video, whose objective is to minimize the manual work. This research is important because acquiring player-possession by pure manual work is very time-consuming. For completeness, this system integrates the ball detection-and-tracking algorithm, view classification algorithm, and play/break analysis algorithm. First, it produces the ball locations, play/break structure, and the view classes of frames. Then it finds the touching points based on ball locations and player detection. Next it estimates the touching-place in the field for each touching point based on the view-class of the touching frame. Last, for each touching-point it acquires the touching-player candidates based on the touching-place and the roles of players. The system provides the graphical user interfaces to verify touching-points and finalize the touching-player for each touching-point. Experimental results show that the proposed system can obtain good results in touching-point detection and touching-player candidate inference, which save a lot of time compared with the pure manual way.","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132046060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521551
Audio-visual affect recognition in activation-evaluation space
Zhihong Zeng, ZhenQiu Zhang, Brian Pianfetti, J. Tu, Thomas S. Huang
The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications for human-computer interaction (HCI). To more accurately simulate the human ability to assess affect through multisensory data, automatic affect recognition should also make use of multimodal data. In this paper, we present our efforts toward audio-visual affect recognition. Based on psychological research, we have chosen affect categories drawn from an activation-evaluation space, which is robust in capturing significant aspects of emotion. We apply the Fisher boosting learning algorithm, which can build a strong classifier by combining a small set of weak classification functions. Our experimental results show that with 30 Fisher features, the testing error rates of our bimodal affect recognition are about 16% on the evaluation axis and 13% on the activation axis.
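The 'strong classifier from weak classification functions' is the classic boosting recipe. Since the Fisher-boosting update rule is not spelled out in the abstract, the sketch below shows a standard AdaBoost-style combination for intuition only:

```python
# Generic boosting sketch -- NOT the paper's Fisher boosting, whose exact
# update is not given here. Shows how weighted weak learners combine into
# a strong classifier.
import math

def boost(weak_learners, X, y, rounds=30):
    """X: list of samples, y: labels in {-1,+1}; each weak learner is a
    function sample -> {-1,+1}. Returns a weighted ensemble classifier."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # pick the weak learner with the lowest weighted error
        errs = [(sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi), h)
                for h in weak_learners]
        err, h = min(errs, key=lambda t: t[0])
        err = max(min(err, 1 - 1e-10), 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # re-weight: emphasize the samples this learner got wrong
        w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```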
{"title":"Audio-visual affect recognition in activation-evaluation space","authors":"Zhihong Zeng, ZhenQiu Zhang, Brian Pianfetti, J. Tu, Thomas S. Huang","doi":"10.1109/ICME.2005.1521551","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521551","url":null,"abstract":"The ability of a computer to detect and appropriately respond to changes in a user's affective state has significant implications to human-computer interaction (HCI). To more accurately simulate the human ability to assess affects through multi-sensory data, automatic affect recognition should also make use of multimodal data. In this paper, we present our efforts toward audio-visual affect recognition. Based on psychological research, we have chosen affect categories based on an activation-evaluation space which is robust in capturing significant aspects of emotion. We apply the Fisher boosting learning algorithm which can build a strong classifier by combining a small set of weak classification functions. Our experimental results show with 30 Fisher features, the testing error rates of our bimodal affect recognition is about 16% on the evaluation axis and 13% on the activation axis.","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"334-335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130916433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521544
Expressive avatars in MPEG-4
M. Mancini, Bjoern Hartmann, C. Pelachaud, A. Raouzaiou, K. Karpouzis
Man-machine interaction (MMI) systems that utilize multimodal information about a user's current emotional state are presently at the forefront of interest in the computer vision and artificial intelligence communities. A lifelike avatar can enhance interactive applications. In this paper, we present the implementation of the GretaEngine and of synthesized expressions, including intermediate ones, based on the MPEG-4 standard and Whissell's emotion representation.
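Intermediate expressions of the kind mentioned can be pictured as blends between archetypal MPEG-4 facial-animation-parameter (FAP) profiles; the profiles and blending rule below are illustrative inventions, not GretaEngine's actual data:

```python
# Hedged illustration: synthesize an intermediate expression by linearly
# interpolating MPEG-4 FAP values between two archetypal emotion profiles.
# The FAP names and values are invented placeholders.
ANGER_FAPS = {"raise_l_i_eyebrow": -120, "stretch_l_cornerlip": -40}
JOY_FAPS   = {"raise_l_i_eyebrow":   60, "stretch_l_cornerlip": 150}

def intermediate_expression(profile_a, profile_b, t):
    """Linear blend between two FAP profiles, t in [0, 1]."""
    keys = set(profile_a) | set(profile_b)
    return {k: round((1 - t) * profile_a.get(k, 0) + t * profile_b.get(k, 0))
            for k in keys}

# Halfway between anger and joy:
print(intermediate_expression(ANGER_FAPS, JOY_FAPS, 0.5))
```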
{"title":"Expressive avatars in MPEG-4","authors":"M. Mancini, Bjoern Hartmann, C. Pelachaud, A. Raouzaiou, K. Karpouzis","doi":"10.1109/ICME.2005.1521544","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521544","url":null,"abstract":"Man-machine interaction (MMI) systems that utilize multimodal information about users' current emotional state are presently at the forefront of interest of the computer vision and artificial intelligence communities. A lifelike avatar can enhance interactive applications. In this paper, we present the implementation of GretaEngine and synthesized expressions, including intermediate ones, based on MPEG-4 standard and Whissel's emotion representation.","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131652409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521601
Efficient Hardware Search Engine for Associative Content Retrieval of Long Queries in Huge Multimedia Databases
Christophe Layer, H. Pfleiderer
Due to the enormous increase in stored digital content, search and retrieval functionalities are necessary in multimedia systems. Though the processor speed of standard PCs (personal computers) is experiencing almost exponential growth, the memory subsystem, handicapped by lower frequencies and physical I/O (input/output) limitations, remains the bottleneck of common computer architectures. As a result, many applications such as database management systems remain so dependent on memory throughput that increases in CPU (central processing unit) speed are no longer helpful. Because average bandwidth is crucial for system performance, our research has focused especially on techniques for the efficient storage and retrieval of multimedia data. This paper presents the realization of a hardware database search engine based on an associative access method for textual information retrieval. It details the internal architecture of the system and compares the results of our hardware prototype with a software solution.
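Although the paper's engine is hardware, the associative access method can be illustrated in software: score every stored text against a long query by feature overlap rather than by exact keys. The trigram/Jaccard choice below is our stand-in, not the paper's method:

```python
# Software analogue for illustration only -- the paper's contribution is a
# hardware engine. Associative retrieval matches by content similarity,
# here approximated with character-trigram overlap.
def trigrams(text):
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def associative_search(query, documents, top_k=3):
    q = trigrams(query)
    scored = []
    for doc_id, text in documents.items():
        d = trigrams(text)
        # Jaccard overlap: tolerant of word order and minor differences,
        # which is what makes the access "associative"
        score = len(q & d) / len(q | d) if (q | d) else 0.0
        scored.append((score, doc_id))
    return sorted(scored, reverse=True)[:top_k]
```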
{"title":"Efficient Hardware Search Engine for Associative Content Retrieval of Long Queries in Huge Multimedia Databases","authors":"Christophe Layer, H. Pfleiderer","doi":"10.1109/ICME.2005.1521601","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521601","url":null,"abstract":"Due to the enormous increase in the stored digital contents, search and retrieval functionalities are necessary in multimedia systems. Though processor speed for standard PCs (Personal Computers) is experiencing an almost exponential growth, the memory subsystem handicapped by lower frequencies and a physical I/O (Input/Output) limitation reflects the bottleneck of common computer architectures. As a result, many applications such as database management systems remain so dependent on memory throughput that increases in CPU (Central Processing Unit) speeds are no longer helpful. Because average bandwidth is crucial for system performance, our research has focused especially on techniques for efficient storage and retrieval of multimedia data. This paper presents the realization of a hardware database search engine based on an associative access method for textual information retrieval. It reveals the internal architecture of the system and compares the results of our hardware prototype with the software solution","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126825087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521450
Compression transparent low-level description of audio signals
J. Lukasiak, Chris McElroy, E. Cheng
A new low-level audio descriptor that represents the psychoacoustic noise-floor shape of an audio frame is proposed. The results presented indicate that the proposed descriptor is far more resilient to compression noise than any of the MPEG-7 low-level audio descriptors. In fact, across a wide range of files, the proposed scheme on average fails to uniquely identify only five frames in every ten thousand. In addition, the proposed descriptor maintains high resilience to compression noise even when decimated to use only one quarter of the values per frame to represent the noise floor. This characteristic indicates that the proposed descriptor provides a truly scalable mechanism for transparently describing the characteristics of an audio frame.
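A simplified software stand-in for such a descriptor (the paper's psychoacoustic model is not reproduced in the abstract; the band-minimum spectral floor below is an assumption) might look like this:

```python
# Simplified stand-in: describe each audio frame by the coarse shape of its
# spectral floor, then identify frames by nearest-descriptor match. The
# real descriptor uses a psychoacoustic model not shown here.
import numpy as np

def noise_floor_descriptor(frame, n_bands=16):
    """Split the magnitude spectrum into bands and keep each band's minimum
    (in dB) as a rough noise-floor shape."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, n_bands)
    return np.array([20 * np.log10(np.min(b) + 1e-12) for b in bands])

def identify(query_desc, stored_descs):
    """Return the index of the stored frame whose descriptor is closest."""
    dists = [np.linalg.norm(query_desc - d) for d in stored_descs]
    return int(np.argmin(dists))

def decimate(desc, factor=4):
    """Keep every factor-th value, mimicking the abstract's quarter-rate
    decimation experiment."""
    return desc[::factor]
```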
{"title":"Compression transparent low-level description of audio signals","authors":"J. Lukasiak, Chris McElroy, E. Cheng","doi":"10.1109/ICME.2005.1521450","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521450","url":null,"abstract":"A new low level audio descriptor that represents the psychoacoustic noise floor shape of an audio frame is proposed. Results presented indicate that the proposed descriptor is far more resilient to compression noise than any of the MPEG-7 low level audio descriptors. In fact, across a wide range of files, on average the proposed scheme fails to uniquely identify only five frames in every ten thousand. In addition, the proposed descriptor maintains a high resilience to compression noise even when decimated to use only one quarter of the values per frame to represent the noise floor. This characteristic indicates the proposed descriptor presents a truly scalable mechanism for transparently describing the characteristics of an audio frame.","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123329745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-07-06 | DOI: 10.1109/ICME.2005.1521512
Scalable temporal interest points for abstraction and classification of video events
Seung-Hoon Han, In-So Kweon
The image sequence of a static scene contains similar or redundant information over time. Hence, motion-discontinuous instants can efficiently characterize a video shot or event. However, such instants (key frames) are identified differently depending on changes in the velocity and acceleration of the motion, and the scales of those changes may differ across sequences of the same event. In this paper, we present a scalable video abstraction in which the key frames are obtained from the maxima of the curvature of camera motion at each temporal scale. Here, scalability means dealing with changes in the velocity and acceleration of motion. In the temporal neighborhood determined by the scale, scene features (motion, color, and edges) can be used to index and classify video events. Therefore, those key frames provide temporal interest points (TIPs) for the abstraction and classification of video events.
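A sketch of the scale-dependent key-frame selection, assuming camera motion is available as a pan/tilt trajectory (the smoothing scale and curvature formula below are our illustration, not the paper's exact procedure):

```python
# Sketch under stated assumptions: treat camera motion as a 2-D trajectory
# (pan, tilt) over time, smooth it at a temporal scale sigma, and take local
# maxima of curvature as candidate temporal interest points (key frames).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def temporal_interest_points(pan, tilt, sigma=5.0):
    x = gaussian_filter1d(np.asarray(pan, dtype=float), sigma)
    y = gaussian_filter1d(np.asarray(tilt, dtype=float), sigma)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # curvature of a planar curve (x(t), y(t))
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12) ** 1.5
    # local maxima of curvature mark motion discontinuities at this scale
    return [t for t in range(1, len(kappa) - 1)
            if kappa[t] > kappa[t - 1] and kappa[t] >= kappa[t + 1]]
```

Larger values of sigma correspond to coarser temporal scales, yielding fewer, more salient key frames.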
{"title":"Scalable temporal interest points for abstraction and classification of video events","authors":"Seung-Hoon Han, In-So Kweon","doi":"10.1109/ICME.2005.1521512","DOIUrl":"https://doi.org/10.1109/ICME.2005.1521512","url":null,"abstract":"The image sequence of a static scene includes similar or redundant information over time. Hence, motion-discontinuous instants can efficiently characterize a video shot or event. However, such instants (key frames) are differently identified according to the change of velocity and acceleration of motion, and such scales of change might be different on each sequence of the same event. In this paper, we present a scalable video abstraction in which the key frames are obtained by the maximum curvature of camera motion at each temporal scale. The scalability means dealing with the velocity and acceleration change of motion. In the temporal neighborhood determined by the scale, the scene features (motion, color, and edge) can be used to index and classify the video events. Therefore, those key frames provide temporal interest points (TIPs) for the abstraction and classification of video events.","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"7 17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126369210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}