Lifelog Semantic Annotation using deep visual features and metadata-derived descriptors
Bahjat Safadi, P. Mulhem, G. Quénot, J. Chevallet
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500247 | 2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI)
This paper describes a method for querying lifelog data from visual content and from metadata associated with the recorded images. Our approach relies mainly on mapping query terms to visual concepts computed on the lifelog images, according to two separate learning schemes based on deep visual features. Post-processing is then performed when the topic involves time, location, or activity information associated with the images. This work was evaluated in the context of the Lifelog Semantic Access sub-task of NTCIR-12 (2016). The results obtained are promising for a first participation in such a task, with an event-based MAP above 29% and an event-based nDCG value close to 39%.
A Demo of multimodal medical retrieval
Ranveer Joyseeree, Roger Schaer, H. Müller
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500263
Providing personalized medical care based on a patient's specific characteristics (diagnostic-image content, age, sex, weight, and so on) is an important aspect of modern medicine. This paper describes tools that aim to facilitate this process by providing clinicians with information regarding the diagnosis and treatment of past patients with similar characteristics. The additional information thus provided can help make better-informed decisions with regard to the diagnosis and treatment planning of new patients. Two existing tools, Shambala and Shangri-La, can be combined for use within a clinical environment. Deployment inside healthcare facilities can become possible via the MD-Paedigree project.
Experimenting with musically motivated convolutional neural networks
Jordi Pons, T. Lidy, Xavier Serra
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500246
A common criticism of deep learning relates to the difficulty of understanding the underlying relationships that neural networks learn, which makes them behave like black boxes. In this article we explore various architectural choices relevant to music signal classification tasks in order to start understanding what the chosen networks are learning. We first discuss how convolutional filters with different shapes can fit specific musical concepts, and based on that we propose several musically motivated architectures. These architectures are then assessed by measuring the accuracy of the deep learning model in predicting various music classes using a known dataset of audio recordings of ballroom music. The classes in this dataset have a strong correlation with tempo, which allows assessing whether the proposed architectures are learning frequency and/or time dependencies. Additionally, a black-box model is proposed as a baseline for comparison. With these experiments we have been able to understand what some deep-learning-based algorithms can learn from a particular set of data.
Real-time multilevel sequencing of cataract surgery videos
K. Charrière, G. Quellec, M. Lamard, D. Martiano, G. Cazuguel, G. Coatrieux, B. Cochener
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500245
Data recorded and stored during video-monitored surgeries are a relevant source of information for surgeons, especially during their training period. Today, however, this data is virtually unexploited. In this paper, we propose to reuse videos recorded during cataract surgeries to automatically analyze the surgical process under a real-time constraint, with the aim of assisting the surgeon during the surgery. We propose to automatically recognize, in real time, what the surgeon is doing: what surgical phase or, more precisely, what surgical step he or she is performing. This recognition relies on the inference of a multilevel statistical model which uses 1) the conditional relations between levels of description (steps and phases) and 2) the temporal relations among steps and among phases. The model accepts two types of inputs: 1) the presence of surgical instruments, manually provided by the surgeons, or 2) motion in videos, automatically analyzed through the content-based video retrieval (CBVR) paradigm. A dataset of 30 cataract surgery videos was collected at Brest University Hospital. The system was evaluated in terms of mean area under the ROC curve. Promising results were obtained using either motion analysis (Az = 0.759) or the presence of surgical instruments (Az = 0.983).
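The instrument-presence input channel can be pictured with a toy per-frame labeler. The tables and names below are hypothetical, and the rule-based lookup with a monotonicity constraint is only a simplified stand-in for the paper's multilevel statistical model:

```python
# Hypothetical lookup tables; the paper infers steps and phases with a
# multilevel statistical model, this is only a rule-based stand-in.
STEP_OF_INSTRUMENTS = {
    frozenset({"knife"}): "incision",
    frozenset({"phaco_handpiece"}): "phacoemulsification",
    frozenset({"injector"}): "implantation",
}
PHASE_OF_STEP = {
    "incision": "opening",
    "phacoemulsification": "nucleus_removal",
    "implantation": "closing",
}
STEP_ORDER = ["incision", "phacoemulsification", "implantation"]

def recognize(frames):
    """Assign a (step, phase) label per frame; steps never move backwards."""
    last = 0
    labels = []
    for instruments in frames:
        step = STEP_OF_INSTRUMENTS.get(frozenset(instruments))
        if step is not None:
            last = max(last, STEP_ORDER.index(step))  # temporal monotonicity
        step = STEP_ORDER[last]
        labels.append((step, PHASE_OF_STEP[step]))
    return labels

frames = [{"knife"}, set(), {"phaco_handpiece"}, {"knife"}, {"injector"}]
print(recognize(frames))
```

Even this toy version shows the two ingredients the abstract names: conditional relations between levels (step determines phase) and temporal relations (later steps cannot precede earlier ones).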
Static and dynamic autopsy of deep networks
Titouan Lorieul, Antoine Ghorra, B. Mérialdo
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500267
Although deep learning has been a major breakthrough in recent years, Deep Neural Networks (DNNs) are still the subject of intense research, and many issues remain on how to use them efficiently. In particular, training a Deep Network remains a difficult process, which requires extensive computation and for which great care has to be taken to avoid overfitting, a high risk given the extremely large number of parameters. The purpose of our work is to perform an autopsy of pre-trained Deep Networks, with the objective of collecting information about the values of the various parameters and their possible relations and correlations. The motivation is that some of these observations could later be used as a priori knowledge to facilitate the training of new networks, by guiding the exploration of the parameter space into more probable areas. In this paper, we first present a static analysis of the AlexNet Deep Network by computing various statistics on the existing parameter values. Then, we perform a dynamic analysis by measuring the effect of certain modifications of those values on the performance of the network. For example, we show that quantizing the parameters to a small, adequate set of values leads to similar performance as the original network. These results suggest that pursuing such studies could lead to the design of improved training procedures for Deep Networks.
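The quantization experiment mentioned above can be sketched on a toy weight matrix: replace every weight by the nearest of a small set of evenly spaced values and compare the layer's output before and after. This is a minimal illustration on random data, not the paper's AlexNet procedure:

```python
import numpy as np

def quantize(w, n_levels=16):
    """Map each weight to the nearest of n_levels evenly spaced values."""
    lo, hi = w.min(), w.max()
    levels = np.linspace(lo, hi, n_levels)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 128))   # a toy layer's weights
x = rng.normal(size=(128,))

y_full = w @ x
y_quant = quantize(w) @ x

# Only 16 distinct weight values remain, yet the output error stays modest
rel_err = np.linalg.norm(y_full - y_quant) / np.linalg.norm(y_full)
print(len(np.unique(quantize(w))), round(float(rel_err), 3))
```

The paper's point is stronger than this sketch: in a trained network, the end-to-end classification accuracy (not just one layer's output) survives such coarse quantization.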
Crowdsourcing as self-fulfilling prophecy: Influence of discarding workers in subjective assessment tasks
M. Riegler, V. Reddy, M. Larson, Ragnhild Eg, P. Halvorsen, C. Griwodz
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500256
Crowdsourcing has established itself as a powerful tool for multimedia researchers and is commonly used to collect human input for various purposes. It is also a fairly widespread practice to control the contributions of users based on the quality of their input. This paper points out that applying this practice in subjective assessment tasks may lead to an undesired, negative outcome. We present a crowdsourcing experiment and a discussion of the ways in which control in crowdsourcing studies can lead to a phenomenon akin to a self-fulfilling prophecy. This paper is intended to trigger discussion and lead to more deeply reflective crowdsourcing practices in the multimedia context.
Prediction of visual attention with Deep CNN for studies of neurodegenerative diseases
S. Chaabouni, F. Tison, J. Benois-Pineau, C. Amar
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500243
As part of the automatic study of visual attention in populations affected by neurodegenerative diseases, and in order to predict whether new gaze recordings indicate such diseases, an automatic model that predicts salient areas in video is needed. Past research showed that people suffering from dementia are not reactive to degradations in still images. In this paper we study the reaction of healthy normal control subjects to degraded areas in videos. Furthermore, with the goal of building an automatic prediction model for salient areas in intentionally degraded videos, we design a deep learning architecture and measure its performance when predicting salient regions on completely unseen data. The obtained results are interesting regarding the reaction of normal control subjects to degraded areas in video.
Temporal segmentation of laparoscopic videos into surgical phases
Manfred Jürgen Primus, Klaus Schöffmann, L. Böszörményi
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500249
Videos of laparoscopic surgeries need to be segmented temporally into phases so that surgeons can use the recordings efficiently in their everyday work. In this paper we investigate the performance of an automatic phase segmentation method based on instrument detection and recognition. Contrary to known methods that dynamically align phases to an annotated dataset, our method is not limited to standardized or unvarying endoscopic procedures. Phases of laparoscopic procedures show a high correlation with the presence of one instrument or a group of certain instruments. Therefore, the first step of our procedure is the definition of a set of rules that describe these correlations. The next step is the spatial detection of instruments using a color-based segmentation method and a rule-based interpretation of image moments for the refinement of the detections. Finally, the detected regions are recognized with SVM classifiers and ORB features. The evaluation shows that the proposed technique finds phases in laparoscopic videos of cholecystectomies reliably.
Interactive exploration of healthcare queries
A. Bampoulidis, M. Lupu, João Palotti, S. Metallidis, J. Brassey, A. Hanbury
Pub Date: 2016-06-15 | DOI: 10.1109/CBMI.2016.7500275
Healthcare-related queries are a treasure trove of information about the information needs of domain users, be they patients or doctors. However, unlike general queries, in order to make the most of the information therein, such queries have to be processed within a medical terminology annotation pipeline. We show how this has been done in the context of the KConnect project and demonstrate an interactive query log exploration interface that allows data analysts and search engineers to better understand their users and design a better search experience.
Pub Date : 2016-06-15DOI: 10.1109/CBMI.2016.7500240
M. Schedl, D. Hauger, M. Tkalcic, M. Melenhorst, Cynthia C. S. Liem
We present a freely available dataset of multimedia material that can be used to build enriched browsing and retrieval systems for music. It is one result of the EU-FP7 funded project “Performances as Highly Enriched aNd Interactive Concert experiences” (PHENICX) that aims at enhancing the listener experience when enjoying classical music. The presented PHENICX-SMM dataset includes in total more than 50,000 multimedia items (text, image, audio) about composers, performers, pieces, and instruments. In addition to presenting the dataset, we detail one possible use case, that of building a personalized music information system that suggests certain types and quantities of multimedia material, based on personality traits and musical experience of its users. We evaluate the system via a user study and show that people generally prefer the personalized results over non-personalized.
{"title":"A dataset of multimedia material about classical music: PHENICX-SMM","authors":"M. Schedl, D. Hauger, M. Tkalcic, M. Melenhorst, Cynthia C. S. Liem","doi":"10.1109/CBMI.2016.7500240","DOIUrl":"https://doi.org/10.1109/CBMI.2016.7500240","url":null,"abstract":"We present a freely available dataset of multimedia material that can be used to build enriched browsing and retrieval systems for music. It is one result of the EU-FP7 funded project “Performances as Highly Enriched aNd Interactive Concert experiences” (PHENICX) that aims at enhancing the listener experience when enjoying classical music. The presented PHENICX-SMM dataset includes in total more than 50,000 multimedia items (text, image, audio) about composers, performers, pieces, and instruments. In addition to presenting the dataset, we detail one possible use case, that of building a personalized music information system that suggests certain types and quantities of multimedia material, based on personality traits and musical experience of its users. We evaluate the system via a user study and show that people generally prefer the personalized results over non-personalized.","PeriodicalId":356608,"journal":{"name":"2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134551674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}