Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398661
G. D. Kaziyeva, S. Sagnayeva, G. Sembina, A. Ismailova
The development of oil and gas fields in the Northern Caspian has caused an intensive anthropogenic load and created the need for data warehouses of hydrological, hydrochemical, and biota data, supporting research that retrospectively assesses the degree to which anthropogenic impact affects biota of different kinds. The main goal of this article is to apply algorithmic tools for the analytical processing and interpretation of environmental observations in the waters of the Northern part of the Caspian Sea, in order to predict the occurrence and development of environmental changes and to organize centralized storage of heterogeneous (hydrochemical, hydrobiological, etc.) data. Using the selected software platform (TOFI) to obtain, exchange, and process data makes it possible to provide slices of multidimensional cubes of biomonitoring data on the ecosystem of the Northern part of the Caspian Sea to decision-makers and the public.
Title: "Software tools for environmental monitoring of the Northern part of the Caspian sea" (2018 4th International Conference on Computer and Technology Applications (ICCTA))
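The last sentence of the abstract describes serving slices of multidimensional cubes of biomonitoring data. A minimal sketch of what such a cube and a slice over it look like, using plain Python dictionaries and invented station/parameter names rather than the actual TOFI platform or its schema:

```python
from collections import defaultdict

# Hypothetical biomonitoring records: (station, parameter, year, value).
# The field names and values are illustrative, not from the TOFI platform.
records = [
    ("St-1", "salinity",  2016, 11.2),
    ("St-1", "salinity",  2017, 11.8),
    ("St-1", "phosphate", 2016, 0.05),
    ("St-2", "salinity",  2016, 12.4),
    ("St-2", "phosphate", 2017, 0.07),
]

def build_cube(rows):
    """Aggregate values along the (station, parameter, year) dimensions."""
    cube = defaultdict(list)
    for station, parameter, year, value in rows:
        cube[(station, parameter, year)].append(value)
    return cube

def slice_cube(cube, station=None, parameter=None, year=None):
    """Return the mean value of each cell matching the fixed dimensions
    (an OLAP-style 'slice' of the cube)."""
    out = {}
    for (s, p, y), values in cube.items():
        if station is not None and s != station:
            continue
        if parameter is not None and p != parameter:
            continue
        if year is not None and y != year:
            continue
        out[(s, p, y)] = sum(values) / len(values)
    return out

cube = build_cube(records)
# All salinity cells, across stations and years:
print(slice_cube(cube, parameter="salinity"))
```

Fixing one dimension (here `parameter`) while leaving the others free is exactly the kind of section a decision-maker would request from the warehouse.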
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398672
Jiaxiang Zhao, Jun Li, Yingdong Ma
The problem of pedestrian detection is receiving increasing attention due to the rapid development of artificial intelligence technologies. In this paper, we propose a method that combines a deep neural network with a traditional classifier for fast and robust pedestrian detection. Specifically, region proposal generation and feature extraction are implemented using a modified RPN-VGG method, designed to improve performance on small-object detection. A new classifier, Fast Boosted Tree, is trained on RPN outputs to obtain the final results. Experiments on the Caltech pedestrian dataset demonstrate that the proposed method achieves an 8.77% miss rate with the best known efficiency among state-of-the-art CNN-based detectors. When algorithm efficiency is not a concern, detection quality can be further improved to an 8.25% miss rate by adding global normalization and optical flow features.
Title: "RPN+ fast boosted tree: Combining deep neural network with traditional classifier for pedestrian detection" (ICCTA 2018)
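The second stage above trains a boosted classifier on RPN outputs. As a rough illustration of boosting only, here is a tiny AdaBoost over decision stumps on one-dimensional toy scores; the data, feature, and number of rounds are invented, and this is not the paper's Fast Boosted Tree:

```python
import math

# Toy 1-D "proposal confidence" feature with labels (+1 pedestrian, -1 background).
X = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [-1, -1, -1, 1, -1, 1, 1, 1]

def stump_predict(threshold, polarity, x):
    return polarity if x >= threshold else -polarity

def train_adaboost(X, y, rounds=3):
    n = len(X)
    w = [1.0 / n] * n          # sample weights
    ensemble = []              # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for threshold in X:                  # exhaustively pick the best stump
            for polarity in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if stump_predict(threshold, polarity, xi) != yi)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # stump weight
        ensemble.append((alpha, threshold, polarity))
        # Re-weight samples: boost the misclassified ones.
        w = [wi * math.exp(-alpha * yi * stump_predict(threshold, polarity, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def classify(ensemble, x):
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

ensemble = train_adaboost(X, y)
print([classify(ensemble, x) for x in X])
```

In the paper's pipeline, the stumps would split on RPN feature values rather than a single scalar, but the reweighting loop is the core of any boosted-tree scheme.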
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398679
H. Yaşar, Uğurhan Kutbay, F. Hardalaç
Breast cancer is the most common type of cancer in women, occurring in one of every eight women in the world. Early diagnosis of the disease is of great importance for reducing tissue loss and disease-related deaths. For this reason, many studies in the literature address tasks such as automatic breast tissue density classification, automatic normal-abnormal tissue classification, and automatic benign-malignant tissue classification. In this study, a new combined system based on artificial neural networks (ANN) and the complex wavelet transform is proposed to classify tissue density from mammography images. The study, using 322 images from the MIAS database, achieved classification success rates ranging from 80% to 94.79% for the different breast tissue density classes (fatty, fatty-glandular, dense-glandular).
Title: "A new combined system using ANN and complex wavelet transform for tissue density classification in mammography images" (ICCTA 2018)
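To illustrate the wavelet half of such a pipeline: a one-level 2-D Haar transform (a simpler stand-in for the complex wavelet transform used in the paper) and subband energy statistics of the kind that could feed an ANN. The toy image is illustrative only:

```python
def haar2d(image):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
    rows, cols = len(image), len(image[0])
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, rows, 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for j in range(0, cols, 2):
            a, b = image[i][j], image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            ll_row.append((a + b + c + d) / 4)  # approximation
            lh_row.append((a + b - c - d) / 4)  # horizontal detail
            hl_row.append((a - b + c - d) / 4)  # vertical detail
            hh_row.append((a - b - c + d) / 4)  # diagonal detail
        LL.append(ll_row)
        LH.append(lh_row)
        HL.append(hl_row)
        HH.append(hh_row)
    return LL, LH, HL, HH

def subband_features(image):
    """Mean absolute value of each subband: a 4-value feature vector
    of the kind an ANN classifier could take as input."""
    feats = []
    for band in haar2d(image):
        values = [abs(v) for row in band for v in row]
        feats.append(sum(values) / len(values))
    return feats

# Toy "image" with strong vertical edges, so the HL band is non-zero.
image = [[10, 200, 10, 200],
         [10, 200, 10, 200],
         [10, 200, 10, 200],
         [10, 200, 10, 200]]
print(subband_features(image))
```

Dense tissue produces different detail-band energies than fatty tissue, which is why such statistics are discriminative inputs for density classification.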
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398666
Yasser Saissi, A. Zellou, A. Idri
The deep web is a huge part of the web that is only accessible by querying its access forms. To query these forms, we need to know the possible values of each form field; however, some fields have an undefined set of values, which makes querying them automatically difficult or impossible. In this paper, we propose a new approach to identify the set of possible values for such fields. We first query these fields with values associated with the domain of the deep web source. We then use the K-medoids clustering approach, based on the semantic similarity between the generated results, to classify those results into K clusters. The elements of the generated clusters are used by our approach to define the set of possible values of the analyzed fields. With this approach, we can apply efficient queries to all the fields of deep web access forms and access the deep web's information.
Title: "A new clustering approach to identify the values to query the deep web access forms" (ICCTA 2018)
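The clustering step can be sketched as follows: a bare-bones K-medoids over candidate field values, with a character-trigram Jaccard distance standing in for the semantic similarity measure used in the paper. The values are invented:

```python
def trigrams(s):
    """Character trigrams of a string (the whole string if shorter)."""
    return {s[i:i + 3] for i in range(max(len(s) - 2, 1))}

def distance(a, b):
    """1 - Jaccard similarity over trigram sets (a crude stand-in
    for semantic similarity)."""
    ga, gb = trigrams(a), trigrams(b)
    return 1.0 - len(ga & gb) / len(ga | gb)

def k_medoids(items, k, iterations=10):
    medoids = list(items[:k])                    # naive initialisation
    clusters = {}
    for _ in range(iterations):
        # Assign every value to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for item in items:
            nearest = min(medoids, key=lambda m: distance(item, m))
            clusters[nearest].append(item)
        # Re-pick each cluster's medoid as its most central member.
        new_medoids = []
        for m, members in clusters.items():
            if not members:
                new_medoids.append(m)
                continue
            new_medoids.append(min(
                members,
                key=lambda c: sum(distance(c, o) for o in members)))
        if set(new_medoids) == set(medoids):     # converged
            break
        medoids = new_medoids
    return clusters

# Hypothetical responses harvested from a "car model" form field.
values = ["toyota corolla", "toyota camry", "toyota yaris",
          "honda civic", "honda accord"]
print(k_medoids(values, k=2))
```

Each resulting cluster gathers mutually similar responses, and its members become candidate values for re-querying the form field.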
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398660
Tao Sun, Haifen Ren, Linjing Zhang
As a formal modelling technique, Colored Petri Nets (CPN) are often used to model parallel software systems, where they have outstanding advantages. Verifying software with parallel behavior is difficult: the state spaces of such systems are often only partial, or explode, because of limited computer memory and model complexity, and traditional verification methods do not cope efficiently with partial or exploding state spaces. In this paper, a novel CPN-based method for verifying software systems is proposed. First, linear temporal logic (LTL) is used to describe the property of the system, and the negation of the property formula is verified. Second, the states in the dynamically generated path are labeled according to the types of the LTL formulas. Finally, a heuristic search finds the “good” path among the existing paths according to three metrics: Complexity (Com), Number (Num), and Distance (Dis). A CPN model is given to demonstrate the validity and correctness of the algorithm.
Title: "A method of verification of software based on CPN" (ICCTA 2018)
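The final heuristic ranking over the metrics Com, Num, and Dis might look like the following sketch, where the weights and the per-path metric values are purely hypothetical (the paper defines these metrics on the CPN state space):

```python
# Candidate paths through the state space, each annotated with the
# three metrics named in the abstract. Values are invented.
paths = {
    "p1": {"Com": 3, "Num": 5, "Dis": 2},
    "p2": {"Com": 1, "Num": 8, "Dis": 4},
    "p3": {"Com": 2, "Num": 3, "Dis": 1},
}

def score(metrics, weights=(1.0, 0.5, 2.0)):
    """Weighted sum of the three metrics; lower is better, i.e. cheap,
    short paths that stay close to the target states. The weights are
    placeholders, not values from the paper."""
    w_com, w_num, w_dis = weights
    return (w_com * metrics["Com"] + w_num * metrics["Num"]
            + w_dis * metrics["Dis"])

best = min(paths, key=lambda name: score(paths[name]))
print(best)
```

The point of the heuristic is exactly this: rather than exploring the whole (possibly exploding) state space, the search commits to the best-scoring path first.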
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398684
S. Belginova, Indira Uvaliyeva, Aigerim Ismukhamedova
This paper describes the features of building medical knowledge bases and shows an example of constructing rules and implementing logical inference in such systems. The general scheme for constructing medical expert systems for diagnosing anemia is presented, along with the stages of building a medical knowledge base and examples of writing logical rules.
Title: "Decision support system for diagnosing anemia" (ICCTA 2018)
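As an illustration of the kind of production rules such a knowledge base contains, here is a minimal first-match rule list. The thresholds and conclusions are simplified examples for illustration, not the paper's actual medical knowledge base:

```python
# Each rule pairs a condition over lab findings with a conclusion.
# hb = hemoglobin (g/dL), mcv = mean corpuscular volume (fL).
RULES = [
    (lambda f: f["hb"] < 12 and f["mcv"] < 80,  "suspected iron-deficiency anemia"),
    (lambda f: f["hb"] < 12 and f["mcv"] > 100, "suspected B12/folate-deficiency anemia"),
    (lambda f: f["hb"] < 12,                    "suspected normocytic anemia"),
    (lambda f: True,                            "no anemia indicated"),
]

def diagnose(findings):
    """Forward-chain over an ordered rule list: the first rule whose
    condition holds fires and yields the conclusion."""
    for condition, conclusion in RULES:
        if condition(findings):
            return conclusion

print(diagnose({"hb": 10.5, "mcv": 72}))   # low hemoglobin, low MCV
```

Real systems of this kind keep the rules as data (so clinicians can extend the knowledge base) rather than as hard-coded lambdas, but the inference loop is the same.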
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398680
H. Yaşar, S. Serhatlioglu, Uğurhan Kutbay, F. Hardalaç
Cardiovascular diseases are the group of diseases that cause the most deaths in the world, and there is a strong association between coronary artery disease and the coronary artery calcium score. The coronary artery calcium score and its class are therefore important for determining the risk of heart attack. In this study, a new automated assessment system is proposed to estimate the Agatston coronary artery calcium score class without the need for measurement. The estimation was performed in two settings, with the Agatston score divided into three classes and into five classes, using an ANN with body mass index, age, and gender as inputs. The data were collected from 260 patients (105 female, 155 male) with ages ranging between 29 and 77 years (an average of 45.56 years). The study achieved a success rate of 67.69% in correctly estimating the class of the Agatston coronary artery calcium score when five classes were used, and a success rate of 91.15% in the estimation based on three classes.
Title: "A novel approach for estimation of coronary artery calcium score class using ANN and body mass index, age and gender data" (ICCTA 2018)
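A sketch of the network's shape only: three inputs (BMI, age, gender) mapped through one hidden layer to softmax scores over three calcium-score classes. The weights, input scaling, and layer sizes here are arbitrary placeholders, since the paper's trained network is not reproduced here:

```python
import math

# Placeholder parameters: 3 inputs -> 4 hidden units -> 3 classes.
W1 = [[0.4, -0.2, 0.1], [0.3, 0.5, -0.4], [-0.1, 0.2, 0.6], [0.2, -0.3, 0.2]]
B1 = [0.1, -0.1, 0.0, 0.05]
W2 = [[0.3, -0.2, 0.1, 0.4], [-0.5, 0.3, 0.2, -0.1], [0.2, 0.1, -0.3, 0.2]]
B2 = [0.0, 0.1, -0.1]

def forward(bmi, age, gender):
    """One forward pass: returns class probabilities for 3 score classes."""
    # Normalise inputs to comparable ranges (illustrative scaling).
    x = [bmi / 40.0, age / 80.0, float(gender)]   # gender: 0 female, 1 male
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, B1)]
    logits = [sum(w * hi for w, hi in zip(row, hidden)) + b
              for row, b in zip(W2, B2)]
    exps = [math.exp(z) for z in logits]          # softmax over the classes
    total = sum(exps)
    return [e / total for e in exps]

probs = forward(bmi=27.5, age=52, gender=1)
print(probs)
```

In the study, such a network would be trained on the 260-patient dataset; with trained weights, the arg-max of the returned probabilities is the estimated Agatston class.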
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398658
Fawwaz Yousef Alnawaj'ha, Mohammed AbuTaha
Responsive web design is an approach that lets a particular web page display properly on different devices without resizing, panning, or scrolling. It emerged in 2013 alongside the use of smartphones for accessing the internet, and it became interesting to web developers as tablets, phones, and smartwatches were increasingly used for internet access. This paper sheds light on this important approach and its latest updates, and then studies the commitment of web developers in Palestine to it: the top 20 sites in Palestine were studied, and we found that 40% of them are not responsive. A website was then created and tested, and it complies with the principles of responsive web design.
Title: "Responsive web design commitment by the web developers in Palestine" (ICCTA 2018)
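One simple first-pass heuristic when auditing a site for responsiveness is checking whether the page declares a viewport meta tag. This is only one signal, not the paper's full methodology, but it can be sketched with the standard library:

```python
from html.parser import HTMLParser

class ViewportChecker(HTMLParser):
    """Detects <meta name="viewport" ...> while parsing a page."""

    def __init__(self):
        super().__init__()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name") == "viewport":
            self.has_viewport = True

def looks_responsive(html):
    """Crude check: responsive pages almost always declare a viewport."""
    checker = ViewportChecker()
    checker.feed(html)
    return checker.has_viewport

page = ('<html><head>'
        '<meta name="viewport" content="width=device-width, initial-scale=1">'
        '</head><body></body></html>')
print(looks_responsive(page))
```

A fuller audit would also look for CSS media queries and fluid layouts, and render the page at several viewport widths, which is closer to what a manual study of the top 20 sites involves.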
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398678
Ayesha Saadia, A. Rashdi
Noise removal from images is still a very active area of image processing and a vital preprocessing step in many applications. The objective of image denoising is to estimate a clean image from a noisy observation; in this context, noise is a disturbance in the observed signal that leads to an inaccurate measurement of the observed quantity and thus to a loss of information. In this paper, a denoising algorithm is proposed that works blindly, i.e., without any prior information about the noise variance. The input image is divided into 3×3 patches, and similar patches are searched for in the neighborhood. The original value of a pixel is estimated through the endorsement of neighborhood pixels, where endorsement is decided according to the degree of similarity between the pixel under consideration and the pixels around it. The significance of the proposed technique is verified by comparing it with other state-of-the-art techniques, both qualitatively and quantitatively.
Title: "Image denoising method by endorsement of neighborhood pixels" (ICCTA 2018)
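The patch-based estimation described above can be sketched as follows. The Gaussian weighting of 3×3 patch distance used here follows the general non-local-means pattern; the paper's "endorsement" rule is its own weighting scheme, and the image is a toy array:

```python
import math

def patch(img, i, j):
    """3x3 patch around (i, j), with borders clamped."""
    h, w = len(img), len(img[0])
    return [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]

def denoise(img, search=2, h2=100.0):
    """Re-estimate each pixel as a similarity-weighted average of
    neighbours whose 3x3 patches resemble its own."""
    height, width = len(img), len(img[0])
    out = [[0.0] * width for _ in range(height)]
    for i in range(height):
        for j in range(width):
            p = patch(img, i, j)
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = i + di, j + dj
                    if not (0 <= ni < height and 0 <= nj < width):
                        continue
                    q = patch(img, ni, nj)
                    d2 = sum((a - b) ** 2 for a, b in zip(p, q)) / 9.0
                    weight = math.exp(-d2 / h2)   # similar patch => more say
                    num += weight * img[ni][nj]
                    den += weight
            out[i][j] = num / den
    return out

noisy = [[100, 100, 100, 100],
         [100, 140, 100, 100],
         [100, 100, 100, 100],
         [100, 100, 100, 100]]
print(denoise(noisy)[1][1])
```

The isolated spike at (1, 1) is pulled back toward its flat surroundings, while uniform regions are left essentially unchanged; note that no noise-variance estimate is needed, matching the blind setting of the paper.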
Pub Date: 2018-05-01 | DOI: 10.1109/CATA.2018.8398676
Safia Bekhouche, Yamina Mohamed Ben Ali
A protein is an alphabetical sequence of amino acids; in this form, it cannot be processed by data mining and machine learning algorithms, which require numerical data. Feature extraction strategies transform the alphabetical sequence into a feature vector representing the properties of the sequence, but each method produces an attribute vector whose size and properties differ from the others. Our work compares the three most used feature extraction strategies, AAC, PseAAC, and DC, using five machine learning algorithms from the Weka platform, evaluated on accuracy, F-measure, MCC, and error rate. This comparison helps decide which feature extraction strategy is best suited when applying computationally expensive machine learning algorithms to protein sequence data. Experiments suggested that the AAC, PseAAC, and DC methods would be optimal for GPCR classification at the sub-subfamily level using the MLP algorithm, while the other classifiers would be suitable only when not using a huge subset of the data with a large number of classes. Hence this study concludes that better performance is reached when a good classifier is established.
Title: "Comparative analysis on features extraction strategies for GPCR classification" (ICCTA 2018)
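Of the three strategies compared, AAC is the simplest to show concretely: a protein string becomes a 20-dimensional vector of residue frequencies (PseAAC and DC additionally encode sequence-order information). A direct sketch:

```python
# The 20 standard amino acids, in alphabetical one-letter order.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence):
    """Amino acid composition: the frequency of each of the 20 residues,
    yielding a fixed-length numeric vector for any protein string."""
    sequence = sequence.upper()
    n = len(sequence)
    return [sequence.count(aa) / n for aa in AMINO_ACIDS]

# Toy sequence (not a real GPCR).
features = aac("MKVLAAGLLK")
print(features)
```

The resulting 20-value vector is what a Weka classifier such as MLP would consume; DC would instead count the 400 amino-acid pairs, trading vector size for order information.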