Dharmaraj R Patil, T. Pattewar, Vipul D. Punjabi, Shailendra M. Pardeshi
INTRODUCTION: The rise of social media platforms has brought a concerning surge in fraudulent user profiles, created to spread false information, perpetrate fraud, or engage in cyberbullying. Detecting these deceptive profiles has become critical to safeguarding the trustworthiness and security of online communities.
OBJECTIVES: This paper focuses on the detection and identification of fake social media profiles.
METHODS: This paper introduces an approach for identifying and classifying fake social media profiles using majority voting. The proposed methodology integrates a range of machine learning algorithms (Decision Trees, XGBoost, Random Forest, Extra Trees, Logistic Regression, AdaBoost, and K-Nearest Neighbors), each tailored to capture distinct facets of user behaviour and profile attributes. The resulting ensemble of classifiers is subjected to a majority voting mechanism to render a final judgment on the legitimacy of a given social media profile.
RESULTS: We conducted thorough experiments on a dataset containing both legitimate and fake social media profiles to evaluate the efficiency of the methodology. The findings substantiate that the majority voting technique surpasses the individual classifiers, attaining an accuracy of 99.12%, a precision of 99.12%, a recall of 99.12%, and an F1-score of 99.12%.
CONCLUSION: The results show that the majority voting method is reliable for detecting and recognising fake social media profiles.
{"title":"Detecting Fake Social Media Profiles Using the Majority Voting Approach","authors":"Dharmaraj R Patil, T. Pattewar, Vipul D. Punjabi, Shailendra M. Pardeshi","doi":"10.4108/eetsis.4264","DOIUrl":"https://doi.org/10.4108/eetsis.4264","url":null,"abstract":"INTRODUCTION: The rise of social media platforms has brought about a concerning surge in the creation of fraudulent user profiles, with intentions ranging from spreading false information and perpetrating fraud to engaging in cyberbullying. The detection of these deceptive profiles has emerged as a critical imperative to safeguard the trustworthiness and security of online communities.OBJECTIVES: This paper focused on the detection and identification of fake social media profiles.METHODS: This paper introduces an innovative approach for discerning and categorizing counterfeit social media profiles by leveraging the majority voting approach. The proposed methodology integrates a range of machine learning algorithms, including Decision Trees, XGBoost, Random Forest, Extra Trees, Logistic Regression, AdaBoost and K-Nearest Neighbors each tailored to capture distinct facets of user behavior and profile attributes. This amalgamation of diverse methods results in an ensemble of classifiers, which are subsequently subjected to a majority voting mechanism to render a conclusive judgment regarding the legitimacy of a given social media profile.RESULTS: We conducted thorough experiments using a dataset containing both legitimate and fake social media profiles to determine the efficiency of our methodology. 
Our findings substantiate that the Majority Voting Technique surpasses individual classifiers, attaining an accuracy rate of 99.12%, a precision rate of 99.12%, a recall rate of 99.12%, and an F1-score of 99.12%.CONCLUSION: The results show that the majority vote method is reliable for detecting and recognising fake social media profiles.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"62 35","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139778718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
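The ensemble's deciding step can be sketched as a column-wise majority vote over the individual classifiers' predictions. The classifier outputs below are hypothetical; the paper trains seven models on real profile features.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label chosen by most classifiers for each sample.

    `predictions` is a list of per-classifier prediction lists; the
    vote is taken column-wise across classifiers.
    """
    n_samples = len(predictions[0])
    final = []
    for i in range(n_samples):
        votes = Counter(clf_preds[i] for clf_preds in predictions)
        final.append(votes.most_common(1)[0][0])
    return final

# Hypothetical outputs of three of the seven classifiers
# (1 = fake profile, 0 = legitimate) on five profiles.
dt_preds  = [1, 0, 1, 1, 0]   # Decision Tree
rf_preds  = [1, 0, 0, 1, 0]   # Random Forest
knn_preds = [0, 0, 1, 1, 1]   # K-Nearest Neighbors

print(majority_vote([dt_preds, rf_preds, knn_preds]))  # [1, 0, 1, 1, 0]
```

With an odd number of voters there are no ties; disagreement on any one profile is overruled by the other classifiers, which is what gives the ensemble its robustness over any single model.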
Focusing on Weibull failure laws, which govern how components stop operating, this work evaluates reliability metrics such as stability and the mean time to system failure (MTSF) of a parallel structure. The behaviour of these metrics is examined for one- or two-decimal random values of component failure rates, operation times, shape parameters, and the total number of components used in the parallel structure. The particular case of the Weibull distribution is also taken up to analyse the variation in the values of reliability and MTSF.
{"title":"Reliability and Mean Time to System Failure of a Parallel System' by Using One or Two Decimal Random Data Points","authors":"H. Kaur, S. K. Sharma","doi":"10.4108/eetsis.5071","DOIUrl":"https://doi.org/10.4108/eetsis.5071","url":null,"abstract":"Focusing on Weibull failure rules, which govern the stopping of components, this work evaluates reliability metrics such as stability and the mean time to system failure (MTSF) of a structure that is parallel. These metrics' behaviour has been seen for one or two decimal random values of component failure rates, operation times, form parameters, and the total quantity of components used in the parallel structure. In order to analyze the variation in the ethics of reliability as well as MTSF, the particular case of the Weibull distribution has also been taken up.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 27","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139792830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The digital image plays an important role in today's digital world, and storing and transmitting digital images efficiently is a challenging job. Many techniques exist for reducing the size of digital images. This paper adopts the following method: the image is separated into high-intensity and low-intensity pixels, and each group is compressed and decompressed separately using three different algorithms to determine how the low-intensity pixels in the picture are best handled. In total, six algorithms are tested on benchmark images, and the best-performing scheme is selected for the final compression. A comparison is made between the results obtained using these techniques and those obtained using JPEG 2000.
{"title":"Removing Coding and Inter Pixel Redundancy in Image Compression","authors":"A. S, K. J.","doi":"10.4108/eetsis.5073","DOIUrl":"https://doi.org/10.4108/eetsis.5073","url":null,"abstract":"The digital image plays an important role in today’s digital world. Storing and transmitting digital images efficiently is a challenging job. There are lots of techniques for reducing the size of digital pictures. This paper adapts the following method. The digital technique is separated into high and low resolutions. The low intensity and high intensity pixels single-handedly is dense and decompressed using three diverse algorithms to hit upon out the occurrence of low down intensity pixels in the picture. Totally six algorithms are experienced by means of benchmark images and the most excellent scheme is selected for concluding compression. A Comparison is made between the results obtained using these techniques and those obtained using JPEG 2000.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139792336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The primary focus of artificial intelligence advancement is machine translation; nonetheless, imprecise translation remains a prevalent issue. The current challenge for artificial intelligence is to effectively execute machine translation over extensive datasets. This research presents a BP (backpropagation) neural method that repeatedly analyses translation data to optimise machine translation. The findings indicate that the BP neural network can enhance the dependability and precision of machine translation, with an accuracy rate over 84%, surpassing the online translation approach. Hence, BP neural algorithms have the potential to fulfil the requirements of machine translation and enhance the precision of human online translation.
{"title":"Research on artificial intelligence machine translation based on BP neural algorithm","authors":"Yan Wang","doi":"10.4108/eetsis.5075","DOIUrl":"https://doi.org/10.4108/eetsis.5075","url":null,"abstract":"The primary focus of artificial intelligence advancement is in machine translation; nonetheless, a prevalent issue persists in the form of imprecise translation. The current challenge faced by artificial intelligence is to effectively executing machine translation from extensive datasets. This research presents a BP neural method that aims to repeatedly analyse translation data and achieve optimisation in machine translation. The findings indicate that the use of BP neural network may enhance the dependability and precision of machine translation, with an accuracy rate over 84%. This performance surpasses that of the online translation approach. Hence, it can be inferred that the use of BP neural algorithms has the potential to fulfil the requirements of machine translation and enhance the precision of online translation conducted by humans.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139791353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data pipelines are crucial for processing and transforming data in various domains, including finance, healthcare, and e-commerce. Ensuring the reliability and accuracy of data pipelines is of utmost importance to maintain data integrity and make informed business decisions. In this paper, we explore the significance of continuous monitoring in data pipelines and its contribution to data observability. This work discusses the challenges associated with monitoring data pipelines in real time, proposes a framework for real-time monitoring, and highlights its benefits in enhancing data observability. The findings emphasize the need for organizations to adopt continuous monitoring practices to ensure data quality, detect anomalies, and improve overall system performance.
{"title":"Real-Time Monitoring of Data Pipelines: Exploring and Experimentally Proving that the Continuous Monitoring in Data Pipelines Reduces Cost and Elevates Quality","authors":"Shammy Narayanan, Maheswari S, Prisha Zephan","doi":"10.4108/eetsis.5065","DOIUrl":"https://doi.org/10.4108/eetsis.5065","url":null,"abstract":"Data pipelines are crucial for processing and transforming data in various domains, including finance, healthcare, and e-commerce. Ensuring the reliability and accuracy of data pipelines is of utmost importance to maintain data integrity and make informed business decisions. In this paper, we explore the significance of continuous monitoring in data pipelines and its contribution to data observability. This work discusses the challenges associated with monitoring data pipelines in real-time, propose a framework for real-time monitoring, and highlight its benefits in enhancing data observability. The findings of this work emphasize the need for organizations to adopt continuous monitoring practices to ensure data quality, detect anomalies, and improve overall system performance.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"268 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139857522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of network technology, online recruitment has become an important channel for job hunting, but job seekers spend a great deal of time searching for suitable positions amid massive amounts of job information, and traditional manual selection of job information cannot help them find suitable positions quickly and accurately. This article applies an ant colony algorithm to the visual analysis and personalized recommendation of job information. Through visual analysis of the massive job information on the network, personalized recommendations are made based on job seekers' profession, skills, behaviour, and other information. A visual analysis and personalized recommendation system for job information is established, and its recommendation accuracy, efficiency, and recall rate are evaluated using recommendation theory, providing a comprehensive assessment of the information visualization analysis and the quality of personalized job recommendation based on the ant colony algorithm. Compared with manual selection of job information, the approach is fast and produces highly matched results.
{"title":"Position information visualization analysis and personalized recommendation based on ant colony","authors":"Ling Xin, Bin Zhou, Pan Liu","doi":"10.4108/eetsis.5061","DOIUrl":"https://doi.org/10.4108/eetsis.5061","url":null,"abstract":"With the rapid development of network technology, online recruitment and job hunting have become an important way of job hunting at present, but job seekers spend a lot of time looking for suitable positions in the face of massive job information. Traditional artificial selection of job information is difficult to solve the problem of job seekers finding suitable positions quickly and accurately. This article is based on ant colony algorithm for visual analysis and personalized recommendation of job information. Through visual analysis of massive job information on the network, personalized recommendations are made based on job seekers' professional, skill, behavior, and other information. A visual analysis and personalized recommendation system for job information is established, and recommendation accuracy, efficiency, and recall rate are evaluated and analyzed using recommendation theory, realize comprehensive evaluation of information visualization analysis and personalized recommendation quality of position information based on ant colony algorithm. 
Compared with artificial selection of position information, it is fast and highly matched.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"58 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139857178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
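An ant colony recommendation loop can be sketched as pheromone reinforcement over candidate jobs for one seeker. The similarity scores and all parameters (`alpha`, `beta`, evaporation rate `rho`) below are illustrative defaults, not the paper's settings.

```python
import random

def ant_colony_recommend(match_scores, n_ants=10, n_iters=30,
                         alpha=1.0, beta=2.0, rho=0.1, seed=42):
    """Rank jobs for one seeker with a basic ant colony loop.

    `match_scores` are heuristic seeker-to-job similarities in (0, 1];
    pheromone concentrates on jobs that ants repeatedly find best.
    """
    rng = random.Random(seed)
    n_jobs = len(match_scores)
    pheromone = [1.0] * n_jobs
    for _ in range(n_iters):
        # Each ant picks a job with probability proportional to
        # pheromone^alpha * heuristic^beta.
        picks = []
        for _ in range(n_ants):
            weights = [(pheromone[j] ** alpha) * (match_scores[j] ** beta)
                       for j in range(n_jobs)]
            picks.append(rng.choices(range(n_jobs), weights)[0])
        # Evaporate everywhere, then reinforce the best pick this round.
        best = max(picks, key=lambda j: match_scores[j])
        pheromone = [(1 - rho) * p for p in pheromone]
        pheromone[best] += match_scores[best]
    return sorted(range(n_jobs), key=lambda j: -pheromone[j])

# Hypothetical seeker-to-job similarity scores for five postings.
scores = [0.2, 0.9, 0.5, 0.7, 0.3]
print(ant_colony_recommend(scores))  # indices ranked by pheromone
```

The evaporation step keeps early random choices from locking in, while reinforcement lets repeated good matches dominate the final ranking, which is why the approach converges faster than exhaustively scoring every posting per seeker.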