Pub Date : 2022-08-24 | DOI: 10.4108/eetsis.v9i5.2611
Aldine do Socorro Corrêa Cruz, E. Alvarez, Luciane Paula Vital
INTRODUCTION: Web applications and information systems predominantly serve as platforms for acquiring data and services over the Internet. Such applications belong to a technological context of intelligent devices that interact with one another, are connected to the network, hardware, and software, and are accessible to the most varied social segments. OBJECTIVES: This article aims to present a digital mechanism, based on product marketing, through which a company in the cosmetics industry acquires personal data; to characterize privacy and data protection in view of the regulatory acts in force in Brazil; and to discuss how such scenarios affect the consumer. METHODS: We performed a bibliographic survey with a qualitative approach to the information collected and used a digital platform for commercial operation as the object of analysis. RESULTS: We confirmed the use of a digital platform, accessible from different electronic devices, to spread commercial content to a considerable volume of users, who then propagated it further. We verified an indirect relationship of supply of goods through the transfer of identification, communication, and location data. We identified users being directed to the Terms of Promotion and User Privacy Policy, as well as different media resources aiding their understanding. CONCLUSION: The customer's vulnerability in consumer relations stands out, something increasingly frequent in digital environments, which enables a directly proportional flow of information between market and consumer. Finally, we observed that the Digital Humanities constitute a broad field of research with an extensive methodological domain, owing to their interdisciplinary character, for the digital study of cultural phenomena, and that they promote critical reflection on the effects that computational methods have on society.
{"title":"Web application for data collection in marketing strategies: an approach from the perspective of Digital Humanities","authors":"Aldine do Socorro Corrêa Cruz, E. Alvarez, Luciane Paula Vital","doi":"10.4108/eetsis.v9i5.2611","DOIUrl":"https://doi.org/10.4108/eetsis.v9i5.2611","url":null,"abstract":"INTRODUCTION: Web applications and information systems are predominantly constituted as platforms for acquiring data and services over the Internet. Such applications integrate a technological context filled with intelligent devices interactive amongst themselves, connected to the network, hardware, and software, and accessible to the most varied social segments. \u0000OBJECTIVES: This article aims to present a digital mechanism based on product marketing for the acquisition of personal data used by a company in the cosmetics industry; to characterize privacy and data protection in view of the regulatory acts prevailing in Brazil; as well as to discuss how such scenarios affect the consumer. \u0000METHODS: We performed a bibliographic survey with a qualitative approach to the information collected, and used a digital platform for commercial operation as object to analysis. \u0000RESULTS: We confirmed the use of a digital platform, accessible by different electronic devices, to spread commercial content reaching a considerable volume of users, which then propagated it. We verified an indirect relationship of supply of goods through the transfer of identification, communication and location data. We identified users being directed to the Terms of Promotion and User Privacy Policy, as well as different media resources aiding their understanding. \u0000CONCLUSION: The customer's vulnerability in consumer relations stands out, something increasingly frequent in digital environments, which enables a directly proportional flow of information between market and consumer. Finally, we observed that Digital Humanities constitute a broad field of research under an extensive methodological domain, due to its interdisciplinary character, for the digital study of cultural phenomena, and promote critical reflection on the effects that computational methods have on society.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"348 1","pages":"e10"},"PeriodicalIF":1.3,"publicationDate":"2022-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79708138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-08-22 | DOI: 10.4108/eetsis.v9i4.2565
M. Grace, M. Sughasiny
INTRODUCTION: Android is the most widely used smartphone platform in the world, holding about 80% of the market share. The Google Play store offers about 3.48 million apps for download. Unfortunately, the growth rate of malicious apps in the Google Play store and third-party app stores has become a big concern, which holds back the development of the Android smartphone ecosystem. OBJECTIVES: According to a recent survey, a new malicious app is introduced every 10 seconds. These malicious apps are built to carry out a variety of threats, such as Trojans, worms, exploits, and viruses. To overcome this issue, a new efficient and effective approach to malware detection for Android applications using the Aquila optimizer and a hybrid LSTM-SVM classifier is designed. METHODS: In this paper, the optimal features are selected from the CSV file based on the prediction accuracy obtained by cross-validation using the Aquila optimizer, and the mean square error (MSE) obtained by cross-validation is taken as the fitness function for the Aquila optimizer to select the optimal features. RESULTS: The extracted optimal features are given to the hybrid LSTM-SVM classifier for training and testing to predict the malware type on the Android system. CONCLUSION: The proposed model is implemented in Python 3.8 and evaluated on performance metrics such as accuracy, precision, execution time, and error. The accuracy achieved by the proposed model is 97%, which is higher than that of existing techniques such as LSTM, SVM, RF, and NB. Thus, the proposed model instantly predicts malware from Android applications.
{"title":"Malware detection for Android application using Aquila optimizer and Hybrid LSTM-SVM classifier","authors":"M. Grace, M. Sughasiny","doi":"10.4108/eetsis.v9i4.2565","DOIUrl":"https://doi.org/10.4108/eetsis.v9i4.2565","url":null,"abstract":"INTRODUCTION: Android OS is the most recent used smartphone platform in the world that occupies about 80% in share market. In google play store, there are 3.48 million apps available for downloading. Unfortunately, the growth rate of malicious apps in google play store and third party app store has become a big concern, which holds back the development of the Android smartphone ecosystem. OBJECTIVES: In recent survey, a new malicious app has been introduced for every 10 seconds. These malicious apps are built to accomplish a variety of threats, such as Trojans, worms, exploits, and viruses. To overcome this issue, a new efficient and effective approach of malware detection for android application using Aquila optimizer and Hybrid LSTM-SVM classifier is designed. METHODS: In this paper, the optimal features are selected from the CSV file based on the prediction accuracy by cross validation using Aquila optimizer and the mean square error (MSE) obtained by the cross validation is consider as the fitness function for the Aquila to select the optimal features. RESULTS: The extracted optimal features are given to the Hybrid LSTM-SVM classifier for training and testing the features to predict the malware type in the android system. CONCLUSION: This proposed model is implemented on python 3.8 for performance metrics such as accuracy, precision, execution time, error, etc. The acquired accuracy for the proposed model is 97%, which is greater compared to the existing techniques such as LSTM, SVM, RF and NB. Thus, the proposed model instantly predicts the malware from the android application.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"32 1","pages":"e7"},"PeriodicalIF":1.3,"publicationDate":"2022-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77529949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-08-22 | DOI: 10.4108/eetsis.v9i4.2581
Bhargavi B., K. Rani, Arunjyoti Neog
A graph acts as a powerful modelling tool to represent complex relationships between objects in the big data era. Given two vertices and a set of vertex and edge constraints, the multidimensional constraint reachable (MCR) paths problem finds a path between the given vertices that matches the user-specified constraints. A significant challenge is to store the graph topology and attribute information while constructing a reachability index. We propose an optimized hashing-based heuristic search technique to address this challenge while solving multidimensional constraint reachability queries. In the proposed technique, we optimize hashing and recommend an efficient clustering technique based on matrix factorization. We further extend the heuristic search technique to improve accuracy. We experimentally show that our proposed techniques are scalable and accurate on real and synthetic datasets. Our extended heuristic search technique achieves average execution times of 0.17 seconds and 2.55 seconds on MCR true queries with vertex and edge constraints for the Robots and Twitter datasets, respectively.
{"title":"Finding Multidimensional Constraint Reachable Paths for Attributed Graphs","authors":"Bhargavi B., K. Rani, Arunjyoti Neog","doi":"10.4108/eetsis.v9i4.2581","DOIUrl":"https://doi.org/10.4108/eetsis.v9i4.2581","url":null,"abstract":"A graph acts as a powerful modelling tool to represent complex relationships between objects in the big data era. Given two vertices, vertex and edge constraints, the multidimensional constraint reachable ( MCR) paths problem finds the path between the given vertices that match the user-specified constraints. A significant challenge is to store the graph topology and attribute information while constructing a reachability index. We propose an optimized hashing-based heuristic search technique to address this challenge while solving the multidimensional constraint reachability queries. In the proposed technique, we optimize hashing and recommend an efficient clustering technique based on matrix factorization. We further extend the heuristic search technique to improve the accuracy. We experimentally prove that our proposed techniques are scalable and accurate on real and synthetic datasets. Our proposed extended heuristic search technique is able to achieve an average execution time of 0.17 seconds and 2.55 seconds on MCR true queries with vertex and edge constraints for Robots and Twitter datasets respectively.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"17 1","pages":"e8"},"PeriodicalIF":1.3,"publicationDate":"2022-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87426977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-08-16 | DOI: 10.4108/eetcasa.v8i1.2546
M. H. Zafar, I. Khan, A. U. Rehman, S. Zafar
In Pakistan, existing blood control systems or blood information management systems are limited in terms of efficient retrieval of donor-to-consumer data. There is no communication network through which surplus blood in one location can be requested by a region facing a shortage, which results in blood wastage. Due to a lack of accessibility and sufficient blood quality testing, blood contaminated with illnesses such as HIV has in some cases been used for transfusion. This study proposes a ledger-based blood management system to address these challenges. The blood trail is represented as a supply-chain management problem that follows the blood. By tracing the blood stream and donations on a single platform for transferring blood and reporting problems among blood groups, the proposed system, built on the Hyperledger Fabric model, adds traceability to the blood transfusion process. It also helps to reduce unjustified blood wastage by providing an integrated system for transferring blood and blood derivatives among blood banks. A web app is also designed for accessing the network for simplicity of usage, and security is enhanced by implementing the blockchain Hyperledger Fabric system through a Key Value System (KVS).
{"title":"A Novel Blockchain-Based Model for Blood Donation System","authors":"M. H. Zafar, I. Khan, A. U. Rehman, S. Zafar","doi":"10.4108/eetcasa.v8i1.2546","DOIUrl":"https://doi.org/10.4108/eetcasa.v8i1.2546","url":null,"abstract":"In Pakistan, existing blood control systems or blood information management systems are limited in terms of efficient data retrieval of donor to consumer. There is no communication network in place for extra blood in one location to be demanded from a region if blood is limited, resulting in blood wastage. Due to a lack of accessibility and sufficient blood quality testing, blood contaminated with illnesses such as HIV has been used for transfusion in some cases. This study proposes a ledger blood management system to address these challenges. The trail has been represented as a supply-chain management problem following the blood. By trailing the blood stream and donation a single platform for transferring blood and the problem results among blood groups, the proposed system, built on the hyperledger fabric model, adds more traceability toward the blood transfusion process. It also helps to reduce unjustified blood wastage by providing an integrated system for transferring lifeblood and the thing extracts among lifeblood banks. A web app is also designed for accessing the network for simplicity of usage and security is enhanced by implementing block chain hyperfebric ledger system through Key Value System (KVS) system.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"43 1","pages":"e8"},"PeriodicalIF":1.3,"publicationDate":"2022-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80942414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-15 | DOI: 10.4108/eetsis.vi.1998
J. Rajeshwari, M. Sughasiny
INTRODUCTION: Skin cancer is an emerging disease all over the world that causes huge mortality. To detect skin cancer at an early stage, computer-aided systems are designed. The most crucial step in them is the feature selection process, because of its great impact on classification performance. Various feature selection algorithms have previously been designed to find the relevant features within a set of attributes. Yet challenges remain in selecting appropriate features from datasets related to disease prediction. OBJECTIVES: To design a hybrid feature selection algorithm for selecting a relevant feature subspace from dermatology datasets. METHODS: The hybrid feature selection algorithm is designed by integrating the Latent Semantic Index (LSI) with correlation-based Feature Selection (CFS). To achieve an optimal selection of the feature subset, beetle swarm optimization is used. RESULTS: Statistical metrics such as accuracy, specificity, recall, F1 score, and MCC are calculated. CONCLUSION: The accuracy and sensitivity values obtained are 95% and 92%.
{"title":"Modified Filter Based Feature Selection Technique for Dermatology Dataset Using Beetle Swarm Optimization","authors":"J. Rajeshwari, M. Sughasiny, Researc H Article","doi":"10.4108/eetsis.vi.1998","DOIUrl":"https://doi.org/10.4108/eetsis.vi.1998","url":null,"abstract":"INTRODUCTION: Skin cancer is an emerging disease all over the world which causes a huge mortality. To detect skin cancer at an early stage, computer aided systems is designed. The most crucial step in it is the feature selection process because of its greater impact on classification performance. Various feature selection algorithms were designed previously to find the relevant features from a set of attributes. Yet, there arise challenges in selecting appropriate features from datasets related to disease prediction.OBJECTIVES: To design a hybrid feature selection algorithm for selecting relevant feature subspace from dermatology datasets.METHODS: The hybrid feature selection algorithm is designed by integrating the Latent Semantic Index (LSI) along with correlation-based Feature Selection (CFS). To achieve an optimal selection of feature subset, beetle swarm optimization is used.RESULTS: Statistical metrics such as accuracy, specificity, recall, F1 score and MCC are calculated.CONCLUSION: The accuracy and sensitivity value obtained is 95% and 92%.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"106 1","pages":"e1"},"PeriodicalIF":1.3,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79860828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-15 | DOI: 10.4108/eetcasa.v8i24.1996
T. Woolman, J.L. Pickard
INTRODUCTION: A solution to subject-independent human activity recognition (HAR) prediction through machine learning classification algorithms, using statistical equivalency for comparative analysis between independent groups with no subject-specific training dependencies. OBJECTIVES: To indicate that the multinomial predictive classification model trained and optimized on the one-subject control group is at least partially extensible to multiple independent experiment groups for at least one activity class. METHODS: A gradient boosted machine multinomial classification algorithm is trained on a single individual, with the classifier trained on all activity classes as a multinomial classification problem. RESULTS: The Levene-Wellek-Welch (LWW) statistic was calculated as 0.021, with a critical value for LWW of 0.026, using an alpha of 0.05. CONCLUSION: Confirmed falsifiability that incorporates reproducible methods into the quasi-experiment design applied to the field of machine learning for human activity recognition.
{"title":"Gradient Descent Machine Learning with Equivalency Testing for Non-Subject Dependent Applications in Human Activity Recognition","authors":"T. Woolman, J.L. Pickard","doi":"10.4108/eetcasa.v8i24.1996","DOIUrl":"https://doi.org/10.4108/eetcasa.v8i24.1996","url":null,"abstract":"INTRODUCTION: A solution to subject-independent HAR prediction through machine learning classification algorithms using statistical equivalency for comparative analysis between independent groups with non-subject training dependencies.OBJECTIVES: To indicate that the multinomial predictive classification model that was trained and optimized on the one-subject control group is at least partially extensible to multiple independent experiment groups for at least one activity class.METHODS: Gradient boosted machine multinomial classification algorithm is trained on a single individual with the classifier trained on all activity classes as a multinomial classification problem.RESULTS: Levene-Wellek-Welch (LWW) Statistic calculated as 0.021, with a Critical Value for LWW of 0.026, using an alpha of 0.05.CONCLUSION: Confirmed falsifiability that incorporates reproducible methods into the quasi-experiment design applied to the field of machine learning for human activity recognition.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"49 1","pages":"e7"},"PeriodicalIF":1.3,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76983584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-07-13 | DOI: 10.4108/eetsis.vi.1947
N. AshaK., R. Rajkumar
This work introduces a novel, machine learning based approach to the movie recommender system (RS), built on clustering. Conventional clustering approaches suffer from clustering error, which degrades performance; to overcome this issue, we developed an expectation-maximization-based clustering approach. However, with imbalanced data the performance of the RS is degraded by multicollinearity issues, so we incorporate a PCA (Principal Component Analysis) based dimensionality reduction model to improve performance. Finally, to further reduce the error, a Genetic Algorithm (GA) is included to achieve the optimal clusters and assign suitable recommendations. The experimental study is carried out on publicly available movie datasets, and the performance of the proposed approach is measured in terms of MSE (Mean Squared Error) and Root Mean Squared Error (RMSE). The comparative study shows that the proposed approach achieves better performance than state-of-the-art movie recommendation systems.
{"title":"EM_GA-RS: Expectation Maximization and GA-based Movie Recommender System","authors":"N. AshaK., R. Rajkumar","doi":"10.4108/eetsis.vi.1947","DOIUrl":"https://doi.org/10.4108/eetsis.vi.1947","url":null,"abstract":"This work introduced a novel approach for the movie recommender system using a machine learning approach. This work introduces a clustering-based approach to introduce a recommender system (RS). The conventional clustering approaches suffer from the clustering error issue, which leads to degraded performance. Hence, to overcome this issue, we developed an expectation- maximization-based clustering approach. However, due to imbalanced data, the performance of RS is degraded due to multicollinearity issues. Hence, we Incorporate PCA (Principal Component Analysis) based dimensionality reduction model to improve the performance. Finally, we aim to reduce the error; thus, a Genetic Algorithm (GA) is included to achieve the optimal clusters and assign the suitable recommendation. The experimental study is carried out on publically available movie datasets performance of the proposed approach is measured in terms of MSE (Mean Squared Error) and Root Mean Squared Error (RMSE). The comparative study shows that the proposed approach achieves better performance when compared with a state-of-art movie recommendation system.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"31 1","pages":"e2"},"PeriodicalIF":1.3,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75338921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-06-28 | DOI: 10.4108/eetsis.vi.489
Shilpi Harnal, Gaurav Sharma, Swati Malik, Gagandeep Kaur, Savita Khurana, Prabhjot Kaur, Sarita Simaiya, Deepak Bagga
INTRODUCTION: The continued growth of urbanization presents new challenges. This, in turn, will lead to pressure for sustainable environment initiatives, with demands for more and better infrastructure in the diminishing space available and an improved quality of life for city dwellers at a more affordable cost. Smart Cities are part of the solution to the growing challenges of urbanization. The adoption of new technologies like artificial intelligence (AI) is transforming cities, making them smarter and faster and revealing opportunities for improvement. OBJECTIVES: This study conducts a detailed bibliometric survey to investigate the applications and trends of Artificial Intelligence research in different areas of smart cities, emphasizing the potential effects and challenges of AI adoption in smart cities over the past 30.5 years. METHODS: For this study, the Scopus database was used to collect a total of 1925 documents published between 1991 and 2021 (July). The bibliometric analysis covers document types, subject categorization, and document growth, as well as top contributing sources, countries, authors, and funding sponsors. It also analyses keywords, abstracts, titles, and the characteristics of the most cited documents. RESULTS: The findings of this study reflect not only the significance of AI technology for various applications within numerous sectors of the smart city but also the major obstacles in AI research for those sectors. CONCLUSION: The research demonstrates that AI has the ability to construct today's and tomorrow's smart cities, but that each region's potential, conditions, and circumstances must be addressed in order to achieve smooth smart city development.
{"title":"Bibliometric Mapping of Trends, Applications and Challenges of Artificial Intelligence in Smart Cities","authors":"Shilpi Harnal, Gaurav Sharma, Swati Malik, Gagandeep Kaur, Savita Khurana, Prabhjot Kaur, Sarita Simaiya, Deepak Bagga","doi":"10.4108/eetsis.vi.489","DOIUrl":"https://doi.org/10.4108/eetsis.vi.489","url":null,"abstract":"INTRODUCTION: The continued growth of urbanization presents new challenges. This, in turn, will lead to pressure for sustainable environment initiatives, with demands for more and better infrastructure in the diminishing space available and improved quality of life for city dwellers at a more affordable cost. Smart Cities are part of the solution to the growing challenges of urbanization. The adoption of new technologies like artificial intelligence (AI) is transforming cities, making them smarter, faster, and predicting opportunities for improvement. \u0000OBJECTIVES: This study is conducting a detailed bibliometric survey to investigate the applications and trends of Artificial Intelligence research for different areas of smart cities and emphasizing the potential effects and challenges of AI adaptation in smart cities over the past 30.5 years. \u0000METHODS: For this study, the Scopus database was used to collect a total of 1925 documents published between 1991-2021 (July). The bibliometric analysis includes document types, subject categorization, document growth, as well as top contributing sources, countries, authors, and funding sponsors. It also analyses keywords, abstracts, titles, and characteristics of most cited documents. \u0000RESULTS: The analyzed findings of this research study reflect not only the significance of AI technology for various applications within numerous sectors in the smart city but also major obstacles in AI research for various sectors of smart cities. \u0000CONCLUSION: The research demonstrates that AI has the ability to construct today’s and tomorrow’s smart cities, but that each region’s potentials, conditions, and circumstances must be addressed in order to achieve a smooth internet city development.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"47 1","pages":"8"},"PeriodicalIF":1.3,"publicationDate":"2022-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79324684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-06-09 | DOI: 10.4108/eetsis.vi.382
S. Siddiqui, Neda Fatima, Anwar Ahmad
COVID-19 has posed an extraordinary challenge to the entire world. As the number of COVID-19 cases continues to climb around the world, medical experts face an unprecedented challenge in correctly diagnosing and predicting the disease. The present research attempts to develop a new and effective strategy for classifying chest X-rays and CT scans in order to distinguish COVID-19 from other diseases. Transfer learning was used to train various models on chest X-rays and CT scans, including InceptionV3, Xception, InceptionResNetV2, DenseNet121, and ResNet50. The models are then combined using an ensemble technique to improve prediction accuracy. The proposed ensemble approach is more effective in classifying X-rays and CT scans and in forecasting COVID-19.
{"title":"Chest X-ray and CT Scan Classification using Ensemble Learning through Transfer Learning","authors":"S. Siddiqui, Neda Fatima, Anwar Ahmad","doi":"10.4108/eetsis.vi.382","DOIUrl":"https://doi.org/10.4108/eetsis.vi.382","url":null,"abstract":"COVID-19 has posed an extraordinary challenge to the entire world. As the number of COVID-19 cases continues to climb around the world, medical experts are facing an unprecedented challenge in correctly diagnosing and predicting the disease. The present research attempts to develop a new and effective strategy for classifying chest X-rays and CT Scans in order to distinguish COVID-19 from other diseases. Transfer learning was used to train various models for chest X-rays and CT Scan, including Inceptionv3, Xception, InceptionResNetv2, DenseNet121, and Resnet50. The models are then integrated using an ensemble technique to improve forecast accuracy. The proposed ensemble approach is more effective in classifying X-ray and CT Scan and forecasting COVID-19.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"9 1","pages":"e8"},"PeriodicalIF":1.3,"publicationDate":"2022-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88493352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-05-31 | DOI: 10.4108/eai.31-5-2022.174087
Hemant Rathore, Sujay C Sharma, S. Sahay, Mohit Sewak
Android devices like smartphones and tablets have become immensely popular and are an integral part of our daily lives. However, this popularity has also attracted malware developers to design Android malware, which has grown aggressively in the last few years. Research shows that machine learning, ensemble, and deep learning models can successfully be used to detect Android malware. However, the robustness of these models against well-crafted adversarial samples is not well investigated. Therefore, we first stepped into the adversaries' shoes and proposed the ACE attack, which adds limited perturbations to malicious applications such that they are forcefully misclassified as benign and remain undetected by different malware detection models. The ACE agent is designed on an actor-critic architecture that uses reinforcement learning to add perturbations (at most ten) while maintaining the structural and functional integrity of the adversarial malicious applications. The proposed attack is validated against twenty-two different malware detection models based on two feature sets and eleven different classification algorithms. The ACE attack accomplished an average fooling rate (with a maximum of ten perturbations) of 46.63% across eleven permission-based malware detection models and 95.31% across eleven intent-based detection models. The attack forced a massive number of misclassifications that led to average accuracy drops of 18.07% and 36.62% in the above permission-based and intent-based malware detection models. Later, we also design a defense mechanism using the adversarial retraining strategy, which uses adversarial malware samples with correct class labels to retrain the models. The defense mechanism improves the average accuracy by 24.88% and 76.51% for the eleven permission-based and eleven intent-based malware detection models. In conclusion, we found that malware detection models based on machine learning, ensemble, and deep learning perform poorly against adversarial samples. Thus malware detection models should be investigated for vulnerabilities and mitigated to enhance their overall forensic knowledge and adversarial robustness.
{"title":"Are Malware Detection Classifiers Adversarially Vulnerable to Actor-Critic based Evasion Attacks?","authors":"Hemant Rathore, Sujay C Sharma, S. Sahay, Mohit Sewak","doi":"10.4108/eai.31-5-2022.174087","DOIUrl":"https://doi.org/10.4108/eai.31-5-2022.174087","url":null,"abstract":"Android devices like smartphones and tablets have become immensely popular and are an integral part of our daily lives. However, it has also attracted malware developers to design android malware which have grown aggressively in the last few years. Research shows that machine learning, ensemble, and deep learning models can successfully be used to detect android malware. However, the robustness of these models against well-crafted adversarial samples is not well investigated. Therefore, we first stepped into the adversaries’ shoes and proposed the ACE attack that adds limited perturbations in malicious applications such that they are forcefully misclassified as benign and remain undetected by di ff erent malware detection models. The ACE agent is designed based on an actor-critic architecture that uses reinforcement learning to add perturbations (maximum ten) while maintaining the structural and functional integrity of the adversarial malicious applications. The proposed attack is validated against twenty-two di ff erent malware detection models based on two feature sets and eleven di ff erent classification algorithms. The ACE attack accomplished an average fooling rate (with maximum of ten perturbations) of 46 . 63% across eleven permission based malware detection models and 95 . 31% across eleven intent based detection models. The attack forced a massive number of misclassifications that led to an average accuracy drop of 18 . 07% and 36 . 62% in the above permission and intent based malware detection models. Later we also design a defense mechanism using the adversarial retraining strategy, which uses adversarial malware samples with correct class labels to retrain the models. The defense mechanism improves the average accuracy by 24 . 88% and 76 . 51% for the eleven permission and eleven intent based malware detection models. In conclusion, we found that malware detection models based on machine learning, ensemble, and deep learning perform poorly against adversarial samples. Thus malware detection models should be investigated for vulnerabilities and mitigated to enhance their overall forensic knowledge and adversarial robustness.","PeriodicalId":43034,"journal":{"name":"EAI Endorsed Transactions on Scalable Information Systems","volume":"183 1","pages":"e6"},"PeriodicalIF":1.3,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80405278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}