Research on Multirelational Entity Modeling Based on Knowledge Graph Representation Learning
Tongke Fan
Journal: Recent Advances in Computer Science and Communications
Pub Date: 2023-06-12, DOI: 10.2174/2666255816666230612151713

A central research concern is how representations of entities and relationships can fully integrate the structural information of a knowledge graph so as to improve entity modeling under complex relationships. World knowledge can be organized into a structured knowledge network by mining entity and relationship information from real texts, and to apply the rich structured information in a knowledge graph to downstream applications, representation learning over the graph is particularly important. As knowledge graphs grow in scale and draw on more diverse sources, the number and complexity of relation types increase, the frequency of any single relation among all triples falls, and relational reasoning becomes harder. This study therefore aimed to improve the accuracy of relational reasoning and entity reasoning in complex relational models. For multi-relational knowledge graphs, CTransR, which builds on the TransE and TransR models, adopts the idea of piecewise linear regression: it clusters the latent relationships between head and tail entities and learns a separate vector representation for each cluster, so that the different clusters representing the same relation still retain a degree of similarity. CTransR was evaluated through knowledge reasoning experiments on open datasets, covering link prediction, triple classification, and text relation extraction, and achieved good performance. The results show that CTransR yields significant improvements and is effective for complex relationships.
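The clustering step the abstract describes can be illustrated with a minimal sketch. This is our reconstruction, not the authors' code: `transr_score` is the standard TransR objective (project head and tail into the relation space, then measure the translation error), and `assign_cluster` picks the cluster-specific relation vector that best explains a head-tail pair; all names and the toy 2-d example are ours.

```python
import math

def transr_score(h, t, r_c, M_r):
    """TransR-style score: project entities h, t into the relation space
    with matrix M_r, then measure ||M_r h + r_c - M_r t||.
    Lower scores mean a more plausible triple."""
    project = lambda v: [sum(row[j] * v[j] for j in range(len(v))) for row in M_r]
    h_r, t_r = project(h), project(t)
    return math.sqrt(sum((hr + rc - tr) ** 2 for hr, rc, tr in zip(h_r, r_c, t_r)))

def assign_cluster(h, t, cluster_vectors, M_r):
    """CTransR step (sketch): choose the cluster-specific relation vector
    that best explains the offset between the projected head and tail."""
    scores = [transr_score(h, t, r_c, M_r) for r_c in cluster_vectors]
    return scores.index(min(scores))

# Toy 2-d example: two clusters of the same relation; cluster 0 ([1, 0])
# exactly explains the translation from h to t under an identity projection.
I2 = [[1.0, 0.0], [0.0, 1.0]]
best = assign_cluster([0.0, 0.0], [1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], I2)
```

In training, each cluster's vector would be updated only from the pairs assigned to it, which is what gives the piecewise behaviour the abstract mentions.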
Improved Two Stage Generative Adversarial Networks for Adversarial Example Generation with Real Exposure
Priyanka Goyal, D. Singh
Pub Date: 2023-06-08, DOI: 10.2174/2666255816666230608104148

Deep neural networks, owing to their largely linear behaviour, are sensitive to adversarial examples: a small perturbation of the input data can easily break them. Existing methods for mounting such attacks include pixel-level perturbation and spatial transformation of images; these generate adversarial examples that cause the network to make wrong predictions, but they are slow and computationally expensive. This work performed a black-box attack on a target classifier by using generative adversarial networks (GANs) to generate adversarial examples that fool the model into predicting wrong classes. The proposed method first trains the generator Gnorm of the first-stage GAN on a biased dataset containing no data of the target label; after this training finishes, a second-stage generator Gadv is trained that takes as input not random noise but the output of Gnorm. The generated examples are superimposed on the Gnorm output with a small constant, and the superimposed data are fed to the target classifier to compute the loss. Additional losses constrain the generator from producing examples of the target class. The proposed model achieved a better fidelity score, evaluated using the Fréchet Inception Distance (FID), of up to 42.43 in the first stage and up to 105.65 in the second stage, with an attack success rate of up to 99.13%.
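The superimposition step described above can be sketched in a few lines. This is our reading of the abstract, not the paper's implementation: the generator output is added to the clean image scaled by a small constant (`eps` is our name for it), and the result is clipped back to the valid pixel range so the perturbation stays bounded.

```python
def superimpose(x, g_out, eps=0.05):
    """Sketch of the superimposition step (our reconstruction): add the
    generator output to the clean image, scaled by a small constant eps,
    and clip back to the valid [0, 1] pixel range."""
    return [min(1.0, max(0.0, xi + eps * gi)) for xi, gi in zip(x, g_out)]

# The perturbation is bounded by eps, so the adversarial image stays close
# to the original while (ideally) crossing the classifier's decision boundary.
adv = superimpose([0.2, 0.5, 1.0], [1.0, -1.0, 1.0])
```

The bounded perturbation is what keeps the attack imperceptible; the attack success rate then depends on how well Gadv's losses steer this small offset across the target model's decision boundary.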
Performance Challenges and Solutions in Big Data Platform Hadoop
Balraj Singh, H. Verma, Vishu Madaan
Pub Date: 2023-06-08, DOI: 10.2174/2666255816666230608165146

The present era demands continuous improvement in executing complex analytics on large-scale data and in working beyond traditional systems. The need to process diverse data types and serve different industry domains is rising, which increases the demand for sophisticated techniques that further enhance existing platforms and mechanisms. This gives the research community an opportunity to investigate existing systems, find potential issues, and propose improvements. Hadoop is a popular choice for managing and processing Big Data: it is open source, a front-runner in the batch processing of large-scale jobs, and comparatively economical to scale as a cluster. This popularity, however, by no means guarantees high performance in all scenarios, and with the continuous evolution of data and industrial requirements, it is imperative to investigate new methods and techniques that advance the existing system. This paper presents a systematic review of current progress in the field; research publications from various sources were collected and analyzed. The performance of a cluster largely depends on the job processing mechanisms and policies associated with it. While extensive studies and solutions have been proposed, performance bottlenecks in load balancing, resource utilization, content management, and efficient processing remain. Few solutions address the scheduling trade-off between different parameters, the process of content splitting and merging is largely unexplored, and skew mitigation work focuses on the Reduce side of MapReduce while the Map side is little used for load balancing.
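The skew problem the review highlights is easy to see in miniature. The toy partitioner below is our illustration, not from the paper: default hash partitioning sends every record of a hot key to one reducer, while a simple skew-aware variant (one of many possible mitigations) spreads records of known hot keys round-robin, at the cost of having to merge their partial results afterwards.

```python
from collections import Counter

def hash_partition(keys, n_reducers):
    """Default MapReduce behaviour: all records of a key go to one reducer."""
    loads = Counter()
    for k in keys:
        loads[hash(k) % n_reducers] += 1
    return loads

def skew_aware_partition(keys, n_reducers, hot_keys):
    """Sketch of a skew mitigation idea: spread records of known hot keys
    round-robin across reducers (their partial results are merged later)."""
    loads = Counter()
    rr = 0
    for k in keys:
        if k in hot_keys:
            loads[rr % n_reducers] += 1
            rr += 1
        else:
            loads[hash(k) % n_reducers] += 1
    return loads

# 90 records of one hot key (integer key 7), 3 reducers.
loads_default = hash_partition([7] * 90, 3)
loads_skew = skew_aware_partition([7] * 90, 3, {7})
```

With hash partitioning one reducer receives all 90 records and becomes the straggler; the skew-aware split levels the load at 30 each, which is the kind of Reduce-side mitigation the review finds common, while Map-side balancing remains underexplored.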
Large Scale Ontology Matching System (LSMatch)
Abhisek Sharma, Sarika Jain, Archana Patel
Pub Date: 2023-06-06, DOI: 10.2174/2666255816666230606140526

Ontology matching addresses the semantic heterogeneity problem by finding semantic relationships between the entities of different ontologies. Over the last two decades the ontology matching paradigm has seen considerable development: more than 50 matching systems have been built, and some of them perform very well. The initial rate of improvement was notably high but has since slowed; still, there is room for progress that the community can work towards. In this light, we developed the Large Scale Ontology Matching System (LSMatch), which uses different matchers to find similarities between the concepts of two ontologies. LSMatch relies mainly on two modules, which perform string similarity matching and synonym matching on ontology concepts. LSMatch was evaluated in the Ontology Alignment Evaluation Initiative (OAEI) 2021 on the anatomy, disease and phenotype, conference, knowledge graph, and Common Knowledge Graphs (KG) tracks. The results show that LSMatch can perform matching on large ontologies, and in all of these tracks its performance was on par with other systems. In its first participation, the system showed potential, with room for improvement.
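The two matching modules described above can be sketched as follows. This is an illustration in the spirit of the abstract, not LSMatch's actual code: the string matcher here uses edit-based similarity from the standard library, the threshold and the toy synonym table are our assumptions.

```python
from difflib import SequenceMatcher

# Toy synonym table (ours); a real system would draw on a lexicon like WordNet.
SYNONYMS = {"car": {"automobile"}, "automobile": {"car"}}

def string_sim(a, b):
    """Edit-based similarity between two concept labels, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_concepts(src, tgt, threshold=0.9):
    """Sketch of LSMatch-style matching: align a pair of concept labels if
    they are near-identical strings or listed as synonyms."""
    alignments = []
    for a in src:
        for b in tgt:
            if string_sim(a, b) >= threshold or b.lower() in SYNONYMS.get(a.lower(), set()):
                alignments.append((a, b))
    return alignments

alignments = match_concepts(["Car", "Heart"], ["automobile", "heart"])
```

Here "Car"/"automobile" is aligned by the synonym module and "Heart"/"heart" by the string module, showing why the two matchers complement each other.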
Survey on the Techniques for Classification and Identification of Brain Tumour Types from MRI Images Using Deep Learning Algorithms
Kishore.B, Gayathri Devi.K
Pub Date: 2023-06-01, DOI: 10.2174/2666255816666230601150351

A tumour is an uncontrolled growth of tissue in any part of the body. Tumours differ in type and characteristics and require different treatments, and detecting a tumour at an early stage makes treatment easier. Scientists and researchers have been working on sophisticated techniques and methods for identifying the form and stage of tumours. This paper provides a systematic literature survey of techniques for brain tumour segmentation and for classifying MRI images as normal or abnormal, based on different methods including deep learning. The survey covers publicly available datasets, enhancement techniques, segmentation, feature extraction, and the classification of three types of brain tumour (glioma, meningioma, and pituitary), together with the deep learning algorithms implemented for brain tumour analysis. Finally, the survey collects the important literature on brain tumour detection and its developments.
Assessing and Mitigating Bias in Artificial Intelligence: A review
Deepak Sinwar, Akruti Sinha, Devika Sapra, Vijander Singh, Ghanshyam Raghuwanshi
Pub Date: 2023-05-23, DOI: 10.2174/2666255816666230523114425

Discussions of bias in Artificial Intelligence (AI) systems have increased exponentially. Bias in AI is typically defined as a divergence from standard statistical patterns in the output of an AI model, which may be due to a biased dataset or biased assumptions. While bias in trained models is attributable to bias in the human-provided dataset, there is still room for advancement in mitigating bias in AI models; the failure to detect bias in datasets or models stems from the "black box" problem, that is, a lack of understanding of algorithmic outcomes. This paper provides a comprehensive review of the approaches proposed by researchers and scholars to mitigate AI bias, and investigates methods of employing a responsible AI model in decision-making processes. We clarify what bias means to different people and give a working definition of bias in AI systems. The paper also discusses the causes of bias in AI systems, allowing researchers to focus their efforts on minimising those causes and mitigating bias. Finally, we recommend directions for future research towards the most accurate methods for reducing bias in algorithms. We hope that this study will help researchers think from different perspectives while developing unbiased systems.
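One concrete way bias is quantified as a "divergence from standard statistical patterns" is demographic parity. The metric below is a standard fairness probe and our own illustration, not one the review attributes to a specific method: it measures how much the positive-prediction rate differs across groups.

```python
def demographic_parity_gap(preds, groups):
    """Demographic parity gap: the spread between groups in the rate of
    positive predictions. 0 means all groups receive positive outcomes at
    the same rate; larger values indicate more disparate treatment."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

# Group "a" gets positives at 2/3, group "b" at 1/3 -> gap of 1/3.
gap = demographic_parity_gap([1, 1, 0, 0, 1, 0], ["a", "a", "a", "b", "b", "b"])
```

Metrics like this make the "black box" problem tractable at the output level, even when the model's internal reasoning cannot be inspected; other definitions of fairness (equalized odds, calibration) probe different patterns and can conflict with this one.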
A comparative study of various digital image watermarking techniques: Specific to hybrid watermarking
M. Pandey, Sushma Jaiswal
Pub Date: 2023-05-22, DOI: 10.2174/2666255816666230522155134

Digital security is an important aspect of today's era. Digital content on the internet grows every day, so it is essential to guard its copyright using various techniques. Watermarking has emerged as an important field of study aimed at securing digital content and protecting copyright. No watermarking technique is robust against all attacks, and algorithms are designed to meet required specifications, which leaves much opportunity in this field. Image watermarking is a vast research area, ranging from spatial-domain methods to deep learning methods, and it has recently gained popularity through the involvement of deep learning in securing digital content. This study explores the important developments from spatial to deep learning methods of watermarking, which will be helpful to researchers. To accomplish this, standard research papers of the last ten years were obtained from various databases and reviewed to answer five research questions. Open issues and challenges are identified and listed after reviewing the literature. Our study reveals that hybrid watermarking performs better at balancing the trade-off between imperceptibility and robustness. Current research trends and future directions are also discussed.
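The spatial-domain end of the spectrum the survey covers can be shown with the classic least-significant-bit (LSB) scheme. This is a textbook example, not a method the survey proposes: each pixel's lowest bit is overwritten with one watermark bit.

```python
def embed_lsb(pixels, bits):
    """Classic spatial-domain watermarking: overwrite each pixel's least
    significant bit with one watermark bit. Imperceptible (each pixel
    changes by at most 1) but fragile to compression and filtering, which
    is why transform-domain and hybrid schemes trade some simplicity for
    robustness."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | (b & 1)
    return out

def extract_lsb(pixels, n):
    """Recover the first n watermark bits from the marked pixels."""
    return [p & 1 for p in pixels[:n]]

marked = embed_lsb([100, 101, 102, 103], [1, 0, 1, 1])
```

The fragility of this scheme under attack is exactly the imperceptibility-robustness trade-off the survey examines; hybrid methods embed in transform coefficients (DCT, DWT, SVD) to survive such attacks while staying visually transparent.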
An Integrated Approach for Analysis of Electronic Health Records using Blockchain and Deep Learning
Jagendra Singh, P. Singhal, Shelly Gupta, Deepak
Pub Date: 2023-05-09, DOI: 10.2174/2666255816666230509142714

Blockchain can be used to manage health records digitally while preserving their security and immutability. The goal of this study is to make it easier for patients to access their medical records and to send them alert messages about important dates for check-ups, healthy diet, and appointments. To achieve this objective, an integrated approach using blockchain and deep learning is proposed. The first component is Hyperledger Fabric, a private blockchain, which stores the data in a medical ledger that can be shared among hospitals and health organizations. The second component is a deep learning algorithm that analyses the ledger, after which an alert (consultation, health diet, medication, etc.) is sent to the patient's registered mobile device. The proposed work uses nine features from the dataset: identification number, age, gender, disease, weight, consultation date, medication, diagnosis, and diet specification; using several features makes the results more accurate. The integrated model automates the patient's alert system for a variety of activities. On the test data, the LSTM outperforms the other models in precision, recall, and F1 score. In the future, the alert system could be improved by integrating with the calendar software on Android mobile devices.
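The immutability property the ledger relies on comes from hash chaining, sketched below. This is a generic illustration of the idea behind permissioned ledgers like Hyperledger Fabric, not the paper's system; the record fields shown are examples, not the paper's schema.

```python
import hashlib
import json

def add_record(ledger, record):
    """Append a record to a hash-chained ledger: each entry stores the hash
    of the previous entry, so tampering with any record breaks the chain."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)
    return ledger

def verify(ledger):
    """Recompute every hash and check the chain links; any edit is detected."""
    for i, entry in enumerate(ledger):
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        if i > 0 and entry["prev_hash"] != ledger[i - 1]["hash"]:
            return False
    return True

ledger = []
add_record(ledger, {"patient_id": 1, "age": 40, "alert": "check-up due"})
add_record(ledger, {"patient_id": 2, "alert": "medication refill"})
```

In a real deployment, consensus among the hospitals' peers decides which entries are appended; the hash chain is what lets any participant detect after-the-fact modification of a shared record.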
Pub Date : 2023-05-09DOI: 10.2174/2666255816666230509153317
Hongxin Zhang, Hui Jin, Shaowei Ma
SLAM plays an important role in the navigation of robots, unmanned aerial vehicles, and unmanned ground vehicles. Positioning accuracy affects the accuracy of obstacle avoidance, and the quality of map construction directly affects the performance of subsequent path planning and other algorithms; SLAM is thus a core algorithm of intelligent mobile applications. Therefore, robot visual SLAM has great research value and will remain an important research direction. By reviewing recent developments and patents in computer vision SLAM, this paper provides references for researchers in related fields. Computer vision SLAM patents and literature were analyzed in terms of algorithms, innovations, and applications; more than 30 patents and nearly 30 pieces of literature from the past ten years were covered. This paper reviews the research progress of robot visual SLAM over the last 10 years, summarizes its typical features, describes the front end of the visual SLAM system in detail along with the main advantages and disadvantages of each method, analyses the main problems in the development of robot visual SLAM, prospects its development trend, and finally discusses the current and future state of related products and patents. Robot visual SLAM can compare the texture information of an environment and identify the differences between two environments, thus improving accuracy. However, current SLAM algorithms are prone to failure under fast motion and in highly dynamic environments, most SLAM action plans are inefficient, and the image features used in VSLAM are not sufficiently distinctive. Furthermore, more patents on robot visual SLAM should be filed.
{"title":"Recent Advances in Robot Visual SLAM","authors":"Hongxin Zhang, Hui Jin, Shaowei Ma","doi":"10.2174/2666255816666230509153317","DOIUrl":"https://doi.org/10.2174/2666255816666230509153317","url":null,"abstract":"\u0000\u0000SLAM plays an important role in the navigation of robots, unmanned aerial vehicles, and unmanned vehicles. The positioning accuracy will affect the accuracy of obstacle avoidance. The quality of map construction directly affects the performance of subsequent path planning and other algorithms. It is the core algorithm of the intelligent mobile application. Therefore, robot vision slam has great research value and will be an important research direction in the future.\u0000\u0000\u0000\u0000By reviewing the latest development and patent of Computer Vision SLAM, this paper provides references to researchers in related fields.\u0000\u0000\u0000\u0000Computer Vision SLAM patents and literature were analyzed from the aspects of the algorithm, innovation, and application. Among them, there are more than 30 patents and nearly 30 pieces of literature in the past ten years.\u0000\u0000\u0000\u0000This paper reviews the research progress of robot visual SLAM in the last 10 years, summarizes its typical features, especially describes the front part of the visual SLAM system in detail, describes the main advantages and disadvantages of each method, analyses the main problems in the development of robot visual SLAM, prospects its development trend, and finally discusses the related products and patents research status and future of robot visual SLAM technology.\u0000\u0000\u0000\u0000The Robot Vision SLAM can compare the texture information of the environment and identify the difference between the two environments, thus improving accuracy. However, the current SLAM algorithm is easy to fail in fast motion and highly dynamic environments, most SLAM action plans are inefficient, and the image features of VSLAM are too distinguishable. 
Furthermore, more patents on the Robot Vision SLAM should also be invented.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45710021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
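One representative step of the visual-SLAM front end discussed above is matching binary feature descriptors (ORB-style) between two frames. The toy sketch below illustrates this with made-up 8-bit descriptors and a Lowe-style ratio test; a real front end would extract descriptors from camera images, typically with a library such as OpenCV.

```python
# Toy visual-SLAM front-end step: match binary descriptors between two
# frames by Hamming distance, keeping only matches that pass a ratio
# test (best distance clearly smaller than the second-best).
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(desc1, desc2, ratio=0.8):
    """Return (i, j) index pairs of descriptors passing the ratio test."""
    matches = []
    for i, d1 in enumerate(desc1):
        # Rank frame-2 descriptors by Hamming distance to d1.
        order = sorted(range(len(desc2)), key=lambda j: hamming(d1, desc2[j]))
        best, second = order[0], order[1]
        if hamming(d1, desc2[best]) < ratio * hamming(d1, desc2[second]):
            matches.append((i, best))
    return matches

# Made-up 8-bit descriptors for two consecutive frames.
frame1 = [0b10110010, 0b01001101, 0b11110000]
frame2 = [0b10110011, 0b00001111, 0b01001100]
print(match(frame1, frame2))
```

Matched pairs like these feed the pose-estimation back end; the fragility noted in the abstract (fast motion, dynamic scenes, weak features) shows up precisely when such matches become ambiguous and the ratio test rejects them.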
Pub Date : 2023-05-07 | DOI: 10.2174/2666255816666230507182018
Shruti Jain, Shivani Rana, Rakesh Kanji
Text classification has emerged as an important approach to advancing Natural Language Processing (NLP) applications for the text available on the web, and many applications for text analysis have been proposed in the literature. NLP with deep learning has achieved great success in automatically sorting text data into predefined classes, but the process is expensive and time-consuming. To overcome this problem, various machine learning techniques are studied and implemented in this paper to build an automated system for movie review classification. The proposed methodology uses the Bidirectional Encoder Representations from Transformers (BERT) model for data preparation and makes predictions with several machine learning algorithms: XGBoost, support vector machine, logistic regression, naïve Bayes, and a neural network. The algorithms are compared on performance metrics including accuracy, precision, recall, and F1 score. The results reveal that a 2-hidden-layer neural network outperforms the other models, achieving an F1 score above 0.90 within the first 15 epochs and 0.99 in just 40 epochs on the IMDB dataset, thus greatly reducing training time. 100% accuracy is attained using the neural network, a 15% accuracy improvement and a 14.6% F1-score improvement over logistic regression.
{"title":"Automated System for Movie Review Classification using BERT","authors":"Shruti Jain, Shivani Rana, Rakesh Kanji","doi":"10.2174/2666255816666230507182018","DOIUrl":"https://doi.org/10.2174/2666255816666230507182018","url":null,"abstract":"\u0000\u0000Text classification emerged as an important approach to advancing Natural Language Processing (NLP) applications concerning the available text on the web. To analyze the text, many applications are proposed in the literature.\u0000\u0000\u0000\u0000The NLP, with the help of deep learning, has achieved great success in automatically sorting text data in predefined classes, but this process is expensive & time-consuming.\u0000\u0000\u0000\u0000To overcome this problem, in this paper, various Machine Learning techniques are studied & implemented to generate an automated system for movie review classification.\u0000\u0000\u0000\u0000The proposed methodology uses the Bidirectional Encoder Representations from Transformers (BERT) model for data preparation and predictions using various machine learning algorithms like XGBoost, support vector machine, logistic regression, naïve Bayes, and neural network. 
The algorithms are analyzed based on various performance metrics like accuracy, precision, recall & F1 score.\u0000\u0000\u0000\u0000The results reveal that the 2-hidden layer neural network outperforms the other models by achieving more than 0.90 F1 score in the first 15 epochs and 0.99 in just 40 epochs on the IMDB dataset, thus reducing the time to a great extent.\u0000\u0000\u0000\u0000100% accuracy is attained using a neural network, resulting in a 15% accuracy improvement and 14.6% F1 score improvement over logistic regression.\u0000","PeriodicalId":36514,"journal":{"name":"Recent Advances in Computer Science and Communications","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44575618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
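The pipeline this abstract describes — encode each review, then train a simple classifier on the encodings — can be sketched as follows. The `embed()` function here is a bag-of-words stand-in over a made-up sentiment vocabulary (a real pipeline would use pretrained BERT embeddings), and the classifier is a from-scratch logistic regression trained by stochastic gradient descent; both are illustrative assumptions, not the paper's implementation.

```python
import math

# Stand-in for a sentence encoder: bag-of-words counts over a small
# made-up sentiment vocabulary (a real pipeline would use BERT).
VOCAB = ["great", "good", "excellent", "bad", "boring", "awful"]

def embed(review: str) -> list[float]:
    words = review.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=200):
    """Fit logistic regression by SGD on the log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, review):
    p = sigmoid(sum(wj * xj for wj, xj in zip(w, embed(review))) + b)
    return 1 if p >= 0.5 else 0               # 1 = positive review

reviews = ["great excellent movie", "good plot great cast",
           "boring and awful", "bad boring film"]
labels = [1, 1, 0, 0]
w, b = train_logreg([embed(r) for r in reviews], labels)
print(predict(w, b, "an excellent good movie"))  # expected: 1
```

Swapping the classifier head (XGBoost, SVM, naïve Bayes, or a neural network, as in the paper) only changes the `train`/`predict` pair; the encoding step stays the same.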