Distributed Denial of Service (DDoS) attacks pose a significant threat to the security and availability of networks. With the increasing adoption of Software-Defined Networking (SDN) and its multi-controller architectures, there is a need to explore effective DDoS attack detection mechanisms tailored to these environments. This survey paper presents an overview of current research on detecting DDoS attacks in SDN environments, with a focus on different detection techniques, methodologies, and open problems. The survey attempts to identify the strengths and limitations of current approaches and proposes potential research directions for improving DDoS detection in this context.
{"title":"Review on DDoS Attack in Controller Environment of Software Defined Network","authors":"Gunjani Vaghela, Nishant Sanghani, Bhavesh Borisaniya","doi":"10.4108/eetsis.5823","DOIUrl":"https://doi.org/10.4108/eetsis.5823","url":null,"abstract":"Distributed Denial of Service (DDoS) attacks pose a significant threat to the security and availability of networks. With the increasing adoption of Software-Defined Networking (SDN) and its multi-controller architectures, there is a need to explore effective DDoS attack detection mechanisms tailored to these environments. An overview of the current research on detecting DDoS attacks in SDN environments, with a focus on different detection techniques, methodologies and problems is presented in this survey paper. The survey attempt to identify the limitations and strengths of current approaches and propose potential research directions for improving DDoS detection in this context.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"66 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141806498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
INTRODUCTION: With the rapid increase in network information data, the protection of image data has become a challenging task, in which image encryption technology can play an important role. This paper studies color image encryption algorithms and proposes a novel method for color image encryption to enhance its security and effectiveness. OBJECTIVES: The purpose of this study is to effectively integrate the channel information of color images, thereby improving the effect of pixel-decomposition-based image encryption algorithms. Different indicators are used to analyze the encryption effect, and the method is also compared with existing image encryption algorithms. METHODS: Initially, through pixel decomposition, the pixel values of the R, G, and B channels of the color image, each with a depth of 8 bits, are decomposed into two integers between 0 and 15 and combined into a new data matrix. Then, multiple rounds of scrambling are performed on the transformed matrix. Next, the Fibonacci transformation matrix is applied to the scrambled matrix to further change the values of its elements. Finally, an XOR diffusion operation is carried out to obtain the encrypted image. RESULTS: Experimental results show that the proposed method achieves relatively good results on multiple image encryption indicator tests. The algorithm not only inherits the advantages of existing image encryption schemes but also effectively integrates the information of each channel of the color image, providing better security. CONCLUSION: This study further proves the effectiveness of image encryption algorithms based on pixel decomposition and provides a new idea for better color image encryption algorithms, which is expected to be applicable to other problems such as information hiding and data protection.
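As a rough illustration of the pipeline described in METHODS, the sketch below applies the four stages to a single 8-bit channel. The logistic-map key schedule, its parameters (x0, r), and the pairwise [[1,1],[1,0]] mod-16 mixing are assumptions standing in for the paper's unspecified chaotic system and Fibonacci transformation; only the abstract's high-level structure is taken as given.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n chaotic values with the logistic map (illustrative key schedule)."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def encrypt_channel(ch, x0=0.3141, r=3.99):
    """Sketch of the abstract's pipeline on one 8-bit grayscale channel."""
    h, w = ch.shape
    # 1) Pixel decomposition: split each 8-bit value into two 4-bit integers (0-15).
    high, low = ch >> 4, ch & 0x0F
    m = np.concatenate([high.ravel(), low.ravel()]).astype(np.uint8)
    n = m.size
    # 2) Scrambling: permute positions in an order derived from the chaotic sequence.
    perm = np.argsort(logistic_keystream(x0, r, n))
    m = m[perm]
    # 3) Fibonacci-style mixing: transform adjacent pairs with [[1,1],[1,0]] mod 16
    #    (an assumed concrete form of the abstract's "Fibonacci transformation matrix").
    pairs = m[: n - n % 2].reshape(-1, 2).astype(np.int64)
    fib = np.array([[1, 1], [1, 0]])
    m[: n - n % 2] = ((pairs @ fib) % 16).astype(np.uint8).ravel()
    # 4) XOR diffusion with a keystream quantised to 4 bits.
    ks = (logistic_keystream(x0 / 2, r, n) * 16).astype(np.uint8) & 0x0F
    m ^= ks
    # Recombine the 4-bit halves into 8-bit cipher pixels.
    high_c, low_c = m[: n // 2], m[n // 2:]
    return ((high_c << 4) | low_c).reshape(h, w)
```

Decryption would invert the four stages in reverse order using the same key parameters.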
{"title":"A novel color image encryption method using Fibonacci transformation and chaotic systems","authors":"Chunming Xu","doi":"10.4108/eetsis.5452","DOIUrl":"https://doi.org/10.4108/eetsis.5452","url":null,"abstract":"INTRODUCTION: With the rapid increase in network information data, the protection of image data has become a challenging task, where image encryption technology can play an important role. This paper studies color image encryption algorithms and proposes a novel method for color image encryption to enhance the security and effectiveness of image encryption.OBJECTIVES: The purpose of this study is to effectively integrate different channel information of color images, thereby improving the effect of pixel decomposition based image encryption algorithm. Different indicators are used to analyze the effect of image encryption, and it is also compared with existing image encryption algorithms.METHODS: Initially, through pixel decomposition, the pixel values of the R, G, B channels of the color image, each with a depth of 8 bits, are decomposed into two integers between 0-15 and combined into a new data matrix. Then, multiple rounds of scrambling are performed on the transformed matrix. Next, the Fibonacci transformation matrix is applied to the scanned matrix to further change the values of its elements. Finally, XOR diffusion operation is carried out to obtain the encrypted image.RESULTS: Experimental results show that the proposed method achieves relatively good results in multiple image encryption indicator tests. The algorithm not only inherits the advantages of existing image encryption but also effectively integrates the information of each channel of the color image, providing better security.CONCLUSION: This study further proves the effectiveness of image encryption algorithms based on pixel decomposition and provides a new idea for better color image encryption algorithms, which is expected to be applied to other issues such as information hiding and data protection.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"31 44","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141814146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated face recognition plays a vital role in forensics. The most important evidence in a criminal investigation is the facial images captured from the crime scene, as they represent the identity of the people involved in the crime. The role of law enforcement agencies is to identify these facial images against a suitable database, and the resulting matches can serve as strong evidence, including in global counter-terrorism initiatives. The contour of the chin and cheeks, the distances between different facial features, and the shapes of facial components are some of the parameters considered by forensic experts in the manual facial identification process. This process is time-consuming and tedious. To address this issue, there is a need to develop an automated face recognition system for forensics. Accordingly, FaceNet, a framework for age-variation facial digital images, is discussed in this research work. Experiments are evaluated on the CSA dataset with three age variations, where the framework achieves a recognition accuracy of 86.8% and performs better than existing algorithms.
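The abstract does not detail the framework's matching step, but FaceNet-style systems typically verify identity by comparing learned embeddings. A minimal sketch, assuming embeddings are already produced by some face-embedding network; the `embed` helper and the 0.7 threshold are hypothetical placeholders, not the paper's values:

```python
import numpy as np

def verify(embedding_a, embedding_b, threshold=0.7):
    """FaceNet-style verification: two faces match if the Euclidean distance
    between their L2-normalised embeddings falls below a tuned threshold."""
    a = embedding_a / np.linalg.norm(embedding_a)
    b = embedding_b / np.linalg.norm(embedding_b)
    return np.linalg.norm(a - b) < threshold

# Hypothetical usage: `embed` is whatever network maps a face crop to a vector.
# query, candidate = embed(crime_scene_face), embed(database_face)
# match = verify(query, candidate)
```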
{"title":"FaceNet – A Framework for Age Variation Facial Digital Images","authors":"Chethana H.T., Trisiladevi C. Nagavi, Mahesha P., Vinayakumar Ravi, Gururaj H.L.","doi":"10.4108/eetsis.5198","DOIUrl":"https://doi.org/10.4108/eetsis.5198","url":null,"abstract":"Automated face recognition plays a vital role in forensics. The most important evidence in the criminal investigation is the facial images captured from the crime scene, as they represent the identity of the people involved in crime. The role of law enforcement agencies is to identify the facial images from the suitable database. This information can be treated as strong evidence for the law enforcement agencies which becomes the most important evidence in global counter-terrorism initiatives. Contour of chin and cheek, distancebetween different features and shapes of facial components are some of the parameters considered by the forensic experts for manual facial identification process. This process is time consuming, and it is a tedious job. To address this issue, there is a need for developing an automated face recognition system for forensics. As a result, FaceNet – a framework for age variation facial digital images is discussed in this research work. Experiments are evaluated on CSA dataset with three age variations which provides a recognition accuracy of86.8% and performs better than the existing algorithms.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 464","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
INTRODUCTION: Diabetic retinopathy, cataract, and glaucoma are major eye diseases posing significant diagnostic challenges due to their asymptomatic nature at early stages. If not detected and diagnosed early, these diseases may lead to severe visual impairment and even blindness. Early detection of eye diseases has shown exceptional recovery rates. Traditional diagnostic methods, which rely primarily on ophthalmological expertise, involve a time-consuming process. With technological advancements in imaging techniques, a large volume of medical images has been created, which can be utilized for developing more accurate diagnostic tools. Deep learning (DL) models play a significant role in analyzing medical images, as DL algorithms can automatically learn the features that indicate eye disease from eye image datasets. Training DL models, however, requires a significant amount of data and computational resources. To overcome this, we use advanced deep learning algorithms combined with transfer-learning techniques. Leveraging the power of deep learning, we aim to develop sophisticated models that can distinguish different eye diseases in medical image data. OBJECTIVES: To improve the accuracy and efficiency of early detection methods, improve diagnostic precision, and enable timely intervention in these challenging ocular conditions. METHODS: The well-known deep learning architectures VGG19, InceptionV3, and ResNet50 with transfer learning were evaluated and their results compared. RESULTS: VGG19, InceptionV3, and ResNet50 with transfer learning achieved accuracies of 90.33%, 89.8%, and 99.94%, respectively. The precision, recall, and F1 scores for VGG19 were 79.17%, 79.17%, and 78.21%; InceptionV3 showed 82.56%, 82.38%, and 82.11%; and ResNet50 achieved 96.28%, 96.2%, and 96.24%. CONCLUSION: The convolutional neural network models VGG19, InceptionV3, and ResNet50 combined with transfer learning achieve better results than the original convolutional neural network models.
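As a hedged illustration of the transfer-learning setup the abstract describes, the sketch below freezes an ImageNet-pretrained ResNet50 backbone and trains a small classification head on top. The four-class layout, input size, and hyperparameters are assumptions; the paper's exact configuration is not given in the abstract.

```python
import tensorflow as tf

# Assumed setup: 4 classes (normal, diabetic retinopathy, cataract, glaucoma)
# and 224x224 RGB fundus images.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same head can be swapped onto VGG19 or InceptionV3 via the corresponding `tf.keras.applications` constructors.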
{"title":"Eye Disease Detection Using Deep Learning Models with Transfer Learning Techniques","authors":"Bhavadharini R.M., Kalla Bharath Vardhan, Mandava Nidhish, Surya Kiran C., Dudekula Nahid Shameem, Varanasi Sai Charan","doi":"10.4108/eetsis.5971","DOIUrl":"https://doi.org/10.4108/eetsis.5971","url":null,"abstract":"INTRODUCTION: Diabetic Retinopathy, Cataract and Glaucoma are the major eye diseases posing significant diagnostic challenges due to their asymptotic nature at their early stages. These diseases if not detected and diagnosed at their early stages may lead to severe visual impairment and even can cause blindness in human beings. Early detection of eye diseases showed an exceptional recovery rate. Traditional diagnostic methods primarily relying on expertise in the field of ophthalmology involve a time-consuming process. With technological advancements in the field of imaging techniques, a large volume of medical images have been created which can be utilized for developing more accurate diagnostic tools in the field. Deep learning (DL) models are playing a significant role in analyzing medical images. DL algorithms can automatically learn the features which indicate eye diseases from eye image datasets. Training DL models, however, requires a significant amount of data and computational resources. To overcome this, we use advanced deep learning algorithms combined with transfer-learning techniques. Leveraging the power of deep learning, we aim to develop sophisticated models that can distinguish different eye diseases in medical image data. \u0000OBJECTIVES: To improve the accuracy and efficiency of early detection methods, improve diagnostic precision, and intervene in these challenging ocular conditions in a timely manner.\u0000METHODS: The well-known Deep Learning architectures VGG19, InceptionV3 and ResNet50 architectures with transfer learning were evaluated and the results are compared. \u0000RESULTS: VGG19, InceptionV3 and ResNet50 architectures with transfer learning achieved 90.33%, 89.8% and 99.94% accuracies, respectively. The precision, recall, and F1 scores for VGG19 were recorded as 79.17%, 79.17%, and 78.21%, while InceptionV3 showed 82.56%, 82.38%, and 82.11% and ResNet50 has 96.28%, 96.2%, and 96.24%.\u0000CONCLUSION: The Convolutional Neural Network models VGG19, Inception v3, ResNet50 combined with transfer learning achieve better results than the original Convolutional Neural Network models.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"105 45","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141820515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Hadoop platform has become a widely adopted distributed computing platform for handling large-scale data processing tasks. However, the security of the Hadoop platform has become a major concern due to the increased risk of cyber-attacks. To address this concern, various security mechanisms, including authentication and access control, have been proposed for the Hadoop platform. This research paper proposes a token-based authentication mechanism to enhance the security of the Hadoop platform. The proposed mechanism utilizes a combination of Kerberos and JSON Web Tokens (JWT) for secure communication between Hadoop components. The experimental results demonstrate the mechanism's resilience against single points of failure, guessing attacks, replay attacks, brute-force attacks, and dictionary attacks. The proposed model also performs better than other models in terms of average processing time and authentication accuracy.
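The abstract names Kerberos plus JWT but not the exact protocol, so the following is only a minimal sketch of the JWT leg using the PyJWT library: a token service mints a short-lived, audience-scoped token after the (here abstracted) Kerberos login, and each Hadoop component verifies it. The claim set, HMAC key handling, and 15-minute lifetime are illustrative assumptions.

```python
import datetime
import jwt  # PyJWT

SECRET = "shared-hmac-key"  # placeholder; a real deployment would use a managed key

def issue_token(principal, service, ttl_minutes=15):
    """Mint a short-lived JWT after the client's (e.g. Kerberos) login succeeds."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {"sub": principal, "aud": service, "iat": now,
              "exp": now + datetime.timedelta(minutes=ttl_minutes)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token, service):
    """Each Hadoop component validates the signature, audience, and expiry."""
    return jwt.decode(token, SECRET, algorithms=["HS256"], audience=service)

token = issue_token("alice@EXAMPLE.REALM", "namenode")
print(verify_token(token, "namenode")["sub"])  # alice@EXAMPLE.REALM
```

Short-lived, audience-scoped tokens are what limit the replay window the abstract's evaluation refers to; expired or re-targeted tokens fail `jwt.decode`.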
{"title":"JWTAMH: JSON Web Tokens Based Authentication Mechanism for HADOOP.","authors":"Manish Gupta, Anish Gupta, Britto Raj S., Annu Sharma","doi":"10.4108/eetsis.5429","DOIUrl":"https://doi.org/10.4108/eetsis.5429","url":null,"abstract":"The Hadoop platform has become a widely adopted distributed computing platform for handling large-scale data processing tasks. However, the security of the Hadoop platform has become a major concern due to the increased risk of cyber-attacks. To address this concern, various security mechanisms have been proposed for the Hadoop platform, including authentication and access control. This research paper proposes a token-based authentication mechanism to enhance the security of the Hadoop platform. The proposed mechanism utilizes a combination of Kerberos and JSON Web Tokens (JWT) for secure communication between Hadoop components. The experimental results demonstrate the effectiveness of the Single point of failure, Guessing attack, Replay Attack, Brute force attack, and Dictionary attack. The proposed model has better performance in terms of average processing time and accuracy of authentication than other models.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":" 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141828831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
INTRODUCTION: There has been increased use of teledentistry by dental practitioners in Australia as a response to the COVID-19 pandemic. Previous studies analysing the opinions of dental practitioners were conducted prior to the pandemic, so it is important to determine whether perceptions of teledentistry have changed following the outbreak. OBJECTIVES: The aim of this study was to determine the perceptions of oral healthcare professionals regarding teledentistry in a clinical setting. METHODS: This cross-sectional study involved an anonymous electronic questionnaire with a sample of 152 dental practitioners. The questionnaire contained 28 questions using a 5-point Likert scale to assess general dentists' perceptions of teledentistry regarding diagnosis, accessibility, patient care, technology, and finances. The chi-squared test and analysis of variance (ANOVA) were used to analyse the results, and percentages of agreement and disagreement were calculated. RESULTS: Participants believed that teledentistry was effective for consultations and for diagnosing simple cases. They indicated large benefits of teledentistry in improving access, delivering post-operative care, and triaging patients, and found it particularly useful during the COVID-19 pandemic. However, participants felt that teledentistry was ineffective for diagnosing complex cases such as pathology. Concerns were also raised regarding the interventional capacity of teledentistry, the quality of the technology, data security, and medicolegal issues. In general, participants preferred in-person care over teledentistry and were neutral on financial aspects. CONCLUSION: The study provided insight into the perceptions of Australian dental practitioners regarding teledentistry post-COVID-19. Opinions have changed slightly, but large hurdles remain before teledentistry is more widely accepted. Research should continue in order to further improve teledentistry in the future.
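For readers unfamiliar with the statistical methods named above, a minimal sketch of the chi-squared step on a collapsed Likert contingency table follows; the counts and the group split are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: Likert responses to one question, collapsed
# from 5 points to agree/neutral/disagree, split by practitioner group.
table = np.array([[40, 25, 11],   # e.g. general dentists
                  [30, 28, 18]])  # e.g. specialists
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")  # p < 0.05 suggests group differences
```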
{"title":"Cross-Sectional Analysis of Australian Dental Practitioners’ Perceptions of Teledentistry","authors":"Joshua Lee, J. Park, Hua Wang, Boxi Feng, Kate N Wang","doi":"10.4108/eetsis.5366","DOIUrl":"https://doi.org/10.4108/eetsis.5366","url":null,"abstract":"INTRODUCTION: There has been an increased use of teledentistry by dental practitioners in Australia as a response to the COVID-19 pandemic. Previous studies conducted analysing the opinions of dental practitioners were performed prior to the pandemic, and therefore it is important to determine if perceptions regarding teledentistry have changed following the outbreak.\u0000OBJECTIVES: The aim of this study was to determine the perceptions of oral healthcare professionals regarding teledentistry in a clinical setting.\u0000METHODS: The cross-sectional study involved an anonymous electronic questionnaire with a sample of 152 dental practitioners. The questionnaire contained 28 questions utilizing a 5-point Likert-scale to assess the perceptions of general dentists on teledentistry regarding diagnosis, accessibility, patient care, technology and finances. Chi-squared test and analysis of variance (ANOVA) were used to analyse the results and percentages of agreement and disagreement were calculated.\u0000RESULTS: The participants of the questionnaire believed that teledentistry was effective for consultations and in the diagnosis of simple cases. They indicated large benefits of teledentistry in improving access, delivering post-operative care, and triaging patients, and found it particularly useful during the COVID-19 pandemic. However, the participants felt that teledentistry was ineffective in diagnosing complex cases such as pathology. Concerns were also raised regarding the interventional capacity of teledentistry, the quality of the technology, data security and medicolegal issues. In general, participants preferred in-person care in comparison with teledentistry. They were neutral regarding finance.\u0000CONCLUSION: The study provided an insight into the perceptions of Australian dental practitioners regarding teledentistry post-COVID-19. Opinions have changed slightly, but there are large hurdles still to overcome before teledentistry is more widely accepted. Research should be continued to further improve teledentistry in the future.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"1 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141640595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an era dominated by digital technology, the imperative of securing patient data cannot be overstated. The deployment of advanced protective measures, including encryption, firewalls, and robust authentication protocols, is a necessity for preserving the confidentiality and integrity of sensitive patient information. Furthermore, stringent access controls serve as a fundamental safeguard, ensuring that only authorized personnel are granted access to this data. An innovative development in patient data protection is the use of the electrocardiogram (ECG) as a unique identifier for individuals. In this study, ECG data is embedded within cover images using a technique known as Reversible Data Hiding (RDH). RDH offers a distinctive advantage: the original image can be fully restored, without loss, after the hidden data is extracted. This is made possible through the application of inventive pixel interpolation and histogram shifting algorithms. Crucially, the study's simulations, conducted across a diverse array of images, demonstrate the enhanced embedding capacity of the RDH technique while maintaining a commendable balance in terms of Peak Signal-to-Noise Ratio (PSNR) and boundary map. This empirical evidence corroborates the efficacy of the approach and its potential to provide an advanced level of security for patient data in the digital landscape.
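The abstract names pixel interpolation and histogram shifting without giving details, so the following is a sketch of only the classic histogram-shifting embedding step (one peak/zero pair, grayscale); the interpolation stage, the boundary-map handling, and the extraction routine of the paper are omitted.

```python
import numpy as np

def embed_hs(img, bits):
    """Classic histogram-shifting RDH sketch on a uint8 grayscale image.
    Assumes the histogram peak is below gray level 255 and that a zero or
    rare bin exists to its right (otherwise a boundary map is needed)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # most frequent gray level
    zero = int(hist[peak + 1:].argmin()) + peak + 1  # emptiest bin to its right
    out = img.copy().astype(np.int32)
    out[(out > peak) & (out < zero)] += 1            # shift to free the bin next to peak
    flat = out.ravel()
    payload = iter(bits)
    for i in range(flat.size):                       # each peak pixel carries one bit
        if flat[i] == peak:
            try:
                flat[i] += next(payload)             # bit 0: stay at peak; bit 1: peak+1
            except StopIteration:
                break
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

# stego, peak, zero = embed_hs(gray_image, [1, 0, 1, 1])
```

Extraction reads peak pixels back as 0 and peak+1 pixels as 1, then shifts the (peak, zero) range left by one, restoring the cover image exactly.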
{"title":"Fortifying Patient Data Security in the Digital Era: A Two-Layer Approach with Data Hiding and Electrocardiogram","authors":"Praveen Gupta, Ajay Prasad","doi":"10.4108/eetsis.5644","DOIUrl":"https://doi.org/10.4108/eetsis.5644","url":null,"abstract":"In an era dominated by digital technology, the imperative of securing patient data cannot be overstated. The deployment of advanced protective measures, including encryption, firewalls, and robust authentication protocols, is an absolute necessity when it comes to preserving the confidentiality and integrity of sensitive patient information. Furthermore, the establishment of stringent access controls serves as a fundamental safeguard, ensuring that only authorized personnel are granted access to this invaluable data. An innovative development in the realm of patient data protection is the utilization of ElectroCardioGram (ECG) as a unique identifier for individuals. In the context of this study, ECG data is ingeniously embedded within cover images using a technique known as Reversible Data Hiding (RDH). RDH offers a distinctive advantage by ensuring that the original image can be fully restored without loss of data after extraction. This achievement is made possible through the application of inventive pixel interpolation and histogram shifting algorithms. Crucially, the study's simulations, conducted across a diverse array of images, underscore the enhanced embedding capacity of the RDH technique while maintaining a commendable balance in terms of the Peak Signal to Noise Ratio (PSNR) and boundary map. This empirical evidence corroborates the efficacy of the approach and its potential to provide an advanced level of security for patient data in the digital landscape.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"23 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141647971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rapid advancements in wireless communication technology have made it easier to transfer digital data globally. With the full assistance of artificial intelligence, the sixth-generation (6G) system, a new paradigm in wireless communication, is anticipated to be put into use between 2027 and 2030. Greater system capacity, higher data rates, lower latency, stronger security, and better quality of service (QoS) compared with the 5G system are some of the main concerns to be addressed beyond 5G. 6G aims to meet the combined and growing need for wider network coverage, lower latency, and higher data rates. To meet these needs and enable new services and applications, it is recommended that intelligent communication be implemented. This paper outlines the main enablers and facilitators for implementing intelligent communication beyond 5G. The article charts the horizon for new adaptations and standardization for integrating 6G intelligent communication into future networks, and outlines the requirements and use-case scenarios for 6G. It also highlights the potential of 6G and its key enablers from the standpoint of flexibility. It examines key research gaps, such as spectrum efficiency, network parameters, infrastructure deployment, and security flaws in past transitions, while contrasting 5G and 6G communication. To overcome these challenges, modernizing 6G research domains is essential. This review article therefore focuses on the importance of 6G wireless communication and its network architecture, and describes the technological paradigm shift from 5G to 6G. Furthermore, it highlights as research objectives popular domains such as artificial intelligence, the Internet of Things, big data management, wireless mobile networks, massive MIMO (Multiple-Input Multiple-Output), quantum communication, blockchain technology, terahertz (THz) communications, cell-free communications, and intelligent reflecting surfaces.
{"title":"New directions for adapting intelligent communication and standardization towards 6G","authors":"Anjanabhargavi Kulkarni, R. H. Goudar, Vijayalaxmi N. Rathod, D. G. M., Geetabai S. Hukkeri","doi":"10.4108/eetsis.5126","DOIUrl":"https://doi.org/10.4108/eetsis.5126","url":null,"abstract":"Rapid advancements in wireless communication technology have made it easier to transfer digital data globally. With the complete assistance of artificial intelligence, the sixth-generation (6G) system—a new paradigm in wireless communication—is anticipated to be put into use between 2027 and 2030. Faster system capacity, faster data rate, lower latency, higher security, and better quality of service (QoS) in comparison to the 5G system are some of the main concerns that need to be addressed beyond 5G. Combining the growing need for more network coverage, lower latency, and greater data rates is the aim of 6G. It is recommended that to meet these needs and enable new services and applications, intelligent communication be implemented. The main enablers and facilitators for implementing intelligent communication beyond 5G are outlined in this paper. The article provides the horizon for new adaptations and standardization for integrating 6G intelligent communication in future networks and outlines the requirements and use-case scenarios for 6G. It also highlights the potential of 6G and key enablers from the standpoint of flexibility. It examines key research gaps like spectrum efficiency, network parameters, infrastructure deployment, and security flaws in past transitions while contrasting 5G and 6G communication. To overcome these challenges, modernizing 6G research domains are essential. Therefore, this review article focuses on the importance of 6G wireless communication and its network architecture, which also provides the technological paradigm shift from 5G to 6G. Furthermore, it highlights popular domains such as Artificial Intelligence, Internet of Things, Managing Big Data, Wireless Mobile networks, and Massive MIMO (Multiple Input Multiple Output), Quantum communication, Block chain Technology, Terahertz Communications (THz), Cell-free Communications and Intelligent Reflecting Surface as research objectives.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"56 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141654319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
INTRODUCTION: The rapid progress in artificial intelligence (AI) over the past decade has ushered in a new era of transformative technologies. Deep learning has emerged as a powerful tool, demonstrating remarkable capabilities in various applications. This paper focuses on one of the controversial applications of deep learning, commonly known as deepfakes. OBJECTIVES: The main objective of this comprehensive bibliometric survey is to explore the trends, applications, and challenges of deepfakes over the past four and a half years. METHODS: In this research, a total of 794 documents published from 2019 to July 2023 were acquired from the Scopus database. The bibliometric analysis was conducted with the RStudio and VOSviewer tools and covers deepfake challenges, countries, sources, the top 20 cited documents, and research trends in the field. RESULTS: The analysis highlights a substantial increase in deepfake publications from January 2019 to July 2023. Of the eight document types identified, 38% are journal articles. Among the journal sources, "Advances in Computer Vision and Pattern Recognition" holds Q1 status, with 8.3% of publications in the deepfakes domain during the target period. Moreover, the data visualizations reveal growing international collaboration, with the USA as the most prolific country in deepfake research. CONCLUSION: Despite numerous reviews on deepfakes, there has been a notable absence of comprehensive scientometric analyses. This paper fills this gap through a bibliometric study using the Scopus database as the underlying source. The analysis includes keyword analysis, leading research-contributing institutes, co-country collaboration, and co-keyword occurrence. The findings offer scholars a foundational understanding of document types, prominent journals, international collaboration trends, and influential institutions, and provide valuable guidance for future scholarly work in this evolving field.
{"title":"A Bibliometric Analysis of Deepfakes : Trends, Applications and Challenges","authors":"Diya Garg, Rupali Gill","doi":"10.4108/eetsis.4883","DOIUrl":"https://doi.org/10.4108/eetsis.4883","url":null,"abstract":"INTRODUCTION: The rapid progress in artificial intelligence (AI) over the past decade has ushered in a new era of transformative technologies. Deep learning has emerged as a potential tool, demonstrating remarkable capabilities in various applications. This paper focuses on one of the controversial applications of deep learning commonly known as deepfakes.OBJECTIVES: The main objective of this comprehensive bibliometric survey is to explore the trends, applications and challenges of deepfakes over the course of last 4.5 years.METHODS: In this research, a total of 794 documents published from 2019 to July 2023 were acquired from Scopus database. To conduct this bibliometric analysis, RStudio and VOSviewer tools have been used. In this current analysis, deepfake challenges, countries, sources, top 20 cited documents, and research trends in the field of deepfake have been included.RESULTS: The analysis highlights a substantial increase in deepfake publications from January 2019 to July 2023. Out of the 8 document types identified 38% are article publications. In addition, from the journal articles it has been depicted that the journal source entitled \"Advances in Computer Vision and Pattern Recognition\" holds Q1 status with 8.3% publications in the deepfakes domain during the targeted year range. Moreover, the data visualizations reveal the growing international collaboration, with the USA as the most prolific country in deepfake research.CONCLUSION: Despite numerous reviews on deepfakes, there has been a notable absence of comprehensive scientometric analyses. This paper fills this gap through a bibliometric study using the Scopus database as urderlying source. The analysis includes keyword analysis, leading research-contributing institutes, co-country collaboration, and co-keyword occurrence. The findings offer valuable insights for scholars, providing a foundational understanding including document types, prominent journals, international collaboration trends, and influential institutions and offering valuable guidance for future scholarly pursuits in this evolving field.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"54 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141654449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, a multitask model is proposed to perform simultaneous news-category and sentiment classification of a diverse dataset comprising 3263 news records spanning eight categories: environment, health, education, tech, sports, business, lifestyle, and science. Leveraging the power of Bidirectional Encoder Representations from Transformers (BERT), the model demonstrates remarkable results in both tasks. For topic classification, it achieves an accuracy of 98% with balanced precision and recall, substantiating its proficiency in categorizing news articles. For sentiment analysis, the model maintains a strong accuracy of 94%, distinguishing positive from negative sentiment effectively. This multitask approach showcases the model's versatility and its potential to comprehensively understand and classify news articles based on both content and sentiment. The multitask model not only enhances classification accuracy but also improves the efficiency of handling extensive news datasets. Consequently, it can empower news agencies, content recommendation systems, and information retrieval services to offer more personalized and pertinent content to their users.
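A minimal sketch of such a multitask setup, assuming a shared `bert-base-uncased` encoder with two linear heads over the [CLS] token; the head design and loss weighting are assumptions, as the paper's exact architecture is not given in the abstract.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultitaskBert(nn.Module):
    """One shared BERT encoder with two heads: topic (8-way) and sentiment (2-way)."""
    def __init__(self, name="bert-base-uncased", n_topics=8, n_sentiments=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.topic_head = nn.Linear(hidden, n_topics)
        self.sentiment_head = nn.Linear(hidden, n_sentiments)

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token representation as the shared sentence embedding.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.topic_head(cls), self.sentiment_head(cls)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Stocks rallied after the rate decision."],
            return_tensors="pt", padding=True, truncation=True)
model = MultitaskBert()
topic_logits, sent_logits = model(batch["input_ids"], batch["attention_mask"])
# Training would sum the two cross-entropy losses, e.g.:
# loss = ce(topic_logits, topic_labels) + ce(sent_logits, sentiment_labels)
```

Sharing one encoder is what makes the approach efficient on large news corpora: both tasks are served by a single forward pass.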
{"title":"Multitask Sentiment Analysis and Topic Classification Using BERT","authors":"P. Shah, Hiren Patel, Priya Swaminarayan","doi":"10.4108/eetsis.5287","DOIUrl":"https://doi.org/10.4108/eetsis.5287","url":null,"abstract":"In this study, a multitask model is proposed to perform simultaneous news category and sentiment classification of a diverse dataset comprising 3263 news records spanning across eight categories, including environment, health, education, tech, sports, business, lifestyle, and science. Leveraging the power of Bidirectional Encoder Representations from Transformers (BERT), the algorithm demonstrates remarkable results in both tasks. For topic classification, it achieves an accuracy of 98% along with balanced precision and recall, substantiating its proficiency in categorizing news articles. For sentiment analysis, the model maintains strong accuracy at 94%, distinguishing positive from negative sentiment effectively. This multitask approach showcases the model's versatility and its potential to comprehensively understand and classify news articles based on content and sentiment. This multitask model not only enhances classification accuracy but also improves the efficiency of handling extensive news datasets. Consequently, it empowers news agencies, content recommendation systems, and information retrieval services to offer more personalized and pertinent content to their users.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"79 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141657710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}