Relaying with network coding forms a basis for a variety of collaborative communication systems. A linear block coding framework for multi-way relaying using network codes, introduced in the literature, shows great promise for understanding, analyzing, and designing such systems. So far, this technique has been used with low-density parity check (LDPC) codes and belief propagation (BP) decoding. Polar codes have drawn significant interest in recent years because of their low decoding complexity and good performance. Our paper considers the use of polar codes as network codes with differential binary phase shift keying (DBPSK), bypassing the need for channel state estimation in multi-way selective detect-and-forward (DetF) cooperative relaying. We demonstrate that polar codes are suitable for such applications. The encoding and decoding complexity of such systems is analyzed for linear block codes with maximum likelihood (ML) decoding, LDPC codes with log-BP decoding, and polar codes with successive cancellation (SC) as well as successive cancellation list (SCL) decoding. We present Monte-Carlo simulation results for the performance of such a multi-way relaying system, employing polar codes with different lengths and code rates. The results demonstrate a significant performance gain compared to an uncoded scheme, and show that the error performance with polar codes is comparable to LDPC codes with log-BP decoding, while the decoding complexity is much lower. Furthermore, we consider a hard threshold technique at user terminals for determining whether a relay transmits or not. This technique makes the system practical without increasing complexity, and can significantly reduce the degradation from intermittent relay transmissions associated with such a multi-way relaying protocol.
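The polar encoding behind SC/SCL decoding is the binary Kronecker-power transform x = u·F^⊗n with kernel F = [[1,0],[1,1]]. A minimal butterfly sketch of this transform (natural order, no bit-reversal permutation; the frozen-bit selection used for actual code construction is not shown):

```python
def polar_transform(u):
    """Apply x = u * F^(tensor n) over GF(2) via an in-place butterfly.

    F = [[1, 0], [1, 1]]; len(u) must be a power of two.
    """
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # (u1, u2) -> (u1 XOR u2, u2)
        step *= 2
    return x
```

Since F is involutory over GF(2), applying the transform twice recovers the input, which gives a quick self-check for an encoder implementation.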
{"title":"Polar Codes with Differential Phase Shift Keying for Selective Detect-and-Forward Multi-Way Relaying Systems","authors":"Ruilin Ji, Harry Leib","doi":"10.3390/network4030015","DOIUrl":"https://doi.org/10.3390/network4030015","url":null,"abstract":"Relaying with network coding forms a basis for a variety of collaborative communication systems. A linear block coding framework for multi-way relaying using network codes introduced in the literature shows great promise for understanding, analyzing, and designing such systems. So far, this technique has been used with low-density parity check (LDPC) codes and belief propagation (BP) decoding. Polar codes have drawn significant interest in recent years because of their low decoding complexity and good performance. Our paper considers the use of polar codes also as network codes with differential binary phase shift keying (DBPSK), bypassing the need for channel state estimation in multi-way selective detect-and-forward (DetF) cooperative relaying. We demonstrate that polar codes are suitable for such applications. The encoding and decoding complexity of such systems for linear block codes is analyzed using maximum likelihood (ML) decoding for LDPC codes with log-BP decoding and polar codes with successive cancellation (SC) as well as successive cancellation list (SCL) decoding. We present Monte-Carlo simulation results for the performance of such a multi-way relaying system, employing polar codes with different lengths and code rates. The results demonstrate a significant performance gain compared to an uncoded scheme. The simulation results show that the error performance of such a system employing polar codes is comparable to LDPC codes with log-BP decoding, while the decoding complexity is much lower. Furthermore, we consider a hard threshold technique at user terminals for determining whether a relay transmits or not. 
This technique makes the system practical without increasing the complexity and can significantly reduce the degradation from intermittent relay transmissions that is associated with such a multi-way relaying protocol.","PeriodicalId":19145,"journal":{"name":"Network","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141925667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evianita Dewi Fajrianti, Y. Panduman, Nobuo Funabiki, Amma Liesvarastranta Haz, Komang Candra Brata, S. Sukaridhoto
To enhance user experiences of reaching destinations in large, complex buildings, we have developed an indoor navigation system using Unity and a smartphone, called INSUS. It can reset the user location using a quick response (QR) code to reduce the user's loss of direction during navigation. However, this approach requires a number of QR code sheets to be prepared in the field, adding deployment effort. In this paper, we propose an alternative reset method that reduces this effort by recognizing naturally installed signs in the field using object detection and Optical Character Recognition (OCR) technologies. Many signs exist in a building, containing text such as room numbers, room names, and floor numbers. In the proposal, a sign image is taken with a smartphone, the sign is detected by YOLOv8, the text inside the sign is recognized by PaddleOCR, and the text is compared with each record in the Room Database using Levenshtein distance. For evaluation, we applied the proposal in two buildings at Okayama University, Japan. The results show that YOLOv8 achieved a mAP@0.5 of 0.995 and a mAP@0.5:0.95 of 0.978, and PaddleOCR extracted text in the sign images accurately, with an average character error rate (CER) below 10%. The combination of YOLOv8 and PaddleOCR decreases the execution time by 6.71 s compared to the previous method. The results confirmed the effectiveness of the proposal.
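The Levenshtein-distance matching of OCR output against the Room Database can be sketched as below; the room names are hypothetical, and a real deployment would likely normalize OCR output (whitespace, case) before matching:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute, cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def match_room(ocr_text, room_db):
    """Return the database record closest to the (possibly noisy) OCR text."""
    return min(room_db, key=lambda rec: levenshtein(ocr_text.lower(), rec.lower()))
```

For example, a misread such as "Ro0m D201" still resolves to the intended record because its edit distance to "Room D201" is smaller than to any other entry.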
{"title":"A User Location Reset Method through Object Recognition in Indoor Navigation System Using Unity and a Smartphone (INSUS)","authors":"Evianita Dewi Fajrianti, Y. Panduman, Nobuo Funabiki, Amma Liesvarastranta Haz, Komang Candra Brata, S. Sukaridhoto","doi":"10.3390/network4030014","DOIUrl":"https://doi.org/10.3390/network4030014","url":null,"abstract":"To enhance user experiences of reaching destinations in large, complex buildings, we have developed a indoor navigation system using Unity and a smartphone called INSUS. It can reset the user location using a quick response (QR) code to reduce the loss of direction of the user during navigation. However, this approach needs a number of QR code sheets to be prepared in the field, causing extra loads at implementation. In this paper, we propose another reset method to reduce loads by recognizing information of naturally installed signs in the field using object detection and Optical Character Recognition (OCR) technologies. A lot of signs exist in a building, containing texts such as room numbers, room names, and floor numbers. In the proposal, the Sign Image is taken with a smartphone, the sign is detected by YOLOv8, the text inside the sign is recognized by PaddleOCR, and it is compared with each record in the Room Database using Levenshtein distance. For evaluations, we applied the proposal in two buildings in Okayama University, Japan. The results show that YOLOv8 achieved mAP@0.5 0.995 and mAP@0.5:0.95 0.978, and PaddleOCR could extract text in the sign image accurately with an averaged CER% lower than 10%. The combination of both YOLOv8 and PaddleOCR decreases the execution time by 6.71s compared to the previous method. 
The results confirmed the effectiveness of the proposal.","PeriodicalId":19145,"journal":{"name":"Network","volume":"31 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141816946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Persistent security challenges in Industry 4.0 due to the limited resources of IoT devices necessitate innovative solutions. Addressing this, this study introduces the ASCON algorithm for lightweight authenticated encryption with associated data, enhancing confidentiality, integrity, and authenticity within IoT limitations. By integrating Digital Twins, the framework emphasizes the need for robust security in Industry 4.0, with ASCON ensuring secure data transmission and bolstering system resilience against cyber threats. Practical validation using the MQTT protocol confirms ASCON’s efficacy over AES-GCM, highlighting its potential for enhanced security in Industry 4.0. Future research should focus on optimizing ASCON for microprocessors and developing secure remote access tailored to resource-constrained devices, ensuring adaptability in the digital era.
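ASCON itself is not available in the Python standard library, so as a conceptual stand-in the AEAD guarantees the study relies on (confidentiality, integrity, and authenticity, with associated data bound to the ciphertext) can be illustrated with a generic encrypt-then-MAC construction; this sketch is not the ASCON permutation, only the interface shape of authenticated encryption:

```python
import hashlib
import hmac

def _keystream(key, nonce, n):
    """Derive n pseudo-random bytes from (key, nonce) with SHA-256 in counter mode."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key, nonce, ad, plaintext):
    """Encrypt-then-MAC: returns (ciphertext, tag); ad is authenticated but not encrypted."""
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, ad + nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_sealed(key, nonce, ad, ct, tag):
    """Verify the tag in constant time, then decrypt; raises on any tampering."""
    expect = hmac.new(key, ad + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

Changing even one byte of the associated data (e.g. a device identifier in an MQTT topic) invalidates the tag, which is the property that lets a Digital Twin reject spoofed telemetry.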
{"title":"Enhancing Resilience in Digital Twins: ASCON-Based Security Solutions for Industry 4.0","authors":"Mohammed El-Hajj, T. Gebremariam","doi":"10.3390/network4030013","DOIUrl":"https://doi.org/10.3390/network4030013","url":null,"abstract":"Persistent security challenges in Industry 4.0 due to the limited resources of IoT devices necessitate innovative solutions. Addressing this, this study introduces the ASCON algorithm for lightweight authenticated encryption with associated data, enhancing confidentiality, integrity, and authenticity within IoT limitations. By integrating Digital Twins, the framework emphasizes the need for robust security in Industry 4.0, with ASCON ensuring secure data transmission and bolstering system resilience against cyber threats. Practical validation using the MQTT protocol confirms ASCON’s efficacy over AES-GCM, highlighting its potential for enhanced security in Industry 4.0. Future research should focus on optimizing ASCON for microprocessors and developing secure remote access tailored to resource-constrained devices, ensuring adaptability in the digital era.","PeriodicalId":19145,"journal":{"name":"Network","volume":"102 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delay and Disruption Tolerant Networking (DTN) is a network architecture created primarily to overcome intermittent connectivity. There has been a great deal of research on this topic, from space communication to terrestrial applications. Since there are still many places on earth with no means of communication, the focus of this work is on the latter. A systematic literature review (SLR) was performed to identify the main issues and advances related to the implementation of DTN for terrestrial and TCP/IP applications, especially in places where telecommunication infrastructure is lacking. The result is a classification of papers based on key aspects such as architecture, performance, routing, and applications. A matrix of all the papers against these aspects is included to help researchers find missing pieces and concrete terrestrial solutions. The matrix uses three colors, green, yellow, and red, indicating high, medium, or low focus, respectively, so that specific papers are easy to identify.
{"title":"Delay and Disruption Tolerant Networking for Terrestrial and TCP/IP Applications: A Systematic Literature Review","authors":"Aris Castillo, C. Juiz, Belén Bermejo","doi":"10.3390/network4030012","DOIUrl":"https://doi.org/10.3390/network4030012","url":null,"abstract":"Delay and Disruption Tolerant Networking (DTN) is a network architecture created basically to overcome non-continuing connectivity. There has been a great deal of research on this topic, from space communication to terrestrial applications. Since there are still many places on earth where there is no means of communication, the focus of this work is on the latest. A systematic literature review (SLR) was performed to know the main issues and advances related to the implementation of DTN for terrestrial and TCP/IP applications, especially in places where telecommunication infrastructure is lacking. The result is a classification of papers based on key aspects, such as architecture, performance, routing, and applications. A matrix of all the papers about these aspects is included to help researchers find the missing piece and concrete terrestrial solutions. The matrix uses three colors, green, yellow, and red according to the focus, either high, medium, or low, so that it is easy to identify specific papers.","PeriodicalId":19145,"journal":{"name":"Network","volume":"210 6","pages":"237-259"},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141692653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radio Frequency Identification (RFID) technology plays a crucial role in various Internet of Things (IoT) applications, necessitating the integration of RFID systems into dense networks. However, the presence of numerous readers leads to collisions, degrading communication between readers and tags and compromising system performance. To tackle this challenge, researchers have proposed Medium Access Control (MAC) layer protocols employing different channel access methods. In this paper, we present a novel solution, the Distributed Time Slot Anti-Collision protocol (DTS-AC), which employs a new TDMA notification system to address Reader-to-Reader Interference (RRI), while incorporating FDMA-based frequency resource management to resolve Reader-to-Tag Interference (RTI) collision issues. Simulation results demonstrate that DTS-AC significantly improves performance in dense RFID networks by enhancing read rates, with scalability that grows with the number of readers, channels, and Time Slots (TSs). Moreover, the cost-effectiveness of DTS-AC facilitates efficient deployment in RFID networks, with due consideration of time delay and data sensitivity.
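The DTS-AC notification mechanism itself is not reproduced here, but the core TDMA/FDMA idea, assigning each reader a (time slot, channel) pair so that no two interfering readers share both dimensions, can be sketched with a hypothetical round-robin scheduler:

```python
from itertools import product

def assign_slots(readers, n_slots, n_channels):
    """Round-robin readers over the (slot, channel) grid.

    Collision-free whenever len(readers) <= n_slots * n_channels;
    extra readers reuse pairs and must defer to a later round.
    """
    grid = list(product(range(n_slots), range(n_channels)))
    return {r: grid[i % len(grid)] for i, r in enumerate(readers)}
```

With 4 slots and 2 channels, for instance, up to 8 readers can interrogate tags without reader-to-reader interference; the protocol's value lies in reaching such an assignment in a distributed way rather than via this centralized sketch.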
{"title":"A Hybrid Anti-Collision Protocol Based on Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA) for Radio Frequency Identification (RFID) Readers","authors":"Mourad Ouadou, Rachid Mafamane, K. Minaoui","doi":"10.3390/network4020011","DOIUrl":"https://doi.org/10.3390/network4020011","url":null,"abstract":"Radio Frequency Identification (RFID) technology plays a crucial role in various Internet of Things (IoT) applications, necessitating the integration of RFID systems into dense networks. However, the presence of numerous readers leads to collisions, degrading communication between readers and tags and compromising system performance. To tackle this challenge, researchers have proposed Medium Access Control (MAC) layer protocols employing different channel access methods. In this paper, we present a novel solution, the Distributed Time Slot Anti-Collision protocol (DTS-AC), which employs a new TDMA notification system to address Reader-to-Reader Interference (RRI), while incorporating FDMA-based frequency resource management to resolve Reader-to-Tag Interference (RTI) collision issues. Simulation results demonstrate that DTS-AC significantly improves performance in dense RFID networks by enhancing read rates, with scalability benefits based on the number of readers, channels, and Time Slots (TSs). 
Moreover, the cost-effectiveness of DTS-AC facilitates efficient deployment in RFID networks, emphasizing considerations of time delay and data sensitivity.","PeriodicalId":19145,"journal":{"name":"Network","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141348241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sadaf ul Zuhra, P. Chaporkar, A. Karandikar, H. V. Poor
The escalating demand for high-quality video streaming poses a major challenge for communication networks today. Catering to these bandwidth-hungry video streaming services places a huge burden on the limited spectral resources of communication networks, limiting the resources available for other services as well. Large volumes of video traffic can lead to severe network congestion, particularly during live streaming events, which require sending the same content to a large number of users simultaneously. For such applications, multicast transmission can effectively combat network congestion while meeting the demands of all the users by serving groups of users requesting the same content over shared spectral resources. Streaming services can further benefit from multi-connectivity, which allows users to receive content from multiple base stations simultaneously. Integrating multi-connectivity within multicast streaming can improve the system resource utilization while also providing seamless connectivity to multicast users. Toward this end, this work studied the impact of using multi-connectivity (MC) alongside wireless multicast for meeting the resource requirements of video streaming. Our findings show that MC substantially enhances the performance of multicast streaming, particularly benefiting cell-edge users who often experience poor channel conditions. We especially considered the number of users that can be simultaneously served by multi-connected multicast systems. It was observed that about 60% of the users that are left unserved under single-connectivity multicast are successfully served using the same resources by employing multi-connectivity in multicast transmissions. We prove that the optimal resource allocation problem for MC multicast is NP-hard. As a solution, we present a greedy approximation algorithm with an approximation factor of (1−1/e). Furthermore, we establish that no other polynomial-time algorithm can offer a superior approximation. 
To generate realistic video traffic patterns in our simulations, we made use of traces from actual videos. Our results clearly demonstrate that multi-connectivity leads to significant enhancements in the performance of multicast streaming.
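The paper's exact allocation objective is not restated here, but the (1−1/e) guarantee is characteristic of greedy maximization of a monotone submodular function; a max-coverage sketch, where each hypothetical candidate set is the group of users one base-station/resource assignment would serve:

```python
def greedy_max_coverage(candidate_sets, k):
    """Greedy (1 - 1/e)-approximation: pick up to k sets covering the most users.

    Each iteration takes the set with the largest marginal gain over
    what is already covered.
    """
    covered, chosen = set(), []
    for _ in range(k):
        best = max(candidate_sets, key=lambda s: len(s - covered))
        if not best - covered:  # no remaining marginal gain
            break
        chosen.append(best)
        covered |= best
    return chosen, covered
```

The marginal-gain rule is exactly why cell-edge users benefit: once the first assignment saturates the cell center, the next greedy pick is the one reaching the still-unserved users.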
{"title":"Multi-Connectivity for Multicast Video Streaming in Cellular Networks","authors":"Sadaf ul Zuhra, P. Chaporkar, A. Karandikar, H. V. Poor","doi":"10.3390/network4020009","DOIUrl":"https://doi.org/10.3390/network4020009","url":null,"abstract":"The escalating demand for high-quality video streaming poses a major challenge for communication networks today. Catering to these bandwidth-hungry video streaming services places a huge burden on the limited spectral resources of communication networks, limiting the resources available for other services as well. Large volumes of video traffic can lead to severe network congestion, particularly during live streaming events, which require sending the same content to a large number of users simultaneously. For such applications, multicast transmission can effectively combat network congestion while meeting the demands of all the users by serving groups of users requesting the same content over shared spectral resources. Streaming services can further benefit from multi-connectivity, which allows users to receive content from multiple base stations simultaneously. Integrating multi-connectivity within multicast streaming can improve the system resource utilization while also providing seamless connectivity to multicast users. Toward this end, this work studied the impact of using multi-connectivity (MC) alongside wireless multicast for meeting the resource requirements of video streaming. Our findings show that MC substantially enhances the performance of multicast streaming, particularly benefiting cell-edge users who often experience poor channel conditions. We especially considered the number of users that can be simultaneously served by multi-connected multicast systems. It was observed that about 60% of the users that are left unserved under single-connectivity multicast are successfully served using the same resources by employing multi-connectivity in multicast transmissions. 
We prove that the optimal resource allocation problem for MC multicast is NP-hard. As a solution, we present a greedy approximation algorithm with an approximation factor of (1−1/e). Furthermore, we establish that no other polynomial-time algorithm can offer a superior approximation. To generate realistic video traffic patterns in our simulations, we made use of traces from actual videos. Our results clearly demonstrate that multi-connectivity leads to significant enhancements in the performance of multicast streaming.","PeriodicalId":19145,"journal":{"name":"Network","volume":"72 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141008812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An innovative multi-level brain tumour (BT) classification approach based on deep learning is proposed in this article. Classification is accomplished using SpinalNet, whose structure is optimized by the hybrid Coot Flamingo Search Optimization Algorithm (CootFSOA). Further, a novel segmentation approach using CootFSOA-LinkNet is devised for isolating the tumour area from the brain image. The input MRI images are first fed into an Adaptive Kalman Filter (AKF) for denoising. In the segmentation process, LinkNet separates the tumour region from the MRI image, with CootFSOA used to optimize LinkNet's structure. Several features are then extracted from the segmented image, and the resulting feature vector is fed into SpinalNet to detect a BT; CootFSOA is used here as well to tune SpinalNet's hyperparameters and achieve high detection accuracy. If a tumour is detected, second-level classification with CootFSOA-SpinalNet assigns the input image to one of several types, such as glioma, pituitary tumour, or meningioma. The efficacy of CootFSOA-SpinalNet has been examined in terms of accuracy, True Positive Rate (TPR), and True Negative Rate (TNR), recording superior values of 0.926, 0.931, and 0.925, respectively.
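The AKF denoising stage adapts its noise statistics to the image; the underlying predict/update recursion can be conveyed on a 1-D signal with a fixed-parameter scalar Kalman filter (q and r below are illustrative constants, whereas an adaptive filter estimates them from the data):

```python
def kalman_denoise(samples, q=1e-3, r=0.5):
    """Scalar Kalman filter as a denoising pass over a noisy 1-D signal.

    q: process-noise variance (how much the true signal may drift).
    r: measurement-noise variance (how noisy each sample is).
    """
    x, p = samples[0], 1.0  # initial state estimate and its variance
    out = []
    for z in samples:
        p += q                # predict: uncertainty grows
        k = p / (p + r)       # Kalman gain: trust in the new measurement
        x += k * (z - x)      # update: blend prediction with measurement
        p *= (1.0 - k)        # posterior variance shrinks
        out.append(x)
    return out
```

Each output is a convex combination of the running estimate and the new sample, so the filtered sequence stays within the range of the input while suppressing high-frequency noise.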
{"title":"Multi-level brain tumor classification using hybrid coot flamingo search optimization Algorithm Enabled deep learning with MRI images.","authors":"Jayasri Kotti, Manikandan Moovendran, Mekala Kandasamy","doi":"10.1080/0954898X.2024.2343342","DOIUrl":"https://doi.org/10.1080/0954898X.2024.2343342","url":null,"abstract":"An innovative multi-level BT classification approach based on deep learning has been proposed in this article. Here, classification is accomplished using the SpinalNet, whose structure is optimized by the Hybrid Coot Flamingo Search Optimization Algorithm (CootFSOA). Further, a novel segmentation approach using CootFSOA-LinkNet is devised for isolating the tumour area from the brain image. Here, the input MRI images are fed into the Adaptive Kalman Filter (AKF) to denoise the image. In the segmentation process, LinkNet is used to separate the tumour region from the MRI image. CootFSOA is used to achieve structural optimization of LinkNet. The segmented image is then used to create several features, and the resulting feature vector is fed into SpinalNet to detect BT. CootFSOA is used in this instance as well to adjust the SpinalNet's hyperparameters and achieve high detection accuracy. If a tumour is detected, second-level classification is carried out by employing the CootFSOA-SpinalNet to classify the input image into several types, such as gliomas, pituitary tumours, and meningiomas. 
Moreover, the efficacy of the CootFSOA-SpinalNet has been examined considering accuracy, True Positive Rate (TPR), and True Negative Rate (TNR) and has recorded superior values of 0.926, 0.931, and 0.925, respectively.","PeriodicalId":19145,"journal":{"name":"Network","volume":"19 8","pages":"1-32"},"PeriodicalIF":0.0,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140652149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-25, DOI: 10.1080/0954898X.2024.2337801
Malathi Chilakalapudi, Sheela Jayachandran
Automatic detection of plant diseases is imperative for plant monitoring, because disease is one of the major concerns in the agricultural sector. Continuous monitoring can combat plant diseases, which contribute to production loss. Plant disease plays a significant role in the global production of agricultural goods and harms yield, resulting in economic, social, and environmental losses. Manually identifying disease symptoms on leaves is difficult and time-consuming: most symptoms appear on plant leaves, yet laboratory diagnosis by experts costs considerable money and time. Since plant and crop diseases are among the main factors affecting crop quality and quantity, classification, segmentation, and recognition of infected symptoms at the early phase of infection are indispensable. Precision agriculture employs a deep learning model to jointly address these issues. In this research, an efficient plant leaf disease segmentation and recognition model is introduced using an optimized deep learning technique.
{"title":"Optimized deep learning network for plant leaf disease segmentation and multi-classification using leaf images.","authors":"Malathi Chilakalapudi, Sheela Jayachandran","doi":"10.1080/0954898X.2024.2337801","DOIUrl":"https://doi.org/10.1080/0954898X.2024.2337801","url":null,"abstract":"Automatic detection of plant diseases is very imperative for monitoring the plants because they are one of the major concerns in the agricultural sector. Continuous monitoring can combat diseases of plants, which contribute to production loss. In the global production of agricultural goods, the disease of plants plays a significant role and harms yield, resulting in losses for the economy, society, and environment. It seems like a difficult and time-consuming task to manually identify diseased symptoms on leaves. The majority of disease symptoms are reflected in plant leaves, but experts in laboratories spend a lot of money and time diagnosing them. The majority of the features, which affect crop superiority and amount are plant or crop diseases. Therefore, classification, segmentation, and recognition of contaminated symptoms at the starting phase of infection is indispensable. Precision agriculture employs a deep learning model to jointly address these issues. In this research, an efficient disease of plant leaf segmentation and plant leaf disease recognition model is introduced using an optimized deep learning technique. 
As a result, maximum testing accuracy of 94.69%, sensitivity of 95.58%, and specificity of 92.90% were attained by the optimized deep learning method.","PeriodicalId":19145,"journal":{"name":"Network","volume":"43 37","pages":"1-34"},"PeriodicalIF":0.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140657270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-24, DOI: 10.1080/0954898X.2024.2338446
Min Gong
The proposed method uses neural network algorithms to assess college-level English translation quality, starting with data collection and preparation, developing a neural network model, and evaluating its performance against human assessment as a benchmark. The study employed both human evaluators and a neural network model to assess the quality of translated academic papers, revealing a strong correlation (0.84) between human and model assessments. The results show that the Neural Network-Based Model achieved higher scores in accuracy, precision, F-measure, and recall than Traditional Manual Evaluation and a Partial Automated Model, indicating superior performance in evaluating translation quality. These results highlight the transformative potential of neural network algorithms to provide consistency and transparency while reducing the inherent subjectivity of human evaluations, with significant implications for academia, where reliable translation quality evaluations are crucial for fostering cross-cultural knowledge exchange. However, challenges such as domain-specific adaptation require further investigation to maximize the effectiveness of this novel approach, ultimately enhancing the accessibility of academic content and promoting global academic discourse.
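The reported 0.84 is a Pearson correlation between human and model scores; the statistic takes only a few lines to compute (the score vectors below are made-up illustrations, not the study's data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 means the model ranks translations much as human raters do; near 0 it carries no agreement, and a coefficient of 0.84 sits firmly in the "strong positive" range.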
{"title":"The neural network algorithm-based quality assessment method for university English translation.","authors":"Min Gong","doi":"10.1080/0954898X.2024.2338446","DOIUrl":"https://doi.org/10.1080/0954898X.2024.2338446","url":null,"abstract":"These results highlight the transformative potential of neural network algorithms in providing consistency and transparency while reducing the inherent subjectivity in human evaluations, revolutionizing translation quality assessment in academia. The findings have significant implications for academia, as reliable translation quality evaluations are crucial for fostering cross-cultural knowledge exchange. However, challenges such as domain-specific adaptation require further investigation to improve and maximize the effectiveness of this novel approach, ultimately enhancing the accessibility of academic content and promoting global academic discourse. The proposed method involves using neural network algorithms for assessing college-level English translation quality, starting with data collection and preparation, developing a neural network model, and evaluating its performance using human assessment as a benchmark. The study employed both human evaluators and a neural network model to assess the quality of translated academic papers, revealing a strong correlation (0.84) between human and model assessments. These findings suggest the model's potential to enhance translation quality in academic settings, though additional research is needed to address certain limitations. 
The results show that the Neural Network-Based Model achieved higher scores in accuracy, precision, F-measure, and recall compared to Traditional Manual Evaluation and Partial Automated Model, indicating its superior performance in evaluating translation quality.","PeriodicalId":19145,"journal":{"name":"Network","volume":"14 2","pages":"1-13"},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140660521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}