Bridging the gap in online hate speech detection: A comparative analysis of BERT and traditional models for homophobic content identification on X/Twitter
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241346
Josh McGiff, Nikola S. Nikolov
Our study addresses a significant gap in online hate speech detection research by focusing on homophobia, an area often neglected in sentiment analysis. Utilising advanced sentiment analysis models, particularly BERT, alongside traditional machine learning methods, we developed a nuanced approach to identifying homophobic content on X/Twitter. This research is pivotal given the persistent underrepresentation of homophobia in detection models. Our findings reveal that while BERT outperforms traditional methods, the choice of validation technique can impact model performance, underscoring the importance of contextual understanding in detecting nuanced hate speech. By releasing the largest open-source labelled English dataset for homophobia detection known to us, together with an analysis of various models' performance and our strongest BERT-based model, we aim to enhance online safety and inclusivity. Future work will extend to broader LGBTQIA+ hate speech detection, addressing the challenges of sourcing diverse datasets. Through this endeavour, we contribute to the larger effort against online hate, advocating for a more inclusive digital landscape. Our study not only offers insights into the effective detection of homophobic content, improving on previous research results, but also lays groundwork for future advancements in hate speech analysis.
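A minimal sketch of the kind of BERT fine-tuning pipeline the abstract describes, using the Hugging Face transformers library; the checkpoint name, label scheme, and example inputs are illustrative assumptions, not the authors' published configuration.

```python
# Sketch: fine-tuning BERT for binary homophobia detection.
# Checkpoint, labels, and hyperparameters are assumptions for illustration.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["example post one", "example post two"]   # placeholder tweets
labels = torch.tensor([0, 1])                      # 0 = non-homophobic, 1 = homophobic

enc = tokenizer(texts, padding=True, truncation=True,
                max_length=128, return_tensors="pt")
outputs = model(**enc, labels=labels)
outputs.loss.backward()                            # an optimiser step would follow
```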
{"title":"Bridging the gap in online hate speech detection: A comparative analysis of BERT and traditional models for homophobic content identification on X/Twitter","authors":"Josh McGiff, Nikola S. Nikolov","doi":"10.54254/2755-2721/64/20241346","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241346","url":null,"abstract":"Our study addresses a significant gap in online hate speech detection research by focusing on homophobia, an area often neglected in sentiment analysis research. Utilising advanced sentiment analysis models, particularly BERT, and traditional machine learning methods, we developed a nuanced approach to identify homophobic content on X/Twitter. This research is pivotal due to the persistent underrepresentation of homophobia in detection models. Our findings reveal that while BERT outperforms traditional methods, the choice of validation technique can impact model performance. This underscores the importance of contextual understanding in detecting nuanced hate speech. By releasing the largest open-source labelled English dataset for homophobia detection known to us, an analysis of various models' performance and our strongest BERT-based model, we aim to enhance online safety and inclusivity. Future work will extend to broader LGBTQIA+ hate speech detection, addressing the challenges of sourcing diverse datasets. Through this endeavour, we contribute to the larger effort against online hate, advocating for a more inclusive digital landscape. Our study not only offers insights into the effective detection of homophobic content by improving on previous research results, but it also lays groundwork for future advancements in hate speech analysis.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"54 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140975177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of deep learning models based on Chest X-ray image classification
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241352
Yiqing Zhang, Yukun Xu, Zhengyang Kong, Zheqi Hu
Pneumonia is a common respiratory disease characterized by inflammation in the lungs, making accurate diagnosis and timely treatment essential. Despite some progress in medical image analysis, overfitting and low efficiency have been observed in practical applications. This paper leverages image data augmentation methods to mitigate overfitting and achieve lightweight, highly accurate automatic detection of lung infections in X-ray images. We trained three models, VGG16, MobileNetV2, and InceptionV3, on both augmented and unaugmented image datasets. Comparative results demonstrate that the augmented VGG16 model (VGG16-Augmentation) achieves an average accuracy of 96.8%. While MobileNetV2-Augmentation is slightly less accurate than VGG16-Augmentation, it still achieves an average prediction accuracy of 94.2% with only one ninth as many parameters as VGG16-Augmentation, which is particularly beneficial for rapid screening of pneumonia patients and more efficient real-time detection scenarios. Through this study, we showcase the potential of image data augmentation methods in pneumonia detection and provide performance comparisons among the models. These findings offer valuable insights for the rapid diagnosis and screening of pneumonia patients, and useful guidance for future research and for efficient real-time monitoring of lung conditions in practical healthcare settings.
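The following sketch illustrates the augmentation-plus-transfer-learning setup described above using Keras; the directory layout, augmentation ranges, and training settings are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: augmenting chest X-ray images and fine-tuning VGG16 with Keras.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Assumed augmentation policy; the paper's exact transforms are not given here.
augment = ImageDataGenerator(rescale=1.0 / 255, rotation_range=10,
                             width_shift_range=0.1, height_shift_range=0.1,
                             zoom_range=0.1, horizontal_flip=True)
train = augment.flow_from_directory("chest_xray/train",       # assumed layout
                                    target_size=(224, 224),
                                    batch_size=32, class_mode="binary")

base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))
base.trainable = False                  # train only the classification head
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)
```

The same head-swapping pattern applies to MobileNetV2 and InceptionV3, which is what makes the parameter-count comparison in the abstract straightforward.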
{"title":"Comparison of deep learning models based on Chest X-ray image classification","authors":"Yiqing Zhang, Yukun Xu, Zhengyang Kong, Zheqi Hu","doi":"10.54254/2755-2721/64/20241352","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241352","url":null,"abstract":"Pneumonia is a common respiratory disease characterized by inflammation in the lungs, emphasizing the importance of accurate diagnosis and timely treatment. Despite some progress in medical image segmentation, overfitting and low efficiency have been observed in practical applications. This paper aims to leverage image data augmentation methods to mitigate overfitting and achieve lightweight and highly accurate automatic detection of lung infections in X-ray images. We trained three models, namely VGG16, MobileNetV2, and InceptionV3, using both augmented and unaugmented image datasets. Comparative results demonstrate that the augmented VGG16 model (VGG16-Augmentation) achieves an average accuracy of 96.8%. While the accuracy of MobileNetV2-Augmentation is slightly lower than that of VGG16-Augmentation, it still achieves an average prediction accuracy of 94.2% and the number of model parameters is only 1/9 of VGG16-augmentation. This is particularly beneficial for rapid screening of pneumonia patients and more efficient real-time detection scenarios. Through this study, we showcase the potential application of image data augmentation methods in pneumonia detection and provide performance comparisons among different models. These findings offer valuable insights for the rapid diagnosis and screening of pneumonia patients and provide useful guidance for future research and the implementation of efficient real-time monitoring of lung conditions in practical healthcare settings.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"77 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140973856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of computer networks and artificial neural networks for an AI-based network operator
Binbin Wu, Jingyu Xu, Yifan Zhang, Bo Liu, Yulu Gong, Jiaxin Huang
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241370
This paper proposes an integrated approach combining computer networks and artificial neural networks to construct an intelligent network operator that functions as an AI model. State information from computer networks is transformed into embedded vectors, enabling the operator to efficiently recognize different pieces of information and accurately output appropriate operations for the computer network at each step. The operator has undergone comprehensive testing, achieving a 100% accuracy rate and thus eliminating operational risks. Additionally, a simple computer network simulator is created and encapsulated into training and testing environment components, automating the data collection, training, and testing processes. This abstract outlines the core contributions of the paper while highlighting the innovative methodology employed in the development and validation of the AI-based network operator.
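As a rough illustration of the embedding idea described above, the sketch below maps discrete network-state fields to embedding vectors and scores candidate operations with a small network; all dimensions, field names, and the action set are invented, since the paper's actual state encoding is not reproduced here.

```python
# Sketch: embed network-state fields and score candidate operations.
import torch
import torch.nn as nn

NUM_NODE_STATES, NUM_LINK_STATES, NUM_ACTIONS, EMB = 10, 6, 4, 16  # invented

class NetworkOperator(nn.Module):
    def __init__(self):
        super().__init__()
        self.node_emb = nn.Embedding(NUM_NODE_STATES, EMB)
        self.link_emb = nn.Embedding(NUM_LINK_STATES, EMB)
        self.head = nn.Sequential(nn.Linear(2 * EMB, 64), nn.ReLU(),
                                  nn.Linear(64, NUM_ACTIONS))

    def forward(self, node_state, link_state):
        # Concatenate the embedded state fields, then score each operation.
        x = torch.cat([self.node_emb(node_state),
                       self.link_emb(link_state)], dim=-1)
        return self.head(x)             # logits over candidate operations

op = NetworkOperator()
logits = op(torch.tensor([3]), torch.tensor([1]))
action = logits.argmax(dim=-1)          # operation chosen for this step
```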
{"title":"Integration of computer networks and artificial neural networks for an AI-based network operator","authors":"Binbin Wu, Jingyu Xu, Yifan Zhang, Bo Liu, Yulu Gong, Jiaxin Huang","doi":"10.54254/2755-2721/64/20241370","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241370","url":null,"abstract":"This paper proposes an integrated approach combining computer networks and artificial neural networks to construct an intelligent network operator, functioning as an AI model. State information from computer networks is transformed into embedded vectors, enabling the operator to efficiently recognize different pieces of information and accurately output appropriate operations for the computer network at each step. The operator has undergone comprehensive testing, achieving a 100% accuracy rate, thus eliminating operational risks. Additionally, a simple computer network simulator is created and encapsulated into training and testing environment components, enabling automation of the data collection, training, and testing processes. This abstract outline the core contributions of the paper while highlighting the innovative methodology employed in the development and validation of the AI-based network operator.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"12 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140976704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection and classification of wilting status in leaf images based on VGG16 with EfficientNet V3 algorithm
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241347
Qixiang Li, Yiming Ma, Ziyang Luo, Ying Tian
This paper explores the importance of leaf wilting status detection and classification in agriculture, to meet the demand for monitoring and diagnosing plant growth conditions. Comparing the performance of the traditional VGG16 image classification algorithm with the popular EfficientNet V3 algorithm on leaf wilting detection and classification, we find that EfficientNet V3 converges faster and achieves higher accuracy. As training proceeds, both algorithms show gradually converging loss and steadily increasing accuracy. The best training results show that VGG16 reaches a minimum loss of 0.288 and a maximum accuracy of 96% at the 19th epoch, while EfficientNet V3 reaches a minimum loss of 0.331 and a maximum accuracy of 97.5% at the 20th epoch. These findings indicate that EfficientNet V3 performs better in leaf wilting detection, providing a more accurate and efficient means of plant health monitoring for agricultural production, which is of great research significance.
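A hedged sketch of the two-backbone comparison described above; note that tf.keras does not ship a model named "EfficientNet V3", so EfficientNetV2B0 stands in here as the closest available variant, and the dataset layout and class count are assumptions.

```python
# Sketch: build two classifiers with different backbones for comparison.
import tensorflow as tf
from tensorflow.keras.applications import VGG16, EfficientNetV2B0

def build(backbone):
    base = backbone(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(224, 224, 3))
    base.trainable = False              # compare backbones as frozen extractors
    model = tf.keras.Sequential(
        [base, tf.keras.layers.Dense(2, activation="softmax")])  # wilted/healthy
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

vgg, effnet = build(VGG16), build(EfficientNetV2B0)
# Both models would then be fit on the same leaf-image splits and their
# loss/accuracy curves compared across 20 epochs, as the abstract describes.
```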
{"title":"Detection and classification of wilting status in leaf images based on VGG16 with EfficientNet V3 algorithm","authors":"Qixiang Li, Yiming Ma, Ziyang Luo, Ying Tian","doi":"10.54254/2755-2721/64/20241347","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241347","url":null,"abstract":"The aim of this paper is to explore the importance of leaf wilting status detection and classification in agriculture to meet the demand for monitoring and diagnosing plant growth conditions. By comparing the performance of the traditional VGG16 image classification algorithm and the popular EfficientNet V3 algorithm in leaf image wilting status detection and classification, it is found that EfficientNet V3 has faster convergence speed and higher accuracy. As the model training process proceeds, both algorithms show a trend of gradual convergence of Loss and Accuracy and increasing accuracy. The best training results show that VGG16 reaches a minimum loss of 0.288 and a maximum accuracy of 96% at the 19th epoch, while EfficientNet V3 reaches a minimum loss of 0.331 and a maximum accuracy of 97.5% at the 20th epoch. These findings reveal that EfficientNet V3 has a better performance in leaf wilting status detection, which provides a more accurate and efficient means of plant health monitoring for agricultural production and is of great research significance.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"45 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140975739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predictive optimization of DDoS attack mitigation in distributed systems using machine learning
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241350
Baoming Wang, Yuhang He, Zuwei Shui, Qi Xin, Han Lei
In recent years, cloud computing has been widely used. This paper proposes an innovative approach that applies machine learning optimization techniques to complex problems in cloud computing resource scheduling and management. Through an in-depth study of challenges such as low resource utilization and unbalanced load in the cloud environment, it proposes a comprehensive solution combining optimization methods such as deep learning and genetic algorithms to improve system performance and efficiency, bringing new breakthroughs and progress to cloud computing resource management. Rational allocation of resources plays a crucial role in cloud computing: the cloud computing center has limited cloud resources, users arrive in sequence, and each user requests a certain number of cloud resources from the center at a specific time.
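To make the genetic-algorithm component concrete, here is a toy sketch that evolves admission decisions for sequentially arriving user requests against a fixed resource capacity; the demands, utilities, and GA settings are all invented for illustration and do not reflect the paper's actual formulation.

```python
# Toy genetic algorithm for admitting user requests under a capacity limit.
import random

CAPACITY, N_USERS = 100, 12
demand = [random.randint(5, 30) for _ in range(N_USERS)]   # resource units requested
value = [random.randint(1, 10) for _ in range(N_USERS)]    # utility of serving user

def fitness(genome):                    # genome[i] = 1 if user i is admitted
    used = sum(d for d, g in zip(demand, genome) if g)
    if used > CAPACITY:
        return 0                        # infeasible allocations score zero
    return sum(v for v, g in zip(value, genome) if g)

pop = [[random.randint(0, 1) for _ in range(N_USERS)] for _ in range(50)]
for _ in range(100):                    # evolve: select, crossover, mutate
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # elitist selection
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_USERS)
        child = a[:cut] + b[cut:]       # single-point crossover
        if random.random() < 0.1:
            child[random.randrange(N_USERS)] ^= 1   # bit-flip mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```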
{"title":"Predictive optimization of DDoS attack mitigation in distributed systems using machine learning","authors":"Baoming Wang, Yuhang He, Zuwei Shui, Qi Xin, Han Lei","doi":"10.54254/2755-2721/64/20241350","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241350","url":null,"abstract":"In recent years, cloud computing has been widely used. This paper proposes an innovative approach to solve complex problems in cloud computing resource scheduling and management using machine learning optimization techniques. Through in-depth study of challenges such as low resource utilization and unbalanced load in the cloud environment, this study proposes a comprehensive solution, including optimization methods such as deep learning and genetic algorithm, to improve system performance and efficiency, and thus bring new breakthroughs and progress in the field of cloud computing resource management.Rational allocation of resources plays a crucial role in cloud computing. In the resource allocation of cloud computing, the cloud computing center has limited cloud resources, and users arrive in sequence. Each user requests the cloud computing center to use a certain number of cloud resources at a specific time.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"4 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140975262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DOA estimation technology based on array signal processing nested array
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241345
Muye Sun, Tianyu Duanmu
Research on non-uniform arrays has long been a focus for scholars worldwide. Part of this research concentrates on existing non-uniform array geometries, while another part focuses on optimizing the positions of array elements or extending array structures. There are also studies on one-dimensional and two-dimensional DOA estimation algorithms based on array spatial geometry, although open issues remain. As long as there is demand for spatial-domain target positioning, the development and refinement of non-uniform arrays will remain an active research direction. Nested arrays are a distinctive type of non-uniform array whose special geometry significantly increases the degrees of freedom and enhances direction estimation performance in underdetermined scenarios, where the number of sources exceeds the number of sensors. Compared with other algorithms, the one-dimensional DOA estimation algorithm based on spatial smoothing reduces algorithmic complexity, improves estimation accuracy on nested arrays, and can effectively handle underdetermined source estimation. The DFT algorithm it employs not only significantly improves angular estimation performance but also reduces computational complexity, using the full degrees of freedom to minimize aperture loss. Furthermore, the DFT-MUSIC method greatly reduces computational complexity while performing very close to the spatial smoothing MUSIC algorithm. The sparse arrays it utilizes, including minimum-redundancy arrays, coprime arrays, and nested arrays, constitute a new class of array geometry. Compared with traditional uniform linear arrays, sparse arrays increase the degrees of freedom, enable source angle estimation in underdetermined conditions, and enhance angular estimation performance.
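The degrees-of-freedom gain from nesting can be illustrated numerically: the sketch below builds a two-level nested array and counts the distinct lags in its difference coarray. The array sizes N1 = N2 = 3 are an arbitrary illustrative choice, not taken from the paper.

```python
# Sketch: difference coarray of a two-level nested array.
import numpy as np

N1, N2 = 3, 3
level1 = np.arange(1, N1 + 1)                 # inner ULA at unit spacing: 1..N1
level2 = (N1 + 1) * np.arange(1, N2 + 1)      # outer ULA at spacing (N1 + 1)
positions = np.concatenate([level1, level2])  # {1, 2, 3, 4, 8, 12}

diffs = np.unique((positions[:, None] - positions[None, :]).ravel())
print(len(positions), "physical sensors ->", len(diffs), "distinct lags")
# 6 sensors yield 2*N2*(N1+1) - 1 = 23 consecutive lags (-11..11), which is
# why spatial smoothing MUSIC on the coarray can resolve more sources than
# there are physical sensors.
```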
{"title":"DOA estimation technology based on array signal processing nested array","authors":"Muye Sun, Tianyu Duanmu","doi":"10.54254/2755-2721/64/20241345","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241345","url":null,"abstract":"Research on non-uniform arrays has always been a focus of attention for scholars both domestically and internationally. Part of the research concentrates on existing non-uniform arrays, while another part focuses on optimizing the position of array elements or expanding the structure. Of course, there are also studies on one-dimensional and two-dimensional DOA estimation algorithms based on array spatial shapes, despite some issues. As long as there is a demand for spatial domain target positioning, the development and refinement of non-uniform arrays will continue to be a hot research direction. Nested arrays represent a unique type of heterogeneous array, whose special geometric shape significantly increases degrees of freedom and enhances estimation performance for directional information of undetermined signal sources. Compared to other algorithms, the one-dimensional DOA estimation algorithm based on spatial smoothing simplifies algorithm complexity, improves estimation accuracy under nested arrays, and can effectively handle the estimation of signal sources under uncertain conditions. The DFT algorithm it employs not only significantly improves angular estimation performance but also reduces operational complexity, utilizing full degrees of freedom to minimize aperture loss. Furthermore, the DFT-MUSIC method greatly reduces algorithmic computational complexity while performing very closely to the spatial smoothing MUSIC algorithm. The sparse arrays it utilizes, including minimum redundancy arrays, coprime arrays, and nested arrays, are a new type of array. Sparse arrays can increase degrees of freedom compared to traditional uniform linear arrays and solve the estimation of signal source angles under uncertain conditions, while also enhancing algorithm angular estimation performance.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"74 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140973874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning vulnerability analysis against adversarial attacks
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241377
Chi Cheng
In the age of artificial intelligence advancements, deep learning models are essential for applications ranging from image recognition to natural language processing. Despite their capabilities, they are vulnerable to adversarial examples: deliberately modified inputs designed to cause errors. This paper explores these vulnerabilities, attributing them to the complexity of neural networks, the diversity of training data, and the training methodologies, and demonstrates how these aspects contribute to models' susceptibility to adversarial attacks. Through case studies and empirical evidence, the paper highlights instances where advanced models were misled, showcasing the challenges of defending against these threats. It also critically evaluates mitigation strategies, including adversarial training and regularization, assessing their efficacy and limitations. The study underlines the importance of developing AI systems that are not only intelligent but also robust against adversarial tactics, aiming to enhance the resilience of future deep learning models to such vulnerabilities.
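A standard example of the attack family discussed above is the fast gradient sign method (FGSM); the sketch below applies it to a pretrained torchvision model, with the input image, label, and epsilon as placeholders rather than values from the paper.

```python
# Sketch: FGSM adversarial example against a pretrained classifier.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder image
y = torch.tensor([0])                                # assumed true label

loss = F.cross_entropy(model(x), y)
loss.backward()                                      # gradient w.r.t. the input

epsilon = 0.03                                       # small perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)    # step in loss-increasing direction
# model(x_adv) often disagrees with model(x) despite a near-imperceptible change.
```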
{"title":"Deep learning vulnerability analysis against adversarial attacks","authors":"Chi Cheng","doi":"10.54254/2755-2721/64/20241377","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241377","url":null,"abstract":"In the age of artificial intelligence advancements, deep learning models are essential for applications ranging from image recognition to natural language processing. Despite their capabilities, they're vulnerable to adversarial examplesdeliberately modified inputs to cause errors. This paper explores these vulnerabilities, attributing them to the complexity of neural networks, the diversity of training data, and the training methodologies. It demonstrates how these aspects contribute to the models' susceptibility to adversarial attacks. Through case studies and empirical evidence, the paper highlights instances where advanced models were misled, showcasing the challenges in defending against these threats. It also critically evaluates mitigation strategies, including adversarial training and regularization, assessing their efficacy and limitations. The study underlines the importance of developing AI systems that are not only intelligent but also robust against adversarial tactics, aiming to enhance future deep learning models' resilience to such vulnerabilities.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"23 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140972937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent medical detection and diagnosis assisted by deep learning
Jingxiao Tian, Hanzhe Li, Yaqian Qi, Xiangxiang Wang, Yuan Feng
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241356
The integration of artificial intelligence (AI) in healthcare has led to the development of intelligent auxiliary diagnosis systems, enhancing diagnostic capabilities across various medical domains. These AI-assisted systems leverage deep learning algorithms to aid healthcare professionals in disease screening, localization of focal areas, and treatment plan selection. With policies emphasizing innovation in medical AI technology, particularly in China, AI-assisted diagnosis systems have emerged as valuable tools for improving diagnostic accuracy and efficiency. These systems, categorized into image-assisted and text-assisted modes, utilize medical imaging data and clinical diagnosis records to provide diagnostic support. In the context of lung cancer diagnosis and treatment, AI-assisted integrated solutions show promise in early detection and treatment decision support, particularly in the detection of pulmonary nodules. Overall, the integration of AI in healthcare holds significant potential for improving diagnostic accuracy, efficiency, and patient outcomes, contributing to advancements in medical practice.
{"title":"Intelligent medical detection and diagnosis assisted by deep learning","authors":"Jingxiao Tian, Hanzhe Li, Yaqian Qi, Xiangxiang Wang, Yuan Feng","doi":"10.54254/2755-2721/64/20241356","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241356","url":null,"abstract":"The integration of artificial intelligence (AI) in healthcare has led to the development of intelligent auxiliary diagnosis systems, enhancing diagnostic capabilities across various medical domains. These AI-assisted systems leverage deep learning algorithms to aid healthcare professionals in disease screening, localization of focal areas, and treatment plan selection. With policies emphasizing innovation in medical AI technology, particularly in China, AI-assisted diagnosis systems have emerged as valuable tools in improving diagnostic accuracy and efficiency. These systems, categorized into image-assisted and text-assisted modes, utilize medical imaging data and clinical diagnosis records to provide diagnostic support. In the context of lung cancer diagnosis and treatment, AI-assisted integrated solutions show promise in early detection and treatment decision support, particularly in the detection of pulmonary nodules. Overall, the integration of AI in healthcare holds significant potential for improving diagnostic accuracy, efficiency, and patient outcomes, contributing to advancements in medical practice.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"64 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140976160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A road semantic segmentation system for remote sensing images based on deep learning
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241342
Shutong Xie
With the rapid development of deep learning in computer science in China, many academic fields have experienced its power and efficiency and have begun to integrate it into their own research. In the field of remote sensing specifically, the challenge of extracting roads from raw images can be effectively addressed with deep learning technology. High-precision road extraction not only helps scientists update their road maps in time but also speeds up the digitization of roads in big cities. Until now, however, deep learning models have not matched manual road extraction in accuracy, because they cannot extract roads reliably in complex scenes such as villages. This study trained a new road extraction model, based on the UNet model, using only datasets from large cities, and achieves high extraction precision for roads in big cities. Undoubtedly, this can lead to over-fitting, but the resulting accuracy ensures that the model's road extraction ability is well utilized in large-city settings, helping researchers update road maps more conveniently and quickly.
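A compact U-Net-style model of the kind the study builds on might look like the following sketch; the depth and channel counts are trimmed for illustration and do not reproduce the study's exact architecture.

```python
# Sketch: a miniature U-Net for per-pixel (binary) road segmentation.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)        # 64 = 32 skip channels + 32 upsampled
        self.out = nn.Conv2d(32, 1, 1)  # per-pixel road logit

    def forward(self, x):
        e1 = self.enc1(x)               # encoder level 1 (full resolution)
        e2 = self.enc2(self.pool(e1))   # encoder level 2 (half resolution)
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.out(d)

pred = MiniUNet()(torch.rand(1, 3, 256, 256))   # (1, 1, 256, 256) logits
```

The skip connection is the design point that matters for road extraction: it carries fine spatial detail past the bottleneck so thin road pixels survive downsampling.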
{"title":"A road semantic segmentation system for remote sensing images based on deep learning","authors":"Shutong Xie","doi":"10.54254/2755-2721/64/20241342","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241342","url":null,"abstract":"With the rapid development of deep learning of computer science nowadays in China, many fields in academic research have experienced the powerful and efficient advantages of deep learning and have begun to integrate it with their own research. To be specific, in the field of remote sensing, the challenge of road extraction from the original images can be effectively solved by using deep learning technology. Getting a high precision in road extraction can not only help scientists to update their road map in time but also speed up the process of digitization of roads in big cities. However, until now, compared to manual road extraction, the accuracy is not high enough to meet the needs of high-precision road extraction for the deep learning model because the model cannot extract the roads exactly in complex situations such as villages. However, this study trained a new road extraction model based on UNet model by using only datasets from large cities and can get a pretty high precision in extraction for roads in big cities. Undoubtedly, this can lead to over-fitting, but its unique high accuracy ensures that the model's ability to extract roads can be well utilized under the situations of large cities, helping researchers to update road maps more conveniently and quickly in large cities.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"33 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140975549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precision gene editing using deep learning: A case study of the CRISPR-Cas9 editor
Pub Date: 2024-05-15 | DOI: 10.54254/2755-2721/64/20241357
Zhengrong Cui, Luqi Lin, Yanqi Zong, Yizhi Chen, Sihao Wang
This article reviews application cases of CRISPR/Cas9 gene editing technology, as well as its challenges and limitations. First, the application of deep learning to predicting sgRNA targeting efficiency in CRISPR/Cas9 systems is introduced, and the steps of data acquisition, pre-processing, and feature engineering are described in detail. The article then discusses the non-specific cutting and cytotoxicity challenges of CRISPR/Cas9 technology, as well as strategies for addressing them with deep learning techniques. Finally, the paper emphasizes the importance of deep learning techniques in mitigating the cytotoxicity problems of CRISPR/Cas9 technology, and points out that such models can improve the safety and efficiency of gene editing experiments and provide important reference and guidance for research in related fields.
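As a sketch of the prediction task described above, the code below one-hot encodes a 20-nt sgRNA sequence and regresses a targeting-efficiency score with a small 1-D CNN; the sequence length, architecture, and guide sequence are illustrative assumptions rather than the reviewed models' details.

```python
# Sketch: one-hot encode an sgRNA and predict an efficiency score with a CNN.
import numpy as np
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    # Channel-first (4, len(seq)) encoding: one row per nucleotide.
    x = np.zeros((4, len(seq)), dtype=np.float32)
    for i, b in enumerate(seq):
        x[BASES[b], i] = 1.0
    return x

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),  # local sequence motifs
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 1),                   # predicted efficiency score (regression)
)

sgRNA = "GACGCATAAAGATGAGACGC"          # placeholder 20-nt guide
score = model(torch.from_numpy(one_hot(sgRNA)).unsqueeze(0))
```

Training such a model against measured cleavage efficiencies is what the data acquisition, pre-processing, and feature engineering steps in the review feed into.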
{"title":"Precision gene editing using deep learning: A case study of the CRISPR-Cas9 editor","authors":"Zhengrong Cui, Luqi Lin, Yanqi Zong, Yizhi Chen, Sihao Wang","doi":"10.54254/2755-2721/64/20241357","DOIUrl":"https://doi.org/10.54254/2755-2721/64/20241357","url":null,"abstract":"This article reviews the application cases of CRISPR/Cas9 gene editing technology, as well as the challenges and limitations. Firstly, the application of CRISPR/Cas9 technology based on deep learning in predicting the targeting efficiency of sgRNA is introduced, and the steps of data acquisition, pre-processing and feature engineering are described in detail. It then discusses the non-specific cutting and cytotoxicity challenges of CRISPR/Cas9 technology, as well as strategies for solving these challenges using deep learning techniques. Finally, the paper emphasizes the importance of deep learning techniques to mitigate the cytotoxicity problems in CRISPR/Cas9 technology, and points out that the establishment of these models can improve the safety and efficiency of gene editing experiments, and provide important reference and guidance for research in related fields.","PeriodicalId":350976,"journal":{"name":"Applied and Computational Engineering","volume":"117 37","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140977921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}