
Latest publications in Applied and Computational Engineering

Bridging the gap in online hate speech detection: A comparative analysis of BERT and traditional models for homophobic content identification on X/Twitter
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241346
Josh McGiff, Nikola S. Nikolov
Our study addresses a significant gap in online hate speech detection research by focusing on homophobia, an area often neglected in sentiment analysis research. Utilising advanced sentiment analysis models, particularly BERT, and traditional machine learning methods, we developed a nuanced approach to identify homophobic content on X/Twitter. This research is pivotal due to the persistent underrepresentation of homophobia in detection models. Our findings reveal that while BERT outperforms traditional methods, the choice of validation technique can impact model performance. This underscores the importance of contextual understanding in detecting nuanced hate speech. By releasing the largest open-source labelled English dataset for homophobia detection known to us, an analysis of various models' performance and our strongest BERT-based model, we aim to enhance online safety and inclusivity. Future work will extend to broader LGBTQIA+ hate speech detection, addressing the challenges of sourcing diverse datasets. Through this endeavour, we contribute to the larger effort against online hate, advocating for a more inclusive digital landscape. Our study not only offers insights into the effective detection of homophobic content by improving on previous research results, but it also lays groundwork for future advancements in hate speech analysis.
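The paper's finding that the choice of validation technique can affect reported performance can be illustrated with the kind of traditional baseline BERT is compared against. Below is a minimal, hypothetical sketch (TF-IDF features plus logistic regression on toy placeholder texts, not the paper's released dataset), scoring the same pipeline under two cross-validation settings:

```python
# Hypothetical sketch of a traditional-ML baseline of the kind compared
# against BERT; the toy texts below are placeholders, not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

texts = (
    ["this group of people disgusts me and should be banned"] * 5
    + ["what a lovely sunny day for a walk in the park"] * 5
)
labels = [1] * 5 + [0] * 5  # 1 = hateful, 0 = neutral

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# The same pipeline evaluated under two validation schemes can report
# different scores, echoing the point that the validation technique matters.
scores_3fold = cross_val_score(clf, texts, labels, cv=3)
scores_5fold = cross_val_score(clf, texts, labels, cv=5)
```

Comparing the two score arrays fold by fold is one simple way to surface the validation-scheme sensitivity the abstract describes.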
Citations: 0
Comparison of deep learning models based on Chest X-ray image classification
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241352
Yiqing Zhang, Yukun Xu, Zhengyang Kong, Zheqi Hu
Pneumonia is a common respiratory disease characterized by inflammation in the lungs, emphasizing the importance of accurate diagnosis and timely treatment. Despite some progress in medical image segmentation, overfitting and low efficiency have been observed in practical applications. This paper aims to leverage image data augmentation methods to mitigate overfitting and achieve lightweight and highly accurate automatic detection of lung infections in X-ray images. We trained three models, namely VGG16, MobileNetV2, and InceptionV3, using both augmented and unaugmented image datasets. Comparative results demonstrate that the augmented VGG16 model (VGG16-Augmentation) achieves an average accuracy of 96.8%. While the accuracy of MobileNetV2-Augmentation is slightly lower than that of VGG16-Augmentation, it still achieves an average prediction accuracy of 94.2% and the number of model parameters is only 1/9 of VGG16-augmentation. This is particularly beneficial for rapid screening of pneumonia patients and more efficient real-time detection scenarios. Through this study, we showcase the potential application of image data augmentation methods in pneumonia detection and provide performance comparisons among different models. These findings offer valuable insights for the rapid diagnosis and screening of pneumonia patients and provide useful guidance for future research and the implementation of efficient real-time monitoring of lung conditions in practical healthcare settings.
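As a hedged illustration of the augmentation idea used to mitigate overfitting (not the paper's actual pipeline), simple transformations such as a random horizontal flip and a small brightness jitter can be applied to each training image:

```python
import numpy as np

def augment(img, rng):
    """Simple augmentations of the kind used to reduce overfitting:
    random horizontal flip plus small brightness scaling.
    (Illustrative only; the paper's exact augmentations are not specified here.)"""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                                 # horizontal flip
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness jitter
    return out

rng = np.random.default_rng(0)
xray = rng.random((64, 64))  # stand-in for a normalized chest X-ray
batch = np.stack([augment(xray, rng) for _ in range(8)])
```

Each pass over the dataset then sees slightly different versions of the same image, which is what makes augmentation an effective regularizer.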
Citations: 0
Integration of computer networks and artificial neural networks for an AI-based network operator
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241370
Binbin Wu, Jingyu Xu, Yifan Zhang, Bo Liu, Yulu Gong, Jiaxin Huang
This paper proposes an integrated approach combining computer networks and artificial neural networks to construct an intelligent network operator, functioning as an AI model. State information from computer networks is transformed into embedding vectors, enabling the operator to efficiently recognize different pieces of information and accurately output appropriate operations for the computer network at each step. The operator has undergone comprehensive testing, achieving a 100% accuracy rate, thus eliminating operational risks. Additionally, a simple computer network simulator is created and encapsulated into training and testing environment components, enabling automation of the data collection, training, and testing processes. This abstract outlines the core contributions of the paper while highlighting the innovative methodology employed in the development and validation of the AI-based network operator.
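The step of turning discrete network-state information into embedding vectors can be sketched as a simple lookup table. The token vocabulary and pooling below are hypothetical stand-ins, not the paper's actual representation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical vocabulary of discrete network-state tokens (illustrative only).
vocab = ["link_up", "link_down", "congested", "idle"]
dim = 8
embedding = {tok: rng.normal(size=dim) for tok in vocab}

def embed_state(tokens):
    """Map a sequence of state tokens to one fixed-length vector
    (here: the mean of the token embeddings) that a policy network
    could consume to choose the next operation."""
    return np.mean([embedding[t] for t in tokens], axis=0)

vec = embed_state(["link_up", "congested", "idle"])
```

In a trained system these vectors would be learned jointly with the operator rather than drawn at random, but the lookup-and-pool structure is the same.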
Citations: 0
Detection and classification of wilting status in leaf images based on VGG16 with EfficientNet V3 algorithm
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241347
Qixiang Li, Yiming Ma, Ziyang Luo, Ying Tian
The aim of this paper is to explore the importance of leaf wilting status detection and classification in agriculture to meet the demand for monitoring and diagnosing plant growth conditions. By comparing the performance of the traditional VGG16 image classification algorithm and the popular EfficientNet V3 algorithm in leaf image wilting status detection and classification, it is found that EfficientNet V3 has faster convergence speed and higher accuracy. As the model training process proceeds, both algorithms show a trend of gradual convergence of Loss and Accuracy and increasing accuracy. The best training results show that VGG16 reaches a minimum loss of 0.288 and a maximum accuracy of 96% at the 19th epoch, while EfficientNet V3 reaches a minimum loss of 0.331 and a maximum accuracy of 97.5% at the 20th epoch. These findings reveal that EfficientNet V3 has a better performance in leaf wilting status detection, which provides a more accurate and efficient means of plant health monitoring for agricultural production and is of great research significance.
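Selecting the best epoch from training logs, as the abstract does when reporting minimum loss and maximum accuracy, is a small bookkeeping step. The per-epoch numbers below are illustrative (compressed to three epochs; only the final loss/accuracy values mirror the reported minima):

```python
# Illustrative per-epoch logs (made-up histories; only the final values
# echo the minima/maxima quoted in the abstract).
history = {
    "VGG16":           {"loss": [0.80, 0.45, 0.288], "acc": [0.70, 0.90, 0.96]},
    "EfficientNet V3": {"loss": [0.75, 0.40, 0.331], "acc": [0.72, 0.91, 0.975]},
}

def best_epoch(run):
    """Return (epoch_of_min_loss, min_loss, epoch_of_max_acc, max_acc),
    with epochs 1-indexed as in the abstract."""
    i = run["loss"].index(min(run["loss"]))
    j = run["acc"].index(max(run["acc"]))
    return i + 1, run["loss"][i], j + 1, run["acc"][j]

for name, run in history.items():
    print(name, best_epoch(run))
```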
Citations: 0
Predictive optimization of DDoS attack mitigation in distributed systems using machine learning
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241350
Baoming Wang, Yuhang He, Zuwei Shui, Qi Xin, Han Lei
In recent years, cloud computing has been widely used. This paper proposes an innovative approach to solve complex problems in cloud computing resource scheduling and management using machine learning optimization techniques. Through in-depth study of challenges such as low resource utilization and unbalanced load in the cloud environment, this study proposes a comprehensive solution, including optimization methods such as deep learning and genetic algorithms, to improve system performance and efficiency, and thus bring new breakthroughs and progress in the field of cloud computing resource management. Rational allocation of resources plays a crucial role in cloud computing. In the resource allocation of cloud computing, the cloud computing center has limited cloud resources, and users arrive in sequence. Each user requests the cloud computing center to use a certain number of cloud resources at a specific time.
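The sequential-arrival setting described above can be made concrete with a deliberately simple greedy baseline, a hypothetical illustration rather than the paper's learned policy:

```python
def allocate(capacity, requests):
    """Greedy admission baseline: users arrive in order, and a request is
    granted only if the remaining cloud resources can cover it.
    (A simple stand-in for illustration, not the paper's optimized policy.)"""
    granted, free = [], capacity
    for user, amount in requests:
        if amount <= free:
            free -= amount
            granted.append(user)
    return granted, free

# Four users arrive in sequence at a center with 10 units of resource.
granted, free = allocate(10, [("u1", 4), ("u2", 5), ("u3", 3), ("u4", 1)])
```

A learned scheduler would aim to beat such a baseline, e.g. by anticipating later, better-fitting requests instead of admitting strictly first-come-first-served.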
Citations: 0
DOA estimation technology based on array signal processing nested array
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241345
Muye Sun, Tianyu Duanmu
Research on non-uniform arrays has always been a focus of attention for scholars both domestically and internationally. Part of the research concentrates on existing non-uniform arrays, while another part focuses on optimizing the position of array elements or expanding the structure. Of course, there are also studies on one-dimensional and two-dimensional DOA estimation algorithms based on array spatial shapes, despite some issues. As long as there is a demand for spatial domain target positioning, the development and refinement of non-uniform arrays will continue to be a hot research direction. Nested arrays represent a unique type of heterogeneous array, whose special geometric shape significantly increases degrees of freedom and enhances estimation performance for directional information of undetermined signal sources. Compared to other algorithms, the one-dimensional DOA estimation algorithm based on spatial smoothing simplifies algorithm complexity, improves estimation accuracy under nested arrays, and can effectively handle the estimation of signal sources under uncertain conditions. The DFT algorithm it employs not only significantly improves angular estimation performance but also reduces operational complexity, utilizing full degrees of freedom to minimize aperture loss. Furthermore, the DFT-MUSIC method greatly reduces algorithmic computational complexity while performing very closely to the spatial smoothing MUSIC algorithm. The sparse arrays it utilizes, including minimum redundancy arrays, coprime arrays, and nested arrays, are a new type of array. Sparse arrays can increase degrees of freedom compared to traditional uniform linear arrays and solve the estimation of signal source angles under uncertain conditions, while also enhancing algorithm angular estimation performance.
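The claim that a nested array's geometry increases degrees of freedom can be checked numerically via its difference coarray. The sketch below uses the standard two-level nested geometry (inner ULA at 1..N1, outer at multiples of N1+1); it illustrates the coarray computation only, not the paper's full DOA pipeline:

```python
import numpy as np

def nested_array(n1, n2):
    """Sensor positions (in units of the base spacing d) of a two-level
    nested array: inner ULA at 1..n1, outer at (n1+1)*{1..n2}."""
    inner = np.arange(1, n1 + 1)
    outer = (n1 + 1) * np.arange(1, n2 + 1)
    return np.concatenate([inner, outer])

def difference_coarray(pos):
    """Unique pairwise differences of the sensor positions; their count is
    the number of virtual sensors (degrees of freedom) available to
    coarray-based DOA methods such as spatial-smoothing MUSIC."""
    diffs = pos[:, None] - pos[None, :]
    return np.unique(diffs)

pos = nested_array(3, 3)        # 6 physical sensors: 1,2,3,4,8,12
lags = difference_coarray(pos)  # consecutive virtual lags from -11 to 11
```

Six physical sensors thus yield 23 consecutive coarray lags, which is why nested arrays can resolve more sources than a 6-element uniform linear array.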
Citations: 0
Deep learning vulnerability analysis against adversarial attacks
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241377
Chi Cheng
In the age of artificial intelligence advancements, deep learning models are essential for applications ranging from image recognition to natural language processing. Despite their capabilities, they are vulnerable to adversarial examples: inputs deliberately modified to cause errors. This paper explores these vulnerabilities, attributing them to the complexity of neural networks, the diversity of training data, and the training methodologies. It demonstrates how these aspects contribute to the models' susceptibility to adversarial attacks. Through case studies and empirical evidence, the paper highlights instances where advanced models were misled, showcasing the challenges in defending against these threats. It also critically evaluates mitigation strategies, including adversarial training and regularization, assessing their efficacy and limitations. The study underlines the importance of developing AI systems that are not only intelligent but also robust against adversarial tactics, aiming to enhance future deep learning models' resilience to such vulnerabilities.
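The "deliberately modified inputs" the abstract refers to are typically crafted with gradient-based methods. As a self-contained illustration (not code from the paper), the Fast Gradient Sign Method can be written out for a plain logistic model, where the input gradient has a closed form:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method on a logistic model p = sigmoid(w.x + b):
    perturb x by eps * sign(dL/dx), where L is the cross-entropy loss.
    For this model the input gradient is dL/dx = (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.normal(size=5)   # toy model weights (illustrative)
b = 0.1
x = rng.normal(size=5)   # a clean input
y = 1.0                  # its true label
x_adv = fgsm(x, w, b, y, eps=0.1)  # perturbation that increases the loss
```

For deep networks the same sign-of-gradient step is applied, with the gradient obtained by backpropagation; adversarial training then mixes such perturbed inputs into the training set.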
Citations: 0
Intelligent medical detection and diagnosis assisted by deep learning
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241356
Jingxiao Tian, Hanzhe Li, Yaqian Qi, Xiangxiang Wang, Yuan Feng
The integration of artificial intelligence (AI) in healthcare has led to the development of intelligent auxiliary diagnosis systems, enhancing diagnostic capabilities across various medical domains. These AI-assisted systems leverage deep learning algorithms to aid healthcare professionals in disease screening, localization of focal areas, and treatment plan selection. With policies emphasizing innovation in medical AI technology, particularly in China, AI-assisted diagnosis systems have emerged as valuable tools in improving diagnostic accuracy and efficiency. These systems, categorized into image-assisted and text-assisted modes, utilize medical imaging data and clinical diagnosis records to provide diagnostic support. In the context of lung cancer diagnosis and treatment, AI-assisted integrated solutions show promise in early detection and treatment decision support, particularly in the detection of pulmonary nodules. Overall, the integration of AI in healthcare holds significant potential for improving diagnostic accuracy, efficiency, and patient outcomes, contributing to advancements in medical practice.
Citations: 0
A road semantic segmentation system for remote sensing images based on deep learning
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241342
Shutong Xie
With the rapid development of deep learning in computer science in China, many academic fields have benefited from its power and efficiency and have begun to integrate it with their own research. Specifically, in the field of remote sensing, the challenge of extracting roads from raw images can be effectively addressed using deep learning. Achieving high precision in road extraction not only helps scientists update road maps in time but also speeds up the digitization of roads in big cities. Until now, however, deep learning models have not matched manual road extraction closely enough to meet the needs of high-precision extraction, because they cannot extract roads accurately in complex settings such as villages. This study trained a new road extraction model based on the UNet model using only datasets from large cities, and it achieves high precision in extracting roads in big cities. Undoubtedly, this can lead to over-fitting, but the resulting accuracy ensures that the model's road extraction ability can be well utilized in large-city settings, helping researchers update road maps in large cities more conveniently and quickly.
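Precision claims for segmentation models like the one described are usually backed by pixel-overlap metrics. As a hedged illustration (a standard metric, not the paper's reported evaluation code), Intersection-over-Union for binary road masks looks like this:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union for binary segmentation masks
    (1 = road pixel, 0 = background); a standard way to score
    predicted road masks against ground truth."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Tiny 2x3 example: 2 correctly predicted road pixels, 4 pixels in the union.
pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = iou(pred, target)
```

Averaging this score over a held-out set of city tiles versus village tiles would also expose the urban-bias trade-off the abstract acknowledges.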
Citations: 0
Precision gene editing using deep learning: A case study of the CRISPR-Cas9 editor
Pub Date : 2024-05-15 DOI: 10.54254/2755-2721/64/20241357
Zhengrong Cui, Luqi Lin, Yanqi Zong, Yizhi Chen, Sihao Wang
This article reviews application cases of CRISPR/Cas9 gene editing technology, along with its challenges and limitations. It first introduces the use of deep learning with CRISPR/Cas9 to predict the targeting efficiency of sgRNAs, describing the steps of data acquisition, pre-processing, and feature engineering in detail. It then discusses the non-specific cutting and cytotoxicity challenges of CRISPR/Cas9, as well as strategies for addressing them with deep learning techniques. Finally, the paper emphasizes the importance of deep learning in mitigating the cytotoxicity problems of CRISPR/Cas9, and points out that building such models can improve the safety and efficiency of gene editing experiments and provide valuable reference and guidance for research in related fields.
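The feature-engineering step mentioned above typically begins by turning each sgRNA sequence into a numeric matrix. A minimal sketch of one common representation — one-hot encoding over A/C/G/T — is shown below; this is an assumed illustration, not the reviewed models' actual pipeline:

```python
import numpy as np

BASES = "ACGT"

def one_hot_sgrna(seq: str) -> np.ndarray:
    """One-hot encode a nucleotide sequence into a (len(seq), 4) matrix,
    with columns in A, C, G, T order."""
    idx = {base: i for i, base in enumerate(BASES)}
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        mat[pos, idx[base]] = 1.0
    return mat

# A short hypothetical sgRNA fragment (real spacers are usually 20 nt).
encoded = one_hot_sgrna("GACGT")
print(encoded.shape)  # (5, 4)
```

A matrix like this can then be fed to a convolutional or recurrent network that regresses targeting efficiency; each row sums to 1, one row per nucleotide position.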
Citations: 0