
Latest publications in Applied and Computational Engineering

Innovative research on AI-assisted teaching models for college English listening and speaking courses
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/69/20241493
Yun Luo
This paper explores the innovative application of artificial intelligence (AI) in the construction of teaching models for college English listening and speaking courses. By leveraging advanced AI technologies, educators can enhance the effectiveness of language instruction and provide personalized learning experiences. This study examines the theoretical foundations, practical implementations, and the impact of AI-assisted teaching on student engagement and performance. Through comprehensive analysis and discussion, we highlight the potential of AI to transform traditional language education, address challenges, and improve learning outcomes. The findings suggest that integrating AI into college English courses offers significant advantages in terms of adaptability, interactivity, and efficiency, paving the way for future educational innovations.
Citations: 0
Enhancing capabilities of generative models through VAE-GAN integration: A review
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/67/2024ma0070
Dongting Cai
Our review explores the integration of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), two pivotal families of generative models. VAEs are renowned for their robust probabilistic foundations and capacity for learning complex data representations, while GANs are celebrated for generating high-fidelity images. Despite their strengths, both models have limitations: VAEs often produce less sharp outputs, and GANs face challenges with training stability. Hybrid VAE-GAN models harness the strengths of both architectures to overcome these limitations, enhancing output quality and diversity. We provide a comprehensive overview of developments in VAE and GAN technology, their integration strategies, and the resulting performance improvements. Applications across various fields, such as artistic creation, medical imaging, e-commerce, and video gaming, highlight the transformative potential of these models. However, challenges in model robustness, ethical concerns, and computational demands persist, posing significant hurdles. Future research directions are poised to transform the VAE-GAN landscape significantly. Enhancing training stability remains a priority, with new approaches such as incorporating self-correcting mechanisms into GAN training being tested. Addressing ethical issues is also critical, as policymakers and technologists work together to develop standards that prevent misuse. Moreover, reducing computational costs is fundamental to democratizing access to these technologies. Projects such as the development of MobileNetV2 have made strides in creating more efficient neural network architectures that maintain performance while being less resource-intensive. Further, the exploration of VAE-GAN applications in fields like augmented reality and personalized medicine offers exciting opportunities for growth, as evidenced by recent pilot studies.
Citations: 0
A path planning generator based on the Chaos Game Optimization algorithm
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/55/20241526
Jialong Li
This research paper explores a novel path planning generator that leverages the Chaos Game Optimization (CGO) algorithm, a mathematical technique inspired by the chaos game that creates fractals. The CGO algorithm is applied to analyze fractal configurations and self-similarity problems in path planning. The paper provides detailed information about the initialization of candidate solutions and the iterative process of updating their positions and fitness values. Through MATLAB simulations of the path planning generator, the paper demonstrates the CGO algorithm's effectiveness in generating optimal paths in complex scenarios with randomly generated blocks or labyrinth environments, showing great potential for enhancing the ability of autonomous robots to navigate dynamic and challenging environments. By drawing on chaos theory and randomness, the CGO algorithm provides a robust and efficient solution for path planning, enabling robotic systems to handle complex and nonlinear problems. The paper concludes that the application of chaos theory in robotics opens up exciting possibilities for advancing the capabilities of robotic systems and enhancing their performance in real-world scenarios.
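The chaos game that inspires CGO is simple to reproduce. The stand-alone sketch below (vertex choice and parameters are illustrative, not taken from the paper) iterates the jump-a-fixed-fraction-toward-a-random-vertex rule; with three vertices and a ratio of 0.5 the points converge onto the Sierpinski triangle:

```python
import random

def chaos_game(vertices, n_points, ratio=0.5, seed=42):
    """Generate chaos-game points: repeatedly jump a fixed fraction
    of the way toward a randomly chosen vertex."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0          # arbitrary starting point inside the hull
    points = []
    for _ in range(n_points):
        vx, vy = rng.choice(vertices)
        x += ratio * (vx - x)
        y += ratio * (vy - y)
        points.append((x, y))
    return points

# Three triangle vertices: the classic run traces the Sierpinski triangle.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
pts = chaos_game(tri, 5000)
```

Roughly speaking, CGO borrows this fractal-style sampling idea when generating new candidate seeds around the best solutions found so far.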
Citations: 0
Graph neural networks in recommender systems
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/79/20241646
Xingyang He
As a way to alleviate the information overload problem that has arisen with the development of the internet, recommender systems have received considerable attention from academia and industry. Owing to their strength on graph-structured data, graph neural networks (GNNs) are widely adopted in recommender systems. This survey offers a comprehensive review of the latest research and innovative approaches in GNN-based recommender systems. It introduces a new taxonomy based on the construction of GNN models and explores the challenges these models face. The paper also discusses newer approaches, such as using social graphs and knowledge graphs as side information, and evaluates their strengths and limitations. Finally, it suggests some potential directions for future research in this field.
Citations: 0
Uncertainty-aware motion planning for autonomous vehicle: A review
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/55/20241527
Haodong Lu, Haoran Xu
This paper reviews a recently developed uncertainty-aware motion planning algorithm that is widely applied to autonomous vehicles. Many vehicle manufacturers have shifted their focus from improving vehicle energy-conversion efficiency to autonomous driving, aiming to bring drivers a better and more relaxed driving experience. However, many earlier motion planning algorithms used for autonomous driving were immature, and many errors were reported; such errors can put human drivers in life-threatening danger. Consisting of two connected sub-systems supported by a well-trained graph neural network, the uncertainty-aware motion planning algorithm predicts the motion of surrounding objects and makes the necessary maneuvers accordingly. Evidence from many research papers shows that uncertainty-aware motion planning is an efficient and safe answer to insufficient consideration of a vehicle's surroundings. Even though its ability is primarily limited by sensor accuracy and the complexity of the environment, the unique advantages of this algorithm point to an alternative direction for algorithm development in autonomous vehicles.
Citations: 0
Weight class prediction based on sparrow search algorithm optimised random forest model
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/69/20241624
Yuanming Sun
In this paper, we improve the traditional random forest model by optimising the random forest algorithm with the sparrow search algorithm, and we compare the effectiveness of the two models for weight class prediction. Initial exploration of the data revealed that age, height, weight and BMI play an important role in weight class prediction, and correlation analyses showed a strong correlation between age, BMI and weight class. The experimental results show that the random forest model optimised with the sparrow search algorithm achieves 100% prediction accuracy, an improvement of 1.2% over the traditional random forest algorithm, and thus a better prediction effect. The significance of this paper is that a random forest algorithm optimised with the sparrow search algorithm is proposed and experimentally shown to perform better in weight class prediction. This is of great significance in the fields of weight management, health assessment, and disease risk assessment. In addition, this study demonstrates the value of data analysis and machine learning methods in solving real-world problems. In conclusion, this paper provides new ideas for further improvement and application of machine learning algorithms, and offers references and lessons for researchers in related fields.
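To make the optimisation loop concrete, here is a minimal population-based search in the same spirit: a plain drift-toward-best update with shrinking random noise stands in for the sparrow search update rules, and it tunes the threshold of a one-feature BMI classifier. The classifier, the data, and all names are hypothetical, not from the paper:

```python
import random

def fitness(threshold, data):
    """Accuracy of a one-feature rule: predict 'overweight' when BMI > threshold."""
    correct = sum((bmi > threshold) == label for bmi, label in data)
    return correct / len(data)

def population_search(data, pop_size=8, iters=30, seed=0):
    """Generic population-based metaheuristic (a stand-in for SSA):
    candidates drift toward the current best with shrinking perturbation."""
    rng = random.Random(seed)
    pop = [rng.uniform(15, 35) for _ in range(pop_size)]
    best = max(pop, key=lambda t: fitness(t, data))
    for i in range(iters):
        step = (iters - i) / iters  # exploration radius shrinks over time
        pop = [t + 0.5 * (best - t) + rng.uniform(-step, step) for t in pop]
        cand = max(pop, key=lambda t: fitness(t, data))
        if fitness(cand, data) > fitness(best, data):
            best = cand
    return best

# Toy data: (BMI, is_overweight) with the true boundary at BMI = 25.
toy = [(bmi / 10, bmi / 10 > 25) for bmi in range(180, 320, 3)]
best_t = population_search(toy)
```

In the paper's setting, the same outer loop would instead search the random forest's hyperparameters, with cross-validated accuracy as the fitness function.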
Citations: 0
Optimization and comparative analysis of maze generation algorithm hybrid
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/79/20241082
Kaicheng Yang, Sutong Lin, Yu Dai, Wentai Li
The complexity of generating intricate and random mazes is a captivating challenge with applications in various fields, including computer science, mathematics, gaming, and simulations. This study presents an innovative approach that integrates two prominent perfect-maze generation algorithms, Aldous-Broder (AB) and Wilson's. Both are celebrated for their strong randomness and efficiency, yet their combination offers a novel way to optimize maze generation. Our research commenced with a detailed analysis of the relationship between the coverage rate, which uniquely characterizes the AB algorithm, and map size. We then formulated a mechanism that transitions seamlessly into Wilson's algorithm, aiming to minimize time consumption. Through a series of carefully designed experimental trials, we sought a model for the most suitable point at which to switch algorithms so as to minimize the time it takes to generate a maze; the candidate switching points were then evaluated and compared to identify the best-fitting solution. Under the framework of our synthesized algorithm, an average time saving of 34.124% was achieved, demonstrating a promising gain in efficiency. Although still in the exploratory phase, the outcomes of this research provide foundational insights into the principles and techniques underlying maze generation and its applications, and may serve as a useful reference for future studies and potential technological advancements.
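For reference, the AB half of the hybrid fits in a few lines. This stand-alone Python version (the cell and edge representation is our own illustration, not the paper's) carves a passage each time the random walk first enters an unvisited cell:

```python
import random

def aldous_broder(width, height, seed=1):
    """Aldous-Broder: random-walk the grid; whenever the walk steps into
    an unvisited cell, carve the wall behind it. The result is a uniform
    spanning tree, i.e. a perfect maze."""
    rng = random.Random(seed)
    passages = set()                      # carved walls, as {cell, cell} pairs
    cell = (rng.randrange(width), rng.randrange(height))
    visited = {cell}
    while len(visited) < width * height:
        x, y = cell
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < width and 0 <= y + dy < height]
        nxt = rng.choice(nbrs)
        if nxt not in visited:            # first entry: carve the connecting wall
            visited.add(nxt)
            passages.add(frozenset((cell, nxt)))
        cell = nxt
    return passages

maze = aldous_broder(8, 8)
```

Because exactly one passage is carved per cell after the first, a width-by-height maze always ends with width*height - 1 passages, i.e. a spanning tree, which is what makes the maze "perfect". The algorithm's weakness, and the motivation for switching to Wilson's, is that the walk revisits covered cells ever more often as coverage grows.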
Citations: 0
Housing data visualization and analysis
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/69/20241518
Yuxuan Tong
Data visualization is a powerful tool that can help individuals and organisations comprehend vast amounts of data and extract valuable insights from it. Its most significant function is to support recommendations by uncovering the patterns behind the data. This paper takes housing data as an example, raises relevant questions, and reveals the logic behind the data and the relationships between variables through data visualization, linear regression, and other statistical methods.
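As a minimal illustration of the linear-regression step, the closed-form least-squares fit for one predictor can be written without any libraries (the housing numbers below are made up for the example, not the paper's dataset):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form:
    b = cov(x, y) / var(x), a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical housing points: floor area (m^2) vs. price (10k units),
# constructed to lie exactly on price = 2 * area.
area  = [50, 70, 90, 110, 130]
price = [100, 140, 180, 220, 260]
a, b = linear_fit(area, price)
```

On real housing data the fitted slope summarizes how price moves with area, which is exactly the kind of variable relationship the paper extracts through visualization and regression.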
Citations: 0
Credit risk unveiled: Decision trees triumph in comparative machine learning study
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/79/20241613
Chenxi Wu
As time goes on, credit risk has become a widespread issue across society, especially since the 2008 global financial crisis. Traditional financial techniques, however, could not reliably estimate the likelihood that a borrower would default, which causes credit problems. With the rapid development of artificial intelligence, this no longer needs to be the case. In this paper, several machine learning methods, including the Support Vector Machine (SVM), K-Nearest Neighbors (KNN) and Decision Tree (DT) models, are implemented to predict credit risk, and the accuracy of the three methods is compared. The Decision Tree achieves the best result of the three.
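The split criterion that drives a decision tree is easy to show in isolation. This sketch (the debt-to-income feature and toy labels are invented, not from the paper's dataset) picks the threshold with the lowest weighted Gini impurity, which is the choice a tree makes at each node:

```python
def gini(labels):
    """Gini impurity of a list of binary labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Pick the threshold on one feature that minimizes the
    weighted Gini impurity of the resulting left/right partition."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical applicants: debt-to-income ratio vs. default (1 = defaulted).
ratio   = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
default = [0,   0,   0,   0,   1,   1,   1,   1]
t, score = best_split(ratio, default)
```

A full decision tree simply applies this search recursively to each partition until the leaves are pure or a depth limit is reached.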
Citations: 0
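Neither the dataset nor the models below are from the paper; this is a toy, stdlib-only sketch of the kind of accuracy comparison the abstract describes, on a hypothetical synthetic "income vs. debt ratio" dataset. A depth-1 stump stands in for a full decision tree, and the SVM is omitted for brevity; a real study would use scikit-learn's `SVC`, `KNeighborsClassifier`, and `DecisionTreeClassifier` on a genuine credit dataset.

```python
import random

def make_data(n, seed=0):
    """Synthetic borrowers: label 1 (default) when debt outweighs income."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        income = rng.uniform(0.0, 1.0)  # normalised income
        debt = rng.uniform(0.0, 1.0)    # normalised debt ratio
        label = 1 if debt - income > 0.1 else 0
        rows.append(((income, debt), label))
    return rows

def knn_predict(train, x, k=3):
    """Majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda r: (r[0][0] - x[0]) ** 2 + (r[0][1] - x[1]) ** 2)
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

def stump_predict(x):
    """A depth-1 'decision tree': one split on the feature debt - income."""
    return 1 if x[1] - x[0] > 0.1 else 0

def accuracy(predict, test):
    return sum(predict(x) == y for x, y in test) / len(test)

train, test = make_data(200, seed=1), make_data(100, seed=2)
acc_knn = accuracy(lambda x: knn_predict(train, x), test)
acc_tree = accuracy(stump_predict, test)
```

Because the stump's split matches the synthetic labelling rule exactly, the "tree" scores perfectly here, echoing the paper's finding that the tree-based model ranked highest; with real, noisy credit data the gap would of course be far less clean.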
Maze and navigation algorithms in game development
Pub Date : 2024-07-25 DOI: 10.54254/2755-2721/79/20241081
Jiachen Piao, Xinyuan Hu, Qixuan Zhou
This paper introduces a small game that the authors plan to create and discusses the code and algorithms implemented in it. The game is built with Pygame, a set of Python modules designed for writing 2D games; we use it because it is free and simple for a new game designer to work with. The game involves moving a character through a maze while collecting coins along the path. The character is controlled with the keyboard, and the maze is randomly generated using various maze algorithms. Although the game is simple, the logic and algorithms involved are useful for more complex games. This essay introduces the maze algorithms and navigation algorithms the game needs, as well as the code that implements them.
Citations: 0
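This is not the authors' code, but a minimal stdlib-only sketch (no Pygame) of the two ingredients the abstract names: random maze generation via iterative depth-first search (the "recursive backtracker") and navigation via breadth-first search for a shortest path.

```python
import random
from collections import deque

def generate_maze(width, height, seed=None):
    """Carve a perfect maze on a (2*width+1) x (2*height+1) character grid
    of '#' walls and ' ' passages, using iterative depth-first search."""
    rng = random.Random(seed)
    grid = [['#'] * (2 * width + 1) for _ in range(2 * height + 1)]
    stack, visited = [(0, 0)], {(0, 0)}
    grid[1][1] = ' '  # cell (x, y) lives at grid[2*y+1][2*x+1]
    while stack:
        x, y = stack[-1]
        neighbours = [(nx, ny) for nx, ny in
                      ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                      if 0 <= nx < width and 0 <= ny < height
                      and (nx, ny) not in visited]
        if not neighbours:
            stack.pop()  # dead end: backtrack
            continue
        nx, ny = rng.choice(neighbours)
        grid[y + ny + 1][x + nx + 1] = ' '  # knock down the shared wall
        grid[2 * ny + 1][2 * nx + 1] = ' '  # open the new cell
        visited.add((nx, ny))
        stack.append((nx, ny))
    return grid

def bfs_path(grid, start, goal):
    """Shortest path between two passage positions, as a list of (x, y)."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == ' ' and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable
```

Because depth-first carving produces a perfect maze (every cell connected, no loops), `bfs_path` always finds the unique shortest route; in a Pygame game the same grid would simply be drawn as tiles and the BFS route could drive coin placement or an AI chaser.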
Journal: Applied and Computational Engineering