
Latest publications in Systems and Soft Computing

Research on the construction and application of intelligent tutoring system for English teaching based on generative pre-training model
Pub Date : 2025-04-15 DOI: 10.1016/j.sasc.2025.200232
Weiguo Feng , Xinhan Lai , Xin Zhang , Xiaojing Fan , Yinxia Du
In the era of globalization and information technology, English is increasingly important as the main language of international communication, but the traditional English teaching model struggles to meet learners' diverse needs because of unevenly distributed resources and a lack of personalized tutoring. To address these challenges, this study uses a generative pre-training model to build an intelligent tutoring system for English teaching, aiming to innovate the English learning experience with artificial intelligence and deliver personalized, efficient teaching guidance. The construction solution collects data such as learners' English proficiency test scores, learning history, and self-reported learning preferences to create detailed learner profiles; integrates advanced generative pre-trained models (e.g., GPT-based architectures) fine-tuned on data related to English language teaching; and then automatically generates exercises based on learner profiles while dynamically adjusting their difficulty. In application, the system integrates natural language processing technology with generative models to provide immediate feedback after learners complete exercises, such as analyzing the grammar, vocabulary use, and coherence of English passages, pointing out mistakes, and giving correction suggestions and explanations, as well as providing intelligent tutoring in dialogue form, such as examples, comparisons, and related exercises that deepen understanding in response to learners' questions about grammar points. Experimental results show that, compared with the traditional teaching mode, this intelligent tutoring system increases learners' average progress in English listening, speaking, reading, and writing by 30 % and raises learning satisfaction by 40 %, with particular gains in oral expression and writing skills.
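The dynamic difficulty adjustment described above can be sketched in a few lines of Python. The profile fields, thresholds, and 1–5 difficulty scale below are illustrative assumptions, not the paper's actual design:

```python
# Hypothetical sketch of learner-profile-driven difficulty adjustment.
# LearnerProfile and next_difficulty are illustrative names, not from the paper.

from dataclasses import dataclass

@dataclass
class LearnerProfile:
    proficiency: float      # latest proficiency test score, 0-100
    recent_accuracy: float  # fraction correct on recent exercises, 0-1

def next_difficulty(profile: LearnerProfile, current: int) -> int:
    """Return the next exercise difficulty on a 1-5 scale.

    Raise difficulty when the learner is doing well, lower it when
    they struggle, and hold it steady otherwise.
    """
    if profile.recent_accuracy > 0.85 and profile.proficiency >= 60:
        return min(current + 1, 5)
    if profile.recent_accuracy < 0.5:
        return max(current - 1, 1)
    return current

print(next_difficulty(LearnerProfile(70, 0.9), 3))  # steps up to 4
```

In a full system the generated exercise would then be conditioned on both the chosen difficulty and the profile's stated preferences.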
{"title":"Research on the construction and application of intelligent tutoring system for english teaching based on generative pre-training model","authors":"Weiguo Feng ,&nbsp;Xinhan Lai ,&nbsp;Xin Zhang ,&nbsp;Xiaojing Fan ,&nbsp;Yinxia Du","doi":"10.1016/j.sasc.2025.200232","DOIUrl":"10.1016/j.sasc.2025.200232","url":null,"abstract":"<div><div>In the era of globalization and information technology, English is becoming more and more important as the main language of international communication, but the traditional English teaching model is difficult to meet the diverse needs of learners due to the uneven distribution of resources and the lack of personalized tutoring. In order to meet these challenges, this study uses a generative pre-training model to build an intelligent tutoring system for English teaching, aiming to innovate the English learning experience with the help of artificial intelligence technology and achieve personalized and efficient teaching guidance. The construction solution includes collecting data such as learners' English proficiency test scores, learning history, and self-reported learning preferences to create detailed learner profiles, integrating advanced generative pre-trained models such as GPT-based and fine-tuning with data related to English language teaching, and then automatically generating exercises based on learner profiles and dynamically adjusting the difficulty. The application of the system is reflected in the integration of natural language processing technology and generative models to provide immediate feedback after learners complete the exercises, such as analyzing the grammar, vocabulary use and coherence of English passages and pointing out mistakes, giving suggestions and explanations for corrections, as well as providing intelligent tutoring in the form of dialogues, such as examples, comparisons and related exercises to enhance understanding in response to learners' questions about grammar points. 
The experimental results show that compared with the traditional teaching mode, the use of this intelligent tutoring system increases the learners' progress in English listening, speaking, reading and writing by an average of 30 %, and the learning satisfaction increases by 40 %, especially in the improvement of oral expression and writing skills.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200232"},"PeriodicalIF":0.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143890966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating CNN and RANSAC for improved object recognition in industrial robotics
Pub Date : 2025-04-14 DOI: 10.1016/j.sasc.2025.200240
Yingding Xiao
This research introduces a robotic grasping system that merges ORB (Oriented FAST and Rotated BRIEF) feature detection, VGG19 convolutional neural networks, and RANSAC (Random Sample Consensus) geometric verification to achieve high-precision object manipulation in unstructured environments. The framework combines ORB's efficient, rotation-invariant keypoints with deep semantic features extracted from intermediate layers of VGG19, enabling robust object recognition under occlusions and lighting variations. ORB detects scale-agnostic keypoints and generates binary descriptors, while VGG19's hierarchical features provide contextual understanding of object geometry. These complementary features are fused into compact descriptors, combining ORB's 256-bit binary patterns with aggregated VGG19 layer outputs to balance accuracy and computational efficiency. RANSAC is then employed to eliminate mismatched features and estimate precise spatial alignments through iterative homography calculations, ensuring reliable mapping between detected objects and the robot's workspace. Experimental validation on industrial datasets demonstrates a 99 % grasp success rate, highlighting the system's ability to address challenges in dynamic, cluttered settings. By bridging deep learning's perceptual capabilities with geometric verification, this work advances autonomous robotic systems, offering a scalable solution for industrial automation that prioritizes precision and adaptability.
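The RANSAC stage can be illustrated with a minimal numpy sketch. For brevity it estimates a pure 2D translation between matched keypoints rather than a full homography, on synthetic matches with deliberately corrupted outliers standing in for descriptor mismatches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic matches: true motion is a pure translation (tx, ty) = (5, -3),
# with 15 of 50 matches replaced by gross outliers (simulated mismatches).
src = rng.uniform(0, 100, size=(50, 2))
dst = src + np.array([5.0, -3.0])
dst[:15] = rng.uniform(0, 100, size=(15, 2))

def ransac_translation(src, dst, iters=200, tol=1.0):
    """RANSAC for a translation model: hypothesize from a minimal sample
    (one match), count inliers, keep the best model, refit on its inliers."""
    best_t, best_count = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # model from one match
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            # refined estimate: average offset over all inliers
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)
    return best_t, int(best_count)

t, n = ransac_translation(src, dst)
print(t, n)  # close to [5, -3] with roughly 35 inliers
```

A full homography version replaces the one-point minimal sample with four point pairs and a DLT solve, but the hypothesize-score-refit loop is identical.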
{"title":"Integrating CNN and RANSAC for improved object recognition in industrial robotics","authors":"Yingding Xiao","doi":"10.1016/j.sasc.2025.200240","DOIUrl":"10.1016/j.sasc.2025.200240","url":null,"abstract":"<div><div>This research introduces a robotic grasping system that merges ORB (Oriented FAST and Rotated BRIEF) feature detection, VGG19 convolutional neural networks, and RANSAC (Random Sample Consensus) geometric verification to achieve high-precision object manipulation in unstructured environments. The framework synergizes ORB's efficient, rotation-invariant keypoints with deep semantic features extracted from intermediate layers of VGG19 enabling robust object recognition under occlusions and lighting variations. ORB detects scale-agnostic keypoints and generates binary descriptors, while VGG19’s hierarchical features provide contextual understanding of object geometry. These complementary features are fused into compact descriptors, combining ORB's 256-bit binary patterns with aggregated VGG19 layer outputs to balance accuracy and computational efficiency. RANSAC is then employed to eliminate mismatched features and estimate precise spatial alignments through iterative homography calculations, ensuring reliable mapping between detected objects and the robot's workspace. Experimental validation on industrial dataset trials demonstrates a 99 % grasp success rate, highlighting the system's ability to address challenges in dynamic, cluttered settings. 
By bridging deep learning's perceptual capabilities with geometric verification, this work advances autonomous robotic systems, offering a scalable solution for industrial automation that prioritizes precision and adaptability.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200240"},"PeriodicalIF":0.0,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dance video action recognition algorithm based on improved hypergraph convolutional networks
Pub Date : 2025-04-14 DOI: 10.1016/j.sasc.2025.200247
Ni Zhen, Yiyi Jiang
Dance as an art form is rich in movement and expression information, and accurately recognizing movements in dance videos is important for dance education, creation, and performance. In view of this, the study takes the hypergraph convolutional network under deep learning as its framework, optimizes performance by introducing a self-attention module and a topology module, constructs a temporal refinement channel and a channel refinement channel, adds a spatio-temporal hypergraph convolutional network for channel fusion, and finally proposes a new video action recognition model. Experimental results show that the new model converges in as few as 250 iterations, at which point recognition accuracy reaches 95 %. The highest model P-value is 0.094, the highest R-value is 0.098, and the highest F1-value is 0.082. Under the confusion test, the model maintains >90 % recognition accuracy, and the accuracy, validity, and fluency scores were all above 90 after the dance category was rated by the judges. In summary, the study improves the hypergraph convolutional network and applies it to dance video action recognition with higher effectiveness and better recognition accuracy, aiming to provide a more effective technical means for the development of dance education and performance.
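A single hypergraph convolution layer, the building block the study improves on, can be sketched in numpy using the standard formulation X' = ReLU(Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Theta). The toy incidence matrix and random weights below are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hypergraph: 5 nodes (e.g. skeleton joints), 3 hyperedges, each
# grouping several nodes. H[v, e] = 1 if node v belongs to hyperedge e.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
X = rng.normal(size=(5, 4))       # node features
Theta = rng.normal(size=(4, 4))   # learnable projection (random stand-in)
W = np.eye(3)                     # hyperedge weights

def hypergraph_conv(H, X, W, Theta):
    """One hypergraph convolution layer:
    X' = ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta)."""
    dv = H @ np.diag(W)           # weighted node degrees
    de = H.sum(axis=0)            # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    A = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)   # ReLU

out = hypergraph_conv(H, X, W, Theta)
print(out.shape)  # (5, 4)
```

The study's additions (self-attention, topology refinement, channel fusion) sit on top of layers of this form.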
{"title":"Dance video action recognition algorithm based on improved hypergraph convolutional networks","authors":"Ni Zhen,&nbsp;Yiyi Jiang","doi":"10.1016/j.sasc.2025.200247","DOIUrl":"10.1016/j.sasc.2025.200247","url":null,"abstract":"<div><div>Dance as an art form is rich in movement and expression information. Accurately recognizing movements in dance videos is important for dance education, creation and performance. In view of this, the study takes the hypergraph convolutional network under deep learning as the framework basis, optimizes the performance by introducing the self-attention module and the topology module, constructs the temporal refinement channel and the channel refinement channel, and adds the spatio-temporal hypergraph convolutional network for channel fusion, and finally proposes a new video action recognition model. The experimental results show that the fastest iteration of this new model is 250 times, at which time the recognition accuracy is 95 %. The highest model P-value is 0.094, the highest R-value is 0.098, and the highest F1-value is 0.082. After the confusion test, the model shows &gt;90 % recognition accuracy. The accuracy, validity, and fluency scores were all above 90 after the dance category was rated by the judges. 
In summary, the study improves the hypergraph convolutional network and applies it to dance video action recognition with higher effectiveness and better recognition accuracy, the study aims to provide a more effective technical means for the development of dance education and performance field.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200247"},"PeriodicalIF":0.0,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143838543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Practical exploration of English translation activity courses in universities under the background of artificial intelligence
Pub Date : 2025-04-14 DOI: 10.1016/j.sasc.2025.200249
Jiexia Dai
In recent years, with increasing international exchange, cultivating English translation talent has become particularly important. This study explores the application of artificial intelligence technology in college English translation activity courses and compares traditional teaching methods with AI-based methods. Through comparative analysis of the improvement in translation ability across two groups of students, the practical value and feasibility of AI technology in translation teaching are evaluated, providing reference and guidance for future translation education. This paper uses a neural network machine translation model to solve practical problems in implementing college English translation activities. Intelligent translation through a Recurrent Neural Network (RNN) can better leverage the role of artificial intelligence, providing new methods for the sustainable development of translation activity classes in universities. Group A adopted the traditional English translation teaching method, relying mainly on teachers' explanations and students' practice. Course content included interpretation of translation theory, text translation practice, and discussion and analysis of translation cases, using mainly paper textbooks and reference books along with some online translation resources, without any application of artificial intelligence technology. Group B integrated artificial intelligence into the teaching method, using a neural network translation model (such as an RNN) to assist teaching. In addition to traditional translation theory and practice, the course included training and practice in the use of AI translation tools: AI translation software, an online translation platform, and a specially developed AI-based translation practice platform integrating an RNN translation model, designed to assist teaching and improve students' translation skills. The experimental results indicate that the average translation scores of Group A and Group B in the first 6 months of the experiment were 65.77 and 65.71, respectively; after 6 months, the scores were 68.57 and 82.69. The results show that translation teaching in the context of artificial intelligence significantly improves the efficiency and accuracy of students' translation and enhances interaction and learning interest. This finding shows that AI-based translation teaching is not only feasible and effective but can also provide new ideas for the modernization and intelligent development of translation education, improving students' practical translation ability and professional competitiveness.
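The RNN encoder at the core of such a neural machine translation model can be sketched as a minimal Elman cell in numpy. The vocabulary size, hidden width, and random weights below are stand-ins; a real system learns the weights and adds a decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Elman RNN forward pass over a token sequence, as used in
# encoder-decoder machine translation. Weights are random stand-ins.
vocab, hidden = 10, 8
Wxh = rng.normal(scale=0.1, size=(vocab, hidden))   # input-to-hidden
Whh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden-to-hidden
b = np.zeros(hidden)

def encode(token_ids):
    """h_t = tanh(x_t Wxh + h_{t-1} Whh + b); return the final hidden
    state, which summarizes the whole source sentence."""
    h = np.zeros(hidden)
    for t in token_ids:
        x = np.zeros(vocab)
        x[t] = 1.0                  # one-hot embedding of the token
        h = np.tanh(x @ Wxh + h @ Whh + b)
    return h

h = encode([3, 1, 4, 1, 5])
print(h.shape)  # (8,)
```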
{"title":"Practical exploration of English translation activity courses in universities under the background of artificial intelligence","authors":"Jiexia Dai","doi":"10.1016/j.sasc.2025.200249","DOIUrl":"10.1016/j.sasc.2025.200249","url":null,"abstract":"<div><div>In recent years, along with the increasing international exchange, cultivating English translation talents has become particularly important. This study aims to explore the application effect of artificial intelligence technology in college English translation activity courses and compare the differences between traditional teaching methods and AI-based methods. Through comparative analysis of the improvement of the translation ability of the two groups of students, the practical value and feasibility of AI technology in translation teaching is evaluated to provide reference and guidance for future translation education. This paper used the neural network machine translation model in artificial intelligence to solve the practical problems in implementing of college English translation activities. Intelligent translation through Recurrent Neural Network (RNN) can better leverage the role of artificial intelligence, which provides new methods for the sustainable development of translation activity classes in universities. Group A adopts the traditional English translation teaching method, mainly relying on teachers' explanations and students' practice. Course content includes interpretation of translation theory, text translation practice, and discussion and analysis based on translation cases. It mainly uses paper textbooks and reference books, as well as some online translation resources, but does not involve the application of artificial intelligence technology. Group B integrates artificial intelligence technology into English translation teaching method, using neural network translation model (such as RNN) to assist teaching. 
In addition to traditional translation theory and practice, the course also includes training and practice on the use of AI translation tools. Using AI translation software, an online translation platform and a specially developed AI-based translation practice platform, the platform is an artificial intelligence translation model integrated with RNN, designed to assist teaching and improve students' translation skills. The experimental results of this article indicate that the average translation scores of Group A and Group B students in the first 6 months of the experiment were 65.77 and 65.71, respectively. The scores of Group A and Group B students after the experiment for 6 months were 68.57 and 82.69, respectively. The results show that translation teaching in the context of artificial intelligence significantly improves the efficiency and accuracy of students' translation, and enhances the interaction and interest of learning. This finding shows that the AI-based technology in translation teaching is not only feasible and effective, but also can provide new ideas for the modernization and intelligent development of translation education, and improve students' practical translation ability and professional competitiveness.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200249"},"PeriodicalIF":0.0,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stacked generalisation for improved prediction of joint shear in beam-column joints
Pub Date : 2025-04-12 DOI: 10.1016/j.sasc.2025.200246
Shruti Shekhar Palkar, T. Palanisamy
Understanding shear and bond mechanisms is crucial for addressing the complexities and uncertainties inherent in designing beam-column joints, especially under seismic conditions. This study proposes machine learning as a viable approach for predicting joint shear strength. It analyses a dataset of 670 beam-column joints (312 interior and 358 exterior), with eight input parameters covering each joint's cross-sectional dimensions, reinforcement details, and material properties. Machine learning algorithms, including K-Nearest Neighbours, Support Vector Regressor, Decision Tree, Multilayer Perceptron, Random Forest, Extreme Gradient Boosting, Adaptive Neuro-Fuzzy Inference System, Elastic Net, and Ridge, are fine-tuned using Optuna and evaluated on performance metrics such as Root Mean Squared Error (RMSE) and R-squared. The study evaluates these models for predicting joint shear strength in interior (IBCJ) and exterior (EBCJ) beam-column joints, emphasizing the effectiveness of ensemble techniques such as stacking regressors. The Stacking Regressor consistently outperformed traditional models and design codes (CSA, ACI, AIJ, IS, GB, EN, and NZS), achieving RMSE values of 1.1407–1.2170 (R²: 0.8180–0.84) for IBCJ and an RMSE of 1.02 (R²: 0.84) for EBCJ, compared with code errors exceeding 1.8. SHAP analysis revealed that concrete compressive strength (importance: 0.85 for IBCJ, 0.89 for EBCJ), column reinforcement percentage, and top reinforcement percentage were the most influential features. These findings highlight the potential of ML models to capture complex, non-linear structural behaviour more accurately than conventional methods.

Keywords: Beam-Column Joint, Joint Shear Stress, Ensemble Techniques, Feature Importance, Stacking Regressor, Algorithms, Machine Learning
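Stacked generalisation itself can be sketched with numpy alone: base learners produce out-of-fold predictions, and a meta-learner is fit on those predictions. The synthetic data, least-squares base learners, and 2-fold split below are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic regression data; each base learner sees only half the features,
# so neither can fit well alone, but the stack can combine them.
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=100)

def lstsq_fit(X, y):
    """Ordinary least squares with an intercept column."""
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def lstsq_predict(X, w):
    return np.c_[X, np.ones(len(X))] @ w

# Out-of-fold predictions from each base learner (2-fold for brevity),
# so the meta-learner never sees a base prediction made on training data.
base_feats = [slice(0, 2), slice(2, 4)]
oof = np.zeros((len(X), len(base_feats)))
folds = [(np.arange(50), np.arange(50, 100)),
         (np.arange(50, 100), np.arange(50))]
for j, cols in enumerate(base_feats):
    for train, test in folds:
        w = lstsq_fit(X[train, cols], y[train])
        oof[test, j] = lstsq_predict(X[test, cols], w)

# Meta-learner stacks the base predictions into a final estimate.
w_meta = lstsq_fit(oof, y)
stacked = lstsq_predict(oof, w_meta)
rmse = np.sqrt(np.mean((stacked - y) ** 2))
print(round(rmse, 3))
```

The out-of-fold step is the essential design choice: fitting the meta-learner on in-sample base predictions would let it exploit base-learner overfitting.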
{"title":"Stacked generalisation for improved prediction of joint shear in beam-column joints","authors":"Shruti Shekhar Palkar,&nbsp;T. Palanisamy","doi":"10.1016/j.sasc.2025.200246","DOIUrl":"10.1016/j.sasc.2025.200246","url":null,"abstract":"<div><div>Understanding shear and bond mechanisms becomes crucial in addressing the complexities and uncertainties inherent in designing beam-column joints, especially under seismic conditions. This study proposes employing machine learning as a viable approach for predicting joint shear strength. The study analyses a dataset containing 670 beam-column joints (312 interior and 358 exterior beam-column joints) statistical summaries featuring eight input parameters encompassing the joint's cross-sectional dimensions, reinforcement details, and material properties. Machine learning algorithms, including K-Nearest Neighbours, Support Vector Regressor, Decision Tree, Multilayer Perceptron, Random Forest, Extreme Gradient Boosting, Adaptive Neuro Fuzzy Inference System, Elastic Net, and Ridge, are fine-tuned using Optuna and evaluated based on performance metrics such as Root Mean Squared Error and R-squared. The study evaluates the performance of machine learning models for predicting joint shear strength in interior (IBCJ) and exterior (EBCJ) beam-column joints, emphasizing the effectiveness of ensemble techniques like stacking regressors. The Stacking Regressor consistently outperformed traditional models and design codes (CSA, ACI, AIJ, IS, GB, EN, and NZS), achieving RMSE values of 1.1407–1.2170 (R²: 0.8180–0.84) for IBCJ and RMSE of 1.02 (R²: 0.84) for EBCJ, compared to code errors exceeding 1.8. SHAP analysis revealed that concrete compressive strength (importance: 0.85 for IBCJ, 0.89 for EBCJ), column reinforcement percentage, and top reinforcement percentage were the most influential features. 
These findings highlight the potential of ML models to capture complex, non-linear structural behaviour more accurately than conventional methods. Keywords: Beam-Column Joint, Joint Shear Stress, Ensemble techniques, Feature Importance, Stacking Regressor, Algorithms, Machine learning</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200246"},"PeriodicalIF":0.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Analysis of community groups in large dynamic social network graphs through fuzzy computation
Pub Date : 2025-04-12 DOI: 10.1016/j.sasc.2025.200239
Ubaida Fatima , Saman Hina , Muhammad Wasif
This study presents a fuzzy computing approach for dynamic community detection, termed Fuzzy Time-variant Community Groups (FTCG), utilizing fuzzy weighting techniques to track evolving network structures over time. The methodology was validated on a small 5-node graph and applied to large-scale datasets, including Amazon product networks, Bitcoin transactions, and Cellular Phone Network data. Two novel link-weighting techniques were introduced to enhance the detection of temporal community changes, while a Fuzzy Modularity measure was proposed to evaluate community quality. The impact of varying threshold values was analyzed, demonstrating how different thresholds influence community detection outcomes. Experimental results confirm the approach's effectiveness in capturing network dynamics, particularly in the Bitcoin and Cellular datasets, proving its robustness in Social Network Analysis (SNA) and its potential for informed decision-making in evolving systems.
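One way fuzzy link weighting for a time-variant graph might look is sketched below. The exponential decay and probabilistic-sum aggregation are assumptions for illustration, not the paper's exact FTCG formulation:

```python
import math

# Illustrative fuzzy time-decay weighting for links in a dynamic graph:
# recent interactions get membership near 1, old ones fade toward 0.
# The decay form and half_life default are assumptions, not from the paper.

def fuzzy_link_weight(age_in_days: float, half_life: float = 30.0) -> float:
    """Exponential-decay membership in (0, 1] for a link of a given age."""
    return math.exp(-math.log(2) * age_in_days / half_life)

def edge_membership(ages):
    """Aggregate repeated interactions between two nodes into one fuzzy
    weight via the probabilistic sum (fuzzy OR): m = m + w - m*w."""
    m = 0.0
    for a in ages:
        w = fuzzy_link_weight(a)
        m = m + w - m * w
    return m

print(round(fuzzy_link_weight(30.0), 2))  # 0.5 at exactly one half-life
```

Thresholding such memberships then yields the crisp snapshots whose sensitivity the study analyzes.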
{"title":"Analysis of community groups in large dynamic social network graphs through fuzzy computation","authors":"Ubaida Fatima ,&nbsp;Saman Hina ,&nbsp;Muhammad Wasif","doi":"10.1016/j.sasc.2025.200239","DOIUrl":"10.1016/j.sasc.2025.200239","url":null,"abstract":"<div><div>This study presents a fuzzy computing approach for dynamic community detection, termed Fuzzy Time-variant Community Groups (FTCG), utilizing fuzzy weighting techniques to track evolving network structures over time. The methodology was validated on a small 5-node graph and applied to large-scale datasets, including Amazon product networks, Bitcoin transactions, and Cellular Phone Network data. Two novel link-weighting techniques were introduced to enhance the detection of temporal community changes, while a Fuzzy Modularity measure was proposed to evaluate community quality. The impact of varying threshold values was analyzed, demonstrating how different thresholds influence community detection outcomes. Experimental results confirm the approach's effectiveness in capturing network dynamics, particularly in the Bitcoin and Cellular datasets, proving its robustness in Social Network Analysis (SNA) and its potential for informed decision-making in evolving systems.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200239"},"PeriodicalIF":0.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143838541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design of smart government IoT management system based on cloud platform
Pub Date : 2025-04-11 DOI: 10.1016/j.sasc.2025.200234
Wei Fu
The underground pipe network and road lamps ensure the normal, orderly operation of urban life. The study starts with the intelligent control of urban street lights (SLs) and the detection of underground pipeline water levels and well-cover displacement, using sensors in the urban IoT system to collect environmental data from SL nodes and underground pipeline data. A DAG-SVM six-class dimming classification model is trained in advance on data from the SL nodes. Based on the classification result, the dimming level of each street light is determined, and SL illumination is changed automatically through the IoT center. The underground pipeline data are used to monitor groundwater and well-cover equipment so that problems can be resolved in time, avoiding issues such as delayed drainage when the urban flood season arrives. In summary, the research builds an IoT management system (IoTMS) on a cloud platform. Experimental analysis shows the management system saves 29.3 % of energy consumption compared with manually controlled output, with a water level detection error of <3.5 %. The system realizes the intelligent management of urban infrastructure, which has positive significance for urban energy conservation and flood-season risk avoidance.
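The DAG-SVM decision procedure, which runs one pairwise test per step and eliminates the losing class until a single dimming class remains, can be sketched in plain Python. The distance-based pairwise stub below stands in for trained per-pair SVMs, and the six dimming levels are illustrative:

```python
# Sketch of a DAG-SVM decision over N classes: N(N-1)/2 pairwise
# classifiers arranged as a decision list. Each step tests the first and
# last remaining classes and drops the loser; N-1 tests reach a decision.

def dag_svm_predict(x, classes, pairwise_decide):
    """Walk the DAG: repeatedly test (first, last) of the remaining
    classes and eliminate the loser until one class is left."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise_decide(a, b, x)
        if winner == a:
            remaining.pop()      # b eliminated
        else:
            remaining.pop(0)     # a eliminated
    return remaining[0]

# Stub in place of trained SVMs: pick whichever dimming level is closer
# to the measured brightness demand (hypothetical percent-output classes).
levels = [0, 20, 40, 60, 80, 100]
decide = lambda a, b, x: a if abs(a - x) <= abs(b - x) else b

print(dag_svm_predict(37, levels, decide))  # nearest level: 40
```

In the real system each `pairwise_decide` would be a binary SVM trained on the SL-node environmental features rather than a distance rule.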
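The DAG-SVM mentioned above is a standard construction: one binary SVM per class pair, arranged in a decision DAG that eliminates one candidate class per comparison. A minimal scikit-learn sketch follows; the abstract does not specify the six dimming classes or the lamp-node features, so the structure here is generic rather than the paper's exact model:

```python
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

class DAGSVM:
    """Decision-DAG multi-class SVM built from k(k-1)/2 pairwise classifiers."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(y)  # sorted class labels
        self.clfs_ = {}
        for a, b in combinations(self.classes_, 2):
            mask = (y == a) | (y == b)
            self.clfs_[(a, b)] = SVC(kernel="rbf", gamma="scale").fit(X[mask], y[mask])
        return self

    def _predict_one(self, x):
        # Walk the DAG: compare first vs last remaining candidate, drop the loser.
        candidates = list(self.classes_)
        while len(candidates) > 1:
            a, b = candidates[0], candidates[-1]
            winner = self.clfs_[(a, b)].predict([x])[0]
            candidates.remove(a if winner == b else b)
        return candidates[0]

    def predict(self, X):
        return np.array([self._predict_one(x) for x in np.asarray(X)])
```

With six dimming classes this trains 15 pairwise SVMs but needs only five comparisons per prediction, which is why the DAG layout suits low-power IoT control loops.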
Random-forest-based task pricing model and task-accomplished model for crowdsourced emergency information acquisition 基于随机森林的众包应急信息获取任务定价模型与任务完成模型
Pub Date : 2025-04-11 DOI: 10.1016/j.sasc.2025.200235
Wenxiang Li, Shengqun Chen, Lijin Lin, Li Chen
Over the past decade, crowdsourcing has emerged as a powerful tool in various scenarios and has created a growing need for crowdsourced emergency management. An important aspect of emergency management is the acquisition of crowdsourced emergency information, so we propose a crowdsourced framework to study it. During crowdsourcing, the public is recruited to collect emergency information such as photos and videos. A considerable challenge in crowdsourced emergency information acquisition is therefore to attract the public to this work efficiently, and the task price is a significant factor influencing public participation. Accordingly, a random-forest-based task pricing model and task-accomplished model are computed from the task attributes and neighboring-worker attributes. In addition, the making money by taking photos dataset is used to simulate the proposed method in scikit-learn. Our simulation results demonstrate that, compared with traditional regression models, the proposed method reduces the Mean Squared Error (MSE) of task pricing by 44.16 % on average and increases the accuracy of task-accomplished prediction by 17.71 % on average. This shows that the proposed method achieves high accuracy and efficiency in crowdsourced emergency information acquisition and can provide valuable references for future studies of emergency information acquisition strategies.
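The abstract names only the model family, not the feature set or pricing rule, so both are invented below purely for illustration. A minimal scikit-learn sketch of regressing a task price from task and neighboring-worker attributes:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical attributes: [distance from city centre (km), nearby worker count].
X = rng.uniform([0.0, 0.0], [20.0, 50.0], size=(300, 2))
# Toy ground-truth rule: remote tasks with few nearby workers need higher prices.
y = 65.0 + 1.2 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0.0, 1.0, 300)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:250], y[:250])                       # train on the first 250 tasks
mse = mean_squared_error(y[250:], model.predict(X[250:]))  # evaluate on the rest
```

The same skeleton works for the task-accomplished model by swapping in `RandomForestClassifier` and an accomplished/unaccomplished label.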
Combining state-of-the-art pre-trained deep learning models: A novel approach for Bangla Sign Language recognition using Max Voting Ensemble 结合最先进的预训练深度学习模型:利用最大投票集合识别孟加拉手语的新方法
Pub Date : 2025-04-08 DOI: 10.1016/j.sasc.2025.200230
Md. Humaun Kabir , Abu Saleh Musa Miah , Md. Hadiuzzaman , Jungpil Shin
Sign language is a crucial medium of communication for individuals with hearing impairments. Recently, many researchers have been working to develop automatic sign language recognition systems for English, Arabic, and Japanese to ease communication between deaf and non-deaf communities. However, few systems have been developed for Bangla sign language (BdSL), and most of the existing ones struggle to achieve satisfactory performance. While recent advances in deep learning have dramatically improved image classification tasks, including sign language recognition, ensemble methods offer a pathway for further enhancing BdSL identification accuracy. This study introduces a Max Voting Ensemble approach for robust BdSL recognition. We have incorporated a range of state-of-the-art pre-trained deep neural networks, including Xception, InceptionV3, DenseNet121, ResNet50, and MobileNetV2. These models have been extensively trained on BdSL datasets, achieving individual accuracies ranging from 92.96% to 98.81%. Our method leverages the synergistic capabilities of these models by combining their complementary features to elevate classification performance further. In our approach, input images undergo preprocessing for model compatibility. The ensemble integrates the pre-trained models with their architectures and weights preserved. For each Bangla sign under examination, every model produces a prediction; these are then aggregated using the Max Voting Ensemble technique, with the majority-voted class serving as the final classification, verified through comprehensive testing on a diverse dataset. Our ensemble outperformed the individual models, attaining test accuracies of 96.62% and 99.92% on the BdSL-38 and BDSL-49 datasets respectively, demonstrating superior BdSL recognition performance and reliability. We evaluated the effectiveness of our proposed method on both datasets to ensure its generalizability. Our ensemble method delivers a robust, reliable and effective tool for the classification of BdSL. By harnessing the power of advanced deep neural networks, we aim to support timely and accurate communication for the deaf and hard-of-hearing community.
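The max voting step described above is simple to state precisely: collect each model's predicted label for a sample and return the majority. A minimal NumPy sketch (ties are broken toward the smallest label here, which is one of several reasonable conventions and not something the abstract specifies):

```python
import numpy as np

def max_voting(predictions):
    """Hard majority vote.

    predictions: (n_models, n_samples) array of class labels.
    Returns one voted label per sample.
    """
    preds = np.asarray(predictions)
    voted = np.empty(preds.shape[1], dtype=preds.dtype)
    for j in range(preds.shape[1]):
        labels, counts = np.unique(preds[:, j], return_counts=True)
        voted[j] = labels[np.argmax(counts)]  # ties resolve to the smallest label
    return voted
```

In the setting above, `predictions` would be the stacked class outputs of Xception, InceptionV3, DenseNet121, ResNet50 and MobileNetV2 on the same batch of sign images.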
Research on automatic processing system of financial information in colleges and universities based on NLP-KG fusion algorithm 基于NLP-KG融合算法的高校财务信息自动处理系统研究
Pub Date : 2025-04-06 DOI: 10.1016/j.sasc.2025.200224
Jin Lei , Mengke Wei , Yiwen She , Weixia Wang
At a time when information technology is deeply embedded in the management of colleges and universities and the demand for automatic financial information processing is urgent, an automatic financial information processing system based on an NLP-KG fusion algorithm has emerged. It can process massive data efficiently and improve work efficiency and scientific decision-making, but its data security and privacy protection are critical. Traditional financial information processing in colleges and universities relies on manual entry and review, which is inefficient, error-prone and scattered; early automation tools suffer from insufficient semantic understanding and correlation analysis, as well as data silos. Financial data involves sensitive information such as fund receipts and expenditures, faculty and staff salaries, and student tuition, and relevant national laws and regulations impose strict requirements on its security and privacy protection. The existing mechanisms ensure security and privacy from multiple aspects: access control (multi-factor identity authentication and role-based permission management), data encryption (the SSL/TLS protocol for transmission and the AES algorithm for storage), data backup and recovery (regular backups, off-site storage, and recovery drills), and audit and monitoring (detailed recording of operations and real-time monitoring of network traffic). The NLP-KG fusion algorithm further improves the system's data security and privacy protection through semantic understanding of data to identify potential risks, intelligently adjust access permissions, and realize intelligent retrieval and analysis of encrypted data. In short, while the system brings opportunities, universities need to continuously improve their data security and privacy protection mechanisms to cope with the complex cybersecurity environment and financial management needs.
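Of the mechanisms listed above, role-based permission management is the easiest to make concrete. The sketch below is a deny-by-default check with hypothetical roles and permissions invented for illustration; the paper's actual permission scheme is not described:

```python
# Hypothetical role -> permission mapping for a university finance system.
ROLE_PERMISSIONS = {
    "accountant": {"view_budget", "enter_voucher"},
    "auditor": {"view_budget", "view_audit_log"},
    "student": {"view_own_tuition"},
}

def can_access(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The "intelligently adjust access permissions" step described above would then amount to rewriting entries in such a mapping when the NLP-KG layer flags a risk, rather than changing the check itself.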