Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.03.287
Vidhya S , Balaji M , Kamaraj V
Disaster relief, police work, and environmental monitoring all benefit from satellite images. For these applications, objects and infrastructure in the images have traditionally been identified manually. Because of the large areas that need to be searched and the limited number of available analysts, automation is essential. However, the accuracy and dependability of existing object recognition and classification algorithms render them inadequate for the task. Deep learning, a family of machine learning algorithms, has shown immense potential for automating such tasks, and convolutional neural networks (CNNs) in particular have been successful in image recognition. Here, a CNN combined with a particle swarm optimization (PSO) classifier is used to develop efficient algorithms for classifying satellite images. The results of this classifier model are better than those of existing approaches.
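The abstract does not give the PSO formulation used, but the general idea can be sketched: a swarm of candidate classifier weights moves through parameter space, each particle pulled toward its personal best and the global best solution. The toy data, fitness function, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

# Tiny synthetic feature set standing in for per-pixel satellite features:
# (feature1, feature2) -> class label 0/1.
DATA = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.25), 0),
        ((0.8, 0.9), 1), ((0.9, 0.7), 1), ((0.75, 0.85), 1)]

def error(w):
    """Fraction of samples misclassified by the linear rule w0*x0 + w1*x1 + w2 > 0."""
    wrong = 0
    for (x0, x1), label in DATA:
        pred = 1 if w[0] * x0 + w[1] * x1 + w[2] > 0 else 0
        wrong += (pred != label)
    return wrong / len(DATA)

def pso(n_particles=20, iters=50, w_inertia=0.7, c1=1.5, c2=1.5):
    """Minimize `error` over 3-D weight vectors with canonical PSO updates."""
    dim = 3
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_err = [error(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity blends inertia, pull to personal best, pull to global best.
                vel[i][d] = (w_inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            e = error(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], e
                if e < gbest_err:
                    gbest, gbest_err = pos[i][:], e
    return gbest, gbest_err

weights, err = pso()
```

In the paper's pipeline a CNN would supply the features and PSO would tune the final classifier; here a fixed toy feature set keeps the sketch self-contained.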
Title: Satellite Image Classification using CNN with Particle Swarm Optimization Classifier
Journal: Procedia Computer Science, Volume 233, Pages 979-987
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924006471/pdf?md5=698c17182cc4031390547607af162f68&pid=1-s2.0-S1877050924006471-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.03.270
Prabith GS , Rohit Narayanan M , Arya A , Aneesh Nadh R , Binu PK
This paper presents a detailed investigation of the BiT5 bidirectional NLP model for detecting vulnerabilities within codebases. The study addresses the pressing need for techniques that enhance software security by effectively identifying vulnerabilities. Methodologically, the paper introduces BiT5, designed specifically for code analysis and vulnerability detection, and covers dataset collection, preprocessing steps, and model fine-tuning.
The key findings underscore BiT5’s efficacy in pinpointing vulnerabilities within code snippets, notably reducing both false positives and false negatives. This research contributes by offering a methodology for leveraging BiT5 in vulnerability detection, thus significantly bolstering software security and mitigating risks associated with code vulnerabilities.
Title: BiT5: A Bidirectional NLP Approach for Advanced Vulnerability Detection in Codebase
Journal: Procedia Computer Science, Volume 233, Pages 812-821
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924006306/pdf?md5=0ab754addab1b8b10989377ccb28b2ff&pid=1-s2.0-S1877050924006306-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.03.240
Maganti Jahnavi , D. Rajeswara Rao , Amballa Sujatha
Super-resolution interpolation is a popular technique used to increase an image's resolution beyond its original size. However, several interpolation techniques are available for super-resolution, and determining which to use for a given image can be challenging. The aim of this project is to perform a comparative study of interpolation techniques for super-resolution and to identify the best technique for different images. It starts by collecting a dataset of images with different characteristics, such as noise, blur, and contrast; the images are then preprocessed, and interpolation techniques such as nearest-neighbor, bilinear, bicubic, Lanczos, and spline are applied. The super-resolved images are evaluated and compared using metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Opinion Score (MOS). Based on the results of the comparative study, conclusions about the strengths and weaknesses of each method are drawn, and the most appropriate interpolation technique for a specific application is identified.
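Two of the building blocks named above are easy to show concretely: nearest-neighbor upscaling (the simplest interpolation in the comparison) and the PSNR metric used to score results. This is a minimal stdlib sketch of the general definitions, not the study's implementation:

```python
import math

def psnr(ref, img, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two same-sized grayscale images
    given as lists of rows: 10*log10(max_val^2 / MSE)."""
    mse = sum((r - i) ** 2 for row_r, row_i in zip(ref, img)
              for r, i in zip(row_r, row_i)) / (len(ref) * len(ref[0]))
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def upscale_nearest(img, factor):
    """Nearest-neighbor upscaling: each output pixel copies the closest
    source pixel (bilinear/bicubic would instead blend neighbors)."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

small = [[0, 255], [255, 0]]       # a 2x2 checkerboard
big = upscale_nearest(small, 2)    # 4x4, blocky as expected for nearest-neighbor
```

In a comparison like the paper's, each interpolated result would be scored with `psnr` (and SSIM) against a ground-truth high-resolution image.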
Title: A Comparative Study Of Super-Resolution Interpolation Techniques: Insights For Selecting The Most Appropriate Method
Journal: Procedia Computer Science, Volume 233, Pages 504-517
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924005994/pdf?md5=bd74e61bb0c0ed82c6a9c12cef4553d5&pid=1-s2.0-S1877050924005994-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.03.232
S Sarath, Jyothisha J Nair
In the current pandemic scenario, a non-invasive method for determining a neonate's respiratory rate and categorizing it with a deep learning technique is highly pertinent. Acquiring an infrared neonatal dataset for detecting and classifying respiratory syndromes is challenging, and the limited number of infrared videos and images representing different types of syndromes is a tremendous obstacle to the accuracy of a deep learning model. This paper uses Deep Convolutional Generative Adversarial Networks (DCGAN) with a gradient penalty for data augmentation. The discriminator in a standard DCGAN architecture is a convolutional neural network (CNN) that receives an image as input and outputs a single scalar indicating the likelihood that the input image is real rather than fake. The gradient penalty adds a regularization term to the loss function; this modification helps to stabilize training by preventing mode collapse and yields higher-quality generated images. The augmented dataset made the original imbalanced dataset more balanced and increased its size. When the accuracies of deep learning models trained on the original and on the balanced augmented neonatal datasets were compared in this work, the model trained on the balanced augmented dataset performed better.
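The gradient penalty referred to above is commonly the WGAN-GP term lambda * (||grad_x D(x_hat)|| - 1)^2, evaluated at points interpolated between real and fake images. As a hedged illustration (not the paper's network), a toy linear discriminator D(x) = w . x has input gradient w everywhere, so the penalty can be computed in closed form:

```python
import math
import random

random.seed(1)

def gradient_penalty_linear(w, real, fake, lam=10.0):
    """WGAN-GP-style penalty lam * (||grad_x D(x_hat)|| - 1)^2 for D(x) = w . x.
    For a linear D the input gradient equals w everywhere; the interpolated
    point x_hat only matters for general (nonlinear) discriminators, but is
    shown here to mirror the usual recipe."""
    eps = random.random()
    x_hat = [eps * r + (1 - eps) * f for r, f in zip(real, fake)]  # interpolation
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2

# A unit-norm gradient incurs zero penalty; larger gradients are penalized.
penalty = gradient_penalty_linear([0.6, 0.8], real=[1.0, 2.0], fake=[0.0, 0.0])
```

In an actual DCGAN this term is added to the discriminator loss each step, with the gradient obtained by automatic differentiation rather than in closed form.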
Title: Detection and Classification of Respiratory Syndromes in Original and modified DCGAN Augmented Neonatal Infrared Datasets
Journal: Procedia Computer Science, Volume 233, Pages 422-431
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S187705092400591X/pdf?md5=f9462c940ce4aebbd23edbb4db9f4955&pid=1-s2.0-S187705092400591X-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.03.227
A. Subeesh , Naveen Chauhan
Leaf miner pests pose a serious threat to the productivity, profitability, and sustainability of soil-less tomato cultivation systems. Early and accurate identification of leaf miner infestation is crucial for timely pest control measures. This study presents an efficient approach using attention-based convolutional neural networks for timely identification of this pest infestation. The proposed approach uses both spatial and channel attention modules to enhance the feature extraction capability of the convolutional neural network. The custom model was trained on an image dataset collected from tomatoes grown in a hydroponic environment, and hyperparameters were tuned to obtain optimal model performance. The experimental results show that the proposed attention-based CNN model achieved an overall accuracy of 97.87%, precision of 97.10%, recall of 98.53%, and F1-score of 97.81%. Additionally, the model was compared with pre-trained models, viz. AlexNet, VGG16, and VGG19, and was found to outperform these state-of-the-art CNN models owing to its improved feature extraction capability. The efficiency of the model underlines its potential to be deployed as part of automated pest monitoring systems in hydroponic environments. This work contributes to the development of computer vision and deep learning-based solutions for precision agriculture applications.
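Channel attention, one of the two modules named above, typically works by squeezing each channel to a summary statistic, passing it through a learned gate, and rescaling the channel. The sketch below shows that general pattern with a single scalar gate weight per channel standing in for the small learned network; it is an assumption about the technique, not the authors' exact module:

```python
import math

def channel_attention(feature_maps, gate_weights):
    """Rescale each channel of a C x H x W feature map by sigmoid(w_c * mean_c).
    feature_maps: list of C channels, each a list of rows of floats.
    gate_weights: one scalar per channel, standing in for the learned MLP
    used in real channel-attention (e.g. squeeze-and-excitation) modules."""
    out = []
    for ch, w in zip(feature_maps, gate_weights):
        mean = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))  # squeeze
        gate = 1.0 / (1.0 + math.exp(-w * mean))                     # excite
        out.append([[v * gate for v in row] for row in ch])          # rescale
    return out

fmap = [[[1.0, 1.0], [1.0, 1.0]],   # channel 0: uniformly active
        [[0.0, 0.0], [0.0, 0.0]]]   # channel 1: silent channel
scaled = channel_attention(fmap, gate_weights=[2.0, 2.0])
```

Spatial attention is the complementary operation: a per-location gate computed across channels instead of a per-channel gate computed across locations.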
Title: Biotic Stress Management in Soil-Less Agriculture Systems: A Deep Learning Approach for Identification of Leaf Miner Pest Infestation
Journal: Procedia Computer Science, Volume 233, Pages 371-380
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924005866/pdf?md5=f0a7ecdfe4b4db16ecf2f76581829799&pid=1-s2.0-S1877050924005866-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.06.040
Lucas Schuhmacher, Jelle Kübler, Gabriel Wilkes, Martin Kagerbauer, Peter Vortisch
Shared mobility solutions such as bike sharing services play a key role in reducing greenhouse gas emissions in urban areas. In this paper, we present an approach to model station-based bike sharing in the multi-modal agent-based travel demand model mobiTopp. We compare different implementations of how agents choose their bike pick-up and drop-off stations. In addition to two variations of distance minimization, we also present a gravity approach to represent the reliability of the system. By also comparing different behavioral attitudes of the agents towards walking, a total of six scenarios were implemented and tested. The presented approach makes it easy to test scenarios with varying numbers of bikes and stations. We apply our algorithm to a model of the city of Hamburg, Germany, in which the mobility behavior of 1.9 million agents is modeled. Our simulations show plausible results: the average distances, utilization shares of each station, and other parameters match values from the actual service. While the different strategies result in significantly different access times, and provide valuable new insights and options for parameterization, differences in resulting demand are small. Overall, this model provides new methods to simulate bike sharing in travel demand models and thus helps to simulate an important future mode of transport.
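Gravity-style station choice is commonly formulated as a weight proportional to attractiveness divided by a power of distance, normalized into choice probabilities. The function below is a hypothetical sketch of that idea with available bikes as the attractiveness term; the paper's exact specification and parameters are not given in the abstract:

```python
def station_choice_probabilities(stations, beta=2.0):
    """Gravity-style choice: weight = available_bikes / distance**beta,
    normalized into probabilities. `stations` maps name -> (distance_m, bikes).
    Both the attractiveness term and beta are illustrative assumptions."""
    weights = {name: (bikes / (dist ** beta)) if dist > 0 else float("inf")
               for name, (dist, bikes) in stations.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

probs = station_choice_probabilities({
    "near_small": (100.0, 2),   # close, but only two bikes
    "far_large": (200.0, 8),    # twice as far, four times the bikes
})
```

Unlike pure distance minimization, such a rule lets a well-stocked station farther away compete with a nearly empty one nearby, which is how it captures system reliability.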
Title: Comparing Implementation Strategies of Station-Based Bike Sharing in Agent-Based Travel Demand Models
Journal: Procedia Computer Science, Volume 238, Pages 396-403
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924012766/pdf?md5=fb4e06587b4b8e2c8abc6caaa3d250aa&pid=1-s2.0-S1877050924012766-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.06.067
Pedro Martins , Ricardo Cláudio , Francisco Soares , Jorge Leitão , Paulo Váz , José Silva , Maryam Abbasi
This article explores the research and development undertaken as part of a Master's degree in Computer Engineering, with a primary focus on enhancing control mechanisms for natural wood drying. While this method is known for its cost-effectiveness in terms of labor and energy, it suffers from slow and unstable drying cycles. The project's objective is to implement an intelligent control system that significantly improves the monitoring and recording of humidity levels in each wooden stack. Additionally, the system can predict humidity based on data sourced from a weather forecasting API. The proposed solution is a three-layer system: data collection, relay, and analysis. In the data collection layer, low-power devices based on a Raspberry Pi measure humidity levels in individual wood stacks and transmit the data via Bluetooth Low Energy to the next layer. The data relay layer is an Android application that aggregates, normalizes, and transmits the collected data, and provides users with visualization tools for comprehensive data understanding. The data storage and analysis layer, developed with Django, serves as the back-end, offering management of stacks, sensors, and overall data, together with analysis capabilities; it can generate humidity forecasts based on real-time weather information. The implementation of this intelligent control system enables accurate insight into humidity levels, triggering alerts for any anomalies during the drying process. This reduces the need for constant on-site supervision, optimizes work efficiency, lowers costs, and eliminates repetitive tasks.
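The kind of anomaly alerting the analysis layer performs might look like a simple threshold check per stack. The function, field names, and thresholds below are illustrative assumptions, not details from the paper:

```python
def humidity_alerts(readings, low=8.0, high=18.0):
    """Flag wood stacks whose latest moisture reading leaves [low, high] percent.
    `readings` maps stack id -> chronological list of readings; the stack ids
    and threshold band are hypothetical stand-ins for the system's settings."""
    alerts = {}
    for stack, series in readings.items():
        latest = series[-1]
        if latest < low:
            alerts[stack] = "over-dried"
        elif latest > high:
            alerts[stack] = "too humid"
    return alerts

# stack-1 has dried into range; stack-2 is still above the upper bound.
alerts = humidity_alerts({"stack-1": [20.0, 17.5], "stack-2": [22.0, 19.0]})
```

In the described architecture, a check like this would run server-side in the Django layer on data relayed from the per-stack sensors.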
Title: Intelligent Control System for Wood Drying: Scalable Architecture, Predictive Analytics, and Future Enhancements
Journal: Procedia Computer Science, Volume 238, Pages 602-609
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924013036/pdf?md5=a3637c1906f5b489379d722ef24fe20a&pid=1-s2.0-S1877050924013036-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.05.192
Sadjad Bazarnovi , Abolfazl (Kouros) Mohammadian
Road traffic crashes are a significant public health concern, leading to substantial human and financial losses. Accurately predicting injury severity is crucial for optimizing rescue efforts and saving lives. This study applies several Machine Learning (ML) algorithms, namely Random Forest, Logistic Regression, XGBoost, and Support Vector Machine (SVM), to predict crash severity. The dataset spans 2015 to 2023 and comprises crash data from the City of Chicago, with a highly imbalanced ratio of non-severe to severe incidents (1000 to 1). To address the class imbalance, the study evaluates various data sampling methods, including oversampling, undersampling, and hybrid sampling. Model performance is assessed using AUC-ROC and recall to account for the limitations of accuracy on imbalanced datasets. Results reveal the inefficacy of conventional data sampling methods when data are highly imbalanced. Consequently, a novel approach was adopted, involving the random removal of observations before applying data sampling methods, leading to a significant improvement in model performance. SVM-SMOTE and ClusterCentroid emerge as the most effective resampling methods, and among all ML models, SVM demonstrates the best overall performance. The findings of this study aim to assist emergency responders in quickly evaluating the severity of an incident upon receiving a report.
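The core SMOTE idea behind the oversampling methods named above is to synthesize minority-class samples by interpolating between a minority point and one of its minority-class neighbors. This is a minimal sketch of that idea, not the SVM-SMOTE variant the study found most effective:

```python
import math
import random

random.seed(42)

def smote_like(minority, n_new):
    """Generate n_new synthetic minority samples: pick a minority point, find
    its nearest minority neighbor, and place a new point a random fraction of
    the way along the segment between them."""
    synthetic = []
    for _ in range(n_new):
        x = random.choice(minority)
        neighbor = min((p for p in minority if p is not x),
                       key=lambda p: math.dist(x, p))
        t = random.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, neighbor)))
    return synthetic

# Three severe-crash feature vectors stand in for the minority class.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_like(minority, n_new=4)
```

Undersampling methods such as ClusterCentroid work from the opposite direction, shrinking the majority class, here by replacing clusters of majority points with their centroids.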
Title: Addressing imbalanced data in predicting injury severity after traffic crashes: A comparative analysis of machine learning models
Journal: Procedia Computer Science, Volume 238, Pages 24-31
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924012304/pdf?md5=e90fafd5a07a25bc04896b6b427ed94b&pid=1-s2.0-S1877050924012304-main.pdf
Pub Date: 2024-01-01 | DOI: 10.1016/j.procs.2024.06.002
Daniel L. Jarvis , Gregory S. Macfarlane , Brynn Woolley , Grant G. Schultz
Recent research has shown the power of large-scale regional traffic simulations—such as MATSim—to model the systemic impacts and costs of capacity-reducing incidents. At the same time, observational studies have illustrated the potential for traffic Incident Management Teams (IMTs) to reduce these impacts and costs on a local scale, and mathematical optimization models have attempted to scale or locate such programs. In this research, we connect these two separate lines of scholarly inquiry by simulating the dynamic response of an IMT fleet to incidents arising on a metropolitan highway network. We introduce a MATSim module that handles stochastically generated incidents of varying severity, dispatches IMTs to clear the incidents based on path distance and availability, and measures the excess user costs the incidents impose. We apply this module in a scenario with data from the Salt Lake City, Utah metropolitan region, and demonstrate the potential use of the module through an illustrative experiment that increases the IMT fleet size over a collection of simulated incident days.
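Dispatch "based on path distance and availability" can be sketched as: among currently available teams, assign the one with the shortest route to the incident and mark it busy. The 1-D distances and team ids below are illustrative stand-ins for the module's network path distances:

```python
def dispatch(incident_location, teams):
    """Assign the nearest available IMT to an incident; returns a team id or
    None if no team is free. `teams` maps id -> (location, available).
    Distance here is 1-D route distance, a stand-in for the network path
    distance the MATSim module would compute."""
    candidates = [(abs(loc - incident_location), tid)
                  for tid, (loc, avail) in teams.items() if avail]
    if not candidates:
        return None
    _, best = min(candidates)
    teams[best] = (teams[best][0], False)  # mark the dispatched team busy
    return best

teams = {"IMT-1": (2.0, True), "IMT-2": (10.0, True), "IMT-3": (4.0, False)}
first = dispatch(5.0, teams)   # nearest available team
second = dispatch(5.0, teams)  # falls back to the next team once IMT-1 is busy
```

In the full module, teams would return to availability once an incident is cleared, and the simulated clearance times feed the excess-user-cost measurement.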
Title: Simulating Incident Management Team Response and Performance
Journal: Procedia Computer Science, Volume 238, Pages 91-96
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S1877050924012389/pdf?md5=16c83b7b3a8f07afa1c67fdf119e2b8d&pid=1-s2.0-S1877050924012389-main.pdf
Pub Date: 2024-01-01, DOI: 10.1016/j.procs.2024.06.045
Hanane Bahassi , Mohamed Azmi , Azeddine Khiat
In recent years, education has been closely linked to the continued development of technology, especially smart systems based on artificial intelligence with cognitive capabilities. The emphasis here is on the significant potential of cognitive computing in the domain of education and learning. This association implies a transformative impact on how education is delivered, accessed, and personalized through the integration of advanced cognitive systems into the learning and teaching process. This article provides an overview of the cognitive computing technologies used in education to enhance learning and teaching activities. The study identifies three conceptual architectures for these systems, Layered Architecture, Agent-Based Architecture, and Hybrid Architecture, and then describes their components. Finally, it explores well-known platforms used in the education field, namely IBM Watson, Knewton, Carnegie Learning, and DreamBox Learning.
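As a rough illustration of the Layered Architecture pattern the study identifies, the sketch below chains a perception, reasoning, and action layer for a hypothetical tutoring flow. The function names, the mastery-update rule, and the 0.7 threshold are illustrative assumptions, not taken from any of the surveyed platforms.

```python
def perception_layer(raw_event):
    """Normalize a raw learner interaction into a structured observation."""
    return {"student": raw_event["id"], "correct": raw_event["answer_ok"]}

def reasoning_layer(observation, model):
    """Update a simple per-student mastery estimate from the observation."""
    student = observation["student"]
    prior = model.get(student, 0.5)
    # Naive exponential update: nudge mastery toward the observed outcome.
    model[student] = 0.8 * prior + 0.2 * (1.0 if observation["correct"] else 0.0)
    return model[student]

def action_layer(mastery):
    """Choose the next activity based on the estimated mastery."""
    return "advance" if mastery > 0.7 else "remediate"
```

Each layer consumes only the output of the layer below it, which is the defining property of the layered style; an agent-based variant would instead distribute these roles across communicating agents.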
{"title":"Cognitive Systems for Education: Architectures, Innovations, and Comparative Analyses","authors":"Hanane Bahassi , Mohamed Azmi , Azeddine Khiat","doi":"10.1016/j.procs.2024.06.045","DOIUrl":"https://doi.org/10.1016/j.procs.2024.06.045","url":null,"abstract":"<div><p>In recent years, education has been closely linked to the continued development of technology, especially smart systems based on artificial intelligence with cognitive capabilities. The emphasis here is on the significant potential of cognitive computing in the domain of education and learning. This association implies a transformative impact on how education is delivered, accessed, and personalized through the integration of advanced cognitive systems into the learning and teaching process. This article provides an overview of the cognitive computing technologies used in education to enhance learning and teaching activities. The study identifies three conceptual architectures for these systems, Layered Architecture, Agent-Based Architecture, and Hybrid Architecture, and then describes their components. Finally, it explores well-known platforms used in the education field, namely IBM Watson, Knewton, Carnegie Learning, and DreamBox Learning.</p></div>","PeriodicalId":20465,"journal":{"name":"Procedia Computer Science","volume":"238 ","pages":"Pages 436-443"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S187705092401281X/pdf?md5=7cf17d99451cf26a2c801cf03447aa61&pid=1-s2.0-S187705092401281X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141593770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}